Merged
25 changes: 25 additions & 0 deletions .gitattributes
@@ -0,0 +1,25 @@
# Force LF for source files so `sqlx::migrate!` (compile-time macro) embeds
# byte-stable migration content regardless of contributors' `core.autocrlf`
# setting. Archives produced by one machine must remain importable on any
# other build — drifting line endings would change the SHA-384 sqlx stores
# in `_sqlx_migrations.checksum` and break re-import even though the SQL is
# semantically identical.
*.sql text eol=lf
*.rs text eol=lf
*.ts text eol=lf
*.tsx text eol=lf
*.js text eol=lf
*.jsx text eol=lf
*.json text eol=lf
*.toml text eol=lf
*.yml text eol=lf
*.yaml text eol=lf
*.md text eol=lf
*.html text eol=lf
*.css text eol=lf

# Lockfiles change frequently and benefit from native diff/merge handling,
# but should still stay LF so CI builds on Linux runners don't see noisy
# diffs against Windows contributors.
Cargo.lock text eol=lf
bun.lockb binary
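The rationale in the comment above can be demonstrated concretely: the same SQL checked out with LF versus CRLF endings is a different byte sequence, so any content hash over it diverges (sqlx stores a SHA-384 digest). A minimal std-only sketch, substituting Rust's `DefaultHasher` for SHA-384 purely for illustration:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for the real SHA-384 digest: any byte-level hash
/// distinguishes LF from CRLF checkouts of the same SQL.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

fn main() {
    let lf = b"CREATE TABLE tracks (id INTEGER PRIMARY KEY);\n".as_slice();
    let crlf = b"CREATE TABLE tracks (id INTEGER PRIMARY KEY);\r\n".as_slice();
    // Semantically identical DDL, but the stored checksum differs.
    assert_ne!(digest(lf), digest(crlf));
}
```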
2 changes: 1 addition & 1 deletion docs/features/ui.md
@@ -170,7 +170,7 @@ Per-profile isolated database (libraries, playlists, settings, play history); sh
[`commands/profile_io.rs`](../../src-tauri/src/commands/profile_io.rs) packages a profile into a single `.waveflow` (zip) file containing `manifest.json` + `data.db` + the per-profile `artwork/` directory. Settings → Stockage exposes both buttons.

- **Export:** the active-profile path runs `PRAGMA wal_checkpoint(TRUNCATE)` first so the bundled DB captures every committed page (otherwise a busy WAL would leave the archive holding a partial snapshot). The CPU-bound zip work runs on `tokio::task::spawn_blocking`.
- **Import:** always allocates a fresh profile row — never overwrites — then extracts the archive under `profiles/<new_id>/`. Failures roll the row back so a half-imported profile doesn't survive the error. Once extracted, the new pool is opened once so any pending sqlx migrations replay before the user switches to it.
- **Import:** always allocates a fresh profile row — never overwrites — then extracts the archive under `profiles/<new_id>/`. Failures roll the row back so a half-imported profile doesn't survive the error. Before the sqlx migrator runs, [`normalise_migration_checksums`](../../src-tauri/src/commands/profile_io.rs) rewrites `_sqlx_migrations.checksum` for every version present in both the archive and the local migrator — older builds checked out migration files with CRLF endings (Windows `core.autocrlf=true` + no `.gitattributes` lock) so their stored SHA-384 differs from the same SQL re-hashed today, even though the DDL is identical. A `.gitattributes` at repo root now pins `*.sql` / `*.rs` / `*.ts` / etc. to LF so future archives stay byte-stable. Once normalised, the new pool is opened once so any pending sqlx migrations replay before the user switches to it. An archive whose `_sqlx_migrations` lists a version unknown to the local migrator is rejected — that means the export came from a newer build.
- **Out of scope:** the shared `app.db` (Last.fm key, Discord opt-in, `network.offline_mode`) belongs to the install, not the profile. The shared `metadata_artwork/` cache (Deezer pictures, etc.) is re-fetchable so we skip it to keep archives small.
- **Manifest:** `archive_version` (currently `1`) gates compatibility — a future schema-incompatible bump refuses imports rather than silently corrupting the new profile. `app_version` and the source profile name / id are recorded for diagnostics.
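The `archive_version` gate described above amounts to a small comparison on import; a hypothetical sketch (the real constant and error type live in `profile_io.rs` and may differ):

```rust
/// Hypothetical mirror of the manifest gate; names are illustrative.
const SUPPORTED_ARCHIVE_VERSION: u32 = 1;

fn check_archive_version(found: u32) -> Result<(), String> {
    if found != SUPPORTED_ARCHIVE_VERSION {
        // Refuse rather than risk silently corrupting the new profile.
        return Err(format!(
            "unsupported archive_version {found}; this build supports {SUPPORTED_ARCHIVE_VERSION}"
        ));
    }
    Ok(())
}

fn main() {
    assert!(check_archive_version(1).is_ok());
    assert!(check_archive_version(2).is_err());
}
```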

83 changes: 81 additions & 2 deletions src-tauri/src/commands/profile_io.rs
Expand Up @@ -25,7 +25,8 @@ use std::path::{Path, PathBuf};

use chrono::Utc;
use serde::{Deserialize, Serialize};
use sqlx::SqlitePool;
use sqlx::sqlite::SqliteConnectOptions;
use sqlx::{ConnectOptions, Connection, SqlitePool};
use walkdir::WalkDir;
use zip::write::SimpleFileOptions;
use zip::{CompressionMethod, ZipArchive, ZipWriter};
@@ -212,7 +213,19 @@ pub async fn import_profile(
return Err(err);
}

// 4. Open + close the imported pool once so any pending migrations
// 4. Normalise the bundled `_sqlx_migrations.checksum` column against
// the local migration files before running the migrator. Archives
// produced by a build whose migration files happened to be
// checked out with CRLF endings (Git `core.autocrlf=true` on
// Windows + no `.gitattributes` lock) store SHA-384 hashes
// computed on different bytes than the current LF-normalised
// sources, even though the SQL is semantically identical. Without
// this step sqlx refuses the import with
// "migration X was previously applied but has been modified".
// See `.gitattributes` for the forward fix.
normalise_migration_checksums(&state.paths.profile_db(new_profile_id)).await?;

// 5. Open + close the imported pool once so any pending migrations
// (the source might be older than the local schema) replay
// immediately. This matches the create_profile flow and gives
// the user a usable profile by the time the call returns.
@@ -350,6 +363,72 @@ fn extract_archive(

// ── helpers ────────────────────────────────────────────────────────

/// Rewrite `_sqlx_migrations.checksum` for every previously-applied
/// migration so it matches the SHA-384 of the *local* migration file
/// bundled into the running binary. Called on a freshly extracted
/// `data.db` before the sqlx migrator runs.
///
/// Two failure modes the caller surfaces verbatim:
/// - Local migrator missing a version present in the archive
/// → the archive is from a *newer* build than the one importing it,
/// and we genuinely can't roll the schema forward.
/// - Anything else → propagated as a generic `Other` error.
///
/// Same-version + same-content but different-checksum is treated as
/// benign byte-level drift (line endings, BOM) and silently fixed: the
/// "migrations are immutable once merged" rule means a version that
/// exists in both sides represents the same DDL by construction.
async fn normalise_migration_checksums(db_path: &Path) -> AppResult<()> {
let migrator = sqlx::migrate!("./migrations/profile");

let opts = SqliteConnectOptions::new()
.filename(db_path)
.create_if_missing(false)
// Skip the noisy "executing statement" log line on every checksum
// UPDATE — these are pure plumbing rewrites, not user-visible
// DB activity.
.disable_statement_logging();
let mut conn = opts.connect().await?;

// The archive may predate the introduction of `_sqlx_migrations`
// (very unlikely, but we don't want to crash on the bootstrap case).
let table_exists: Option<String> = sqlx::query_scalar(
"SELECT name FROM sqlite_master WHERE type='table' AND name='_sqlx_migrations'",
)
.fetch_optional(&mut conn)
.await?;
if table_exists.is_none() {
conn.close().await?;
return Ok(());
}

let stored: Vec<(i64, Vec<u8>)> =
sqlx::query_as("SELECT version, checksum FROM _sqlx_migrations")
.fetch_all(&mut conn)
.await?;

for (version, stored_checksum) in stored {
let local = migrator.iter().find(|m| m.version == version);
let Some(local) = local else {
return Err(AppError::Other(format!(
"archive contains migration {version} not present in this build — \
export was produced by a newer WaveFlow version"
)));
};
if local.checksum.as_ref() == stored_checksum.as_slice() {
continue;
}
sqlx::query("UPDATE _sqlx_migrations SET checksum = ? WHERE version = ?")
.bind(local.checksum.as_ref())
.bind(version)
.execute(&mut conn)
.await?;
}

conn.close().await?;
Ok(())
}
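The doc comment's three outcomes reduce to a pure classification over a (stored, local) checksum pair; an illustrative sketch of that decision table, not part of the real module:

```rust
/// Hypothetical classification mirroring the rules documented on
/// `normalise_migration_checksums`; names are illustrative only.
#[derive(Debug, PartialEq)]
enum ChecksumState {
    /// Bytes match: nothing to do.
    Unchanged,
    /// Same version on both sides, bytes differ: benign drift, rewrite.
    Drifted,
    /// Version unknown locally: archive is from a newer build, reject.
    UnknownVersion,
}

fn classify(stored: &[u8], local: Option<&[u8]>) -> ChecksumState {
    match local {
        None => ChecksumState::UnknownVersion,
        Some(l) if l == stored => ChecksumState::Unchanged,
        Some(_) => ChecksumState::Drifted,
    }
}

fn main() {
    assert_eq!(classify(b"a".as_slice(), Some(b"a".as_slice())), ChecksumState::Unchanged);
    assert_eq!(classify(b"a".as_slice(), Some(b"b".as_slice())), ChecksumState::Drifted);
    assert_eq!(classify(b"a".as_slice(), None), ChecksumState::UnknownVersion);
}
```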

/// Force a full WAL checkpoint so the archive captures every committed
/// page. `TRUNCATE` resets the WAL file to zero length on success,
/// which also keeps `.waveflow` archives from carrying a stale