feat: Migrate database from PostgreSQL to SQLite. #3
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request transitions the application's data persistence layer from PostgreSQL to SQLite. This strategic shift aims to reduce operational complexity, particularly for local development and simplified deployments, by embedding the database directly within the application. The changes encompass comprehensive updates across the codebase, including dependency management, database interaction logic, schema definitions, and infrastructure configurations, ensuring full compatibility and functionality with SQLite.
Code Review
This pull request successfully migrates the database backend from PostgreSQL to SQLite. The changes are comprehensive, covering dependency updates, database connection logic, SQL query syntax, and Docker Compose configurations. The transition to an embedded SQLite database simplifies the deployment and local development environment by removing the need for a separate PostgreSQL service. However, there are a few areas that require attention, particularly regarding UUID generation and data type handling for enums and timestamps, to ensure data integrity and consistency.
PR Review: PostgreSQL → SQLite Migration

**Critical Bugs**

1. **Missing UUID defaults will break all inserts.** Fix: Use …

2. **No migration runner in production.** Fix: Add …

**Performance / Reliability**

3. **SQLite write contention with …** Fix: Set

```rust
.pragma("journal_mode", "WAL")
.pragma("busy_timeout", "5000")
```

**Silent Data Failures**

4. **JSON/date parsing swallows errors**

```rust
// tags: serde_json::from_str(&s).ok() — silently returns None on corrupt data
// published_at: row.try_get("published_at").unwrap_or_default() — silently returns epoch on failure
```

Both will silently hide data integrity issues. Consider logging the errors at minimum.

**Minor**

5. **Unquoted variable in shell script**

```sh
cargo sqlx migrate run -D $DATABASE_URL  # should be "$DATABASE_URL"
```

6. **Hardcoded DATABASE_URL in prod compose**

```
DATABASE_URL=sqlite://data/sqlite.db  # not using an env var like the old postgres config
```
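The "log instead of swallow" pattern asked for above can be sketched with the standard library alone. This is not code from the PR: `parse_or_warn` is a hypothetical helper (the real handlers would use `serde_json` and `tracing::warn!`), but it shows the shape of surfacing a parse failure instead of returning a silent `None`:

```rust
use std::fmt::Display;
use std::str::FromStr;

/// Parse a string, logging a warning instead of silently discarding errors.
/// Hypothetical helper name; the PR's handlers would log via tracing instead.
fn parse_or_warn<T>(field: &str, raw: &str) -> Option<T>
where
    T: FromStr,
    T::Err: Display,
{
    match raw.parse::<T>() {
        Ok(v) => Some(v),
        Err(e) => {
            // Surfaces corrupt data in logs rather than hiding it behind None.
            eprintln!("failed to parse column {field:?} from {raw:?}: {e}");
            None
        }
    }
}

fn main() {
    assert_eq!(parse_or_warn::<i32>("views", "42"), Some(42));
    assert_eq!(parse_or_warn::<i32>("views", "not-a-number"), None);
}
```

The row-mapping code keeps its `Option` shape, so callers are unchanged; only the error visibility improves.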
Pull request overview
This PR migrates the application’s persistence layer from a Postgres-backed setup (via Docker Compose) to an embedded SQLite database, updating runtime configuration, migrations, and SQLx pool usage accordingly.
Changes:
- Switch backend SQLx pool/queries from `PgPool` to `SqlitePool`, and create SQLite DB files if missing.
- Update SQL migrations/schema to be SQLite-compatible (timestamps, arrays/enums → text).
- Remove Postgres services from compose files and adjust dev/prod setup to use a local DB file.
Reviewed changes
Copilot reviewed 10 out of 11 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| scripts/setup-dev.sh | Removes Postgres container startup; runs SQLite migrations and seeds admin user via sqlite3. |
| migrations/20260110000000_initial_schema.sql | Converts schema to SQLite-friendly types/defaults (DATETIME, TEXT for arrays/enums). |
| migrations/20260210120000_seed_admin_password.sql | Seeds/updates admin password with a fixed UUID for SQLite. |
| docker-compose.yaml | Removes the standalone Postgres service definition. |
| docker-compose.prod.yaml | Switches production DATABASE_URL to SQLite and mounts /app/data for DB persistence. |
| backend/src/state.rs | AppState pool type changed from PgPool to SqlitePool. |
| backend/src/main.rs | Uses SqliteConnectOptions/SqlitePoolOptions and creates DB if missing; runs migrations on startup. |
| backend/src/api/public.rs | Updates query row mapping/types for SQLite; adds JSON parsing for tags. |
| backend/src/api/admin.rs | Updates parameter binding placeholders from $n to ? and switches to SqlitePool. |
| backend/Cargo.toml | Enables sqlx sqlite feature (replaces postgres). |
| Cargo.toml | Removes the migration workspace member. |
| Cargo.lock | Dependency graph updated due to DB/backend changes. |
PR Review: PostgreSQL → SQLite Migration

**Critical Issues**

1. **No UUID defaults — all inserts will fail**

```sql
-- Every table has this pattern:
id UUID PRIMARY KEY, -- No DEFAULT!
```

PostgreSQL had …

2. **SQLite write contention — pool size too large**

```rust
let pool = SqlitePoolOptions::new()
    .max_connections(5) // SQLite = 1 writer at a time
    .connect_with(connect_options)
```

SQLite allows only one concurrent writer. Without WAL mode and a busy timeout, concurrent requests will return …

```rust
use sqlx::sqlite::SqliteJournalMode;
use std::time::Duration;

let connect_options = SqliteConnectOptions::from_str(&database_url)?
    .create_if_missing(true)
    .journal_mode(SqliteJournalMode::Wal)
    .busy_timeout(Duration::from_secs(5));
let pool = SqlitePoolOptions::new()
    .max_connections(1) // or keep higher only with WAL mode
    .connect_with(connect_options)
```

3. **UUID decoding is fragile**

```rust
id: row.get::<uuid::Uuid, _>("id"),
```

SQLite stores …

**High Severity**

4. **Silent datetime error suppression**

```rust
published_at: row.try_get("published_at").unwrap_or_default(),
```

5. **Shell injection / unquoted variable**

```sh
cargo sqlx migrate run -D $DATABASE_URL  # unquoted
```

Should be …

**Minor**

6. Empty …

7. Hardcoded fallback UUID in setup script (…)

Summary of must-fixes before merge:
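Both the fragile UUID decoding and the hardcoded shell fallback above argue for validating UUID strings at a trust boundary. A minimal stdlib-only sketch (the function name `is_hyphenated_uuid` is an assumption, not code from the PR; production code would likely use the `uuid` crate's parser):

```rust
/// Validate the hyphenated 8-4-4-4-12 hex form before trusting a
/// shell-provided or DB-stored UUID string. Hypothetical helper.
fn is_hyphenated_uuid(s: &str) -> bool {
    let parts: Vec<&str> = s.split('-').collect();
    let lens = [8usize, 4, 4, 4, 12];
    parts.len() == 5
        && parts
            .iter()
            .zip(lens)
            .all(|(p, n)| p.len() == n && p.chars().all(|c| c.is_ascii_hexdigit()))
}

fn main() {
    assert!(is_hyphenated_uuid("dfbfb952-b8ec-4bd8-b1aa-ed154109addf"));
    assert!(!is_hyphenated_uuid("not-a-uuid"));
    assert!(!is_hyphenated_uuid("dfbfb952b8ec4bd8b1aaed154109addf")); // unhyphenated form rejected
}
```

Rejecting the unhyphenated form here is a design choice: picking one canonical representation sidesteps the format-mismatch bugs raised elsewhere in this thread.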
PR Review: PostgreSQL → SQLite Migration

**Critical Bugs**

1. **UUID deserialization will panic at runtime** (…)

SQLite stores UUIDs as …

```rust
// Current (will panic):
id: row.get::<uuid::Uuid, _>("id"),
// Fix:
id: row.get::<String, _>("id").parse().unwrap_or_default(),
// or use sqlx::types::Uuid with the "uuid" feature properly configured
```

2. **SQLite write contention under any concurrency** (…)

```rust
let connect_options = SqliteConnectOptions::from_str(&database_url)?
    .create_if_missing(true)
    .journal_mode(SqliteJournalMode::Wal);
```

3. **No UUID defaults in schema** (…) All tables drop their …

**Security Issues**

4. **Shell injection vector in setup script** (…) The …

```sh
sqlite3 sqlite.db "INSERT INTO users (id, ...) VALUES ('$ADMIN_UUID', ...)"
```

While …

5. **Hardcoded fallback UUID** (…)

```sh
ADMIN_UUID=$(uuidgen 2>/dev/null || echo "dfbfb952-b8ec-4bd8-b1aa-ed154109addf")
```

Every installation without …

**Other Issues**

6. **Relative SQLite path in production** (…)

```
DATABASE_URL=sqlite://data/sqlite.db
```

This is relative to the working directory, which may not be …

7. **Invalid docker-compose syntax** (…) An empty …

8. **Silent failure for …**

```rust
published_at: row.try_get("published_at").unwrap_or_default(),
```

If there's a type mismatch (e.g., SQLite …)
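The relative-path concern in item 6 is mechanical to check at startup. A stdlib-only sketch (the `sqlite://` prefix handling is a simplification; SQLx's real URL parsing is more involved, and the function name is an assumption):

```rust
use std::path::Path;

/// Check whether a sqlite:// DATABASE_URL points at an absolute path,
/// as recommended for containers where the working directory can change.
/// Sketch only; not the PR's actual configuration code.
fn sqlite_path_is_absolute(database_url: &str) -> bool {
    database_url
        .strip_prefix("sqlite://")
        .map(|p| Path::new(p).is_absolute())
        .unwrap_or(false)
}

fn main() {
    assert!(!sqlite_path_is_absolute("sqlite://data/sqlite.db")); // relative: fragile
    assert!(sqlite_path_is_absolute("sqlite:///app/data/sqlite.db")); // absolute: safe
}
```

Wiring a check like this into startup (warn or refuse on a relative path in production) turns a silent wrong-file bug into an explicit configuration error.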
PR Review: PostgreSQL → SQLite Migration

**Critical: SQLite connection pool will deadlock under load**

```rust
let pool = SqlitePoolOptions::new()
    .max_connections(5) // ❌ SQLite only allows one concurrent writer
```

SQLite has a single-writer lock. With 5 connections and any concurrent writes, you'll get …

```rust
.max_connections(1)
// Or enable WAL mode via connect_options:
.journal_mode(SqliteJournalMode::Wal)
.max_connections(5) // WAL allows concurrent reads
```

**Critical: Production migrations never run**

The … Fix: either call …

**Bug: UUID column affinity in SQLite**

```sql
id UUID PRIMARY KEY, -- ❌ SQLite has no UUID type; affinity becomes NUMERIC
```

SQLite's affinity rules don't recognize …

**Bug: Silent migration failure in dev script**

```sh
cargo sqlx migrate run -D $DATABASE_URL || true  # ❌ silences all errors
```

Also, …

```sh
cargo sqlx migrate run --database-url "$DATABASE_URL" || echo "⚠️ Migration failed"
```

**Security: SQL injection via unquoted shell variable**

```sh
sqlite3 sqlite.db "INSERT INTO users ... VALUES ('$ADMIN_UUID', ...)"
```

```sh
sqlite3 sqlite.db <<SQL
INSERT INTO users (id, username, password_hash) VALUES ('$ADMIN_UUID', 'admin', '...')
SQL
```

Or better, use …

**Minor: Relative DATABASE_URL path in production**

```
DATABASE_URL=sqlite://data/sqlite.db  # relative path
```

This depends on the process working directory at runtime. Prefer an absolute path matching the volume mount: …

**Minor: Broken …**
| Issue | Severity |
|---|---|
| `max_connections(5)` with SQLite writes | Critical |
| Migrations never run in production | Critical |
| UUID column affinity (NUMERIC) | High |
| Silent migration failure | Medium |
| Shell SQL injection | Medium |
| Relative DB path | Low |
… and `MediaContext` enums and configure their `serde` renaming.
PR Review: PostgreSQL → SQLite Migration

**Critical Issues**

1. **No migrations in production (data loss risk)**

```rust
// backend/src/main.rs — add after pool creation:
sqlx::migrate!("../migrations")
    .run(&pool)
    .await
    .map_err(|e| format!("Migration failed: {}", e))?;
```

2. **SQLite WAL mode not configured** (…) With …

```rust
let connect_options = SqliteConnectOptions::from_str(&database_url)?
    .create_if_missing(true)
    .journal_mode(sqlx::sqlite::SqliteJournalMode::Wal)
    .synchronous(sqlx::sqlite::SqliteSynchronous::Normal);
```

3. **Invalid `volumes:`**

```yaml
# empty — invalid YAML, will fail to parse
```

Either remove the …

**Important Issues**

4. **Panicking …**

```rust
// Risk: panics if UUID text is malformed
id: row.get::<uuid::Uuid, _>("id"),
```

5. **UUID generation correctness** (…) The hand-rolled UUID expression is structurally correct for v4 format but hardcodes variant bits as …

6. **Silent error swallowing in API handlers**

**Minor Issues**
…of using `try_get` with a default.
**Critical Issues**

1. …
…601 timestamp defaults, and improve dev setup script.
…nt `uuidgen` fallback behavior in the dev setup script.
PR Review: PostgreSQL → SQLite Migration

Overall this is a clean migration, but there are several issues ranging from critical to moderate.

**Critical**

1. **No migration runner in production** (…) The …

```rust
// backend/src/main.rs — add after pool creation
sqlx::migrate!("../migrations")
    .run(&pool)
    .await
    .map_err(|e| format!("Migration failed: {}", e))?;
```

2. **Missing UUID defaults on all primary keys** (…) Removing …

```sql
-- Option: generate in app code (preferred with sqlx)
-- Option: use a trigger per table
CREATE TRIGGER set_users_id AFTER INSERT ON users
WHEN NEW.id IS NULL
BEGIN UPDATE users SET id = lower(hex(randomblob(16))) WHERE rowid = NEW.rowid; END;
```

**Significant**

3. **SQLite connection pool too large** (…)

```rust
.max_connections(5) // problematic for SQLite
```

SQLite serializes all writers regardless of WAL mode. Five pooled connections increase write lock contention without benefit. For SQLite, …

4. **Fragile relative path in Docker** (…)

```
DATABASE_URL=sqlite://data/sqlite.db  # relative path
```

```yaml
volumes:
  - ./data:/app/data
```

**Moderate**

5. **Shell injection risk in …**

```sh
ADMIN_UUID=$(uuidgen 2>/dev/null || echo "dfbfb952-...")
sqlite3 sqlite.db <<EOF
INSERT INTO users (id, ...) VALUES ('${ADMIN_UUID}', ...
EOF
```

6. **…**

```yaml
version: "3.8"
# services:
# The web and backend services will now use a local SQLite DB file.
```

There's no …

**Minor**

7. … This predates the PR but remains — on …
PR Review: PostgreSQL → SQLite Migration

**Critical Issues**

1. **Schema missing UUID defaults — runtime INSERT failures**

```sql
id UUID PRIMARY KEY, -- No default! Any INSERT without explicit id will fail
```

Any application-level …

2. After the diff, the file has:

```yaml
version: "3.8"
# services:
# The web and backend services will now use a local SQLite DB file.
```

No valid …

3. **UUID deserialization from SQLite is fragile**

```rust
id: row.get::<uuid::Uuid, _>("id"),
```

SQLite has no native UUID type — UUIDs are stored as TEXT or BLOB depending on the driver. The …

```rust
id: row.get::<String, _>("id").parse::<uuid::Uuid>().unwrap_or_default(),
```

Or better, return an error rather than unwrapping.

**Moderate Issues**

4. **SQLite relative path is deployment-fragile**

```
DATABASE_URL=sqlite://data/sqlite.db
```

This is relative to the process working directory. If the working directory ever changes (entrypoint, exec, etc.), the database silently opens/creates a new file. Use an absolute path: …

5. **Shell heredoc interpolation without validation**

```sh
sqlite3 sqlite.db <<EOF
INSERT INTO users ... VALUES ('${ADMIN_UUID}', ...
EOF
```

6. **Write contention under load**

**Minor**
…, image linking, paragraph wrapping, and admin article editing capabilities.
Code Review: PostgreSQL → SQLite Migration

**Critical Bugs**

1. **UUID Triggers Are Dead Code** (migration SQL)

```sql
-- This trigger never fires because PRIMARY KEY rejects NULL
CREATE TRIGGER set_users_id AFTER INSERT ON users
WHEN NEW.id IS NULL
BEGIN UPDATE users SET id = lower(hex(randomblob(16))) WHERE rowid = NEW.rowid; END;
```

2. **UUID Storage Format Mismatch.** The seed migration stores UUIDs as hyphenated strings (e.g., …). The fix is to pick one representation and be consistent. Using …

3. **Hardcoded JWT Secret**

```rust
fn get_jwt_secret() -> &'static [u8] {
    // In production, use environment variable...
    b"change-me-in-production"
}
```

This is exploitable as-is. The env var read is commented out. Forging a valid admin JWT requires only knowing this public string. This needs to be fixed before any deployment, not left as a TODO comment.

**Security Issues**

4. **Hardcoded Fallback UUID in Setup Script**

```sh
ADMIN_UUID=$(uuidgen 2>/dev/null || echo "dfbfb952-b8ec-4bd8-b1aa-ed154109addf")
```

The comment acknowledges this "results in shared admin IDs across affected dev installs." Even for dev, a static UUID for an admin account is a bad pattern — it enables ID enumeration and is easy to accidentally carry into staging/prod images.

**Bugs**

5. **…** Three different formats are used across the project: … SQLx accepts both …

6. **Silent Data Loss on …**

```rust
let tags = tags_str.and_then(|s| match serde_json::from_str(&s) {
    Ok(t) => Some(t),
    Err(e) => {
        tracing::warn!("Failed to deserialize tags: {}", e);
        None // tags silently dropped
    }
});
```

A malformed tags column returns …

**Performance**

7. **SQLite Connection Pool Size**

```rust
let pool = SqlitePoolOptions::new()
    .max_connections(5)
    .connect_with(connect_options)
```

SQLite allows only one concurrent writer. With 5 connections, write requests will queue and hit the 5-second …
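The hardcoded-JWT-secret finding (item 3) reduces to a small startup check. This is a stdlib-only sketch, not the PR's code: the variable name `JWT_SECRET` and the 32-byte minimum come from this thread's discussion, and `check_jwt_secret` is a hypothetical helper:

```rust
/// Validate a JWT secret value, refusing absence and short secrets.
/// In real startup code this would be called as:
///   check_jwt_secret(std::env::var("JWT_SECRET").ok().as_deref())
/// Sketch only; names and the 32-byte floor are assumptions from the review.
fn check_jwt_secret(raw: Option<&str>) -> Result<Vec<u8>, String> {
    let secret = raw.ok_or_else(|| "JWT_SECRET not set".to_string())?;
    if secret.len() < 32 {
        return Err("JWT_SECRET must be at least 32 bytes".to_string());
    }
    Ok(secret.as_bytes().to_vec())
}

fn main() {
    assert!(check_jwt_secret(None).is_err()); // unset: refuse to start
    assert!(check_jwt_secret(Some("short")).is_err()); // too weak: refuse
    assert_eq!(
        check_jwt_secret(Some("0123456789abcdef0123456789abcdef"))
            .unwrap()
            .len(),
        32
    );
}
```

Failing fast at startup is strictly better than shipping a compiled-in fallback string that any reader of the repository can use to forge tokens.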
…ions, and refine frontend code and Docker configurations.
…API queries, remove docker-compose stub, and add dev setup validation.
…e triggers, and enforce JWT secret via environment variables.
Code Review

**Critical Security Bug:** …
…enames, validate JWT in the `me` endpoint, increase database connection pool size, and update admin seed ID generation.
…lic API rate limits, and refactor database row mapping.
PR Review: PostgreSQL → SQLite + Rate Limiting

**Critical Issues**

1. **…** The dummy hash is computed on the first call (first login with a non-existent username). That first request takes full Argon2 time to initialize the hash and verify against it, while all subsequent requests only verify. This creates a measurable timing difference for the very first probe. Pre-compute it in …

```rust
// main.rs, after init_jwt_secret()
let _ = crate::api::admin::init_dummy_hash(); // expose a pub fn that calls get_dummy_hash()
```

2. **…**

```sh
sqlite3 sqlite.db -cmd ".param set @id '$SAFE_UUID'" -cmd ".param set @hash '$ESCAPED_HASH'" \
  "INSERT INTO users ..."
```

The …

```sh
sqlite3 sqlite.db \
  "INSERT INTO users (id, username, password_hash) VALUES ('$SAFE_UUID', 'admin', '$ESCAPED_HASH') ON CONFLICT (username) DO NOTHING;"
```

The argon2 PHC format contains no single-quotes, so the existing …

3. **Separate per-endpoint rate limit buckets**

**Security Issues**

4. **No username length check in …** Password length is capped at 128 bytes, but …

```rust
if req.username.len() > 64 {
    return Err((StatusCode::BAD_REQUEST, "Username too long".to_string()));
}
```

5. **…**

```rust
.key_extractor(crate::api::admin::TrustedProxyIpKeyExtractor)
```

This creates an implicit coupling between two API modules. Move …

**Bug**

6. **…**

**Notes (Low Severity)**
…lidation for query parameters and username length, and initialize a dummy hash at startup.
Code Review

**Critical: Cursor pagination timestamp format mismatch**

```rust
if chrono::DateTime::parse_from_rfc3339(&before).is_err() {
    return Err(axum::http::StatusCode::BAD_REQUEST);
}
sqlx::query("...WHERE published_at < ?").bind(before)...
```

The DB stores dates as …

```rust
let dt = chrono::DateTime::parse_from_rfc3339(&before)
    .map_err(|_| StatusCode::BAD_REQUEST)?
    .to_utc();
// Note: chrono fractional-seconds specifier is %.6f, and seconds (%S)
// must be present for a lexicographically comparable timestamp.
let normalized = dt.format("%Y-%m-%dT%H:%M:%S%.6f").to_string() + "Z";
sqlx::query("...WHERE published_at < ?").bind(normalized)...
```

**Bug: Shared rate limiter state across two routes**

```rust
let common_governor_config = std::sync::Arc::new(...);
let articles_governor_layer = tower_governor::GovernorLayer { config: common_governor_config.clone() };
let blog_governor_layer = tower_governor::GovernorLayer { config: common_governor_config.clone() };
```

Because both layers share the same …

**Security: Constant-time guarantee is incomplete**

```rust
#[inline(never)]
fn verify_password(password: &str, password_hash: &str) -> bool {
    ...
    get_argon2().verify_password(password.as_bytes(), &parsed_hash).is_ok()
}
```

**Security: Production startup check misses IPv6 private ranges**

```rust
if let Ok(std::net::IpAddr::V4(v4)) = ip_str.parse::<std::net::IpAddr>() {
    // checks 10.x, 172.16-31.x, 192.168.x
}
```

IPv6 private ranges (…)

**Bug: Identical rate limiter configs duplicated**
**Minor:** …
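The IPv6 gap flagged above (the startup check only matches `IpAddr::V4`) can be closed with a stdlib-only predicate. A minimal sketch, assuming the goal is "private or loopback in either address family" — the function name and exact range set are illustrative, not the PR's code:

```rust
use std::net::IpAddr;

/// Private/loopback check covering the IPv6 ranges the V4-only startup
/// validation misses: fc00::/7 (unique-local) and ::1. Sketch only.
fn is_private_or_loopback(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => v4.is_private() || v4.is_loopback(),
        IpAddr::V6(v6) => {
            // fc00::/7: top 7 bits of the first segment are 1111 110x
            (v6.segments()[0] & 0xfe00) == 0xfc00 || v6.is_loopback()
        }
    }
}

fn main() {
    assert!(is_private_or_loopback("10.1.2.3".parse().unwrap()));
    assert!(is_private_or_loopback("192.168.0.1".parse().unwrap()));
    assert!(is_private_or_loopback("fd12:3456::1".parse().unwrap())); // unique-local
    assert!(is_private_or_loopback("::1".parse().unwrap()));
    assert!(!is_private_or_loopback("2001:db8::1".parse().unwrap()));
    assert!(!is_private_or_loopback("8.8.8.8".parse().unwrap()));
}
```

Matching on `IpAddr` rather than destructuring only the `V4` variant also makes the compiler force a decision for the `V6` arm, which is how the original gap would have been caught.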
…sted proxy IP detection to include IPv6, and standardize date query parameter formatting.
Code Review

**Critical Bug:** …
… SQLite string escaping, and refine X-Forwarded-For IP extraction logic.
PR Review: PostgreSQL → SQLite Migration with Rate Limiting

Overall this is a well-structured migration with good security hygiene. Here are the issues ranked by severity.

**Critical Bugs**

1. **Un-gated … Three …**

```rust
web_sys::console::log_1(&"Checking auth token...".into()); // un-gated
// ...
web_sys::console::log_1(&"Admin mode enabled".into()); // un-gated
// ...
web_sys::console::log_1(&"No token found in localStorage".into()); // un-gated
```

These will appear in production WASM builds, leaking admin session detection behavior to anyone with devtools open.

**Security Issues**

2. **Shell SQL injection in …**

```sh
sqlite3 sqlite.db "$(printf "INSERT INTO users ... VALUES ('%s', 'admin', '%s') ..." \
  "$SAFE_UUID" "$(printf '%s' "$ADMIN_HASH" | sed "s/'/''/g")")"
```

The UUID is validated by regex and the argon2 hash is validated with …

```sh
sqlite3 sqlite.db <<SQL
INSERT INTO users (id, username, password_hash) VALUES ('$SAFE_UUID', 'admin', '$ADMIN_HASH')
ON CONFLICT (username) DO NOTHING;
SQL
```

Or better, use …

3. **…**

```rust
if secret.len() < 32 {
    panic!("JWT_SECRET must be at least 32 characters long...");
```

4. **Hardcoded Docker bridge IPs in auto-generated …**

```
TRUSTED_PROXY_IPS=172.18.0.2,172.18.0.3
```

Docker bridge IPs are reassigned on network recreation. If the container gets …

**Minor Issues**

5. Both routes share the same …

6. The PostgreSQL schema had …

7. **…**

```rust
let normalized = dt.format("%Y-%m-%dT%H:%M:%3fZ").to_string();
```

This normalizes to millisecond precision before the …

**Acknowledged Limitations (No Action Needed)**

The in-memory rate limiter restart bypass, SQLite single-writer bottleneck, and deleted-user JWT validity window are all clearly documented in code comments and README. The dummy-hash timing attack mitigation with …
…, refine backend login rate limits, and enable debug-only frontend logging.
Code Review

**Critical Bug: Shell Heredoc Corrupts Argon2 Hash** (…)
| Severity | Issue |
|---|---|
| Critical | setup-dev.sh heredoc expands $ in Argon2 hash → invalid hash, login broken |
| High | Separate governor configs for /login and /password double the effective attempt rate |
| Medium | X-Real-IP trust depends entirely on Nginx config being correct — no runtime defense |
| Low | UUID round-trip in change_password — fragile coincidence, not a correctness risk today |
| Low | published_at decode relies on exact string format — no migration safety net |
…admin API rate limiters.
Code Review

**Critical Bug: Shared Rate Limiter State**

```rust
let auth_governor_config = std::sync::Arc::new(...);
let login_governor_layer = tower_governor::GovernorLayer { config: auth_governor_config.clone() };
let password_governor_layer = tower_governor::GovernorLayer { config: auth_governor_config }; // same Arc
```

**Bug:** …
| Severity | Issue | Location |
|---|---|---|
| High | Shared rate limiter between /login and /password |
admin.rs:router() |
| Medium | updated_at never updates without triggers |
20260325000000_add_updated_at.sql |
| Medium | Unescaped shell SQL interpolation | setup-dev.sh |
| Low | Cursor pagination fragile to timestamp format variance | public.rs |
| Info | peer_info always Some in production, Option adds noise |
admin.rs:me() |
…published_at` format validation, and refactor backend rate limiters and the dev setup script.
Code Review

**Critical: Trigger chain causes silent UPDATE failures**

When any … The normalization migration (…) Fix: Either make the check trigger only fire when …

**Security:** …
…date triggers, and improve dev script SQL execution.
PR Review: PostgreSQL → SQLite Migration + Rate Limiting

Overall this is a well-structured migration with good security thinking (dummy hash timing mitigation, algorithm-pinned JWT validation, …).

**Critical: Argon2 blocks the async runtime**

```rust
// login handler — blocks the executor:
let password_match = verify_password(&req.password, hash_to_verify);
```

Fix: wrap with …

```rust
let hash = hash_to_verify.to_string();
let pw = req.password.clone();
let password_match = tokio::task::spawn_blocking(move || verify_password(&pw, &hash))
    .await
    .unwrap_or(false);
```

Same fix needed in …

**Security: Log injection via untrusted headers in …**
…te limiters, offload password operations to blocking tasks, sanitize client IP logging, and correct a SQL `published_at` format error message
PR Review: PostgreSQL → SQLite Migration + Rate Limiting

This is a well-structured refactor with solid security improvements (timing-safe auth, centralized JWT secret, Argon2 params). Below are the critical issues.

1. **Log Injection in …**
| # | Location | Severity | Type |
|---|---|---|---|
| 1 | admin.rs → me() |
Medium | Security (log injection) |
| 2 | hgen/src/main.rs |
Medium | Bug (wrong password hash) |
| 3 | Initial schema migration | Medium | Data integrity |
| 4 | remote_build.sh |
Medium | Ops security |
| 5 | public.rs → parse_flexible_datetime |
Low | Performance |
The timing-safe dummy-hash path, `spawn_blocking` for Argon2, `#[inline(never)]` on `verify_password`, and the JWT secret centralization are all solid improvements worth keeping.
…P logging based on trusted proxies, and simplify password trimming and datetime parsing logic.
PR Review

**Critical Bug: AFTER UPDATE Triggers — Latent Infinite Loop**

All triggers do …

```sql
CREATE TRIGGER update_articles_updated_at
AFTER UPDATE ON articles FOR EACH ROW
BEGIN
  UPDATE articles SET updated_at = ... WHERE id = NEW.id;
END;
```

With …

```sql
WHEN NEW.updated_at IS OLD.updated_at
```

This fires the trigger only when the application didn't explicitly change …

**Security: Hardcoded Docker Bridge IPs in …**
… `updated_at` database triggers, and improve setup script robustness.
PR Review: PostgreSQL → SQLite Migration + Rate Limiting

**Critical Bugs**

1. **Production deployment will panic on startup** (…) The generated …

```rust
Ok(ips) if ips.trim().is_empty() => panic!("TRUSTED_PROXY_IPS is set but empty..."),
```

The app will never start with the default generated config. Either use …

2. **Login and password-change share the same rate-limit bucket** (…)

```rust
let auth_governor_config = std::sync::Arc::new(...);
let login_governor_layer = tower_governor::GovernorLayer { config: auth_governor_config.clone() };
let password_governor_layer = tower_governor::GovernorLayer { config: auth_governor_config }; // same Arc
```

Both layers point to the same underlying …

**Security Issues**

3. **…**

```rust
Ok(Json(serde_json::json!({
    "authenticated": true,
    "sub": _token_data.claims.sub // exposes internal UUID
})))
```

The original only returned …

4. **…**

```rust
let new_content = format!("{}{}{}", wrapper_start, img_tag, wrapper_end);
```

Only the …

**Correctness / Quality Issues**

5. **…**

```rust
let _token_data = jsonwebtoken::decode::<Claims>(...)...?;
// ...
"sub": _token_data.claims.sub
```

The underscore prefix signals an intentionally unused variable but it is used. Rename to …

6. SQLite can return timestamps in other formats (e.g., …)

7. **…**

```rust
if query.before.is_some() && query.offset.is_some() {
    return Err(axum::http::StatusCode::BAD_REQUEST);
}
```

The error gives no message to the client. Since this is an API, returning a body like …

**Summary Table**
…ne admin rate limiting, ensure safe image tag rendering, and clarify `TRUSTED_PROXY_IPS` documentation.
PR Review: PostgreSQL → SQLite + Rate Limiting Migration

**Critical**

1. **…**

```sh
cat <<EOF > .env
...
JWT_SECRET=$(openssl rand -base64 48 | tr -d '\n')
EOF
```

No … Fix: Add …

2. **Missing …**

```rust
let wrapper_start = format!(
    "<a href=\"{}\" target=\"_blank\" class=\"article-image-link\">",
    safe_url
);
```

Without … Fix: Add …

**High**

3. **…**

```rust
extract_client_ip(req.headers(), peer_ip)
    .ok_or(tower_governor::GovernorError::UnableToExtractKey)
```

When … Fix: Either panic at startup if …

4. **…**

```rust
async fn me(
    headers: HeaderMap,
    axum::extract::ConnectInfo(peer_addr): axum::extract::ConnectInfo<std::net::SocketAddr>,
) -> ...
```

Fix: Use …

**Medium**

5. **Duplicate placeholder rendered when article has no image** (…)

```rust
{if has_image { ... } else {
    view! { <svg ...><text ...>"Image coming soon"</text></svg> }.into_any()
}}
{(!has_image).then(|| view! { <div class="journalism-placeholder-text">"Image coming soon"</div> })}
```

When … Fix: Remove the …

6. **…**

```rust
std::io::stdin().read_line(&mut password).expect("Failed to read password");
let password = password.trim_end_matches(['\r', '\n']);
```

If a password with an embedded newline is ever piped (unusual but plausible via a file), only the first line is used silently. Use …
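The read_line pitfall in item 6 can be avoided by reading all of stdin and stripping only one trailing newline. A stdlib-only sketch of that suggestion; the helper names are hypothetical, not the PR's code:

```rust
use std::io::Read;

/// Strip exactly one trailing "\n" or "\r\n", preserving any embedded
/// newlines (read_line would silently drop everything after the first line).
fn normalize_password(raw: &str) -> &str {
    raw.strip_suffix('\n')
        .map(|s| s.strip_suffix('\r').unwrap_or(s))
        .unwrap_or(raw)
}

/// Read the whole password from stdin (e.g. when piped from a file).
fn read_password() -> std::io::Result<String> {
    let mut raw = String::new();
    std::io::stdin().read_to_string(&mut raw)?;
    Ok(normalize_password(&raw).to_string())
}

fn main() {
    let _ = read_password; // sketch; not exercised here
    assert_eq!(normalize_password("hunter2\n"), "hunter2");
    assert_eq!(normalize_password("hunter2\r\n"), "hunter2");
    assert_eq!(normalize_password("pass\nword\n"), "pass\nword"); // embedded newline kept
}
```

Using `strip_suffix` rather than `trim_end_matches` also avoids eating a password that legitimately ends in repeated newline characters beyond the terminator.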
…obustness, set `.env` file permissions, and refine password input.
PR Review: PostgreSQL → SQLite Migration + Rate Limiting

Overall this is a well-structured migration. The timing-attack mitigations (dummy hash, …).

**Critical: Rate Limiter "0.0.0.0" Fallback (DoS Risk)**

```rust
Ok(extract_client_ip(req.headers(), peer_ip).unwrap_or_else(|| "0.0.0.0".to_string()))
```

If …

```rust
let peer_ip = connect_info.map(|ci| ci.0.ip());
let key = extract_client_ip(req.headers(), peer_ip)
    .ok_or(tower_governor::GovernorError::UnableToExtractKey)?;
Ok(key)
```

**Significant: X-Forwarded-For Warning Falls Through to Proxy IP**

When the trusted proxy is detected but the rightmost …

**Bug:** …
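The rightmost-untrusted rule for X-Forwarded-For discussed in this thread can be sketched with the standard library. This is illustrative, not the PR's `extract_client_ip` (which also consults the TCP peer address and the real header API): walk the list right to left, skip trusted proxies, and take the first untrusted address.

```rust
use std::net::IpAddr;

/// Walk X-Forwarded-For right to left, skipping trusted proxies, and return
/// the first untrusted address: the client as seen by our own edge. Entries
/// the client could spoof sit further left and are never reached. Sketch only.
fn client_ip_from_xff(xff: &str, trusted: &[IpAddr]) -> Option<IpAddr> {
    xff.split(',')
        .rev()
        .filter_map(|s| s.trim().parse::<IpAddr>().ok())
        .find(|ip| !trusted.contains(ip))
}

fn main() {
    let trusted: Vec<IpAddr> = vec!["172.18.0.2".parse().unwrap()];
    // Rightmost hop is our trusted proxy; the entry before it is the client.
    let ip = client_ip_from_xff("203.0.113.9, 172.18.0.2", &trusted).unwrap();
    assert_eq!(ip, "203.0.113.9".parse::<IpAddr>().unwrap());
    // If everything resolvable is trusted/garbage, return None rather than a
    // guessable fallback key like "0.0.0.0".
    assert_eq!(client_ip_from_xff("garbage, 172.18.0.2", &trusted), None);
}
```

Returning `None` here pairs with the `UnableToExtractKey` fix above: an unextractable key should fail the request, not collapse every caller into one shared rate-limit bucket.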
… for unreliable IPs and refactor shared governor configuration.
Code Review

This is a solid migration from PostgreSQL to SQLite with meaningful security improvements. Here are the critical issues:

**Critical: Deployment Startup Panic**

```sh
cat <<EOF > .env
...
ENVIRONMENT=production
JWT_SECRET=$(openssl rand -base64 48 | tr -d '\n')
# Warning: ...manually add TRUSTED_PROXY_IPS=<ip>...
EOF
```

```rust
if std::env::var("ENVIRONMENT").as_deref() == Ok("production") {
    match std::env::var("TRUSTED_PROXY_IPS").as_deref() {
        Err(_) => panic!("TRUSTED_PROXY_IPS must be set in production..."),
```

Every fresh deployment will fail to start until the operator manually edits …

**Security: Rate Limiter Returns 500 for Untrusted Proxy Configs**

In …

```rust
let key = extract_client_ip(req.headers(), peer_ip)
    .ok_or(tower_governor::GovernorError::UnableToExtractKey)?;
```

If a trusted proxy IP is configured but doesn't send …

**Security:** …
No description provided.