
feat: use docker compose for development #1463

Merged

Marukome0743 merged 1 commit into OpenUp-LabTakizawa:main from Marukome0743:pr1463
Apr 7, 2026

Conversation

@Marukome0743 Marukome0743 (Member) commented Apr 7, 2026

Summary by Sourcery

Introduce a fully local Docker-based development and test environment backed by Postgres and RustFS S3-compatible storage, and wire the app’s S3 client and tooling to support custom endpoints for local usage.

New Features:

  • Add Docker Compose setup with Postgres and RustFS for running the app and tests against a fully local stack.
  • Introduce a dev-up script and mise tasks to bootstrap the local environment, including env generation, migrations, and S3 bucket creation.
  • Support configuring the S3 backend via an S3_ENDPOINT environment variable that enables path-style access for S3-compatible services.
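A rough sketch of the endpoint selection this last bullet describes; the `buildS3Config` name and exact option shape here are illustrative assumptions, not the PR's actual `createS3Client` in app/lib/storage/s3-backend.ts:

```typescript
// Illustrative only: mirrors the behavior described in the summary.
interface S3ClientConfig {
  region: string
  credentials: { accessKeyId: string; secretAccessKey: string }
  endpoint?: string
  forcePathStyle?: boolean
}

// Build the config object that would be handed to the AWS SDK's S3Client.
function buildS3Config(env: Record<string, string | undefined>): S3ClientConfig {
  const config: S3ClientConfig = {
    region: env.S3_REGION ?? "us-east-1",
    credentials: {
      accessKeyId: env.S3_ACCESS_KEY_ID ?? "",
      secretAccessKey: env.S3_SECRET_ACCESS_KEY ?? "",
    },
  }
  // S3-compatible services such as RustFS live at a fixed host, so the bucket
  // must go into the path (http://host:9000/bucket/key) rather than the
  // hostname; a custom endpoint therefore also turns on path-style access.
  if (env.S3_ENDPOINT) {
    config.endpoint = env.S3_ENDPOINT
    config.forcePathStyle = true
  }
  return config
}

const local = buildS3Config({
  S3_ENDPOINT: "http://localhost:9000",
  S3_REGION: "us-east-1",
  S3_ACCESS_KEY_ID: "dev",
  S3_SECRET_ACCESS_KEY: "dev",
})
const aws = buildS3Config({
  S3_REGION: "us-east-1",
  S3_ACCESS_KEY_ID: "dev",
  S3_SECRET_ACCESS_KEY: "dev",
})
```

With `S3_ENDPOINT` set, `local` carries the custom endpoint with path-style enabled; `aws` falls through to the SDK's default AWS endpoint.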

Enhancements:

  • Refine storage factory tests to use explicit mock backends and a locally defined createStorageClient for more robust backend selection verification.
  • Update labeler configuration to treat compose.yaml changes as docker-related and adjust TypeScript config to include tests in compilation.

CI:

  • Run Playwright workflow on pull_request instead of pull_request_target, upgrade the Postgres service image, and add a RustFS service with S3-related env wiring and bucket bootstrapping for E2E tests.

Deployment:

  • Add Docker Compose configuration defining Postgres and RustFS services and persistent volumes for local development.

Documentation:

  • Document the S3_ENDPOINT configuration and add README instructions for the new Docker-based local development environment and RustFS usage.

Tests:

  • Add property-based tests for S3 client custom endpoint behavior and S3 round-trips, unit tests for createS3Client endpoint selection, and tests for idempotent S3 bucket creation in the dev-up script.
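The round-trip property from the first bullet, sketched against an in-memory stand-in for the S3 client (fast-check and the real client are replaced by a plain loop and a Map; purely illustrative):

```typescript
// In-memory "S3": uploading then downloading must return the original bytes.
const store = new Map<string, Uint8Array>()

function putObject(key: string, body: Uint8Array): void {
  store.set(key, body.slice()) // copy, as a real backend would
}

function getObject(key: string): Uint8Array | undefined {
  return store.get(key)
}

// Hand-rolled property check over a handful of generated keys and payloads.
let roundTripOk = true
for (let i = 0; i < 50; i++) {
  const key = `obj-${i}`
  const body = new Uint8Array([i, i + 1, (i * 7) % 256])
  putObject(key, body)
  const got = getObject(key)
  if (!got || got.length !== body.length || !got.every((b, j) => b === body[j])) {
    roundTripOk = false
  }
}
```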


vercel bot commented Apr 7, 2026

@Marukome0743 is attempting to deploy a commit to the OpenUp Lab Takizawa Team on Vercel.

A member of the Team first needs to authorize it.


github-actions bot commented Apr 7, 2026

Dependency Review

✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.

Snapshot Warnings

⚠️: No snapshots were found for the head SHA 73ceac1.
Ensure that dependencies are being submitted on PR branches and consider enabling retry-on-snapshot-warnings. See the documentation for more information and troubleshooting advice.

Scanned Files

None


github-actions bot commented Apr 7, 2026

🪄 Deploy Preview ready!


@github-actions github-actions bot added the 📝documentation, 🚀enhancement, ♻️ci, ⚙️config, 🧰infrastructure, ✅test, and docker labels on Apr 7, 2026

sourcery-ai bot commented Apr 7, 2026

Reviewer's Guide

Introduces a Docker Compose-driven local development environment using PostgreSQL and RustFS, wires the app and CI to use an S3-compatible endpoint, and adds focused tests and docs around S3 client configuration, storage backend selection, and dev bootstrap scripts.

Sequence diagram for dev-up script bootstrap flow

sequenceDiagram
  actor Developer
  participant Mise as Mise_task_runner
  participant Bun as Bun_runtime
  participant DevUp as DevUp_script
  participant Docker as Docker_Compose
  participant DB as Postgres
  participant RustFS as RustFS_S3

  Developer->>Mise: run task dev:up
  Mise->>Bun: bun run scripts/dev-up.ts
  Bun->>DevUp: execute main()
  DevUp->>DevUp: generateEnvFile()
  DevUp->>Docker: docker compose up -d --wait
  Docker-->>DB: start postgres:18
  Docker-->>RustFS: start rustfs/rustfs
  DB-->>Docker: healthy
  RustFS-->>Docker: healthy
  DevUp->>Bun: bun run migrate
  Bun->>DB: apply_database_migrations
  DevUp->>RustFS: createBucketIfNotExists(S3Client,bucket)
  RustFS-->>DevUp: bucket_created_or_already_exists
  DevUp-->>Bun: main() resolves
  Bun-->>Mise: task complete
  Mise-->>Developer: local env ready
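The orchestration in the diagram can be reduced to a small sketch. The helper names below mirror the diagram's labels and are stand-ins (the real script would shell out, e.g. via Bun.spawn), not scripts/dev-up.ts itself:

```typescript
// Record each bootstrap step in order, mirroring the sequence diagram above.
const steps: string[] = []

function generateEnvFile(): void {
  steps.push("generate .env")
}

async function run(cmd: string[]): Promise<void> {
  // Placeholder for spawning a subprocess and awaiting its exit.
  steps.push(cmd.join(" "))
}

async function createBucketIfNotExists(bucket: string): Promise<void> {
  steps.push(`ensure bucket ${bucket}`)
}

async function main(): Promise<void> {
  generateEnvFile()
  await run(["docker", "compose", "up", "-d", "--wait"]) // waits for healthchecks
  await run(["bun", "run", "migrate"]) // apply database migrations
  await createBucketIfNotExists("media") // bucket name is a placeholder
}

await main()
```

Each step only starts after the previous one resolves, which is why `--wait` matters: migrations must not run before Postgres reports healthy.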

Class diagram for S3 client configuration and dev-up utilities

classDiagram
  class S3BackendModule {
    +createS3Client() S3Client
    -getEnv(name string) string
    -accessKeyId string
    -secretAccessKey string
    -region string
    -endpoint string
  }

  class DevUpScript {
    +generateEnvFile() void
    +createBucketIfNotExists(s3Client S3Client, bucket string) Promise~void~
    +main() Promise~void~
    -ENV_FILE_PATH string
    -ENV_CONTENT string
  }

  class S3Client {
    +S3Client(config S3ClientConfig)
    +send(command S3Command) Promise~unknown~
  }

  class CreateBucketCommand {
    +CreateBucketCommand(params CreateBucketParams)
  }

  class CreateBucketParams {
    +Bucket string
  }

  class S3ClientConfig {
    +accessKeyId string
    +secretAccessKey string
    +region string
    +endpoint string
    +forcePathStyle boolean
  }

  S3BackendModule --> S3Client : creates
  DevUpScript --> S3Client : creates
  DevUpScript --> CreateBucketCommand : uses
  CreateBucketCommand --> CreateBucketParams : config
  S3Client --> S3ClientConfig : configured_with

File-Level Changes

Change Details Files
Add Docker Compose-based local dev environment with PostgreSQL and RustFS and wire it into tooling and CI.
  • Introduce compose.yaml defining postgres and rustfs services with health checks and volumes.
  • Add mise tasks dev:up and dev:reset to spin up Docker Compose, reset volumes, and run the dev bootstrap script.
  • Update Playwright GitHub Actions workflow to use postgres:18, add a rustfs service, configure S3-related env vars, and create the S3 bucket before tests.
  • Extend docker-related labeler rule to include compose.yaml.
compose.yaml
mise.toml
.github/workflows/playwright.yml
.github/labeler.yml
Support configurable S3-compatible endpoints (e.g. RustFS) in the S3 backend and document the new environment variables and workflow.
  • Update S3 backend client factory to read S3_ENDPOINT and enable forcePathStyle when present.
  • Add README docs for S3_ENDPOINT, the Docker-based local dev environment commands, and RustFS as a dependency.
  • Stop excluding the test directory from TypeScript compilation and adjust .dockerignore.
app/lib/storage/s3-backend.ts
README.md
tsconfig.json
.dockerignore
Add and refactor tests for S3 client configuration, S3 round-trip behavior, storage backend selection, and dev bootstrap script idempotency.
  • Extend s3-backend unit tests with property-based checks for custom endpoint configuration, S3 upload/get round-trips, and explicit createS3Client behavior with and without S3_ENDPOINT.
  • Refactor storage factory tests to use explicit mock backend classes, inline createStorageClient logic, and ensure cross-test isolation via additional mock.module overrides.
  • Add unit tests for the dev-up script’s createBucketIfNotExists helper including idempotency, success, and error paths.
test/unit/lib/storage/s3-backend.test.ts
test/unit/lib/storage/factory/storage.test.ts
test/unit/scripts/dev-up.test.ts
Introduce a dev bootstrap script to generate a local .env, start services, run migrations, and ensure the S3 bucket exists.
  • Add scripts/dev-up.ts that writes a local dev .env, runs docker compose up -d --wait, executes migrations, and creates the configured S3 bucket using an S3 client.
  • Export generateEnvFile and createBucketIfNotExists for reuse and testing, with bucket creation handling BucketAlreadyOwnedByYou gracefully and rethrowing unexpected errors.
scripts/dev-up.ts
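A minimal sketch of the idempotent bucket creation described above. The `S3LikeClient` shape and error-name check are assumptions based on this summary, not the script's actual code:

```typescript
// Hypothetical client shape: just enough of the AWS SDK surface to illustrate.
interface S3LikeClient {
  send(command: { Bucket: string }): Promise<unknown>
}

// Create the bucket, treating "it already exists and is ours" as success.
async function createBucketIfNotExists(
  client: S3LikeClient,
  bucket: string,
): Promise<void> {
  try {
    await client.send({ Bucket: bucket })
  } catch (err) {
    // Re-running dev:up against an existing bucket is fine.
    if (err instanceof Error && err.name === "BucketAlreadyOwnedByYou") return
    throw err // anything else (network, auth) is a real failure
  }
}

// Fake client: first call succeeds, later calls report the bucket as owned.
let calls = 0
const fakeClient: S3LikeClient = {
  async send() {
    calls += 1
    if (calls === 1) return {}
    const e = new Error("bucket exists")
    e.name = "BucketAlreadyOwnedByYou"
    throw e
  },
}

await createBucketIfNotExists(fakeClient, "media") // creates
await createBucketIfNotExists(fakeClient, "media") // already owned: swallowed

let rethrown = false
try {
  await createBucketIfNotExists(
    { async send() { throw new Error("Connection refused") } },
    "media",
  )
} catch (e) {
  rethrown = e instanceof Error && e.message === "Connection refused"
}
```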

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@github-actions github-actions bot added the dependencies label on Apr 7, 2026

vercel bot commented Apr 7, 2026

The latest updates on your projects.

Project: dcrs · Deployment: Ready · Actions: Preview, Comment · Updated (UTC): Apr 7, 2026 1:29pm


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 5 issues, and left some high level feedback:

  • The new tests that mutate process.env (e.g. S3 and storage factory tests) don’t restore the original environment, which can lead to cross-test interference; consider capturing and resetting relevant env vars in beforeEach/afterEach.
  • In scripts/dev-up.ts, generateEnvFile unconditionally overwrites .env, which may clobber a developer’s custom configuration; you might want to skip writing if the file already exists or gate overwrites behind a flag.
  • The dev:reset task in mise.toml runs bun run migration while other places use bun run migrate; aligning these to the same script name would avoid confusion or broken commands.
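The snapshot/restore pattern suggested in the first point can look like the following; this is a generic sketch meant for a beforeEach/afterEach pair, not tied to bun:test's actual hook APIs:

```typescript
// Capture the environment once, then restore it after each test.
let savedEnv: Record<string, string | undefined> = {}

function snapshotEnv(): void {
  savedEnv = { ...process.env }
}

function restoreEnv(): void {
  // Drop vars a test added, then put back the original values.
  for (const key of Object.keys(process.env)) {
    if (!(key in savedEnv)) delete process.env[key]
  }
  Object.assign(process.env, savedEnv)
}

snapshotEnv()
// Simulate what a test body would do (sentinel name is made up for the demo).
process.env.S3_ENDPOINT_TEST_SENTINEL = "http://localhost:9000"
restoreEnv()
```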
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The new tests that mutate `process.env` (e.g. S3 and storage factory tests) don’t restore the original environment, which can lead to cross-test interference; consider capturing and resetting relevant env vars in `beforeEach`/`afterEach`.
- In `scripts/dev-up.ts`, `generateEnvFile` unconditionally overwrites `.env`, which may clobber a developer’s custom configuration; you might want to skip writing if the file already exists or gate overwrites behind a flag.
- The `dev:reset` task in `mise.toml` runs `bun run migration` while other places use `bun run migrate`; aligning these to the same script name would avoid confusion or broken commands.

## Individual Comments

### Comment 1
<location path="scripts/dev-up.ts" line_range="20-22" />
<code_context>
+S3_ENDPOINT=http://localhost:9000
+`
+
+export function generateEnvFile(): void {
+  writeFileSync(ENV_FILE_PATH, ENV_CONTENT)
+  console.log(`✅ Generated ${ENV_FILE_PATH}`)
+}
+
</code_context>
<issue_to_address>
**suggestion:** Always overwriting `.env` may clobber a developer’s local customizations.

Since this unconditionally rewrites `.env`, it can overwrite local overrides (e.g. DB, feature flags, secrets). Consider only creating the file if it’s missing, backing up an existing one, or requiring an explicit flag to force overwrite so `dev:up` doesn’t disrupt local setups.

Suggested implementation:

```typescript
import { existsSync, writeFileSync } from "fs"

```

```typescript
export function generateEnvFile(): void {
  if (existsSync(ENV_FILE_PATH)) {
    console.log(
      `ℹ️ Skipping .env generation: ${ENV_FILE_PATH} already exists. ` +
        `Move or delete it if you want to regenerate the default file.`,
    )
    return
  }

  writeFileSync(ENV_FILE_PATH, ENV_CONTENT)
  console.log(`✅ Generated ${ENV_FILE_PATH}`)
}

```

If the existing import from "fs" is named/imported differently (e.g. `import fs from "fs"`), adapt the import change accordingly by adding `existsSync` in that style rather than the named import shown here.
</issue_to_address>

### Comment 2
<location path="compose.yaml" line_range="2-11" />
<code_context>
     timeout-minutes: 30
     services:
       postgres:
-        image: postgres
+        image: postgres:18
</code_context>
<issue_to_address>
**issue (bug_risk):** Postgres image tag and data directory volume mapping may cause container startup issues.

Two things to verify:
- `postgres:18` may not be published yet (as with the CI workflow), which would break `docker compose up` on fresh environments.
- The official image expects data under `/var/lib/postgresql/data`; mounting a volume to `/var/lib/postgresql` instead can skip the default data directory and init logic. Using `/var/lib/postgresql/data` is usually safer for persistence and upgrades.
</issue_to_address>

### Comment 3
<location path="test/unit/scripts/dev-up.test.ts" line_range="69-73" />
<code_context>
+    consoleSpy.mockRestore()
+  })
+
+  it("throws on unexpected errors", async () => {
+    const sendMock = mock(() => Promise.reject(new Error("Connection refused")))
+    const client = createMockS3Client(sendMock)
+
+    expect(createBucketIfNotExists(client, "bucket")).rejects.toThrow(
+      "Connection refused",
+    )
</code_context>
<issue_to_address>
**issue (testing):** The rejection assertion is not awaited, so the test may pass even if no error is thrown.

Here, `createBucketIfNotExists` returns a promise, but the test neither `await`s nor `return`s the `expect(...).rejects` chain. Update to either:

```ts
await expect(
  createBucketIfNotExists(client, "bucket"),
).rejects.toThrow("Connection refused")
```

or return the expectation from the test. This makes the test actually assert the rejection behavior.
</issue_to_address>

### Comment 4
<location path="test/unit/lib/storage/s3-backend.test.ts" line_range="137-146" />
<code_context>
+describe("Feature: local-dev-environment, Property 2: S3 operation round-trip", () => {
</code_context>
<issue_to_address>
**issue (testing):** The property-based test mutates `process.env` without restoring it, which can leak state across test runs.

Inside this property-based test, each run overwrites `process.env.S3_ACCESS_KEY_ID`, `S3_SECRET_ACCESS_KEY`, and `S3_REGION` without restoring them. With `fc.assert` running many times, this can interfere with other tests that rely on different or unset env values.

Capture and restore the relevant env vars via `beforeEach`/`afterEach` or within the property itself, for example:

```ts
const originalEnv = { ...process.env }

await fc.assert(
  fc.asyncProperty(..., async (...) => {
    try {
      process.env.S3_ACCESS_KEY_ID = "test-key"
      // ...
    } finally {
      process.env = { ...originalEnv }
    }
  }),
)
```
</issue_to_address>

### Comment 5
<location path="test/unit/lib/storage/s3-backend.test.ts" line_range="196-198" />
<code_context>
+  })
+})
+
+describe("createS3Client unit tests", () => {
+  it("uses default AWS endpoint when S3_ENDPOINT is not set", () => {
+    delete process.env.S3_ENDPOINT
+    process.env.S3_ACCESS_KEY_ID = "test-key"
+    process.env.S3_SECRET_ACCESS_KEY = "test-secret"
</code_context>
<issue_to_address>
**suggestion (testing):** The `createS3Client` tests re-implement the client construction logic and rely on AWS SDK internals, which makes them brittle and less effective.

These tests rebuild `new S3Client({...})` directly and only assert on `s3.config.forcePathStyle` / `s3.config.endpoint`, so they neither call `createS3Client` nor validate its behavior. This duplicates the helper’s logic and tightly couples the tests to AWS SDK internals that may change.

Instead, import `createS3Client`, mock `@aws-sdk/client-s3`’s `S3Client` constructor, and assert that it is invoked with the expected options (including `endpoint`/`forcePathStyle`) for different `process.env.S3_ENDPOINT` values. That way you’re testing your helper’s behavior rather than the SDK’s internal config shape.

Suggested implementation:

```typescript
import { S3Client } from "@aws-sdk/client-s3"
// adjust this import path to where createS3Client actually lives
import { createS3Client } from "../../../lib/storage/s3-backend"

jest.mock("@aws-sdk/client-s3", () => {
  const actual = jest.requireActual("@aws-sdk/client-s3")
  return {
    ...actual,
    S3Client: jest.fn(),
  }
})

describe("createS3Client unit tests", () => {
  const ORIGINAL_ENV = process.env

  beforeEach(() => {
    jest.resetModules()
    process.env = { ...ORIGINAL_ENV }
    delete process.env.S3_ENDPOINT
    delete process.env.S3_ACCESS_KEY_ID
    delete process.env.S3_SECRET_ACCESS_KEY
    delete process.env.S3_REGION
  })

  afterAll(() => {
    process.env = ORIGINAL_ENV
  })

  it("uses default AWS endpoint when S3_ENDPOINT is not set", () => {
    process.env.S3_ACCESS_KEY_ID = "test-key"
    process.env.S3_SECRET_ACCESS_KEY = "test-secret"
    process.env.S3_REGION = "us-east-1"

    createS3Client()

    expect(S3Client).toHaveBeenCalledTimes(1)
    expect(S3Client).toHaveBeenCalledWith(
      expect.objectContaining({
        region: "us-east-1",
        credentials: {
          accessKeyId: "test-key",
          secretAccessKey: "test-secret",
        },
      }),
    )

    // when S3_ENDPOINT is not set, we expect no explicit endpoint override
    // and default SDK behavior for path-style vs virtual-hosted-style
    const callArgs = (S3Client as jest.Mock).mock.calls[0][0]
    expect(callArgs.endpoint).toBeUndefined()
    expect(callArgs.forcePathStyle).toBeUndefined()
  })

  it("passes custom endpoint and enables path-style access when S3_ENDPOINT is set", () => {
    process.env.S3_ENDPOINT = "http://localhost:9000"
    process.env.S3_ACCESS_KEY_ID = "test-key"
    process.env.S3_SECRET_ACCESS_KEY = "test-secret"
    process.env.S3_REGION = "us-west-2"

    createS3Client()

    expect(S3Client).toHaveBeenCalledTimes(1)
    expect(S3Client).toHaveBeenCalledWith(
      expect.objectContaining({
        region: "us-west-2",
        credentials: {
          accessKeyId: "test-key",
          secretAccessKey: "test-secret",
        },
        endpoint: "http://localhost:9000",
        forcePathStyle: true,
      }),
    )
  })

  it("supports https custom endpoint without forcing path-style if helper is configured that way", () => {
    process.env.S3_ENDPOINT = "https://example-bucket.s3.custom-endpoint.com"
    process.env.S3_ACCESS_KEY_ID = "test-key"
    process.env.S3_SECRET_ACCESS_KEY = "test-secret"
    process.env.S3_REGION = "eu-central-1"

    createS3Client()

    expect(S3Client).toHaveBeenCalledTimes(1)
    const callArgs = (S3Client as jest.Mock).mock.calls[0][0]

    expect(callArgs).toEqual(
      expect.objectContaining({
        region: "eu-central-1",
        credentials: {
          accessKeyId: "test-key",
          secretAccessKey: "test-secret",
        },
        endpoint: "https://example-bucket.s3.custom-endpoint.com",
      }),
    )

    // Adjust this expectation if your helper always sets forcePathStyle for any custom endpoint
    // or only for certain patterns (e.g. non-AWS / non-https endpoints).
    // Keeping it flexible here and asserting explicitly to document intended behavior.
    expect(callArgs.forcePathStyle === true || callArgs.forcePathStyle === undefined).toBe(true)
  })
})
```

1. **Import path**: Update `import { createS3Client } from "../../../lib/storage/s3-backend"` to the correct relative path for your project (e.g. `../../../../src/lib/storage/s3-backend` or a TS path alias if configured).
2. **Test framework**: If you are using Vitest instead of Jest:
   - Replace `jest.mock` with `vi.mock`.
   - Replace `jest.requireActual` with `vi.importActual`.
   - Replace `jest.resetModules` with `vi.resetModules`.
   - Replace all `jest.*` matcher helpers and types with their `vi` equivalents.
3. **Helper behavior alignment**: Adjust the expectations around `forcePathStyle` in the third test to exactly match `createS3Client`’s intended behavior. If your helper always sets `forcePathStyle: true` whenever `S3_ENDPOINT` is set, simplify the assertion to `expect(callArgs.forcePathStyle).toBe(true)`.
4. **Additional scenarios**: If your helper has more branching (e.g. different behavior for AWS vs non-AWS endpoints), you may want to add corresponding `it(...)` blocks, each calling `createS3Client()` and asserting on the `S3Client` mock arguments.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help us be even more useful! Click 👍 or 👎 on each comment and we’ll use that feedback to improve your reviews.

Instead, import `createS3Client`, mock `@aws-sdk/client-s3`’s `S3Client` constructor, and assert that it is invoked with the expected options (including `endpoint`/`forcePathStyle`) for different `process.env.S3_ENDPOINT` values. That way you’re testing your helper’s behavior rather than the SDK’s internal config shape.

Suggested implementation:

```typescript
import { S3Client } from "@aws-sdk/client-s3"
// adjust this import path to where createS3Client actually lives
import { createS3Client } from "../../../lib/storage/s3-backend"

jest.mock("@aws-sdk/client-s3", () => {
  const actual = jest.requireActual("@aws-sdk/client-s3")
  return {
    ...actual,
    S3Client: jest.fn(),
  }
})

describe("createS3Client unit tests", () => {
  const ORIGINAL_ENV = process.env

  beforeEach(() => {
    jest.resetModules()
    process.env = { ...ORIGINAL_ENV }
    delete process.env.S3_ENDPOINT
    delete process.env.S3_ACCESS_KEY_ID
    delete process.env.S3_SECRET_ACCESS_KEY
    delete process.env.S3_REGION
  })

  afterAll(() => {
    process.env = ORIGINAL_ENV
  })

  it("uses default AWS endpoint when S3_ENDPOINT is not set", () => {
    process.env.S3_ACCESS_KEY_ID = "test-key"
    process.env.S3_SECRET_ACCESS_KEY = "test-secret"
    process.env.S3_REGION = "us-east-1"

    createS3Client()

    expect(S3Client).toHaveBeenCalledTimes(1)
    expect(S3Client).toHaveBeenCalledWith(
      expect.objectContaining({
        region: "us-east-1",
        credentials: {
          accessKeyId: "test-key",
          secretAccessKey: "test-secret",
        },
      }),
    )

    // when S3_ENDPOINT is not set, we expect no explicit endpoint override
    // and default SDK behavior for path-style vs virtual-hosted-style
    const callArgs = (S3Client as jest.Mock).mock.calls[0][0]
    expect(callArgs.endpoint).toBeUndefined()
    expect(callArgs.forcePathStyle).toBeUndefined()
  })

  it("passes custom endpoint and enables path-style access when S3_ENDPOINT is set", () => {
    process.env.S3_ENDPOINT = "http://localhost:9000"
    process.env.S3_ACCESS_KEY_ID = "test-key"
    process.env.S3_SECRET_ACCESS_KEY = "test-secret"
    process.env.S3_REGION = "us-west-2"

    createS3Client()

    expect(S3Client).toHaveBeenCalledTimes(1)
    expect(S3Client).toHaveBeenCalledWith(
      expect.objectContaining({
        region: "us-west-2",
        credentials: {
          accessKeyId: "test-key",
          secretAccessKey: "test-secret",
        },
        endpoint: "http://localhost:9000",
        forcePathStyle: true,
      }),
    )
  })

  it("supports https custom endpoint without forcing path-style if helper is configured that way", () => {
    process.env.S3_ENDPOINT = "https://example-bucket.s3.custom-endpoint.com"
    process.env.S3_ACCESS_KEY_ID = "test-key"
    process.env.S3_SECRET_ACCESS_KEY = "test-secret"
    process.env.S3_REGION = "eu-central-1"

    createS3Client()

    expect(S3Client).toHaveBeenCalledTimes(1)
    const callArgs = (S3Client as jest.Mock).mock.calls[0][0]

    expect(callArgs).toEqual(
      expect.objectContaining({
        region: "eu-central-1",
        credentials: {
          accessKeyId: "test-key",
          secretAccessKey: "test-secret",
        },
        endpoint: "https://example-bucket.s3.custom-endpoint.com",
      }),
    )

    // Adjust this expectation if your helper always sets forcePathStyle for any custom endpoint
    // or only for certain patterns (e.g. non-AWS / non-https endpoints).
    // Keeping it flexible here and asserting explicitly to document intended behavior.
    expect(callArgs.forcePathStyle === true || callArgs.forcePathStyle === undefined).toBe(true)
  })
})
```

1. **Import path**: Update `import { createS3Client } from "../../../lib/storage/s3-backend"` to the correct relative path for your project (e.g. `../../../../src/lib/storage/s3-backend` or a TS path alias if configured).
2. **Test framework**: If you are using Vitest instead of Jest:
   - Replace `jest.mock` with `vi.mock`.
   - Replace `jest.requireActual` with `vi.importActual`.
   - Replace `jest.resetModules` with `vi.resetModules`.
   - Replace all `jest.*` matcher helpers and types with their `vi` equivalents.
3. **Helper behavior alignment**: Adjust the expectations around `forcePathStyle` in the third test to exactly match `createS3Client`’s intended behavior. If your helper always sets `forcePathStyle: true` whenever `S3_ENDPOINT` is set, simplify the assertion to `expect(callArgs.forcePathStyle).toBe(true)`.
4. **Additional scenarios**: If your helper has more branching (e.g. different behavior for AWS vs non-AWS endpoints), you may want to add corresponding `it(...)` blocks, each calling `createS3Client()` and asserting on the `S3Client` mock arguments.
</issue_to_address>


@github-actions

github-actions bot commented Apr 7, 2026

Overview

| Image reference | marukome0743/dcrs:canary | marukome0743/dcrs:pr-1463 |
|---|---|---|
| digest | 419cebdcef92 | 059be723915d |
| tag | canary | pr-1463 |
| provenance | | 73ceac1 |
| vulnerabilities | critical: 0 high: 0 medium: 0 low: 8 | critical: 0 high: 0 medium: 0 low: 0 |
| platform | linux/amd64 | linux/amd64 |
| size | 80 MB | 92 MB (+12 MB) |
| packages | 177 | 182 (+5) |
| Base Image | distroless/static:nonroot | distroless/static:nonroot |
| base vulnerabilities | critical: 0 high: 0 medium: 0 low: 0 | critical: 0 high: 0 medium: 0 low: 0 |
Labels (3 changes)
  • ± 3 changed
  • 5 unchanged
-org.opencontainers.image.created=2026-04-07T00:39:36.944Z
+org.opencontainers.image.created=2026-04-07T13:28:23.684Z
 org.opencontainers.image.description=Disability Certificate Register System📇
 org.opencontainers.image.licenses=Apache-2.0
-org.opencontainers.image.revision=0c98013709ffcc89540514b7f0fea7f8bc4b9ccb
+org.opencontainers.image.revision=73ceac1131799b3e1b8bf59364bc0cac07f6b48c
 org.opencontainers.image.source=https://github.com/OpenUp-LabTakizawa/dcrs
 org.opencontainers.image.title=dcrs
 org.opencontainers.image.url=https://github.com/OpenUp-LabTakizawa/dcrs
-org.opencontainers.image.version=canary
+org.opencontainers.image.version=pr-1463
Packages and Vulnerabilities (7 package changes and 0 vulnerability changes)
  • ➕ 5 packages added
  • ♾️ 2 packages changed
  • 170 packages unchanged
Changes for packages of type deb (5 changes)

| Package | Version (marukome0743/dcrs:canary) | Version (marukome0743/dcrs:pr-1463) |
|---|---|---|
| ➕ gcc-14 | | 14.2.0-19 |
| ➕ glibc | | 2.41-12+deb13u2 |
| ➕ libzstd | | 1.5.7+dfsg-1 |
| ➕ openssl | | 3.5.5-1~deb13u1 |
| ➕ zlib | | 1:1.3.dfsg+really1.3.1-1 |
Changes for packages of type npm (2 changes)

| Package | Version (marukome0743/dcrs:canary) | Version (marukome0743/dcrs:pr-1463) |
|---|---|---|
| ♾️ @next/env | 16.2.1-canary.23 | 16.2.1-canary.24 |
| ♾️ next | 16.2.1-canary.23 | 16.2.1-canary.24 |

@Marukome0743 Marukome0743 merged commit e75497c into OpenUp-LabTakizawa:main Apr 7, 2026
17 of 18 checks passed
@Marukome0743 Marukome0743 deleted the pr1463 branch April 7, 2026 14:39

Labels

♻️ci Changes to CI configuration files and scripts ⚙️config Update configuration files dependencies Pull requests that update a dependency file docker Pull requests that update Docker code 📝documentation Improvements or additions to documentation 🚀enhancement New feature or request 🧰infrastructure Improvements of facility such as linter etc ✅test Something related to test
