feat: use docker compose for development #1463

Marukome0743 merged 1 commit into OpenUp-LabTakizawa:main from
Conversation
@Marukome0743 is attempting to deploy a commit to the OpenUp Lab Takizawa Team on Vercel. A member of the Team first needs to authorize it. |
Dependency Review

✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.

Snapshot Warnings: Ensure that dependencies are being submitted on PR branches and consider enabling retry-on-snapshot-warnings. See the documentation for more information and troubleshooting advice.

Scanned Files: None
🪄 Deploy Preview for ready!
Reviewer's Guide

Introduces a Docker Compose-driven local development environment using PostgreSQL and RustFS, wires the app and CI to use an S3-compatible endpoint, and adds focused tests and docs around S3 client configuration, storage backend selection, and dev bootstrap scripts.

Sequence diagram for the dev-up script bootstrap flow:

sequenceDiagram
actor Developer
participant Mise as Mise_task_runner
participant Bun as Bun_runtime
participant DevUp as DevUp_script
participant Docker as Docker_Compose
participant DB as Postgres
participant RustFS as RustFS_S3
Developer->>Mise: run task dev:up
Mise->>Bun: bun run scripts/dev-up.ts
Bun->>DevUp: execute main()
DevUp->>DevUp: generateEnvFile()
DevUp->>Docker: docker compose up -d --wait
Docker-->>DB: start postgres:18
Docker-->>RustFS: start rustfs/rustfs
DB-->>Docker: healthy
RustFS-->>Docker: healthy
DevUp->>Bun: bun run migrate
Bun->>DB: apply_database_migrations
DevUp->>RustFS: createBucketIfNotExists(S3Client,bucket)
RustFS-->>DevUp: bucket_created_or_already_exists
DevUp-->>Bun: main() resolves
Bun-->>Mise: task complete
Mise-->>Developer: local env ready
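The `createBucketIfNotExists` step at the end of the flow above can be sketched roughly as follows. This is an illustrative sketch, not the repository's actual script: the AWS SDK types are replaced with minimal structural stand-ins so the snippet is self-contained, the error names `BucketAlreadyOwnedByYou`/`BucketAlreadyExists` match the real S3 error codes, and the bucket name is made up.

```typescript
// Minimal structural stand-ins for the AWS SDK types used by the script.
interface S3Command {
  readonly params: { Bucket: string }
}
interface S3ClientLike {
  send(cmd: S3Command): Promise<unknown>
}

function createBucketCommand(bucket: string): S3Command {
  return { params: { Bucket: bucket } }
}

// Idempotent bucket creation: treat "bucket already exists" errors as
// success, and re-throw anything else (e.g. connection refused).
async function createBucketIfNotExists(
  client: S3ClientLike,
  bucket: string,
): Promise<void> {
  try {
    await client.send(createBucketCommand(bucket))
    console.log(`✅ Created bucket ${bucket}`)
  } catch (err) {
    const name = err instanceof Error ? err.name : ""
    if (name === "BucketAlreadyOwnedByYou" || name === "BucketAlreadyExists") {
      console.log(`ℹ️ Bucket ${bucket} already exists`)
      return
    }
    throw err
  }
}

// Demo with a fake client that reports the bucket as already existing:
// the call resolves instead of rejecting.
const existsError = new Error("bucket exists")
existsError.name = "BucketAlreadyOwnedByYou"
const fake: S3ClientLike = { send: () => Promise.reject(existsError) }
createBucketIfNotExists(fake, "dcrs-dev")
```

The key design point is that the script stays safe to re-run: `dev:up` can be invoked repeatedly without failing once the bucket exists.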
Class diagram for S3 client configuration and dev-up utilities:

classDiagram
class S3BackendModule {
+createS3Client() S3Client
-getEnv(name string) string
-accessKeyId string
-secretAccessKey string
-region string
-endpoint string
}
class DevUpScript {
+generateEnvFile() void
+createBucketIfNotExists(s3Client S3Client, bucket string) Promise~void~
+main() Promise~void~
-ENV_FILE_PATH string
-ENV_CONTENT string
}
class S3Client {
+S3Client(config S3ClientConfig)
+send(command S3Command) Promise~unknown~
}
class CreateBucketCommand {
+CreateBucketCommand(params CreateBucketParams)
}
class CreateBucketParams {
+Bucket string
}
class S3ClientConfig {
+accessKeyId string
+secretAccessKey string
+region string
+endpoint string
+forcePathStyle boolean
}
S3BackendModule --> S3Client : creates
DevUpScript --> S3Client : creates
DevUpScript --> CreateBucketCommand : uses
CreateBucketCommand --> CreateBucketParams : config
S3Client --> S3ClientConfig : configured_with
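Based on the diagram above, the endpoint-selection logic in `createS3Client` can be sketched as a pure config builder. This is an illustrative sketch, not the repository's implementation: the env-variable names follow the diagram, `buildS3ClientConfig` is a hypothetical helper, and the rule that a custom `S3_ENDPOINT` implies `forcePathStyle: true` is an assumption based on how RustFS/MinIO-style local endpoints are usually addressed.

```typescript
interface S3ClientConfig {
  region: string
  credentials: { accessKeyId: string; secretAccessKey: string }
  endpoint?: string
  forcePathStyle?: boolean
}

// Build the config object that would be passed to `new S3Client(...)`.
function buildS3ClientConfig(
  env: Record<string, string | undefined>,
): S3ClientConfig {
  const getEnv = (name: string): string => {
    const value = env[name]
    if (!value) throw new Error(`Missing required env var: ${name}`)
    return value
  }

  const config: S3ClientConfig = {
    region: getEnv("S3_REGION"),
    credentials: {
      accessKeyId: getEnv("S3_ACCESS_KEY_ID"),
      secretAccessKey: getEnv("S3_SECRET_ACCESS_KEY"),
    },
  }

  // A custom endpoint (e.g. local RustFS) needs path-style addressing,
  // because virtual-hosted-style bucket subdomains don't resolve on
  // localhost. Without S3_ENDPOINT, the SDK defaults to AWS endpoints.
  if (env.S3_ENDPOINT) {
    config.endpoint = env.S3_ENDPOINT
    config.forcePathStyle = true
  }
  return config
}

const local = buildS3ClientConfig({
  S3_REGION: "us-east-1",
  S3_ACCESS_KEY_ID: "test-key",
  S3_SECRET_ACCESS_KEY: "test-secret",
  S3_ENDPOINT: "http://localhost:9000",
})
console.log(local.forcePathStyle) // true
```

Keeping this as a pure function of an env record also makes it trivially unit-testable without mutating `process.env`, which sidesteps the test-isolation issues raised later in the review.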
File-Level Changes

Tips and commands

Interacting with Sourcery

Customizing Your Experience: Access your dashboard to:

Getting Help
Hey - I've found 5 issues, and left some high level feedback:
- The new tests that mutate `process.env` (e.g. the S3 and storage factory tests) don't restore the original environment, which can lead to cross-test interference; consider capturing and resetting the relevant env vars in `beforeEach`/`afterEach`.
- In `scripts/dev-up.ts`, `generateEnvFile` unconditionally overwrites `.env`, which may clobber a developer's custom configuration; consider skipping the write if the file already exists, or gating overwrites behind a flag.
- The `dev:reset` task in `mise.toml` runs `bun run migration` while other places use `bun run migrate`; aligning these to the same script name would avoid confusion or broken commands.
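The capture-and-restore pattern recommended for the env-mutating tests can be sketched as a small helper. This is a framework-agnostic sketch with a hypothetical name (`snapshotEnv`); in this repository's Bun test suite it would presumably be wired into `beforeEach`/`afterEach` imported from `"bun:test"`.

```typescript
// Snapshot the current values (or absence) of the given env vars and
// return a restore function. Call snapshotEnv() in beforeEach and the
// returned function in afterEach so mutations cannot leak across tests.
function snapshotEnv(keys: string[]): () => void {
  const saved = new Map<string, string | undefined>()
  for (const key of keys) saved.set(key, process.env[key])
  return () => {
    for (const [key, value] of saved) {
      if (value === undefined) delete process.env[key]
      else process.env[key] = value
    }
  }
}

// Usage sketch: a test mutates S3_ENDPOINT, then restores it.
const restore = snapshotEnv(["S3_ENDPOINT", "S3_ACCESS_KEY_ID"])
process.env.S3_ENDPOINT = "http://localhost:9000" // test body mutates env
restore() // afterEach: original values (or absence) come back
```

Tracking absence explicitly matters here: a helper that only re-assigns saved strings would leave a variable set that was originally unset, which is exactly the cross-test interference the review describes.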
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The new tests that mutate `process.env` (e.g. S3 and storage factory tests) don’t restore the original environment, which can lead to cross-test interference; consider capturing and resetting relevant env vars in `beforeEach`/`afterEach`.
- In `scripts/dev-up.ts`, `generateEnvFile` unconditionally overwrites `.env`, which may clobber a developer’s custom configuration; you might want to skip writing if the file already exists or gate overwrites behind a flag.
- The `dev:reset` task in `mise.toml` runs `bun run migration` while other places use `bun run migrate`; aligning these to the same script name would avoid confusion or broken commands.
## Individual Comments
### Comment 1
<location path="scripts/dev-up.ts" line_range="20-22" />
<code_context>
+S3_ENDPOINT=http://localhost:9000
+`
+
+export function generateEnvFile(): void {
+ writeFileSync(ENV_FILE_PATH, ENV_CONTENT)
+ console.log(`✅ Generated ${ENV_FILE_PATH}`)
+}
+
</code_context>
<issue_to_address>
**suggestion:** Always overwriting `.env` may clobber a developer’s local customizations.
Since this unconditionally rewrites `.env`, it can overwrite local overrides (e.g. DB, feature flags, secrets). Consider only creating the file if it’s missing, backing up an existing one, or requiring an explicit flag to force overwrite so `dev:up` doesn’t disrupt local setups.
Suggested implementation:
```typescript
import { existsSync, writeFileSync } from "fs"
```
```typescript
export function generateEnvFile(): void {
if (existsSync(ENV_FILE_PATH)) {
console.log(
`ℹ️ Skipping .env generation: ${ENV_FILE_PATH} already exists. ` +
`Move or delete it if you want to regenerate the default file.`,
)
return
}
writeFileSync(ENV_FILE_PATH, ENV_CONTENT)
console.log(`✅ Generated ${ENV_FILE_PATH}`)
}
```
If the existing import from "fs" is named/imported differently (e.g. `import fs from "fs"`), adapt the import change accordingly by adding `existsSync` in that style rather than the named import shown here.
</issue_to_address>
### Comment 2
<location path="compose.yaml" line_range="2-11" />
<code_context>
timeout-minutes: 30
services:
postgres:
- image: postgres
+ image: postgres:18
</code_context>
<issue_to_address>
**issue (bug_risk):** Postgres image tag and data directory volume mapping may cause container startup issues.
Two things to verify:
- `postgres:18` may not be published yet (as with the CI workflow), which would break `docker compose up` on fresh environments.
- The official image expects data under `/var/lib/postgresql/data`; mounting a volume to `/var/lib/postgresql` instead can skip the default data directory and init logic. Using `/var/lib/postgresql/data` is usually safer for persistence and upgrades.
</issue_to_address>
### Comment 3
<location path="test/unit/scripts/dev-up.test.ts" line_range="69-73" />
<code_context>
+ consoleSpy.mockRestore()
+ })
+
+ it("throws on unexpected errors", async () => {
+ const sendMock = mock(() => Promise.reject(new Error("Connection refused")))
+ const client = createMockS3Client(sendMock)
+
+ expect(createBucketIfNotExists(client, "bucket")).rejects.toThrow(
+ "Connection refused",
+ )
</code_context>
<issue_to_address>
**issue (testing):** The rejection assertion is not awaited, so the test may pass even if no error is thrown.
Here, `createBucketIfNotExists` returns a promise, but the test neither `await`s nor `return`s the `expect(...).rejects` chain. Update to either:
```ts
await expect(
createBucketIfNotExists(client, "bucket"),
).rejects.toThrow("Connection refused")
```
or return the expectation from the test. This makes the test actually assert the rejection behavior.
</issue_to_address>
### Comment 4
<location path="test/unit/lib/storage/s3-backend.test.ts" line_range="137-146" />
<code_context>
+describe("Feature: local-dev-environment, Property 2: S3 operation round-trip", () => {
</code_context>
<issue_to_address>
**issue (testing):** The property-based test mutates `process.env` without restoring it, which can leak state across test runs.
Inside this property-based test, each run overwrites `process.env.S3_ACCESS_KEY_ID`, `S3_SECRET_ACCESS_KEY`, and `S3_REGION` without restoring them. With `fc.assert` running many times, this can interfere with other tests that rely on different or unset env values.
Capture and restore the relevant env vars via `beforeEach`/`afterEach` or within the property itself, for example:
```ts
const originalEnv = { ...process.env }
await fc.assert(
fc.asyncProperty(..., async (...) => {
try {
process.env.S3_ACCESS_KEY_ID = "test-key"
// ...
} finally {
process.env = { ...originalEnv }
}
}),
)
```
</issue_to_address>
### Comment 5
<location path="test/unit/lib/storage/s3-backend.test.ts" line_range="196-198" />
<code_context>
+ })
+})
+
+describe("createS3Client unit tests", () => {
+ it("uses default AWS endpoint when S3_ENDPOINT is not set", () => {
+ delete process.env.S3_ENDPOINT
+ process.env.S3_ACCESS_KEY_ID = "test-key"
+ process.env.S3_SECRET_ACCESS_KEY = "test-secret"
</code_context>
<issue_to_address>
**suggestion (testing):** The `createS3Client` tests re-implement the client construction logic and rely on AWS SDK internals, which makes them brittle and less effective.
These tests rebuild `new S3Client({...})` directly and only assert on `s3.config.forcePathStyle` / `s3.config.endpoint`, so they neither call `createS3Client` nor validate its behavior. This duplicates the helper’s logic and tightly couples the tests to AWS SDK internals that may change.
Instead, import `createS3Client`, mock `@aws-sdk/client-s3`’s `S3Client` constructor, and assert that it is invoked with the expected options (including `endpoint`/`forcePathStyle`) for different `process.env.S3_ENDPOINT` values. That way you’re testing your helper’s behavior rather than the SDK’s internal config shape.
Suggested implementation:
```typescript
import { S3Client } from "@aws-sdk/client-s3"
// adjust this import path to where createS3Client actually lives
import { createS3Client } from "../../../lib/storage/s3-backend"
jest.mock("@aws-sdk/client-s3", () => {
const actual = jest.requireActual("@aws-sdk/client-s3")
return {
...actual,
S3Client: jest.fn(),
}
})
describe("createS3Client unit tests", () => {
const ORIGINAL_ENV = process.env
beforeEach(() => {
jest.resetModules()
process.env = { ...ORIGINAL_ENV }
delete process.env.S3_ENDPOINT
delete process.env.S3_ACCESS_KEY_ID
delete process.env.S3_SECRET_ACCESS_KEY
delete process.env.S3_REGION
})
afterAll(() => {
process.env = ORIGINAL_ENV
})
it("uses default AWS endpoint when S3_ENDPOINT is not set", () => {
process.env.S3_ACCESS_KEY_ID = "test-key"
process.env.S3_SECRET_ACCESS_KEY = "test-secret"
process.env.S3_REGION = "us-east-1"
createS3Client()
expect(S3Client).toHaveBeenCalledTimes(1)
expect(S3Client).toHaveBeenCalledWith(
expect.objectContaining({
region: "us-east-1",
credentials: {
accessKeyId: "test-key",
secretAccessKey: "test-secret",
},
}),
)
// when S3_ENDPOINT is not set, we expect no explicit endpoint override
// and default SDK behavior for path-style vs virtual-hosted-style
const callArgs = (S3Client as jest.Mock).mock.calls[0][0]
expect(callArgs.endpoint).toBeUndefined()
expect(callArgs.forcePathStyle).toBeUndefined()
})
it("passes custom endpoint and enables path-style access when S3_ENDPOINT is set", () => {
process.env.S3_ENDPOINT = "http://localhost:9000"
process.env.S3_ACCESS_KEY_ID = "test-key"
process.env.S3_SECRET_ACCESS_KEY = "test-secret"
process.env.S3_REGION = "us-west-2"
createS3Client()
expect(S3Client).toHaveBeenCalledTimes(1)
expect(S3Client).toHaveBeenCalledWith(
expect.objectContaining({
region: "us-west-2",
credentials: {
accessKeyId: "test-key",
secretAccessKey: "test-secret",
},
endpoint: "http://localhost:9000",
forcePathStyle: true,
}),
)
})
it("supports https custom endpoint without forcing path-style if helper is configured that way", () => {
process.env.S3_ENDPOINT = "https://example-bucket.s3.custom-endpoint.com"
process.env.S3_ACCESS_KEY_ID = "test-key"
process.env.S3_SECRET_ACCESS_KEY = "test-secret"
process.env.S3_REGION = "eu-central-1"
createS3Client()
expect(S3Client).toHaveBeenCalledTimes(1)
const callArgs = (S3Client as jest.Mock).mock.calls[0][0]
expect(callArgs).toEqual(
expect.objectContaining({
region: "eu-central-1",
credentials: {
accessKeyId: "test-key",
secretAccessKey: "test-secret",
},
endpoint: "https://example-bucket.s3.custom-endpoint.com",
}),
)
// Adjust this expectation if your helper always sets forcePathStyle for any custom endpoint
// or only for certain patterns (e.g. non-AWS / non-https endpoints).
// Keeping it flexible here and asserting explicitly to document intended behavior.
expect(callArgs.forcePathStyle === true || callArgs.forcePathStyle === undefined).toBe(true)
})
```
1. **Import path**: Update `import { createS3Client } from "../../../lib/storage/s3-backend"` to the correct relative path for your project (e.g. `../../../../src/lib/storage/s3-backend` or a TS path alias if configured).
2. **Test framework**: If you are using Vitest instead of Jest:
- Replace `jest.mock` with `vi.mock`.
- Replace `jest.requireActual` with `vi.importActual`.
- Replace `jest.resetModules` with `vi.resetModules`.
- Replace all `jest.*` matcher helpers and types with their `vi` equivalents.
3. **Helper behavior alignment**: Adjust the expectations around `forcePathStyle` in the third test to exactly match `createS3Client`’s intended behavior. If your helper always sets `forcePathStyle: true` whenever `S3_ENDPOINT` is set, simplify the assertion to `expect(callArgs.forcePathStyle).toBe(true)`.
4. **Additional scenarios**: If your helper has more branching (e.g. different behavior for AWS vs non-AWS endpoints), you may want to add corresponding `it(...)` blocks, each calling `createS3Client()` and asserting on the `S3Client` mock arguments.
</issue_to_address>
Overview
Labels (3 changes)
-org.opencontainers.image.created=2026-04-07T00:39:36.944Z
+org.opencontainers.image.created=2026-04-07T13:28:23.684Z
org.opencontainers.image.description=Disability Certificate Register System📇
org.opencontainers.image.licenses=Apache-2.0
-org.opencontainers.image.revision=0c98013709ffcc89540514b7f0fea7f8bc4b9ccb
+org.opencontainers.image.revision=73ceac1131799b3e1b8bf59364bc0cac07f6b48c
org.opencontainers.image.source=https://github.com/OpenUp-LabTakizawa/dcrs
org.opencontainers.image.title=dcrs
org.opencontainers.image.url=https://github.com/OpenUp-LabTakizawa/dcrs
-org.opencontainers.image.version=canary
+org.opencontainers.image.version=pr-1463

Packages and Vulnerabilities (7 package changes and 0 vulnerability changes)
Changes for packages of type

| | Package | Version (marukome0743/dcrs:canary) | Version (marukome0743/dcrs:pr-1463) |
|---|---|---|---|
| ➕ | gcc-14 | | 14.2.0-19 |
| ➕ | glibc | | 2.41-12+deb13u2 |
| ➕ | libzstd | | 1.5.7+dfsg-1 |
| ➕ | openssl | | 3.5.5-1~deb13u1 |
| ➕ | zlib | | 1:1.3.dfsg+really1.3.1-1 |

Changes for packages of type npm (2 changes)

| | Package | Version (marukome0743/dcrs:canary) | Version (marukome0743/dcrs:pr-1463) |
|---|---|---|---|
| ♾️ | @next/env | 16.2.1-canary.23 | 16.2.1-canary.24 |
| ♾️ | next | 16.2.1-canary.23 | 16.2.1-canary.24 |
Summary by Sourcery

Introduce a fully local Docker-based development and test environment backed by Postgres and RustFS S3-compatible storage, and wire the app's S3 client and tooling to support custom endpoints for local usage.

New Features:

Enhancements:

CI:

Deployment:

Documentation:

Tests: