Comments

test: required integration test with real provider #316

Open
hanakannzashi wants to merge 41 commits into main from test/real-provider

Conversation

@hanakannzashi
Contributor

@hanakannzashi hanakannzashi commented Dec 29, 2025

Resolves #307


Note

Separates and automates real-provider E2E testing, adds shared test utilities, and adopts nextest across CI and Makefile.

  • CI: New test-real.yml workflow runs only real_test_ suites with Postgres and required secrets; existing test.yml integration step excludes them via -E 'not test(/^real_test_/)'
  • Testing config: Add .config/nextest.toml with parallel threads and extended 500s timeout for real_test_ cases
  • Makefile: Switch unit/integration to nextest; add test-integration-real target with filter
  • Test helpers: New tests/common/endpoints.rs provides HTTP helpers for chat/completions, responses (incl. SSE parsing), conversation CRUD, file upload, and signature/attestation endpoints (URL-encoding)
  • Tests: Add real E2E suites for conversations and signature-attestation validation; refactor existing e2e tests to use common helpers
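
For illustration, the nextest configuration described above might look roughly like the following sketch (the thread count and override syntax are assumptions based on this summary, not the actual file):

```toml
# .config/nextest.toml (sketch, not the PR's actual file)
[profile.default]
# Run tests in parallel
test-threads = 8

# Give real-provider suites an extended timeout
[[profile.default.overrides]]
filter = 'test(/^real_test_/)'
slow-timeout = { period = "500s", terminate-after = 1 }
```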

Written by Cursor Bugbot for commit b117aa7. This will update automatically on new commits. Configure here.

@gemini-code-assist
Contributor

Summary of Changes

Hello @hanakannzashi, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the testing framework by introducing the capability to run end-to-end tests against real inference providers. This is a significant step towards validating the system's behavior in a production-like environment, especially for critical security features. A new test specifically leverages this setup to confirm that the cryptographic signatures generated for chat completions originate from an attested source, thereby bolstering the trustworthiness of model outputs.

Highlights

  • New Test Server Setup for Real Inference Providers: Introduced new utility functions (build_test_server_components_with_real_provider and setup_test_server_with_real_provider) to configure a test server environment that utilizes actual inference providers rather than mocked ones. This allows for more realistic end-to-end testing.
  • End-to-End Signature Verification Test: Added a new end-to-end test case (test_signature_signing_address_matches_model_attestation) that verifies the integrity of chat completion signatures. This test ensures that the signing address provided in a chat completion's signature matches one of the attested signing addresses for the model used, enhancing trust in model interactions.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@claude

claude bot commented Dec 29, 2025

Code Review

I've reviewed this PR which adds a real inference provider integration test for signature verification. Here are the critical findings:

⚠️ CRITICAL ISSUES

1. Missing Error Handling for Real Provider Initialization

In build_test_server_components_with_real_provider() at common/mod.rs:230:

The real provider initialization can fail (as seen in init_inference_providers at lib.rs:437 where it logs a warning on failure), but this test setup doesn't handle or propagate that error. If the model discovery server is unavailable, the test will proceed with an uninitialized pool and fail cryptically later.

Fix: Handle initialization errors explicitly in the test setup function.

2. Race Condition: Insufficient Sleep Duration

At e2e_signature_verification.rs:275 there is a fixed 1-second sleep that assumes the signature will be ready, but with real providers there's no guarantee:

  • Real inference may take longer than 1 second
  • Network latency to external services
  • Database write propagation delays

Fix: Implement retry logic with timeout instead of a fixed sleep. Poll the signature endpoint with exponential backoff or fixed retries (e.g., 10 retries with 500ms delays).

3. Missing Test-Specific Documentation

The new test function lacks documentation explaining:

  • That it requires a REAL inference provider (not mocks)
  • Required environment variables (MODEL_DISCOVERY_SERVER_URL, etc.)
  • Expected behavior if provider is unavailable

Fix: Add comprehensive doc comments explaining test requirements.

⚠️ PRODUCTION SAFETY CONCERNS

4. Resource Leaks in Test Setup

The init_inference_providers function at lib.rs:444 starts a background refresh task that continues running even after the test completes. With many parallel tests, this could exhaust system resources.

Fix: Tests should either:

  • Provide a way to stop the refresh task when test completes
  • Use a very long refresh interval (already set to 3600s in config, which is good)
  • Document that the refresh task will continue until test process exits

5. Potential Test Flakiness

The test makes real network calls and depends on external services. This makes it inherently flaky and potentially slow. Consider:

  • Adding #[ignore] attribute so it only runs when explicitly requested
  • Adding timeout annotations
  • Clear naming that this is an integration test, not a unit test

✅ POSITIVE OBSERVATIONS

  • Good separation of concerns: build_test_server_components_with_real_provider vs mock version
  • Proper use of test database isolation with TestDatabaseGuard
  • Address normalization logic is correct (handles 0x prefix and case)
  • Test validates end-to-end flow including attestation verification
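
For illustration, the address normalization praised above could be sketched as follows (a hypothetical helper and assumed rules, not the PR's actual code):

```rust
// Hypothetical sketch: normalize an address by stripping an optional
// "0x"/"0X" prefix and lowercasing, so comparisons ignore prefix and case.
fn normalize_address(addr: &str) -> String {
    addr.strip_prefix("0x")
        .or_else(|| addr.strip_prefix("0X"))
        .unwrap_or(addr)
        .to_lowercase()
}

fn main() {
    assert_eq!(normalize_address("0xAbCd1234"), "abcd1234");
    assert_eq!(normalize_address("ABCD1234"), normalize_address("0xabcd1234"));
    println!("ok");
}
```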

RECOMMENDATION

⚠️ CHANGES REQUIRED - Fix critical issues #1 and #2 before merge. Issue #3 is important for maintainability. Issues #4 and #5 are recommendations for test reliability.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces new test helpers and an end-to-end test for using a real inference provider, specifically for verifying that signature signing addresses match model attestations. The changes are logical and the new test covers an important aspect of the system. I've identified a few areas for improvement regarding test code maintainability and robustness, including refactoring duplicated test setup logic, avoiding fixed-duration sleeps that can cause flakiness, and using strongly-typed structs for JSON deserialization to improve type safety.

@think-in-universe think-in-universe marked this pull request as draft December 29, 2025 08:44
@think-in-universe
Contributor

Ideally this test should be run after the latest commit in main is deployed into staging. Let's include the below tests:

  1. Smoke test
  2. Verifying cloud-api gateway & models, domains and chat signature, etc.

cursor[bot]

This comment was marked as outdated.

@think-in-universe
Contributor

I think the test coverage is probably not enough, but I think it's fine to start with a small test suite and make sure the workflow is working as expected. We can add more tests in a separate PR.

To trigger the tests against the real providers, just reply with a comment saying "test". This will take effect after we merge this PR into the main branch.

@PierreLeGuen
Contributor

@claude review

@claude

claude bot commented Jan 22, 2026

Code Review - PR #316

Status: ✅ APPROVED - Ready to Merge

This PR successfully separates real provider E2E tests from mock-based tests. The code quality is high with no critical issues found.

Positive Aspects

  1. Good separation of concerns with clear naming (real_test_* prefix)
  2. DRY refactoring - common HTTP helpers eliminate duplication
  3. Proper CI workflow design with on-demand trigger
  4. RAII guard pattern for database cleanup
  5. Appropriate 500s timeout for real provider tests

Minor Observations (Non-blocking)

1. Workflow Comment Documentation (.github/workflows/test-real.yml:25)

  • Comment mentions @tester test but condition checks for test only
  • Consider updating: types: [ created ] # Only trigger on PR comments with test

2. URL Encoding (crates/api/tests/common/endpoints.rs:630, 662)

  • Correctly handles model names with special chars (e.g., Qwen/Qwen3-30B)
  • This is the right approach ✓

3. Sleep Delays (real_e2e tests)

  • 1s delays after signature requests may cause flakiness
  • Acceptable for real provider coordination; retry logic could be added later
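
The URL-encoding point in item 2 can be illustrated with a minimal sketch (hand-rolled for only the characters relevant here; real code would use a proper percent-encoding routine or crate):

```rust
// Minimal sketch: percent-encode the characters that matter when a model
// name like "Qwen/Qwen3-30B" is embedded in a URL path segment.
fn encode_path_segment(s: &str) -> String {
    let mut out = String::new();
    for c in s.chars() {
        match c {
            '/' => out.push_str("%2F"),
            ' ' => out.push_str("%20"),
            _ => out.push(c),
        }
    }
    out
}

fn main() {
    assert_eq!(encode_path_segment("Qwen/Qwen3-30B"), "Qwen%2FQwen3-30B");
    println!("ok");
}
```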

Architecture & Safety Checks

✅ Privacy compliant (no customer data logged)
✅ No SQL injection risks
✅ Proper resource cleanup
✅ Connection pooling configured correctly
✅ No blocking operations in async

Reviewed per CLAUDE.md guidelines: logic correctness, production safety, database handling, performance, Rust-specific concerns

@PierreLeGuen PierreLeGuen self-requested a review January 22, 2026 08:32
Contributor

@PierreLeGuen PierreLeGuen left a comment


Thank you! Real tests worked on my end

        PASS [  24.043s] api::real_e2e_conversations real_test_conversation_items_populated_by_responses_non_stream
        PASS [  24.548s] api::real_e2e_conversations real_test_update_conversation_metadata
        PASS [  26.927s] api::real_e2e_conversations real_test_conversation_items_populated_by_responses_streaming
        PASS [  26.369s] api::real_e2e_response_signature_attestation real_test_signature_signing_address_matches_gateway_attestation_stream
        PASS [  25.840s] api::real_e2e_signature_verification real_test_signature_signing_address_matches_model_attestation_non_stream
        PASS [  27.401s] api::real_e2e_response_signature_attestation real_test_signature_signing_address_matches_gateway_attestation_non_stream
        PASS [  27.555s] api::real_e2e_conversations real_test_delete_conversation
        PASS [  27.518s] api::real_e2e_signature_verification real_test_signature_signing_address_matches_model_attestation_stream
────────────
     Summary [ 249.122s] 653 tests run: 653 passed (16 slow, 1 leaky), 14 skipped

cursor[bot]

This comment was marked as outdated.

@think-in-universe
Contributor

@claude review

@claude

claude bot commented Jan 22, 2026

Code Review - Critical Issues Found ⚠️

I've reviewed this PR and identified 3 critical issues that should be addressed before merge:


1. ⚠️ Test Flakiness: Fixed-Duration Sleeps

Files: real_e2e_signature_verification.rs, real_e2e_response_signature_attestation.rs

Issue: Tests use fixed 1-second sleeps after completion requests:

tokio::time::sleep(tokio::time::Duration::from_millis(1000)).await;

Problem: This assumes signatures will always be ready in 1 second. Under load, slow CI runners, or network latency, this can cause intermittent test failures.

Recommendation: Implement polling with exponential backoff:

// Retry with exponential backoff (max ~10 seconds total)
for attempt in 0u32..10 {
    tokio::time::sleep(std::time::Duration::from_millis(100 * 2u64.pow(attempt.min(3)))).await;
    if let Ok(signature) = try_get_signature(&server, &api_key, &response_id, &model_name, "ecdsa").await {
        return signature;
    }
}
panic!("Signature not available after retries");

2. ⚠️ Excessive Test Timeout: 500 Seconds

File: .config/nextest.toml:14

slow-timeout = { period = "500s", terminate-after = 1 }

Problem: 8+ minute timeout for E2E tests is excessive and masks:

  • Performance regressions (tests slowing down over time)
  • Deadlocks/hangs (won't be detected for 8 minutes)
  • CI resource waste

Recommendation: Use 120-180s maximum. If tests genuinely need 8 minutes, they should be split into smaller, focused tests.


3. ⚠️ Workflow Trigger Mismatch

File: .github/workflows/test-real.yml:23

github.event.comment.body == 'test'

Problem: Exact string match means only the comment "test" (nothing else) will trigger the workflow. Comments like "test this" or "please test" won't work. The PR description mentions "@tester test" but the code checks for just "test".

Recommendation: Clarify the exact trigger phrase or use:

contains(github.event.comment.body, '/test-real')
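
In workflow YAML, that recommendation might look like the sketch below (the job name and surrounding structure are assumptions, not the PR's actual workflow):

```yaml
# Sketch: trigger only on PR comments containing the phrase
on:
  issue_comment:
    types: [created]

jobs:
  test-real:
    # Run only for comments on pull requests that contain '/test-real'
    if: >-
      github.event.issue.pull_request &&
      contains(github.event.comment.body, '/test-real')
    runs-on: ubuntu-latest
```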

Notes

  • The refactoring to shared endpoints.rs helpers is excellent and improves test maintainability ✅
  • Test coverage for signature verification is comprehensive ✅
  • The PR was already approved, but these issues could cause production flakiness

Status: ⚠️ Issues found - recommend fixes before merge


Review Context

Reviewed against: CLAUDE.md privacy/logging guidelines, Rust best practices, production safety for multi-cluster deployment

Methodology: Focused on critical issues only (logic flaws, production safety, test reliability, performance)


@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.

GH_TOKEN: ${{ github.token }}
run: |
set -e
PR_NUMBER=${{ github.event.issue.number }}


Workflow fails on pull_request events due to missing PR number

High Severity

The "Get PR head SHA" step uses github.event.issue.number which is only defined for issue_comment events. When the workflow triggers on pull_request events (opened, ready_for_review), this value is empty, causing gh pr view "" to fail. The existing claude-code-review.yml handles this correctly with github.event.pull_request.number || github.event.issue.number.




Development

Successfully merging this pull request may close these issues.

Keep Some Real Provider Tests for Production Coverage

3 participants