
Feat/abstraction#7

Open
JustAGhosT wants to merge 64 commits into master from feat/abstraction

Conversation

@JustAGhosT commented May 21, 2025

➕ What does this PR do?

🔨 Changes

  • Feature / enhancement
  • Bug fix
  • Chore / refactor

✅ Checklist

  • Tests added/updated
  • Docs updated (or NA)
  • [x] No secrets in diff

🗒 Notes for reviewer

Description by Korbit AI

What change is being made?

Integrate AI guidelines, establish event-driven architecture, and overhaul the Phoenix MarketData platform with updated infrastructure, validation, and security features.

Why are these changes being made?

These changes are intended to improve the scalability, maintainability, and flexibility of the platform, enhance the secure handling of application secrets, and provide robust guidelines for consistent and ethical AI integration on VeritasVault.net. The implementation of an event-driven architecture and enhanced validation system optimizes data processing workflows and ensures data credibility.

Is this description stale? Ask me to generate a new description by commenting /korbit-generate-pr-description

Summary by CodeRabbit

  • New Features

    • Introduced comprehensive AI guidelines and architecture implementation guides.
    • Added robust OpenAPI/Swagger documentation and UI for the Market Data API.
    • Implemented new Azure Functions endpoints for saving market data.
    • Added support for FX volatility surface and crypto ordinal spot price data models.
    • Enabled secret management with Azure Key Vault integration and in-memory caching.
    • Introduced repository and service abstractions for scalable, event-driven data operations.
    • Enhanced validation, error handling, and code coverage reporting.
  • Bug Fixes

    • Improved JSON serialization and deserialization for date and time types.
    • Enhanced validation logic for market data entities.
  • Documentation

    • Expanded README and inline API documentation.
    • Added OpenAPI specification and example requests for easier integration.
  • Tests

    • Added extensive unit tests for repositories, validation, event processing, and mapping.
    • Automated test coverage reporting with scripts and CI integration.
  • Chores

    • Improved .gitignore and VS Code workspace configuration.
    • Added scripts and Git hooks for linting, testing, and coverage automation.
    • Updated project and package configurations for streamlined development and CI/CD.

JustAGhosT and others added 18 commits May 19, 2025 17:46
Add AI guidelines for VeritasVault.net with implementation standards, ethical considerations, and workflow practices. Include configuration files for AI context providers and settings. Set up VSCode launch and task configurations for Phoenix.MarketData.Functions development.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Add foundational components for the market data service including:
- MarketDataService implementation with push/pull data operations
- Domain interfaces for repository, event publishing, and validation
- Infrastructure implementations for secrets, events, and validation
- Repository factory pattern for data access abstraction

The implementation supports market data publishing, retrieval, and querying with proper validation and event notification.
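
For orientation, a minimal sketch of the repository-factory abstraction this commit describes; the interface shapes and names here are illustrative assumptions, not the exact code in the PR.

// Hypothetical sketch: a factory resolves a repository per entity type so callers never
// construct data-access classes directly. Names are assumptions for illustration only.
using System;
using System.Threading.Tasks;

public interface IRepositorySketch<T>
{
    Task SaveAsync(T entity);
    Task<T?> GetByIdAsync(string id);
}

public interface IRepositoryFactorySketch
{
    IRepositorySketch<T> CreateRepository<T>() where T : class;
}

public class RepositoryFactorySketch : IRepositoryFactorySketch
{
    private readonly IServiceProvider _services;

    public RepositoryFactorySketch(IServiceProvider services) => _services = services;

    // Resolve the concrete repository registered for T from the DI container.
    public IRepositorySketch<T> CreateRepository<T>() where T : class =>
        (IRepositorySketch<T>)_services.GetService(typeof(IRepositorySketch<T>))!;
}

The point of the factory is to keep the service layer free of storage-specific construction details; the concrete registrations live in DI configuration.
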
Add core interfaces and utilities for the Phoenix.MarketData module including:
- Configuration interfaces for secret management
- Event publishing abstractions
- Repository patterns
- Model mapping interfaces
- Comprehensive validation framework with FluentValidation and JSON Schema support
- Move Phoenix.MarketData.Functions to src directory
- Add Phoenix.MarketData.Core project to solution
- Update paths in tasks.json to reflect new structure
- Increase context provider limits for AI guidance
- Add VSCode settings with custom dictionary words
Add secret caching mechanism to improve performance when retrieving secrets from Azure Key Vault. Implement a no-op cache for when caching isn't required. Add new methods to retrieve EventGrid endpoint and EventHub connection string.
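
A minimal sketch of the caching idea described above, assuming an ISecretCache-style abstraction (the PR adds src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs); the member names and TTL policy below are assumptions, not the real interface.

// Illustrative only: cache secrets in memory with a TTL, plus a no-op cache for when caching is disabled.
using System;
using System.Collections.Concurrent;

public interface ISecretCacheSketch
{
    bool TryGetSecret(string name, out string? value);
    void SetSecret(string name, string value);
}

public class InMemorySecretCacheSketch : ISecretCacheSketch
{
    private readonly ConcurrentDictionary<string, (string Value, DateTimeOffset Expires)> _cache = new();
    private readonly TimeSpan _ttl;

    public InMemorySecretCacheSketch(TimeSpan ttl) => _ttl = ttl;

    public bool TryGetSecret(string name, out string? value)
    {
        if (_cache.TryGetValue(name, out var entry) && entry.Expires > DateTimeOffset.UtcNow)
        {
            value = entry.Value;
            return true;
        }
        value = null;
        return false;
    }

    public void SetSecret(string name, string value) =>
        _cache[name] = (value, DateTimeOffset.UtcNow.Add(_ttl));
}

// No-op implementation: every lookup misses, so callers always fall through to Key Vault.
public class NoOpSecretCacheSketch : ISecretCacheSketch
{
    public bool TryGetSecret(string name, out string? value) { value = null; return false; }
    public void SetSecret(string name, string value) { }
}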

Replace the MarketDataRepository with a generic Repository implementation and update related factories. Modernize serialization by switching from Newtonsoft.Json to System.Text.Json. Add validation classes for market data types.

Enhance event publishing capabilities with EventGridPublisher implementation that supports both single and batch event publishing.
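
The single/batch publishing mentioned above could look roughly like this with the Azure.Messaging.EventGrid SDK; the wrapper class, event-type handling, and data version are assumptions rather than the PR's actual EventGridPublisher.

// Sketch of single and batch publishing to Event Grid.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Azure.Messaging.EventGrid;

public class EventGridPublisherSketch
{
    private readonly EventGridPublisherClient _client;

    public EventGridPublisherSketch(EventGridPublisherClient client) => _client = client;

    // Publish one event.
    public Task PublishAsync<T>(T payload, string subject, string eventType) =>
        _client.SendEventAsync(ToEvent(payload, subject, eventType));

    // Publish a batch of events in a single call.
    public Task PublishBatchAsync<T>(IEnumerable<(T Payload, string Subject)> items, string eventType) =>
        _client.SendEventsAsync(items.Select(i => ToEvent(i.Payload, i.Subject, eventType)));

    private static EventGridEvent ToEvent<T>(T payload, string subject, string eventType) =>
        // dataVersion "1.0" is arbitrary here; real code would version its event schema explicitly.
        new EventGridEvent(subject, eventType, "1.0", BinaryData.FromObjectAsJson(payload));
}
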
Add VSCode configurations for Azure Functions development including:
- New extensions.json with recommended extensions
- Updated launch.json with .NET Functions debugging config
- Enhanced settings.json with Azure Functions deployment settings
- Added architecture implementation guide documentation
- Reorganized project structure in solution file
This commit implements a major refactoring of the market data system:

- Created generic CosmosRepository<T> with comprehensive CRUD operations
- Moved domain models to Core.Models namespace for better organization
- Added proper event publishing infrastructure with EventGridPublisher
- Implemented repository interfaces and extensions for versioning
- Improved validation with better exception handling
- Added proper DI registration in Functions app
- Fixed unit tests to work with the new repository pattern
- Consolidated DTOs and mapping logic for consistency
- Added support for soft delete pattern in repositories (sketched after this list)
- Improved error handling and logging throughout the codebase
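
A rough illustration of the soft-delete behaviour noted in the list above, under the assumption that entities expose an IsDeleted flag; the Cosmos SDK calls are standard, but the surrounding type names are invented for the example.

// Sketch: soft delete marks the document instead of removing it, so history and versioning stay intact.
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public interface ISoftDeletableSketch
{
    bool IsDeleted { get; set; }
}

public class SoftDeleteRepositorySketch<T> where T : class, ISoftDeletableSketch
{
    private readonly Container _container;

    public SoftDeleteRepositorySketch(Container container) => _container = container;

    public async Task<bool> DeleteAsync(string id, string partitionKey, bool soft = true)
    {
        var pk = new PartitionKey(partitionKey);
        if (!soft)
        {
            // Hard delete removes the document outright.
            await _container.DeleteItemAsync<T>(id, pk);
            return true;
        }

        // Soft delete: read, flag, and upsert; queries then filter on IsDeleted by default.
        var response = await _container.ReadItemAsync<T>(id, pk);
        var entity = response.Resource;
        entity.IsDeleted = true;
        await _container.UpsertItemAsync(entity, pk);
        return true;
    }
}
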
Add pre-commit hook, validation decorators, and event subscriber implementation with retry logic. Includes unit tests for the ValidationDecorator and MarketDataEventSubscriber classes. Also adds husky for Git hooks and fixes project references in solution file.
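
For the retry logic mentioned here, a minimal Polly-based sketch (Polly is also named in the Copilot file summary further down); the policy shape, retry count, and backoff are assumptions, not the subscriber's actual configuration.

// Sketch: retry a failing event handler with exponential backoff before giving up.
using System;
using System.Threading.Tasks;
using Polly;
using Polly.Retry;

public static class EventRetrySketch
{
    private static readonly AsyncRetryPolicy ThreeRetries = Policy
        .Handle<Exception>()                       // real code would narrow this to transient exceptions
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public static Task HandleWithRetryAsync(Func<Task> handleEvent) =>
        ThreeRetries.ExecuteAsync(handleEvent);
}
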
This commit introduces several improvements to the validation system and event handling:

1. Replaces the old validation system with a more flexible and async-based approach
2. Adds a dedicated MarketDataEventProcessor to separate event handling logic
3. Enhances integration event models with a common interface and base class
4. Restructures tests directory and updates tests to use xUnit
5. Adds VS Code tasks for building and testing the solution
6. Configures dotnet-test-explorer to use the new test project structure
Add code coverage configuration and reporting tools to the project:
- Configure coverlet collector with proper settings
- Add PowerShell and Bash scripts for generating coverage reports
- Create VS Code task for running coverage reports
- Update .gitignore to exclude coverage artifacts
- Add CompositeValidator tests
- Update documentation with coverage instructions and current status
Add CI workflow to automatically build and test the project on push/PR to main branches. The workflow includes:
- .NET 8 setup and build
- Test execution with coverage reporting
- Coverage threshold checks
- PR comment with coverage results
- Upload of coverage artifacts

Also adds test files for validators and mappers, fixes existing tests, and improves coverage reporting configuration.
…se formatting

Add DisplayAssetId property to BaseMarketData class to maintain original case formatting of asset IDs while still using normalized lowercase versions for ID calculations and matching. Refactor mappers to use this new property when converting between domain models and DTOs. Update validation logic and tests to support the new property.
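
One plausible shape for the property described above, kept deliberately generic: preserve the caller's original casing for display while normalizing to lowercase for ID construction and matching. The actual BaseMarketData implementation may differ.

// Sketch: original-case value for display, lowercase value for IDs and lookups.
public abstract class MarketDataCasingSketch
{
    private string _assetId = string.Empty;

    // Normalized form used in ID calculation and matching.
    public string AssetId
    {
        get => _assetId;
        set
        {
            DisplayAssetId = value;                          // keep what the caller sent
            _assetId = value?.ToLowerInvariant() ?? string.Empty;
        }
    }

    // Original-case form, used only when rendering or serializing back to clients.
    public string DisplayAssetId { get; private set; } = string.Empty;
}
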
Update src/Phoenix.MarketData.Infrastructure/Serialization/CryptoOrdinalSpotPriceDataDto.cs

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Copilot AI review requested due to automatic review settings May 21, 2025 22:31

korbit-ai bot commented May 21, 2025

Korbit doesn't automatically review large (3000+ lines changed) pull requests such as this one. If you want me to review anyway, use /korbit-review.

coderabbitai bot commented May 21, 2025

Walkthrough

This update introduces a comprehensive overhaul and expansion of the Phoenix Market Data Platform. It adds new core, application, and infrastructure projects with modular architecture, advanced repository patterns, event-driven integration, validation layers, and secret management. The Azure Functions API is refactored for OpenAPI support, robust error handling, and DTO/domain mapping. Extensive test coverage, CI/CD integration, and developer tooling are included.

Changes

| File(s) / Group | Change Summary |
| --- | --- |
| .aiguidance/ai-guidelines.md, .aiguidance/context-providers.json, .aiguidance/settings.json | Added detailed AI guidelines and configuration files for context providers and AI settings. |
| .github/workflows/build-and-test.yml, .github/workflows/update-guidelines-timestamp.yml | Added GitHub Actions workflows for CI build/test and auto-updating guidelines timestamps. |
| .gitignore, .prettierignore | Expanded ignore patterns for coverage, Node.js, build, environment, IDE, and temp files. |
| .husky/pre-commit, .husky/post-commit, .husky/post-commit.cmd | Added Husky hooks for pre-commit linting/tests and post-commit coverage reporting (shell and Windows). |
| .vscode/extensions.json, .vscode/launch.json, .vscode/settings.json, .vscode/tasks.json | Added VS Code workspace configuration for extensions, debugging, tasks, and settings. |
| package.json, scripts/coverage-report.js, scripts/run-coverage.js, scripts/test-wrapper.js, scripts/update-guidelines.js | Added Node.js tooling for formatting, test running, coverage, and guideline updates. |
| docs/README.md, docs/architecture-implementation-guide.md | Enhanced documentation with testing, coverage, and a detailed architecture guide. |
| Phoenix.MarketDataPlatform.sln | Updated solution to include new core, application, and infrastructure projects; fixed paths. |
| src/Phoenix.MarketData.Core/* | Introduced core interfaces for events, repositories, mapping, validation, and configuration. |
| src/Phoenix.MarketData.Domain/* | Added/updated domain models, enums, constants, service interfaces, and repository contracts. |
| src/Phoenix.MarketData.Application/* | Added application services for event processing, validation decorators, and market data service logic. |
| src/Phoenix.MarketData.Functions/* | Refactored Azure Functions entry point, OpenAPI/Swagger integration, HTTP function for saving market data, and supporting models. |
| src/Phoenix.MarketData.Infrastructure/* | Implemented Cosmos DB generic repository, event publishing to Event Grid, secret management, mapping, validation, and batch operations. |
| tests/Phoenix.MarketData.Application.Tests/*, tests/Phoenix.MarketData.Infrastructure.Tests/* | Added extensive unit tests for services, validators, mappers, repositories, and decorators; included scripts and configs for coverage. |
| src/Phoenix.MarketData.Domain/Models/Interfaces/IMarketData.cs, src/Phoenix.MarketData.Infrastructure/Cosmos/*, Phoenix.MarketData.Infrastructure.Tests/MarketDataRepositoryTests.cs, Phoenix.MarketData.Infrastructure.Tests/Phoenix.MarketData.Infrastructure.Tests.csproj, Phoenix.MarketData.Functions/Program.cs, Phoenix.MarketData.Functions/SaveDocumentToDb.cs, src/Phoenix.MarketData.Infrastructure/Cosmos/VersionManager.cs | Removed legacy interfaces, repository, and program files replaced by new architecture. |
| src/Phoenix.MarketData.Functions/HttpRequests/SaveMarketData.http | Added documentation and comments for HTTP request examples. |
| src/Phoenix.MarketData.Functions/openapi.json, src/Phoenix.MarketData.Functions/wwwroot/index.html | Added OpenAPI specification and Swagger UI HTML for API documentation. |
| tests/Phoenix.MarketData.Infrastructure.Tests/coverage-threshold.xml, tests/Phoenix.MarketData.Infrastructure.Tests/coverage.runsettings, tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh, tests/Phoenix.MarketData.Infrastructure.Tests/Generate-CoverageReport.ps1 | Added code coverage configuration and scripts for reporting. |
| src/Phoenix.MarketData.Infrastructure/Repositories/* | Added/updated repository interfaces, batch, query, partition key, and exception handling logic. |
| src/Phoenix.MarketData.Infrastructure/Validation/* | Added composite validators and entity-specific validation logic. |

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant AzureFunction
    participant Validator
    participant Mapper
    participant Repository
    participant EventPublisher
    participant CosmosDB

    Client->>AzureFunction: HTTP POST /api/SaveDocumentToDb
    AzureFunction->>Validator: Validate JSON schema & DTO
    Validator-->>AzureFunction: ValidationResult
    AzureFunction->>Mapper: Map DTO -> Domain Entity
    Mapper-->>AzureFunction: Domain Entity
    AzureFunction->>Repository: SaveAsync(Domain Entity)
    Repository->>CosmosDB: Upsert Entity
    CosmosDB-->>Repository: Save Result
    Repository->>EventPublisher: Publish DataChanged Event
    EventPublisher-->>Repository: Ack
    Repository-->>AzureFunction: Save Result
    AzureFunction-->>Client: HTTP 200/400/500 Response
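
A compressed sketch of the request path the diagram shows, using the isolated-worker Azure Functions model; the validation, mapping, and save steps are stubbed out here and are not the PR's actual SaveDocumentToDb implementation.

// Sketch: validate, map/save (stubbed), and return 200/400/500 as in the diagram above.
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class SaveMarketDataFunctionSketch
{
    [Function("SaveDocumentToDb")]
    public async Task<HttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        var body = await req.ReadAsStringAsync() ?? string.Empty;

        // 1. Validate (JSON schema + DTO). A real implementation delegates to the composite validator.
        if (string.IsNullOrWhiteSpace(body))
        {
            var bad = req.CreateResponse(HttpStatusCode.BadRequest);
            await bad.WriteStringAsync("Request body is empty or invalid.");
            return bad;
        }

        try
        {
            // 2. Map DTO -> domain entity, 3. save via repository, 4. publish change event.
            //    Those collaborators are shown in the sequence diagram and stubbed here.
            await Task.CompletedTask;
            return req.CreateResponse(HttpStatusCode.OK);
        }
        catch (Exception)
        {
            return req.CreateResponse(HttpStatusCode.InternalServerError);
        }
    }
}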

Possibly related PRs

  • Feature/new #2: Adds initial HTTP request examples and scaffolding for the SaveDocumentToDb function, closely related to this PR's refactored and fully implemented Azure Function for saving market data with validation and error handling.

Poem

(\(\
( -.-) A hop, a leap, a code refactor bright,
((")(") Repos and functions now shining with light.
Mappers and validators, events on the breeze,
Cosmos and coverage—ship it with ease!
Swagger docs flutter, tests run with delight—
The platform now sparkles, all through the night.


@JustAGhosT (Author) commented

/korbit-review

Copilot AI left a comment

Pull Request Overview

This PR introduces a new Application project for handling market data events, integrates retry logic, and updates solution structure and CI/CD tooling for coverage and workspace configuration.

  • Add Phoenix.MarketData.Application project with event subscriber and retry policy
  • Refresh solution file, move tests under tests/, and wire up coverage scripts and GitHub Actions
  • Update docs and VS Code tasks/settings for testing and coverage reporting

Reviewed Changes

Copilot reviewed 115 out of 115 changed files in this pull request and generated 4 comments.

Show a summary per file
| File | Description |
| --- | --- |
| src/Phoenix.MarketData.Application/Phoenix.MarketData.Application.csproj | Add Application project with framework references and dependencies |
| src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs | Implement event-driven subscriber with Polly retry policy |
| run-tests-with-coverage.cmd | Add script to run .NET tests with coverage and generate HTML report |
| package.json | Configure Husky pre-commit hook and placeholder npm test script |
| docs/architecture-implementation-guide.md | Add high-level architecture implementation guide |
| docs/README.md | Extend README with testing & coverage instructions |
| Phoenix.MarketDataPlatform.sln | Update solution to include new Core and Application projects |
| Phoenix.MarketData.Infrastructure.Tests/Phoenix.MarketData.Infrastructure.Tests.csproj | Remove legacy tests project file (tests relocated) |
| Phoenix.MarketData.Infrastructure.Tests/MarketDataRepositoryTests.cs | Delete old repository tests (migrating into new structure) |
| .vscode/tasks.json | Add VS Code tasks for build, function host, test and coverage report |
| .vscode/settings.json | Update editor settings (spellings, Azure Functions paths, test explorer) |
| .vscode/launch.json | Add attach-to-Functions host launch configuration |
| .vscode/extensions.json | Recommend Azure Functions and C# extensions |
| .husky/pre-commit | Hook to run npm test before commits |
| .github/workflows/build-and-test.yml | CI workflow for build, test, coverage and coverage PR comments |
| .aiguidance/settings.json | Add AI-guidance engine settings |
| .aiguidance/context-providers.json | Configure context providers for AI guidance |
| .aiguidance/ai-guidelines.md | Add enterprise AI guidelines |
Comments suppressed due to low confidence (1)

Phoenix.MarketData.Infrastructure.Tests/Phoenix.MarketData.Infrastructure.Tests.csproj:1

  • The tests project file is removed; ensure that all existing tests are migrated to the new tests/Phoenix.MarketData.Infrastructure.Tests folder and included in the solution to maintain coverage.
-<Project Sdk="Microsoft.NET.Sdk">

"command": "cd tests/Phoenix.MarketData.Infrastructure.Tests && powershell -ExecutionPolicy Bypass -File .\\Generate-CoverageReport.ps1"
},
"linux": {
"command": "cd tests/Phoenix.MarketData.Infrastructure.Tests && ./generate-coverage-report.sh"

Copilot AI commented May 21, 2025

The task references generate-coverage-report.sh but only a PowerShell script (Generate-CoverageReport.ps1) exists; ensure the shell script is added or adjust the command/path to match the actual file name.

greptile-apps bot left a comment

PR Summary

This PR implements a comprehensive architectural overhaul of the Phoenix Market Data Platform, introducing event-driven patterns, validation frameworks, and improved repository abstractions.

  • Added src/Phoenix.MarketData.Core project with core abstractions including IMarketDataEntity, validation framework, and event interfaces
  • Implemented event-driven architecture in src/Phoenix.MarketData.Application/Events with retry capabilities and proper error handling
  • Added robust validation framework in src/Phoenix.MarketData.Infrastructure/Validation with composite pattern and type-specific validators
  • Introduced comprehensive test suite with 60% coverage target and proper CI integration via GitHub Actions
  • Migrated from Newtonsoft.Json to System.Text.Json across the codebase with proper converter implementations
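
As an illustration of the converter work mentioned in the last bullet, a DateOnly converter for System.Text.Json might look like the following; the PR's actual converters live in src/Phoenix.MarketData.Functions/JsonConverters/JsonConverters.cs and may differ in details such as the format string.

// Sketch: round-trip DateOnly as an ISO 8601 "yyyy-MM-dd" string with System.Text.Json.
using System;
using System.Globalization;
using System.Text.Json;
using System.Text.Json.Serialization;

public class DateOnlyIsoConverter : JsonConverter<DateOnly>
{
    private const string Format = "yyyy-MM-dd";

    public override DateOnly Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options) =>
        DateOnly.ParseExact(reader.GetString()!, Format, CultureInfo.InvariantCulture);

    public override void Write(Utf8JsonWriter writer, DateOnly value, JsonSerializerOptions options) =>
        writer.WriteStringValue(value.ToString(Format, CultureInfo.InvariantCulture));
}

Registering such a converter in JsonSerializerOptions.Converters covers both serialization and deserialization.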


95 file(s) reviewed, 128 comment(s)

<UseSourceLink>true</UseSourceLink>
<IncludeTestAssembly>false</IncludeTestAssembly>
<SkipAutoProps>true</SkipAutoProps>
<DeterministicReport>false</DeterministicReport>

style: DeterministicReport=false may cause inconsistent coverage reports between runs. Consider enabling for CI/CD

coderabbitai bot left a comment

Actionable comments posted: 55

🔭 Outside diff range comments (2)
src/Phoenix.MarketData.Infrastructure/Mapping/BaseMarketDataMapper.cs (1)

1-35: 💡 Verification agent

🧩 Analysis chain

Verify no residual imports of the old namespace
To ensure consistency across the codebase, please search for any remaining references to Phoenix.MarketData.Domain.Models and update them to the new Core.Models namespace if found.


🏁 Script executed:

#!/bin/bash
# Search for obsolete namespace imports in the repo
rg "Phoenix\.MarketData\.Domain\.Models" -n

Length of output: 3776


Update remaining legacy imports of Phoenix.MarketData.Domain.Models
We’re still seeing numerous references to the old namespace—these need to be updated to Phoenix.MarketData.Core.Models for consistency:

• tests/Phoenix.MarketData.Application.Tests/Services/MarketDataServiceTests.cs
• tests/Phoenix.MarketData.Infrastructure.Tests/Validation/ValidationDecoratorTests.cs
• tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CryptoOrdinalSpotPriceDataValidatorTests.cs
• tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CompositeValidatorTests.cs
• tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/FxSpotPriceDataMapperTests.cs
• tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs
• tests/Phoenix.MarketData.Infrastructure.Tests/Validation/FxSpotPriceDataValidatorTests.cs
• tests/Phoenix.MarketData.Infrastructure.Tests/Events/MarketDataEventSubscriberTests.cs
• src/Phoenix.MarketData.Domain/Models/BaseMarketData.cs
• src/Phoenix.MarketData.Domain/Services/IMarketDataService.cs
• src/Phoenix.MarketData.Domain/Models/Interfaces/IMarketDataEntity.cs
• src/Phoenix.MarketData.Domain/Repositories/IRepository.cs
• src/Phoenix.MarketData.Domain/Events/IMarketDataEventPublisher.cs
• src/Phoenix.MarketData.Core/Domain/MarketDataChangedEvent.cs
• src/Phoenix.MarketData.Core/Domain/MarketDataCreatedEvent.cs
• src/Phoenix.MarketData.Application/Services/MarketDataService.cs
• src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs
• src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs
• src/Phoenix.MarketData.Application/Services/Decorators/ValidationMarketDataServiceDecorator.cs
• src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs
• src/Phoenix.MarketData.Infrastructure/Validation/FxSpotPriceDataValidator.cs
• src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryExtensions.cs
• src/Phoenix.MarketData.Infrastructure/Repositories/Interfaces.cs
• src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryEvents.cs
• src/Phoenix.MarketData.Infrastructure/Validation/CryptoOrdinalSpotPriceDataValidator.cs
• src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepository.cs
• src/Phoenix.MarketData.Infrastructure/Repositories/VersioningExtensions.cs
• src/Phoenix.MarketData.Infrastructure/Validation/CompositeValidator.cs
• src/Phoenix.MarketData.Infrastructure/Mapping/IMarketDataMapper.cs
• src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs
• src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs
• src/Phoenix.MarketData.Functions/SaveDocumentToDb.cs

Please replace all using Phoenix.MarketData.Domain.Models; entries with using Phoenix.MarketData.Core.Models;.

src/Phoenix.MarketData.Domain/Models/BaseMarketData.cs (1)

8-21: 🛠️ Refactor suggestion

Improved ID caching with lazy calculation

The addition of the private _id field with lazy calculation in the Id property is a good optimization. The field is correctly invalidated whenever a property that affects the ID changes.

However, the current implementation isn't thread-safe. If multiple threads access the Id property simultaneously when _id is null, they could all enter the CalculateId method and potentially overwrite each other's results.

Consider using the Lazy type for thread-safe lazy initialization:

-    private string? _id;
+    private Lazy<string> _lazyId;
+    
+    protected BaseMarketData()
+    {
+        _lazyId = new Lazy<string>(CalculateId);
+    }

-    public string Id => _id ??= CalculateId();
+    public string Id => _lazyId.Value;

// Then replace all instances of '_id = null' with:
+    _lazyId = new Lazy<string>(CalculateId);
🧹 Nitpick comments (104)
.aiguidance/settings.json (1)

4-4: Normalize JSON Spacing
The extra spaces before the opening brace in the performance entry are inconsistent with other object literals. Consider using a single space after the colon for uniform styling.

.aiguidance/context-providers.json (1)

1-6: Document Provider Constraints
Consider adding inline comments or updating project documentation to clarify behaviors when these limits are exceeded (e.g., truncation strategies or error handling).

.aiguidance/ai-guidelines.md (1)

107-110: Grammar: Missing Article
In the Sustainability section, revise “Consider environmental impact of training and inference” to “Consider the environmental impact of training and inference” for improved clarity.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~108-~108: You might be missing the article “the” here.
Context: ... computational efficiency - Consider environmental impact of training and inference - B...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)

run-tests-with-coverage.cmd (2)

4-4: Consider security implications of bypassing PowerShell execution policy

While bypassing the execution policy is common in development environments, it has security implications. Consider signing your PowerShell scripts or using a more restrictive policy if possible.


3-4: Add error handling for directory change and script execution

The script doesn't handle potential errors when changing directory or executing the PowerShell script.

@echo off
dotnet test --settings tests/Phoenix.MarketData.Infrastructure.Tests/coverage.runsettings
-cd tests/Phoenix.MarketData.Infrastructure.Tests
-powershell -ExecutionPolicy Bypass -File .\Generate-CoverageReport.ps1
+cd tests/Phoenix.MarketData.Infrastructure.Tests || (echo Failed to change directory & exit /b 1)
+powershell -ExecutionPolicy Bypass -File .\Generate-CoverageReport.ps1
+if %ERRORLEVEL% neq 0 (
+    echo Failed to generate coverage report
+    exit /b %ERRORLEVEL%
+)
+echo Coverage report generated successfully
src/Phoenix.MarketData.Application/Phoenix.MarketData.Application.csproj (1)

19-24: Project references may violate clean architecture principles

The Application project has references to both Infrastructure and Core projects. In typical clean architecture:

  • Infrastructure should depend on Application (not vice versa)
  • Both should depend on Core/Domain

This reference structure might create circular dependencies or tight coupling.

Consider reviewing the dependency direction to ensure it follows clean architecture principles:

  1. Domain/Core should have no dependencies on other project layers
  2. Application should depend only on Domain/Core
  3. Infrastructure should depend on Application and Domain/Core
docs/README.md (1)

52-52: Remove trailing punctuation from heading

According to markdown lint rules (MD026), headings should not have trailing punctuation.

-#### Areas Needing Coverage Improvement:
+#### Areas Needing Coverage Improvement
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

52-52: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)

src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs (1)

15-16: Fix missing XML summary opening tag

The opening tag for the summary documentation is missing.

-        /// Caches a secret for future retrieval
+        /// <summary>
+        /// Caches a secret for future retrieval
src/Phoenix.MarketData.Core/Validation/FluentValidationExtensions.cs (1)

6-22: Enhance extension method robustness with null checks and optimization.

The extension method implementation is clean and follows the single responsibility principle, but could benefit from a few improvements:

  1. Missing null check for the result parameter
  2. The errors collection is enumerated twice (once for Select() and once for Any())

Here's an optimized implementation:

public static ValidationResult ToValidationResult(this FluentValidation.Results.ValidationResult result)
{
+    if (result == null)
+        throw new ArgumentNullException(nameof(result));
+
+    var errorsList = result.Errors.Select(e => new ValidationError
-    var errors = result.Errors.Select(e => new ValidationError
    {
        PropertyName = e.PropertyName,
        ErrorCode = e.ErrorCode,
        ErrorMessage = e.ErrorMessage,
        Source = "FluentValidation"
-    });
+    }).ToList();

-    return errors.Any()
+    return errorsList.Count > 0
-        ? ValidationResult.Failure(errors)
+        ? ValidationResult.Failure(errorsList)
        : ValidationResult.Success();
}

Also, consider adding XML documentation to explain the method's purpose and usage.

tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh (2)

29-30: Add error handling for reportgenerator command.

The script should check if the reportgenerator command succeeds.

-reportgenerator "-reports:$COVERAGE_PATTERN" "-targetdir:$REPORT_DIR" -reporttypes:Html
+reportgenerator "-reports:$COVERAGE_PATTERN" "-targetdir:$REPORT_DIR" -reporttypes:Html || { echo "Failed to generate coverage report"; exit 1; }

26-27: Verify coverage files exist before generating report.

The script should verify that coverage files exist before attempting to generate a report.

echo "Searching for coverage files..."
COVERAGE_PATTERN="$TEST_RESULTS_DIR/**/coverage.cobertura.xml"
+# Check if coverage files exist
+if ! ls $COVERAGE_PATTERN 1> /dev/null 2>&1; then
+    echo "No coverage files found. Ensure tests are configured to collect coverage."
+    exit 1
+fi
src/Phoenix.MarketData.Core/Validation/ValidationException.cs (2)

5-5: Add SerializableAttribute to the ValidationException class.

Exception classes should be marked with the SerializableAttribute to support proper serialization.

+[Serializable]
public class ValidationException : Exception

20-21: Consider defensive null check for improved code robustness.

The null check in CreateMessage can be improved to be more explicit.

-if (errors == null)
+if (errors is null)
src/Phoenix.MarketData.Domain/Models/FxVolSurfaceData.cs (2)

18-18: Consider using concurrent or immutable collection for thread safety.

If this class might be accessed concurrently, consider using a thread-safe collection type.

-public required Dictionary<string, Dictionary<string, decimal>> SurfaceMatrix { get; set; }
+public required ConcurrentDictionary<string, ConcurrentDictionary<string, decimal>> SurfaceMatrix { get; set; }

Don't forget to add the namespace:

+using System.Collections.Concurrent;

Alternatively, consider using immutable collections if the data shouldn't be modified after construction.


23-23: Initialize Metadata to an empty dictionary.

Consider initializing the Metadata property to an empty dictionary to avoid null reference exceptions.

-public Dictionary<string, string>? Metadata { get; set; }
+public Dictionary<string, string>? Metadata { get; set; } = new();
src/Phoenix.MarketData.Core/Events/IMarketDataEventSubscriber.cs (1)

10-25: Well-designed event subscriber interface.

The interface follows best practices with:

  • Clear method names with async pattern
  • Proper XML documentation
  • Support for cancellation tokens
  • Clearly defined event handling methods

Consider adding a method for handling exceptions during event processing, depending on your error handling strategy.
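
If such a method were added, it might look like the following; the handler names are assumptions based on the created/changed events mentioned elsewhere in this PR, and object stands in for the concrete event types.

// Hypothetical extension of the subscriber contract with an explicit failure hook.
using System;
using System.Threading;
using System.Threading.Tasks;

public interface IMarketDataEventSubscriberWithErrors
{
    Task HandleMarketDataCreatedAsync(object createdEvent, CancellationToken cancellationToken = default);
    Task HandleMarketDataChangedAsync(object changedEvent, CancellationToken cancellationToken = default);

    // Called when a handler throws, so the host can log, dead-letter, or schedule a retry.
    Task HandleProcessingErrorAsync(object failedEvent, Exception exception, CancellationToken cancellationToken = default);
}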

tests/Phoenix.MarketData.Infrastructure.Tests/Validation/ValidationDecoratorTests.cs (2)

54-69: Well-written test for validation failure flow.

Good test coverage of the failure path:

  • Properly verifies exception is thrown
  • Confirms the service is never called when validation fails
  • Uses realistic validation errors

Consider adding a test for null input validation to ensure complete coverage.

+ [Fact]
+ public async Task PublishMarketDataAsync_WhenMarketDataIsNull_ShouldThrowException()
+ {
+     // Arrange
+     IMarketDataEntity? nullMarketData = null;
+ 
+     // Act & Assert
+     await Assert.ThrowsAsync<ArgumentNullException>(
+         async () => await _decorator.PublishMarketDataAsync(nullMarketData!));
+ 
+     _mockMarketDataService.Verify(s => s.PublishMarketDataAsync(It.IsAny<IMarketDataEntity>()), Times.Never);
+     _mockValidator.Verify(v => v.ValidateAsync(It.IsAny<IMarketDataEntity>(), It.IsAny<CancellationToken>()), Times.Never);
+ }

15-29: Good test setup with appropriate mocking.

The test constructor properly initializes all required dependencies and the system under test.

Consider using a concrete implementation of IMarketDataEntity instead of a mock for more realistic testing, depending on the complexity of the entity.

src/Phoenix.MarketData.Core/Mapping/IModelMapper.cs (2)

13-19: Consider documenting the expected behavior for null inputs

While the implementation will define how nulls are handled, it would be helpful to explicitly document in the interface what should happen when source or destination are null.

/// <summary>
/// Maps from source object to a new destination object
/// </summary>
+/// <remarks>
+/// If source is null, the implementation should [define behavior, e.g., return default(TDestination) or throw ArgumentNullException]
+/// </remarks>
TDestination Map<TSource, TDestination>(TSource source);

/// <summary>
/// Maps from source object to an existing destination object
/// </summary>
+/// <remarks>
+/// If source or destination is null, the implementation should [define behavior]
+/// </remarks>
void Map<TSource, TDestination>(TSource source, TDestination destination);

13-14: Consider adding type constraints where applicable

If this interface is intended to be used primarily with DTOs or domain models that require parameterless constructors, adding constraints could prevent implementation errors.

-TDestination Map<TSource, TDestination>(TSource source);
+TDestination Map<TSource, TDestination>(TSource source) where TDestination : new();
src/Phoenix.MarketData.Domain/Constants/CurrencyCodes.cs (2)

21-27: Consider adding a utility method to check for valid currency codes

Adding a method to check if a string is a valid currency code would be useful for validation scenarios.

/// <summary>
/// Returns a list of all defined codes.
/// </summary>
public static IReadOnlyList<string> All => new[]
{
    USD, EUR, GBP, ZAR, JPY, CHF, AUD, CAD, CNY, BTC, ETH
};

+/// <summary>
+/// Checks if the provided code is a valid currency code.
+/// </summary>
+/// <param name="code">The currency code to check</param>
+/// <returns>True if the code is valid, false otherwise</returns>
+public static bool IsValid(string code)
+{
+    return !string.IsNullOrWhiteSpace(code) && All.Contains(code, StringComparer.OrdinalIgnoreCase);
+}

6-24: Consider grouping currencies by type for better organization

As the list of currencies grows, it might be helpful to group them by type (fiat vs crypto).

public static class CurrencyCodes
{
+    // Fiat currencies
    public const string USD = "USD";
    public const string EUR = "EUR";
    public const string GBP = "GBP";
    public const string ZAR = "ZAR";
    public const string JPY = "JPY";
    public const string CHF = "CHF";
    public const string AUD = "AUD";
    public const string CAD = "CAD";
    public const string CNY = "CNY";
+
+    // Cryptocurrencies
    public const string BTC = "BTC";
    public const string ETH = "ETH";
    // Add more as you need, but don't add tokens or meme coins unless you're running a casino.

+    /// <summary>
+    /// Returns a list of all fiat currency codes.
+    /// </summary>
+    public static IReadOnlyList<string> Fiat => new[]
+    {
+        USD, EUR, GBP, ZAR, JPY, CHF, AUD, CAD, CNY
+    };
+
+    /// <summary>
+    /// Returns a list of all cryptocurrency codes.
+    /// </summary>
+    public static IReadOnlyList<string> Crypto => new[]
+    {
+        BTC, ETH
+    };

    /// <summary>
    /// Returns a list of all defined codes.
    /// </summary>
    public static IReadOnlyList<string> All => new[]
    {
        USD, EUR, GBP, ZAR, JPY, CHF, AUD, CAD, CNY, BTC, ETH
    };
tests/Phoenix.MarketData.Infrastructure.Tests/Generate-CoverageReport.ps1 (1)

22-25: Consider cleaning previous coverage reports

The script creates a report directory if it doesn't exist but doesn't clean previous coverage files. This could lead to stale data being included in the report.

# Create the report directory if it doesn't exist
if (-not (Test-Path $reportDir)) {
    New-Item -Path $reportDir -ItemType Directory -Force | Out-Null
}
+ else {
+     Write-Host "Cleaning previous coverage reports..."
+     Remove-Item -Path $reportDir\* -Recurse -Force
+ }
.github/workflows/build-and-test.yml (4)

16-16: Update GitHub Actions checkout to latest version.

The GitHub Actions checkout version is outdated. Newer versions include performance improvements and security fixes.

- uses: actions/checkout@v3
+ uses: actions/checkout@v4
🧰 Tools
🪛 actionlint (1.7.7)

16-16: the runner of "actions/checkout@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


19-21: Update setup-dotnet action to latest version.

The setup-dotnet action version is outdated. Version 4 includes improved stability and additional features.

- uses: actions/setup-dotnet@v3
+ uses: actions/setup-dotnet@v4
🧰 Tools
🪛 actionlint (1.7.7)

19-19: the runner of "actions/setup-dotnet@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


54-55: Update upload-artifact action to latest version.

The upload-artifact action version is outdated. Version 4 includes performance improvements and additional features.

- uses: actions/upload-artifact@v3
+ uses: actions/upload-artifact@v4
🧰 Tools
🪛 actionlint (1.7.7)

54-54: the runner of "actions/upload-artifact@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


65-65: Add newline at end of file.

YAML best practices recommend having a newline at the end of the file.

  path: code-coverage-results.md
+
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 65-65: no new line character at the end of file

(new-line-at-end-of-file)

.vscode/tasks.json (1)

74-74: Fix path separator inconsistency.

The path contains a mix of forward slashes and backslashes. For consistency and cross-platform compatibility, use forward slashes throughout.

- "cwd": "${workspaceFolder}/src\\Phoenix.MarketData.Functions/bin/Debug/net9.0"
+ "cwd": "${workspaceFolder}/src/Phoenix.MarketData.Functions/bin/Debug/net9.0"
src/Phoenix.MarketData.Infrastructure/Repositories/VersioningExtensions.cs (1)

30-37: Simplify the query iterator pattern.

The current while-loop pattern is more complex than necessary when you're only processing the first result. You can simplify this code.

- while (query.HasMoreResults)
- {
-     var response = await query.ReadNextAsync();
-     var latest = response.FirstOrDefault();
-     if (latest != null)
-         return latest.Version + 1;
- }
- return 1;
+ var response = await query.ReadNextAsync();
+ var latest = response.FirstOrDefault();
+ return latest != null ? latest.Version + 1 : 1;
src/Phoenix.MarketData.Core/Validation/SchemaValidationExtensions.cs (1)

5-7: Remove redundant namespace declaration.

The using statement on line 5 and the namespace declaration on line 7 are redundant as they reference the same namespace.

using System.Linq;
using System.Text.Json;
using Json.Schema;
-using Phoenix.MarketData.Core.Validation; 

namespace Phoenix.MarketData.Core.Validation;
src/Phoenix.MarketData.Infrastructure/Validation/FxSpotPriceDataValidator.cs (1)

44-44: Consider removing development note

The comment "Add more validation rules as needed" appears to be a development note and could be removed or replaced with more specific documentation about what additional validations might be appropriate.

src/Phoenix.MarketData.Infrastructure/Repositories/Interfaces.cs (2)

1-10: Remove unused imports

Several imported namespaces aren't used in this file and should be removed:

  • Microsoft.Azure.Cosmos
  • Microsoft.Azure.Cosmos.Linq
  • Microsoft.Extensions.Logging
  • Phoenix.MarketData.Core.Events

These seem more relevant for implementations rather than interface definitions.

 using System;
 using System.Collections.Generic;
 using System.Linq;
 using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-using Microsoft.Azure.Cosmos.Linq;
-using Microsoft.Extensions.Logging;
-using Phoenix.MarketData.Core.Events;
 using Phoenix.MarketData.Domain.Models;

16-24: Add XML documentation for the repository interface

Consider adding XML documentation for the interface and its methods to improve API usability. This would help developers understand the purpose of each method, expectations for parameters, and return values.

+    /// <summary>
+    /// Generic repository interface for entity operations
+    /// </summary>
+    /// <typeparam name="T">Entity type that implements IEntity</typeparam>
     public interface IRepository<T> where T : class, IEntity
     {
+        /// <summary>
+        /// Retrieves an entity by its unique identifier
+        /// </summary>
+        /// <param name="id">The unique identifier of the entity</param>
+        /// <returns>The entity if found, null otherwise</returns>
         Task<T?> GetByIdAsync(string id);
+        /// <summary>
+        /// Retrieves all entities of type T
+        /// </summary>
+        /// <param name="includeSoftDeleted">Whether to include soft-deleted entities</param>
+        /// <returns>An enumerable collection of entities</returns>
         Task<IEnumerable<T>> GetAllAsync(bool includeSoftDeleted = false);
+        /// <summary>
+        /// Adds a new entity to the repository
+        /// </summary>
+        /// <param name="entity">The entity to add</param>
+        /// <returns>The added entity with any repository-generated properties</returns>
         Task<T> AddAsync(T entity);
+        /// <summary>
+        /// Updates an existing entity in the repository
+        /// </summary>
+        /// <param name="entity">The entity to update</param>
+        /// <returns>The updated entity</returns>
         Task<T> UpdateAsync(T entity);
+        /// <summary>
+        /// Deletes an entity by its unique identifier
+        /// </summary>
+        /// <param name="id">The unique identifier of the entity to delete</param>
+        /// <param name="soft">Whether to perform a soft delete</param>
+        /// <returns>True if the entity was deleted, false otherwise</returns>
         Task<bool> DeleteAsync(string id, bool soft = false);
+        /// <summary>
+        /// Inserts multiple entities in a single operation
+        /// </summary>
+        /// <param name="entities">The entities to insert</param>
+        /// <returns>The number of entities inserted</returns>
         Task<int> BulkInsertAsync(IEnumerable<T> entities);
     }
src/Phoenix.MarketData.Infrastructure/Serialization/CryptoOrdinalSpotPriceDataDto.cs (1)

53-53: Add more descriptive error message for ArgumentOutOfRangeException

The exception thrown in the default case of the switch expression lacks a descriptive message, which could make debugging more difficult.

-            _ => throw new ArgumentOutOfRangeException(nameof(side))
+            _ => throw new ArgumentOutOfRangeException(nameof(side), side, "Unexpected PriceSide value")
src/Phoenix.MarketData.Core/Validation/ValidationPipelineBehaviour.cs (1)

18-40: Consider custom error details in ValidationException

The validation implementation correctly aggregates errors from multiple validators, but consider whether hardcoding "FluentValidation" as the source is appropriate for future extensibility. If you plan to add other validation sources later, you might want to make this configurable.
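
One way to avoid the hardcoded source string, sketched under the assumption that the behaviour builds ValidationError instances (the PR's Core validation type) much like the extension method shown earlier; the factory class itself is illustrative.

// Sketch: inject the error source instead of hardcoding "FluentValidation".
public class ValidationErrorFactory
{
    private readonly string _source;

    public ValidationErrorFactory(string source = "FluentValidation") => _source = source;

    public ValidationError Create(FluentValidation.Results.ValidationFailure failure) => new ValidationError
    {
        PropertyName = failure.PropertyName,
        ErrorCode = failure.ErrorCode,
        ErrorMessage = failure.ErrorMessage,
        Source = _source   // configurable per pipeline or validation source
    };
}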

tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/FxSpotPriceDataMapperTests.cs (2)

101-129: Consider adding more assertions for static method equivalence

While the test verifies key properties match between instance and static method results, consider adding assertions for all relevant properties or using a deep equality comparison to ensure complete equivalence.

 // Assert
 Assert.Equal(instanceResult.AssetId, staticResult.AssetId);
 Assert.Equal(instanceResult.Price, staticResult.Price);
 Assert.Equal(instanceResult.Side, staticResult.Side);
+Assert.Equal(instanceResult.AsOfDate, staticResult.AsOfDate);
+Assert.Equal(instanceResult.AsOfTime, staticResult.AsOfTime);
+Assert.Equal(instanceResult.SchemaVersion, staticResult.SchemaVersion);

131-162: Same feedback applies to static domain mapping test

Consider expanding the assertions to verify all relevant properties for complete equivalence testing.

 // Assert
 Assert.Equal(instanceResult.AssetId, staticResult.AssetId);
 Assert.Equal(instanceResult.Price, staticResult.Price);
 Assert.Equal(instanceResult.Side, staticResult.Side);
+Assert.Equal(instanceResult.DisplayAssetId, staticResult.DisplayAssetId);
+Assert.Equal(instanceResult.AsOfDate, staticResult.AsOfDate);
+Assert.Equal(instanceResult.AsOfTime, staticResult.AsOfTime);
tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs (2)

115-147: Consider expanding static method equivalence assertions

Similar to the FX mapper tests, consider adding more assertions to verify equivalence across all relevant properties.

 // Assert
 Assert.Equal(instanceResult.AssetId, staticResult.AssetId);
 Assert.Equal(instanceResult.Price, staticResult.Price);
 Assert.Equal(instanceResult.CollectionName, staticResult.CollectionName);
+Assert.Equal(instanceResult.ParentInscriptionId, staticResult.ParentInscriptionId);
+Assert.Equal(instanceResult.InscriptionId, staticResult.InscriptionId);
+Assert.Equal(instanceResult.Currency, staticResult.Currency);

149-185: Same feedback applies to static domain mapping test

Consider expanding assertions to verify all relevant properties for comprehensive equivalence testing.

 // Assert
 Assert.Equal(instanceResult.AssetId, staticResult.AssetId);
 Assert.Equal(instanceResult.Price, staticResult.Price);
 Assert.Equal(instanceResult.CollectionName, staticResult.CollectionName);
+Assert.Equal(instanceResult.DisplayAssetId, staticResult.DisplayAssetId);
+Assert.Equal(instanceResult.ParentInscriptionId, staticResult.ParentInscriptionId);
+Assert.Equal(instanceResult.InscriptionId, staticResult.InscriptionId);
+Assert.Equal(instanceResult.Currency, staticResult.Currency);
src/Phoenix.MarketData.Functions/JsonConverters/JsonConverters.cs (1)

1-21: Add XML documentation to the converter classes

Adding clear XML documentation would improve API discoverability and help other developers understand the purpose of these converters.

using System;
using System.Text.Json;
using System.Text.Json.Serialization;

+namespace Phoenix.MarketData.Functions.JsonConverters;
+
+/// <summary>
+/// JSON converter for System.DateOnly that serializes to/from ISO 8601 date format (yyyy-MM-dd).
+/// </summary>
public class DateOnlyJsonConverter : JsonConverter<DateOnly>
{
    // Implementation remains the same
}

+/// <summary>
+/// JSON converter for System.TimeOnly that serializes to/from 24-hour time format (HH:mm:ss).
+/// </summary>
public class TimeOnlyJsonConverter : JsonConverter<TimeOnly>
{
    // Implementation remains the same
}
src/Phoenix.MarketData.Domain/Services/IMarketDataService.cs (2)

8-20: Add XML documentation to the interface and methods

Adding XML documentation to the interface and its methods would improve API discoverability and help users of the interface understand the purpose and behavior of each method.

namespace Phoenix.MarketData.Core.Services
{
+    /// <summary>
+    /// Service interface for publishing and retrieving market data entities
+    /// </summary>
    public interface IMarketDataService
    {
+        /// <summary>
+        /// Publishes a market data entity
+        /// </summary>
+        /// <typeparam name="T">Type of market data entity</typeparam>
+        /// <param name="marketData">The market data entity to publish</param>
+        /// <returns>A string identifier for the published entity</returns>
        Task<string> PublishMarketDataAsync<T>(T marketData) where T : IMarketDataEntity;

+        /// <summary>
+        /// Gets the latest market data entity matching the specified criteria
+        /// </summary>
+        /// <typeparam name="T">Type of market data entity to retrieve</typeparam>
+        /// <param name="dataType">The data type identifier</param>
+        /// <param name="assetClass">The asset class identifier</param>
+        /// <param name="assetId">The asset identifier</param>
+        /// <param name="region">The region identifier</param>
+        /// <param name="asOfDate">The reference date for the data</param>
+        /// <param name="documentType">The document type identifier</param>
+        /// <returns>The latest market data entity or null if not found</returns>
        Task<T?> GetLatestMarketDataAsync<T>(
            string dataType, string assetClass, string assetId, string region,
            DateOnly asOfDate, string documentType) where T : IMarketDataEntity;

+        /// <summary>
+        /// Queries for market data entities matching the specified criteria
+        /// </summary>
+        /// <typeparam name="T">Type of market data entities to retrieve</typeparam>
+        /// <param name="dataType">The data type identifier</param>
+        /// <param name="assetClass">The asset class identifier</param>
+        /// <param name="assetId">Optional asset identifier filter</param>
+        /// <param name="fromDate">Optional start date for filtering</param>
+        /// <param name="toDate">Optional end date for filtering</param>
+        /// <returns>A collection of market data entities matching the criteria</returns>
        Task<IEnumerable<T>> QueryMarketDataAsync<T>(
            string dataType, string assetClass, string? assetId = null,
            DateTime? fromDate = null, DateTime? toDate = null) where T : IMarketDataEntity;
    }
}

16-18: Consider consistent date type usage

GetLatestMarketDataAsync uses DateOnly for the asOfDate parameter, while QueryMarketDataAsync uses DateTime? for date range parameters. This inconsistency might lead to confusion. Consider using DateOnly consistently.

        Task<IEnumerable<T>> QueryMarketDataAsync<T>(
            string dataType, string assetClass, string? assetId = null,
-            DateTime? fromDate = null, DateTime? toDate = null) where T : IMarketDataEntity;
+            DateOnly? fromDate = null, DateOnly? toDate = null) where T : IMarketDataEntity;
src/Phoenix.MarketData.Core/Validation/ValidationDecorator.cs (4)

10-14: Fix incorrect XML documentation

The XML comment references a generic type parameter TService which doesn't exist in the class. The actual generic parameter is TRequest.

    /// <summary>
    /// Generic decorator for adding validation to any service operation
    /// </summary>
-    /// <typeparam name="TService">Type of service being decorated</typeparam>
+    /// <typeparam name="TRequest">Type of request being validated</typeparam>
    public class ValidationDecorator<TRequest> : IRequestHandler<TRequest>
        where TRequest : IRequest

20-26: Add null check for validator in constructor

While the ValidateAsync method checks for null validators, it would be better to enforce a non-null validator in the constructor for early error detection.

        public ValidationDecorator(
            IRequestHandler<TRequest> decoratedHandler,
            IValidator<TRequest> validator)
        {
            _decoratedHandler = decoratedHandler ?? throw new System.ArgumentNullException(nameof(decoratedHandler));
-            _validator = validator;
+            _validator = validator ?? throw new System.ArgumentNullException(nameof(validator));
        }

43-61: Simplify the ValidateAsync method

The ValidateAsync method is unnecessarily generic with type parameter TEntity but is only ever called with TRequest. Also, the early return for null validators becomes unnecessary if we enforce non-null validators in the constructor.

-        private async Task ValidateAsync<TEntity>(TEntity entity, IValidator<TEntity> validator, CancellationToken cancellationToken)
+        private async Task ValidateAsync(TRequest request, CancellationToken cancellationToken)
        {
-            if (validator == null)
-                return;
-
-            var validationResult = await validator.ValidateAsync(entity, cancellationToken);
+            var validationResult = await _validator.ValidateAsync(request, cancellationToken);
            if (!validationResult.IsValid)
            {
                var errors = validationResult.Errors.Select(f => new ValidationError
                {
                    PropertyName = f.PropertyName,
                    ErrorMessage = f.ErrorMessage,
                    ErrorCode = f.ErrorCode,
                    Source = "FluentValidation"
                });

                throw new ValidationException(errors);
            }
        }

And adjust the call in the Handle method:

        public async Task Handle(TRequest request, CancellationToken cancellationToken)
        {
-            await ValidateAsync(request, _validator, cancellationToken);
+            await ValidateAsync(request, cancellationToken);
            await _decoratedHandler.Handle(request, cancellationToken);
        }

28-32: Update Handle method to match changes in ValidateAsync

If you simplify the ValidateAsync method as suggested above, you'll need to update the Handle method to match:

        public async Task Handle(TRequest request, CancellationToken cancellationToken)
        {
-            await ValidateAsync(request, _validator, cancellationToken);
+            await ValidateAsync(request, cancellationToken);
            await _decoratedHandler.Handle(request, cancellationToken);
        }
tests/Phoenix.MarketData.Infrastructure.Tests/Repository/MarketDataRepositoryTests.cs (3)

6-6: Remove unnecessary comment

The comment "// <-- Adjust if necessary" appears to be a development note that should be removed before final submission.

-using Phoenix.MarketData.Infrastructure.Repositories; // <-- Adjust if necessary
+using Phoenix.MarketData.Infrastructure.Repositories;

48-48: Improve PartitionKey verification

The test comment indicates a "loose check for demo," which suggests the verification could be more rigorous in a production test.

                It.Is<string>(v => v == _id),
-                It.Is<PartitionKey>(pk => pk.ToString().Contains(_partitionKey)), // Loose check for demo
+                It.Is<PartitionKey>(pk => pk.ToString() == $"[\"{_partitionKey}\"]"),
                It.IsAny<ItemRequestOptions>(),

19-20: Consider verifying logger usage

A mock logger is created but never used for verification. Consider either verifying log calls or simplifying if logging is not important for these tests.

        _mockContainer = new Mock<Container>();
-        _repository = new CosmosRepository<FxSpotPriceData>(_mockContainer.Object, Mock.Of<Microsoft.Extensions.Logging.ILogger<CosmosRepository<FxSpotPriceData>>>());
+        // _mockLogger should be a class field (Mock<ILogger<CosmosRepository<FxSpotPriceData>>>) so individual tests can verify log calls
+        _mockLogger = new Mock<Microsoft.Extensions.Logging.ILogger<CosmosRepository<FxSpotPriceData>>>();
+        _repository = new CosmosRepository<FxSpotPriceData>(_mockContainer.Object, _mockLogger.Object);

Then add verification in test methods:

// In the individual test methods, verify logging through the shared _mockLogger field
_mockLogger.Verify(
    x => x.Log(
        It.IsAny<LogLevel>(),
        It.IsAny<EventId>(),
        It.Is<It.IsAnyType>((v, t) => true),
        It.IsAny<Exception>(),
        It.Is<Func<It.IsAnyType, Exception, string>>((v, t) => true)),
    Times.AtLeastOnce);
src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryExtensions.cs (1)

16-36: Add documentation for public methods

The extension methods lack XML documentation comments, which would help other developers understand their purpose and usage.

Consider adding XML documentation:

+    /// <summary>
+    /// Queries the repository using a predicate builder function.
+    /// </summary>
+    /// <typeparam name="T">The entity type.</typeparam>
+    /// <param name="repo">The repository to query.</param>
+    /// <param name="predicateBuilder">A function that builds a LINQ query.</param>
+    /// <param name="includeSoftDeleted">Whether to include soft-deleted entities in the results.</param>
+    /// <returns>A collection of entities matching the predicate.</returns>
     public static async Task<IEnumerable<T>> QueryByPredicateAsync<T>(
         this CosmosRepository<T> repo,
         Func<IQueryable<T>, IQueryable<T>> predicateBuilder,
         bool includeSoftDeleted = false)
         where T : class, IVersionedMarketDataEntity
src/Phoenix.MarketData.Functions/SaveDocumentToDb.cs (3)

32-39: Consider case-insensitive query parameter handling

The query parameter values aren't normalized for case, which could lead to dispatch mismatches if clients send inconsistent casing (e.g. "FX" vs "fx").

Consider using case-insensitive comparisons for query parameters:

-        string? dataType = req.Query["datatype"];
-        string? assetClass = req.Query["assetclass"]; 
-        string? schemaVersion = req.Query["schemaversion"];
+        string? dataType = req.Query["datatype"].ToString().ToLowerInvariant();
+        string? assetClass = req.Query["assetclass"].ToString().ToLowerInvariant();
+        string? schemaVersion = req.Query["schemaversion"].ToString();

42-52: Consider a more extensible dispatch mechanism

The current dispatch logic for handling different data types and asset classes is hardcoded, making it challenging to add new combinations without modifying the function.

Consider implementing a more extensible approach:

+        private readonly Dictionary<(string dataType, string assetClass), Func<HttpRequest, string, string, string, Task<IActionResult>>> _handlers;
+
+        public SaveDocumentToDb(ILogger<SaveDocumentToDb> logger, CosmosRepository<IMarketDataEntity> repository)
+        {
+            _repository = repository ?? throw new ArgumentNullException(nameof(repository));
+            _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+            
+            // Initialize handlers
+            _handlers = new Dictionary<(string, string), Func<HttpRequest, string, string, string, Task<IActionResult>>>
+            {
+                { ("price.spot", "fx"), async (req, dt, ac, sv) => 
+                    await HandleDocumentAsync<FxSpotPriceDataDto>(req, dt, ac, sv, FxSpotPriceDataMapper.MapToDomain) },
+                { ("price.ordinals.spot", "crypto"), async (req, dt, ac, sv) => 
+                    await HandleDocumentAsync<CryptoOrdinalSpotPriceDataDto>(req, dt, ac, sv, CryptoOrdinalSpotPriceDataMapper.MapToDomain) }
+            };
+        }
+
+        // In Run method:
+        if (_handlers.TryGetValue((dataType, assetClass), out var handler))
+        {
+            return await handler(req, dataType, assetClass, schemaVersion);
+        }

94-105: Improve result processing logic

The ProcessActionResult method assumes a non-null result means success but doesn't verify any other properties of the result.

Consider enhancing this method to check for additional success indicators:

    private IActionResult ProcessActionResult(IMarketDataEntity result)
    {
-        if (result != null)
+        if (result != null && !string.IsNullOrEmpty(result.Id))
        {
            var msg = $"Document saved successfully to {result.Id}.";
            _logger.LogInformation(msg);
-            return new OkObjectResult(msg);
+            return new OkObjectResult(new { message = msg, id = result.Id });
        }
        _logger.LogError("Error saving document (null result).");
        return new BadRequestObjectResult("Error saving document.");
    }
src/Phoenix.MarketData.Domain/Models/Interfaces/IMarketDataEntity.cs (4)

8-21: Consider immutability for certain properties

Some properties like SchemaVersion and Version might be better protected from unintended modifications after entity creation.

Consider making certain properties read-only or using init-only setters:

     public interface IMarketDataEntity : IEntity
     {
-        string SchemaVersion { get; set; }
-        int? Version { get; set; }
+        string SchemaVersion { get; }
+        int? Version { get; }
         string AssetId { get; set; }
         string AssetClass { get; set; }
         string DataType { get; set; }
         string Region { get; set; }
-        List<string> Tags { get; set; }
+        IReadOnlyList<string> Tags { get; set; }
         string DocumentType { get; set; }
         DateTimeOffset CreateTimestamp { get; }
         DateOnly AsOfDate { get; set; }
         TimeOnly? AsOfTime { get; set; }
     }

16-16: Consider using IReadOnlyList for Tags collection

Using List<string> for Tags exposes mutating methods like Add/Remove. Using an immutable collection type would protect against unintended modifications.

-        List<string> Tags { get; set; }
+        IReadOnlyList<string> Tags { get; set; }

20-20: Document the purpose of nullable AsOfTime

The AsOfTime property is nullable, but there's no documentation explaining when it should/shouldn't be null.

Add XML documentation to clarify the nullable property:

+        /// <summary>
+        /// The time component of when the data was sampled or became effective.
+        /// May be null if only the date is relevant (e.g., for daily data).
+        /// </summary>
         TimeOnly? AsOfTime { get; set; }

23-27: Document the implications of the 'new' keyword

Using the 'new' keyword to hide the base interface's nullable Version property could cause confusion.

Add a more detailed comment explaining the purpose of the derived interface:

-    // For strict versioning (non-null version)
+    /// <summary>
+    /// Extends IMarketDataEntity to enforce a non-nullable Version property.
+    /// This interface should be used for entities that must always have a version number.
+    /// The 'new' keyword hides the nullable Version property from the base interface.
+    /// </summary>
     public interface IVersionedMarketDataEntity : IMarketDataEntity
     {
         new int Version { get; set; }
     }
src/Phoenix.MarketData.Infrastructure/Validation/CompositeValidator.cs (3)

27-31: Fix indentation in ValidatorWrapper constructor.

The indentation in this constructor is inconsistent with the rest of the class. There are extra spaces at the beginning of lines 29-30.

public ValidatorWrapper(IValidator<TEntity> validator)
-                {
-                _validator = validator;
-                }
+            {
+                _validator = validator;
+            }

44-47: Consider adding duplicate registration handling.

The current implementation silently overwrites existing validators if the same entity type is registered multiple times. Consider adding a check to prevent accidental overwrites or provide explicit overwrite behavior.

public void RegisterValidator<TEntity>(IValidator<TEntity> validator) where TEntity : T
{
+    if (_validators.ContainsKey(typeof(TEntity)))
+    {
+        throw new InvalidOperationException($"A validator for type {typeof(TEntity).Name} is already registered.");
+    }
+    
    _validators[typeof(TEntity)] = new ValidatorWrapper<TEntity>(validator);
}

56-63: Consider enhancing type matching for derived types.

The current implementation only looks for exact type matches. Consider supporting derived types by looking for the closest base type that has a registered validator. This would allow more flexible validation hierarchies.

if (_validators.TryGetValue(entityType, out var wrapper))
{
    return await wrapper.ValidateAsync(entity, cancellationToken);
}

+// Try to find a validator for a base type if exact type is not found
+foreach (var registeredType in _validators.Keys)
+{
+    if (registeredType.IsAssignableFrom(entityType))
+    {
+        return await _validators[registeredType].ValidateAsync(entity, cancellationToken);
+    }
+}

// If no specific validator is found, return success
return ValidationResult.Success();
src/Phoenix.MarketData.Infrastructure/Schemas/JsonSchemaValidator.cs (2)

52-54: Fix code style inconsistency in conditional statements.

There's an inconsistency in the style of the conditional statement. Here you're using a single line without braces, but in the nested loop below (lines 59-60) you're using braces for a similar construct.

-        if (!string.IsNullOrWhiteSpace(results.Message))
-            yield return results.Message;
+        if (!string.IsNullOrWhiteSpace(results.Message))
+        {
+            yield return results.Message;
+        }

39-47: Consider preserving additional error metadata.

Currently, you're only capturing the error message and setting a fixed source. Consider preserving additional metadata from the validation results, such as error location in the JSON document, which could help with debugging.

var errors = GetAllMessages(results)
    .Select(msg => new ValidationError
    {
        ErrorMessage = msg,
        Source = "JsonSchema.Net"
+       // Consider adding additional properties like:
+       // Location = error.InstanceLocation?.ToString(),
+       // SchemaLocation = error.SchemaLocation?.ToString()
    })
    .ToList();
src/Phoenix.MarketData.Functions/Program.cs (2)

59-64: Refactor configuration validation to reduce repetition.

The configuration validation is repetitive. Consider refactoring to a helper method to improve readability and maintainability.

+private static void ValidateRequiredConfiguration(string value, string configName)
+{
+    if (string.IsNullOrWhiteSpace(value))
+        throw new InvalidOperationException($"Missing configuration: {configName}");
+}

-if (string.IsNullOrWhiteSpace(cosmosConnectionString))
-    throw new InvalidOperationException("Missing configuration: MarketDataHistoryCosmosDb:ConnectionString");
-if (string.IsNullOrWhiteSpace(cosmosDatabaseId))
-    throw new InvalidOperationException("Missing configuration: MarketDataHistoryCosmosDb:DatabaseId");
-if (string.IsNullOrWhiteSpace(cosmosContainerId))
-    throw new InvalidOperationException("Missing configuration: MarketDataHistoryCosmosDb:ContainerId");
+ValidateRequiredConfiguration(cosmosConnectionString, "MarketDataHistoryCosmosDb:ConnectionString");
+ValidateRequiredConfiguration(cosmosDatabaseId, "MarketDataHistoryCosmosDb:DatabaseId");
+ValidateRequiredConfiguration(cosmosContainerId, "MarketDataHistoryCosmosDb:ContainerId");

83-105: Refactor repository registrations to reduce duplication.

The repository registration code is duplicated for each entity type. Consider using a generic method to register repositories for multiple entity types.

+private static void RegisterRepository<T>(IServiceCollection services, string databaseId, string containerId) where T : class
+{
+    services.AddScoped<IRepository<T>>(sp =>
+        new CosmosRepository<T>(
+            sp.GetRequiredService<CosmosClient>().GetContainer(databaseId, containerId),
+            sp.GetRequiredService<ILogger<CosmosRepository<T>>>(),
+            sp.GetRequiredService<IEventPublisher>()
+        )
+    );
+}

-builder.Services.AddScoped<IRepository<FxSpotPriceData>>(sp =>
-    new CosmosRepository<FxSpotPriceData>(
-        sp.GetRequiredService<CosmosClient>().GetContainer(cosmosDatabaseId, cosmosContainerId),
-        sp.GetRequiredService<ILogger<CosmosRepository<FxSpotPriceData>>>(),
-        sp.GetRequiredService<IEventPublisher>()
-    )
-);
-
-builder.Services.AddScoped<IRepository<FxVolSurfaceData>>(sp =>
-    new CosmosRepository<FxVolSurfaceData>(
-        sp.GetRequiredService<CosmosClient>().GetContainer(cosmosDatabaseId, cosmosContainerId),
-        sp.GetRequiredService<ILogger<CosmosRepository<FxVolSurfaceData>>>(),
-        sp.GetRequiredService<IEventPublisher>()
-    )
-);
-
-builder.Services.AddScoped<IRepository<CryptoOrdinalSpotPriceData>>(sp =>
-    new CosmosRepository<CryptoOrdinalSpotPriceData>(
-        sp.GetRequiredService<CosmosClient>().GetContainer(cosmosDatabaseId, cosmosContainerId),
-        sp.GetRequiredService<ILogger<CosmosRepository<CryptoOrdinalSpotPriceData>>>(),
-        sp.GetRequiredService<IEventPublisher>()
-    )
-);
+RegisterRepository<FxSpotPriceData>(builder.Services, cosmosDatabaseId, cosmosContainerId);
+RegisterRepository<FxVolSurfaceData>(builder.Services, cosmosDatabaseId, cosmosContainerId);
+RegisterRepository<CryptoOrdinalSpotPriceData>(builder.Services, cosmosDatabaseId, cosmosContainerId);
tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CryptoOrdinalSpotPriceDataValidatorTests.cs (4)

14-41: Reduce test setup duplication with a helper method.

The test entity creation is duplicated across all test methods. Consider extracting a helper method to create a valid entity that can be modified for specific test cases.

+private CryptoOrdinalSpotPriceData CreateValidEntity()
+{
+    return new CryptoOrdinalSpotPriceData
+    {
+        AssetId = "BTC/USD",
+        AssetClass = "crypto",
+        DataType = "spotprice",
+        SchemaVersion = "1.0",
+        Region = "global",
+        DocumentType = "price",
+        Currency = "USD",
+        CollectionName = "Bitcoin",
+        ParentInscriptionId = "12345",
+        InscriptionId = "67890",
+        AsOfDate = DateOnly.FromDateTime(DateTime.Today),
+        Price = 50000.00m
+    };
+}

[Fact]
public async Task ValidateAsync_WithValidData_ReturnsSuccess()
{
    // Arrange
    var validator = new CryptoOrdinalSpotPriceDataValidator();
-    var entity = new CryptoOrdinalSpotPriceData
-    {
-        AssetId = "BTC/USD",
-        AssetClass = "crypto",
-        DataType = "spotprice",
-        SchemaVersion = "1.0",
-        Region = "global",
-        DocumentType = "price",
-        Currency = "USD",
-        CollectionName = "Bitcoin",
-        ParentInscriptionId = "12345",
-        InscriptionId = "67890",
-        AsOfDate = DateOnly.FromDateTime(DateTime.Today),
-        Price = 50000.00m
-    };
+    var entity = CreateValidEntity();

    // Act
    var result = await validator.ValidateAsync(entity);

    // Assert
    Assert.True(result.IsValid);
    Assert.Empty(result.Errors);
}

113-140: Enhance error message verification.

The tests only verify that errors exist for specific properties but don't verify the content of error messages. Consider adding assertions for specific error messages to ensure the validator provides the expected feedback.

// Assert
Assert.False(result.IsValid);
Assert.Contains(result.Errors, e => e.PropertyName == "Price");
+// Verify specific error message content
+var priceError = result.Errors.First(e => e.PropertyName == "Price");
+Assert.Contains("must be greater than or equal to 0", priceError.ErrorMessage);

141-142: Add test for multiple validation errors.

The existing tests only check for single validation failures. Consider adding a test case that triggers multiple validation errors simultaneously to ensure all errors are reported correctly.

[Fact]
public async Task ValidateAsync_WithMultipleInvalidFields_ReturnsMultipleErrors()
{
    // Arrange
    var validator = new CryptoOrdinalSpotPriceDataValidator();
    var entity = new CryptoOrdinalSpotPriceData
    {
        // Invalid fields:
        AssetId = string.Empty,  // Missing AssetId
        AssetClass = "invalid",  // Invalid AssetClass
        DataType = "spotprice",
        SchemaVersion = "1.0",
        Region = "global",
        DocumentType = "price",
        Currency = "USD",
        CollectionName = "Bitcoin",
        ParentInscriptionId = "12345",
        InscriptionId = "67890",
        AsOfDate = DateOnly.FromDateTime(DateTime.Today),
        Price = -100.0m         // Negative Price
    };

    // Act
    var result = await validator.ValidateAsync(entity);

    // Assert
    Assert.False(result.IsValid);
    // Should have at least 3 errors
    Assert.True(result.Errors.Count >= 3);
    // Verify specific errors exist
    Assert.Contains(result.Errors, e => e.PropertyName == "AssetId");
    Assert.Contains(result.Errors, e => e.PropertyName == "AssetClass");
    Assert.Contains(result.Errors, e => e.PropertyName == "Price");
}

49-52: Add test for cancellation token handling.

The tests don't verify cancellation token behavior. Consider adding a test case to ensure the validator respects cancellation requests correctly.

[Fact]
public async Task ValidateAsync_WithCancellation_RespectsToken()
{
    // Arrange
    var validator = new CryptoOrdinalSpotPriceDataValidator();
    var entity = CreateValidEntity();
    var cancellationTokenSource = new CancellationTokenSource();
    cancellationTokenSource.Cancel();

    // Act & Assert
    await Assert.ThrowsAsync<OperationCanceledException>(() =>
        validator.ValidateAsync(entity, cancellationTokenSource.Token));
}

Note: This test assumes the validator checks the cancellation token. If it doesn't, you should consider implementing proper cancellation support in the validator.

src/Phoenix.MarketData.Infrastructure/Configuration/MarketDataSecretProvider.cs (2)

24-42: Consider adding cancellation token support to async methods.

Async methods that perform I/O operations should typically accept a CancellationToken parameter to allow for cancellation, especially for long-running operations or in web contexts.

- public async Task<string> GetCosmosConnectionStringAsync()
+ public async Task<string> GetCosmosConnectionStringAsync(CancellationToken cancellationToken = default)
{
-   return await GetSecretAsync("CosmosDbConnectionString");
+   return await GetSecretAsync("CosmosDbConnectionString", cancellationToken);
}

- public async Task<string> GetEventGridKeyAsync()
+ public async Task<string> GetEventGridKeyAsync(CancellationToken cancellationToken = default)
{
-   return await GetSecretAsync("EventGridKey");
+   return await GetSecretAsync("EventGridKey", cancellationToken);
}

Similar changes for the other async methods would be necessary, including updating the interface definition.


1-63: Consider adding retry policy for transient failures.

Azure services can experience transient failures. Consider implementing a retry policy using Polly or the built-in Azure SDK retry mechanisms to handle temporary connectivity issues.

You could enhance the SecretClient initialization to include retry policies:

_secretClient = new SecretClient(
    new Uri(keyVaultUrl),
    new DefaultAzureCredential(),
    new SecretClientOptions
    {
        Retry =
        {
            MaxRetries = 3,
            Delay = TimeSpan.FromSeconds(2),
            MaxDelay = TimeSpan.FromSeconds(10),
            Mode = RetryMode.Exponential
        }
    });
src/Phoenix.MarketData.Core/Repositories/IRepository.cs (1)

6-42: Consider adding transaction support.

The repository interfaces don't provide any mechanism for transaction management. Consider adding methods to support operations within a transaction context.

You could add a transaction-related interface:

public interface IUnitOfWork
{
    Task BeginTransactionAsync(CancellationToken cancellationToken = default);
    Task CommitTransactionAsync(CancellationToken cancellationToken = default);
    Task RollbackTransactionAsync(CancellationToken cancellationToken = default);
}

Or add transaction support directly to the repositories:

public interface ICommandRepository<T, in TKey> where T : class
{
    // Existing methods...
    
    Task<ITransaction> BeginTransactionAsync(CancellationToken cancellationToken = default);
}

public interface ITransaction : IAsyncDisposable
{
    Task CommitAsync(CancellationToken cancellationToken = default);
    Task RollbackAsync(CancellationToken cancellationToken = default);
}
src/Phoenix.MarketData.Core/Validation/ValidationError.cs (3)

6-12: Consider making ValidationError immutable.

The class has public setters for all properties, making it mutable. For a value object representing validation errors, consider making it immutable by removing setters and initializing properties only through constructors.

public class ValidationError
{
-   public string? PropertyName { get; set; }
-   public string ErrorMessage { get; set; }
-   public string? ErrorCode { get; set; }
-   public string? Source { get; set; }
+   public string? PropertyName { get; }
+   public string ErrorMessage { get; }
+   public string? ErrorCode { get; }
+   public string? Source { get; }

18-21: Default constructor creates an empty error message.

The default constructor initializes ErrorMessage to an empty string. Consider documenting this behavior or requiring a non-empty error message in all cases.

public ValidationError()
{
    ErrorMessage = string.Empty;
+   // Default constructor required for serialization. Not recommended for direct use.
}

6-70: Consider implementing equality methods for validation errors.

Since ValidationError represents a value object that might be used in collections or comparisons, consider implementing Equals, GetHashCode, and equality operators.

You could add:

public override bool Equals(object? obj)
{
    if (obj is not ValidationError other)
        return false;
        
    return string.Equals(PropertyName, other.PropertyName) &&
           string.Equals(ErrorMessage, other.ErrorMessage) &&
           string.Equals(ErrorCode, other.ErrorCode) &&
           string.Equals(Source, other.Source);
}

public override int GetHashCode()
{
    return HashCode.Combine(
        PropertyName ?? string.Empty,
        ErrorMessage,
        ErrorCode ?? string.Empty,
        Source ?? string.Empty);
}

public static bool operator ==(ValidationError? left, ValidationError? right)
{
    if (left is null) return right is null;
    return left.Equals(right);
}

public static bool operator !=(ValidationError? left, ValidationError? right)
    => !(left == right);
tests/Phoenix.MarketData.Infrastructure.Tests/Validation/FxSpotPriceDataValidatorTests.cs (3)

52-76: Enhance assertion to verify error message content.

The test verifies that validation fails with an error for the AssetId property, but doesn't check the specific error message. Consider adding an assertion for the error message content.

// Assert
Assert.False(result.IsValid);
Assert.Contains(result.Errors, e => e.PropertyName == "AssetId");
+ var assetIdError = result.Errors.First(e => e.PropertyName == "AssetId");
+ Assert.Contains("format", assetIdError.ErrorMessage, StringComparison.OrdinalIgnoreCase);

78-102: Enhance assertion to verify error message content.

Similar to the previous test, this test verifies that validation fails with an error for the Price property, but doesn't check the specific error message.

// Assert
Assert.False(result.IsValid);
Assert.Contains(result.Errors, e => e.PropertyName == "Price");
+ var priceError = result.Errors.First(e => e.PropertyName == "Price");
+ Assert.Contains("negative", priceError.ErrorMessage, StringComparison.OrdinalIgnoreCase);

1-104: Consider adding parameterized tests for validation cases.

The tests for invalid data have similar structures with different test data. Consider using xUnit's theory and inline data to create parameterized tests that reduce duplication.

Here's an example of how you could refactor the tests for invalid data:

[Theory]
[InlineData("EURUSD", "AssetId", "Missing separator")] // Invalid asset ID
[InlineData("EUR/USD", "Price", "Negative value", -1.05)] // Negative price
public async Task ValidateAsync_WithInvalidData_ReturnsFailure(
    string assetId, string expectedErrorProperty, string description, decimal price = 1.05m)
{
    // Arrange
    var validator = new FxSpotPriceDataValidator();
    var entity = new FxSpotPriceData
    {
        AssetId = assetId,
        AssetClass = "fx",
        DataType = "spotprice",
        AsOfDate = DateOnly.FromDateTime(DateTime.Today),
        Price = price,
        Side = PriceSide.Bid,
        SchemaVersion = "1.0",
        Region = "global",
        DocumentType = "price"
    };

    // Act
    var result = await validator.ValidateAsync(entity);

    // Assert
    Assert.False(result.IsValid, $"Validation should fail for {description}");
    Assert.Contains(result.Errors, e => e.PropertyName == expectedErrorProperty);
}
src/Phoenix.MarketData.Infrastructure/Serialization/BaseMarketDataDto.cs (1)

46-73: Minor naming inconsistency in constructor

There's a casing inconsistency between the constructor parameter createTimeStamp (camelCase) and the property CreateTimestamp (PascalCase) it maps to.

-        DateTimeOffset createTimeStamp,
+        DateTimeOffset createTimestamp,

And update the corresponding assignment:

-        CreateTimestamp = createTimeStamp;
+        CreateTimestamp = createTimestamp;
src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs (4)

1-10: Consider organizing imports

While the imports are all necessary, consider organizing them by separating system namespaces from application namespaces for better readability.

 using System;
 using System.Threading;
 using System.Threading.Tasks;
 using Microsoft.Extensions.Logging;
 using Polly;
+
 using Phoenix.MarketData.Core.Events;
 using Phoenix.MarketData.Application.Services;
 using Phoenix.MarketData.Domain.Models;
 using Phoenix.MarketData.Core.Events.IntegrationEvents;

44-68: Consider extracting common event handling logic

Both event handler methods share similar structure. Consider extracting the common pattern to a private helper method to reduce code duplication.

+ private async Task HandleEventWithRetryAsync<T>(
+     T eventData,
+     Func<T, CancellationToken, Task> processAction,
+     string eventType,
+     CancellationToken cancellationToken)
+     where T : class, IIntegrationEvent
+ {
+     _logger.LogInformation("Received market data {EventType} event for {DataType} {AssetId}",
+         eventType, eventData.DataType, eventData.AssetId);
+
+     await _retryPolicy.ExecuteAsync(async () =>
+     {
+         try
+         {
+             _logger.LogInformation("Processing market data {EventType} event: {EventId}", eventType, eventData.Id);
+             await processAction(eventData, cancellationToken);
+         }
+         catch (Exception ex) when (!(ex is OperationCanceledException))
+         {
+             _logger.LogError(ex, "Error processing market data {EventType} event: {EventId}", eventType, eventData.Id);
+             throw; // Rethrowing to trigger retry
+         }
+     });
+ }
+
  public async Task HandleMarketDataCreatedEventAsync(MarketDataCreatedIntegrationEvent eventData, CancellationToken cancellationToken = default)
  {
-     _logger.LogInformation("Received market data created event for {DataType} {AssetId}",
-         eventData.DataType, eventData.AssetId);
-
-     await _retryPolicy.ExecuteAsync(async () =>
-     {
-         try
-         {
-             // Process event
-             _logger.LogInformation("Processing market data created event: {EventId}", eventData.Id);
-
-             // Call the event processor to handle the event
-             await _eventProcessor.ProcessCreatedEventAsync(eventData, cancellationToken);
-         }
-         catch (Exception ex) when (!(ex is OperationCanceledException))
-         {
-             _logger.LogError(ex, "Error processing market data created event: {EventId}", eventData.Id);
-             throw; // Rethrowing to trigger retry
-         }
-     });
+     await HandleEventWithRetryAsync(
+         eventData,
+         _eventProcessor.ProcessCreatedEventAsync,
+         "created",
+         cancellationToken);
  }

And similarly update the HandleMarketDataChangedEventAsync method.


62-63: Consider using pattern matching for exception filtering

The current exception filter is correct but could be more readable with modern C# pattern matching.

- catch (Exception ex) when (!(ex is OperationCanceledException))
+ catch (Exception ex) when (ex is not OperationCanceledException)

Also apply this change to the other exception filter at lines 88-89.


30-32: Same pattern matching suggestion for the retry policy

Similarly, the retry policy's exception filtering could use pattern matching for consistency.

- .Handle<Exception>(ex => !(ex is OperationCanceledException))
+ .Handle<Exception>(ex => ex is not OperationCanceledException)
src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryEvents.cs (4)

1-10: Remove unused namespaces

There are several imported namespaces that aren't used in this file. Consider removing them to keep the code clean.

 using System;
-using System.Collections.Generic;
-using System.Linq;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-using Microsoft.Azure.Cosmos.Linq;
-using Microsoft.Extensions.Logging;
-using Phoenix.MarketData.Core.Events;
-using Phoenix.MarketData.Domain.Models;

13-13: Clarify header comment

The comment "// ========== EVENTS ==========" suggests there might be other sections in this file, but there aren't any. Consider removing this comment or clarifying its purpose.

-     // ========== EVENTS ==========

14-45: Consider using record types for event classes

Since these event classes represent immutable data with value semantics, consider using C# record types instead of classes for a more concise and intention-revealing implementation.

-    public class EntityCreatedEvent<T>
-    {
-        public T Entity { get; }
-        public DateTimeOffset Timestamp { get; }
-        public EntityCreatedEvent(T entity)
-        {
-            Entity = entity ?? throw new ArgumentNullException(nameof(entity));
-            Timestamp = DateTimeOffset.UtcNow;
-        }
-    }
+    public record EntityCreatedEvent<T>
+    {
+        public T Entity { get; }
+        public DateTimeOffset Timestamp { get; } = DateTimeOffset.UtcNow;
+
+        public EntityCreatedEvent(T entity)
+        {
+            Entity = entity ?? throw new ArgumentNullException(nameof(entity));
+        }
+    }

Similarly, transform the other event classes to record types to better express their immutable nature.


14-45: Consider implementing a common event interface

These event classes share a common structure (Timestamp property and entity-related data). Consider defining an interface to unify them, which would make it easier to handle events generically.

public interface IRepositoryEvent
{
    DateTimeOffset Timestamp { get; }
}

public class EntityCreatedEvent<T> : IRepositoryEvent
{
    // Existing implementation
}

public class EntityUpdatedEvent<T> : IRepositoryEvent
{
    // Existing implementation
}

public class EntityDeletedEvent : IRepositoryEvent
{
    // Existing implementation
}
tests/Phoenix.MarketData.Infrastructure.Tests/Events/MarketDataEventSubscriberTests.cs (2)

64-70: Hard-coded EventType assertion may break with domain constant changes

The verification expects e.EventType == "Created", which couples the test to the literal string instead of the domain constant.
If MarketDataCreatedIntegrationEvent.EventType is renamed (e.g., "MarketDataCreated"), the test will fail although behaviour is correct.

-                        e.EventType == "Created"),
+                        e.EventType == MarketDataCreatedIntegrationEvent.EventType),

Apply the same pattern to the “Changed” test below to de-duplicate magic strings.


124-133: Logger verification is fragile and can produce false negatives

Verifying ILogger.Log with Times.AtLeastOnce asserts any info-level message was written, which is implementation-detail heavy and can fail when log-level filtering is configured differently in CI.

Consider dropping the logger verification or asserting the functional interaction (event processor invocation) only.
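
A minimal sketch of asserting only the functional interaction, assuming the test class keeps a Moq mock of the event processor in a field; the _eventProcessorMock and expectedEvent names are illustrative, not taken from the PR:

// Hypothetical replacement assertion: verify the processor was invoked once with the
// published event, and drop the brittle ILogger.Log verification entirely.
_eventProcessorMock.Verify(
    p => p.ProcessCreatedEventAsync(
        It.Is<MarketDataCreatedIntegrationEvent>(e => e.Id == expectedEvent.Id),
        It.IsAny<CancellationToken>()),
    Times.Once);

This keeps the test focused on observable behaviour rather than log output, so it stays green regardless of how log-level filtering is configured in CI.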

src/Phoenix.MarketData.Core/Validation/ValidationResult.cs (2)

75-95: Consider making AddError and AddErrors methods return this for method chaining.

The AddError and AddErrors methods correctly handle null inputs, but they could be more flexible if they returned this to support method chaining.

-public void AddError(ValidationError error)
+public ValidationResult AddError(ValidationError error)
{
    if (error != null)
    {
        _errors.Add(error);
    }
+   return this;
}

-public void AddErrors(IEnumerable<ValidationError> errors)
+public ValidationResult AddErrors(IEnumerable<ValidationError> errors)
{
    if (errors != null)
    {
        _errors.AddRange(errors);
    }
+   return this;
}

13-13: Consider using Count instead of Any() for better performance.

The IsValid property uses !Errors.Any() to check if there are any errors. For better performance, consider using _errors.Count == 0 instead, as it avoids enumeration overhead.

-public bool IsValid => !Errors.Any();
+public bool IsValid => _errors.Count == 0;
docs/architecture-implementation-guide.md (1)

26-54: Add missing articles in the Push/Pull Implementation Guidelines.

The guidelines for repository implementation and push/pull models are clear, but there are a few grammatical issues with missing articles.

-Use Pull model for synchronous requests
+Use the Pull model for synchronous requests
-Use Push model for reactive updates
+Use the Push model for reactive updates
-Consider performance implications of each approach
+Consider the performance implications of each approach
🧰 Tools
🪛 LanguageTool

[uncategorized] ~50-~50: You might be missing the article “the” here.
Context: ...h/Pull Implementation Guidelines - Use Pull model for synchronous requests - Use Pu...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)


[uncategorized] ~51-~51: You might be missing the article “the” here.
Context: ...ll model for synchronous requests - Use Push model for reactive updates - Don't forc...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)


[uncategorized] ~53-~53: You might be missing the article “the” here.
Context: ...ion unless it fits naturally - Consider performance implications of each approach ### 4. E...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)

tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CompositeValidatorTests.cs (2)

186-186: Remove code comment that appears to be a work note.

The comment about fixing the use of It.IsAny<Type>() looks like a development note that should be removed from the production code.

-            // Fixed: Use generic It.IsAny<Type>() to avoid type conversion issues
             derivedEntityValidator.Verify(v => v.ValidateAsync(It.IsAny<TestDerivedEntity>(), It.IsAny<CancellationToken>()), Times.Once);

151-191: Consider adding tests for validator registration scenarios.

The current tests cover the validation behavior well, but there are no tests for scenarios such as:

  • Registering a validator for a type that already has a validator registered
  • Unregistering validators (if supported)
  • The order of registration affecting which validator is chosen

Consider adding these tests to ensure the registration behavior is well-defined.
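
A sketch of the first scenario, assuming the same TestEntity/Moq setup used elsewhere in this file (names and constructor shapes are illustrative). It asserts the current behaviour, where the last registration wins; if the duplicate-registration guard suggested for CompositeValidator is adopted, flip the assertion to expect an InvalidOperationException instead:

[Fact]
public async Task ValidateAsync_WhenSameTypeRegisteredTwice_UsesLastRegisteredValidator()
{
    // Arrange (illustrative setup)
    var composite = new CompositeValidator<TestEntity>();
    var firstValidator = new Mock<IValidator<TestEntity>>();
    var secondValidator = new Mock<IValidator<TestEntity>>();
    secondValidator
        .Setup(v => v.ValidateAsync(It.IsAny<TestEntity>(), It.IsAny<CancellationToken>()))
        .ReturnsAsync(ValidationResult.Success());

    composite.RegisterValidator(firstValidator.Object);
    composite.RegisterValidator(secondValidator.Object); // silently overwrites the first

    // Act
    await composite.ValidateAsync(new TestEntity(), CancellationToken.None);

    // Assert: only the most recently registered validator runs
    firstValidator.Verify(v => v.ValidateAsync(It.IsAny<TestEntity>(), It.IsAny<CancellationToken>()), Times.Never);
    secondValidator.Verify(v => v.ValidateAsync(It.IsAny<TestEntity>(), It.IsAny<CancellationToken>()), Times.Once);
}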

src/Phoenix.MarketData.Core/Events/IntegrationEvents.cs/MarketDataEvents.cs (2)

28-28: Recommend using enum for EventType instead of string literals

The EventType property is currently overridden in derived classes with string literals ("Created", "Changed"). Consider defining an enum for event types to improve type safety and maintainability.

+ public enum MarketDataEventType
+ {
+     Created,
+     Changed
+ }

- public abstract string EventType { get; }
+ public abstract MarketDataEventType EventType { get; }

// Then in the derived classes:
- public override string EventType => "Created";
+ public override MarketDataEventType EventType => MarketDataEventType.Created;

58-83: Implement event conversion to avoid duplication

The MarketDataCreatedIntegrationEvent class has good structure but relies on copy-pasting the constructor implementation between event types. Consider adding a factory method to create events based on type.
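
One possible shape for such a factory, assuming the MarketDataEventType enum suggested above and a shared base event type; the factory name, base type, and constructor signatures are illustrative and would need to match the actual classes:

// Hypothetical factory that centralises event construction instead of copy-pasted constructors.
public static class MarketDataIntegrationEventFactory
{
    public static MarketDataIntegrationEvent Create(MarketDataEventType type, IMarketDataEntity source) =>
        type switch
        {
            MarketDataEventType.Created => new MarketDataCreatedIntegrationEvent(source),
            MarketDataEventType.Changed => new MarketDataChangedIntegrationEvent(source),
            _ => throw new ArgumentOutOfRangeException(nameof(type), type, "Unsupported market data event type.")
        };
}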

src/Phoenix.MarketData.Application/Services/MarketDataService.cs (2)

40-53: Improve return value handling in PublishMarketDataAsync

The conditional check and cast for the returned ID seem awkward and might indicate an inconsistent repository interface design.

- return id is IEntity entity ? entity.Id : string.Empty;
+ // If id is always expected to be an IEntity or string, consider making the repository more consistent
+ return id switch
+ {
+     IEntity entity => entity.Id,
+     string strId => strId,
+     _ => string.Empty
+ };

54-61: Simplify GetLatestMarketDataAsync implementation

The method uses an unnecessary intermediate variable. Consider returning the repository call directly.

- var result = await _repository.GetLatestMarketDataAsync(
-     dataType, assetClass, assetId, region, asOfDate, documentType);
- return result;
+ return await _repository.GetLatestMarketDataAsync(
+     dataType, assetClass, assetId, region, asOfDate, documentType);
src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs (1)

85-90: Consider simplifying by removing static methods

The pattern of having both instance methods and static convenience methods adds complexity. Consider simplifying to just instance methods with dependency injection.

You could remove the static methods and singleton instance, then use dependency injection to provide the mapper instance where needed.
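
For example, the mapper could be registered once at startup and injected where the static helpers are called today; the IMarketDataMapper generic shape below is an assumption based on the file names in this PR, and the consumer class is purely illustrative:

// Program.cs (illustrative registration)
builder.Services.AddSingleton<
    IMarketDataMapper<CryptoOrdinalSpotPriceData, CryptoOrdinalSpotPriceDataDto>,
    CryptoOrdinalSpotPriceDataMapper>();

// Consumer (hypothetical): take the mapper as a constructor dependency instead of calling static methods
public class CryptoOrdinalIngestionHandler
{
    private readonly IMarketDataMapper<CryptoOrdinalSpotPriceData, CryptoOrdinalSpotPriceDataDto> _mapper;

    public CryptoOrdinalIngestionHandler(
        IMarketDataMapper<CryptoOrdinalSpotPriceData, CryptoOrdinalSpotPriceDataDto> mapper)
    {
        _mapper = mapper ?? throw new ArgumentNullException(nameof(mapper));
    }
}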

src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs (3)

29-37: Retrieve secrets in parallel to shave off network latency

GetEventGridEndpointAsync() and GetEventGridKeyAsync() are independent I/O calls but are awaited sequentially. The two round-trips can be overlapped by starting both calls before awaiting them.

-var endpoint = await secretProvider.GetEventGridEndpointAsync();
-var key = await secretProvider.GetEventGridKeyAsync();
+var endpointTask = secretProvider.GetEventGridEndpointAsync();
+var keyTask = secretProvider.GetEventGridKeyAsync();
+await Task.WhenAll(endpointTask, keyTask);
+var endpoint = await endpointTask;
+var key = await keyTask;

49-49: Prefer BinaryData.FromObjectAsJson for less allocation & clearer intent

BinaryData.FromString(JsonSerializer.Serialize(obj)) allocates an intermediate string for every event. BinaryData.FromObjectAsJson(obj) avoids that and handles the serialization in-place.

-BinaryData.FromString(JsonSerializer.Serialize(eventData))
+BinaryData.FromObjectAsJson(eventData)

Apply the same change inside the LINQ projection in PublishManyAsync.

Also applies to: 87-90


129-133: Topic derivation helper can be simplified & made culture-invariant

The current manual loop is hard to read and misses globalisation edge-cases (e.g. Turkish “İ”). A short regex does the job more safely.

- return string.Concat(eventType.Select((x, i) =>
-     i > 0 && char.IsUpper(x) ? "-" + x.ToString().ToLower() : x.ToString().ToLower()));
+ return Regex.Replace(eventType, "(?<!^)([A-Z])", "-$1")
+             .ToLowerInvariant();

Remember to add using System.Text.RegularExpressions; at the top of the file.

src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepository.cs (2)

149-169: Sequential bulk insert nullifies the benefit of “bulk”

The loop issues one network request per document. The Cosmos DB SDK supports bulk execution (via CosmosClientOptions.AllowBulkExecution) and TransactionalBatch, both of which are significantly faster and cheaper.

Refactor to use TransactionalBatch when all items share the same partition key or enable bulk execution via CosmosClientOptions.AllowBulkExecution = true.
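
A rough sketch of the bulk-mode variant, assuming AllowBulkExecution is enabled when the CosmosClient is created and that a partition key can be derived per item; the GetPartitionKey helper is hypothetical:

// CosmosClient built with bulk support, e.g. in Program.cs:
// new CosmosClient(connectionString, new CosmosClientOptions { AllowBulkExecution = true });

var tasks = new List<Task>();
foreach (var item in items)
{
    // With bulk mode enabled, the SDK groups these concurrent calls into batched requests.
    tasks.Add(_container.CreateItemAsync(item, GetPartitionKey(item)));
}
await Task.WhenAll(tasks);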


20-25: Missing CancellationTokens throughout repository APIs

Similar to the publisher, all async data-access methods should surface a CancellationToken to allow callers to abort long-running Cosmos queries or writes, especially important in Azure Functions where execution timeouts are enforced.

Adding it now avoids a future breaking change in the public interface.

Also applies to: 27-58, 59-80, 98-113, 115-147, 171-233
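
One possible shape for the change, shown for a single read method with the token forwarded to the SDK call; the GetByIdAsync name is illustrative, while ReadItemAsync genuinely accepts a CancellationToken:

public async Task<T?> GetByIdAsync(string id, string partitionKey, CancellationToken cancellationToken = default)
{
    var response = await _container.ReadItemAsync<T>(
        id,
        new PartitionKey(partitionKey),
        cancellationToken: cancellationToken); // propagates caller/host cancellation to Cosmos
    return response.Resource;
}

Defaulting the parameter keeps existing call sites compiling while new callers can pass the Functions host token.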

src/Phoenix.MarketData.Domain/Repositories/IRepository.cs (1)

13-13: Consider adding pagination to QueryAsync

The QueryAsync method returns an IEnumerable without pagination parameters. For large data sets, this could lead to performance issues or timeout problems. Consider adding pagination support.

-    Task<IEnumerable<T>> QueryAsync(string dataType, string assetClass, string? assetId = null, DateTime? fromDate = null, DateTime? toDate = null);
+    Task<(IEnumerable<T> Items, string? ContinuationToken)> QueryAsync(string dataType, string assetClass, string? assetId = null, DateTime? fromDate = null, DateTime? toDate = null, int? maxItemCount = null, string? continuationToken = null);
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 25088c1 and f394ea5.

⛔ Files ignored due to path filters (12)
  • node_modules/.bin/husky is excluded by !**/node_modules/**
  • node_modules/.bin/husky.cmd is excluded by !**/node_modules/**
  • node_modules/.bin/husky.ps1 is excluded by !**/node_modules/**
  • node_modules/.package-lock.json is excluded by !**/node_modules/**
  • node_modules/husky/LICENSE is excluded by !**/node_modules/**
  • node_modules/husky/README.md is excluded by !**/node_modules/**
  • node_modules/husky/bin.js is excluded by !**/node_modules/**
  • node_modules/husky/husky is excluded by !**/node_modules/**
  • node_modules/husky/index.d.ts is excluded by !**/node_modules/**
  • node_modules/husky/index.js is excluded by !**/node_modules/**
  • node_modules/husky/package.json is excluded by !**/node_modules/**
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (99)
  • .aiguidance/ai-guidelines.md (1 hunks)
  • .aiguidance/context-providers.json (1 hunks)
  • .aiguidance/settings.json (1 hunks)
  • .github/workflows/build-and-test.yml (1 hunks)
  • .gitignore (1 hunks)
  • .husky/pre-commit (1 hunks)
  • .vscode/extensions.json (1 hunks)
  • .vscode/launch.json (1 hunks)
  • .vscode/settings.json (1 hunks)
  • .vscode/tasks.json (1 hunks)
  • Phoenix.MarketData.Functions/Program.cs (0 hunks)
  • Phoenix.MarketData.Functions/SaveDocumentToDb.cs (0 hunks)
  • Phoenix.MarketData.Infrastructure.Tests/MarketDataRepositoryTests.cs (0 hunks)
  • Phoenix.MarketData.Infrastructure.Tests/Phoenix.MarketData.Infrastructure.Tests.csproj (0 hunks)
  • Phoenix.MarketDataPlatform.sln (3 hunks)
  • docs/README.md (4 hunks)
  • docs/architecture-implementation-guide.md (1 hunks)
  • package.json (1 hunks)
  • run-tests-with-coverage.cmd (1 hunks)
  • src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs (1 hunks)
  • src/Phoenix.MarketData.Application/Phoenix.MarketData.Application.csproj (1 hunks)
  • src/Phoenix.MarketData.Application/Services/Decorators/ValidationMarketDataServiceDecorator.cs (1 hunks)
  • src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs (1 hunks)
  • src/Phoenix.MarketData.Application/Services/MarketDataService.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Configuration/IMarketDataSecretProvider.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Domain/MarketDataChangedEvent.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Domain/MarketDataCreatedEvent.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Events/IEventPublisher.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Events/IMarketDataEventSubscriber.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Events/IntegrationEvents.cs/MarketDataEvents.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Mapping/IModelMapper.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Phoenix.MarketData.Core.csproj (1 hunks)
  • src/Phoenix.MarketData.Core/Repositories/IRepository.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/FluentValidationExtensions.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/IValidator.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/SchemaValidationExtensions.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ServiceValidationDecorator.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ValidationDecorator.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ValidationError.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ValidationException.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ValidationPipelineBehaviour.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ValidationResult.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Constants/CurrencyCodes.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Events/IMarketDataEventPublisher.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/AssetClass.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/BaseMarketData.cs (3 hunks)
  • src/Phoenix.MarketData.Domain/Models/CryptoOrdinalSpotPriceData.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/FxSpotPriceData.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/FxVolSurfaceData.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/Interfaces/IMarketData.cs (0 hunks)
  • src/Phoenix.MarketData.Domain/Models/Interfaces/IMarketDataEntity.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/PriceSide.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/Regions.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Repositories/IRepository.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Services/IMarketDataService.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/JsonConverters/JsonConverters.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/Phoenix.MarketData.Functions.csproj (2 hunks)
  • src/Phoenix.MarketData.Functions/Program.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/SaveDocumentToDb.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Configuration/MarketDataSecretProvider.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Cosmos/CosmosClientFactory.cs (0 hunks)
  • src/Phoenix.MarketData.Infrastructure/Cosmos/MarketDataRepository.cs (0 hunks)
  • src/Phoenix.MarketData.Infrastructure/Cosmos/VersionManager.cs (0 hunks)
  • src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Mapping/BaseMarketDataMapper.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Mapping/IMarketDataMapper.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Phoenix.MarketData.Infrastructure.csproj (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepository.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryEvents.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryExtensions.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/Interfaces.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/VersioningExtensions.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Schemas/JsonSchemaValidator.cs (2 hunks)
  • src/Phoenix.MarketData.Infrastructure/Serialization/BaseMarketDataDto.cs (2 hunks)
  • src/Phoenix.MarketData.Infrastructure/Serialization/CryptoOrdinalSpotPriceDataDto.cs (2 hunks)
  • src/Phoenix.MarketData.Infrastructure/Serialization/FxSpotPriceDataDto.cs (2 hunks)
  • src/Phoenix.MarketData.Infrastructure/Serialization/JsonConverters/DateOnlyJsonConverter.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Serialization/JsonConverters/TimeOnlyJsonConverter.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Validation/CompositeValidator.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Validation/CryptoOrdinalSpotPriceDataValidator.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Validation/FxSpotPriceDataValidator.cs (1 hunks)
  • tests/Phoenix.MarketData.Application.Tests/Services/MarketDataServiceTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Events/MarketDataEventSubscriberTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Generate-CoverageReport.ps1 (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/FxSpotPriceDataMapperTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Phoenix.MarketData.Infrastructure.Tests.csproj (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/MarketDataRepositoryTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CompositeValidatorTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CryptoOrdinalSpotPriceDataValidatorTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/FxSpotPriceDataValidatorTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/ValidationDecoratorTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/coverage-threshold.xml (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/coverage.runsettings (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh (1 hunks)
💤 Files with no reviewable changes (8)
  • Phoenix.MarketData.Infrastructure.Tests/Phoenix.MarketData.Infrastructure.Tests.csproj
  • src/Phoenix.MarketData.Infrastructure/Cosmos/CosmosClientFactory.cs
  • src/Phoenix.MarketData.Infrastructure/Cosmos/VersionManager.cs
  • Phoenix.MarketData.Functions/SaveDocumentToDb.cs
  • Phoenix.MarketData.Functions/Program.cs
  • Phoenix.MarketData.Infrastructure.Tests/MarketDataRepositoryTests.cs
  • src/Phoenix.MarketData.Domain/Models/Interfaces/IMarketData.cs
  • src/Phoenix.MarketData.Infrastructure/Cosmos/MarketDataRepository.cs
🧰 Additional context used
🧬 Code Graph Analysis (12)
src/Phoenix.MarketData.Domain/Models/PriceSide.cs (2)
src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs (1)
  • PriceSide (71-83)
src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs (1)
  • PriceSide (61-73)
src/Phoenix.MarketData.Core/Validation/IValidator.cs (1)
src/Phoenix.MarketData.Core/Validation/ValidationResult.cs (8)
  • ValidationResult (6-96)
  • ValidationResult (24-27)
  • ValidationResult (32-32)
  • ValidationResult (37-37)
  • ValidationResult (42-43)
  • ValidationResult (48-49)
  • ValidationResult (54-55)
  • ValidationResult (60-64)
src/Phoenix.MarketData.Core/Validation/ServiceValidationDecorator.cs (3)
src/Phoenix.MarketData.Application/Services/Decorators/ValidationMarketDataServiceDecorator.cs (3)
  • Task (21-31)
  • Task (33-39)
  • Task (41-48)
src/Phoenix.MarketData.Core/Validation/IValidator.cs (1)
  • Task (16-16)
src/Phoenix.MarketData.Core/Validation/ValidationDecorator.cs (2)
  • Task (28-32)
  • Task (43-61)
src/Phoenix.MarketData.Infrastructure/Serialization/JsonConverters/DateOnlyJsonConverter.cs (1)
src/Phoenix.MarketData.Infrastructure/Serialization/JsonConverters/TimeOnlyJsonConverter.cs (1)
  • Write (29-33)
src/Phoenix.MarketData.Domain/Models/FxVolSurfaceData.cs (1)
src/Phoenix.MarketData.Domain/Models/BaseMarketData.cs (1)
  • BaseMarketData (6-172)
src/Phoenix.MarketData.Core/Validation/ValidationError.cs (1)
src/Phoenix.MarketData.Core/Validation/ValidationResult.cs (1)
  • ToString (66-67)
src/Phoenix.MarketData.Infrastructure/Serialization/JsonConverters/TimeOnlyJsonConverter.cs (1)
src/Phoenix.MarketData.Infrastructure/Serialization/JsonConverters/DateOnlyJsonConverter.cs (1)
  • Write (25-28)
src/Phoenix.MarketData.Domain/Services/IMarketDataService.cs (1)
src/Phoenix.MarketData.Functions/JsonConverters/JsonConverters.cs (1)
  • DateOnly (7-8)
src/Phoenix.MarketData.Core/Repositories/IRepository.cs (2)
src/Phoenix.MarketData.Core/Mapping/IModelMapper.cs (1)
  • IEnumerable (23-23)
src/Phoenix.MarketData.Core/Validation/SchemaValidationExtensions.cs (1)
  • IEnumerable (23-46)
src/Phoenix.MarketData.Core/Events/IntegrationEvents.cs/MarketDataEvents.cs (1)
src/Phoenix.MarketData.Domain/Models/AssetClass.cs (1)
  • AssetClass (6-14)
src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs (4)
src/Phoenix.MarketData.Domain/Models/CryptoOrdinalSpotPriceData.cs (1)
  • CryptoOrdinalSpotPriceData (3-12)
src/Phoenix.MarketData.Infrastructure/Serialization/CryptoOrdinalSpotPriceDataDto.cs (1)
  • CryptoOrdinalSpotPriceDataDto (6-61)
src/Phoenix.MarketData.Domain/Models/AssetClass.cs (1)
  • AssetClass (6-14)
src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs (1)
  • PriceSide (61-73)
src/Phoenix.MarketData.Domain/Models/Interfaces/IMarketDataEntity.cs (2)
src/Phoenix.MarketData.Domain/Models/AssetClass.cs (1)
  • AssetClass (6-14)
src/Phoenix.MarketData.Functions/JsonConverters/JsonConverters.cs (2)
  • DateOnly (7-8)
  • TimeOnly (16-17)
🪛 markdownlint-cli2 (0.17.2)
docs/README.md

52-52: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)

🪛 Shellcheck (0.10.0)
tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh

[warning] 11-11: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.

(SC2164)

🪛 LanguageTool
.aiguidance/ai-guidelines.md

[uncategorized] ~108-~108: You might be missing the article “the” here.
Context: ... computational efficiency - Consider environmental impact of training and inference - B...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)

docs/architecture-implementation-guide.md

[uncategorized] ~50-~50: You might be missing the article “the” here.
Context: ...h/Pull Implementation Guidelines - Use Pull model for synchronous requests - Use Pu...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)


[uncategorized] ~51-~51: You might be missing the article “the” here.
Context: ...ll model for synchronous requests - Use Push model for reactive updates - Don't forc...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)


[uncategorized] ~53-~53: You might be missing the article “the” here.
Context: ...ion unless it fits naturally - Consider performance implications of each approach ### 4. E...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)

🪛 actionlint (1.7.7)
.github/workflows/build-and-test.yml

16-16: the runner of "actions/checkout@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


19-19: the runner of "actions/setup-dotnet@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


54-54: the runner of "actions/upload-artifact@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)

🪛 YAMLlint (1.37.1)
.github/workflows/build-and-test.yml

[error] 14-14: trailing spaces

(trailing-spaces)


[error] 17-17: trailing spaces

(trailing-spaces)


[error] 22-22: trailing spaces

(trailing-spaces)


[error] 25-25: trailing spaces

(trailing-spaces)


[error] 28-28: trailing spaces

(trailing-spaces)


[error] 31-31: trailing spaces

(trailing-spaces)


[error] 34-34: trailing spaces

(trailing-spaces)


[error] 41-41: trailing spaces

(trailing-spaces)


[error] 52-52: trailing spaces

(trailing-spaces)


[error] 59-59: trailing spaces

(trailing-spaces)


[error] 65-65: no new line character at the end of file

(new-line-at-end-of-file)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Greptile Review
🔇 Additional comments (136)
src/Phoenix.MarketData.Domain/Models/AssetClass.cs (1)

1-1: Ensure namespace migration consistency.
The namespace was updated from Phoenix.MarketData.Domain to Phoenix.MarketData.Core, aligning with the new core project structure. Please verify that all references (imports, project files, unit tests) have been updated accordingly to prevent any broken links or build errors.

src/Phoenix.MarketData.Infrastructure/Mapping/BaseMarketDataMapper.cs (1)

1-1: Updated namespace import to Core.Models
The change from Phoenix.MarketData.Domain.Models to Phoenix.MarketData.Core.Models aligns with the refactoring that migrated your domain entities into the new Core layer. This update is correct; BaseMarketData and related types should now resolve from Core.Models.

.aiguidance/settings.json (2)

2-3: Configuration Settings Sound Good
The contextWindow and responseFormat settings align with expectations, establishing a high token limit and enforcing code block output with syntax highlighting.


5-8: Review Startup Configuration Options
The startup settings (forceRepoScan and scanDepth) look correct. Confirm that scanDepth accepts "full" as a valid value and consider documenting permitted depth options if not already specified in project docs.

.aiguidance/context-providers.json (1)

2-5: Context Provider Limits Are Well-Defined
The token and line/depth limits for currentFile, diff, terminal, and folder contexts seem reasonable and balanced.

.aiguidance/ai-guidelines.md (1)

1-15: Comprehensive AI Guidelines
This document provides a thorough overview of AI integration best practices, covering principles, user experience, testing, and compliance. Its structure and detail level are commendable.

.husky/pre-commit (1)

1-2: Good addition of pre-commit hooks for test automation

Adding Husky pre-commit hooks ensures tests run automatically before each commit, improving code quality and catching issues early in the development cycle. This integrates well with the broader CI/CD improvements mentioned in the summary.

Consider enhancing this hook in the future to:

  • Only run tests relevant to changed files for faster commits
  • Include linting or code formatting checks
  • Provide a bypass flag for exceptional situations
src/Phoenix.MarketData.Domain/Models/Regions.cs (2)

1-2: Namespace update aligns with core layering.
The file-scoped namespace was updated from Phoenix.MarketData.Domain to Phoenix.MarketData.Core to support the new core project structure and maintain consistency with other moved domain models.


1-12: Verify file location matches namespace.
The namespace now points to Phoenix.MarketData.Core, but the file path remains under Domain/Models. Ensure the file was physically moved into the Core project (e.g., src/Phoenix.MarketData.Core/Models) and that your .csproj reference is updated accordingly to avoid mismatches.

src/Phoenix.MarketData.Domain/Models/PriceSide.cs (2)

3-15: Approve namespace refactor.
The PriceSide enum was moved under the file-scoped Phoenix.MarketData.Core namespace without modifying its members or descriptions. This aligns perfectly with the new core layering.


1-2: Ensure folder and project alignment.
Confirm this file’s physical location and project references have been updated to match the new Phoenix.MarketData.Core namespace; otherwise, import/build errors may occur.

Also applies to: 3-15

.gitignore (1)

5-12: Add coverage artifacts to ignore list.
Patterns for TestResults, CoverageReport, .cobertura.xml, .opencover.xml, and coverage.json were correctly added to prevent generated coverage files from being committed.

.vscode/extensions.json (1)

1-6: Recommend VS Code extensions for Azure Functions and C#.
The workspace recommendations for ms-azuretools.vscode-azurefunctions and ms-dotnettools.csharp will help standardize the developer environment and improve productivity.

src/Phoenix.MarketData.Domain/Models/CryptoOrdinalSpotPriceData.cs (3)

1-12: Approve model migration to core.
The CryptoOrdinalSpotPriceData class has been relocated under Phoenix.MarketData.Core.Models and leverages required properties for safer initialization, while preserving all domain properties. This supports the new layered architecture without altering behavior.


1-2: Verify language version for required.
The use of C# 11’s required modifier requires setting <LangVersion>latest</LangVersion> or 11 in the Core project’s .csproj. Please confirm your project’s language version supports this feature.


1-12: Ensure file path matches the new namespace.
As with other moved models, verify that this file resides in the Core.Models folder and is included in the corresponding Core project to avoid namespace path mismatches.

src/Phoenix.MarketData.Functions/Phoenix.MarketData.Functions.csproj (3)

20-21: New dependencies support event-driven architecture and modern serialization

The addition of the EventGrid and System.Text.Json packages aligns with the architectural overhaul mentioned in the summary, supporting event-driven design and modern serialization.


37-37: Project reference path updated to match new structure

The project reference path has been updated to reflect the architectural changes in the solution structure.


21-21:

Details

✅ Verification successful

Verify System.Text.Json version

The System.Text.Json version 9.0.5 appears unusually high. The version number typically aligns with the .NET SDK version (which is 9.0 based on your TargetFramework).


🏁 Script executed:

#!/bin/bash
# Check the latest official version of System.Text.Json
dotnet search System.Text.Json --prerelease | grep "System.Text.Json "

Length of output: 169


🏁 Script executed:

#!/bin/bash
# Fetch the latest published version of System.Text.Json from NuGet
curl -s https://api.nuget.org/v3-flatcontainer/system.text.json/index.json \
  | grep -oP '"\K[0-9]+\.[0-9]+\.[0-9]+(?=")' \
  | tail -1

Length of output: 136


System.Text.Json Version Validated

Version 9.0.5 is indeed the latest stable release on NuGet and aligns with .NET 9.0. No change required.

package.json (1)

1-9: Good setup for Git hooks with Husky

Adding Husky for Git hooks is a great practice to enforce quality checks before commits.

.vscode/launch.json (1)

1-11: Effective debugging configuration for Azure Functions

This VS Code launch configuration is well set up for attaching to the Azure Functions host process, which will streamline the debugging experience.

run-tests-with-coverage.cmd (1)

1-4: Good automation for running tests with coverage

This script effectively automates running tests with coverage and generating reports, which is valuable for maintaining code quality.

.vscode/settings.json (1)

1-29: Settings configured correctly for development and deployment workflows

The VS Code settings file properly configures:

  • Custom dictionary with domain-specific terms for spell checking
  • Azure Functions deployment settings with appropriate runtime and project paths
  • Test explorer configuration to discover test projects
  • Debugging behavior

This configuration aligns well with the project structure described in the PR summary and supports both local development and deployment workflows.

src/Phoenix.MarketData.Domain/Events/IMarketDataEventPublisher.cs (1)

6-9: Well-designed event publisher interface

The interface follows good design principles:

  • Single responsibility focused on publishing market data events
  • Asynchronous method with clear naming
  • Appropriate generic constraint ensuring only market data entities can be published

This provides a clean abstraction for event publishing that can be implemented by different technologies.

src/Phoenix.MarketData.Application/Phoenix.MarketData.Application.csproj (2)

1-7: Modern project configuration with appropriate settings

The project is correctly configured with:

  • .NET 9.0 target framework
  • Implicit usings enabled for cleaner code
  • Nullable reference types enabled for better null safety

These settings align with modern .NET development practices.


11-14: Application Insights configuration comments

The commented-out Application Insights packages suggest monitoring might be planned but not yet implemented.

Verify if this is intentional or if monitoring should be enabled as part of this PR. If monitoring is required later, consider adding a TODO comment with a ticket reference.

src/Phoenix.MarketData.Core/Phoenix.MarketData.Core.csproj (3)

1-7: Core project correctly configured

The Core project is properly set up with modern .NET settings:

  • .NET 9.0 target framework
  • Implicit usings enabled
  • Nullable reference types enabled

These settings provide a solid foundation for the core abstractions.


9-15: Appropriate dependencies for core abstractions

The Core project includes well-chosen dependencies:

  • MediatR for implementing mediator pattern
  • Logging abstractions for decoupled logging
  • System.Text.Json for serialization
  • FluentValidation for validation rules
  • JsonSchema.Net for schema validation

These packages align well with the core responsibility of providing abstractions and validation.


17-19: Clean dependency on Domain project

The Core project correctly depends only on the Domain project, following clean architecture principles by keeping dependencies pointing inward toward the domain.

docs/README.md (2)

20-57: Great addition of testing & code coverage documentation!

The new section provides clear instructions for running tests with coverage, generating HTML coverage reports, and viewing the results. This will be valuable for the team to maintain and improve test coverage.

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

52-52: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


124-129: Good improvement to table formatting

The alignment of column headers and content in the markdown tables improves readability.

Also applies to: 139-144, 153-160

src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs (1)

1-14: Well-designed interface for secret caching

The interface provides a clean abstraction for caching secrets with appropriate documentation.

src/Phoenix.MarketData.Core/Validation/IValidator.cs (1)

3-17: Well-designed validation interface

The interface provides a clean contract for validating entities asynchronously with appropriate documentation. The use of the ValidationResult class provides a rich structure for validation responses.

src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs (1)

1-13: Good implementation of null object pattern

This no-operation implementation of ISecretCache is clean and serves as a good default when caching isn't required. It follows the null object pattern correctly.
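As a quick illustration of the pattern, the sketch below shows a no-op cache that always reports a miss and ignores writes. The interface body isn't reproduced in this review, so ISecretCacheSketch and its members are placeholders, not the real ISecretCache surface.

// Hypothetical sketch of the null object pattern; ISecretCacheSketch stands in
// for the real ISecretCache, whose members aren't shown in this comment.
public interface ISecretCacheSketch
{
    bool TryGetSecret(string name, out string? value);
    void SetSecret(string name, string value);
}

public sealed class NoOpSecretCacheSketch : ISecretCacheSketch
{
    public bool TryGetSecret(string name, out string? value)
    {
        value = null;   // always report a cache miss
        return false;
    }

    public void SetSecret(string name, string value)
    {
        // intentionally empty: caching is disabled
    }
}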

tests/Phoenix.MarketData.Infrastructure.Tests/coverage-threshold.xml (1)

2-15: Well-structured coverage configuration with appropriate thresholds

The coverage thresholds are clearly defined with a reasonable overall target of 60% and appropriate module-specific targets. Having a higher threshold for Core (70%) makes sense as it contains critical business logic, while Infrastructure has a slightly lower threshold (50%) which is common for code with external dependencies.

Consider documenting the rationale behind these specific threshold values in a comment or in your development documentation to help future maintainers understand the reasoning.

src/Phoenix.MarketData.Core/Events/IEventPublisher.cs (1)

4-26: Well-designed event publisher interface

The interface provides a clean abstraction for event publishing with appropriate use of generics, asynchronous methods, and optional topic routing. The XML documentation clearly describes the purpose of each method.

Consider whether you might need additional methods to handle or notify about event publishing failures in the future, depending on your error handling strategy.
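A minimal sketch of the shape this comment describes follows; the member and parameter names are assumptions, since the file body isn't reproduced here.

// Assumed shape of a generic publisher with optional topic routing.
public interface IEventPublisherSketch
{
    Task PublishAsync<TEvent>(
        TEvent @event,
        string? topic = null,
        CancellationToken cancellationToken = default)
        where TEvent : class;
}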

src/Phoenix.MarketData.Domain/Models/FxVolSurfaceData.cs (1)

6-24: Verify all required properties from BaseMarketData are addressed.

The class inherits from BaseMarketData which has several required properties. Ensure that your implementation addresses all required properties from the base class.

Based on the relevant code snippet from BaseMarketData, this class would need to set the following required properties in a constructor or object initialization:

  • SchemaVersion
  • AssetId
  • AssetClass
  • DataType
  • Region
  • DocumentType
  • AsOfDate

I recommend adding initialization code or constructor to ensure all required properties are set.
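For example, a hedged sketch of an object initializer that sets every required base property; the values and property types below (strings plus a DateOnly) are placeholders, so adjust them to the real signatures in BaseMarketData.

// Illustrative only: strings and DateOnly are assumed types for the base properties.
var surface = new FxVolSurfaceData
{
    SchemaVersion = "1.0",
    AssetId = "eurusd",
    AssetClass = "fx",
    DataType = "volsurface",
    Region = "global",
    DocumentType = "official",
    AsOfDate = DateOnly.FromDateTime(DateTime.UtcNow)
};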

src/Phoenix.MarketData.Infrastructure/Mapping/IMarketDataMapper.cs (1)

5-11: Clean, well-designed mapper interface.

The interface is appropriately generic and follows good design principles:

  • Clear domain-to-DTO and DTO-to-domain mapping methods
  • Proper constraints ensuring TDomain implements IMarketDataEntity
  • Simple, focused interface following single responsibility principle

This aligns well with the clean architecture approach mentioned in the PR summary.
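For reference, the contract being described is roughly the following sketch; the generic parameter names may differ slightly in the actual file.

public interface IMarketDataMapperSketch<TDomain, TDto>
    where TDomain : IMarketDataEntity
{
    // Domain -> DTO for persistence/serialization.
    TDto ToDto(TDomain domain);

    // DTO -> Domain for reading back into the model.
    TDomain ToDomain(TDto dto);
}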

tests/Phoenix.MarketData.Infrastructure.Tests/Validation/ValidationDecoratorTests.cs (1)

31-52: Well-written test for successful validation flow.

The test follows good practices with clear Arrange-Act-Assert pattern and proper verification that the service is called exactly once when validation succeeds.

Phoenix.MarketDataPlatform.sln (3)

8-13: LGTM: Solution architecture follows clean layering principles

The solution file has been updated to include the new Core and Application projects, which establishes a proper layered architecture. This structure follows industry best practices by separating concerns across layers:

  • Core/Domain layer for business entities and interfaces
  • Application layer for business logic and use cases
  • Infrastructure for external integrations

The project path corrections and additions are properly configured with unique GUIDs.


27-28: Project structure is properly organized

The new projects are correctly nested within the source directory structure, ensuring proper organization of the solution.


44-51: Build configurations are properly set up

Build configurations for Debug and Release modes are correctly defined for the new projects, ensuring they can be built in all solution configurations.

tests/Phoenix.MarketData.Infrastructure.Tests/Phoenix.MarketData.Infrastructure.Tests.csproj (5)

16-31: Well-chosen testing dependencies

The project includes a comprehensive set of modern testing tools:

  • xUnit as the test framework
  • Moq for mocking
  • FluentAssertions for readable assertions
  • AutoFixture for test data generation
  • Bogus for fake data creation
  • Coverage collection with coverlet

This provides a robust foundation for thorough testing.


33-38: Good use of global usings

The global usings for test frameworks reduce clutter in individual test files and promote consistency.


40-44: Proper project references

The test project correctly references all necessary projects, enabling comprehensive testing of infrastructure components against their dependencies.


47-52: Well-implemented coverage reporting target

The custom MSBuild target for generating coverage reports is a valuable addition. It:

  1. Runs after tests complete
  2. Installs the report generator tool if needed
  3. Generates HTML reports for easy visualization
  4. Provides informative console output

This facilitates both local development and CI pipeline integration.


4-14:

Details

❓ Verification inconclusive

Verify .NET 9.0 target framework compatibility

The test project targets .NET 9.0, a Standard Term Support (STS) release that may not be installed on every build agent. Consider ensuring this aligns with your production deployment timeline and CI environment compatibility.


🏁 Script executed:

#!/bin/bash
# Check if .NET 9.0 is in preview/beta status
dotnet --list-sdks | grep "9.0"

Length of output: 128


Verify .NET 9.0 availability and compatibility

The test project is targeting net9.0. Please verify that your local machines, CI agents, and any deployment pipelines have a compatible .NET 9.0 SDK installed. If you would rather stay on a Long Term Support baseline, consider retargeting to net8.0 instead of the STS .NET 9.0 release.

  • Ensure your CI/CD agents list a .NET 9.0 SDK (dotnet --list-sdks) or install it as part of the pipeline.
  • Confirm this aligns with your production/deployment timeline and any downstream consumers.
src/Phoenix.MarketData.Core/Configuration/IMarketDataSecretProvider.cs (1)

1-30: Well-designed secret provider interface

This interface follows good design principles:

  1. Single Responsibility: Focused solely on secret retrieval
  2. Dependency Inversion: Provides an abstraction for secret management
  3. Proper Async Design: All methods correctly return Task for asynchronous operations
  4. Comprehensive Documentation: Clear XML comments for all members

The granular methods for different connection strings and secrets allow for more flexible implementation and better testability.
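A rough sketch of what such a granular, async-first provider can look like is shown below; the method names are invented for illustration and the real interface defines its own members.

public interface IMarketDataSecretProviderSketch
{
    Task<string> GetCosmosConnectionStringAsync(CancellationToken cancellationToken = default);
    Task<string> GetEventGridConnectionStringAsync(CancellationToken cancellationToken = default);
    Task<string> GetSecretAsync(string secretName, CancellationToken cancellationToken = default);
}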

tests/Phoenix.MarketData.Infrastructure.Tests/coverage.runsettings (1)

1-21: Well-configured code coverage settings

The coverage configuration is thoughtfully designed:

  1. Uses multiple standard formats (cobertura, opencover) for compatibility with various tools
  2. Properly excludes generated code and test assemblies
  3. Specifically includes the main project assemblies
  4. Excludes infrastructure bootstrapping code (Program, Startup)

This balanced approach will provide meaningful coverage metrics focused on business logic code that should be tested.

src/Phoenix.MarketData.Core/Mapping/IModelMapper.cs (1)

1-25: Well-designed interface with clear responsibilities!

The interface is well-structured with appropriate XML documentation and provides a clean abstraction for object mapping operations. The generic parameters allow for type-safe mapping between any object types.

src/Phoenix.MarketData.Domain/Constants/CurrencyCodes.cs (1)

1-29: Clean implementation of currency code constants

This static class provides a standardized way to reference currency codes throughout the application, reducing the risk of typos and inconsistencies. The approach with constants and a read-only list is appropriate.

tests/Phoenix.MarketData.Infrastructure.Tests/Generate-CoverageReport.ps1 (1)

31-32:

Details

✅ Verification successful

Verify ReportGenerator handles multiple coverage files correctly

The script uses a wildcard pattern that might match multiple coverage files. It's not explicitly clear how ReportGenerator handles multiple inputs or if this was the intended behavior.


🏁 Script executed:

#!/bin/bash
# Verify how many coverage files might be matched
find . -path "**/TestResults/**/coverage.cobertura.xml" | wc -l

Length of output: 67


🏁 Script executed:

#!/bin/bash
# Show how coverageFilesPattern is defined in the PowerShell script
rg -n -C3 'coverageFilesPattern' tests/Phoenix.MarketData.Infrastructure.Tests/Generate-CoverageReport.ps1

# List all test project files to see how many coverage outputs may be generated
find . -type f -name '*.Tests.csproj'

# Count how many test projects exist
find . -type f -name '*.Tests.csproj' | wc -l

Length of output: 797


No changes needed: wildcard pattern is safe and supported

Verified that $coverageFilesPattern (line 32) expands to a single coverage.cobertura.xml file today—there’s only one test project—and ReportGenerator natively accepts and merges multiple report inputs when a wildcard matches more than one file. No modifications required.

.vscode/tasks.json (1)

108-130: LGTM! Well-structured coverage report task.

The coverage report generation task is well-structured with appropriate platform-specific commands for Windows, Linux, and macOS.

src/Phoenix.MarketData.Infrastructure/Repositories/VersioningExtensions.cs (1)

17-28: The query implementation looks good.

The LINQ query correctly filters by all necessary entity properties and orders by version to find the highest version number.

src/Phoenix.MarketData.Core/Validation/SchemaValidationExtensions.cs (2)

11-15: LGTM! Clean and efficient implementation.

This extension method neatly converts schema validation results to the application's internal ValidationResult format.


23-46: Well-implemented recursive collection of validation errors.

The recursion pattern effectively flattens the nested validation results structure, and the path handling is robust.

src/Phoenix.MarketData.Infrastructure/Validation/FxSpotPriceDataValidator.cs (2)

12-13: CancellationToken is not used in ValidateAsync implementation

The method declares a cancellationToken parameter but never uses it. Because the validation is synchronous this is harmless today, but the method does not respect cancellation. If you want to honour the token, an inexpensive option is an early check before the rules run:

         public Task<ValidationResult> ValidateAsync(FxSpotPriceData data, CancellationToken cancellationToken = default)
         {
+            cancellationToken.ThrowIfCancellationRequested();
+
             // Validation logic remains the same

             return Task.FromResult(errors.Count > 0
                 ? ValidationResult.Failure(errors)
                 : ValidationResult.Success());
         }

Also applies to: 46-48


14-44: LGTM! Validation logic is comprehensive

The validator implements all necessary validation rules for FxSpotPriceData:

  • Proper null checking with ArgumentNullException
  • AssetId validation for format compliance
  • AssetClass validation ensuring it's "fx"
  • Price validation ensuring it's positive
  • Currency validation ensuring it's provided
src/Phoenix.MarketData.Infrastructure/Repositories/Interfaces.cs (1)

14-14: LGTM! Good pattern for soft delete functionality

The ISoftDeletable interface is a clean way to implement soft delete functionality across entities.
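The marker interface is essentially the following sketch; the real file may add further members such as a deletion timestamp.

public interface ISoftDeletable
{
    // Entities flagged as deleted are filtered out of normal queries.
    bool IsDeleted { get; set; }
}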

src/Phoenix.MarketData.Infrastructure/Serialization/CryptoOrdinalSpotPriceDataDto.cs (3)

1-2: LGTM! Migrated to System.Text.Json

Good migration from Newtonsoft.Json to System.Text.Json, which aligns with modern .NET practices.


8-21: LGTM! Added default initializations for string properties

The addition of default initializations for string properties and the Side property ensures properties are properly initialized even if not explicitly set, which helps prevent null reference exceptions.


24-44: LGTM! Constructor parameter list refactored for better readability

The parameter list in the constructor has been nicely reformatted for improved readability.

src/Phoenix.MarketData.Core/Validation/ServiceValidationDecorator.cs (2)

6-11: LGTM! Well-documented abstract decorator class

The class is well-documented with XML comments explaining its purpose, which is good practice for shared infrastructure components.


12-18: LGTM! Proper null check in constructor

The constructor correctly implements a null check for the decorated service, which is a good defensive programming practice.

src/Phoenix.MarketData.Core/Validation/ValidationPipelineBehaviour.cs (2)

8-16: Good use of MediatR pipeline for validation

The implementation injects FluentValidation validators via constructor to perform validation before request processing, which is a clean approach for cross-cutting concerns.


41-42: Ensure validation doesn't block async operations

The await next() call is properly awaited to ensure validation is performed before proceeding with request handling.
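For readers unfamiliar with the pattern, here is a generic sketch of a FluentValidation-backed MediatR behaviour. It is not the repository's exact code; implicit usings are assumed for the collection and threading types.

using FluentValidation;
using FluentValidation.Results;
using MediatR;

public sealed class ValidationBehaviourSketch<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse>
    where TRequest : notnull
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    public ValidationBehaviourSketch(IEnumerable<IValidator<TRequest>> validators)
        => _validators = validators;

    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        var context = new ValidationContext<TRequest>(request);
        var failures = new List<ValidationFailure>();

        // Run every registered validator for this request type.
        foreach (var validator in _validators)
        {
            var result = await validator.ValidateAsync(context, cancellationToken);
            failures.AddRange(result.Errors);
        }

        if (failures.Count > 0)
            throw new ValidationException(failures);

        // Validation passed: continue to the request handler.
        return await next();
    }
}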

tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/FxSpotPriceDataMapperTests.cs (3)

14-44: Good test for DTO mapping with case preservation

The test correctly verifies that the original casing is preserved through DisplayAssetId when mapping to DTOs, which is important for maintaining data integrity.


46-54: Proper null input validation

The test verifies that null inputs are rejected with appropriate exceptions, which is critical for robust error handling.


56-89: Thorough domain mapping verification

The test carefully validates both the preservation of original casing in DisplayAssetId and normalization (lowercase) in AssetId, ensuring proper bidirectional mapping behavior.

tests/Phoenix.MarketData.Application.Tests/Services/MarketDataServiceTests.cs (4)

16-36: Good constructor validation tests

The tests correctly verify that null dependencies are rejected, enforcing proper dependency injection.


38-59: Repository interaction verification

The test properly verifies that the service delegates to the repository and returns the same entity without modification.


61-111: Thorough validation-before-save testing

Tests comprehensively verify both the success and failure paths for entity validation, ensuring that invalid entities don't reach the repository and that valid entities are properly saved.


113-128: Clean test entity implementation

The helper class provides all necessary properties with sensible defaults, making test setup concise and readable.

tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs (3)

14-51: Good test coverage for crypto ordinal DTO mapping

The test comprehensively verifies that all domain properties are correctly mapped to the DTO, with special attention to case preservation via DisplayAssetId.


53-61: Proper null handling verification

Test correctly ensures that the mapper rejects null inputs with appropriate exceptions.


63-103: Thorough domain mapping verification

Test carefully validates both the preservation of original casing in DisplayAssetId and normalization in AssetId, ensuring proper bidirectional mapping.

src/Phoenix.MarketData.Infrastructure/Serialization/JsonConverters/DateOnlyJsonConverter.cs (4)

1-4: Good migration to System.Text.Json!

The implementation correctly migrates from Newtonsoft.Json to System.Text.Json, which is recommended for newer .NET applications due to better performance and modern API design.


10-10: Documentation improvement - Good!

Clear documentation on the format used for serialization helps developers understand how the DateOnly values will be represented in JSON.


16-23: Well-implemented Read method with proper error handling

The implementation correctly validates input and uses proper culture settings for consistent date parsing. The explicit error message on null/empty values is a good practice.


25-27: Consistent implementation with TimeOnlyJsonConverter

The Write method follows the same pattern as the TimeOnlyJsonConverter, ensuring consistency across date/time serialization in the codebase.

src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryExtensions.cs (2)

13-15: Extension methods approach is a good design choice

Using extension methods for additional repository functionality allows for clean separation of concerns while extending the base repository capabilities.


16-36:

Details

✅ Verification successful

Consider safer handling of ISoftDeletable casting

The current implementation might throw a NullReferenceException at runtime if an entity doesn't properly implement ISoftDeletable despite the type check.

Consider refactoring to this safer approach:

-                if (typeof(ISoftDeletable).IsAssignableFrom(typeof(T)) && !includeSoftDeleted)
-                    results.AddRange(response.Where(e => !(e as ISoftDeletable)!.IsDeleted));
+                if (typeof(ISoftDeletable).IsAssignableFrom(typeof(T)) && !includeSoftDeleted)
+                    results.AddRange(response.Where(e => e is ISoftDeletable sd && !sd.IsDeleted));

Let's verify this is a real concern:


🏁 Script executed:

#!/bin/bash
# Check if there are entities that inherit from IMarketDataEntity but not ISoftDeletable
rg -g "*.cs" "class\s+\w+\s*:\s*\w*IMarketDataEntity" --no-filename
rg -g "*.cs" "ISoftDeletable" --no-filename

Length of output: 1128


No runtime NRE risk for ISoftDeletable cast

The typeof(ISoftDeletable).IsAssignableFrom(typeof(T)) guard ensures e as ISoftDeletable will never be null, so the current code cannot throw a NullReferenceException. No changes are required.

src/Phoenix.MarketData.Functions/SaveDocumentToDb.cs (2)

18-25: Good dependency injection pattern

The constructor properly validates dependencies using null checks and assigns them to instance fields.


63-71:

Details

❓ Verification inconclusive

Consider streaming for large request bodies

The current implementation reads the entire request body into memory, which could be problematic for large payloads.

For large payloads, consider using streaming deserialization. We should verify if this is a concern:


🏁 Script executed:

#!/bin/bash
# Check for typical sizes of market data payloads or find references to large data handling
rg "MaxRequestBodySize" -A 3 -B 3

Length of output: 33


Consider streaming for large request bodies

The code at src/Phoenix.MarketData.Functions/SaveDocumentToDb.cs (lines 63–71) reads the entire HTTP request into memory:

string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
// …schema validation…

This can lead to high memory usage or OOM errors if payloads grow large. I didn’t find any MaxRequestBodySize or similar settings in the repo, so please verify your typical data sizes. If you expect very large JSON documents, consider:

  • Using a streaming JSON API (e.g. System.Text.Json.Utf8JsonReader)
  • Reading and validating chunks of the stream rather than buffering the full body
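As a hedged sketch of the first option above: the DTO type and variable names are assumptions, and both the in-process HttpRequest.Body and the isolated-worker HttpRequestData.Body expose a plain Stream that can be deserialized directly.

using System.Text.Json;

// Deserialize straight from the request stream instead of buffering the whole
// body into a string first.
var options = new JsonSerializerOptions(JsonSerializerDefaults.Web);

var dto = await JsonSerializer.DeserializeAsync<BaseMarketDataDto>(
    req.Body, options, cancellationToken);

if (dto is null)
{
    // Handle an empty or invalid JSON body here (e.g. return HTTP 400).
}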
src/Phoenix.MarketData.Domain/Models/Interfaces/IMarketDataEntity.cs (1)

3-6: Good interface segregation

Defining a minimal IEntity interface following the Interface Segregation Principle is a good practice, as it allows for more flexibility when implementing and composing entity types.

src/Phoenix.MarketData.Infrastructure/Validation/CompositeValidator.cs (1)

61-62:

Details

✅ Verification successful

Verify the default success behavior is appropriate.

Returning a successful validation result when no validator is registered might mask validation issues. Consider if a warning or validation failure would be more appropriate when no validator is found for a type, especially in strict validation scenarios.


🏁 Script executed:

#!/bin/bash
# Search for places where CompositeValidator is used to understand the context
# and determine if default success is appropriate

rg -A 5 "CompositeValidator" --glob "*.cs"

Length of output: 7983


🏁 Script executed:

#!/bin/bash
cat tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CompositeValidatorTests.cs

Length of output: 10974


Default success behavior is by design and covered by tests

The CompositeValidator explicitly returns ValidationResult.Success() when no matching validator is registered, and this behavior is exercised in:

  • ValidateAsync_WithNoRegisteredValidator_ReturnsSuccess
  • ValidateAsync_WithBaseEntityAndDerivedValidator_ReturnsSuccess

Since the default‐to‐success semantics are intentional and well-tested, no change is required here.

src/Phoenix.MarketData.Infrastructure/Schemas/JsonSchemaValidator.cs (1)

33-38: Consider handling potential stack overflow in deeply nested schemas.

The recursive approach in GetAllMessages could potentially cause a stack overflow for extremely complex schemas with deep nesting. This is unlikely in most scenarios but worth considering for robustness.

Are you dealing with deeply nested schemas that could cause recursive calls to exceed the stack limit? If so, you might want to implement an iterative approach using a stack data structure instead of recursion.
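If it ever becomes a concern, an explicit stack avoids deep call chains. The node shape below (Errors/Children) is a placeholder standing in for the library's nested result type, not the JsonSchema.Net API.

// Placeholder node type for illustration only.
public sealed record ValidationNodeSketch(
    IReadOnlyList<string> Errors,
    IReadOnlyList<ValidationNodeSketch> Children);

public static class SchemaMessageWalker
{
    public static IReadOnlyList<string> CollectMessages(ValidationNodeSketch root)
    {
        var messages = new List<string>();
        var stack = new Stack<ValidationNodeSketch>();
        stack.Push(root);

        // Depth-first walk without recursion.
        while (stack.Count > 0)
        {
            var node = stack.Pop();
            messages.AddRange(node.Errors);

            foreach (var child in node.Children)
                stack.Push(child);
        }

        return messages;
    }
}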

src/Phoenix.MarketData.Infrastructure/Configuration/MarketDataSecretProvider.cs (2)

1-6: LGTM! Clear and standard imports.

The imports are appropriately selected for Azure Key Vault integration and implementing the secret provider interface.


9-12: Class structure follows good design principles.

The class properly implements the interface and has appropriate private fields for the secret client and cache.

src/Phoenix.MarketData.Core/Repositories/IRepository.cs (1)

8-20: LGTM! Repository interface follows SOLID principles.

The base repository interface is well-designed with standard CRUD operations. All methods properly return Task for asynchronous operations.

src/Phoenix.MarketData.Core/Validation/ValidationError.cs (2)

13-16: LGTM! Clear and concise ToString implementation.

The ToString method provides a clean format for displaying validation errors, properly handling the case when PropertyName is null or empty.


23-69: LGTM! Well-documented constructors with appropriate null handling.

The constructors are well-documented and handle null values appropriately by falling back to empty strings for required fields.

tests/Phoenix.MarketData.Infrastructure.Tests/Validation/FxSpotPriceDataValidatorTests.cs (2)

15-39: LGTM! Thorough test for the valid data scenario.

The test properly arranges a valid entity, performs validation, and asserts that the result is valid with no errors.


41-50: LGTM! Good test for null input handling.

The test correctly verifies that the validator throws an ArgumentNullException when a null entity is provided.

src/Phoenix.MarketData.Infrastructure/Serialization/BaseMarketDataDto.cs (4)

1-1: Successful migration to System.Text.Json

Good job migrating from Newtonsoft.Json to System.Text.Json. This aligns with modern .NET practices and should provide better performance.


8-33: String initialization is properly handled

Good practice initializing string properties with string.Empty instead of letting them default to null. This helps prevent NullReferenceExceptions and makes the code more robust.


29-30: Collection initialization best practice applied

Good use of the target-typed new() initializer for the Tags property. This ensures the property is never null and follows best practices for collection properties.


72-72: Good use of C# collection expression

Using the C# 12 collection expression syntax [] for creating an empty list is a nice modernization of the codebase.

src/Phoenix.MarketData.Infrastructure/Serialization/JsonConverters/TimeOnlyJsonConverter.cs (3)

1-5: Successfully migrated to System.Text.Json

Good job migrating from Newtonsoft.Json to System.Text.Json, removing unused imports, and only including necessary namespaces.


15-27: Improved error handling in JSON deserialization

The error handling has been improved with clearer error messages in the exception. The type check logic is sound and follows best practices for JSON converters.


29-33: Correct implementation of Write method

The implementation correctly serializes TimeOnly values using the specified format and invariant culture, matching the pattern used in DateOnlyJsonConverter.

src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs (1)

18-42: Well-configured retry policy

The retry policy implementation with Polly is excellent. The exponential backoff strategy with proper exception filtering and detailed logging will make the system resilient against transient failures.

src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryEvents.cs (1)

34-45: Consider making EntityDeletedEvent generic

The EntityDeletedEvent class is not generic, unlike the other event classes. For consistency, consider making it generic as well, or explain why it's different.

Is there a specific reason why EntityDeletedEvent doesn't follow the generic pattern of the other event classes? If not, consider making it generic for consistency:

public class EntityDeletedEvent<T>
{
    public string Id { get; }
    public DateTimeOffset Timestamp { get; }
    public EntityDeletedEvent(string id)
    {
        Id = id ?? throw new ArgumentNullException(nameof(id));
        Timestamp = DateTimeOffset.UtcNow;
    }
}

If the current design is intentional, please clarify with a comment why this event differs from the others.

src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs (1)

40-56:

Details

❌ Incorrect review comment

Id is not transferred from DTO → Domain

The Id generated upstream is lost, potentially breaking idempotency and audit trails.

             var domain = new FxSpotPriceData
             {
+                Id = dto.Id,
                 SchemaVersion = dto.SchemaVersion,
                 Version = dto.Version,
@@
                 Tags = dto.Tags?.ToList() ?? new List<string>(),
                 Side = ConvertToDomainSide(dto.Side)
             };

Validate that FxSpotPriceData.Id is mutable; if it should remain immutable, consider adding a constructor parameter instead.


🏁 Script executed:

#!/bin/bash
# Locate and inspect the FxSpotPriceData domain model to verify if an Id property exists and is mutable
rg -l "class FxSpotPriceData" | while read file; do
  echo "Found domain class at: $file"
  echo "----- Class Definition (first 200 lines) -----"
  sed -n '1,200p' "$file"
  echo
  echo "----- 'Id' Property Occurrences -----"
  grep -n "Id" "$file" || echo "No 'Id' occurrences found in this file."
  echo
done

Length of output: 21203


🏁 Script executed:

#!/bin/bash
# Locate and inspect BaseMarketData to see if Id is declared and mutable
rg -l "class BaseMarketData" | while read file; do
  echo "Found BaseMarketData at: $file"
  echo "----- Class Definition (first 200 lines) -----"
  sed -n '1,200p' "$file"
  echo
  echo "----- 'Id' Property Details -----"
  grep -n "Id" "$file" || echo "No 'Id' occurrences found."
  echo
done

Length of output: 10453


Incorrect: Id Is Computed, Not Lost or Mutable

The Id on FxSpotPriceData comes from the read-only BaseMarketData.Id property, which recalculates itself via CalculateId() based on the same fields you’re mapping (DataType, AssetClass, AssetId, Region, AsOfDate, DocumentType, and Version). You cannot—and should not—assign to domain.Id. Since your mapper sets all of those underlying properties to the values from the DTO (including Version), the computed domain.Id will exactly match dto.Id.

– No changes required in FxSpotPriceDataMapper.ToDomain.
– The absence of an Id = dto.Id assignment is intentional and correct.

Likely an incorrect or invalid review comment.

src/Phoenix.MarketData.Core/Validation/ValidationResult.cs (3)

1-27: Well-structured validation result class with proper encapsulation.

The ValidationResult class provides a clean interface for representing validation outcomes with a collection of errors. The constructor properly handles null inputs and initializes the error collection.


29-65: Good factory methods for creating success and failure results.

The static factory methods provide a convenient way to create validation results for different scenarios, making the API more usable and expressive.


66-74: Clear string representation methods for validation results.

Both the ToString() override and the ErrorMessage property provide useful string representations of the validation result, making it easier to display validation outcomes to users or in logs.

src/Phoenix.MarketData.Infrastructure/Serialization/FxSpotPriceDataDto.cs (3)

1-3: Proper migration from Newtonsoft.Json to System.Text.Json.

The import statements have been correctly updated to use System.Text.Json.Serialization instead of Newtonsoft.Json.


8-12: Property attributes correctly updated for System.Text.Json.

The properties have been correctly decorated with [JsonPropertyName] attributes for System.Text.Json instead of [JsonProperty] for Newtonsoft.Json.


14-40: JsonConstructor attribute properly applied for System.Text.Json.

The constructor is correctly annotated with [JsonConstructor] and maintains the mapping logic from domain to DTO enums.

docs/architecture-implementation-guide.md (4)

1-25: Well-structured architecture guide with clear organization.

The introduction and overview sections provide a good foundation for understanding the architectural changes. The event-driven architecture section clearly defines the key components and their responsibilities.


55-82: Clear guidelines for domain layering and validation.

The documentation clearly explains the separation between domain models, DTOs, and DB entities, as well as how validation is implemented as a cross-cutting concern.


83-105: Comprehensive security guidelines with best practices.

The security implementation guidelines provide clear direction on handling secrets securely, which is a critical aspect of any production system.


106-148: Well-defined migration strategy and additional considerations.

The migration strategy provides a clear path for incremental adoption of the new architecture, and the additional considerations for testing, observability, and performance are valuable.

tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CompositeValidatorTests.cs (8)

15-30: Well-structured test entity classes for validation testing.

The test entity classes provide a good foundation for testing the composite validator with inheritance hierarchies, simulating real-world scenarios.


38-62: Comprehensive test for successful validation with registered validator.

This test correctly verifies that the composite validator delegates to the appropriate registered validator and returns the expected result.


64-88: Good coverage of edge cases for validation.

The tests for no registered validator and null entity inputs ensure the composite validator handles these scenarios correctly.


90-118: Ensures base entities don't trigger derived-type validators.

This test verifies that the composite validator doesn't inappropriately use a validator registered for a derived type when validating a base type entity.


120-147: Properly tests validation failure scenarios.

The test verifies that validation failures are correctly propagated from the validator to the caller.


149-191: Comprehensive test for multiple validators.

This test ensures that the composite validator correctly routes validation calls to the appropriate validator based on the entity type.


193-235: Ensures most specific validator is chosen.

This test correctly verifies that when multiple validators are registered for a hierarchy of types, the most specific one is chosen.


237-259: Properly tests exception propagation.

This test ensures that exceptions thrown by validators are properly propagated to the caller without being swallowed or transformed.

src/Phoenix.MarketData.Core/Events/IntegrationEvents.cs/MarketDataEvents.cs (3)

6-17: The interface design looks good!

The IMarketDataIntegrationEvent interface provides a clean contract with all essential properties for market data events.


19-56: Well-designed base class with proper initialization

Good use of the required modifier and init setters to enforce initialization at object creation time. The [SetsRequiredMembers] attribute on the constructor properly signals that it initializes all required properties.


85-110: LGTM: Event class follows the same pattern as previous

The MarketDataChangedIntegrationEvent follows the same pattern as the Created event, maintaining consistency.

src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs (3)

11-15: Clean interface design with proper cancellation support

The IMarketDataEventProcessor interface is well-defined with methods for processing different event types and proper cancellation token support.


22-28: Good constructor implementation with null checks

The constructor properly validates dependencies using null checks, which prevents potential null reference exceptions.


30-62: Placeholder implementation with good error handling

The ProcessCreatedEventAsync method includes structured logging and proper error handling, though it's mostly placeholder code with example implementation comments.

src/Phoenix.MarketData.Application/Services/MarketDataService.cs (1)

12-22: Well-designed service interface with proper generics constraint

The IMarketDataService<T> interface has a good design with appropriate generic constraints and comprehensive methods for managing market data.

src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs (3)

10-10: Good implementation of the mapper interface

The class properly implements the IMarketDataMapper interface, providing strong typing for the mapping operations.


16-36: Good null handling with proper default values

The ToDto method handles null values appropriately by providing empty string defaults for potentially null properties.


71-83: Well-implemented side conversion with null handling

The ConvertToDomainSide method properly handles null values and provides default handling for unrecognized enum values.

src/Phoenix.MarketData.Domain/Models/FxSpotPriceData.cs (2)

3-7: Class relocated to Core namespace as part of architectural refactoring

The class has been moved from Phoenix.MarketData.Domain.Models to Phoenix.MarketData.Core.Models, which aligns with the architectural refactoring described in the PR summary. This is a good change that centralizes core domain entities.


10-10: Good addition of Currency property with sensible default

Adding the Currency property with a default value of USD is a useful enhancement that improves the model's expressiveness while maintaining backward compatibility through the default value.

src/Phoenix.MarketData.Domain/Models/BaseMarketData.cs (2)

59-68: Good addition of DisplayAssetId for preserving case

The addition of the DisplayAssetId property is excellent for maintaining the original casing for UI/reporting purposes while using the lowercase version for ID calculation and matching. The getter implementation with fallback to _assetId is also well-designed.


149-171: Well-implemented ID calculation with validation

The CalculateId method is well-implemented with appropriate validation of required fields before attempting to construct the ID. The format of the ID using double underscores as separators is clear and the inclusion of the Version at the end when available is appropriate.
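To make the separator format concrete, here is a hedged sketch of the ID shape: the double underscores come from the description above, while the field order and the version's type are assumptions based on the fields CalculateId validates.

// Illustrative only; not the exact implementation.
static string BuildIdSketch(string dataType, string assetClass, string assetId,
    string region, DateOnly asOfDate, string documentType, int? version)
{
    var id = string.Join("__",
        dataType, assetClass, assetId, region,
        asOfDate.ToString("yyyy-MM-dd"), documentType);

    // Version is appended only when it is available.
    return version.HasValue ? $"{id}__{version.Value}" : id;
}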

     /// crucial for determining trading rates and market conditions.
     /// </summary>
-    public required decimal Price { get; set; }
+    public decimal Price { get; set; }

🛠️ Refactor suggestion

Consider making Price property required

The Price property has been simplified but is no longer marked as required. For a price data object, this property should likely be required to ensure data validity.

-    public decimal Price { get; set; }
+    public required decimal Price { get; set; }
🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Domain/Models/FxSpotPriceData.cs at line 7, the Price
property is currently not marked as required, which may allow invalid data.
Update the Price property with the C# required modifier so a value must be supplied
at initialization, ensuring data validity for price information.

@JustAGhosT JustAGhosT requested a review from mareeben May 21, 2025 22:47
@JustAGhosT JustAGhosT self-assigned this May 21, 2025

@korbit-ai korbit-ai bot left a comment


Review by Korbit AI

Korbit automatically attempts to detect when you fix issues in new commits.
Category Issue Status
Documentation Unclear purpose in class documentation ▹ view
Functionality Default Currency May Mask Missing Data ▹ view
Functionality Missing ValidationResult Type Import ▹ view
Functionality Overly Strict Time Format Validation ▹ view ✅ Fix detected
Readability Missing Required Modifier on Essential Property ▹ view
Error Handling Silent Success on Missing Validator ▹ view
Documentation Interface documentation lacks purpose and context ▹ view
Documentation Parameter documentation lacks constraints ▹ view
Performance Inefficient timestamp creation in high-volume events ▹ view
Design Non-injectable timestamp prevents deterministic testing ▹ view
Files scanned

Due to the exceptionally large size of this PR, I've limited my review to the following files:

File Path Reviewed
src/Phoenix.MarketData.Domain/Models/PriceSide.cs
src/Phoenix.MarketData.Domain/Models/Regions.cs
src/Phoenix.MarketData.Domain/Events/IMarketDataEventPublisher.cs
src/Phoenix.MarketData.Infrastructure/Mapping/IMarketDataMapper.cs
src/Phoenix.MarketData.Domain/Models/AssetClass.cs
src/Phoenix.MarketData.Core/Domain/MarketDataCreatedEvent.cs
src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs
src/Phoenix.MarketData.Core/Domain/MarketDataChangedEvent.cs
src/Phoenix.MarketData.Core/Validation/FluentValidationExtensions.cs
src/Phoenix.MarketData.Core/Validation/IValidator.cs
src/Phoenix.MarketData.Domain/Services/IMarketDataService.cs
src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs
src/Phoenix.MarketData.Infrastructure/Mapping/BaseMarketDataMapper.cs
src/Phoenix.MarketData.Domain/Models/Interfaces/IMarketDataEntity.cs
src/Phoenix.MarketData.Domain/Repositories/IRepository.cs
src/Phoenix.MarketData.Domain/Models/FxSpotPriceData.cs
src/Phoenix.MarketData.Core/Configuration/IMarketDataSecretProvider.cs
src/Phoenix.MarketData.Infrastructure/Repositories/Interfaces.cs
src/Phoenix.MarketData.Domain/Models/FxVolSurfaceData.cs
src/Phoenix.MarketData.Core/Mapping/IModelMapper.cs
src/Phoenix.MarketData.Functions/JsonConverters/JsonConverters.cs
src/Phoenix.MarketData.Domain/Models/CryptoOrdinalSpotPriceData.cs
src/Phoenix.MarketData.Core/Validation/ValidationException.cs
src/Phoenix.MarketData.Domain/Constants/CurrencyCodes.cs
src/Phoenix.MarketData.Core/Events/IEventPublisher.cs
src/Phoenix.MarketData.Core/Events/IMarketDataEventSubscriber.cs
src/Phoenix.MarketData.Infrastructure/Repositories/VersioningExtensions.cs
src/Phoenix.MarketData.Core/Validation/ValidationPipelineBehaviour.cs
src/Phoenix.MarketData.Core/Repositories/IRepository.cs
src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryEvents.cs
src/Phoenix.MarketData.Infrastructure/Serialization/JsonConverters/DateOnlyJsonConverter.cs
src/Phoenix.MarketData.Infrastructure/Serialization/FxSpotPriceDataDto.cs
src/Phoenix.MarketData.Core/Validation/ServiceValidationDecorator.cs
src/Phoenix.MarketData.Infrastructure/Validation/CryptoOrdinalSpotPriceDataValidator.cs
src/Phoenix.MarketData.Infrastructure/Validation/FxSpotPriceDataValidator.cs
src/Phoenix.MarketData.Core/Validation/SchemaValidationExtensions.cs
src/Phoenix.MarketData.Infrastructure/Serialization/JsonConverters/TimeOnlyJsonConverter.cs
src/Phoenix.MarketData.Application/Services/Decorators/ValidationMarketDataServiceDecorator.cs
src/Phoenix.MarketData.Infrastructure/Schemas/JsonSchemaValidator.cs
src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryExtensions.cs
src/Phoenix.MarketData.Infrastructure/Configuration/MarketDataSecretProvider.cs
src/Phoenix.MarketData.Core/Validation/ValidationError.cs
src/Phoenix.MarketData.Infrastructure/Validation/CompositeValidator.cs
src/Phoenix.MarketData.Core/Validation/ValidationDecorator.cs
src/Phoenix.MarketData.Infrastructure/Serialization/CryptoOrdinalSpotPriceDataDto.cs
src/Phoenix.MarketData.Infrastructure/Serialization/BaseMarketDataDto.cs
src/Phoenix.MarketData.Application/Services/MarketDataService.cs
src/Phoenix.MarketData.Core/Validation/ValidationResult.cs
src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs
src/Phoenix.MarketData.Core/Events/IntegrationEvents.cs/MarketDataEvents.cs
src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs
src/Phoenix.MarketData.Functions/Program.cs
src/Phoenix.MarketData.Functions/SaveDocumentToDb.cs
src/Phoenix.MarketData.Domain/Models/BaseMarketData.cs
src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs
src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs
src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs
src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepository.cs


     public PriceSide Side { get; set; } = PriceSide.Mid;
 }
 }
\ No newline at end of file
+    public string Currency { get; set; } = CurrencyCodes.USD;

Default Currency May Mask Missing Data (category: Functionality)

What is the issue?

Currency property with a default value of USD may mask missing currency specifications in FX spot price data.

Why this matters

In an FX spot price context, having a default currency could lead to incorrect market data if the actual currency pair is not explicitly set.

Suggested change ∙ Feature Preview

Remove the default value and mark the property as required:

public required string Currency { get; set; }

     /// </summary>
-    public required decimal Price { get; set; }
+    public decimal Price { get; set; }
     public PriceSide Side { get; set; } = PriceSide.Mid;

Missing Required Modifier on Essential Property (category: Readability)

What is the issue?

The Price property lacks a 'required' modifier, despite being a crucial property for FX spot price data.

Why this matters

Without the required modifier, the property could be left uninitialized, making the object state ambiguous and harder to understand the expected usage.

Suggested change ∙ Feature Preview
public required decimal Price { get; set; }

Comment on lines +5 to +7
/// <summary>
/// No-operation implementation of ISecretCache for when caching isn't required
/// </summary>

Unclear purpose in class documentation (category: Documentation)

What is the issue?

The documentation only describes what the class is (a no-op implementation) but not why one would choose to use it.

Why this matters

Without understanding the use cases for this implementation, developers might misuse it or recreate similar functionality elsewhere.

Suggested change ∙ Feature Preview

/// <summary>
/// Implementation of ISecretCache that bypasses caching, useful for development environments
/// or scenarios where secret values should always be retrieved from their original source.
/// </summary>


Comment on lines +61 to +62
// If no specific validator is found, return success
return ValidationResult.Success();

Silent Success on Missing Validator (category: Error Handling)

What is the issue?

The validator silently succeeds when no specific validator is found for an entity type, which could lead to invalid data passing through without validation.

Why this matters

This default behavior undermines the validation system's purpose and could allow potentially invalid market data to propagate through the system undetected.

Suggested change ∙ Feature Preview

Either throw an exception when no validator is found or log a warning and make it explicit in the code that this is intentional. Example:

if (_validators.TryGetValue(entityType, out var wrapper))
{
    return await wrapper.ValidateAsync(entity, cancellationToken);
}

throw new InvalidOperationException($"No validator registered for entity type {entityType.Name}");
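
If throwing is considered too strict for this pipeline, the log-and-continue alternative mentioned above could look roughly like this (a sketch only, assuming an injected ILogger field named _logger is available in the composite validator):

if (_validators.TryGetValue(entityType, out var wrapper))
{
    return await wrapper.ValidateAsync(entity, cancellationToken);
}

// Deliberate pass-through: no validator is registered for this entity type.
_logger.LogWarning("No validator registered for entity type {EntityType}; skipping validation.", entityType.Name);
return ValidationResult.Success();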


/// </summary>
/// <param name="entity">The entity to validate</param>
/// <returns>A validation result indicating success or failure with error details</returns>
Task<ValidationResult> ValidateAsync(T entity, CancellationToken cancellationToken = default);

Missing ValidationResult Type Import (category: Functionality)

What is the issue?

The ValidationResult type is used in the interface but not imported with a using statement, which will cause compilation errors.

Why this matters

Without the proper type import, code using this interface will fail to compile, breaking the core validation functionality of the platform.

Suggested change

Add the required using statement at the top of the file:

using System.Threading.Tasks;
using Phoenix.MarketData.Core.Validation.Models; // Assuming ValidationResult is in this namespace


Comment on lines +5 to +7
/// <summary>
/// Interface for validators that can validate entities
/// </summary>

Interface documentation lacks purpose and context (category: Documentation)

What is the issue?

The interface documentation restates what is obvious from the name without explaining the purpose or usage context.

Why this matters

Without understanding why this validator interface exists or when to use it, developers may create redundant validation solutions or misuse the interface.

Suggested change

/// <summary>
/// Provides asynchronous validation capabilities for domain entities, ensuring they meet business rules and data constraints
/// before persistence or processing.
/// </summary>



/// <summary>
/// Validates an entity and returns the validation result
/// </summary>
/// <param name="entity">The entity to validate</param>

Parameter documentation lacks constraints (category: Documentation)

What is the issue?

The parameter documentation restates the obvious without providing value about constraints or requirements.

Why this matters

Developers need to understand any special requirements or constraints for the entity parameter to properly implement the interface.

Suggested change

/// <param name="entity">The entity to validate. Must not be null.</param>



{
throw new JsonException("Cannot convert null or empty string to TimeOnly.");

if (TimeOnly.TryParseExact(str, Format, CultureInfo.InvariantCulture, DateTimeStyles.None, out var result))

This comment was marked as resolved.

public class MarketDataCreatedEvent<T> where T : IMarketDataEntity
{
public T Data { get; }
public DateTimeOffset OccurredAt { get; }

Non-injectable timestamp prevents deterministic testing (category: Design)

What is the issue?

The OccurredAt property is immutable but not initialized in the constructor's parameter list, making it impossible to set a specific timestamp for testing or historical event recreation.

Why this matters

This design choice makes it difficult to perform deterministic testing and impossible to recreate historical events with their original timestamps, which is crucial for event sourcing and audit trails.

Suggested change

Modify the constructor to accept an optional timestamp parameter:

public MarketDataCreatedEvent(T data, DateTimeOffset? occurredAt = null)
{
    Data = data ?? throw new ArgumentNullException(nameof(data));
    OccurredAt = occurredAt ?? DateTimeOffset.UtcNow;
}


public MarketDataCreatedEvent(T data)
{
Data = data ?? throw new ArgumentNullException(nameof(data));
OccurredAt = DateTimeOffset.UtcNow;

Inefficient timestamp creation in high-volume events (category: Performance)

What is the issue?

DateTimeOffset.UtcNow creates a new instance on every event creation, making timestamp tracking unnecessarily expensive at high event volumes.

Why this matters

In high-throughput event systems, creating new DateTimeOffset instances for each event can create memory pressure and impact GC performance. Events can potentially be created thousands of times per second.

Suggested change

Consider injecting the timestamp as a constructor parameter, allowing batch creation of events to share a single timestamp or use a more efficient time source:

public MarketDataCreatedEvent(T data, DateTimeOffset? occurredAt = null)
{
    Data = data ?? throw new ArgumentNullException(nameof(data));
    OccurredAt = occurredAt ?? DateTimeOffset.UtcNow;
}
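
With that constructor in place, a batch producer could stamp a whole set of events with one shared value, for example (a sketch only; the items collection and the entity type are placeholders, not part of the PR):

var batchTimestamp = DateTimeOffset.UtcNow;
var events = items
    .Select(item => new MarketDataCreatedEvent<FxSpotPriceData>(item, batchTimestamp))
    .ToList();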


… query capabilities

Implement partition key resolution for better performance, add paged queries with continuation token support, and improve error handling. Remove unnecessary System.Text.Json package reference and add comprehensive query methods for more efficient data access patterns.
…ails

feat: add table of contents, code samples, and implementation patterns to AI guidelines
chore: improve dev tooling with updated package.json, husky config, and GitHub workflow
docs: add architecture diagrams and implementation examples to documentation
JustAGhosT and others added 8 commits May 23, 2025 00:41
…e-report.sh

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…e-report.sh

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…e-report.sh

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
.github/workflows/update-guidelines-timestamp.yml (1)

19-19: Update GitHub Actions Node.js setup action.

The current step uses an outdated actions/setup-node@v3 runner which may not be supported long-term. Update to the latest stable major version (e.g., actions/setup-node@v4) for improved compatibility and security.

-        uses: actions/setup-node@v3
+        uses: actions/setup-node@v4
🧰 Tools
🪛 actionlint (1.7.7)

19-19: the runner of "actions/setup-node@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)

src/Phoenix.MarketData.Core/Repositories/IRepository.cs (1)

95-95: Consider returning T? from UpdateAsync to handle non-existent entities

The UpdateAsync method currently returns T, but it should return T? to handle cases where the entity to be updated doesn't exist. This was highlighted in a previous review comment.

        /// <param name="entity">The entity to update</param>
        /// <param name="cancellationToken">Optional cancellation token</param>
-       /// <returns>The updated entity</returns>
+       /// <returns>The updated entity if found; otherwise null</returns>
-       Task<T> UpdateAsync(T entity, CancellationToken cancellationToken = default);
+       Task<T?> UpdateAsync(T entity, CancellationToken cancellationToken = default);
tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh (2)

14-15: Handle directory change failure.

Add explicit error handling to the cd command to avoid silent failures (ShellCheck SC2164):

- cd "$SCRIPT_DIR"
+ cd "$SCRIPT_DIR" || { echo "Error: Failed to change to project directory"; exit 1; }
🧰 Tools
🪛 Shellcheck (0.10.0)

[warning] 15-15: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.

(SC2164)


31-34: 🛠️ Refactor suggestion

Fix recursive coverage file discovery.

Bash’s ** globbing isn’t enabled by default, so the pattern may not expand and no files will be found. Use find for reliable discovery and semicolon-separated input to ReportGenerator:

- COVERAGE_PATTERN="$TEST_RESULTS_DIR/**/coverage.cobertura.xml"
-
- reportgenerator "-reports:$COVERAGE_PATTERN" "-targetdir:$REPORT_DIR" -reporttypes:Html
+ coverage_files=$(find "$TEST_RESULTS_DIR" -type f -name "coverage.cobertura.xml" | paste -sd ";" -)
+ reportgenerator "-reports:$coverage_files" "-targetdir:$REPORT_DIR" -reporttypes:Html
🧹 Nitpick comments (7)
.github/workflows/update-guidelines-timestamp.yml (2)

14-16: Use the default GitHub token instead of a personal access token.

You can rely on ${{ secrets.GITHUB_TOKEN }} built into GitHub Actions for checkout, avoiding the need to manage a separate secret and reducing maintenance overhead.

-      - uses: actions/checkout@v4
-        with:
-          token: ${{ secrets.PAT_TOKEN }}
+      - uses: actions/checkout@v4
+        with:
+          token: ${{ secrets.GITHUB_TOKEN }}

27-31: Improve shell conditional logic to avoid unexpected commits.

The pattern A && B || C is not a true if-then-else, so C can run in cases you did not intend (see ShellCheck SC2015). Refactor to an explicit if statement for clarity and to prevent unintended commits:

-        run: |
-          git config --global user.name 'GitHub Action'
-          git config --global user.email 'action@github.com'
-          git add .aiguidance/ai-guidelines.md
-          git diff --quiet && git diff --staged --quiet || (git commit -m "Update AI guidelines timestamp [skip ci]" && git push)
+        run: |
+          git config --global user.name 'GitHub Action'
+          git config --global user.email 'action@github.com'
+          git add .aiguidance/ai-guidelines.md
+          if ! git diff --quiet || ! git diff --staged --quiet; then
+            git commit -m "Update AI guidelines timestamp [skip ci]"
+            git push
+          fi
🧰 Tools
🪛 actionlint (1.7.7)

27-27: shellcheck reported issue in this script: SC2015:info:4:18: Note that A && B || C is not if-then-else. C may run when A is true

(shellcheck)

src/Phoenix.MarketData.Core/Repositories/IRepository.cs (3)

36-39: Move the validation remarks to be part of the XML documentation for GetPagedAsync

The remarks about throwing ArgumentOutOfRangeException for invalid pagination parameters are currently placed between XML documentation blocks. This creates confusion about which method the remarks apply to.

        /// <param name="pageSize">Number of items per page</param>
        /// <param name="cancellationToken">Optional cancellation token</param>
        /// <returns>A paged result containing the entities and pagination metadata</returns>
+       /// <remarks>
+       /// Implementations should throw <see cref="ArgumentOutOfRangeException"/> when
+       /// <paramref name="pageIndex"/> < 0 or <paramref name="pageSize"/> <= 0.
+       /// </remarks>
        Task<PagedResult<T>> GetPagedAsync(
            int pageIndex = 0,
            int pageSize = 50,
            CancellationToken cancellationToken = default);

-       /// <remarks>
-       /// Implementations should throw <see cref="ArgumentOutOfRangeException"/> when
-       /// <paramref name="pageIndex"/> < 0 or <paramref name="pageSize"/> <= 0.
-       /// </remarks>
        /// <summary>
        /// Gets all entities (use with caution on large datasets)
        /// </summary>

55-55: Add validation remarks to QueryPagedAsync method

The QueryPagedAsync method should have the same validation requirements as GetPagedAsync since both methods use pagination parameters.

        /// <param name="cancellationToken">Optional cancellation token</param>
        /// <returns>A paged result containing the filtered entities and pagination metadata</returns>
+       /// <remarks>
+       /// Implementations should throw <see cref="ArgumentOutOfRangeException"/> when
+       /// <paramref name="pageIndex"/> < 0 or <paramref name="pageSize"/> <= 0.
+       /// </remarks>
        Task<PagedResult<T>> QueryPagedAsync(Expression<Func<T, bool>> predicate, int pageIndex = 0, int pageSize = 50, CancellationToken cancellationToken = default);

168-168: Consider adding usage examples for ContinuationToken

The ContinuationToken property is a good addition for APIs that support continuation-based pagination, but it might be helpful to add a code example or more detailed documentation on how this should be used.

        /// <summary>
        /// Gets a token that can be used for continuation in APIs that support it
+       /// </summary>
+       /// <example>
+       /// When implementing paged API endpoints:
+       /// <code>
+       /// [HttpGet]
+       /// public async Task<ActionResult<PagedResult<MyEntity>>> GetPaged([FromQuery] int pageIndex = 0, [FromQuery] int pageSize = 50, [FromQuery] string continuationToken = null)
+       /// {
+       ///     var result = await _repository.GetPagedAsync(pageIndex, pageSize);
+       ///     Response.Headers.Add("X-Continuation-Token", result.ContinuationToken);
+       ///     return Ok(result);
+       /// }
+       /// </code>
+       /// </example>
-       /// </summary>
        public string? ContinuationToken { get; set; }
tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh (2)

1-2: Enable strict shell mode.

Consider adding set -euo pipefail immediately after the shebang to ensure the script exits on any error, undefined variable, or failed pipeline.

 #!/bin/bash
+set -euo pipefail

19-19: Consider using the custom runsettings file.

The repository provides a coverage.runsettings to configure filters/exclusions. To apply it, update the test invocation:

- dotnet test . --collect:"XPlat Code Coverage"
+ dotnet test . --settings coverage.runsettings --collect:"XPlat Code Coverage"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d9ce5db and f57b19f.

📒 Files selected for processing (6)
  • .github/workflows/update-guidelines-timestamp.yml (1 hunks)
  • scripts/update-guidelines.js (1 hunks)
  • src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Repositories/IRepository.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Repositories/IRepository.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs
  • scripts/update-guidelines.js
  • src/Phoenix.MarketData.Domain/Repositories/IRepository.cs
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/update-guidelines-timestamp.yml

19-19: the runner of "actions/setup-node@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


27-27: shellcheck reported issue in this script: SC2015:info:4:18: Note that A && B || C is not if-then-else. C may run when A is true

(shellcheck)

🪛 Shellcheck (0.10.0)
tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh

[warning] 15-15: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.

(SC2164)

🔇 Additional comments (4)
src/Phoenix.MarketData.Core/Repositories/IRepository.cs (4)

105-111: Good improvement on BulkInsertAsync return type

The BulkInsertAsync method now returns IEnumerable<T> with the inserted entities instead of just a boolean. This is a more useful return type that provides the caller with the actual inserted entities, including any generated values.
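
For reference, a sketch of the signature this implies (not necessarily the exact declaration in the PR):

Task<IEnumerable<T>> BulkInsertAsync(IEnumerable<T> entities, CancellationToken cancellationToken = default);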


24-34: Excellent pagination implementation

The pagination support has been well implemented with the GetPagedAsync method and the comprehensive PagedResult<T> class. This addresses a previous review comment and provides a solid foundation for handling large datasets efficiently.
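
As a usage illustration only (member names such as Items and TotalCount are assumptions, not confirmed by the diff):

var page = await repository.GetPagedAsync(pageIndex: 0, pageSize: 50, cancellationToken);
// Items, TotalCount, and TotalPages are assumed member names for illustration.
Console.WriteLine($"Loaded {page.Items.Count()} of {page.TotalCount} entities across {page.TotalPages} pages.");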


20-22: Proper cancellation token support

All async methods now include cancellation token support with default values, which is a best practice for potentially long-running operations. This correctly addresses a previous review comment.


153-153: Robust TotalPages calculation with zero protection

The TotalPages calculation correctly handles the case where PageSize is zero, preventing a potential division by zero exception.
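
A hedged sketch of the kind of guard being described (member names taken from this review; the actual implementation may differ slightly):

public int TotalPages => PageSize > 0
    ? (int)Math.Ceiling(TotalCount / (double)PageSize)
    : 0; // guard against division by zero when PageSize is 0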


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 18

♻️ Duplicate comments (20)
src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs (3)

8-8: Consider making this class sealed.

Since this class implements a specific no-op behavior and isn't designed for inheritance, marking it as sealed would better communicate its design intent and potentially allow for compiler optimizations.

-internal class NoOpSecretCache : ISecretCache
+internal sealed class NoOpSecretCache : ISecretCache

5-7: 🛠️ Refactor suggestion

Enhance class documentation with usage context.

The current documentation only describes what the class is, but not when or why it should be used. Improving the documentation would help developers understand appropriate usage scenarios.

 /// <summary>
-/// No-operation implementation of ISecretCache for when caching isn't required
+/// No-operation implementation of ISecretCache that bypasses caching functionality.
+/// Useful in development environments, testing scenarios, or when secrets should 
+/// always be retrieved from their original source rather than cached.
 /// </summary>

10-10: 🛠️ Refactor suggestion

Document null return behavior for GetSecret method.

Always returning null could violate the interface contract if callers expect previously cached values to be returned. Consider adding XML documentation to explicitly state this behavior.

+/// <summary>
+/// Always returns null regardless of the secret name provided, as this implementation doesn't cache values.
+/// </summary>
+/// <param name="secretName">The name of the secret to retrieve (ignored in this implementation)</param>
+/// <returns>Always returns null</returns>
 public string? GetSecret(string secretName) => null;
src/Phoenix.MarketData.Functions/OpenApi/SwaggerExtension.cs (1)

24-27: Add error handling for file operations.

The file reading operation could throw exceptions if the file exists but is inaccessible or corrupted. Consider adding appropriate exception handling to gracefully manage IO errors.

if (File.Exists(customOpenApiPath))
{
+   try
+   {
        options.CustomDocument = File.ReadAllText(customOpenApiPath);
+   }
+   catch (IOException ex)
+   {
+       // Log the error and continue without custom OpenAPI document
+       // This prevents startup failures due to file access issues
+   }
}
src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs (3)

30-66: ⚠️ Potential issue

Implement the actual creation logic in ProcessCreatedEventAsync

The method currently queries for existing data but doesn't actually process or store the new data. This is an incomplete implementation that should be addressed.

Add the necessary implementation to store or process the created event data after verifying its existence. For example:

 public async Task ProcessCreatedEventAsync(IMarketDataIntegrationEvent createdEvent, CancellationToken cancellationToken = default)
 {
     // Add null check guard clause
     if (createdEvent == null)
         throw new ArgumentNullException(nameof(createdEvent));

     // Implementation to process created events
     _logger.LogInformation("Processing created event: {EventId}", createdEvent.Id);

     try
     {
         // Example implementation - actual logic will depend on your requirements
         // This might involve fetching data, transforming it, and updating repositories

         // For example, you could query existing data:
         var existingData = await _marketDataService.QueryMarketDataAsync(
             createdEvent.DataType,
             createdEvent.AssetClass,
             createdEvent.AssetId,
             cancellationToken: cancellationToken);

         _logger.LogInformation("Found {Count} existing records for {AssetId}",
             existingData?.Count() ?? 0, createdEvent.AssetId);

+        // If no existing data or if data needs to be created regardless
+        if (existingData == null || !existingData.Any())
+        {
+            // Create a new market data entity from the event
+            var newEntity = MapToMarketDataEntity(createdEvent);
+            
+            // Publish/save the new entity
+            await _marketDataService.PublishMarketDataAsync(
+                newEntity,
+                cancellationToken);
+                
+            _logger.LogInformation("Created new market data for {AssetId}", createdEvent.AssetId);
+        }
+        else
+        {
+            _logger.LogInformation("Market data already exists for {AssetId}, skipping creation", createdEvent.AssetId);
+        }

         // Additional processing logic as needed
         // This could include:
         // - Data validation or enrichment
         // - Notifications to other systems
         // - Cache updates
     }
     catch (Exception ex)
     {
         _logger.LogError(ex, "Error processing created event {EventId} for {AssetId}",
             createdEvent.Id, createdEvent.AssetId);
         throw; // Re-throw for error handling at higher levels
     }
 }
+
+// Helper method to map integration event to domain entity
+private IMarketDataEntity MapToMarketDataEntity(IMarketDataIntegrationEvent @event)
+{
+    // Implement mapping logic based on your domain model
+    // This is a placeholder - actual implementation depends on your entity structure
+    return new MarketDataEntity
+    {
+        AssetId = @event.AssetId,
+        AssetClass = @event.AssetClass,
+        DataType = @event.DataType,
+        DocumentType = @event.DocumentType,
+        Timestamp = @event.Timestamp,
+        // Other properties as needed
+    };
+}

92-93: 🛠️ Refactor suggestion

Avoid hardcoding region values

The region is hardcoded as "global" which could be problematic. Consider passing the region from the event or making it configurable.

- const string defaultRegion = "global"; // Consider moving this to configuration
+ // Get region from configuration
+ string defaultRegion = _configuration.GetValue<string>("DefaultRegion") ?? "global";

You'll need to inject IConfiguration in the constructor:

private readonly IConfiguration _configuration;

public MarketDataEventProcessor(
    IMarketDataService<IMarketDataEntity> marketDataService,
    ILogger<MarketDataEventProcessor> logger,
    IConfiguration configuration)
{
    _marketDataService = marketDataService ?? throw new ArgumentNullException(nameof(marketDataService));
    _logger = logger ?? throw new ArgumentNullException(nameof(logger));
    _configuration = configuration ?? throw new ArgumentNullException(nameof(configuration));
}

102-112: ⚠️ Potential issue

Implement version comparison logic

Version comparison logic is mentioned in comments but not implemented. This could lead to race conditions or data inconsistency.

Add the necessary implementation to compare versions and handle the update appropriately:

 if (latestData != null)
 {
     _logger.LogInformation("Found latest data for {AssetId} with version {Version}",
         changedEvent.AssetId, latestData.Version);

-    // Additional comparison and processing logic
-    // This could include:
-    // - Version comparison to avoid processing outdated events
-    // - Merging or updating data
-    // - Triggering downstream systems
+    // Version comparison to avoid processing outdated events
+    if (changedEvent.Version <= latestData.Version)
+    {
+        _logger.LogWarning("Received outdated event with version {EventVersion} which is <= current version {CurrentVersion}",
+            changedEvent.Version, latestData.Version);
+        return; // Skip processing outdated events
+    }
+    
+    // Update the entity with new data
+    var updatedEntity = MergeData(latestData, changedEvent);
+    
+    // Save the updated entity
+    await _marketDataService.PublishMarketDataAsync(
+        updatedEntity,
+        cancellationToken);
+        
+    _logger.LogInformation("Updated market data for {AssetId} from version {OldVersion} to {NewVersion}",
+        changedEvent.AssetId, latestData.Version, updatedEntity.Version);
+    
+    // Trigger any downstream systems if needed
+    // await _eventPublisher.PublishEventAsync(new MarketDataUpdatedEvent(updatedEntity), cancellationToken);
 }

You'll also need to add a helper method for merging data:

private IMarketDataEntity MergeData(IMarketDataEntity existingData, IMarketDataIntegrationEvent changedEvent)
{
    // Implement merging logic based on your domain model
    // This could be a deep clone of the existing entity with updates from the event
    // or creating a new entity with the latest data
    
    // Example (simplified):
    var updatedEntity = existingData.Clone(); // Assuming there's a Clone method
    updatedEntity.Version = changedEvent.Version;
    // Update other properties as needed
    
    return updatedEntity;
}
src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs (4)

45-47: Consider using null-coalescing throw pattern for cleaner secret validation

While you've added validation for empty strings, consider using a null-coalescing throw pattern for more concise and fail-fast validation.

-                var endpoint = await secretProvider.GetEventGridEndpointAsync();
-                var key = await secretProvider.GetEventGridKeyAsync();
+                var endpoint = await secretProvider.GetEventGridEndpointAsync() 
+                    ?? throw new ArgumentException("Event Grid endpoint not configured");
+                var key = await secretProvider.GetEventGridKeyAsync()
+                    ?? throw new ArgumentException("Event Grid key not configured");

86-86: 🛠️ Refactor suggestion

Add JsonSerializerOptions for consistent serialization

Using JsonSerializer.Serialize without options can lead to inconsistent handling of null values, date formatting, and property naming.

-                var data = BinaryData.FromString(JsonSerializer.Serialize(eventData));
+                var options = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };
+                var data = BinaryData.FromString(JsonSerializer.Serialize(eventData, options));

Apply similar changes to line 131 in PublishManyAsync and line 200 in PublishMarketDataEventAsync.


196-198: 🛠️ Refactor suggestion

Use HasValue to avoid incorrect DataCreated event triggers

Using GetValueOrDefault() returns 0 for null values, which could incorrectly trigger DataCreated events for existing data with null versions.

-            var eventType = (marketData.Version.GetValueOrDefault() > 1)
+            var eventType = (marketData.Version.HasValue && marketData.Version.Value > 1)
                ? "Phoenix.MarketData.DataChanged"
                : "Phoenix.MarketData.DataCreated";

222-224: 🛠️ Refactor suggestion

Add null check to DeriveTopicFromEventType

The method will throw if eventType is null. Add a null check to handle this case gracefully.

        private static string DeriveTopicFromEventType(string eventType)
        {
+            if (string.IsNullOrEmpty(eventType)) throw new ArgumentNullException(nameof(eventType));
            return string.Concat(eventType.Select((x, i) => i > 0 && char.IsUpper(x) ? "-" + x.ToString().ToLower() : x.ToString().ToLower()));
        }
.github/workflows/update-guidelines-timestamp.yml (1)

22-25: 🛠️ Refactor suggestion

Update GitHub Actions setup-node version

The current version of actions/setup-node is outdated. Update to the latest version for better compatibility and security.

      - name: Setup Node.js
-        uses: actions/setup-node@v3
+        uses: actions/setup-node@v4
        with:
          node-version: "18"
🧰 Tools
🪛 actionlint (1.7.7)

23-23: the runner of "actions/setup-node@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)

tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh (2)

15-15: Handle directory change failure.

The script should handle potential failures when changing directories.

-cd "$SCRIPT_DIR"
+cd "$SCRIPT_DIR" || { echo "Failed to change to script directory"; exit 1; }
🧰 Tools
🪛 Shellcheck (0.10.0)

[warning] 15-15: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.

(SC2164)


19-19: Specify the coverage settings file in the test script.

The test command should use the coverage settings file that exists in the project.

-dotnet test . --collect:"XPlat Code Coverage"
+dotnet test . --settings coverage.runsettings
.github/workflows/build-and-test.yml (1)

50-57: Consider increasing code coverage thresholds.

The current thresholds (critical: 5%, alert: 10%, warning: 20%) mentioned in past reviews are quite low for a production application. Higher thresholds would encourage better test coverage.

# In your coverage-threshold.xml file:
- thresholdAlert: 10
- thresholdCritical: 5
- thresholdWarning: 20
+ thresholdAlert: 60
+ thresholdCritical: 50
+ thresholdWarning: 70
tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs (4)

45-59: Add missing property assertions in ToDto test.

The test doesn't verify all properties that were set in the domain object are correctly mapped to the DTO.

Following previous review feedback, ensure all properties are verified in the assertions, including SchemaVersion, AssetClass, DataType, Region, DocumentType, InscriptionNumber, and Currency.

Add the following assertions:

Assert.Equal(domain.SchemaVersion, result.SchemaVersion);
Assert.Equal(domain.AssetClass, result.AssetClass);
Assert.Equal(domain.DataType, result.DataType);
Assert.Equal(domain.Region, result.Region);
Assert.Equal(domain.DocumentType, result.DocumentType);
Assert.Equal(domain.InscriptionNumber, result.InscriptionNumber);
Assert.Equal(domain.Currency, result.Currency);

104-119: Add missing property assertions in ToDomain test.

The test doesn't verify all properties that were set in the DTO are correctly mapped to the domain object.

Following previous review feedback, ensure all properties are verified in the assertions, including SchemaVersion, AssetClass, DataType, Region, DocumentType, InscriptionNumber, and Currency.

Add the following assertions:

Assert.Equal(dto.SchemaVersion, result.SchemaVersion);
Assert.Equal(dto.AssetClass, result.AssetClass);
Assert.Equal(dto.DataType, result.DataType);
Assert.Equal(dto.Region, result.Region);
Assert.Equal(dto.DocumentType, result.DocumentType);
Assert.Equal(dto.InscriptionNumber, result.InscriptionNumber);
Assert.Equal(dto.Currency, result.Currency);

132-166: Add more property assertions in static mapping test.

The test only checks three properties (AssetId, Price, and CollectionName) when comparing the results of instance and static mapping methods.

Following previous review feedback, verify all properties to ensure complete equivalence between the instance and static mapping results:

Assert.Equal(instanceResult.SchemaVersion, staticResult.SchemaVersion);
Assert.Equal(instanceResult.AssetClass, staticResult.AssetClass);
Assert.Equal(instanceResult.DataType, staticResult.DataType);
Assert.Equal(instanceResult.Region, staticResult.Region);
Assert.Equal(instanceResult.DocumentType, staticResult.DocumentType);
Assert.Equal(instanceResult.ParentInscriptionId, staticResult.ParentInscriptionId);
Assert.Equal(instanceResult.InscriptionId, staticResult.InscriptionId);
Assert.Equal(instanceResult.InscriptionNumber, staticResult.InscriptionNumber);
Assert.Equal(instanceResult.Currency, staticResult.Currency);
Assert.Equal(instanceResult.Side, staticResult.Side);
Assert.Equal(instanceResult.AsOfDate, staticResult.AsOfDate);
Assert.Equal(instanceResult.AsOfTime, staticResult.AsOfTime);
Assert.Equal(instanceResult.Tags?.Count, staticResult.Tags?.Count);

1-279: Add edge case tests for string properties.

The tests focus on the happy path with valid inputs, but don't test edge cases for string properties like assetId, collectionName, and inscriptionId.

Following previous review feedback, add tests for edge cases such as empty strings and special characters:

[Theory]
[InlineData("")]  // Empty string
[InlineData("  ")]  // Whitespace
[InlineData("BTC/USD!@#$")]  // Special characters
public void ToDomain_WithSpecialCharactersInAssetId_MapsCorrectly(string assetId)
{
    // Arrange
    var mapper = new CryptoOrdinalSpotPriceDataMapper();
    var dto = CreateValidDto();  // Helper method to create a valid DTO
    dto = dto with { assetId = assetId };  // Update just the assetId

    // Act
    var result = mapper.ToDomain(dto);

    // Assert
    Assert.Equal(assetId, result.DisplayAssetId);  // Original case preserved
    Assert.Equal(assetId.ToLowerInvariant(), result.AssetId);  // Normalized to lowercase
}

Similarly, add tests for collectionName and inscriptionId.

src/Phoenix.MarketData.Infrastructure/Repositories/Interfaces.cs (1)

1-11: Remove unused using directives

Microsoft.Azure.Cosmos*, Microsoft.Extensions.Logging, System.Linq, etc., are not referenced in this file and generate build warnings. Removing them keeps the file clean.

This was raised in an earlier review and still applies.

🧹 Nitpick comments (30)
src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs (3)

12-12: Add XML documentation to CacheSecret method.

For consistency, add XML documentation to this method as you've done for the other methods.

+/// <summary>
+/// No-operation implementation that ignores any cache requests.
+/// </summary>
+/// <param name="secretName">The name of the secret (ignored)</param>
+/// <param name="value">The secret value (ignored)</param>
 public void CacheSecret(string secretName, string value) { }

14-17: Clarify expiration return value in documentation.

The current documentation doesn't explain why IsExpired is set to false rather than true. Clarifying this would help prevent misinterpretation.

 /// <summary>
-/// Gets a secret with its expiration information - no-op implementation returns null
+/// Gets a secret with its expiration information - no-op implementation returns null for the value
+/// and false for IsExpired (indicating no expiration) since no value is being cached.
 /// </summary>

1-24: Add unit tests for NoOpSecretCache.

Although this is a simple implementation, it's good practice to have unit tests to verify the expected behavior, especially since it's implementing a critical interface related to secret management.

Would you like me to generate unit tests for this class to ensure its behavior is validated?
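
A minimal sketch of such tests (xUnit, to match the existing test projects; the namespace is assumed from the file path, and because the class is internal the test assembly would need an InternalsVisibleTo attribute):

using Xunit;
// Namespace assumed from src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs
using Phoenix.MarketData.Infrastructure.Configuration;

public class NoOpSecretCacheTests
{
    [Fact]
    public void GetSecret_AlwaysReturnsNull()
    {
        var cache = new NoOpSecretCache();
        Assert.Null(cache.GetSecret("any-secret"));
    }

    [Fact]
    public void CacheSecret_ThenGetSecret_StillReturnsNull()
    {
        var cache = new NoOpSecretCache();
        cache.CacheSecret("any-secret", "value"); // no-op: nothing is stored
        Assert.Null(cache.GetSecret("any-secret"));
    }
}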

src/Phoenix.MarketData.Functions/OpenApi/SwaggerExtension.cs (2)

12-34: Add XML documentation for the extension method.

Adding proper XML documentation for public methods improves code maintainability and helps other developers understand the purpose and usage of the method.

+    /// <summary>
+    /// Configures Swagger/OpenAPI documentation for the Azure Functions host.
+    /// </summary>
+    /// <param name="builder">The host builder to configure.</param>
+    /// <returns>The configured host builder for fluent chaining.</returns>
     public static IHostBuilder ConfigureSwagger(this IHostBuilder builder)
     {
         // Implementation
     }

24-27: Consider validating the JSON content before using it.

The code assumes the JSON file contains valid OpenAPI configuration. Consider adding validation to ensure the content is valid JSON and properly formatted as an OpenAPI document before assigning it to the options.

if (File.Exists(customOpenApiPath))
{
    try
    {
-       options.CustomDocument = File.ReadAllText(customOpenApiPath);
+       var jsonContent = File.ReadAllText(customOpenApiPath);
+       
+       // Basic JSON validation
+       if (!string.IsNullOrEmpty(jsonContent) && IsValidJson(jsonContent))
+       {
+           options.CustomDocument = jsonContent;
+       }
+       else
+       {
+           // Log warning about invalid JSON
+       }
    }
    catch (IOException ex)
    {
        // Log the error and continue without custom OpenAPI document
    }
}

+ // Add this helper method to validate JSON
+ private static bool IsValidJson(string json)
+ {
+     try
+     {
+         System.Text.Json.JsonDocument.Parse(json);
+         return true;
+     }
+     catch
+     {
+         return false;
+     }
+ }
src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs (2)

11-15: Add XML documentation to interface methods

The interface methods lack XML documentation, which would improve API discoverability and usage. Consider adding documentation that describes the purpose of each method, parameters, and return values.

 public interface IMarketDataEventProcessor
 {
+    /// <summary>
+    /// Processes a market data creation event.
+    /// </summary>
+    /// <param name="createdEvent">The market data integration event containing creation information.</param>
+    /// <param name="cancellationToken">Optional cancellation token.</param>
+    /// <returns>A task representing the asynchronous operation.</returns>
     Task ProcessCreatedEventAsync(IMarketDataIntegrationEvent createdEvent, CancellationToken cancellationToken = default);
+    
+    /// <summary>
+    /// Processes a market data change event.
+    /// </summary>
+    /// <param name="changedEvent">The market data integration event containing change information.</param>
+    /// <param name="cancellationToken">Optional cancellation token.</param>
+    /// <returns>A task representing the asynchronous operation.</returns>
     Task ProcessChangedEventAsync(IMarketDataIntegrationEvent changedEvent, CancellationToken cancellationToken = default);
 }

85-89: Simplify timestamp handling

The timestamp handling could be simplified. The code first gets the DateTimeOffset and then extracts the DateTime to create a DateOnly.

- // Get the timestamp - it's already a DateTimeOffset
- DateTimeOffset timestamp = changedEvent.Timestamp;
-
- // Convert DateTimeOffset directly to DateOnly
- dateOnly = DateOnly.FromDateTime(timestamp.DateTime);
+ // Convert DateTimeOffset directly to DateOnly in one step
+ dateOnly = DateOnly.FromDateTime(changedEvent.Timestamp.DateTime);
src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs (2)

17-33: Add XML documentation for public members

Adding XML documentation to public classes and methods improves code maintainability and enables better IntelliSense support.

+    /// <summary>
+    /// Publishes events to Azure Event Grid using CloudEvents and EventGridEvents formats
+    /// </summary>
    public class EventGridPublisher : IEventPublisher
    {
        private readonly EventGridPublisherClient _client;
        private readonly ILogger<EventGridPublisher> _logger;
        private readonly string _baseSourceUri;

        // Event Grid limits
        private const int MaxEventsPerBatch = 100;
        private const int MaxPayloadSizeBytes = 1024 * 1024; // 1MB

+        /// <summary>
+        /// Initializes a new instance of the <see cref="EventGridPublisher"/> class.
+        /// </summary>
+        /// <param name="client">The EventGrid client</param>
+        /// <param name="logger">The logger</param>
+        /// <param name="baseSourceUri">The base URI for event sources</param>
+        /// <exception cref="ArgumentNullException">Thrown when client or logger is null</exception>
        // Constructor for direct DI usage
        public EventGridPublisher(EventGridPublisherClient client, ILogger<EventGridPublisher> logger, string baseSourceUri = "https://phoenix.marketdata/events/")

112-187: Consider implementing retry logic for batch publishing

The PublishManyAsync method has a comment about implementing retry logic but doesn't actually do it. Consider adding an exponential backoff retry strategy for more resilient event publishing.

You could implement a retry policy using Polly or a simple retry loop:

// Example with simple retry loop
private async Task RetryAsync(Func<Task> action, int maxRetries = 3)
{
    int retryCount = 0;
    while (true)
    {
        try
        {
            await action();
            return;
        }
        catch (Exception ex) when (retryCount < maxRetries)
        {
            retryCount++;
            var delay = TimeSpan.FromSeconds(Math.Pow(2, retryCount)); // Exponential backoff
            _logger.LogWarning(ex, "Retry {RetryCount}/{MaxRetries} after {Delay}s", 
                retryCount, maxRetries, delay.TotalSeconds);
            await Task.Delay(delay);
        }
    }
}

// Then use it in PublishManyAsync:
await RetryAsync(() => _client.SendEventsAsync(batch, cancellationToken));
.github/workflows/update-guidelines-timestamp.yml (2)

31-35: Improve git diff conditional logic

The current shell syntax using && and || can be confusing and might not behave as expected in all cases. Consider using an explicit if statement for clarity.

-          git diff --quiet && git diff --staged --quiet || (git commit -m "Update AI guidelines timestamp [skip ci]" && git push)
+          if ! git diff --quiet || ! git diff --staged --quiet; then
+            git commit -m "Update AI guidelines timestamp [skip ci]"
+            git push
+          fi
🧰 Tools
🪛 actionlint (1.7.7)

31-31: shellcheck reported issue in this script: SC2015:info:4:18: Note that A && B || C is not if-then-else. C may run when A is true

(shellcheck)


1-36: Consider adding workflow_dispatch trigger

Adding a workflow_dispatch trigger would allow manual execution of this workflow when needed, which can be useful for testing or forcing an update.

on:
  push:
    branches: [main]
    paths:
      - ".aiguidance/ai-guidelines.md"
+  workflow_dispatch:  # Allow manual triggering
🧰 Tools
🪛 actionlint (1.7.7)

23-23: the runner of "actions/setup-node@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


31-31: shellcheck reported issue in this script: SC2015:info:4:18: Note that A && B || C is not if-then-else. C may run when A is true

(shellcheck)

.github/workflows/build-and-test.yml (1)

38-38: Add configuration parameter to test command.

The test step should include --configuration Release for a consistent test environment.

-        run: dotnet test --no-build --verbosity normal --settings tests/Phoenix.MarketData.Infrastructure.Tests/coverage.runsettings
+        run: dotnet test --no-build --configuration Release --verbosity normal --settings tests/Phoenix.MarketData.Infrastructure.Tests/coverage.runsettings
tests/Phoenix.MarketData.Infrastructure.Tests/Repository/DeleteOperationTests.cs (2)

14-72: Test looks good but could be expanded to cover soft delete.

The test correctly verifies hard delete behavior by checking that DeleteItemAsync is called and an event is published. However, since the DeleteAsync method accepts a boolean parameter (false for hard delete), it would be beneficial to add a complementary test for soft delete behavior (when passing true).

Consider adding another test method:

[Fact]
public async Task DeleteAsync_ShouldUpdateItemWithIsDeletedFlag_WhenUsingSoftDelete()
{
    // Arrange similar to existing test
    // Mock UpsertItemAsync instead of DeleteItemAsync
    
    // Act
    var result = await Repository.DeleteAsync(Id, true);
    
    // Assert
    // Verify UpsertItemAsync was called with IsDeleted=true
    // Verify event was published
}

26-49: Extract partition key query mocking to a reusable method.

The setup for GetPartitionKeyForIdAsync is complex and takes up a significant portion of the test. This same pattern might be needed in other tests.

Consider extracting this mock setup to a helper method in the base class:

protected void SetupPartitionKeyQueryMock(string id, string partitionKey)
{
    var mockFeedIterator = new Mock<FeedIterator<FxSpotPriceData>>();
    mockFeedIterator
        .SetupSequence(f => f.HasMoreResults)
        .Returns(true)
        .Returns(false);

    var mockResponse = new Mock<FeedResponse<FxSpotPriceData>>();
    mockResponse.Setup(r => r.Count).Returns(1);
    mockResponse.Setup(r => r.GetEnumerator())
        .Returns(new List<FxSpotPriceData> { MarketData }.GetEnumerator());

    mockFeedIterator
        .Setup(f => f.ReadNextAsync(It.IsAny<CancellationToken>()))
        .ReturnsAsync(mockResponse.Object);

    MockContainer
        .Setup(c => c.GetItemQueryIterator<FxSpotPriceData>(
            It.IsAny<QueryDefinition>(),
            It.IsAny<string>(),
            It.IsAny<QueryRequestOptions>()))
        .Returns(mockFeedIterator.Object);
}
tests/Phoenix.MarketData.Application.Tests/Services/Decorators/ValidationMarketDataServiceDecoratorTests.cs (1)

144-154: Consider moving TestMarketData to the top of the class or to a separate file.

The TestMarketData class is defined at the end of the file, which makes it harder to understand the tests without scrolling down to see the structure.

Consider moving the TestMarketData class definition to the top of the test class or to a separate file for better readability.

tests/Phoenix.MarketData.Infrastructure.Tests/Repository/CrudOperationTests.cs (2)

20-20: Remove commented instruction.

The comment "Add this line" appears to be leftover from development and should be removed.

-            mockResponse.Setup(r => r.StatusCode).Returns(System.Net.HttpStatusCode.OK); // Add this line
+            mockResponse.Setup(r => r.StatusCode).Returns(System.Net.HttpStatusCode.OK);

68-144: Consider verifying event payload in addition to topic.

The tests verify that events are published with the correct topic, but don't verify the content of the event payload.

Consider capturing the published event and verifying its properties match the expected values:

// For UpdateAsync test
MockEventPublisher.Verify(e =>
    e.PublishAsync(
        It.Is<object>(o => VerifyEventPayload(o, MarketData)),
        It.Is<string>(topic => topic.Contains("updated")),
        It.IsAny<CancellationToken>()),
    Times.Once);

// Helper method
private bool VerifyEventPayload(object payload, FxSpotPriceData expected)
{
    // Verify payload contains expected data
    // This depends on your event structure
    return true;
}
src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryQueryMethods.cs (1)

141-167: Consider adding pagination to the fallback query approach

The fallback approach fetches all items from the database and then filters them in memory, which could be inefficient for large datasets.

// Fall back to a direct query if LINQ fails
-var queryDefinition = new QueryDefinition("SELECT * FROM c");
-using var iterator = _container.GetItemQueryIterator<T>(queryDefinition);
+// Add a reasonable page size to avoid loading too much data at once
+var queryOptions = new QueryRequestOptions { MaxItemCount = 100 };
+var queryDefinition = new QueryDefinition("SELECT * FROM c");
+using var iterator = _container.GetItemQueryIterator<T>(queryDefinition, requestOptions: queryOptions);

while (iterator.HasMoreResults)
{
    var response = await iterator.ReadNextAsync(cancellationToken);
src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryBatchExtensions.cs (4)

21-34: Consider implementing parallel processing for better performance

The current implementation processes items sequentially, which could be slow for large batches. Consider using parallel processing with Task.WhenAll for better performance.

public static async Task<IEnumerable<T>> AddBatchAsync<T>(
    this CosmosRepository<T> repository, 
    IEnumerable<T> items,
    CancellationToken cancellationToken = default) 
    where T : class, IMarketDataEntity
{
-    var results = new List<T>();
-    foreach (var item in items)
-    {
-        var result = await repository.AddAsync(item, cancellationToken);
-        results.Add(result);
-    }
-    return results;
+    // Process items in parallel for better performance
+    var tasks = items.Select(item => repository.AddAsync(item, cancellationToken)).ToArray();
+    return await Task.WhenAll(tasks);
}

44-57: Consider implementing parallel processing for better performance

Similar to the add batch method, this update implementation processes items sequentially. Consider using parallel processing for better performance with large batches.

public static async Task<IEnumerable<T>> UpdateBatchAsync<T>(
    this CosmosRepository<T> repository, 
    IEnumerable<T> items,
    CancellationToken cancellationToken = default) 
    where T : class, IMarketDataEntity
{
-    var results = new List<T>();
-    foreach (var item in items)
-    {
-        var result = await repository.UpdateAsync(item, cancellationToken);
-        results.Add(result);
-    }
-    return results;
+    // Process items in parallel for better performance
+    var tasks = items.Select(item => repository.UpdateAsync(item, cancellationToken)).ToArray();
+    return await Task.WhenAll(tasks);
}

68-82: Consider implementing parallel processing for better performance

The delete batch implementation could also benefit from parallel processing for improved performance with large batches.

public static async Task<IEnumerable<bool>> DeleteBatchAsync<T>(
    this CosmosRepository<T> repository, 
    IEnumerable<string> ids,
    bool useSoftDelete = false,
    CancellationToken cancellationToken = default) 
    where T : class, IMarketDataEntity
{
-    var results = new List<bool>();
-    foreach (var id in ids)
-    {
-        var result = await repository.DeleteAsync(id, useSoftDelete, cancellationToken);
-        results.Add(result);
-    }
-    return results;
+    // Process items in parallel for better performance
+    var tasks = ids.Select(id => repository.DeleteAsync(id, useSoftDelete, cancellationToken)).ToArray();
+    return await Task.WhenAll(tasks);
}

8-11: Add warning about potential RU consumption with parallel operations

When implementing batch operations against Cosmos DB, it's important to be aware of the RU (Request Unit) consumption implications, especially with parallel operations.

Consider adding a comment or documentation warning about potential RU consumption with parallel batch operations. You might also want to add an optional parameter to control the degree of parallelism, allowing users to balance performance and RU consumption based on their specific needs.
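
As an illustration of that second point, a bounded-parallelism variant could look roughly like this (a sketch only; the maxDegreeOfParallelism parameter is hypothetical and not part of the PR):

public static async Task<IEnumerable<T>> AddBatchAsync<T>(
    this CosmosRepository<T> repository,
    IEnumerable<T> items,
    int maxDegreeOfParallelism = 4,
    CancellationToken cancellationToken = default)
    where T : class, IMarketDataEntity
{
    // Throttle concurrent calls so RU consumption stays predictable.
    using var throttle = new SemaphoreSlim(maxDegreeOfParallelism);
    var tasks = items.Select(async item =>
    {
        await throttle.WaitAsync(cancellationToken);
        try { return await repository.AddAsync(item, cancellationToken); }
        finally { throttle.Release(); }
    });
    return await Task.WhenAll(tasks);
}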

src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryCrudMethods.cs (2)

66-93: Race condition on soft-delete path – consider conditional patch instead of read-modify-write

Soft delete currently performs:

  1. ReadItem (via GetByIdAsync)
  2. Toggle IsDeleted
  3. UpsertItem

Between 1 and 3 another writer could update the item, and the upsert will blindly overwrite that change. Cosmos DB supports PatchItemAsync and conditional writes on the ETag header (If-Match). Switching to either would keep the operation idempotent and avoid lost updates.
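
A hedged sketch of the conditional-write approach (Cosmos SDK v3 PatchItemAsync; the /isDeleted JSON path and the local variable names are assumptions about this repository, and the ETag is taken from the prior read response):

var patch = new[] { PatchOperation.Set("/isDeleted", true) };
var options = new PatchItemRequestOptions { IfMatchEtag = etag };

// If another writer changed the item after our read, this fails with HTTP 412
// (precondition failed), so the caller can re-read and retry instead of
// silently overwriting the concurrent update.
await _container.PatchItemAsync<T>(id, new PartitionKey(partitionKeyValue), patch, options, cancellationToken);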


149-152: String interpolation allocates for every call

GetEventTopicName is called for every CRUD op. Using string.Create or caching the lower-case type name once in a static readonly field would avoid repeated allocations on the hot path.
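
For example, a hedged sketch of the caching approach (assuming the topic name depends only on the entity type):

// Computed once per closed generic type; GetEventTopicName can build on this
// instead of lower-casing the type name on every CRUD call.
private static readonly string LowerCaseTypeName = typeof(T).Name.ToLowerInvariant();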

tests/Phoenix.MarketData.Infrastructure.Tests/Repository/ErrorHandlingTests.cs (1)

18-30: Consider asserting on the thrown exception’s StatusCode for stronger guarantees

Right now we only assert an exception of type CosmosException is thrown. Asserting that ex.StatusCode == HttpStatusCode.InternalServerError guarantees that the repository is not wrapping/swallowing errors unexpectedly.

Example:

var ex = await Assert.ThrowsAsync<CosmosException>(() => Repository.AddAsync(MarketData));
Assert.Equal(HttpStatusCode.InternalServerError, ex.StatusCode);
tests/Phoenix.MarketData.Infrastructure.Tests/Repository/QueryOperationTests.cs (2)

50-61: Heavy LINQ provider mocking can be removed – repository uses GetItemQueryIterator directly

Most of the setup for IOrderedQueryable and custom IQueryProvider is unnecessary (and brittle) because CosmosRepository.QueryAsync ultimately calls _container.GetItemQueryIterator<T>().

Reducing the mock surface:

-// Setup for the LINQ queryable with extension method support
-var mockQueryable = new Mock<IOrderedQueryable<FxSpotPriceData>>();
-...
-MockContainer
-    .Setup(c => c.GetItemLinqQueryable<FxSpotPriceData>(...))
-    .Returns(mockQueryable.Object);
+// The repository doesn't rely on LINQ in the happy path, so we can skip this block

Simpler tests are easier to maintain and read.


219-225: FluentAssertions improves readability of multi-assert blocks

Consider replacing multiple Assert.* calls with a single readable assertion chain:

result.Should().HaveCount(2)
      .And.ContainSingle(x => x.AssetId == "eurusd")
      .And.ContainSingle(x => x.AssetId == "gbpusd");

This provides clearer failure messages.

src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryPartitionKey.cs (1)

30-40: Potentially expensive fan-out query

Iterating through multiple “likely” partition keys creates N point reads but still risks a full cross-partition scan on the final fallback (new PartitionKey(id)).
If you cannot derive the key deterministically, consider maintaining a lookup map (id ➜ partitionKey) in a small auxiliary container/table or including the partition key as a deterministic prefix of the id.

This will avoid repeated RU-costly queries in hot paths.
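
For illustration, the deterministic-prefix idea could look roughly like this (the separator and key layout are assumptions, not the repository's actual convention):

// Encode the partition key into the id at write time...
private static string MakeId(string partitionKey, string entityKey) => $"{partitionKey}|{entityKey}";

// ...so it can be recovered later without a lookup query.
private static string PartitionKeyFromId(string id) => id.Split('|')[0];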

src/Phoenix.MarketData.Infrastructure/Repositories/VersioningExtensions.cs (1)

32-43: Cross-partition query may be unintentionally enabled

The LINQ query does not set RequestOptions.PartitionKey, which can trigger a costly fan-out across all partitions. Because you know dataType, assetClass, and region, you can reconstruct the partition key (same logic as elsewhere) and pass it to GetItemLinqQueryable:

var pk = $"{dataType}:{assetClass}:{region}";
var query = container.GetItemLinqQueryable<T>(
                requestOptions: new QueryRequestOptions { PartitionKey = new PartitionKey(pk) },
                allowSynchronousQueryExecution: false)

This reduces RU consumption and latency.

src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryMethods.cs (1)

258-289: CountAsync may generate large RU bills for filtered counts

IQueryable<T>.ToFeedIterator() materialises every matching item and sums the page counts, which is O(n).

For filtered counts you can issue a server-side aggregated query instead:

var sql = "SELECT VALUE COUNT(1) FROM c WHERE " + _predicateToSqlTranslator.Translate(predicate);
var queryDef = new QueryDefinition(sql);
var iterator = _container.GetItemQueryIterator<int>(queryDef);

If translating the expression tree is too involved, at minimum add a warning in the XML docs that filtered counts are RU-expensive.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f57b19f and e8ce251.

📒 Files selected for processing (48)
  • .github/workflows/build-and-test.yml (1 hunks)
  • .github/workflows/update-guidelines-timestamp.yml (1 hunks)
  • .gitignore (1 hunks)
  • .husky/post-commit (1 hunks)
  • .husky/post-commit.cmd (1 hunks)
  • .prettierignore (1 hunks)
  • package.json (1 hunks)
  • scripts/test-wrapper.js (1 hunks)
  • scripts/update-guidelines.js (1 hunks)
  • src/Phoenix.MarketData.Application/Services/Decorators/ValidationMarketDataServiceDecorator.cs (1 hunks)
  • src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs (1 hunks)
  • src/Phoenix.MarketData.Application/Services/MarketDataService.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Events/IEventPublisher.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/BaseMarketData.cs (2 hunks)
  • src/Phoenix.MarketData.Domain/Models/FxSpotPriceData.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/Interfaces/IMarketDataEntity.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/Interfaces/ISoftDeletable.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/OpenApi/PhoenixOpenApiConfigurationOptions.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/OpenApi/SwaggerExtension.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/Phoenix.MarketData.Functions.csproj (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Mapping/BaseMarketDataMapper.cs (3 hunks)
  • src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepository.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryBatchExtensions.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryCrudMethods.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryExtensions.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryMethods.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryPartitionKey.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryQueryMethods.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/Interfaces.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/RepositoryExceptions.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/VersioningExtensions.cs (1 hunks)
  • tests/Phoenix.MarketData.Application.Tests/Services/Decorators/ValidationMarketDataServiceDecoratorTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Generate-CoverageReport.ps1 (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/FxSpotPriceDataMapperTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Phoenix.MarketData.Infrastructure.Tests.csproj (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/Base/BaseMarketDataRepositoryTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/BatchOperationTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/CrudOperationTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/DeleteOperationTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/ErrorHandlingTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/QueryOperationTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CompositeValidatorTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh (1 hunks)
✅ Files skipped from review due to trivial changes (4)
  • .prettierignore
  • src/Phoenix.MarketData.Functions/OpenApi/PhoenixOpenApiConfigurationOptions.cs
  • .husky/post-commit.cmd
  • src/Phoenix.MarketData.Infrastructure/Repositories/RepositoryExceptions.cs
🚧 Files skipped from review as they are similar to previous changes (18)
  • src/Phoenix.MarketData.Infrastructure/Mapping/BaseMarketDataMapper.cs
  • src/Phoenix.MarketData.Functions/Phoenix.MarketData.Functions.csproj
  • src/Phoenix.MarketData.Core/Events/IEventPublisher.cs
  • src/Phoenix.MarketData.Domain/Models/FxSpotPriceData.cs
  • package.json
  • tests/Phoenix.MarketData.Infrastructure.Tests/Phoenix.MarketData.Infrastructure.Tests.csproj
  • src/Phoenix.MarketData.Domain/Models/Interfaces/IMarketDataEntity.cs
  • src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CompositeValidatorTests.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/FxSpotPriceDataMapperTests.cs
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryExtensions.cs
  • src/Phoenix.MarketData.Application/Services/MarketDataService.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Generate-CoverageReport.ps1
  • src/Phoenix.MarketData.Application/Services/Decorators/ValidationMarketDataServiceDecorator.cs
  • .gitignore
  • src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepository.cs
  • src/Phoenix.MarketData.Domain/Models/BaseMarketData.cs
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/Phoenix.MarketData.Functions/OpenApi/SwaggerExtension.cs (1)
src/Phoenix.MarketData.Functions/OpenApi/PhoenixOpenApiConfigurationOptions.cs (1)
  • PhoenixOpenApiConfigurationOptions (8-32)
🪛 Biome (1.9.4)
scripts/test-wrapper.js

[error] 79-79: Expected an expression but instead found '}'.

Expected an expression here.

(parse)


[error] 80-80: Expected a catch clause but instead found ')'.

Expected a catch clause here.

(parse)


[error] 95-96: Expected a statement but instead found '} catch (error)'.

Expected a statement here.

(parse)


[error] 117-117: expected } but instead the file ends

the file ends here

(parse)


[error] 4-4: Shouldn't redeclare 'execSync'. Consider to delete it or rename it.

'execSync' is defined here:

(lint/suspicious/noRedeclare)


[error] 96-98: This block statement doesn't serve any purpose and can be safely removed.

Standalone block statements without any block-level declarations are redundant in JavaScript and can be removed to simplify the code.

(lint/complexity/noUselessLoneBlockStatements)

🪛 actionlint (1.7.7)
.github/workflows/build-and-test.yml

16-16: the runner of "actions/checkout@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


19-19: the runner of "actions/setup-dotnet@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


24-24: the runner of "actions/cache@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


59-59: the runner of "actions/upload-artifact@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)

.github/workflows/update-guidelines-timestamp.yml

23-23: the runner of "actions/setup-node@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


31-31: shellcheck reported issue in this script: SC2015:info:4:18: Note that A && B || C is not if-then-else. C may run when A is true

(shellcheck)

🪛 Shellcheck (0.10.0)
tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh

[warning] 15-15: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.

(SC2164)


[warning] 42-42: For loops over find output are fragile. Use find -exec or a while read loop.

(SC2044)

🔇 Additional comments (24)
src/Phoenix.MarketData.Domain/Models/Interfaces/ISoftDeletable.cs (1)

1-13: LGTM: Well-documented interface for soft deletion

The ISoftDeletable interface is well-defined with clear XML documentation. This abstraction will help standardize soft deletion across different entity types.

.husky/post-commit (1)

1-15: LGTM: Well-structured post-commit hook

The post-commit hook is well-structured with clear output formatting and appropriate handling of script execution.

tests/Phoenix.MarketData.Infrastructure.Tests/Repository/Base/BaseMarketDataRepositoryTests.cs (1)

1-55: LGTM - Well-structured base test class.

The BaseMarketDataRepositoryTests class provides a clean, reusable foundation for repository tests with proper mocking and test data setup. The code is well-organized with clear comments explaining the approach to ID generation.

tests/Phoenix.MarketData.Application.Tests/Services/Decorators/ValidationMarketDataServiceDecoratorTests.cs (6)

15-38: LGTM! Good test setup with mock dependencies.

The test class is well-structured with proper mock setup in the constructor. The test data initialization is comprehensive.


40-48: Good validation of constructor behavior.

This test correctly verifies that the constructor properly validates its parameters.


50-69: Good test for successful validation flow.

The test properly verifies that validation is performed and the decorated service is called when validation succeeds.


71-91: Good test for validation failure flow.

The test correctly verifies that a ValidationException is thrown with the expected errors when validation fails.


93-114: Good test for bypassing validation on query methods.

The test correctly verifies that GetLatestMarketDataAsync doesn't perform validation and directly calls the decorated service.


116-141: Good test for bypassing validation on query methods.

The test correctly verifies that QueryMarketDataAsync doesn't perform validation and directly calls the decorated service with the correct parameters.

tests/Phoenix.MarketData.Infrastructure.Tests/Repository/CrudOperationTests.cs (4)

14-45: Good test for successful retrieval by ID.

The test properly verifies the GetByIdAsync method returns the expected data when the item exists and that the container's ReadItemAsync method is called with the correct parameters.


47-66: Good test for handling not found exceptions.

The test correctly verifies that GetByIdAsync returns null when the item doesn't exist and Cosmos DB throws a NotFound exception.


68-105: Good test for update operation.

The test properly verifies that UpdateAsync calls UpsertItemAsync with the correct parameters and publishes an event with a topic containing "updated".


107-144: Good test for add operation.

The test properly verifies that AddAsync calls CreateItemAsync with the correct parameters and publishes an event with a topic containing "created".

tests/Phoenix.MarketData.Infrastructure.Tests/Repository/BatchOperationTests.cs (3)

17-65: LGTM! Well-structured test for batch addition

The test correctly verifies that the AddBatchAsync method calls CreateItemAsync for each item in the batch and publishes the appropriate events.


67-115: LGTM! Well-structured test for batch updates

The test correctly verifies that the UpdateBatchAsync method calls UpsertItemAsync for each item in the batch and publishes the appropriate events.


117-176: LGTM! Well-structured test for batch deletion

The test correctly verifies that the DeleteBatchAsync method calls DeleteItemAsync for each ID in the batch and publishes the appropriate events. The test includes proper mocking of the partition key resolution which is needed for deletion operations.

src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryQueryMethods.cs (3)

17-38: LGTM! Excellent error handling and logging

The ExecuteQueryAsync helper method is well-designed with comprehensive error handling and detailed logging.


40-76: LGTM! Good handling of "Not Found" exceptions

The ExecuteSingleQueryAsync helper method has appropriate special handling for Cosmos DB "NotFound" exceptions, returning null instead of throwing.


79-106: LGTM! Well-implemented method for retrieving all entities

The GetAllAsync method properly handles soft-deleted entities and uses the helper method for consistent error handling.

src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryCrudMethods.cs (1)

106-111: Missing await on publisher in early-return branches

The _eventPublisher.PublishAsync call in the success path is awaited, and if it fails (e.g. a network error) the exception bubbles up as expected – that’s fine.

However, in the branch where the entity is not found (return false on lines 108–110) nothing is published. That’s likely correct, but please make sure it is intentional – some consumers rely on “delete-failed” events for audit. If that’s required, adding a dedicated event type would help (a sketch follows below).
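
If that audit trail is wanted, a minimal sketch could look like the following. MarketDataDeleteFailedEvent is a hypothetical type and the PublishAsync signature shown is an assumption – align both with the actual IEventPublisher:

// Hypothetical event type for auditing delete attempts that found nothing to remove.
public sealed record MarketDataDeleteFailedEvent(
    string Id,
    string Reason,
    DateTimeOffset OccurredAtUtc);

// In the not-found branch of DeleteAsync (sketch only):
if (existing is null)
{
    await _eventPublisher.PublishAsync(
        new MarketDataDeleteFailedEvent(id, "NotFound", DateTimeOffset.UtcNow),
        cancellationToken);
    return false;
}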

src/Phoenix.MarketData.Infrastructure/Repositories/VersioningExtensions.cs (1)

84-93: Silent loss of concurrency protection when ETag is absent

If IETaggable.ETag is null/empty, the method downgrades to an unprotected replace, silently defeating optimistic concurrency. Either:

  1. Require IETaggable and throw when the tag is missing, or
  2. Document the behaviour explicitly in XML docs and logs.

Fail-fast is safer for financial/market-data updates.
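
A minimal sketch of option 1 (fail fast), assuming the entity exposes IETaggable with a string? ETag property as described above; entity.Id, partitionKey and container follow the surrounding method:

// Refuse to downgrade to an unprotected replace when no ETag is available.
if (entity is not IETaggable taggable || string.IsNullOrWhiteSpace(taggable.ETag))
{
    throw new InvalidOperationException(
        $"Entity '{entity.Id}' has no ETag; refusing to update without optimistic concurrency.");
}

var options = new ItemRequestOptions { IfMatchEtag = taggable.ETag };
var response = await container.ReplaceItemAsync(entity, entity.Id,
    new PartitionKey(partitionKey), options, cancellationToken);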

src/Phoenix.MarketData.Infrastructure/Repositories/Interfaces.cs (1)

18-57: 👍 Clear, cancellation-aware repository API

The interface now consistently exposes CancellationToken parameters, aligning with best practices for async I/O and long-running Cosmos operations.

src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryMethods.cs (2)

31-34:


❓ Verification inconclusive

Soft-delete flag is hard-coded and may diverge from domain model

The query assumes soft-deleted entities are marked with the JSON property isDeleted.
If your ISoftDeletable contract exposes a different CLR property (e.g. IsDeleted) and relies on the default Cosmos property-name resolver, this filter will silently fail to exclude deleted rows.

-    query += " WHERE NOT IS_DEFINED(c.isDeleted) OR c.isDeleted = false";
+    // Avoid hard-coding the property name; derive it from the model
+    var softDeleteProp = nameof(ISoftDeletable.IsDeleted);
+    query += $" WHERE NOT IS_DEFINED(c.{softDeleteProp}) OR c.{softDeleteProp} = false";

Consider exposing a constant or helper that maps the interface property to the JSON field to keep repository logic resilient to future renames.


🏁 Script executed:

#!/bin/bash
set -e
echo "Searching for ISoftDeletable definition..."
grep -R "interface ISoftDeletable" -n .
echo "Searching for IsDeleted occurrences in codebase..."
grep -R "IsDeleted" -n .
echo "Searching for JSON property name mappings..."
grep -R "JsonProperty" -n .
grep -R "JsonPropertyName" -n .

Length of output: 7455


Avoid hard-coding the soft-delete JSON field name

I didn’t see any [JsonPropertyName] on ISoftDeletable.IsDeleted or global JSON naming-policy configuration, so it’s unclear whether your serializer emits IsDeleted (Pascal-case) or isDeleted (camel-case). Hard-coding "c.isDeleted" risks silent failures if the JSON name ever differs.

• Please verify how ISoftDeletable.IsDeleted is mapped to JSON in your CosmosClient/serializer settings.
• Centralize the JSON field name in a single constant or helper that’s kept in sync with your naming policy. For example:

static class CosmosPropertyNames
{
    public const string SoftDelete = "isDeleted"; // or derive via your JsonSerializerOptions.PropertyNamingPolicy
}

query +=
  $" WHERE NOT IS_DEFINED(c.{CosmosPropertyNames.SoftDelete})" +
  $" OR c.{CosmosPropertyNames.SoftDelete} = false";

This ensures renames (or naming-policy changes) don’t break your soft-delete filter.


176-205:


❌ Incorrect review comment

Range query truncates data and discards continuation tokens

Limiting results to 1 000 items guards against memory blow-ups but breaks caller expectations—there’s no way to request the remaining pages.

  1. Return a (IEnumerable<T> Items, string? ContinuationToken) tuple (similar to GetPagedAsync) so callers can iterate.
  2. Remove the manual results.Count < 1000 check and let the SDK’s MaxItemCount drive page size.

Additionally, serialising DateTime via DateOnly.FromDateTime(...).ToString("yyyy-MM-dd") loses the time component. If asOfDate in Cosmos is stored as an ISO 8601 timestamp, the filter will be off by up to 24 h.

Also applies to: 233-250


🏁 Script executed:

#!/bin/bash
set -e

echo "Searching for DateOnly.FromDateTime usage"
rg -n "DateOnly\.FromDateTime"

echo -e "\nSearching for manual count logic (results.Count and Count <)"
rg -n "results\.Count" || true
rg -n "Count <" || true

echo -e "\nSearching for MaxItemCount usage"
rg -n "MaxItemCount" || true

echo -e "\nSearching for FeedIterator usage"
rg -n "FeedIterator" || true

echo -e "\nSearching for ContinuationToken usage"
rg -n "ContinuationToken" || true

echo -e "\nSearching for GetPagedAsync usage"
rg -n "GetPagedAsync" || true

Length of output: 12745


🏁 Script executed:

#!/bin/bash
set -e

# Display the signature and context around the method using parameters fromDate, toDate
sed -n '150,270p' src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryMethods.cs

Length of output: 5325


🏁 Script executed:

#!/bin/bash
set -e

echo "Locating all Mapper classes in Infrastructure:"
find src/Phoenix.MarketData.Infrastructure -type f -iname "*Mapper*.cs"

Length of output: 475


🏁 Script executed:

#!/bin/bash
set -e
echo "Showing BaseMarketDataMapper content:"
sed -n '1,200p' src/Phoenix.MarketData.Infrastructure/Mapping/BaseMarketDataMapper.cs

Length of output: 1798


🏁 Script executed:

#!/bin/bash
set -e

# Search for BaseMarketDataDto definition
rg -n "class BaseMarketDataDto" -n src

Length of output: 155


🏁 Script executed:

#!/bin/bash
set -e

# List core model files to locate FxSpotPriceData and CryptoOrdinalSpotPriceData
find src/Phoenix.MarketData.Core/Models -type f | sed -e 's/^/FILE: /g'

Length of output: 152


Ignore pagination and time‐component suggestions for QueryByRangeAsync

QueryByRangeAsync is by design a “quick” range lookup that:

  • Caps results at 1 000 items (via MaxItemCount = 1000 and the manual results.Count < 1000 break) to avoid large scans.
  • Filters only by date—fromDate/toDate are converted to DateOnly and compared against c.asOfDate.

If you need full paging, use the existing
GetPagedAsync(int pageSize, string? continuationToken…)
which returns (Items, ContinuationToken).

If you need to filter by time-of-day as well, either:

  • Extend QueryByRangeAsync to add c.asOfTime comparisons (a minimal sketch follows this note), or
  • Expose a new method that accepts time‐aware bounds.

Locations:

  • CosmosRepositoryMethods.cs QueryByRangeAsync (lines 176–205)

Likely an incorrect or invalid review comment.
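
For the time-of-day option above, a minimal sketch of a time-aware filter. It assumes asOfDate/asOfTime are stored as "yyyy-MM-dd" / "HH:mm:ss" strings and that fromTime/toTime are hypothetical TimeOnly parameters – verify the stored formats against the serializer settings before relying on this:

// Sketch: time-aware range query, parameterised to avoid concatenating values into SQL.
var queryDef = new QueryDefinition(
        "SELECT * FROM c WHERE c.asOfDate >= @fromDate AND c.asOfDate <= @toDate " +
        "AND (NOT IS_DEFINED(c.asOfTime) OR (c.asOfTime >= @fromTime AND c.asOfTime <= @toTime))")
    .WithParameter("@fromDate", fromDate.ToString("yyyy-MM-dd"))
    .WithParameter("@toDate", toDate.ToString("yyyy-MM-dd"))
    .WithParameter("@fromTime", fromTime.ToString("HH:mm:ss"))
    .WithParameter("@toTime", toTime.ToString("HH:mm:ss"));

var iterator = _container.GetItemQueryIterator<T>(queryDef,
    requestOptions: new QueryRequestOptions { MaxItemCount = 1000 });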

Comment on lines +114 to +117
{
_logger.LogWarning("No existing data found for changed event {EventId} for {AssetId}",
changedEvent.Id, changedEvent.AssetId);
}

💡 Verification agent

🧩 Analysis chain

Handle missing data case in ProcessChangedEventAsync

When no existing data is found for a changed event, the method logs a warning but doesn't take any action. Consider whether this should be treated as an error or if the data should be created anyway.


🏁 Script executed:

#!/bin/bash
# Check if there's code elsewhere that handles creating missing data for changed events

# Search for patterns that might handle the case of missing data in changed events
echo "Searching for code that might handle missing data in changed events..."
rg -A 10 -B 5 "No existing data found for changed event" --glob "*.cs" || echo "No direct references found"

# Check if there are any other event processors that might create missing data
echo "Checking for other processors that might create data..."
rg -A 5 "MarketDataIntegrationEvent.*AssetId" --glob "*.cs" | grep -v "MarketDataEventProcessor.cs" || echo "No other relevant processors found"

Length of output: 2371


Handle missing data for changed events
Currently, ProcessChangedEventAsync logs a warning when no record exists (MarketDataEventProcessor.cs:114–117) but then silently drops the event—no other processor creates missing data. You should either:

  • Treat this as an error (throw or escalate) to prevent silent failures
  • Or implement logic here to insert or initialize the missing data record before continuing

• File: src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs
• Lines: 114–117

🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs
around lines 114 to 117, the code logs a warning when no existing data is found
for a changed event but then silently ignores it. To fix this, either throw an
exception or escalate the issue to prevent silent failures, or implement logic
to create or initialize the missing data record before proceeding with
processing the event.

Comment on lines +36 to +73
public static async Task<EventGridPublisher> CreateAsync(
IMarketDataSecretProvider secretProvider,
ILogger<EventGridPublisher> logger)
{
if (secretProvider == null) throw new ArgumentNullException(nameof(secretProvider));
if (logger == null) throw new ArgumentNullException(nameof(logger));

try
{
var endpoint = await secretProvider.GetEventGridEndpointAsync();
var key = await secretProvider.GetEventGridKeyAsync();

// Validate secrets
if (string.IsNullOrWhiteSpace(endpoint))
{
throw new ArgumentException("Event Grid endpoint cannot be null or empty", nameof(endpoint));
}

if (string.IsNullOrWhiteSpace(key))
{
throw new ArgumentException("Event Grid key cannot be null or empty", nameof(key));
}

// Validate endpoint is a valid URI
if (!Uri.TryCreate(endpoint, UriKind.Absolute, out var uri))
{
throw new ArgumentException($"Invalid Event Grid endpoint URI: {endpoint}", nameof(endpoint));
}

var client = new EventGridPublisherClient(uri, new AzureKeyCredential(key));
return new EventGridPublisher(client, logger);
}
catch (Exception ex)
{
logger.LogError(ex, "Failed to create EventGridPublisher");
throw;
}
}

🛠️ Refactor suggestion

Add CancellationToken to CreateAsync factory method

The CreateAsync factory method should accept and forward a CancellationToken to the secret provider methods for consistency with other async methods.

        public static async Task<EventGridPublisher> CreateAsync(
            IMarketDataSecretProvider secretProvider,
-            ILogger<EventGridPublisher> logger)
+            ILogger<EventGridPublisher> logger,
+            CancellationToken cancellationToken = default)
        {
            if (secretProvider == null) throw new ArgumentNullException(nameof(secretProvider));
            if (logger == null) throw new ArgumentNullException(nameof(logger));

            try
            {
-                var endpoint = await secretProvider.GetEventGridEndpointAsync();
-                var key = await secretProvider.GetEventGridKeyAsync();
+                var endpoint = await secretProvider.GetEventGridEndpointAsync(cancellationToken);
+                var key = await secretProvider.GetEventGridKeyAsync(cancellationToken);

This change requires updating the IMarketDataSecretProvider interface and implementation to support cancellation tokens.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
public static async Task<EventGridPublisher> CreateAsync(
IMarketDataSecretProvider secretProvider,
ILogger<EventGridPublisher> logger)
{
if (secretProvider == null) throw new ArgumentNullException(nameof(secretProvider));
if (logger == null) throw new ArgumentNullException(nameof(logger));
try
{
var endpoint = await secretProvider.GetEventGridEndpointAsync();
var key = await secretProvider.GetEventGridKeyAsync();
// Validate secrets
if (string.IsNullOrWhiteSpace(endpoint))
{
throw new ArgumentException("Event Grid endpoint cannot be null or empty", nameof(endpoint));
}
if (string.IsNullOrWhiteSpace(key))
{
throw new ArgumentException("Event Grid key cannot be null or empty", nameof(key));
}
// Validate endpoint is a valid URI
if (!Uri.TryCreate(endpoint, UriKind.Absolute, out var uri))
{
throw new ArgumentException($"Invalid Event Grid endpoint URI: {endpoint}", nameof(endpoint));
}
var client = new EventGridPublisherClient(uri, new AzureKeyCredential(key));
return new EventGridPublisher(client, logger);
}
catch (Exception ex)
{
logger.LogError(ex, "Failed to create EventGridPublisher");
throw;
}
}
public static async Task<EventGridPublisher> CreateAsync(
IMarketDataSecretProvider secretProvider,
ILogger<EventGridPublisher> logger,
CancellationToken cancellationToken = default)
{
if (secretProvider == null) throw new ArgumentNullException(nameof(secretProvider));
if (logger == null) throw new ArgumentNullException(nameof(logger));
try
{
var endpoint = await secretProvider.GetEventGridEndpointAsync(cancellationToken);
var key = await secretProvider.GetEventGridKeyAsync(cancellationToken);
// Validate secrets
if (string.IsNullOrWhiteSpace(endpoint))
{
throw new ArgumentException("Event Grid endpoint cannot be null or empty", nameof(endpoint));
}
if (string.IsNullOrWhiteSpace(key))
{
throw new ArgumentException("Event Grid key cannot be null or empty", nameof(key));
}
// Validate endpoint is a valid URI
if (!Uri.TryCreate(endpoint, UriKind.Absolute, out var uri))
{
throw new ArgumentException($"Invalid Event Grid endpoint URI: {endpoint}", nameof(endpoint));
}
var client = new EventGridPublisherClient(uri, new AzureKeyCredential(key));
return new EventGridPublisher(client, logger);
}
catch (Exception ex)
{
logger.LogError(ex, "Failed to create EventGridPublisher");
throw;
}
}
🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs lines 36
to 73, the CreateAsync method lacks a CancellationToken parameter and does not
forward it to the secretProvider async calls. Update the CreateAsync method
signature to accept a CancellationToken parameter, then pass this token to
secretProvider.GetEventGridEndpointAsync and GetEventGridKeyAsync calls. Also,
update the IMarketDataSecretProvider interface and its implementations to
support CancellationToken parameters in these methods accordingly.

runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v3

🛠️ Refactor suggestion

Update GitHub Actions to latest versions.

Several actions are using older versions (v3) that should be updated to the latest available versions for better compatibility and security.

-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4

-        uses: actions/setup-dotnet@v3
+        uses: actions/setup-dotnet@v4

-        uses: actions/cache@v3
+        uses: actions/cache@v4

-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4

Also applies to: 19-19, 24-24, 59-59

🧰 Tools
🪛 actionlint (1.7.7)

16-16: the runner of "actions/checkout@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)

🤖 Prompt for AI Agents
In .github/workflows/build-and-test.yml at lines 16, 19, 24, and 59, the GitHub
Actions are pinned to older versions such as v3. Update these action versions to
the latest stable releases by checking the official GitHub Actions marketplace
or repositories and replacing the version tags accordingly to ensure improved
compatibility and security.

Comment on lines +22 to +29
{
// First attempt: If ID format contains partition info, extract it
string possiblePartitionKey = ExtractPossiblePartitionKeyFromId(id);
if (!string.IsNullOrEmpty(possiblePartitionKey))
{
return new PartitionKey(possiblePartitionKey);
}


🛠️ Refactor suggestion

Risk of returning an unverifiable partition key

The method immediately returns the value extracted from the ID without verifying that a document actually exists under that partition. If the ID format is inconsistent, subsequent reads/writes will fail with NotFound or incur cross-partition charges.

Consider querying for existence (or at least wrapping the attempt in a try/catch like you do below) before short-circuiting:

if (!string.IsNullOrEmpty(possiblePartitionKey))
{
-    return new PartitionKey(possiblePartitionKey);
+    if (await EntityExistsAsync(id, possiblePartitionKey, cancellationToken))
+        return new PartitionKey(possiblePartitionKey);
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
{
// First attempt: If ID format contains partition info, extract it
string possiblePartitionKey = ExtractPossiblePartitionKeyFromId(id);
if (!string.IsNullOrEmpty(possiblePartitionKey))
{
return new PartitionKey(possiblePartitionKey);
}
{
// First attempt: If ID format contains partition info, extract it
string possiblePartitionKey = ExtractPossiblePartitionKeyFromId(id);
if (!string.IsNullOrEmpty(possiblePartitionKey))
{
if (await EntityExistsAsync(id, possiblePartitionKey, cancellationToken))
return new PartitionKey(possiblePartitionKey);
}
🤖 Prompt for AI Agents
In
src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryPartitionKey.cs
around lines 22 to 29, the code returns a partition key extracted from the ID
without verifying if a document exists under that partition, risking failures or
cross-partition charges. Modify the code to verify the existence of a document
with the extracted partition key before returning it, either by querying for the
document or wrapping the return in a try/catch block to handle potential errors
gracefully.

Comment on lines +95 to +103
// Assuming the entity's ID and partition key can be determined from its properties
string id = entity.Id;
string partitionKey = $"{entity.DataType}:{entity.AssetClass}:{entity.Region}";

var response = await container.ReplaceItemAsync(
entity,
id,
new PartitionKey(partitionKey),
options);

🛠️ Refactor suggestion

Partition-key derivation duplicated & fragile

UpdateWithOptimisticConcurrencyAsync hard-codes the partition-key composition. If the partition-key schema changes, duplicated call sites like this one will not be caught at compile time and can silently diverge. Prefer re-using the same _partitionKeyResolver employed elsewhere, or expose repo.GetPartitionKey(entity):

-string partitionKey = $"{entity.DataType}:{entity.AssetClass}:{entity.Region}";
+string partitionKey = repo.ResolvePartitionKey(entity);

This guarantees a single source of truth.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Infrastructure/Repositories/VersioningExtensions.cs
around lines 95 to 103, the partition key is hard-coded when calling
ReplaceItemAsync, duplicating the partition key derivation logic. To fix this,
replace the hard-coded partition key construction with a call to the existing
_partitionKeyResolver or a method like repo.GetPartitionKey(entity) to ensure a
single source of truth for partition key derivation and avoid fragility if the
schema changes.

Comment on lines +22 to +57
public static async Task<int> GetNextVersionAsync<T>(
this CosmosRepository<T> repo,
string dataType,
string assetClass,
string assetId,
string region,
DateOnly asOfDate,
string documentType)
where T : class, IMarketDataEntity
{
var container = repo.GetContainer();
var query = container.GetItemLinqQueryable<T>(allowSynchronousQueryExecution: false)
.Where(e =>
e.AssetId == assetId &&
e.AssetClass == assetClass &&
e.Region == region &&
e.DataType == dataType &&
e.DocumentType == documentType &&
e.AsOfDate == asOfDate)
.OrderByDescending(e => e.Version)
.Take(1)
.ToFeedIterator();

while (query.HasMoreResults)
{
var response = await query.ReadNextAsync();
var latest = response.FirstOrDefault();
if (latest != null)
{
// Handle the nullable Version property properly
int currentVersion = latest.Version ?? 0;
return currentVersion + 1;
}
}
return 1;
}

🛠️ Refactor suggestion

Add CancellationToken to public API

GetNextVersionAsync is exposed as a public helper yet omits a CancellationToken. This is inconsistent with the repository interface (IRepository.GetNextVersionAsync) and prevents callers from propagating cancellation, especially for potentially long-running cross-partition queries.

-public static async Task<int> GetNextVersionAsync<T>(
+public static async Task<int> GetNextVersionAsync<T>(
+    this CosmosRepository<T> repo,
+    string dataType,
+    string assetClass,
+    string assetId,
+    string region,
+    DateOnly asOfDate,
+    string documentType,
+    CancellationToken cancellationToken = default)
 ...
-while (query.HasMoreResults)
+while (query.HasMoreResults)
 {
-    var response = await query.ReadNextAsync();
+    var response = await query.ReadNextAsync(cancellationToken);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
public static async Task<int> GetNextVersionAsync<T>(
this CosmosRepository<T> repo,
string dataType,
string assetClass,
string assetId,
string region,
DateOnly asOfDate,
string documentType)
where T : class, IMarketDataEntity
{
var container = repo.GetContainer();
var query = container.GetItemLinqQueryable<T>(allowSynchronousQueryExecution: false)
.Where(e =>
e.AssetId == assetId &&
e.AssetClass == assetClass &&
e.Region == region &&
e.DataType == dataType &&
e.DocumentType == documentType &&
e.AsOfDate == asOfDate)
.OrderByDescending(e => e.Version)
.Take(1)
.ToFeedIterator();
while (query.HasMoreResults)
{
var response = await query.ReadNextAsync();
var latest = response.FirstOrDefault();
if (latest != null)
{
// Handle the nullable Version property properly
int currentVersion = latest.Version ?? 0;
return currentVersion + 1;
}
}
return 1;
}
public static async Task<int> GetNextVersionAsync<T>(
this CosmosRepository<T> repo,
string dataType,
string assetClass,
string assetId,
string region,
DateOnly asOfDate,
string documentType,
CancellationToken cancellationToken = default)
where T : class, IMarketDataEntity
{
var container = repo.GetContainer();
var query = container.GetItemLinqQueryable<T>(allowSynchronousQueryExecution: false)
.Where(e =>
e.AssetId == assetId &&
e.AssetClass == assetClass &&
e.Region == region &&
e.DataType == dataType &&
e.DocumentType == documentType &&
e.AsOfDate == asOfDate)
.OrderByDescending(e => e.Version)
.Take(1)
.ToFeedIterator();
while (query.HasMoreResults)
{
var response = await query.ReadNextAsync(cancellationToken);
var latest = response.FirstOrDefault();
if (latest != null)
{
// Handle the nullable Version property properly
int currentVersion = latest.Version ?? 0;
return currentVersion + 1;
}
}
return 1;
}
🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Infrastructure/Repositories/VersioningExtensions.cs
lines 22 to 57, the public method GetNextVersionAsync lacks a CancellationToken
parameter, which is inconsistent with the IRepository interface and limits
cancellation support. Add a CancellationToken parameter to the method signature
with a default value, then pass this token to the ReadNextAsync call on the
query to enable proper cancellation propagation during async operations.

Comment on lines +330 to +356
public async Task<int> PurgeSoftDeletedAsync(CancellationToken cancellationToken = default)
{
try
{
// Use SQL query to get soft deleted items
var query = "SELECT * FROM c WHERE c.isDeleted = true";
var iterator = _container.GetItemQueryIterator<T>(new QueryDefinition(query));

var toDelete = new List<T>();
while (iterator.HasMoreResults)
{
var response = await iterator.ReadNextAsync(cancellationToken);
toDelete.AddRange(response);
}

int count = 0;
foreach (var entity in toDelete)
{
var partitionKey = GetPartitionKey(entity);
await _container.DeleteItemAsync<T>(entity.Id, partitionKey, cancellationToken: cancellationToken);
count++;
}
return count;
}
catch (Exception ex)
{
_logger.LogError(ex, "Error purging soft-deleted entities of type {EntityType}", typeof(T).Name);

🛠️ Refactor suggestion

⚠️ Potential issue

PurgeSoftDeletedAsync loads entire container into memory and deletes serially

Large datasets will exhaust memory and take minutes/hours:

  1. Stream deletes page-by-page instead of collecting toDelete in a List<T>.
  2. Batch deletes per partition using TransactionalBatch or parallel DeleteItemAsync calls.

Also consider adding a safeguard (e.g., max items or confirmation flag) to prevent accidental mass purges in production.
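
A minimal sketch of the page-by-page variant. The c.isDeleted field name and the GetPartitionKey helper follow the snippet above; the maxItems safeguard is an illustrative addition:

// Sketch: delete soft-deleted items page by page instead of buffering them all in memory.
public async Task<int> PurgeSoftDeletedAsync(int maxItems = 10_000,
    CancellationToken cancellationToken = default)
{
    var query = new QueryDefinition("SELECT * FROM c WHERE c.isDeleted = true");
    var iterator = _container.GetItemQueryIterator<T>(query,
        requestOptions: new QueryRequestOptions { MaxItemCount = 100 });

    int count = 0;
    while (iterator.HasMoreResults && count < maxItems)
    {
        var page = await iterator.ReadNextAsync(cancellationToken);
        foreach (var entity in page)
        {
            await _container.DeleteItemAsync<T>(entity.Id, GetPartitionKey(entity),
                cancellationToken: cancellationToken);
            count++;
            if (count >= maxItems) break; // safeguard against accidental mass purges
        }
    }
    return count;
}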

🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryMethods.cs
between lines 330 and 356, the PurgeSoftDeletedAsync method currently loads all
soft-deleted items into memory before deleting them serially, which can cause
memory exhaustion and slow performance on large datasets. Refactor the method to
process deletions page-by-page as results are read from the iterator, avoiding
accumulation in a list. Implement batch deletes per partition using Cosmos DB's
TransactionalBatch or perform parallel DeleteItemAsync calls to improve
efficiency. Additionally, add a safeguard such as a maximum item limit or a
confirmation flag to prevent accidental mass deletions in production
environments.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (3)
.github/workflows/build-and-test.yml (3)

24-24: 🛠️ Refactor suggestion

Upgrade actions/cache to v4
The cache action has a v4 release with performance and reliability enhancements.

-        uses: actions/cache@v3
+        uses: actions/cache@v4
🧰 Tools
🪛 actionlint (1.7.7)

24-24: the runner of "actions/cache@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


19-19: 🛠️ Refactor suggestion

Upgrade actions/setup-dotnet to v4
To stay aligned with the latest Action improvements and fixes, bump setup-dotnet to v4.

-        uses: actions/setup-dotnet@v3
+        uses: actions/setup-dotnet@v4
🧰 Tools
🪛 actionlint (1.7.7)

19-19: the runner of "actions/setup-dotnet@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


16-16: 🛠️ Refactor suggestion

Upgrade actions/checkout to v4
The workflow is using checkout@v3, but the latest stable major version is v4.

-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
🧰 Tools
🪛 actionlint (1.7.7)

16-16: the runner of "actions/checkout@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)

🧹 Nitpick comments (2)
.github/workflows/build-and-test.yml (2)

35-35: Specify Release configuration in Build step
By default dotnet build runs Debug builds; adding --configuration Release ensures compiled artifacts and coverage metrics align with production settings.

-        run: dotnet build --no-restore
+        run: dotnet build --no-restore --configuration Release

38-38: Specify Release configuration in Test step
Include --configuration Release in the test command to match the build configuration and improve consistency.

-        run: dotnet test --no-build --verbosity normal --settings tests/Phoenix.MarketData.Infrastructure.Tests/coverage.runsettings
+        run: dotnet test --configuration Release --no-build --verbosity normal --settings tests/Phoenix.MarketData.Infrastructure.Tests/coverage.runsettings
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e8ce251 and 1930df5.

📒 Files selected for processing (1)
  • .github/workflows/build-and-test.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/build-and-test.yml

16-16: the runner of "actions/checkout@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


19-19: the runner of "actions/setup-dotnet@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)


24-24: the runner of "actions/cache@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)

…a events

The commit includes:
- Add region property to market data events interface and implementations
- Enhance error handling in JsonSchemaValidator and MarketDataSecretProvider
- Simplify update-guidelines.js script by removing redundant try/catch blocks
- Fix duplicate code in EventGridPublisher and improve retry logic
- Refactor CosmosRepositoryExtensions for better code organization
- Improve JSON converters with better error messages
- Update tests to include the new region property
- Fix MarketDataEventProcessor to use event's region instead of default
- Add documentation for preview package in Functions project
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

♻️ Duplicate comments (15)
src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs (3)

6-8: Improve class documentation to explain use cases

The current documentation only describes what the class is (a no-op implementation) but not why one would use it. Add details about when this implementation should be used.

 /// <summary>
-/// No-operation implementation of ISecretCache for when caching isn't required
+/// No-operation implementation of ISecretCache for when caching isn't required.
+/// Useful for development environments, testing, or scenarios where secret values
+/// should always be retrieved from their original source.
 /// </summary>

9-10: Consider making the class sealed

Since this class is not designed for inheritance and represents a complete implementation of the interface, consider marking it as sealed to clarify design intent.

-internal class NoOpSecretCache : ISecretCache
+internal sealed class NoOpSecretCache : ISecretCache

11-11: Document the null return behavior in XML comments

Always returning null could violate the interface contract if callers expect previously cached values to be returned. Consider documenting this behavior explicitly in XML comments.

+/// <summary>
+/// Always returns null regardless of whether the secret was previously cached
+/// </summary>
 public string? GetSecret(string secretName) => null;
tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh (1)

14-15: 🛠️ Refactor suggestion

Handle directory change failure

The script should handle potential failures when changing directories to avoid silently continuing with an incorrect working directory.

-cd "$SCRIPT_DIR"
+cd "$SCRIPT_DIR" || { echo "Error: Failed to change to script directory"; exit 1; }
🧰 Tools
🪛 Shellcheck (0.10.0)

[warning] 15-15: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.

(SC2164)

src/Phoenix.MarketData.Infrastructure/Configuration/MemorySecretCache.cs (1)

11-18: ⚠️ Potential issue

Fix issue with expired secrets still being returned by GetSecret

The GetSecret method returns cached secrets without checking their expiration status, which can lead to stale credentials being used by clients. This is a security concern as expired secrets should not be accessible.

Implement this fix to ensure expired secrets are not returned:

public string? GetSecret(string secretName)
{
-    if (_cache.TryGetValue(secretName, out var cachedSecret))
-    {
-        return cachedSecret.Value;
-    }
-    return null;
+    var (value, isExpired) = GetSecretWithExpiration(secretName);
+    return isExpired ? null : value;
}

This ensures consumers never use an out-of-date secret by accident.

scripts/test-wrapper.js (2)

59-62: ⚠️ Potential issue

Add cross-platform support for opening the report

The current implementation uses the Windows-specific start command to open the report. This will fail on macOS and Linux platforms.

-execSync(`start "${reportDir}\\index.html"`, {
-  stdio: "inherit",
-  shell: true
-});
+// Cross-platform way to open the report
+const platform = process.platform;
+const reportPath = path.join(reportDir, "index.html");
+
+if (platform === 'win32') {
+  execSync(`start "${reportPath}"`, { stdio: "inherit", shell: true });
+} else if (platform === 'darwin') {
+  execSync(`open "${reportPath}"`, { stdio: "inherit", shell: true });
+} else {
+  execSync(`xdg-open "${reportPath}"`, { stdio: "inherit", shell: true });
+}
+console.log(`Report available at: ${reportPath}`);

71-76: ⚠️ Potential issue

Fix platform-specific file search implementation

The function uses Windows-specific command dir /s /b for finding coverage files, making it incompatible with non-Windows platforms.

-// Find all coverage files
-const findCoverageCmd = `dir /s /b tests\\*coverage.cobertura.xml`;
-const coverageFiles = execSync(findCoverageCmd, { encoding: 'utf8', shell: true })
-  .split('\r\n')
-  .filter(file => file.trim() !== '');
+// Cross-platform way to find files
+const findCoverageFiles = (dir) => {
+  const files = [];
+  const entries = fs.readdirSync(dir, { withFileTypes: true });
+  
+  for (const entry of entries) {
+    const fullPath = path.join(dir, entry.name);
+    if (entry.isDirectory()) {
+      files.push(...findCoverageFiles(fullPath));
+    } else if (entry.name.includes('coverage.cobertura.xml')) {
+      files.push(fullPath);
+    }
+  }
+  return files;
+};
+
+const coverageFiles = findCoverageFiles('tests');
tests/Phoenix.MarketData.Infrastructure.Tests/Events/MarketDataEventSubscriberTests.cs (3)

74-81: Improve logger verification specificity

Logger verification is too permissive with Times.AtLeastOnce, which doesn't accurately test the expected logging behavior.

Replace the permissive verification with specific expectations for exactly what should be logged and how many times:

_mockLogger.Verify(
    x => x.Log(
        It.Is<LogLevel>(l => l == LogLevel.Information),
        It.IsAny<EventId>(),
-       It.IsAny<It.IsAnyType>(),
+       It.Is<It.IsAnyType>((v, _) => v.ToString().Contains($"Processing created event for {id}")),
        It.IsAny<Exception>(),
        It.IsAny<Func<It.IsAnyType, Exception?, string>>()),
-   Times.AtLeastOnce);
+   Times.Once);

Consider adding similar specific verifications for each expected log message.


66-70: ⚠️ Potential issue

Make verification match setup conditions

The mock setup on lines 53-57 matches on AssetClass, but the verification on lines 66-70 doesn't, which could miss bugs where AssetClass is not handled properly.

_mockEventProcessor.Verify(
    p => p.ProcessCreatedEventAsync(
        It.Is<IMarketDataIntegrationEvent>(e =>
            e.Id == id &&
            e.AssetId == "EURUSD" &&
+           e.AssetClass == "fx" &&
            e.EventType == "Created"),
        It.IsAny<CancellationToken>()),
    Times.Once);

119-123: ⚠️ Potential issue

Make verification match setup conditions

The mock setup on lines 106-110 matches on Version, but the verification on lines 119-123 doesn't, which could miss bugs where Version is not handled properly.

_mockEventProcessor.Verify(
    p => p.ProcessChangedEventAsync(
        It.Is<IMarketDataIntegrationEvent>(e =>
            e.Id == id &&
            e.AssetId == "EURUSD" &&
+           e.Version == 2 &&
            e.EventType == "Changed"),
        It.IsAny<CancellationToken>()),
    Times.Once);
src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs (2)

30-66: ⚠️ Potential issue

Implementation needed for creation logic.

The ProcessCreatedEventAsync method currently queries for existing data but doesn't implement any logic to actually create or store the new data.

You should implement the actual creation logic after line 52 to handle the incoming event data appropriately. This might include:

// Additional processing logic as needed
// This could include:
// - Data validation or enrichment
// - Notifications to other systems
// - Cache updates
+// Actual implementation to store the created market data
+var marketDataEntity = new MarketDataEntity
+{
+    DataType = createdEvent.DataType,
+    AssetClass = createdEvent.AssetClass,
+    AssetId = createdEvent.AssetId,
+    // Map other properties from the event
+};
+
+await _marketDataService.PublishMarketDataAsync(marketDataEntity, cancellationToken);
+_logger.LogInformation("Successfully created market data for {AssetId}", createdEvent.AssetId);

114-118: ⚠️ Potential issue

Handle missing data for changed events.

When no record exists for a changed event, the code only logs a warning but doesn't take any action.

You should either:

  • Treat this as an error condition and throw an exception
  • Or implement logic to create the missing record when a changed event arrives for non-existent data
else
{
    _logger.LogWarning("No existing data found for changed event {EventId} for {AssetId}",
        changedEvent.Id, changedEvent.AssetId);
+    
+    // Option 1: Treat as error
+    throw new InvalidOperationException($"Cannot process change for non-existent data. Event ID: {changedEvent.Id}, Asset ID: {changedEvent.AssetId}");
+    
+    // Option 2: Create missing data
+    // await ProcessCreatedEventAsync(changedEvent, cancellationToken);
}
src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs (2)

36-73: 🛠️ Refactor suggestion

Add CancellationToken to CreateAsync factory method.

The CreateAsync factory method should accept and forward a CancellationToken parameter to the secret provider methods.

public static async Task<EventGridPublisher> CreateAsync(
    IMarketDataSecretProvider secretProvider,
-   ILogger<EventGridPublisher> logger)
+   ILogger<EventGridPublisher> logger,
+   CancellationToken cancellationToken = default)
{
    if (secretProvider == null) throw new ArgumentNullException(nameof(secretProvider));
    if (logger == null) throw new ArgumentNullException(nameof(logger));

    try
    {
-       var endpoint = await secretProvider.GetEventGridEndpointAsync();
-       var key = await secretProvider.GetEventGridKeyAsync();
+       var endpoint = await secretProvider.GetEventGridEndpointAsync(cancellationToken);
+       var key = await secretProvider.GetEventGridKeyAsync(cancellationToken);

231-234: ⚠️ Potential issue

Add null check for eventType parameter.

The DeriveTopicFromEventType method doesn't check if the eventType parameter is null, which could lead to NullReferenceException.

private static string DeriveTopicFromEventType(string eventType)
{
+   if (string.IsNullOrEmpty(eventType)) 
+       throw new ArgumentNullException(nameof(eventType));
    return string.Concat(eventType.Select((x, i) => i > 0 && char.IsUpper(x) ? "-" + x.ToString().ToLower() : x.ToString().ToLower()));
}
src/Phoenix.MarketData.Infrastructure/Configuration/MarketDataSecretProvider.cs (1)

230-233: 🛠️ Refactor suggestion

Fully qualify or rename the KeyNotFoundException class.

The custom KeyNotFoundException class conflicts with System.Collections.Generic.KeyNotFoundException, which could cause confusion and potential issues.

Rename the exception to something more specific:

- public class KeyNotFoundException : Exception
+ public class SecretNotFoundException : Exception
{
-    public KeyNotFoundException(string message, Exception innerException) : base(message, innerException) { }
+    public SecretNotFoundException(string message, Exception innerException) : base(message, innerException) { }
}

Also update the reference at line 184:

- throw new KeyNotFoundException($"Secret '{secretName}' not found in Key Vault", ex);
+ throw new SecretNotFoundException($"Secret '{secretName}' not found in Key Vault", ex);
🧹 Nitpick comments (7)
src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs (1)

30-33: Missing XML documentation for CacheSecretWithExpiration method parameters

The XML comments for this method should document all parameters, including secretName, value, and expiration, to maintain consistency with the interface documentation.

 /// <summary>
 /// Caches a secret with expiration information - no-op implementation does nothing
 /// </summary>
+/// <param name="secretName">The name of the secret</param>
+/// <param name="value">The secret value</param>
+/// <param name="expiration">When the cached secret expires</param>
 public void CacheSecretWithExpiration(string secretName, string value, DateTimeOffset expiration) { }
tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh (1)

53-57: Improve IFS usage for multiple coverage files

When merging multiple coverage files, the IFS approach could be improved to handle filenames with spaces or special characters more robustly.

-    REPORTS_ARG="-reports:$(IFS=','; echo "${COVERAGE_FILES[*]}")"
+    # Create a comma-separated list of properly quoted filenames
+    REPORTS=()
+    for file in "${COVERAGE_FILES[@]}"; do
+        REPORTS+=("$file")
+    done
+    REPORTS_ARG="-reports:$(IFS=','; echo "${REPORTS[*]}")"
scripts/coverage-report.js (1)

91-94: Consider adding more robust GitHub URL pattern matching

The current regex for replacing GitHub URLs might not handle all possible URL patterns. Consider a more robust approach.

 // Replace any GitHub URLs with local paths
-const processed = content.replace(
-  /https:\/\/raw\.githubusercontent\.com\/[^"]+\/src\//g,
-  "src/",
-);
+const processed = content.replace(
+  /https:\/\/raw\.githubusercontent\.com\/[^"]+?\/(src|tests)\//g,
+  (match, folder) => `${folder}/`
+);
src/Phoenix.MarketData.Infrastructure/Configuration/MemorySecretCache.cs (1)

46-50: Fix indentation in CacheSecret method

There's an extra indentation in the closing brace of the method.

public void CacheSecret(string secretName, string value)
{
    // Default to 1 hour expiration if not specified
    CacheSecretWithExpiration(secretName, value, DateTimeOffset.UtcNow.AddHours(1));
-    }
+}
src/Phoenix.MarketData.Core/Events/IntegrationEvents/MarketDataEvents.cs (2)

52-52: Consider using constants for AssetClass values

The AssetClass property likely accepts specific values from the AssetClass constants class that was provided in the relevant code snippets.

You could add validation or type safety by using an enum or referencing the constants directly:

/// <summary>Gets the asset class</summary>
-public required string AssetClass { get; init; }
+public required string AssetClass { get; init; }

// Consider adding validation in constructors:
// if (!IsValidAssetClass(assetClass)) throw new ArgumentException("Invalid asset class", nameof(assetClass));

// And adding a private helper method:
// private static bool IsValidAssetClass(string assetClass) => 
//     assetClass == Domain.Models.AssetClass.Fx ||
//     assetClass == Domain.Models.AssetClass.Equity || 
//     /* other valid asset classes */;

36-37: Consider adding validation for Region property

The Region property was recently added to the interface, but there's no validation or documentation on what values are acceptable.

Consider adding validation for valid regions and documenting the acceptable values in the XML comments:

/// <summary>Gets the region for the event</summary>
+/// <remarks>
+/// Valid values are: "GLOBAL", "US", "EMEA", "APAC", etc.
+/// </remarks>
public string Region { get; }
src/Phoenix.MarketData.Infrastructure/Configuration/MarketDataSecretProvider.cs (1)

130-145: Simplify cache retrieval logic.

The current implementation first tries TryGetSecret, then falls back to GetSecret if it returns false. Since GetSecret provides less information (no expiration status), this approach is unnecessarily complex.

- // Try to get from cache first, with expiration
- if (_secretCache.TryGetSecret(secretName, out var cachedValue, out var isExpired))
- {
-     if (!string.IsNullOrEmpty(cachedValue) && !isExpired)
-     {
-         _logger?.LogDebug("Retrieved secret '{SecretName}' from cache", secretName);
-         return cachedValue;
-     }
- }
- else
- {
-     // Fallback to GetSecret if TryGetSecret returns false
-     cachedValue = _secretCache.GetSecret(secretName);
-     // Assume it's expired if we need to fall back to GetSecret
-     isExpired = true;
- }
+ // Try to get from cache first, with expiration
+ string? cachedValue = null;
+ bool isExpired = true;
+ bool hasValue = _secretCache.TryGetSecret(secretName, out cachedValue, out isExpired);
+ 
+ if (hasValue && !string.IsNullOrEmpty(cachedValue) && !isExpired)
+ {
+     _logger?.LogDebug("Retrieved secret '{SecretName}' from cache", secretName);
+     return cachedValue;
+ }

This simplifies the logic and makes the intention clearer. The GetSecret fallback is unnecessary since you can access the value from TryGetSecret even if it returns false.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1930df5 and 6036b1d.

📒 Files selected for processing (20)
  • package.json (1 hunks)
  • scripts/coverage-report.js (1 hunks)
  • scripts/test-wrapper.js (1 hunks)
  • scripts/update-guidelines.js (1 hunks)
  • src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs (1 hunks)
  • src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Events/IntegrationEvents/MarketDataEvents.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Repositories/IRepository.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/JsonConverters/JsonConverters.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/Phoenix.MarketData.Functions.csproj (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Configuration/MarketDataSecretProvider.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Configuration/MemorySecretCache.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryExtensions.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Schemas/JsonSchemaValidator.cs (2 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Events/MarketDataEventSubscriberTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
  • scripts/update-guidelines.js
  • src/Phoenix.MarketData.Functions/Phoenix.MarketData.Functions.csproj
  • src/Phoenix.MarketData.Infrastructure/Schemas/JsonSchemaValidator.cs
  • package.json
  • src/Phoenix.MarketData.Functions/JsonConverters/JsonConverters.cs
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryExtensions.cs
  • src/Phoenix.MarketData.Domain/Repositories/IRepository.cs
  • src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs
🧰 Additional context used
🧬 Code Graph Analysis (4)
src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs (1)
src/Phoenix.MarketData.Infrastructure/Configuration/MemorySecretCache.cs (4)
  • GetSecret (11-18)
  • TryGetSecret (32-44)
  • CacheSecret (46-50)
  • CacheSecretWithExpiration (52-56)
src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs (1)
src/Phoenix.MarketData.Infrastructure/Configuration/MemorySecretCache.cs (5)
  • GetSecret (11-18)
  • CacheSecret (46-50)
  • Value (21-29)
  • TryGetSecret (32-44)
  • CacheSecretWithExpiration (52-56)
src/Phoenix.MarketData.Core/Events/IntegrationEvents/MarketDataEvents.cs (1)
src/Phoenix.MarketData.Domain/Models/AssetClass.cs (1)
  • AssetClass (6-14)
src/Phoenix.MarketData.Infrastructure/Configuration/MarketDataSecretProvider.cs (3)
src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs (5)
  • NoOpSecretCache (9-34)
  • TryGetSecret (23-28)
  • GetSecret (11-11)
  • Value (18-18)
  • CacheSecretWithExpiration (33-33)
src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs (3)
  • TryGetSecret (19-22)
  • GetSecret (12-12)
  • CacheSecretWithExpiration (37-37)
src/Phoenix.MarketData.Infrastructure/Configuration/MemorySecretCache.cs (4)
  • TryGetSecret (32-44)
  • GetSecret (11-18)
  • Value (21-29)
  • CacheSecretWithExpiration (52-56)
🪛 Shellcheck (0.10.0)
tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh

[warning] 15-15: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.

(SC2164)


[warning] 42-42: Prefer mapfile or read -a to split command output (or quote to avoid splitting).

(SC2207)

🪛 GitHub Check: Build and Test
src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs

[warning] 92-92:
The variable 'defaultRegion' is assigned but its value is never used


[warning] 92-92:
The variable 'defaultRegion' is assigned but its value is never used

🔇 Additional comments (10)
src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs (4)

5-12: Well-designed interface for secret caching operations

The interface definition is clear, with proper XML documentation and a focused set of methods for retrieving secrets. The GetSecret method provides a straightforward way to retrieve cached secrets by name.


14-22: Good implementation of the conventional Try pattern

The TryGetSecret method properly follows the .NET conventional Try pattern, returning a boolean success indicator and using out parameters for the values. This is a cleaner approach than using tuples and makes error handling more explicit for callers.


24-29: Clear contract for basic secret caching

The CacheSecret method provides a simple way to cache a secret without worrying about expiration details. The documentation is clear and the method signature is appropriately simple.


31-37: Expiration-aware caching method is well defined

The CacheSecretWithExpiration method extends the basic caching functionality with explicit expiration control. The method is well documented and uses appropriate parameter types.
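
Taken together, these comments describe a small, focused cache contract. As a point of reference only, here is a minimal sketch of what the ISecretCache surface could look like, reconstructed from the method names and parameters discussed in this review (the real interface in src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs may differ):

// Hypothetical reconstruction for illustration; not copied from the repository.
public interface ISecretCache
{
    /// <summary>Returns the cached secret value, or null when nothing is cached.</summary>
    string? GetSecret(string secretName);

    /// <summary>Conventional Try pattern: reports whether a value exists and whether it has expired.</summary>
    bool TryGetSecret(string secretName, out string? value, out bool isExpired);

    /// <summary>Caches a secret using the implementation's default expiration.</summary>
    void CacheSecret(string secretName, string value);

    /// <summary>Caches a secret with an explicit absolute expiration.</summary>
    void CacheSecretWithExpiration(string secretName, string value, DateTimeOffset expiration);
}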

tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh (3)

18-24: Good test execution with proper error handling

The script correctly runs tests with coverage collection and checks the exit code, aborting on failure. This ensures coverage reports are only generated when tests pass successfully.


58-63: Good error handling for report generation

The script properly checks the exit code of the report generation command and exits with the same code if it fails. This ensures failures are properly propagated to the caller.


69-79: Robust browser opening with platform detection

The script intelligently detects the operating system and uses the appropriate command to open the report in a browser. It also provides fallback instructions when automatic opening isn't possible.

scripts/coverage-report.js (2)

16-27: Good installation check for ReportGenerator

The script properly checks if the ReportGenerator tool is installed and installs it if needed, with appropriate error handling.


9-11: 🛠️ Refactor suggestion

Add error handling for test execution

The test execution doesn't have proper error handling. If tests fail, the script continues to generate reports, which might be misleading.

 console.log("Running tests on the solution...");
-execSync("dotnet test Phoenix.MarketDataPlatform.sln", { stdio: "inherit" });
-console.log("Tests completed successfully");
+try {
+  execSync("dotnet test Phoenix.MarketDataPlatform.sln", { stdio: "inherit" });
+  console.log("Tests completed successfully");
+} catch (error) {
+  console.error("Tests failed with error:", error.message);
+  process.exit(1);
+}

Likely an incorrect or invalid review comment.

src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs (1)

205-207: ❌ Incorrect review comment

Verify version comparison logic.

The current implementation checks if marketData.Version.GetValueOrDefault() > 1 to determine if it's a change or creation event, which could incorrectly trigger a change event when Version is null.

- var eventType = (marketData.Version.GetValueOrDefault() > 1) 
+ var eventType = (marketData.Version.HasValue && marketData.Version.Value > 1)
    ? "Phoenix.MarketData.DataChanged" 
    : "Phoenix.MarketData.DataCreated";

Run this script to verify if there are any existing cases where Version is null:


🏁 Script executed:

#!/bin/bash
# Check if there are any cases where Version is null or undefined
rg -A 2 "Version\s*=\s*null" --glob "*.cs" || echo "No explicit null assignments found"

# Check how Version is handled in tests
rg -A 5 -B 5 "marketData.*Version" --glob "*Tests.cs"

Length of output: 179


Ignore version null check suggestion

The existing logic already treats a null Version as 0 via GetValueOrDefault(), so it will correctly yield "Phoenix.MarketData.DataCreated" when Version is null or ≤ 1, and only "Phoenix.MarketData.DataChanged" when Version > 1. The proposed change adds no new safety and can be disregarded.

• File: src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs
• Lines: 205–207

Likely an incorrect or invalid review comment.
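
For reference, a tiny standalone illustration (not taken from the repository) of why GetValueOrDefault() already covers the null case:

int? version = null;
// GetValueOrDefault() yields 0 for a null Nullable<int>, so "0 > 1" is false
// and the expression falls through to the creation event type.
var eventType = (version.GetValueOrDefault() > 1)
    ? "Phoenix.MarketData.DataChanged"
    : "Phoenix.MarketData.DataCreated";
Console.WriteLine(eventType); // prints "Phoenix.MarketData.DataCreated"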

Comment on lines +15 to +18
/// <summary>
/// Gets a secret with its expiration information - no-op implementation returns null
/// </summary>
public (string? Value, bool IsExpired) GetSecretWithExpiration(string secretName) => (null, false);

⚠️ Potential issue

Interface inconsistency: GetSecretWithExpiration method not in ISecretCache

The GetSecretWithExpiration method appears to be implementing a method that's not defined in the ISecretCache interface shown in the first file. This suggests an inconsistency between the interface and implementation.

#!/bin/bash
# Check if GetSecretWithExpiration is defined in the ISecretCache interface
grep -A 2 "GetSecretWithExpiration" src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs || echo "Method not found in interface"
🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs around
lines 15 to 18, the GetSecretWithExpiration method is implemented but not
defined in the ISecretCache interface, causing inconsistency. To fix this,
either remove the GetSecretWithExpiration method from NoOpSecretCache if it is
not needed or add its definition to the ISecretCache interface in
src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs to ensure the
implementation matches the interface.

Comment on lines +20 to +28
/// <summary>
/// Tries to get a secret with expiration information - no-op implementation always returns false
/// </summary>
public bool TryGetSecret(string secretName, out string? value, out bool isExpired)
{
value = null;
isExpired = false;
return false;
}

🛠️ Refactor suggestion

Return value inconsistency in TryGetSecret implementation

The isExpired parameter is set to false for the no-op implementation, but the MemorySecretCache implementation sets it to true when the secret is not found. Consider making this behavior consistent across implementations.

 public bool TryGetSecret(string secretName, out string? value, out bool isExpired)
 {
     value = null;
-    isExpired = false;
+    isExpired = true;
     return false;
 }
🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs around
lines 20 to 28, the TryGetSecret method sets isExpired to false when the secret
is not found, which is inconsistent with MemorySecretCache that sets isExpired
to true in this case. Update the NoOpSecretCache implementation to set isExpired
to true when returning false to align behavior across implementations.

Comment on lines +42 to +47
COVERAGE_FILES=( $(find "$TEST_RESULTS_DIR" -name "coverage.cobertura.xml" 2>/dev/null) )

if [ ${#COVERAGE_FILES[@]} -eq 0 ]; then
echo "Error: No coverage files found at $TEST_RESULTS_DIR"
exit 1
elif [ ${#COVERAGE_FILES[@]} -eq 1 ]; then

🛠️ Refactor suggestion

Use safer array assignment for handling filenames with spaces

The current array assignment method can break when filenames contain spaces or special characters. Consider using a more robust approach.

-COVERAGE_FILES=( $(find "$TEST_RESULTS_DIR" -name "coverage.cobertura.xml" 2>/dev/null) )
+# Use mapfile to safely handle filenames with spaces
+mapfile -t COVERAGE_FILES < <(find "$TEST_RESULTS_DIR" -name "coverage.cobertura.xml" -type f 2>/dev/null)
🧰 Tools
🪛 Shellcheck (0.10.0)

[warning] 42-42: Prefer mapfile or read -a to split command output (or quote to avoid splitting).

(SC2207)

🤖 Prompt for AI Agents
In tests/Phoenix.MarketData.Infrastructure.Tests/generate-coverage-report.sh
around lines 42 to 47, the array assignment for COVERAGE_FILES uses command
substitution that can break with filenames containing spaces or special
characters. Replace the current assignment with a safer method such as using a
while-read loop with null delimiters or mapfile to correctly handle filenames
with spaces, ensuring each filename is properly captured as a separate array
element.

Comment on lines +91 to +93
// Use a configurable default region since Region isn't in the interface
const string defaultRegion = "global"; // Consider moving this to configuration


⚠️ Potential issue

Remove unused variable.

The defaultRegion constant is defined but never used in the code. Line 98 directly uses changedEvent.Region instead.

Either remove this unused variable or use it as a fallback when changedEvent.Region is null:

- // Use a configurable default region since Region isn't in the interface
- const string defaultRegion = "global"; // Consider moving this to configuration

+ // Extract required data
  var latestData = await _marketDataService.GetLatestMarketDataAsync(
      changedEvent.DataType,
      changedEvent.AssetClass,
      changedEvent.AssetId,
-     changedEvent.Region, // Use event's region
+     changedEvent.Region ?? "global", // Use event's region or default to global
      dateOnly,
      changedEvent.DocumentType,
      cancellationToken);
🧰 Tools
🪛 GitHub Check: Build and Test

[warning] 92-92:
The variable 'defaultRegion' is assigned but its value is never used


[warning] 92-92:
The variable 'defaultRegion' is assigned but its value is never used

🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs
around lines 91 to 93, the constant defaultRegion is declared but never used. To
fix this, either remove the defaultRegion variable entirely or modify the code
to use defaultRegion as a fallback value when changedEvent.Region is null or
empty, ensuring the variable serves a purpose.

JustAGhosT and others added 4 commits May 23, 2025 03:32
fix: adjust parameter types in IRepository and MarketDataEventSubscriber
chore: clean up .gitignore and update package.json repository URL
docs: enhance SaveMarketData.http with clearer date format notes
style: update SwaggerUIFunction to use latest swagger-ui-dist scripts
feat: implement soft delete filtering in CosmosRepositoryQueryMethods
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
This commit moves all code from Phoenix.MarketData.Core namespace to Phoenix.MarketData.Domain namespace to consolidate domain-related code in a single namespace. It also adds ConfigureAwait(false) to async calls in the validation decorator, adds null checks to MarketDataEvents constructor, and improves error handling in various components.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

♻️ Duplicate comments (28)
src/Phoenix.MarketData.Core/Domain/MarketDataCreatedEvent.cs (2)

15-20: 🛠️ Refactor suggestion

Properties should be immutable since events should be immutable

Event objects represent something that already happened and should not be modifiable after creation. Make the properties init-only to enforce immutability.

-    public T Data { get; }
-    public DateTimeOffset OccurredAt { get; }
+    public T Data { get; init; }
+    public DateTimeOffset OccurredAt { get; init; }

26-30: 🛠️ Refactor suggestion

Add support for timestamp injection to enable deterministic testing

The current implementation hardcodes DateTimeOffset.UtcNow, making it impossible to test with specific timestamps or recreate historical events.

-    public MarketDataCreatedEvent(T data)
+    public MarketDataCreatedEvent(T data, DateTimeOffset? occurredAt = null)
     {
         Data = data ?? throw new ArgumentNullException(nameof(data));
-        OccurredAt = DateTimeOffset.UtcNow;
+        OccurredAt = occurredAt ?? DateTimeOffset.UtcNow;
     }

This change will:

  1. Enable deterministic testing with specific timestamps (see the usage sketch after this list)
  2. Allow recreation of historical events
  3. Improve performance in high-volume scenarios by allowing batch events to share a timestamp
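
A short usage sketch of the optional parameter in a test (the test name and string payload are illustrative assumptions, not taken from the test suite):

[Fact]
public void CreatedEvent_UsesInjectedTimestamp()
{
    var fixedTime = new DateTimeOffset(2023, 1, 1, 12, 0, 0, TimeSpan.Zero);

    // Passing an explicit timestamp keeps the assertion independent of the clock.
    var evt = new MarketDataCreatedEvent<string>("payload", fixedTime);

    Assert.Equal(fixedTime, evt.OccurredAt);
}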
src/Phoenix.MarketData.Infrastructure/Validation/CryptoOrdinalSpotPriceDataValidator.cs (3)

13-48: Consider consistent error handling for null input

The method throws an exception for null input but returns validation errors for other invalid states. For consistency, consider including null check in the validation errors.

 public Task<ValidationResult> ValidateAsync(CryptoOrdinalSpotPriceData data, CancellationToken cancellationToken = default)
 {
-    ArgumentNullException.ThrowIfNull(data);
     var errors = new List<ValidationError>();

+    if (data == null)
+    {
+        errors.Add(new ValidationError("Data", "Price data cannot be null"));
+        return Task.FromResult(new ValidationResult(errors));
+    }
+
     // Validation logic...

Alternatively, document this behavior in comments if throwing exceptions for null inputs is the preferred pattern across your codebase.


29-32: 🛠️ Refactor suggestion

Split combined validation check for clearer error messages

The current implementation combines two checks (null/empty and not "crypto") into one validation error. Splitting these provides more specific error messages to the client.

-if (string.IsNullOrWhiteSpace(data.AssetClass) || data.AssetClass.ToLower() != "crypto")
-{
-    errors.Add(new ValidationError("AssetClass", "Asset class must be 'crypto'"));
-}
+if (string.IsNullOrWhiteSpace(data.AssetClass))
+{
+    errors.Add(new ValidationError("AssetClass", "Asset class is required"));
+}
+else if (!data.AssetClass.Equals("crypto", StringComparison.OrdinalIgnoreCase))
+{
+    errors.Add(new ValidationError("AssetClass", "Asset class must be 'crypto'"));
+}

This also fixes the case-sensitive comparison by using StringComparison.OrdinalIgnoreCase.


23-26: 🛠️ Refactor suggestion

Use regex pattern matching for XXX/YYY format

The current check using Contains('/') is too permissive and would accept invalid formats like "/XXX" or "XXX/YYY/ZZZ".

-else if (!data.AssetId.Contains('/'))
+else if (!System.Text.RegularExpressions.Regex.IsMatch(data.AssetId, @"^[A-Z0-9]+/[A-Z0-9]+$", System.Text.RegularExpressions.RegexOptions.IgnoreCase))
 {
     errors.Add(new ValidationError("AssetId", "Asset ID format is invalid, expected format: XXX/YYY"));
 }
tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/FxSpotPriceDataMapperTests.cs (4)

23-23: Avoid using DateTime.Now in tests.

Using DateTime.Now in tests can lead to flaky tests since the value changes between test runs. Consider using a fixed datetime value instead.

-                AsOfTime = TimeOnly.FromDateTime(DateTime.Now),
+                AsOfTime = TimeOnly.FromDateTime(new DateTime(2023, 1, 1, 12, 0, 0)),

35-35: 🛠️ Refactor suggestion

Test tag assignment, but not asserting tags.

You're setting tags on the domain object, but there's no assertion to verify they're properly mapped to the DTO.

Add the following assertions after line 46:

            Assert.Equal(domain.Price, result.Price);
            Assert.Equal((PriceSideDto)domain.Side, result.Side); // Cast to PriceSideDto for comparison
+            // Verify tags are mapped correctly
+            Assert.NotNull(result.Tags);
+            Assert.Equal(2, result.Tags.Count);
+            Assert.Contains("tag1", result.Tags);
+            Assert.Contains("tag2", result.Tags);

129-136: 🛠️ Refactor suggestion

Insufficient assertions in static mapper test.

The test only verifies a few properties but doesn't check all relevant fields that should be mapped identically between static and instance methods.

Add more comprehensive assertions:

            // Assert
            Assert.Equal(instanceResult.AssetId, staticResult.AssetId);
            Assert.Equal(instanceResult.Price, staticResult.Price);
            Assert.Equal(instanceResult.Side, staticResult.Side);
+            Assert.Equal(instanceResult.AsOfDate, staticResult.AsOfDate);
+            Assert.Equal(instanceResult.AsOfTime, staticResult.AsOfTime);
+            Assert.Equal(instanceResult.SchemaVersion, staticResult.SchemaVersion);
+            Assert.Equal(instanceResult.AssetClass, staticResult.AssetClass);
+            Assert.Equal(instanceResult.DataType, staticResult.DataType);
+            Assert.Equal(instanceResult.Region, staticResult.Region);
+            Assert.Equal(instanceResult.DocumentType, staticResult.DocumentType);

162-168: 🛠️ Refactor suggestion

Insufficient assertions in domain mapping test.

Similar to the previous test, this one also lacks comprehensive assertions for the static mapping method.

            // Assert
            Assert.Equal(instanceResult.AssetId, staticResult.AssetId);
            Assert.Equal(instanceResult.Price, staticResult.Price);
            Assert.Equal(instanceResult.Side, staticResult.Side);
+            Assert.Equal(instanceResult.AsOfDate, staticResult.AsOfDate);
+            Assert.Equal(instanceResult.AsOfTime, staticResult.AsOfTime);
+            Assert.Equal(instanceResult.SchemaVersion, staticResult.SchemaVersion);
+            Assert.Equal(instanceResult.AssetClass, staticResult.AssetClass);
+            Assert.Equal(instanceResult.DataType, staticResult.DataType);
+            Assert.Equal(instanceResult.Region, staticResult.Region);
+            Assert.Equal(instanceResult.DocumentType, staticResult.DocumentType);
tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs (3)

23-23: Avoid using DateTime.Now in tests.

Using DateTime.Now in tests can lead to flaky tests since the value changes between test runs. Consider using a fixed datetime value instead.

-                AsOfTime = TimeOnly.FromDateTime(DateTime.Now),
+                AsOfTime = TimeOnly.FromDateTime(new DateTime(2023, 1, 1, 12, 0, 0)),

208-212: ⚠️ Potential issue

Fix parameter types in PriceSideDtoToDomainConversion test.

The test method parameter is named dtoSide but is of type PriceSide (domain type), which is confusing. Also, the InlineData attributes use PriceSide values when they should use PriceSideDto values since this is a DTO-to-domain conversion test.

        [Theory]
-       [InlineData(PriceSide.Ask, PriceSide.Ask)]  // Fixed: Changed from PriceSideDto to PriceSide
-       [InlineData(PriceSide.Bid, PriceSide.Bid)]  // Fixed: Changed from PriceSideDto to PriceSide
-       [InlineData(PriceSide.Mid, PriceSide.Mid)]  // Fixed: Changed from PriceSideDto to PriceSide
-       public void PriceSideDtoToDomainConversion_ShouldMapCorrectly(PriceSide dtoSide, PriceSide expectedDomainSide)
+       [InlineData(PriceSideDto.Ask, PriceSide.Ask)]
+       [InlineData(PriceSideDto.Bid, PriceSide.Bid)]
+       [InlineData(PriceSideDto.Mid, PriceSide.Mid)]
+       public void PriceSideDtoToDomainConversion_ShouldMapCorrectly(PriceSideDto dtoSide, PriceSide expectedDomainSide)

163-166: 🛠️ Refactor suggestion

Insufficient assertions in static mapper test.

The test only verifies a few properties but doesn't check all relevant fields that should be mapped identically between static and instance methods.

            // Assert
            Assert.Equal(instanceResult.AssetId, staticResult.AssetId);
            Assert.Equal(instanceResult.Price, staticResult.Price);
            Assert.Equal(instanceResult.CollectionName, staticResult.CollectionName);
+            Assert.Equal(instanceResult.AsOfDate, staticResult.AsOfDate);
+            Assert.Equal(instanceResult.AsOfTime, staticResult.AsOfTime);
+            Assert.Equal(instanceResult.SchemaVersion, staticResult.SchemaVersion);
+            Assert.Equal(instanceResult.AssetClass, staticResult.AssetClass);
+            Assert.Equal(instanceResult.DataType, staticResult.DataType);
+            Assert.Equal(instanceResult.Region, staticResult.Region);
+            Assert.Equal(instanceResult.DocumentType, staticResult.DocumentType);
+            Assert.Equal(instanceResult.ParentInscriptionId, staticResult.ParentInscriptionId);
+            Assert.Equal(instanceResult.InscriptionId, staticResult.InscriptionId);
+            Assert.Equal(instanceResult.Side, staticResult.Side);
src/Phoenix.MarketData.Domain/Repositories/IRepository.cs (1)

43-43: 🛠️ Refactor suggestion

SaveAsync should return more information about the save operation

The SaveAsync method returns only a string (presumably the ID), but it would be more useful to return additional information such as version number or ETag for concurrency control, similar to how the Get methods return both the entity and an ETag.

-        Task<string> SaveAsync(T marketData);
+        Task<(string Id, string ETag)> SaveAsync(T marketData);
src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs (3)

46-47: 🛠️ Refactor suggestion

Validate AssetId before assignment

The code assigns dto.AssetId directly to both AssetId and DisplayAssetId without validating that it's not null or empty.

-                AssetId = dto.AssetId, // This will be auto-normalized to lowercase
-                DisplayAssetId = dto.AssetId, // Preserve the original case
+                AssetId = !string.IsNullOrEmpty(dto.AssetId) ? dto.AssetId : throw new ArgumentException("AssetId cannot be null or empty", nameof(dto)), // This will be auto-normalized to lowercase
+                DisplayAssetId = dto.AssetId, // Preserve the original case

52-52: ⚠️ Potential issue

Preserve original CreateTimestamp

Setting CreateTimestamp to DateTime.UtcNow overwrites potentially important provenance information from the original data source. Unless there's a specific requirement to timestamp at ingestion, the original timestamp should be preserved.

-                CreateTimestamp = DateTime.UtcNow,
+                CreateTimestamp = dto.CreateTimeStamp ?? DateTime.UtcNow,

86-86: 🛠️ Refactor suggestion

Make singleton instance thread-safe

The static instance is not thread-safe. Consider using Lazy to ensure thread-safe initialization.

-        private static readonly FxSpotPriceDataMapper _instance = new();
+        private static readonly Lazy<FxSpotPriceDataMapper> _lazyInstance = new Lazy<FxSpotPriceDataMapper>(() => new FxSpotPriceDataMapper(), isThreadSafe: true);
+        private static FxSpotPriceDataMapper _instance => _lazyInstance.Value;
src/Phoenix.MarketData.Functions/SaveDocumentToDb.cs (3)

87-87: Use more efficient stream handling for request body

Using StreamReader and ReadToEndAsync loads the entire request into memory at once, which could be inefficient for large requests.

-        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+        string requestBody;
+        using (var reader = new StreamReader(req.Body))
+        {
+            requestBody = await reader.ReadToEndAsync();
+        }

For even better memory management with large requests, consider:

-        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+        string requestBody;
+        using (var stream = req.Body)
+        {
+            using (var reader = new StreamReader(stream))
+            {
+                requestBody = await reader.ReadToEndAsync();
+            }
+        }

97-111: 🛠️ Refactor suggestion

Implement specific exception handling for JSON deserialization

The current exception handling catches all exceptions, which might mask specific issues. Adding specific handling for JSON deserialization errors would provide clearer error messages to clients.

        try
        {
            requestData = System.Text.Json.JsonSerializer.Deserialize<TDto>(requestBody, new System.Text.Json.JsonSerializerOptions
            {
                PropertyNameCaseInsensitive = true
            });
            if (requestData == null)
                throw new ArgumentNullException(nameof(requestData));
        }
+        catch (System.Text.Json.JsonException ex)
+        {
+            _logger.LogError(ex, $"JSON parsing error for {dataType}/{assetClass}: {ex.Message}");
+            return new BadRequestObjectResult($"Invalid JSON format: {ex.Message}");
+        }
+        catch (ArgumentNullException ex)
+        {
+            _logger.LogError(ex, $"Null data error for {dataType}/{assetClass}");
+            return new BadRequestObjectResult("Request data cannot be null.");
+        }
        catch (Exception ex)
        {
            _logger.LogError(ex, $"Error deserializing request body for {dataType}/{assetClass}.");
            return new BadRequestObjectResult("Invalid request body.");
        }

162-172: 🛠️ Refactor suggestion

Improve error classification in ProcessActionResult

When result is null, the method returns a BadRequest (400) status, which suggests a client error. However, a null result from a repository operation is more likely a server-side issue and should return a 500 status.

    private IActionResult ProcessActionResult(IMarketDataEntity? result)
    {
        if (result != null)
        {
            var msg = $"Document saved successfully to {result.Id}.";
            _logger.LogInformation(msg);
            return new OkObjectResult(msg);
        }
        _logger.LogError("Error saving document (null result).");
-        return new BadRequestObjectResult("Error saving document.");
+        return new ObjectResult("Error saving document (server error).")
+        {
+            StatusCode = StatusCodes.Status500InternalServerError
+        };
    }
src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs (1)

30-32: Consider catching specific exception types

The retry policy catches all exceptions except OperationCanceledException. Consider handling specific exceptions relevant to the expected failure modes to avoid masking critical errors.

            // Configure retry policy with exponential backoff
            _retryPolicy = Policy
-                .Handle<Exception>(ex => !(ex is OperationCanceledException))
+                .Handle<TimeoutException>()
+                .Or<HttpRequestException>()
+                .Or<InvalidOperationException>()
+                .Or<ApplicationException>()
                .WaitAndRetryAsync(
src/Phoenix.MarketData.Functions/Program.cs (1)

94-94: 🛠️ Refactor suggestion

Add configuration validation for EventGridPublisher.

The EventGridPublisher is registered without validating its configuration settings, unlike the thorough validation for Cosmos DB settings. This could lead to runtime errors if required EventGrid settings are missing.

+// EventGrid configuration validation
+var eventGridEndpoint = configuration["EventGrid:Endpoint"];
+var eventGridKey = configuration["EventGrid:Key"];
+
+if (string.IsNullOrWhiteSpace(eventGridEndpoint))
+    throw new InvalidOperationException("Missing configuration: EventGrid:Endpoint");
+if (string.IsNullOrWhiteSpace(eventGridKey))
+    throw new InvalidOperationException("Missing configuration: EventGrid:Key");
+
+// Register EventGrid client and publisher
+services.AddSingleton(sp => new Azure.Messaging.EventGrid.EventGridPublisherClient(
+    new Uri(eventGridEndpoint),
+    new Azure.AzureKeyCredential(eventGridKey)));
 services.AddSingleton<IEventPublisher, EventGridPublisher>();
src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs (2)

91-93: Remove unused variable.

The defaultRegion constant is defined but never used. Either remove it or use it as the fallback when changedEvent.Region is null.

-// Use a configurable default region since Region isn't in the interface
-const string defaultRegion = "global"; // Consider moving this to configuration
🧰 Tools
🪛 GitHub Check: Build and Test

[warning] 92-92:
The variable 'defaultRegion' is assigned but its value is never used


114-118: 🛠️ Refactor suggestion

Handle missing data case in ProcessChangedEventAsync.

When no existing data is found for a changed event, the method logs a warning but doesn't take any action. This could lead to silent failures.

else
{
    _logger.LogWarning("No existing data found for changed event {EventId} for {AssetId}",
        changedEvent.Id, changedEvent.AssetId);
+    // Consider one of these approaches:
+    // 1. Throw an exception: throw new InvalidOperationException($"No existing data found for changed event {changedEvent.Id}");
+    // 2. Create the missing data: await _marketDataService.PublishMarketDataAsync(...);
+    // 3. Add to a dead-letter queue for further investigation
}
src/Phoenix.MarketData.Core/Validation/ValidationResult.cs (1)

12-12: Mark _errors field as readonly.

The _errors field is only assigned in the constructor and modified through List methods, so it should be marked as readonly.

-private List<ValidationError> _errors;
+private readonly List<ValidationError> _errors;
src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs (2)

40-51: Still missing CancellationToken in secret retrieval – please propagate

Prior feedback suggested surfacing a CancellationToken through CreateAsync and forwarding it to IMarketDataSecretProvider.
The method still calls GetEventGridEndpointAsync() / GetEventGridKeyAsync() without the token, making it impossible to cancel a blocked secret-store call.
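
A minimal sketch of the requested propagation, assuming CreateAsync has roughly this shape and that the IMarketDataSecretProvider methods gain CancellationToken overloads (both are assumptions; adjust to the real signatures):

public static async Task<EventGridPublisher> CreateAsync(
    IMarketDataSecretProvider secretProvider,
    CancellationToken cancellationToken = default)
{
    // Forward the token so a blocked secret-store call can be cancelled.
    var endpoint = await secretProvider.GetEventGridEndpointAsync(cancellationToken);
    var key = await secretProvider.GetEventGridKeyAsync(cancellationToken);

    var client = new EventGridPublisherClient(new Uri(endpoint), new AzureKeyCredential(key));
    return new EventGridPublisher(client); // constructor shape is assumed
}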


213-223: Inconsistent JSON-serialisation options in legacy EventGrid event

PublishMarketDataEventAsync uses JsonSerializer.Serialize(marketData) without the camel-case options applied elsewhere, leading to schema drift across event types. Re-use the shared JsonSerializerOptions instance for consistency.

-var eventData = new BinaryData(JsonSerializer.Serialize(marketData));
+var options = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };
+var eventData = new BinaryData(JsonSerializer.Serialize(marketData, options));
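
If no shared instance exists yet, a cached static field avoids allocating options on every publish (the field name is illustrative):

// Illustrative: one options instance reused by every serialisation path in the publisher.
private static readonly JsonSerializerOptions s_eventSerializerOptions = new()
{
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase
};

// ...
var eventData = new BinaryData(JsonSerializer.Serialize(marketData, s_eventSerializerOptions));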
src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryMethods.cs (2)

343-365: Still loads all soft-deleted items into memory before deleting

This method was flagged in a previous review; the updated implementation still:

  1. Reads every soft-deleted item into toDelete (List<T>).
  2. Deletes them serially.

For large datasets this will OOM and be painfully slow. Stream the iterator page-by-page and delete in batches or in parallel (TransactionalBatch per partition).
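
A minimal sketch of the page-by-page approach, reusing the _container, GetPartitionKey and cancellationToken names from the snippets in this review (the soft-delete filter property and the lack of batching are simplifying assumptions):

// Stream soft-deleted items one page at a time instead of materialising the full list.
var query = new QueryDefinition("SELECT * FROM c WHERE c.isDeleted = true"); // property name assumed
var iterator = _container.GetItemQueryIterator<T>(query);

while (iterator.HasMoreResults)
{
    var page = await iterator.ReadNextAsync(cancellationToken);

    // Delete the current page before fetching the next; swap in TransactionalBatch per partition if needed.
    foreach (var item in page)
    {
        await _container.DeleteItemAsync<T>(item.Id, GetPartitionKey(item), cancellationToken: cancellationToken);
    }
}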


75-110: 🛠️ Refactor suggestion

Unbounded Task.Run fan-out can throttle the gateway and waste RU/s

BulkInsertAsync spins up one Task.Run per entity (foreach … Task.Run …), which can:

  1. Create thousands of in-memory tasks → high GC / scheduler overhead.
  2. Trigger a flood of parallel CreateItemAsync calls that exceed RU limits, causing 429 throttling & retries.
  3. Lose Cosmos-SDK “bulk mode” benefits (with CosmosClientOptions.AllowBulkExecution enabled, the SDK batches concurrent item operations internally; the extra Task.Run wrapper adds nothing).

Refactor to rely on the SDK’s built-in bulk executor (parallelism is controlled via CosmosClientOptions.AllowBulkExecution and the async pipeline). A minimal change is to drop Task.Run, use Select directly, and optionally gate concurrency with ParallelOptions or SemaphoreSlim.

-var entityList = entities.ToList();
-var tasks = new List<Task>();
-
-foreach (var entity in entityList)
-{
-    var task = Task.Run(async () =>
-    {
+var tasks = entities.Select(async entity =>
+{
     try
     {
         var pk = GetPartitionKey(entity);
         await _container.CreateItemAsync(entity, pk, cancellationToken: cancellationToken);
         Interlocked.Increment(ref count);
     }
     catch (Exception ex)
     {
         lock (exceptions) exceptions.Add(ex);
         _logger.LogError(ex,
             "Bulk insert failed for entity {EntityType} with ID {Id}: {ErrorMessage}",
             typeof(T).Name, entity.Id, ex.Message);
     }
-    }, cancellationToken);
-    tasks.Add(task);
-}
-
-await Task.WhenAll(tasks);
+});
+
+await Task.WhenAll(tasks);

Consider limiting concurrency, e.g. Parallel.ForEachAsync with maxDegreeOfParallelism = Environment.ProcessorCount * 2.
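
A minimal sketch of that bounded-concurrency variant, reusing the entityList, GetPartitionKey, _container and cancellationToken names from the snippet above (the original try/catch and exception collection are elided for brevity):

int count = 0;

var parallelOptions = new ParallelOptions
{
    MaxDegreeOfParallelism = Environment.ProcessorCount * 2, // cap the fan-out
    CancellationToken = cancellationToken
};

await Parallel.ForEachAsync(entityList, parallelOptions, async (entity, token) =>
{
    // Keep the original per-item error handling inside this body.
    var pk = GetPartitionKey(entity);
    await _container.CreateItemAsync(entity, pk, cancellationToken: token);
    Interlocked.Increment(ref count);
});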

🧹 Nitpick comments (24)
scripts/run-coverage.js (4)

1-113: Well-structured script with comprehensive coverage workflow!

The script effectively manages the full test and coverage workflow with good cross-platform support and error handling. The color-coded output provides excellent visibility into the process stages.

A few suggestions for improved robustness:

  1. Consider checking if the solution file exists before running tests
  2. Use absolute paths instead of relative ones for better reliability when script is run from different directories
- execSync("dotnet test Phoenix.MarketDataPlatform.sln", { stdio: "inherit" });
+ const solutionPath = path.join(process.cwd(), "Phoenix.MarketDataPlatform.sln");
+ if (!fs.existsSync(solutionPath)) {
+   console.error(`${colors.red}❌ Solution file not found: ${solutionPath}${colors.reset}`);
+   process.exit(1);
+ }
+ execSync(`dotnet test "${solutionPath}"`, { stdio: "inherit" });

115-160: Improve cross-platform file handling in the helper function

The function properly handles different platforms, but it could be improved for better maintainability and reliability.

  1. Use path.join for Windows file paths instead of hardcoded backslashes:
- const findCommand = `dir /s /b tests\\*coverage.cobertura.xml`;
+ const findCommand = `dir /s /b "${path.join('tests', '*coverage.cobertura.xml')}"`;
  1. The regex for GitHub URL replacement assumes a specific format. Consider making it more flexible:
- const processed = content.replace(
-   /https:\/\/raw\.githubusercontent\.com\/[^"]+\/src\//g,
-   "src/",
- );
+ // More flexible regex that handles variations in GitHub URLs
+ const processed = content.replace(
+   /https:\/\/raw\.githubusercontent\.com\/[^"]+?\/(src|lib|source)\//g,
+   "$1/",
+ );
  1. Consider using the glob package (a third-party dependency) instead of shell commands for more consistent cross-platform behavior:
// Add to imports: const glob = require('glob');

function processAllCoverageFiles() {
  // Find all coverage files in a truly cross-platform way
  const coverageFiles = glob.sync('tests/**/coverage.cobertura.xml');
  
  console.log(
    `${colors.blue}Found ${coverageFiles.length} coverage files to process${colors.reset}`,
  );
  
  // Rest of the function remains the same...
}

Note: You would need to add glob as a dependency with npm install glob.


79-85: Use consistent path handling in reportgenerator command

The command uses forward slashes in glob patterns which might cause issues on Windows.

- `reportgenerator "-reports:tests/**/TestResults/**/processed.coverage.cobertura.xml" "-targetdir:${reportDir}" "-reporttypes:Html" "-sourcedirs:${process.cwd()}" "-verbosity:Warning"`,
+ `reportgenerator "-reports:${path.join('tests', '**', 'TestResults', '**', 'processed.coverage.cobertura.xml')}" "-targetdir:${reportDir}" "-reporttypes:Html" "-sourcedirs:${process.cwd()}" "-verbosity:Warning"`,

Though in practice, reportgenerator might handle the forward slashes correctly on Windows, using path.join would ensure consistent behavior across platforms.


38-53: Consider showing installation progress feedback

The ReportGenerator installation might take some time, especially on slower connections.

console.log(
  `${colors.yellow}Installing ReportGenerator tool...${colors.reset}`,
);
+ console.log(`${colors.blue}This may take a moment...${colors.reset}`);
execSync("dotnet tool install -g dotnet-reportgenerator-globaltool", {
  stdio: "inherit",
  shell: true,
});
+ console.log(`${colors.green}✅ ReportGenerator installed successfully${colors.reset}`);
src/Phoenix.MarketData.Infrastructure/Validation/CryptoOrdinalSpotPriceDataValidator.cs (1)

1-3: Remove duplicate import statement

The same namespace is imported twice, which is unnecessary and flagged by static analysis.

 using Phoenix.MarketData.Domain.Models;
 using Phoenix.MarketData.Domain.Validation;
-using Phoenix.MarketData.Domain.Models;
 using System;
🧰 Tools
🪛 GitHub Check: Build and Test

[warning] 3-3:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace

src/Phoenix.MarketData.Core/Domain/Events/MarketDataChangedEvent.cs (1)

4-5: Use file-scoped namespace for consistency

MarketDataCreatedEvent<T> uses file-scoped namespace while this class uses block-scoped namespace. Use a consistent style across the codebase.

-namespace Phoenix.MarketData.Domain.Domain.Events
-{
+namespace Phoenix.MarketData.Domain.Domain.Events;

Make the same change at the end of the file to remove the closing brace.

src/Phoenix.MarketData.Functions/OpenApi/SwaggerMiddleware.cs (3)

1-5: Add missing System namespace import

The Exception type on line 44 is used without importing its namespace.

 using System.IO;
 using System.Threading.Tasks;
+using System;
 using Microsoft.AspNetCore.Http;
 using Microsoft.Extensions.Logging;

27-27: Make the OpenAPI file path configurable

The OpenAPI file path is hardcoded to "openapi.json" in the current directory, which reduces flexibility.

-                var openApiFilePath = Path.Combine(Directory.GetCurrentDirectory(), "openapi.json");
+                var openApiFileName = _configuration.GetValue<string>("OpenApi:FileName") ?? "openapi.json";
+                var openApiFilePath = Path.Combine(Directory.GetCurrentDirectory(), openApiFileName);

Add dependency injection for IConfiguration in the constructor:

 private readonly RequestDelegate _next;
 private readonly ILogger<SwaggerMiddleware> _logger;
+private readonly IConfiguration _configuration;

-public SwaggerMiddleware(RequestDelegate next, ILogger<SwaggerMiddleware> logger)
+public SwaggerMiddleware(RequestDelegate next, ILogger<SwaggerMiddleware> logger, IConfiguration configuration)
 {
     _next = next;
     _logger = logger;
+    _configuration = configuration;
 }

And add the necessary using statement:

+using Microsoft.Extensions.Configuration;

31-35: Add cache headers to improve performance

The OpenAPI specification doesn't change frequently. Adding cache headers would reduce server load and improve client performance.

                     if (File.Exists(openApiFilePath))
                     {
+                        // Add cache headers
+                        context.Response.Headers.Add("Cache-Control", "public, max-age=3600");
+                        context.Response.Headers.Add("ETag", $"\"{File.GetLastWriteTimeUtc(openApiFilePath).Ticks}\"");
                         await context.Response.SendFileAsync(openApiFilePath);
                         return;
                     }

This will instruct clients to cache the OpenAPI specification for one hour and provide an ETag for conditional requests.

tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/FxSpotPriceDataMapperTests.cs (1)

6-6: Remove duplicate import.

The import for Phoenix.MarketData.Domain.Models on line 6 is a duplicate of line 5, which causes a compiler warning.

-using Phoenix.MarketData.Domain.Models;
🧰 Tools
🪛 GitHub Check: Build and Test

[warning] 6-6:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace

src/Phoenix.MarketData.Core/Validation/ValidationException.cs (2)

24-29: Consider using ReadOnlyCollection for more consistent implementation.

You're using IReadOnlyList<ValidationError> for the property type but .ToList().AsReadOnly() for the implementation, which returns a ReadOnlyCollection<T>. For more consistency, consider using ReadOnlyCollection<T> explicitly.

        /// <summary>
        /// Gets the collection of validation errors that caused this exception
        /// </summary>
-        public IReadOnlyList<ValidationError> Errors { get; }
+        public ReadOnlyCollection<ValidationError> Errors { get; }

        /// <summary>
        /// Initializes a new instance of the ValidationException class with a collection of validation errors
        /// </summary>
        /// <param name="errors">The validation errors, or null for an empty collection</param>
        public ValidationException(IEnumerable<ValidationError>? errors)
            : base(CreateMessage(errors))
        {
            // Ensure we always have a non-null collection of errors
-            Errors = errors?.ToList().AsReadOnly() ?? new ReadOnlyCollection<ValidationError>(Array.Empty<ValidationError>());
+            Errors = new ReadOnlyCollection<ValidationError>(errors?.ToList() ?? new List<ValidationError>());
        }

79-80: Simplify the null check.

The guard clause can be collapsed onto a single line for more concise code.

        public override void GetObjectData(SerializationInfo info, StreamingContext context)
        {
-            if (info == null)
-                throw new ArgumentNullException(nameof(info));
+            if (info == null) throw new ArgumentNullException(nameof(info));

            base.GetObjectData(info, context);
            info.AddValue("Errors", Errors.ToArray());
        }
tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CryptoOrdinalSpotPriceDataValidatorTests.cs (2)

8-8: Remove duplicate import.

The import for Phoenix.MarketData.Domain.Models on line 8 is a duplicate of line 7, which causes a compiler warning.

-using Phoenix.MarketData.Domain.Models;
🧰 Tools
🪛 GitHub Check: Build and Test

[warning] 8-8:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace


31-32: Test would be more robust with assertions for all properties.

The test is missing assertions for all the properties that were set in the domain entity. This would ensure complete mapping validation.

Add more assertions to verify all set properties:

            // Assert
            Assert.True(result.IsValid);
            Assert.Empty(result.Errors);
+            
+            // Additional assertions to verify the validated entity remains unchanged
+            Assert.Equal("BTC/USD", entity.AssetId);
+            Assert.Equal("crypto", entity.AssetClass);
+            Assert.Equal("spotprice", entity.DataType);
+            Assert.Equal("1.0", entity.SchemaVersion);
+            Assert.Equal("global", entity.Region);
+            Assert.Equal("price", entity.DocumentType);
+            Assert.Equal("USD", entity.Currency);
+            Assert.Equal("Bitcoin", entity.CollectionName);
+            Assert.Equal("12345", entity.ParentInscriptionId);
+            Assert.Equal("67890", entity.InscriptionId);
+            Assert.Equal(50000.00m, entity.Price);
tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs (3)

6-6: Remove duplicate import.

The import for Phoenix.MarketData.Domain.Models on line 6 is a duplicate of line 5, which causes a compiler warning.

-using Phoenix.MarketData.Domain.Models;
🧰 Tools
🪛 GitHub Check: Build and Test

[warning] 6-6:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace


72-73: Remove extra blank line.

There's an unnecessary extra blank line at line 72.

            Assert.Throws<ArgumentNullException>(() => mapper.ToDto(null!));
        }

-
        [Fact]
        public void ToDomain_WithValidDto_ReturnsMappedEntity()

230-230: Update comment to match parameter type.

The comment should be updated to match the actual parameter type.

                price: 50000.00m,
-               side: dtoSide,  // Fixed: Changed parameter type from PriceSideDto to PriceSide
+               side: dtoSide,  // Using DTO side value for conversion test
                collectionName: "Bitcoin",
src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs (1)

3-3: Remove duplicate using directive

The using directive for 'Phoenix.MarketData.Domain.Models' appears twice (lines 2 and 3).

 using Phoenix.MarketData.Domain;
 using Phoenix.MarketData.Domain.Models;
-using Phoenix.MarketData.Domain.Models;
 using Phoenix.MarketData.Infrastructure.Serialization;
🧰 Tools
🪛 GitHub Check: Build and Test

[warning] 3-3:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace

src/Phoenix.MarketData.Functions/Program.cs (1)

17-17: Remove duplicate import.

The import of Phoenix.MarketData.Domain.Models on line 17 is a duplicate of the same import on line 13.

-using Phoenix.MarketData.Domain.Models;
🧰 Tools
🪛 GitHub Check: Build and Test

[failure] 17-17:
The type or namespace name 'Domain' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)

src/Phoenix.MarketData.Core/Events/IntegrationEvents/MarketDataEvents.cs (3)

116-116: Consider using enum or constants for event types.

String literals like "Created" and "Changed" for event types are used directly in the code. Consider using enum values or constants for better maintainability and to avoid magic strings.

+public static class MarketDataEventTypes
+{
+    public const string Created = "Created";
+    public const string Changed = "Changed";
+}

 public class MarketDataCreatedIntegrationEvent : MarketDataIntegrationEventBase
 {
     /// <summary>Gets the event type identifier</summary>
-    public override string EventType => "Created";
+    public override string EventType => MarketDataEventTypes.Created;

Also applies to: 150-150


93-98: Consider using a validation library.

The manual null checks in the constructor could be replaced with a guard-clause library (for example, Ardalis.GuardClauses, whose Guard.Against.Null is shown below) for more concise code.

-if (id == null) throw new ArgumentNullException(nameof(id));
-if (assetId == null) throw new ArgumentNullException(nameof(assetId));
-if (assetClass == null) throw new ArgumentNullException(nameof(assetClass));
-if (dataType == null) throw new ArgumentNullException(nameof(dataType));
-if (documentType == null) throw new ArgumentNullException(nameof(documentType));
-if (region == null) throw new ArgumentNullException(nameof(region));
+Guard.Against.Null(id, nameof(id));
+Guard.Against.Null(assetId, nameof(assetId));
+Guard.Against.Null(assetClass, nameof(assetClass));
+Guard.Against.Null(dataType, nameof(dataType));
+Guard.Against.Null(documentType, nameof(documentType));
+Guard.Against.Null(region, nameof(region));

121-123: Remove redundant empty base constructor calls.

The explicit base() calls in the default constructors are redundant as they're called implicitly.

-public MarketDataCreatedIntegrationEvent() : base()
+public MarketDataCreatedIntegrationEvent()
 {
 }

-public MarketDataChangedIntegrationEvent() : base()
+public MarketDataChangedIntegrationEvent()
 {
 }

Also applies to: 155-157

src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs (1)

1-4: Remove duplicate / unused using directives

Two identical using Phoenix.MarketData.Domain.Models; lines (and an apparently unused Phoenix.MarketData.Domain;) slip past the compiler but add noise.

-using Phoenix.MarketData.Domain;
 using Phoenix.MarketData.Domain.Models;
-using Phoenix.MarketData.Domain.Models;
 using Phoenix.MarketData.Infrastructure.Serialization;
🧰 Tools
🪛 GitHub Check: Build and Test

[warning] 3-3:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace

src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryMethods.cs (1)

1-1: Header comment references the wrong file name

The file begins with // src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryAdditionalMethods.cs, but is actually named CosmosRepositoryMethods.cs.
This small mismatch can confuse grep searches and automated tooling that rely on the header comment.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 88df286 and 6e67dec.

📒 Files selected for processing (79)
  • .gitignore (1 hunks)
  • package.json (1 hunks)
  • scripts/coverage-report.js (1 hunks)
  • scripts/run-coverage.js (1 hunks)
  • scripts/test-wrapper.js (1 hunks)
  • src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs (1 hunks)
  • src/Phoenix.MarketData.Application/Services/Decorators/ValidationMarketDataServiceDecorator.cs (1 hunks)
  • src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs (1 hunks)
  • src/Phoenix.MarketData.Application/Services/MarketDataService.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Configuration/IMarketDataSecretProvider.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Domain/Events/MarketDataChangedEvent.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Domain/MarketDataCreatedEvent.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Events/IEventPublisher.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Events/IMarketDataEventSubscriber.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Events/IntegrationEvents/MarketDataEvents.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Mapping/IModelMapper.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Repositories/IRepository.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/FluentValidationExtensions.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/IValidator.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/SchemaValidationExtensions.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ServiceValidationDecorator.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ValidationDecorator.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ValidationError.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ValidationException.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ValidationPipelineBehaviour.cs (1 hunks)
  • src/Phoenix.MarketData.Core/Validation/ValidationResult.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Constants/CurrencyCodes.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Events/IMarketDataEventPublisher.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/BaseMarketData.cs (2 hunks)
  • src/Phoenix.MarketData.Domain/Models/CryptoOrdinalSpotPriceData.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/FxSpotPriceData.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/FxVolSurfaceData.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/PriceSide.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Models/Regions.cs (1 hunks)
  • src/Phoenix.MarketData.Domain/Repositories/IRepository.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/HttpRequests/SaveMarketData.http (3 hunks)
  • src/Phoenix.MarketData.Functions/OpenApi/OpenApiModels.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/OpenApi/SwaggerExtension.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/OpenApi/SwaggerMiddleware.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/OpenApi/SwaggerUIFunction.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/Phoenix.MarketData.Functions.csproj (1 hunks)
  • src/Phoenix.MarketData.Functions/Program.cs (1 hunks)
  • src/Phoenix.MarketData.Functions/Properties/launchSettings.json (1 hunks)
  • src/Phoenix.MarketData.Functions/SaveDocumentToDb.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Configuration/MarketDataSecretProvider.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Configuration/MemorySecretCache.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Events/EventGridPublisher.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Mapping/BaseMarketDataMapper.cs (2 hunks)
  • src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepository.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryBatchExtensions.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryCrudMethods.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryEvents.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryExtensions.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryMethods.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryQueryMethods.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Repositories/Interfaces.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Schemas/JsonSchemaValidator.cs (2 hunks)
  • src/Phoenix.MarketData.Infrastructure/Serialization/CryptoOrdinalSpotPriceDataDto.cs (2 hunks)
  • src/Phoenix.MarketData.Infrastructure/Serialization/FxSpotPriceDataDto.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Validation/CompositeValidator.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Validation/CryptoOrdinalSpotPriceDataValidator.cs (1 hunks)
  • src/Phoenix.MarketData.Infrastructure/Validation/FxSpotPriceDataValidator.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Events/MarketDataEventSubscriberTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/FxSpotPriceDataMapperTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/Base/BaseMarketDataRepositoryTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/BatchOperationTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/CrudOperationTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/DeleteOperationTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/ErrorHandlingTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/QueryOperationTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CompositeValidatorTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CryptoOrdinalSpotPriceDataValidatorTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/FxSpotPriceDataValidatorTests.cs (1 hunks)
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/ValidationDecoratorTests.cs (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • src/Phoenix.MarketData.Functions/Properties/launchSettings.json
  • src/Phoenix.MarketData.Domain/Models/PriceSide.cs
🚧 Files skipped from review as they are similar to previous changes (56)
  • src/Phoenix.MarketData.Infrastructure/Mapping/BaseMarketDataMapper.cs
  • src/Phoenix.MarketData.Functions/HttpRequests/SaveMarketData.http
  • src/Phoenix.MarketData.Core/Validation/IValidator.cs
  • src/Phoenix.MarketData.Domain/Events/IMarketDataEventPublisher.cs
  • src/Phoenix.MarketData.Core/Validation/FluentValidationExtensions.cs
  • src/Phoenix.MarketData.Domain/Models/Regions.cs
  • src/Phoenix.MarketData.Domain/Models/FxSpotPriceData.cs
  • src/Phoenix.MarketData.Core/Events/IMarketDataEventSubscriber.cs
  • src/Phoenix.MarketData.Core/Mapping/IModelMapper.cs
  • src/Phoenix.MarketData.Domain/Models/FxVolSurfaceData.cs
  • src/Phoenix.MarketData.Core/Configuration/ISecretCache.cs
  • src/Phoenix.MarketData.Core/Configuration/IMarketDataSecretProvider.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/ValidationDecoratorTests.cs
  • src/Phoenix.MarketData.Core/Validation/ValidationPipelineBehaviour.cs
  • src/Phoenix.MarketData.Domain/Constants/CurrencyCodes.cs
  • src/Phoenix.MarketData.Core/Validation/ServiceValidationDecorator.cs
  • src/Phoenix.MarketData.Infrastructure/Serialization/FxSpotPriceDataDto.cs
  • src/Phoenix.MarketData.Infrastructure/Validation/CompositeValidator.cs
  • src/Phoenix.MarketData.Core/Validation/SchemaValidationExtensions.cs
  • src/Phoenix.MarketData.Infrastructure/Schemas/JsonSchemaValidator.cs
  • src/Phoenix.MarketData.Domain/Models/CryptoOrdinalSpotPriceData.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/Base/BaseMarketDataRepositoryTests.cs
  • src/Phoenix.MarketData.Infrastructure/Validation/FxSpotPriceDataValidator.cs
  • src/Phoenix.MarketData.Infrastructure/Configuration/MemorySecretCache.cs
  • scripts/test-wrapper.js
  • src/Phoenix.MarketData.Core/Events/IEventPublisher.cs
  • src/Phoenix.MarketData.Infrastructure/Configuration/NoOpSecretCache.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/DeleteOperationTests.cs
  • package.json
  • src/Phoenix.MarketData.Core/Validation/ValidationDecorator.cs
  • src/Phoenix.MarketData.Functions/OpenApi/OpenApiModels.cs
  • src/Phoenix.MarketData.Infrastructure/Serialization/CryptoOrdinalSpotPriceDataDto.cs
  • src/Phoenix.MarketData.Functions/Phoenix.MarketData.Functions.csproj
  • .gitignore
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryBatchExtensions.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/FxSpotPriceDataValidatorTests.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/CrudOperationTests.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/QueryOperationTests.cs
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryCrudMethods.cs
  • src/Phoenix.MarketData.Application/Services/Decorators/ValidationMarketDataServiceDecorator.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/BatchOperationTests.cs
  • src/Phoenix.MarketData.Functions/OpenApi/SwaggerExtension.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Repository/ErrorHandlingTests.cs
  • scripts/coverage-report.js
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryExtensions.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Events/MarketDataEventSubscriberTests.cs
  • src/Phoenix.MarketData.Functions/OpenApi/SwaggerUIFunction.cs
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryEvents.cs
  • src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepository.cs
  • tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CompositeValidatorTests.cs
  • src/Phoenix.MarketData.Application/Services/MarketDataService.cs
  • src/Phoenix.MarketData.Domain/Models/BaseMarketData.cs
  • src/Phoenix.MarketData.Core/Repositories/IRepository.cs
  • src/Phoenix.MarketData.Infrastructure/Configuration/MarketDataSecretProvider.cs
  • src/Phoenix.MarketData.Core/Validation/ValidationError.cs
  • src/Phoenix.MarketData.Infrastructure/Repositories/Interfaces.cs
🧰 Additional context used
🧬 Code Graph Analysis (2)
scripts/run-coverage.js (3)
scripts/coverage-report.js (5)
  • require (3-3)
  • fs (5-5)
  • path (4-4)
  • reportDir (32-32)
  • coverageFiles (84-89)
scripts/test-wrapper.js (6)
  • require (3-3)
  • fs (5-5)
  • path (4-4)
  • reportDir (30-30)
  • openCommand (61-61)
  • coverageFiles (86-88)
scripts/update-guidelines.js (1)
  • content (18-18)
src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs (4)
src/Phoenix.MarketData.Domain/Models/FxSpotPriceData.cs (1)
  • FxSpotPriceData (5-12)
src/Phoenix.MarketData.Infrastructure/Serialization/FxSpotPriceDataDto.cs (1)
  • FxSpotPriceDataDto (6-35)
src/Phoenix.MarketData.Domain/Models/BaseMarketData.cs (1)
  • AddTag (143-154)
src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs (1)
  • PriceSide (82-94)
🪛 GitHub Check: Build and Test
src/Phoenix.MarketData.Functions/SaveDocumentToDb.cs

[failure] 6-6:
The type or namespace name 'Infrastructure' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)


[failure] 7-7:
The type or namespace name 'Infrastructure' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)


[failure] 8-8:
The type or namespace name 'Infrastructure' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)

src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs

[warning] 92-92:
The variable 'defaultRegion' is assigned but its value is never used

src/Phoenix.MarketData.Core/Validation/ValidationException.cs

[warning] 77-77:
Member 'ValidationException.GetObjectData(SerializationInfo, StreamingContext)' overrides obsolete member 'Exception.GetObjectData(SerializationInfo, StreamingContext)'. Add the Obsolete attribute to 'ValidationException.GetObjectData(SerializationInfo, StreamingContext)'.


[warning] 66-66:
'Exception.Exception(SerializationInfo, StreamingContext)' is obsolete: 'This API supports obsolete formatter-based serialization. It should not be called or extended by application code.' (https://aka.ms/dotnet-warnings/SYSLIB0051)


[warning] 82-82:
'Exception.GetObjectData(SerializationInfo, StreamingContext)' is obsolete: 'This API supports obsolete formatter-based serialization. It should not be called or extended by application code.' (https://aka.ms/dotnet-warnings/SYSLIB0051)

src/Phoenix.MarketData.Functions/Program.cs

[failure] 3-3:
The type or namespace name 'Cosmos' does not exist in the namespace 'Microsoft.Azure' (are you missing an assembly reference?)


[failure] 9-9:
The type or namespace name 'Domain' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)


[failure] 10-10:
The type or namespace name 'Infrastructure' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)


[failure] 11-11:
The type or namespace name 'Infrastructure' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)


[failure] 12-12:
The type or namespace name 'Infrastructure' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)


[failure] 13-13:
The type or namespace name 'Domain' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)


[failure] 17-17:
The type or namespace name 'Domain' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)

src/Phoenix.MarketData.Infrastructure/Mapping/CryptoOrdinalSpotPriceDataMapper.cs

[warning] 3-3:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace

src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs

[warning] 3-3:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace

src/Phoenix.MarketData.Infrastructure/Validation/CryptoOrdinalSpotPriceDataValidator.cs

[warning] 3-3:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace

tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/CryptoOrdinalSpotPriceDataMapperTests.cs

[warning] 6-6:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace

tests/Phoenix.MarketData.Infrastructure.Tests/Mapping/FxSpotPriceDataMapperTests.cs

[warning] 6-6:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace

tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CryptoOrdinalSpotPriceDataValidatorTests.cs

[warning] 8-8:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace

🪛 GitHub Actions: Build and Test
src/Phoenix.MarketData.Core/Validation/ValidationException.cs

[warning] 77-77: CS0672: Member 'ValidationException.GetObjectData(SerializationInfo, StreamingContext)' overrides obsolete member 'Exception.GetObjectData(SerializationInfo, StreamingContext)'. Add the Obsolete attribute to 'ValidationException.GetObjectData(SerializationInfo, StreamingContext)'.


[warning] 66-66: SYSLIB0051: 'Exception.Exception(SerializationInfo, StreamingContext)' is obsolete: 'This API supports obsolete formatter-based serialization. It should not be called or extended by application code.'


[warning] 82-82: SYSLIB0051: 'Exception.GetObjectData(SerializationInfo, StreamingContext)' is obsolete: 'This API supports obsolete formatter-based serialization. It should not be called or extended by application code.'

src/Phoenix.MarketData.Functions/Program.cs

[error] 3-3: CS0234: The type or namespace name 'Cosmos' does not exist in the namespace 'Microsoft.Azure' (are you missing an assembly reference?)

🔇 Additional comments (14)
src/Phoenix.MarketData.Functions/OpenApi/SwaggerMiddleware.cs (1)

19-54: Excellent implementation of middleware with proper error handling

The middleware has comprehensive error handling, appropriate HTTP status codes, and good logging practices. The caught exception types are specific and relevant to file operations.

tests/Phoenix.MarketData.Infrastructure.Tests/Validation/CryptoOrdinalSpotPriceDataValidatorTests.cs (1)

1-142: Comprehensive validation test coverage.

The test class provides good coverage of validation scenarios including happy path, null input, missing required fields, invalid data, and negative values. This is a well-structured set of tests that follows the AAA (Arrange-Act-Assert) pattern.

🧰 Tools
🪛 GitHub Check: Build and Test

[warning] 8-8:
The using directive for 'Phoenix.MarketData.Domain.Models' appeared previously in this namespace

src/Phoenix.MarketData.Domain/Repositories/IRepository.cs (2)

52-57: Well-designed query method with appropriate parameter types

The QueryAsync method properly uses DateOnly for date parameters, which is semantically correct for market data that doesn't require time components. The optional parameters with null defaults allow for flexible querying patterns.


8-12: Well-structured generic repository interface

This interface provides a clean, well-documented abstraction for market data entity persistence with appropriate methods for versioned data retrieval, saving, and querying. The generic constraint ensures type safety while allowing flexibility.

src/Phoenix.MarketData.Infrastructure/Mapping/FxSpotPriceDataMapper.cs (1)

87-89: Static methods provide good backward compatibility

The static methods that delegate to the instance implementation provide good backward compatibility for existing code while allowing the class to implement the new interface. This is a solid refactoring approach.

src/Phoenix.MarketData.Functions/SaveDocumentToDb.cs (1)

33-52: Well-documented API with comprehensive OpenAPI annotations

The OpenAPI annotations provide excellent documentation for the API, including operation details, parameter descriptions, request body examples, and response types. This improves API discoverability and integration.

src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs (1)

52-68: Good implementation of retry with cancellation support

The implementation correctly passes the cancellation token to both the retry policy and the event processor method, ensuring that cancellation is properly honored at all levels of execution.

src/Phoenix.MarketData.Functions/Program.cs (2)

24-25: Fix configuration loading order.

The current configuration order loads production settings (prod.settings.json) before local settings (local.settings.json), which prevents local settings from overriding production values during development.

-               .AddJsonFile("prod.settings.json", optional: true, reloadOnChange: true)
-               .AddJsonFile("local.settings.json", optional: true, reloadOnChange: true);
+               .AddJsonFile("local.settings.json", optional: true, reloadOnChange: true)
+               .AddJsonFile("prod.settings.json", optional: true, reloadOnChange: true);

74-84: Good validation pattern for Cosmos DB settings.

The fail-fast approach with clear error messages ensures the application won't start with missing configuration.

src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs (1)

86-90: Good timestamp handling approach.

The conversion from DateTimeOffset to DateOnly is done cleanly without unnecessary string conversions.

src/Phoenix.MarketData.Core/Validation/ValidationResult.cs (1)

11-154: Well-designed validation result class.

The ValidationResult class follows good design principles:

  1. Immutability with non-mutating methods that return new instances
  2. Clear factory methods for creating success/failure results
  3. Comprehensive error aggregation and reporting
  4. Good XML documentation

The design facilitates consistent validation handling throughout the application.
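
As a rough usage illustration — the member names below (Success, Failure, IsValid, Errors) and the ValidationError constructor are assumed from the description above, not verified against the actual API:

// Hypothetical consumption sketch of ValidationResult.
ValidationResult ValidatePrice(decimal price)
{
    return price > 0
        ? ValidationResult.Success()
        : ValidationResult.Failure(new ValidationError("Price", "Price must be positive."));
}

var result = ValidatePrice(-1m);
if (!result.IsValid)
{
    throw new ValidationException(result.Errors);
}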

src/Phoenix.MarketData.Core/Events/IntegrationEvents/MarketDataEvents.cs (1)

10-38: Well-designed integration event interface.

The IMarketDataIntegrationEvent interface provides a comprehensive set of properties with good XML documentation. The design facilitates consistent event handling throughout the application.

src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryQueryMethods.cs (1)

174-182: Fallback in-memory filtering may OOM large result sets

When LINQ translation fails, the code downloads every item in the container and then filters locally:

filtered = response.AsQueryable().Where(predicate.Compile());

For sizable collections this can exhaust memory and network bandwidth.
Consider:

  1. Attempting server-side translation with a library such as CosmosQueryTranslator before falling back.
  2. Adding a hard cap (e.g., MaxItemCount) or paging during in-memory filtering — see the sketch after this list.
  3. Surfacing a warning/exception to the caller so they can refine the predicate.
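
A minimal sketch of option 2 using the Cosmos SDK's paging primitives — the _container field, item type, and SELECT * query here stand in for the repository's actual fallback path:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Caps and pages the fallback read instead of materialising the whole container at once.
public sealed class CappedFallbackReader<T>
{
    private readonly Container _container;

    public CappedFallbackReader(Container container) => _container = container;

    public async Task<List<T>> ReadWithCapAsync(
        Func<T, bool> localPredicate,
        int maxItems = 1_000,
        CancellationToken cancellationToken = default)
    {
        var results = new List<T>();
        var options = new QueryRequestOptions { MaxItemCount = 100 }; // page size per round trip

        using FeedIterator<T> iterator = _container.GetItemQueryIterator<T>(
            "SELECT * FROM c", requestOptions: options);

        while (iterator.HasMoreResults && results.Count < maxItems)
        {
            FeedResponse<T> page = await iterator.ReadNextAsync(cancellationToken);
            results.AddRange(page.Where(localPredicate));
        }

        return results;
    }
}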
src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryMethods.cs (1)

294-302: SELECT VALUE COUNT(1) on the full container can be extremely expensive

Running an unrestricted count issues a cross-partition query that scans every item, which is slow and burns RU/s on large containers.
If an approximate count is acceptable, use the Cosmos diagnostics / ReadThroughputAsync or maintain a separate counter.
If an exact count is required, at least pass a QueryRequestOptions { MaxConcurrency = 1 } and surface this cost explicitly in the calling API.

Would you confirm the expected container size and performance requirements? We can provide an incremental counter or a range-restricted count to avoid full scans.
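
If an exact count really is required, a sketch of constraining it to a single partition and making the request options explicit — the assumption that assetClass is the partition key is illustrative, not confirmed by the PR:

using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class MarketDataCountExtensions
{
    // Counts documents for one asset class; single-partition, so no cross-partition fan-out.
    public static async Task<int> CountByAssetClassAsync(this Container container, string assetClass)
    {
        var query = new QueryDefinition(
                "SELECT VALUE COUNT(1) FROM c WHERE c.assetClass = @assetClass")
            .WithParameter("@assetClass", assetClass);

        var options = new QueryRequestOptions
        {
            PartitionKey = new PartitionKey(assetClass),
            MaxConcurrency = 1
        };

        using FeedIterator<int> iterator =
            container.GetItemQueryIterator<int>(query, requestOptions: options);

        var count = 0;
        while (iterator.HasMoreResults)
        {
            foreach (var partial in await iterator.ReadNextAsync())
            {
                count += partial;
            }
        }

        return count;
    }
}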

Comment on lines +93 to +101
// Use cross-platform command to open the browser
const openCommand =
  process.platform === "win32"
    ? `start "" "${reportDir}\\index.html"`
    : process.platform === "darwin"
      ? `open "${reportDir}/index.html"`
      : `xdg-open "${reportDir}/index.html"`;

execSync(openCommand, { stdio: "inherit", shell: true });

🛠️ Refactor suggestion

Handle potential browser opening failure gracefully

The script attempts to open the browser but doesn't handle potential failures when the commands aren't available.

+try {
   execSync(openCommand, { stdio: "inherit", shell: true });
+  console.log(`${colors.green}✅ Browser opened with report${colors.reset}`);
+} catch (browserError) {
+  console.log(
+    `${colors.yellow}⚠️ Could not open browser automatically. Please open ${path.join(reportDir, 'index.html')} manually.${colors.reset}`
+  );
+}

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In scripts/run-coverage.js around lines 93 to 101, the code executes a command
to open the browser without handling errors if the command fails. Wrap the
execSync call in a try-catch block to catch any exceptions thrown when the
command is unavailable or fails, and log a warning or error message instead of
letting the script crash.

using System;
using Phoenix.MarketData.Domain.Models;

namespace Phoenix.MarketData.Domain.Domain.Events;

🛠️ Refactor suggestion

Inconsistent namespace structure with file location

The file is in Phoenix.MarketData.Core/Domain but uses namespace Phoenix.MarketData.Domain.Domain.Events. This creates confusion about the project structure.

-namespace Phoenix.MarketData.Domain.Domain.Events;
+namespace Phoenix.MarketData.Core.Domain;

Alternatively, consider moving the file to match the namespace structure.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
namespace Phoenix.MarketData.Domain.Domain.Events;
namespace Phoenix.MarketData.Core.Domain;
🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Core/Domain/MarketDataCreatedEvent.cs at line 4, the
namespace declaration does not match the file's directory structure, causing
confusion. Update the namespace to reflect the folder path by changing it to
Phoenix.MarketData.Core.Domain.Events, or alternatively, move the file to a
directory that matches the current namespace
Phoenix.MarketData.Domain.Domain.Events.

Comment on lines +4 to +60
namespace Phoenix.MarketData.Domain.Domain.Events
{
    /// <summary>
    /// Event raised when market data changes
    /// </summary>
    /// <typeparam name="T">Type of market data entity</typeparam>
    public class MarketDataChangedEvent<T> where T : IMarketDataEntity
    {
        /// <summary>
        /// Gets the updated market data
        /// </summary>
        public T Data { get; }

        /// <summary>
        /// Gets the timestamp when this event occurred
        /// </summary>
        public DateTimeOffset OccurredAt { get; }

        /// <summary>
        /// Gets the type of change that occurred
        /// </summary>
        public ChangeType ChangeType { get; }

        /// <summary>
        /// Initializes a new instance of the MarketDataChangedEvent class
        /// </summary>
        /// <param name="data">The updated market data</param>
        /// <param name="changeType">The type of change that occurred</param>
        public MarketDataChangedEvent(T data, ChangeType changeType = ChangeType.Updated)
        {
            Data = data ?? throw new ArgumentNullException(nameof(data));
            OccurredAt = DateTimeOffset.UtcNow;
            ChangeType = changeType;
        }
    }

    /// <summary>
    /// Represents the type of change that occurred to market data
    /// </summary>
    public enum ChangeType
    {
        /// <summary>
        /// Data was created
        /// </summary>
        Created,

        /// <summary>
        /// Data was updated
        /// </summary>
        Updated,

        /// <summary>
        /// Data was deleted
        /// </summary>
        Deleted
    }
}
(no newline at end of file)

🛠️ Refactor suggestion

Create a base class to reduce duplication with MarketDataCreatedEvent

There's significant code duplication between this class and MarketDataCreatedEvent<T>. Extract common functionality to a base class.

Create a new base class:

using System;
using Phoenix.MarketData.Domain.Models;

namespace Phoenix.MarketData.Domain.Domain.Events
{
    /// <summary>
    /// Base class for market data events
    /// </summary>
    /// <typeparam name="T">Type of market data entity</typeparam>
    public abstract class MarketDataEventBase<T> where T : IMarketDataEntity
    {
        /// <summary>
        /// Gets the market data
        /// </summary>
        public T Data { get; init; }

        /// <summary>
        /// Gets the timestamp when this event occurred
        /// </summary>
        public DateTimeOffset OccurredAt { get; init; }

        /// <summary>
        /// Initializes a new instance of a market data event
        /// </summary>
        /// <param name="data">The market data</param>
        /// <param name="occurredAt">The timestamp when this event occurred</param>
        protected MarketDataEventBase(T data, DateTimeOffset? occurredAt = null)
        {
            Data = data ?? throw new ArgumentNullException(nameof(data));
            OccurredAt = occurredAt ?? DateTimeOffset.UtcNow;
        }
    }
}

Then modify both event classes to inherit from it.
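
For instance, the changed event could then shrink to roughly the following (a sketch against the proposed base class, keeping the existing namespace and ChangeType enum as-is):

using System;
using Phoenix.MarketData.Domain.Models;

namespace Phoenix.MarketData.Domain.Domain.Events
{
    /// <summary>
    /// Event raised when market data changes
    /// </summary>
    /// <typeparam name="T">Type of market data entity</typeparam>
    public class MarketDataChangedEvent<T> : MarketDataEventBase<T> where T : IMarketDataEntity
    {
        /// <summary>
        /// Gets the type of change that occurred
        /// </summary>
        public ChangeType ChangeType { get; init; }

        public MarketDataChangedEvent(T data, ChangeType changeType = ChangeType.Updated, DateTimeOffset? occurredAt = null)
            : base(data, occurredAt)
        {
            ChangeType = changeType;
        }
    }
}

MarketDataCreatedEvent<T> would follow the same pattern, minus the ChangeType property.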

🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Core/Domain/Events/MarketDataChangedEvent.cs around
lines 4 to 60, there is duplication between MarketDataChangedEvent<T> and
MarketDataCreatedEvent<T>. To fix this, create a new abstract base class
MarketDataEventBase<T> with properties Data and OccurredAt, and a constructor
initializing these. Then refactor both MarketDataChangedEvent<T> and
MarketDataCreatedEvent<T> to inherit from this base class, removing duplicated
properties and constructors from them.

Comment on lines +32 to +37
public MarketDataChangedEvent(T data, ChangeType changeType = ChangeType.Updated)
{
    Data = data ?? throw new ArgumentNullException(nameof(data));
    OccurredAt = DateTimeOffset.UtcNow;
    ChangeType = changeType;
}

🛠️ Refactor suggestion

Add support for timestamp injection to enable deterministic testing

The current implementation hardcodes DateTimeOffset.UtcNow, making it impossible to test with specific timestamps or recreate historical events.

-    public MarketDataChangedEvent(T data, ChangeType changeType = ChangeType.Updated)
+    public MarketDataChangedEvent(T data, ChangeType changeType = ChangeType.Updated, DateTimeOffset? occurredAt = null)
     {
         Data = data ?? throw new ArgumentNullException(nameof(data));
-        OccurredAt = DateTimeOffset.UtcNow;
+        OccurredAt = occurredAt ?? DateTimeOffset.UtcNow;
         ChangeType = changeType;
     }

This change will:

  1. Enable deterministic testing with specific timestamps
  2. Allow recreation of historical events
  3. Improve performance in high-volume scenarios by allowing batch events to share a timestamp
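
With the optional parameter in place, a test can pin the timestamp. A sketch assuming xUnit; FakeMarketData is a hypothetical stub implementing IMarketDataEntity, whose members depend on that interface's actual shape:

using System;
using Phoenix.MarketData.Domain.Domain.Events;
using Xunit;

public class MarketDataChangedEventTests
{
    [Fact]
    public void Constructor_UsesInjectedTimestamp_WhenProvided()
    {
        var data = new FakeMarketData(); // hypothetical IMarketDataEntity stub
        var fixedTime = new DateTimeOffset(2025, 5, 21, 12, 0, 0, TimeSpan.Zero);

        var evt = new MarketDataChangedEvent<FakeMarketData>(data, ChangeType.Updated, fixedTime);

        Assert.Equal(fixedTime, evt.OccurredAt);
        Assert.Equal(ChangeType.Updated, evt.ChangeType);
    }
}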
🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Core/Domain/Events/MarketDataChangedEvent.cs around
lines 32 to 37, the constructor currently sets OccurredAt to
DateTimeOffset.UtcNow directly, which prevents injecting specific timestamps for
testing or historical event recreation. Modify the constructor to accept an
optional DateTimeOffset parameter for OccurredAt, defaulting to UtcNow if not
provided. This allows deterministic testing, recreating historical events, and
sharing timestamps in batch scenarios.

Comment on lines +15 to +25
public T Data { get; }

/// <summary>
/// Gets the timestamp when this event occurred
/// </summary>
public DateTimeOffset OccurredAt { get; }

/// <summary>
/// Gets the type of change that occurred
/// </summary>
public ChangeType ChangeType { get; }

🛠️ Refactor suggestion

Make properties immutable with init-only setters

Events represent something that has already happened and should not be modifiable after creation. Use init-only properties to enforce immutability.

-    public T Data { get; }
-    public DateTimeOffset OccurredAt { get; }
-    public ChangeType ChangeType { get; }
+    public T Data { get; init; }
+    public DateTimeOffset OccurredAt { get; init; }
+    public ChangeType ChangeType { get; init; }
🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Core/Domain/Events/MarketDataChangedEvent.cs between
lines 15 and 25, the properties Data, OccurredAt, and ChangeType are currently
read-only but not explicitly immutable after object initialization. Change their
getters to use init-only setters instead of just get; this will allow setting
these properties only during object initialization and enforce immutability
afterward, ensuring the event data cannot be modified once created.

Comment on lines +79 to +94
// DRY handler for each DTO type
private async Task<IActionResult> HandleDocumentAsync<TDto>(
    HttpRequest req,
    string dataType,
    string assetClass,
    string schemaVersion,
    Func<TDto, IMarketDataEntity> toDomainMapper)
{
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();

    // Schema validation
    var validated = JsonSchemaValidatorRegistry.Validator.Validate(dataType, assetClass, schemaVersion, requestBody, out var errorMessage);
    if (!validated)
    {
        _logger.LogError($"Schema validation failed for {dataType}/{assetClass}: {errorMessage}");
        return new BadRequestObjectResult("Could not validate request body against schema.");

⚠️ Potential issue

Inconsistent cancellation token handling

In HandleDocumentAsync, the cancellation token is passed to _eventProcessor.ProcessChangedEventAsync but not to the retry policy's ExecuteAsync method. This means that while the inner operation respects cancellation, retries will not.

-            await _retryPolicy.ExecuteAsync(async () =>
+            await _retryPolicy.ExecuteAsync(async (ct) =>
             {
                 try
                 {
                     // Process event
                     _logger.LogInformation("Processing market data changed event: {EventId}", eventData.Id);

                     // Call the event processor to handle the event
-                    await _eventProcessor.ProcessChangedEventAsync(eventData, cancellationToken);
+                    await _eventProcessor.ProcessChangedEventAsync(eventData, ct);
                 }
                 catch (Exception ex) when (!(ex is OperationCanceledException))
                 {
                     _logger.LogError(ex, "Error processing market data changed event: {EventId}", eventData.Id);
                     throw; // Rethrowing to trigger retry
                 }
-            });
+            }, cancellationToken);

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Functions/SaveDocumentToDb.cs around lines 79 to 94,
the cancellation token is passed to _eventProcessor.ProcessChangedEventAsync but
not to the retry policy's ExecuteAsync method, causing retries to ignore
cancellation requests. Modify the call to ExecuteAsync to accept and pass the
cancellation token so that the entire retry operation respects cancellation
signals.

Comment on lines +79 to +94
await _retryPolicy.ExecuteAsync(async () =>
{
    try
    {
        // Process event
        _logger.LogInformation("Processing market data changed event: {EventId}", eventData.Id);

        // Call the event processor to handle the event
        await _eventProcessor.ProcessChangedEventAsync(eventData, cancellationToken);
    }
    catch (Exception ex) when (!(ex is OperationCanceledException))
    {
        _logger.LogError(ex, "Error processing market data changed event: {EventId}", eventData.Id);
        throw; // Rethrowing to trigger retry
    }
});

⚠️ Potential issue

Inconsistent cancellation token handling

Unlike the HandleMarketDataCreatedEventAsync method, this method doesn't pass the cancellation token to the retry policy's ExecuteAsync method. This creates an inconsistency between the two similar methods and means retries won't respect cancellation requests.

-            await _retryPolicy.ExecuteAsync(async () =>
+            await _retryPolicy.ExecuteAsync(async (ct) =>
             {
                 try
                 {
                     // Process event
                     _logger.LogInformation("Processing market data changed event: {EventId}", eventData.Id);

                     // Call the event processor to handle the event
-                    await _eventProcessor.ProcessChangedEventAsync(eventData, cancellationToken);
+                    await _eventProcessor.ProcessChangedEventAsync(eventData, ct);
                 }
                 catch (Exception ex) when (!(ex is OperationCanceledException))
                 {
                     _logger.LogError(ex, "Error processing market data changed event: {EventId}", eventData.Id);
                     throw; // Rethrowing to trigger retry
                 }
-            });
+            }, cancellationToken);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
await _retryPolicy.ExecuteAsync(async () =>
{
    try
    {
        // Process event
        _logger.LogInformation("Processing market data changed event: {EventId}", eventData.Id);
        // Call the event processor to handle the event
        await _eventProcessor.ProcessChangedEventAsync(eventData, cancellationToken);
    }
    catch (Exception ex) when (!(ex is OperationCanceledException))
    {
        _logger.LogError(ex, "Error processing market data changed event: {EventId}", eventData.Id);
        throw; // Rethrowing to trigger retry
    }
});
await _retryPolicy.ExecuteAsync(async (ct) =>
{
    try
    {
        // Process event
        _logger.LogInformation("Processing market data changed event: {EventId}", eventData.Id);
        // Call the event processor to handle the event
        await _eventProcessor.ProcessChangedEventAsync(eventData, ct);
    }
    catch (Exception ex) when (!(ex is OperationCanceledException))
    {
        _logger.LogError(ex, "Error processing market data changed event: {EventId}", eventData.Id);
        throw; // Rethrowing to trigger retry
    }
}, cancellationToken);
🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Application/Events/MarketDataEventSubscriber.cs around
lines 79 to 94, the retry policy's ExecuteAsync call does not receive the
cancellationToken, unlike the similar HandleMarketDataCreatedEventAsync method.
To fix this, pass the cancellationToken as an argument to ExecuteAsync so that
retries respect cancellation requests consistently.

Comment on lines +3 to +12
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Phoenix.MarketData.Domain.Events;
using Phoenix.MarketData.Infrastructure.Events;
using Phoenix.MarketData.Infrastructure.Repositories;
using Phoenix.MarketData.Infrastructure.Serialization.JsonConverters;

⚠️ Potential issue

Fix missing assembly references.

Build failures indicate missing assembly references. Ensure your project file includes all required package references.

#!/bin/bash
# Check project file for required package references
echo "Checking project file for required package references..."
grep -rn --include="*.csproj" "Microsoft.Azure.Cosmos" src/
grep -rn --include="*.csproj" "Microsoft.Azure.Functions" src/
🧰 Tools
🪛 GitHub Check: Build and Test

[failure] 3-3:
The type or namespace name 'Cosmos' does not exist in the namespace 'Microsoft.Azure' (are you missing an assembly reference?)


[failure] 9-9:
The type or namespace name 'Domain' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)


[failure] 10-10:
The type or namespace name 'Infrastructure' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)


[failure] 11-11:
The type or namespace name 'Infrastructure' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)


[failure] 12-12:
The type or namespace name 'Infrastructure' does not exist in the namespace 'Phoenix.MarketData' (are you missing an assembly reference?)

🪛 GitHub Actions: Build and Test

[error] 3-3: CS0234: The type or namespace name 'Cosmos' does not exist in the namespace 'Microsoft.Azure' (are you missing an assembly reference?)

🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Functions/Program.cs around lines 3 to 12, the build
fails due to missing assembly references for Microsoft.Azure.Cosmos and
Microsoft.Azure.Functions. Fix this by opening the project file (.csproj) and
adding or verifying the presence of the necessary PackageReference entries for
these assemblies. Ensure the project file includes references to
Microsoft.Azure.Cosmos and Microsoft.Azure.Functions packages to resolve the
build errors.

Comment on lines +30 to +66
public async Task ProcessCreatedEventAsync(IMarketDataIntegrationEvent createdEvent, CancellationToken cancellationToken = default)
{
    // Add null check guard clause
    if (createdEvent == null)
        throw new ArgumentNullException(nameof(createdEvent));

    // Implementation to process created events
    _logger.LogInformation("Processing created event: {EventId}", createdEvent.Id);

    try
    {
        // Example implementation - actual logic will depend on your requirements
        // This might involve fetching data, transforming it, and updating repositories

        // For example, you could query existing data:
        var existingData = await _marketDataService.QueryMarketDataAsync(
            createdEvent.DataType,
            createdEvent.AssetClass,
            createdEvent.AssetId,
            cancellationToken: cancellationToken);

        _logger.LogInformation("Found {Count} existing records for {AssetId}",
            existingData?.Count() ?? 0, createdEvent.AssetId);

        // Additional processing logic as needed
        // This could include:
        // - Data validation or enrichment
        // - Notifications to other systems
        // - Cache updates
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "Error processing created event {EventId} for {AssetId}",
            createdEvent.Id, createdEvent.AssetId);
        throw; // Re-throw for error handling at higher levels
    }
}

💡 Verification agent

🧩 Analysis chain

Implement actual processing logic in ProcessCreatedEventAsync.

The method currently fetches existing data but doesn't implement the actual processing logic mentioned in the comments (validation, enrichment, notifications, cache updates).


🏁 Script executed:

#!/bin/bash
# Check if there's any implementation in other files that might handle the processing logic
echo "Searching for files that might implement the processing logic..."
rg -A 5 "IMarketDataService.*PublishMarketDataAsync" --glob "*.cs"
rg -A 5 "ValidationMarketDataServiceDecorator" --glob "*.cs"

Length of output: 6751


Implement actual processing logic in ProcessCreatedEventAsync

This method currently only queries for existing data and logs the counts. You need to replace the placeholder comments with your real business logic. For example, consider:

  • Validating or enriching the fetched data before saving or forwarding.
  • Mapping the integration event into domain entities and persisting changes via your repositories.
  • Publishing downstream events or notifications (e.g. via _marketDataService.PublishMarketDataAsync(...)).
  • Updating caches or triggering any other side-effects required by your domain.
  • Handling retry/back-off or compensating actions as needed.

Please add the concrete processing steps in ProcessCreatedEventAsync so it fully implements your requirements.
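
A hedged sketch of what the body might grow into — _mapper and _validator are hypothetical dependencies, and the IsValid/Errors members are assumed; only QueryMarketDataAsync and PublishMarketDataAsync come from the surrounding review context:

public async Task ProcessCreatedEventAsync(IMarketDataIntegrationEvent createdEvent, CancellationToken cancellationToken = default)
{
    if (createdEvent == null)
        throw new ArgumentNullException(nameof(createdEvent));

    _logger.LogInformation("Processing created event: {EventId}", createdEvent.Id);

    // 1. Map the integration event to a domain entity (hypothetical mapper).
    var entity = _mapper.ToDomain(createdEvent);

    // 2. Validate / enrich before persisting or forwarding (hypothetical validator).
    var validation = await _validator.ValidateAsync(entity, cancellationToken);
    if (!validation.IsValid)
    {
        _logger.LogWarning("Created event {EventId} failed validation: {Errors}",
            createdEvent.Id, string.Join("; ", validation.Errors));
        return;
    }

    // 3. Persist and notify downstream consumers.
    await _marketDataService.PublishMarketDataAsync(entity, cancellationToken);

    _logger.LogInformation("Created event {EventId} processed for {AssetId}",
        createdEvent.Id, createdEvent.AssetId);
}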

🤖 Prompt for AI Agents
In src/Phoenix.MarketData.Application/Services/MarketDataEventProcessor.cs
between lines 30 and 66, the ProcessCreatedEventAsync method currently only
queries existing data and logs the count but lacks the actual processing logic.
You need to replace the placeholder comments with concrete implementation steps
such as validating or enriching the fetched data, mapping the integration event
to domain entities and persisting changes via repositories, publishing
downstream events using methods like _marketDataService.PublishMarketDataAsync,
updating caches, and handling any necessary retry or compensating actions to
fully implement the business requirements.

Comment on lines +27 to +31
    var results = await queryFunc(cancellationToken);
    _logger.LogDebug("{OperationName} returned {Count} results of type {EntityType}",
        operationName, results.Count(), typeof(TResult).Name);
    return results;
}

🛠️ Refactor suggestion

Avoid double-enumeration & hidden performance hit in ExecuteQueryAsync

results.Count() forces an extra full enumeration when results is a deferred IEnumerable<T> (e.g. a LINQ query that has not been materialised).
Materialise once into a List<TResult> and log from that list to avoid re-running the query.

-var results = await queryFunc(cancellationToken);
-_logger.LogDebug("{OperationName} returned {Count} results of type {EntityType}",
-    operationName, results.Count(), typeof(TResult).Name);
-return results;
+var materialised = (await queryFunc(cancellationToken)).ToList();
+_logger.LogDebug("{OperationName} returned {Count} results of type {EntityType}",
+    operationName, materialised.Count, typeof(TResult).Name);
+return materialised;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
var results = await queryFunc(cancellationToken);
_logger.LogDebug("{OperationName} returned {Count} results of type {EntityType}",
    operationName, results.Count(), typeof(TResult).Name);
return results;
}
var materialised = (await queryFunc(cancellationToken)).ToList();
_logger.LogDebug("{OperationName} returned {Count} results of type {EntityType}",
    operationName, materialised.Count, typeof(TResult).Name);
return materialised;
🤖 Prompt for AI Agents
In
src/Phoenix.MarketData.Infrastructure/Repositories/CosmosRepositoryQueryMethods.cs
around lines 27 to 31, the code calls results.Count() on an IEnumerable which
causes a second enumeration and potential performance issues. To fix this,
materialize the results once by converting them to a List<TResult> immediately
after awaiting queryFunc, then use that list for both logging the count and
returning the results. This avoids multiple enumerations and improves
performance.
