Merged
14 changes: 8 additions & 6 deletions .github/copilot-instructions.md
@@ -9,7 +9,7 @@ This is a **multi-project .NET 10.0 solution** for AI-powered static code analys
- **Shared** (`Lintellect.Shared`): Data contracts (`AnalysisRequest`, `GitInfo`, `AnalyzerFindings`) shared between CLI and API
- **AppHost** (`Lintellect.AppHost`) & **ServiceDefaults** (`Lintellect.ServiceDefaults`): .NET Aspire orchestration for local development with OpenTelemetry, health checks, and service discovery

-**Key Data Flow**: CI/CD runs CLI → Roslyn analyzes code → CLI detects Git context → Results posted to API → API processes with AI (Claude/Semantic Kernel) → DevOps integration
+**Key Data Flow**: CI/CD runs CLI → Roslyn analyzes code → CLI detects Git context → Results posted to API → API processes with AI (Claude / Azure OpenAI via Microsoft Agent Framework) → DevOps integration

## Solution Structure

@@ -24,6 +24,7 @@ This is a **multi-project .NET 10.0 solution** for AI-powered static code analys
### Test Projects

- **Lintellect.Api.FunctionalTests**: Functional tests using Testcontainers, Respawn, and Shouldly
- **Lintellect.Api.IntegrationTests**: End-to-end tests against real AI providers (Azure OpenAI, Anthropic). Self-skip when credentials are missing.
- **Lintellect.Api.UnitTests**: Unit tests using NUnit, NSubstitute, and Shouldly
- **Lintellect.Cli.UnitTests**: Unit tests for the CLI project using NUnit and Shouldly

@@ -33,7 +34,7 @@ This is a **multi-project .NET 10.0 solution** for AI-powered static code analys
- **C# Version**: 14.0 (latest)
- **Key Technologies**:
- **Roslyn** (Microsoft.CodeAnalysis.CSharp.Workspaces) for code analysis
-- **AI Integration**: Anthropic Claude API and Microsoft Semantic Kernel
+- **AI Integration**: Anthropic Claude API and Azure OpenAI via Microsoft Agent Framework
- **Database**: PostgreSQL with Entity Framework Core
- **Messaging**: Channel-based job queue (no Azure Service Bus currently)
- **CLI**: System.CommandLine for command-line interface
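The channel-based job queue the list above mentions can be sketched as below. This is a minimal illustration of the pattern, not the repo's actual `AnalysisJobQueue`: the payload type (`Guid` job IDs), the unbounded-channel choice, and the member names other than the class name are assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;

// Sketch of a channel-backed job queue; internals are illustrative assumptions.
public sealed class AnalysisJobQueue
{
    private readonly Channel<Guid> _channel =
        Channel.CreateUnbounded<Guid>(new UnboundedChannelOptions { SingleReader = true });

    // Producer side: called after the AnalysisJob is persisted.
    public ValueTask EnqueueAsync(Guid jobId, CancellationToken ct = default) =>
        _channel.Writer.WriteAsync(jobId, ct);

    // Consumer side: drained by a background processing loop.
    public IAsyncEnumerable<Guid> DequeueAllAsync(CancellationToken ct = default) =>
        _channel.Reader.ReadAllAsync(ct);
}
```

A hosted `BackgroundService` would then `await foreach` over `DequeueAllAsync` and dispatch each job to the analysis pipeline.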
@@ -114,7 +115,7 @@ This is a **multi-project .NET 10.0 solution** for AI-powered static code analys
- **Unit Tests**: Fast, isolated tests with mocks
- **Integration Tests**: Test with real dependencies (CLI with real solution files)
- **Functional Tests**: End-to-end API tests with test database
-- **InternalsVisibleTo**: The CLI project exposes internals to the test project
+- **InternalsVisibleTo**: The CLI project exposes internals to its unit tests; the API project exposes internals to `Lintellect.Api.UnitTests` and `Lintellect.Api.IntegrationTests`
- **Test Data Builders**: Use fluent builders for creating test data (see `TestDataBuilder`)

## CI/CD Integration
@@ -143,7 +144,7 @@ When working on the API project:

1. **Architecture**: Follow Clean Architecture with Domain → Application → Infrastructure layers
2. **CQRS**: Use Mediator pattern for commands and queries
-3. **AI Integration**: Support both Anthropic Claude and Microsoft Semantic Kernel
+3. **AI Integration**: Support both Anthropic Claude and Azure OpenAI via Microsoft Agent Framework
4. **Background Processing**: Use Channel-based job queue for async analysis processing
5. **Database**: PostgreSQL with Entity Framework Core and JSONB for flexible data storage
6. **API Design**: Use minimal APIs and controllers as appropriate
@@ -175,7 +176,7 @@ When working with code analysis:
4. **Docker**: API project supports Docker with Linux target OS
5. **User Secrets**: Both API and AppHost have user secrets configured
6. **Package Management**: Centralized package version management via `Directory.Packages.props`
-7. **AI Providers**: Support for both Anthropic Claude and Microsoft Semantic Kernel
+7. **AI Providers**: Support for both Anthropic Claude and Azure OpenAI via Microsoft Agent Framework
8. **Database**: PostgreSQL with JSONB for flexible data storage
9. **Background Processing**: Channel-based job queue instead of Azure Service Bus
10. **Testing**: Comprehensive test coverage with unit, integration, and functional tests
@@ -232,7 +233,8 @@ When working with code analysis:

- [Microsoft.CodeAnalysis Documentation](https://learn.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/)
- [System.CommandLine Documentation](https://learn.microsoft.com/en-us/dotnet/standard/commandline/)
-- [Semantic Kernel Documentation](https://learn.microsoft.com/en-us/semantic-kernel/)
+- [Microsoft Agent Framework Documentation](https://learn.microsoft.com/en-us/agent-framework/)
+- [Microsoft.Extensions.AI Documentation](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.ai)
- [Anthropic Claude API Documentation](https://docs.anthropic.com/)
- [.NET Aspire Documentation](https://learn.microsoft.com/en-us/dotnet/aspire/)
- [Entity Framework Core Documentation](https://learn.microsoft.com/en-us/ef/core/)
3 changes: 2 additions & 1 deletion .gitignore
@@ -361,4 +361,5 @@ MigrationBackup/
# Fody - auto-generated XML schema
FodyWeavers.xsd

-*.cursor
+*.cursor
+.claude/settings.local.json
25 changes: 21 additions & 4 deletions CLAUDE.md
@@ -79,15 +79,15 @@ Lintellect is an AI-powered PR code review assistant. There are two runtime comp

**CLI** (`Lintellect.Cli`) — stateless, runs in CI/CD pipelines. Reads PR context from CI environment variables, performs Roslyn-based C# analysis, and POSTs an `AnalysisRequest` to the API.

-**API** (`Lintellect.Api`) — ASP.NET Core service. Receives the request, persists it as an `AnalysisJob`, enqueues it onto a channel-based `AnalysisJobQueue`, and a background service processes it: calls Claude or Semantic Kernel, then posts review comments back to GitHub/Azure DevOps.
+**API** (`Lintellect.Api`) — ASP.NET Core service. Receives the request, persists it as an `AnalysisJob`, enqueues it onto a channel-based `AnalysisJobQueue`, and a background service processes it: calls Claude or Azure OpenAI (via Microsoft Agent Framework), then posts review comments back to GitHub/Azure DevOps.

**Data flow:**

```
CI/CD → CLI (Roslyn analysis + Git context extraction)
→ POST /analysis to API
→ AnalysisJob persisted in PostgreSQL
-  → Background service dequeues + calls AI (Claude / Semantic Kernel)
+  → Background service dequeues + calls AI (Claude / Azure OpenAI via Microsoft Agent Framework)
→ Results stored (Summary, DetailedAnalysis, InlineSuggestions)
→ Comments posted to PR via Octokit / TFS client
```
@@ -109,18 +109,32 @@ Apis/ → Minimal API endpoints, API key auth filter

| Interface | Purpose |
| --------------------- | -------------------------------------------------------------------- |
-| `IAnalyzerService` | AI service contract (ClaudeAnalyzerService, SemanticAnalyzerService) |
+| `IAnalyzerService` | AI service contract (ClaudeAnalyzerService, AzureOpenAIAnalyzerService) |
| `IGitInfoExtractor` | Extract PR context from CI env vars |
| `IGitClientFactory` | Create GitHub/Azure DevOps clients dynamically |
| `IPullRequestService` | Fetch diffs, post comments |
| `IMcpServiceResolver` | Resolve MCP servers for AI context |
| `IWorkItemService` | Resolve linked work items / issues for a PR (per-provider) |
| `IWorkItemSummarizer` | AI-condense linked work items into a tight GOAL + CONTEXT block |

Factories (`GitInfoExtractorFactory`, `GitClientFactory`) select implementations based on `EGitProvider` at runtime.
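The runtime selection those factories perform can be sketched as a self-contained toy. The factory and enum names come from the doc above, but the extractor classes, their members, and the construction strategy here are stand-ins rather than the repo's actual types.

```csharp
using System;

public enum EGitProvider { GitHub, AzureDevOps }

// Stand-in extractor types; the real implementations read CI environment variables.
public interface IGitInfoExtractor { string Provider { get; } }
public sealed class GitHubInfoExtractor : IGitInfoExtractor { public string Provider => "GitHub"; }
public sealed class AzureDevOpsInfoExtractor : IGitInfoExtractor { public string Provider => "AzureDevOps"; }

public static class GitInfoExtractorFactory
{
    // Pick the provider-specific implementation from the request's EGitProvider.
    public static IGitInfoExtractor Create(EGitProvider provider) => provider switch
    {
        EGitProvider.GitHub      => new GitHubInfoExtractor(),
        EGitProvider.AzureDevOps => new AzureDevOpsInfoExtractor(),
        _ => throw new ArgumentOutOfRangeException(nameof(provider)),
    };
}
```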

### AI prompt pipeline

`PromptBuilder` assembles prompts from templates in `Infrastructure/Services/AI/Prompts/Templates/{Language}/`. `TokenAwareChunker` splits large diffs to stay within model token limits; `TokenEstimator` estimates token counts without calling the API.
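A rough sketch of the estimator/chunker idea follows. The ~4-characters-per-token heuristic and the line-boundary splitting are assumptions for illustration; the repo's `TokenEstimator` and `TokenAwareChunker` may use different heuristics and chunk shapes.

```csharp
using System.Collections.Generic;
using System.Text;

// Illustrative sketches; not the repo's actual implementations.
public static class TokenEstimator
{
    // Rough heuristic: ~4 characters per token for English text and code.
    public static int Estimate(string text) => (text.Length + 3) / 4;
}

public static class TokenAwareChunker
{
    // Split a diff into chunks that each stay under the token budget,
    // breaking on line boundaries so no line is cut in half.
    public static List<string> Chunk(string diff, int maxTokens)
    {
        var chunks = new List<string>();
        var current = new StringBuilder();
        foreach (var line in diff.Split('\n'))
        {
            if (current.Length > 0 &&
                TokenEstimator.Estimate(current.ToString()) + TokenEstimator.Estimate(line) > maxTokens)
            {
                chunks.Add(current.ToString());
                current.Clear();
            }
            current.AppendLine(line);
        }
        if (current.Length > 0) chunks.Add(current.ToString());
        return chunks;
    }
}
```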

### Work-item context (on by default)

When `AnalysisRequest.EnableWorkItemContext` is true (CLI flag `--enable-work-item-context` / `-ewi`, defaults to true; pass `--enable-work-item-context false` to disable), the orchestrator resolves linked work items via `IWorkItemService` and runs a single `IWorkItemSummarizer` pass that produces a structured response:

```
GOAL: <one sentence>
CONTEXT:
<2-3 short paragraphs>
```

The full block is injected into the Summary and Detailed-Analysis prompts via `{{workItemContext}}`; only the `GOAL` line is injected into the per-file Inline-Suggestion prompts (per-file calls multiply tokens by file count, so the inline cost stays bounded). Failures during fetch or summarization are logged and analysis continues with no work-item context. Azure DevOps work items are resolved server-side via the WIT REST API; GitHub relies on PR-body parsing for `Closes/Fixes/Resolves #N` keywords.
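The GitHub-side PR-body parsing could look roughly like this. The parser class, method name, and exact regex are hypothetical; only the closing-keyword convention (`Closes/Fixes/Resolves #N`) comes from the doc above.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

// Hypothetical sketch of closing-keyword extraction from a PR body.
public static class ClosingKeywordParser
{
    // Matches "close/closes/closed", "fix/fixes/fixed", "resolve/resolves/resolved"
    // followed by an issue number like "#123".
    private static readonly Regex Pattern = new(
        @"\b(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)",
        RegexOptions.IgnoreCase);

    public static IReadOnlyList<int> ExtractIssueNumbers(string prBody) =>
        Pattern.Matches(prBody ?? string.Empty)
               .Select(m => int.Parse(m.Groups[1].Value))
               .Distinct()
               .ToList();
}
```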

### Configuration

Settings fall back to environment variables via `PostConfigure<>()`:
@@ -129,7 +143,10 @@
| ----------------------------------- | ---------------------- |
| `ApiKey` | — |
| `ConnectionStrings:postgresdb` | — |
-| `ClaudeAnalyzer:ApiKey` | — |
+| `ClaudeAnalyzer:ApiKey` | `CLAUDE_API_KEY` |
+| `AzureOpenAIAnalyzer:ApiKey` | `AZURE_OPENAI_API_KEY` |
+| `AzureOpenAIAnalyzer:Endpoint` | `AZURE_OPENAI_ENDPOINT` |
+| `AzureOpenAIAnalyzer:DeploymentName` | `AZURE_OPENAI_DEPLOYMENT_NAME` |
| `GitCredentials:GitHub:Token` | `GITHUB_TOKEN` |
| `GitCredentials:AzureDevOps:Pat` | `AZURE_DEVOPS_PAT` |
| `GitCredentials:AzureDevOps:OrgUrl` | `AZURE_DEVOPS_ORG_URL` |
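The `PostConfigure<>()` fallback the table describes can be sketched as below. `AzureOpenAIAnalyzerOptions` and the helper class are stand-in names, and the wiring comment assumes standard `Microsoft.Extensions.Options` registration; only the setting keys and environment-variable names come from the table.

```csharp
using System;

// Stand-in options type mirroring the AzureOpenAIAnalyzer settings in the table.
public sealed class AzureOpenAIAnalyzerOptions
{
    public string? ApiKey { get; set; }
    public string? Endpoint { get; set; }
    public string? DeploymentName { get; set; }
}

public static class AnalyzerOptionsFallback
{
    // Wired up via builder.Services.PostConfigure<AzureOpenAIAnalyzerOptions>(ApplyEnvironmentFallback),
    // so env vars only fill settings that configuration left null.
    public static void ApplyEnvironmentFallback(AzureOpenAIAnalyzerOptions opts)
    {
        opts.ApiKey         ??= Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");
        opts.Endpoint       ??= Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
        opts.DeploymentName ??= Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME");
    }
}
```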
10 changes: 4 additions & 6 deletions Directory.Packages.props
@@ -14,17 +14,15 @@
<PackageVersion Include="Mediator.Abstractions" Version="3.0.1" />
<PackageVersion Include="Mediator.SourceGenerator" Version="3.0.1" />
<PackageVersion Include="Microsoft.Extensions.Http.Polly" Version="10.0.1" />
-<PackageVersion Include="Microsoft.SemanticKernel.Agents.Core" Version="1.72.0" />
-<PackageVersion Include="Microsoft.SemanticKernel.Agents.OpenAI" Version="1.72.0-preview" />
-<PackageVersion Include="ModelContextProtocol.Core" Version="1.0.0" />
+<PackageVersion Include="Microsoft.Agents.AI" Version="1.3.0" />
+<PackageVersion Include="Microsoft.Agents.AI.OpenAI" Version="1.3.0" />
+<PackageVersion Include="Azure.AI.OpenAI" Version="2.9.0-beta.1" />
+<PackageVersion Include="ModelContextProtocol.Core" Version="1.2.0" />
<PackageVersion Include="Polly.Extensions.Http" Version="3.0.0" />
<PackageVersion Include="Microsoft.AspNetCore.OpenApi" Version="10.0.1" />
<PackageVersion Include="Microsoft.EntityFrameworkCore" Version="10.0.1" />
<PackageVersion Include="Microsoft.EntityFrameworkCore.Design" Version="10.0.1" />
<PackageVersion Include="Microsoft.EntityFrameworkCore.InMemory" Version="10.0.0" />
-<PackageVersion Include="Microsoft.SemanticKernel" Version="1.72.0" />
-<PackageVersion Include="Microsoft.SemanticKernel.Connectors.AzureOpenAI" Version="1.72.0" />
-<PackageVersion Include="Microsoft.SemanticKernel.Connectors.OpenAI" Version="1.72.0" />
<PackageVersion Include="Microsoft.TeamFoundationServer.Client" Version="20.268.0-preview" />
<PackageVersion Include="Microsoft.TeamFoundationServer.ExtendedClient" Version="20.268.0-preview" />
<PackageVersion Include="Npgsql.EntityFrameworkCore.PostgreSQL" Version="10.0.0" />
1 change: 1 addition & 0 deletions Lintellect.slnx
@@ -5,6 +5,7 @@
</Folder>
<Folder Name="/tests/">
<Project Path="tests/Lintellect.Api.FunctionalTests/Lintellect.Api.FunctionalTests.csproj" />
<Project Path="tests/Lintellect.Api.IntegrationTests/Lintellect.Api.IntegrationTests.csproj" />
<Project Path="tests/Lintellect.Api.UnitTests/Lintellect.Api.UnitTests.csproj" />
<Project Path="tests/Lintellect.Cli.UnitTests/Lintellect.Cli.UnitTests.csproj" Id="23001e5e-e6a8-4b2e-be18-36aa87a4422f" />
</Folder>
13 changes: 11 additions & 2 deletions README.md
@@ -154,6 +154,15 @@ Lintellect analyze \
--enable-inline-suggestions \
--enable-azure-devops-code-owners

# Linked work items / issues are used as PR context by default.
# Azure DevOps: linked work items resolved server-side via the WIT REST API.
# GitHub: PR body parsed for "Closes/Fixes/Resolves #N" keywords.
# To opt out:
Lintellect analyze \
--language "csharp" \
--enable-summary-comment \
--enable-work-item-context false

# Python analysis with Semgrep
Lintellect analyze \
--language "python" \
@@ -330,10 +339,10 @@ API-Key: your-api-key

```json
{
-  "SemanticAnalyzer": {
+  "AzureOpenAIAnalyzer": {
"ApiKey": "your-azure-ai-key",
"Endpoint": "https://your-resource.openai.azure.com/",
-    "Model": "gpt-4o"
+    "DeploymentName": "gpt-4o"
}
}
```
17 changes: 17 additions & 0 deletions src/Lintellect.Api/Application/Interfaces/IAnalyzerService.cs
@@ -65,6 +65,23 @@ Task<List<InlineSuggestion>> GenerateInlineSuggestionsAsync(
/// <param name="cancellationToken">Cancellation token</param>
/// <returns>Answer to the question in Markdown format</returns>
Task<string> AnswerQuestionAsync(AnalyzerServiceModel analysisResult, string threadContext, string question, CancellationToken cancellationToken = default);

/// <summary>
/// Runs a tightly scoped, single-shot AI call used for ancillary context summarization
/// (e.g. condensing linked work items before they are fed into the main review prompts).
/// Implementations must not require an <see cref="AnalyzerServiceModel"/>; the caller supplies a
/// fully rendered system + user prompt and an output token cap.
/// </summary>
/// <param name="systemPrompt">Fully rendered system prompt.</param>
/// <param name="userPrompt">Fully rendered user prompt.</param>
/// <param name="maxOutputTokens">Hard cap on the response token count.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>The model's text response, or an empty string if the model produced nothing.</returns>
Task<string> SummarizeContextAsync(
string systemPrompt,
string userPrompt,
int maxOutputTokens,
CancellationToken cancellationToken = default);
}

/// <summary>
18 changes: 18 additions & 0 deletions src/Lintellect.Api/Application/Interfaces/IGitClient.cs
@@ -149,4 +149,22 @@ Task AddCodeOwnersToPr(
/// </returns>
Task<PullRequestCommentThread> GetPullRequestThreadContextAsync(string projectName, string repositoryName, int pullRequestId, int prCommentId);

/// <summary>
/// Retrieves work items / issues linked to a pull request.
/// Implementations resolve linked items using the most natural mechanism for the provider:
/// Azure DevOps reads PR work-item refs from the WIT API; GitHub parses the PR body for closing
/// keywords and fetches the matching issues. Caller-supplied <paramref name="hints"/> (e.g. ids
/// already extracted CLI-side) are taken as-is and resolved into rich references.
/// </summary>
/// <param name="projectName">Project / owner name.</param>
/// <param name="repositoryName">Repository name.</param>
/// <param name="pullRequestId">Pull request ID / number.</param>
/// <param name="hints">Optional work-item ids the caller pre-extracted (CLI body parsing, etc.).</param>
/// <returns>Resolved work-item references, possibly empty.</returns>
Task<List<WorkItemReference>> GetLinkedWorkItemsAsync(
string projectName,
string repositoryName,
int pullRequestId,
IReadOnlyList<WorkItemReference>? hints = null);

}
13 changes: 13 additions & 0 deletions src/Lintellect.Api/Application/Interfaces/IWorkItemService.cs
@@ -0,0 +1,13 @@
using Lintellect.Shared.Models;

namespace Lintellect.Api.Application.Interfaces;

/// <summary>
/// Resolves work items / issues linked to a pull request, regardless of provider.
/// Wraps <see cref="IGitClientFactory"/> + <see cref="IGitClient.GetLinkedWorkItemsAsync"/> so the
/// orchestrator can stay provider-agnostic.
/// </summary>
public interface IWorkItemService
{
Task<List<WorkItemReference>> ResolveAsync(AnalysisRequest analysisRequest, CancellationToken cancellationToken = default);
}
24 changes: 24 additions & 0 deletions src/Lintellect.Api/Application/Interfaces/IWorkItemSummarizer.cs
@@ -0,0 +1,24 @@
using Lintellect.Shared.Models;

namespace Lintellect.Api.Application.Interfaces;

/// <summary>
/// Result of summarizing a set of linked work items into a compact AI-ready context block.
/// <see cref="FullContext"/> contains the full GOAL + CONTEXT block injected into Summary and
/// Detailed-Analysis prompts. <see cref="Goal"/> is the single GOAL line injected into the
/// per-file Inline-Suggestion prompts (kept tight to avoid per-file token blow-up).
/// </summary>
public sealed record WorkItemSummary(string FullContext, string Goal)
{
public static WorkItemSummary Empty { get; } = new(string.Empty, string.Empty);
}

/// <summary>
/// Condenses a set of <see cref="WorkItemReference"/>s into a tight context block suitable for
/// injection into the main code-review prompts. Implementations call the configured
/// <see cref="IAnalyzerService"/> with a hard token cap.
/// </summary>
public interface IWorkItemSummarizer
{
Task<WorkItemSummary> SummarizeAsync(IReadOnlyList<WorkItemReference> workItems, CancellationToken cancellationToken = default);
}
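An implementation of `SummarizeAsync` would call `IAnalyzerService.SummarizeContextAsync` and then split the raw model response into the `FullContext`/`Goal` pair. The parsing step can be sketched self-contained as below; the `WorkItemSummary` record mirrors the one in this file, while the parser class and its logic are illustrative assumptions.

```csharp
using System;
using System.Linq;

// Mirrors the record defined in IWorkItemSummarizer.cs.
public sealed record WorkItemSummary(string FullContext, string Goal)
{
    public static WorkItemSummary Empty { get; } = new(string.Empty, string.Empty);
}

// Hypothetical helper: turn a "GOAL: ... / CONTEXT: ..." model response
// into the summary pair. FullContext keeps the whole block; Goal is the
// single GOAL line used by the per-file inline prompts.
public static class WorkItemSummaryParser
{
    public static WorkItemSummary Parse(string? response)
    {
        if (string.IsNullOrWhiteSpace(response)) return WorkItemSummary.Empty;

        var goal = response.Split('\n')
            .Select(l => l.Trim())
            .FirstOrDefault(l => l.StartsWith("GOAL:", StringComparison.OrdinalIgnoreCase))
            ?? string.Empty;

        return new WorkItemSummary(response.Trim(), goal);
    }
}
```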
@@ -22,7 +22,10 @@ public sealed record ProcessAnalysisJobCommand(
public sealed class ProcessAnalysisJobCommandHandler(
IApplicationDbContext context,
PullRequestService prService,
-IAnalyzerService analyzerService) : IRequestHandler<ProcessAnalysisJobCommand, PullRequestAnalysisReportModel>
+IAnalyzerService analyzerService,
+IWorkItemService workItemService,
+IWorkItemSummarizer workItemSummarizer,
+ILogger<ProcessAnalysisJobCommandHandler> logger) : IRequestHandler<ProcessAnalysisJobCommand, PullRequestAnalysisReportModel>
{
public async ValueTask<PullRequestAnalysisReportModel> Handle(ProcessAnalysisJobCommand request, CancellationToken cancellationToken)
{
@@ -68,7 +71,15 @@ public async ValueTask<PullRequestAnalysisReportModel> Handle(ProcessAnalysisJob

// Step 2: Prepare analyzer and custom instructions
var customInstructions = await prService.GetCustomInstructionsAsync(analysisRequest);
-var aiAnalyzerModel = new AnalyzerServiceModel(analysisRequest, customInstructions ?? string.Empty);

+// Step 2b: Resolve and summarize linked work items (graceful degradation on failure)
+var workItemSummary = await ResolveWorkItemContextAsync(analysisRequest, cancellationToken);
+
+var aiAnalyzerModel = new AnalyzerServiceModel(
+analysisRequest,
+customInstructions ?? string.Empty,
+WorkItemContext: workItemSummary.FullContext,
+WorkItemGoal: workItemSummary.Goal);

// Step 3: Execute analysis tasks in parallel
var analysisResults = await ExecuteAnalysisTasksAsync(analyzerService, aiAnalyzerModel, diffFull, analysisRequest, cancellationToken);
@@ -325,6 +336,34 @@ private static DiffStatistics BuildDiffStatistics(Dictionary<string, string> dif
};
}

private async Task<WorkItemSummary> ResolveWorkItemContextAsync(AnalysisRequest analysisRequest, CancellationToken cancellationToken)
{
if (!analysisRequest.EnableWorkItemContext)
{
return WorkItemSummary.Empty;
}

try
{
var items = await workItemService.ResolveAsync(analysisRequest, cancellationToken);
if (items.Count == 0)
{
logger.LogInformation("Work item context enabled but no linked items found for PR #{PullRequestId}",
analysisRequest.GitInfo?.PullRequestId);
return WorkItemSummary.Empty;
}

return await workItemSummarizer.SummarizeAsync(items, cancellationToken);
}
catch (Exception ex)
{
logger.LogError(ex,
"Work item context resolution failed for PR #{PullRequestId}; continuing without context",
analysisRequest.GitInfo?.PullRequestId);
return WorkItemSummary.Empty;
}
}

private async Task<bool> CheckForDuplicateAnalysisAsync(AnalysisRequest analysisRequest, CancellationToken cancellationToken)
{
