Merged
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "@mesadev/saguaro",
"version": "0.4.16",
"version": "0.4.17",
"description": "AI code review that enforces your team's rules during development",
"license": "Apache-2.0",
"type": "module",
19 changes: 3 additions & 16 deletions src/adapter/rules.ts
@@ -1,11 +1,6 @@
import fs from 'node:fs';
import path from 'node:path';
import {
loadValidatedConfig,
resolveApiKey,
resolveModelForReview,
resolveModelFromResolvedConfig,
} from '../config/model-config.js';
import { resolveGeneratorBackend } from '../generator/llm-backend.js';
import { findRepoRoot } from '../git/git.js';
import { generateRule } from '../rules/generator.js';
import type { PreviewRuleResult } from '../rules/preview.js';
@@ -176,20 +171,12 @@ export function writeGeneratedRules(rules: RulePolicy[]): WriteGeneratedRulesRes
export async function generateRuleAdapter(request: GenerateRuleAdapterRequest): Promise<GenerateRuleAdapterResult> {
const repoRoot = findRepoRoot();
const target = analyzeTarget({ targetPath: request.target, repoRoot });

const config = loadValidatedConfig();
const apiKey = resolveApiKey(config);
const modelName = resolveModelForReview(config, 'rules');
const model = resolveModelFromResolvedConfig({
provider: config.model.provider,
model: modelName,
apiKey,
});
const backend = resolveGeneratorBackend();

const result = await generateRule({
intent: request.intent,
target,
model,
backend,
title: request.title,
severity: request.severity,
repoRoot,
2 changes: 1 addition & 1 deletion src/ai/agent-runner.ts
@@ -3,7 +3,7 @@ import type { AgentRunner, AgentRunnerOptions, AgentRunnerResult } from '../core

const FIVE_MINUTES_MS = 5 * 60 * 1000;
const TEN_MB = 10 * 1024 * 1024;
const DEFAULT_MAX_TURNS = 10;
const DEFAULT_MAX_TURNS = 30;
Medium

Increasing DEFAULT_MAX_TURNS from 10 to 30 (3x) has significant implications. This affects ALL agent operations system-wide, not just rule generation. Each turn involves an LLM API call, so this change could triple execution time and API costs for operations that hit the limit. Without justification in the PR description or code comments, this appears to be a global change to solve a specific problem. Consider: (1) making maxTurns configurable per operation type, (2) documenting why 30 turns is necessary, or (3) investigating if the need for many turns indicates a prompting/architecture issue that should be fixed differently.

(mesa-dot-dev/saguaro#11, src/ai/agent-runner.ts#L6)


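Option (1) from the comment above could be sketched as follows. The operation names, the `TURN_LIMITS` map, and the `resolveMaxTurns` helper are all hypothetical, not part of the PR:

```typescript
// Hypothetical sketch: per-operation turn budgets instead of one global default.
type AgentOperation = 'review' | 'rule-generation';

const DEFAULT_MAX_TURNS = 10;

// Only rule generation gets the larger budget; everything else keeps the default.
const TURN_LIMITS: Partial<Record<AgentOperation, number>> = {
  'rule-generation': 30,
};

function resolveMaxTurns(operation: AgentOperation, override?: number): number {
  return override ?? TURN_LIMITS[operation] ?? DEFAULT_MAX_TURNS;
}
```

An explicit `maxTurns` passed through `AgentRunnerOptions` could then still override the per-operation default.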
interface SpawnCliOptions {
command: string;
18 changes: 3 additions & 15 deletions src/cli/lib/rules.ts
@@ -10,12 +10,7 @@ import {
locateRulesDirectoryAdapter,
validateRulesAdapter,
} from '../../adapter/rules.js';
import {
loadValidatedConfig,
resolveApiKey,
resolveModelForReview,
resolveModelFromResolvedConfig,
} from '../../config/model-config.js';
import { resolveGeneratorBackend } from '../../generator/llm-backend.js';
import { findRepoRoot } from '../../git/git.js';
import { generateRule } from '../../rules/generator.js';
import { previewRule } from '../../rules/preview.js';
@@ -231,19 +226,12 @@ const createRule = async (argv: CreateRuleArgv): Promise<number> => {
spinner.start('Generating rule...');

try {
const config = loadValidatedConfig();
const apiKey = resolveApiKey(config);
const modelName = resolveModelForReview(config, 'rules');
const model = resolveModelFromResolvedConfig({
provider: config.model.provider,
model: modelName,
apiKey,
});
const backend = resolveGeneratorBackend();

const result = await generateRule({
intent,
target,
model,
backend,
title: argv.title,
severity: argv.severity as Severity | undefined,
repoRoot,
1 change: 1 addition & 0 deletions src/generator/index.ts
@@ -1,3 +1,4 @@
export { CliLlmBackend, type GeneratorLlmBackend, resolveGeneratorBackend, SdkLlmBackend } from './llm-backend.js';
export { orchestrate as generateRules } from './orchestrator.js';
export type {
GenerateRulesOptions,
217 changes: 217 additions & 0 deletions src/generator/llm-backend.ts
@@ -0,0 +1,217 @@
import type { LanguageModel } from 'ai';
import { generateObject, generateText } from 'ai';
import type { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';
import {
createClaudeCliRunner,
createCodexCliRunner,
createGeminiCliRunner,
isCliAvailable,
} from '../ai/agent-runner.js';
import type { ModelProvider } from '../config/model-config.js';
import { loadValidatedConfig, resolveApiKey, resolveModelFromResolvedConfig } from '../config/model-config.js';
import type { AgentRunner } from '../core/types.js';
import { logger } from '../util/logger.js';

// ---------------------------------------------------------------------------
// Interface
// ---------------------------------------------------------------------------

export interface StructuredResult<T> {
object: T;
inputTokens: number;
outputTokens: number;
}

export interface TextResult {
text: string;
}

export interface GeneratorLlmBackend {
generateStructured<T extends z.ZodType>(options: {
system: string;
prompt: string;
schema: T;
abortSignal?: AbortSignal;
}): Promise<StructuredResult<z.infer<T>>>;

generatePlainText(options: { system: string; prompt: string }): Promise<TextResult>;
}
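For unit tests, a canned stub can satisfy the same shape as `GeneratorLlmBackend`. This is a sketch with the zod schema swapped for a plain `parse` callback so it stays dependency-free; all names here are illustrative:

```typescript
// Hypothetical test stub mirroring the GeneratorLlmBackend shape, with the
// zod schema replaced by a plain parse callback to avoid external dependencies.
interface StructuredResult<T> {
  object: T;
  inputTokens: number;
  outputTokens: number;
}

class StubLlmBackend {
  constructor(
    private cannedJson: string,
    private cannedText: string
  ) {}

  async generateStructured<T>(options: {
    system: string;
    prompt: string;
    parse: (raw: unknown) => T; // stands in for schema.safeParse
  }): Promise<StructuredResult<T>> {
    // Token counts are zeroed, matching the CLI backend's behavior.
    return {
      object: options.parse(JSON.parse(this.cannedJson)),
      inputTokens: 0,
      outputTokens: 0,
    };
  }

  async generatePlainText(_options: { system: string; prompt: string }): Promise<{ text: string }> {
    return { text: this.cannedText };
  }
}
```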

// ---------------------------------------------------------------------------
// CLI Implementation (default — uses claude -p / codex / gemini)
// ---------------------------------------------------------------------------

export class CliLlmBackend implements GeneratorLlmBackend {
constructor(
private runner: AgentRunner,
private cwd: string
) {}

async generateStructured<T extends z.ZodType>(options: {
system: string;
prompt: string;
schema: T;
abortSignal?: AbortSignal;
}): Promise<StructuredResult<z.infer<T>>> {
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- zod-to-json-schema expects Zod v3 types
const jsonSchema = JSON.stringify(zodToJsonSchema(options.schema as any), null, 2);
const jsonInstruction = `\n\nCRITICAL: Your response must be ONLY a raw JSON object. No preamble, no explanation, no markdown fences, no "here is" or "I will" text. The very first character of your response MUST be \`{\`. Respond with nothing but valid JSON matching this schema:\n\n${jsonSchema}`;

const result = await this.runner.execute({
systemPrompt: options.system + jsonInstruction,
prompt: options.prompt,
cwd: this.cwd,
abortSignal: options.abortSignal,
});

const jsonText = extractJson(result.output);
let parsed: unknown;
try {
parsed = JSON.parse(jsonText);
} catch {
throw new Error(`CLI backend returned invalid JSON: ${jsonText.slice(0, 200)}...`);
}

const validated = options.schema.safeParse(parsed);
if (!validated.success) {
const issues = validated.error.issues
.map((i: z.ZodIssue) => `${i.path.join('.') || '(root)'}: ${i.message}`)
.join('; ');
throw new Error(`CLI backend output failed schema validation: ${issues}`);
}

return { object: validated.data, inputTokens: 0, outputTokens: 0 };
}

async generatePlainText(options: { system: string; prompt: string }): Promise<TextResult> {
const result = await this.runner.execute({
systemPrompt: options.system,
prompt: options.prompt,
cwd: this.cwd,
});
return { text: result.output };
}
}

// ---------------------------------------------------------------------------
// SDK Implementation (fallback — requires API key)
// ---------------------------------------------------------------------------

export class SdkLlmBackend implements GeneratorLlmBackend {
constructor(private model: LanguageModel) {}

async generateStructured<T extends z.ZodType>(options: {
system: string;
prompt: string;
schema: T;
abortSignal?: AbortSignal;
}): Promise<StructuredResult<z.infer<T>>> {
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- generateObject has complex generics that don't align with our generic constraint
const result = await generateObject({
model: this.model,
schema: options.schema as any,
system: options.system,
prompt: options.prompt,
abortSignal: options.abortSignal,
});
return {
object: result.object as z.infer<T>,
inputTokens: result.usage.inputTokens ?? 0,
outputTokens: result.usage.outputTokens ?? 0,
};
}

async generatePlainText(options: { system: string; prompt: string }): Promise<TextResult> {
const result = await generateText({
model: this.model,
system: options.system,
prompt: options.prompt,
});
return { text: result.text };
}
}

// ---------------------------------------------------------------------------
// Factory — CLI default, SDK fallback
// ---------------------------------------------------------------------------

const PROVIDER_CLI: Record<ModelProvider, string> = {
anthropic: 'claude',
openai: 'codex',
google: 'gemini',
};

function createCliRunnerForProvider(provider: ModelProvider): AgentRunner {
switch (provider) {
case 'anthropic':
return createClaudeCliRunner();
case 'openai':
return createCodexCliRunner();
case 'google':
return createGeminiCliRunner();
}
}

/**
* Resolve the LLM backend for rule generation.
* Default: CLI agent (claude -p, codex, gemini).
* Fallback: AI SDK with API key (only when no CLI agent is installed).
*/
export function resolveGeneratorBackend(configPath?: string): GeneratorLlmBackend {
const config = loadValidatedConfig(configPath);
const provider = config.model.provider;
const cliCommand = PROVIDER_CLI[provider];

if (isCliAvailable(cliCommand)) {
logger.debug(`[generator] Using CLI backend (${cliCommand})`);
const runner = createCliRunnerForProvider(provider);
return new CliLlmBackend(runner, process.cwd());
}

// Fallback: SDK (requires API key)
logger.debug(`[generator] CLI ${cliCommand} not available, falling back to SDK`);
Medium

Silent fallback from CLI to SDK could fail unexpectedly if no API key is configured. The code assumes `resolveApiKey(config)` will succeed on line 174, but if the API key is missing, the tool will fail deep in execution rather than at startup with a clear error message. Recommendation: Add validation before fallback: `const apiKey = resolveApiKey(config); if (!apiKey) { throw new Error('${cliCommand} CLI not found and no API key configured. Install CLI or set API key.'); }` This provides users with clear, actionable feedback.

(src/generator/llm-backend.ts#L173)

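The suggested guard could be sketched like this. `ensureApiKeyForFallback` is a hypothetical helper name, and the sketch assumes `resolveApiKey` yields `undefined` (or an empty string) when no key is configured:

```typescript
// Hypothetical guard: fail fast with an actionable message before the SDK
// fallback runs, instead of failing deep inside generation.
function ensureApiKeyForFallback(apiKey: string | undefined, cliCommand: string): string {
  if (!apiKey) {
    throw new Error(
      `${cliCommand} CLI not found and no API key configured. ` +
        `Install the ${cliCommand} CLI or set an API key.`
    );
  }
  return apiKey;
}
```

Calling this right before constructing the SDK model would surface the misconfiguration at startup rather than mid-run.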
const apiKey = resolveApiKey(config);
const model = resolveModelFromResolvedConfig({
provider,
model: config.model.name,
apiKey,
});
return new SdkLlmBackend(model);
}
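The CLI-first resolution documented above boils down to a plain availability probe. This hypothetical `commandExists` uses `which`/`where` for illustration only; it is not necessarily how `isCliAvailable` in src/ai/agent-runner.ts is implemented:

```typescript
import { spawnSync } from 'node:child_process';

// Hypothetical availability probe (illustrative; the real isCliAvailable
// may use a different mechanism).
function commandExists(command: string): boolean {
  const probe = process.platform === 'win32' ? 'where' : 'which';
  const result = spawnSync(probe, [command], { stdio: 'ignore' });
  return result.status === 0;
}
```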

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function extractJson(text: string): string {
let cleaned = text.trim();

// Strip markdown code fences
const fenceMatch = cleaned.match(/```(?:json)?\s*\n([\s\S]*?)\n\s*```/);
if (fenceMatch) {
cleaned = fenceMatch[1]!.trim();
}

// Find the first { or [ — skip any preamble text the LLM added
const objStart = cleaned.indexOf('{');
const arrStart = cleaned.indexOf('[');
let start = -1;
if (objStart === -1) start = arrStart;
else if (arrStart === -1) start = objStart;
else start = Math.min(objStart, arrStart);

if (start > 0) {
cleaned = cleaned.slice(start);
}

// Trim trailing non-JSON (e.g. trailing explanation after the closing brace)
const lastBrace = cleaned.lastIndexOf('}');
const lastBracket = cleaned.lastIndexOf(']');
const end = Math.max(lastBrace, lastBracket);
if (end > 0) {
cleaned = cleaned.slice(0, end + 1);
}

return cleaned.trim();
}
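To illustrate what `extractJson` recovers, here it is copied verbatim so it runs standalone, followed by a checked example (only the trailing comment is new):

```typescript
// extractJson as defined in src/generator/llm-backend.ts, copied verbatim.
function extractJson(text: string): string {
  let cleaned = text.trim();

  // Strip markdown code fences
  const fenceMatch = cleaned.match(/```(?:json)?\s*\n([\s\S]*?)\n\s*```/);
  if (fenceMatch) {
    cleaned = fenceMatch[1]!.trim();
  }

  // Find the first { or [ — skip any preamble text the LLM added
  const objStart = cleaned.indexOf('{');
  const arrStart = cleaned.indexOf('[');
  let start = -1;
  if (objStart === -1) start = arrStart;
  else if (arrStart === -1) start = objStart;
  else start = Math.min(objStart, arrStart);

  if (start > 0) {
    cleaned = cleaned.slice(start);
  }

  // Trim trailing non-JSON (e.g. trailing explanation after the closing brace)
  const lastBrace = cleaned.lastIndexOf('}');
  const lastBracket = cleaned.lastIndexOf(']');
  const end = Math.max(lastBrace, lastBracket);
  if (end > 0) {
    cleaned = cleaned.slice(0, end + 1);
  }

  return cleaned.trim();
}

// Preamble and trailing prose are stripped:
// extractJson('Sure! {"ok":true} Hope that helps.') → '{"ok":true}'
```

Note that when the text already starts with `{` or `[`, `start` is 0 and no leading slice occurs, so pure JSON passes through unchanged.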