Thank you for your interest in contributing to Rose! This document provides guidelines and standards for contributing to the project.
## Table of Contents

- Getting Started
- Development Setup
- Code Quality Standards
- Testing Requirements
- Pull Request Process
- Code Review Guidelines
- Common Patterns
## Getting Started

### Prerequisites

- Python 3.12 or higher
- Node.js 18+ and npm
- uv package manager
- Git
- API keys for development (Groq, ElevenLabs, Qdrant)
## Development Setup

1. Fork and clone the repository:

   ```bash
   git clone https://github.com/YOUR_USERNAME/ai-companion.git
   cd ai-companion
   ```

2. Install dependencies:

   ```bash
   uv sync
   cd frontend && npm install && cd ..
   ```

3. Set up environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your API keys
   # Also configure frontend environment
   cp frontend/.env.example frontend/.env
   ```

4. Start development servers:

   ```bash
   # Start both frontend and backend with hot reload
   python scripts/run_dev_server.py
   ```

5. Run tests to verify setup:

   ```bash
   uv run pytest
   ```
## Code Quality Standards

### Formatting and Linting

We use Ruff for code formatting and linting. All code must pass formatting and linting checks before being merged.
Before committing:

```bash
# Format code
make format-fix

# Fix linting issues
make lint-fix

# Check formatting (CI will run this)
make format-check

# Check linting (CI will run this)
make lint-check
```

Configuration:

- Line length: 120 characters
- Import sorting: Enabled
- Target Python version: 3.12
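For reference, these options correspond to Ruff settings in `pyproject.toml`. A sketch of what that configuration might look like (this project's actual file may organize the keys differently):

```toml
[tool.ruff]
line-length = 120
target-version = "py312"

[tool.ruff.lint]
# "E"/"F" are the pycodestyle/pyflakes defaults; "I" enables isort-style import sorting
select = ["E", "F", "I"]
```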
### Type Hints

All public functions must have complete type annotations:
```python
# Good ✅
from typing import Optional, List

async def search_memories(
    query: str,
    k: int = 5,
    metadata_filter: Optional[dict] = None
) -> List[str]:
    """Search for similar memories."""
    # Implementation
    pass

# Bad ❌
async def search_memories(query, k=5, metadata_filter=None):
    # Missing type hints
    pass
```

Type Checking:

```bash
# Run mypy type checker
uv run mypy src/
```

### Docstrings

All public functions, classes, and modules must have docstrings:
```python
def extract_memory(message: str) -> Optional[str]:
    """Extract important information from a user message.

    Analyzes the message content to identify information worth storing
    in long-term memory, such as personal details, emotional states,
    or healing goals.

    Args:
        message: The user's message content to analyze

    Returns:
        Extracted memory text if important information is found, None otherwise

    Example:
        >>> extract_memory("My name is Sarah and I live in Portland")
        "Name is Sarah, lives in Portland"
    """
    # Implementation
    pass
```

Docstring Format:

- Use Google-style docstrings
- Include Args, Returns, and Raises sections as needed
- Add Examples for complex functions
- Keep descriptions concise but clear
### Code Organization

Module Structure:

```
src/ai_companion/
├── core/          # Shared utilities, prompts, exceptions
├── graph/         # LangGraph workflow (nodes, edges, state)
├── interfaces/    # User-facing interfaces (web, API)
├── modules/       # Feature modules (memory, speech, etc.)
└── settings.py    # Configuration management
```

Import Order:

- Standard library imports
- Third-party imports
- Local application imports

```python
# Good ✅
import asyncio
from typing import Optional

from langchain_core.messages import HumanMessage
from qdrant_client import QdrantClient

from ai_companion.core.exceptions import MemoryError
from ai_companion.settings import settings
```

### Error Handling

Use the standardized error handling decorators:
```python
from ai_companion.core.error_handlers import handle_api_errors

@handle_api_errors("groq_stt", fallback_message="Could not transcribe audio")
async def transcribe_audio(audio_data: bytes) -> str:
    """Transcribe audio using Groq Whisper."""
    # Implementation
    pass
```

Error Handling Rules:

- Use the appropriate error handler decorator for the context
- Provide user-friendly fallback messages
- Never expose internal details in error messages
- Log errors with sufficient context
- Record metrics for monitoring
### Async/Await Patterns

Follow consistent async patterns:
```python
# Good ✅
import aiofiles

async def process_audio_file(path: str) -> bytes:
    """Process audio file asynchronously."""
    async with aiofiles.open(path, 'rb') as f:
        audio_data = await f.read()
    return await transcribe_audio(audio_data)

# Bad ❌
async def process_audio_file(path: str) -> bytes:
    """Process audio file - BLOCKS EVENT LOOP!"""
    with open(path, 'rb') as f:  # Blocking I/O in async function
        audio_data = f.read()
    return await transcribe_audio(audio_data)
```

Async Rules:

- Use `async def` for functions that perform I/O
- Always `await` async function calls
- Use `aiofiles` for file I/O in async contexts
- Use `asyncio.gather()` for parallel operations
- Document any sync-to-async bridges with rationale
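For instance, independent I/O calls can run concurrently with `asyncio.gather()`. The helper names below are hypothetical and the sleep stands in for real network latency:

```python
import asyncio

async def fetch_transcript(audio_id: str) -> str:
    """Stand-in for a real STT call; the sleep models network latency."""
    await asyncio.sleep(0.01)
    return f"transcript-{audio_id}"

async def process_batch(audio_ids: list[str]) -> list[str]:
    # The calls overlap instead of running back-to-back, so total
    # wall time is roughly one call's latency, not the sum of all three.
    return await asyncio.gather(*(fetch_transcript(a) for a in audio_ids))

if __name__ == "__main__":
    print(asyncio.run(process_batch(["a", "b", "c"])))
```

`gather()` preserves input order in its result list, which makes it a drop-in replacement for a sequential loop of awaits.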
### Code Duplication

Avoid code duplication by extracting common logic:
```python
# Good ✅
def _check_circuit_state(self) -> None:
    """Common state checking logic."""
    # Shared implementation
    pass

def call(self, func, *args, **kwargs):
    self._check_circuit_state()
    # Sync-specific logic
    pass

async def call_async(self, func, *args, **kwargs):
    self._check_circuit_state()
    # Async-specific logic
    pass

# Bad ❌
def call(self, func, *args, **kwargs):
    # Duplicated state checking logic
    if self._state == "OPEN":
        # ...
        pass

async def call_async(self, func, *args, **kwargs):
    # Same logic duplicated
    if self._state == "OPEN":
        # ...
        pass
```

Target: <5% code duplication.
## Testing Requirements

All new code must include tests:

- Core modules: >80% coverage required
- Utility modules: >70% coverage required
- Integration tests: cover all critical workflows

Running Tests:

```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=src --cov-report=html

# Run specific test categories
uv run pytest tests/unit/
uv run pytest tests/integration/
```

Test Layout:

```
tests/
├── unit/            # Unit tests for individual modules
│   ├── test_memory_manager.py
│   ├── test_speech_to_text.py
│   └── test_error_handlers.py
├── integration/     # End-to-end workflow tests
│   └── test_workflow_integration.py
├── fixtures/        # Shared test fixtures
│   ├── audio_samples.py
│   └── mock_responses.py
└── conftest.py      # Pytest configuration
```
Unit Test Example:

```python
import pytest
from unittest.mock import patch, MagicMock

from langchain_core.messages import HumanMessage

from ai_companion.modules.memory import MemoryManager

@pytest.mark.asyncio
async def test_memory_extraction(mock_groq_client):
    """Test memory extraction from user message."""
    with patch("ai_companion.modules.memory.get_vector_store") as mock_vs:
        manager = MemoryManager()
        message = HumanMessage(content="My name is Sarah")

        await manager.extract_and_store_memory(message)

        # Verify LLM was called
        mock_groq_client.analyze.assert_called_once()
        # Verify memory was stored
        mock_vs.return_value.store_memory.assert_called_once()
```
Integration Test Example:

```python
import pytest
from langchain_core.messages import AIMessage, HumanMessage

from ai_companion.graph.graph import create_workflow_graph

@pytest.mark.asyncio
async def test_complete_conversation_workflow(mock_external_services):
    """Test end-to-end conversation flow."""
    initial_state = {
        "messages": [HumanMessage(content="I'm feeling anxious")],
        "workflow_type": "conversation"
    }

    graph = create_workflow_graph().compile()
    result = await graph.ainvoke(initial_state)

    # Verify workflow completed
    assert "messages" in result
    assert len(result["messages"]) > 1
    assert isinstance(result["messages"][-1], AIMessage)
```
Test Guidelines:

- Use descriptive test names: `test_<function>_<scenario>_<expected_outcome>`
- Mock external services (Groq, ElevenLabs, Qdrant)
- Test both success and error cases
- Keep tests focused and independent
- Use fixtures for common setup
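A shared fixture typically wraps a small factory so each test gets a fresh, pre-configured stub. The names below are hypothetical, not this project's actual fixtures:

```python
from unittest.mock import MagicMock

def make_mock_tts_client() -> MagicMock:
    """Build a stub TTS client with a canned response.

    Keeping construction in a plain factory makes the stub usable
    both directly and from a conftest.py fixture.
    """
    client = MagicMock()
    client.synthesize.return_value = b"fake-audio-bytes"
    return client
```

In `tests/conftest.py` this would be exposed as a fixture, e.g. `@pytest.fixture` over `def mock_tts_client(): return make_mock_tts_client()`, so every test receives an independent instance and calls can be asserted on without cross-test leakage.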
### Performance Testing

Critical operations should have performance benchmarks:

```python
import time

def test_memory_retrieval_performance():
    """Test that memory retrieval completes within 200ms."""
    manager = MemoryManager()

    start_time = time.perf_counter()
    memories = manager.get_relevant_memories("test context")
    elapsed_time = time.perf_counter() - start_time

    assert elapsed_time < 0.2, f"Memory retrieval took {elapsed_time:.3f}s (>200ms)"
```

Performance Targets:

- Memory extraction: <500ms
- Memory retrieval: <200ms
- STT transcription: <2s for 10s audio
- TTS synthesis: <1s for 100 words
- End-to-end workflow: <5s
## Pull Request Process

Before submitting a PR:

1. Run all quality checks:

   ```bash
   make format-fix
   make lint-fix
   uv run pytest --cov=src
   uv run mypy src/
   ```

2. Update documentation:

   - Add/update docstrings
   - Update README if adding features
   - Update ARCHITECTURE.md if changing patterns

3. Write tests:

   - Unit tests for new functions
   - Integration tests for new workflows
   - Achieve required coverage targets
PR Description Template:

```markdown
## Description

Brief description of changes

## Type of Change

- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update

## Testing

- [ ] Unit tests added/updated
- [ ] Integration tests added/updated
- [ ] All tests passing
- [ ] Coverage targets met

## Checklist

- [ ] Code formatted with ruff
- [ ] Linting passes
- [ ] Type hints added
- [ ] Docstrings added/updated
- [ ] Documentation updated
```

PR Best Practices:

- Keep PRs focused: one feature or fix per PR
- Write clear descriptions: explain what and why
- Reference issues: link to related issues
- Request reviews: tag relevant reviewers
- Respond to feedback: address all review comments
## Code Review Guidelines

What to Check:
- ✅ Code follows style guidelines
- ✅ Tests are comprehensive
- ✅ Documentation is clear
- ✅ No security issues
- ✅ Performance is acceptable
- ✅ Error handling is appropriate
Review Checklist:

- [ ] Code quality (formatting, linting, types)
- [ ] Test coverage (>70% for new code)
- [ ] Documentation (docstrings, README updates)
- [ ] Error handling (appropriate decorators)
- [ ] Performance (no obvious bottlenecks)
- [ ] Security (no exposed secrets, proper validation)

Responding to Reviews:
- Address all comments
- Ask questions if unclear
- Make requested changes
- Mark conversations as resolved
- Request re-review when ready
## Common Patterns

### Adding a New Module

1. Create module structure:

   ```
   src/ai_companion/modules/new_module/
   ├── __init__.py
   ├── module_name.py
   └── utils.py
   ```

2. Add type hints and docstrings:

   ```python
   class NewModule:
       """Brief description of module purpose."""

       def __init__(self):
           """Initialize module."""
           pass

       async def process(self, input_data: str) -> str:
           """Process input data.

           Args:
               input_data: Data to process

           Returns:
               Processed result
           """
           pass
   ```

3. Add error handling:

   ```python
   from ai_companion.core.error_handlers import handle_api_errors

   @handle_api_errors("new_module")
   async def process(self, input_data: str) -> str:
       # Implementation
       pass
   ```

4. Write tests:

   ```python
   # tests/unit/test_new_module.py
   import pytest

   @pytest.mark.asyncio
   async def test_new_module_process():
       """Test new module processing."""
       module = NewModule()
       result = await module.process("test input")
       assert result == "expected output"
   ```

5. Update documentation:

   - Add the module to PROJECT_STRUCTURE.md
   - Document in ARCHITECTURE.md if significant
   - Update README if user-facing
### Adding a New Graph Node

1. Define node function:

   ```python
   # src/ai_companion/graph/nodes.py
   from typing import Dict, Any

   async def new_node(state: AICompanionState) -> Dict[str, Any]:
       """Process state in new node.

       Args:
           state: Current workflow state

       Returns:
           State updates to apply
       """
       # Implementation
       return {"key": "value"}
   ```

2. Add to graph:

   ```python
   # src/ai_companion/graph/graph.py
   graph.add_node("new_node", new_node)
   graph.add_edge("previous_node", "new_node")
   ```

3. Write integration test:

   ```python
   @pytest.mark.asyncio
   async def test_new_node_integration():
       """Test new node in workflow."""
       initial_state = {"messages": [...]}
       graph = create_workflow_graph().compile()
       result = await graph.ainvoke(initial_state)
       # Verify node executed correctly
   ```
### Adding a New API Endpoint

1. Define endpoint:

   ```python
   # src/ai_companion/interfaces/web/routes/new_route.py
   from fastapi import APIRouter, HTTPException
   from pydantic import BaseModel

   router = APIRouter()

   class RequestModel(BaseModel):
       """Request model with validation."""
       field: str

   @router.post("/api/new-endpoint")
   async def new_endpoint(request: RequestModel):
       """Handle new endpoint request.

       Args:
           request: Validated request data

       Returns:
           Response data
       """
       # Implementation
       return {"result": "success"}
   ```

2. Add to app:

   ```python
   # src/ai_companion/interfaces/web/app.py
   from .routes.new_route import router as new_router

   app.include_router(new_router)
   ```

3. Write API test:

   ```python
   def test_new_endpoint(client):
       """Test new API endpoint."""
       response = client.post("/api/new-endpoint", json={"field": "value"})
       assert response.status_code == 200
       assert response.json()["result"] == "success"
   ```
## Getting Help

- Questions: open a GitHub Discussion
- Bugs: Open a GitHub Issue
- Security: Email security@example.com (do not open public issue)
- Chat: Join our Discord/Slack (if available)
## License

By contributing, you agree that your contributions will be licensed under the same license as the project (see the LICENSE file).
Thank you for contributing to Rose! Your efforts help make this project better for everyone.