# Contributing to BotForge RAG

Thank you for your interest in contributing to BotForge RAG! This document provides guidelines and information for contributors.

## Table of Contents

- Code of Conduct
- Development Setup
- Development Workflow
- Code Standards
- Testing Guidelines
- Documentation
- Pull Request Process
- Issue Reporting
## Code of Conduct

We are committed to providing a welcoming and inclusive environment for all contributors. Please read and follow our Code of Conduct:

- Use welcoming and inclusive language
- Be respectful of differing viewpoints and experiences
- Gracefully accept constructive criticism
- Focus on what is best for the community
- Show empathy towards other community members
## Development Setup

### Prerequisites

- Python 3.9+ (3.11+ recommended)
- PostgreSQL 15+
- Redis 7+
- Git
- uv (recommended) or pip
### Setup Steps

1. **Clone the repository**

   ```bash
   git clone https://github.com/your-org/botforge-rag.git
   cd botforge-rag
   ```

2. **Install dependencies**

   ```bash
   # Using uv (recommended)
   uv sync --all-extras

   # Or using pip
   pip install -e ".[dev,test]"
   ```

3. **Set up pre-commit hooks**

   ```bash
   pre-commit install
   ```

4. **Configure environment**

   ```bash
   cp .env.example .env
   # Edit .env with your local configuration
   ```

5. **Initialize database**

   ```bash
   # Start PostgreSQL and Redis services
   # Then run database initialization
   python scripts/init_db.py
   ```

6. **Run tests**

   ```bash
   pytest tests/ -v
   ```

7. **Start development server**

   ```bash
   PYTHONPATH=./src uvicorn botforge.main:app --reload --port 8000
   ```
### Development Tools

- Code Formatting: Black
- Import Sorting: isort
- Linting: flake8, pylint
- Type Checking: mypy
- Testing: pytest
- Pre-commit: Automated code quality checks
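As a reference, a minimal `.pre-commit-config.yaml` wiring these tools into the pre-commit hook might look like the sketch below. The `rev` values are placeholders, not the project's pinned versions — check each hook repository for current tags:

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2  # placeholder; pin to the project's chosen version
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2
    hooks:
      - id: isort
  - repo: https://github.com/PyCQA/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0
    hooks:
      - id: mypy
```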
## Development Workflow

### Branch Strategy

- `main`: Production-ready code
- `develop`: Integration branch for features
- `feature/*`: New features and enhancements
- `bugfix/*`: Bug fixes
- `hotfix/*`: Critical production fixes
### Feature Development Process

1. **Create feature branch**

   ```bash
   git checkout -b feature/your-feature-name
   ```

2. **Make changes with clear commits**

   ```bash
   git add .
   git commit -m "feat: add new MCP integration feature"
   ```

3. **Keep branch updated**

   ```bash
   git fetch origin
   git rebase origin/develop
   ```

4. **Push and create pull request**

   ```bash
   git push origin feature/your-feature-name
   ```
### Commit Message Format

Follow Conventional Commits:

```
<type>[optional scope]: <description>

[optional body]

[optional footer(s)]
```

Types:

- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation changes
- `style`: Code style changes
- `refactor`: Code refactoring
- `test`: Adding or updating tests
- `chore`: Maintenance tasks
Examples:

```
feat(mcp): add external tool registration endpoint
fix(vector): resolve embedding dimension mismatch
docs(api): update endpoint documentation
test(services): add unit tests for intent detection
```
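A complete message using the optional body and footer might look like this (the content and issue number are illustrative):

```
feat(mcp): add external tool registration endpoint

Allow external MCP servers to register their tools at runtime so bots
can discover them without a restart.

Closes #123
```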
## Code Standards

### Python Style

- Formatter: Black (line length: 88)
- Import sorting: isort
- Docstring style: Google format
- Type hints: Required for all function signatures

### Code Quality Requirements

- Async/Await: Use async/await for all I/O operations
- Error Handling: Comprehensive error handling with custom exceptions
- Logging: Use structured logging throughout
- Type Safety: Full type annotations required
- Dependencies: Minimize external dependencies, justify additions
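To illustrate the error-handling guideline, here is a minimal sketch of a custom exception hierarchy. The `BotForgeError` and `ProcessingError` class names and constructor are illustrative, not the project's actual API:

```python
from typing import Optional


class BotForgeError(Exception):
    """Base class for project-specific errors (illustrative name)."""


class ProcessingError(BotForgeError):
    """Raised when query processing fails (illustrative name)."""

    def __init__(self, message: str, cause: Optional[Exception] = None) -> None:
        super().__init__(message)
        self.cause = cause


# Callers can handle any project-specific failure by catching the base class:
try:
    raise ProcessingError("embedding service unavailable")
except BotForgeError as exc:
    print(f"handled: {exc}")  # prints "handled: embedding service unavailable"
```

A shared base class keeps `except` clauses narrow: callers never need a bare `except Exception` to cover project errors.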
### Example Code Structure

```python
import time
from typing import Any, Dict

from botforge.core.logger import log


async def process_query(
    user_id: str,
    bot_id: str,
    query: str,
    model: str = "gpt-3.5-turbo",
) -> Dict[str, Any]:
    """Process user query with intent detection and routing.

    Args:
        user_id: Unique identifier for the user
        bot_id: Unique identifier for the bot
        query: User's input query
        model: OpenAI model to use for processing

    Returns:
        Dictionary containing response and metadata

    Raises:
        ValueError: If required parameters are missing
        ProcessingError: If query processing fails
    """
    try:
        log.info(f"Processing query for user {user_id}, bot {bot_id}")
        # Implementation here
        result = await some_async_operation()
        return {
            "response": result,
            "metadata": {"model": model, "timestamp": time.time()},
        }
    except Exception as e:
        log.error(f"Query processing failed: {e}")
        # ProcessingError is a project-specific exception (import not shown here)
        raise ProcessingError(f"Failed to process query: {e}") from e
```

## Testing Guidelines

### Test Structure

```
tests/
├── unit/          # Unit tests for individual components
├── integration/   # Integration tests for component interaction
├── e2e/           # End-to-end tests for complete workflows
├── fixtures/      # Test data and fixtures
└── conftest.py    # Pytest configuration
```
### Testing Requirements

- Unit Tests: All new functions and methods
- Integration Tests: API endpoints and service interactions
- Test Coverage: Minimum 80% code coverage
- Test Data: Use fixtures for consistent test data
- Async Testing: Use pytest-asyncio for async functions
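One way to enforce the coverage threshold and async defaults automatically is through pytest configuration. This is a sketch assuming `pytest-cov` and `pytest-asyncio` are installed; whether it lives in `pyproject.toml` (as here) or `pytest.ini` depends on the repository's setup:

```toml
[tool.pytest.ini_options]
addopts = "--cov=src --cov-report=term-missing --cov-fail-under=80"
asyncio_mode = "auto"
testpaths = ["tests"]
```

With `--cov-fail-under=80`, the test run fails if total coverage drops below the 80% minimum.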
### Example Test

```python
import pytest

from botforge.services.vector_query import VectorQueryService


class TestVectorQueryService:
    @pytest.fixture
    def service(self):
        return VectorQueryService()

    @pytest.mark.asyncio
    async def test_query_processing(self, service):
        """Test basic query processing functionality."""
        # Arrange
        query = "What is machine learning?"
        expected_intent = "information_retrieval"

        # Act
        result = await service.process_query(
            user_id="test-user",
            bot_id="test-bot",
            query=query,
        )

        # Assert
        assert result["intent"] == expected_intent
        assert "response" in result
        assert len(result["response"]) > 0

    @pytest.mark.asyncio
    async def test_error_handling(self, service):
        """Test error handling for invalid inputs."""
        with pytest.raises(ValueError):
            await service.process_query("", "test-bot", "test query")
```

### Running Tests

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-report=html

# Run specific test file
pytest tests/unit/test_vector_query.py

# Run tests matching pattern
pytest -k "test_mcp"

# Run tests with verbose output
pytest -v -s
```

## Documentation

### Documentation Requirements

- API Documentation: OpenAPI/Swagger specifications
- Code Documentation: Comprehensive docstrings
- Architecture Documentation: High-level system design
- User Guides: Step-by-step integration guides
### Docstring Style

Use Google-style docstrings:

```python
import math
from typing import List


def calculate_similarity(vector1: List[float], vector2: List[float]) -> float:
    """Calculate cosine similarity between two vectors.

    Args:
        vector1: First vector as list of floats
        vector2: Second vector as list of floats

    Returns:
        Cosine similarity score between -1 and 1

    Raises:
        ValueError: If vectors have different dimensions

    Example:
        >>> calculate_similarity([1, 0, 0], [0, 1, 0])
        0.0
    """
    if len(vector1) != len(vector2):
        raise ValueError("Vectors must have the same dimensions")
    dot = sum(a * b for a, b in zip(vector1, vector2))
    norm1 = math.sqrt(sum(a * a for a in vector1))
    norm2 = math.sqrt(sum(b * b for b in vector2))
    return dot / (norm1 * norm2)
```

### Documentation Updates

- Update API documentation for endpoint changes
- Add examples for new features
- Update README for significant changes
- Include migration guides for breaking changes
## Pull Request Process

### Before Submitting

1. **Code Quality**: Ensure all checks pass

   ```bash
   pre-commit run --all-files
   ```

2. **Tests**: Verify all tests pass

   ```bash
   pytest tests/ -v
   ```

3. **Documentation**: Update relevant documentation

4. **Changelog**: Add entry to CHANGELOG.md
### Pull Request Template

```markdown
## Description
Brief description of changes and motivation.

## Type of Change
- [ ] Bug fix (non-breaking change that fixes an issue)
- [ ] New feature (non-breaking change that adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update

## Testing
- [ ] Unit tests added/updated
- [ ] Integration tests added/updated
- [ ] Manual testing completed

## Checklist
- [ ] Code follows style guidelines
- [ ] Self-review completed
- [ ] Documentation updated
- [ ] Tests added and passing
- [ ] No new warnings introduced
```

### Review Process

- Automated Checks: All CI checks must pass
- Code Review: At least one maintainer approval required
- Testing: New features require comprehensive tests
- Documentation: API changes require documentation updates
## Issue Reporting

### Bug Reports

Include the following information:
- Environment: OS, Python version, dependency versions
- Steps to Reproduce: Detailed reproduction steps
- Expected Behavior: What should happen
- Actual Behavior: What actually happens
- Error Messages: Full error messages and stack traces
- Additional Context: Any other relevant information
### Feature Requests

Include the following information:
- Problem Description: What problem does this solve?
- Proposed Solution: How should it work?
- Alternatives: Other solutions considered
- Implementation Ideas: Technical approach if you have ideas
### Issue Labels

- `bug`: Something isn't working
- `enhancement`: New feature or request
- `documentation`: Improvements or additions to documentation
- `good first issue`: Good for newcomers
- `help wanted`: Extra attention is needed
- `priority-high`: High priority issue
## Community

### Getting Help

- GitHub Discussions: For questions and general discussion
- GitHub Issues: For bug reports and feature requests
- Documentation: Check existing documentation first
- Code Review: Participate in code reviews
### Recognition

Contributors will be recognized in:
- README.md contributors section
- Release notes for significant contributions
- Annual contributor recognition
### Contact

- Maintainers: See MAINTAINERS.md
- Security Issues: security@botforge.ai
- General Questions: discussions@botforge.ai
Thank you for contributing to BotForge RAG! 🚀