# Contributing to the Tokenomics Platform

Thank you for your interest in contributing to the Tokenomics Platform! This document provides guidelines and instructions for contributing.
## Table of Contents

- Code of Conduct
- Getting Started
- Development Setup
- Development Workflow
- Code Style
- Testing
- Documentation
- Pull Request Process
- Project Structure
## Code of Conduct

This project adheres to a code of conduct that all contributors are expected to follow. Please be respectful and constructive in all interactions.
## Getting Started

1. Fork the repository on GitHub
2. Clone your fork locally:

   ```bash
   git clone https://github.com/yourusername/tokenomics-platform.git
   cd tokenomics-platform
   ```

3. Add the upstream repository:

   ```bash
   git remote add upstream https://github.com/originalowner/tokenomics-platform.git
   ```
## Development Setup

1. Create and activate a virtual environment:

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   pip install -e .
   pip install pytest pytest-cov black flake8 mypy
   ```

3. Set up environment variables:

   ```bash
   cp env.template .env
   # Edit .env with your API keys for testing
   ```

## Development Workflow

1. Create a feature branch:

   ```bash
   git checkout -b feature/your-feature-name
   ```

2. Make your changes:
   - Write clean, readable code
   - Follow the code style guidelines
   - Add tests for new functionality
   - Update documentation as needed
3. Run the tests:

   ```bash
   # Run unit tests
   pytest tests/unit/ -v

   # Run integration tests
   pytest tests/integration/ -v

   # Run all tests
   pytest tests/ -v
   ```

4. Check code style:

   ```bash
   black --check tokenomics/
   flake8 tokenomics/
   ```

5. Commit your changes:

   ```bash
   git add .
   git commit -m "Add: Description of your changes"
   ```

**Commit Message Guidelines:**
- Use present tense ("Add feature" not "Added feature")
- Use imperative mood ("Fix bug" not "Fixes bug")
- Keep first line under 50 characters
- Add detailed description if needed
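A commit message following these guidelines might look like this (the change described below is purely illustrative, not a real commit in this repository):

```
Fix: Reject empty queries in orchestrator

Empty query strings previously produced confusing downstream
errors; raise ValueError during validation instead.
```

Note the imperative subject line under 50 characters, followed by a blank line and a wrapped body explaining the "why".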
**Commit Types:**

- `Add:` - New feature
- `Fix:` - Bug fix
- `Update:` - Update existing feature
- `Refactor:` - Code refactoring
- `Docs:` - Documentation changes
- `Test:` - Test additions/changes
- `Style:` - Code style changes
6. Sync with upstream and push your branch:

   ```bash
   git fetch upstream
   git rebase upstream/main
   git push origin feature/your-feature-name
   ```

## Code Style

We follow PEP 8 with some modifications:
- Line Length: 100 characters (soft limit)
- Indentation: 4 spaces
- Quotes: Double quotes for strings
- Imports: Sorted and grouped (stdlib, third-party, local)
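The import-grouping rule above would look like this at the top of a module (the third-party and local imports shown in comments are illustrative; the `tokenomics.platform` path is hypothetical):

```python
# Standard library imports come first, alphabetized
import json
import os
from typing import Any, Dict, Optional

# Third-party imports next, e.g.:
# import pytest

# Local imports last (hypothetical module path):
# from tokenomics.platform import TokenomicsPlatform
```

Tools such as isort can enforce this grouping automatically if you prefer not to sort by hand.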
### Formatting

We use Black for code formatting:

```bash
black tokenomics/
black tests/
```

### Linting

We use flake8 for linting:

```bash
flake8 tokenomics/
```

### Type Hints

We encourage type hints for better code clarity:

```python
def query(self, query: str, token_budget: Optional[int] = None) -> Dict[str, Any]:
    ...
```

## Testing

### Test Categories

- Unit Tests: Test individual functions/classes in isolation
- Integration Tests: Test component interactions
- Diagnostic Tests: Comprehensive platform validation
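Unit tests of pure helper functions need no platform setup or API keys; a minimal, runnable Arrange-Act-Assert sketch (the `count_tokens` helper below is hypothetical, not part of the actual platform API):

```python
def count_tokens(text: str) -> int:
    """Hypothetical helper: naive whitespace token count."""
    return len(text.split())


def test_count_tokens_counts_words():
    """Unit test in isolation: no platform or network needed."""
    # Arrange
    text = "hello tokenomics world"

    # Act
    result = count_tokens(text)

    # Assert
    assert result == 3


test_count_tokens_counts_words()
```

Tests that do need the platform follow the same Arrange-Act-Assert shape, as shown in the next section.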
### Writing Tests

```python
def test_feature_name():
    """Test description."""
    # Arrange
    platform = TokenomicsPlatform(config)

    # Act
    result = platform.query("test query")

    # Assert
    assert result["response"] is not None
    assert result["tokens_used"] > 0
```

### Running Tests

```bash
# Run all tests
pytest tests/ -v

# Run specific test file
pytest tests/unit/test_memory.py -v

# Run with coverage
pytest tests/ --cov=tokenomics --cov-report=html

# Run diagnostic tests
python tests/diagnostic/extensive_diagnostic_test.py
```

### Test Coverage

We aim for >80% test coverage. Check coverage with:
```bash
pytest tests/ --cov=tokenomics --cov-report=term-missing
```

## Documentation

### Code Documentation

- Docstrings: Use Google-style docstrings
- Comments: Explain "why" not "what"
- Type Hints: Include type hints for function signatures
Example:

```python
def query(
    self,
    query: str,
    token_budget: Optional[int] = None,
    use_cache: bool = True,
) -> Dict[str, Any]:
    """
    Process a query through the Tokenomics platform.

    Args:
        query: The user query string
        token_budget: Optional token budget limit
        use_cache: Whether to use caching

    Returns:
        Dictionary containing response, tokens_used, and metadata

    Raises:
        ValueError: If query is empty
    """
    ...
```

### Documentation Structure

- Architecture Docs: `docs/architecture/`
- Guides: `docs/guides/`
- Testing Docs: `docs/testing/`
- Results: `docs/results/`
### Updating Documentation

When adding new features:
- Update relevant documentation
- Add examples if applicable
- Update architecture diagrams if needed
## Pull Request Process

### Before Submitting

1. Ensure all tests pass:

   ```bash
   pytest tests/ -v
   ```

2. Check code style:

   ```bash
   black --check tokenomics/
   flake8 tokenomics/
   ```

3. Update documentation if needed
4. Add changelog entry if applicable
### PR Template

```markdown
## Description

Brief description of changes

## Type of Change

- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update

## Testing

- [ ] Unit tests added/updated
- [ ] Integration tests added/updated
- [ ] All tests pass

## Checklist

- [ ] Code follows style guidelines
- [ ] Self-review completed
- [ ] Comments added for complex code
- [ ] Documentation updated
- [ ] No new warnings generated
```

### Review Process

- Automated Checks: CI will run tests and linting
- Code Review: At least one maintainer will review
- Feedback: Address any feedback or requested changes
- Approval: Once approved, your PR will be merged
## Project Structure

```
tokenomics-platform/
├── tokenomics/          # Core package
│   ├── memory/          # Memory layer
│   ├── orchestrator/    # Token orchestrator
│   ├── bandit/          # Bandit optimizer
│   └── ...
├── tests/               # Test suite
│   ├── unit/            # Unit tests
│   ├── integration/     # Integration tests
│   ├── diagnostic/      # Diagnostic tests
│   └── benchmarks/      # Benchmark tests
├── docs/                # Documentation
├── examples/            # Usage examples
└── scripts/             # Utility scripts
```
## Areas for Contribution

- Performance optimizations
- Additional LLM provider support
- Enhanced compression techniques
- Quality scoring improvements
- Documentation improvements
- Additional test coverage
- Benchmark improvements
- Configuration enhancements
- Monitoring and observability
- Code refactoring
- Style improvements
- Example additions
## Getting Help

- Questions: Open a GitHub Discussion
- Bugs: Open a GitHub Issue
- Feature Requests: Open a GitHub Issue
- Security Issues: Email security@example.com
## Recognition

Contributors will be:
- Listed in CONTRIBUTORS.md
- Credited in release notes
- Acknowledged in project documentation
Thank you for contributing to the Tokenomics Platform! 🎉