⚠️ Research Project Notice: This is an experimental research project and is NOT intended for production use. It serves as a testbed for exploring advanced AI agent architectures and development methodologies.
This project is a comprehensive research initiative exploring the intersection of AI agent frameworks, specification-driven development, and modern development tools. The AWS AI Crew system serves as a practical implementation for investigating various aspects of AI agentic applications.
- Multi-agent orchestration using CrewAI's sequential and hierarchical processes
- Agent collaboration patterns and workflow optimization
- Token efficiency optimization through context passing and prompt caching
- Agent specialization for domain-specific expertise (AWS architecture, security, cost optimization)
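The sequential, additive pattern these bullets describe can be sketched in plain Python (a minimal illustration of the control flow, not the CrewAI API; the agent roles and helper names are invented for this example):

```python
# Minimal sketch of sequential agents with additive context passing.
# Each "specialist" receives the accumulated context and contributes only
# the aspect it covers that is still missing, rather than rewriting.

def architect(context: dict) -> dict:
    context.setdefault("architecture", "DynamoDB table behind API Gateway + Lambda.")
    return context

def security_specialist(context: dict) -> dict:
    # Additive: fills in the security aspect only if it is absent.
    context.setdefault("security", "Scope the Lambda IAM role to the single table ARN.")
    return context

def cost_specialist(context: dict) -> dict:
    context.setdefault("cost", "Prefer on-demand billing for spiky, low-volume workloads.")
    return context

def run_sequential(topic: str) -> dict:
    context = {"topic": topic}
    for agent in (architect, security_specialist, cost_specialist):
        context = agent(context)  # output of one task is input to the next
    return context

result = run_sequential("Add a DynamoDB table to a CDK stack")
print(sorted(result))  # → ['architecture', 'cost', 'security', 'topic']
```

Because each specialist only appends what is missing, no text is regenerated downstream, which is the token-efficiency property the project investigates.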
- Requirements engineering using EARS (Easy Approach to Requirements Syntax) patterns
- Design documentation with comprehensive architectural specifications
- Task-driven implementation with incremental development workflows
- Iterative refinement of specifications through user feedback loops
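For reference, EARS constrains each requirement to a small set of sentence templates (ubiquitous, event-driven, state-driven, unwanted-behaviour, optional). A hypothetical event-driven requirement for this system might read:

```text
WHEN the user submits a consultation topic,
the system SHALL route the request to the AWS architect agent
and SHALL return a consolidated recommendation from a single crew run.
```

The actual requirements live in `.kiro/specs/aws-ai-crew/requirements.md`; the wording above is illustrative only.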
- Development-time MCP usage within Kiro IDE for enhanced development experience
- Runtime MCP integration enabling agents to access real-time AWS documentation
- Context management and documentation retrieval optimization
- MCP server configuration and performance tuning
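An MCP server entry in `.kiro/settings/mcp.json` might look roughly like the following (the server name, package, and tool names are illustrative assumptions; check the repository's actual configuration):

```json
{
  "mcpServers": {
    "aws-docs": {
      "command": "uvx",
      "args": ["awslabs.aws-documentation-mcp-server@latest"],
      "disabled": false,
      "autoApprove": ["search_documentation", "read_documentation"]
    }
  }
}
```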
- Prompt engineering and context optimization strategies
- Cost reduction techniques through caching and efficient token usage
- Quality assurance methodologies for AI-generated responses
- Performance monitoring and metrics collection
```
.
├── .kiro/                         # Kiro IDE configuration and specifications
│   ├── settings/mcp.json          # MCP server configuration
│   ├── specs/aws-ai-crew/         # Feature specifications (SDD artifacts)
│   │   ├── requirements.md        # EARS-compliant requirements
│   │   ├── design.md              # Architectural design document
│   │   └── tasks.md               # Implementation task breakdown
│   └── steering/                  # AI assistant guidance rules
├── aws_ai_crew/                   # CrewAI project implementation
│   ├── src/aws_ai_crew/           # Main application code
│   │   ├── config/                # Agent and task configurations
│   │   │   ├── agents.yaml        # Static agent definitions
│   │   │   └── tasks.yaml         # Task workflows with context passing
│   │   ├── crew.py                # CrewAI orchestration logic
│   │   └── main.py                # CLI entry point
│   └── tests/                     # Test suite
└── README.md                      # This file
```
- Sequential vs Hierarchical: Sequential processes with context passing proved more token-efficient than hierarchical delegation
- Static Agent Configuration: Separating agent definitions from dynamic request context improves reusability and caching
- Additive Specialist Pattern: Specialists adding only missing aspects rather than full rewrites reduces redundancy
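In CrewAI's YAML-based project layout, the static-agent and context-passing findings translate roughly into configuration like the following (a sketch with illustrative field values, not the repository's actual `agents.yaml`/`tasks.yaml`):

```yaml
# agents.yaml -- static, request-independent definitions (cache-friendly)
aws_architect:
  role: AWS Solutions Architect
  goal: Propose a minimal, well-architected solution for the user's topic
  backstory: Experienced in CDK-based infrastructure design.

security_specialist:
  role: AWS Security Specialist
  goal: Add only the security aspects missing from prior answers
  backstory: Focused on IAM least privilege and data protection.

# tasks.yaml -- dynamic request context flows between sequential tasks
architecture_task:
  description: "Answer the consultation topic: {topic}"
  expected_output: A concrete architecture recommendation
  agent: aws_architect

security_review_task:
  description: Add only the security aspects missing from the prior answer.
  expected_output: Security additions, no rewrites
  agent: security_specialist
  context:
    - architecture_task
```

Keeping `agents.yaml` free of request-specific placeholders is what makes the agent definitions reusable across runs.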
- Real-time Documentation Access: Agents can query current AWS documentation during task execution
- Context Document Management: Systematic tracking of consulted sources improves response quality
- Development-time Enhancement: MCP integration in Kiro IDE accelerates specification development
- Python 3.11+
- OpenAI API key
- Kiro IDE (for the SDD workflow)
- uv package manager
```bash
# Clone the repository
git clone <repository-url>
cd aws-agents

# Install dependencies
cd aws_ai_crew
uv sync

# Configure environment
cp .env.example .env
# Edit .env with your OpenAI API key

# Run an AWS consultation
python -m aws_ai_crew.main --topic "How to add DynamoDB table to a CDK stack?" --language "typescript"
```
- Requirements Gathering: EARS-compliant user stories and acceptance criteria
- Design Phase: Comprehensive architectural documentation with diagrams
- Task Breakdown: Incremental implementation tasks with requirement traceability
- Iterative Refinement: User feedback integration and specification updates
- Static Agent Definition: Role-based agent configuration without dynamic context
- Task Context Design: Strategic context passing between sequential tasks
- Optimization Testing: Token usage monitoring and efficiency improvements
- Quality Validation: Response accuracy and completeness verification
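The static-definition step above can be checked mechanically: if each prompt is assembled as a fixed agent prefix plus a request-specific suffix, the prefix is byte-identical across requests and therefore eligible for provider-side prompt caching (a simplified illustration; the prefix text and function are invented, and actual caching behaviour depends on the LLM provider):

```python
# Static agent definition: identical for every request, so a provider
# prompt cache can reuse it. Only the suffix varies per consultation.
AGENT_PREFIX = (
    "You are an AWS Solutions Architect.\n"
    "Goal: propose minimal, well-architected solutions.\n"
)

def build_prompt(topic: str, language: str) -> str:
    # Static definition first, dynamic request context last, so the
    # shared prefix stays byte-identical across requests.
    return AGENT_PREFIX + f"Topic: {topic}\nLanguage: {language}\n"

p1 = build_prompt("Add a DynamoDB table to a CDK stack", "typescript")
p2 = build_prompt("Set up an S3 event trigger", "python")
print(p1.startswith(AGENT_PREFIX) and p2.startswith(AGENT_PREFIX))  # → True
```

Monitoring then reduces to comparing token counts for the varying suffix only, rather than the whole prompt.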
- ✅ Multi-agent workflow optimization for token efficiency
- ✅ MCP integration for real-time documentation access
- ✅ Sequential process architecture with context passing
- ✅ Static agent configuration patterns
- 🔄 Advanced prompt engineering techniques
- 🔄 Quality metrics and evaluation frameworks
- 🔄 Memory optimization strategies
- 🔄 Multi-modal agent capabilities
- 📋 Agent learning and adaptation mechanisms
- 📋 Cross-domain knowledge transfer
- 📋 Automated specification generation
- 📋 Performance benchmarking frameworks
Areas of particular interest:
- Agent Architecture Patterns: Novel approaches to multi-agent coordination
- Optimization Techniques: Methods for improving efficiency and quality
- Evaluation Methodologies: Frameworks for measuring agent performance
- Integration Patterns: New ways to combine AI agents with development tools
This project is purely for research and educational purposes. It is not intended for production use and should not be deployed in production environments without significant additional development, testing, and security review.
This research project is provided as-is for educational and research purposes. Please refer to the LICENSE file for specific terms and conditions.
This project represents ongoing research into AI agentic applications and modern development methodologies. Results and findings are subject to change as research progresses.