This Python project implements a service that processes heart rate data, validates it, generates alerts based on thresholds, and logs events appropriately. The service is defined using Protocol Buffers and provides four main endpoints.
- Submit Heart Rate: Submit a single heart rate measurement
- Stream Heart Rate: Stream heart rate measurements and receive immediate feedback
- Get Heart Rate Status: Get the current status of heart rate monitoring
- Calculate Exercise Zones: Analyze heart rate data to calculate time spent in different exercise zones
```
.
├── src/                        # Main application code
│   ├── server.py               # gRPC server implementation
│   ├── client.py               # gRPC client implementation
│   ├── handlers/               # Request handlers
│   └── utils/                  # Utility functions
│       ├── data_store.py       # Redis data storage
│       ├── logger.py           # Logging utilities
│       ├── metrics.py          # Prometheus metrics
│       └── validator.py        # Heart rate validation
├── proto/                      # Protocol Buffers definitions
│   └── heartrate_service.proto
├── tests/                      # Test files
│   ├── unit/                   # Unit tests
│   ├── integration/            # Integration tests
│   └── perf/                   # Performance tests
├── scripts/                    # Helper scripts
│   ├── build.sh                # Generate Proto files
│   ├── setup.sh                # Install dependencies
│   ├── run.sh                  # Run server or client
│   ├── test.sh                 # Run tests
│   ├── check-ports.sh          # Port availability checker
│   └── validate.sh             # Complete validation workflow
├── Dockerfile                  # Docker image definition
├── docker-compose.yml          # Multi-container configuration
├── Makefile                    # Convenient commands
└── requirements.txt            # Python dependencies
```
- Create a virtual environment:

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Generate Protocol Buffers files:

  ```bash
  ./scripts/build.sh
  ```

- Install Redis (required for data storage):

  ```bash
  # On macOS with Homebrew
  brew install redis
  brew services start redis

  # On Ubuntu/Debian
  sudo apt-get install redis-server
  sudo systemctl start redis-server
  ```
- Run the server:

  ```bash
  ./scripts/run.sh server
  # Or directly with Python:
  python -m src.server
  ```

- Run the client (in another terminal):

  ```bash
  ./scripts/run.sh client
  # Or directly with Python:
  python -m src.client
  ```

- Run tests:

  ```bash
  ./scripts/test.sh
  # Or directly with Python:
  python -m unittest discover tests
  ```

- Note: Ensure you have `ghz` installed on your machine for load testing. On macOS, you can install it with Homebrew: `brew install ghz`
This project can be run using Docker and Docker Compose, which simplifies setup and ensures consistent environments.
- Build the Docker images:

  ```bash
  make build
  ```

- Start the service:

  ```bash
  make up
  ```

  This starts both the Redis server and the Heart Rate Monitoring service.

- Run the client:

  ```bash
  make client
  ```

- Run tests:

  ```bash
  make test
  ```

- Stop the service:

  ```bash
  make down
  ```
To ensure code quality, the project includes a build-and-test workflow:
- Build and automatically run tests:

  ```bash
  make build-and-test
  ```

- Run complete Docker validation (recommended before commits):

  ```bash
  make validate
  ```

  This performs a comprehensive validation of your Docker setup, including:
  - Building fresh images
  - Checking for port conflicts
  - Running service health checks
  - Running all unit and integration tests

- View logs:

  ```bash
  make logs
  ```

- View logs for a specific service:

  ```bash
  make logs-service   # Heart rate service logs
  make logs-redis     # Redis logs
  ```

- Restart services:

  ```bash
  make restart
  ```

- Clean up resources:

  ```bash
  make clean
  ```

- Check port availability:

  ```bash
  make ports
  ```

- Run specific test suites:

  ```bash
  make test-unit          # Run only unit tests
  make test-integration   # Run only integration tests
  make test-performance   # Run performance tests
  ```
The service uses the following ports by default:
- 50051: gRPC server
- 8080: Prometheus metrics
- 6379: Redis
The application automatically handles port conflicts when you run `make up` or `make build-and-test`. If any required ports are in use, you'll be presented with interactive options:
- Free the ports - This will attempt to kill processes using the conflicted ports
- Use alternative ports - This will automatically switch to alternative ports (50052, 8081, 6380)
- Cancel operation - This will abort the startup process
You can also specify custom ports directly by setting environment variables:

```bash
GRPC_PORT=50060 METRICS_PORT=8090 REDIS_PORT=6390 make up
```

The service uses a dedicated validator module (`src/utils/validator.py`) to validate heart rate measurements:
- Valid heart rate range: 40-180 BPM
- Normal heart rate range: 50-150 BPM
- Low heart rate alert: 40-49 BPM
- High heart rate alert: 151-180 BPM
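As an illustration, the thresholds above could be expressed roughly like this. This is a sketch of the idea, not the actual code in `src/utils/validator.py`; the function name is hypothetical:

```python
def validate_heart_rate(bpm: int) -> str:
    """Classify a measurement against the documented thresholds."""
    if bpm < 40 or bpm > 180:
        return "invalid"   # outside the valid 40-180 BPM range
    if bpm < 50:
        return "low"       # 40-49 BPM triggers a low heart rate alert
    if bpm > 150:
        return "high"      # 151-180 BPM triggers a high heart rate alert
    return "normal"        # 50-150 BPM

print(validate_heart_rate(72))  # normal
```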
Heart rate data is stored in Redis, providing:
- Extremely fast in-memory operations
- Built-in time-series capabilities
- Support for atomic operations
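One common way to get time-series behavior out of Redis is a sorted set scored by timestamp. The sketch below illustrates that pattern with a hypothetical `hr:<user>` key scheme (not necessarily the schema `src/utils/data_store.py` actually uses), demonstrated against a minimal in-memory stand-in so it runs without a server:

```python
import json
import time

def record_heart_rate(client, user_id, bpm, ts=None):
    """Append one measurement to a per-user sorted set.

    Scoring by UNIX timestamp makes time-range queries cheap
    (ZRANGEBYSCORE). `client` can be any object with a zadd()
    method, e.g. redis.Redis.
    """
    ts = time.time() if ts is None else ts
    member = json.dumps({"bpm": bpm, "ts": ts})
    client.zadd(f"hr:{user_id}", {member: ts})
    return member

class FakeRedis:
    """Tiny in-memory stand-in for redis.Redis, for demonstration only."""
    def __init__(self):
        self.data = {}
    def zadd(self, key, mapping):
        self.data.setdefault(key, {}).update(mapping)

fake = FakeRedis()
record_heart_rate(fake, "user1", 72, ts=1_700_000_000)
```

With a real client, the same call works unchanged against `redis.Redis(host="localhost", port=6379)`.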
The service exposes metrics via Prometheus for:
- Request count and latency
- Error rates
- Redis operation metrics
- Alert frequency
The service provides the following gRPC endpoints:
- `SubmitHeartRate(HeartRateRequest) returns (HeartRateResponse)`
- `StreamHeartRate(stream HeartRateRequest) returns (stream HeartRateResponse)`
- `GetHeartRateStatus(StatusRequest) returns (StatusResponse)`
- `CalculateExerciseZones(ExerciseZoneRequest) returns (ExerciseZoneResponse)`
For detailed API documentation, see the Proto file: proto/heartrate_service.proto
During runtime, the application dynamically creates the following directories:
- `generated/`: Stores files generated from the Protocol Buffers definitions. These files are essential for gRPC communication and are recreated whenever the Proto files are updated.
- `log/`: Contains the `server.log` file, which records significant events with a log level of `WARN` or higher. The log file is managed with a rotation policy: a new file is created every 7 days to maintain readability and manage disk space efficiently.
Both directories are automatically created and managed by the application, requiring no manual intervention.
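The rotation policy described above maps naturally onto the standard library's `TimedRotatingFileHandler`. A minimal sketch of such a setup (the logger name, format, and backup count here are assumptions, not the project's actual configuration):

```python
import logging
import logging.handlers
import os

def make_logger(log_dir="log"):
    """WARN-level logger writing to log/server.log, rotating every 7 days."""
    os.makedirs(log_dir, exist_ok=True)  # directory created automatically
    handler = logging.handlers.TimedRotatingFileHandler(
        os.path.join(log_dir, "server.log"),
        when="D", interval=7,  # start a new file every 7 days
        backupCount=4,         # keep a bounded number of old files
    )
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger = logging.getLogger("heartrate")
    logger.setLevel(logging.WARNING)  # only WARN and above are recorded
    logger.addHandler(handler)
    return logger

logger = make_logger()
logger.warning("heart rate above threshold")  # recorded
logger.info("routine sample")                 # ignored (below WARN)
```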
- Modular Architecture: The codebase follows a modular design with separate components for protocol definitions, server implementation, request handlers, and utilities.
- Redis for Data Storage: Selected for its speed and suitability for time-series data. The Redis client is instantiated lazily as a singleton.
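A lazy singleton here means the client is created on first use and shared afterwards. A generic sketch of the pattern (the names are hypothetical, not the actual `src/utils/data_store.py` code):

```python
class RedisClientProvider:
    """Create the client on the first call to get(), then reuse it."""
    _instance = None

    @classmethod
    def get(cls, factory):
        # `factory` stands in for e.g. `lambda: redis.Redis()`.
        if cls._instance is None:  # runs only on the first call
            cls._instance = factory()
        return cls._instance

# However many times get() is called, the factory runs only once:
client_a = RedisClientProvider.get(object)
client_b = RedisClientProvider.get(object)
```

This avoids opening a connection at import time and guarantees all handlers share one client.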
- Prometheus for Metrics: Enables real-time visibility into the system's performance and health.
- Centralized Validation: A dedicated validator module ensures consistent data validation across all endpoints.
- Docker and Docker Compose: Used for environment consistency and ease of deployment across different platforms.
- gRPC Communication: Provides a high-performance, type-safe communication mechanism supporting both unary and streaming RPCs.
- Pros
  - Performance and Scalability: gRPC offers high throughput and low latency for both unary and streaming calls. Redis provides fast data operations, making the system responsive and scalable.
  - Modularity: The project is organized into distinct modules (handlers, utilities, tests, etc.), making it easy to maintain, extend, and test individual components.
  - Observability: Prometheus metrics and robust logging provide excellent insight into system performance and enable proactive troubleshooting.
  - Containerization: Docker and Docker Compose simplify deployment and ensure consistent environments across development, testing, and production.
- Cons
  - Complexity: Using gRPC, Redis, and Prometheus together introduces a learning curve, especially for teams used to traditional REST APIs.
  - Operational Overhead: Running additional infrastructure components (e.g., Redis, Prometheus) requires monitoring and maintenance.
  - Resource Consumption: While each component is optimized for performance, integrating multiple services (especially in containerized environments) can increase resource usage compared to a simpler monolithic approach.
- Unit Tests: Located in `tests/unit`. Each RPC handler has a corresponding test file; mocks are used to isolate business logic.
- Integration Tests: Located in `tests/integration`. Spins up a local gRPC server in-process and tests end-to-end functionality.
- Performance/Load Tests: Located in `tests/perf`. Includes benchmarks for unary RPCs and load tests for streaming endpoints.
- Continuous Testing: The `scripts/test.sh` script runs all tests sequentially and is integrated into the CI/CD pipeline via Docker Compose.
- Authentication and Authorization: Add user authentication and authorization for API security
- CI/CD Pipeline: Expand the existing CI/CD pipeline to incorporate advanced automated testing, streamlined deployment workflows, and support for multiple environments. The current pipeline, defined in `.github/workflows/ci.yml`, performs basic validation of source code pushed to GitHub. Enhancements could include integration tests, performance benchmarks, and deployment to staging or production environments.
- Testing Strategy: Strengthen the testing framework by increasing unit and integration test coverage. Incorporate diverse scenarios, including unary and streaming gRPC calls, and introduce load and stress testing to evaluate system performance under high demand.
- Data Retention and Log Rotation Policies: Introduce configurable data retention policies for heart rate measurements, allowing users to define retention periods based on their needs. Additionally, implement log rotation based on file size thresholds to ensure efficient disk space management and prevent excessive log growth.
- Dashboard: Develop an interactive web dashboard for visualizing heart rate data, leveraging tools like Grafana for seamless integration with Prometheus metrics.
- Memory Profiling: Integrate memory profiling tools, such as Python's built-in `tracemalloc` or the third-party `memory_profiler` package, to identify and address memory bottlenecks, ensuring optimal resource utilization.
- Mobile Client: Build a cross-platform mobile application to enable real-time heart rate monitoring and alerts on the go.
- The typical BPM range for adults is 40–180; values outside this range are considered invalid.
- Redis is used as the primary data store, and it is expected to be running (locally or in Docker).
- The calculation formulas used in `CalculateExerciseZonesHandler` are simplified and may differ from standard exercise zone calculation methods (e.g., the Karvonen formula or percentage-based methods using max HR). Future iterations might align these calculations more closely with industry standards.
- The service handles up to 10 requests per second for unary calls and supports at least 5 concurrent streaming clients (tested and verified with `tests/perf`).
- The logging configuration writes to a file with rotation, and the generated directory for protobuf files is recreated whenever the definitions change.
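For reference, the Karvonen formula mentioned above derives a target heart rate from heart rate reserve. This sketch illustrates that standard method only; it is not the handler's actual simplified calculation:

```python
def karvonen_target(age: int, resting_hr: int, intensity: float) -> float:
    """Target HR = resting + intensity * (max_hr - resting),
    where max HR is commonly estimated as 220 - age."""
    max_hr = 220 - age
    return resting_hr + intensity * (max_hr - resting_hr)

# e.g. a 30-year-old with resting HR 60 at 70% intensity:
print(karvonen_target(30, 60, 0.7))  # → 151.0
```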
This project follows a test-driven development approach:
- Write tests for new features or bug fixes
- Implement the changes
- Run `make build-and-test` to ensure your changes work and don't break existing functionality
- Run `make validate` to perform a complete validation of your Docker setup
- Commit your changes only if all validations pass
