This document outlines my approach to and implementation of the backend of an issue management system. It involves building a REST API to manage issue entities, with several key features: CRUD operations, revision tracking, authentication, and comparison functionality.
Before running the application, ensure you have the following installed:
- Docker and Docker Compose
- Node.js (v18 or higher)
- npm (usually comes with Node.js)
- Make - for running Makefile commands
  - macOS: comes pre-installed
  - Linux: install via package manager (e.g., `apt-get install make`)
  - Windows: install via Chocolatey (`choco install make`) or use Git Bash, which includes make
- Husky - for Git hooks (automatically installed when you run `npm install`)
The application uses a Makefile to simplify common operations. You can set up everything with a single command:

```shell
make setup
```

This command will:
- Start the Docker containers (`make up`)
- Run database migrations (`make m-up`)
- Seed the database with initial data (`make db-seed`)
IMPORTANT: The database seeding step is crucial because it creates the client data required by the API. Many endpoints require a valid `X-Client-Id` header, whose value comes from these seeded client records. Make sure the seeding process completes successfully.
After completing this setup, the API will be available at http://localhost:8080.
If you need to run the database seed separately (e.g., if you skipped the setup command or need to reset the data):

```shell
make db-seed
```

Alternatively, you can run each step individually if needed:

```shell
# Start containers
make up

# Run migrations
make m-up

# Seed database
make db-seed
```

The API is documented using Swagger. To access the documentation:
- Ensure the application is running
- Navigate to http://localhost:8080/docs in your web browser
- The Swagger UI provides a complete overview of all endpoints, request/response formats, and allows you to test the API directly from the browser
If you need to modify the application:
- Code Structure: The codebase follows a layered architecture with routes, controllers, services, and models
- TypeScript: The project uses TypeScript for type safety and better developer experience
- Testing: Run tests with `make test` or `npm test`
- Linting: Ensure code quality with `make lint` or fix issues with `make lint-fix`
- Git Hooks: The project uses Husky to run pre-commit hooks that automatically check linting and run tests before each commit. The pre-commit hook enforces a minimum of 85% test coverage; commits will be rejected if coverage falls below this threshold
For a complete development workflow:
- Make your code changes
- Stage your changes with `git add`
- When you commit, Husky will automatically run linting and tests
  - If the pre-commit checks pass, your commit will proceed
  - If they fail, fix the issues and try committing again
- Restart the application: `make rst`
- Verify your changes work as expected
I intentionally use lint-staged with Husky as a best practice to enforce code quality standards automatically during the development workflow. This ensures consistent code style and prevents quality issues from being committed.
Husky is automatically installed when you run `npm install`. If you need to manually set it up:

```shell
npm install
npm run prepare
```

This will install Husky and set up the Git hooks. The pre-commit hook is configured to:
- Run lint-staged to automatically fix and format only the files you've changed (not the entire codebase)
- Run all tests with coverage reporting
- Verify that test coverage is at least 85% (the commit will fail if coverage is below this threshold)
The lint-staged configuration in package.json works as follows:
- When you commit, it automatically runs ESLint with the `--fix` flag on your staged TypeScript and JavaScript files
- It also runs Prettier with the `--write` flag on your staged TypeScript, JavaScript, JSON, and Markdown files
- Both tools automatically fix and format your code, and lint-staged automatically re-stages these changes
- You don't need to run `git add` again after the tools fix your code
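For illustration, a lint-staged block matching this description might look like the following in `package.json`. This is a sketch; the project's actual glob patterns and commands may differ:

```json
{
  "lint-staged": {
    "*.{ts,js}": ["eslint --fix"],
    "*.{ts,js,json,md}": ["prettier --write"]
  }
}
```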
I designed the application using a layered architecture pattern that separates concerns and promotes maintainability:
┌─────────────────┐
│ Routes │ API endpoints and request handling
└────────┬────────┘
│
▼
┌─────────────────┐
│ Controllers │ Request validation and response formatting
└────────┬────────┘
│
▼
┌─────────────────┐
│ Services │ Business logic and data manipulation
└────────┬────────┘
│
▼
┌─────────────────┐
│ Models │ Data access and database interaction
└─────────────────┘
- Routes: Define API endpoints and apply middleware
- Controllers: Handle request validation and response formatting
- Services: Implement business logic and data manipulation
- Models: Manage data access and database interactions
- Middleware: Provide cross-cutting concerns like authentication and error handling
- Validators: Ensure data integrity through validation rules
- Utils: Reusable helper functions
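To make the layering concrete, here is a minimal TypeScript sketch of one request passing through the stack. The names and the in-memory store are illustrative, not the project's actual code; each layer only talks to the one directly below it:

```typescript
// Illustrative layering sketch: an in-memory store stands in for the database.
interface Issue { id: number; title: string; }

// Model: data access
const store: Issue[] = [{ id: 1, title: "Login fails on Safari" }];
const issueModel = {
  findById: (id: number): Issue | undefined => store.find((i) => i.id === id),
};

// Service: business logic
const issueService = {
  getIssue(id: number): Issue {
    const issue = issueModel.findById(id);
    if (!issue) throw new Error("Issue not found");
    return issue;
  },
};

// Controller: request validation and response formatting
const issueController = {
  show(params: { id: string }): { status: number; body: unknown } {
    const id = Number(params.id);
    if (Number.isNaN(id)) return { status: 400, body: { error: "Invalid id" } };
    return { status: 200, body: issueService.getIssue(id) };
  },
};

// The route layer would bind: GET /v1/issues/:id -> issueController.show
```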
The core data model consists of two main entities:
┌────────────────┐ ┌────────────────────┐
│ Issue │ │ IssueRevision │
├────────────────┤ ├────────────────────┤
│ id │ │ id │
│ title │ │ issue_id │
│ description │ │ issue_data │
│ created_by │◄──────┤ changes │
│ updated_by │ │ created_by │
│ created_at │ │ created_at │
│ updated_at │ └────────────────────┘
└────────────────┘
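As a TypeScript sketch, the two entities above might be typed as follows. The field types are assumptions inferred from the diagram (the real models may use UUIDs or `Date` objects):

```typescript
// Assumed field types; illustrative only.
interface Issue {
  id: number;
  title: string;
  description: string;
  created_by: number;
  updated_by: number;
  created_at: string; // ISO-8601 timestamp
  updated_at: string;
}

interface IssueRevision {
  id: number;
  issue_id: number;        // foreign key to Issue.id (the arrow above)
  issue_data: Issue;       // complete issue state at this revision
  changes: Partial<Issue>; // only the fields that changed
  created_by: number;
  created_at: string;
}

// Example pair: an issue and its initial revision
const issue: Issue = {
  id: 1,
  title: "Example",
  description: "Example issue",
  created_by: 1,
  updated_by: 1,
  created_at: "2024-01-01T00:00:00Z",
  updated_at: "2024-01-01T00:00:00Z",
};

const initialRevision: IssueRevision = {
  id: 1,
  issue_id: issue.id,
  issue_data: issue,
  changes: {},
  created_by: issue.created_by,
  created_at: issue.created_at,
};
```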
The authentication system uses JWT tokens and client identification:
┌─────────┐ ┌─────────────┐ ┌─────────────┐
│ Client │ │ API │ │ Database │
└────┬────┘ └──────┬──────┘ └──────┬──────┘
│ │ │
│ POST /auth/signup │ │
│ (email, password, client_id) │ │
├─────────────────────────────►│ │
│ │ │
│ POST /auth/login │ │
│ (email, password) │ │
├─────────────────────────────►│ │
│ │ │
│ │ Validate credentials │
│ ├──────────────────────────────►│
│ │ │
│ │ User data │
│ │◄──────────────────────────────┤
│ │ │
│ JWT Token │ │
│◄─────────────────────────────┤ │
│ │ │
│ Request with: │ │
│ - Authorization: Bearer JWT │ │
│ - X-Client-ID: client_id │ │
├─────────────────────────────►│ │
│ │ │
│ │ Verify token & client ID │
│ ├───────────┐ │
│ │ │ │
│ │◄───────────┘ │
│ │ │
│ │ Process request │
│ ├───────────┐ │
│ │ │ │
│ │◄───────────┘ │
│ │ │
│ Response │ │
│◄─────────────────────────────┤ │
│ │ │
Note: The `client_id` parameter is optional during signup but an `X-Client-Id` header is required for most API endpoints. The database seed creates client records that provide valid client IDs for testing.
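The header checks implied by this flow can be sketched in TypeScript as follows. The function name and request shape are illustrative, not the project's actual middleware:

```typescript
// Hypothetical sketch: extract the Bearer token and client ID from a request,
// rejecting it early if either credential is missing.
interface IncomingRequest {
  headers: Record<string, string | undefined>;
}

function extractCredentials(
  req: IncomingRequest
): { token: string; clientId: string } | null {
  const auth = req.headers["authorization"];
  const clientId = req.headers["x-client-id"];
  if (!auth?.startsWith("Bearer ") || !clientId) return null;
  return { token: auth.slice("Bearer ".length), clientId };
}
```

Actual token verification (signature and expiry checks) would happen after this step.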
The revision system tracks all changes to issues:
- When an issue is created, an initial revision is stored
- Each update generates a new revision with:
  - The complete issue state after the update
  - Only the fields that were changed
  - Timestamp and user information
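A revision record like this could be assembled as in the following sketch, which stores the full post-update state plus only the changed fields. The names are illustrative, not the project's actual service code:

```typescript
// Hypothetical helper: compare pre- and post-update state and keep only
// the fields whose values differ.
type IssueState = Record<string, unknown>;

function buildRevision(before: IssueState, after: IssueState, userId: number) {
  const changes: IssueState = {};
  for (const key of Object.keys(after)) {
    if (after[key] !== before[key]) changes[key] = after[key];
  }
  return {
    issue_data: after,                    // complete issue state after the update
    changes,                              // only the fields that were changed
    created_by: userId,                   // user information
    created_at: new Date().toISOString(), // timestamp
  };
}
```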
The comparison endpoint provides a detailed diff between two revisions:
┌─────────────────────────────────────────────────────────────┐
│ Revision Comparison │
├─────────────────────────────────────────────────────────────┤
│ ┌───────────────┐ ┌───────────────┐ │
│ │ Before │ │ After │ │
│ │ (Revision A) │ │ (Revision B) │ │
│ └───────┬───────┘ └───────┬───────┘ │
│ │ │ │
│ └──────────┬─────────────┘ │
│ ▼ │
│ ┌───────────────────────────────────┐ │
│ │ Changes │ │
│ │ - Field X: "old" → "new" │ │
│ │ - Field Y: "removed" │ │
│ │ - Field Z: null → "added" │ │
│ └───────────────────────────────────┘ │
│ │
│ ┌───────────────────────────────────┐ │
│ │ Revision Trail │ │
│ │ [Rev A] → [Rev C] → [Rev D] → [Rev B] │
│ └───────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/auth/login | Authenticate and receive JWT token |
| POST | /v1/auth/signup | Register a new user |
| POST | /v1/auth/refresh-token | Refresh JWT token |
| POST | /v1/issues | Create a new issue |
| GET | /v1/issues | List all issues (with optional filters) |
| GET | /v1/issues/:id | Get a specific issue by ID |
| PUT | /v1/issues/:id | Update an issue |
| DELETE | /v1/issues/:id | Delete an issue |
| GET | /v1/issues/:id/revisions | Get all revisions for an issue |
| GET | /v1/issues/:id/compare | Compare two revisions of an issue |
| GET | /v1/users/me | Get current user |
| PUT | /v1/users/me | Update current user |
| DELETE | /v1/users/me | Delete current user |
| GET | /v1/clients | List all clients |
| GET | /v1/clients/:id | Get a specific client by ID |
| POST | /v1/clients | Create a new client |
| PUT | /v1/clients/:id | Update a client |
| DELETE | /v1/clients/:id | Delete a client |
I intentionally added the following utility endpoints to enhance the API's usability and monitoring capabilities:
| Method | Endpoint | Description |
|---|---|---|
| GET | / | Root endpoint that returns basic API information |
| GET | /health | Health check endpoint for monitoring API status |
| GET | /docs | Swagger documentation endpoint for interactive API exploration |
I chose to implement the solution in TypeScript to provide:
- Type safety and better IDE support
- Self-documenting code through interfaces and types
- Easier maintenance and refactoring
I intentionally implemented API versioning (e.g., /v1/issues, /v1/auth) as a best practice to:
- Ensure backward compatibility when introducing breaking changes
- Allow for the evolution of the API without disrupting existing clients
- Provide a clear migration path for clients when new versions are released
- Support multiple API versions simultaneously during transition periods
For authentication, I implemented:
- JWT-based authentication for stateless operation
- Client ID validation through the X-Client-ID header
- User tracking for all database operations
I implemented a centralized error handling approach:
- Consistent error response format
- Detailed error codes and messages
- Proper HTTP status codes
- Error logging for debugging
The codebase includes:
- Unit tests for individual components
- Integration tests for API endpoints
- Mock services for isolated testing
- End-to-end (e2e) tests intentionally added using supertest for comprehensive API validation
The `GET /v1/issues` endpoint supports:
- Filtering by creator and updater
- Date range filtering for creation and update times
- Pagination for large result sets with the following parameters:
  - `page`: Page number (default: 1)
  - `limit`: Number of items per page (default: 10)
- Response includes pagination metadata (total count, current page, page size, total pages)
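The pagination metadata can be computed as in this sketch; the defaults mirror the documented ones (`page=1`, `limit=10`), but the field names are assumptions:

```typescript
// Hypothetical helper: derive pagination metadata from a total row count.
function paginate(totalCount: number, page = 1, limit = 10) {
  return {
    totalCount,
    currentPage: page,
    pageSize: limit,
    totalPages: Math.ceil(totalCount / limit),
    offset: (page - 1) * limit, // e.g. for a SQL LIMIT/OFFSET query
  };
}
```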
Challenge: Efficiently storing and retrieving issue revisions.
Solution: I used a separate table for revisions with a JSON column for storing the complete issue state and changes. This approach provides flexibility while maintaining good query performance.
Challenge: Generating meaningful comparisons between arbitrary revisions.
Solution: I implemented a diff algorithm that:
- Identifies added, removed, and modified fields
- Handles both forward and backward comparisons
- Generates a complete revision trail between the compared revisions
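A minimal field-level diff in the spirit of this description might look like the following. This is an illustrative sketch, not the project's actual algorithm:

```typescript
// Hypothetical diff: classify each field as added, removed, or modified
// between two revision snapshots.
type Fields = Record<string, unknown>;

function diffRevisions(a: Fields, b: Fields) {
  const result: Record<string, { kind: string; from?: unknown; to?: unknown }> = {};
  for (const key of Object.keys({ ...a, ...b })) {
    if (!(key in a)) result[key] = { kind: "added", to: b[key] };
    else if (!(key in b)) result[key] = { kind: "removed", from: a[key] };
    else if (a[key] !== b[key]) result[key] = { kind: "modified", from: a[key], to: b[key] };
  }
  return result;
}
```

Running the comparison in the opposite direction (swapping the arguments) yields the backward diff.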
Challenge: Implementing secure authentication without compromising usability.
Solution:
- JWT tokens with appropriate expiration
- Client ID validation for additional security
- Secure password storage with bcrypt
- Environment-based configuration for secrets
The API has been deployed and is accessible at:
- Production URL: https://issue-api.lutfifadlan.com
The deployment is fully automated with CI/CD:
- The API is automatically deployed to a VPS when changes are pushed to the solution/mochamad-lutfi-fadlan branch
- The CI/CD pipeline handles building, testing, and deploying the application
A frontend application has been built to interact with the API:
- Technology Stack: React, TypeScript, Next.js, Tailwind CSS
- Production URL: https://issue-dashboard.lutfifadlan.com
- CI/CD: Automatically deployed to Vercel when changes are pushed to the main branch
The frontend provides a user-friendly interface for interacting with all the API features, including issue management, revision tracking, and comparison functionality.
Given more time, I would enhance the solution with:
- Caching: Implement Redis caching for frequently accessed data
- Rate Limiting: Add more sophisticated rate limiting based on user and client
- Metrics: Add Prometheus metrics for monitoring API usage and performance
- Webhooks: Implement webhooks for issue events to enable integrations
- Soft Delete: Add soft delete functionality to preserve issue history
- Advanced Filtering: Implement full-text search and more complex filtering options