GoScheduler is a robust, distributed scheduling system written in Go. It orchestrates and processes scheduled tasks using a microservices architecture, leveraging MongoDB, RabbitMQ, and Redis for persistence, messaging, and caching.
goscheduler/
├── api/
│ ├── config/
│ ├── handlers/
│ ├── models/
│ │ ├── scheduler/
│ │ └── spec/
│ ├── repo/
│ │ ├── scheduler/
│ │ └── spec/
│ ├── Dockerfile
│ └── main.go
├── apidoc/
│ ├── config/
│ ├── web/
│ │ ├── index.html
│ │ └── api.scheduler.yaml
│ ├── Dockerfile
│ └── main.go
├── docs/
│ ├── postman/
│ └── swagger/
├── mock/
│ ├── generate-client-backup/
│ ├── generate-client-go-to-go/
│ ├── generate-client-java-to-go/
│ ├── generate-client-java-to-python/
│ └── models-files-to-generate/
├── sdk/
│ ├── config/
│ ├── connects/
│ ├── crypt/
│ ├── env/
│ ├── errors/
│ ├── fmts/
│ ├── glogs/
│ ├── middleware/
│ ├── mongo/
│ │ └── conn/
│ ├── rabbitmq/
│ │ └── v1/
│ └── redis/
│ └── v2/
├── worker/
│ ├── client/
│ │ └── {trace_id}/
│ │   ├── bin/
│ │   │ └── client
│ │   ├── go_client/
│ │   ├── main.go
│ │   ├── spec_main.txt
│ │   ├── scheduler.process.go
│ │   ├── service.metadata.yaml
│ │   └── spec.yaml
│ ├── config/
│ ├── java/
│ │ └── openapi-generator-cli.jar
│ ├── models/
│ ├── models-files-to-generate/
│ │ ├── spec_main.txt
│ │ └── scheduler.process.go
│ ├── python/
│ │ └── get_data_spec.py
│ ├── scheduler.process/
│ ├── repo/
│ │ ├── scheduler.process/
│ │ └── spec/
│ ├── Dockerfile
│ └── main.go
├── .env
├── docker-compose.yaml
├── go.mod
├── go.sum
└── LICENSE
- Architecture Overview
- Prerequisites
- Project Structure
- Deployment with Docker Compose
- Docker Compose Commands
- API Documentation
- Worker, RabbitMQ, and MongoDB Flow
- Spec Flow Diagram
- Scheduler Process Flow Diagram
- Troubleshooting
- Contributing
- License
- API: Receives and manages scheduling requests, exposes RESTful endpoints.
- RabbitMQ: Message broker for decoupling API and Worker.
- Worker: Consumes tasks from RabbitMQ, processes them, and updates their status in MongoDB.
- Python: Generates client from API spec, executes scheduler process commands.
- Java: Generates code for client to execute scheduler process requests.
- MongoDB: Stores process/task state and data.
- Redis: Caching layer for fast data access.
- Docker and Docker Compose
- Go 1.19 or higher (for local development)
/
├── api/ # API service (Dockerfile, handlers, models)
├── apidoc/ # API doc service for documentation
├── docs/ # Documentation files (Swagger, postman collection)
├── worker/ # Worker service (Dockerfile, processing logic)
├── sdk/ # Shared Go packages and utilities
├── .env # Environment variables
├── docker-compose.yaml # Orchestration of all services
└── README.md

git clone https://github.com/INCT-DD/os-scheduler.git
git checkout main
cd goscheduler

Edit the .env file to set ports, credentials, and connection strings for MongoDB, RabbitMQ, and Redis as needed.
docker-compose up -d

This will launch:
- RabbitMQ (with management UI on port 15672)
- MongoDB
- Redis
- API (default: http://localhost:8080)
- API Doc (default: http://localhost:8082)
- Worker (with Python, required Python libs, and Java) (default: http://localhost:8081)
docker-compose down

- RabbitMQ UI: http://localhost:15672 (default user/password: admin/admin123 — these values can be configured in the .env file via the RABBITMQ_USERNAME and RABBITMQ_PASSWORD variables)
- MongoDB: localhost:27017
- Redis: localhost:6379
- API: http://localhost:8080/v1/api/ping
- Worker: http://localhost:8081/v1/worker/ping
- API Doc: http://localhost:8082/v1/apidoc/ping
- API Doc - Swagger: http://localhost:8082/v1/apidoc/swagger/cd656
GoScheduler uses Docker Compose to orchestrate all its services. Here are the most useful commands for managing the application:
docker-compose ps

Shows all running services, their ports, and current status.

docker compose logs -f api
docker compose logs -f apidoc
docker compose logs -f worker

Follows the logs of an individual service in real time.
# Start all services in the background
docker-compose up -d
# Build all services in the background
docker-compose up -d --build
# Build all services in the background no-cache
docker-compose build --no-cache
# Start specific services
docker-compose up -d api worker

# Stop all services but keep containers
docker-compose stop
# Stop and remove containers, networks, and volumes
docker-compose down
# Stop and remove ALL containers, networks, and volumes
docker-compose down -v
# Stop and remove containers, networks, volumes, and images
docker-compose down --rmi all

# Remove a volume by name
docker volume rm goscheduler_redis_data
docker volume rm goscheduler_mongodb_data
docker volume rm goscheduler_rabbitmq_data
# Remove unused containers, networks, and dangling images
docker system prune -f

# View logs from all services
docker-compose logs
# View logs from specific services
docker-compose logs api worker
# Follow logs in real-time
docker-compose logs -f
# Follow logs from specific service with last 100 lines
docker-compose logs -f --tail=100 api

# Restart all services
docker-compose restart
# Restart specific services
docker-compose restart api worker

# Run multiple instances of the worker
docker-compose up -d --scale worker=3

# Execute a command in a running container
docker-compose exec api sh
# Run a one-off command in a new container
docker-compose run --rm api go test ./...

docker stats

docker build \
--platform=linux/amd64 \
--no-cache \
-f ./api/Dockerfile \
-t goscheduler_api:latest \
.

docker run --rm -p 8080:8080 goscheduler_api:latest

docker build \
--platform=linux/amd64 \
--no-cache \
-f apidoc/Dockerfile \
-t goscheduler_apidoc:latest \
.

docker run --rm -p 8082:8082 goscheduler_apidoc:latest

docker build \
--platform=linux/amd64 \
--no-cache \
-f ./worker/Dockerfile \
-t goscheduler_worker:latest \
.

docker run --rm -p 8081:8081 goscheduler_worker:latest

All endpoints require Basic Auth (the credentials API_SCHEDULER_AUTH_USERNAME and API_SCHEDULER_AUTH_PASSWORD can be configured in the .env file).
Full documentation is available in the API Doc service. A Postman collection and Swagger file can be found in docs/postman/ and docs/swagger/ in the project folder.
[POST] /v1/api/spec
This endpoint receives a Spec payload and validates that all required fields are present and correctly formatted. If validation passes, the Spec is created in MongoDB with status WAITING and its unique identifier (Trace ID) is published to a RabbitMQ queue. A worker then consumes this queue, processes the Spec, and generates the client code asynchronously.
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | A name to identify the Spec. This name is used by the client to indicate which service a Scheduler Process request targets. |
| headers | Object | Yes | A key-value pair of all the Headers that the client expects to be sent on a Scheduler Process request. |
| spec | string | Yes | A base64-encoded string of the OpenAPI .yaml Spec definition file of the client. |
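The spec field carries the client's OpenAPI definition as a base64 string. A minimal sketch of producing that value (the helper name and sample YAML are illustrative):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// encodeSpec converts raw OpenAPI YAML bytes into the base64 string
// expected in the "spec" field of the POST /v1/api/spec payload.
func encodeSpec(yaml []byte) string {
	return base64.StdEncoding.EncodeToString(yaml)
}

func main() {
	spec := []byte("openapi: 3.0.3\ninfo:\n  title: API\n  version: 1.0.0\n")
	fmt.Println(encodeSpec(spec))
}
```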
Request example:
{
"name": "Filmot",
"headers": {
"x-rapidapi-host": "filmot-tube-metadata-archive.p.rapidapi.com",
"x-rapidapi-key": "THEi6IKsE3mshLA9rbwj6DMQLAI1p1BHU0qjsn1xqbzwX5ArMB"
},
"spec": "b3BlbmFwaTogMy4wLjMKaW5mbzoKICB0aXRsZTogQVBJCiAgdmVyc2lvbjogMS4wLjAKICBjb250YWN0OiB7fQpzZXJ2ZXJzOiBbXQpwYXRoczoge30KdGFnczogW10K"
}

Response:
200 OK
{
"trace_id": "liTsN5cS3JHrVoYW"
}

[GET] /v1/api/spec/{trace_id}/status
This endpoint returns the status and the steps of the process of a Spec.
Params:
| Name | Type | Required | Description |
|---|---|---|---|
| trace_id | string | Yes | Trace ID of the spec |
Spec status response examples
• Status: WAITING
200 OK
{
"status": "WAITING",
"message": "The spec has been created and it is now waiting to be processed.",
"step": [
"Received in the API"
]
}

• Status: IN_PROGRESS
200 OK
{
"status": "IN_PROGRESS",
"message": "The spec is being processed.",
"step": [
"Received in the API",
"Received in the worker"
]
}

• Status: READY_YAML
200 OK
{
"status": "READY_YAML",
"message": "The spec yaml configuration file has been generated.",
"step": [
"Received in the API",
"Received in the worker",
"Spec YAML file generated"
]
}

• Status: READY_CLIENT
200 OK
{
"status": "READY_CLIENT",
"message": "The client code has been generated.",
"step": [
"Received in the API",
"Received in the worker",
"Spec YAML file generated",
"Client code generated (not reviewed)"
]
}

• Status: SUCCESS
200 OK
{
"status": "SUCCESS",
"message": "The spec has been completed successfully.",
"step": [
"Received in the API",
"Received in the worker",
"Spec YAML file generated",
"Client code generated (not reviewed)",
"Client code generated and reviewed"
]
}

• Status: ERROR
200 OK
{
"status": "ERROR",
"message": "The spec has one or more errors.",
"error": "failed to execute python/get_data_spec.py script: exit status 3 - Unable to process spec OpenAPI",
"step": [
"Received in the API",
"Received in the worker",
"Spec YAML file generated"
]
}

[GET] /v1/api/spec/{trace_id}?spec=true&client=true
This endpoint returns a Spec by its Trace ID. The spec and client query params can be used to specify what is returned.
Params:
| Name | Type | Required | Description |
|---|---|---|---|
| trace_id | string | Yes | Trace ID of the spec |
| spec | boolean | No | Use this query param to return the base64-encoded string of the OpenAPI .yaml Spec definition file of the client. (Default value: false) |
| client | boolean | No | Use this query param to return the URL, endpoints, and servers of the client. (Default value: false) |
Response:
200 OK
{
"trace_id": "0iNULRtUjwi8DJoz",
"created_at": "2025-08-26T12:11:49.226Z",
"updated_at": "2025-08-26T14:04:22.302Z",
"name": "filmot",
"status": {
"status": "SUCCESS",
"message": "The spec has been completed successfully.",
"step": [
"Received in the API",
"Received in the worker",
"Spec YAML file generated",
"Client code generated (not reviewed)",
"Client code generated and reviewed"
]
},
"client": {
"endpoints": {
"GET /getsearchchannels": [
{
"default": null,
"enum": null,
"name": "term",
"in": "query",
"type": "string",
"required": true
}
],
"GET /getsearchsubtitles": [
{
"default": null,
"enum": null,
"name": "query",
"in": "query",
"type": "string",
"required": true
},
{
"default": null,
"enum": null,
"name": "channel",
"in": "query",
"type": "string",
"required": false
}
]
},
"headers": [
"x-rapidapi-key",
"x-rapidapi-host"
],
"servers": [
"https://filmot-tube-metadata-archive.p.rapidapi.com"
],
"url": "https://filmot-tube-metadata-archive.p.rapidapi.com"
},
"spec": "b3BlbmFwaTogMy4wLjMKaW5mbzoKICB0aXRsZTogQVBJCiAgdmVyc2lvbjogMS4wLjAKICBjb250YWN0OiB7fQpzZXJ2ZXJzOiBbXQpwYXRoczoge30KdGFnczogW10K",
"headers": {
"x-rapidapi-host": "filmot-tube-metadata-archive.p.rapidapi.com",
"x-rapidapi-key": "THEi6IKsE3mshLA9rbwj6DMQLAI1p1BHU0qjsn1xqbzwX5ArMB"
}
}

[PUT] /v1/api/spec/{trace_id}
This endpoint receives a Spec payload and validates that the fields sent are correctly formatted. If validation passes, the submitted fields are updated in MongoDB. If the spec field was sent, the API checks whether it differs from the existing one using a 64-character hash. If the spec is different, the status is set back to WAITING and the Spec's unique identifier (Trace ID) is published to a RabbitMQ queue. If the spec is the same, the Trace ID is published to a different RabbitMQ queue, where the service.metadata.yaml file will be updated with the changes. A worker then consumes the queue, processes the Spec, and regenerates the client code asynchronously.
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | No | A name to identify the Spec. This name is used by the client to indicate which service a Scheduler Process request targets. |
| headers | Object | No | A key-value pair of all the Headers that the client expects to be sent on a Scheduler Process request. |
| spec | string | No | A base64-encoded string of the OpenAPI .yaml Spec definition file of the client. |
Params:
| Name | Type | Required | Description |
|---|---|---|---|
| trace_id | string | Yes | Trace ID of the spec |
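The docs say spec changes are detected with a 64-character hash; that length matches a SHA-256 hex digest, so a sketch of such a comparison follows (the algorithm choice is an assumption, not confirmed by the source):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// specHash returns a 64-character hex digest of the base64-encoded spec.
// SHA-256 is an assumption here: the docs only state "64-character hash",
// which is the length of a SHA-256 hex digest.
func specHash(spec string) string {
	sum := sha256.Sum256([]byte(spec))
	return hex.EncodeToString(sum[:])
}

func main() {
	stored := specHash("b3BlbmFwaTogMy4wLjMK")
	incoming := specHash("b3BlbmFwaTogMy4wLjMK")
	// Same hash: only metadata (name/headers) changed, so no regeneration.
	fmt.Println("spec changed:", stored != incoming) // spec changed: false
}
```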
Request example:
{
"name": "Filmot",
"headers": {
"x-rapidapi-host": "filmot-tube-metadata-archive.p.rapidapi.com",
"x-rapidapi-key": "THEi6IKsE3mshLA9rbwj6DMQLAI1p1BHU0qjsn1xqbzwX5ArMB"
},
"spec": "b3BlbmFwaTogMy4wLjMKaW5mbzoKICB0aXRsZTogQVBJCiAgdmVyc2lvbjogMS4wLjAKICBjb250YWN0OiB7fQpzZXJ2ZXJzOiBbXQpwYXRoczoge30KdGFnczogW10K"
}

Response:
204 No Content
[DELETE] /v1/api/spec/{trace_id}
This endpoint removes a Spec and saves its data on another MongoDB collection.
Params:
| Name | Type | Required | Description |
|---|---|---|---|
| trace_id | string | Yes | Trace ID of the spec |
Response:
204 No Content
[GET] /v1/api/spec/{offset}/{limit}?spec=true&client=true
This endpoint returns a list of Specs. The offset and limit URL params control pagination; the spec and client query params control what is returned.
Params:
| Name | Type | Required | Description |
|---|---|---|---|
| offset | int | Yes | Starting point to retrieve the results. (Default value: 0) |
| limit | int | Yes | The maximum number of results to display per request. (Default value: 100) |
| spec | boolean | No | Use this query param to return the base64-encoded string of the OpenAPI .yaml Spec definition file of the client. (Default value: false) |
| client | boolean | No | Use this query param to return the URL, endpoints, and servers of the client. (Default value: false) |
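Paging through all Specs with these params can be sketched as follows (the helper name is illustrative, and the count would come from the response shown below):

```go
package main

import "fmt"

// pageURLs builds the offset/limit paths needed to page through `count`
// Specs, `limit` at a time, matching GET /v1/api/spec/{offset}/{limit}.
func pageURLs(count, limit int) []string {
	var urls []string
	for offset := 0; offset < count; offset += limit {
		urls = append(urls, fmt.Sprintf("/v1/api/spec/%d/%d", offset, limit))
	}
	return urls
}

func main() {
	// [/v1/api/spec/0/100 /v1/api/spec/100/100 /v1/api/spec/200/100]
	fmt.Println(pageURLs(250, 100))
}
```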
Response:
{
"count": 2,
"results": [
{
"trace_id": "liTsN5cS3JHrVoYW",
"created_at": "2025-07-28T12:50:00.253Z",
"updated_at": "2025-07-28T12:50:06.007Z",
"name": "filmot",
"status": {
"status": "SUCCESS",
"message": "The spec has been completed successfully.",
"step": [
"Received in the API",
"Received in the worker",
"Spec YAML file generated",
"Client code generated (not reviewed)",
"Client code generated and reviewed"
]
},
"spec": "b3BlbmFwaTogMy4wLjMKaW5mbzoKICB0aXRsZTogQVBJCiAgdmVyc2lvbjogMS4wLjAKICBjb250YWN0OiB7fQpzZXJ2ZXJzOiBbXQpwYXRoczoge30KdGFnczogW10K",
"client": {
"endpoints": {
"GET /getsearchchannels": [
{
"default": null,
"enum": null,
"name": "term",
"in": "query",
"type": "string",
"required": true
}
],
"GET /getsearchsubtitles": [
{
"default": null,
"enum": null,
"name": "query",
"in": "query",
"type": "string",
"required": true
},
{
"default": null,
"enum": null,
"name": "channel",
"in": "query",
"type": "string",
"required": false
}
]
},
"headers": [
"x-rapidapi-key",
"x-rapidapi-host"
],
"servers": [
"https://filmot-tube-metadata-archive.p.rapidapi.com"
],
"url": "https://filmot-tube-metadata-archive.p.rapidapi.com"
},
"headers": {
"x-rapidapi-host": "filmot-tube-metadata-archive.p.rapidapi.com",
"x-rapidapi-key": "THEi6IKsE3mshLA9rbwj6DMQLAI1p1BHU0qjsn1xqbzwX5ArMB"
}
},
{
"trace_id": "M00B7DmC7UK6zruG",
"created_at": "2025-07-25T18:06:26.944Z",
"updated_at": "2025-07-25T18:06:31.252Z",
"name": "test",
"status": {
"status": "SUCCESS",
"message": "The spec has been completed successfully.",
"step": [
"Received in the API",
"Received in the worker",
"Spec YAML file generated",
"Client code generated (not reviewed)",
"Client code generated and reviewed"
]
},
"spec": "b3BlbmFwaTogMy4wLjMKaW5mbzoKICB0aXRsZTogQVBJCiAgdmVyc2lvbjogMS4wLjAKICBjb250YWN0OiB7fQpzZXJ2ZXJzOiBbXQpwYXRoczoge30KdGFnczogW10K",
"client": {
"endpoints": {
"GET /getsearchchannels": [
{
"default": null,
"enum": null,
"name": "term",
"in": "query",
"type": "string",
"required": true
}
],
"GET /getsearchsubtitles": [
{
"default": null,
"enum": null,
"name": "query",
"in": "query",
"type": "string",
"required": true
},
{
"default": null,
"enum": null,
"name": "channel",
"in": "query",
"type": "string",
"required": false
}
]
},
"headers": [
"x-rapidapi-key",
"x-rapidapi-host"
],
"servers": [
"https://filmot-tube-metadata-archive.p.rapidapi.com"
],
"url": "https://filmot-tube-metadata-archive.p.rapidapi.com"
},
"headers": {
"x-rapidapi-host": "filmot-tube-metadata-archive.p.rapidapi.com",
"x-rapidapi-key": "THEi6IKsE3mshLA9rbwj6DMQLAI1p1BHU0qjsn1xqbzwX5ArMB"
}
}
]
}

[POST] /v1/api/scheduler/process
This endpoint receives a Scheduler Process payload and validates that all required fields are present and correctly formatted. If validation passes, the Scheduler Process is created in MongoDB with status WAITING and its unique identifier (Trace ID) is published to a RabbitMQ queue. A worker then consumes this queue, processes the Scheduler Process, and makes the API request to the client asynchronously. The status can be checked using the Get Scheduler Process Status endpoint. The response can be retrieved using the Get Scheduler Process Response endpoint.
| Field | Type | Required | Description |
|---|---|---|---|
| service | string | Yes | The name of the Spec against which the client wants to make a request. |
| endpoint | string | Yes | The endpooint to make the request to. |
| method | string | Yes | HTTP method of the request. |
| headers | Object | No | A key-value pair of all the Headers to be sent in the request. |
| query_params | Object | No | A key-value pair of the query params of the request. |
| url_params | Object | No | A key-value pair of the URL params of the request. |
| req_body | any | No | Payload body of the request. |
Request example:
{
"service": "Filmot",
"endpoint": "/getsearchsubtitles",
"method": "GET",
"headers": {
"x-rapidapi-host": "filmot-tube-metadata-archive.p.rapidapi.com",
"x-rapidapi-key": "THEi6IKsE3mshLA9rbwj6DMQLAI1p1BHU0qjsn1xqbzwX5ArMB"
},
"query_params": {
"query": "hello world",
"channel": "Technology Explained",
"channelCount": 5,
"queryVideoID": "dQw4w9WgXcQ",
"lang": "en",
"page": 2,
"category": "Science & Technology",
"excludeCategory": "Music,Gaming",
"license": 2,
"maxViews": 50000,
"minViews": 5000,
"maxLikes": 2000,
"minLikes": 100,
"country": 14,
"channelID": "UCBJycsmduvYEL83R_U4JriQ",
"title": "Tutorial",
"startDuration": 300,
"endDuration": 600,
"searchManualSubs": 1,
"startDate": "2023-01-01",
"endDate": "2023-12-31"
}
}

Response:
200 OK
{
"trace_id": "WE7koqi5NvmmlINj",
"status": "WAITING",
"msg": "Your request is waiting to be processed."
}

[GET] /v1/api/scheduler/process/{trace_id}/status
This endpoint returns the status and the steps of the process of a Scheduler Process.
Params:
| Name | Type | Required | Description |
|---|---|---|---|
| trace_id | string | Yes | Trace ID of the scheduler process |
Scheduler Process status response examples
• Status: WAITING
200 OK
{
"status": "WAITING",
"message": "The request has been created. It is waiting to be processed.",
"step": [
"Received in the API"
]
}

• Status: IN_PROGRESS
200 OK
{
"status": "IN_PROGRESS",
"message": "The request is being processed.",
"step": [
"Received in the API",
"Received in the worker"
]
}

• Status: SUCCESS
200 OK
{
"status": "SUCCESS",
"message": "The request has been executed and the process has been completed successfully.",
"step": [
"Received in the API",
"Received in the worker",
"Executed command successfully"
]
}

• Status: ERROR
200 OK
{
"status": "ERROR",
"message": "There was one or more errors trying to execute scheduler process.",
"error": "The command could not be executed because the service requested (filmot) is not available.",
"step": [
"Received in the API",
"Received in the worker"
]
}

[GET] /v1/api/scheduler/process/{trace_id}
This endpoint returns a Scheduler Process by its Trace ID. It does not return the response of the Scheduler Process request; to retrieve the response, use the Get Scheduler Process Response endpoint.
Params:
| Name | Type | Required | Description |
|---|---|---|---|
| trace_id | string | Yes | Trace ID of the scheduler process |
Response:
200 OK
{
"trace_id": "WE7koqi5NvmmlINj",
"timestamp": "2025-07-28T14:17:28.572Z",
"status": {
"status": "SUCCESS",
"message": "The request has been executed and the process has been completed successfully.",
"step": [
"Received in the API",
"Received in the worker",
"Executed command successfully"
]
},
"service": "filmot",
"method": "GET",
"endpoint": "/getsearchchannels",
"headers": {
"x-rapidapi-host": "filmot-tube-metadata-archive.p.rapidapi.com",
"x-rapidapi-key": "THEi6IKsE3mshLA9rbwj6DMQLAI1p1BHU0qjsn1xqbzwX5ArMB"
},
"query_params": {
"term": "Science Channel"
},
"url_params": null,
"req_body": null
}

[GET] /v1/api/scheduler/process/{trace_id}/response
This endpoint returns the response of the request made by a Scheduler Process. If the response is not available yet, the status will be returned instead. If the response is available, it is returned exactly as received from the Spec client API. If an error occurred, the status will be returned with a description of the error.
Params:
| Name | Type | Required | Description |
|---|---|---|---|
| trace_id | string | Yes | Trace ID of the scheduler process |
✅ Successful response example:
200 OK
[
{
"r": 2312517590,
"value": "UCvJiYiBUbw4tmpRSZT2r1Hw",
"label": "Science Channel",
"thumbnailurl": "http:\/\/yt3.googleusercontent.com\/XiCSgGILswBoCZTIG9TYQ1r7hGlUvSDtqi5ODX1SyPNqWr_csSUEwEV4Xrid_EYbnHD_aDbXuw=s900-c-k-c0x00ffffff-no-rj",
"subcount": 5140000,
"subcountp": "5.1M",
"viewcount": 1133721956,
"viewcountp": "1.1B",
"newshortname": "@sciencechannel",
"islist": false
},
{
"r": 5518934,
"value": "UCHpFyLQgg4h9VZuFyby7RbQ",
"label": "SCIENCE CHANNEL\uff08JST\uff09",
"thumbnailurl": "http:\/\/yt3.googleusercontent.com\/pLsHN8CQzk9dn0vDcWLpQ3M4uiCI6vCRDVNnmSXKGakP6X4W2sgg0csNVmzFzq38QjuzrJeeWQ=s900-c-k-c0x00ffffff-no-rj",
"subcount": 618000,
"subcountp": "618K",
"viewcount": 178777332,
"viewcountp": "178.8M",
"newshortname": "@jstsciencechannel",
"islist": false
}
]
⚠️ Important
Note that a successful response is returned exactly as received from the request made to the client.
❌ Error response example:
200 OK
{
"error": true,
"message": "Error executing API call: API call failed: term is required and must be specified"
}

2023/01/01 13:01:43 ✅ Redis connected successfully!
2023/01/01 13:02:46 ⏳ Trying to reconnect MongoDB in 4s...
2023/01/01 13:30:47 ⏳ Trying to reconnect RabbitMQ in 8s...
2023/01/01 13:31:17 ✅ RabbitMQ connected successfully!
2023/01/01 13:51:17 ✅ Connected to MongoDB successfully!
| Key | Type | Description |
|---|---|---|
| @timestamp | string | Timestamp of when the log happened. (Format: 2006-01-02T15:04:05.000Z) |
| data | string | Date and time of when the log happened. (Format: 2006-01-02 15:04:05) |
| service | string | Name of the service (e.g. api.goscheduler) |
| handler | string | Signature or name of the function called (e.g. func (s *connects.HConn) MyFunc(c *quick.Ctx)) |
| time_start | string | Date and time of when the handler function started. (Format: 2006-01-02T15:04:05.000Z) |
| time_end | string | Date and time of when the handler function ended. (Format: 2006-01-02T15:04:05.000Z) |
| latency | string | Amount of time it took for the handler function to complete. Measured by the difference between time_start and time_end (e.g. 0.2 ms) |
| status | int | Status of the response (e.g. 400) |
| url | string | URL path of the endpoint of the request (e.g. /v1/my/url/path) |
| req_headers | object | Header array of the request |
| resp_headers | object | Header array of the response |
| content_length | int | Size of the request body in bytes (e.g. 136) |
| method | string | Method of the request (e.g. GET) |
| host | string | Host name from the HTTP request (e.g. localhost:8080) |
| agent | string | User-Agent value from the HTTP request Header (e.g. PostmanRuntime/7.37.3) |
| remote_ip | string | X-Real-IP value from the HTTP request Header (e.g. 127.0.0.1) |
| req_body | string | Payload of the HTTP request |
| queue_msg_body | string | Body of the message received in queue. Usually used in the worker |
| other_data | object | Any other relevant data that can be useful in the log |
There are options to control the logs in the .env file.
- GLOGS_WRITE_STDOUT (boolean) (Default: true)
  Whether or not to display the log output. (Values: false, true)
- GLOGS_INSERT_MONGODB (boolean) (Default: true)
  Whether or not to save logs to MongoDB. (Values: false, true)
- GLOGS_LOG_TYPE (string) (Default: text)
  Defines the output format of the log. (Values: json, text)
Text
6:14PM ERR json: cannot unmarshal number into Go struct field CreateSpecReq.username of type string @timestamp=2025-06-03T18:14:36.606Z X-TRACE-ID=aKmtdLr6B7KvWiE8 agent=PostmanRuntime/7.37.3 content_length=270 data="2025-06-03 18:14:36" handler="func (s *connects.HConn) SpecCreate(c *quick.Ctx)" host=localhost:8080 latency="0 ms" method=POST remote_ip=127.0.0.1 req_body={"password":"5wDBqSQRb6Ujm1JezXMvwjBHaqJO1RBs","spec":"b3BlbmFwaTogMy4wLjMKaW5mbzoKICB0aXRsZTogQVBJCiAgdmVyc2lvbjogMS4wLjAKICBjb250YWN0OiB7fQpzZXJ2ZXJzOiBbXQpwYXRoczoge30KdGFnczogW10K","token":"8710aeb6-0ded-48e9-80d6-8e55ffd4d6bc","username":1} req_headers={"Accept":["*/*"],"Accept-Encoding":["gzip, deflate, br"],"Authorization":["Basic xxxx"],"Cache-Control":["no-cache"],"Connection":["keep-alive"],"Content-Length":["270"],"Content-Type":["application/json"],"Postman-Token":["76105005-6180-417e-8bee-83a91ef01f1e"],"User-Agent":["PostmanRuntime/7.37.3"]} resp_headers={"Content-Encoding":["gzip"],"Content-Type":["application/json"],"Vary":["Accept-Encoding"],"X-Trace-Id":["aKmtdLr6B7KvWiE8"]} service=api.goscheduler status=400 time_end=2025-06-03T18:14:36.606Z time_start=2025-06-03T18:14:36.606Z url=/v1/api/spec

JSON
{"level":"error","@timestamp":"2025-06-03T18:14:10.277Z","data":"2025-06-03 18:14:10","X-TRACE-ID":"RfZ3NhX7noIF26tZ","service":"api.goscheduler","handler":"func (s *connects.HConn) SpecCreate(c *quick.Ctx)","time_start":"2025-06-03T18:14:10.277Z","time_end":"2025-06-03T18:14:10.277Z","latency":"0 ms","url":"/v1/api/spec","req_headers":{"Accept":["*/*"],"Accept-Encoding":["gzip, deflate, br"],"Authorization":["Basic xxxx"],"Cache-Control":["no-cache"],"Connection":["keep-alive"],"Content-Length":["270"],"Content-Type":["application/json"],"Postman-Token":["4f7c1ac7-f102-4cc4-b29d-736433c1d47e"],"User-Agent":["PostmanRuntime/7.37.3"]},"resp_headers":{"Content-Encoding":["gzip"],"Content-Type":["application/json"],"Vary":["Accept-Encoding"],"X-Trace-Id":["RfZ3NhX7noIF26tZ"]},"content_length":270,"method":"POST","host":"localhost:8080","agent":"PostmanRuntime/7.37.3","remote_ip":"127.0.0.1","req_body":{
"name": 1,
"headers": {
"x-rapidapi-host": "filmot-tube-metadata-archive.p.rapidapi.com",
"x-rapidapi-key": "THEi6IKsE3mshLA9rbwj6DMQLAI1p1BHU0qjsn1xqbzwX5ArMB"
},
"spec": "b3BlbmFwaTogMy4wLjMKaW5mbzoKICB0aXRsZTogQVBJCiAgdmVyc2lvbjogMS4wLjAKICBjb250YWN0OiB7fQpzZXJ2ZXJzOiBbXQpwYXRoczoge30KdGFnczogW10K"
},"status":400,"message":"json: cannot unmarshal number into Go struct field CreateSpecReq.name of type string"}

- Client sends a scheduling request to the API service.
- API validates the request, stores the process in MongoDB with status WAITING, and publishes a message to a RabbitMQ queue.
- Worker service (with 5 concurrent workers) listens to RabbitMQ, consumes the message, and updates the process status to IN_PROGRESS in MongoDB.
- Worker processes the task according to its business logic.
- Worker updates the process status to SUCCESS or ERROR in MongoDB after processing.
- Client can query the API to check the status of their request.
- Client sends a request to the API.
- API validates the payload and saves spec data to MongoDB.
- API publishes the Trace ID of the Spec to a RabbitMQ queue.
- Worker consumes the Trace ID of the Spec from the RabbitMQ queue.
- Worker calls a Python script get_data_spec.py to generate the spec.yaml file for that client.
- Worker generates the service.metadata.yaml file with metadata for that client.
- Worker executes a Java binary openapi-generator-cli.jar to generate the client_go/* directory for that Spec using the spec.yaml file generated in Step 5.
- Worker saves the generated client data to MongoDB.
- Worker publishes the Trace ID to another RabbitMQ queue.
- Worker consumes the Trace ID, and the main.go file with all requests for that client is generated using AI or manually.
1. Client sends a request to the API.
|
2. API validates the payload and saves spec data to MongoDB.
|
# Spec status: WAITING
|
3. API publishes the Trace ID of the Spec to RabbitMQ queue.
|
# For [POST] /v1/api/spec requests, it always publishes to the RabbitMQ queue.
|
# For [PUT] /v1/api/spec/:id requests, it checks if a different spec was sent or if the name has been changed. If so, it publishes to the RabbitMQ queue.
|
4. Worker consumes Trace ID of the Spec from RabbitMQ queue.
|
# Spec status: IN_PROGRESS
|
5. Worker calls a python script `get_data_spec.py` to generate `spec.yaml` file for that client.
|
6. Worker generates the `service.metadata.yaml` file with metadata for that client.
|
# Spec status: READY_YAML
|
7. Worker executes a Java binary file `openapi-generator-cli.jar` to generate `client_go/*` directory for that Spec using the spec.yaml file generated in Step 5.
|
8. Worker saves the generated client data on MongoDB.
|
# Spec status: READY_CLIENT
|
9. Worker publishes Trace ID to another RabbitMQ queue.
|
10. Worker consumes Trace ID and the `main.go` file with all requests for that client will be generated using AI or be generated manually (Read the README.md of code generated)
|
# Spec status: SUCCESS

flowchart TD
A[Client sends request to API] --> B[API validates payload and saves spec data to MongoDB]
B -->|Spec status: WAITING| C{Request type?}
C -->|POST /v1/api/spec| D[Always publish Trace ID to RabbitMQ to generate client]
C -->|PUT /v1/api/spec/:id| E[If spec differs or name changed, publish to RabbitMQ]
D --> F[Worker consumes Trace ID from RabbitMQ]
E --> F
F -->|Spec status: IN_PROGRESS| G[Worker runs get_data_spec.py to produce spec.yaml]
G --> H[Worker generates the service.metadata.yaml file with metadata]
H -->|Spec status: READY_YAML| I[Worker executes openapi-generator-cli.jar to generate client_go/*]
I --> J[Worker saves generated client data to MongoDB]
J -->|Spec status: READY_CLIENT| K[Worker publishes Trace ID to another RabbitMQ queue]
K --> L[Worker consumes Trace ID]
L --> M[Worker generates main.go using AI or manual method]
M -->|Spec status: SUCCESS| N[Process complete]

The main.go script file with all requests for a client will be generated by AI using the following prompt:
Implement the main.go file, which receives a spec and a scheduler process as parameters to execute an API call. The API calls will be made using a Go lib generated by the OpenAPIGeneratorCli program. It is located in the client_go/ folder. The details of how the main.go file will be executed are in the scheduler.process.go file, where cmd.Stdin is used to pass parameters and execute using exec.Command. Use the spec_main.txt file as an example of what main.go file must have, despite having a .txt extension it has Go code and it is .txt file to prevent Go package and repeated function errors. The main.go file has a readInputFromStdin function used to read the parameters sent by cmd.Stdin. The goal is to understand the functions of the generated lib code and use them to implement all the spec's API endpoint calls in the main.go file, which will be called by scheduler.process.go. For the requests in main.go, use NewRequestWithContext passing a context.WithTimeout configured to 1 hour. Make sure to check the imports in the main.go file. To import the functions of the generated lib, use "module_name/client_go" where module_name is the unique name of the Spec. Execute `go mod init <module_name>` and `go mod tidy` to get the client ready to be used. Inside the client_go folder from the client code generated, remove go.mod and go.sum files, remove the ".openapi-generator" and "test" folders to avoid import errors. After the main.go is ready, generate the binary with the name "client" and move it so that the binary file path is bin/client. Return example test calls using the format "echo '{"spec":{...}, "scheduler_process":{...}}' | ./bin/client"
The spec_main.txt, scheduler.process.go, and the other example files can be found in these directories:

| Directory | Description |
|---|---|
| `mock/models-files-to-generate/` | Contains the unmodified version of the `spec_main.txt` and `scheduler.process.go` files. |
| `mock/generate-client-go-to-go/` | Contains the modified version of the `main.go` file and the `client_go/` folder generated by oapi-codegen (Go). |
| `mock/generate-client-java-to-go/` | Contains the modified version of the `main.go` file and the `client_go/` folder generated by `openapi-generator-cli.jar` (Java). |
| `mock/generate-client-java-to-python/` | Contains the modified version of the pytest file and the `filmot_python_client/` folder generated by `openapi-generator-cli.jar` (Java). |
After executing the prompt, the expected output is:

- An updated `main.go` file that can execute all endpoints of the client spec.
- Some example calls using the `echo '...' | go run main.go` bash command for testing the requests.
The chosen approach was the Java generator producing a Go client; to test what was generated by AI or written manually, we used the examples below.
**Example `getsearchchannels`**

```bash
echo '{
  "spec": {
    "headers": {
      "x-rapidapi-host": "filmot-tube-metadata-archive.p.rapidapi.com",
      "x-rapidapi-key": "THEi6IKsE3mshLA9rbwj6DMQLAI1p1BHU0qjsn1xqbzwX5ArMB"
    },
    "url": "https://filmot-tube-metadata-archive.p.rapidapi.com"
  },
  "scheduler_process": {
    "method": "GET",
    "endpoint": "/getsearchchannels",
    "query_params": {"term": "Science Channel"}
  }
}' | go run main.go
```

**Example `getsearchsubtitles`**
```bash
echo '{
  "spec": {
    "headers": {
      "x-rapidapi-host": "filmot-tube-metadata-archive.p.rapidapi.com",
      "x-rapidapi-key": "THEi6IKsE3mshLA9rbwj6DMQLAI1p1BHU0qjsn1xqbzwX5ArMB"
    },
    "url": "https://filmot-tube-metadata-archive.p.rapidapi.com"
  },
  "scheduler_process": {
    "method": "GET",
    "endpoint": "/getsearchsubtitles",
    "query_params": {
      "query": "hello world",
      "lang": "en",
      "page": 1
    }
  }
}' | go run main.go
```

Output overview of a client generation using Claude Code:
```text
Total cost:            $0.4197
Total duration (API):  1m 50.8s
Total duration (wall): 7m 51.4s
Total code changes:    285 lines added, 2 lines removed
Usage by model:
    claude-3-5-haiku:  509 input, 524 output, 0 cache read, 0 cache write
    claude-sonnet:     41 input, 6.8k output, 364.8k cache read, 54.8k cache write
```

Client folder final structure + binary:
```text
client/
├── {trace_id}/
│   ├── bin/
│   │   └── client
│   ├── client_go/
│   │   └── ...
│   ├── go.mod
│   ├── go.sum
│   ├── spec_main.txt
│   ├── scheduler.process.go
│   ├── service.metadata.yaml
│   └── spec.yaml
```

**Metadata of a Spec**
The metadata of a Spec can be found in the `client/{TRACE_ID}/service.metadata.yaml` file of each service. It has the following fields:
| Name | Type | Description |
|---|---|---|
| `trace_id` | string | Trace ID of the Spec |
| `name` | string | Name of the Spec |
| `updated_at` | string | Date and time of when this file was last updated. (Format: `2006-01-02T15:04:05.000Z`) |
Example:

```yaml
trace_id: 0iNULRtUjwi8DJoz
name: filmot
updated_at: "2025-08-26T11:04:22.312Z"
```

```text
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│             │     │             │     │             │
│   WAITING   │────▶│ IN_PROGRESS │────▶│    ERROR    │
│             │     │             │     │             │
└─────────────┘     └─────────────┘     └─────────────┘
                           │                 ▲   ▲
                           │                 │   │
                           ▼                 │   │
                    ┌─────────────┐          │   │
                    │             │          │   │
                    │ READY YAML  │──────────┘   │
                    │             │              │
                    └─────────────┘              │
                           │                     │
                           │                     │
                           ▼                     │
                    ┌─────────────┐              │
                    │             │              │
                    │  READY CLI  │──────────────┘
                    │             │
                    └─────────────┘
                           │
                           │
                           ▼
                    ┌─────────────┐
                    │             │
                    │   SUCCESS   │
                    │             │
                    └─────────────┘
```
```mermaid
flowchart TD
    A[Waiting] --> B[In progress]
    B --> E[Error]
    B -->|Generate spec.yaml and service.metadata.yaml in Go| C[Ready YAML]
    C --> E
    C -->|Generate client_go with Java| D[Ready CLI]
    D --> E
    D -->|Generate with AI or manual - human| F[Success]
```

1. Client sends a request to the API.
2. API validates the payload and saves scheduler process data to MongoDB.
3. API publishes the Trace ID of the scheduler process to a RabbitMQ queue.
4. Worker consumes the Trace ID of the scheduler process from the RabbitMQ queue.
5. Worker gets scheduler process and spec data from MongoDB and checks whether the same request exists in Redis.
6. If the same request exists in Redis, jump to step 8.
7. Worker merges headers from the spec and the scheduler process (keeping the scheduler process values on duplicates), then executes the `client/<trace_id>/bin/client` binary built from the `main.go` file for that client, passing scheduler process and spec data as parameters. After finishing, the worker caches the response in Redis.
8. Worker publishes the response in base64 to another RabbitMQ queue.
9. Worker consumes the base64 response and updates the scheduler process in MongoDB, making it available to get from the API.
```text
1. Client sends a request to the API.
        |
2. API validates the payload and saves scheduler process data to MongoDB.
        |
        # Scheduler process status: WAITING
        |
3. API publishes the Trace ID of the scheduler process to RabbitMQ queue.
        |
4. Worker consumes Trace ID of the scheduler process from RabbitMQ queue.
        |
        # Scheduler process status: IN_PROGRESS
        |
5. Worker gets scheduler process and spec data from MongoDB, checks if that same request exists on Redis and if spec is available.
        |
        # Spec is available if spec status is SUCCESS. It means the client code has been generated and reviewed.
        |
6. If same request exists on Redis, jump to step 8.
        |
7. Merges headers from spec and scheduler process, keeping the ones from scheduler process if any repeats; executes the client/<trace_id>/bin/client binary of the main.go file for that client, passing scheduler process and spec data as parameters. After finishing, the worker sets the response in cache on Redis.
        |
8. Worker publishes response in base64 to another RabbitMQ queue.
        |
9. Worker consumes response in base64 and updates scheduler process in MongoDB, making it available to get from the API.
        |
        # Scheduler process status: SUCCESS
```

```mermaid
flowchart TD
    A[Client sends request to API] --> B[API validates payload and saves scheduler process data to MongoDB]
    B -->|Scheduler process status: WAITING| C[API publishes Trace ID of scheduler process to RabbitMQ queue]
    C --> D[Worker consumes Trace ID of scheduler process from RabbitMQ queue]
    D -->|Scheduler process status: IN_PROGRESS| E[Worker gets scheduler process + spec data from MongoDB]
    E --> F{Spec available?}
    F -->|Spec status: SUCCESS| G{Request exists in Redis?}
    F -->|Spec not available| H[Wait or retry until spec is generated]
    G -->|Yes| I[Skip to step 8]
    G -->|No| J[Merge headers from spec and scheduler process and execute binary of the main.go with scheduler + spec data; cache response in Redis]
    I --> K[Publish response in base64 to another RabbitMQ queue]
    J --> K
    K --> L[Worker consumes base64 response and updates scheduler process in MongoDB]
    L -->|Scheduler process status: SUCCESS| M[Process complete and available via API]
```

The headers configured in the Spec endpoints are used for all Scheduler Processes of that service. The headers configured in the Scheduler Process endpoints are used for that Scheduler Process only, and they replace any headers with the same name defined in the Spec.
Here is an example of how it will be merged:
Spec headers:

```json
{
  "header1": "val1",
  "header2": "val2"
}
```

Scheduler Process headers:

```json
{
  "header2": "val50",
  "header3": "val3"
}
```

Merged headers (this is what will be sent in the request):

```json
{
  "header1": "val1",
  "header2": "val50",
  "header3": "val3"
}
```

The worker service receives messages from the RabbitMQ consumer client and executes a function based on the type of the message.
Overview of the worker service:
- Consumes messages from RabbitMQ queue
- Runs 5 concurrent workers for parallel processing
- Updates task status in MongoDB at each stage
- Handles errors and retries when necessary
- Provides logging for monitoring and debugging
| Key | Type | Description |
|---|---|---|
| trace_id | string | Trace ID of the object of the action. |
| timestamp | string | Date and time of when the message was sent. (Format: 2006-01-02T15:04:05.000Z) |
| type | string | Used to describe what function worker will execute. (Values: SPEC, SPEC_UPDATE, SPEC_CLIENT_GENERATED, SCHEDULER_PROCESS and SCHEDULER_PROCESS_RESPONSE) |
| data | any | Used to send any data relevant to the message. This field is not required and should be used with caution to avoid sending large amounts of data to the queue. |
Example of a JSON queue message:

```json
{
  "trace_id": "{TRACE_ID}",
  "timestamp": "2025-06-01T11:04:14.250Z",
  "type": "SPEC"
}
```

| Message Type | Description |
|---|---|
| `SPEC` | Handles Specs created in the API. |
| `SPEC_UPDATE` | Handles Specs updated in the API. |
| `SPEC_CLIENT_GENERATED` | Handles Spec client code generation (with AI or manually) and other setup configuration. |
| `SCHEDULER_PROCESS` | Handles Scheduler Processes created in the API. |
| `SCHEDULER_PROCESS_RESPONSE` | Handles responses from Scheduler Process client requests. |
**SPEC**: Handles Specs created in the API.

- Validates the trace ID and finds the spec data in MongoDB
- Updates the spec status at each stage
- Generates the `client/<trace_id>/spec.yaml` file for the service's unique name
- Executes the `get_data_spec.py` script to get endpoint, header, and URL information
- Generates the raw client code files in `client/<trace_id>/go_client` for the generated spec
- Publishes to a RabbitMQ queue and sends the spec to be generated and reviewed by AI or manually
- Saves the script response in MongoDB and updates the status accordingly
**SPEC_UPDATE**: Handles Specs updated in the API.

- Validates the trace ID and finds the spec data in MongoDB
- Generates the `client/<trace_id>/service.metadata.yaml` file for the service
**SPEC_CLIENT_GENERATED**: Handles Spec client code generation in the worker.

- Validates the Trace ID and finds the spec data in MongoDB
- Removes the `go.mod`, `go.sum`, and `client_go/test/*` files from the generated client code
- Creates the `main.go` script file to execute all endpoints in the spec
- Executes `go mod init <client_name>` and `go mod tidy`
- Creates the binary executable file for the client at `client/<trace_id>/bin/client`
- Updates the spec status to `SUCCESS` to make it available for scheduler process calls
**SCHEDULER_PROCESS**: Handles Scheduler Processes created in the API.

- Validates the Trace ID and finds the scheduler process data in MongoDB
- Validates the scheduler process service and finds the spec by name
- Checks that the spec status is `SUCCESS` to make sure it is available and the request can be made to the client
- Executes the `client/<trace_id>/bin/client` binary, passing spec and scheduler process data
- Gets the response, parses it, and encodes it in base64
- Publishes the base64 response to RabbitMQ and updates the status accordingly
**SCHEDULER_PROCESS_RESPONSE**: Handles responses from Scheduler Process client requests.

- Validates that the trace ID was sent in the queue message
- Updates the scheduler process in MongoDB with the base64-encoded response
```text
2023/01/01 12:00:00 Starting worker service on port 8081
2023/01/01 12:00:01 Connected to RabbitMQ at amqp://guest:guest@rabbitmq:5672/
2023/01/01 12:00:01 Connected to MongoDB at mongodb://mongodb:27017
2023/01/01 12:00:01 Starting 5 concurrent workers
2023/01/01 12:00:10 Consumer received message with ID: 123e4567-e89b-12d3-a456-426614174000
2023/01/01 12:00:10 Processing task: {"key1":"value1","key2":"value2"}
2023/01/01 12:00:11 Task completed successfully, status updated in MongoDB
```
- **Ports already in use**: Change the ports in `.env` or `docker-compose.yaml`.
- **RabbitMQ not reachable**: Ensure the service is up and check logs with `docker-compose logs rabbitmq`.
- **MongoDB connection issues**: Confirm credentials and network settings.
- **API 401 Unauthorized**: Check the Basic Auth credentials in `.env`.
- **Worker not processing**: Check the logs for errors and ensure RabbitMQ is accessible.
Pull requests are welcome! For major changes, open an issue first to discuss what you would like to change.
- Fork the repo
- Create your feature branch (`git checkout -b feature/fooBar`)
- Commit your changes (`git commit -am 'Add some fooBar'`)
- Push to the branch (`git push origin feature/fooBar`)
- Create a new Pull Request
MIT License. See LICENSE for details.