This project contains multiple Kubernetes operators built with Kubebuilder. Each operator is organized
in its own directory with an op- prefix.
- Go (version 1.21 or later)
- Docker (for building container images)
- kind (Kubernetes in Docker) - already configured
- kubectl (for interacting with Kubernetes clusters)
- Kubebuilder (for operator development)
- Each operator directory follows the naming convention `op-<operator-name>`
- Operators are built using the Kubebuilder framework
- Local development and testing use kind clusters
Run `cp config.example.env config.env` and adjust the values in `config.env` as needed.
`./setup.sh`:

- creates the kind cluster, if necessary
- runs `./scripts/docker-login.sh` to authenticate with the Docker registry and stores the credentials as a secret in the cluster
- sets the current context to the cluster
- preloads images defined in `scripts/preload-images.sh`
It can be helpful to start from a clean slate by deleting the kind cluster with `./teardown.sh`. Run `./setup.sh` again to recreate it.
- Ensure all prerequisites are installed
- Navigate to the specific operator directory (e.g., `op-example`)
- Follow the operator-specific README for build and deployment instructions
```
op-bed/
├── op-<operator1>/   # First operator
├── op-<operator2>/   # Second operator
├── op-<operator3>/   # Third operator
└── ...               # Additional operators
```
Each operator directory contains its own Kubebuilder-generated structure with controllers, APIs, and configuration files.
This project includes a complete observability stack using Grafana LGTM (Loki, Grafana, Tempo, Mimir) and Alloy for telemetry collection.
- Grafana (port 3000): Unified observability UI with pre-configured datasources
- Loki (port 3100): Log aggregation system
- Tempo (ports 3200, 4317, 4318): Distributed tracing with OTLP support
- Mimir (port 9009): Prometheus-compatible metrics storage
- Pyroscope (port 4040): Continuous profiling platform
- Alloy (port 12345): Telemetry collector and forwarder
1. Start the kind cluster:

   ```shell
   ./setup.sh
   ```

2. Deploy the observability stack using Tilt:

   ```shell
   tilt up
   ```

3. Access the Grafana UI at http://localhost:3000 (anonymous access enabled).
- Applications exposing Prometheus metrics will be automatically discovered and scraped by Alloy
- Metrics are forwarded to Mimir and available in Grafana
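As a sketch of what Alloy's discovery expects, a Go service can expose a scrape-ready `/metrics` endpoint with only the standard library by emitting the Prometheus text exposition format. This is a minimal illustration; a real service would normally use a metrics library, and the metric name `myapp_requests_total` and port 8080 are made up for the example:

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

var requests atomic.Int64 // incremented once per handled request

// renderMetrics produces the Prometheus text exposition format
// (HELP/TYPE lines followed by samples) that scrapers consume.
func renderMetrics() string {
	return fmt.Sprintf(
		"# HELP myapp_requests_total Total requests handled.\n"+
			"# TYPE myapp_requests_total counter\n"+
			"myapp_requests_total %d\n", requests.Load())
}

func metricsHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain; version=0.0.4")
	fmt.Fprint(w, renderMetrics())
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requests.Add(1)
		fmt.Fprintln(w, "ok")
	})
	http.HandleFunc("/metrics", metricsHandler)
	http.ListenAndServe(":8080", nil)
}
```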
- Send traces to Alloy's OTLP endpoints:
  - gRPC: `localhost:14317`
  - HTTP: `localhost:14318`
- Traces are forwarded to Tempo and available in Grafana
- Send logs to Alloy's OTLP endpoints (same as tracing)
- Logs are forwarded to Loki and available in Grafana
- Applications must expose pprof endpoints (typically on `/debug/pprof/*`)
- Add these annotations to your pods to enable profiling:

  ```yaml
  metadata:
    annotations:
      pyroscope.io/scrape: "true"
      pyroscope.io/port: "6060"  # Port where pprof endpoints are exposed
  ```
- Profiles are forwarded to Pyroscope and available at http://localhost:4040
For a Go application with pprof enabled:

```go
import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Start the pprof server. Listen on all interfaces (not localhost)
	// so the in-cluster scraper can reach port 6060.
	go func() {
		log.Println(http.ListenAndServe(":6060", nil))
	}()

	// Your application code...
}
```

Then annotate your Kubernetes deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      annotations:
        pyroscope.io/scrape: "true"
        pyroscope.io/port: "6060"
    spec:
      containers:
      - name: my-app
        ports:
        - containerPort: 6060
          name: pprof
```
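Before debugging cluster networking, it can help to confirm the blank import actually registered the pprof handlers. The stdlib-only sketch below serves the default mux on an ephemeral test server and checks the pprof index page (the helper name `pprofIndexStatus` is ours, not part of any library):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	_ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
)

// pprofIndexStatus serves the default mux without binding a fixed port
// and reports the HTTP status of the pprof index page.
func pprofIndexStatus() (int, error) {
	srv := httptest.NewServer(http.DefaultServeMux)
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/debug/pprof/")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body) // drain the body before closing
	return resp.StatusCode, nil
}

func main() {
	status, err := pprofIndexStatus()
	if err != nil {
		panic(err)
	}
	fmt.Println("pprof index status:", status)
}
```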