Merged
Empty file.
Original file line number Diff line number Diff line change
@@ -46,3 +46,4 @@ spec:
reconcile: true
services:
- port: "8080"
reconcile: true
11 changes: 11 additions & 0 deletions examples/advanced/12-autoscale/01-based-on-own-metrics/load.sh
@@ -0,0 +1,11 @@
for i in $(seq 1 200); do
kubectl apply -f - <<EOF
apiVersion: autoscale.orkestra.io/v1alpha1
kind: Ingestor
metadata:
name: ingestor-$i
spec:
image: nginx
replicas: 1
EOF
done
37 changes: 22 additions & 15 deletions examples/advanced/13-dependencies/01-in-binary/README.md
@@ -1,10 +1,10 @@
# 12 — Dependencies · 01: In Binary
# 13 — Dependencies · 01: In Binary

`App` will not start reconciling until `Database` is healthy. No init containers, no polling loops, no Go code — just a single line in the Katalog.

**What you learn:** `dependsOn`, the three `dependsOn` YAML formats, the `healthy` vs `started` conditions, how the `cross:` block reads the dependency's status for injection, and how Orkestra enforces ordering at the controller level.
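
Conceptually, the controller-level gate can be pictured as a small predicate over the dependency's state. This is a hypothetical Go sketch for illustration only (the type and function names are invented, not Orkestra's real implementation); it shows why a missing or unhealthy Database means App's reconcile is skipped without error:

```go
package main

import "fmt"

// DepState mirrors the two dependsOn conditions this example uses.
type DepState struct {
	Started bool
	Healthy bool
}

// shouldReconcile sketches the gate: the dependent CR's reconcile is
// skipped (without error) until the named dependency exists and
// satisfies the required condition.
func shouldReconcile(deps map[string]DepState, name, condition string) bool {
	dep, ok := deps[name]
	if !ok {
		return false // dependency CR does not exist yet: skip, no error
	}
	switch condition {
	case "started":
		return dep.Started
	case "healthy":
		return dep.Healthy
	default:
		return false
	}
}

func main() {
	deps := map[string]DepState{}
	fmt.Println(shouldReconcile(deps, "database", "healthy")) // false: not created yet
	deps["database"] = DepState{Started: true}
	fmt.Println(shouldReconcile(deps, "database", "healthy")) // false: started but not healthy
	deps["database"] = DepState{Started: true, Healthy: true}
	fmt.Println(shouldReconcile(deps, "database", "healthy")) // true: App reconcile begins
}
```

The point of the sketch is that the gate is evaluated by the controller itself, so no init containers or polling loops are needed in the workload.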

**Builds on:** [13-01 — Cross Operator In Binary](../../13-cross-operator/01-in-binary/README.md)
<!-- **Builds on:** [13-01 — Cross Operator In Binary](../../13-cross-operator/01-in-binary/README.md) -->

---

@@ -68,13 +68,13 @@ kubectl apply -f crd.yaml

---

## Step 3 — Install Orkestra
## Step 3 — Run Orkestra and Control Center

```bash
helm repo add orkestra https://orkspace.github.io/orkestra
helm install orkestra orkestra/orkestra \
--namespace orkestra-system \
--wait --timeout 120s
ork run -f katalog.yaml

# In another terminal
ork control start
```

---
@@ -90,10 +90,13 @@ kubectl get app my-database

```
NAME IMAGE DB ENDPOINT PHASE AGE
my-database nginx:stable-alpine <none> Pending 5s
my-database nginx:stable-alpine <none> 5s
```

App is pending — Database does not exist yet. Orkestra skips its reconcile without error.
No phase is written for App yet because Database does not exist. Orkestra skips App's reconcile without error.

Check the control center at http://localhost:8081; you will see "Dependency Issue" for App.
Select App and scroll down to the `Dependencies` section to see why.

---

@@ -119,6 +122,8 @@ app.deps.orkestra.io/my-database nginx:stable-alpine my-database.default.sv

Once Database reaches `Running`, Orkestra starts App's reconcile automatically. App picks up the endpoint from the cross block and injects it into its Deployment.

Check the control center to see App become healthy and its phase reach `Running`.
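
The injection itself is just template rendering against the dependency's status. A minimal stand-alone sketch using Go's `text/template` (hypothetical; Orkestra's actual templating engine is not shown in this diff, and `renderEndpoint` is an invented helper) mirrors the katalog expression `{{ .cross.database.status.endpoint }}`:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderEndpoint resolves a cross-block expression against a nested
// map shaped like the dependency's status (illustrative only).
func renderEndpoint(data map[string]any) (string, error) {
	tmpl, err := template.New("env").Parse("{{ .cross.database.status.endpoint }}")
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	data := map[string]any{
		"cross": map[string]any{
			"database": map[string]any{
				"status": map[string]any{"endpoint": "my-database.default.svc:5432"},
			},
		},
	}
	v, _ := renderEndpoint(data)
	fmt.Println("DB_HOST=" + v) // DB_HOST=my-database.default.svc:5432
}
```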

---

## Step 6 — Verify the injected env
@@ -129,28 +134,30 @@ kubectl get deployment my-database-deployment -o jsonpath='{.spec.template.spec.

```json
[
{ "name": "DB_HOST", "value": "my-database.default.svc:5432" }
{
"name": "DB_HOST",
"value": "my-database.default.svc:5432"
}
]
```

---

## Step 7 — Simulate a dependency restart

Delete Database and watch App's behaviour:
Delete the Database CRD and watch App's behaviour:

```bash
kubectl delete database my-database
kubectl get app my-database
kubectl delete crd databases.deps.orkestra.io
```
Check the control center.

Orkestra detects that the dependency is gone and puts App back into `Pending`. Re-apply Database and App resumes within one resync cycle.
Orkestra detects that the dependency is gone and puts App back into `Dependency Issue`. Re-apply Database (`crd.yaml`) and App resumes within one resync cycle.

---

## Cleanup

```bash
chmod +x cleanup.sh && ./cleanup.sh
helm uninstall orkestra -n orkestra-system
```
Empty file modified examples/advanced/13-dependencies/01-in-binary/cleanup.sh
100644 → 100755
Empty file.
10 changes: 10 additions & 0 deletions examples/advanced/13-dependencies/01-in-binary/katalog.yaml
@@ -85,8 +85,17 @@ spec:

status:
fields:
# ── Initial state ───────────────────────────────────────────
- path: phase
value: "Pending"
when:
- field: cross.database.status.endpoint
equals: ""
- path: phase
value: "Running"
when:
- field: cross.database.status.phase
equals: "Running"
- path: dbEndpoint
value: "{{ .cross.database.status.endpoint }}"

@@ -102,3 +111,4 @@ spec:
services:
- port: "80"
targetPort: "8080"
reconcile: true
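
The two conditional `phase` entries in this katalog resolve against the dependency's status. A hypothetical Go sketch of that evaluation (the types are invented for illustration; it assumes the conditions in a `when` list are ANDed and the first matching entry for a path wins, which is not confirmed by this diff):

```go
package main

import "fmt"

// Condition and StatusField mirror the katalog.yaml shape above
// (illustrative Go types, not Orkestra's real ones).
type Condition struct{ Field, Equals string }
type StatusField struct {
	Path, Value string
	When        []Condition
}

// resolveStatus sets each path from the first field whose `when`
// conditions all match against the flattened status/cross values.
func resolveStatus(fields []StatusField, values map[string]string) map[string]string {
	out := map[string]string{}
	for _, f := range fields {
		if _, done := out[f.Path]; done {
			continue // first match for this path already won
		}
		matched := true
		for _, c := range f.When {
			if values[c.Field] != c.Equals {
				matched = false
				break
			}
		}
		if matched {
			out[f.Path] = f.Value
		}
	}
	return out
}

func main() {
	fields := []StatusField{
		{Path: "phase", Value: "Pending", When: []Condition{{Field: "cross.database.status.endpoint", Equals: ""}}},
		{Path: "phase", Value: "Running", When: []Condition{{Field: "cross.database.status.phase", Equals: "Running"}}},
	}
	// Before Database exists, the endpoint is empty.
	fmt.Println(resolveStatus(fields, map[string]string{"cross.database.status.endpoint": ""})["phase"]) // Pending
	// Once Database reports Running.
	fmt.Println(resolveStatus(fields, map[string]string{
		"cross.database.status.endpoint": "my-database.default.svc:5432",
		"cross.database.status.phase":    "Running",
	})["phase"]) // Running
}
```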
@@ -1,4 +1,4 @@
# 12 — Dependencies · 02: Cross Binary
# 13 — Dependencies · 02: Cross Binary

Same `dependsOn: database: healthy` ordering as `01-in-binary`, but Database runs in a **separate Orkestra deployment** in a hardened namespace. App's Orkestra resolves the dependency condition through the Database Orkestra's HTTP health API.

@@ -1,4 +1,4 @@
# 12 — Dependencies · 03: Cross Cluster
# 13 — Dependencies · 03: Cross Cluster

Database lives in **cluster-a** (infrastructure cluster). App lives in **cluster-b** (application cluster) and will not start until Database in cluster-a is healthy — across a real network boundary.

1 change: 1 addition & 0 deletions examples/beginner/01-hello-website/katalog.yaml
@@ -26,6 +26,7 @@ spec:
onCreate:
deployments:
- image: "{{ .spec.image }}"
sleep: 5s
# name defaults to: <cr-name>-deployment
# namespace defaults to: <cr-namespace>
# replicas defaults to: 1
2 changes: 1 addition & 1 deletion pkg/certmanager/generate.go
@@ -35,7 +35,7 @@ type TLSBundle struct {
func GenerateTLSBundle(commonName string, dnsNames []string, validFor string) (*TLSBundle, error) {
validity := 365 * 24 * time.Hour // default: 1 year
if validFor != "" {
if d, err := orktypes.ParseRotationDuration(validFor); err == nil {
if d, err := orktypes.ParseTimeDuration(validFor); err == nil {
validity = d
}
}
18 changes: 13 additions & 5 deletions pkg/katalog/validate.go
@@ -448,7 +448,7 @@ func (k *Katalog) validateNamespaceProtection() error {
// across all enabled CRDs. It is fail-fast: the first invalid duration
// returns an error immediately.
//
// Supported units (extended by ParseRotationDuration):
// Supported units (extended by ParseTimeDuration):
//
// d = days (24h)
// w = weeks (7d)
@@ -460,16 +460,24 @@ func (k *Katalog) validateTimeDuration() error {
continue
}

// Validate all declared sleep durations (discovered by pkg/types)
for _, e := range crd.CollectSleepEntries() {
if _, err := orktypes.ParseTimeDuration(e.Duration); err != nil {
return durationError(name, e.ResourceName, "sleep", e.Duration, err)
}
}

// Validate secret durations (rotateAfter, TLS.validFor)
if crd.HasOnCreate() {
for _, s := range crd.OperatorBox.OnCreate.Secrets {
if s.RotateAfter != "" {
if _, err := orktypes.ParseRotationDuration(s.RotateAfter); err != nil {
if _, err := orktypes.ParseTimeDuration(s.RotateAfter); err != nil {
return durationError(name, s.Name, "rotateAfter", s.RotateAfter, err)
}
}
// Check per-secret TLS presence
if s.TLS != nil && s.TLS.ValidFor != "" {
if _, err := orktypes.ParseRotationDuration(s.TLS.ValidFor); err != nil {
if _, err := orktypes.ParseTimeDuration(s.TLS.ValidFor); err != nil {
return durationError(name, s.Name, "validFor", s.TLS.ValidFor, err)
}
}
@@ -479,13 +487,13 @@ func (k *Katalog) validateTimeDuration() error {
if crd.HasOnReconcile() {
for _, s := range crd.OperatorBox.OnReconcile.Secrets {
if s.RotateAfter != "" {
if _, err := orktypes.ParseRotationDuration(s.RotateAfter); err != nil {
if _, err := orktypes.ParseTimeDuration(s.RotateAfter); err != nil {
return durationError(name, s.Name, "rotateAfter", s.RotateAfter, err)
}
}
// Check per-secret TLS presence
if s.TLS != nil && s.TLS.ValidFor != "" {
if _, err := orktypes.ParseRotationDuration(s.TLS.ValidFor); err != nil {
if _, err := orktypes.ParseTimeDuration(s.TLS.ValidFor); err != nil {
return durationError(name, s.Name, "validFor", s.TLS.ValidFor, err)
}
}
27 changes: 26 additions & 1 deletion pkg/orkestra-registry/common/parse.go
@@ -1,6 +1,11 @@
package common

import "fmt"
import (
"fmt"
"time"

orktypes "github.com/orkspace/orkestra/pkg/types"
)

// ParseBool interprets common boolean representations from template expressions.
func ParseBool(s string) bool {
@@ -18,3 +23,23 @@ func ParsePort(s string) int {
fmt.Sscanf(s, "%d", &p)
return p
}

// SleepIfNeeded parses an extended duration string and sleeps if non-zero.
// Used by all operatorBox resources to inject artificial latency for
// autoscaling tests, chaos engineering, and latency simulation.
func SleepIfNeeded(s string) error {
if s == "" {
return nil
}

d, err := orktypes.ParseTimeDuration(s)
if err != nil {
return err
}

if d > 0 {
time.Sleep(d)
}

return nil
}
14 changes: 14 additions & 0 deletions pkg/orkestra-registry/configmaps/configmap.go
@@ -40,6 +40,11 @@ type ResolvedConfigMapSpec struct {

// Labels — applied to ConfigMap metadata.
Labels map[string]string

// Sleep injects an artificial delay into the reconcile of this resource.
// Useful for autoscale testing, latency simulation, and chaos engineering.
// Accepts extended duration units (s, m, h, d, w, mo, y).
Sleep string
}

// Create creates a ConfigMap if it does not already exist.
@@ -51,6 +56,9 @@ func Create(ctx context.Context, kube *kubeclient.Kubeclient, owner domain.Objec
}

namespace := common.ResolveNamespace(owner, spec.Namespace)
if err := common.SleepIfNeeded(spec.Sleep); err != nil {
return err
}

_, err := kube.Clientset().CoreV1().ConfigMaps(namespace).Get(ctx, spec.Name, metav1.GetOptions{})
if err != nil && !errors.IsNotFound(err) {
@@ -94,6 +102,9 @@ func Update(ctx context.Context, kube *kubeclient.Kubeclient, owner domain.Objec
}

namespace := common.ResolveNamespace(owner, spec.Namespace)
if err := common.SleepIfNeeded(spec.Sleep); err != nil {
return err
}

existing, err := kube.Clientset().CoreV1().ConfigMaps(namespace).Get(ctx, spec.Name, metav1.GetOptions{})
if err != nil {
@@ -139,6 +150,9 @@ func Update(ctx context.Context, kube *kubeclient.Kubeclient, owner domain.Objec
// Delete deletes the ConfigMap if it exists.
func Delete(ctx context.Context, kube *kubeclient.Kubeclient, owner domain.Object, spec ResolvedConfigMapSpec) error {
namespace := common.ResolveNamespace(owner, spec.Namespace)
if err := common.SleepIfNeeded(spec.Sleep); err != nil {
return err
}

err := kube.Clientset().CoreV1().ConfigMaps(namespace).Delete(ctx, spec.Name, metav1.DeleteOptions{})
if err != nil {
23 changes: 14 additions & 9 deletions pkg/orkestra-registry/configmaps/types.go
@@ -47,36 +47,41 @@ import orktypes "github.com/orkspace/orkestra/pkg/types"
// - production
type ConfigMapTemplateSource struct {
// Version — OrkestraRegistry implementation version. Omit for latest.
Version string `yaml:"version" validate:"omitempty"`
Version string

// Name — ConfigMap name.
// Default: "{{ .metadata.name }}-config"
Name string `yaml:"name" validate:"omitempty"`
Name string

// Namespace — primary target namespace.
// Default: "{{ .metadata.namespace }}"
Namespace string `yaml:"namespace" validate:"omitempty"`
Namespace string

// ToNamespaces — create one copy in each listed namespace.
// Each element supports template expressions.
ToNamespaces []string `yaml:"toNamespaces" validate:"omitempty"`
ToNamespaces []string

// FromConfigMap — name of an existing ConfigMap to copy data from.
// Orkestra reads this at reconcile time — copies stay in sync with the source.
FromConfigMap string `yaml:"fromConfigMap" validate:"omitempty"`
FromConfigMap string

// FromNamespace — namespace where FromConfigMap lives.
// Default: same namespace as the CR.
FromNamespace string `yaml:"fromNamespace" validate:"omitempty"`
FromNamespace string

// Data — static key-value entries.
// When FromConfigMap is also set, these entries override matching keys from the source.
Data map[string]string `yaml:"data" validate:"omitempty"`
Data map[string]string

// Labels — applied to all created ConfigMap copies.
Labels []orktypes.ResourceLabel `yaml:"labels" validate:"omitempty"`
Labels []orktypes.ResourceLabel

// Reconcile: true — sync on every reconcile.
// When true, if the source ConfigMap changes, all copies are updated automatically.
Reconcile bool `yaml:"reconcile" validate:"omitempty"`
Reconcile bool

// Sleep injects an artificial delay into the reconcile of this resource.
// Useful for autoscale testing, latency simulation, and chaos engineering.
// Accepts extended duration units (s, m, h, d, w, mo, y).
Sleep string
}
14 changes: 14 additions & 0 deletions pkg/orkestra-registry/cronjobs/cronjob.go
@@ -69,6 +69,11 @@ type ResolvedCronJobSpec struct {
// for pulling any of the images used by this PodSpec.
// If specified, these secrets will be passed to individual puller implementations for them to use.
ImagePullSecrets []string

// Sleep injects an artificial delay into the reconcile of this resource.
// Useful for autoscale testing, latency simulation, and chaos engineering.
// Accepts extended duration units (s, m, h, d, w, mo, y).
Sleep string
}

// Create creates a CronJob if it does not already exist.
@@ -80,6 +85,9 @@ func Create(ctx context.Context, kube *kubeclient.Kubeclient, owner domain.Objec
}

namespace := common.ResolveNamespace(owner, spec.Namespace)
if err := common.SleepIfNeeded(spec.Sleep); err != nil {
return err
}

_, err := kube.Clientset().BatchV1().CronJobs(namespace).Get(ctx, spec.Name, metav1.GetOptions{})
if err != nil && !errors.IsNotFound(err) {
@@ -118,6 +126,9 @@ func Update(ctx context.Context, kube *kubeclient.Kubeclient, owner domain.Objec
}

namespace := common.ResolveNamespace(owner, spec.Namespace)
if err := common.SleepIfNeeded(spec.Sleep); err != nil {
return err
}

existing, err := kube.Clientset().BatchV1().CronJobs(namespace).Get(ctx, spec.Name, metav1.GetOptions{})
if err != nil {
@@ -220,6 +231,9 @@ func Update(ctx context.Context, kube *kubeclient.Kubeclient, owner domain.Objec
// Delete deletes the CronJob if it exists.
func Delete(ctx context.Context, kube *kubeclient.Kubeclient, owner domain.Object, spec ResolvedCronJobSpec) error {
namespace := common.ResolveNamespace(owner, spec.Namespace)
if err := common.SleepIfNeeded(spec.Sleep); err != nil {
return err
}

err := kube.Clientset().BatchV1().CronJobs(namespace).Delete(ctx, spec.Name, metav1.DeleteOptions{})
if err != nil {