This document defines key concepts and terminology used throughout the Temporal Worker Controller documentation.
A logical grouping in Temporal that represents a collection of workers that are deployed together and should be versioned together. Examples include "payment-processor", "notification-sender", or "data-pipeline-worker". This is a concept within Temporal itself, not specific to Kubernetes. See https://docs.temporal.io/production-deployment/worker-deployments/worker-versioning for more details.
Key characteristics:
- Identified by a unique worker deployment name (e.g., "staging/payment-processor")
- Can have multiple concurrent worker versions running simultaneously
- Versions of a Worker Deployment are identified by Build IDs (e.g., "v1.5.1", "v1.5.2")
- Temporal routes workflow executions to the appropriate worker versions based on the `RoutingConfig` of the Worker Deployment that the versions are in
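As a sketch, the routing state of a Worker Deployment can be pictured like this; the field names are paraphrased from the worker-versioning docs and may not match the API schema exactly:

```yaml
# Paraphrased shape of a Worker Deployment's RoutingConfig (illustrative,
# not the exact API schema; version identifiers combine deployment name
# and Build ID).
routingConfig:
  currentVersion: payment-processor.v1.5.1   # receives all non-ramped new executions
  rampingVersion: payment-processor.v1.5.2   # receives the ramped share of new executions
  rampingVersionPercentage: 25
```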
The Kubernetes Custom Resource Definition that manages one Temporal Worker Deployment. This is the primary resource you interact with when using the Temporal Worker Controller.
Key characteristics:
- One `TemporalWorkerDeployment` Custom Resource per Temporal Worker Deployment
- Manages the lifecycle of all versions for that worker deployment
- Defines rollout strategies, resource requirements, and connection details
- Controller creates and manages multiple Kubernetes `Deployment` resources based on this spec
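A minimal manifest might look like the following. The `apiVersion` and exact field layout are assumptions pieced together from the field descriptions in this document, so verify against the CRD installed in your cluster:

```yaml
# Sketch of a TemporalWorkerDeployment manifest. apiVersion and field
# placement are illustrative assumptions; check your installed CRD.
apiVersion: temporal.io/v1alpha1
kind: TemporalWorkerDeployment
metadata:
  name: payment-processor
  namespace: staging
spec:
  replicas: 3
  workerOptions:
    connectionRef:
      name: prod-cluster          # references a TemporalConnection resource
    temporalNamespace: payments
  rollout:
    strategy: Progressive
  template:                       # standard pod template for the worker pods
    spec:
      containers:
        - name: worker
          image: example.com/payment-processor:v1.5.2
```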
The actual Kubernetes Deployment resources that run worker pods. The controller automatically creates these - you don't manage them directly.
Key characteristics:
- Multiple Kubernetes `Deployment` resources per `TemporalWorkerDeployment` Custom Resource (one per version)
- Named with the pattern: `{worker-deployment-name}-{build-id}` (e.g., `staging/payment-processor-v1.5.1`)
- Each runs a specific version of your worker code
One TemporalWorkerDeployment Custom Resource → Multiple Kubernetes Deployment resources (managed by controller)
Make changes to the spec of your TemporalWorkerDeployment Custom Resource, and the controller handles all the underlying Kubernetes Deployment resources for different versions.
Worker deployment versions progress through various states during their lifecycle:
The version has been specified in the TemporalWorkerDeployment custom resource but hasn't been registered with Temporal yet. This typically happens when:
- The worker pods are still starting up
- There are connectivity issues to Temporal
- The worker code has errors preventing registration
The version is registered with Temporal but isn't automatically receiving any new workflow executions through the Worker Deployment's RoutingConfig. This is the initial state for new versions before they are promoted via Versioning API calls. Inactive versions can receive workflow executions via VersioningOverride only.
The version is receiving a percentage of new workflow executions. If managed by a Progressive rollout, the percentage gradually increases according to the configured rollout steps. If the rollout is Manual, the user is responsible for setting the ramp percentage and ramping version.
The current version receives all new workflow executions except those routed to the Ramping version. This is the "stable" version that handles the majority of traffic - all new workflows not being ramped to a newer version, plus all existing AutoUpgrade workflows running on the task queues in this Worker Deployment.
The version is no longer receiving new workflow executions but may still be processing existing workflows.
All Pinned workflows on this version have completed. The version is ready for cleanup according to the sunset configuration.
Requires explicit human intervention to promote versions. New versions remain in the Inactive state until manually promoted.
Use cases:
- Advanced deployment scenarios that are not supported by the other strategies (e.g., the user wants to do custom testing and validation before changing how workflow traffic is routed)
Immediately routes 100% of new workflow executions to the target version once it's healthy and registered.
Use cases:
- Non-production environments
- Low-risk deployments
- When you want immediate cutover without gradual rollout
Gradually increases the percentage of new workflow executions routed to the new version according to configured steps.
Use cases:
- Production deployments where you want to validate new versions gradually
- When you want automated rollouts with built-in safety checks
- Deployments that benefit from canary analysis
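A Progressive rollout configuration might look like the sketch below; the step field names (`rampPercentage`, `pauseDuration`) are assumptions based on this document's description of ramp percentages and pause durations:

```yaml
# Illustrative Progressive rollout; step field names are assumptions.
rollout:
  strategy: Progressive
  steps:
    - rampPercentage: 5      # start by routing 5% of new executions
      pauseDuration: 10m
    - rampPercentage: 25
      pauseDuration: 30m
    - rampPercentage: 50     # after the last step, promote to Current
      pauseDuration: 1h
```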
Configuration that tells the controller how to reach the same Temporal cluster and namespace that the worker connects to:
- connectionRef: A reference to a `TemporalConnection` custom resource. This object contains a `name` field to specify the `TemporalConnection` resource.
- temporalNamespace: The Temporal namespace to connect to
- deploymentName: The logical deployment name in Temporal (auto-generated if not specified)
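For illustration, a `TemporalConnection` resource paired with the `workerOptions` that reference it might look like this; the `TemporalConnection` spec fields shown are assumptions about the CRD's shape:

```yaml
# Hypothetical TemporalConnection referenced by connectionRef; spec fields
# are assumptions, check the CRD reference for the actual schema.
apiVersion: temporal.io/v1alpha1
kind: TemporalConnection
metadata:
  name: prod-cluster
spec:
  hostPort: payments.a1b2c.tmprl.cloud:7233
---
# Inside a TemporalWorkerDeployment spec:
# workerOptions:
#   connectionRef:
#     name: prod-cluster
#   temporalNamespace: payments
```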
Defines how new versions are promoted:
- strategy: Manual, AllAtOnce, or Progressive
- steps: For Progressive strategy, defines ramp percentages and pause durations
- gate: Optional workflow that must succeed on all task queues in the target Worker Deployment Version before promotion continues. Gate can receive an input payload:
  - workflowType: The workflow name/type to execute for validation
  - input: Inline JSON object passed as the first workflow argument
  - inputFrom: Reference to a `ConfigMap` or `Secret` key whose contents are JSON; passed as the first workflow argument
Notes on gate inputs:
- Exactly one of `input` or `inputFrom` may be set.
- The value must be a JSON object (not a string containing JSON).
- Large/sensitive payloads should use `inputFrom.secretKeyRef` or be split into smaller documents.
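Putting the gate fields together, a configuration might look like the following sketch; the workflow type and Secret names are hypothetical:

```yaml
# Illustrative gate configuration. Exactly one of input/inputFrom may be set;
# ValidateDeployment and gate-input are hypothetical names.
rollout:
  strategy: Progressive
  gate:
    workflowType: ValidateDeployment
    input:                 # inline JSON object, passed as the first argument
      smokeTest: true
# Alternatively, load the JSON payload from a Secret key:
#   inputFrom:
#     secretKeyRef:
#       name: gate-input
#       key: payload.json
```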
Defines how Drained versions are cleaned up:
- scaledownDelay: How long to wait after a version has been Drained before scaling pods to zero
- deleteDelay: How long to wait after a version has been Drained before deleting the Kubernetes `Deployment` resource
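As a sketch, the two delays can be configured with Kubernetes-style duration strings; the field names follow the descriptions above:

```yaml
# Illustrative sunset configuration using duration strings.
sunset:
  scaledownDelay: 1h    # Drained -> scale pods to zero after 1 hour
  deleteDelay: 24h      # Drained -> delete the Deployment after 24 hours
```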
The pod template used for the target version of this worker deployment. Similar to the pod template used in a standard Kubernetes Deployment, but managed by the controller.
The controller automatically sets these environment variables for all worker pods:
The host and port of the Temporal server, derived from the TemporalConnection custom resource.
The worker must connect to this Temporal endpoint. Because the endpoint is user-provided rather than controller-generated, a worker that already knows the endpoint by other means does not need to read this environment variable.
The Temporal namespace the worker should connect to, from spec.workerOptions.temporalNamespace.
The worker must connect to this Temporal namespace. Because the namespace is user-provided rather than controller-generated, a worker that already knows the namespace by other means does not need to read this environment variable.
The worker deployment name in Temporal, auto-generated from the TemporalWorkerDeployment name and Kubernetes namespace.
The worker must use this to configure its worker.DeploymentOptions.
The build ID for this specific version, derived from the container image tag and hash of the target pod template.
The worker must use this to configure its worker.DeploymentOptions.
The pattern of running multiple versions of the same service simultaneously. Running multiple versions of your workers simultaneously is essential for supporting Pinned workflows in Temporal, as Pinned workflows must continue executing on the worker version they started on.
The automated process of:
- Registering new versions with Temporal
- Gradually routing traffic to new versions
- Cleaning up resources for drained versions
Resources that are created, updated, and deleted automatically by the controller:
- `TemporalWorkerDeployment` custom resources, to update their status
- Kubernetes `Deployment` resources for each version
- Labels and annotations for tracking and management