---
description: SUSE Observability
---

# Getting Started with the OpenTelemetry Operator on Kubernetes

Here is the setup we'll be creating, for an application that needs to be monitored:

* The monitored application/workload running in cluster A, auto-instrumented by the operator
* The OpenTelemetry operator in cluster A
* A collector created by the operator
* SUSE Observability running in cluster B, or SUSE Cloud Observability

## Install the operator

The OpenTelemetry operator offers some extra features over a plain Kubernetes collector setup:

* It can auto-instrument your application pods for supported languages (Java, .NET, Python, Go, Node.js), without any changes to the applications or their container images
* It can be dropped in as a replacement for the Prometheus operator and scrape Prometheus exporter endpoints based on service and pod monitors

### Create the namespace and a secret for the API key

We'll install in the `open-telemetry` namespace and use the receiver API key generated during installation (see [API keys](/use/security/k8s-ingestion-api-keys.md#api-keys) for where to find it):

```bash
kubectl create namespace open-telemetry
kubectl create secret generic open-telemetry-collector \
  --namespace open-telemetry \
  --from-literal=API_KEY='<suse-observability-api-key>'
```

### Configure and install the operator

The operator is installed with a Helm chart, so first add the chart repository:

```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
```

Create an `otel-operator.yaml` file to configure the operator:

{% code title="otel-operator.yaml" lineNumbers="true" %}
```yaml
# Add image pull secrets for private registries
imagePullSecrets: []
manager:
  image:
    # Uses the chart appVersion as the tag
    repository: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator
  collectorImage:
    # Find the latest collector releases at https://github.com/open-telemetry/opentelemetry-collector-releases/releases
    repository: otel/opentelemetry-collector-k8s
    tag: 0.123.0
  targetAllocatorImage:
    repository: ""
    tag: ""
  # Only needed when overriding the image repository; make sure to always specify both the repository and the tag:
  autoInstrumentationImage:
    java:
      repository: ""
      tag: ""
    nodejs:
      repository: ""
      tag: ""
    python:
      repository: ""
      tag: ""
    dotnet:
      repository: ""
      tag: ""
    # Go instrumentation support in the operator is disabled by default.
    # To enable it, use the operator.autoinstrumentation.go feature gate.
    go:
      repository: ""
      tag: ""

admissionWebhooks:
  # A production setup should use cert-manager to generate the certificate.
  # Without cert-manager, the certificate is generated during the Helm install.
  certManager:
    enabled: false
  # The operator has validating and mutating webhooks that need a certificate; this generates one automatically.
  autoGenerateCert:
    enabled: true
```
{% endcode %}

Now install the operator using the configuration file:

```bash
helm upgrade --install opentelemetry-operator open-telemetry/opentelemetry-operator \
  --namespace open-telemetry \
  --values otel-operator.yaml
```

This only installs the operator; the next sections install the collector and enable auto-instrumentation.

## The OpenTelemetry collector

The operator manages one or more collector deployments via a Kubernetes custom resource of kind `OpenTelemetryCollector`. We'll create one using the same configuration as in the [Kubernetes getting started guide](./getting-started-k8s.md).

It uses the secret created earlier in this guide. Make sure to replace `<otlp-suse-observability-endpoint:port>` with your OTLP endpoint (see [OTLP APIs](../otlp-apis.md) to find your endpoint) and insert the name of your Kubernetes cluster instead of `<your-cluster-name>`:

{% code title="collector.yaml" lineNumbers="true" %}
```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-collector
spec:
  mode: deployment
  envFrom:
    - secretRef:
        name: open-telemetry-collector
  # Optional service account for pulling the collector image from a private registry
  # serviceAccount: otel-collector
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
      # Scrape the collector's own metrics
      prometheus:
        config:
          scrape_configs:
            - job_name: opentelemetry-collector
              scrape_interval: 10s
              static_configs:
                - targets:
                    - ${env:MY_POD_IP}:8888
    extensions:
      health_check:
        endpoint: ${env:MY_POD_IP}:13133
      # Use the API key from the environment for authentication
      bearertokenauth:
        scheme: SUSEObservability
        token: "${env:API_KEY}"
    exporters:
      debug: {}
      nop: {}
      otlp/suse-observability:
        auth:
          authenticator: bearertokenauth
        # Insert your own OTLP endpoint, for example suse-observability.my.company.com:443
        endpoint: <otlp-suse-observability-endpoint:port>
        compression: snappy
    processors:
      memory_limiter:
        check_interval: 5s
        limit_percentage: 80
        spike_limit_percentage: 25
      batch: {}
      resource:
        attributes:
          - key: k8s.cluster.name
            action: upsert
            # Insert your own cluster name
            value: <your-cluster-name>
          - key: service.instance.id
            from_attribute: k8s.pod.uid
            action: insert
          # Also use the Kubernetes namespace as the OpenTelemetry namespace
          - key: service.namespace
            from_attribute: k8s.namespace.name
            action: insert
    connectors:
      # Generate metrics for spans
      spanmetrics:
        metrics_expiration: 5m
        namespace: otel_span
    service:
      extensions: [health_check, bearertokenauth]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, resource, batch]
          exporters: [debug, spanmetrics, otlp/suse-observability]
        metrics:
          receivers: [otlp, spanmetrics, prometheus]
          processors: [memory_limiter, resource, batch]
          exporters: [debug, otlp/suse-observability]
        logs:
          receivers: [otlp]
          processors: []
          exporters: [nop]
      telemetry:
        metrics:
          address: ${env:MY_POD_IP}:8888
```
{% endcode %}

{% hint style="warning" %}
**Use the same cluster name as used for installing the SUSE Observability agent** if you also use the SUSE Observability agent with the Kubernetes StackPack. Using a different cluster name will result in an empty traces perspective for Kubernetes components, and will make correlating information much harder for SUSE Observability and your users.
{% endhint %}

Now apply this `collector.yaml` in the `open-telemetry` namespace to deploy a collector:

```bash
kubectl apply --namespace open-telemetry -f collector.yaml
```

The collector offers many more receivers, processors, and exporters; for more details see our [collector page](../collector.md). Production usage often generates large amounts of spans, so you will want to set up [sampling](../sampling.md).
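
As a first step towards sampling, a `probabilistic_sampler` processor can be added to the traces pipeline of the collector configuration above. A minimal sketch (the 25% rate is an arbitrary example; see the sampling page for tail-based alternatives):

```yaml
    processors:
      probabilistic_sampler:
        # Keep roughly 25% of traces; tune this for your span volume
        sampling_percentage: 25
    service:
      pipelines:
        traces:
          receivers: [otlp]
          # Sample early, before the heavier processors run
          processors: [memory_limiter, probabilistic_sampler, resource, batch]
          exporters: [debug, spanmetrics, otlp/suse-observability]
```

Note that sampling also reduces the span metrics generated by the `spanmetrics` connector, since it only sees the sampled spans.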
| 206 | + |
| 207 | +## Auto-instrumentation |
| 208 | + |
| 209 | +### Configure auto-instrumentation |
| 210 | + |
| 211 | +Now we need to tell the operator how to configure the auto instrumentation for the different languages using another custom resource, of kind `Instrumentation`. It is mainly used to configure the collector that was just deployed as the telemetry endpoint for the instrumented applications. |
| 212 | + |
| 213 | +It can be defined in a single place and used by all pods in the cluster, but it is also possible to have a different `Instrumentation` in each namespace. We'll be doing the former here. Note that if you used a different namespace or a different name for the otel collector the endpoint in this file needs to be updated accordingly. |
| 214 | + |
| 215 | +Create an `instrumentation.yaml`: |
| 216 | + |
{% code title="instrumentation.yaml" lineNumbers="true" %}
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: otel-instrumentation
spec:
  exporter:
    # Default endpoint for the instrumentation
    endpoint: http://otel-collector-collector.open-telemetry.svc.cluster.local:4317
  propagators:
    - tracecontext
    - baggage
  defaults:
    # Use the standard app.kubernetes.io/ labels for the service name, version and namespace
    useLabelsForResourceAttributes: true
  python:
    env:
      # Python auto-instrumentation uses http/protobuf by default, so data must be sent to port 4318 instead of 4317
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://otel-collector-collector.open-telemetry.svc.cluster.local:4318
  dotnet:
    env:
      # .NET auto-instrumentation uses http/protobuf by default, so data must be sent to port 4318 instead of 4317
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://otel-collector-collector.open-telemetry.svc.cluster.local:4318
  go:
    env:
      # Go auto-instrumentation uses http/protobuf by default, so data must be sent to port 4318 instead of 4317
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://otel-collector-collector.open-telemetry.svc.cluster.local:4318
```
{% endcode %}

Now apply the `instrumentation.yaml`, also in the `open-telemetry` namespace:

```bash
kubectl apply --namespace open-telemetry -f instrumentation.yaml
```

### Enable auto-instrumentation for a pod

To instruct the operator to auto-instrument your application pods, add one of these annotations to the pod:

* Java: `instrumentation.opentelemetry.io/inject-java: open-telemetry/otel-instrumentation`
* Node.js: `instrumentation.opentelemetry.io/inject-nodejs: open-telemetry/otel-instrumentation`
* Python: `instrumentation.opentelemetry.io/inject-python: open-telemetry/otel-instrumentation`
* .NET: `instrumentation.opentelemetry.io/inject-dotnet: open-telemetry/otel-instrumentation`
* Go: `instrumentation.opentelemetry.io/inject-go: open-telemetry/otel-instrumentation`

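The annotation goes on the pod template, not on the owning workload resource. For example, for a hypothetical Java deployment named `my-app` (the name and image below are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Ask the operator to inject the Java auto-instrumentation, using the
        # Instrumentation resource from the open-telemetry namespace
        instrumentation.opentelemetry.io/inject-java: open-telemetry/otel-instrumentation
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0.0
```

Existing pods are not changed; the instrumentation is injected when a pod is (re)created, so restart the workload after adding the annotation.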
Note that the value of the annotation refers to the namespace and name of the `Instrumentation` resource we created. The other possible values are:

* `"true"` - inject the `Instrumentation` custom resource from the pod's own namespace
* `"my-instrumentation"` - the name of an `Instrumentation` custom resource in the current namespace
* `"my-other-namespace/my-instrumentation"` - the namespace and name of an `Instrumentation` custom resource in another namespace
* `"false"` - do not inject

When a pod with one of these annotations is created, the operator modifies the pod via a mutating webhook:

* It adds an init container that provides the auto-instrumentation library
* It modifies the first container of the pod to load the instrumentation during startup, and adds environment variables to configure the instrumentation

If you need to customize which containers should be instrumented, see the [operator documentation](https://github.com/open-telemetry/opentelemetry-operator?tab=readme-ov-file#multi-container-pods-with-multiple-instrumentations).
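For instance, the operator supports a `container-names` annotation to instrument only specific containers of a multi-container pod instead of the first one. A sketch (`app` and `worker` are hypothetical container names; check the operator documentation linked above for your operator version):

```yaml
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-java: open-telemetry/otel-instrumentation
        # Only instrument these containers (comma-separated) instead of the first container
        instrumentation.opentelemetry.io/container-names: "app,worker"
```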

{% hint style="warning" %}
Go auto-instrumentation requires elevated permissions. These permissions are set automatically by the operator:

```yaml
securityContext:
  privileged: true
  runAsUser: 0
```
{% endhint %}

## View the results

Go to SUSE Observability and make sure the Open Telemetry StackPack is installed (via the main menu -> StackPacks).

After a short while, and if your pods are receiving some traffic, you should be able to find them under their service name in the Open Telemetry -> services and service instances overviews. Traces will appear in the [trace explorer](/use/traces/k8sTs-explore-traces.md) and in the [trace perspective](/use/views/k8s-traces-perspective.md) for the service and service instance components. Span metrics and language-specific metrics (if available) will become available in the [metrics perspective](/use/views/k8s-metrics-perspective.md) for the components.

If you also have the Kubernetes StackPack installed, the instrumented pods will also have their traces available in the [trace perspective](/use/views/k8s-traces-perspective.md).

## Next steps

You can add new charts to components for your application, for example the service or service instance, by following [our guide](/use/metrics/k8s-add-charts.md). It is also possible to create [new monitors](/use/alerting/k8s-monitors.md) using the metrics, and to set up [notifications](/use/alerting/notifications/configure.md) to get notified when your application is unavailable or has performance issues.

The operator, the `OpenTelemetryCollector` custom resource, and the `Instrumentation` custom resource have more options, documented in the [readme of the operator repository](https://github.com/open-telemetry/opentelemetry-operator). For example, it is possible to install an optional [target allocator](https://github.com/open-telemetry/opentelemetry-operator?tab=readme-ov-file#target-allocator) via the `OpenTelemetryCollector` resource; it can be used to configure the Prometheus receiver of the collector. This is especially useful when you want to replace the Prometheus operator and are using its `ServiceMonitor` and `PodMonitor` custom resources.

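As a sketch of what enabling the target allocator looks like on the `OpenTelemetryCollector` resource (field names as in the operator readme; verify the exact schema against your operator version):

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-collector
spec:
  # The target allocator is typically combined with a statefulset deployment,
  # so scrape targets can be distributed over the collector replicas
  mode: statefulset
  targetAllocator:
    enabled: true
    prometheusCR:
      # Pick up ServiceMonitor and PodMonitor resources, like the Prometheus operator does
      enabled: true
```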
## More info

* [API keys](/use/security/k8s-ingestion-api-keys.md)
* [OpenTelemetry APIs](../otlp-apis.md)
* [Customizing the OpenTelemetry Collector configuration](../collector.md)
* [OpenTelemetry SDKs](../instrumentation/README.md)
* [OpenTelemetry Operator](https://github.com/open-telemetry/opentelemetry-operator)