Commit 7af9166

Update and improve OpenTelemetry documentation

1 parent 950cfe3 commit 7af9166

23 files changed

Lines changed: 953 additions & 393 deletions

.gitbook/assets/otel/aws_nodejs_otel_proxy_collector_configuration.svg

Lines changed: 0 additions & 16 deletions

This file was deleted.

(Binary image assets changed: 202 KB, 210 KB, 127 KB)
SUMMARY.md

Lines changed: 10 additions & 6 deletions

@@ -92,16 +92,20 @@
 * [Certificates for sidecar injection](setup/agent/k8sTs-agent-request-tracing-certificates.md)

 ## 🔭 Open Telemetry
-* [Getting started](setup/otel/getting-started.md)
+* [Overview](setup/otel/overview.md)
+* [Getting started](setup/otel/getting-started/README.md)
+  * [Rancher & Kubernetes](setup/otel/getting-started/getting-started-k8s.md)
+  * [AWS Lambda](setup/otel/getting-started/getting-started-lambda.md)
+  * [Linux](setup/otel/getting-started/getting-started-linux.md)
+* [Concepts](setup/otel/concepts.md)
 * [Open telemetry collector](setup/otel/collector.md)
-* [Collector as a proxy](setup/otel/proxy-collector.md)
-* [Languages](setup/otel/languages/README.md)
-* [Generic Exporter configuration](setup/otel/languages/sdk-exporter-config.md)
+* [Sampling](setup/otel/sampling.md)
+* [Instrumentation](setup/otel/languages/README.md)
   * [Java](setup/otel/languages/java.md)
   * [Node.js](setup/otel/languages/node.js.md)
     * [Auto-instrumentation of Lambdas](setup/otel/languages/node.js/auto-instrumentation-of-lambdas.md)
   * [.NET](setup/otel/languages/dot-net.md)
-  * [Verify the results](setup/otel/languages/verify.md)
+  * [SDK Exporter configuration](setup/otel/languages/sdk-exporter-config.md)
 * [Troubleshooting](setup/otel/troubleshooting.md)

 ## CLI
@@ -171,7 +175,7 @@
 ## 🔐 Security

 * [Service Tokens](use/security/k8s-service-tokens.md)
-* [Ingestion API Keys](use/security/k8s-ingestion-api-keys.md)
+* [API Keys](use/security/k8s-ingestion-api-keys.md)

 ## ☁️ SaaS
 * [User Management](saas/user-management.md)

setup/install-stackstate/kubernetes_openshift/ingress.md

Lines changed: 1 addition & 1 deletion

@@ -55,7 +55,7 @@ This step assumes that [Generate `baseConfig_values.yaml` and `sizing_values.ya
 {% endhint %}


-## Configure Ingress Rule for Open Telemetry Traces via the SUSE Observability Helm chart
+## Configure Ingress Rule for Open Telemetry

 The SUSE Observability Helm chart exposes an `opentelemetry-collector` service in its values, for which a dedicated `ingress` can be created. This is disabled by default. The ingress for the `opentelemetry-collector` needs to support the GRPC protocol. The example below shows how to use the Helm chart to configure an nginx-ingress controller with GRPC and TLS encryption enabled. Note that setting up the controller itself and the certificates is beyond the scope of this document.
setup/otel/collector.md

Lines changed: 164 additions & 199 deletions
Large diffs are not rendered by default.

setup/otel/concepts.md

Lines changed: 53 additions & 0 deletions

---
description: SUSE Observability
---

# Open Telemetry concepts

This is a summary of the most important concepts in Open Telemetry and should be sufficient to get started. For a more detailed introduction, see the [Open Telemetry documentation](https://opentelemetry.io/docs/concepts/).

## Signals

Open Telemetry recognizes 3 telemetry signals:

* Traces
* Metrics
* Logs

At the moment SUSE Observability supports traces and metrics; logs will be supported in a future version. For Kubernetes logs it is possible to use the [SUSE Observability agent](/k8s-quick-start-guide.md) instead.

### Traces

Traces allow us to visualize the path of a request through your application. A trace consists of one or more spans that together form a tree; a single trace can be entirely within a single service, but it can also go across many services. Each span represents an operation in the processing of the request and has:

* a name
* start and end times, from which a duration can be calculated
* a status
* attributes
* resource attributes (see [resources](#resources))
* events

Span attributes provide metadata for the span. For example, a span for an operation that places an order can have the `orderId` as an attribute, and a span for an HTTP operation can have the HTTP method and URL as attributes.

Span events can be used to represent a point in time where something important happened within the operation of the span. For example, if the span failed there can be an `exception` or an `error` event that captures the error message, a stack trace and the exact point in time the error occurred.
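The span fields listed above can be sketched as a small data structure. This is a hypothetical model for illustration only, not the Open Telemetry SDK API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the span fields listed above; real spans are
# created by the Open Telemetry SDK, this only illustrates the data model.
@dataclass
class Span:
    name: str
    start_time: float  # seconds since the epoch
    end_time: float
    status: str = "OK"
    attributes: dict = field(default_factory=dict)
    events: list = field(default_factory=list)

    @property
    def duration_ms(self) -> float:
        # The duration is derived from the start and end timestamps.
        return (self.end_time - self.start_time) * 1000.0

span = Span(
    name="place-order",
    start_time=1700000000.00,
    end_time=1700000000.25,
    attributes={"orderId": "o-123", "http.request.method": "POST"},
)
# An event marking the moment an error occurred within the operation:
span.events.append({"name": "exception", "attributes": {"message": "timeout"}})
print(span.duration_ms)  # 250.0
```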
### Metrics

Metrics are measurements captured at runtime; each measurement results in a metric event. Metrics are important indicators for application performance and availability, and are often used to alert on an outage or performance problem. Metrics have:

* a name
* a timestamp
* a kind (counter, gauge, histogram, etc.)
* attributes
* resource attributes (see [resources](#resources))

Attributes provide the metadata for a metric.
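As an illustration of attributes acting as metric metadata, a counter-style metric can be thought of as one counter per unique attribute set. This is a hypothetical sketch, not the Open Telemetry metrics API:

```python
from collections import Counter

# Hypothetical sketch: a counter metric where each unique combination of
# attributes (here: HTTP method and status code) is counted separately.
http_requests = Counter()

def record_request(method: str, status_code: int) -> None:
    # The attributes are the metadata that distinguish the series.
    http_requests[(method, status_code)] += 1

record_request("GET", 200)
record_request("GET", 200)
record_request("POST", 500)
print(http_requests[("GET", 200)])   # 2
print(http_requests[("POST", 500)])  # 1
```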
## Resources

A resource is the entity that produces the telemetry data, and the resource attributes provide the metadata for that entity. For example, a process running in a container, in a pod, in a namespace, in a Kubernetes cluster can have resource attributes for all these entities.

Resource attributes are often assigned automatically by the SDKs. However, it is recommended to always set the `service.name` and `service.namespace` attributes explicitly. The first is the logical name for the service; if it is not set, the SDK will use an `unknown_service` value, making it very hard to use the data later in SUSE Observability. The namespace is a convenient way to organize your services, which is especially useful if you have the same services running in multiple locations.
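Most SDKs let you set these two attributes without code changes, via the standard Open Telemetry environment variables. The service and namespace values below are made-up examples:

```shell
# Standard Open Telemetry SDK environment variables; the values here are
# hypothetical examples for a "checkout" service in a "webshop" namespace.
export OTEL_SERVICE_NAME="checkout"
export OTEL_RESOURCE_ATTRIBUTES="service.namespace=webshop,service.version=1.4.2"
```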
## Semantic conventions

Open Telemetry defines common names for operations and data; these are called the semantic conventions. Semantic conventions follow a naming scheme that allows for standardized processing of data across languages, libraries and code bases. There are semantic conventions for all signals and for resource attributes, defined for many different platforms and operations on the [Open Telemetry website](https://opentelemetry.io/docs/specs/semconv/attributes-registry/). SDKs use the semantic conventions to assign these attributes, and SUSE Observability also respects and relies on them, for example to recognize Kubernetes resources.

setup/otel/getting-started.md

Lines changed: 0 additions & 22 deletions

This file was deleted.

Lines changed: 163 additions & 0 deletions

---
description: SUSE Observability
---

# Getting Started with Open Telemetry on Rancher / Kubernetes

Here is the setup we'll be creating for an application that needs to be monitored:

* The monitored application / workload running in cluster A
* The Open Telemetry collector running near the observed application(s), so also in cluster A, and sending the data to SUSE Observability
* SUSE Observability running in cluster B, or SUSE Cloud Observability

![Container instrumentation with Open Telemetry via a collector running as a Kubernetes deployment](/.gitbook/assets/otel/open-telemetry-collector-kubernetes.png)

## The Open Telemetry collector

{% hint type="info" %}
For a production setup it is strongly recommended to install the collector, since it allows your service to offload data quickly, and the collector can take care of additional handling like retries, batching, encryption or even sensitive data filtering.
{% endhint %}
First we'll install the OTel (Open Telemetry) collector in cluster A. We configure it to:

* Receive data from, potentially many, instrumented applications
* Enrich collected data with Kubernetes attributes
* Generate metrics for traces
* Forward the data to SUSE Observability, authenticating with the API key

Next to that, it will also retry sending data when there are connection problems.

### Create a secret for the API key

We'll use the receiver API key generated during installation (see [here](/use/security/k8s-ingestion-api-keys.md#api-keys) for where to find it):

```bash
# Create the namespace first if it does not exist yet:
kubectl create namespace open-telemetry

kubectl create secret generic open-telemetry-collector \
  --namespace open-telemetry \
  --from-literal=API_KEY='<suse-observability-api-key>'
```
### Configure and install the collector

We install the collector with a Helm chart provided by the Open Telemetry project. Make sure you have the Open Telemetry Helm charts repository configured:

```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
```

Create an `otel-collector.yaml` values file for the Helm chart. The configuration below is a good starting point for usage with SUSE Observability; replace `<otlp-suse-observability-endpoint>` with your OTLP endpoint (see [OTLP API](../otlp-apis.md) to find it) and insert the name of your Kubernetes cluster instead of `<your-cluster-name>`:
{% code title="otel-collector.yaml" lineNumbers="true" %}
```yaml
# Set the API key from the secret as an env var:
extraEnvsFrom:
  - secretRef:
      name: open-telemetry-collector
mode: deployment
image:
  # Use the collector container image that has all components important for k8s. In case of
  # missing components the otel/opentelemetry-collector-contrib image can be used, which has
  # all components in the contrib repository: https://github.com/open-telemetry/opentelemetry-collector-contrib
  repository: "otel/opentelemetry-collector-k8s"
ports:
  metrics:
    enabled: true
presets:
  kubernetesAttributes:
    enabled: true
    extractAllPodLabels: true
# This is the config file for the collector:
config:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
  extensions:
    # Use the API key from the env for authentication
    bearertokenauth:
      scheme: SUSEObservability
      token: "${env:API_KEY}"
  exporters:
    otlp/suse-observability:
      auth:
        authenticator: bearertokenauth
      # Put in your own OTLP endpoint
      endpoint: <otlp-suse-observability-endpoint>
      compression: snappy
  processors:
    memory_limiter:
      check_interval: 5s
      limit_percentage: 80
      spike_limit_percentage: 25
    batch:
    resource:
      attributes:
        - key: k8s.cluster.name
          action: upsert
          # Insert your own cluster name
          value: <your-cluster-name>
        - key: service.instance.id
          from_attribute: k8s.pod.uid
          action: insert
        # Use the k8s namespace also as the Open Telemetry namespace
        - key: service.namespace
          from_attribute: k8s.namespace.name
          action: insert
  connectors:
    # Generate metrics for spans
    spanmetrics:
      metrics_expiration: 5m
      namespace: otel_span
  service:
    # The health_check extension, debug exporter and prometheus receiver used
    # below come from the chart's default values, which are merged with this file.
    extensions: [ health_check, bearertokenauth ]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [memory_limiter, resource, batch]
        exporters: [debug, spanmetrics, otlp/suse-observability]
      metrics:
        receivers: [otlp, spanmetrics, prometheus]
        processors: [memory_limiter, resource, batch]
        exporters: [debug, otlp/suse-observability]
```
{% endcode %}
{% hint type="warning" %}
**Use the same cluster name as used for installing the SUSE Observability agent** if you also use the SUSE Observability agent with the Kubernetes stackpack. Using a different cluster name will result in an empty traces perspective for Kubernetes components and will overall make correlating information much harder for SUSE Observability and your users.
{% endhint %}

Now install the collector, using the configuration file:

```bash
helm upgrade --install opentelemetry-collector open-telemetry/opentelemetry-collector \
  --values otel-collector.yaml \
  --namespace open-telemetry
```

The collector offers many more receivers, processors and exporters; for more details see our [collector page](../collector.md). For production usage, where often large amounts of spans are generated, you will want to set up [sampling](../sampling.md).
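For reference, the `bearertokenauth` extension configured above authenticates each export request by attaching an `Authorization` header built from the scheme and the API key. A rough sketch of that header, using a placeholder key:

```python
# Sketch of the Authorization header produced by the bearertokenauth
# extension with scheme "SUSEObservability"; the key is a placeholder.
scheme = "SUSEObservability"
api_key = "example-api-key"
headers = {"Authorization": f"{scheme} {api_key}"}
print(headers["Authorization"])  # SUSEObservability example-api-key
```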
## Collect telemetry data from your application

The common way to collect telemetry data is to instrument your application using the Open Telemetry SDKs. We've documented quick start guides for a few languages, but there are many more:

* [Java](languages/java.md)
* [.NET](languages/dot-net.md)
* [Node.js](languages/node.js.md)

For other languages follow the documentation on [opentelemetry.io](https://opentelemetry.io/docs/languages/) and make sure to configure the SDK exporter to ship data to the collector you just installed by following [these instructions](languages/sdk-exporter-config.md).
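For an application running in the same cluster, pointing the SDK's OTLP exporter at the collector usually comes down to setting the standard exporter environment variables. The host name below is an assumption based on the Helm release name (`opentelemetry-collector`) and namespace (`open-telemetry`) used in this guide:

```shell
# Standard OTLP exporter environment variables; the host name assumes the
# "opentelemetry-collector" release in the "open-telemetry" namespace.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://opentelemetry-collector.open-telemetry.svc.cluster.local:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```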
## View the results

Go to SUSE Observability and make sure the Open Telemetry Stackpack is installed (via the main menu -> Stackpacks).

After a short while, and if your pods are getting some traffic, you should be able to find them under their service name in the Open Telemetry -> services and service instances overviews. Traces will appear in the [trace explorer](/use/traces/k8sTs-explore-traces.md) and in the [trace perspective](/use/views/k8s-traces-perspective.md) for the service and service instance components. Span metrics and language-specific metrics (if available) will become available in the [metrics perspective](/use/views/k8s-metrics-perspective.md) for the components.

If you also have the Kubernetes stackpack installed, the instrumented pods will also have their traces available in the [trace perspective](/use/views/k8s-traces-perspective.md).

## More info

* [API keys](/use/security/k8s-ingestion-api-keys.md)
* [Open Telemetry API](../otlp-apis.md)
* [Customizing Open Telemetry Collector configuration](../collector.md)
* [Open Telemetry SDKs](../languages/README.md)
