Commit 02d1fb6

STAC-22435 AWS Lambda otel setup

1 parent 4ac7fe0 commit 02d1fb6

2 files changed: 122 additions & 14 deletions

## setup/otel/getting-started/getting-started-k8s.md

7 additions & 0 deletions
```diff
@@ -66,6 +66,13 @@ presets:
     extractAllPodLabels: true
 # This is the config file for the collector:
 config:
+  receivers:
+    otlp:
+      protocols:
+        grpc:
+          endpoint: 0.0.0.0:4317
+        http:
+          endpoint: 0.0.0.0:4318
   extensions:
     # Use the API key from the env var for authentication
     bearertokenauth:
```
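A quick way to check that the newly exposed OTLP receivers accept traffic is the `telemetrygen` tool from the OpenTelemetry collector-contrib project. This is a sketch rather than part of the commit: it assumes the collector runs in the `open-telemetry` namespace under a Helm release named `opentelemetry-collector`, so adjust the namespace and service name to your setup.

```bash
# Forward the collector's OTLP gRPC port to localhost
# (namespace and service name are assumptions based on the Helm examples below)
kubectl -n open-telemetry port-forward svc/opentelemetry-collector 4317:4317 &

# Send a few synthetic traces to the OTLP gRPC receiver
telemetrygen traces --otlp-endpoint localhost:4317 --otlp-insecure --traces 5
```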

## setup/otel/getting-started/getting-started-lambda.md

115 additions & 14 deletions
````diff
@@ -4,30 +4,124 @@ description: SUSE Observability
 
 # Getting started for AWS Lambda
 
-We'll setup monitoring for these components:
-* The monitored AWS Lambda (instrumented using Open Telemetry)
+We'll set up monitoring for one or more AWS Lambda functions:
+* The monitored AWS Lambda function(s) (instrumented using Open Telemetry)
 * The Open Telemetry collector
 * SUSE Observability or SUSE Cloud Observability
 
-DIAGRAM...
+![AWS Lambda Instrumentation With Opentelemetry via proxy collector](/.gitbook/assets/otel/aws_nodejs_otel_proxy_collector_configuration.svg)
 
-## Install the Open Telemetry collector
+## The Open Telemetry collector
 
 {% hint type="info" %}
 For a production setup it is strongly recommended to install the collector, since it allows your service to offload data quickly and the collector can take care of additional handling like retries, batching, encryption or even sensitive data filtering.
 {% endhint %}
 
-First we'll install the collector. It can handle several tasks:
+First we'll install the OTel (Open Telemetry) collector. In this example we run it in a Kubernetes cluster, close to the Lambda functions; a similar setup can be made with a collector installed on a virtual machine instead. The configuration used here only acts as a secure proxy to offload data quickly from the Lambda functions, and it runs within trusted network infrastructure.
 
-* Sampling of traces
-* Forward the data to SUSE Observability, including authentication using the API key
-* Retries sending data when there are a connection problems, even after the lambda function has terminated
+### Create a secret for the API key
 
-To configure the collector you'll need the OTLP or OTLP over HTTP endpoint of SUSE Observability to be securely accessible. The SUSE Observability Helm chart allows you to set that up via an ingress configuration. If you didn't do that during installation now is the time to [add that ingress configuration](/setup/install-stackstate/kubernetes_openshift/ingress.md#configure-ingress-rule-for-open-telemetry). If you're using SUSE Cloud Observability these are the applicable URLs:
-* https://otel-<your-suse-observabillity>.app.stackstate.com - For the OTLP protocol
-* https://otel-http-<your-suse-observabillity>.app.stackstate.com - For OTLP over HTTP
+We'll use the receiver API key generated during installation (see [here](/use/security/k8s-ingestion-api-keys.md#api-keys) for where to find it):
 
-Follow the steps [here](../proxy-collector.md) to configure and install the collector. These steps show how to make it securely accessible for the instrumented Lambda functions by installing it in a Kubernetes cluster, but the same can be done by installing the collector on an EC2 instance and installing the collector using an RPM or Debian package, see also [Getting started other](getting-started-other.md).
+```bash
+kubectl create secret generic open-telemetry-collector \
+  --namespace open-telemetry \
+  --from-literal=API_KEY='<suse-observability-api-key>'
+```
+
+### Configure and install the collector
+
+We install the collector with a Helm chart provided by the Open Telemetry project. Make sure you have the Open Telemetry Helm charts repository configured:
+
+```bash
+helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
+```
+
+Create an `otel-collector.yaml` values file for the Helm chart. Here is a good starting point for usage with SUSE Observability; replace `<otlp-suse-observability-endpoint>` with your OTLP endpoint (see [OTLP API](../otlp-apis.md) for your endpoint) and insert the name of your Kubernetes cluster instead of `<your-cluster-name>`:
+
+{% code title="otel-collector.yaml" lineNumbers="true" %}
+```yaml
+mode: deployment
+presets:
+  kubernetesAttributes:
+    enabled: true
+    # You can also configure the preset to add all the associated pod's labels and annotations to your telemetry.
+    # The label/annotation name will become the resource attribute's key.
+    extractAllPodLabels: true
+extraEnvsFrom:
+  - secretRef:
+      name: open-telemetry-collector
+image:
+  repository: "otel/opentelemetry-collector-k8s"
+
+config:
+  receivers:
+    otlp:
+      protocols:
+        grpc:
+          endpoint: 0.0.0.0:4317
+        http:
+          endpoint: 0.0.0.0:4318
+  extensions:
+    # Use the API key from the env var for authentication
+    bearertokenauth:
+      scheme: SUSEObservability
+      token: "${env:API_KEY}"
+  exporters:
+    otlp:
+      auth:
+        authenticator: bearertokenauth
+      # Put in your own otlp endpoint
+      endpoint: <otlp-suse-observability-endpoint>
+
+  service:
+    extensions: [health_check, bearertokenauth]
+    pipelines:
+      traces:
+        receivers: [otlp]
+        processors: [batch]
+        exporters: [otlp]
+      metrics:
+        receivers: [otlp]
+        processors: [batch]
+        exporters: [otlp]
+      logs:
+        receivers: [otlp]
+        processors: [batch]
+        exporters: [otlp]
+
+ingress:
+  enabled: true
+  annotations:
+    kubernetes.io/ingress.class: ingress-nginx-external
+    nginx.ingress.kubernetes.io/ingress.class: ingress-nginx-external
+    nginx.ingress.kubernetes.io/backend-protocol: GRPC
+    # "12.34.56.78/32" is the IP address of the NAT gateway in the VPC where the otel data is originating from
+    # nginx.ingress.kubernetes.io/whitelist-source-range: "12.34.56.78/32"
+  hosts:
+    - host: "otlp-collector-proxy.${CLUSTER_NAME}"
+      paths:
+        - path: /
+          pathType: ImplementationSpecific
+          port: 4317
+  tls:
+    - secretName: ${CLUSTER_NODOT}-ecc-tls
+      hosts:
+        - "otlp-collector-proxy.${CLUSTER_NAME}"
+```
+{% endcode %}
+
+Now install the collector, using the configuration file:
+
+```bash
+helm upgrade --install opentelemetry-collector open-telemetry/opentelemetry-collector \
+  --values otel-collector.yaml \
+  --namespace open-telemetry
+```
+
+Make sure the proxy is reachable from the Lambda functions, for example by connecting them to the same VPC. It is recommended to use a source-range whitelist to filter out traffic from untrusted and/or unknown sources.
+
+The collector offers many more receivers, processors and exporters; for more details see our [collector page](../collector.md). Production usage often generates large amounts of spans, so you will want to set up [sampling](../sampling.md).
 
 ## Instrument a Lambda function
````

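The new text closes the collector section by pointing at sampling for production span volumes. As a rough sketch of what that can look like (not part of this commit, and assuming the `probabilistic_sampler` processor is available in your collector distribution), the values file above could be extended with:

```yaml
config:
  processors:
    # Keep roughly 25% of traces; tune the percentage to your span volume
    probabilistic_sampler:
      sampling_percentage: 25
  service:
    pipelines:
      traces:
        # Sample before batching so dropped spans are never exported
        processors: [probabilistic_sampler, batch]
```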
```diff
@@ -36,4 +130,11 @@ Open Telemetry supports instrumenting Lambda functions in multiple languages usi
 ## View the results
 Go to SUSE Observability and make sure the Open Telemetry Stackpack is installed (via the main menu -> Stackpacks).
 
-After a a short while and if your Lambda function(s) are getting some traffic you should be able to find the functions under their service name in the Open Telemetry -> services and service instances overviews. Traces will appear in the [trace explorer](/use/traces/k8sTs-explore-traces.md) and in the [trace perspective](/use/views/k8s-traces-perspective.md) for the service and service instance components. Span metrics and language specific metrics (if available) will become available in the [metrics perspective](/use/views/k8s-metrics-perspective.md) for the components.
+After a short while, and if your Lambda function(s) are getting some traffic, you should be able to find the functions under their service name in the Open Telemetry -> services and service instances overviews. Traces will appear in the [trace explorer](/use/traces/k8sTs-explore-traces.md) and in the [trace perspective](/use/views/k8s-traces-perspective.md) for the service and service instance components. Span metrics and language-specific metrics (if available) will become available in the [metrics perspective](/use/views/k8s-metrics-perspective.md) for the components.
+
+## More info
+
+* [API keys](/use/security/k8s-ingestion-api-keys.md)
+* [Open Telemetry API](../otlp-apis.md)
+* [Customizing Open Telemetry Collector configuration](../collector.md)
+* [Open Telemetry SDKs](../languages/README.md)
```
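For context on the instrumentation step referenced in the second hunk: attaching the OpenTelemetry Lambda layer typically looks like the following. The function name, layer ARN and endpoint are placeholders rather than values from this commit; pick the layer matching your runtime and region from the opentelemetry-lambda releases.

```bash
# Attach the OTel Lambda layer and point the exporter at the collector proxy
# (function name, layer ARN and endpoint are placeholders)
aws lambda update-function-configuration \
  --function-name <your-function> \
  --layers arn:aws:lambda:<region>:<account-id>:layer:<otel-lambda-layer>:<version> \
  --environment "Variables={AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-handler,OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp-collector-proxy.<your-cluster-name>}"
```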
