In my head, the flow looks like this:
On certain events (e.g. a new release) and on demand:
- Prow job creates the loadtesting cluster at an infrastructure provider and obtains the kubeconfig
- Prow job runs the setup.sh script to provision all the components and sets the kcp images to the latest commit on main
- Prow job runs the suite and pushes the results into an S3 bucket. Results would be the client-side metrics, the following Prometheus metrics over the whole run, logs, compacted pprof profiles (these need to be actively collected), and a Grafana screenshot of the kcp loadtests/workspace debug dashboard (there is a Grafana screenshot plugin):
  - container_memory_working_set_bytes{namespace="kcp", image=~"ghcr.io/kcp-dev/kcp:.*", pod!~".*proxy.*"}
  - sum(rate(container_cpu_usage_seconds_total{namespace="kcp", image=~"ghcr.io/kcp-dev/kcp:.*", pod!~".*proxy.*"}[1m])) by (pod)
  - kcp_logicalcluster_count{} for all phases
- Prow job deletes the cluster again
- (optional) we generate a short Slack message that shows the diff between runs, so we can see the performance effect of changes
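The metrics-collection part of the suite run could be sketched roughly like this. This is only a sketch: the Prometheus endpoint, the metric names used as keys, and the sampling step are all assumptions, not anything decided above.

```python
import json
import urllib.parse
import urllib.request

# Placeholder Prometheus endpoint inside the loadtesting cluster (assumption).
PROM_URL = "http://prometheus.monitoring.svc:9090"

# The queries listed in the plan, to be captured over the whole run.
QUERIES = {
    "kcp_memory_bytes": 'container_memory_working_set_bytes{namespace="kcp", image=~"ghcr.io/kcp-dev/kcp:.*", pod!~".*proxy.*"}',
    "kcp_cpu_cores": 'sum(rate(container_cpu_usage_seconds_total{namespace="kcp", image=~"ghcr.io/kcp-dev/kcp:.*", pod!~".*proxy.*"}[1m])) by (pod)',
    "logicalcluster_count": "kcp_logicalcluster_count{}",
}

def range_query_url(query: str, start: float, end: float, step: str = "30s") -> str:
    """Build a Prometheus /api/v1/query_range URL for one query."""
    params = urllib.parse.urlencode(
        {"query": query, "start": start, "end": end, "step": step}
    )
    return f"{PROM_URL}/api/v1/query_range?{params}"

def collect(start: float, end: float) -> dict:
    """Fetch every query over the run window; the result dict is what
    the job would serialize and push to the S3 bucket."""
    results = {}
    for name, query in QUERIES.items():
        with urllib.request.urlopen(range_query_url(query, start, end)) as resp:
            results[name] = json.load(resp)
    return results
```

Using `query_range` rather than an instant query means the stored artifact keeps the whole time series for the run, so later diffs can compare peaks, not just end states.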
Implementation notes:
- we will have a bucket on Oracle Cloud to store the data
- by the time we implement this, we expect to run it on an Oracle cluster
Generally, the job should be designed to take a kcp commit as a parameter so we can run it manually at any point. For example, after a PR that we think affects performance, we could trigger it manually.
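The optional Slack diff could be as simple as comparing aggregated numbers from two runs. A minimal sketch of that comparison logic, where the metric names, threshold, and message format are made up for illustration:

```python
def diff_runs(baseline: dict, current: dict) -> dict:
    """Percent change per metric between two runs (positive = increase)."""
    deltas = {}
    for name, base in baseline.items():
        cur = current.get(name)
        if cur is None or base == 0:
            continue
        deltas[name] = (cur - base) / base * 100.0
    return deltas

def slack_message(deltas: dict, threshold: float = 5.0) -> str:
    """Format a short Slack message, flagging metrics that moved
    by more than the threshold percentage in either direction."""
    lines = ["kcp loadtest results vs. previous run:"]
    for name, pct in sorted(deltas.items()):
        marker = ":warning:" if abs(pct) > threshold else ":white_check_mark:"
        lines.append(f"{marker} {name}: {pct:+.1f}%")
    return "\n".join(lines)

# Example with hypothetical aggregated metrics from two runs.
baseline = {"p99_request_seconds": 0.80, "max_memory_mib": 2048}
current = {"p99_request_seconds": 0.92, "max_memory_mib": 2048}
print(slack_message(diff_runs(baseline, current)))
```

Keeping the comparison in a standalone function like this also makes it easy to run the same diff by hand against two result sets pulled from the bucket.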