This project uses mise-en-place as a manager of tool versions (Python, uv, Node.js, pnpm, etc.), as well as a task runner and environment manager. Mise downloads all the needed tools automatically -- you don't need to install them yourself.
Clone this project, then run these setup steps:
```sh
brew install mise # more ways to install: https://mise.jdx.dev/installing-mise.html
mise trust
mise install
```

After setup, you can use:
- `mise run` to list tasks and select one interactively to run
- `mise <task-name>` to run a task
- `mise x -- <command>` to run a project tool -- for example `mise x -- uv add <package>`
If you want to run tools directly without the `mise x --` prefix, you need to activate a shell hook:
- Bash: `eval "$(mise activate bash)"` (add to `~/.bashrc` to make it permanent)
- Zsh: `eval "$(mise activate zsh)"` (add to `~/.zshrc` to make it permanent)
- Fish: `mise activate fish | source` (add to `~/.config/fish/config.fish` to make it permanent)
- Other shells: see the mise documentation
Edit the `[env]` section in `mise.local.toml` in the project root (documentation). Run `mise setup` if you don't see the file.
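A minimal sketch of what the `[env]` section might look like -- the variable names below are placeholders for illustration, not settings this project necessarily defines:

```toml
[env]
# Example local overrides -- replace with the variables your setup actually needs
LOG_LEVEL = "debug"
LLM_API_KEY = "replace-me"
```

Mise exports these variables into the environment whenever you run a task or a project tool from this directory.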
Starting up the platform using the CLI (`agentstack platform start`, or even `mise agentstack-cli:run -- platform start`) will use published images by default. To use local images, you need to build them and import them into the platform. Instead, use:

```sh
mise agentstack:start
```

This will build the images (agentstack-server and agentstack-ui) and import them into the cluster. You can add other CLI arguments as you normally would when using the agentstack CLI, for example:

```sh
mise agentstack:start --set docling.enabled=true --set oidc.enabled=true
```

To stop or delete the platform, use:

```sh
mise agentstack:stop
mise agentstack:delete
```

For debugging and direct access to Kubernetes, set up KUBECONFIG and other environment variables using:
```sh
# Activate environment
eval "$(mise run agentstack:shell)"

# Deactivate environment
deactivate
```

By default, authentication and authorization are disabled.
Starting the platform with OIDC enabled:

```sh
mise agentstack:start --set oidc.enabled=true
```

This does the following:

- Installs Istio in ambient mode.
- Creates a gateway and routes for https://agentstack.localhost:8336/.
- Installs the Kiali console.
Why TLS is used:
OAuth tokens are returned to the browser only over HTTPS to avoid leakage over plain HTTP. Always access the UI via
https://agentstack.localhost:8336/.
Istio details:
The default namespace is labeled `istio.io/dataplane-mode=ambient`. This ensures all pod-to-pod traffic is routed through ztunnel, except for the agentstack pod, which uses hostNetwork and is not compatible with the Istio mesh.
Available endpoints:
| Service | HTTPS | HTTP |
|---|---|---|
| Kiali Console | – | http://localhost:20001 |
| BeeAI UI | https://agentstack.localhost:8336 | http://localhost:8334 |
| BeeAI API Docs | https://agentstack.localhost:8336/api/v1/docs | http://localhost:8333/api/v1/docs |
OIDC configuration:
- Update OIDC provider credentials and settings in `helm/values.yaml` under:

```yaml
oidc:
  enabled: false
  discovery_url: "<oidc_discovery_endpoint>"
  admin_emails: "a comma separated list of email addresses"
  nextauth_trust_host: true
  nextauth_secret: "<To generate a random string, you can use the Auth.js CLI: npx auth secret>"
  nextauth_url: "http://localhost:8336"
  nextauth_providers: [
    {
      "name": "w3id",
      "id": "w3id",
      "type": "oidc",
      "class": "IBM",
      "client_id": "<oidc_client_id>",
      "client_secret": "<oidc_client_secret>",
      "issuer": "<oidc_issuer>",
      "jwks_url": "<oidc_jwks_endpoint>",
      "nextauth_url": "http://localhost:8336",
      "nextauth_redirect_proxy_url": "http://localhost:8336"
    },
    {
      "name": "IBMiD",
      "id": "IBMiD",
      "type": "oidc",
      "class": "IBM",
      "client_id": "<oidc_client_id>",
      "client_secret": "<oidc_client_secret>",
      "issuer": "<oidc_issuer>",
      "jwks_url": "<oidc_jwks_endpoint>",
      "nextauth_url": "http://localhost:8336",
      "nextauth_redirect_proxy_url": "http://localhost:8336"
    }
  ]
```

Note: the `class` in the providers entry must be a valid provider supported by next-auth; see https://github.com/nextauthjs/next-auth-example/blob/main/auth.ts
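The `nextauth_secret` value must be a random string. Besides the Auth.js CLI (`npx auth secret`), a quick sketch using `openssl` (assuming it is installed) produces a suitable value:

```sh
# Generate a random 32-byte, base64-encoded secret suitable for nextauth_secret
NEXTAUTH_SECRET="$(openssl rand -base64 32)"
echo "$NEXTAUTH_SECRET"
```

Paste the printed value into `helm/values.yaml` (or your `.env`, see below) instead of the placeholder.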
- When debugging the UI component (see "Debugging individual components"), copy `env.example` as `.env` and update the following OIDC-specific values:

```sh
OIDC_PROVIDERS = '[{"name": "w3id","id": "w3id","type": "oidc","class": "IBM","client_id": "<your_client_id>","client_secret": "<your_client_secret>","issuer": "<your_issuer>","jwks_url": "<your_jwks_url>","nextauth_url": "http://localhost:3000","nextauth_redirect_proxy_url": "http://localhost:3000"}]'
NEXTAUTH_SECRET = "<To generate a random string, you can use the Auth.js CLI: npx auth secret>"
NEXTAUTH_URL = "http://localhost:3000"
OIDC_ENABLED = true
```

Optionally add:

```sh
NEXTAUTH_DEBUG = "true"
```

To deploy the helm chart to OpenShift:
- Update `values.yaml` so that `oidc.enabled` is true, e.g.:

```yaml
oidc:
  enabled: true
```

- Update `values.yaml` so that the `nextauth_url` and the `nextauth_redirect_proxy_url` values reflect the URL of the route created for the `agentstack-ui-svc`.
- Ensure that the `oidc.nextauth_providers` array entries in `values.yaml` have valid/appropriate values.
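Put together, the OpenShift overrides might look like the sketch below. The route hostname is a placeholder -- substitute whatever `oc get route` reports for the `agentstack-ui-svc` route; the same URL also goes into each provider entry's `nextauth_url` and `nextauth_redirect_proxy_url`:

```yaml
oidc:
  enabled: true
  nextauth_url: "https://agentstack-ui-svc-myproject.apps.example.com"
```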
It's often useful to run and debug individual components (e.g., in an IDE) against the full stack (PostgreSQL, OpenTelemetry, Arize Phoenix, ...). For this, we include Telepresence, which allows rewiring a Kubernetes container to your local machine. (Note that sshfs is not needed, since we don't use it in this setup.)
```sh
mise run agentstack-server:dev:start
```

This will do the following:

- Create a `.env` file if it doesn't exist yet (you can add your configuration here)
- Stop the default platform VM ("agentstack") if it exists
- Start a new VM named "agentstack-local-dev", separate from the "agentstack" VM used by default
- Install Telepresence into the cluster
  - Note that this will require root access on your machine, due to setting up a networking stack.
- Replace agentstack in the cluster and forward any incoming traffic to localhost
After the command succeeds, you can:
- Send requests as if your machine were running inside the cluster, for example: `curl http://<service-name>:<service-port>`
- Connect to PostgreSQL using the default credentials: `postgresql://agentstack-user:password@postgresql:5432/agentstack`
- Start your server from your IDE or using `mise run agentstack-server:run` on port 18333
- Run agentstack-cli using `mise agentstack-cli:run -- <command>`, or send HTTP requests to localhost:8333 or localhost:18333
  - localhost:8333 is port-forwarded from the cluster, so any requests will pass through the cluster networking to the agentstack pod, which is replaced by Telepresence and forwarded back to your local machine on port 18333
  - localhost:18333 is where your local platform should be running
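For example, with the Telepresence connection active, the in-cluster PostgreSQL is reachable with the default credentials above. A sketch (the `psql` invocation is commented out because it requires the running dev cluster):

```sh
# Default dev-cluster connection string, as listed above
PG_URL="postgresql://agentstack-user:password@postgresql:5432/agentstack"

# With the dev environment running, this would open a session:
# psql "$PG_URL" -c 'select 1'
echo "$PG_URL"
```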
To inspect the cluster using kubectl or k9s, and Lima using limactl, activate the dev environment using:
```sh
# Activate dev environment
eval "$(mise run agentstack-server:dev:shell)"

# Deactivate dev environment
deactivate
```

When you're done, you can stop the development cluster and networking using:

```sh
mise run agentstack-server:dev:stop
```

Or delete the cluster entirely using:

```sh
mise run agentstack-server:dev:delete
```

TIP: If you run into connection issues after sleep or a longer period of inactivity, try `mise run agentstack-server:dev:reconnect` first. You may not need to clean and restart the entire VM.
We use a separate VM for local development of e2e and integration tests. The setup is almost identical, but you need to change the kubeconfig location in your `.env`:

```sh
# Use for developing e2e and integration tests locally
K8S_KUBECONFIG=~/.agentstack/lima/agentstack-local-test/copied-from-guest/kubeconfig.yaml
```

and then run `agentstack-server:dev:test:start`.

TIP: Similarly to the dev environment, you can use `mise run agentstack-server:dev:test:reconnect`.
Lower-level networking using Telepresence directly:

```sh
# Activate environment
eval "$(mise run agentstack-server:dev:shell)"

# Start platform
mise agentstack-cli:run -- platform start --vm-name=agentstack-local-dev # optional --tag [tag] --import-images

mise x -- telepresence helm install
mise x -- telepresence connect

# Receive traffic to a pod by replacing it in the cluster
mise x -- telepresence replace <pod-name>

# Once done, quit Telepresence using:
mise x -- telepresence quit
```

More information about how replace/intercept/ingress works can be found in the [Telepresence documentation](https://telepresence.io/docs/howtos/engage).

If you want to run this local setup against Ollama, you must use a special option when setting up the LLM:

```sh
agentstack model setup --use-true-localhost
```
The following commands can be used to create or run migrations in the dev environment above:
- Run migrations: `mise run agentstack-server:migrations:run`
- Generate migrations: `mise run agentstack-server:migrations:generate`
- Use the Alembic command directly: `mise run agentstack-server:migrations:alembic`
NOTE: The dev setup will run the locally built image, including its migrations, before replacing it with your local instance. If the new migrations you just implemented are not working, the dev setup will not start properly and you need to fix the migrations first. You can activate the shell using `eval "$(mise run agentstack-server:dev:shell)"` and use your favorite Kubernetes tool (e.g., k9s or kubectl) to see the migration logs.
To run Agent Stack components in development mode (ensuring proper rebuilding), use the following commands.
Build and run the server using the setup described in "Running the platform from source", or use the development setup described in "Running and debugging individual components".

```sh
mise agentstack-cli:run -- agent list
mise agentstack-cli:run -- agent run website_summarizer "summarize iambee.ai"

# run the UI development server:
mise agentstack-ui:run

# UI is also available from agentstack-server (in static mode):
mise agentstack-server:run
```
⚠️ IMPORTANT
Always create a pre-release before the actual public release and check that the upgrade and installation work.

Use the release script:

```sh
mise run release
```