Terraform module that deploys OpenClaw on AWS Lightsail with Tailscale VPN for private-only access.
All public ports are blocked. The instance is accessible exclusively through Tailscale.
| Resource | Purpose |
|---|---|
| Lightsail instance (`openclaw_ls_1_0`) | OpenClaw autonomous AI agent |
| Static IP | Stable address for the instance |
| Firewall rules | Port 22 restricted to 100.64.0.0/10 (Tailscale CGNAT, unreachable from internet) |
| Tailscale + Serve | Installed via cloud-init, joins your tailnet, exposes dashboard via Tailscale Serve (HTTPS) |
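The port-22 rule described above could be expressed with the AWS provider's Lightsail port resource. A minimal sketch — resource and reference names are assumptions, not necessarily the module's actual code:

```hcl
resource "aws_lightsail_instance_public_ports" "this" {
  instance_name = aws_lightsail_instance.this.name

  # Only SSH, and only from the Tailscale CGNAT range —
  # effectively unreachable from the public internet.
  port_info {
    protocol  = "tcp"
    from_port = 22
    to_port   = 22
    cidrs     = ["100.64.0.0/10"]
  }
}
```

Because no `port_info` block opens 80/443, every other public port stays closed; all real traffic rides the Tailscale tunnel instead.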
- Go to https://login.tailscale.com/admin/settings/keys
- Click Generate auth key
- Enable Reusable if you plan to recreate the instance
- Copy the key (starts with `tskey-auth-`)
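On the Terraform side the key arrives as a sensitive string variable. A sketch of a declaration that also sanity-checks the prefix — the `validation` block is an illustration, not necessarily present in the module:

```hcl
variable "tailscale_auth_key" {
  type        = string
  sensitive   = true
  description = "Tailscale auth key"

  validation {
    # startswith() requires Terraform >= 1.3
    condition     = startswith(var.tailscale_auth_key, "tskey-auth-")
    error_message = "Expected a Tailscale auth key (tskey-auth-...)."
  }
}
```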
```
.
├── main.tf            # Root module — label + openclaw module
├── variables.tf       # Root variables (namespace, environment, name, etc.)
├── outputs.tf         # Root outputs
├── providers.tf       # AWS provider with allowed_account_ids
├── backend.tf         # S3 backend declaration
├── backend.hcl        # S3 backend config (bucket, key, region, profile)
├── terraform.tfvars   # Variable values for this deployment
└── modules/
    └── openclaw/
        ├── main.tf       # Lightsail instance, static IP, ports, Tailscale
        ├── variables.tf  # Module inputs (context, AZ, bundle, auth key)
        └── outputs.tf    # Module outputs (id, name, IP, ARN)
```
```shell
aws s3api create-bucket \
  --bucket <your-bucket-name> \
  --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1 \
  --profile <your-profile>

aws s3api put-bucket-versioning \
  --bucket <your-bucket-name> \
  --versioning-configuration Status=Enabled \
  --profile <your-profile>

aws s3api put-bucket-encryption \
  --bucket <your-bucket-name> \
  --server-side-encryption-configuration \
    '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}' \
  --profile <your-profile>

aws s3api put-public-access-block \
  --bucket <your-bucket-name> \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true \
  --profile <your-profile>
```

```hcl
# backend.hcl
bucket  = "<your-bucket-name>"
key     = "openclaw-lightsail/terraform.tfstate"
region  = "eu-west-1"
profile = "<your-profile>"
encrypt = true
```

```hcl
# terraform.tfvars
namespace         = "myorg"
environment       = "sandbox"
name              = "openclaw"
availability_zone = "eu-west-1a"
bundle_id         = "medium_3_0"
```

Resource names are generated as `{namespace}-{environment}-{name}` (e.g., `myorg-sandbox-openclaw`) using the CloudPosse label module. Tags are applied automatically.
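The naming scheme comes from the CloudPosse label module; a sketch of how the root module likely wires it (exact arguments and version pin are assumptions — check the actual `main.tf`):

```hcl
module "label" {
  source = "cloudposse/label/null"

  namespace   = var.namespace   # "myorg"
  environment = var.environment # "sandbox"
  name        = var.name        # "openclaw"
}

# module.label.id renders as "myorg-sandbox-openclaw";
# module.label.tags carries the matching tag map applied to resources.
```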
```shell
terraform init -backend-config=backend.hcl
terraform plan -var='tailscale_auth_key=tskey-auth-...'
terraform apply -var='tailscale_auth_key=tskey-auth-...'
```

The auth key is passed on the command line to avoid storing it in files. Terraform marks it as sensitive and will not display it in plan output.
Once terraform apply completes, the instance is on your tailnet:
```shell
# SSH (uses Tailscale SSH — no key management needed)
ssh root@<namespace>-<environment>-<name>

# Get the dashboard URL (includes your tailnet suffix)
ssh root@<namespace>-<environment>-<name> 'tailscale serve status'

# Dashboard (HTTPS via Tailscale Serve — valid Let's Encrypt cert)
open https://<namespace>-<environment>-<name>.<tailnet>.ts.net
```

The dashboard is served through Tailscale Serve with a valid Let's Encrypt certificate. Connection-level authentication is handled by your Tailscale identity — no credentials are needed to reach the page. The `<tailnet>` suffix is specific to your Tailscale account (e.g., `tail8fe87f`).
Valid SSH users are root and ubuntu. Tailscale SSH is enabled (--ssh), so no SSH keys are needed — authentication is handled by your tailnet identity.
If Tailscale is running locally during apply, Terraform verifies tailnet connectivity before completing. If Tailscale is not running locally, the verification step is skipped with a warning — you can verify manually:
```shell
tailscale ping <namespace>-<environment>-<name>
```

The gateway has two auth layers: connection auth (handled by Tailscale) and device identity (requires a token from the browser). On first access you need to enter the gateway token once — it's stored in your browser's localStorage and reused automatically.
- Get the gateway token: `terraform output -raw gateway_token`
- Open the dashboard: `https://<namespace>-<environment>-<name>.<tailnet>.ts.net`
- Click the Overview tab (gear icon)
- Paste the gateway token into the Gateway Token field
- Click Connect
OpenClaw uses Amazon Bedrock as the default AI provider. The IAM role must be configured via the CloudShell script in the Lightsail console:
- Open the Lightsail console
- Click your instance
- Go to the Getting started tab
- Run the provided CloudShell script to set up Bedrock access
| Variable | Type | Default | Description |
|---|---|---|---|
| `namespace` | string | — | Organization abbreviation (e.g., `apro`) |
| `environment` | string | — | Environment name (e.g., `sandbox`, `prod`) |
| `name` | string | — | Application name (e.g., `openclaw`) |
| `tailscale_auth_key` | string | — | Tailscale auth key (sensitive) |
| `region` | string | `eu-west-1` | AWS region |
| `aws_profile` | string | `apro-datalake-sandbox` | AWS CLI profile |
| `aws_account_id` | string | `515966504419` | Allowed AWS account ID |
| `availability_zone` | string | `eu-west-1a` | Lightsail AZ |
| `bundle_id` | string | `medium_3_0` | Lightsail bundle (4 GB RAM, 2 vCPUs, 80 GB disk) |
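Per the layout above, `providers.tf` guards against applying to the wrong account with `allowed_account_ids`. A sketch using the root variables (the exact block may differ):

```hcl
provider "aws" {
  region  = var.region      # default: eu-west-1
  profile = var.aws_profile # AWS CLI profile from ~/.aws/config

  # Plans/applies fail fast if the profile resolves
  # to any account other than this one.
  allowed_account_ids = [var.aws_account_id]
}
```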
| Variable | Type | Default | Description |
|---|---|---|---|
| `context` | any | — | CloudPosse label context from root |
| `tailscale_auth_key` | string | — | Tailscale auth key (sensitive) |
| `availability_zone` | string | — | Lightsail AZ |
| `blueprint_id` | string | `openclaw_ls_1_0` | Lightsail blueprint |
| `bundle_id` | string | `medium_3_0` | Lightsail bundle |
| Output | Description |
|---|---|
| `id` | Label ID used for all resource names (e.g., `apro-sandbox-openclaw`) |
| `instance_name` | Lightsail instance name |
| `static_ip` | Static IP address (public ports are blocked) |
| `instance_arn` | Instance ARN |
| `gateway_token` | Gateway token for Control UI auth (sensitive — use `terraform output -raw gateway_token`) |
| `dashboard_url` | Tailscale Serve HTTPS URL for the dashboard |
The Lightsail firewall blocks all public access. The only open port is SSH (22) restricted to 100.64.0.0/10, which is the Tailscale CGNAT range — unreachable from the public internet.
Tailscale traffic flows through the WireGuard tunnel (tailscale0 interface), bypassing the Lightsail firewall entirely. The cloud-init script opens the OS-level firewall (UFW and iptables) for the tailscale0 interface and UDP port 41641 (WireGuard direct connections).
The OpenClaw gateway runs on 127.0.0.1:18789 and is exposed via Tailscale Serve (--tailscale serve). Tailscale Serve intercepts HTTPS connections on the Tailscale interface, terminates TLS with a valid Let's Encrypt certificate, and proxies to the gateway. The gateway accepts Tailscale identity headers (auth.allowTailscale: true) so no gateway token is needed.
```
Browser → Tailscale tunnel → Tailscale Serve (HTTPS, Let's Encrypt) → Gateway (ws://127.0.0.1:18789)
```
Apache still runs on ports 80/443 for the Lightsail console's "Getting Started" page but does not serve the dashboard. Dashboard access must use the Tailscale Serve FQDN (https://<hostname>.<tailnet>.ts.net).
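The cloud-init steps described above can be sketched as a `user_data` fragment. The command flags are assumptions based on current Tailscale CLI behavior — check the module's `main.tf` for the real script:

```hcl
resource "aws_lightsail_instance" "this" {
  # ... name, availability_zone, blueprint_id ("openclaw_ls_1_0"), bundle_id ...

  user_data = <<-EOF
    #!/bin/bash
    # Join the tailnet with Tailscale SSH enabled
    curl -fsSL https://tailscale.com/install.sh | sh
    tailscale up --authkey=${var.tailscale_auth_key} --ssh

    # Expose the local gateway (127.0.0.1:18789) over HTTPS
    # with a Let's Encrypt certificate on the tailnet FQDN
    tailscale serve --bg 18789
  EOF
}
```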
| Bundle | RAM | vCPUs | Disk | Price |
|---|---|---|---|---|
| `small_3_0` | 2 GB | 1 | 60 GB | $12/mo |
| `medium_3_0` | 4 GB | 2 | 80 GB | $24/mo |
| `large_3_0` | 8 GB | 2 | 160 GB | $48/mo |
| `xlarge_3_0` | 16 GB | 4 | 320 GB | $96/mo |
The medium_3_0 bundle is the recommended minimum (power=2000, exceeds the blueprint's minPower=1000).
Tailscale SSH maps your tailnet identity to a local user on the instance. The OpenClaw blueprint has root and ubuntu as valid users. If you see failed to look up local user, specify the user explicitly:
```shell
ssh root@<namespace>-<environment>-<name>
```

SSH into the instance via the Lightsail browser console and check the setup log:

```shell
sudo cat /var/log/tailscale-setup.log
sudo tailscale status
```

If a previous instance with the same name exists on your tailnet (offline), Tailscale appends `-1`, `-2`, etc. Remove stale devices at https://login.tailscale.com/admin/machines before redeploying.
The null_resource.wait_for_tailscale provisioner waits up to 5 minutes for the instance to appear on the tailnet. If local Tailscale is not running, the check is skipped. If it times out:
- Check that the auth key is valid and not expired
- Check `/var/log/tailscale-setup.log` on the instance
- Ensure the auth key allows the device to join (check tailnet policy)
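A sketch of what that provisioner might look like — the hostname wiring via `module.label.id` and the exact skip logic are assumptions:

```hcl
resource "null_resource" "wait_for_tailscale" {
  provisioner "local-exec" {
    command = <<-EOT
      if ! tailscale status >/dev/null 2>&1; then
        echo "WARNING: local Tailscale not running; skipping tailnet check"
        exit 0
      fi
      # Poll for up to 5 minutes (60 x 5 s)
      for i in $(seq 1 60); do
        tailscale ping -c 1 ${module.label.id} >/dev/null 2>&1 && exit 0
        sleep 5
      done
      echo "ERROR: instance never appeared on the tailnet" >&2
      exit 1
    EOT
  }

  depends_on = [aws_lightsail_instance.this]
}
```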
If re-deploying to a fresh environment, the S3 bucket must be created before `terraform init`. If the bucket already exists from a previous deployment, just run `terraform init -backend-config=backend.hcl`.
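The split between `backend.tf` and `backend.hcl` is Terraform's partial backend configuration: the declaration is left empty and the settings are merged in at init time. A sketch of the likely declaration:

```hcl
# backend.tf — settings deliberately omitted;
# supplied via `terraform init -backend-config=backend.hcl`
terraform {
  backend "s3" {}
}
```

This keeps environment-specific details (bucket, profile) out of version-controlled `.tf` files while the backend type itself stays declared.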