A demonstration project showing how Cilium network policies and Istio ambient service mesh work together to provide defense-in-depth security for Kubernetes workloads.
This lab deploys a simple 3-tier application:
- hello-gateway: Istio ingress gateway (Kubernetes Gateway API)
- hello-app: Python Flask application that queries PostgreSQL and external APIs
- postgres: PostgreSQL 17 database
- Kubernetes: kind cluster (local development)
- Cilium: CNI providing L3/L4/L7 network policy enforcement
- Istio Ambient Mode: Service mesh without sidecars, providing mTLS and identity-based policies
- ztunnel: Istio's L4 proxy (DaemonSet) that handles transparent mTLS encryption
```
External Client
    ↓ (HTTP plaintext)
hello-gateway pod
    ↓ (app sends to hello-app:8000)
ztunnel (node-local)
    ↓ (mTLS encrypted, SPIFFE identity attached)
Network
    ↓ (mTLS encrypted tunnel)
ztunnel (destination node)
    ↓ (mTLS decrypted)
hello-app pod
    ↓ (app sends to postgres:5432)
ztunnel (node-local)
    ↓ (mTLS encrypted)
Network
    ↓ (mTLS encrypted tunnel)
ztunnel (destination node)
    ↓ (mTLS decrypted)
postgres pod
```
Unlike traditional Istio with sidecar proxies, ambient mode uses a shared node-local proxy (ztunnel):
- Traffic Redirection: iptables rules redirect pod traffic to ztunnel on the same node
- Identity Injection: ztunnel reads the pod's ServiceAccount and injects SPIFFE identity
- mTLS Encryption: ztunnel encrypts traffic with mTLS before sending to network
- Policy Enforcement: ztunnel enforces Istio AuthorizationPolicy based on identity
- Transparent: Pods are unaware of mTLS - no code changes or sidecar required
- Issuer: istiod (Istio control plane CA)
- Identity Format: `spiffe://cluster.local/ns/<namespace>/sa/<serviceaccount>`
- Lifetime: 24 hours (default)
- Rotation: Automatic at 50% lifetime (12 hours)
- Storage: Private keys stored in ztunnel memory, never written to disk
- Validation: Peer certificates validated on every connection
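The rotation rule above is simple arithmetic: a certificate with a 24-hour lifetime is renewed once half of that lifetime has elapsed. A minimal sketch (the function name is illustrative, not an Istio API):

```python
from datetime import datetime, timedelta

def next_rotation(issued_at: datetime,
                  lifetime: timedelta = timedelta(hours=24)) -> datetime:
    """Istio rotates workload certificates at 50% of their lifetime."""
    return issued_at + lifetime / 2

# A cert issued at midnight with the default 24h lifetime rotates at noon.
issued = datetime(2024, 1, 1, 0, 0)
assert next_rotation(issued) == datetime(2024, 1, 1, 12, 0)
```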
Example identities in this lab:
- Gateway: `spiffe://cluster.local/ns/demo/sa/hello-gateway-istio`
- hello-app: `spiffe://cluster.local/ns/demo/sa/hello-app`
- postgres: `spiffe://cluster.local/ns/demo/sa/postgres`
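The identity format is purely mechanical, which is easy to see as a sketch (the helper function is illustrative, not part of any Istio library):

```python
def spiffe_id(namespace: str, service_account: str,
              trust_domain: str = "cluster.local") -> str:
    """Build the SPIFFE URI Istio encodes in a workload's certificate."""
    return f"spiffe://{trust_domain}/ns/{namespace}/sa/{service_account}"

# Matches the identities listed above for this lab's workloads.
assert spiffe_id("demo", "hello-app") == "spiffe://cluster.local/ns/demo/sa/hello-app"
assert spiffe_id("demo", "postgres") == "spiffe://cluster.local/ns/demo/sa/postgres"
```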
Every workload gets a cryptographic identity based on its Kubernetes ServiceAccount:
```yaml
# ServiceAccount defines the identity
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hello-app
  namespace: demo
# Istio automatically issues a certificate with identity:
# spiffe://cluster.local/ns/demo/sa/hello-app
```

Enforces WHO can access WHAT based on cryptographic identity:
```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: hello-app-policy
  namespace: demo
spec:
  selector:
    matchLabels:
      app: hello-app
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/demo/sa/hello-gateway-istio
    to:
    - operation:
        ports:
        - "8000"
```

This policy says:
- Only the gateway (with identity `cluster.local/ns/demo/sa/hello-gateway-istio`) can access hello-app
- Access is only allowed on port 8000
- All other traffic is denied by default
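The evaluation logic behind that policy can be sketched in a few lines. This is a simplified model of ztunnel's ALLOW evaluation, not Istio code; the function and variable names are illustrative:

```python
def authorize(source_principal: str, dest_port: int,
              allowed_principals: set[str], allowed_ports: set[int]) -> bool:
    """Default deny: permit only if a rule matches BOTH the caller's
    identity and the destination port."""
    return source_principal in allowed_principals and dest_port in allowed_ports

# The hello-app-policy above, as data:
POLICY_PRINCIPALS = {"cluster.local/ns/demo/sa/hello-gateway-istio"}
POLICY_PORTS = {8000}

# Gateway on port 8000: allowed
assert authorize("cluster.local/ns/demo/sa/hello-gateway-istio", 8000,
                 POLICY_PRINCIPALS, POLICY_PORTS)
# Any other identity, even on the right port: denied by default
assert not authorize("cluster.local/ns/demo/sa/postgres", 8000,
                     POLICY_PRINCIPALS, POLICY_PORTS)
```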
Enforces that ALL communication uses mTLS:
```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: demo
spec:
  mtls:
    mode: STRICT
```

- STRICT: Only accept mTLS connections (reject plaintext)
- PERMISSIVE: Accept both mTLS and plaintext (for migration)
- This lab uses STRICT for zero-trust security
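The difference between the two modes boils down to one branch. A minimal model (not Istio code; names are illustrative):

```python
def accept_connection(mode: str, peer_has_mtls_cert: bool) -> bool:
    """PeerAuthentication as a decision: STRICT rejects plaintext,
    PERMISSIVE accepts both (useful while migrating workloads)."""
    if mode == "STRICT":
        return peer_has_mtls_cert
    if mode == "PERMISSIVE":
        return True
    raise ValueError(f"unknown mode: {mode}")

assert accept_connection("STRICT", peer_has_mtls_cert=True)
assert not accept_connection("STRICT", peer_has_mtls_cert=False)   # plaintext rejected
assert accept_connection("PERMISSIVE", peer_has_mtls_cert=False)   # plaintext tolerated
```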
Both Cilium and Istio enforce policies, providing layered security:
| Layer | Technology | What It Enforces | Policy Type |
|---|---|---|---|
| L3/L4 Network | Cilium | IP addresses, ports, protocols | CiliumNetworkPolicy |
| L4/L7 Identity | Istio | Workload identity, HTTP paths | AuthorizationPolicy |
| Encryption | Istio | mTLS for all communication | PeerAuthentication |
Both layers must allow traffic for a connection to succeed.
```
cilium-network-policies/
├── 00-base.yaml           # Default deny + DNS
├── 01-istio-ambient.yaml  # HBONE (ztunnel communication)
├── 02-hello-app.yaml      # hello-app specific policies
├── 03-postgres.yaml       # postgres specific policies
├── 04-gateway.yaml        # gateway specific policies
└── 05-observability.yaml  # Prometheus metrics scraping
```
Cilium can enforce L7 policies even with mTLS traffic by inspecting the SNI (Server Name Indication) field:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: hello-app-to-github
  namespace: demo
spec:
  endpointSelector:
    matchLabels:
      app: hello-app
  egress:
  - toEntities:
    - world
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
      serverNames:
      - "api.github.com"  # ALLOWED
      # api.cloudflare.com would be BLOCKED
```

This works because:
- SNI is sent in plaintext during the TLS handshake (before encryption)
- Cilium's eBPF programs can inspect SNI before the connection is established
- Only connections to `api.github.com` are allowed; others are dropped
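The reason SNI filtering is possible at all is that the server name travels unencrypted in the ClientHello. A sketch of the `server_name` extension wire format (RFC 6066) shows how an on-path inspector can read it without any TLS keys — this is an illustration of the format, not Cilium's actual implementation:

```python
import struct

def encode_sni_extension(hostname: str) -> bytes:
    """Encode a TLS server_name extension body (RFC 6066)."""
    name = hostname.encode("ascii")
    entry = struct.pack("!BH", 0, len(name)) + name  # name_type 0 = host_name
    return struct.pack("!H", len(entry)) + entry     # server_name_list length

def decode_sni(ext_body: bytes) -> str:
    """Extract the hostname exactly as a plaintext on-path observer can."""
    (list_len,) = struct.unpack("!H", ext_body[:2])
    name_type, name_len = struct.unpack("!BH", ext_body[2:5])
    assert name_type == 0
    return ext_body[5:5 + name_len].decode("ascii")

wire = encode_sni_extension("api.github.com")
assert decode_sni(wire) == "api.github.com"
```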
Important: DNS policy in the demo namespace cannot effectively restrict DNS destinations due to Istio ambient architecture:
- Pods in the `demo` namespace have Cilium policies applied
- All traffic is redirected through ztunnel (in the `istio-system` namespace)
- ztunnel has no policy enforcement (Cilium policies disabled)
- ztunnel makes DNS queries on behalf of pods
- Therefore, DNS can reach any destination regardless of `demo` namespace policy
To truly restrict DNS:
- Apply Cilium policies to ztunnel in the `istio-system` namespace, OR
- Use Cilium DNS proxy with L7 visibility, OR
- Use Istio ServiceEntry resources to control external service access
The current allow-dns policy is honest - it allows DNS without claiming to restrict destinations.
```
Pod (hello-app)
    ↓
[Cilium eBPF] ← CiliumNetworkPolicy enforcement (L3/L4/L7 network)
    ↓
iptables redirect
    ↓
ztunnel
    ↓
[Istio Policy] ← AuthorizationPolicy enforcement (L4 identity-based)
    ↓
[mTLS Encryption] ← PeerAuthentication enforcement (STRICT mode)
    ↓
Network
```
- Cilium enforces: source/destination IP, ports, protocols, SNI (L7)
- Istio enforces: source/destination identity (SPIFFE), mTLS requirement
1. Cilium NetworkPolicy evaluated first (eBPF at the network interface)
   - If blocked: packet dropped, connection fails
   - If allowed: proceeds to next layer
2. iptables redirect to ztunnel (transparent to the pod)
3. Istio AuthorizationPolicy evaluated by ztunnel
   - If no ALLOW rule matches: connection rejected (default deny)
   - If an ALLOW rule matches: proceeds to mTLS
4. Istio PeerAuthentication enforced by ztunnel
   - If STRICT mode and the peer doesn't present a valid mTLS cert: connection rejected
   - If valid mTLS cert: connection established
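The ordered evaluation above is an AND of independent layers: any layer can veto the connection. A minimal sketch of that pipeline (illustrative names, not real datapath code):

```python
def connection_allowed(cilium_allows: bool, identity_allowed: bool,
                       mtls_mode: str, peer_has_mtls: bool) -> bool:
    """Every layer must pass for the connection to succeed."""
    if not cilium_allows:                            # 1. eBPF drops the packet
        return False
    # 2. iptables transparently redirects to ztunnel (no decision made here)
    if not identity_allowed:                         # 3. AuthorizationPolicy default-deny
        return False
    if mtls_mode == "STRICT" and not peer_has_mtls:  # 4. PeerAuthentication
        return False
    return True

assert connection_allowed(True, True, "STRICT", True)       # all layers pass
assert not connection_allowed(False, True, "STRICT", True)  # dropped by Cilium
assert not connection_allowed(True, True, "STRICT", False)  # plaintext rejected
```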
```
┌───────────────────────────────────────────────────────────────┐
│                       Application Layer                       │
│                 (hello-app, postgres, gateway)                │
└───────────────────────────────────────────────────────────────┘
                               ↓
┌───────────────────────────────────────────────────────────────┐
│                   Istio AuthorizationPolicy                   │
│               (Identity-based L4/L7 enforcement)              │
│  • Validates SPIFFE identity from mTLS certificate            │
│  • cluster.local/ns/demo/sa/hello-app                         │
│  • Default DENY, explicit ALLOW rules required                │
└───────────────────────────────────────────────────────────────┘
                               ↓
┌───────────────────────────────────────────────────────────────┐
│                Istio PeerAuthentication (mTLS)                │
│               (Encryption & Identity Transport)               │
│  • STRICT mode - reject plaintext connections                 │
│  • Certificates issued by istiod CA                           │
│  • Auto-rotation every 12 hours (24h lifetime)                │
└───────────────────────────────────────────────────────────────┘
                               ↓
┌───────────────────────────────────────────────────────────────┐
│                    ztunnel (L4 Proxy Layer)                   │
│             (Transparent mTLS Encryption/Decryption)          │
│  • DaemonSet - one per node                                   │
│  • Enforces AuthorizationPolicy                               │
│  • Handles certificate management                             │
│  • Port 15001 (outbound), 15006 (inbound), 15008 (HBONE)      │
└───────────────────────────────────────────────────────────────┘
                               ↓
┌───────────────────────────────────────────────────────────────┐
│                Cilium NetworkPolicy (eBPF Layer)              │
│              (L3/L4/L7 Network-level enforcement)             │
│  • IP address, port, protocol filtering                       │
│  • SNI-based L7 HTTPS filtering                               │
│  • DNS policy (with caveats in ambient mode)                  │
│  • toEntities: world, cluster, host                           │
└───────────────────────────────────────────────────────────────┘
                               ↓
┌───────────────────────────────────────────────────────────────┐
│                     Network Infrastructure                    │
│                   (Physical/Virtual Network)                  │
└───────────────────────────────────────────────────────────────┘
```
- Default Deny: Both Cilium and Istio default to denying all traffic
- Least Privilege: Each workload gets only the minimum required permissions
- Identity-Based: Authorization based on cryptographic identity, not IP
- Defense in Depth: Multiple layers of security (network + identity + encryption)
- Encrypted Communication: All workload-to-workload traffic uses mTLS
Cilium NetworkPolicy:
- Network-level attacks (port scanning, network pivoting)
- Exfiltration via unexpected ports or protocols
- Access to unauthorized network destinations
- DNS tunneling (with proper ztunnel policies)
Istio AuthorizationPolicy:
- Lateral movement (compromised workload accessing other services)
- Identity spoofing (attacker must have valid mTLS certificate)
- Unauthorized access (even if network policy allows, identity policy can deny)
Istio PeerAuthentication:
- Man-in-the-middle attacks (mTLS provides confidentiality and integrity)
- Eavesdropping (all traffic encrypted)
- Plaintext protocol attacks (STRICT mode rejects non-mTLS)
- Docker
- kind (Kubernetes v1.34.0)
- kubectl
- helm
- cilium CLI (v0.18.9)
- istioctl (v1.28.1)
```shell
kind create cluster --name ebpf-lab --config kind-config.yaml
```

This creates a 2-node cluster with the default CNI disabled.
```shell
cilium install --version 1.18.3
cilium status --wait
```

Cilium becomes the CNI and handles pod networking.

Installed version: Cilium 1.18.3
```shell
kubectl apply --server-side -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.1/experimental-install.yaml
```

This installs the Kubernetes Gateway API CRDs required by Istio.
```shell
istioctl install --set profile=ambient -y
```

This installs:
- istiod (control plane)
- ztunnel (DaemonSet on each node)
- CNI plugin (for traffic redirection)
Installed version: Istio 1.28.1
```shell
kubectl create namespace demo
kubectl label namespace demo istio.io/dataplane-mode=ambient
```

All pods in this namespace will automatically use ztunnel for mTLS.
```shell
# Build application image (if not already built)
docker build -t hello-db-app:latest ./app
kind load docker-image hello-db-app:latest --name ebpf-lab

# Deploy postgres and hello-app
kubectl apply -f pg.yaml
kubectl apply -f hello-app.yaml

# Deploy gateway
# Note: The Gateway resource has an annotation to create the service as ClusterIP
# instead of LoadBalancer (default), which is required for kind clusters
kubectl apply -f hello-gateway.yaml

# Apply Istio policies
kubectl apply -f peer-authentication.yaml
kubectl apply -f hello-policy-l4.yaml

# Apply Cilium network policies
kubectl apply -f cilium-network-policies/
```

Apply policies in order:
- Base policies (default deny + DNS)
- Istio ambient policies (HBONE)
- Workload-specific policies
- Gateway policies
- Observability policies
```shell
# Check Cilium policy enforcement
kubectl get ciliumnetworkpolicies -n demo

# Check Istio policies
kubectl get peerauthentication,authorizationpolicy -n demo

# Check workload status
kubectl get pods -n demo

# Verify mTLS is enforced (check PeerAuthentication)
kubectl get peerauthentication -n demo -o yaml

# Verify Gateway service is ClusterIP (not LoadBalancer)
kubectl get svc -n demo hello-gateway-istio

# If the service is LoadBalancer (from a previous deployment), patch it:
# kubectl patch svc hello-gateway-istio -n demo -p '{"spec":{"type":"ClusterIP"}}'
```

```shell
# The gateway service is ClusterIP (for kind compatibility), so use port-forward
kubectl port-forward -n demo svc/hello-gateway-istio 8080:80

# Access application (shows status page with DB, GitHub, and Cloudflare connectivity)
curl http://localhost:8080/

# Check health endpoint
curl http://localhost:8080/health

# Or open in browser for a nice UI
open http://localhost:8080/
```

```shell
# Deploy a test pod without proper identity
kubectl run -n demo test --image=curlimages/curl --rm -it -- /bin/sh

# Try to access hello-app (should fail - no matching AuthorizationPolicy)
curl hello-app:8000

# Try to access postgres (should fail - blocked by both Cilium and Istio)
curl postgres:5432
```

```shell
# Get endpoint ID for hello-app pod
kubectl get ciliumendpoints -n demo

# View applied policies (run from inside Cilium pod)
kubectl exec -n kube-system ds/cilium -- cilium-dbg policy get <endpoint-id>

# Monitor policy decisions (run from inside Cilium pod)
kubectl exec -n kube-system ds/cilium -- cilium-dbg monitor --type policy-verdict
```

MIT License - use freely for learning and demonstration purposes.
Requirement: hello-app needs to access api.example.com on HTTPS.
Solution: Add Cilium NetworkPolicy with SNI filtering:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: hello-app-to-example-api
  namespace: demo
spec:
  endpointSelector:
    matchLabels:
      app: hello-app
  egress:
  - toEntities:
    - world
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
      serverNames:
      - "api.example.com"
```

Requirement: Deploy worker-app that needs to query postgres.
Steps:
- Create a ServiceAccount for identity:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: worker-app
  namespace: demo
```

- Add an Istio AuthorizationPolicy to postgres:
```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: postgres-policy
  namespace: demo
spec:
  selector:
    matchLabels:
      app: postgres
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/demo/sa/hello-app
        - cluster.local/ns/demo/sa/worker-app  # Add this
    to:
    - operation:
        ports:
        - "5432"
```

- Add a Cilium NetworkPolicy:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: worker-app-to-postgres
  namespace: demo
spec:
  endpointSelector:
    matchLabels:
      app: worker-app
  egress:
  - toEndpoints:
    - matchLabels:
        app: postgres
    toPorts:
    - ports:
      - port: "5432"
        protocol: TCP
```

Problem: New workload can't connect to an existing service.
Debugging Steps:
- Check Cilium policy drops:

```shell
# Monitor from the specific Cilium pod on the node where your workload runs
kubectl exec -n kube-system <cilium-pod-name> -- cilium-dbg monitor --type drop
```

- Check Istio policy denials:

```shell
kubectl logs -n istio-system -l app=ztunnel | grep -i "denied\|rejected"
```

- Verify identity:

```shell
istioctl x describe pod <pod-name> -n demo
```

- Check endpoint policies:

```shell
kubectl get ciliumendpoints -n demo <pod-name> -o yaml
```