diff --git a/ansible/files/wiab_server_nftables.conf.j2 b/ansible/files/wiab_server_nftables.conf.j2
index 709c0e6c9..5ec25b52e 100644
--- a/ansible/files/wiab_server_nftables.conf.j2
+++ b/ansible/files/wiab_server_nftables.conf.j2
@@ -67,7 +67,9 @@ table ip nat {
chain POSTROUTING {
type nat hook postrouting priority 100;
oifname != docker0 ip saddr 172.17.0.0/16 counter masquerade
+{% if not (private_deployment | default(false) | bool) %}
oifname $INF_WAN counter masquerade comment "{{ wire_comment }} masquerade outgoing traffic"
+{% endif %}
}
chain DOCKER {
iifname docker0 counter return
diff --git a/ansible/inventory/demo/wiab-staging.yml b/ansible/inventory/demo/wiab-staging.yml
index fb3ee33fd..8fda6821d 100644
--- a/ansible/inventory/demo/wiab-staging.yml
+++ b/ansible/inventory/demo/wiab-staging.yml
@@ -6,4 +6,6 @@ wiab-staging:
ansible_user: 'demo'
ansible_ssh_private_key_file: "~/.ssh/id_ed25519"
vars:
- artifact_hash: 2200257f7a528f3a8157e8878fc7ee1c945594d1
+ artifact_hash: 7da2319729ba792f91d7ccba4e026c21cd3a3691
+ # it will disable internet access to VMs created on the private network
+ private_deployment: true
diff --git a/ansible/wiab-staging-provision.yml b/ansible/wiab-staging-provision.yml
index 39c24027d..2ac5d2187 100644
--- a/ansible/wiab-staging-provision.yml
+++ b/ansible/wiab-staging-provision.yml
@@ -298,9 +298,8 @@
kubenode2_ip: "{{ kubenode_ip_result.results[1].stdout }}"
kubenode3_ip: "{{ kubenode_ip_result.results[2].stdout }}"
wire_comment: "wiab-stag"
-
tags: always
- name: Configure nftables
import_playbook: ./wiab-staging-nftables.yaml
- tags: nftables
+ tags: [never, nftables]
diff --git a/bin/helm-operations.sh b/bin/helm-operations.sh
index 9dbe49a58..30b645d80 100755
--- a/bin/helm-operations.sh
+++ b/bin/helm-operations.sh
@@ -5,7 +5,7 @@ set -Eeo pipefail
# Read values from environment variables with defaults
BASE_DIR="${BASE_DIR:-/wire-server-deploy}"
TARGET_SYSTEM="${TARGET_SYSTEM:-example.com}"
-CERT_MASTER_EMAIL="certmaster@${CERT_MASTER_EMAIL}:-certmaster@${TARGET_SYSTEM}"
+CERT_MASTER_EMAIL="${CERT_MASTER_EMAIL:-certmaster@example.com}"
# DEPLOY_CERT_MANAGER env variable is used to decide if cert_manager and nginx-ingress-services charts should get deployed
# default is set to TRUE to deploy it unless changed
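The corrected line relies on standard shell default expansion: `${VAR:-default}` keeps an already-set value and otherwise falls back to the default. A minimal sketch of the behavior (values illustrative):

```shell
# ${VAR:-default}: use $VAR if set and non-empty, otherwise the fallback
unset CERT_MASTER_EMAIL
CERT_MASTER_EMAIL="${CERT_MASTER_EMAIL:-certmaster@example.com}"
echo "$CERT_MASTER_EMAIL"   # certmaster@example.com

CERT_MASTER_EMAIL="admin@corp.example"
CERT_MASTER_EMAIL="${CERT_MASTER_EMAIL:-certmaster@example.com}"
echo "$CERT_MASTER_EMAIL"   # admin@corp.example
```

The previous version nested the variable inside a literal string, so the `:-` fallback was never interpreted as an expansion operator.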
diff --git a/changelog.d/3-deploy-builds/wiab-stag-nftables-snat-fix b/changelog.d/3-deploy-builds/wiab-stag-nftables-snat-fix
new file mode 100644
index 000000000..55ceb183e
--- /dev/null
+++ b/changelog.d/3-deploy-builds/wiab-stag-nftables-snat-fix
@@ -0,0 +1,5 @@
+Added: `private_deployment` variable (default `true`) to disable SNAT on the adminhost
+Fixed: CERT_MASTER_EMAIL env var parsing in helm-operations.sh
+Fixed: made running the wiab-staging-nftables.yaml playbook explicit
+Added: wiab-staging.md details on SNAT access being denied by default and how to enable it
+Added: wiab-staging.md network flow diagram
diff --git a/offline/wiab-staging.md b/offline/wiab-staging.md
index d3a3de4ea..53b7cef31 100644
--- a/offline/wiab-staging.md
+++ b/offline/wiab-staging.md
@@ -1,6 +1,6 @@
# Scope
-**Wire in a Box (WIAB) Staging** is a demo installation of Wire running on a single physical machine using KVM-based virtual machines. This setup replicates the multi-node production Wire architecture in a consolidated environment suitable for testing, evaluation, and learning about Wire's infrastructure—but **not for production use**.
+**Wire in a Box (WIAB) Staging** is a staging installation of Wire running on a single physical machine using KVM-based virtual machines. This setup replicates the multi-node production Wire architecture in a consolidated environment suitable for testing, evaluation, and learning about Wire's infrastructure—but **not for production use**.
**Important:** This is a sandbox environment. Data from a staging installation cannot be migrated to production. WIAB Staging is designed for experimentation, validation, and understanding Wire's deployment model.
@@ -13,11 +13,11 @@
- This solution helps developers understand Wire's infrastructure requirements and test deployment processes
**Resource Requirements:**
-- One physical machine with hypervisor support:
+- One physical machine (aka `adminhost`) with hypervisor support:
- **Memory:** 55 GiB RAM
- **Compute:** 29 vCPUs
- **Storage:** 850 GB disk space (thin-provisioned)
- - 7 VMs with [Ubuntu 22](https://releases.ubuntu.com/jammy/) as per (#VM-Provisioning)
+ - 7 VMs with [Ubuntu 22](https://releases.ubuntu.com/jammy/) as per [required resources](#vm-provisioning)
- **DNS Records**:
- a way to create DNS records for your domain name (e.g. wire.example.com)
- Find a detailed explanation at [How to set up DNS records](https://docs.wire.com/latest/how-to/install/demo-wiab.html#dns-requirements)
@@ -50,20 +50,32 @@ We would require 7 VMs as per the following details, you can choose to use your
- **kubenodes (kubenode1, kubenode2, kubenode3):** Run the Kubernetes cluster and host Wire backend services
- **datanodes (datanode1, datanode2, datanode3):** Run distributed data services:
- - Cassandra (distributed database)
- - PostgreSQL (operational database)
- - Elasticsearch (search engine)
- - Minio (S3-compatible object storage)
- - RabbitMQ (message broker)
+ - Cassandra
+ - PostgreSQL
+ - Elasticsearch
+ - Minio
+ - RabbitMQ
- **assethost:** Hosts static assets to be used by kubenodes and datanodes
+### Internet access for VMs
+
+In most cases, Wire Server components do not require internet access, except in the following situations:
+- **External email services** – If your users’ email servers are hosted on the public internet (for example, user@gmail.com).
+- **Mobile push notifications (FCM/APNS)** – Required to enable notifications for Android and Apple mobile devices. Wire uses [AWS services](https://docs.wire.com/latest/how-to/install/infrastructure-configuration.html#enable-push-notifications-using-the-public-appstore-playstore-mobile-wire-clients) to relay notifications to Firebase Cloud Messaging (FCM) and Apple Push Notification Service (APNS).
+- **Third-party content previews** – If you want clients to display previews for services such as Giphy, Google, Spotify, or SoundCloud. Wire provides a proxy service for third-party content so clients do not communicate directly with these services, preventing exposure of IP addresses, cookies, or other metadata.
+- **Federation with other Wire servers** – Required if your deployment needs to federate with another Wire server hosted on the public internet.
+
+> **Note:** Internet access is also required by the cert-manager pods (to reach Let's Encrypt) to issue TLS certificates when manual certificates are not used.
+>
+> This internet access is temporarily enabled as described in [cert-manager behaviour in NAT / bridge environments](#cert-manager-behaviour-in-nat--bridge-environments) to allow certificate issuance. Once the certificates are successfully issued by cert-manager, the internet access is removed from the VMs.
+
## WIAB staging ansible playbook
-The ansible playbook will perform the following operations for you:
+The ansible playbook will perform the following operations for you. It expects internet access on the target system in order to download and install packages:
**System Setup & Networking**:
- Updates all system packages and installs required tools (git, curl, docker, qemu, libvirt, yq, etc.)
- - Configures SSH, firewall (nftables), and user permissions (sudo, kvm, docker groups)
+ - Configures SSH and user permissions (sudo, kvm, docker groups)
**wire-server-deploy Artifact & Ubuntu Cloud Image**:
- Downloads wire-server-deploy static artifact and Ubuntu cloud image
@@ -79,7 +91,6 @@ The ansible playbook will perform the following operations for you:
- Generates inventory.yml with actual VM IPs replacing placeholders
- Configures network interface variables for all k8s-nodes and datanodes
-
*Note: Skip the Ansible playbook step if you are managing VMs with your own hypervisor.*
### Getting started with Ansible playbook
@@ -106,8 +117,9 @@ cd wire-server-deploy
**Step 2: Configure your Ansible inventory for your physical machine**
A sample inventory is available at [ansible/inventory/demo/wiab-staging.yml](https://github.com/wireapp/wire-server-deploy/blob/master/ansible/inventory/demo/wiab-staging.yml).
+Replace example.com with the address of your physical machine where KVM is available and adjust other variables such as `ansible_user` and `ansible_ssh_private_key_file`. The ansible SSH user (`ansible_user`) should have password-less `sudo` access, and the machine should be running Ubuntu 22.04. From here on, we will refer to the physical machine as the `adminhost`.
-*Note: Replace example.com with your physical machine (adminhost) address where KVM is available and adjust other variables like ansible_user and ansible_ssh_private_key_file. The SSH user for ansible `ansible_user` should have password-less `sudo` access. The physical host should be running Ubuntu 22.04.*
+The `private_deployment` variable determines whether the VMs created below will have internet access. When set to `true` (the default), the VMs have no internet access. See [Internet access for VMs](#internet-access-for-vms) for details.
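To restore VM internet access later, flip the flag in the same inventory and re-apply the nftables rules (a sketch; only the changed variable is shown):

```yaml
vars:
  # false => the masquerade rule is rendered, giving VMs internet access
  private_deployment: false
```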
**Step 3: Run the VM and network provision**
@@ -119,28 +131,45 @@ ansible-playbook -i ansible/inventory/demo/wiab-staging.yml ansible/wiab-staging
## Ensure secondary ansible inventory for VMs
-Now you should have 7 VMs running on your physical machine. If you have used the ansible playbook, you should also have a directory `/home/ansible_user/wire-server-deploy` with all resources required for further deployment. If you didn't use the above playbook, download the `wire-server-deploy` artifact shared by Wire support and unarchieve (tar tgz) it.
+Now you should have 7 VMs running on your `adminhost`. If you have used the ansible playbook, you should also have a directory `/home/ansible_user/wire-server-deploy` with all resources required for further deployment. If you didn't use the above playbook, download the `wire-server-deploy` artifact shared by Wire support and unarchive it (a tar/tgz archive).
Ensure the inventory file `ansible/inventory/offline/inventory.yml` in the directory `/home/ansible_user/wire-server-deploy` contains values corresponding to your VMs. If you have already used the [Ansible playbook above](#getting-started-with-ansible-playbook) to set up VMs, this file should have been prepared for you.
+The purpose of the secondary ansible inventory is to interact only with the VMs: all operations using it install the datastores and k8s services.
+
## Next steps
Since the inventory is ready, please continue with the following steps:
-> **Note**: All next steps assume that the wire-server-deploy artifact has been downloaded on the `adminhost` (your physical machine) and extracted at `/home/ansible_user/wire-server-deploy`. All commands from here on will be issued from this directory on the `adminhost`, ssh on the node before proceeding.
+> **Note**: All next steps assume that the wire-server-deploy artifact has been downloaded on the `adminhost` and extracted at `/home/ansible_user/wire-server-deploy`. All commands from here on are issued from this directory on the `adminhost`; SSH into the node before proceeding.
### Environment Setup
-- **[Making tooling available in your environment](docs_ubuntu_22.04.md#making-tooling-available-in-your-environment)**
- - Source the `bin/offline-env.sh` shell script by running `source bin/offline-env.sh` to set up a `d` alias that runs commands inside a Docker container with all necessary tools for offline deployment.
-
- **[Generating secrets](docs_ubuntu_22.04.md#generating-secrets)**
- - Run `./bin/offline-secrets.sh` to generate fresh secrets for Minio and coturn services. This creates two secret files: `ansible/inventory/group_vars/all/secrets.yaml` and `values/wire-server/secrets.yaml`.
+ - Run `bin/offline-secrets.sh` to generate fresh secrets for Minio and coturn services. It uses the docker container images shipped inside the `wire-server-deploy` directory.
+ ```bash
+ ./bin/offline-secrets.sh
+ ```
+ - This creates following secret files:
+ - `ansible/inventory/group_vars/all/secrets.yaml`
+ - `values/wire-server/secrets.yaml`
+ - `values/coturn/prod-secrets.example.yaml`
+
+- **[Making tooling available in your environment](docs_ubuntu_22.04.md#making-tooling-available-in-your-environment)**
+ - Source the `bin/offline-env.sh` shell script by running the following command to set up a `d` alias that runs commands inside a Docker container with all necessary tools for offline deployment.
+ ```bash
+ source bin/offline-env.sh
+ ```
+ - You can use the `d` alias later to interact with the ansible playbooks, the k8s cluster, and the helm charts.
+ - The docker container mounts everything from the `wire-server-deploy` directory, so it acts as the entry point for all future interactions with ansible, k8s, and helm charts.
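The wrapper pattern behind the `d` alias can be illustrated with a toy stand-in (the real alias starts a Docker container; the `echo` here is only for illustration):

```shell
# Toy stand-in for the "d" alias: the real one runs the command in a container
d() { echo "in-container:" "$@"; }

d kubectl get nodes
# prints: in-container: kubectl get nodes
```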
### Kubernetes & Data Services Deployment
- **[Deploying Kubernetes and stateful services](docs_ubuntu_22.04.md#deploying-kubernetes-and-stateful-services)**
- - Run `d ./bin/offline-cluster.sh` to deploy Kubernetes and stateful services (Cassandra, PostgreSQL, Elasticsearch, Minio, RabbitMQ). This script deploys all infrastructure needed for Wire backend operations.
+ ```bash
+ d ./bin/offline-cluster.sh
+ ```
+ - Run the above command to deploy Kubernetes and stateful services (Cassandra, PostgreSQL, Elasticsearch, Minio, RabbitMQ). This script deploys all infrastructure needed for Wire backend operations.
### Helm Operations to install wire services and supporting helm charts
@@ -177,59 +206,111 @@ d sh -c 'TARGET_SYSTEM="example.dev" CERT_MASTER_EMAIL="certmaster@example.dev"
## Network Traffic Configuration
-### Bring traffic from the physical machine to Wire services in the k8s cluster
+### Bring traffic from the adminhost to Wire services in the k8s cluster
-If you used the Ansible playbook earlier, nftables firewall rules are pre-configured to forward traffic. If you set up VMs manually with your own hypervisor, you must manually configure network traffic flow using nftables as descibed below.
+Our Wire services are ready to receive traffic, but we must enable network access from the `adminhost` network interface to the k8s pods running in the virtual network. We can achieve this by setting up [nftables](https://documentation.ubuntu.com/security/security-features/network/firewall/nftables/) rules on the `adminhost`. If you use a different firewall tool, ensure the following network configuration is achieved.
**Required Network Configuration:**
-The physical machine (adminhost) must forward traffic from external clients to the Kubernetes cluster running Wire services. This involves:
-
-1. **HTTP/HTTPS Traffic (Ingress)** - Forward ports 80 and 443 to the nginx-ingress-controller running on a Kubernetes node
- - Port 80 (HTTP) → Kubernetes node port 31772
- - Port 443 (HTTPS) → Kubernetes node port 31773
-
-2. **Calling Services Traffic (Coturn/SFT)** - Forward media and TURN protocol traffic to Coturn/SFT
- - Port 3478 (TCP/UDP) → Coturn control traffic
- - Ports 32768-65535 (UDP) → Media relay traffic for WebRTC calling
+The `adminhost` must forward traffic from external clients to the Kubernetes cluster running Wire services. This involves:
+
+1. **HTTP/HTTPS Traffic (Ingress)** – Forward external web traffic to Kubernetes ingress with load balancing across nodes
+ - Port 80 (TCP, from any external source to adminhost WAN IP) → DNAT to any Kubernetes node on port 31772 → HTTP ingress
+ - Port 443 (TCP, from any external source to adminhost WAN IP) → DNAT to any Kubernetes node on port 31773 → HTTPS ingress
+
+2. **Calling Services Traffic (Coturn/SFT)** – Forward TURN control and media traffic to the dedicated calling node
+ - Port 3478 (TCP/UDP, from any external source to adminhost WAN IP) → DNAT to calling node → TURN control traffic
+ - Ports 32768–65535 (UDP, from any external source to adminhost WAN IP) → DNAT to calling node → WebRTC media relay
+
+3. **Normal Access Rules (Host-Level Access)** – Restrict direct access to adminhost
+ - Port 22 (TCP, from allowed sources to adminhost) → allow → SSH access
+ - Traffic from loopback and VM bridge interfaces → allow → internal communication
+ - Any traffic within the VM network → allow → ensures inter-node communication
+ - All other inbound traffic to adminhost → drop → default deny policy
+
+4. **Masquerading (If [Internet access for VMs](#internet-access-for-vms) is required)** – Enable outbound connectivity for VMs
+ - Any traffic from VM subnet leaving via WAN interface → SNAT/masquerade → ensures return traffic from internet.
+
+5. **Conditional Rules (cert-manager / HTTP-01 in NAT setups)** – Temporary adjustments for certificate validation
+ - DNAT hairpin traffic (VM → public IP → VM) → may require SNAT/masquerade on VM bridge → ensures return path during HTTP-01 self-checks
+ - Asymmetric routing scenarios → may require relaxed reverse path filtering → prevents packet drops during validation
+
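Condensed into nftables terms, rules 1–2 above look roughly like the following (the interface name and node IPs are placeholders, not values from your deployment):

```
table ip nat {
  chain PREROUTING {
    type nat hook prerouting priority -100;
    # 1. HTTP/HTTPS ingress to a Kubernetes node's ingress NodePorts
    iifname "enp41s0" tcp dport 80 counter dnat to 192.168.122.11:31772
    iifname "enp41s0" tcp dport 443 counter dnat to 192.168.122.11:31773
    # 2. TURN control and media relay to the calling node
    iifname "enp41s0" tcp dport 3478 counter dnat to 192.168.122.13:3478
    iifname "enp41s0" udp dport 3478 counter dnat to 192.168.122.13:3478
    iifname "enp41s0" udp dport 32768-65535 counter dnat to 192.168.122.13
  }
}
```

The full template referenced below additionally covers the host-level filter rules and the conditional masquerade.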
+```mermaid
+flowchart TB
+
+%% External Clients
+Client[External Client]
+LetsEncrypt["(Optional) Let's Encrypt"]
+Internet["(If Required) Internet Services (AWS/FCM/APNS, Email Services, etc.)"]
+
+%% Admin Host
+AdminHost["AdminHost (Firewall)"]
+
+%% VM Network
+subgraph VM_Network ["VM Network (virbr0)"]
+ K1[KubeNode1]
+ K2[KubeNode2]
+ K3["KubeNode3 (CALLING NODE)"]
+end
+
+%% Ingress Traffic
+Client -->|HTTPS → wire-records.example.com| AdminHost
+AdminHost -->|"DNAT →31772/31773"| K1
+AdminHost -->|"DNAT →31772/31773"| K2
+AdminHost -->|"DNAT →31772/31773"| K3
+
+%% Calling Traffic
+Client -->|TCP/UDP Calling| AdminHost
+AdminHost -->|DNAT → Calling Node| K3
+
+%% Outbound Traffic (Masquerade)
+K1 -.->|SNAT via AdminHost| Internet
+K2 -.->|SNAT via AdminHost| Internet
+K3 -.->|SNAT via AdminHost| Internet
+
+%% Cert-Manager Flow
+K1 <-.->|HTTP-01 self-check| AdminHost
+AdminHost-.->|Request TLS certificate| LetsEncrypt
+```
**Implementation:**
-Use the detailed nftables rules in [../ansible/files/wiab_server_nftables.conf.j2](../ansible/files/wiab_server_nftables.conf.j2) as the template. The nftable configuration template covers:
-- Defining your network variables (Coturn IP, Kubernetes node IP, WAN interface)
-- Creating NAT rules for HTTP/HTTPS ingress traffic
-- Setting up TURN protocol forwarding for Coturn and traffic for SFTD
+The nftables rules are detailed in the configuration template [wiab_server_nftables.conf.j2](https://github.com/wireapp/wire-server-deploy/blob/master/ansible/files/wiab_server_nftables.conf.j2). Ensure that no other firewall services such as `ufw` or `iptables` are configured on the node before continuing.
-*Note: If you have already ran the playbook wiab-staging-provision.yml then it is already be configured for you. Confirm it by checking if the wire endpoint `https://webapp.TARGET_SYSTEM` is reachable from public internet or your private network (in case of private network), but not from the adminhost itself.*
-
-You can also apply these rules using the Ansible playbook against your adminhost, by following:
+If you have already used the `wiab-staging-provision.yml` ansible playbook to create the VMs, then you can apply these rules using the same playbook (with the tag `nftables`) against your adminhost:
```bash
-ansible-playbook -i inventory.yml ansible/wiab-staging-nftables.yml
+ansible-playbook -i ansible/inventory/demo/wiab-staging.yml ansible/wiab-staging-provision.yml --tags nftables
+```
+Alternatively, if you have not used the `wiab-staging-provision.yml` ansible playbook to create the VMs but would like to configure nftables rules, you can invoke the ansible playbook [wiab-staging-nftables.yaml](https://github.com/wireapp/wire-server-deploy/blob/master/ansible/wiab-staging-nftables.yaml) against the physical node. The playbook is available in the directory `wire-server-deploy/ansible`.
+
+The inventory file `inventory.yml` should define the following variables:
+```yaml
+wiab-staging:
+ hosts:
+ deploy_node:
+ # this should be the adminhost
+ ansible_host: example.com
+ ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ServerAliveInterval=60 -o ServerAliveCountMax=3 -o TCPKeepAlive=yes'
+ ansible_user: 'demo'
+ ansible_ssh_private_key_file: "~/.ssh/id_ed25519"
+ vars:
+ # Kubernetes node IPs
+ kubenode1_ip: 192.168.122.11
+ kubenode2_ip: 192.168.122.12
+ kubenode3_ip: 192.168.122.13
+ # Calling services node (kubenode3)
+ calling_node_ip: 192.168.122.13
+ # Host WAN interface name
+ inf_wan: enp41s0
+ wire_comment: "wiab-stag"
+ # it will disable internet access to VMs created on the private network
+ private_deployment: true
```
-You can run the above playbook from local system or where you have cloned/downloaded the [Wire server deploy ansible playbooks](#getting-the-ansible-playbooks).
-
-The inventory should define the following variables:
-
-```ini
-[all:vars]
-# Kubernetes node IPs
-kubenode1_ip=192.168.122.11
-kubenode2_ip=192.168.122.12
-kubenode3_ip=192.168.122.13
-
-# Calling services node (usually kubenode3)
-calling_node_ip=192.168.122.13
-
-# Host WAN interface name
-inf_wan=eth0
-
-# These are the same as wiab-staging.yml
-# user and ssh key for adminhost
-ansible_user='demo'
-ansible_ssh_private_key_file='~/.ssh/id_ed25519'
-
+To apply the nftables rules, execute the following command:
+```bash
+ansible-playbook -i inventory.yml wire-server-deploy/ansible/wiab-staging-nftables.yaml
```
### cert-manager behaviour in NAT / bridge environments
@@ -238,6 +319,21 @@ When cert-manager performs HTTP-01 self-checks inside the cluster, traffic can h
- Pod → Node → host public IP → DNAT → Node → Ingress
+> **Note**: Using Let's Encrypt with `cert-manager` requires internet access (e.g. to `acme-v02.api.letsencrypt.org`) to issue TLS certificates. If you chose to keep the network private (`private_deployment=true`, i.e. no internet access for VMs) when applying the nftables rules, you need to make a temporary exception.
+>
+> To add an nftables masquerading rule for all outgoing traffic, run the following command on the `adminhost` or make a similar change in your firewall:
+>
+> ```bash
+> # Host WAN interface name
+> INF_WAN=enp41s0
+> sudo nft insert rule ip nat POSTROUTING position 0 \
+> oifname $INF_WAN \
+> counter masquerade \
+> comment "wire-masquerade-for-letsencrypt"
+> ```
+>
+> If you are using a firewall implementation other than nftables, ensure the VMs have internet access.
+
In NAT/bridge setups (for example, using `virbr0` on the host):
- If nftables DNAT rules exist in `PREROUTING` without a matching SNAT on `virbr0 → virbr0`, return packets may bypass the host and break conntrack, causing HTTP-01 timeouts and certificate verification failures.
@@ -246,16 +342,26 @@ In NAT/bridge setups (for example, using `virbr0` on the host):
Before changing anything, first verify whether certificate issuance is actually failing:
1. Check whether certificates are successfully issued:
- ```bash
- d kubectl get certificates
- ```
-2. If certificates are not in `Ready=True` state, inspect cert-manager logs for HTTP-01 self-check or timeout errors:
- ```bash
- d kubectl logs -n cert-manager-ns
- ```
+ ```bash
+ d kubectl get certificates
+ ```
+2. Check whether the k8s pods can reach their own domain:
+ ```bash
+ # Replace <pod-id> and <target-system> below. To find the aws-sns pod id, run:
+ # d kubectl get pods -l 'app=fake-aws-sns'
+ d kubectl exec -ti fake-aws-sns-<pod-id> -- sh -c 'curl --connect-timeout 10 -v webapp.<target-system>'
+ ```
+3. If certificates are not in `Ready=True` state, inspect cert-manager logs for HTTP-01 self-check or timeout errors:
+ ```bash
+ # To find the <cert-manager-pod> name, run the following command:
+ # d kubectl get pods -n cert-manager-ns -l 'app=cert-manager'
+ d kubectl logs -n cert-manager-ns <cert-manager-pod>
+ ```
If you observe HTTP-01 challenge timeouts or self-check failures in a NAT/bridge environment, hairpin SNAT and relaxed reverse-path filtering handling may be required. One possible approach is:
+> **Note:** All `nft` and `sysctl` commands should run on the adminhost.
+
- Relax reverse-path filtering to loose mode to allow asymmetric flows:
```bash
sudo sysctl -w net.ipv4.conf.all.rp_filter=2
@@ -295,6 +401,20 @@ If you observe HTTP-01 challenge timeouts or self-check failures in a NAT/bridge
xargs -r -I {} sudo nft delete rule ip nat POSTROUTING handle {}
```
+> **Note**: If you made an exception above to allow temporary internet access for the VMs by adding an nftables rule, it should be removed now.
+>
+> To remove the nftables masquerading rule for all outgoing traffic, run the following command:
+>
+> ```bash
+> # remove the masquerading rule
+> sudo nft -a list chain ip nat POSTROUTING | \
+> grep wire-masquerade-for-letsencrypt | \
+> sed -E 's/.*handle ([0-9]+).*/\1/' | \
+> xargs -r -I {} sudo nft delete rule ip nat POSTROUTING handle {}
+> ```
+>
+> If you are using a firewall implementation other than nftables, ensure the temporary internet access for the VMs has been removed.
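The grep/sed pipeline above extracts the rule handle that `nft -a` prints at the end of each rule line. It can be exercised on a fabricated sample line (illustrative, not real `nft` output):

```shell
# nft -a appends "# handle N" to each rule; the pipeline pulls out N
sample='oifname "enp41s0" counter masquerade comment "wire-masquerade-for-letsencrypt" # handle 42'
echo "$sample" | grep wire-masquerade-for-letsencrypt | sed -E 's/.*handle ([0-9]+).*/\1/'
# prints: 42
```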
+
For additional background on when hairpin NAT is required and how it relates to WIAB Dev and WIAB Staging, see [Hairpin networking for WIAB Dev and WIAB Staging](tls-certificates.md#hairpin-networking-for-wiab-dev-and-wiab-staging).
## Further Reading