Before I actually began the project, I wanted to install and configure all the necessary software, ISO files, etc., so that everything was ready for the CSL. The goal was to prepare everything in advance and save time during the different stages of the project.
For the Proxmox installation and configuration, I used the documentation provided by Proxmox themselves and followed it step by step. Keep in mind to read carefully and not just click through—take the time to understand each step.
Caution
Installing the Proxmox ISO Installer will permanently overwrite the disk it is installed on, as it is a bare-metal installer. This means that any existing data will be permanently removed. Proceed with caution!
You should be able to install it on your own, as the documentation is quite clear. Here are the summarized steps I followed for the installation:
- Download the Proxmox ISO image installer (in my case the 8.2-2 version)
- Download Rufus or another USB burning tool
- Burn the Proxmox ISO onto a USB drive with at least 8 GB of capacity
- Boot the target machine from the USB drive (in my case, my old laptop) by selecting it in the BIOS boot menu
- Proceed with the Proxmox installation
Warning
As mentioned earlier, burning this disk with the Proxmox ISO file will permanently erase ALL DATA on the selected disk!
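If you're on Linux and prefer the command line to Rufus, `dd` is a common alternative for writing the ISO. On a real system the output would be a USB device such as `/dev/sdX` (a placeholder; run `lsblk` first and triple-check the target, since this is exactly the destructive operation the warning above refers to). This sketch demonstrates the write-and-verify pattern against scratch files instead of a real USB stick:

```shell
# Sketch of writing an ISO with dd and verifying the copy afterwards.
# The ISO path and target are stand-ins, not real devices.
printf 'fake-iso-content' > /tmp/proxmox.iso   # stand-in for the real ISO

# dd copies the image byte-for-byte; conv=fsync flushes writes before exiting
dd if=/tmp/proxmox.iso of=/tmp/usb.img bs=4M conv=fsync 2>/dev/null

# cmp exits 0 only if the two files are byte-identical
cmp /tmp/proxmox.iso /tmp/usb.img && echo "write verified"
```

The same `cmp` check works against a real device node to confirm the burn succeeded.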
I proceeded with the installation and setup of Proxmox on my Acer laptop but encountered several issues. When I attempted to apply the configurations I had made, an error kept occurring despite my various attempts to fix it. The root of the problem was that the Proxmox installer had difficulties with the laptop's partitioning. I tried multiple solutions, but unfortunately, none were successful.
To keep a long story short, the real reason Proxmox had issues with my laptop’s partition was that the eMMC storage was damaged or faulty. That said, even if it hadn’t been damaged, it still wouldn’t have worked—because Proxmox doesn’t support eMMC storage.
If you’d like to read through my troubleshooting process, you can find all the documentation in the troubleshooting folder or here.
Tip
My ACER Switch Alpha 12 came with eMMC storage, a budget-friendly option found in low-cost devices. While sufficient for basic tasks, eMMC lacks the speed and durability of SSDs, which are better suited for long-term, intensive use.
Since my laptop wasn’t suitable for hosting Proxmox, I had to improvise. Luckily, I had an incredible teacher who gifted me an old Supermicro 'server' with 2 TB of storage. Long story short: I offered to buy it from him, but he refused—he insisted on giving it to me for free.
To enter the BIOS, press either Del or F2 repeatedly during startup. Once inside, navigate to the Boot section and, as shown in the screenshot, change Boot Option 1 to UEFI USB Key:UEFI. If you don’t need to configure anything else, you can press F4 to save and exit, or go to the Save & Exit tab and reboot the server.
If your system allows you to configure IPMI, I sincerely recommend doing so—it’s an incredibly useful and powerful feature to have.
After configuring the BIOS boot order on the server, I also wanted to check whether IPMI was enabled and functioning—and in my case, it was. However, I had to configure the station IP address, subnet mask, gateway, etc., since it wasn’t adapted to my router / ISP settings.
Important
Make sure to check your ISP’s DHCP range so that you don’t assign an IP address within that scope, which could potentially cause conflicts!
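To make the warning above concrete, here is a small sketch (pure shell, with assumed placeholder addresses) that checks whether a candidate static IP falls inside a router's DHCP pool:

```shell
# Convert a dotted-quad IPv4 address to a single integer for comparison
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed if <ip> lies within [<pool-start>, <pool-end>]
in_dhcp_range() {
  local ip start end
  ip=$(ip2int "$1"); start=$(ip2int "$2"); end=$(ip2int "$3")
  [ "$ip" -ge "$start" ] && [ "$ip" -le "$end" ]
}

# Example with an assumed home-router pool of 192.168.1.100-192.168.1.200
if in_dhcp_range 192.168.1.50 192.168.1.100 192.168.1.200; then
  echo "conflict: inside DHCP pool"
else
  echo "safe: outside DHCP pool"
fi
```

Here the candidate `.50` sits below the pool, so the check reports it as safe to assign statically.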
IPMI, short for Intelligent Platform Management Interface, is a standardized interface used for out-of-band management of servers. It allows administrators to monitor, manage, and troubleshoot a system independently of the operating system, and even when the server is powered off, as long as it's connected to power and the network.
When IPMI is enabled, I can access the server remotely through a dedicated management interface—often via a web-based dashboard or console—regardless of the server’s current state. This makes it possible to perform tasks like system reboots, BIOS configuration, or OS installations without needing physical access to the machine. It’s an incredibly useful and powerful feature, especially for server maintenance and remote administration!
Most of the time, the IPMI LAN port is located separately from the other LAN ports. In my case, the IPMI port is isolated from the four regular LAN ports and positioned on the far left.
After the reboot, leave the USB drive plugged in and wait for the Proxmox installation process to start. Once the installer appears, choose the Graphical Install, as it is more user-friendly.
- Select the target hard disk: `/dev/sda`
- Enter your location and time zone
- Create a strong admin password
- Use a valid email address, which will be used for important alerts and notifications from the server
- Choose a fitting hostname
- Configure the IP address (CIDR), gateway, and DNS (`1.1.1.1` for Cloudflare or `8.8.8.8` for Google)
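The last step asks for the address in CIDR form (e.g. `x.x.x.x/24`). As a quick sketch of what the prefix length means, a /24 expands to a 255.255.255.0 netmask:

```shell
# Derive the dotted-quad netmask from a CIDR prefix length (pure shell)
prefix=24
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
printf '%d.%d.%d.%d\n' \
  $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
  $(( (mask >> 8) & 255 ))  $((  mask        & 255 ))
# prints 255.255.255.0
```

Changing `prefix` to 16 or 28 shows how the boundary between network and host bits shifts.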
Once you’ve completed the configuration and the installation finishes, reboot your server/laptop/machine (whatever you're using), and unplug the USB stick so the installed Proxmox system can load.
After a while, you’ll be prompted to log in. The default login is root, using the password you defined during the Proxmox installation.
Here's what the IPMI interface looks like once you're logged in, and also how the Proxmox web GUI should appear after successful login:
Once I logged into my Proxmox server, I began installing and configuring the OPNsense firewall. As mentioned earlier, I had already downloaded all necessary resources (ISO files etc.) in advance to save time throughout the project.
To properly integrate the firewall into my virtual lab environment, I first created two additional Linux bridges:
- `vmbr1` serves as the internal LAN bridge. It functions as a virtual switch, connecting all internal lab VMs and VLANs to the OPNsense instance. This bridge is dedicated to traffic inside the lab, enabling segmentation and routing through the firewall.
- `vmbr2` acts as the virtual WAN interface. It provides the OPNsense firewall with access to the internet through a NAT (Network Address Translation) configuration. This setup allows outbound connectivity without exposing the internal lab network or interfering with the physical home network.
I intentionally avoided using vmbr0 (the default Proxmox WAN bridge) for the OPNsense WAN interface. vmbr0 is directly connected to my home router and is used by the Proxmox host itself. Using vmbr0 inside the firewall VM would have created a conflict, as the OPNsense VM, my home router, and the Proxmox host would all attempt to use the same default gateway, leading to routing issues and potential network disruptions.
By isolating the WAN traffic of the OPNsense VM on vmbr2 and implementing NAT on the Proxmox host, I ensured that the firewall has internet access without overlapping with or disrupting the production network. This approach also provides a safe and controlled boundary between the lab environment and the external network, which is essential for cybersecurity.
Tip
When creating the firewall VM, make sure to check the 'VLAN Aware' box to ensure communication with other VLANs.
Note
Notice that I’ve selected vmbr1 (the LAB LAN) instead of vmbr0 (the WAN). Once I’ve finished configuring the firewall, I’ll return to this point and update it accordingly to include vmbr2.
Before actually starting the OPNsense firewall VM, I needed to ensure that vmbr2 could access the internet through a NAT (Network Address Translation) mechanism. To achieve this, I created a custom shell script that configures Proxmox to forward traffic from the vmbr2 subnet to the main WAN interface (vmbr0), allowing outbound internet access for the OPNsense firewall.
This approach provides full internet connectivity for the virtual WAN interface without interfering with the physical home network or reusing the host’s default gateway. In addition, I created a second shell script that cleanly removes the NAT configuration in case I need to disable or undo these changes in the future.
Below are both scripts:
Tip
Before running the scripts, you can create manual backups of all affected system files. This ensures you can easily restore the previous state if needed.
Use the following commands on your Proxmox host:
```
$ cp /etc/network/interfaces /etc/network/interfaces.bak
$ cp /etc/sysctl.conf /etc/sysctl.conf.bak
$ iptables-save > /root/iptables-before.txt
```
```bash
#!/bin/bash
# NAT configuration script for vmbr2 → vmbr0
# Author: Pantera

# Enable IPv4 forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
sed -i 's/^#net.ipv4.ip_forward=.*/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl -w net.ipv4.ip_forward=1

# Set up NAT rule
iptables -t nat -A POSTROUTING -s x.x.x.x/24 -o vmbr0 -j MASQUERADE

# Save rules
iptables-save > /etc/iptables.up.rules

# Create persistent loader
cat <<EOF > /etc/rc.local
#!/bin/sh -e
iptables-restore < /etc/iptables.up.rules
exit 0
EOF

# Make /etc/rc.local executable
chmod +x /etc/rc.local
```
```
$ chmod +x /xyz/nat-setup.sh
```
```bash
#!/bin/bash
# NAT removal script for vmbr2 → vmbr0
# Author: Pantera

# Remove NAT rule (delete every matching instance)
while iptables -t nat -C POSTROUTING -s x.x.x.x/24 -o vmbr0 -j MASQUERADE 2>/dev/null; do
    iptables -t nat -D POSTROUTING -s x.x.x.x/24 -o vmbr0 -j MASQUERADE
done

# Remove iptables file
rm -f /etc/iptables.up.rules

# Remove rc.local if created by us
rm -f /etc/rc.local
```
```
$ chmod +x /xyz/nat-uninstall.sh
```
After the successful NAT configuration for vmbr2, I started the VM and proceeded with the OPNsense installation. Unfortunately, after starting the VM, it crashed with an error stating that the vmbr2 bridge does not exist. I verified this, and indeed, the bridge was missing because the section for vmbr2 was completely absent from the /etc/network/interfaces file. I had to manually add the configuration and reload the network. If you want to know how I did it, you can read through the troubleshooting process here or in the troubleshooting folder at the top.
Once the VM starts, you’ll see the live-mode login screen, which tells you to log in as either root or installer. Since this is our first time starting the VM and we need to install the system, we’ll log in as installer of course. The default password for both installer and root is opnsense.
- Choose your keyboard layout
- Choose your installation type (I chose ZFS for a modern installation)
- Select the virtual device type (I chose Stripe)
- Select your target disk for the installation
Note
ZFS: The recommended 3 GB of RAM for ZFS wasn't a concern in this case, since I can always allocate more RAM via Proxmox if needed. That’s why I went ahead with ZFS.
Virtual Device Type: I chose the stripe option (no redundancy) because this OPNsense instance runs inside Proxmox, where backups are handled separately. Redundancy wasn't necessary for this lab setup.
After the successful installation and configuration, you should be able to log in as root with the default password, or with your own if you've changed it. Once logged in, you'll see 13 options to choose from. First, we're going to set the interface IP addresses:
- Selected option `2` to configure interface settings
- Chose the WAN interface (`vtnet1`) for configuration
- Declined the DHCP option, as I wanted to assign a static IP address that fits the predefined LAB infrastructure (`vmbr2` - virtual WAN / NAT)
- Declined to use the gateway IP as the DNS server, since there is no DNS service running at that address
- Manually set `1.1.1.1` as the WAN DNS server (Cloudflare)
- Skipped IPv6 configuration for now
- Kept the Web GUI protocol as HTTPS (did not switch to HTTP)
- Chose to generate a new self-signed Web GUI certificate for secure access
- After that, selected option `2` again to configure the interface settings
- Chose the LAN interface (`vtnet0`) for configuration
- Declined the DHCP option, as I wanted to assign a static IP address that fits the predefined LAB infrastructure (`vmbr1` - Lab LAN)
- Made sure NOT to enter an upstream gateway for this interface
- Skipped IPv6 configuration for now
- Enabled DHCP on LAN
- Kept the Web GUI protocol as HTTPS (did not switch to HTTP)
- Chose to generate a new self-signed Web GUI certificate for secure access
Important
To be honest, I don't know how or when it happened to me, but MAKE SURE to check whether the Linux bridges (vmbr) are correctly assigned to the firewall's network interfaces.
You'll save yourself hours of troubleshooting by simply verifying that net0 = vmbr1 (Lab LAN) and net1 = vmbr2 (virtual WAN NAT).
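From the Proxmox host shell, `qm config <vmid>` prints the VM's settings, and grepping its `net` lines is a quick way to perform this check. Since the real values depend on your host, this sketch runs the same grep over a sample of the command's output (the VM ID 100 and the MAC addresses below are made up):

```shell
# Sample output of "qm config 100 | grep ^net" (values are hypothetical)
sample_config='net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr1
net1: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr2'

# Pull out just the bridge assignment for each NIC
echo "$sample_config" | grep -o 'bridge=vmbr[0-9]*'
```

If the two lines printed don't read `bridge=vmbr1` and `bridge=vmbr2` in that order, fix the assignment in the VM's Hardware tab before booting OPNsense.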
Once you've finished the installation and configuration, create a VM with an operating system of your choice and make sure NOT to attach it to a VLAN just yet!
In my case, that would be VLAN 10 for testing — but I need to do that AFTER logging into the OPNsense web GUI so I can configure the VLANs there first.
If you attach the VLAN too early, it may lead to issues with DHCP and internet access for all VMs using VLAN tags.
Once everything is set up in OPNsense, you’ll find the option to assign the VLAN in the VM’s network tab. This configuration step is essential for VLANs to work correctly in OPNsense, since VLANs will be configured, assigned, and managed directly through the firewall.
This VM will be used for testing purposes, so the configuration doesn't matter too much. In my case, I created an Ubuntu Desktop VM. Here are the configurations I used:
I decided to go through the "Wizard" configuration option because it includes all the basic settings in one place. This saved me time, as I didn’t have to click through the entire menu to reach each configuration section. I won’t document it very thoroughly since it’s pretty self-explanatory, but here are some of the configurations I made:
Now that the "Wizard" configuration is complete, I moved on to configure all the VLANs along with the corresponding firewall rules. Here are the summarized steps of how I did it:
- Head to `Interfaces > Devices > VLAN`
- Leave the device name empty; OPNsense will generate a fitting name automatically
- Choose the correct parent interface (in my case, `vtnet0` → LAN)
- Enter the appropriate VLAN tag (e.g., 10 for VLAN 10)
- Write a clear and descriptive name
- Save and apply the changes
- Head to `Interfaces > Assignments`
- Add the newly created VLAN device and save it
- Go to `Interfaces > VLAN10Testing` and enable the interface
- Once enabled, change the IPv4 Configuration Type to Static and assign a suitable IP address
- I also updated the description to something more fitting
- Head to `Services > ISC DHCPv4 > LAN_VLAN10` and enable the DHCP server on the interface (LAN_VLAN10)
- Enter your IP range and don't forget to specify a DNS server before saving (I chose 1.1.1.1 and 8.8.8.8 — Cloudflare and Google)
- Head to `Firewall > Rules > LAN_VLAN10` and add a new rule. Make sure to block access from VLAN10 to the OPNsense interface, as shown in the screenshots. Since this is a testing VLAN in my case, allowing it to access the firewall while intentionally testing potentially malicious software would pose an unnecessary security risk.
- Head to `Firewall > Rules > LAN_VLAN10` and add another rule. Make sure to allow internet access from VLAN10, as shown in the screenshots above.
- Head to `Firewall > NAT > Outbound` and set the NAT mode to Hybrid Outbound NAT rule generation. Create a new rule and make sure to fill it out exactly as shown in the screenshots above.
To verify that our VLAN configurations were successful, I ran a series of commands and performed several checks to ensure everything was working as intended. These tests helped confirm that the firewall rules, IP assignments, and network segmentation were all functioning properly.
```
$ ip a
$ ping 8.8.8.8
$ ping x.x.x.x
$ nslookup opnsense.lab.local
```
Here is the breakdown of every test I have performed to verify the configuration's validity:
- `ip a` confirms that the client successfully received an IP address from the OPNsense DHCP server.
- `ping 8.8.8.8` verifies internet connectivity via ICMP; packets are reaching Google's DNS server with 0% loss.
- `ping x.x.x.x` returns "Destination host unreachable", which is expected – access to the firewall gateway is intentionally blocked for this VLAN.
- `nslookup opnsense.lab.local` shows that external DNS is working, but internal names like `lab.local` are not resolved – as configured (no internal DNS or override).
- The Koenigsegg Agera RS — a breathtaking marvel of engineering — appearing in a Bing image search: confirmation that DNS resolution and web browsing are functioning flawlessly 😂.
- Attempting to access the OPNsense Web GUI fails as expected – access from VLAN10 is restricted by firewall rules to prevent potential compromise.
As shown in the title, I won’t be documenting this step thoroughly, since the principle is the same as with the testing VLAN. Instead, I’ll outline the key steps I took for each VLAN and briefly explain its purpose.
- `VLAN 20 and VLAN 1`: I followed the same principle for the rules of the Container and Hacking VLANs. These VLANs must have internet access but are not allowed to reach the firewall itself, since allowing them to do so would be an unnecessary security risk.
- `VLAN 77`: For the Omni-Administrative VLAN, I allowed internet access, but unlike the others, I also allowed it to access the firewall. Since this is an admin VLAN, it needs that access for configuration, troubleshooting, updates, monitoring, etc.
As the foundational step of this project, I deployed an Ubuntu Server to serve as the base system for my Docker host. I deliberately chose Ubuntu due to its stability, widespread adoption, and extensive documentation—making it an ideal candidate for server-based workloads and long-term maintainability.
For containerization, I opted for Docker. This decision was based on my existing experience and proficiency with Docker’s architecture, CLI tooling, and operational best practices. Leveraging a platform I'm already familiar with allows me to build and iterate on the project efficiently, without the overhead of learning a new container ecosystem from scratch.
During the installation, I manually configured the IP address and network settings. This was done intentionally to ensure a consistent and predictable network environment within my CSL. Static addressing gives me full control over routing, VLAN mapping, and access control—critical for simulating real-world cybersecurity infrastructure.
After successfully installing and configuring the system, I proceeded with the Docker setup. However, before that, I went through the following chapter. You don’t have to do this, it’s optional, and you can skip it if you prefer to continue directly.
Note
The next chapter on VPN configuration is NOT necessary. I chose to include it because it made sense to me personally, and I had always wanted to try it out.
The goal is to access a VM running in VLAN20 via SSH from a remote client connected through an ISP-generated WireGuard VPN. The remote client has no access to the VPN server configuration and receives a /32 IP from my ISP.
| Component | Detail |
|---|---|
| VPN Client | Receives IP: x.x.x.x/32 (no gateway) |
| Proxmox Host | Manages networking + NAT |
| VM | Ubuntu Server x.x.x.x/24, VLAN20 |
| Bridge Used | vmbr20 with IP x.x.x.x |
| Interface Tagging | VLAN Tag 20 used on eno1.20 |
| Proxmox IP (LAN) | x.x.x.x |
- The WireGuard VPN tunnel is fully controlled by my ISP; no changes can be made on the server side.
- The VPN client is isolated (`/32` subnet, no gateway).
- To bridge this limitation, Proxmox acts as a NAT gateway and optionally as an SSH proxy via port forwarding.
- The Proxmox interface `vmbr20` handles VLAN20 and must have the default gateway IP of the VM.
Note
As you’ve probably noticed, I’ve purposefully censored certain technical details. This isn’t a tutorial anyway (as mentioned on the first page), so I’m sure you’ll understand.
In the Proxmox UI or via `/etc/network/interfaces`:

```
auto vmbr20
iface vmbr20 inet static
    address x.x.x.x/24
    bridge_ports eno1.20
    bridge_stp off
    bridge_fd 0
```

Ensure:

- `eno1.20` exists as a VLAN subinterface, or use `bridge-vlan-aware` on `vmbr1` as an alternative.
- Apply the config or reboot the host.
Inside x.x.x.x (Ubuntu Server VM):

```
$ ip addr add x.x.x.x/24 dev ens18
$ ip route add default via x.x.x.x
```

Or via Netplan (`/etc/netplan/01-netcfg.yaml`):

```yaml
network:
  version: 2
  ethernets:
    ens18:
      dhcp4: no
      addresses:
        - y.y.y.y/24
      gateway4: y.y.y.y
```

Then apply:

```
$ sudo netplan apply
```

Ensure SSH is enabled and running:

```
$ sudo systemctl enable ssh
$ sudo systemctl start ssh
```

Check:

```
$ sudo ss -tnlp | grep :22
```

This step allows the VPN client to access the VM even though it has no valid return path (/32 IP, no gateway).
```
$ echo 1 > /proc/sys/net/ipv4/ip_forward
$ iptables -t nat -A POSTROUTING -s x.x.x.x -j MASQUERADE
```

Make it persistent:

```
$ apt install iptables-persistent -y
$ netfilter-persistent save
```

Not always necessary, but if needed:

```
$ ip route add x.x.x.x dev vmbr0
```

If NAT does not work (e.g. WireGuard or the ISP blocks return routes), use Proxmox port forwarding:

```
$ iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport xxxx -j DNAT --to-destination x.x.x.x:22
$ iptables -A FORWARD -p tcp -d x.x.x.x --dport 22 -j ACCEPT
```

Then SSH from the VPN client via:

```
$ ssh -p xxxx user@x.x.x.x
```

Ensure you're connected to WireGuard. Then:

```
$ ssh user@x.x.x.x           # If NAT works
$ ssh -p xxxx user@x.x.x.x   # If using port forwarding
```

Once access is stable, go to the main Proxmox shell and type the following commands:
```
$ apt update
$ apt install sudo -y
$ adduser newuser
$ usermod -aG sudo newuser
```

After that, you should be able to log in as the new user via SSH and have sudo privileges. Once this is confirmed, disable the option to log in as root via SSH in the Proxmox shell:

```
$ sudo nano /etc/ssh/sshd_config
```

Set:

```
PermitRootLogin no
PasswordAuthentication no
```

Then restart the SSH service:

```
$ sudo systemctl restart ssh
```

This chapter includes installing a secure, up-to-date version of Docker Engine, Docker CLI, containerd, and the Docker Compose plugin on Ubuntu Server.
```
$ sudo apt install ca-certificates curl gnupg -y
$ sudo install -m 0755 -d /etc/apt/keyrings
```

- `ca-certificates`: Required to validate HTTPS connections securely
- `curl`: Downloads files from HTTPS sources (used later for the Docker GPG key)
- `gnupg`: Used to verify package authenticity
- `/etc/apt/keyrings`: A modern and secure place to store third-party GPG keys

```
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
```

- Ensures all Docker packages you install are signed and verified by Docker Inc.
- Prevents installation of tampered or malicious packages.
```
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```

- Adds the official Docker repository to your system.
- Ensures that Docker-related packages are always pulled from a trusted and up-to-date source.

```
$ sudo apt update
$ sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
```

Why install these components?
| Package | Purpose |
|---|---|
| `docker-ce` | Docker Engine (daemon + runtime) |
| `docker-ce-cli` | Command-line interface |
| `containerd.io` | High-performance container runtime |
| `docker-buildx-plugin` | Extended build support (multi-arch, caching, etc.) |
| `docker-compose-plugin` | Native Compose v2 support (replaces the legacy `docker-compose` binary) |
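As a minimal sketch of what the Compose v2 plugin consumes, here is a hypothetical `docker-compose.yml` (the service name, image, and port mapping are placeholders, not part of this lab):

```yaml
# docker-compose.yml - hypothetical minimal example
services:
  web:
    image: nginx:alpine     # any image from Docker Hub
    ports:
      - "8080:80"           # host:container port mapping
    restart: unless-stopped # restart policy once the daemon is running
```

With the plugin installed, `docker compose up -d` starts the stack and `docker compose down` tears it down again.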
```
$ sudo usermod -aG docker $USER
```

- Adds your current user to the `docker` group.
- This allows you to run Docker commands without needing `sudo` every time.
Note
Requires re-login to take effect (see next step).
```
$ newgrp docker
```

Or log out and back in again.

- Linux only applies group membership changes at login.
- Without this, you'll still get permission errors when using `docker`.
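A quick way to check whether the group change is active in your current session (the two status strings below are just assumed labels for this sketch):

```shell
# id -nG lists the groups of the current session; grep -qx matches "docker" exactly
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active"
else
  echo "re-login needed"
fi
```

If it reports that a re-login is needed, run `newgrp docker` or start a fresh login shell before continuing.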
```
$ docker run hello-world
```

What it does:
- Downloads and runs a test container from Docker Hub.
- Confirms that Docker Engine and networking work as expected.
You should see a message like:
```
$ sudo systemctl enable docker
```

Ensures Docker starts automatically with the system.

```
$ docker --version
$ docker compose version
```

Verifies that both the CLI and Compose are installed correctly.
Note
Highlights information that users should take into account, even when skimming.
Tip
Optional information to help a user be more successful.
Important
Crucial information necessary for users to succeed.
Warning
Critical content demanding immediate user attention due to potential risks.
Caution
Negative potential consequences of an action.
- YouTube: Gerard O'Brien, Building the Ultimate Cybersecurity Lab
- Medium: TheInfoSec Guy
- ChatGPT: OpenAI
- Friends








































