Automated migration tool to convert and upload VMware ESXi/vCenter VMs to Scale Computing Hypercore clusters — packaged as a single Docker container.
> [!CAUTION]
> THIS IS PROVIDED AS-IS WITH NO WARRANTY. USE AT YOUR OWN RISK.
*Screenshots: Prerequisites Check · VMware Connection · Migration in Progress*
| | Feature | Description |
|---|---|---|
| 🌐 | Web UI | Modern React-based interface for managing migrations |
| 🐳 | Docker Container | All dependencies pre-installed, zero host setup |
| ⚡ | VDDK Support | High-performance direct disk access to VMware |
| 🪟 | VirtIO Injection | Windows VMs boot properly on Scale |
| 📡 | Real-time Progress | WebSocket-based live progress tracking |
| ⏸️ | Stop / Start / Resume | Pause and resume migrations |
| 🔀 | Concurrent Migrations | Run multiple VM migrations in parallel |
| 💾 | Flexible Storage | Local, NFS, or SMB mount options |
| 🔌 | Connection Manager | Save and reuse VMware/Scale connections |
| 🖥️ | CLI Mode | Legacy bash script available for bare-metal use |
| Requirement | Minimum | Recommended |
|---|---|---|
| OS | Ubuntu 24.04 LTS | Ubuntu 24.04 LTS |
| CPU | 2 vCPU | 4+ vCPU |
| RAM | 8 GB | 16 GB+ |
| Storage | Enough free space for the largest VM being converted | Fast NVMe/SSD with headroom for concurrent conversions |
- Docker Engine 24.0+ with Docker Compose v2
- KVM — `/dev/kvm` must be accessible
- Git — for cloning the repository
| Target | Ports | Direction |
|---|---|---|
| ESXi / vCenter | 443 (HTTPS), 22 (SSH) | Outbound from container |
| Scale Computing Hypercore | 443 (HTTPS) | Outbound from container |
| Internet (build only) | 443 | Outbound from host (Docker image build) |
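Before deploying, you can confirm these ports are reachable from the Docker host. This is an optional pre-flight check; the hostnames below are placeholders for your own environment:

```bash
# Replace esxi-host and scale-cluster with your actual hostnames/IPs
nc -zv esxi-host 443
nc -zv esxi-host 22
nc -zv scale-cluster 443
```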
Download the VMware Virtual Disk Development Kit (VDDK) 8.x tarball from the VMware Developer Portal. You'll upload it through the web UI after deployment.
```bash
# Install Docker on Ubuntu 24.04
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
newgrp docker
```

```bash
# Check KVM support
sudo apt install -y cpu-checker
kvm-ok

# Ensure /dev/kvm exists and is accessible
ls -la /dev/kvm
```

> [!NOTE]
> If running Ubuntu inside a KVM virtual machine, nested virtualization must be enabled on the hypervisor. See KVM Host Configuration below.
```bash
git clone https://github.com/mjlyon/ESX-to-Scale-Migration.git
cd ESX-to-Scale-Migration
cp .env.example .env
```

Edit `.env` if you need to change the port, timezone, or resource limits:

```bash
nano .env
```

```bash
docker compose up -d
```

Navigate to `http://<host-ip>:8080` in your browser.
- Go to Setup and upload your VMware VDDK tarball
- Configure Storage (local volume, NFS, or SMB)
- Add Connections for your VMware and Scale Computing environments
- Start a New Migration from the dashboard
```bash
# Should output: INFO: /dev/kvm exists - KVM acceleration can be used
kvm-ok

# Check kernel modules
lsmod | grep kvm
```

If your Ubuntu 24.04 host is itself running as a KVM virtual machine, the outer hypervisor must enable nested virtualization:
```bash
# On the OUTER KVM host (Intel):
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1

# Make persistent:
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf

# For AMD:
sudo modprobe -r kvm_amd
sudo modprobe kvm_amd nested=1
echo "options kvm_amd nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
```

Verify inside the Ubuntu VM:
```bash
cat /sys/module/kvm_intel/parameters/nested   # Should output: Y
# or
cat /sys/module/kvm_amd/parameters/nested     # Should output: 1
```

```bash
# Ensure your user can access KVM
sudo usermod -aG kvm $USER
ls -la /dev/kvm
# crw-rw---- 1 root kvm 10, 232 ... /dev/kvm
```

Ubuntu uses AppArmor by default. The `docker-compose.yml` includes `security_opt: apparmor:unconfined` for the container. No host-level AppArmor changes are needed.
| Variable | Default | Description |
|---|---|---|
| `ESX2SCALE_PORT` | `8080` | Web UI port on the host |
| `MAX_CONCURRENT_JOBS` | `2` | Parallel migration jobs |
| `TZ` | `America/New_York` | Timezone for logs and UI |
| `MEMORY_LIMIT` | `8g` | Container memory limit |
| `ESX2SCALE_DB_PATH` | `/data/db/esx2scale.db` | SQLite database path |
| `ESX2SCALE_CONVERSION_DIR` | `/data/conversions` | Temp conversion storage |
| `ESX2SCALE_VDDK_DIR` | `/data/vddk` | VDDK library location |
| `ESX2SCALE_LOG_DIR` | `/data/logs` | Application logs |
All conversion data lives under /data inside the container, backed by a Docker named volume by default.
**📂 Local Storage (Default)**

Uses a Docker named volume. Good for small migrations or testing.

```yaml
volumes:
  - esx2scale-data:/data
```

**💽 Bind Mount (Recommended for Production)**

Mount fast local NVMe/SSD storage directly. Best performance for large migrations.

```yaml
volumes:
  - esx2scale-data:/data
  - /opt/esx2scale/conversions:/data/conversions
```

Use the XFS filesystem for best large-file performance:

```bash
sudo mkfs.xfs /dev/nvme0n1p1
sudo mkdir -p /opt/esx2scale/conversions
sudo mount /dev/nvme0n1p1 /opt/esx2scale/conversions
```

**🌐 NFS Storage**
Mount NFS on the host, then bind-mount into the container:
# On the host
sudo apt install -y nfs-common
sudo mkdir -p /mnt/nfs-conversions
sudo mount -t nfs your-nas:/export/conversions /mnt/nfs-conversions# docker-compose.yml
volumes:
- esx2scale-data:/data
- /mnt/nfs-conversions:/data/conversionsOr configure NFS mounts through the web UI under Storage.
**🔗 SMB/CIFS Storage**

Mount SMB on the host, then bind-mount into the container:

```bash
# On the host
sudo apt install -y cifs-utils
sudo mkdir -p /mnt/smb-conversions
sudo mount -t cifs //server/share /mnt/smb-conversions -o username=user,password=pass
```

```yaml
# docker-compose.yml
volumes:
  - esx2scale-data:/data
  - /mnt/smb-conversions:/data/conversions
```

Or configure SMB mounts through the web UI under Storage.
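To make the host-side NFS/SMB mounts survive reboots, `/etc/fstab` entries along these lines can be used (server, share, and credentials-file paths are placeholders for your environment):

```
# /etc/fstab — illustrative entries
your-nas:/export/conversions  /mnt/nfs-conversions  nfs   defaults,_netdev  0 0
//server/share                /mnt/smb-conversions  cifs  credentials=/etc/smb-credentials,_netdev  0 0
```

Using a credentials file for CIFS avoids embedding the password in `/etc/fstab` itself.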
```mermaid
flowchart LR
    subgraph VMware["🏢 VMware ESXi / vCenter"]
        VM["🖥️ Source VM\n(Powered Off)"]
    end
    subgraph Container["🐳 Migration Container"]
        V2V["⚙️ virt-v2v\n+ VDDK\n+ VirtIO"]
        QCOW["💾 Converted\n.qcow2 disk"]
        V2V --> QCOW
    end
    subgraph Scale["🏗️ Scale Computing Hypercore"]
        VDisk["📀 Virtual Disk\nInventory"]
    end
    VM -- "SSH + VDDK\n(Port 443/22)" --> V2V
    QCOW -- "HTTPS REST API\n(Port 443)" --> VDisk
```
| Phase | Description |
|---|---|
| 🔌 Connect | Connects to ESXi/vCenter via libvirt, retrieves VM inventory and moref IDs |
| ⚙️ Convert | Uses virt-v2v with VDDK for fast disk access, injects VirtIO drivers for Windows |
| 🔄 Post-Process | Converts raw disks to qcow2 format (compat=0.10) for Scale compatibility |
| ☁️ Upload | Uploads qcow2 disk(s) to Scale via REST API with progress tracking |
| ✅ Finalize | Disk appears in Scale Hypercore virtual disk inventory, ready to attach |
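For reference, the Convert and Post-Process phases correspond roughly to the following commands. This is an illustrative sketch, not the exact invocation the container assembles; the hostname, thumbprint, and VM name are placeholders:

```bash
# Convert phase (sketch): pull disks from ESXi over VDDK, emit a local qcow2
virt-v2v \
  -ic 'esx://root@esxi-host?no_verify=1' \
  -it vddk \
  -io vddk-libdir=/data/vddk/vmware-vix-disklib-distrib \
  -io vddk-thumbprint=AA:BB:... \
  -o local -os /data/conversions -of qcow2 \
  "Source-VM-Name"

# Post-process phase (sketch): rewrite the qcow2 with compat=0.10 for Scale
qemu-img convert -O qcow2 -o compat=0.10 in.qcow2 out-compat.qcow2
```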
| | OS | Versions |
|---|---|---|
| 🪟 | Windows | 7, 8, 10, 11, Server 2008 R2 – 2022 |
| 🐧 | Linux | Ubuntu, Debian, CentOS, RHEL, Fedora, SUSE |
| 💻 | Other | Most modern x86_64 operating systems |
| Supported | Not Supported |
|---|---|
| ✅ VMDK (all variants) | ❌ VMs with snapshots (delete first) |
| ✅ Thin provisioned | ❌ RDM (raw device mapping) disks |
| ✅ Thick provisioned | ❌ Physical device passthrough |
| ✅ Multiple disks per VM | ❌ Powered-on VMs (must be shut down) |
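Because VMs with snapshots are not supported, it can help to check for and remove snapshots from the ESXi shell before migrating (the `vmid` comes from the `getallvms` listing):

```bash
# Find the vmid of the source VM
vim-cmd vmsvc/getallvms
# List any snapshots for that vmid
vim-cmd vmsvc/snapshot.get <vmid>
# Remove all snapshots (consolidates disks) if any exist
vim-cmd vmsvc/snapshot.removeall <vmid>
```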
**🐳 Container won't start**

`/dev/kvm: no such file or directory`

KVM is not available on the host. Check:

```bash
kvm-ok
ls -la /dev/kvm
```

If running inside a VM, enable nested virtualization (see KVM Host Configuration).

`Permission denied on /dev/kvm`

```bash
sudo usermod -aG kvm $USER
# Log out and back in, then retry
```

**AppArmor blocking container operations**

The compose file includes `apparmor:unconfined`. If you still see AppArmor denials:

```bash
# Check AppArmor status
sudo aa-status

# As a last resort, use privileged mode instead:
# Uncomment `privileged: true` in docker-compose.yml
# and remove the devices/cap_add/security_opt lines
```

**⚙️ virt-v2v conversion fails**
`supermin: kernel not found`

The container couldn't find a kernel for libguestfs. Check the entrypoint logs:

```bash
docker compose logs esx2scale | grep -i kernel
```

The container auto-detects the kernel from `/boot/vmlinuz-*`. If it is missing, the Docker image may need rebuilding:

```bash
docker compose build --no-cache
```

`VixDiskLib_Open: Unknown error`

- Verify VDDK was uploaded correctly in the Setup page
- Ensure the source VM is powered off
- Check that ESXi is accessible on port 443

`Failed to retrieve VM moref`

- Ensure SSH is enabled on the ESXi host
- Test connectivity:

  ```bash
  ssh root@esxi-host vim-cmd vmsvc/getallvms
  ```
**☁️ Upload to Scale fails**

`Connection refused`

- Verify the Scale Hypercore API is accessible:

  ```bash
  curl -k https://scale-cluster/rest/v1/
  ```

- Check that firewall rules allow port 443 outbound

`Timeout during upload`

- Large disks on slow networks may time out
- Consider using a bind mount with local NVMe for faster conversion
- Check `MAX_CONCURRENT_JOBS` — uploading multiple VMs saturates bandwidth
**🐌 Performance issues**

| Tip | Impact |
|---|---|
| Use local NVMe/SSD for `/data/conversions` | 50%+ faster conversions |
| Use XFS filesystem for large files | Better than ext4 for VM images |
| Enable 10GbE / jumbo frames (MTU 9000) | Faster upload to Scale |
| Place migration host close to Scale cluster | Reduces network latency |
| Tune `MAX_CONCURRENT_JOBS` | Balance parallelism vs. I/O saturation |
| Increase `MEMORY_LIMIT` for large VMs | Prevents OOM during conversion |
```bash
cd ESX-to-Scale-Migration
git pull
docker compose build
docker compose up -d
```

Your data (database, connections, VDDK) is stored in the `esx2scale-data` volume and persists across upgrades.
For local development with hot-reload on the backend:

```bash
docker compose -f docker-compose.dev.yml up
```

This mounts `./backend` into the container and runs Uvicorn with `--reload`, so code changes take effect immediately.
For frontend development, run the Vite dev server separately:

```bash
cd frontend
npm install
npm run dev
```

The Vite server at `http://localhost:5173` proxies API calls to the backend at `:8080`.
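The proxy wiring on the Vite side normally lives in `vite.config.ts`. A sketch of what that configuration might look like — the `/api` and `/ws` path prefixes are assumptions, and the actual file in the repo may differ:

```ts
// vite.config.ts — illustrative proxy setup; /api and /ws prefixes are assumed
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      // Forward REST calls to the backend container on :8080
      "/api": { target: "http://localhost:8080", changeOrigin: true },
      // Forward WebSocket progress traffic as well
      "/ws": { target: "ws://localhost:8080", ws: true },
    },
  },
});
```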
- **Verify disk in Scale Hypercore**
  - Log into the Scale web interface
  - Navigate to Storage → Virtual Disks
  - Confirm the uploaded disk appears

- **Create new VM in Scale**
  - Create a VM with appropriate CPU/RAM
  - Attach the uploaded virtual disk
  - Use VirtIO for disk and network controllers

- **Boot and test**
  - Power on the VM
  - Verify successful boot
  - Test network connectivity
  - Validate applications

- **Clean up**
  - Remove local converted files if no longer needed
  - Update Scale VM settings as needed
> [!WARNING]
> VMware and Scale TLS verification is disabled by default because self-signed certificates are common in these environments. Enable verification if using CA-signed certs.
- Credentials are never logged or stored in plaintext — passwords are held in memory only
- Container privileges: uses explicit KVM device passthrough and targeted capabilities instead of full `--privileged` mode
- AppArmor: container runs with `apparmor:unconfined` to allow virtualization operations
- Network: all VMware/Scale communication uses HTTPS (port 443)
- Ubuntu Bare-Metal Guide — Install without Docker on Ubuntu/Debian
- RedHat Bare-Metal Guide — Install without Docker on RHEL/Alma/Rocky/Fedora
GPL-3.0 License — See LICENSE file for details.
- Built on virt-v2v from the libguestfs project
- Uses VMware VDDK for optimal disk access performance
- VirtIO drivers from the Fedora Project
- GitHub: https://github.com/mjlyon/ESX-to-Scale-Migration
- Scale Computing: https://www.scalecomputing.com/resources
- virt-v2v Docs: https://libguestfs.org/virt-v2v.1.html
Project Version: 2.0 (Ubuntu Container Edition) | Maintainer: mjlyon


