123 changes: 97 additions & 26 deletions Guide/src/reference/openvmm/management/cli.md
@@ -1,12 +1,39 @@
# CLI

This page summarizes common `openvmm` command-line operations and VM launch
options.

```admonish danger title="Disclaimer"
The following list is not exhaustive and may be out of date.

The most up-to-date reference is always the [`RunOptions` rustdoc][run] and
[`ServeOptions` rustdoc][serve], as well as the generated CLI help (via
`cargo run -- run --help` or `cargo run -- serve --help`).
```

## Commands

Use `openvmm run` to launch a VM. For compatibility, launch options are also
accepted at the top level, so existing invocations like `openvmm -p 2 ...`
continue to work as an implicit `run` command. Command-style operations such as
`attach` and `inspect` are separate subcommands and do not accept VM launch
options before the command name.

* `run [OPTIONS]`: Launch a VM using the configured firmware, devices,
memory, and processor topology.
* `serve --transport <grpc|ttrpc> [--pidfile <PATH>] <SOCKETPATH>`: Host the
management RPC API on the specified Unix socket. The RPC API can create and
control VMs through the server process.
* `attach <SOCKETPATH>`: Connect to a VM started with `--mesh-listen` and run
an interactive REPL. Exiting the attached REPL disconnects the client without
stopping the VM.
* `inspect [OPTIONS] <SOCKETPATH> [ELEMENT]`: Connect to a VM started with
`--mesh-listen`, inspect one element path, print the result, and exit. Use
`-r`/`--recursive` to enumerate recursively, `--limit <N>` to bound recursive
depth, and `-v`/`--paravisor` to target the paravisor.
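
For example, a typical launch-and-inspect workflow might look like the
following sketch (the socket path is illustrative):

```bash
# Launch a VM with 2 processors, exposing the attach/inspect socket
openvmm run -p 2 --mesh-listen /tmp/vm.sock

# In another terminal: open an interactive REPL against the running VM
openvmm attach /tmp/vm.sock

# One-shot recursive inspection, bounded to two levels deep
openvmm inspect -r --limit 2 /tmp/vm.sock
```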

## VM Launch Options

* `--processors <COUNT>`: The number of processors. Defaults to 1.
* `--memory <SPEC>`: Configure guest RAM. Defaults to `size=1G`.
`SPEC` can be a size-only shorthand, such as `--memory 4G`, or a
@@ -61,7 +88,12 @@ as well as the generated CLI help (via `cargo run -- --help`).
--hypervisor kvm
```
* `--uefi`: Boot using `mu_msvm` UEFI
* `--uefi-firmware <FILE>`: Path to the UEFI firmware file (`MSVM.fd`). When
  `--uefi` is specified, this option is required only if you do not set one of
  the environment variables `OPENVMM_UEFI_FIRMWARE`,
  `X86_64_OPENVMM_UEFI_FIRMWARE`, or `AARCH64_OPENVMM_UEFI_FIRMWARE`. If
  omitted, the default is read from `OPENVMM_UEFI_FIRMWARE` first, then falls
  back to the architecture-specific variables.
* `--pcat`: Boot using the Microsoft Hyper-V PCAT BIOS
* `--disk file:<DISK>`: Exposes a single disk over VMBus. You must also
pass `--hv`. The `DISK` argument can be:
@@ -81,30 +113,44 @@ as well as the generated CLI help (via `cargo run -- --help`).
crashes, the pidfile is not removed — consumers should verify the PID is
still alive. No file locking is performed; concurrent launches with the same
pidfile path will overwrite each other. Not written for short-lived utility
modes such as `--write-saved-state-proto`. Also accepted by `serve`.
* `--mesh-listen <SOCKETPATH>`: Listen for REPL attach connections on the
specified socket path. It requires the process mesh and conflicts with
`--single-process`. The socket is removed during clean shutdown. Any local
process that can connect to the socket can control the VM, so place it in a
directory restricted to the intended user.
* `--nic`: Exposes a NIC using the Consomme user-mode NAT.
* `--gfx`: Enable a graphical console over VNC (see below)
* `--virtio-9p`: Expose a virtio 9p file system. Uses the format
`tag,root_path`, e.g. `myfs,C:\`. The file system can be mounted in a
Linux guest using `mount -t 9p -o trans=virtio tag /mnt/point`. You can
specify this argument multiple times to create multiple file systems.
* `--virtio-fs`: Expose a virtio-fs file system. The format is the same as
  `--virtio-9p`. The file system can be mounted in a Linux guest using
  `mount -t virtiofs tag /mnt/point`. You can specify this argument multiple
  times to create multiple file systems (a combined example appears after this
  list).
* `--virtio-rng`: Add a virtio entropy (RNG) device, exposing `/dev/hwrng` in
the Linux guest. The guest kernel must have `CONFIG_HW_RANDOM_VIRTIO`
enabled.
* `--virtio-rng-bus <BUS>`: Select the bus for the virtio-rng device (`auto`,
`mmio`, `pci`, `vpci`). Defaults to `auto`.
* `--vhost-user <SPEC>`: Attach a vhost-user device backed by an external
process over a Unix socket (Linux only). The backend process must already be
listening on `SOCKET_PATH`. The spec uses this format:

```text
<SOCKET_PATH>,type=<TYPE>[,tag=<NAME>]
[,num_queues=<N>][,queue_size=<N>][,pcie_port=<PORT>]
```

Supported `type` values: `blk`, `fs`. For `type=fs`, `tag=<NAME>` is required
and specifies the mount tag exposed to the guest (max 36 bytes).
`num_queues` and `queue_size` control the queue layout (defaults: blk
num_queues=1/queue_size=128, fs num_queues=1/queue_size=1024).
Alternatively, use `device_id=<N>` instead of `type=` to specify the numeric
virtio device ID directly, with `queue_sizes=[N,N,N]` for per-queue sizes.
Examples:
```bash
--vhost-user /tmp/vhost-blk.sock,type=blk
--vhost-user /tmp/vhost-blk.sock,type=blk,num_queues=4,queue_size=512
--vhost-user /tmp/vhost-blk.sock,type=blk,pcie_port=rp0
@@ -113,7 +159,8 @@ as well as the generated CLI help (via `cargo run -- --help`).
--vhost-user /tmp/vhost.sock,device_id=26,queue_sizes=[256,256]
```
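
Putting several launch options together, a single invocation might look like
this sketch (all paths are illustrative, and the disk image is an assumption):

```bash
# UEFI boot with a VMBus disk and a virtio-fs share
OPENVMM_UEFI_FIRMWARE=/path/to/MSVM.fd \
openvmm run --uefi --hv \
    --disk file:/path/to/disk.vhd \
    --virtio-fs myfs,/srv/share
```

Inside the Linux guest, the share can then be mounted with
`mount -t virtiofs myfs /mnt/point`.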

Serial devices can be configured to appear as different devices inside the
guest:

* `--com1/com2 <BACKEND>`: Configure a COM port serial device.
* `--virtio-console <BACKEND>`: Expose a virtio console device (appears as
@@ -131,6 +178,27 @@ The `BACKEND` argument is the same for all serial devices:
connections on the given IP address and port. Typically IP will be
127.0.0.1, to restrict connections to the current host.

## Management RPC Server

Use `openvmm serve` to host the management RPC API without launching a VM from
the command line:

```bash
openvmm serve --transport grpc /tmp/openvmm-rpc.sock
openvmm serve --transport ttrpc /tmp/openvmm-rpc.sock
openvmm serve --transport ttrpc --pidfile /tmp/openvmm.pid /tmp/openvmm.sock
```

`--transport` is required. Builds that do not include the selected transport
will reject the command. Use `--pidfile <PATH>` to write the server process ID
on startup and remove it on clean exit.
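
Because a stale pidfile can survive a crash, consumers should confirm the PID
is still alive before trusting it. A minimal sketch, assuming a POSIX shell and
an illustrative path:

```bash
# kill -0 sends no signal; it only tests that the PID exists and is signalable
if kill -0 "$(cat /tmp/openvmm.pid)" 2>/dev/null; then
    echo "openvmm server still running"
else
    echo "pidfile is stale or the server has exited"
fi
```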

The positional `SOCKETPATH` is the management RPC server socket. This is
separate from `--mesh-listen <SOCKETPATH>`, which exposes the interactive attach
and inspect socket used by `openvmm attach` and `openvmm inspect`. When passed
to `serve`, `--mesh-listen` controls whether VMs managed by the server also
expose that attach/inspect socket.

Contributor: Is it possible/desired to expose only a mesh socket? I guess that
wouldn't have much value, since you couldn't launch a vm through it.

Member Author: You could imagine only exposing a mesh socket and adding a way
to specify the starting params via the CLI. I think we might get there
eventually, but not yet. I can also imagine having a single socket support both
gRPC and mesh (but not ttrpc and mesh, since ttrpc doesn't have any kind of
protocol negotiation).
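
For managed VMs to also be reachable via `openvmm attach` and `openvmm
inspect`, a sketch combining both sockets (paths are illustrative; confirm the
exact flag semantics with `cargo run -- serve --help`):

```bash
openvmm serve --transport grpc --mesh-listen /tmp/openvmm-mesh.sock \
    /tmp/openvmm-rpc.sock
```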

## PCIe Device Support

OpenVMM can emulate a PCI Express topology using `--pcie-root-complex` and
@@ -139,7 +207,7 @@ attached to a root port to appear as PCIe devices in the guest.

### Setting up a PCIe topology

```bash
# Create a root complex and root port
--pcie-root-complex rc0 --pcie-root-port rc0:rp0
```
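
A complete invocation wiring a device to the new root port might look like this
sketch (the disk path is illustrative):

```bash
openvmm run \
    --pcie-root-complex rc0 --pcie-root-port rc0:rp0 \
    --nvme file:/path/to/disk.raw,pcie_port=rp0
```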
@@ -151,15 +219,15 @@ PCIe root port. The syntax varies slightly between device types:

**Disks** (comma-separated option): `--disk`, `--nvme`, `--virtio-blk`

```bash
--virtio-blk file:/path/to/disk.raw,pcie_port=rp0
--nvme file:/path/to/disk.raw,pcie_port=rp0
--disk file:/path/to/disk.raw,pcie_port=rp0
```

**NICs** (colon-prefixed): `--net`, `--virtio-net`, `--mana`

```bash
--virtio-net pcie_port=rp0:tap:tap0 # TAP is Linux-only
--net pcie_port=rp0:consomme
--mana pcie_port=rp0:tap:tap0 # TAP is Linux-only
@@ -168,7 +236,7 @@ PCIe root port. The syntax varies slightly between device types:
**Filesystems and other virtio devices** (colon-prefixed):
`--virtio-fs`, `--virtio-fs-shmem`, `--virtio-9p`, `--virtio-pmem`

```bash
--virtio-fs pcie_port=rp0:myfs,/path/to/share
--virtio-fs-shmem pcie_port=rp0:myfs,/path/to/share
--virtio-9p pcie_port=rp0:myfs,/path/to/share
@@ -177,20 +245,23 @@ PCIe root port. The syntax varies slightly between device types:

For `--virtio-rng` and `--virtio-console`, use their separate PCIe port flags:

```bash
--virtio-rng --virtio-rng-pcie-port rp0
--virtio-console console --virtio-console-pcie-port rp0
```

**vhost-user devices** (comma-separated option, Linux only): `--vhost-user`

```bash
--vhost-user /tmp/vhost-blk.sock,type=blk,pcie_port=rp0
--vhost-user /tmp/virtiofsd.sock,type=fs,tag=myfs,pcie_port=rp0
```

**VFIO device assignment** (Linux only): `--vfio`

```bash
--vfio rp0:0000:01:00.0
```
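
On the host, the device generally must be bound to the `vfio-pci` driver before
it can be assigned. A hedged sketch of the usual sysfs procedure (the BDF
`0000:01:00.0` is illustrative, and OpenVMM may impose requirements not covered
here):

```bash
# Detach the device from its current driver, then steer it to vfio-pci
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/0000:01:00.0/driver_override
echo 0000:01:00.0 > /sys/bus/pci/drivers_probe
```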

[run]: https://openvmm.dev/rustdoc/linux/openvmm_entry/struct.RunOptions.html
[serve]: https://openvmm.dev/rustdoc/linux/openvmm_entry/struct.ServeOptions.html
11 changes: 8 additions & 3 deletions Guide/src/reference/openvmm/management/grpc.md
@@ -1,8 +1,13 @@
# gRPC / ttrpc

To enable gRPC or ttrpc management interfaces, use `openvmm serve` with
`--transport grpc` or `--transport ttrpc` and a Unix socket path. This will
spawn an OpenVMM process acting as a gRPC or ttrpc server.

```bash
openvmm serve --transport grpc /tmp/openvmm-rpc.sock
openvmm serve --transport ttrpc /tmp/openvmm-rpc.sock
```

Here is a list of supported RPCs:
