
Virtualization Engine

Hardware virtualization lifecycle management for the three hypervisors on which SysManage agents can act as parent hosts: KVM/QEMU on Linux, bhyve on FreeBSD, and VMM/vmd on OpenBSD. The engine migrates roughly 30,000 lines of agent-side hypervisor orchestration into a server-side Cython engine, leaving the agent with only read-only listing and a thin command-execution shim.

Overview

The Virtualization Engine generates declarative deployment plans (files + shell command sequences) that the agent runs through its existing apply_deployment_plan handler. Plans cover the full VM lifecycle: image acquisition (download + decompression), disk allocation, cloud-init seed ISO build, virt-install/vm/vmctl invocation, post-boot agent bootstrap, and clean teardown. The agent never has to know how to talk to libvirt, vm-bhyve, or vmd directly — the engine handles all of it.
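A deployment plan is a declarative bundle of files plus an ordered command sequence. The exact schema is internal to the engine; the sketch below is a hypothetical illustration of the shape such a plan might take (field names like `files`, `commands`, and `skip_if` are assumptions, not the engine's actual schema):

```python
# Hypothetical sketch of a deployment plan: files to write plus an
# ordered list of shell commands, each with an optional skip guard.
# Field names are illustrative, not the engine's real schema.
plan = {
    "files": [
        {"path": "/var/lib/sysmanage/cloud-init/user-data",
         "content": "#cloud-config\nhostname: guest01\n"},
    ],
    "commands": [
        # skip_if lets re-runs against a cached image short-circuit
        {"run": "qemu-img convert -O qcow2 base.img vm01.qcow2",
         "skip_if": "test -f vm01.qcow2"},
        {"run": "virt-install --import --name vm01 --memory 2048 "
                "--disk vm01.qcow2 --os-variant generic --noautoconsole"},
    ],
}

for step in plan["commands"]:
    print(step["run"])
```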

Tier & Licensing

  • Community Edition: read-only listing of running VMs through the agent's list_child_hosts command. No create / delete / lifecycle.
  • Professional tier: includes the reboot orchestration slice (safe parent reboot with child host stop/start) but not VM creation / deletion.
  • Enterprise tier: full hardware virtualization management.

Supported Hypervisors

KVM / QEMU (Linux)

Uses libvirt + virt-install. Cloud-image guests boot via a cloud-init seed ISO; the engine handles per-distribution cloud-init wiring — full cloud-init runcmd for Linux distros and a nuageinit + bootstrap.sh + SSH-driven post-boot path for FreeBSD's BASIC-CLOUDINIT images. Multi-distro autoinstall is supported through the same engine: Debian preseed, Ubuntu Subiquity autoinstall, and Alpine apkovl overlays.
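The per-distribution wiring ultimately reduces to a cloud-init user-data document whose runcmd section bootstraps the agent. A minimal sketch, assuming a placeholder bootstrap command (the real plan installs the agent package and writes its config):

```python
# Minimal #cloud-config user-data with a runcmd bootstrap step.
# The runcmd shown is a placeholder, not SysManage's actual bootstrap.
def build_user_data(hostname: str, username: str) -> str:
    return (
        "#cloud-config\n"
        f"hostname: {hostname}\n"
        "users:\n"
        f"  - name: {username}\n"
        "    sudo: ALL=(ALL) NOPASSWD:ALL\n"
        "runcmd:\n"
        "  - [sh, -c, 'echo bootstrap-placeholder']\n"
    )

print(build_user_data("guest01", "admin"))
```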

bhyve (FreeBSD)

Uses vm-bhyve. Raw FreeBSD images are pre-modified before first boot: an /etc/rc.d/sysmanage_firstboot script (with the FreeBSD firstboot rc.d keyword) is injected via mdconfig + mount so the agent installs itself once the guest comes up. ZFS zvol storage and pf NAT for vm-bhyve switches are also covered.
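The injected script relies on FreeBSD's firstboot rc.d keyword, which makes rc(8) run it only when the /firstboot sentinel file exists. A sketch of what such a script might look like (the script name and log path match the ones this page mentions; the body is a placeholder):

```python
# Sketch of the /etc/rc.d/sysmanage_firstboot script the engine injects.
# "KEYWORD: firstboot" restricts it to boots where /firstboot exists.
# The start body here is a placeholder for the real agent install.
FIRSTBOOT_RC = """\
#!/bin/sh
# PROVIDE: sysmanage_firstboot
# REQUIRE: NETWORKING
# KEYWORD: firstboot

. /etc/rc.subr

name="sysmanage_firstboot"
start_cmd="sysmanage_firstboot_start"

sysmanage_firstboot_start()
{
    # Placeholder: the real script installs and configures the agent,
    # logging to /var/log/sysmanage-firstboot.log.
    echo "bootstrap placeholder" >> /var/log/sysmanage-firstboot.log
}

load_rc_config $name
run_rc_command "$1"
"""

print(FIRSTBOOT_RC)
```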

VMM / vmd (OpenBSD)

Uses vmctl + vmd. OpenBSD guests install via httpd-served sets and an install.conf embedded into a modified bsd.rd ramdisk (rdsetroot extract / vnconfig mount / re-pack). After OpenBSD's installer finishes, the VM reboots from disk; the engine plan reloads vmd with the final vm.conf fragment.
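install.conf is OpenBSD's autoinstall(8) answer file: each line pairs an installer question prefix with a response. A hypothetical minimal fragment (the hostname, password placeholder, and sets server are illustrative, not the engine's actual defaults):

```python
# Hypothetical install.conf fragment for OpenBSD autoinstall(8).
# Format is "<question prefix> = <answer>"; values are placeholders.
def build_install_conf(hostname: str, sets_server: str) -> str:
    return (
        f"System hostname = {hostname}\n"
        "Password for root account = *************\n"
        "Allow root ssh login = yes\n"
        "Location of sets = http\n"
        f"HTTP Server = {sets_server}\n"
        "Set name(s) = -all bsd* base* etc*\n"
    )

print(build_install_conf("vmm-guest01", "10.0.0.1"))
```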

Open Source vs Professional+

  • Community Edition: agent reports the VMs it can see (list_child_hosts) so the parent's "Child Hosts" tab can render them read-only.
  • Professional+: full create / delete / start / stop / restart of KVM, bhyve, VMM child hosts.
  • Professional+: per-distribution guest provisioning (cloud-init runcmd, FreeBSD nuageinit, Debian preseed, Ubuntu autoinstall, Alpine apkovl, bhyve firstboot, OpenBSD install.conf).
  • Professional tier: safe-parent-reboot orchestration — stop child hosts, reboot parent, restart children automatically.
  • Professional+: libvirt network create/delete, vm-bhyve switch + pf NAT, VMM vm.conf fragments.
  • Professional+: KVM qcow2 clone + resize, bhyve ZFS zvol create/destroy, VMM disk allocation via vmctl create.

VM Create Flow

A KVM cloud-image VM create runs end-to-end as one deployment plan. Steps are skip-guarded so re-runs against an already-cached base image are cheap.

  1. Ensure the libvirt image directory exists.
  2. Download the cloud image (skipped if already cached).
  3. Decompress (xz / gz / bz2) if needed.
  4. Clone the base image to the per-VM disk path via qemu-img convert.
  5. Resize the cloned disk to the requested capacity.
  6. Build a cloud-init / nuageinit cidata ISO (genisoimage / mkisofs / xorrisofs fallback).
  7. Run virt-install --import.
  8. For FreeBSD: wait for DHCP + SSH, retry pubkey auth until nuageinit seeds ~/.ssh/authorized_keys, then drive a pty-based su -m root -c bootstrap.sh.
  9. Inside the guest: bootstrap.sh installs the agent .pkg / .deb, writes /etc/sysmanage-agent.yaml (including the auto-approve token), and starts the service.
  10. New agent registers with the parent server; auto-approve token matches; host appears in the main hosts list.
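The numbered steps above can be rendered as the shell sequence a plan might emit for a hypothetical Debian guest. Paths, the image URL, and sizes are illustrative, not engine defaults; the `test -f ... ||` guards mirror the skip-guarding of steps 2-3:

```python
# Illustrative rendering of the KVM create flow as shell commands.
# All paths, URLs, and sizes are placeholders.
IMAGE_DIR = "/var/lib/libvirt/images"
BASE = f"{IMAGE_DIR}/debian-12-generic.qcow2"
DISK = f"{IMAGE_DIR}/vm01.qcow2"

steps = [
    f"mkdir -p {IMAGE_DIR}",                                              # 1. image dir
    f"test -f {BASE} || curl -fLo {BASE}.xz https://example.org/img.xz",  # 2. cached download
    f"test -f {BASE} || xz -d {BASE}.xz",                                 # 3. decompress
    f"qemu-img convert -O qcow2 {BASE} {DISK}",                           # 4. clone
    f"qemu-img resize {DISK} 20G",                                        # 5. resize
    f"genisoimage -output {IMAGE_DIR}/seed.iso -volid cidata "
    "-joliet -rock user-data meta-data",                                  # 6. cidata ISO
    f"virt-install --import --name vm01 --memory 2048 --disk {DISK} "
    f"--disk {IMAGE_DIR}/seed.iso,device=cdrom "
    "--os-variant generic --noautoconsole",                               # 7. boot
]

for cmd in steps:
    print(cmd)
```

Note the cidata volume ID on the seed ISO: cloud-init only picks up the NoCloud datasource when the ISO is labeled `cidata`.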

Using the UI

From a parent host's Host Details → Child Hosts tab:

  • Each hypervisor card (KVM, bhyve, VMM, LXD, WSL) shows its capability state and an Enable / Initialize button when the host is capable but the hypervisor is not yet bootstrapped.
  • When the hypervisor is ready, a Create VM (or Create Container for LXD, Create Instance for WSL) button opens the create dialog.
  • The dialog asks for distribution, hostname, username, password, and an optional auto-approve checkbox. For VMM you also supply an ISO URL and a separate root password.
  • Once a child host exists, per-row actions cover Start / Stop / Restart / Delete. Delete dispatches the engine's destroy + undefine + storage-purge plan; the row drops out of the UI immediately.

API Endpoints

  • POST /api/host/{id}/virtualization/create-child — create a KVM / bhyve / VMM / LXD / WSL child host
  • DELETE /api/host/{id}/children/{child_id} — destroy + undefine + purge storage
  • POST /api/host/{id}/children/{child_id}/start — lifecycle: start
  • POST /api/host/{id}/children/{child_id}/stop — lifecycle: stop
  • POST /api/host/{id}/children/{child_id}/restart — lifecycle: restart
  • POST /api/host/{id}/virtualization/{kvm|bhyve|vmm|lxd|wsl}/enable — bootstrap the hypervisor on the parent
  • GET /api/host/{id}/virtualization/status — capability + ready state per hypervisor
  • POST /api/host/{id}/reboot — safe reboot orchestration (stops children, reboots parent, restarts children)
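Assuming bearer-token auth (the header and the payload field names below are assumptions, not confirmed by this page), a create-child request might be assembled like this:

```python
# Illustrative request assembly for the create-child endpoint.
# Payload field names and the Authorization header are assumptions.
import json

def build_create_child_request(base_url: str, host_id: str, token: str) -> dict:
    return {
        "method": "POST",
        "url": f"{base_url}/api/host/{host_id}/virtualization/create-child",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "hypervisor": "kvm",
            "distribution": "debian",
            "hostname": "vm01",
            "username": "admin",
            "auto_approve": True,
        }),
    }

req = build_create_child_request("https://sysmanage.example", "42", "TOKEN")
print(req["url"])
```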

Required Permissions

  • Enable KVM, Enable bhyve, Enable VMM, Enable LXD, Enable WSL — bootstrap the hypervisor on a parent host.
  • Create Child Host — open the create dialog and dispatch the create plan.
  • Start Child Host, Stop Child Host, Restart Child Host, Delete Child Host — per-action lifecycle controls.
  • Configure Child Host — edit child host settings (hostname, distribution, etc.).
  • Reboot Host — trigger the safe-parent-reboot flow.

Troubleshooting

  • Inner-VM agent registration fails with connection-timeout: check that the parent host's firewall (ufw / firewalld / pf) allows traffic from the libvirt NAT subnet (typically 192.168.122.0/24) to the SysManage API port (8080 by default).
  • API not reachable from inside the VM but reachable on the host: check api.host in /etc/sysmanage.yaml; set it to 0.0.0.0 (or the host's routable IP) instead of localhost.
  • FreeBSD bootstrap logs go to /var/log/sysmanage-firstboot.log (bhyve) or stream over the SSH session (KVM); the engine surfaces them in the command_result for the create plan.
  • Cloud-image download / decompression are skip-guarded against the final qcow2 path: if a previous VM left the decompressed image behind, subsequent creates re-use it without re-fetching.
  • The engine uses --os-variant generic for virt-install (the resulting warning is harmless); pin a specific variant per distribution when libvirt's osinfo DB has a closer match.
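Pinning a variant can be as simple as a lookup table that falls back to generic. The variant IDs below follow libosinfo naming conventions and are assumptions; verify them with `virt-install --osinfo list` on the parent host:

```python
# Map distributions to libosinfo variant IDs, falling back to "generic".
# IDs follow libosinfo conventions; verify against the parent host's DB.
OS_VARIANTS = {
    "ubuntu": "ubuntu22.04",
    "debian": "debian12",
    "alpine": "alpinelinux3.17",
    "freebsd": "freebsd13.1",
}

def os_variant(distribution: str) -> str:
    return OS_VARIANTS.get(distribution, "generic")

print(os_variant("debian"))   # debian12
print(os_variant("openbsd"))  # generic
```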