BHYVE(5) Standards, Environments, and Macros BHYVE(5)

NAME
bhyve - zone brand for running a virtual machine instance under the bhyve hypervisor

DESCRIPTION
A bhyve branded zone uses the brands(5) framework to provide an environment for running a virtual machine under the bhyve hypervisor.

CONFIGURATION
bhyve zones are configured mostly via custom attributes in the zone configuration.

Supported attributes are:

acpi on|off

This is a legacy option that no longer has any effect.

bootdisk path[,serial=serial]

Specifies the name of a ZFS volume dataset that will be attached to the guest as the boot disk. Additional disks can be specified using the disk attribute; see below.

If the optional serial number is not provided, one will be automatically generated from the dataset name. The ZFS volume must also be mapped into the zone using a device block as shown in the example below.

A suitable ZFS volume can be created using the ‘zfs create -V’ command, for example:

zfs create -V 30G rpool/bootdisk2
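
The volume can then be configured as the guest's boot disk and mapped into the zone; for example (the serial number here is illustrative):

add attr
    set name=bootdisk
    set type=string
    set value=rpool/bootdisk2,serial=QM00001
end
add device
    set match=/dev/zvol/rdsk/rpool/bootdisk2
end
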
bootrom firmware

Specifies the name of the boot ROM to use for starting the virtual machine. The available ROMs are:

BHYVE_RELEASE_CSM (default)
The default boot ROM that supports both UEFI and CSM (legacy) boot. If the VM supports UEFI boot, you should use BHYVE_RELEASE instead, since it incorporates newer firmware.
BHYVE_RELEASE
Production ROM supporting UEFI boot only.
BHYVE
An alias for BHYVE_RELEASE.
BHYVE_CSM
An alias for BHYVE_RELEASE_CSM.
BHYVE_DEBUG_CSM
A version of the CSM ROM which produces debug messages to the console.
BHYVE_DEBUG
A version of the UEFI ROM which produces debug messages to the console.

The firmware parameter can also be specified as an absolute path to a custom ROM.
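
For example, to boot a guest with the debug UEFI ROM while troubleshooting:

add attr
    set name=bootrom
    set type=string
    set value=BHYVE_DEBUG
end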

cdrom[N] path

Specifies the path to one or more CD/DVD image (.iso) files that will be inserted into virtual CD/DVD drive(s) in the guest. To specify multiple image files, create multiple attributes with different values of N, starting with 0. If only a single CD image is required, N can be omitted.

Each image file must also be mapped into the zone using an fs block, as shown in the example below.
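
For example, the following sketch attaches two images (the paths are illustrative, and each also requires a matching lofs fs block):

add attr
    set name=cdrom0
    set type=string
    set value=/rpool/iso/install.iso
end
add attr
    set name=cdrom1
    set type=string
    set value=/rpool/iso/drivers.iso
end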

cloud-init on|off | filename | URL

When this option is set to on or to a filename, the guest will be booted with a small CD image attached that provides configuration data which cloud-init can use to automatically configure the guest operating system. If a filename is provided, the contents of that file are used directly as the user-data. If any network interfaces are configured with an allowed-address property, then that address will be provided along with the configuration data. See also the dns-domain, password, resolvers and sshkey options.

If a URL is provided, then that is passed to the guest system as the source of the full meta-data.
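
For example, to enable cloud-init with a local user-data file (the path is illustrative):

add attr
    set name=cloud-init
    set type=string
    set value=/rpool/config/user-data
end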

console options

This parameter configures where the guest's console device is presented. The default value is /dev/zconsole, which means that the guest's console can be accessed via:

zlogin -C <zone>

Other supported values include socket,<path> which places a UNIX domain socket at ‘<path>’ through which the console can be accessed.
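
For example, to expose the console via a UNIX domain socket instead of the default zconsole device (the socket path is illustrative):

add attr
    set name=console
    set type=string
    set value=socket,/tmp/vm.console
end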

disk[N] dataset[,serial=serial]

Specifies one or more ZFS volume dataset names which will be attached to the guest as disks. To attach multiple disks, create multiple attributes with different values of N. In that case, the disk will be presented on target N. If only a single disk is required, N can be omitted. The disks specified via the disk attribute are in addition to the system boot disk, which is specified using bootdisk.

If the optional serial number is not provided, one will be automatically generated from the dataset name. Each ZFS volume must also be mapped into the zone using a device block as shown in the example below.
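
For example, the following sketch attaches two additional volumes, overriding the generated serial number on the first (dataset names and serial are illustrative):

add attr
    set name=disk0
    set type=string
    set value=rpool/data0,serial=DATA00001
end
add attr
    set name=disk1
    set type=string
    set value=rpool/data1
end
add device
    set match=/dev/zvol/rdsk/rpool/data0
end
add device
    set match=/dev/zvol/rdsk/rpool/data1
end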

diskif type

Specifies the type of interface to which the disks will be attached. Available options are:

  • ahci
  • nvme
  • virtio-blk (default)
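
For example, to present disks to the guest on an emulated NVMe controller:

add attr
    set name=diskif
    set type=string
    set value=nvme
end
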
dns-domain domainname

The DNS domain name for the guest. Included in the data passed to the guest when the cloud-init option is enabled.

extra options

Any extra options to be passed directly to the bhyve hypervisor.

hostbridge type

Specifies the type of emulated system host bridge that will be presented to the guest. Available options are:

  • amd
  • i440fx (default)
  • netapp
  • q35
  • vendor=ID,device=ID
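
For example, to emulate a q35 host bridge:

add attr
    set name=hostbridge
    set type=string
    set value=q35
end
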
netif type

Specifies the type of network interface that will be used for the interfaces presented to the guest. Available options are:

  • virtio-net-viona (accelerated virtio interface, default)
  • virtio-net (legacy virtio interface)
  • e1000

Note that only the accelerated virtio interface supports filtering using the zone firewall.
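
For example, to fall back to e1000 emulation for a guest that lacks virtio drivers:

add attr
    set name=netif
    set type=string
    set value=e1000
end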

password string|hash | filename

When the cloud-init option is enabled, the provided password will be passed to the guest, which can use it to set the password for the default user. Depending on the guest, this may be the root user or a distribution-dependent initial user. password can be provided as a fixed string, a pre-computed hash, or a path to a file that contains the desired password or password hash.

priv.debug on|off

Set to on to enable debugging for privilege management. The debug messages will appear in the zone's /tmp/init.log.

pptN on|off|slotS

Pass through a PCI device to the guest. Available devices for pass-through can be viewed with ‘pptadm list -a’. N must match the number of the desired device. Set to on to enable pass-through, and to off to disable it, or use slotS as described below.

Pass-through devices are presented to the guest in numerical order by default. An explicit order can be forced by setting the attribute value to slotS (S between 0 and 7) in which case the device will be placed into slot S, and any other devices will be added in numerical order around it.

The /dev/pptN device must also be passed through to the guest via a device block.

To enable a PCI device for pass-through, it must be bound to the ppt driver and added to the /etc/ppt_matches file, after which it will be visible in the output of ‘pptadm list -a’. The binding can be achieved using update_drv(1m) or by adding an entry to the /etc/ppt_aliases file (in the same format as /etc/driver_aliases) and rebooting.
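
For example, the following sketch passes device 3 through to the guest and pins it to slot 0 (the device number is illustrative):

add attr
    set name=ppt3
    set type=string
    set value=slot0
end
add device
    set match=/dev/ppt3
end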

ram size[KMGT]

Specify the guest's physical memory size. The size argument may be suffixed with one of K, M, G or T to indicate a multiple of kibibytes, mebibytes, gibibytes or tebibytes. If no suffix is given, the value is assumed to be in mebibytes.

The default value, if this attribute is not specified, is 256M.

resolvers resolver[,resolver...]

A comma-delimited list of DNS resolver IP addresses. These are included in the data passed to the guest when the cloud-init option is enabled.

rng on|off

Set to on to attach a virtio random number generator (RNG) to the guest (default: off).

sshkey string|filename

When the cloud-init option is enabled, the provided sshkey will be passed to the guest which can use it to set the authorised SSH keys for the default user and/or the root user. sshkey can be provided as a fixed string or a path to a file that contains the desired public key.

type type

Specifies the type of the virtual machine. This needs to be set for some guest operating systems so that the virtual environment is configured the way they expect. For most guests, this can be left unset. Supported values are:

  • generic (default)
  • openbsd
  • windows

uuid uuid

Specifies the unique identifier for the virtual machine. If this attribute is not set, a random UUID will be generated when the zone is first installed.

vcpus [cpus=]numcpus [,sockets=s][,cores=c][,threads=t]

Specify the number of guest virtual CPUs and/or the CPU topology. The default value for each of the parameters is 1. The topology must be consistent in that numcpus must equal the product of the other parameters.

The maximum supported number of virtual CPUs is 32.
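
For example, the following topology is consistent, since 2 sockets x 4 cores x 2 threads = 16 vCPUs:

add attr
    set name=vcpus
    set type=string
    set value="cpus=16,sockets=2,cores=4,threads=2"
end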

virtfs[N] sharename,path[,ro]

Share a filesystem with the guest using Virtio 9p (VirtFS). The specified path is presented over PCI as a share named sharename. The optional ro option configures the share as read-only. The filesystem path being shared must also be mapped into the zone, using either a delegated dataset or a loopback (lofs) mount. See the EXAMPLES section below.

vga off|on|io

Specify the type of VGA emulation to use when the framebuffer and VNC server are enabled. Possible values for this option are:

off (default)
This option should be used for UEFI guests that assume that the VGA adapter is present if they detect the I/O ports.
on
This option should be used along with the CSM bootrom to boot traditional BIOS guests that require the legacy VGA I/O and memory regions to be available.
io
This option should be used for guests that attempt to issue BIOS calls which result in I/O port queries and fail to boot if I/O decode is disabled.

vnc on|wait|off|options

This parameter controls whether a virtual framebuffer is attached to the guest and made available via VNC. Available options are:

on
An alias for unix=/tmp/vm.vnc which creates the VNC socket within /tmp inside the zone.
wait
An alias for wait,unix=/tmp/vm.vnc which is identical to on except that the zone boot is halted until a VNC connection is established.
off
Disable the framebuffer. This is the same as omitting the vnc attribute.
unix=path
Sets up a VNC server on a UNIX socket at the specified path. Note that this path is relative to the zone root.
w=pixels
Specifies the horizontal screen resolution (default: 1024, max: 1920)
h=pixels
Specifies the vertical screen resolution (default: 768, max: 1200)
wait
Pause boot until a VNC connection is established.

Multiple options can be provided, separated by commas. See also xhci below.
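
For example, to run the VNC server on the default socket at 1920x1080 and pause boot until a client connects:

add attr
    set name=vnc
    set type=string
    set value=unix=/tmp/vm.vnc,w=1920,h=1080,wait
end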

The bhyve brand also ships a mini socat utility that can be used to connect the socket to a TCP port. The utility can be invoked like so:

/usr/lib/brand/bhyve/socat \
        /zones/bhyve/root/tmp/vm.vnc 5905

If you prefer, you can also use the real socat utility that's shipped in core:

/usr/bin/socat \
        TCP-LISTEN:5905,bind=127.0.0.1,reuseaddr,fork \
        UNIX-CONNECT:/zones/bhyve/root/tmp/vm.vnc
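
In either case, a VNC client on the host can then connect to port 5905 to reach the guest's framebuffer.
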
xhci on|off

Enable or disable the emulated USB tablet interface along with the emulated framebuffer. Note that this option currently needs to be disabled for illumos guests.
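
For example, to disable the tablet interface for an illumos guest:

add attr
    set name=xhci
    set type=string
    set value=off
end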

EXAMPLES
An example bhyve zone configuration is shown below:

create -t bhyve
set zonepath=/zones/bhyve
add net
    set allowed-address=10.0.0.112/24
    set physical=vm0
end
add device
    set match=/dev/zvol/rdsk/rpool/bhyve0
end
add attr
    set name=ram
    set type=string
    set value=2G
end
add attr
    set name=vcpus
    set type=string
    set value="sockets=2,cores=4,threads=2"
end
add attr
    set name=bootdisk
    set type=string
    set value=rpool/bhyve0
end
add fs
    set dir=/rpool/iso/debian-9.4.0-amd64-netinst.iso
    set special=/rpool/iso/debian-9.4.0-amd64-netinst.iso
    set type=lofs
    add options ro
    add options nodevices
end
add attr
    set name=cdrom
    set type=string
    set value=/rpool/iso/debian-9.4.0-amd64-netinst.iso
end

The following example shows how to share a delegated dataset called rpool/datavol to a guest using VirtFS. This assumes that the mountpoint attribute on rpool/datavol is set to /datavol. This could have been done, for example, by creating the dataset with:

zfs create -o mountpoint=/datavol -o zoned=on rpool/datavol

Setting the mountpoint and zoned attributes at the same time prevents the filesystem from ever being mounted in the global zone.

add dataset
    set name=rpool/datavol
end
add attr
    set name=virtfs0
    set type=string
    set value=datavol,/datavol
end

To share the global zone filesystem /data/websites read-only with the guest, add:

add fs
    set dir="/data/websites"
    set special="/data/websites"
    set type="lofs"
    add options ro
    add options nodevices
end
add attr
    set name=virtfs1
    set type=string
    set value=websites,/data/websites,ro
end

SEE ALSO
mdb(1), proc(1), bhyve(1m), dtrace(1m), zfs(1m), zoneadm(1m), zonecfg(1m), brands(5), privileges(5), resource_controls(5), zones(5)
September 11, 2021 OmniOS