BHYVE(7)          Standards, Environments, and Macros          BHYVE(7)
NAME
bhyve — zone brand for running a virtual machine instance under the bhyve
hypervisor
DESCRIPTION
A bhyve branded zone uses the brands(7) framework to provide an environment
for running a virtual machine under the bhyve(8) hypervisor.

bhyve zones are configured mostly via custom attributes in the zone
configuration. Supported attributes are:
acpi on|off
    This is a legacy option that no longer has any effect.
bootdisk path[,serial=serial]
    Specifies a ZFS volume dataset name which will be attached to the guest
    as the boot disk. Additional disks can be specified using the disk
    attribute, see below.

    If the optional serial number is not provided, one will be
    automatically generated from the dataset name. The ZFS volume must also
    be mapped into the zone using a device block as shown in the example
    below.

    A suitable ZFS volume can be created using the ‘zfs create -V’ command,
    for example:

        zfs create -V 30G rpool/bootdisk2
bootorder option[,option...]
    Specifies the attempted boot order for the virtual machine. For the
    UEFI bootrom, the available options are:

    shell              The UEFI firmware shell.

    path[N]            A custom boot path. The first can be specified as
                       path or path0, and subsequent ones use increasing
                       suffixes.

    bootdisk           The boot disk specified via the bootdisk attribute.

    disk[N]            A disk specified via the corresponding disk
                       attribute.

    cdrom[N]           A CD/DVD drive specified via the corresponding
                       cdrom attribute.

    net[N][=pxe|http]  Boot from the corresponding network interface,
                       optionally selecting the boot protocol; the default
                       is pxe.

    For the legacy CSM bootrom, the available options are:

    For both bootroms, the following legacy aliases are supported but are
    deprecated. These must be used alone, without any additional options:
    Default value: path0,bootdisk,cdrom0

    Example: net=http,cdrom1,disk,bootdisk,shell
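    As a sketch, the example order above could be set through zonecfg like
    any other custom attribute:

        add attr
            set name=bootorder
            set type=string
            set value=net=http,cdrom1,disk,bootdisk,shell
        end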
bootnext option
    For the UEFI bootrom, specifies the boot device to be used for the next
    boot only, after which booting will revert to the standard boot order
    (device enumeration order, or that defined using the bootorder
    attribute). The option parameter is one of the bootorder values shown
    above for the UEFI bootrom.
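    For instance, a one-off boot from the first CD drive might be requested
    like this (zone name hypothetical), with the change taking effect on
    the next reboot:

        zonecfg -z bhyve \
            'add attr; set name=bootnext; set type=string; set value=cdrom0; end'
        zoneadm -z bhyve reboot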
bootrom firmware
    Specifies the name of the boot ROM to use for starting the virtual
    machine. The available ROMs are:

    The firmware parameter can also be specified as an absolute path to a
    custom ROM.
cdrom[N] path
    Specifies the path to one or more CD/DVD image (.iso) files that will
    be inserted into virtual CD/DVD drive(s) in the guest. To specify
    multiple image files, create multiple attributes with different values
    of N, starting with 0. If only a single CD image is required, N can be
    omitted.

    Each image file must also be mapped into the zone using an fs block, as
    shown in the example below.
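    For example, two drives could be configured as follows (ISO paths
    hypothetical):

        add attr
            set name=cdrom0
            set type=string
            set value=/rpool/iso/install.iso
        end
        add attr
            set name=cdrom1
            set type=string
            set value=/rpool/iso/drivers.iso
        end

    with a matching lofs fs block for each image, as in the EXAMPLES
    section.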
cloud-init on|off|filename|URL
    When this option is enabled, and set to on or a filename, the guest
    will be booted with a small CD image attached that provides
    configuration data that cloud-init can use to automatically configure
    the guest operating system. When a file is provided, it is used
    directly as the user-data. If any network interfaces are configured
    with an allowed-address property, then that address will be provided
    along with the configuration data. See also the dns-domain, password,
    resolvers and sshkey options.

    If a URL is provided, then that is passed to the guest system as the
    source of the full meta-data.
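    A minimal sketch of enabling cloud-init together with an SSH key (key
    file path hypothetical):

        add attr
            set name=cloud-init
            set type=string
            set value=on
        end
        add attr
            set name=sshkey
            set type=string
            set value=/root/.ssh/id_ed25519.pub
        end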
console options
    This parameter configures where the guest's console device is
    presented. The default value is /dev/zconsole which means that the
    guest's console can be accessed via:

        zlogin -C <zone>

    Other supported values include socket,<path> which places a UNIX domain
    socket at ‘<path>’ through which the console can be accessed.
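    For example, to expose the console as a UNIX domain socket (path
    hypothetical):

        add attr
            set name=console
            set type=string
            set value=socket,/tmp/vm.console
        end

    As with the vnc socket in the example further below, the path is
    interpreted inside the zone, so it would appear in the global zone
    under the zone root (for example /zones/bhyve/root/tmp/vm.console).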
debug.persist on|off
    … -b option of mdb(1).

disk[N] dataset[,serial=serial]
    Specifies one or more ZFS volume dataset names which will be attached
    to the guest as disks. To attach multiple disks, create multiple
    attributes with different values of N. In that case, the disk will be
    presented on target N. If only a single disk is required, N can be
    omitted. The disks specified via the disk attribute are in addition to
    the system boot disk, which is specified using bootdisk.

    If the optional serial number is not provided, one will be
    automatically generated from the dataset name. Each ZFS volume must
    also be mapped into the zone using a device block as shown in the
    example below.
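    For example, an additional data disk could be presented on target 1
    (dataset name hypothetical):

        add device
            set match=/dev/zvol/rdsk/rpool/datadisk1
        end
        add attr
            set name=disk1
            set type=string
            set value=rpool/datadisk1
        end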
diskif type
    Specifies the type of interface to which the disks will be attached.
    Available options are:

diskifN type
    Override the diskif type for the disk at target N.
dns-domain domainname
    The DNS domain name for the guest. Included in the data passed to the
    guest when the cloud-init option is enabled.
extra[N]
    Any extra options to be passed directly to the bhyve hypervisor. To add
    multiple options, create multiple attributes with different values of
    N. If only a single extra option is required, N can be omitted.
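    As an illustrative sketch, bhyve's -w flag (ignore accesses to
    unimplemented MSRs) could be passed through like this:

        add attr
            set name=extra
            set type=string
            set value=-w
        end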
hostbridge type
    Specifies the type of emulated system host bridge that will be
    presented to the guest. Available options are:
memreserve on|off

netif type
    Specifies the type of network interface that will be used for the
    interfaces presented to the guest. Available options are:

    Note that only the accelerated virtio interface supports filtering
    using the zone firewall.
password string|hash|filename
    When the cloud-init option is enabled, the provided password will be
    passed to the guest which can use it to set the password for the
    default user. Depending on the guest, this may be the root user or a
    distribution-dependent initial user. The password can be provided as a
    fixed string, a pre-computed hash or a path to a file that contains the
    desired password or password hash, relative to the global zone root.
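    A pre-computed hash can be generated with, for example, OpenSSL, where
    -6 selects SHA-512 crypt (output file path hypothetical):

        openssl passwd -6 > /zones/bhyve-password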
priv.debug on|off

pptN on|off|slot S
    Pass through a PCI device to the guest. Available devices for
    pass-through can be viewed with ‘pptadm list -a’. N must match the
    number of the desired device. Set to on to enable pass-through, and to
    off to disable it, or use slot S as described below.

    Pass-through devices are presented to the guest in numerical order by
    default. An explicit order can be forced by setting the attribute value
    to slot S (S between 0 and 7) in which case the device will be placed
    into slot S, and any other devices will be added in numerical order
    around it.

    The /dev/pptN device must also be passed through to the guest via a
    device block.

    To enable a PCI device for pass-through, it must be bound to the ppt
    driver and added to the /etc/ppt_matches file, after which it will be
    visible in the output of ‘pptadm list -a’. The binding can be achieved
    using update_drv(8) or by adding an entry to the /etc/ppt_aliases file
    (in the same format as /etc/driver_aliases) and rebooting.
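    A minimal sketch of passing device 0 through to the guest, assuming it
    already shows up in ‘pptadm list -a’:

        add device
            set match=/dev/ppt0
        end
        add attr
            set name=ppt0
            set type=string
            set value=on
        end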
ram size[KMGT]
    Specify the guest's physical memory size. The size argument may be
    suffixed with one of K, M, G or T to indicate a multiple of kibibytes,
    mebibytes, gibibytes or tebibytes. If no suffix is given, the value is
    assumed to be in mebibytes.

    The default value, if this attribute is not specified, is 256M.
resolvers resolver[,resolver...]
    A comma-delimited list of DNS resolver IP addresses. These are included
    in the data passed to the guest when the cloud-init option is enabled.
rng on|off
    Set to on to attach a virtio random number generator (RNG) to the guest
    (default: off).
sshkey string|filename
    When the cloud-init option is enabled, the provided sshkey will be
    passed to the guest which can use it to set the authorised SSH keys for
    the default user and/or the root user. sshkey can be provided as a
    fixed string or a path to a file that contains the desired public key.
type type
    Specifies the type of the virtual machine. This needs to be set for
    some guest operating systems so that things are set up as they expect.
    For most guests, this can be left unset. Supported values are:
uefivars on|off
    Enable or disable persistent UEFI variables. Defaults to on.
uuid uuid
    Specifies the unique identifier for the virtual machine. If this
    attribute is not set, a random UUID will be generated when the zone is
    first installed.
vcpus [cpus=]numcpus[,sockets=s][,cores=c][,threads=t]
    Specify the number of guest virtual CPUs and/or the CPU topology. The
    default value for each of the parameters is 1. The topology must be
    consistent in that numcpus must equal the product of the other
    parameters; for example, cpus=16,sockets=2,cores=4,threads=2 is
    consistent since 2 x 4 x 2 = 16.

    The maximum supported number of virtual CPUs is 32.
virtfs[N] sharename,path[,ro]
    Share a filesystem to the guest using Virtio 9p (VirtFS). The specified
    path is presented over PCI as a share named sharename. The optional ro
    option configures the share as read-only. The filesystem path being
    shared must also be mapped into the zone, using either a delegated
    dataset or a loopback (lofs) mount. See the EXAMPLES section below.
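    On a Linux guest, such a share can typically be mounted via the virtio
    9p transport, for example (using the share name from the EXAMPLES
    section):

        mount -t 9p -o trans=virtio,version=9p2000.L datavol /mnt/datavol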
vga off|on|io
    Specify the type of VGA emulation to use when the framebuffer and VNC
    server are enabled. Possible values for this option are:
vnc on|wait|off|options
    This parameter controls whether a virtual framebuffer is attached to
    the guest and made available via VNC. Available options are:

    Multiple options can be provided, separated by commas. See also xhci
    below.
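    A sketch of simply enabling the framebuffer, which exposes a UNIX
    socket inside the zone (at /tmp/vm.vnc, judging by the socat example
    below):

        add attr
            set name=vnc
            set type=string
            set value=on
        end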
    The bhyve brand also ships a mini socat utility that can be used to
    connect the socket to a TCP port. The utility can be invoked like so:

        /usr/lib/brand/bhyve/socat \
            /zones/bhyve/root/tmp/vm.vnc 5905

    If you prefer, you can also use the real socat utility that's shipped
    in core:

        /usr/bin/socat \
            TCP-LISTEN:5905,bind=127.0.0.1,reuseaddr,fork \
            UNIX-CONNECT:/zones/bhyve/root/tmp/vm.vnc
xhci on|off
    Enable or disable the emulated USB tablet interface along with the
    emulated framebuffer. Note that this option currently needs to be
    disabled for illumos guests.
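    For an illumos guest, it would therefore be disabled like so:

        add attr
            set name=xhci
            set type=string
            set value=off
        end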
EXAMPLES
An example bhyve zone is shown below:

    create -t bhyve
    set zonepath=/zones/bhyve
    add net
        set allowed-address=10.0.0.112/24
        set physical=vm0
    end
    add device
        set match=/dev/zvol/rdsk/rpool/bhyve0
    end
    add attr
        set name=ram
        set type=string
        set value=2G
    end
    add attr
        set name=vcpus
        set type=string
        set value="sockets=2,cores=4,threads=2"
    end
    add attr
        set name=bootdisk
        set type=string
        set value=rpool/bhyve0
    end
    add fs
        set dir=/rpool/iso/debian-9.4.0-amd64-netinst.iso
        set special=/rpool/iso/debian-9.4.0-amd64-netinst.iso
        set type=lofs
        add options ro
        add options nodevices
    end
    add attr
        set name=cdrom
        set type=string
        set value=/rpool/iso/debian-9.4.0-amd64-netinst.iso
    end
The following example shows how to share a delegated dataset called
rpool/datavol to a guest using VirtFS. This assumes that the mountpoint
attribute on rpool/datavol is set to /datavol. This could have been done,
for example, by creating the dataset with:

    zfs create -o mountpoint=/datavol -o zoned=on rpool/datavol

Setting the mountpoint and zoned attributes at the same time prevents the
filesystem from ever being mounted in the global zone.
    add dataset
        set name=rpool/datavol
    end
    add attr
        set name=virtfs0
        set type=string
        set value=datavol,/datavol
    end
and to share the global zone filesystem /data/websites read-only to the guest, add:
    add fs
        set dir="/data/websites"
        set special="/data/websites"
        set type="lofs"
        add options ro
        add options nodevices
    end
    add attr
        set name=virtfs1
        set type=string
        set value=websites,/data/websites,ro
    end
SEE ALSO
mdb(1), proc(1), brands(7), privileges(7), resource_controls(7), zones(7),
bhyve(8), dtrace(8), zfs(8), zoneadm(8), zonecfg(8)
OmniOS                           June 3, 2023                       BHYVE(7)