BHYVE(5)                Standards, Environments, and Macros               BHYVE(5)
A bhyve branded zone uses the brands(5) framework to provide an environment for running a virtual machine under the bhyve hypervisor.
bhyve zones are configured mostly via custom attributes in the zone configuration.
Supported attributes are:
This is a legacy option that no longer has any effect.
Specifies a ZFS volume dataset name which will be attached to the guest as the boot disk. Additional disks can be specified using the disk attribute, see below.
If the optional serial number is not provided, one will be automatically generated from the dataset name. The ZFS volume must also be mapped into the zone using a device block as shown in the example below.
A suitable ZFS volume can be created using the ‘zfs create -V’ command, for example:
    zfs create -V 30G rpool/bootdisk2
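The bootdisk attribute and the corresponding device mapping could then be added to the zone configuration; a sketch, using an illustrative dataset name:

    add attr
        set name=bootdisk
        set type=string
        set value=rpool/bootdisk2
    end
    add device
        set match=/dev/zvol/rdsk/rpool/bootdisk2
    end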
Specifies the name of the boot ROM to use for starting the virtual machine. The available ROMs are:
The firmware parameter can also be specified as an absolute path to a custom ROM.
Specifies the path to one or more CD/DVD image (.iso) files that will be inserted into virtual CD/DVD drive(s) in the guest. To specify multiple image files, create multiple attributes with different values of N, starting with 0. If only a single CD image is required, N can be omitted.
Each image file must also be mapped into the zone using an fs block, as shown in the example below.
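As a sketch, a single CD image (path illustrative) might be configured together with its lofs mapping as:

    add attr
        set name=cdrom
        set type=string
        set value=/rpool/iso/install.iso
    end
    add fs
        set dir=/rpool/iso/install.iso
        set special=/rpool/iso/install.iso
        set type=lofs
        add options ro
        add options nodevices
    end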
When this option is enabled, and set to on or a filename, the guest will be booted with a small CD image attached that provides configuration data that cloud-init can use to automatically configure the guest operating system. When a file is provided, this is used directly for the provided user-data. If any network interfaces are configured with an allowed-address property, then that address will be provided along with the configuration data. See also the dns-domain, password, resolvers and sshkey options.
If a URL is provided, then that is passed to the guest system as the source of the full meta-data.
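For example, enabling cloud-init with the generated configuration data might look like the following (a file path or URL could be used in place of on, as described above):

    add attr
        set name=cloud-init
        set type=string
        set value=on
    end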
This parameter configures where the guest's console device is presented. The default value is /dev/zconsole which means that the guest's console can be accessed via:
zlogin -C <zone>
Other supported values include socket,<path>, which places a UNIX domain socket at <path> through which the console can be accessed.
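For example, to present the console on a UNIX domain socket (path illustrative):

    add attr
        set name=console
        set type=string
        set value=socket,/tmp/vm.console
    end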
Specifies one or more ZFS volume dataset names which will be attached to the guest as disks. To attach multiple disks, create multiple attributes with different values of N. In that case, the disk will be presented on target N. If only a single disk is required, N can be omitted. The disks specified via the disk attribute are in addition to the system boot disk, which is specified using bootdisk.
If the optional serial number is not provided, one will be automatically generated from the dataset name. Each ZFS volume must also be mapped into the zone using a device block as shown in the example below.
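A sketch of an additional disk on target 0, with its device mapping (dataset name illustrative):

    add attr
        set name=disk0
        set type=string
        set value=rpool/datadisk0
    end
    add device
        set match=/dev/zvol/rdsk/rpool/datadisk0
    end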
Specifies the type of interface to which the disks will be attached. Available options are:
The DNS domain name for the guest. Included in the data passed to the guest when the cloud-init option is enabled.
Any extra options to be passed directly to the bhyve hypervisor process.
Specifies the type of emulated system host bridge that will be presented to the guest. Available options are:
Specifies the type of network interface that will be used for the interfaces presented to the guest. Available options are:
Note that only the accelerated virtio interface supports filtering using the zone firewall.
When the cloud-init option is enabled, the provided password will be passed to the guest, which can use it to set the password for the default user. Depending on the guest, this may be the root user or a distribution-dependent initial user. password can be provided as a fixed string, a pre-computed hash, or a path to a file that contains the desired password or password hash.
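For example, to provide the password via a file (path illustrative; keeping a hash rather than a plain-text password in the file is preferable):

    add attr
        set name=password
        set type=string
        set value=/zones/bhyve/config/password.hash
    end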
Pass through a PCI device to the guest. Available devices for pass-through can be viewed with ‘pptadm list -a’. N must match the number of the desired device. Set to on to enable pass-through, to off to disable it, or use slotS as described below.
Pass-through devices are presented to the guest in numerical order by default. An explicit order can be forced by setting the attribute value to slotS (S between 0 and 7), in which case the device will be placed into slot S, and any other devices will be added in numerical order around it.
The /dev/pptN device must also be passed through to the guest via a device block.
To enable a PCI device for pass-through, it must be bound to the ppt driver and added to the /etc/ppt_matches file, after which it will be visible in the output of ‘pptadm list -a’. The binding can be achieved using update_drv(1m), or by adding an entry to the /etc/ppt_aliases file (in the same format as /etc/driver_aliases) and rebooting.
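As a sketch, binding a device might look like the following; the PCI id pci8086,1528 is purely illustrative and must be replaced with the id of the actual device:

    # Bind the device to the ppt driver
    update_drv -a -i '"pci8086,1528"' ppt
    # ...or add the equivalent line to /etc/ppt_aliases and reboot:
    #     ppt "pci8086,1528"
    # Then confirm that the device is available for pass-through:
    pptadm list -a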
Specify the guest's physical memory size. The size argument may be suffixed with one of K, M, G or T to indicate a multiple of kibibytes, mebibytes, gibibytes or tebibytes. If no suffix is given, the value is assumed to be in mebibytes.
The default value, if this attribute is not specified, is 256M.
A comma-delimited list of DNS resolver IP addresses. These are included in the data passed to the guest when the cloud-init option is enabled.
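For example (addresses illustrative):

    add attr
        set name=resolvers
        set type=string
        set value=192.168.1.1,1.1.1.1
    end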
Set to on to attach a virtio random number generator (RNG) to the guest (default: off).
When the cloud-init option is enabled, the provided sshkey will be passed to the guest which can use it to set the authorised SSH keys for the default user and/or the root user. sshkey can be provided as a fixed string or a path to a file that contains the desired public key.
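For example, to provide the key from a file (path illustrative):

    add attr
        set name=sshkey
        set type=string
        set value=/root/.ssh/id_ed25519.pub
    end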
Specifies the type of the virtual machine. This needs to be set for some guest operating systems so that things are set up as they expect. For most guests, this can be left unset. Supported values are:
Specifies the unique identifier for the virtual machine. If this attribute is not set, a random UUID will be generated when the zone is first installed.
Specify the number of guest virtual CPUs and/or the CPU topology. The default value for each of the parameters is 1. The topology must be consistent in that numcpus must equal the product of the other parameters.
The maximum supported number of virtual CPUs is 32.
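For example, a topology of 2 sockets x 2 cores x 2 threads yields 8 virtual CPUs, consistent with the product rule above:

    add attr
        set name=vcpus
        set type=string
        set value="sockets=2,cores=2,threads=2"
    end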
Share a filesystem to the guest using Virtio 9p (VirtFS). The specified path is presented over PCI as a share named sharename. The optional ro option configures the share as read-only. The filesystem path being shared must also be mapped into the zone, using either a delegated dataset or a loopback (lofs) mount. See the EXAMPLES section below.
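Inside a Linux guest, such a share can typically be mounted with the 9p filesystem over virtio (share name datavol and mountpoint are illustrative):

    mount -t 9p -o trans=virtio,version=9p2000.L datavol /mnt/datavol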
Specify the type of VGA emulation to use when the framebuffer and VNC server are enabled. Possible values for this option are:
This parameter controls whether a virtual framebuffer is attached to the guest and made available via VNC. Available options are:
Multiple options can be provided, separated by commas. See also xhci below.
The bhyve brand also ships a mini socat utility that can be used to connect the socket to a TCP port. The utility can be invoked like so:
    /usr/lib/brand/bhyve/socat \
        /zones/bhyve/root/tmp/vm.vnc 5905
If you prefer, you can also use the real socat utility that's shipped in core:
    /usr/bin/socat \
        TCP-LISTEN:5905,bind=127.0.0.1,reuseaddr,fork \
        UNIX-CONNECT:/zones/bhyve/root/tmp/vm.vnc
Enable or disable the emulated USB tablet interface along with the emulated framebuffer. Note that this option currently needs to be disabled for illumos guests.
An example configuration for a bhyve zone is shown below:
    create -t bhyve
    set zonepath=/zones/bhyve
    add net
        set allowed-address=10.0.0.112/24
        set physical=vm0
    end
    add device
        set match=/dev/zvol/rdsk/rpool/bhyve0
    end
    add attr
        set name=ram
        set type=string
        set value=2G
    end
    add attr
        set name=vcpus
        set type=string
        set value="sockets=2,cores=4,threads=2"
    end
    add attr
        set name=bootdisk
        set type=string
        set value=rpool/bhyve0
    end
    add fs
        set dir=/rpool/iso/debian-9.4.0-amd64-netinst.iso
        set special=/rpool/iso/debian-9.4.0-amd64-netinst.iso
        set type=lofs
        add options ro
        add options nodevices
    end
    add attr
        set name=cdrom
        set type=string
        set value=/rpool/iso/debian-9.4.0-amd64-netinst.iso
    end
The following example shows how to share a delegated dataset called rpool/datavol to a guest using VirtFS. This assumes that the mountpoint attribute on rpool/datavol is set to /datavol. This could have been done, for example, by creating the dataset with:
zfs create -o mountpoint=/datavol -o zoned=on rpool/datavol
Setting the mountpoint and zoned attributes at the same time prevents the filesystem from ever being mounted in the global zone.
    add dataset
        set name=rpool/datavol
    end
    add attr
        set name=virtfs0
        set type=string
        set value=datavol,/datavol
    end
and to share the global zone filesystem /data/websites read-only to the guest, add:
    add fs
        set dir="/data/websites"
        set special="/data/websites"
        set type="lofs"
        add options ro
        add options nodevices
    end
    add attr
        set name=virtfs1
        set type=string
        set value=websites,/data/websites,ro
    end
September 11, 2021                                                         OmniOS