ZPOOL(8)             Maintenance Commands and Procedures             ZPOOL(8)
zpool — configure ZFS storage pools
zpool -?
zpool add [-fgLnP] [-o property=value] pool vdev...
zpool attach [-f] [-o property=value] pool device new_device
zpool checkpoint [-d, --discard] pool
zpool clear pool [device]
zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]...
      [-o feature@feature=value]... [-O file-system-property=value]...
      [-R root] [-t tempname] pool vdev...
zpool destroy [-f] pool
zpool detach pool device
zpool export [-f] pool...
zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
zpool history [-il] [pool]...
zpool import [-D] [-d dir]
zpool import -a [-DflmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
      [-o property=value]... [-R root]
zpool import [-Dfmt] [-F [-n]] [--rewind-to-checkpoint]
      [-c cachefile|-d dir] [-o mntopts] [-o property=value]...
      [-R root] pool|id [newpool]
zpool initialize [-c | -s] pool [device...]
zpool iostat [[-lq] | -rw] [-T u|d] [-ghHLnpPvy]
      [[pool...]|[pool vdev...]|[vdev...]] [interval [count]]
zpool labelclear [-f] device
zpool list [-HgLpPv] [-o property[,property]...] [-T u|d] [pool]...
      [interval [count]]
zpool offline [-t] pool device...
zpool online [-e] pool device...
zpool reguid pool
zpool reopen pool
zpool remove [-np] pool device...
zpool remove -s pool
zpool replace [-f] pool device [new_device]
zpool resilver pool...
zpool scrub [-s | -p] pool...
zpool trim [-d] [-r rate] [-c | -s] pool [device...]
zpool set property=value pool
zpool split [-gLlnP] [-o property=value]... [-R root] pool newpool
zpool status [-DigLpPstvx] [-T u|d] [pool]... [interval [count]]
zpool sync [pool]...
zpool upgrade
zpool upgrade -v
zpool upgrade [-V version] -a|pool...
The zpool
command configures ZFS storage
pools. A storage pool is a collection of devices that provides physical
storage and data replication for ZFS datasets. All datasets within a storage
pool share the same space. See zfs(8) for
information on managing datasets.
A "virtual device" describes a single device or a collection of devices organized according to certain performance and fault characteristics. The following virtual devices are supported:
A raidz group can have single-, double-, or triple-parity, meaning that the raidz group can sustain one, two, or three failures, respectively, without losing any data. The raidz1 vdev type specifies a single-parity raidz group; the raidz2 vdev type specifies a double-parity raidz group; and the raidz3 vdev type specifies a triple-parity raidz group. The raidz vdev type is an alias for raidz1.
A raidz group with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P device(s) failing before data integrity is compromised. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance.
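As an illustrative sizing example (not a recommendation, and with placeholder device names), a raidz2 group built from six 4 TB disks would hold approximately (6-2)*4 = 16 TB and could withstand two device failures:
# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0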
For more information on special allocations, see the Special Allocation Class section.
Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed.
A pool can have any number of virtual devices at the top of the configuration (known as "root vdevs"). Data is dynamically distributed across all top-level devices to balance data among devices. As new virtual devices are added, ZFS automatically places data on the newly available devices.
Virtual devices are specified one at a time on the command line, separated by whitespace. The keywords mirror and raidz are used to distinguish where a group ends and another begins. For example, the following creates two root vdevs, each a mirror of two disks:
# zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
ZFS supports a rich set of mechanisms for handling device failure and data corruption. All metadata and data is checksummed, and ZFS automatically repairs bad data from a good copy when corruption is detected.
In order to take advantage of these features, a pool must make use of some form of redundancy, using either mirrored or raidz groups. While ZFS supports running in a non-redundant configuration, where each root vdev is simply a disk or file, this is strongly discouraged. A single case of bit corruption can render some or all of your data unavailable.
A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning.
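For example, overall pool health can be checked with zpool status; adding the -x flag limits the output to pools that are not healthy:
# zpool status -x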
The health of a top-level vdev, such as a mirror or raidz device, is potentially impacted by the state of its associated vdevs, or component devices. A top-level vdev or component device is in one of the following states:
One or more component devices is in the degraded or faulted state, but sufficient replicas exist to continue functioning. The underlying conditions are as follows:
One or more component devices is in the faulted state, and insufficient replicas exist to continue functioning. The underlying conditions are as follows:
The device was explicitly taken offline by the
zpool
offline
command.
If a device is removed and later re-attached to the system, ZFS attempts to put the device online automatically. Device attach detection is hardware-dependent and might not be supported on all platforms.
ZFS allows devices to be associated with pools as "hot spares". These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. If there is more than one spare that could be used as a replacement then they are tried in order of increasing capacity so that the smallest available spare that can replace the failed device is used. To create a pool with hot spares, specify a spare vdev with any number of devices. For example,
# zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
Spares can be shared across multiple pools, and can be added with
the zpool
add
command and
removed with the zpool
remove
command. Once a spare replacement is
initiated, a new spare vdev is created within the
configuration that will remain there until the original device is replaced.
At this point, the hot spare becomes available again if another device
fails.
If a pool has a shared spare that is currently being used, the pool cannot be exported, since other pools may use this shared spare, which may lead to potential data corruption.
Shared spares add some risk. If the pools are imported on different hosts, and both pools suffer a device failure at the same time, both could attempt to use the spare at the same time. This may not be detected, resulting in data corruption.
An in-progress spare replacement can be cancelled by detaching the hot spare. If the original faulted device is detached, then the hot spare assumes its place in the configuration, and is removed from the spare list of all active pools.
Spares cannot replace log devices.
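For example, a hot spare might be added to, and later removed from, an existing pool as follows (the device name is a placeholder):
# zpool add pool spare c4d0
# zpool remove pool c4d0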
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous transactions. For instance, databases often require their transactions to be on stable storage devices when returning from a system call. NFS and other applications can also use fsync(3C) to ensure data stability. By default, the intent log is allocated from blocks within the main pool. However, it might be possible to get better performance using separate intent log devices such as NVRAM or a dedicated disk. For example:
# zpool create pool c0d0 c1d0 log c2d0
Multiple log devices can also be specified, and they can be mirrored. See the EXAMPLES section for an example of mirroring multiple log devices.
Log devices can be added, replaced, attached, detached, and imported and exported as part of the larger pool. Mirrored devices can be removed by specifying the top-level mirror vdev.
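For example, a mirrored log might be added to an existing pool with a command along these lines (device names are placeholders):
# zpool add pool log mirror c4d0 c5d0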
Devices can be added to a storage pool as "cache devices". These devices provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of this working set to be served from low-latency media. Using cache devices provides the greatest performance improvement for random read workloads of mostly static content.
To create a pool with cache devices, specify a cache vdev with any number of devices. For example:
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
Cache devices cannot be mirrored or part of a raidz configuration. If a read error is encountered on a cache device, that read I/O is reissued to the original storage pool device, which might be part of a mirrored or raidz configuration.
The content of the cache devices is considered volatile, as is the case with other system caches.
Before starting critical procedures that include destructive
actions (e.g. zfs destroy), an administrator can
checkpoint the pool's state and, in the case of a mistake or failure, rewind
the entire pool back to the checkpoint. The checkpoint is automatically
discarded upon rewinding. Otherwise, the checkpoint can be discarded when
the procedure has completed successfully.
A pool checkpoint can be thought of as a pool-wide snapshot and should be used with care as it contains every part of the pool's state, from properties to vdev configuration. Thus, while a pool has a checkpoint, certain operations are not allowed: specifically, vdev removal/attach/detach, mirror splitting, and changing the pool's guid. Adding a new vdev is supported, but in the case of a rewind it will have to be added again. Finally, users of this feature should keep in mind that scrubs in a pool that has a checkpoint do not repair checkpointed data.
To create a checkpoint for a pool:
# zpool checkpoint pool
To later rewind the pool to its checkpointed state (which also discards the checkpoint), first export the pool and then rewind it during import:
# zpool export pool # zpool import --rewind-to-checkpoint pool
To discard the checkpoint from a pool without rewinding:
# zpool checkpoint -d pool
Dataset reservations (controlled by the
reservation
or
refreservation
zfs properties) may be unenforceable
while a checkpoint exists, because the checkpoint is allowed to consume the
dataset's reservation. Finally, data that is part of the checkpoint but has
been freed in the current state of the pool won't be scanned during a
scrub.
The allocations in the special class are dedicated to specific block types. By default this includes all metadata, the indirect blocks of user data, and any dedup data. The class can also be provisioned to accept a limited percentage of small file data blocks.
A pool must always have at least one general (non-specified) vdev before other devices can be assigned to the special class. If the special class becomes full, then allocations intended for it will spill back into the normal class.
Dedup data can be excluded from the special class by setting the zfs_ddt_data_is_special zfs kernel variable to false (0).
Inclusion of small file blocks in the special class is opt-in. Each dataset can control the size of small file blocks allowed in the special class by setting the special_small_blocks dataset property. It defaults to zero so you must opt-in by setting it to a non-zero value. See zfs(8) for more info on setting this property.
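As a sketch, assuming the special vdev keyword and placeholder pool, device, and dataset names, a pool with a mirrored special class vdev could be created and a dataset opted in to small file blocks as follows:
# zpool create pool c0d0 c1d0 special mirror c2d0 c3d0
# zfs set special_small_blocks=32K pool/fs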
Each pool has several properties associated with it. Some properties are read-only statistics while others are configurable and change the behavior of the pool.
The following are read-only properties:
allocated
bootsize (set at pool creation time with the -B option)
expandsize (uninitialized space that can be claimed with zpool online -e); this space occurs when a LUN is dynamically expanded.
The space usage properties report actual physical space available
to the storage pool. The physical space can be different from the total
amount of space that any contained datasets can actually use. The amount of
space used in a raidz configuration depends on the characteristics of the
data being written. In addition, ZFS reserves some space for internal
accounting that the zfs(8) command takes
into account, but the zpool
command does not. For
non-full pools of a reasonable size, these effects should be invisible. For
small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
The following property can be set at creation time and import time:
The following property can be set only at import time:
The following properties can be set at creation time and import
time, and later changed with the zpool
set
command:
autoreplace=on|off
Controls automatic device replacement. If set to off, device
replacement must be initiated by the administrator by using the
zpool replace command. If
set to on, any new device, found in the same physical
location as a device that previously belonged to the pool, is
automatically formatted and replaced. The default behavior is
off. This property can also be referred to by its
shortened column name, replace.
cachefile=path|none
Controls where the pool configuration is cached so that it can later be
imported with zpool import -c.
Setting it to the special value
none creates a temporary pool that is never cached, and
the special value "" (empty string) uses the default location.
Multiple pools can share the same cache file. Because the kernel destroys and recreates this file when pools are added and removed, care should be taken when attempting to access this file. When the last pool using a cachefile is exported or destroyed, the file is removed.
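For example, a pool that should never be cached might be configured as follows:
# zpool set cachefile=none pool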
EIO
to any new write I/O requests but
allows reads to any of the remaining healthy devices. Any write
requests that have yet to be committed to disk would be blocked.
Automatic TRIM does not immediately reclaim blocks after a free. Instead, it will optimistically delay, allowing smaller ranges to be aggregated into a few larger ones. These can then be issued more efficiently to the storage.
Be aware that automatic trimming of recently freed data blocks
can put significant stress on the underlying storage devices. This will
vary depending of how well the specific device handles these commands.
For lower end devices it is often possible to achieve most of the
benefits of automatic trimming by running an on-demand (manual) TRIM
periodically using the zpool
trim
command.
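For example, automatic trimming can be enabled per pool, or an on-demand TRIM can be run manually:
# zpool set autotrim=on pool
# zpool trim pool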
listsnapshots=on|off
Controls whether information about snapshots associated with this pool
is output when zfs
list
is
run without the -t
option. The default value is
off. This property can also be referred to by its
shortened name,
listsnaps.
multihost=on|off
Controls whether a pool activity check should be performed during
zpool
import
. When a pool
is determined to be active it cannot be imported, even with the
-f
option. This property is intended to be used in
failover configurations where multiple hosts have access to a pool on
shared storage.
Multihost provides protection on import only. It does not protect against an individual device being used in multiple pools, regardless of the type of vdev. See the discussion under zpool create.
When this property is on, periodic writes to storage occur to show the pool is in use. See zfs_multihost_interval in the zfs-module-parameters(7) man page. In order to enable this property each host must set a unique hostid. The default value is off.
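For example, once each host has been given a unique hostid (in a platform-specific way), the activity check might be enabled with:
# zpool set multihost=on pool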
version=version
The current on-disk version of the pool. This can be increased, but
never decreased. The preferred method of updating pools is with the
zpool
upgrade
command,
though this property can be used when a specific version is needed for
backwards compatibility. Once feature flags are enabled on a pool this
property will no longer have a value.
All subcommands that modify state are logged persistently to the pool in their original form.
The zpool
command provides subcommands to
create and destroy storage pools, add capacity to storage pools, and provide
information about the storage pools. The following subcommands are
supported:
zpool
-?
zpool
add
[-fgLnP
] [-o
property=value]
pool vdev...-f
option, and the device checks
performed are described in the zpool
create
subcommand.
-f
-g
-L
-n
-P
-L
flag.-o
property=valuezpool
attach
[-f
] [-o
property=value]
pool device new_device-f
-o
property=valuezpool
checkpoint
[-d,
--discard
]
poolzpool
import
--rewind-to-checkpoint
. Rewinding will also discard the checkpoint.
The existence of a checkpoint in a pool prohibits the following
zpool
commands: remove
,
attach
, detach
,
split
, and reguid
. In
addition, it may break reservation boundaries if the pool lacks free
space. The zpool
status
command indicates the existence of a checkpoint or the progress of
discarding a checkpoint from a pool. The zpool
list
command reports how much space the checkpoint
takes from the pool.
-d,
--discard
zpool
clear
pool [device]zpool
create
[-dfn
] [-B
]
[-m
mountpoint]
[-o
property=value]...
[-o
feature@
feature=value]...
[-O
file-system-property=value]...
[-R
root]
[-t
tempname]
pool vdev...
The command attempts to verify that each device specified is accessible and not currently in use by another subsystem. However, this check is not robust enough to detect simultaneous attempts to use a new device in different pools, even if multihost is enabled. The administrator must ensure that simultaneous invocations of any combination of zpool replace, zpool create, zpool add, or zpool labelclear do not refer to the same device. Using the same device in two pools will result in pool corruption.
There are some uses, such as being currently mounted, or
specified as the dedicated dump device, that prevents a device from ever
being used by ZFS. Other uses, such as having a preexisting UFS file
system, can be overridden with the -f
option.
The command also checks that the replication strategy for the
pool is consistent. An attempt to combine redundant and non-redundant
storage in a single pool, or to mix disks and files, results in an error
unless -f
is specified. The use of differently
sized devices within a single raidz or mirror group is also flagged as
an error unless -f
is specified.
Unless the -R
option is specified, the
default mount point is
/pool. The mount point
must not exist or must be empty, or else the root dataset cannot be
mounted. This can be overridden with the -m
option.
By default all supported features are enabled on the new pool
unless the -d
option is specified.
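As an illustrative invocation (placeholder pool, mount point, and device names), a mirrored pool with an explicit mount point might be created as:
# zpool create -m /export/tank tank mirror c0t0d0 c0t1d0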
-B
-o
option. See the
Properties section for
details.-d
-o
option.
See zpool-features(7)
for details about feature properties.-f
-m
mountpoint-n
-o
property=value-o
feature@
feature=valuevalue can either be disabled or enabled.
-O
file-system-property=value-R
root-o
cachefile=none
-o
altroot=root-t
tempnamezpool
destroy
[-f
] pool-f
zpool
detach
pool devicezpool
export
[-f
] pool...
Before exporting the pool, all datasets within the pool are unmounted. A pool cannot be exported if it has a shared spare that is currently being used.
For pools to be portable, you must give the
zpool
command whole disks, not just slices, so
that ZFS can label the disks with portable EFI labels. Otherwise, disk
drivers on platforms of different endianness will not recognize the
disks.
-f
unmount
-f
command.
This command will forcefully export the pool even if it has a shared spare that is currently being used. This may lead to potential data corruption.
zpool
get
[-Hp
] [-o
field[,field]...]
all|property[,property]...
pool...
Retrieves the given list of properties (or all properties if
all is used) for the specified storage pool(s). These
properties are displayed with the following fields:

        name      Name of storage pool
        property  Property name
        value     Property value
        source    Property source, either 'default' or 'local'.
See the Properties section for more information on the available pool properties.
zpool
history
[-il
] [pool]...zpool
import
[-D
] [-d
dir]-d
option
is not specified, this command searches for devices in
/dev/dsk. The -d
option
can be specified multiple times, and all directories are searched. If the
device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as
well as the vdev layout and current health of the device for each device
or file. Destroyed pools, pools that were previously destroyed with the
zpool
destroy
command, are
not listed unless the -D
option is specified.
The numeric identifier is unique, and can be used instead of the pool name when multiple exported pools of the same name are available.
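For example, an exported pool whose devices live in a non-default directory might be located and imported along these lines (the directory is a placeholder):
# zpool import -d /mypooldevs tank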
zpool
import
-a
[-DflmN
]
[-F
[-n
]]
[-c
cachefile|-d
dir] [-o
mntopts] [-o
property=value]...
[-R
root]zpool
destroy
command, will not be imported unless the
-D
option is specified.
-a
-c
cachefile-d
dir-d
option can be specified multiple times.
This option is incompatible with the -c
option.-D
-f
option is
also required.-f
-F
-l
-m
-n
-F
recovery option. Determines
whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery. For more details about pool
recovery mode, see the -F
option, above.-N
-o
mntopts-o
property=value-R
rootzpool
import
[-Dfmt
] [-F
[-n
]]
[--rewind-to-checkpoint
] [-c
cachefile|-d
dir] [-o
mntopts] [-o
property=value]...
[-R
root]
pool|id
[newpool]If a device is removed from a system without running
zpool
export
first, the
device appears as potentially active. It cannot be determined if this
was a failed export, or whether the device is really in use from another
host. To import a pool in this state, the -f
option is required.
-c
cachefile-d
dir-d
option can be specified multiple times.
This option is incompatible with the -c
option.-D
-f
option is also
required.-f
-F
-l
zpool
mount
on each encrypted dataset immediately
after the pool is imported. If any datasets have a
prompt keysource this command will block waiting for
the key to be entered. Otherwise, encrypted datasets will be left
unavailable until the keys are loaded.-m
-n
-F
recovery option. Determines
whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery. For more details about pool
recovery mode, see the -F
option, above.-o
mntopts-o
property=value-R
root-t
--rewind-to-checkpoint
zpool
initialize
[-c
| -s
]
pool [device...]-c,
--cancel
-s
--suspend
zpool
initialize
with no flags on the relevant
target devices.zpool
iostat
[[-lq
] |-rw
]
[-T
u|d]
[-ghHLnpPvy
]
[[pool...]|[pool
vdev...]|[vdev...]]
[interval [count]]-n
flag is specified the headers are displayed
only once, otherwise they are displayed periodically. If
count is specified, the command exits after
count reports are printed. The first report printed
is always the statistics since boot regardless of whether
interval and count are passed.
Also note that the units of
K,
M,
G ... that are
printed in the report are in base 1024. To get the raw values, use the
-p
flag.
-T
u|d-i
-g
-H
-L
-n
-p
-P
-L
flag.-r
-v
-y
-w
total_wait:   Total IO time (queuing + disk IO time).
disk_wait:    Disk IO time (time reading/writing the disk).
syncq_wait:   Amount of time IO spent in synchronous priority queues. Does not include disk time.
asyncq_wait:  Amount of time IO spent in asynchronous priority queues. Does not include disk time.
scrub:        Amount of time IO spent in scrub queue. Does not include disk time.
-l
total_wait:   Average total IO time (queuing + disk IO time).
disk_wait:    Average disk IO time (time reading/writing the disk).
syncq_wait:   Average amount of time IO spent in synchronous priority queues. Does not include disk time.
asyncq_wait:  Average amount of time IO spent in asynchronous priority queues. Does not include disk time.
scrub:        Average queuing time in scrub queue. Does not include disk time.
trim:         Average queuing time in trim queue. Does not include disk time.
-q
syncq_read/write:   Current number of entries in synchronous priority queues.
asyncq_read/write:  Current number of entries in asynchronous priority queues.
scrubq_read:        Current number of entries in scrub queue.
trimq_write:        Current number of entries in trim queue.
All queue statistics are instantaneous measurements of the number of entries in the queues. If you specify an interval, the measurements will be sampled from the end of the interval.
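For example, latency and queue statistics might be watched together every five seconds (the pool name is a placeholder):
# zpool iostat -lq tank 5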
zpool
labelclear
[-f
] device-f
zpool
list
[-HgLpPv
] [-o
property[,property]...]
[-T
u|d]
[pool]... [interval
[count]]-g
-H
-o
propertyname
,
size
, allocated
,
free
, checkpoint,
expandsize
, fragmentation
,
capacity
, dedupratio
,
health
, altroot
.-L
-p
-P
-L
flag.-T
u|d-v
zpool
offline
[-t
] pool
device...-t
zpool
online
[-e
] pool
device...-e
zpool
reguid
poolzpool
reopen
poolzpool
remove
[-np
] pool
device...Removing a top-level vdev reduces the total amount of space in
the storage pool. The specified device will be evacuated by copying all
allocated space from it to the other devices in the pool. In this case,
the zpool
remove
command
initiates the removal and returns, while the evacuation continues in the
background. The removal progress can be monitored with
zpool
status.
This
feature must be enabled to be used, see
zpool-features(7).
A mirrored top-level device (log or data) can be removed by
specifying the top-level mirror for the same. Non-log devices or data
devices that are part of a mirrored configuration can be removed using
the zpool
detach
command.
-n
-p
-n
flag, displays
numbers as parsable (exact) values.zpool
remove
-s
poolzpool
replace
[-f
] pool
device [new_device]The size of new_device must be greater than or equal to the minimum size of all the devices in a mirror or raidz configuration.
new_device is required if the pool is not redundant. If new_device is not specified, it defaults to old_device. This form of replacement is useful after an existing disk has failed and has been physically replaced. In this case, the new disk may have the same /dev/dsk path as the old device, even though it is actually a different disk. ZFS recognizes this.
-f
zpool
resilver
pool...zpool
scrub
[-s
| -p
]
pool...zpool
status
command reports the progress of the scrub
and summarizes the results of the scrub upon completion.
Scrubbing and resilvering are very similar operations. The difference is that resilvering only examines data that ZFS knows to be out of date (for example, when attaching a new device to a mirror or replacing an existing device), whereas scrubbing examines all data to discover silent errors due to hardware faults or disk failure.
Because scrubbing and resilvering are I/O-intensive
operations, ZFS only allows one at a time. If a scrub is paused, the
zpool
scrub
resumes it.
If a resilver is in progress, ZFS does not allow a scrub to be started
until the resilver completes.
Note that, due to changes in pool data on a live system, it is possible for scrubs to progress slightly beyond 100% completion. During this period, no completion time estimate will be provided.
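For example, a scrub might be started, paused, and later resumed by running the command again (the relevant flags are summarized below):
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank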
-s
-p
zpool
scrub
again.zpool
set
property=value
poolzpool
split
[-gLlnP
] [-o
property=value]...
[-R
root] pool
newpool-g
-L
-l
-n
-P
-L
flag.-o
property=value-R
rootzpool
status
[-DigLpPstvx
] [-T
u|d] [pool]...
[interval [count]]If a scrub or resilver is in progress, this command reports the percentage done and the estimated time to completion. Both of these are only approximate, because the amount of data in the pool and the other workloads on the system can change.
-D
-g
-L
-p
-P
-L
flag.-s
-t
-T
u|d-v
-x
zpool
sync
[pool]...zpool
sync
will sync all pools on the system. Otherwise,
it will only sync the specified pool.zpool
trim
[-d
] [-r
rate] [-c
|
-s
] pool
[device...]A manual on-demand TRIM operation can be initiated irrespective of the autotrim pool property setting. See the documentation for the autotrim property above for the types of vdev devices which can be trimmed.
-d
--secure
-r
--rate
rate-c,
--cancel
-s
--suspend
zpool
trim
with no
flags on the relevant target devices.zpool
upgrade
zpool
upgrade
-a
to enable all features on all pools.zpool
upgrade
-v
zpool
upgrade
[-V
version]
-a|pool...
The following exit values are returned:

0   Successful completion.
1   An error occurred.
2   Invalid command line options were specified.
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
# zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
# zpool create tank /path/to/file/a /path/to/file/b
# zpool add tank mirror c1t0d0 c1t1d0
# zpool list
NAME    SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH   ALTROOT
rpool  19.9G  8.43G  11.4G   33%         -    42%  1.00x  ONLINE   -
tank   61.5G  20.0G  41.5G   48%         -    32%  1.00x  ONLINE   -
zion       -      -      -     -         -      -      -  FAULTED  -
# zpool destroy -f tank
# zpool export tank
# zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            c1t2d0  ONLINE
            c1t3d0  ONLINE

# zpool import tank
# zpool upgrade -a
This system is currently running ZFS version 2.
# zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
If one of the disks were to fail, the pool would be reduced to the degraded state. The failed device can be replaced using the following command:
# zpool replace tank c0t0d0 c0t3d0
Once the data has been resilvered, the spare is automatically removed and is made available for use should another device fail. The hot spare can be permanently removed from the pool using the following command:
# zpool remove tank c0t2d0
# zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \ c4d0 c5d0
# zpool add pool cache c2d0 c3d0
Once added, the cache devices gradually fill with content from
main memory. Depending on the size of your cache devices, it could take
over an hour for them to fill. Capacity and reads can be monitored using
the iostat
option as follows:
# zpool iostat -v pool 5
Given this configuration:
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
The command to remove the mirrored log mirror-2 is:
# zpool remove tank mirror-2
The command to remove the mirrored data mirror-1 is:
# zpool remove tank mirror-1
# zpool list -v data
NAME         SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G   48%         -    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G   48%         -
    c1t1d0      -      -      -     -         -
    c1t2d0      -      -      -     -       10G
    c1t3d0      -      -      -     -         -
ZPOOL_VDEV_NAME_GUID
Cause zpool subcommands to output vdev guids by
default. This behavior is identical to the zpool
status -g
command line option.

ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause zpool
subcommands to follow links for vdev
names by default. This behavior is identical to the zpool
status -L
command line option.

ZPOOL_VDEV_NAME_PATH
Cause zpool
subcommands to output full vdev path
names by default. This behavior is identical to the zpool
status -P
command line option.

May 8, 2024                                                           OmniOS