NVME(4) | Device Drivers Manual | NVME(4)
nvme — NVM Express core driver
To compile this driver into your kernel, place the following line in your kernel configuration file:
device nvme
Or, to load the driver as a module at boot, place the following line in loader.conf(5):
nvme_load="YES"
Most users will also want to enable nvd(4) or nda(4) to expose NVM Express namespaces as disk devices which can be partitioned. Note that in NVM Express terms, a namespace is roughly equivalent to a SCSI LUN.
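For example, a kernel configuration fragment along the following lines (a sketch only; GENERIC already contains equivalent entries, and nda(4) additionally depends on the CAM subsystem) enables the controller driver together with a disk driver:
device nvme    # NVM Express controller driver
device nvd     # expose namespaces as /dev/nvdX disks
device nda     # or expose namespaces as /dev/ndaX disks via CAM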
The nvme driver provides support for NVM Express (NVMe) controllers.
The nvme driver creates controller device nodes in the format /dev/nvmeX and namespace device nodes in the format /dev/nvmeXnsY. Note that the NVM Express specification starts numbering namespaces at 1, not 0, and this driver follows that convention.
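As an illustration, a machine with a single controller exposing one namespace would typically show device nodes along these lines (exact names depend on the configuration):
/dev/nvme0      controller device node
/dev/nvme0ns1   device node for namespace 1
/dev/nvd0       corresponding disk device when nvd(4) is used
/dev/nda0       corresponding disk device when nda(4) is used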
By default, nvme will create an I/O queue pair for each CPU, provided enough MSI-X vectors and NVMe queue pairs can be allocated. If not enough vectors or queue pairs are available, nvme(4) will use a smaller number of queue pairs and assign multiple CPUs per queue pair.
To force a single I/O queue pair shared by all CPUs, set the following tunable value in loader.conf(5):
hw.nvme.per_cpu_io_queues=0
To assign more than one CPU per I/O queue pair, thereby reducing the number of MSI-X vectors consumed by the device, set the following tunable value in loader.conf(5):
hw.nvme.min_cpus_per_ioq=X
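For example, on a 32-CPU system the following hypothetical setting would allow at most eight I/O queue pairs, with four CPUs sharing each pair (subject to what the device can actually allocate):
hw.nvme.min_cpus_per_ioq=4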
To force legacy interrupts for all nvme driver instances, set the following tunable value in loader.conf(5):
hw.nvme.force_intx=1
Note that use of INTx implies disabling of per-CPU I/O queue pairs.
To control the maximum amount of system RAM in bytes to use as the Host Memory Buffer for capable devices, set the following tunable:
hw.nvme.hmb_max
The default value is 5% of physical memory size per device.
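For example, to cap the Host Memory Buffer at 16 MB per device, the following could be set in loader.conf(5) (the value is in bytes; the figure is only illustrative):
hw.nvme.hmb_max=16777216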
The nvd(4) driver is used to provide a disk
driver to the system by default. The nda(4) driver can
also be used instead. The nvd(4) driver performs better
with smaller transactions and few TRIM commands. It sends all commands
directly to the drive immediately. The nda(4) driver
performs better with larger transactions and also collapses TRIM commands
giving better performance. It can queue commands to the drive; combine
BIO_DELETE
commands into a single trip; and use the
CAM I/O scheduler to bias one type of operation over another. To select the nda(4) driver, set the following tunable value in loader.conf(5):
hw.nvme.use_nvd=0
To also print the contents of commands that fail, in addition to the error message, set the following tunable value in loader.conf(5):
hw.nvme.verbose_cmd_dump=1
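Taken together, a loader.conf(5) that loads the driver as a module, selects nda(4), and enables verbose command dumps might look like this (an illustrative combination of the settings above):
nvme_load="YES"
hw.nvme.use_nvd=0
hw.nvme.verbose_cmd_dump=1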
The following controller-level sysctls are currently implemented:
The following queue pair-level sysctls are currently implemented. Admin queue sysctls take the format of dev.nvme.0.adminq and I/O queue sysctls take the format of dev.nvme.0.ioq0.
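For example, the sysctls of the first controller and its queue pairs can be inspected at runtime with sysctl(8) (controller index 0 is assumed here):
sysctl dev.nvme.0            # all sysctls for controller 0
sysctl dev.nvme.0.adminq     # admin queue pair sysctls
sysctl dev.nvme.0.ioq0       # first I/O queue pair sysctls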
In addition to the typical pci attachment, the nvme driver supports attaching to an ahci(4) device. Intel's Rapid Storage Technology (RST) hides the nvme device behind the AHCI device due to limitations in Windows. However, this effectively hides it from the FreeBSD kernel. To work around this limitation, FreeBSD detects when the AHCI device supports RST and RST is enabled, and in that case attaches the nvme driver to the ahci(4) device. See ahci(4) for more details.
The nvme
driver first appeared in
FreeBSD 9.2.
The nvme
driver was developed by Intel and
originally written by Jim Harris
<jimharris@FreeBSD.org>,
with contributions from Joe Golio at EMC.
This man page was written by Jim Harris <jimharris@FreeBSD.org>.
June 6, 2020 | Debian