What Is NVMe Storage?

What is NVMe?

NVMe (non-volatile memory express) is a storage transfer protocol for accessing data quickly from flash memory storage devices such as solid-state drives (SSDs). Fast, high-throughput, and massively parallel, the NVMe base specification enables flash memory to communicate directly with a computer via a high-speed peripheral component interconnect express (PCIe) bus. Extensions to the NVMe specification, maintained by NVM Express, are also available for a variety of network transfer protocols, including TCP, Fibre Channel, and InfiniBand. In this article, we’ll cover what NVMe is, how it works, and the larger role it plays in the data center.

How does NVMe work?

The NVMe specification describes a high-performance transfer protocol for connecting flash memory with host machines. Here’s how NVMe works:

  1. The host writes I/O commands into a submission queue, then writes to a doorbell register to signal the controller that new entries are ready.
  2. The NVMe controller fetches and executes the commands from the submission queue, posts the results to a completion queue, and raises an interrupt to the host.
  3. The host processes the completion queue entries and updates the doorbell register to acknowledge them (see the sketch below).
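
To make this handshake concrete, here’s a simplified Python sketch of the queue-and-doorbell flow. It’s an illustration only: real NVMe queues are ring buffers in shared host memory and doorbells are memory-mapped controller registers, and every name below is hypothetical rather than part of any actual driver API.

```python
# Simplified simulation of the NVMe queue-and-doorbell handshake.
# Real queues are ring buffers in shared host memory, and doorbells are
# memory-mapped controller registers; every name here is hypothetical.

from collections import deque

class SimulatedNvmeController:
    def __init__(self):
        self.submission_queue = deque()   # host -> controller commands
        self.completion_queue = deque()   # controller -> host results

    def ring_doorbell(self):
        # Step 2: the controller fetches and executes the submitted
        # commands, posts completions, then interrupts the host.
        while self.submission_queue:
            command = self.submission_queue.popleft()
            self.completion_queue.append(f"done: {command}")
        print("controller: interrupt raised, completions ready")

ctrl = SimulatedNvmeController()

# Step 1: the host writes commands into the submission queue and rings
# the doorbell to signal that new entries are ready.
ctrl.submission_queue.append("READ  LBA 0-7")
ctrl.submission_queue.append("WRITE LBA 8-15")
ctrl.ring_doorbell()

# Step 3: the host drains the completion queue and acknowledges the
# entries by updating the completion-queue doorbell (modeled here as
# simply emptying the queue).
while ctrl.completion_queue:
    print("host:", ctrl.completion_queue.popleft())
```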

The end result is significantly lower overhead compared with traditional transfer protocols such as SAS and SATA. NVMe is also optimized for non-uniform memory access (NUMA), meaning it was designed so that multiple CPU cores can each manage their own queues. Put it all together and you have a specification that capitalizes on the massively parallel, low-latency data paths of flash without incurring the traditional cost of translating commands between protocols.
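
Here’s a brief sketch of why per-core queues eliminate locking: each CPU core owns its own queue pair, so no two cores ever contend for the same ring. The data structures and names below are illustrative, not taken from a real NVMe driver.

```python
# Illustrative sketch: one queue pair per CPU core means no core ever
# shares a ring with another, so the submission path needs no locks.
# Names are hypothetical, not taken from a real NVMe driver.

import os

class QueuePair:
    def __init__(self, qid: int):
        self.qid = qid
        self.submission_queue = []  # owned exclusively by one core

    def submit(self, command: str) -> None:
        # Lockless: only the owning core ever appends to this list.
        self.submission_queue.append(command)

# An NVMe driver typically creates one queue pair per core; the spec
# allows up to 64K I/O queues, far more than any core count.
num_cores = os.cpu_count() or 1
queue_pairs = {core: QueuePair(qid=core + 1) for core in range(num_cores)}

def submit_from_core(core_id: int, command: str) -> None:
    queue_pairs[core_id].submit(command)  # direct, contention-free path

submit_from_core(0, "READ LBA 42")
print(f"{num_cores} cores -> {len(queue_pairs)} independent queue pairs")
```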

Benefits of NVMe Storage

NVMe storage’s main benefits include:

  • Higher I/O performance (roughly 55-180 IOPS for HDDs vs. 3K-40K IOPS for SSDs)
  • Higher data throughput
  • Lockless connections that provide each CPU core with dedicated queue access to each SSD
  • Massive parallelism with over 64K queues for I/O operations

NVMe Storage vs. SATA SSD

Traditionally, SSDs connected to a computer or storage controller via Serial ATA (SATA), a computer bus interface originally introduced for hard-disk drives (HDDs). Early SSDs were designed to work with SATA merely to take advantage of the ubiquity of the technology. Today, PCIe is the preferred physical interface for SSDs, and the NVMe transfer protocol allows manufacturers to take full advantage of SSD performance when interfacing over a PCIe bus.
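
One practical place to see the difference is in how the operating system exposes each device type. On a Linux host, SATA drives surface as /dev/sdX through the SCSI/ATA stack, while NVMe namespaces surface as /dev/nvmeXnY. The sketch below (Linux-specific, reading standard sysfs entries) lists block devices and labels them accordingly.

```python
# Linux-specific sketch: SATA drives surface as /dev/sdX through the
# SCSI/ATA stack, while NVMe namespaces surface as /dev/nvmeXnY.
# This only reads standard sysfs entries; run it on a Linux host.

from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    name = dev.name
    # queue/rotational is "1" for spinning HDDs and "0" for SSDs,
    # whether they sit behind SATA or NVMe.
    rotational = (dev / "queue" / "rotational").read_text().strip()
    interface = "NVMe" if name.startswith("nvme") else "SATA/SAS or other"
    media = "HDD" if rotational == "1" else "SSD"
    print(f"/dev/{name}: {interface} interface, {media} media")
```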

What is NVMe over fabrics (NVMe-oF)?

NVMe-oF is the practice of connecting NVMe storage systems with hosts over a network or data fabric. A data fabric simply refers to the network architecture, transfer protocol, and other technologies and services that allow data to be accessed and managed seamlessly across a network. The idea is to extend the low latency and performance of NVMe over PCIe to storage area networks (SANs) through NVMe-friendly standards for popular transfer protocols such as Ethernet, Fibre Channel, and TCP. The NVMe over fabrics (NVMe-oF) specification was created and is currently maintained by NVM Express, the organization behind the open collection of standards for non-volatile memory technologies. Let’s take a closer look at the NVMe transfer protocols supported by this standard.
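
In practice, a host reaches a fabric-attached subsystem using a transport type, a transport address, and an NVMe Qualified Name (NQN). The sketch below loosely models those discovery fields in Python; the target values shown are placeholders, not real endpoints.

```python
# Simplified model of what an NVMe-oF host needs to reach a target,
# loosely mirroring discovery log entry fields from the NVMe-oF spec
# (trtype, traddr, trsvcid, subnqn). All values below are placeholders.

from dataclasses import dataclass

@dataclass
class FabricTarget:
    trtype: str   # transport type: "tcp", "fc", or "rdma"
    traddr: str   # transport address (IP address, or FC WWNs)
    trsvcid: str  # transport service ID (e.g., a TCP port)
    subnqn: str   # NVMe Qualified Name of the target subsystem

targets = [
    FabricTarget("tcp", "192.0.2.10", "4420",
                 "nqn.2014-08.org.nvmexpress:example-subsys-tcp"),
    FabricTarget("fc",
                 "nn-0x2000000000000001:pn-0x2100000000000001", "none",
                 "nqn.2014-08.org.nvmexpress:example-subsys-fc"),
]

for t in targets:
    print(f"{t.subnqn} via {t.trtype} at {t.traddr} ({t.trsvcid})")
```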

What is NVMe over Fibre Channel (NVMe/FC)?

NVMe over Fibre Channel (also known as NVMe/FC or NVMe-FC) is a high-speed transfer protocol for connecting NVMe storage systems to host devices over fabrics. It supports the fast, in-order, lossless transfer of raw block data between NVMe storage devices in a network. The original Fibre Channel Protocol (FCP) was designed to transport SCSI commands over Fibre Channel networks and has become the dominant protocol for connecting servers with shared storage systems. While traditional FCP can be used to connect servers with NVMe storage devices, there’s an inherent performance penalty incurred when translating SCSI commands into NVMe commands for the NVMe array. NVMe/FC avoids this penalty by carrying native NVMe commands over the same Fibre Channel fabric.

What is NVMe over TCP (NVMe/TCP)?

NVMe over TCP (NVMe/TCP) is a low-latency transfer protocol that allows you to use standard Ethernet TCP/IP networking equipment natively with NVMe storage. TCP/IP is the default transfer protocol of the internet; it breaks messages into packets so that a disruption of service doesn’t force an entire message to be resent. As an extension of the NVMe-oF specification, NVMe/TCP allows you to send NVMe commands inside the same TCP/IP packets you use to transmit other types of data. The plug-and-play ease and lower cost of standard Ethernet make it an economical way to connect NVMe storage devices over a data fabric. Ethernet also provides a greater range of network speeds and queue paths for data transport than traditional iSCSI.
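
On Linux, the open-source nvme-cli utility is the usual way to attach an NVMe/TCP target. The sketch below simply wraps the nvme connect command; it assumes nvme-cli is installed and the nvme-tcp kernel module is available, and the address, port, and NQN are placeholder values you’d replace with your own.

```python
# Sketch: attaching an NVMe/TCP target from a Linux host with the
# open-source nvme-cli tool. Assumes nvme-cli is installed and the
# nvme-tcp kernel module is available; the address, port, and NQN are
# placeholders to replace with your target's real values.

import subprocess

cmd = [
    "nvme", "connect",
    "-t", "tcp",                       # transport type
    "-a", "192.0.2.10",                # placeholder target IP
    "-s", "4420",                      # IANA-assigned NVMe-oF port
    "-n", "nqn.2014-08.org.nvmexpress:example-subsys-tcp",  # placeholder
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout or result.stderr)
# On success, the target's namespaces appear as local /dev/nvmeXnY
# block devices.
```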

Why is NVMe vital for modernizing your data center?

In an increasingly challenging market, big data is no longer enough to maintain a competitive edge; it must also be fast. And how do you make big data fast? You start in the server room. Transitioning from HDDs to SSDs is a good place to start, but it’s only one piece of the storage area network (SAN) puzzle. The transfer protocol, interconnects, and networking architecture also play important roles in the overall speed of your storage system. That means replacing legacy technologies, such as serial-attached SCSI (SAS), with NVMe/PCIe and NVMe over fabrics (NVMe-oF).

There’s an inherent advantage to using a transfer protocol specifically designed to communicate natively with NVMe storage technologies. Traditional storage systems use SAS links from their controller processors to the SSDs. Because SCSI is a legacy protocol designed for disk, each connection from the CPU core to the SSD is limited by the SAS host bus adapter (HBA) and synchronized locking. This serial bottleneck holds back flash arrays that don’t take advantage of newer protocols like NVMe.

Pure Storage® FlashArray™ was specifically designed to overcome this SAS bottleneck. NVMe brings massive parallelism with up to 64K queues and lockless connections that can provide each CPU core with dedicated queue access to each SSD.
