
NVMe-oF for the rest of us

There is a growing demand for NVMe devices in enterprise storage. It's not surprising that NVMe is becoming the standard interface at the back end of every storage solution, and more and more vendors are working to provide the same interface, NVMe-oF, on the host-facing front end as well.

The ABC of the NVMe Protocol

NVMe can be encapsulated in a number of transport protocols, including Fibre Channel (FC), InfiniBand, RDMA over Converged Ethernet (RoCE), and a relative newcomer, plain TCP. This gives organizations a variety of options for designing new infrastructure and, at the same time, protecting investments in existing infrastructure.
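
To make the idea of interchangeable transports concrete, here is a minimal sketch (my own, not from any vendor documentation) that wraps the Linux nvme-cli tool from Python. The target address, port, and subsystem NQN are hypothetical placeholders, and note that FC targets use WWN-style addresses rather than an IP.

    # Minimal sketch: connecting to an NVMe-oF subsystem with nvme-cli.
    # All addresses and NQNs below are hypothetical placeholders.
    import subprocess

    TARGET_ADDR = "192.0.2.10"                       # hypothetical target IP
    SUBSYS_NQN = "nqn.2014-08.org.example:subsys1"   # hypothetical subsystem NQN

    def nvme_connect(transport: str) -> None:
        """Connect over 'tcp', 'rdma', or 'fc'; needs root and nvme-cli installed.
        Only the -t value changes per fabric (FC also uses WWN-style addresses)."""
        subprocess.run(
            ["nvme", "connect",
             "-t", transport,      # the transport is the only real difference here
             "-a", TARGET_ADDR,
             "-s", "4420",         # conventional NVMe-oF service port
             "-n", SUBSYS_NQN],
            check=True,
        )

    # nvme_connect("tcp")  # the same call shape works for "rdma"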

FC, for example, has been a staple of enterprise infrastructure for a long time. It requires dedicated cables, host bus adapters, and switches, and ultimately it's very expensive. Still, NVMe/FC is a good compromise if you have invested heavily in this technology, want to amortize your existing infrastructure, and can plan a long-term transition to other network types. In this case, the minimum requirement to adopt NVMe/FC is Gen 5 16Gb/s FC.

Enterprises have never really adopted InfiniBand. It offers high bandwidth with low latency and is optimized to deliver small messages quickly. It is one of the most popular networks in high-performance computing, and NVMe performs at its best on InfiniBand. But, again, if you're an enterprise this isn't for you (and it's very likely that the storage product you plan to use will have limited support for it, if any).

One of the main advantages of FC and InfiniBand is their lossless nature. The network takes care of the connection and does not lose packets between the servers and the storage system. Standard Ethernet, on the other hand, is a best-effort network: it relies on a simplified set of controls, and packets can be lost along the way. They can be retransmitted, of course, but this can create performance issues. Converged Ethernet (CE) added protocols to address these problems, prioritizing specific kinds of traffic, such as storage, and bridging the gap with FC and InfiniBand. In fact, the first CE implementations were aimed at encapsulating FC traffic over data center Ethernet (FCoE). The idea behind FCoE was to converge storage and networking onto the same wire. It worked, but nobody at the time was ready for that kind of change. RoCE is a further refinement that simplifies the stack and helps minimize latency. I've tried to keep this explanation simple and quick, and maybe it's an over-simplification, but it should give you the idea.

Last but not least, there is NVMe/TCP. It just works. It works on existing hardware (not on cheap discount switches, of course, but any enterprise-grade switch will do) and standard server NICs. It's not as efficient as the others, but I'd dig deeper before concluding that it isn't the best option.

Theory versus reality

RoCE is great, but it's also really expensive. To make it work you need specific NICs (network interface cards) and switches. This means you cannot reuse your existing network infrastructure, you need to add NICs to servers that already have them, and it limits your hardware options and introduces lock-in on specific network adapters.

Besides the cost per port, you need to consider that two 100Gb/s NICs (required for high availability) provide 200Gb/s per node. That is a considerable amount of bandwidth: do you really need it? Can your applications actually take advantage of it?
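
As a back-of-the-envelope check (with hypothetical numbers, not measurements from this article), you can translate a workload's IOPS and block size into the bandwidth it actually needs and compare that to the 200Gb/s a dual-100Gb/s RoCE setup provides:

    # Rough sanity check: how much of a 2 x 100 Gb/s pipe would a workload use?
    def required_gbps(iops: float, block_size_kib: float) -> float:
        """Convert IOPS and block size (KiB) into Gb/s of throughput."""
        bytes_per_second = iops * block_size_kib * 1024
        return bytes_per_second * 8 / 1e9

    # Hypothetical example: 400,000 IOPS at 8 KiB blocks
    print(f"{required_gbps(400_000, 8):.1f} Gb/s")   # about 26 Gb/s, far below 200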

But there's more to it than that; consider all the available options.

If you compare all the options currently on the market, you will note that NVMe/TCP has many advantages and, in the real world, no major downsides. It shines when it comes to cost per port and ROI. Its throughput is in line with the other solutions. The only parameter where it does not come out on top is latency (more on this in a moment). But flexibility is an aspect that should not be underestimated: you can use the existing switches, configurations, and NICs already installed in your servers, and upgrade them along the way.

Latency and flexibility

Yes, NVMe / TCP has higher latency than others. But how much? And how does it actually compare to what you have today in your data center?

In a recent briefing I had with Lightbits Labs, I saw a series of benchmarks comparing a traditional Ethernet-based protocol (iSCSI) with NVMe/TCP. In my opinion the results are quite impressive, and they should give you a good idea of what to expect from NVMe/TCP adoption.

From this slide you can see that, by simply replacing iSCSI with NVMe/TCP, the efficiency introduced by the new protocol stack reduces latency and keeps it below 200µs, even when the system is under heavy stress. Again: same hardware, better efficiency.
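
If you want to sanity-check numbers like these on your own gear, one common approach is to run fio against an NVMe/TCP-attached namespace. The sketch below is illustrative only: it assumes fio is installed and that /dev/nvme1n1 is a device you can safely read from, and fio's JSON field names can vary slightly between versions.

    # Minimal latency check with fio (assumes fio is installed and /dev/nvme1n1
    # is an NVMe/TCP-attached namespace that is safe to read from).
    import json
    import subprocess

    result = subprocess.run(
        ["fio", "--name=nvme-tcp-latency", "--filename=/dev/nvme1n1",
         "--rw=randread", "--bs=4k", "--iodepth=1", "--direct=1",
         "--runtime=30", "--time_based", "--output-format=json"],
        capture_output=True, text=True, check=True,
    )
    job = json.loads(result.stdout)["jobs"][0]
    mean_us = job["read"]["clat_ns"]["mean"] / 1000   # completion latency in µs
    print(f"mean 4k random-read latency: {mean_us:.0f} µs")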

Yes, with NVMe/FC or NVMe/RoCE you can get even better latency, but we are talking about 200µs here, and very few workloads and compute infrastructures really need latency lower than that. Am I wrong?

The low cost of NVMe/TCP has another key advantage: it enables the modernization of legacy FC infrastructures at a fraction of the cost of the other options described in this article. Older 8/16Gb/s FC HBAs and switches can be replaced with standard NICs and 10/25Gb/s Ethernet switches. This simplifies the network and its management while reducing support and maintenance costs.
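
A crude way to reason about that saving is to put per-port figures side by side; every number below is hypothetical and only illustrates the shape of the comparison, not real pricing.

    # Illustrative per-port cost comparison; all figures are hypothetical.
    def fabric_cost(ports, adapter_per_port, switch_per_port,
                    support_per_port_year, years):
        """Total cost of a fabric: adapters + switch ports + support over N years."""
        return ports * (adapter_per_port + switch_per_port
                        + support_per_port_year * years)

    PORTS = 200  # e.g. 100 dual-ported servers (hypothetical)
    print("16Gb/s FC :", fabric_cost(PORTS, 800, 900, 150, 5))
    print("25GbE TCP :", fabric_cost(PORTS, 250, 300, 60, 5))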

Closing the circle

NVMe/TCP is one of the best options for adopting NVMe-oF. It's the cheapest and most flexible of the bunch, and its performance and latency compare quite well with traditional protocols. Yes, TCP adds a bit of latency to NVMe, but for most enterprise workloads the move to NVMe is still a huge improvement.

In my opinion, and I've said this many times, NVMe/TCP will become the new iSCSI in terms of adoption. Ethernet hardware is very powerful, and with an optimized protocol it offers incredible performance without additional cost or complexity. The reality is that not every server in your data center needs top performance, and with NVMe/TCP you have a variety of options to address all of your business needs. Honestly, the performance gap can easily be minimized: Lightbits Labs, for example, can take advantage of Intel's ADQ technology to further improve latency while keeping costs down and avoiding lock-in.

In the end, it all comes down to TCO and ROI. Very few workloads may require the latency provided by RoCE; for everything else there is NVMe/TCP, especially if you consider how easily it can be adopted and run on an existing Ethernet infrastructure.

Disclaimer: Lightbits Labs is a GigaOm client.
