One of the most exciting announcements at this year’s VMworld is Project Monterey. In a nutshell, VMware wants to offload some high-demand operations to smart network interface cards (SmartNICs) and other types of accelerators. That way, the virtualization stack becomes more efficient and feature-rich, while also allowing for resource separation and aggregation. This description might seem a bit vague, but it has a number of real-world benefits for users, including increased security and efficiency, as well as better resource management. And all of this happens while simultaneously improving the cluster’s overall performance!
The idea is quite simple, and similar approaches have been used many times in the past. Today’s general-purpose CPUs are complex and can do many things in parallel, but like it or not, there are limitations and shared resources. The more diverse the tasks a CPU is required to perform, the more context switching it has to do, causing cache misses that slow down processing and decrease overall system performance.
From a storage perspective, encryption, compression, protocol translation, and all the math behind data protection put a strain on the CPU, leading to poorer overall system performance. In this regard, Project Monterey offloads these tasks to specialized hardware accelerators, freeing the system CPU to run applications more efficiently.
Unfortunately, Project Monterey isn’t here yet, and we’ll need to wait a few months to see a production version of it. On the other hand, the technology is already available and several vendors in different sectors are working on the exact same model as Project Monterey.
Software + Hardware-defined storage
Software is great, but software that can get the most out of hardware is better. For many years, we had hardware-based storage systems powered by purpose-built CPUs and ASICs, and this was the only way to deliver the power needed to make everything fast enough. The storage array’s operating system was specifically designed to work with the hardware and exploit every bit of it. Over time, thanks to the increasing power of CPUs and network components, ASICs (and other esoteric accelerators) became practically unnecessary, and more and more system architectures shifted to standard hardware. This brought an increasing number of “software-defined” solutions to market.
Everything worked pretty well in the hard drive era, until flash memory, NVMe, and storage-class memory appeared, in that order. It didn’t happen overnight; flash adoption was quite slow at first because of its price, but things have changed completely over the past few years.
We now have 100Gb/s Ethernet (if not more!), NVMe and NVMe-oF (shorter and highly parallel data paths), and faster-than-ever persistent memory that can be configured to look like a RAM extension (Intel Optane). The amount of data these devices can manage is enormous. To keep the storage system balanced and efficient, we need to ensure that every component can sustain this data flow without congestion. A classic example of history repeating itself, some would say.
A software-defined storage system built on standard hardware (i.e., no acceleration) could get by with general-purpose CPUs because:
- Hard drives were slow (hundreds of IOPS)
- Flash was faster but still manageable (up to tens of thousands of IOPS)
- Ethernet was relatively slow (10Gb/s)
- Protocols were designed to handle hard drives in a serial fashion (SCSI, SATA, etc.)
The day we unleashed the power of next-generation media thanks to NVMe and faster networks (100Gb/s or more), general-purpose CPUs became the bottleneck. At the same time, scaling the storage system started to require more powerful and more expensive CPUs. At the end of the day, everyone wants to go faster, but no one wants to give up data protection, data services, security, or data footprint optimization. Would you?
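The shift described above is easy to see with some back-of-envelope arithmetic. The figures below are illustrative assumptions (typical datasheet-class numbers, not benchmarks), but they show why a handful of hard drives could never stress a 10Gb/s link, while a single NVMe device can already outrun a CPU core running the storage software stack:

```python
# Back-of-envelope sketch: why faster media and networks turn the
# general-purpose CPU into the bottleneck. All figures are assumed,
# order-of-magnitude numbers for illustration.

def drives_to_saturate_network(link_gbps: float, per_drive_mbps: float) -> float:
    """How many drives it takes to fill a given network link."""
    link_mbps = link_gbps * 1000 / 8  # Gb/s -> MB/s
    return link_mbps / per_drive_mbps

# Yesterday: 10Gb/s Ethernet, HDDs at ~150 MB/s sequential throughput
hdd_era = drives_to_saturate_network(10, 150)

# Today: 100Gb/s Ethernet, NVMe SSDs at ~3000 MB/s
nvme_era = drives_to_saturate_network(100, 3000)

print(f"HDDs needed to fill a 10Gb/s link:   {hdd_era:.1f}")
print(f"NVMe SSDs needed to fill 100Gb/s:    {nvme_era:.1f}")

# On the IOPS side: assume one CPU core can push ~200k IOPS through
# the software storage stack. That is plenty against HDDs (hundreds
# of IOPS each), but a single NVMe drive at ~500k IOPS already needs
# multiple cores just to keep up.
core_iops_budget = 200_000   # assumed per-core software-stack capacity
nvme_drive_iops = 500_000    # assumed per-drive NVMe random-read IOPS
cores_per_drive = nvme_drive_iops / core_iops_budget
print(f"CPU cores needed per NVMe drive:     {cores_per_drive:.1f}")
```

The point is not the exact numbers but the trend: with HDDs, a few drives saturated the network long before the CPU mattered; with NVMe, a handful of drives saturate a 100Gb/s link and the CPU has to burn multiple cores per device just moving data, before doing any data services at all.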
Some SDS vendors quickly understood this and started building the next generation of systems that leverage accelerators to do more (and better) with less (power and space).
Software + Hardware Optimized
Let me give you an example here: Lightbits Labs. Lightbits’ LightOS is an innovative NVMe-based scale-out software storage solution that aggregates NVMe devices across storage nodes and exposes NVMe/TCP as its end-to-end protocol. It combines the low latency and high performance of NVMe-oF (NVMe over Fabrics) storage with data services, all over standard Ethernet TCP/IP networks.
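Part of NVMe/TCP’s appeal is that a client needs nothing more exotic than the Linux kernel’s nvme-tcp module and the stock nvme-cli tool. As a rough sketch of what attaching a remote volume looks like (the IP address and NQN below are placeholders for illustration, not real LightOS values):

```shell
# Placeholder address/NQN; a real NVMe/TCP target advertises its own
# discovery service and subsystem NQNs.

# Ask the discovery controller which NVMe/TCP subsystems are available
nvme discover --transport=tcp --traddr=192.0.2.10 --trsvcid=8009

# Connect to an advertised subsystem over plain TCP/IP
nvme connect --transport=tcp --traddr=192.0.2.10 --trsvcid=4420 \
    --nqn=nqn.2016-01.com.example:subsystem1

# The remote volume now appears as a local /dev/nvmeXnY block device
nvme list
```

No Fibre Channel HBAs, no RDMA-capable fabric: just the Ethernet the data center already has, which is exactly the economics Lightbits is betting on.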
The company recently announced a partnership with Intel to take advantage of several of the giant chip maker’s latest hardware technologies:
- Intel Optane, for fast, non-volatile handling of write buffers and metadata
- Intel Ethernet 800 Series NICs, for optimized, low-latency NVMe/TCP
- Intel QLC 3D NAND SSDs, for better $/GB
- And more…
Lightbits has already shown impressive performance on commodity hardware. By adding these technologies to its solution, it can further optimize performance while improving efficiency and overall system cost. In practice, Lightbits offloads a number of tasks to the Intel SmartNIC (with specific optimizations made possible by ADQ technology) while taking advantage of the latest memory options for better performance, capacity, and cost compared to other solutions. For users, this means better performance, more capacity, and higher overall efficiency, along with a smaller data center footprint, all of which translates into better TCO.
It’s worth noting that these accelerators can be considered specialized hardware, but they are not custom hardware. We are talking about off-the-shelf components, not ASICs designed by Lightbits. This is particularly important and gives Lightbits a huge long-term advantage, as it can focus on software development instead of managing ASIC design cycles. It also benefits Lightbits customers, who get additional options: they can choose between software-defined (fast, efficient, cost-effective) and software-defined with hardware acceleration (faster, more efficient, TCO-focused).
Closing the circle
If I’ve said it once, I’ve said it a million times: modern data centers are no longer just x86. More and more advanced infrastructures today rely on dedicated hardware and accelerators such as GPUs, TPUs, FPGAs, SmartNICs, and so on.
Software must be able to take advantage of these components, and Lightbits is a great example of this. Its solution works on general-purpose hardware, but it works better with these components, delivering better TCO and a faster return on investment at the end of the day.
From a user perspective, hardware-accelerated software-defined storage (for lack of better terminology) is just software-defined on steroids, and it offers additional options for designing solutions that better meet business needs.