BiB 074: Replace iSCSI With NVMe/TCP From Lightbits Labs - a podcast by The Packet Pushers Team

from 2019-03-28T12:00:35


The following is a transcript of the audio file you can listen to in the player above.
Welcome to Briefings In Brief, an audio digest of IT news and information from the Packet Pushers, including vendor briefings, industry research, and commentary.
I’m Ethan Banks, it’s March 28, 2019, and here’s what’s happening. I had a briefing with Lightbits Labs earlier this month. Why? Because I believe NVMe over TCP is going to make major inroads into enterprise environments over the next months and years, and Lightbits Labs employs many of the folks who wrote the spec. These folks are at the heart of all that is NVMe over TCP, and I wanted to hear about their product and what they’ve been doing with customers, as that might be you in the near future.
Network As Directly Attached Storage
Lightbits Labs has announced a software-defined storage product with optional hardware acceleration. In a nutshell, the product is a global flash translation layer that decouples SSDs from compute. Put your compute wherever you like, mount a box full of fast storage via Lightbits using the NVMe over TCP protocol, and get storage latency that performs like directly attached storage, but without the wasted space of stranded local drives.
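To make that concrete, here's a rough sketch of what mounting an NVMe/TCP target looks like from a Linux initiator using the standard nvme-cli tooling. The IP address, port, and NQN below are placeholders for illustration, not anything specific to the Lightbits product:

```shell
# Load the NVMe/TCP initiator module (mainline Linux kernels ship it)
modprobe nvme-tcp

# Discover subsystems exported by a target (address and port are examples)
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder NQN)
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
  -n nqn.2019-03.example.com:storage-pool-1

# The remote namespace now shows up as a local block device, e.g. /dev/nvme1n1
nvme list
```

From there, the device gets formatted and mounted like any directly attached NVMe drive, which is the point of the "performs like directly attached storage" claim.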
Lightbits is aiming this offering at folks who need hardcore storage performance: operators of private clouds and large enterprises. Usually, these companies have built their own composable stack. What they want from Lightbits is an API and speed.
Lightbits Is Different
If you’re thinking that this is essentially distributed storage and nothing new, you’re sort of right. Abstracting disk from compute isn’t new. But Lightbits cites four differentiators that we’ll talk through.
First, Lightbits works with whatever server hardware you dedicate to storage, provided it’s x86 with standard NVMe SSDs…nothing fancy. You can load the server with 8, 16, or even 32 SSDs. The NIC can be a standard Ethernet NIC; TCP offload is not required. TCP windowing is optimized by the NVMe over TCP stack to give the storage server traffic a consistent latency profile.
If you want to rev up your stock x86 hardware a little, Lightbits will sell you an optional acceleration card for SSD management and data services. The LightField Card is a PCIe add-in that offloads data reduction, data protection, NVMe/TCP, and the global flash translation layer functions.
Second, Lightbits claims that its global flash translation layer, something you typically find at the host level, is unique. As I dug around the Internet, I couldn’t find anything to dispute that claim. Lightbits calls its global FTL LightOS. LightOS is the operating system software layer that virtualizes pools of SSDs. Lightbits claims that LightOS can improve the endurance of SSDs up to 4x, especially with compression and thin provisioning. If that’s true, there’s an ROI calculation to perform, because the physical flash is going to last longer with LightOS sitting on top of it. LightOS doesn’t offer de-duplication today, but it’s on the roadmap.
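That ROI math is simple enough to sketch. The drive cost and baseline lifespan below are hypothetical numbers I made up for illustration; only the up-to-4x endurance factor comes from the Lightbits claim:

```shell
# Hypothetical inputs -- none of these dollar/year figures come from Lightbits
DRIVE_COST=500        # dollars per NVMe SSD
BASELINE_YEARS=3      # assumed drive life without a global FTL
ENDURANCE_FACTOR=4    # Lightbits' claimed up-to-4x endurance improvement

awk -v c="$DRIVE_COST" -v y="$BASELINE_YEARS" -v f="$ENDURANCE_FACTOR" \
  'BEGIN { printf "flash cost/year: $%.2f before, $%.2f after\n", c/y, c/(y*f) }'
# prints: flash cost/year: $166.67 before, $41.67 after
```

Multiply that per-drive delta by 8, 16, or 32 SSDs per server and the endurance claim starts to matter to the budget.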
Third, NVMe over TCP is, as the name implies, TCP. You can run this over your existing IP network. You can run it multi-hop, not a given in storage protocols. You don’t have to build a special network to handle special storage protocol magic. NVMe over TCP works with what you’ve got. That said, I will point out that NVMe over TCP might ask a little bit of that network you’ve got, and you should do some homework. Dr. J Metz did a dense, detail-filled presentation on NVMe over fabrics for network engineers, and you can find that presentation on our Ignition.PacketPushers.net website for free. But the point stands that NVMe over TCP doesn’t need a special network, which probably means it’s coming to your network at some point.
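As a starting point for that homework, a couple of quick checks give a feel for whether "the network you've got" is ready. The target address is a placeholder and these are generic Linux tools, not anything from the Lightbits briefing:

```shell
TARGET=192.0.2.10   # placeholder address of a would-be NVMe/TCP target

# Confirm the path carries your intended MTU without fragmentation
# (8972 = 9000-byte jumbo frame minus 28 bytes of IP + ICMP headers)
ping -M do -s 8972 -c 3 "$TARGET"

# Get a baseline latency and jitter number; NVMe/TCP rides on top of this
ping -c 50 "$TARGET" | tail -1
```

If the round-trip times are inconsistent or the big pings fail, that's the part of the network to investigate before putting storage traffic on it.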
Fourth, Lightbits points out that t…
