
Flash in the enterprise is in a solid state. Where is it going next?

Flash forward

Sponsored Sales of purpose-built storage appliances are falling markedly, according to IDC, with revenue and capacity down by 16.2 per cent and 14.9 per cent respectively in the analyst's latest quarterly tracker, published in September.

Among the culprits responsible for this decline is flash, which is becoming a standard part of the enterprise storage armoury, whether in hybrid or all-flash setups.

The benefits of flash include greater density, which means fewer devices to deploy and manage, with correspondingly lower software licensing and administration costs, along with increased reliability. Falling prices are also helping adoption.

IDC has said it expects sales of all-flash arrays to grow at a compound annual growth rate of 21.4 per cent through to 2020.

With the wind so clearly in the sails of flash, it’s worth taking time out to look at how the technology has evolved and its future.

Solid state storage has come a long way since it first appeared commercially in the mid-seventies. One of the pioneering products, Dataram's Bulk Core, was a 2MB system that emulated the hard disks minicomputers used at the time. It would be decades before flash took over as a primary storage technology in many places, though.

When flash first entered the enterprise it was really only used as cache - there wasn't even the concept of all-flash. It was often deployed to front-end conventional disk arrays, speeding up operations by caching the most frequently accessed data. In rudimentary form, these were the first hybrid flash arrays.

As capacities grew, a second use for SSDs emerged: putting files permanently on an SSD, effectively turning it into a primary storage device. This remained a niche application for decades, given the relatively high cost and low capacity of the media. You'd see operating systems or database tables stored that way in some cases, but examples were relatively rare.

Solid state disks gradually grew, but typically focused on the niche applications where they made financial sense. You’d see them in the military, or in performance-focused vertical applications in areas like seismological research. Flash began to hit the mainstream in the form of all-flash arrays in the mid-late 2000s as the benefits began to materialise.

One immediate benefit was speed. Hard drives faced a mechanical problem as they tried to keep up with their SAS and SATA interfaces. Spinning the platter to position data under the read/write head takes time. Vendors tackled that problem by increasing rotational speed, topping out at around 15,000 RPM, but even then they can't match the far higher speed of directly accessing flash memory cells.
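A quick back-of-the-envelope calculation makes the gap concrete. The sketch below works out the average rotational latency of a 15,000 RPM drive and compares it with an assumed ballpark figure of around 100 microseconds for a flash read - an illustrative assumption, not a vendor specification:

```python
# Back-of-the-envelope latency comparison: 15,000 RPM disk vs flash.
# The flash read time below is an assumed ballpark figure, not a measured value.

RPM = 15_000
seconds_per_revolution = 60 / RPM                                  # ~4 ms per full turn
avg_rotational_latency_ms = (seconds_per_revolution / 2) * 1000    # on average, half a turn

assumed_flash_read_us = 100                                        # assumed ~100 microseconds per NAND read

print(f"Average rotational latency at {RPM} RPM: {avg_rotational_latency_ms:.1f} ms")
print(f"Assumed flash read time: {assumed_flash_read_us} us")
print(f"Ratio: roughly {avg_rotational_latency_ms * 1000 / assumed_flash_read_us:.0f}x")
```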

Flash storage’s higher speeds made it suitable for high-transaction databases, explains George Crump, chief steward at storage analyst firm Storage Switzerland. “They’re still dominant today,” he says.

“The next wave is the one that put flash into the storage mainstream,” Crump told us, “and that’s how well storage handles virtualized environments.”

With the rise of cloud, flash helped to solve the “I/O blender” problem inherent in virtualized environments. Operating systems do their best to write and read data sequentially, maximising hard drive efficiency by keeping the read/write head in one place rather than wasting time dancing around the platter.

Multiple operating systems on the same physical media all want read/write access at the same time, killing that efficiency. The data I/O naturally becomes more random, and HDDs don’t cope with it well. On the other hand, random access is flash storage’s strength.
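To see how the blender works, the toy simulation below interleaves the sequential block streams of a handful of hypothetical VMs sharing one disk; the VM count and block ranges are illustrative, not drawn from any real workload:

```python
import random

# Toy illustration of the "I/O blender": each VM issues a nicely sequential
# stream of block addresses, but once the hypervisor interleaves them onto
# one shared disk, the combined stream the disk sees looks close to random.
# The VM count and block ranges are purely illustrative.

vm_streams = {
    f"vm{i}": list(range(i * 10_000, i * 10_000 + 8))   # each VM reads 8 consecutive blocks
    for i in range(4)
}

blended = []
while any(vm_streams.values()):
    vm = random.choice([v for v, s in vm_streams.items() if s])
    blended.append((vm, vm_streams[vm].pop(0)))          # hypervisor picks whichever VM is ready

print("Per-VM streams are sequential; the blended stream the disk sees is not:")
print(blended[:12])
```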

Solid adoption

So beyond high-performance databases, the fortunes of enterprise flash are tied to virtualization and the cloud. No wonder it’s been doing so well.

These days, flash storage is a bright spot in an otherwise flat market. In June, IDC noted that the EMEA external storage market had suffered its ninth quarterly decline, falling by 3.9 per cent in Q1 2017. A quick drill-down shows flash doing well, though. Conventional hard drive revenues fell 34.5 per cent, while the all-flash market grew 100 per cent.

We are clearly no longer talking a niche. Between them, all-flash and hybrid flash arrays now account for 70 per cent of total market value in western Europe, according to IDC.

The current flash array players include Dell, HPE, NetApp, Pure Storage, Kaminario and Huawei, along with a string of start-ups. Dell tops the market, having taken on EMC through its mega $67bn acquisition in 2016.

Of these, Huawei - which has been working on flash storage since 2005 - stands out for the unique architecture of its OceanStor Dorado V3. Its proprietary FlashLink technology uses purpose-built SSDs to manage flash resources: an ASIC talks to the flash chips and the drive controllers so that some drive functions happen faster - according to Huawei, at least, as there are no public benchmarks. OceanStor Dorado V3 is just over a year old, with the next version expected in the coming months.

Developments in NAND

So, that’s the sell, but what are the components of an enterprise flash system? The first is the media itself. NAND is the mainstay of enterprise flash storage because of its non-volatility: it retains data without needing constant power. It stores a charge in a floating-gate transistor, which holds a voltage.

There are several kinds of NAND, each of which uses this floating gate differently, with its own capacity trade-off.

Multi-level cell (MLC) NAND distinguishes multiple voltage levels in each floating gate, enabling it to store more than one bit per cell. This makes it less reliable, because NAND cells wear out over time, and typically slower to access, which is why it has traditionally been the consumer-grade memory used in phones, cameras and USB keys. Single-level cell (SLC) NAND just recognises the presence or absence of a charge, limiting it to one bit per cell. This reduces capacity and increases cost per bit, but also makes for better reliability and access times.
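The trade-off boils down to simple arithmetic: a cell that can hold 2^n distinguishable voltage levels stores n bits. The short sketch below works through that for the common cell types; the level counts follow from the definitions rather than from any particular product:

```python
import math

# Bits per cell follow directly from the number of distinguishable voltage
# levels a NAND cell can hold: bits = log2(levels).
cell_types = {
    "SLC": 2,    # charge present or absent -> 1 bit
    "MLC": 4,    # four levels -> 2 bits
    "TLC": 8,    # eight levels -> 3 bits
    "QLC": 16,   # sixteen levels -> 4 bits
}

for name, levels in cell_types.items():
    bits = int(math.log2(levels))
    print(f"{name}: {levels} voltage levels -> {bits} bit(s) per cell")
```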

This SLC/MLC distinction was a big thing a few years ago, but it has largely gone away, says Jim Handy, general director at specialist semiconductor analysis company Objective Analysis. The SSD controller manages reliability through error correction, and vendors can simply rely on redundancy, provisioning more chips in an SSD to take over when the controller software puts cells out of bounds.

“You may end up seeing more reliability out of the MLC with 25 per cent extra than you can get out of the SLC with no extra,” he says. “SLC nowadays costs around ten times as much as MLC, so the economics are certainly there.”
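Taking Handy's figures at face value - SLC at roughly ten times the price of MLC, versus MLC with 25 per cent extra capacity held in reserve - a rough cost comparison looks like this (the baseline price per gigabyte is an arbitrary placeholder; only the ratio matters):

```python
# Rough cost-per-usable-gigabyte comparison based on the figures quoted above:
# SLC at ~10x the price of MLC, versus MLC with 25 per cent extra capacity
# provisioned for redundancy. The $0.10/GB baseline is an arbitrary placeholder;
# only the ratio between the two results is meaningful.

mlc_price_per_gb = 0.10          # placeholder baseline
slc_price_per_gb = mlc_price_per_gb * 10
overprovisioning = 0.25          # 25 per cent extra MLC capacity held in reserve

usable_gb = 1_000
mlc_cost = usable_gb * (1 + overprovisioning) * mlc_price_per_gb
slc_cost = usable_gb * slc_price_per_gb

print(f"MLC with 25% overprovisioning: ${mlc_cost:,.0f} per {usable_gb} usable GB")
print(f"SLC with no overprovisioning:  ${slc_cost:,.0f} per {usable_gb} usable GB")
print(f"SLC costs roughly {slc_cost / mlc_cost:.0f}x more for the same usable capacity")
```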

Redundancy like that has become more realistic as NAND prices have dropped, which they have done significantly in the last few years, explains Berry. As prices have dropped, capacity has expanded even more dramatically.

“For capacity in the same form factor, SSD has surpassed hard disk drives,” he says, adding that flash is still a premium product. “In price per Gigabyte, I’m still going to pay more.”

We won’t reach price parity between NAND flash and HDDs in the enterprise any time soon, but that may not matter. As the gap closes, the performance and capacity gains inherent in NAND will be enough to tip an increasing percentage of IT buyers in that direction.

With triple-level cell (TLC) and forthcoming quadruple-level cell (QLC) NAND, flash costs look set to plummet still further as capacities mount. IDC analysts predict the NAND cost premium over hard disk falling to around a third of its current 6.6x multiple - roughly 2.2x - by 2021.

One thing that promises to drive capacity still further is 3D NAND, in which manufacturers layer cells atop each other, creating a microscopic storage skyscraper.

Berry says that aside from putting more storage in the same space, this will help keep the cost of NAND flash on its downward trajectory. “There are economies of scale and it gives chip manufacturers a way to discount the product without cannibalising the price of the wafer,” he says.

Host interfaces

Another area of innovation is in the interface between the flash memory and the host. “These flash arrays represent the gutting of storage systems, taking out the guts of disk drives and putting in the guts of flash memory,” according to Berry.

The host interface is no longer waiting for relatively sluggish HDD mechanics to catch up. “Now there’s a huge performance difference, so you have to change the I/O,” he adds.

The pressure was on manufacturers to use higher-speed PCIe buses instead of SATA, SAS and SCSI, which constrained flash memory’s communication speeds. The big development that helped flash arrays was Non-Volatile Memory Express (NVMe), which standardised the interface between flash SSDs and the PCIe bus.
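For the curious, the small sketch below shows NVMe controllers as the operating system presents them, reading the Linux NVMe driver's standard sysfs entries; it assumes a Linux host with the usual /sys/class/nvme layout and prints nothing if no devices are found:

```python
from pathlib import Path

# Minimal sketch: list NVMe controllers as exposed by the Linux NVMe driver
# under /sys/class/nvme. Assumes a Linux host with that standard sysfs layout;
# on other systems (or with no NVMe devices present) it simply prints nothing.

def read_attr(ctrl: Path, name: str) -> str:
    try:
        return (ctrl / name).read_text().strip()
    except OSError:
        return "unknown"

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = read_attr(ctrl, "model")
    serial = read_attr(ctrl, "serial")
    firmware = read_attr(ctrl, "firmware_rev")
    print(f"{ctrl.name}: model={model} serial={serial} firmware={firmware}")
```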

“NVMe-to-PCIe connectivity is probably the next big thing in flash development, and it’s something we’ll see continue to mature over the next six months to a year,” says Crump.

NVMe SSDs are finding their way into higher-end laptops and desktops where fast data access for gaming and workstation workloads is important. In the enterprise, they’re finding traction in direct-attached storage. Shared storage became popular when high-speed network fabrics were combined with HDDs, Handy explains, and because virtualization needs shared storage to move workloads between servers, people moved away from direct-attach.

“When flash came along, it was clear that putting flash on the far side of the network slowed the flash down considerably,” he says.

Swinging the pendulum back to direct-attach isn’t optimal, but it’s likely until something better comes along. It’s already in development: All hail NVMe over Fabrics.

This specification, announced by the NVM Express organisation in June 2016, takes the NVMe protocol and shovels it over Fibre Channel or IP networks to get close to in-server latency.

“In most data centres that reduction in latency probably isn’t necessary yet, but we’re heading there,” says Crump. One use case driving this is also propelling the adoption of enterprise flash in general after its virtualization-based kick start, he says: analytics.

“The next wave comes as we move into unstructured data, and those analytic projects become interesting in all-flash systems,” Crump suggests.

This is where storage communication speed becomes increasingly important. “In the scale-out architecture, especially as they become made up of more nodes, the ability to reduce that latency becomes more critical.”

NVMe over Fabrics is still in its early days, he points out. It only works in controlled sample sets right now, where you get to specify the host adaptor, switch and storage array. “It’ll take a year to 18 months before you can just go out and buy any NVMe over Fabrics adaptor and have it work well for you,” Crump says.

NVMe is about to give flash storage a boost by improving interoperability and giving users with strict latency requirements a route back to shared storage. Nevertheless, it isn’t without its challenges. It can be difficult maintaining hot-swap capabilities in PCIe-based connections, and dual-port redundancy is also tricky over that interface. Vendors must get these things right to reassure customers who don’t want to trade reliability for performance.

In any case, there’s enough interest in NVMe to make it a big blip on analysts’ radar. IDC execs predict more revenue for NVMe-based SSDs than for any other storage interface, and they say that it will effectively replace SCSI.

What’s next?

What’s next for flash? Now that vendors have cracked the speed problem, they’ll have to find other ways of adding value, says Crump.

He likens it to buying laptops, arguing that the average user can’t drive a device hard enough to make an i7 worthwhile. Instead, they start looking at other characteristics, like the quality of the screen and how light the unit is.

“In storage, we keep cranking performance, but for an increasing number of data centres performance isn’t an issue any more,” he points out.

So what will enterprise flash customers be looking for? He lays out some thoughts, including abstracted or ‘composable’ storage. NVMe over Fabrics will let vendors create scale-out storage systems that customers can tailor to different workloads on the fly, scaling for either performance or capacity as needed.

Enterprise flash storage is growing, clearly, with an accompanying pace of innovation from the industry that was absent during those early days. We should therefore expect to see capacity continue to grow and prices fall as a result, especially as more IT departments pile into these workloads and push what’s possible.

Sponsored by Huawei
