So in case you have not already heard…
Sunnyvale, Calif. — December 21, 2015 — NetApp, Inc. (NASDAQ: NTAP) today announced it has entered into a definitive agreement to acquire SolidFire for $870 million in cash.
Previously reported in The Register as a possible $1.2B deal – http://www.theregister.co.uk/2015/12/21/kurian_strikes_netapp_buying_solidfire/
My two cents on this deal? It was not necessary for any technical reason. But we have to concede that street confidence in NetApp has been low, and this move shakes that up for the moment. It also provides a convenient exit strategy for the FlashRay program, one with more positives than negatives. The more I think about it, it’s actually a pretty impressive move by GK to rattle the cages not just outside but, even more importantly, INSIDE NetApp. And I remain in the GK fan club.
(edit – added 12/22/2015) What SolidFire does provide, and what hints at the direction of IT as a whole, is beautiful, delicious performance SLAs. It is part of the infrastructure abstraction that has been missing, and is enormously difficult to develop. Once apps and systems are built on top that truly depend on these SLAs, they will become very sticky. But SolidFire’s implementation is fairly light here, and only works with a single media class on a well-balanced cluster. Investing in SolidFire, as NetApp is doing, comes from a belief in the interface that has been abstracted, not in the implementation itself. Is this interface patented? Can it be? And does NetApp have what it takes to evolve this to support imbalanced clusters and multiple media types over the years? This is a big question, because those algorithms get very messy, and, when done right, nobody should even be discussing their mechanics, but simply experiencing the effects. At some point, it is analogous to evaluating the service satisfaction of two different car rental companies by evaluating their fleet management algorithms instead of the simplicity of cost, car quality, and availability of models.
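To make the SLA idea concrete, here is a toy sketch of min/max/burst-style per-volume QoS plus cluster admission control. The token-bucket mechanics, class names, and numbers are my illustration of the general technique, not SolidFire’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class VolumeQoS:
    # SolidFire-style per-volume settings (illustrative fields)
    min_iops: int    # guaranteed floor
    max_iops: int    # sustained ceiling
    burst_iops: int  # short-term ceiling, funded by unused credit

class TokenBucket:
    """Limit a volume's ops: refill at max_iops per second,
    with capacity sized for a short burst above the sustained rate."""
    def __init__(self, qos: VolumeQoS, burst_seconds: float = 1.0):
        self.rate = qos.max_iops
        self.capacity = qos.burst_iops * burst_seconds
        self.tokens = self.capacity  # start with full burst credit

    def tick(self, dt: float) -> None:
        # replenish credit as wall-clock time passes
        self.tokens = min(self.capacity, self.tokens + self.rate * dt)

    def try_io(self, ops: int = 1) -> bool:
        # admit the I/O only if enough credit remains
        if self.tokens >= ops:
            self.tokens -= ops
            return True
        return False

def cluster_can_admit(volumes, new_min_iops, cluster_capacity_iops):
    """The floor is only a real guarantee if the sum of all promised
    minimums still fits inside what the cluster can actually deliver."""
    committed = sum(v.min_iops for v in volumes)
    return committed + new_min_iops <= cluster_capacity_iops
```

The sticky part is the guarantee math in `cluster_can_admit`: once tenants depend on those floors, the platform can never hand out more minimums than the hardware can back.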
In the long run, the confidence that NetApp builds with this, combined with some revenue increase and a sub-billion-dollar price for SolidFire, will probably be a wash. But is it a GREAT move?
- SolidFire has very few potential buyers left, after the Dell acquisition of EMC. Seriously, who else would pay a billion dollars?
- I’ve been in this industry long enough to know that it takes incredible effort to take a solid product and turn it into a corporation of thousands of employees. SolidFire might have been able to do this, but it does not happen often. Revenue in enterprise storage as a whole is stagnant, and SolidFire needed a buyout to happen very soon – hence the negotiation of the initial $1.2B offer down to $870M.
- I actually believe in NetApp’s portfolio capability to win with flash just as it was. The question isn’t whether something is “intentionally built for flash”; it is whether the architecture supports flash. And LFS variants such as ONTAP/WAFL do. In fact, their hybrid fixed-block/LFS file system is great for this application. Unfortunately, FlashRay lingered so long that it injured the credibility of perfectly good products.
- SolidFire’s performance SLAs are slick, but the cluster potentially misses delivering the most important storage SLA of all – the ability to thick provision. Thin provisioning is GREAT, but sometimes there is a need to deliver guaranteed capacity at any and all costs. “Thin-only” storage has silently become the default, and includes EMC XtremIO and Pure as well. At large implementation scale, automation scripts will need to be added to ramp up capacity preemptively, and the abstraction will need to be modified to track which provisioned storage has priority in gobbling up capacity first.
- Flash is going to plummet in price.
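A sketch of the kind of automation script the thick-provisioning bullet calls for: watch a thin pool’s utilization, grow it preemptively before it fills, and record which provisioned volumes have first claim on real capacity. The function name, headroom figure, and priority scheme are all hypothetical:

```python
def plan_capacity(pool_total_tb, pool_used_tb, volumes, headroom=0.20):
    """volumes: list of (name, provisioned_tb, priority), priority 0 = highest.
    Returns (tb_to_add, fill_order): how much raw capacity to add so the
    pool stays under (1 - headroom) utilization, and which volumes get
    first claim on real blocks as the pool fills."""
    threshold = pool_total_tb * (1.0 - headroom)
    expand_by = 0.0
    if pool_used_tb > threshold:
        # grow the pool so utilization drops back under the threshold
        expand_by = pool_used_tb / (1.0 - headroom) - pool_total_tb
    # highest-priority ("effectively thick") volumes gobble capacity first
    fill_order = [name for name, _, _ in sorted(volumes, key=lambda v: v[2])]
    return round(expand_by, 2), fill_order
```

This is the crude external version of what the storage abstraction itself should eventually track natively.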
It is the last bullet that matters. Flash is going to fall so far in price, it will change everything. Two years ago, while at NetApp, I presented an “Outrageous Opinion” that in 10 years (8 years from now) we would have a 1PB SSD available, with per-TB pricing at only a small fraction of HDD’s at that time, but with minimal overwrite capability. It was so outrageous, it got no votes. Only 2 years later, we are now looking at SSD/HDD price parity in 2-3 years, SSD densities exceeding HDDs in 3 years, a 32TB SSD (or even 64TB) by 2017 or 2018, and at this rate a 1PB SSD is possible by the mid-2020s.
Why does this matter? Because SolidFire fragments everything it stores into tiny blocks and distributes them across the cluster. SolidFire is capacity-efficiency-optimized, with high CPU-to-storage ratios and heavy interconnect usage in service of that efficiency. In a future of solid-state storage so cheap it’s practically free, the demand to crunch-at-any-cost at the expense of interconnect traffic won’t be there. SolidFire is a great technology (brilliant, actually) – for the market economics of SSD over the past 5 years.
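A toy sketch of what “fragment into tiny blocks and distribute” means in practice. Content-addressing each small block buys deduplication for free, but every block must travel the interconnect to its replica nodes; the hashing and placement scheme here is my simplification, not SolidFire’s actual algorithm:

```python
import hashlib

def distribute(data: bytes, nodes, block_size=4096, copies=2):
    """Fragment data into fixed-size blocks, address each block by its
    content hash, and place `copies` replicas on nodes derived from the
    hash. Identical blocks hash to the same address, so they dedupe."""
    placement = {}
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        digest = hashlib.sha256(block).hexdigest()
        # pick consecutive nodes starting from a hash-derived index
        start = int(digest[:8], 16) % len(nodes)
        placement[digest] = [nodes[(start + i) % len(nodes)]
                             for i in range(copies)]
    return placement
```

Notice the cost model baked in: a hash per tiny block (CPU) and a network hop per replica (interconnect). That trade is brilliant while capacity is expensive, and much less compelling once it isn’t.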
[edit – added 12/22/2015] The higher densities bring another wrinkle – lower and lower overwrite support. And pinging all you “need to design for flash” smarty-pants: if you truly believe that you need to “design for flash from the ground up”, then you must also need to “ReDesign for low-overwrite flash from the ground up”. Or, going back to my earlier point, you simply use the best-known architecture (or tool in the toolbox) for the job, tweaked as necessary.
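For that “tool in the toolbox” point: a log-structured store never overwrites in place – every write appends and an index points at the newest version – which is exactly why LFS-style architectures map naturally onto low-overwrite flash. A minimal sketch of the general pattern (my illustration, not WAFL):

```python
class LogStore:
    """Minimal log-structured store: writes always append to the log,
    and an in-memory index maps each key to its newest offset. Media
    cells are never rewritten in place; stale versions simply sit in
    the log until garbage collection reclaims them."""
    def __init__(self):
        self.log = []    # append-only segment of (key, value) records
        self.index = {}  # key -> offset of the latest version

    def put(self, key, value):
        self.index[key] = len(self.log)  # point at the new record
        self.log.append((key, value))    # append; never overwrite

    def get(self, key):
        return self.log[self.index[key]][1]

    def live_fraction(self):
        # share of the log that is still the newest copy of its key;
        # the rest is garbage a cleaner would reclaim
        return len(self.index) / len(self.log) if self.log else 1.0
```

An overwrite-heavy workload just lowers `live_fraction` and raises cleaning work; the media itself never sees an in-place update.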
SolidFire also distributes the data to such an extent that I don’t see it as solid a platform as VSAN or Nutanix for hyperconvergence (or Hadoop variants in general, because HCI is really just a specialized method of co-resident compute and storage).
But what do I know? Thank you, Mark Twain, for reminding me it’s cool to have unpopular opinions… And it’s easy to trust a Mark with awesome hair.
Parting advice to those at SolidFire – Start thinking about moving to Sunnyvale. NetApp’s travel budgets are tighter than Cameron Frye, and success will be tied to face time. I worked remotely out of the Waltham office, and speak from experience here.
Good luck with it! And before I forget…