SharePoint Designer access to SharePoint O365

One of my top interests in Microsoft O365 is how SharePoint is now available as a service. Sure, it was once fun to build a farm from scratch, but people want results quickly. SharePoint O365 should mean getting operational faster, and it gives the business community a better appreciation of how SharePoint administration is itself an abstraction.

A huge strength of list management is the visual workflow creation capability of SharePoint Designer. Since Designer will not run on OS X, I created a Windows 10 64-bit VM running on VMware Fusion. With O365 then installed, the initial sign-in from SharePoint Designer failed.

First, with Designer installed, we try to connect to my marketing team's SharePoint site.


The request fails. Click on “Details”…


The error is a “403 FORBIDDEN”… yikes.


I was connecting using an O365 account that absolutely was in the admin group, and it said I should have Full Access. Looking at SharePoint permissions, there appears to be a DENY on SharePoint Designer access. Seriously! SharePoint is often the victim of having simply offered way too many dials and levers.


The fix for this is described below.
I needed the overall O365 account admin at my company to intervene and help me fix it – I did not, even as a site administrator, have enough privilege to change what needed to be changed. That person enabled “Allow users to run custom script on personal sites” and “Allow users to run custom script on self-service created sites” for the SharePoint O365 top-level ZZZZZ site.

According to Microsoft this can take up to 24 hours to take effect, and sure enough that is true! I tested at 2, 6, and 16 hours after the fix – I thought the 24 hours was just a worst-case scenario. But after hour 20 or so, access was granted!


And now we can start making workflows ….


Hope this helps. Good luck with it!


Why “46and2bits”?

I’ve been asked a couple of times. “46 and 2” is simply a classic metal song from Tool, and the topic is evolution. To quote Wikipedia:

  • Popular belief dictates that the song title references an idea first conceived by Carl Jung and later expounded upon by Drunvalo Melchizedek concerning the possibility of reaching a state of evolution at which the body would have two more than the normal 46 total chromosomes and leave a currently disharmonious state.[2][3] The premise is that humans would deviate from the current state of human DNA which contains 44 autosomes and 2 sex chromosomes. The next step of evolution would likely result in human DNA being reorganized into 46 and 2 chromosomes, according to Melchizedek.

Will this evolution occur? It might get a bit gnarly, but IMHO it already has. Human evolution is the coexistence with, and codependence on, compute machinery. In fact, I think sci-fi has the whole concept of what “aliens” would look like completely wrong! Just read:

  • If and when we finally encounter aliens, they probably won’t look like little green men, or spiny insectoids. It’s likely they won’t be biological creatures at all, but rather, advanced robots that outstrip our intelligence in every conceivable way. While scores of philosophers, scientists and futurists have prophesied the rise of artificial intelligence and the impending singularity, most have restricted their predictions to Earth. Fewer thinkers—outside the realm of science fiction, that is—have considered the notion that artificial intelligence is already out there, and has been for eons.
  • Susan Schneider, a professor of philosophy at the University of Connecticut, is one who has. She joins a handful of astronomers, including Seth Shostak, director of NASA’s Search for Extraterrestrial Intelligence, or SETI, program, NASA Astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick in espousing the view that the dominant intelligence in the cosmos is probably artificial. In her paper “Alien Minds,” written for a forthcoming NASA publication, Schneider describes why alien life forms are likely to be synthetic, and how such creatures might think.

Even for humans – the whole notion of sending humans out to explore space is way too expensive, and too risky. We have proven that we can create machines that can travel long distances, have high tolerances for adverse conditions, and can accomplish much more for much less than sending people. Aside from the moon (and maybe someday Mars), people may never get much further.

So that “Two” in “46 and 2”… My guess: it's the binary machinery that many of us are dedicating our lives to helping advance. It's not “people vs. the machines”. Those machines – they are us. At least the ones that we make here. The ones from elsewhere? Well… I think the movie Oblivion captures that scenario…

“Data Lake” Alternatives

While I have a moment, I wanted to capture my collection of alternative phrases for “Data Lake”. They may each reference a slightly different technical topic – no rules here!

  1. Petabyte Pond
  2. Storage Swamp
  3. Bit Bog
  4. Terabituary
  5. LUNgoon
  6. Drive Bayou
  7. Byte Bight
  8. SANama Canal
  9. Tiers for Fijords
  10. Shallows HAL

So, do you have any more to add?

Good luck with it!

NetApp did not need SolidFire

So in case you have not already heard…

Sunnyvale, Calif. — December 21, 2015 — NetApp, Inc. (NASDAQ: NTAP) today announced it has entered into a definitive agreement to acquire SolidFire for $870 million in cash.

Previously reported in The Register as a possible $1.2B deal.

My two cents on this deal? It was not necessary for any technical reason. But we have to concede that street confidence in NetApp has been low, and this move shakes that up for the moment. It also provides a convenient exit strategy for the FlashRay program, one that has more positives than negatives associated with it. The more I think about it, it's actually a pretty impressive move by GK to rattle the cages not just on the outside but even more importantly INSIDE NetApp. And I remain in the GK fan club.

[edit – added 12/22/2015] What SolidFire does provide, and what hints at the direction of IT as a whole, is beautiful, delicious performance SLAs. It is part of the infrastructure abstraction that has been missing, and is enormously difficult to develop. Once apps and systems are built on top that begin to really depend on these SLAs, ultimately they will be very sticky. But SolidFire's implementation is fairly light here, and only works with a single media class on a well-balanced cluster. Investing in SolidFire, as NetApp is doing, comes from a belief in the interface which has been abstracted, and not the implementation itself. Is this interface patented? Can it be? And does NetApp have what it takes to evolve this to support imbalanced clusters and multiple media types over the years? This is a big question, because those algorithms get very messy, and… when done right… nobody should even be discussing their mechanics, but simply experiencing the effects. At some point, it is analogous to evaluating service satisfaction of two different car rental companies by evaluating their fleet management algorithms instead of the simplicity of cost, car quality, and availability of model.

In the long run, the confidence that NetApp builds with this, combined with some revenue increase and a sub-billion-dollar price for SolidFire, will probably be a wash. But is it a GREAT move?

  1. SolidFire has very few potential buyers left, after the Dell acquisition of EMC. Seriously, who else would pay a billion dollars?
  2. I've been in this industry long enough to know that it takes incredible effort to take a solid product and turn it into a corporation of thousands of employees. SolidFire might have been able to do this, but this does not happen often. Revenue in enterprise storage as a whole is stagnant, and SF needed a buyout to happen very soon… hence the negotiation of the initial $1.2B offer down to $870M.
  3. I actually believe in NetApp’s portfolio capability to win with flash just as it was. The question isn’t if something is “intentionally built for flash”, it is “is it an architecture that supports flash?”. And LFS-variants such as ONTAP/WAFL do. In fact, their hybrid fixed-block/LFS file system is great for this application. Unfortunately, FlashRay lingered so long it injured the credibility of perfectly good products.
  4. SolidFire's Performance SLAs are slick, but the cluster potentially misses on delivering the most important storage SLA of all – the ability to thick provision. Thin provisioning is GREAT, but sometimes there is a need to deliver guaranteed capacity at any and all costs. “Thin-only” storage has silently become the default, and includes EMC XtremIO and Pure as well. At large implementation scale, automation scripts will need to be added to ramp up capacity preemptively, as well as the abstraction modified to track which provisioned storage has priority in gobbling up capacity first.
  5. Flash is going to plummet in price.
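The thick-vs.-thin provisioning tradeoff in point 4 can be sketched with a toy capacity pool (a hypothetical illustration of my own, not any vendor's actual allocator): a thick volume reserves its full size at creation time, while a thin volume defers all risk to write time.

```python
# Toy capacity pool illustrating thick vs. thin provisioning.
# Hypothetical sketch only -- not any vendor's actual allocator.

class Pool:
    def __init__(self, capacity_tb):
        self.capacity = capacity_tb
        self.reserved = 0   # capacity guaranteed to thick volumes
        self.consumed = 0   # capacity actually written by thin volumes

    def create_thick(self, size_tb):
        # Thick: the guarantee is made (or refused) up front.
        if self.reserved + size_tb > self.capacity:
            raise RuntimeError("cannot guarantee capacity")
        self.reserved += size_tb

    def create_thin(self, size_tb):
        # Thin: nothing reserved, so creation always succeeds --
        # the pool can be wildly oversubscribed.
        pass

    def write_thin(self, tb):
        # Thin writes compete for whatever is not reserved.
        if self.consumed + tb > self.capacity - self.reserved:
            raise RuntimeError("pool exhausted")
        self.consumed += tb

pool = Pool(capacity_tb=100)
pool.create_thick(60)    # 60 TB guaranteed, no matter who writes first
pool.create_thin(500)    # oversubscription succeeds instantly
pool.write_thin(40)      # consumes everything the thick guarantee left
```

An automation script of the kind point 4 describes would watch `consumed` against `capacity - reserved` and grow the pool preemptively.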

It is the last bullet that matters. Flash is going to fall so far in price, it will change everything. Two years ago, I presented an “Outrageous Opinion” while at NetApp that in 10 years (8 years from now) we will have a 1PB SSD available, with per-TB pricing only a small fraction of HDD at that time, but with minimal overwrite capabilities. It was so outrageous, it got no votes. Only 2 years later, we are now looking at SSD/HDD price parity in 2-3 years, SSD densities exceeding HDDs in 3 years, a 32TB SSD (or even 64TB) by 2017 or 2018, and at this rate 1PB is possible by mid-2020s.





Why does this matter? Because SolidFire fragments everything it stores into tiny blocks and distributes them across the cluster. SolidFire is capacity-efficiency-optimized, with high CPU-to-storage ratios and high interconnect usage to optimize for efficiency. In a future of solid-state storage so cheap it's practically free, the demand to crunch-at-any-cost at the expense of interconnect traffic won't be there. SolidFire is a great technology (brilliant, actually) – for the market economics of SSD over the past 5 years.
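To make the fragmentation point concrete, here is a minimal sketch (my own illustration in the spirit of content-addressed small-block placement, not SolidFire's actual mechanics): each 4 KiB block is hashed, and the hash picks an owner node, so even a single 1 MiB write scatters across essentially the whole cluster, and every read pays interconnect traffic.

```python
# Sketch of hash-based small-block placement across a cluster.
# Illustrative only -- not SolidFire's actual on-wire mechanics.
import hashlib
import os

NODES = ["node%d" % i for i in range(4)]
BLOCK = 4096  # 4 KiB fragments

def place(data):
    """Fragment data into 4 KiB blocks; each block's hash picks its owner."""
    placement = {}
    for off in range(0, len(data), BLOCK):
        digest = hashlib.sha256(data[off:off + BLOCK]).hexdigest()
        placement[off] = NODES[int(digest, 16) % len(NODES)]
    return placement

# A single 1 MiB write (256 blocks) scatters across the cluster:
layout = place(os.urandom(1 << 20))
print(len(layout), "blocks across", len(set(layout.values())), "nodes")
```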

[edit – added 12/22/2015] The higher densities bring another wrinkle – lower and lower overwrite support. And pinging all you “need to design for flash” smarty-pants: if you truly believe that you need to “design for flash from the ground up”, then you must also need to “redesign for low-overwrite flash from the ground up”. Or… going back to my earlier point, you simply use the best known architecture (or tool in the toolbox) for the job, tweaked of course as necessary.

SolidFire also distributes the data to such an extent that I don’t see it as solid a platform as VSAN or Nutanix for hyperconvergence (or Hadoop variants in general.. because HCI is really just a specialized method of co-resident compute and storage).

But what do I know? Thank you, Mark Twain, for reminding me it's cool to have unpopular opinions… And it's easy to trust a Mark with awesome hair.

Parting advice to those at SolidFire – Start thinking about moving to Sunnyvale. NetApp’s travel budgets are tighter than Cameron Frye, and success will be tied to face time. I worked remotely out of the Waltham office, and speak from experience here.

Good luck with it ! And before I forget…

Mellon Collie and the Infinite Scalability


Ever hear the phrase “Infinitely Scalable”, or “Infinite Scalability”? If you are in the field of IT, of course you have! This phrase makes me not just irritated, but also a bit sad and pensive. Both the “Infinite” part, and the “Scalable” part.

(Note: My blog focuses on data storage technology, and that is the lens through which I'll apply my critical view of this phrase.) (Note: The artwork in this post is from the Smashing Pumpkins album of virtually the same name… I did not ask for permission.)

Isilon says they are “Infinitely Scalable”, allowing users to…

…. take advantage of one simple, highly reliable, infinitely scalable storage system. ONLY from Isilon.

“Only from Isilon”, they say? What about Nasuni, who also claims Infinite Scalability?

Nasuni Provides […] Infinitely Scalable Primary Storage and Continuous Data Protection

And Ceph?

Ceph provides an infinitely scalable Ceph Storage Cluster based upon RADOS …

And ScaleIO, which has been described by EMC's Chad Sakac as…

something that scales to infinity and beyond

Unfortunately, some of the “Infinite Scalability” lingo has been applied to products from NetApp (my current employer). To the best of my knowledge, this has not come from the NetApp corporation itself or any of the lead NetApp evangelists. One such example can be found on Reddit:

Fastest SPC results of all the major players, interoperable with any cloud, can handle disks of all kinds together in the same chassis, does every protocol under the sun, and will (with time) be infinitely scalable in both directions with zero downtime ever.

While a NetApp Clustered ONTAP (cDOT) solution can be built at an enormous scale (approaching 100PB of protected usable capacity, pre-deduplication and compression, with no thin-provisioning, snapshot, or cloning “effective capacity” marketing), and supports non-disruptive changes of scale (adding/removing controllers, adding and removing physical capacity), it is certainly not “Infinite”. This is one of the many reasons I love NetApp culture… no need to go overboard with “Infinite” when reality is impressive enough.

But I've had enough of those others. Enough of the “Infinite Scalability” noise.

Time to explore the topic of “Infinite Scalability”.


  • Is “Infinite Scalability” something that exists in reality?
  • Are we all in agreement on what “Scalable” means?
  • What is even being measured, and how?

And why do we even need to discuss this? Simple:

The Intellectual Arrogance in IT


There are laws that exist in the universe. Many of these laws we are still learning, while others (such as the futility of keeping socks properly paired) are well understood. We need to dream big and be creative, but that does not mean that the laws of the universe can be warped to meet the demands of our imagination.

Many of the individuals in IT are extremely intelligent. So why do many of the “thought leaders” in IT propose ideas suggesting that our industry has developed systems that somehow break these laws? These laws are easily recognized and appreciated in other science and engineering disciplines, so why not in IT?

First, let us get a better grip on the definition of Scalability.

The Ways of Scale

When the term “Scalable” is used, consider that this word can have multiple meanings.


A solid starting point for the definition of Scalability can be found on Wikipedia:

Scalability is the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged in order to accommodate that growth. […]

An algorithm, design, […] or other system is said to scale if it is suitably efficient and practical when applied to large situations […]. If the design or system fails when a quantity increases, it does not scale.

I find it useful to consider the root of the word scalability (scale), along with the above definition, to identify the three valid dimensions of Scal(ing).

  1. You can build something that is Of a Large Scale (or of a great scale)… such as a multi-petabyte storage platform.

    Here, a requirement is that, factoring in its size, it delivers correspondingly large (but not necessarily linear) capacity.

  2. You can build something that has an architecture that Can be Built at Different Scales (but not necessarily able to change size once built).

    Consider the scalability of VSAN here. The referenced document demonstrates that VSAN can be built at different sizes, and how well it works at those sizes; data is included for configurations of 4, 16, 32, or 64 hosts.

  3. The Ability of a System to Support Dynamic Resizing (or Dynamic Change of Scale), where the size of the system can be increased (or even decreased) without disruption.

    This dimension of Scalability is suggested with the use of phrases such as “seamless scalability”.

    An aspect to consider here is administrator involvement – is the resizing automatic? Some storage systems will add capacity to the global pool automatically, even non-disruptively (you may see the phrase “non-disruptive scalability”), while others may require intervention such as disk configuration and data rebalancing.

With these scalability dimensions now identified, here are some additional observations:

  • A “Scalable” system need not support all three dimensions simultaneously. For example, just because a system can be built at a large scale, that does not mean it can be dynamically resized. Same is true for a system that can be built at different sizes.
  • If a “Scalable” system can be built at different scales, that does not necessarily mean it can be built at a large scale.
  • We have not closely reviewed the definition of a cluster (or a single system) with respect to scalability. Much of the definition of clustering depends critically upon the consistency of the system (“strong” vs. “eventual”, in cluster terms). We will review that later.

Scaling in The World

Scaling is not a new concept. I'm a big believer in looking at existing infrastructure (other engineering disciplines) for inspiration here.


A person would never say they can build a bridge at any scale, much less of “infinite scale”. With respect to size, consider the obvious limits on bridge lengths, such as the longest suspension bridges in the world:

The Akashi Kaikyō Bridge between Kobe and Awaji Island in Japan

In a suspension bridge, the main cables suspend the deck (girder, roadway). Most of the bridge's weight (and any vehicles on the bridge) is suspended from the cables. The cables are held up only by the towers, which means that the towers support a tremendous weight (load). The steel cables are both strong and flexible, which makes long-span suspension bridges susceptible to wind forces. These days, engineers take special measures to ensure stability (“aerodynamic stability”) and minimize vibration and swaying in a suspension bridge under heavy winds. (The 1940 Tacoma Narrows Bridge is the world's most famous example of aerodynamic instability in a suspension bridge.)

Regarding the ability to change scale, a person would never say that a bridge can have lanes added or removed at any time. Consider the effort to add only a couple of lanes to an existing, active bridge: the Quesnell Bridge (a girder bridge located on Edmonton's busiest traffic corridor, the Whitemud Freeway, with volumes of more than 120,000 vehicles per day). When this section of the freeway needed widening, rerouting that traffic onto a detour was not an option. See the bridge widening project:


“We needed to design a system that would be cost effective, feasible and involve minimum construction time while allowing traffic to continue to flow,” says Gary Kriviak […].

Early analysis determined that there was some reserve capacity for additional weight on the existing piers and foundations, indicating that pier cap extensions were a feasible approach for supporting a widened bridge deck. A more conventional pier widening scheme would require construction from the foundation level up.

If no reserve capacity for additional load was available, new piers and foundations would have been required. Then – do the new lanes on the new piers join the old lanes, or do we have logically separate bridges? And if so, how does that impact intersections on either side of the bridge? Not to mention, the planning alone here is costly. You don't just bolt on a few lanes because a data sheet says that you can.

In conclusion here, saying a bridge is “infinitely scalable” (build one of any size, and change size at any time) would be “intellectually arrogant”.



Ever try resizing an airplane after it was built? For example, extending the length of the fuselage to hold more capacity, or maybe bolting on an extra engine for more speed?

Although the design of an airplane is reasonably scalable (planes of different sizes sort of look the same), the laws of physics require planes of different sizes to be built slightly differently (different materials, fasteners, etc.). Any change to one dimension has a ripple effect on the whole. To get a feel for the countless interdependencies, check out an airplane design tool.

Obviously, dynamic plane resizing is not possible, nor can planes be built at infinite size. As with bridges, saying airplane design is “infinitely scalable” (build one of any size, and change size at any time) would also be “intellectually arrogant”.


The engineering challenges of building a skyscraper are fascinating.



  • A taller building requires more elevators to service the additional floors, but the elevator shafts consume valuable floor space. If the service core (which contains the elevator shafts) becomes too big, it can reduce the profitability of the building.
  • The load a skyscraper experiences is largely from the force of the building material itself. In most building designs, the weight of the structure is much larger than the weight of the material that it will support beyond its own weight. In technical terms, the dead load, the load of the structure, is larger than the live load, the weight of things in the structure (people, furniture, vehicles, etc.). As such, the amount of structural material required within the lower levels of a skyscraper will be much larger than the material required within higher levels. This is not always visually apparent.
  • The wind loading on a skyscraper is also considerable. In fact, the lateral wind load imposed on super-tall structures is generally the governing factor in the structural design. Wind pressure increases with height, so for very tall buildings, the loads associated with wind are larger than dead or live loads.

Again… thinking “Infinite Scalability” when considering the challenges of building a skyscraper is crazy. The challenges are easy to see in a bridge, an airplane, or a skyscraper – it is harder to visualize all of this in IT.

Consistency Impacts Scaling


It is important to consider the consistency of a system when discussing scaling. Specifically, in IT, Cluster Consistency impacts Scaling.

We have so far put forward some compelling cases of how “infinite scalability” is not a reality in other engineering disciplines. Yet you may still believe that it is different in IT, thinking: “Hasn't the internet demonstrated that it is infinitely scalable (IP address limits aside)? If it can scale to be so enormous, why can't a simple storage system?” Here, scalable is a measurement of both overall size and the ability to support non-disruptive resizing.

For the internet, the secret to “scalability” lies in DNS. The distributed (and eventually consistent) nature of the name lookup protocol allows it to “scale well”. The name servers are not strictly consistent with each other; they may update asynchronously. What this means (and we implicitly know this) is that the internet is not a cluster with strong consistency, but only eventual consistency. A good definition of these two types of consistency follows:

In the context of scale-out data storage, scalability is defined as the maximum storage cluster size which guarantees full data consistency, meaning there is only ever one valid version of stored data in the whole cluster, independently from the number of redundant physical data copies. Clusters which provide “lazy” redundancy by updating copies in an asynchronous fashion are called ‘eventually consistent’. This type of scale-out design is suitable when availability and responsiveness are rated higher than consistency, which is true for many web file hosting services or web caches (if you want the latest version, wait some seconds for it to propagate). For all classical transaction-oriented applications, this design should be avoided.

Many open source and even commercial scale-out storage clusters, especially those built on top of standard PC hardware and networks, provide eventual consistency only. […] Write operations invalidate other copies, but often don’t wait for their acknowledgements. Read operations typically don’t check every redundant copy prior to answering, potentially missing the preceding write operation. The large amount of metadata signal traffic would require specialized hardware and short distances to be handled with acceptable performance (i.e. act like a non-clustered storage device or database).

In other words, if the internet needed to be strongly consistent, it would likely be limited to a single NAS server sitting in a single city.
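A toy model of that eventual consistency (a hypothetical sketch of my own, not actual DNS zone-transfer mechanics) makes the stale-read behavior easy to see: writes are acknowledged immediately, replicas apply them asynchronously, and a reader can land on a replica that has not yet converged.

```python
# Toy eventually consistent name service: updates are acknowledged
# before every replica applies them, so reads can be stale.
# Hypothetical sketch -- not actual DNS zone-transfer mechanics.
import queue

class Replica:
    def __init__(self):
        self.records = {}
        self.pending = queue.Queue()   # asynchronous update log

    def sync(self):
        # Apply whatever updates have queued up ("lazy" redundancy).
        while not self.pending.empty():
            name, addr = self.pending.get()
            self.records[name] = addr

replicas = [Replica() for _ in range(3)]

def write(name, addr):
    # Acknowledge after enqueueing -- no waiting for replicas to apply.
    for r in replicas:
        r.pending.put((name, addr))

write("example.com", "10.0.0.1")
replicas[0].sync()   # only one replica has converged so far

print(replicas[0].records.get("example.com"))  # 10.0.0.1
print(replicas[1].records.get("example.com"))  # None: a stale read
```

A strongly consistent version of `write` would block until every replica (or a quorum) had applied the update, which is exactly the coordination cost that limits cluster size.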

Going back to the earlier discussion on scalability in other engineering disciplines – I suppose that if you loosened the definition of a skyscraper to include a collection of buildings, interconnected with hallways in a loose manner, then you could also say that a skyscraper is “infinitely scalable”. But we wouldn’t do that – a single building is considered to be the “consistency domain”, if you will, of a skyscraper.

What we see here are tradeoffs being identified in scaling. These are summarized in the CAP Theorem.

The CAP Theorem

Excellent read :

The CAP theorem describes a few different strategies for distributing application logic across networks. CouchDB’s solution uses replication to propagate application changes across participating nodes. This is a fundamentally different approach from consensus algorithms and relational databases, which operate at different intersections of consistency, availability, and partition tolerance.

The CAP theorem […] identifies three distinct concerns [and you can have two out of the three, but not all three]:

  • Consistency

    • All database clients see the same data, even with concurrent updates.
  • Availability

    • All database clients are able to access some version of the data.
  • Partition tolerance

    • The database can be split over multiple servers.


The CAP Theorem is usually discussed when analyzing databases, but it is valid when analyzing storage systems as well. It is the requirement of most storage systems (clusters) to provide strong consistency that ultimately limits their scalability (size and resizing flexibility). A tip of the hat is deserved here to the application developers, who seem to have recognized those laws of the universe better than many platform techies who are preaching “infinite scalability”.
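The "two out of three" tradeoff can be shown with a minimal quorum sketch (my own illustration, using majority-quorum writes as a stand-in for strong consistency): once consistency demands a majority acknowledgment, the minority side of a network partition must refuse writes, i.e. give up availability.

```python
# Minimal CAP sketch: strong consistency via majority quorum means the
# minority side of a partition must sacrifice availability.
# Illustrative only.

N = 5                    # cluster size
QUORUM = N // 2 + 1      # majority needed for a consistent commit

def write(reachable):
    """Commit only if a majority of nodes acknowledges the write."""
    if len(reachable) >= QUORUM:
        return "committed"
    raise RuntimeError("unavailable: no quorum on this side of the split")

# A partition splits the cluster 3 / 2:
print(write({"a", "b", "c"}))      # majority side stays available
try:
    write({"d", "e"})              # minority side must refuse
except RuntimeError as err:
    print(err)
```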


There are some false beliefs that Hadoop is also infinitely scalable. (I won't include references to those docs – there are too many companies and names to list.) No surprise: it is not. (Thank you again, NetApp – the Hadoop guide using E-Series does not have a single use of the word “infinite”.) One compromise that can be made when designing a Hadoop cluster is to limit objects to large sizes. By avoiding small objects, the absolute total capacity in bytes is maximized, since the object count (metadata) is constrained in a strongly consistent storage cluster.

Distributing the metadata in Hadoop also helps, but it is still not infinite:

The default architecture of Hadoop utilizes a single NameNode as a master over the remaining data nodes. With a single NameNode, all data is forced into a bottleneck. This limits the Hadoop cluster to 50-200 million files.

The implementation of a single NameNode also requires the use of commercial-grade NAS, not budget-friendly commodity hardware.

A better alternative to the single NameNode architecture is one that uses a distributed metadata structure. A visualized comparison of the two architectures is provided below:

The fact that massive hyperscalers have used Hadoop feeds the infinite-scalability myth. In truth, many actually run multiple Hadoop systems that are loosely coupled (i.e., providing eventual consistency, not strong consistency), with redirectors used to map storage and tasks to the appropriate cluster.
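A back-of-envelope calculation shows why the single-NameNode ceiling exists. The commonly quoted rule of thumb of roughly 150 bytes of NameNode heap per namespace object (file or block) is an assumption here, not an exact figure, but it makes the point: the heap caps the object count, and bigger files stretch the same metadata budget over far more capacity.

```python
# Back-of-envelope NameNode metadata ceiling. The ~150 bytes of heap
# per namespace object is a commonly quoted rule of thumb, assumed
# here for illustration -- not an exact figure.

BYTES_PER_OBJECT = 150
HEAP_GB = 64                       # a generously sized NameNode heap

objects = HEAP_GB * 1024**3 // BYTES_PER_OBJECT
print("~%d million namespace objects" % (objects // 10**6))

# Bigger files spend fewer objects per byte stored, which is why
# constraining a cluster to large objects maximizes total capacity:
for file_mb in (1, 128, 1024):
    blocks = max(1, file_mb // 128)    # 128 MB default HDFS block size
    per_file = 1 + blocks              # one inode plus its blocks
    capacity_pb = objects // per_file * file_mb / 1024**3
    print("%5d MB files -> %8.1f PB" % (file_mb, capacity_pb))
```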

Quantifying Scalability

So we have established that scal(ing) is never “infinite”. But how is scaling / scalability measured?

I love the article “Scalability is not Boolean”, by Udi Dahan:

The first issue with scalability is the use of the word as an adjective: scalable. [As in:] “Is the system scalable?” Or the similar verb use: “Does it scale?”

The problem here is the implication that there is a yes/no answer to the question [of scalability].

Scalability is more than a boolean Yes or No, or a speed. It is complicated. Here are some aspects of measuring it (this list could be a long blog post unto itself, maybe someday in the future):

  • Speed of Scaling
  • Linear vs. Non-Linear
  • Relativity of Size
  • Granularity of Scale
  • Quantity of Disruption during Change of Size
  • Amount of Intervention / Management / Planning of Change of Size
  • Risk / Availability Consistency
  • Part Flexibility
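The "Linear vs. Non-Linear" dimension above is the easiest to quantify. Amdahl's law gives the classic upper bound: if a fraction p of the work parallelizes and the rest is serial, the speedup on n nodes is 1 / ((1 - p) + p / n), which flattens out no matter how many nodes you add.

```python
# Amdahl's law: even a small serial fraction caps scaling far below
# linear -- one concrete reason "infinite scalability" falls apart.

def speedup(p, n):
    """Speedup on n nodes when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 64, 1024):
    print("%4d nodes -> %5.1fx" % (n, speedup(0.95, n)))
# With 95% parallel work, 1024 nodes deliver under 20x, not 1024x.
```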

In Conclusion

The bottom line is that IT is not that different, and in fact could stand to learn a lot, from the challenges faced with scalability in other established engineering disciplines.

Infinite Scalability, in particular of a system that requires some type of strong consistency, is simply not possible. The moment you hear the phrase, a good LOL is AOK. It's marketing.

And when a system is described as scalable, stop and consider which dimension is being considered, as the term “scalability” is rather overloaded, and there is no standard consensus on how to quantify it.

Thank you for reading, and good luck with it!

The Cloud Between Virtualization and Abstraction

It is easy for many people in IT, myself included, to confuse the terms virtualization and abstraction. When I searched for “virtualization vs abstraction” on Google, every article I found explained the difference between them in a way that was… cloudy! So… what IS the REAL difference?


Origins of the Word “Virtual”

“Virtualization” is the process of making a representation of something “virtual”. According to Webster's, virtual means:

Virtual : Very close to being something without actually being it

The word “virtual” has been present in English since 1400–1500. The origin of “virtual” is the Medieval Latin word for “virtues” (specifically, “virtuālis”). A “virtue” is defined as “a beneficial quality or power of a thing.” Over time, this evolved to mean “important qualities”, or simply “identifying properties”.

“Virtual” has been used for centuries in politics (230 years ago in Alexander Hamilton's Federalist Papers) and continues to be used today for simple product comparisons (“The fake [purse] is virtually identical to the original of the same make by the authentic Louis Vuitton factories.”).

Today, that same definition of “virtual” in the Webster dictionary also includes “existing or occurring on computers or on the Internet”.

If today a person uses Google image search using only the keyword “virtual”, what would one see?

IT has taken cultural ownership of the word “virtual”, and this started with the introduction of Virtual memory some 50 years ago.

Virtual Memory

The concept of virtual memory was first described in 1956, and first implemented in the 1960s:

German physicist Fritz-Rudolf Güntsch

In the 1940s and 1950s, all larger programs had to contain logic for managing primary and secondary storage, such as overlaying. [….]

The concept of virtual memory was first developed by German physicist Fritz-Rudolf Güntsch […] in 1956 in his doctoral thesis, “Logical Design of a Digital Computer with Multiple Asynchronous Rotating Drums and Automatic High Speed Memory Operation”; it described […] hardware automatically moving blocks between primary memory and secondary drum memory.

Interesting that “virtual memory” was not the term used at first, and it isn’t clear exactly when the term “virtual memory” was adopted. One theory is that automated paging performance was nearly (virtually) identical to what a programmer could attain with manual paging. What is known is that the concept of virtual memory continued to evolve and was ultimately adopted in varying degrees on all hardware platforms.
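To make the paging idea concrete, here is a toy sketch in Python. It is illustrative only: `ToyMMU`, the FIFO eviction, and the 4KB page size are my own simplifications, nothing like a real MMU. It shows what virtual memory automates: translating virtual addresses through a page table, and "loading" pages on demand when a translation is missing.

```python
# Toy model of virtual memory: a page table maps virtual pages to
# physical frames, and a missing translation triggers a "page fault"
# that maps the page on demand (evicting the oldest page when full).

PAGE_SIZE = 4096

class ToyMMU:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.page_table = {}   # virtual page number -> physical frame
        self.faults = 0

    def translate(self, vaddr):
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        if vpage not in self.page_table:
            self.faults += 1   # "page fault": bring the page in
            if len(self.page_table) >= self.num_frames:
                # Evict the oldest mapping (simple FIFO), reuse its frame
                oldest = next(iter(self.page_table))
                frame = self.page_table.pop(oldest)
            else:
                frame = len(self.page_table)
            self.page_table[vpage] = frame
        return self.page_table[vpage] * PAGE_SIZE + offset

mmu = ToyMMU(num_frames=2)
mmu.translate(0)      # fault: virtual page 0 mapped to frame 0
mmu.translate(100)    # hit: same page, no fault
mmu.translate(5000)   # fault: virtual page 1 mapped to frame 1
print(mmu.faults)     # 2
```

The point of the abstraction is that the code above never manages frames itself, which is exactly the overlay logic that 1940s-1950s programs had to carry by hand.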

“Virtualization Nation”

Over time, IT engineers have created virtual representations of just about everything else. We have even developed specific qualifiers to describe hardware virtualization: full virtualization, partial virtualization, and paravirtualization.

Everyone wants to be part of “Virtualization Nation”. Nearly 25% of the blogs that I follow have been named taking inspiration from the cool “V” as a first letter, including vcdx133, vclouds_nl, veric, vhipster, virten_net, virtual10, virtualgeek, virtualinsanity, virtualizedgeek, virtualizetips, virtuallifestyle, virtuallyawesome, virtuallyuseful, virtualpro, virtualramblings, virtualstorageguy, virtualtacit, virtualtothecore, virtualvillarin, vmforsp, vmiss, vmstorageguy, vmtyler, vstorage, vtexan, and vtricks. This doesn’t even include sites like 2vcps, myvirtualloud_net or thevirtualnoob.

Oddly enough, the best VMware blog is Duncan Epping’s Yellow Bricks.

Virtualization has simplified many things, has generally improved hardware utilization efficiency, and is indeed indispensable (unless you actually enjoyed manual paging, or bare-metal Windows restores). For many years it has been the brick on the gas pedal of our industry. However, it has created an equal and sometimes even increased level of complexity elsewhere.

There are hypervisors running on hardware that assists virtualization, running virtual machines that have local virtualization, with additional levels of virtualization running within those VMs or even hypervisors running within hypervisors. The IT industry has reached an interesting juncture, where virtualization seems to have become a giant self-reinforcing loop.

And then one day it dawned on me – virtualization was a means to an end, but not an end unto itself.


When you look at the following piece of abstract art, what do you see? I ask because many suggest “abstraction” is just about “generalizing”. Are you generalizing this artwork into a bucket of “random shapes and colors”, or are you recognizing patterns and creating a deeper response to it?

IT and “Abstraction”

The use of “abstraction” in IT has merited special recognition in the Webster dictionary:

abstract in Technology:

  • A description of a concept that leaves out some information or details in order to simplify it in some useful way.
  • Abstraction is a powerful technique that is applied in many areas of computing and elsewhere. For example: abstract class, data abstraction, abstract interpretation, abstract syntax, Hardware Abstraction Layer.

In software development, an “abstract class” (shown below) is typically defined as a “generalization”. In this example, a “dog” is a generalization (or common category) of several dog breeds. But why do we do this?
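The dog example can also be sketched in code; here is a minimal Python version (the breed names and methods are my own illustration, not from the original diagram):

```python
from abc import ABC, abstractmethod

class Dog(ABC):
    """The abstraction: a generalization every breed belongs to."""

    @abstractmethod
    def bark(self) -> str:
        """Each breed supplies its own detail."""

    def greet(self) -> str:
        # Code written against the abstraction works for any breed
        return f"A {type(self).__name__} says {self.bark()}"

class Beagle(Dog):
    def bark(self) -> str:
        return "awooo"

class Poodle(Dog):
    def bark(self) -> str:
        return "yip"

for dog in (Beagle(), Poodle()):
    print(dog.greet())
```

Note that you cannot instantiate `Dog()` itself; the abstraction exists only to categorize breeds and define how to interface with them.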

The motivation of abstraction is, first, to be able to perform recognition and categorization. Recognition is performed for the purposes of associating the proper set of rules on how to interface with, in this case, a dog.

You are abstracting as you read this article. If you couldn’t visually process the lines and curves and colors on the screen in front of you, and recognize letters from those abstractions, you wouldn’t be able to read this article at all. Abstraction is a fundamental human ability that allows all of us to recognize things and interface with the world.

Abstraction Enables Recognition

Recognition is defined as:

Recognition is a match between visual input (processed through the ventral stream) and a mental representation of an object.

Recognition, along with abstraction, is a core part of a human’s computational thinking. Computational thinking allows us to take a complex problem, understand what the problem is, and develop possible solutions. The four key techniques (cornerstones) of computational thinking are:

  • Decomposition – breaking down a complex problem or system into smaller, more manageable parts
  • Pattern Recognition – looking for similarities among and within problems
  • Abstraction – focusing on the important information only, ignoring irrelevant detail
  • Algorithms – developing a step-by-step solution to the problem, or the rules to follow to solve the problem
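As a toy illustration (my own example, not from the source above), here is how the four cornerstones might map onto a small problem: finding the palindromic words in a sentence.

```python
def palindromic_words(sentence):
    # Decomposition: break the problem down into one word at a time
    words = sentence.split()
    results = []
    for word in words:
        # Abstraction: keep only the important information (the letters),
        # ignoring irrelevant detail such as case and punctuation
        core = "".join(ch.lower() for ch in word if ch.isalpha())
        # Pattern recognition + algorithm: a palindrome reads the same
        # forwards and backwards, so compare against the reversal
        if core and core == core[::-1]:
            results.append(word)
    return results

print(palindromic_words("Madam, did Bob see the kayak?"))
# ['Madam,', 'did', 'Bob', 'kayak?']
```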

Consider the image below: despite the pictures showing a duck at different sizes, from different angles, and with different methods (photographs or sketches), the human mind abstracts key aspects and identifies (recognizes) that each image is a duck.

You know who else can recognize ducks? Alex Rodriguez.

A-Rod Gets Abstraction

The universe is made up of energy and elements and all kinds of other things that smart physicists understand. There is no raw element called a “baseball”. A baseball is a collection of many different types of elements, and billions of atoms, assembled into different sub-objects (each of which is an abstraction unto itself), which are then packaged together into bigger objects, and ultimately into a thing that A-Rod (and the rest of us) recognize as a baseball.

Here is what is inside a baseball:

But what a human sees, when presented with a baseball, is:

  • texture
  • size
  • weight
  • color and patterns on it
  • shape

The human mind, and in particular the mind of Alex Rodriguez, does not map a scientific explanation of the billions of atoms of a baseball and then categorize it as a baseball. Instead, a few key observations are made, and abstraction and recognition are used to identify the object as a baseball.

With the abstraction of that object created, and the object recognized, A-Rod’s mind then understands what he can do with that recognized abstraction: he can pick it up, he can hold it, he can throw it, he can catch it, he can buy and sell it. The actions that are supported by that abstraction are its interface. But his favorite action to take on a baseball?

A-Rod thinks “I recognize a baseball… I interface with hitting a home run.”

The Abstraction Principle

Abstraction is fundamental to the art of creating new things; these new things are constructed from an assembly of smaller things. Abstraction is also about establishing a set of rules that allows repeated recognition of that logical thing in different circumstances, rules which can be shared with others so that they too will recognize this thing. Lastly, it is about associating the recognized object with a set of rules which can be used to interact with this new thing: the interface.

We circle back to a discussion of IT. Modules of code, whole software programs, or even entire hardware systems can provide an abstraction – creating something new that has a particular interface that allows that new thing to be used. This could be as simple as a POSIX interface, a document editor, a social media tool, or even a number-crunching supercomputer. A great description can be found in these SJSU Computer Science notes:

There are two ways to look at a module. A client (i.e., the person or client module that uses it) sees it in terms of the services it provides, while the implementer sees it in terms of its internal structure, in terms of how it provides services. We call the client’s view the module’s interface and the implementer’s view the module’s implementation.

The abstraction principle says a client shouldn’t need to know about a module’s implementation in order to use it, and the implementer should be able to change the module’s implementation without breaking the client’s code (assuming the interface doesn’t change).

The key to the Abstraction Principle is the independence of implementation (modularity) it creates.
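A minimal sketch of the principle in Python (the stack implementations are hypothetical, chosen only to show the client/implementer split): the client function uses only the interface, so either implementation can be swapped in without breaking it.

```python
# Sketch of the abstraction principle: the client codes against an
# interface (push/pop); the implementer can swap implementations freely.

class ListStack:
    """One implementation: backed by a Python list."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

class LinkedStack:
    """A different implementation, same interface: a linked chain of tuples."""
    def __init__(self):
        self._head = None
    def push(self, x):
        self._head = (x, self._head)
    def pop(self):
        x, self._head = self._head
        return x

def reverse(seq, stack):
    # Client code: uses only push/pop, knows nothing of the internals
    for item in seq:
        stack.push(item)
    return [stack.pop() for _ in seq]

print(reverse([1, 2, 3], ListStack()))    # [3, 2, 1]
print(reverse([1, 2, 3], LinkedStack()))  # [3, 2, 1], same result
```

The client (`reverse`) never learns whether a list or a linked structure sits underneath, which is exactly the implementation independence the SJSU notes describe.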

Consider the automobile. It is an abstraction that is created out of thousands of individual parts – which function as a whole new entity with a unique reusable interface. Most don’t care about how the car is constructed (implemented). Yet when assembled, we can all recognize this as a car.

The common interface means that you can interface with (use) many different types of cars, despite only having learned to drive on one specific type. The common interface includes how to enter a car, how to start it, how to put in cargo, steering, braking, and so on. This common interface is especially useful when you are rich and own many cars... like A-Rod.

Virtualization vs. Abstraction

First we listen to Abe, a man from the future, share his perspectives on this subject…

I’ve tried to summarize the difference between virtualization and abstraction in a simple way that makes sense, and the best I can do is: a true virtualization is a copy, a true abstraction is something new.

Unfortunately, even though that may be mostly right, it is misleading because very few things are absolutely either one or the other. That is why the above statement includes the qualifier “true”.

Things are Seldom Either Pure Abstractions or Virtualizations

Consider the humble x86 virtual machine. Given what has been presented thus far in this article, is this a virtualization, or an abstraction?

While called a virtual machine, a VM is also an abstraction of sorts, since it has several improved characteristics versus a physical server. Consider that a VMware virtual machine also offers:

  • VM-level Snapshot and restore
  • High availability – Running the same VM on multiple machines with no changes to VM’s software
  • Live migration

This is all really freaking confusing, since virtual machines from a company called VirtualMachineWare are not exactly 100% virtualization. Which, coming full circle, proves it may actually not be a contradiction to describe a VM as an abstraction (an act I must admit I’ve ridiculed in the past).

A relevant and enlightening subject to discuss at this juncture is VMware vVols (Virtual Volumes), and in particular Storage Policy-Based Management (SPBM). I’m big on policy management and QoS. These are real service abstractions, and abstraction is really hard to do right. It is also a bit of a tough sell, as service guarantees tend to (rightfully so) negatively impact efficiencies.

Chuck Hollis (of VMware) sees the benefits of policies, and I couldn’t agree more:

Policy responses can’t be intrinsic to specific vendor devices or subsystems, accessed only using proprietary mechanisms. Consistency is essential. Without consistency, automatic workflows and policy pushes quickly become manual (or perhaps semi-automated), with productivity being inherently lost.

In the image below, consider all the traditional complexity involved in requesting that a storage service level be met, from the perspective of the application (owner) in a virtual environment. (See the VMware blog post “Storage Directions for the Software-Defined Datacenter”.)

But with SPBM, all of the complexity of understanding media types can be removed from the application level. It offers significantly more implementation independence. The value was clear immediately, and it is no wonder why NetApp jumped at the opportunity to fully support vVols before any other storage vendor.
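To illustrate the shape of the idea (a hypothetical sketch only: the names and matching rules are mine, not the actual SPBM/vVols API), policy-based placement lets the application state a service level while a broker matches it against advertised capabilities, with media types never surfacing:

```python
# Hypothetical policy-based placement (NOT the real SPBM/vVols API):
# datastores advertise capabilities; a broker matches a requested
# policy against them. The media type is never exposed to the client.

datastores = [
    {"name": "ds-gold",   "max_latency_ms": 1,  "replicated": True},
    {"name": "ds-bronze", "max_latency_ms": 20, "replicated": False},
]

def place(policy):
    # First datastore whose capabilities satisfy the policy wins
    for ds in datastores:
        if (ds["max_latency_ms"] <= policy["max_latency_ms"]
                and ds["replicated"] >= policy["replicated"]):
            return ds["name"]
    raise LookupError("no datastore satisfies the policy")

# The VM owner talks service levels, not disks or media types:
print(place({"max_latency_ms": 5, "replicated": True}))   # ds-gold
```

The application owner states “what” (latency, replication); the “how” (SSD vs HDD, RAID layout) stays behind the abstraction, which is the implementation independence the quote above calls for.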

Cloud is about ABSTRACTION

We all know the cynicism regarding “the cloud”. You have no doubt seen this sticker :

But the Cloud is more than just virtualization… and your stuff on someone else’s computer. See this article from 2009:

Cloud computing is all about abstraction

The key to understanding cloud computing is to not focus on any one definition, but to look at the common underlying attributes and characteristics of the technologies or concepts described within the definitions.

To reconcile the various perspectives on cloud computing, one can think of cloud computing as a scale that measures the degree of architectural abstraction offered by a solution: as the level of abstraction increases, the less is known about the underlying implementation, or the more “cloudy” the architecture appears to be.

The real cloud disruption is new abstractions, and cleaner interfaces that speak the language of IT consumers in the context of their immediate business challenges. Don’t take my word for it – IDC has been onto this trend for several years (described as the Third Platform):

In Conclusion

Writing this article was a journey for me – I must have rewritten it no fewer than 10 times, because during the investigation and proof points I realized how wrong I myself was. You are encouraged to do the same with your own thoughts – sit down, write out those nagging thoughts, and then cross-examine them with an open mind.

There are a lot of great virtualization tools out there that are adding far more value than just copying something else. It is OK if you confuse the words “virtualization” and “abstraction”, because in many cases (in particular within the infrastructure) most elements involve aspects of both. But attention to detail in using them will make you stand out.

  • Do not allow their uses to fool you. “Abstraction” is not simply “commonality” or “generalization”. And “virtualization” is not simply a “logical copy”.
  • Be the smart person in the room and highlight when “virtualization” is adding value, and when it is simply adding layers of confusion.
  • Be cognizant of how much added value things like service-level (policy-based) control have, adding real implementation isolation and a dramatically more powerful interface for the end user.
  • Keep in mind how much the simplicity and re-usability of the interface matters
  • Cloud isn’t just virtualization – it is creating new abstractions, and easier, more intuitive interfaces that connect the power of IT more directly to the world

It is a great time to be in IT. All the best, and thanks for reading!

How Gartner’s 2015 Magic Quadrant for Solid State Storage Arrays has NetApp Wrong


NetApp was not placed high in Gartner’s 2015 Magic Quadrant for Solid-state Storage Arrays. Yours truly did some research to see whether this was warranted. The result is this article.

This article reviews the 2015 Gartner Magic Quadrant for SSAs (Solid-state Storage Arrays) and notable changes to the ratings of key vendors in this market. It takes a critical look at whether NetApp experienced improved results since 2014, and why (or why not). It concludes with a description of additional concerns the author has with the Magic Quadrant for SSAs (concerns that are not specific to NetApp).

  • Note to the reader: The author of this article is a Competitive Analyst at NetApp. Statements and opinions made do not reflect those of NetApp Corp.


What is the Gartner Magic Quadrant for Solid-state Storage Arrays?

Gartner (founded in 1979) “is an American information technology research and advisory firm providing technology related insight … Research provided by Gartner is targeted at CIOs and senior IT leaders […]. Gartner clients include large corporations, government agencies, technology companies and the investment community”.

Gartner describes the general concept of the Magic Quadrant as follows: “By applying a graphical treatment and a uniform set of evaluation criteria, a Magic Quadrant helps you quickly ascertain how well technology providers are executing their stated visions and how well they are performing against Gartner’s market view.” The top of this web page is shown below.


Observe that the Magic Quadrant is just one of several “Methodologies” used by Gartner when assessing the major vendors (aka “technology players”) within a market. Other Methodologies include …

  • Critical Capabilities
  • Hype Cycle
  • IT Market Clock
  • Market Guide
  • Vendor Rating
  • ITScore
  • Market Share
  • Market Forecast

Gartner has successfully crafted its art of analysis to abstract key concepts in each market in a way that is as market-independent as possible. Thus, Gartner has a “template” of methodologies that can be re-used for Cloud Service Providers or even (in our case) Solid State Storage Arrays. Although this article raises some concerns, make no mistake – no independent analyst firm is more evolved at quantifying the virtually un-quantifiable than Gartner.

Gartner has analyzed the IT storage array market for many years. Starting in 2011-2012, the introduction of all-flash (all-SSD) primary storage at attainable (albeit still expensive) prices created a serious challenge: it wasn’t clear how to rationalize all-flash arrays against traditional HDD arrays in a single technology market.

To resolve this, Gartner in 2014 introduced a new technology market dedicated to SSAs, which also (intentionally or not) declared that SSA was a new tier (or even silo) of storage. 2015 was the second year of Gartner’s MQ for SSAs.

General Industry Reception of Gartner’s MQ for SSAs

Most IT customers hold Gartner in high regard, and thus their view of Gartner’s MQ for SSAs inherits this respect.

The opinion of most IT storage vendors toward Gartner’s MQ for SSAs is generally favorable. No surprise, the higher rated a vendor is on the MQ for SSAs, the more favorable that vendor will be toward it. For example, see the Tweet below…

The response of the press has been fair, with the new MQ often cited in trade articles. Analysts, who are more free to say what is on their mind, have been more mixed. In a late June 2015 Tweet, Howard Marks said (in response to the Tweet below):

I agree with Howard Marks’s well-grounded points, and believe Gartner should give them serious consideration. Some quotes are shown below:

… many vendors are so driven to be included in the analysis that they design products to fit Gartner’s definitions even when they believe that there is, or might be customer demand for something else. […]

Gartner even lists vendors that make “SSD” storage arrays that didn’t qualify for the MQ such as Dell and NetApp, implying that a SSD array like a Dell Compellent is somehow less than a real all-flash array like Pure Storage’s or an EMC VNXF, even though the flash in those systems is packaged in SAA SSDs, just like the flash in a Dell Compellent SC2040.

As an IT architect, this makes no sense to me. When choosing the product (or products) I want to solve my storage performance problem, I don’t care if they have a unique product name or share the name with a hybrid, or even all-disk, system using the same architecture. All I care about is that the storage system provides the performance and features that address the problem I’m trying to solve.

The 2015 MQ for SSAs

The 2014 MQ for SSAs is shown below. At the time, NetApp’s AFF was not fully productized, and the review really only considered EF for NetApp.

During the time between Gartner’s release of the 2014 and 2015 MQ for SSAs, NetApp officially productized AFF and established a solid revenue stream for this new offering. With the addition of this fast-growing, proven and enterprise-ready offering, many expected a major improvement for NetApp in the Gartner MQ for SSAs in 2015. (Unfortunately, due to the timing of the 2015 MQ, further enhancements to AFF with ONTAP 8.3.1 are not reflected.)

So what changed in 2015? On June 23, 2015, Gartner released the new “Magic Quadrant” (MQ) Report for Solid-State Arrays (SSAs):

  • Pure : Remains Highest “Completeness of Vision”, and moved up from 3rd to 2nd in “Ability to Execute”
  • EMC : Remains Highest “Ability to Execute”, and remains second in “Completeness of Vision”
  • HP : Advanced from Challengers Quadrant to Leaders Quadrant
  • IBM : Lowered rating on “Ability to Execute” (moving from second to third), yet remains in the Leaders Quadrant

And what about NetApp?


A 16% improvement? That is absurd. It is as if AFF had hardly any impact at all.

Reviewing Gartner’s Positioning of NetApp In the 2015 MQ for SSAs

It is one thing for NetApp to not be the leader in the MQ (and any assertion as such would be subjective). But let us be real – With all due respect for Pure Storage, is Pure as a vendor REALLY TWICE (over 2x) as “complete” as NetApp in the SSA market?

Before this article dives into the MQ statements on NetApp, it is first necessary to review design centers. NOTE: This is more about establishing appropriate context for EF than AFF, but necessary as Gartner analyzes at the vendor level.

Design Centers and Product Positioning

Every product has a design center – and an architecture that results. This is a rule for all products, in all industries. However, IT has no shortage of belief that a “no tradeoffs” design is possible.

I like to illustrate the concept using vehicles, as shown below:

A vehicle has the attributes of performance, efficiency, and predictability (or robustness / durability).

The architecture (design) will determine how strong a product is with respect to each of those attributes. [ While it is possible to make something that is actually bad at everything, this article will not roam there..]

In the extreme case, shown by the three vehicles, there is a complete dedication to optimizing for one attribute at the cost of the other two.

  • The Russian BTR-80 (this one is modified to travel through water up to 5 feet deep for extended distances) is neither fuel-efficient nor fast. However, it can handle the total loss of almost any two tires and still sustain virtually the same speed in the same elements.
  • The Rocket car (we could have chosen a top-fuel dragster) can reach astronomical speeds, but it is not fuel efficient, and has limited tolerance to part failures.
  • The lightweight solar-powered car is not extremely fast or durable, but given enough time it can cross any continent without using any fuel (or minimal fuel).

When a statement is made such as “a fast product, potentially the fastest in the industry, is flawed because it has less efficiency” … there is clearly an error in logic. THIS DRIVES ME NUTS!!!! The following dialogue captures this point…


Review of Statements on NetApp in Gartner’s MQ for SSAs

I object to multiple NetApp statements in the Gartner 2015 MQ for SSAs:

Gartner MQ for SSAs Statement on NetApp Comments

The lack of data reduction capabilities limits the appeal of EF-Series in server virtualization, VDI and OLTP consolidation use cases.


The EF-Series remains the flagship product, and is focused on workloads that do not need data reduction capabilities.

Yes, there are no data reduction capabilities with EF, and this is what makes the EF special. It has an ultra-light threading model and brawny data layout that gives EF the lowest and most predictable IO latency under any load of any SSA product available. And this performance has been validated through the SPC.

EVERY WORKLOAD would love to have data reduction. The question should be: is the demand for guaranteed latency worth the price for a given application? NetApp offers choices: EF rocket performance… and AFF still-remarkable high performance PLUS wide-ranging storage abstraction and excellent efficiency. This choice should be viewed favorably in Gartner’s SSA technology market.
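The tradeoff can be framed as simple arithmetic (the prices and ratios below are made-up illustrative numbers, not vendor figures): data reduction divides the effective cost per gigabyte, and the real question is whether a given workload should pay the latency price for that saving.

```python
def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    """A 4.0 ratio means 4:1 reduction (4 logical GB per physical GB)."""
    return raw_cost_per_gb / reduction_ratio

# Made-up numbers for illustration only
no_reduction = effective_cost_per_gb(3.00, 1.0)    # latency-optimized array
with_reduction = effective_cost_per_gb(3.00, 4.0)  # efficiency-optimized array
print(no_reduction, with_reduction)   # 3.0 0.75
```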

There is significant overlap in NetApp’s solid-state array product portfolio, with three different products causing confusion among customers about sustainable innovation and long-term viability of each of these products.


FlashRay, which was announced in 2014 – but is still in limited availability as a single-controller array with several missing software functionalities – raises questions about the ability of NetApp to be competitive amid rapid innovation from competitors.

What is the confusion? EF is for density, and for when low-latency guarantees trump the value of the storage services abstractions of AFF. FlashRay is not shipping in large numbers and is typically not part of the equation at this time. Because it’s not HA, and the requirement to “have established notable market presence as demonstrated by the amount of PB” doesn’t apply to FlashRay, why does Gartner even care at this time?

If Gartner is holding FlashRay as a negative against NetApp, then it would only be fair to also hold DSSD as a negative on EMC. DSSD is in extreme limited availability and incomplete, also announced well over a year ago by EMC.

[NetApp Strength:] Both the EF-Series and FAS series are mature products that have a large installed base, offering existing customers platform continuity and management familiarity. EF is based on Engenio, which is closing in on nearly one MILLION systems deployed over 30 years. FAS has over 100,000 active controllers, and ONTAP supports more bytes than any other storage OS, period. Having the largest, most proven enterprise install base should weigh heavily on the positive – if this isn’t a “Leader”, what does that say about all of those customers?

How to Fix – So where SHOULD NetApp Be ?

There is close, and then there is not close enough.

I cannot assert exactly where NetApp should be. Once it is within 20%, the debate gets very technical, yet also very subjective. For example, is NetApp’s on-site support in the Falkland Islands an 8 vs. a 7 for vendor ZYX? So, to make this easy: if NetApp were within the range shown below, it would at least “feel” more right, and let me move on from the research.

Why? Because AFF is..

  • One of the limited systems that is unified (NAS and SAN)
  • Enterprise Ready
  • Proven, benchmarked performance with the SPC
  • Scalable in multiple directions, including supporting more raw and usable capacity in a cluster than any other Gartner recognized SSA product
  • Integration of replication methods (no external VM appliances or hardware)
  • Extensive history of application integration
  • Virtualized and Cloud-integrated
  • A platform that has fully abstracted storage management, not a file-system box, based on a language and interface that has been established for years.
  • I have about a thousand more bullets here… you get the point…

… and because there is no “confusion” in NetApp’s flash strategy as the MQ for SSAs suggests.

… and because Gartner should understand that what makes EF special and unique cannot be diminished as a product challenge, and NetApp’s portfolio – which includes EF performance and AFF all-round versatility and storage abstraction – cannot possibly be seen as less than half as “complete” as any other vendor.

Other Observations on Gartner’s MQ for SSA

As an analyst, I respect the work that Gartner does, and I have no illusion of somehow being capable of producing a superior report. Just the same, my other (non-NetApp) concerns over Gartner’s MQ for SSA are:

  1. Why are some Hybrid Arrays on it ?
  2. Gartner is not considering that many SSAs in the MQ have tradeoffs due to their features – tradeoffs that previously never existed…

Why are some Hybrid Arrays on the Gartner 2015 MQ for SSAs ?

Here is the Inclusion and Exclusion Criteria from the Gartner All-Flash MQ:

To be included in the Magic Quadrant for SSAs, a vendor must:

  • Offer a self-contained, solid-state-only system that has a dedicated model name and model number (see Note 1).
  • Have a solid-state-only system. It must be initially sold with 100% solid-state technology and cannot be reconfigured, expanded or upgraded at any point with any form of HDD within expansion trays via any vendor’s special upgrade, specific customer customization or vendor product exclusion process into a hybrid or general-purpose SSD and HDD storage array.
  • Sell its product as a stand-alone product, without the requirement to bundle it with other vendors’ storage products in order to be implemented in production.
  • ..[The bullets RE geographic availability, support, Gartner interviews, etc, are not shown]

The SSAs evaluated in this research include scale-up, scale-out and unified storage architectures. [….]

Thus.. By Gartner’s own Definition of an SSA, EMC XtremIO is a Hybrid

Ready for some controversy?


Like Nimble Storage CS Arrays, XtremIO should not be on the Gartner MQ for SSAs. This is according to Gartner’s own Inclusion rules.

It is well documented that each X-Brick contains two controllers, one DAE (Disk Array Enclosure), battery backup unit(s), and (possibly, depending on scale) InfiniBand switches. Each XtremIO controller reportedly contains:

  • 2 × 8-core 2.1GHz Intel Xeon CPUs
  • 256GB RAM
  • Customized Linux; XIOS runs in user space on top of Linux
  • 2 × 200GB SSDs: one for the boot partition, one for journal dumps
  • 2 × 900GB HDDs, which store data path IO traces and logs

Look at the evaluation criteria, and it is clear: EMC XtremIO uses HDDs, thus it is not an all-solid-state storage appliance.

Describing XtremIO as a Hybrid is NOT FUD

This is not a statement that XtremIO is therefore faster, or slower – Or any more or less capable.

Using HDDs as transaction logging devices is about more than just saving a few dollars. Those HDDs are probably faster with that streaming write IO profile, and these devices might have been a bottleneck if they were SSDs. I actually applaud EMC for using HDDs in this application if they helped, but just the same, EMC was a little “????” for not making sure Gartner knew this.

Suggestions for Gartner

Take XtremIO out of the Gartner SSA technology market, or change the inclusion rules.

Gartner is not considering that many SSAs have tradeoffs due to their features – Tradeoffs that previously never existed…

WHAT ABOUT THE OPTION TO SELECTIVELY THICK PROVISION? How can enterprise storage actually support secure multi-tenancy without the ability to thick provision and securely reserve capacity when needed?

Sorry for the caps, but this is a topic I’m passionate about. It is just another one of those configuration options that AFF has (thick or thin) that many others do not (thin only!).


Gartner continues to innovate with the MQ for SSA and keep up with the times. I’d like to thank Gartner for their efforts, and I look forward to future reports.

Unfortunately, I believe Gartner has miscalculated on where to place NetApp. While it is nobody’s business other than Gartner’s to state exactly where NetApp should be in the MQ for SSAs, NetApp is so far away from the ballpark that the report as a whole misses its full potential.

Finally, some rules appear to have been used without balance or consistency. This is partially the reason for NetApp’s positioning in the 2015 MQ, and also a challenge as a hybrid system is present in the MQ. With so much hard work already done, these are easy fixes.