Infrastructure has always been a tough place to compete in. Unlike applications, databases or middleware, infrastructure components are fairly easy to replace with another make and model, so vendors try hard to show that their product is better than the competition's.

In the case of storage subsystems, the important metrics have always been performance related – and IOPS (I/O operations per second) in particular.

I remember a period when competitors of our high-end arrays (EMC Symmetrix, these days usually just called EMC VMAX) tried to artificially boost their benchmark numbers by limiting the data access pattern to only a few megabytes per front-end I/O port. This caused their array to handle all I/O in the small memory buffer cache of each port – none of the I/Os were ever served by central cache memory or backend disks. That way they could report IOPS numbers much higher than ours. Of course, no real-world application would ever store only a few megabytes of data, so the numbers were pure bogus – but marketing-wise it was an interesting move, to say the least.

With the introduction of the first Sun-based Exadata (the Exadata V2) in late 2009, Oracle also jumped into the IOPS race and claimed a staggering one million IOPS. Awesome! The gold standard was now 1 million IOPS, and the other players had to join the “mine’s bigger than yours” vendor contest.

I had never seen any real-world application requiring such a huge amount of I/O – maybe with the exception of supercomputing workloads, but certainly not regular, database-oriented business applications. Still, the IOPS race continued, and later Exadata models tried to lead it with ever higher numbers. An overview for full-rack standard (2-socket) models:

Exadata V1 (09-2008): N/A
Exadata V2 (09-2009): 1.0 M
Exadata X2 (09-2010): 1.5 M
Exadata X3 (09-2012): 1.5 M
Exadata X4 (11-2013): 2.6 M
Exadata X5 (01-2015): 4.1 M @ 0.25 ms

As an Oracle specialist, I frequently get Oracle performance (“AWR”) reports from customers, and I keep most of them for future reference. So a while ago (around early 2014) I wrote a small Linux script to scan all my collected AWR reports for peak IOPS and sort the output. The intention was to find out which of my customers would win the “real world” IOPS contest in Oracle database environments – and to see whether we really need all that IOPS overkill.
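The idea is simple enough that a minimal sketch fits in a few lines. The script below is not my original one; it assumes plain-text AWR reports in a single directory and that the Load Profile section reports read and write rates on lines labelled “Read IO requests” and “Write IO requests” – those labels differ between AWR versions, so adjust the patterns for your own reports:

    #!/bin/bash
    # Hypothetical sketch: rank plain-text AWR reports by combined read+write IOPS.
    # Assumes Load Profile lines like "Read IO requests:  <per second>  <per txn>";
    # older AWR/Statspack formats use different labels, so adapt the patterns.
    for f in awr_reports/*.txt; do
      awk -v file="$f" '
        /Read IO requests:|Write IO requests:/ {
          gsub(",", "", $4)     # strip thousands separators from the "per second" column
          iops += $4
        }
        END { if (iops > 0) printf "%10.0f  %s\n", iops, file }
      ' "$f"
    done | sort -rn | head -20  # top 20 reports by peak IOPS

The output is a sorted list of (IOPS, report) pairs, so the first line shows the winner.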

The result: one of my customers in Turkey, running an HP Superdome (with HP-UX and Oracle 11g, connected to EMC storage), was peaking at 53,000 disk IOPS. Back then the latest Exadata (X4) theoretically offered 49 times (!) more IOPS than the most performance-hungry database in my collection required. For perspective: EMC’s own most mission-critical database, the Oracle E-Business “global instance”, was peaking at 79,000 IOPS.
FYI, at that time EMC’s global instance was in the top 5 of the world’s largest Oracle E-Business implementations.

53,000 or even 79,000 IOPS is an impressive number – but a far cry from the millions that vendors claim you need to run your applications. By the way, I know there are rare outliers that drive much more than that (one of my recent engagements showed an Oracle database pushing over 110K), but those may need special treatment.

Now you may argue that for consolidation purposes the IOPS numbers need to be higher than a single database requires (true). Even so, I would say that a system offering 100,000 IOPS is sufficient for the vast majority of consolidation scenarios – maybe 200,000 for large projects. What matters more for performance is I/O latency – and note that Oracle was never confident enough to publish Exadata I/O latency (the real metric you want to know for OLTP performance) until they were finally able to bring it down to about 0.25 ms with the Exadata X5.
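As a back-of-the-envelope illustration (my own assumption, not a sizing rule): take the per-database peaks from a scan like the one above and apply a coincidence factor, because the peaks of different databases rarely line up:

    # peak_iops.txt: hypothetical file with one peak IOPS value per database in the
    # first column (e.g. the output of the AWR scan above). The 0.6 coincidence
    # factor is purely illustrative -- pick your own based on observed overlap.
    awk '{ sum += $1 } END { printf "Consolidated sizing: ~%.0f IOPS\n", sum * 0.6 }' peak_iops.txt

In most environments the sum of the real peaks, even with a generous factor, lands nowhere near the marketed millions.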

Which All-Flash vendor today is capable of delivering, say, 200,000 IOPS with an entry-level system?

My bet: all of them in the top-5 AFA vendor list – at least the serious products, not counting the “me-too” and “lookalike-AFA-but-not-really” ones.

Our own XtremIO, at least, is capable of driving over 200K IOPS at 0.25 ms latency or less – with the entry model – and scales out near-linearly when adding “X-bricks”. If you want the maximum IOPS numbers for XtremIO – The Beast, I suggest you look them up – I consider them “by far more than enough”.

So in summary:

Entry-level All-Flash Array: > 200K IOPS @ 1 ms or less
Huge business application: < 50K IOPS

Which leads me to the following statement:

[Quote image: competing-flash-quote]

Actually, I wonder whether IOPS was ever the most important metric at all. In one of my early posts, Oracle and Data Integrity: Data in, Garbage Out?, I stated:

“What is the basic function of a storage system? – Get the data back exactly the way you stored it.”

In my presentations for customers, one of my topics is “performance best practices”, and I usually ask my audience questions like these:

Are you running with the latest servers, the latest storage, the maximum configurations, with well-behaving applications that don’t have hotspots and distribute IO evenly across all resources?

Can you guarantee performance SLAs even when a disk fails? When a cable breaks? When you create business copies of your data? When you do backups, reporting, analytics, ETL?

When a rogue user triggers an ad hoc monster query? When your company’s customers log in ‘en masse’ because your marketing department ran a successful campaign?

If you’re a frequent follower of my blog, you know I like metaphors – so let’s use one here. Buying the box with the highest claimed IOPS numbers is like buying a thoroughbred racing car. It will win over other cars in a race (the proof of concept) when all circumstances are right: no rain on the track, a highly trained driver, high-octane fuel, slick tyres, and so on.

But would a true racing car be equally good for your daily drive to work? The engine needs replacement every 100 laps, the tyres wear out even before that, a few drops of rain will ruin your whole day – not to mention fuel efficiency or driver comfort. You can’t bring passengers; it’s one seat only. No trunk space. You need a helmet and a support team just to operate it. Oh, and by the way, the car’s published top speed (the reason for buying it in the first place) can never be achieved on a normal busy highway.

The average application landscape, in my opinion, looks more like the Paris-Dakar rally than a Formula One track: it’s hot, fine sand wreaks havoc on the smooth operation of moving parts, there are bumps and potholes where you can’t see them, you need to bring spare tyres, fuel and other resources, speeds vary from very low to very high (but never come close to racing-track speeds), and the weather is unpredictable. If you want to even finish the race, you need a vehicle that can stand the harsh desert environment and keeps going even if something breaks. Forget the racing car – you probably need something more like a Dakar truck.

So what indicators should we look for when comparing flash storage?

  • Data integrity, reliability, impact of component failures
  • Ability to survive without data loss or downtime, even if more than one drive breaks
  • Consistent performance (measured in IOPS as well as latency) – not the “break the Guinness Book of Records” show-off numbers, but the minimum performance levels your business needs, ALWAYS, even when bad things happen or when you drive large I/O loads for long periods of time (see the test sketch after this list)
  • Inline (direct) data services or post-processing?
  • Performance impact of data rebuilds (re-silvering)
  • Sensitivity for IO “hotspots”
  • Ease of use
  • Virtual provisioning efficiency
  • Data compression capabilities
  • Encryption, security
  • Cloning/snapshots efficiency, overhead, sizing limits, ease of use, application consistency groups
  • Scalability (online adding of capacity, sizing limits)
  • Data replication features (either built-in or using separate appliances from the vendor but well-integrated)
  • Product quality, customer service reputation, existing customer base (are you a guinea pig field-testing a new product, or using proven technology?)
  • API features (VMware VAAI, SNIA T10, storage management APIs)
  • 3rd party management tools (reporting, automated provisioning, …)
  • (virtual) OS and platform support, interoperability
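
On the “consistent performance” point: a minimal sketch of how you might check sustained behaviour rather than a short burst, using the open-source fio tool. Everything below (the device name, the 70/30 read/write mix, 8k blocks, the one-hour runtime) is an assumption to be adapted to your own workload – and it writes to the device, so point it at a test LUN only:

    # Hypothetical sustained-load test: run a mixed random read/write workload for an
    # hour and look at the completion-latency percentiles fio prints, not just the
    # average IOPS. WARNING: destructive to data on the target device.
    fio --name=sustained --filename=/dev/sdX \
        --ioengine=libaio --direct=1 \
        --rw=randrw --rwmixread=70 --bs=8k \
        --iodepth=32 --numjobs=4 \
        --time_based --runtime=3600 \
        --group_reporting

If the latency percentiles stay flat for the whole run – and during a deliberately triggered drive failure or rebuild – you are much closer to the “ALWAYS” requirement above than any record-breaking burst number will tell you.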

The list is not complete – you may require additional features or indicators, but you get the idea.

If you base your purchase decision purely on marketed peak IOPS numbers, you may find yourself spending too much on a product that offers too little.

This post first appeared on Dirty Cache by Bart Sjerps. Copyright © 2011 – 2016. All rights reserved. Not to be reproduced for commercial purposes without written permission.
