Every now and then I get involved in customer Proof of Concepts. A Proof of Concept (POC) is, according to Wikipedia, a demonstration of the feasibility of a certain idea, concept or theory.
POCs are often labor- and cost-intensive projects, because they require people (from both the customer and our engineering teams) and expensive systems like servers, storage arrays, switches and so on. Sometimes we test with dummy data because the test has nothing to do with the application architecture itself (such as when we want to test system management tooling or replication methods), and sometimes we test with a copy of the customer's application dataset: in some cases a full copy, in others a subset, because the full data set is simply too large to handle efficiently. Sometimes customers obfuscate the data in the data set because their production applications contain security-sensitive information that is not legally allowed to leave the building.
There can be many reasons for doing a Proof of Concept, but often the most important one is to prove to our customer that our stuff works with their existing systems, applications, architecture, etc. As EMC (including its divisions RSA, Documentum, Greenplum, Data Domain and others) has a very wide portfolio of infrastructure products and solutions, the POC can be done to show feasibility of just about anything.
In my role, I deal mostly with Oracle databases (although our own Greenplum analytics "Big Data" database platform is quickly becoming more significant). The POCs I get involved in typically have to do with how Oracle databases and applications interact with EMC's infrastructure.
In nearly all cases, customers also test systems from our competitors: sometimes pure storage vendors, sometimes providers of complete database appliances (if you know EMC a bit, you probably know who they are).
These days, strangely enough, I get the impression that in most of these tests the only purpose is to see how the application performs on our hardware (especially in comparison with systems from our competitors), and the scope of the POC is limited to just that. Although we have a pretty good story on optimizing application performance (at a decent price/performance ratio), I think testing for performance alone does not do EMC systems much justice.
Why not? Two reasons. First, the performance you get to enjoy in a POC might differ from what you see in a real-world production environment. Second, our systems are built with much more in mind than just performance, because our customers don't buy them to run toy applications. They run large, mission-critical, complex, demanding workloads that require the best in data integrity, availability, flexibility and protection features. On top of that, IT departments are always under cost pressure, and even as data sizes grow (see the expanding digital universe) they need to keep cost levels the same (or, even better, lower them a bit).
Let's focus on the first one: performance in the POC might differ from what you see in real life. Why? If your app can run at a zillion transactions per second in the lab, why not in the real environment?
Because the lab environment is different. In the lab, we typically test only one application; in many cases, just a single Oracle database, sometimes with an application server hooked to it to drive the customer's workload, sometimes even without an app server (the workload is then generated by running simple SQL or stored procedures). Customers identify a few performance-sensitive batch jobs or transactions and run them in the POC to see what mileage they get.
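To make that concrete, here is a minimal sketch of what such a synthetic driver often looks like. The table, sequence and column names (new_orders, order_seq, stock) are invented for illustration:

```sql
-- Hypothetical POC "transaction": insert an order, update stock, commit.
-- A test run typically wraps statements like this in a loop across a few
-- concurrent sessions and times the result.
-- (In SQL*Plus: SET SERVEROUTPUT ON to see the timing.)
DECLARE
  v_start TIMESTAMP := SYSTIMESTAMP;
BEGIN
  FOR i IN 1 .. 100000 LOOP
    INSERT INTO new_orders (order_id, customer_id, amount)
    VALUES (order_seq.NEXTVAL, MOD(i, 5000),
            ROUND(DBMS_RANDOM.VALUE(10, 500), 2));
    UPDATE stock SET qty = qty - 1 WHERE product_id = MOD(i, 1000);
    COMMIT;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Elapsed: ' || TO_CHAR(SYSTIMESTAMP - v_start));
END;
/
```

This produces nice, repeatable numbers precisely because nothing else is going on, which is exactly the problem.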
In the real world, the workload from that application is much more complex. Typically, it has many users connected, business processes running simultaneously against the database, and heavy batch reporting executing on top of that. The database and storage subsystem must be able to handle these mixed workloads, which is far more challenging than serving just one or a few transaction types.
Then there is consolidation: an EMC infrastructure is designed to consolidate workloads. Not just one, but many Oracle databases run on one system. And it does not end there. The same system also runs e-mail, file sharing, content management, maybe even a bunch of non-Oracle databases. It may host a VMware cloud that virtual desktops boot from. And much more. Do you think all these other environments influence that one single database that was tested during the POC? If so, then how reliable are the POC performance results when applied to the real-world environment? Ideally (although this is near impossible to do) you would test the whole datacenter workload in the POC rather than just one database.
Of course this is not realistic, but I tell my customers that they at least need to be aware of it, and that they might want to include some artificial additional workload on the system to see what the effect is on the primary application. This is easier to do than you might think: a few servers running simple I/O load generators (such as EMC's IORate or Oracle's ORION tool) can easily do the job, or Swingbench if you want to test with a real database instead of a simulator. A sketch of the database-side approach follows below.
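If you would rather generate the noise inside a database than with a block-level tool, even a crude PL/SQL loop will do. Here, bigtab, its numeric key id and a payload column are assumptions, standing in for any table far larger than the caches:

```sql
-- Hypothetical background "noise" generator: random single-row lookups
-- spread across a large table defeat the caches and keep the spindles busy.
-- Start several of these in parallel sessions next to the primary workload.
DECLARE
  v_max_id NUMBER;
  v_dummy  VARCHAR2(4000);
BEGIN
  SELECT MAX(id) INTO v_max_id FROM bigtab;
  FOR i IN 1 .. 1000000 LOOP
    BEGIN
      SELECT payload INTO v_dummy
      FROM   bigtab
      WHERE  id = TRUNC(DBMS_RANDOM.VALUE(1, v_max_id));
    EXCEPTION
      WHEN NO_DATA_FOUND THEN NULL;  -- gaps in the key range are harmless
    END;
  END LOOP;
END;
/
```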
But I am more puzzled by the fact that customers do not even test with their complete data set these days. On a few occasions I have seen that, even when the total database was more than 10 terabytes, they did not bring the whole database to the lab. They brought a subset, for some strange reason always less than 2 terabytes, sometimes even less than one. Why? I got the opportunity to ask a few of our customers, and I didn't get a direct answer. But after some digging, I found that this had been suggested by competitors who had already run the tests (as the perceived storage-only vendor, you sometimes come in last).
So, what’s happening here? I can’t help but think that our competition has issues with larger datasets. Or they try to make stuff look good to get the lucrative deal signed quickly (Make Dirty Cash Fast).
A large Intel-based server these days holds around a terabyte of memory. So you could fit the whole database in server memory and never even hit the I/O back-end (except for some writes, which land in the storage array's write cache anyway). And some storage systems have flash cache that can easily hold a one-terabyte database, so that all reads are served from flash (this includes EMC's systems, by the way; we can actually handle much more than one terabyte of flash).
Speaking of database appliances: a full-rack Oracle Exadata X2 holds just over 5 terabytes of flash. Do you think a 2-terabyte database, after a run-up to fill the flash, would run extremely fast on this machine? A million read IOPS, maybe, or more? Great POC! Now load the machine to its limit (it can store much more than a meager 2 terabytes), not with dead data, but with active data. Heck, let's say you put ten databases of ten terabytes each (after compression), totalling 100 terabytes, on the machine, each running complex workloads (dirty little secret: real business applications do exactly that).
Do you still think you will hit over a million IOPS if, on average, only one out of every 50 I/Os is served from flash and the rest from plain old-fashioned spinning rust?
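A quick back-of-the-envelope calculation shows why not. Assume (these numbers are illustrative, not measured) 0.2 ms for a flash read and 5 ms for a disk read. With a one-in-50 flash hit rate, the average read latency becomes

0.02 × 0.2 ms + 0.98 × 5 ms ≈ 4.9 ms

which is roughly 25 times the all-flash latency. With the same number of outstanding I/Os, your million read IOPS shrinks to something on the order of forty thousand.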
The problem is that testing this would require a lot of extra work, and neither customers nor vendors have time for that. So I can understand why they test with only one database. And many customer (OLTP) databases are still less than 5 terabytes (even less than one, for that matter). But be aware that testing such a small chunk of data does not tell you anything about real-world behavior, unless you are prepared to buy a huge, expensive, energy-hungry appliance for just one application (and then find you need another one for disaster recovery, another for testing, and another for development).
So much for performance. Let's focus on other (equally, or maybe even more, important) things.
Replication. Today, even the classic data warehouse has become mission-critical. Or can you afford to shut it down for five days without the CEO poking your eye out? Business transaction code creeps in, and modern business analytics has become part of core enterprise processing instead of a separate, isolated system in the corner used by a lonely analyst. So all production databases need high availability, disaster recovery, good-quality backups, data integrity and so on.
Disaster recovery requires some kind of processing somewhere to replicate data. It can run on the database server (e.g. Oracle Data Guard or other host-based replication) or be storage-based. But again, it's often considered too expensive and time-consuming to run the POC in replication mode, so everybody quickly agrees it's better to keep it out of scope.
So now we can also disable archivelog mode (no longer needed, because we don't have to ship logs), which means we can run data loads for the POC with the NOLOGGING option. Index rebuilds? No logging, no problem. Can you afford this in real production? No, because then your standby database will no longer work. So in real life you have to live with the restriction of doing all database transactions with logging enabled (even if you disable it, Data Guard enforces logging anyway). That can have a big performance impact, as the sketch below shows.
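To make the difference concrete, here is a minimal sketch; the table, staging table and index names (orders, orders_stage, orders_pk) are made up for illustration:

```sql
-- POC mode: no standby to feed, so redo generation can be bypassed.
ALTER DATABASE NOARCHIVELOG;   -- database must be mounted, not open

ALTER TABLE orders NOLOGGING;
-- Direct-path load: with NOLOGGING this generates almost no redo.
INSERT /*+ APPEND */ INTO orders SELECT * FROM orders_stage;
COMMIT;
ALTER INDEX orders_pk REBUILD NOLOGGING;

-- Production mode with a standby: force logging overrides every NOLOGGING
-- attribute, so the operations above generate full redo again.
ALTER DATABASE FORCE LOGGING;
```

Run the same load once in each mode and compare elapsed time and redo volume; that difference is what you give up in production.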
The same goes for storage systems. EMC's systems, as said, are not built for POCs (although they can certainly make a good impression). They are built to run real workloads, with all the bells and whistles, including replication. Our replication typically has less overhead and better performance than our competition's (but don't take my word for it; go test it in our labs…)
My advice: if you are going to use remote replication in production, demand that your vendor include it in the POC.
Snapshots? Cloning? Ditto. Automatic storage tiering? Same thing. Not with toy test workloads, but with large data sets and complex, heavy, mixed workloads. The real thing.
My friends in our solutions labs will probably kill me for these statements – sorry guys for setting you up with the extra work 😉 – but if you’re going to do a POC, you better do it right.
While you're at it, pull a cable or two, or try a firmware upgrade in the middle of the year-end batch run. Kill a disk drive or power off a complete storage tray, just to see what happens to your application.
It happens in the real data center, too.