Maybe you have heard the story of the Monkey Experiment. It is about an experiment with a bunch of monkeys in a cage, a ladder, and a banana. At a certain point one of the monkeys spots the banana hanging up high, starts climbing the ladder, and the researcher sprays all the monkeys with cold water. The climbing monkey tumbles down before even reaching the banana, looks puzzled, and waits until he is dry again and his ego is back on its feet. He tries again, with the same result: all the monkeys get sprayed wet. Some of the others try it a few times until they learn: don’t climb for the banana or you will get wet and cold.
The second part of the experiment is where it gets interesting. The researcher removes one of the monkeys and replaces him with a fresh, dry monkey with an unharmed ego. After a while the newcomer spots the banana, wonders to himself why the other monkeys are so stupid not to go for it, and gives it a try. But when he reaches the ladder, the other monkeys kick his ass and make it very clear he is not supposed to do that. Once the new monkey is conditioned not to go for the banana, the researcher replaces the “old” monkeys, one by one, with new ones. Every new monkey goes for the banana until he learns not to.
Eventually the cage is full of monkeys who know that they are not allowed to climb the ladder to get the banana. None of them knows why – it’s just the way it is and always has been…
Whether the experiment ever really took place is under debate (see Banana Experiment), but the thought behind it is interesting and illustrates how human organizations work. In some cases that is a good thing. You don’t have to be able to explain everything you do (or don’t do), or why you are doing it that way. Our conscious brains are only capable of dealing with a small portion of our total workload anyway; the rest is done more or less on autopilot, without thinking about it too much.
But sometimes something changes: a new technology arises that allows us to do things in a different way. I wrote in earlier blog posts how EMC flash drives can drive database performance up and cost down. In a nutshell: if you can move 80% or more of an I/O workload to a few flash drives, you will drastically reduce response times for those 80% of the transactions. At the same time, the “legacy” high-speed disk drives (typically Fibre Channel or SAS drives) will see much lower utilization, so even the transactions still being served from classic “rotating rust” will speed up.
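To make that concrete, here is a back-of-the-envelope sketch in Python. All the latency figures are made-up assumptions for illustration only; the point is simply how moving the hot 80% of the I/O shifts the weighted average response time.

```python
# Back-of-the-envelope: average I/O response time before and after
# moving the "hot" 80% of the workload to flash.
# All latency numbers are illustrative assumptions, not measurements.

HOT_FRACTION = 0.80             # share of I/O that fits on the flash tier
DISK_LATENCY_MS = 8.0           # assumed average latency of a busy 15k rpm tier
FLASH_LATENCY_MS = 1.0          # assumed average latency of a flash drive
RELIEVED_DISK_LATENCY_MS = 5.0  # same disks, far less queuing once the hot I/O is gone

before = DISK_LATENCY_MS
after = HOT_FRACTION * FLASH_LATENCY_MS + (1 - HOT_FRACTION) * RELIEVED_DISK_LATENCY_MS

print(f"average response time before: {before:.1f} ms")
print(f"average response time after : {after:.1f} ms")  # 1.8 ms with these numbers
```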
So my big question is: why are we still using relatively small-capacity, fast-rotating, energy-hungry disk drives to stick our databases on? Because that’s the way it is and always has been?
If you have a storage system that can really drive high I/O rates and low latency out of flash technology, I argue that we should get rid of 15,000 rpm disks, especially the ones that only offer relatively low capacities (such as the 146 GB and 300 GB models).
Instead, use, for example, 10,000 rpm drives: they offer about 66% of the I/Os per second of a 15,000 rpm drive, but come in much higher capacities (e.g. 600 GB) and consume far less energy. You also need less floor space, power and cooling. As an additional bonus, the drives are cheaper per gigabyte in acquisition cost (less “Dirty Cash” to spend). Configure them in RAID-5 instead of RAID-1 and you get more usable capacity out of every raw gigabyte.
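Here is a rough sketch of what that trade-off looks like in numbers. The per-drive IOPS and RAID efficiencies below are rule-of-thumb assumptions, not vendor specifications, but they show why the capacity-driven option needs far fewer spindles.

```python
import math

# Two ways to deliver the same usable capacity. Per-drive IOPS and RAID
# efficiency are rule-of-thumb assumptions, not vendor specifications.

def drives_needed(usable_tb, drive_tb, raid_efficiency):
    """Number of drives for a given usable capacity and RAID overhead."""
    return math.ceil(usable_tb / (drive_tb * raid_efficiency))

USABLE_TB = 20.0

# Option A: 15,000 rpm, 300 GB drives, mirrored (RAID-1 -> 50% usable)
a_drives = drives_needed(USABLE_TB, 0.3, 0.5)
a_iops = a_drives * 180        # ~180 IOPS per 15k drive (assumption)

# Option B: 10,000 rpm, 600 GB drives, RAID-5 (e.g. 4+1 -> 80% usable)
b_drives = drives_needed(USABLE_TB, 0.6, 0.8)
b_iops = b_drives * 120        # ~120 IOPS per 10k drive, roughly 66% of a 15k drive

print(f"15k / 300 GB / RAID-1: {a_drives} drives, ~{a_iops} IOPS")
print(f"10k / 600 GB / RAID-5: {b_drives} drives, ~{b_iops} IOPS")
# The remaining IOPS gap is exactly what the flash tier is there to absorb.
```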
Here is a strong statement:
If you have to use high-performance disk drives (i.e. 15,000 rpm), then either you are stuck with a storage system that cannot drive performance out of flash drives (so you might want to reconsider your storage strategy) – OR – you are configuring your systems too conservatively.
Ideally you have a balanced system: some flash drives to drive performance, a set of classic, regular-performance disks (e.g. 10,000 rpm) to add the required capacity (not performance), and maybe some big, slow SATA disks to dump data that you can’t or don’t want to remove but that nobody actively needs anymore.
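For illustration, a minimal sizing sketch of that balanced layout. The workload figures and per-drive characteristics are assumptions I picked for the example; the idea is simply that each tier is sized by the resource it is actually there for.

```python
import math

# Minimal sizing sketch for the "balanced system": flash sized by hot IOPS,
# 10k disks sized by active capacity, SATA sized for cold data.
# Workload figures and drive characteristics are assumptions for illustration.

PEAK_IOPS = 20_000        # assumed peak random I/O rate
HOT_FRACTION = 0.80       # share of I/O the flash tier absorbs
ACTIVE_TB = 30.0          # actively used data
COLD_TB = 100.0           # data kept "just in case"

SSD_IOPS = 2_500                # assumed IOPS per flash drive
HDD_10K_IOPS = 120              # assumed IOPS per 10k rpm drive
HDD_10K_USABLE_TB = 0.6 * 0.8   # 600 GB drive in RAID-5 (~80% usable)
SATA_USABLE_TB = 2.0 * 0.8      # 2 TB SATA drive in RAID-5/6 (~80% usable)

flash_drives = math.ceil(PEAK_IOPS * HOT_FRACTION / SSD_IOPS)

# The 10k tier is sized by capacity, then checked against its share of the IOPS.
hdd_by_capacity = math.ceil(ACTIVE_TB / HDD_10K_USABLE_TB)
hdd_by_iops = math.ceil(PEAK_IOPS * (1 - HOT_FRACTION) / HDD_10K_IOPS)
hdd_drives = max(hdd_by_capacity, hdd_by_iops)

sata_drives = math.ceil(COLD_TB / SATA_USABLE_TB)

print(f"flash tier: {flash_drives} drives (sized for performance)")
print(f"10k tier  : {hdd_drives} drives (capacity needs {hdd_by_capacity}, IOPS needs {hdd_by_iops})")
print(f"SATA tier : {sata_drives} drives (cold data)")
```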
Of course, exceptions exist. For example, a data warehouse (such as one running EMC Greenplum) does massive high-bandwidth scans for business intelligence and intensive analytics, and there it might make sense to use ultra-fast rotating disk drives (flash would be too expensive in such large quantities). There is no silver bullet. But just ask yourself (and your colleagues) whether you really need those inefficient, energy-burning high-rpm drives. Or are you deploying them “just in case, you never know, it can’t hurt”, only to find out later that your expensive, screamingly fast disks are running at less than 10% I/O utilization…?
I started my blog earlier this year because I strongly believe we can run our business applications more efficiently. But we have to get rid of “banana thinking” if we really want to reduce the cost of running our applications.