I’ve been in discussions with some customers recently about the best way to store Oracle database data. Needless to say, the foundation should be EMC infrastructure of course, but apart from that, what kind of volume manager and/or filesystem works best for performance and other features?
There are many volume managers and many filesystems available, depending on the hardware and operating system you run the database on.
Some have a long track record, others are new kids on the block. Some are part of the operating system, others are 3rd-party add-ons for which you might need to pay licenses.
One way of storing Oracle database data is Oracle ASM.
Oracle ASM is Oracle’s volume manager, specially designed for Oracle database data. It has been available since Oracle Database version 10g, and many improvements were made in versions 11g Release 1 and 2. Oracle uses ASM in its own production environments, and it is a core component in many of Oracle’s own offerings (such as Oracle Exadata) when maximum performance is needed.
It supports Oracle RAC clusters without requiring 3rd-party software, such as cluster-aware volume managers or filesystems.
Although ASM is not absolutely required for an Oracle RAC cluster to be supported on EMC systems, we highly recommend using ASM because it lowers risk, cost and administration overhead, and increases performance.
Oracle and other parties have developed alternatives for storage management of Oracle datafiles, such as Oracle OCFS (and OCFS2), Sun/Oracle ZFS, IBM GPFS and others. However, not all of these support Oracle clustering, and most of these filesystems (and volume managers) are complex to set up and require intensive tuning to achieve good performance. Support (from Oracle or OS vendors) can also be an issue.
Compared to standard volume managers and filesystems (either clustered or single system), ASM has a number of advantages:
- It does not require large amounts of memory for cache. The memory not needed for filesystem caching can be configured as Oracle memory (SGA), where it is used more efficiently (note that ASM itself typically requires a few hundred megabytes for internal administration, shared across all databases).
- ASM distributes chunks of data pseudo-randomly across all available logical disks in a disk group, thereby removing potential performance “hot spots”.
- ASM does not perform any I/O itself, so there is no “translation layer” mapping Oracle datafile I/O into disk block offsets. Database I/O goes directly to the disk volumes without modification, which again reduces overhead and improves performance.
- For the same reason, ASM performs no read-ahead (as filesystems do) that pulls data into a (filesystem) cache the database never uses.
- ASM does not require intensive tuning such as setting fragment sizes correctly or tuning filesystem journals. When creating an ASM disk group, you only need to define the allocation unit (“chunk”) size and whether or not to perform fine striping (a sketch follows this list). If you follow a few simple ASM configuration guidelines, it is hard to make configuration errors that cause performance issues.
- ASM does not cause fragmentation. You could argue that ASM balancing is some sort of fragmentation, but the allocation units are large enough (typically 1 MB or more) that reading a series of consecutive (typically 8K) blocks requires very few disk seeks.
- ASM does not break large I/O’s (e.g. 128K) into multiple smaller ones (4K or 8K) as some filesystems do. One large I/O is faster than many small ones.
- No “journal” (AKA “intent log”) is required for consistency; this function is already performed by Oracle redo logs, so a journalled filesystem only adds overhead.
- ASM can be managed from within Oracle tooling and does not require Unix administration (this can be an advantage or a disadvantage, depending on how responsibilities are divided among administrators in the organization).
- Adding storage to, or removing it from, ASM is very easy and does not require careful planning (as is the case with volume managers and filesystems). After adding storage, ASM automatically “rebalances” so that all disks are utilized equally, which again increases performance (see the sketch below).
- ASM works on all major operating systems, so it is platform-independent.
- SAP now supports Oracle ASM.
- Finally, EMC fully supports ASM, including various tools that integrate with Oracle (such as Replication Manager, and backup and reporting tools).
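To give an idea of how little configuration this takes, here is a minimal sketch of creating and growing a disk group, assuming an 11g ASM instance; the disk paths and the power setting are made-up examples, not recommendations:

```sql
-- Minimal sketch (run as SYSASM on the ASM instance, Oracle 11g syntax).
-- External redundancy leaves data protection to the storage array (RAID).
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/DATA01',
       '/dev/oracleasm/disks/DATA02'
  ATTRIBUTE 'au_size' = '1M';   -- the allocation unit ("chunk") size

-- Adding storage later is a one-liner; ASM rebalances automatically.
-- POWER controls how aggressively the rebalance runs.
ALTER DISKGROUP data ADD DISK '/dev/oracleasm/disks/DATA03'
  REBALANCE POWER 5;

-- Watch the rebalance progress:
SELECT operation, state, power, est_minutes FROM v$asm_operation;
```

That is essentially all the tuning there is; compare that with picking fragment sizes, journal modes and stripe widths for a conventional volume manager and filesystem stack.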
Disadvantages? A few, maybe. The biggest ones I have identified:
- Migration from legacy filesystems to ASM can be a problem and often requires an outage.
- It is hard (if not impossible) to view ASM contents with standard OS tools, although ASM itself exposes its contents through SQL (a sketch follows this list). In some cases, ASM data can be accidentally overwritten by OS admins who are using disk volumes that (to them) seem to be empty. However, there are administrative ways to prevent this from happening.
- Backup cannot be done with traditional methods that simply back up OS files, so you need integrated tooling or Oracle’s native tools (RMAN).
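On the visibility point: while OS tools cannot look inside ASM, you can inspect its contents from SQL*Plus (or with the asmcmd utility). A quick sketch against the standard v$asm views:

```sql
-- Disk group capacity and free space:
SELECT name, state, type, total_mb, free_mb
  FROM v$asm_diskgroup;

-- Files stored in ASM (names come from the alias directory):
SELECT g.name AS diskgroup, a.name AS file_name, f.bytes, f.type
  FROM v$asm_alias a
  JOIN v$asm_file f
    ON f.group_number = a.group_number
   AND f.file_number  = a.file_number
  JOIN v$asm_diskgroup g
    ON g.group_number = a.group_number;
```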
Last but not least, I have encountered a few situations where storage I/O load balancers (multipath drivers) also mess up the I/O’s coming from the higher layers. In particular, I have seen Red Hat 5.6 native multipathing break 64K and 128K I/O’s into multiple 4K I/O’s. I still don’t know whether this is a bug, an undocumented feature, or working as designed, but it does not help Oracle performance. We replaced it with our own EMC PowerPath driver and immediately boosted performance without further tuning.
Make sure you understand the I/O stack end to end. The less complexity, the easier it is to manage and the lower the risk of configuration errors. Using ASM (ideally with EMC PowerPath) removes the risk of misconfigured volume managers, filesystems and I/O path drivers.
Finally, I have also talked to customers who plan to use Oracle ASM’s mirroring capabilities (the ASM “normal redundancy” setting) as a poor man’s disaster recovery tool and build stretched clusters across datacenters. Although you can make it work, I would strongly recommend against it. ASM redundancy was designed to protect against failing disks, not failing datacenters. We at EMC can offer something that works better with less risk. More on that in a future post.
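For context, here is roughly what such a stretched setup looks like: each datacenter becomes an ASM failure group, and normal redundancy mirrors every extent across the two. A hypothetical sketch (made-up disk paths), shown only to illustrate the configuration I advise against:

```sql
-- Hypothetical stretched disk group: normal redundancy keeps two copies
-- of each extent in different failure groups, here (ab)used as two sites.
CREATE DISKGROUP stretched NORMAL REDUNDANCY
  FAILGROUP site_a DISK '/dev/mapper/sitea_lun1', '/dev/mapper/sitea_lun2'
  FAILGROUP site_b DISK '/dev/mapper/siteb_lun1', '/dev/mapper/siteb_lun2';
```

This mirrors extents and nothing more; it offers none of the consistency and failover orchestration of a purpose-built disaster recovery solution.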