I’ve been in discussions with some customers recently about the best way to store Oracle database data. Needless to say, the foundation should be EMC infrastructure of course, but apart from that, which volume manager and/or filesystem works best for performance and other features?

There are many volume managers and filesystems available, depending largely on the hardware and operating system you run the database on.

Some have a long track record, some are new kids on the block. Some are part of the operating system, others are 3rd party add-ons, for which you might need to pay licenses.

One way of storing Oracle database data is Oracle ASM.

Oracle ASM is Oracle’s volume manager, specially designed for Oracle database data. It has been available since Oracle Database 10g, and many improvements were made in 11g Release 1 and 2. Oracle uses ASM in its own production environments, and it is a core component in many of Oracle’s own offerings (such as Oracle Exadata) where maximum performance is needed.
It offers support for Oracle RAC clusters without requiring 3rd party software, such as cluster-aware volume managers or filesystems.
Although ASM is not strictly required for an Oracle RAC cluster to be supported on EMC systems, we highly recommend it because it lowers risk, cost and administration overhead, and increases performance.
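
To give a feel for what this looks like in practice, here is a minimal sketch of creating an ASM disk group and placing database files on it via Oracle-Managed Files. The disk group name, device paths and attribute values are placeholders, not recommendations; external redundancy is used because, as discussed further down, protection is left to the storage array.

    -- On the ASM instance, as SYSASM (names and paths are hypothetical):
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/oracleasm/disks/DATA1',
           '/dev/oracleasm/disks/DATA2'
      ATTRIBUTE 'au_size' = '4M',            -- the allocation unit ("chunk") size
                'compatible.asm' = '11.2';

    -- On the database instance: let Oracle-Managed Files place datafiles in the group
    ALTER SYSTEM SET db_create_file_dest = '+DATA';
    CREATE TABLESPACE app_data DATAFILE SIZE 10G;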

Oracle and other parties have developed alternatives for storage management of Oracle datafiles, such as Oracle OCFS (and OCFS2), Sun/Oracle ZFS, IBM GPFS and others. However, not all of these support Oracle clustering, and most of these filesystems (and volume managers) are complex to set up and require intensive tuning to achieve good performance. Support (from Oracle or OS vendors) can also be an issue.

Compared to standard volume managers and filesystems (either clustered or single system), ASM has a number of advantages:

  • It does not require large amounts of memory for cache. The memory not needed for filesystem caching can instead be given to Oracle (the SGA), where it is used more efficiently (note that ASM itself typically requires a few hundred megabytes for internal administration, shared across all databases)
  • ASM distributes chunks of data pseudo-randomly across all available logical disks in a disk group, thereby removing potential performance “hot-spots”
  • ASM does not perform any I/O itself, so there is no “translation layer” that maps Oracle I/O on datafiles into disk block offsets. I/O from the database is applied directly to the disk volumes without modification, which again reduces overhead and improves performance.
  • Consequently, ASM also performs no read-ahead (as filesystems do) that pulls data into a (filesystem) cache the database never uses.
  • ASM does not require intensive tuning such as getting fragment sizes right or tuning filesystem journals. When creating an ASM disk group you only need to define the allocation unit (“chunk”) size and whether or not to use fine striping. It is unlikely you will make configuration errors (causing performance issues) if a few simple ASM configuration guidelines are followed.
  • ASM does not cause fragmentation. You could argue that ASM’s balancing is a form of fragmentation, but the allocation units are large enough (typically 1MB or more) that very few disk seeks are needed to read a series of consecutive (typically 8K) blocks.
  • ASM does not break large I/Os (e.g. 128K) into multiple smaller ones (4K or 8K) like some filesystems do. One large I/O is faster than many small ones.
  • No “journal” (a.k.a. “intent log”) is required for consistency; this function is already provided by Oracle redo logs, so a journaled filesystem only adds overhead.
  • ASM can be managed from within Oracle tooling and does not require Unix administration (this can be an advantage or a disadvantage depending on the responsibilities of the various administrators in the organization).
  • Adding storage to (or removing storage from) ASM is very easy and does not require careful planning, as it does with volume managers and filesystems. After adding storage, ASM will automatically “rebalance” so that all disks are utilized equally, which again increases performance (see the sketch just after this list).
  • ASM works on all major operating systems so it is platform independent.
  • SAP now supports Oracle ASM.
  • Finally, EMC fully supports ASM, including various tools that integrate with Oracle (such as Replication Manager, and backup and reporting tools).
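
As a hedged illustration of the storage-add and rebalance point above (the device path and power level below are placeholders):

    -- Add a LUN to an existing disk group; ASM redistributes allocation units
    -- across all disks automatically. Higher REBALANCE POWER = faster, more I/O.
    ALTER DISKGROUP DATA ADD DISK '/dev/oracleasm/disks/DATA3'
      REBALANCE POWER 4;

    -- Follow the progress of the rebalance from the ASM instance:
    SELECT group_number, operation, state, power, est_minutes
      FROM v$asm_operation;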

Disadvantages? A few, maybe. The biggest ones I have identified:

  • Migration from legacy filesystems to ASM can be a problem and often requires an outage
  • It is hard (if not impossible) to view ASM contents with standard OS tools. In some cases, ASM data has been accidentally overwritten by OS admins using disk volumes that (to them) seemed to be empty. However, there are administrative ways to prevent this from happening (and the sketch after this list shows how to inspect ASM contents from SQL instead)
  • Backup cannot be done with traditional methods that simply back up OS files, so you need integrated tooling or Oracle’s native tools (RMAN)
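
Regarding the visibility point above: standard OS tools cannot look inside a disk group, but the V$ASM_* views (or the asmcmd utility) can. A minimal sketch, run against the ASM instance:

    -- Show the disk groups and their space usage:
    SELECT name, state, type, total_mb, free_mb
      FROM v$asm_diskgroup;

    -- Show the files stored inside the disk groups:
    SELECT g.name AS diskgroup, f.type, ROUND(f.bytes / 1024 / 1024) AS mb
      FROM v$asm_file f
      JOIN v$asm_diskgroup g ON f.group_number = g.group_number;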

Last but not least, I have encountered a few situations where storage I/O load balancers (multipath drivers) also mess up the I/Os coming from the higher layers. In particular, I have seen Red Hat 5.6 native multipathing breaking 64K or 128K I/Os into multiple 4K I/Os. I still don’t know whether this is a bug, an undocumented feature, or working as designed, but it does not help Oracle performance. We replaced it with our own EMC PowerPath driver and immediately boosted performance without further tuning.

Make sure you understand the I/O stack end to end. The less complexity, the easier it is to manage and the lower the risk of configuration errors. Using ASM (ideally with EMC PowerPath) removes the risk of misconfigured volume managers, filesystems and I/O path drivers.

Finally, I have also talked to customers who plan to use Oracle ASM’s mirroring capabilities (the ASM “normal redundancy” setting) as a poor man’s disaster recovery tool and build stretched clusters across datacenters. Although you can make it work, I strongly recommend against it. ASM redundancy was designed to protect against failing disks, not failing datacenters. We at EMC can offer something that works better with less risk. More on that in a future post.


17 thoughts on “Why use Oracle ASM for Oracle databases”

  • Bart,

    Nice writeup on the merits of ASM with Oracle. I worked with ASM when it first came out in beta and it has improved greatly since 10g.

    Cheers,
    Ben

  • Hey Bart,

    Interesting blog/ info – well done.

    Funny, currently in my company there is a big debate about why ASM/RAC should be used in a VM environment – VMware attached to storage.

    Still, a lot of people believe that LVM + storage is enough, and that adding ASM increases complexity and overhead and requires more skills from non-DBA people, etc.

    Have you been involved in projects where ASM/RAC is used in a VM environment?

    Thanks,
    DaniC

    1. Hi DanyC,

      If you want to use LVM + storage then you still need a file system (i.e. ext3 or something else). Although it is easy to create a file system and put your data files on it, it might turn out to be more complex later if you need to troubleshoot performance issues.

      In my personal experience I did not find it too hard to set up ASM, at least not much harder than correctly installing the rest of the Oracle stack. But I’ve heard different opinions 🙂
      My personal preference is still to use ASM, even on VMware. Or dNFS as a good alternative.

      Something that currently needs consideration is when you virtualize many databases using VMware. Setting up a separate set of ASM disk groups for each can be an administrative challenge. Sharing ASM disk groups (or Linux file systems, or VMware VMFS file systems) among multiple databases can be problematic if you later want to use storage cloning/replication (i.e. you clone an ASM disk group with data from 3 databases – now how are you going to restore one database from the snapshot?).
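
      For what it’s worth, a hedged way to see which database instances are using a given disk group (before cloning or replicating it) is the V$ASM_CLIENT view on the ASM instance:

          -- List the database instances currently using each disk group:
          SELECT g.name AS diskgroup, c.instance_name, c.db_name, c.status
            FROM v$asm_client c
            JOIN v$asm_diskgroup g ON c.group_number = g.group_number;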

      I plan to write a separate post on that if people are interested.
      Best regards,
      Bart

      1. Thanks Bart!

        Will be very much appreciated/ interested to read the new post.

        Be well,
        DaniC

      2. I have used both ASM and LVM (VxVM/LVM), and I can tell you that from a DBA perspective there is quite a bias towards ASM, because the DBAs DO NOT understand what is happening at the array level, the SAN level, the HBA level and the disk level in the OS. The problem with ASM is that it does not go beyond what the OS presents to it. As far as DBAs are concerned, “they want to see disk I/O performance” and that’s it, but how do you translate ASM disk issues to the sd drivers, to I/O throttling, queue depth, SAN latency? How does ASM determine what a false positive is on a SCSI inquiry, how does ASM handle a changed UDID or a minor/major number change, how does ASM handle target name changes, how can you recover from multiple bad disks on the fly, use thin provisioning and reclamation, how can ASM utilize SAN-less clusters, how can you work on a live DG and split off its mirror live and work on those disks at the array level? There are many scenarios where ASM falls short. Oracle and DBAs will tell you all you need is ASM; the reality is DBAs are looking for job security and Oracle wants to have its entire stack in the enterprise. I have been both a DBA and an SA for over 17 years; let’s leave Oracle to do what it does best, an RDBMS.

        1. One other issue to look at is performance. If you take VxVM from Symantec and configure it properly, per volume or as a whole, you can tailor performance at the table, index, tablespace and database level; you can’t even dream of doing so in ASM 11gR2. We have to realize volume managers have been out there for 25 years, ASM has been out there for 5. The only advantage ASM has is that it’s free, and you get what you pay for. I ask any DBA to get me a setup where ASM can perform better than VxVM or LVM. Let me know and I will duplicate it in my lab.

          1. Shad,
            My take is that ASM does not have to be “better” than others. It’s a matter of overhead. If filesystem xyz has 10% overhead (maybe due to wrong config) and filesystem pqr has 1% then how can you do much better than that? By going to filesystem abc that has 0.5% overhead you’re not going to double your performance…?

            If VXVM works for you and gives good performance, great! My experience is that in many cases filesystems cause overhead, either due to the FS design or due to wrong implementation. ASM removes a lot of those issues, but granted, might create a few different challenges… there’s no free lunch 🙂

        2. Many scenarios and interesting questions. For some, to ASM or not to ASM is a matter of personal preference, and that is fine. Many of the “limitations” you mention were probably never intended to be solved by ASM. Sometimes keeping it simple is the best way to go, letting other layers in the stack handle other things where it makes sense…
          For example, splitting a mirror from a live DG is a piece of cake with EMC consistency technology. Why make a mess of ASM by supporting that at the ASM level – it’s not a cloning tool.

          That DBAs are looking for job security… well, I can’t blame them, but I never spoke to a DBA who thought that by using ASM they would keep their job. That said, there is some “religion” involved. If you’re into Oracle, you use Oracle’s tools, and no 3rd party can ever do better than Oracle… or so they make you believe 🙂

  • Hello Bart,

    Excellent post.

    Being a storage guy, have you done any benchmarks for Oracle databases, say ASM vs. a traditional file system, with respect to performance?

    Could you please shed some light on this, or share some links to use as a guide, and explain what should be considered when doing such a benchmark, so the results can be submitted to a manager to decide between ASM and traditional file systems or LVMs?

    Regards,
    Shadab

    1. Hi Shadab,

      Partly. If you browse through some of my more recent posts you will see I did some comparisons of ASM versus ZFS. Now, ZFS is not a traditional FS. In terms of performance, however, I don’t expect much difference between, say, ext3, XFS, VxFS, etc. A filesystem is supposed to have as little overhead as possible, so if one FS has 1% overhead and another has 1.5%, benchmark results will be very close together. If vendors claim one FS is X times faster than another, take it with a (large) grain of salt. Differences are mostly around fragmentation, overhead, caching and scalability, and for such things it’s hard to do an apples-to-apples comparison. I attempted that, however, in the ZFS posts.

      I like your comment on what would be required to make an educated decision on one or the other. I’m working on a project to make such performance testing much easier and hope to announce that soon, so stay tuned for that on this blog.

      Regards
      Bart

      1. Hello Bart,

        I enjoy your blog.

        Would wait for your announcement pertaining to performance testing benchmarks.

        Regards,
        Shadab

  • Hi All

    We are debating this topic at present.

    We are wondering whether there are DB characteristics, such as size or level of growth, that would lend a DB to ASM vs. a file system. For example, we have several smaller databases and are considering putting larger DBs (over a TB) on ASM and smaller ones on a traditional FS. As stated throughout the discussion above, there does not seem to be an overriding reason to lean towards FS on EMC or ASM. We do run a little ASM now, and we are considering the value of moving to ASM as we upgrade to Oracle 12. In some cases we will be using the multitenant architecture.

    Any thoughts or suggestions?

    1. Hi Elaine,

      There are advantages and disadvantages for running many smaller databases on ASM.

      Advantages:
      – Easier to move (or clone) databases to/from larger environments if that’s also on ASM
      – The benefits mentioned in my post (no memory overhead except for the ASM instance, no problems with file system tuning etc)
      – Possibly a small performance benefit (due to doing “raw” I/O without an FS layer, and due to ASM balancing AUs across all of the disks)

      Disadvantages:
      – More admin work (setting up separate volumes for ASM disks, etc.)
      – You may run into sizing limits (i.e. an FC port that can only serve 512 LUNs)
      – A little more DBA and Unix skills required (you need to learn the ASM management tools and commands)

      To make things simple you can have a larger ASM diskgroup serving multiple databases – but note that you always clone entire ASM diskgroups so cloning a single database would be impossible. For smaller DBs I don’t consider that an issue (a “cp” or “dd” of a few gigabytes is quickly done on the server) but it’s good to note.

      I myself have a preference for ASM even for smaller databases – but having many of them on well configured file systems is perfectly fine also.

      ps. for easier ASM disk management on Linux, take a look at my “asmdisks” toolset (http://outrun.nl/wiki/ASMDisks) – you may find it useful.

      Best regards
      Bart
