Pinpointing I/O bottlenecks on Linux

Does this story sound familiar?

The end users of a database application start complaining about poor system response and long-running batch jobs. The DBA team starts investigating the problem. The DBAs look at their database tools, such as Enterprise Manager and Automatic Workload Repository (AWR) reports, and find that storage I/O response times are too high (an average of 50 milliseconds or more, for example), so they involve the storage team to resolve it.

The storage guys, in turn, look at their own tooling; in the case of EMC this could be Navisphere Analyzer, Symmetrix Performance Analyzer (SPA) or similar tools. They find completely normal response times: less than 10 milliseconds on average.

The users keep complaining, but the storage and database administrators point at each other to resolve the problem, and no real progress is made.

Two Way Communications
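
When the database reports an average of 50 milliseconds and the storage array reports less than 10, the difference has to be added somewhere in between: in the host's block layer and queues, the HBAs, or the path to the array. One way to see what the Linux host itself observes is to sample /proc/diskstats, which is roughly what iostat -x reports as r_await and w_await. The sketch below is a minimal example; the 5-second sample interval is an arbitrary choice and the field layout assumes a reasonably recent kernel.

```python
#!/usr/bin/env python3
"""Rough host-side I/O latency check, similar to iostat's await columns.

A minimal sketch: it samples /proc/diskstats twice and prints the average
read/write response time (service + queue time) per device over the interval.
"""
import time

def snapshot():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            dev = fields[2]
            # fields[3:] = reads completed, reads merged, sectors read, ms reading,
            #              writes completed, writes merged, sectors written, ms writing, ...
            reads, ms_reading = int(fields[3]), int(fields[6])
            writes, ms_writing = int(fields[7]), int(fields[10])
            stats[dev] = (reads, ms_reading, writes, ms_writing)
    return stats

before = snapshot()
time.sleep(5)                                   # arbitrary sampling interval
after = snapshot()

for dev, (r1, rt1, w1, wt1) in after.items():
    r0, rt0, w0, wt0 = before.get(dev, (0, 0, 0, 0))
    dr, dw = r1 - r0, w1 - w0
    r_await = (rt1 - rt0) / dr if dr else 0.0   # avg ms per read, incl. queueing
    w_await = (wt1 - wt0) / dw if dw else 0.0   # avg ms per write, incl. queueing
    if dr or dw:
        print(f"{dev:10s} r_await={r_await:6.1f} ms  w_await={w_await:6.1f} ms")
```

If these host-level numbers are close to what the storage tools report, the extra latency is probably being added above the block layer (in the filesystem or database layers); if they look more like the 50 milliseconds the DBAs see, then the host, its queue depths and the path to the array deserve a closer look.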


How to set disk alignment in Linux

As you might know, if the disk partitions containing Oracle datafiles are not aligned with the underlying storage system, some I/Os suffer extra overhead because they are effectively split into two I/Os.
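
Before fixing anything, it helps to know whether your partitions are misaligned in the first place. The following is a minimal sketch that reads the partition start sectors from sysfs and tests them against a 64 KiB boundary; the boundary value, the sd* device pattern and the 512-byte sector size are assumptions, so substitute whatever your storage platform actually requires.

```python
#!/usr/bin/env python3
"""Check whether partitions start on an aligned boundary.

A minimal sketch, assuming Linux sysfs (/sys/block), 512-byte sectors and
plain sd* devices. The 64 KiB boundary is only an example value; use the
stripe/element size your storage platform actually requires.
"""
import glob
import os

BOUNDARY = 64 * 1024   # example alignment boundary in bytes (an assumption)
SECTOR = 512           # sysfs reports partition start offsets in 512-byte sectors

for disk in glob.glob("/sys/block/sd*"):
    name = os.path.basename(disk)
    for part in sorted(glob.glob(f"{disk}/{name}*")):
        start_file = os.path.join(part, "start")
        if not os.path.exists(start_file):
            continue
        with open(start_file) as f:
            start_sector = int(f.read().strip())
        offset = start_sector * SECTOR
        verdict = "aligned" if offset % BOUNDARY == 0 else "NOT aligned"
        print(f"{os.path.basename(part)}: starts at sector {start_sector} "
              f"({offset} bytes) -> {verdict}")
```

You can cross-check the reported start sectors with fdisk -lu, which lists them in the same 512-byte sector units.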

If you want more info, google for “EMC disk alignment” and you’ll find plenty of information explaining the issue.

Update 28-03-2013: I wrote a follow-up to this post describing the same thing for Linux (Red Hat / CentOS / OEL) version 6. You might want to jump straight to the new post, as this one is getting a bit outdated 😉

Information Lifecycle Management and Oracle databases – part 3

Archiving and purging old data

In the end, if you want to seriously reduce the effective size of a database (after applying all the innovations at the infrastructure level), the only option left is to move data out of the database onto something else. This goes a bit against Oracle's preferred approach, as they propose to keep as much of the application data in the database for as long as possible (I wonder why…)

We could separate all archiving methods into two categories:

  • Methods that don't change the RDBMS representation and just move tables or records to a different location, in the same or a different database;
  • Methods that convert database records into something else and remove them from the database layer completely (a minimal sketch of this second approach follows below).
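
As an illustration of the second category, here is a minimal sketch that dumps old rows to a flat file and then purges them from the database. It assumes the python-oracledb driver, a hypothetical ORDERS table with an ORDER_DATE column, placeholder credentials and an arbitrary seven-year cutoff; adjust all of these to your own situation.

```python
#!/usr/bin/env python3
"""Archive-and-purge sketch for the second category: convert old records into
something outside the database (here a CSV file) and remove them.

Assumptions: the python-oracledb driver, a hypothetical ORDERS table with an
ORDER_DATE column, placeholder credentials and a seven-year cutoff.
"""
import csv
import datetime
import oracledb

CUTOFF = datetime.datetime.now() - datetime.timedelta(days=7 * 365)

# Placeholder connection details -- replace with your own.
conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb")
cur = conn.cursor()

# 1. Write the old rows to a flat file outside the database.
cur.execute("SELECT * FROM orders WHERE order_date < :cutoff", cutoff=CUTOFF)
columns = [col[0] for col in cur.description]
with open("orders_archive.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(cur)          # the cursor is iterable, row by row

# 2. Only after the file is safely written, purge the rows and commit.
cur.execute("DELETE FROM orders WHERE order_date < :cutoff", cutoff=CUTOFF)
conn.commit()
print(f"archived and purged {cur.rowcount} rows older than {CUTOFF:%Y-%m-%d}")
```

A real implementation would add error handling, batch the delete in chunks, and reclaim the freed space afterwards (for example by shrinking or rebuilding the segment), because deleting rows by itself does not make the datafiles any smaller.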


Information Lifecycle Management and Oracle databases – part 2

Database compression

Another technique that Oracle has improved as of version 11g is compression. In versions up to 10g you could only compress an entire table, and after that, random-access performance on the compressed table was poor. It worked well for data warehouses, where it reduces the required I/O bandwidth (compressed data can be read from disk more quickly than uncompressed data), but only in specific cases.

In 11g Oracle has introduced “advanced” compression. I will not go into details, but it allows compression on a much more granular, record-by-record basis, so that OLTP applications can benefit as well. Oracle claims this reduces the total database size (a no-brainer 🙂 ) and therefore also the backup size, thereby ignoring the effects of the tape compression that most customers use, so your mileage may vary. Data can effectively be compressed only once, so with tape compression enabled, the backup of a normal database will probably not differ much in size from the backup of a compressed one.
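
The tape-compression point is easy to demonstrate outside Oracle. The sketch below uses zlib purely as a stand-in for both database and tape compression, on made-up repetitive sample data: whichever layer compresses first, the copy that ends up on tape is roughly the same size, and compressing already-compressed data gains next to nothing.

```python
#!/usr/bin/env python3
"""Illustrate the "data can only be compressed once" point, using zlib as a
stand-in for both database and tape compression (the repetitive sample data
is made up for the example)."""
import zlib

# Roughly 10 MB of made-up, repetitive data standing in for database blocks.
record = b"customer 00042 ordered 3 widgets on 2011-06-01; status=SHIPPED | "
data = record * 160_000

db_compressed = zlib.compress(data)           # compression inside the database
tape_plain    = zlib.compress(data)           # tape compression of an uncompressed backup
tape_db       = zlib.compress(db_compressed)  # tape compression of an already-compressed backup

print(f"uncompressed database image    : {len(data):>11,} bytes")
print(f"plain DB after tape compression: {len(tape_plain):>11,} bytes")
print(f"compressed DB ({len(db_compressed):,} bytes) after tape compression: {len(tape_db):,} bytes")
```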
