Quick post to point my readers to this:
Explosive Disclosures Validate HoB Experience with Oracle Audits
An absolute must-read if you want to know more about licensing and audits.
If you are a frequent user of Oracle SQL*Plus, you probably also know about a tool called rlwrap. Bare SQL*Plus does not offer command history, arrow-key editing or any kind of word completion, so it feels like you’re thrown back to the late 90s using a spartan SQL interface.
Prefixing “sqlplus” with “rlwrap” drastically improves usability: you can now easily edit your commands, recall history, and optionally add a list of frequently used words for TAB autocompletion.
Alternatives are Oracle SQLcl, Oracle SQL Developer, or third-party tools like GQLPlus or Quest TOAD/SQL Navigator.
But for those who have to live with the natively provided SQL*Plus, wrapping it in rlwrap offers an excellent user experience. You can even search the sqlplus history (press CTRL-R and type part of what you’re looking for). Many more keyboard shortcuts are available, much like on the Linux Bash command line.
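For illustration, a minimal setup could look like the sketch below. The alias and the word-list path are just examples, not anything rlwrap or Oracle prescribe:

```bash
# Hypothetical word list for TAB completion: any plain-text file with
# keywords, table names or column names you use often will do.
cat > ~/.sqlplus_words <<'EOF'
SELECT INSERT UPDATE DELETE FROM WHERE ORDER GROUP BY
DBA_SEGMENTS DBA_TABLES V$SESSION V$DATABASE
EOF

# Wrap sqlplus in rlwrap:
#   -i  case-insensitive completion
#   -f  add the words from the given file to the completion list
alias sqlplus='rlwrap -i -f ~/.sqlplus_words sqlplus'
```

From then on, a plain “sqlplus user@db” gets arrow-key editing, CTRL-R history search and TAB completion for the words in the list.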
Quick post to announce that QDDA version 2.2 has been published on GitHub and in the Outrun-Extras YUM repository.
Reminder: The Quick and Dirty Dedupe Analyzer is an open source Linux tool that scans disks or files block by block to find duplicate blocks and compression ratios, so that it can report in detail the expected data reduction rate on a storage array capable of these things. It can be downloaded as a standalone executable (QDDA download), installed as an RPM package via YUM, or compiled from source (QDDA Install).
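For those who want to try it, getting started is roughly this (a sketch that assumes the Outrun-Extras repository is configured and that /dev/sdb is merely an example device you have read access to):

```bash
# Install the RPM from the YUM repository (alternatively: download the
# standalone executable or build from source, see the QDDA Install page)
sudo yum install -y qdda

# Scan a block device or file; qdda only reads from the target, never writes.
sudo qdda /dev/sdb
```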
QDDA 2.2 adds:
Note to other storage vendors: If you’d like your array to be included in the tool, drop me a note with dedupe/compression algorithm details and I’ll see what is possible.
It’s been almost a year since I blogged about qdda (the Quick &amp; Dirty Dedupe Analyzer).
qdda is a tool that lets you scan any Linux disk or file (or multiple disks) and predicts the potential thin provisioning, dedupe and compression savings if you were to move that disk/file to an all-flash array like DellEMC XtremIO or VMAX All Flash. In contrast to similar (usually vendor-provided) tools, qdda runs completely independently. It does NOT require registration or sending a binary dataset back to the mothership (which would be a security risk). Anyone can inspect the source code and run it, so there are no hidden secrets.
It’s based upon the most widely deployed database engine, SQLite, and uses MD5 hashing and LZ4 compression to produce data reduction estimates.
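To illustrate the principle (this is NOT how qdda itself is implemented; the real tool does this in C++ with SQLite and is far faster), a crude shell sketch of blockwise MD5 hashing and LZ4 compression might look like this, assuming 16 KiB blocks and the md5sum, lz4 and bc utilities:

```bash
#!/bin/bash
# Crude concept demo: estimate dedupe and compression ratios of a file or
# block device by hashing and compressing fixed-size blocks. 16 KiB blocks
# assumed; only suitable for small test files, not multi-terabyte LUNs.
dev=${1:?usage: $0 <file-or-device>}
bs=16384
bytes=$(blockdev --getsize64 "$dev" 2>/dev/null || stat -c %s "$dev")
nblocks=$(( bytes / bs ))

declare -A seen          # set of unique block hashes (memory-hungry, demo only)
csize=0                  # accumulated compressed size in bytes
tmp=$(mktemp)

for ((i=0; i<nblocks; i++)); do
  dd if="$dev" of="$tmp" bs=$bs skip=$i count=1 2>/dev/null
  seen[$(md5sum < "$tmp" | cut -d' ' -f1)]=1
  csize=$(( csize + $(lz4 -c < "$tmp" | wc -c) ))   # includes LZ4 frame overhead, so slightly pessimistic
done
rm -f "$tmp"

echo "blocks scanned : $nblocks"
echo "unique blocks  : ${#seen[@]}"
echo "dedupe ratio   : $(echo "scale=2; $nblocks/${#seen[@]}" | bc)"
echo "compress ratio : $(echo "scale=2; $nblocks*$bs/$csize" | bc)"
```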
The reason it took a while to follow up is that I spent a lot of evening hours almost completely rewriting the tool. A summary of the changes:
Read the overview and animated demo on the project homepage here: https://github.com/outrunnl/qdda
HTML version of the detailed manual page: https://github.com/outrunnl/qdda/blob/master/doc/qdda.md
As qdda is licensed under the GPL, it offers no guarantee on anything. My recommendation is to use it for learning purposes or a first what-if analysis, and if you’re interested in data reduction numbers from a vendor, ask them for a formal analysis using their own tools. That said, I did a few comparison tests and the data reduction numbers were within 1% of the results from vendor-supported tools. The manpage has a section on accuracy explaining the differences.
With Oracle’s uncertain SPARC future, the rise of very fast and capable all-flash arrays, and existing Exadata customers looking to refresh their hardware, I increasingly get questions on what platform we can offer as an alternative to Exadata or SuperCluster. A big challenge can be breaking away from the lock-in effect of HCC (Hybrid Columnar Compression, although I’d like to call it Hotel California Compression), as it seems hard to estimate how much capacity is needed on other storage or when migrating to other platforms. Note that any storage could theoretically be used with HCC, but Oracle deliberately disabled it on anything other than Exadata, the ZFS Storage Appliance or Pillar (huh?) storage.
As far as I know, there is no direct query to figure out how big an HCC-compressed table would get after decompression. HCC tables can get very large and the compression ratio can be fairly high, which makes sizing a new environment a challenge.
So in order to provide a reasonable guesstimate, I created a typical scenario on my VMware homelab to estimate generic storage requirements for HCC-compressed environments.
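As a rough illustration of one generic way to measure this on a test system (the object names are hypothetical, and this is not necessarily the exact scenario I used): create an uncompressed copy of an HCC table and compare the segment sizes.

```bash
# Hedged sketch: MY_HCC_TABLE is a placeholder; needs enough free space
# for the uncompressed copy.
sqlplus -s / as sysdba <<'EOF'
-- Verify the source table is actually HCC compressed
SELECT table_name, compression, compress_for
FROM   dba_tables
WHERE  table_name = 'MY_HCC_TABLE';

-- Create an uncompressed copy
CREATE TABLE my_hcc_table_nc NOCOMPRESS
AS SELECT * FROM my_hcc_table;

-- Compare allocated space of both segments
SELECT segment_name, ROUND(bytes/1024/1024) AS mb
FROM   dba_segments
WHERE  segment_name IN ('MY_HCC_TABLE', 'MY_HCC_TABLE_NC');
EXIT;
EOF
```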