Back in July, I attended the Oracle Database 12c launch event in London, and this is my rather belated write-up of some of the new features: some were introduced to me on the day, others I had already started playing with in my 12c VM running on Debian.
The major new feature in 12c is, of course, multitenancy (MT). I have worked on big consolidation projects before: in the early 2000s at Reuters I was using VMware to crunch 42U racks down into only 3U or 6U, and the idea of a “pluggable database” will be familiar to any SQL Server DBA, but Oracle have taken these two concepts and refined them. This is of extreme interest to me when wearing my architect’s hat; one of the main problems faced when consolidating is accidentally creating physical dependencies where no logical dependency exists. For example, if system A depends on system B, or both are used by the same end users, then they can be considered logically the “same thing” as far as scheduling downtime for upgrades or maintenance is concerned (and not just for the DB: also the OS, the hardware, the storage, the network and everything else that actually makes a useful system). But say there is no relationship between A and B, and they have completely different sets of end users (time zones, business lines, etc.). Now the consolidator has an interesting problem: do the benefits of consolidation actually outweigh the complexity of getting any required downtime from both communities at the same time? It is all too easy to fall into the trap of chasing short-term benefits from reduced hardware, licensing, datacentre space and all the other reasons to consolidate, but in doing so create a system that is all but unmanageable. I see MT as another tool in the toolbox for exactly this kind of problem.
One of the use cases presented for MT was fast upgrades. In this scenario, a container database (CDB) at version X hosts one or more pluggable databases (PDBs), and the upgrade to version X+1 consists of creating a new CDB at that level and then unplugging/plugging the PDBs into it. Provided both systems can see the same storage, and the patch is to the binaries only, this is a very fast operation, as only the metadata actually moves. Once again this is another tool in the toolbox; an alternative would be to physically create the new version X+1 DB and replicate into it with GoldenGate†, at the cost of more storage, but with the advantage that an upgrade script can be run in the new one and both can be tested. A better option still would be a hybrid of the two approaches: use the new copy-on-write cloning mechanism offered by the MT engine, then either replicate changes or do a one-off upgrade. The layer introduced between CDB and PDB gives 12c a lot more flexibility than 11g.

Another use case for MT is management of service levels (SLAs). It is very common for an infrastructure group to offer “the business” bronze, silver and gold levels of service, which might determine how quickly a service can be recovered in the event of catastrophic failure, how frequently it is backed up, performance levels, and so on. With MT you can run a CDB at each level and unplug/plug DBs to move them between tiers. That sounds very easy, but of course there is more to it than the DB: in practice, moving from bronze to gold would also mean moving the underlying DBFs from cheaper to faster/more resilient storage! Outside of the Oracle Database Machines, I don’t know how seamless this would be, so there may be some integration effort involved. It will probably still be faster than doing an RMAN duplicate into a new tier, but it is not quite as straightforward as the marketing blurb suggests.
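For reference, the unplug/plug itself is only a handful of statements. A sketch (the PDB name `pdb1` and the manifest path are my own placeholders; the new CDB must be able to see the same datafiles for the NOCOPY option to work):

```sql
-- In the old (version X) CDB: close the PDB and unplug it,
-- writing its metadata out to an XML manifest.
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/manifests/pdb1.xml';
DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;

-- In the new (version X+1) CDB: plug it back in, reusing the
-- datafiles in place rather than copying them.
CREATE PLUGGABLE DATABASE pdb1 USING '/u01/manifests/pdb1.xml' NOCOPY;
ALTER PLUGGABLE DATABASE pdb1 OPEN;
```

Only the manifest and the data dictionary metadata move, which is why the operation is fast when both CDBs share storage.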
Another new feature that I am excited about is Privilege Analysis. This will allow us to run an application and watch what objects it touches, then fine-tune its grants accordingly, a bit like AppArmor does for applications. These days I am less worried about deliberate, malicious attempts to access data (in the context of GRANT SELECT and the like, there are much bigger threats and much better strategies for mitigating them) than I am about creating accidental dependencies: for example, app A comes to rely on tables maintained by team B, who then decide they have a much better way of doing things and simply stop maintaining them, and A gets stale data back. This can be done the old-fashioned way with auditing and roles, but in my experience those are too coarse-grained to be manageable, and there is always the risk that a developer will change something without informing a DBA that new grants are needed, or will use tables that a role incidentally gave them along with the one they actually needed at the time. I also think more needs to be done to educate developers that DBAs are not just the gatekeepers of data; we fulfil the vital development job of keeping track of how the system is plumbed from app to app, whereas individual developers tend to see only their own app. That is not a criticism, merely an observation.
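To give a flavour, the feature is driven by the `DBMS_PRIVILEGE_CAPTURE` package: define a capture, run the application through a representative workload, then report on what was actually exercised. A sketch (the capture name `app_a_capture` is my own illustration):

```sql
-- Define a database-wide capture, then switch it on while
-- the application runs through a representative workload.
BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name        => 'app_a_capture',
    description => 'What does app A actually touch?',
    type        => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('app_a_capture');
END;
/

-- ...exercise the application...

-- Stop capturing and populate the reporting views.
BEGIN
  DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('app_a_capture');
  DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('app_a_capture');
END;
/

-- Privileges that were exercised, and those granted but never used;
-- the unused ones are candidates for revocation.
SELECT * FROM dba_used_privs   WHERE capture = 'app_a_capture';
SELECT * FROM dba_unused_privs WHERE capture = 'app_a_capture';
```

Comparing the two views is exactly the “watch what it touches and fine-tune the grants” workflow described above.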
Speaking of developers, the next feature on my list is pattern matching in SQL. Not regexps, which we have had for years, but patterns in data across many rows. This is radical new stuff; Oracle have shown their interest in this area with the integration of R and with Exalytics, but this is the first time it can be done in pure SQL, meaning fewer integration issues, less impedance mismatch (for want of a better term) between query and statistical languages, and hopefully much better performance on less hardware, since the same block buffer cache is used to fetch and process the data. The speaker presented some (obviously cherry-picked, but still impressive) code comparing the Java and the SQL required to do some analysis; the SQL was much shorter and easier to read.
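The feature in question is the new MATCH_RECOGNIZE clause. A sketch, assuming a hypothetical `ticker` table with `symbol`, `tstamp` and `price` columns: this finds V-shaped movements (a run of falling prices followed by a run of rising prices) within each symbol’s time series.

```sql
SELECT *
FROM ticker
MATCH_RECOGNIZE (
  PARTITION BY symbol          -- one pattern search per symbol
  ORDER BY tstamp              -- rows examined in time order
  MEASURES STRT.tstamp         AS start_ts,
           LAST(DOWN.tstamp)   AS bottom_ts,
           LAST(UP.tstamp)     AS end_ts
  ONE ROW PER MATCH
  AFTER MATCH SKIP TO LAST UP  -- resume searching after the rise
  PATTERN (STRT DOWN+ UP+)     -- a start row, then falls, then rises
  DEFINE
    DOWN AS DOWN.price < PREV(DOWN.price),
    UP   AS UP.price   > PREV(UP.price)
);
```

Doing the same in Java would mean pulling every row out of the database and hand-rolling the state machine, which is the comparison the speaker was drawing.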
This is a thousand words now – in parts 2, 3 and 4 I will write about redaction policies, new features for resilience with RAC, DG and RMAN and more…
† GoldenGate and Active Data Guard licenses are now bundled.