As of today I am on gardening leave, and I intend to spend the time productively (after taking a long-planned and well-deserved holiday next week!). In no particular order:

  • Update this blog more regularly; I have some posts that I have been meaning to write but just never have the time. I have been updating my other blog tho’.
  • Some work on my sorely neglected Open Source stuff – finally LOBs in OCI*ML?? :-)
  • Get properly up to speed in a couple of new technologies, mainly Oracle 12c and C++11.
  • Repair the stack of broken BBC Micros in the living room!

I have been offered redundancy before, and take a pretty philosophical view of it. I work for organizations, or groups within larger organizations, that push hard and take risks. When it works, it works very well – this year I got a decent bonus, despite the company struggling (a matter of public record, not betraying any secrets!). And when it doesn’t, then it’s time to go.

Posted in BBC, Business, C++, Ocaml, OCIML, Oracle, Random thoughts | Leave a comment

HOWTO: Install Oracle 12c on Debian Wheezy

I can confirm that the previous post basically works for installing Oracle 12c on 64-bit Debian 7.1/Wheezy too, with the following modifications. Use lib64 for the library symlinks:

# mkdir /usr/lib64
# ln -s /usr/lib/x86_64-linux-gnu/libpthread_nonshared.a /usr/lib64/
# ln -s /usr/lib/x86_64-linux-gnu/libc_nonshared.a /usr/lib64/
# ln -s /lib/x86_64-linux-gnu/ /lib
# ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib64/

And the following changes to $ORACLE_HOME/rdbms/lib/


The bug requiring the unsetting of JAVA_JIT_ENABLED seems to have been fixed. A 1G VM/512M SGA appears to be too small – 2G/1G is just about sufficient.


Posted in Linux, Oracle | Tagged | 6 Comments

HOWTO: Install Oracle 11gR2 on Debian Wheezy

Oracle 11gR2 on Debian still isn’t an officially supported configuration (10g XE was for a while), but it is perfectly do-able with a little cajoling. Here I am starting with a fresh installation of Debian 7.1 in a VirtualBox VM, with 1G memory and a 40G dynamically allocated virtual disk. I installed just the base desktop + system utilities options from the DVD image. Once this is done I take a snapshot of it, which I can quickly clone whenever I need a new VM.

The first thing I want to do is get it set up the way I like it, including patching to the latest Guest Additions. In a root terminal (Applications → Accessories → Root Terminal):
Add the following lines to the file /etc/sysctl.conf:


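The exact sysctl lines from the original post were lost; the stock values from Oracle's 11gR2 prerequisite list look like this (illustrative only – shmmax in particular should be sized to your VM's memory):

```shell
# The post's exact settings were lost; these are the commonly documented
# 11gR2 prerequisite values (illustrative assumptions). Written to a
# scratch file here; on the real system append the lines to
# /etc/sysctl.conf and run `sysctl -p` to apply them.
cat <<'EOF' >> /tmp/sysctl-oracle-demo.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1073741824
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
EOF
```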
And execute the following commands:

# apt-get remove virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11 gnome-shell
# apt-get install linux-headers-3.2.0-4-all
# apt-get autoremove
# eject
# cd /media/cdrom0
# sh
# reboot

After ejecting the distro ISO, insert the Guest Additions ISO. This may take a while (especially the autoremove).

After this the system will boot into the console, old-skool style. After logging in type startx to get the desktop (but no need if you just want to use the VM to run the DB server). It will be the less resource-guzzling Classic desktop only. I also disable screen locking and enable automatic login, since those are actually taken care of by the host machine (an MBP in this case):

  • Applications → System Tools → Preferences → System Settings → User Accounts → Automatic Login
  • Applications → System Tools → Preferences → System Settings → Brightness and Lock

Now I am ready to begin the actual Oracle installation, starting with the prereqs. Many of these will fail the Oracle installer precheck, e.g. it wants Make 3.8 whereas Wheezy comes with 3.81!

# apt-get install libaio-dev sysstat unixodbc-dev libelf-dev unzip g++ libstdc++6-4.7-dev libstdc++5

Then create the necessary users and groups, and open up the display so this new user can see it:

# groupadd dba
# useradd -d /home/oracle -m -c "Oracle Database" -g dba -s `which bash` oracle
# mkdir /opt/oracle
# mkdir /opt/oraInventory
# mkdir /oradata
# chown oracle:dba /opt/oracle /opt/oraInventory /oradata
# xhost +

Next do some fakery to make it look like Red Hat/OEL (all these appear to be hard-coded paths in the Oracle tools):

# ln -s /usr/bin/basename /bin/basename
# ln -s /usr/bin/awk /bin/awk
# ln -s /usr/lib/i386-linux-gnu/libpthread_nonshared.a /usr/lib
# ln -s /usr/lib/i386-linux-gnu/libc_nonshared.a /usr/lib
# ln -s /lib/i386-linux-gnu/ /lib
# ln -s /usr/lib/i386-linux-gnu/ /usr/lib

Go and fetch the software from OTN (or, if you have access to Metalink, just grab patch 10404530, which will take you straight to it). Unzip these into /home/oracle, where they will create a database/ folder. In a Root Terminal, su - oracle and:

$ for f in *.zip ; do unzip $f; done
$ cd database/
$ export DISPLAY=:0.0
$ export ORACLE_BASE=/opt/oracle
$ export ORACLE_HOME=$ORACLE_BASE/product/
$ export PATH=$PATH:$ORACLE_HOME/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/i386-linux-gnu:/bin/lib:/lib/i386-linux-gnu/:/usr/lib
$ ./runInstaller 
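These exports only live in the current shell; to make them stick across logins they can go in the oracle user's ~/.bash_profile. A sketch, demonstrated against a scratch file rather than the real profile (the trailing product/ path is exactly as given above):

```shell
# Persist the installer environment for future logins (sketch).
# On the real system, append these lines to ~oracle/.bash_profile instead.
cat <<'EOF' > /tmp/oracle-profile-demo.sh
export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=$ORACLE_BASE/product/
export PATH=$PATH:$ORACLE_HOME/bin
EOF
. /tmp/oracle-profile-demo.sh
echo "$ORACLE_HOME"   # -> /opt/oracle/product/
```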

Proceed through the installer, selecting the appropriate options for the installation you want (or just accept the defaults if you are not sure). I am using

  • ORACLE_BASE=/opt/oracle
  • ORACLE_HOME=/opt/oracle/product/
  • Datafiles in /oradata, inventory in /opt/oraInventory
  • Install software only, single instance, Enterprise Edition.

Skip all the prereq checks – the prerequisites are there even tho’ the GUI installer doesn’t recognize them as such; the underlying scripts and the linker will. The installation will fail when relinking “agent nmhs” due to a change in the behavior of the linker. The clue is in the log message:

/usr/bin/ld: note: 'B_DestroyKeyObject' is defined in DSO 
/opt/oracle/product/ so try adding it to the linker command line

We can fix that in the Makefile $ORACLE_HOME/sysman/lib/ by replacing:




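The post's original before/after snippet was lost. The widely reported fix for this relink failure – assuming the Makefile in question is ins_emagent.mk and that the missing B_DestroyKeyObject symbol lives in libnnz11, as the DSO hint suggests – is to append -lnnz11 to the link macro. Demonstrated here on a stand-in copy rather than a live ORACLE_HOME:

```shell
# Stand-in for $ORACLE_HOME/sysman/lib/ins_emagent.mk (assumed filename):
# the real file contains a line invoking the $(MK_EMAGENT_NMECTL) macro.
printf '%s\n' '$(MK_EMAGENT_NMECTL)' > /tmp/ins_emagent.mk

# Append -lnnz11 so the linker is given the library explicitly.
sed -i 's/\$(MK_EMAGENT_NMECTL)/$(MK_EMAGENT_NMECTL) -lnnz11/' /tmp/ins_emagent.mk
cat /tmp/ins_emagent.mk   # -> $(MK_EMAGENT_NMECTL) -lnnz11
```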
And clicking retry. Now I can create a database with DBCA. There is one customization I make to the startup parameters to avoid a crash in the Oracle JVM (still known internally as Aurora!) while creating the data dictionary: set java_jit_enabled from TRUE to FALSE in the Advanced Parameters:
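For reference, the same parameter can be changed outside DBCA once the instance exists; a hedged sketch in SQL*Plus (assumes a SYSDBA connection and an spfile – java_jit_enabled is the documented 11gR2 parameter behind that Advanced Parameters toggle):

```sql
-- Disable the Java JIT to avoid the Aurora crash noted above;
-- SCOPE=SPFILE takes effect at the next instance restart.
ALTER SYSTEM SET java_jit_enabled = FALSE SCOPE = SPFILE;
```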


Congratulations, you now have a working Oracle installation on a halfway sane Linux distro! Of course this is all moot since 12c is out now; the same steps should apply, and I will update when I have had a chance to try it.

Finally I install some software that I like, again in a Root Terminal (you can skip this step if you don’t plan to do any OCaml development!):

# apt-get install hardening-wrapper hardening-includes git ocaml-batteries-included ocaml-mode rlwrap gnome-screenshot strace valgrind

Posted in Linux, Oracle | Tagged | 12 Comments

MongoDB Days

Last week I attended MongoDB Day London. Now MongoDB itself is a technology that I’m fairly interested in; I can see where it would have its uses. But the problem is the people! They all talk like this:

  1. Some problem that just doesn’t really exist (or hasn’t existed for a very long time) with relational databases
  2. MongoDB
  3. Profit!

An example would be the first speaker, who didn’t like normalized data because it had “bad locality”. Now ignoring for a second the difference between a logical and a physical data model, and the existence of the normal forms, if you ever did find that the bottleneck on your joins specifically was seek time, you could pre-compute the join and refresh it whenever anything changed – in Oracle using a materialized view (1996!), a continuous query, the result cache… And that’s on top of the block buffer cache and the query optimizer already being very smart. Another of the same speaker’s examples overlooked the existence of nested tables, saying what you could do with them is impossible in “SQL databases”. It’s claimed that MongoDB is more flexible because it doesn’t constrain you to tables. Well that’s backwards… We don’t work the way we do because tables are a limitation of the technology, we use the relational model because it has sound mathematical underpinnings, and the technology reflects that†. Where’s the rigour in MongoDB’s model?
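To make the pre-computed-join point concrete, a sketch in Oracle SQL (the table and column names are invented for illustration):

```sql
-- Pre-compute the join once; reads then scan a single segment with good
-- locality instead of seeking between two tables. (For REFRESH FAST ON
-- COMMIT you would also need materialized view logs on both base tables;
-- ON DEMAND keeps the sketch simple.)
CREATE MATERIALIZED VIEW order_lines_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
  SELECT o.order_id, o.customer_id, l.product_id, l.qty
  FROM   orders o
  JOIN   order_lines l ON l.order_id = o.order_id;
```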

Another speaker claimed that it was far better for each application to have its own database, and expose all its data through web services. Sounds good, except you now need another technology – a directory to find all these things, since they aren’t just table or stored-procedure names in the one place – and you need to manage access control and auditing. And if you need to touch data across several of them then you’ll need something to coordinate that… We could call it a transaction processing facility, since that’s what IBM called it in 1960. He handwaved over both of those. There were many similar examples.

Another recurring theme was of an organization refreshing its hardware and modifying its architecture, one component of which was introducing MongoDB, yet all the performance gains were attributed to it. For example splitting OLTP and OLAP from one database into two, and introducing a delay of a few minutes between data coming in and being available for reporting. Well that will give you a massive performance boost in any database! If you can tolerate the delay, of course. But if you could, why build it that way in the first place (or having built it, complain that it’s slower than you’d like), and if you can’t, then you can’t do this. In the roadmap they are promising point-in-time recovery in a future release. Oracle had that in 1988, when I had just left primary school.

So anyway, since it’s free‡, there’s no reason not to evaluate MongoDB, and see if it suits your use cases. But just remember that these kids think they’re solving problems that IBM (et al) solved in some cases quite literally before they were born, and the features are probably already there in your existing database/technology stack (I have used Oracle for my rebuttals just because I am most familiar with it, but I expect the same is true for SQL Server and DB2 as well). Talk to your friendly local DBA…

† I personally predict that in a few years there will be a lot of work re-normalizing the data in MongoDB and its rivals so it can actually be useful. That’s reason enough to become an expert in it. In about 2001, the company I joined then had just completed a massive engineering effort to get off Versant and (back) into Oracle… All this object-database stuff gives me massive deja vu for the 1990s when they were all the rage.

‡ In the same way that Oracle is also free for evaluation purposes. No-one would deny that Oracle is expensive in production! But there is no such thing as cheap or expensive in business, there is only worth the money or not.

Update: Someone has posted this on Hacker News and Reddit.

Posted in MongoDB, Oracle, Random thoughts | 26 Comments

The Grand Challenges

What would you say are the greatest challenges facing modern computing? Protein folding to discover new pharmaceuticals? Sifting the vast quantities of sensor readings from the Large Hadron Collider? Rendering movies so lifelike that human actors are obsolete?

Well if you are Apple, I’d say your greatest challenges were scrolling a document, responding to a mouse click and keeping up with a user typing. Hell, you can’t even manage it for this blog post…

Posted in Random thoughts | Leave a comment

Those Who Forget History Are Doomed To Repeat It…

… first as tragedy, then as farce.

There is a strange attitude among many in this industry towards what are contemptuously referred to as legacy systems. No-one would ever articulate this of course, because when you say it out loud it sounds ridiculous, but the implicit belief is: in the 70s, 80s, 90s they had smartphones and AJAX† and Ruby-on-Rails and Chrome (and a long shopping list of “modern” technologies), but because they were stupid they chose to use dial-up modems, and dumb terminals, and to program in FORTRAN. And because they were stupid, we have nothing to learn from them.

On a similar note, it is a common refrain to hear that newspapers are obsolete. And as a business model, that may well be true – but since the 1980s, newspaper publishing has defined mission-critical computing. Come hell or high water, the paper has to be on the newsstands in the morning. The desktops, the networks, the servers, the presses, the logistics, and all the software and IT have to work. Nowadays, the most anyone can hope for is for most software to mostly work, most of the time. If a browser or an operating system crashes or freezes, you take it in your stride and restart it. If a website is down, you might try again later, or you might just not bother. Even major bits of infrastructure are unreliable. The skills required to do serious computing are simply decaying; while individuals such as myself retain and practice the old ways, I don’t think that when the last newspaper switches off its presses, that talent will suddenly make its website five-9s reliable… Even the best civil engineer can’t build a castle on a fetid swamp. We’ll have to nuke it from orbit – only way to be sure – and start again.

† I recently used AJAX to build the interface for one of my projects. Even with bolt-ons like Comet, it’s pretty crude and feeble compared to Tcl/Tk… From the last century. I’m sure by the end of this decade it will be as-good‡, only to be swept away by some shiny new thing, and we’ll be back to square one in terms of getting useful work done. Since the 80s, every decade in computing has been a shallow copy of the previous decade. Eventually it will go full circle, like pocketwatches being replaced by wristwatches, to be replaced by clocks on mobile phones in your pocket…

‡ And yet, actually no more powerful or easy to use than Curses, for either developers or end users.

Posted in Random thoughts | Leave a comment

Quick histograms

Having come back to actively working on OCI*ML recently, it’s time I cracked on with some more features (I have been promising LOBs for a long time, sorry to anyone who’s still waiting). Just to get warmed up, inspired by spark I have added a quick histogram function, similar to quick query for interactive use. This requires a query of the form of a label and a number, for example a simple view:

SQL> create view v1 as
select object_type, count(1) as howmany from user_objects group by object_type;

The histogram automatically scales to the width of the current window.

Also, I have been reading Jordan Mechner’s book The Making Of Prince Of Persia†. It’s both fascinating and inspiring. Just before that, I read The Future Was Here, the story of the Commodore Amiga‡. The book is made even more poignant by my Mac inexplicably showing the beach ball as I scroll through a simple web page, or the mighty RHEL servers at work being unable to keep up with my typing. The future is still back in the 80s.

† The original code is also on Github.

‡ I have an A500+ on my desk right now, the best of them IMHO. I might write a post comparing it with the Atari STE, and the BBC with the C64, in the cold light of day as an experienced adult. I have a fine collection of classic machines now, often acquired broken with the intention of repairing them myself. Another time-sink from OCaml work…

Posted in Ocaml, OCIML, Operation Foothold, Random thoughts | Leave a comment