Updated.

Much C++ excitement has happened since I wrote the texts below. I spent a great deal of time working in C++14 and learned a great deal about the benefits and risks of cutting-edge C++ coding.

After all this, I have a much better idea of what I want in a so-called "modern C++" system and how to write something that doesn't just perform well, but also reads clearly and can be easily maintained. See the timedata project for examples of very recent C++ code.

[original text below]

I have written hundreds of thousands of lines of C++ and C code.

While I've been doing C since the old days, I was an early adopter of and evangelist for C++ at Market Vision Corporation starting in 1991.

I thought I knew what I was doing with C++ when I joined Google in 2004 - but I quickly became aware of the depths of my ignorance when confronted with the level of programming there. Taking advantage of the environment, I created and maintained the Google C++ mailing list and got consistently top marks in annual reviews for the strength of my C++ coding and design.

One big high point there was "The Deduplicator". Google Base is a very large datastore containing "every product in the world" - but it needed to be deduplicated, and the program which did this was taking far too long to complete. Instead of redoing the datastore (a huge task), I decided there was a much better solution using a lesser-known STL class (std::multimap) and a rewrite of the deduplicator - and when the group couldn't reach consensus on whether it would succeed, I wrote the code over several very crowded days and presented it as a working solution.

The net result was that a job that took 4.5 hours and 400 machines (and wouldn't benefit at all from more machines) became a job that took 28 minutes on 50 machines (and could expand to use more).
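For flavor, here is a minimal sketch of the shape of that multimap-based approach - not the original Google code; the Record type, the dedup key, and the "keep the first in each group" policy are all hypothetical stand-ins:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Record {
    std::string id;
    std::string normalized_title;  // hypothetical dedup key
};

std::vector<Record> Deduplicate(const std::vector<Record>& records) {
    // Group every record under its dedup key; duplicates share a key.
    std::multimap<std::string, const Record*> by_key;
    for (const Record& r : records) {
        by_key.emplace(r.normalized_title, &r);
    }

    // Walk each run of equal keys and keep a single representative.
    std::vector<Record> unique;
    for (auto it = by_key.begin(); it != by_key.end(); ) {
        unique.push_back(*it->second);          // simplest policy: keep the first
        it = by_key.upper_bound(it->first);     // jump past the rest of the group
    }
    return unique;
}

int main() {
    std::vector<Record> records = {
        {"a1", "blue widget"}, {"a2", "blue widget"}, {"b1", "red widget"}};
    for (const Record& r : Deduplicate(records)) {
        std::cout << r.id << ": " << r.normalized_title << "\n";
    }
}
```

The point of the real thing was the same as the sketch: let the sorted container do the grouping, so duplicates become adjacent and can be collapsed in a single pass.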

A smaller high point was "compareIgnoreCase". I had made a tiny optimization to a string utility in the central library, not directly connected to my work. The reviewer commented that the real bottleneck in that area was "compareIgnoreCase" - a function that compared UTF-8 strings regardless of case - because it made a lower-case copy of each string and then compared the copies(!)

The next day was a Saturday, and rainy, so I rewrote compareIgnoreCase as a tiny function which used no heap memory and less than 32 bytes of stack memory, with orders of magnitude better performance (10 times faster on a random set of "short sentences" and well over 100 times faster on long texts) - and it's quite likely still there to this day.
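The core idea was simply to compare in place instead of copying. Here is a rough sketch of that idea only - it folds case for ASCII bytes and treats other UTF-8 bytes as-is, which is a deliberate simplification of the real UTF-8-aware function:

```cpp
#include <cctype>
#include <iostream>

// Returns <0, 0, or >0, like strcmp, without allocating anything.
int CompareIgnoreCase(const char* a, const char* b) {
    while (*a != '\0' && *b != '\0') {
        unsigned char ca = std::tolower(static_cast<unsigned char>(*a));
        unsigned char cb = std::tolower(static_cast<unsigned char>(*b));
        if (ca != cb) return ca < cb ? -1 : 1;
        ++a;
        ++b;
    }
    // If one string is a prefix of the other, the shorter one sorts first.
    return (*a == '\0' && *b == '\0') ? 0 : (*a == '\0' ? -1 : 1);
}

int main() {
    std::cout << CompareIgnoreCase("Hello, World", "hello, world") << "\n";  // 0
    std::cout << CompareIgnoreCase("apple", "Banana") << "\n";               // -1
}
```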

More recently, I implemented a complete rewrite of World Wide Woodshed's popular musicians' practice tool, SlowGold, in cross-platform C++ for Mac and PC. This small but slick and powerful application uses the excellent JUCE cross-platform C++ development library, a general-purpose toolkit with an emphasis on digital audio and a strong user community where I am an active member.

I'm using this JUCE system for my open-source art programming project echomesh, so you can see some of my hot-off-the-press C++ code right here.