James Hague wrote a wonderful, vaguely controversial essay, “Free your technical aesthetic from the 1970s.” I’m refusing to comply, for two reasons:

  1. Those who forget the past are doomed to repeat it
  2. I hate unnecessary optimism

I think James would agree with the notion that it is too easy to get complacent. Computing, after all, did not emerge from Babbage’s garage with an Aqua look ‘n feel. The road leading to today had a lot of bends, twists, blind drops and dead alleys. James is more interested in where computing will be tomorrow. I respect that. For whatever reason, I am much more interested in where computing was yesterday.

Aside on textbook pipe usage

It’s easy enough to debunk his attitude toward Linux. There’s plenty of interesting stuff going on. Linux is hard to love as a Unix, which is one reason I promote BSD. If the goal is small commands that you glue together like plumbing, it helps if the commands are actually small. I find it tremendously useful to be able to pipe commands around, not a textbook exercise at all. For example, I’ve done all these in recent ~/.history:

ps auxw | grep firefox
awk -F, '{ print $2 }' /tmp/projects.txt  | sort -u > legacy-pcodes.txt
grep 'Hibernate: ' precaching.txt | sed 's/.*\(from [^ ]*\).*/\1/g' | sort | uniq -c
xzcat mcsql-widarmon.sql.xz  | pv > /dev/null

Some of these are mundane (finding Firefox’s pid so I can kill it); others are a bit more interesting. The second one there is a bit of very light CSV parsing, the third one is showing me what Hibernate is spending its time doing, the fourth one there is letting me see the uncompressed size of an xz file. Probably there are better ways to do some of these, but after so many years of Unix, it’s just faster and easier for me to do a one-off like this than spend 30 seconds consulting the documentation. Heck, even when I do consult the documentation, it sometimes looks like this:
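To make the flavor concrete, here is the second pipeline run against a made-up stand-in for /tmp/projects.txt (the file name and contents here are invented for illustration):

```shell
# Invented sample in the shape of /tmp/projects.txt: name,project-code,status
printf 'alpha,P100,active\nbeta,P200,done\ngamma,P100,active\n' > /tmp/projects-demo.txt

# Same pipeline as above: extract field 2, then deduplicate
awk -F, '{ print $2 }' /tmp/projects-demo.txt | sort -u
# prints:
# P100
# P200
```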

curl --help | grep -i header

Yup, I didn’t want to have to read through all those options. Do these look like textbook usages of the pipe? I have no idea, because I’ve never seen a textbook of Unix pipe usage. All I know is what I’ve seen in various blogs and what I’ve seen my friends do. It came naturally, I guess.

Obscure alternatives

Yet, even BSD is not so small anymore. Using Perl to damn Linux is also a bit of preaching from the pulpit. Rob Pike, after all, was one of the greats behind Plan 9, current record holder for world’s weirdest and most obscure operating system you can run on a modern computer. The dream of Unix is alive in Plan 9, so to speak.

Plan 9 is very much the answer to the question, what would Unix be like if it could be rewritten with no regard for backwards compatibility? Everything is a file, all the old stupid file formats are made a little more sane, and so forth. You can’t do this anywhere else:

  • cat /dev/keyboard: it’s what you’re typing right now
  • cat /dev/mouse: it’s screen coordinates for your mouse cursor
  • the display is a flat file; the window manager arranges for each window to think it owns the whole thing
  • cat a wave-audio file to /dev/audio to play it

These may not seem like much, but in Plan 9:

  • any file can be transparently mounted across the network to any other location: if I want your mouse to control my screen, we can mount your mouse over mine in the filesystem; similarly for all my other hardware
  • to use the network gateway, mount its network card over yours
  • want to write a network server? use the 9P library and instantly get remote access to your app’s internals
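The glue for all of this is the per-process namespace. A sketch of the two tricks above, using Plan 9’s import and bind commands (the machine names are invented, and exact paths vary by setup, so treat this as a sketch from memory rather than a recipe):

```
# Pull the gateway's network stack over my own; dialing out now goes
# through the gateway's interfaces (the canonical Plan 9 gateway trick):
import gateway /net

# Pull another machine's device files into my namespace, then bind its
# mouse over mine so their mouse moves my cursor:
import tenex /dev /n/tenex/dev
bind /n/tenex/dev/mouse /dev/mouse
```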

Obviously it isn’t all roses, and I think James and lots of other people would blame the “aesthetics” of Plan 9 for a chunk of that, maybe even the main chunk of that, but we must not let advances be forgotten just because they don’t pan out.

Forgotten futures

Between 1968 and 1970, there was a fantastic AI demo at MIT called SHRDLU, written by Terry Winograd. When this program ran, it displayed some shapes in a small 3D world. It then allowed the user to move the blocks around, create shapes, and make queries.

When I write “allowed the user to move the blocks around,” it’s easy to imagine that I mean the user clicked on a block and dragged it somewhere else. That’s not what happened, though. Instead, the user typed complete English sentences, which SHRDLU turned into a plan of action and then executed by moving things! On top of that, the queries were also complete English sentences. This was 45 years ago!

Person: Is there anything which is bigger than every pyramid but is not as wide as the thing that supports it?
Computer: Yes, the blue block.
Person: Does a steeple
Computer interrupting: Sorry, I don't know the word "steeple".
Person: A "steeple" is a stack which contains two green cubes and a pyramid.
Computer: I understand.
Person: Are there any steeples now?
Computer: No.
Person: Build one.
Computer: OK. (does it)

This is not a fantasy, this is an actual program with actual source code you can download today. Look at everything that happened there:

  • Someone gave a very complex query and got an exact answer
  • They then defined a new term declaratively
  • They asked the computer to figure out how to do something based on that definition—which it then actually did.

I have a hard time understanding why people are not shocked and amazed constantly at stuff that we used to do. I have an app on my iPhone called Shazam, which is easily the most amazing app I have ever had on my phone and the handiest. What it does is, I click a button, it listens for 15 seconds and then tells me what song is playing in the background. This was easily the most convincing piece of artificial intelligence anybody had on their phone for about four years.

SHRDLU took a declarative definition from plain English and converted it into a plan of action, which it then executed. Hmm… sound familiar? Sound like, I dunno, Siri?

Amazingly, SHRDLU comes in at under 20K lines of code. No corpus, no statistics.

Prolog versus the world

I imagine that by virtue of having taken a compilers class I’m substantially ahead of the curve. In that class, we started out talking about regular languages, then context-free languages, then briefly about context-sensitive languages. The general attitude was: these things are hard to do, so we’re not going to do them. We retreated to context-free languages, one in particular (Tiger), which we implemented for the rest of the semester, and that was that.

The general attitude this instilled is: you can do amazing things with context-free languages, and that’s all the power you’ll ever really want, so stick with that and go about your life.

A classic example of something that plain context-free rules handle famously badly is agreement in natural language. If you want to be cheesy, it’s not too hard to write a piece of Prolog that will parse a sentence:

sentence --> noun_phrase, verb_phrase.

noun_phrase --> article, noun, !.

verb_phrase --> verb, noun_phrase, !.
verb_phrase --> verb.

article --> [the].      article --> [a].

noun --> [cat].         noun --> [cats].
noun --> [mouse].       noun --> [mice].

verb --> [eats].        verb --> [sleeps].

?- phrase(sentence, [the,cat,sleeps]).
true.

?- phrase(sentence, [the,cat,eats,the,mouse]).
true.

Of course, this permits insanity as well:

?- phrase(sentence, [the,cats,eats,the,mouse]).
true.

The cats eats the mouse? Bullshit! But it turns out this can easily be rectified by threading the number through:

sentence --> noun_phrase(Num), verb_phrase(Num).

noun_phrase(Num) --> article, noun(Num), !.

verb_phrase(Num) --> verb(Num), noun_phrase(_), !.
verb_phrase(Num) --> verb(Num).

noun(s) --> [cat].         noun(pl) --> [cats].
noun(s) --> [mouse].       noun(pl) --> [mice].

verb(s) --> [eats].        verb(pl) --> [eat].
verb(s) --> [sleeps].      verb(pl) --> [sleep].

?- phrase(sentence, [the,cat,eats,the,mouse]).
true.

?- phrase(sentence, [the,cat,eat,the,mouse]).
false.

?- phrase(sentence, [the,cats,eat,the,mouse]).
true.

Now we can keep on taking this further. What about this nonsense?

?- phrase(sentence, [the,cats,sleep,the,mouse]).
true.

What? Sleep doesn’t take a direct object! We can fix that by splitting the verbs by transitivity (these rules replace the earlier verb_phrase and verb clauses):

verb_phrase(Num) --> verb(trans, Num), noun_phrase(_), !.
verb_phrase(Num) --> verb(intrans, Num).

verb(trans, s)   --> [eats].        verb(trans, pl)   --> [eat].
verb(intrans, s) --> [sleeps].      verb(intrans, pl) --> [sleep].

?- phrase(sentence, [the,cats,sleep,the,mouse]).
false.

?- phrase(sentence, [the,cats,sleep]).
true.

?- phrase(sentence, [the,cat,sleeps]).
true.

?- phrase(sentence, [the,cat,eats]).
false.   % eats is declared transitive only, so it demands an object

?- phrase(sentence, [the,cat,eats,the,mouse]).
true.

Look at this: a four-line change that handles transitive and intransitive verbs separately and ensures number agreement. This is considered a very hard problem, and yet here is this tidy little Prolog program that just does it. The dream of the ’70s is alive in Prolog.
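One caveat if you want to try this yourself: each refinement above replaces earlier rules rather than adding to them, so consulting the snippets cumulatively leaves stale clauses behind that re-admit the bad sentences. The final grammar, gathered into one file, is just the same rules in one place:

```prolog
sentence --> noun_phrase(Num), verb_phrase(Num).

noun_phrase(Num) --> article, noun(Num), !.

verb_phrase(Num) --> verb(trans, Num), noun_phrase(_), !.
verb_phrase(Num) --> verb(intrans, Num).

article --> [the].      article --> [a].

noun(s) --> [cat].      noun(pl) --> [cats].
noun(s) --> [mouse].    noun(pl) --> [mice].

verb(trans, s)   --> [eats].        verb(trans, pl)   --> [eat].
verb(intrans, s) --> [sleeps].      verb(intrans, pl) --> [sleep].
```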

In the real world, this kind of program would never fly, because we expect natural language processors to be fuzzy and handle the kinds of mistakes that humans make when talking and writing, and something like this is very rigid. As a result we’ve married ourselves to statistical NLP, which means huge corpora, which means lots of memory, which means heavy programs with a lot of math behind them.

It’s strange that we live in such a bifurcated world: on the one hand we have astronomical demands for a certain class of program, while the programs we use day-to-day are shockingly procedural. The user is expected to learn all these widgets, how to use them and what they mean (intuitive my ass), and then, whenever they need to do something, do it by hand, over and over. At my work, one of the last things produced before an observation is a script that runs the telescope. The script is written in Python, but it is completely generated from a set of objects that live in a database, which were composed, painstakingly, by a user with a point-and-click web GUI. Wouldn’t it have been more efficient to just learn to program? Or, perhaps, would it have been more efficient to build a large, complex manual NLP system like this, one that could parse a plain-text English specification and produce the script? We’ll never know.

Melt-your-brain debugging

The standard debugging procedure for a Smalltalk app is:

  1. Something goes awry: a message is not understood, say
  2. The app pops up a dialog with the debugger window
  3. You enter some code and hit proceed
  4. The app continues like nothing happened

How does this pan out in a web app? Similarly, only this time between 2 and 3 there is a step 2.5: you VNC into the Smalltalk image running your web app on the remote server.

This sounds awesome enough, right? But apparently it gets better. You can replace that “pop up a dialog” machinery with a “serialize the entire current context to a log.” If you’ve ever debugged a Java web app, you are used to rooting around in the logs reading exception traces. This is like that—except, instead of reading an exception trace, you get to load the actual moment the exception was triggered into your local debugger and pick up from there as if it just happened locally! Have you ever had a problem you couldn’t debug because you just didn’t know how to reproduce the bug locally? That problem isn’t even meaningful in Smalltalk!

Cocoa doesn’t have this. C# doesn’t have this. Nothing else has this.

Unnecessary Optimism

If you’re like me, you read the foregoing commentary and thought, wow, how awesome are Plan 9, Prolog and Smalltalk! Ninety-five percent of what I see today is the same old fucking around with different clothes. Now you get whooshy effects when you drag ’n drop, but it’s still drag ’n drop. Now you can put multiple fingers on your browser window, but you’re still clicking on drop-downs to put in your birthday.

An even bigger letdown for me is the whole NoSQL situation. The proponents of NoSQL aren’t even aware of how they are repeating history by recreating the hierarchical databases of yore. E.F. Codd was explicitly trying to rectify these mistakes when he formulated relational database theory 40 years ago, and now we’ve come so far along that we’ve wrapped around and apparently must re-learn these mistakes a second time. The morons promoting this technology have no clue; they see themselves as the new sheriffs in town, putting their scalability law above all else.

It’s easy to forget that your forebears were once wild young bucks as well. The difference, of course, is that Codd understood the math and was not blown around by the winds of popular opinion, and much time passed between the refinement of his idea and the first relational database that could perform. Since then, everyone seems to have forgotten why relational databases were a good idea in the first place: no duplication, true data integrity, and proper handling of concurrency. Buckle up; it’s going to be a rough ride.


Whether or not we like it, we are products of what came before. I would like to look at the recent stuff coming out of Apple with the same optimism as James Hague. There is some interesting stuff, but it is hard for me to forget all the things I’ve seen. And I may be wearing rose-tinted glasses; after all, I wasn’t there when it happened. Let me say this, though: if you want to blow your mind, go to your CS department’s library and look through the books printed before 1980. You’ll be surprised how much diversity, hope and optimism there was, back when our technical aesthetics were completely enslaved to the 1970s.