Not so isomorphic

Vesa Karvonen defends static sums:

In this particular case, we have functions whose behavior depends on the values of their arguments. But the sets of argument values that invoke the different behavior also have disjoint sets of types. So, one can treat the functions as if their behavior would depend on the types of their arguments. What part of this kind of reasoning do you disagree with?

One can treat them as if their behavior depends on their types - but that's not how it actually works, and many changes can expose the difference. If we turn off the typechecker and run on dynamic semantics, the static sum still works. In a type system where the types aren't disjoint, it still works. If a weakness in the type system allows passing inL with inR's type, then the value, not the type, controls the result. Only in very restricted conditions does the isomorphism tell the whole story.
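The dynamic semantics are easy to see in a sketch. Here is a value-level sum in Python (the `inL`/`inR` names follow the discussion; the tagged-tuple representation is my own invention, not Karvonen's):

```python
# A sum whose tag is a run-time value: the tag, not any static type,
# selects the behavior. (Representation is made up for illustration.)
def inL(x): return ("L", x)
def inR(x): return ("R", x)

def match(s, on_left, on_right):
    tag, value = s
    return on_left(value) if tag == "L" else on_right(value)

# With no typechecker in sight, dispatch still works -- and even when a
# payload has the "wrong" type, the tag still controls the result:
print(match(inL("not an int"), len, lambda n: n + 1))   # 10
print(match(inR(41), len, lambda n: n + 1))             # 42
```

Turn the types disjoint or not, the behavior is unchanged: the value decides.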

This is a general weakness in reasoning by isomorphism. Two things may be equivalent in a circumscribed context, but that doesn't make them equivalent in general. In programming languages, it's easy to extend semantics in ways that expose implementation details, so this happens a lot. This doesn't make isomorphism useless; it only means one shouldn't apply it as freely as in math.

Correction: You can apply it just as freely as in math, of course. For some reason it seems to be more tempting to overestimate the scope of the isomorphism in programming. But maybe I only think that because I don't do enough math to know better.

Compare to typeclasses, which really do allow overloading on static type. They don't keep working when the static types change, and they do keep working when the values change. A static sum built with a typeclass might be equivalent to the vanilla Hindley-Milner version, but it extends completely differently.

First impressions of Arc

New Lisps seem to be thick on the ground lately. Graham and Morris have just released an implementation of theirs, Arc. My initial impressions, based on reading the tutorial and the source, not on actually using the language:

  • There are a lot of novel names for familiar operators, but many of them are good. I will be stealing best and most and several others. The bad ones are too short: I don't like mac and rem; they're not common enough to require such short names, and they risk colliding with other things. There are also some cryptic compounds like w/uniq.
  • in is very convenient. I wish I'd thought of it.
  • There's syntax for op, complement and compose. (The latter two, oddly, are implemented as magic on symbols, not forms - that is, the compiler understands some naming conventions.) Would short names (op, o not, and o) suffice?
  • There are no user-defined types. Well, there are (see tag) but they only have one slot. This separates the identification and encapsulation aspects of user-defined types from the representation aspect. I'm not sure I like it, though.
  • The first libraries are aimed at web apps - not surprising, given the authors.
  • Assignment and define are the same operator. No, they're not! Assignment is set. Do we never learn? And apparently that isn't confusing enough, so it's called =. Ugh.
  • Macros are distinguished dynamically: the function, not the variable, is tagged as a macro. This means a call to a variable can be compiled differently depending on what value its global binding happens to have at macroexpand time. This will become a problem. There's a comment (warn when a fn has a parm that's already defined as a macro) that suggests the authors are aware of the problem.
  • Scheme distinguishes #f from () from nil. Arc doesn't. Since lists are often passed back and forth between Scheme and Arc, this is a significant implementation headache. I think in situations like this it's best to just let the host language's data show through, since it makes little difference to the quality of the language. (And what is nil doing in there anyway?)

I find all this rather encouraging, because I think I can do better. I'd better go work on my new Lisp.

It's not homoiconicity

Lisp, say innumerable introductions, is a homoiconic language: its programs are represented in the same data structure they operate on.

I'm not bringing this up to complain about how often introductory material is copied uncritically. (I suspect that's because introductions tend to be conservative, and what's more conservative than repeating something other people have said before?) I'm bringing it up because it is so often repeated, even though everyone knows it is wrong.

It's commonplace to point out that C is homoiconic, and so is Perl, and every language that can process text. They can generate and manipulate their own code. It is often overlooked that this is actually useful, even in C. It's possible for a C program to generate and compile C code - this is how Goo is implemented. And of course many languages have self-hosted REPLs and other tools. Homoiconicity matters.

So why is Lisp better for metaprogramming? It's because the representation of code is a convenient one. Lisp trees are terse and closely reflect the abstract structure of programs, so they are easy to work with. This convenience isn't a simple, formalizable quality (although attempts to formalize it could produce some interesting papers). Like so many questions in programming languages, convenience is not a matter of adding some interesting new power, but of minimizing the amount of work spent doing uninteresting things.

When abstract code is hard to work with, it loses its metaprogramming benefits. Scheme, for instance, virtually requires a more elaborate representation of code in order to implement syntax-rules. Many Schemes expose that representation, but hacking it hasn't caught on, because it's quite inconvenient. This isn't a necessary problem with richer abstract syntax; it's just that this one is done badly. (In retrospect, it's a bad sign when you have an important type called syntax-object.) I think there could be better alternatives to S-expressions that preserve their all-important convenience.

By the way, a language needn't be homoiconic to benefit from a good representation of programs. Imagine a language whose domain does not include its programs, but which is hosted in another language which does. The host language can manipulate the trees, so it can metaprogram the embedded language - easily, if the representation is a good one. This is the case for many DSLs embedded in Lisp. They benefit from their convenient representation, even though they themselves can't manipulate it.

Please, Lispers, stop telling people what's special about Lisp is homoiconicity! If you want another fancy Greek word, you could speak of euiconicity. But let's not. Much as text is an inconvenient representation for programs, these big words are inconvenient for simple concepts. Just say S-expressions make metaprogramming convenient because they match the abstract structure. That's easier to understand, and as a bonus it transfers well to other data structure questions.

Pitfalls for who?

SISC has a test suite for R5RS Pitfalls. Most of these are ordinary compliance tests that test the implementation's completeness or robustness or up-to-date-ness. But several are tests of perverse implications of the standard, where a reasonable person might say R5RS is overspecified or even wrong. In particular, the first two test the interaction of letrec with call/cc: letrec is supposed to evaluate all the init-forms before setting any of the variables, effectively requiring an extra layer of variables. This is not obviously better than the simpler definition which initializes one variable at a time. This is less a test of the implementation's quality than of what lengths it's willing to go to to follow the letter of the standard.

The existence of tests like these — and the number of implementations that fail them — says something about how clean a language is. When expert users are surprised by something in a language, and implementors often get it wrong, this suggests a problem in the spec. In these cases, compliance tests can illuminate what parts of the language are ugliest, and why. For Scheme, it looks like the problems are in details that would ordinarily be invisible, but that are exposed by call/cc or syntax-rules. This is a situation where it's easy for language designers to make mistakes, because they don't even realize what they're specifying.

Sometimes, if a spec is widely agreed to be buggy, everyone will just ignore it. SISC's test suite contains a test for whether let-syntax creates a contour for internal define. The test follows the consensus of implementors in saying that R5RS is wrong, and that a good implementation should ignore it on this point. Scheme culture values formality, but apparently there is a limit to how much inconvenience they are willing to put up with for it. Language designers, take note.

Uncheckable bugs

Here's a bug that's hard to find automatically:

fib :: Num a => a -> a
fib 1 = 1
fib n = n * fib (n - 1)

See the bug? No typechecker can. More precisely, no checker (whether of type or anything else) can detect the problem without further information about what the function is supposed to do, because it does something reasonable. It's the wrong reasonable thing, but an uninformed checker has no way of knowing that.

This is why tests are useful: they provide constraints specific to the program. There are other kinds of checkable declarations (e.g. invariants, reference implementations) that are even more powerful. I suspect they're not used much only because they're harder to implement and harder to use.
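A Python transliteration makes the point concrete (the test value is mine; the bug is that the definition computes a factorial, not a Fibonacci number):

```python
def fib(n):
    # Transliteration of the definition above: perfectly reasonable code
    # that no checker can object to -- except it computes the factorial.
    if n == 1:
        return 1
    return n * fib(n - 1)

# One program-specific constraint is all it takes to expose the bug:
print(fib(5))   # a correct fib would return 5; this returns 120
```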

Edit: fixed type declaration. That wasn't the intended bug.

The value of minimal tests

According to legend, Dijkstra said: Testing cannot show the absence of bugs, only their presence. This is true only in the narrowest interpretation: testing cannot generally show that there are no bugs. It can, however, show the absence of a huge class of bugs: all those which cause the program to always fail.

I submit that these are a significant fraction of all bugs. When you implement something, how often does it work on the first try? When you track down a bug in the wild, how often is the affected code so completely broken that it couldn't work for any input?

Of course the answers to these questions depend on whether those bugs get caught earlier, whether by manual testing or static checkers or automated tests. I think much of the value of automated tests is that they are so good at catching those bugs. A single test case takes very little work, yet catches these bugs quickly, reliably, and (unless the test overspecifies the result) with no false positives. That set of virtues is hard to approach by any other means.

Other uses of automated tests are less impressive. Some people write laborious tests for boundary cases and other possible bugs. I'm not terribly enthusiastic about these tests, because each one normally catches only one bug. They still show the absence of specific bugs, but they don't have the power of the most basic tests.

A keyword argument

I want to believe keyword arguments are an extravagance. They're unnecessary complexity, I tell myself. They complicate function call. The most naïve implementation is inefficient, and the most efficient requires multiple entry points. Why bother, for a feature that's hardly ever used?

But I do use them. When I have a rarely used optional argument, I write it with a keyword, when I'm writing in pseudocode or in a language that supports them. When I don't have keywords, I resort to including the keyword in the function name, like (sort-by score < nodes) for (sort < nodes :key score). That's nice and simple, but it's easy to get confused about the argument order. Maybe a reliable convention would help: head-marked keyword arguments always go first, or always last.
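In a language with native keywords the trade-off is easy to see. A Python sketch (`sort_by`, `score`, and `nodes` are made up for illustration):

```python
nodes = ["bbb", "a", "cc"]
score = len   # stand-in scoring function

# With keyword arguments, the rare option names itself at the call site,
# as in (sort < nodes :key score):
print(sorted(nodes, key=score))      # ['a', 'cc', 'bbb']

# Without them, the keyword migrates into the function name,
# as in (sort-by score < nodes):
def sort_by(key, items):
    return sorted(items, key=key)

print(sort_by(score, nodes))         # ['a', 'cc', 'bbb']
```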

Or maybe I should stop trying to fight the pseudocode.

Feathers of the parody

Nikodemus Siivola tries to demonstrate the hazards of overenthusiastic blogging about a new tool of expression. Unfortunately his parody doesn't make the point well, because it's too close to being good. I didn't even suspect it was a parody until I got to the first nonsensical bit:

This macro expands into an elegant recursion that constructs a list by repeatedly calling a function with the results of evaluating arguments. The last bit is the key to its power:

;;; no need for this
(let ((stream (make-stream))) (---> *n* (composition) stream))

;;; just this is enough
(---> *n* (composition) (make-stream))

Even at that point, I was willing to believe the author had just garbled his explanation. To recognise a piece as parody implies an insult: this is so bad it can't be serious. That's a harsh condemnation to make of code you don't understand. And code in an unfamiliar language can be as impenetrable as merely bad code. I don't think rejection is a good default response to anything difficult or novel.

In other words, the arrow macros I presented were intentionally horrible. If you thought they were neat, think again.

They were horrible in their details, but the basic ideas are right. <--- needs a better name, like iterate, or unfold (with a finished-p argument). ---> is take and map with a perverse way of calling its argument. Aside from producing that argument, I can't think of a good use for <-, but having a short way to write compose is important. So I thought they were neat. Done wrong, sure, but they're the right thing done wrong. What I saw in the final example was this:

(map - (take 3 (iterate 0 1+)))

Apparently Nikodemus can't write wholly bad code if he tries.
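The combination the arrow macros reinvent is only a few lines of generators. A Python sketch (the definitions of iterate and take are my guesses at the intended semantics, going by the names):

```python
from itertools import islice

def iterate(seed, f):
    """The stream seed, f(seed), f(f(seed)), ... -- what <--- should be."""
    while True:
        yield seed
        seed = f(seed)

def take(n, stream):
    """The first n elements of a stream, as a list."""
    return list(islice(stream, n))

# The final example, (map - (take 3 (iterate 0 1+))):
print([-x for x in take(3, iterate(0, lambda x: x + 1))])   # [0, -1, -2]
```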

A topic-comment comment

If topic-comment order is so natural, why don't more languages support it?

Perl does, of course, through $_. This is really two features. One is anaphor — $_ holds the result of a previous statement, or of a loop-control expression. That's a useful feature (and not only for topics), and it's easy to support in other languages, by turning (progn a (b it) (c it)) into (let* ((it a) (it (b it))) (c it)). It's nice because it encodes something more interesting than order of effects in the sequential order of code.
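The rewrite amounts to shadowing one variable. A Python sketch, with a made-up pipeline:

```python
# (progn a (b it) (c it)) becomes (let* ((it a) (it (b it))) (c it)):
# each statement's result rebinds `it`, so the sequential order of the
# code carries the data flow.
it = [3, 1, 2]        # a
it = sorted(it)       # (b it)
print(it[0])          # (c it) -- prints 1
```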

The other feature is that $_ serves as a default argument to many operations. Perl implements it as a dynamically scoped variable, but sets it instead of rebinding it, so it's a little hazardous (see the cautions in perlvar). Rebinding instead would work better, but either way, it's easy to imitate.

ISTM that Smalltalk also supports topicalization, in two ways. One is the implicit self argument. That doesn't help with topic-comment order, but it does save a little typing. The other is the ; operator, which allows sending several messages to one receiver: a foo: b; bar; baz: c is a foo: b. a bar. a baz: c. Effectively, you can separate statements with '.' to set a new receiver, or ; to keep the old one.

But ; isn't used much. Its low frequency illuminates why implicit anaphors aren't common in programming languages: they change their referents too rapidly. In programs as in speech, a topic may last a long time, even though it is rarely referred to. But implicit anaphors and argument-passing abbreviations have too short a memory to hold the topic. It's often necessary to fall back on the very ordinary mechanism of binding a variable, which also better handles irregular references.

The underlying difference is that natural languages benefit from topicalization in ways programming languages don't. Natural language is often ambiguous, and relies on context to deal with this — and topics are great for disambiguation. Topics also help convey the structure of discourse — how parts relate to each other, and why they matter, and what the listener can safely ignore. None of this is as important in programming languages. Topicalization may still be useful, but mainly as abbreviation.

I detect a pattern

Is someone teaching programming language theory in terms of type theory, and leaving their students with the impression that all good things are derived from static type? Or are people just dazzled by ML, and confuse its various aspects? For whatever reason, a lot of strange claims are made on behalf of static typing. It's not just people confusing it with "functional". On LtU recently, someone claimed dynamically typed languages can't have formal semantics, and no one corrected them. And then there are occasional claims that pattern matching depends on static types...

The obvious counterexample is Erlang, which uses pattern matching heavily. But other than that, there aren't many dynamically typed languages with pattern matching. Surprisingly, even in Lisps, which can add it easily, it's rarely used. I know there are Scheme pattern matchers, but I don't think I've ever seen code that uses one. There is a full-featured bind for Common Lisp, which can even match regular expressions, but it's not widely used. (CL also has destructuring-bind, but that only works on lists, and therefore isn't used for much besides macros.) Why is such a convenient feature not adopted?

This is not about static type, of course; rather, I think it's about syntax. I've tried writing simple functions with a pattern-matching defun, and been disappointed with their readability. Simple numeric ones are okay:

(defun-pat fib ((0) 1)
               ((1) 1)
               ((n) (+ (fib (- n 1)) (fib (- n 2)))))

But more complex patterns have far too many parentheses:

(defun-pat cat "Simple APPEND"
  ((nil rs) rs)
  (((cons l ls) rs) (cons l (append ls rs))))

The problem isn't specific to defun-pat; bind has the same number of parentheses. This is the same parentheses problem as let, only worse. The three left-parentheses in a row are hard to read, even if you're used to sexprs. You have to count them, which is much harder than merely recognizing them.

You can make it a little easier by the same tricks used for let. You can lose one layer of parentheses by writing alternating patterns and bodies, instead of enclosing the pairs in lists. You can use a different kind of parentheses for one of the layers, like Clojure and some Schemes. (I don't like this because it requires matching right-parens.) If you're willing to forgo matching patterns in multiargument parameter lists, you can get rid of another layer. And you can replace the parentheses with something else by adding syntax.

As with let, pattern-matching has a slightly complicated structure, and syntax can flatten it. This appears to be necessary to make it useful. Pattern-matching is valuable, but not when it's hard for humans to read.

It's also not really convenient unless it's integrated into other binding constructs. It's easier to use accessors than to write an extra bind form to split an object. Pattern matching only comes into its own when it's built into lambda and define and the other common constructs, so you can pattern-match without additional work. Theoretically this can all be done with macros, but in practice too many places have to change. Pattern matching needs to be built into the language to be useful.

Unspecial forms

case-lambda, aif, with-condition-restarts, load-time-value... Lispers like to talk about special forms.

We rarely talk about the others. Most code, even in Lisp, consists almost entirely of a few anonymous constructs that aren't special forms. They're anonymous because they're too abbreviated to have names, and they're abbreviated because they're so common. These are the forms that are not special: variable reference, function call, and self-evaluating forms.

Their abbreviations are in structure, not syntax, but abbreviations can be done at the syntactic level too. And if you design syntax, keep the unspecial forms in mind. More than anything else, they need to be terse.

It's possible, but painful, to write code without the abbreviations. (I know someone else has written about this before, but I can't find the article.) Here's what the ordinary factorial might look like, in a Lisp-1, with the abbreviations replaced by special forms:

(defun factorial (n)
  (if (call (var <=) (var n) (quote 1))
    (quote 1)
    (call (var *)
          (call (var factorial) (call (var -) (var n) (quote 1)))
          (var n))))

In these 16 forms, there are seven variable references, four calls, three literals, and two special forms (if and defun). There are also a few things that aren't forms: the binding occurrences of factorial and n, and the argument list. There are a few other nonform constructs not seen here: keywords for keyword arguments; docstrings; the structural lists that occur in forms like let and do.

I was curious about their frequencies, so I wrote a form counter to collect more data. This would be a perfect task for a code-walker, except that it has to understand the structure of every macro directly, rather than expanding it. I quickly got tired of adding cases to my form counter, so I only counted about a hundred lines. Bear in mind that this is a Lisp-1, so about half of the variable references are to functions (mostly from the standard library, of course).

Variable reference               182   40%
Function call                     78   17%
Binding occurrence                65   14%
Special form (including macros)   51   11%
Argument list                     24    5%
Structural list                   23    5%
Integer literal                   18    4%
String or character literal        5    1%
Total forms                      334   73%
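A toy version of such a counter can be sketched in Python, over nested lists standing in for sexprs (my own sketch; the real counter also has to recognize binding occurrences, argument lists, and the structure of every macro):

```python
from collections import Counter

SPECIAL_FORMS = {"if", "quote", "let", "defun"}

def count_forms(form, counts):
    """Tally form kinds: strings are variable references, numbers are
    literals, lists are special forms or function calls."""
    if isinstance(form, str):
        counts["variable reference"] += 1
    elif isinstance(form, (int, float)):
        counts["literal"] += 1
    else:
        head, *args = form
        if head == "quote":
            counts["special form"] += 1   # don't descend into quoted data
            return
        if head in SPECIAL_FORMS:
            counts["special form"] += 1
        else:
            counts["function call"] += 1
            count_forms(head, counts)     # in a Lisp-1, the head is a form
        for a in args:
            count_forms(a, counts)

# The body of factorial: (if (<= n 1) 1 (* n (factorial (- n 1))))
body = ["if", ["<=", "n", 1], 1, ["*", "n", ["factorial", ["-", "n", 1]]]]
counts = Counter()
count_forms(body, counts)
print(dict(counts))
# {'special form': 1, 'function call': 4,
#  'variable reference': 7, 'literal': 3}
```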

Variable bindings and references account for over half of all nodes. 30% of the bindings are top-level definitions, but if the remainder are each referenced once on average, then 20% of all nodes are spent naming local variables. This is a powerful argument for point-free style.

Both structural lists and special forms are less abundant than I expected. It's tempting to try to shrink programs by abbreviating some common special forms - lambda, define, let, and perhaps partial application - but it can't help much, unless programming style changes significantly in response.

Data can be so disappointing sometimes.

let there be fewer parentheses

Lisp binding forms like let are prone to excruciating nests of parentheses: (let ((x (some expression))) ...). They can be hard to read, hard to edit, and surprisingly verbose. Can we do anything about this?

I've tried the alternating-list variation of let, which saves a pair of parentheses per variable:

(let* (x 2
       y (+ x x))
  (= y 5))

Unfortunately, that's its only virtue. Despite the reduced parentheses, it can be harder to read than regular let, because there is nothing delimiting each binding. It's also inconvenient to use in macros, because the alternating list often has to be split into names and init-forms, and mapcar won't do it. This would be easy if there were sequence functions for alternating lists, but the lack is a reminder that the alternating list is a slightly unnatural structure. Maybe we shouldn't use it. Rather than removing the pair of parentheses around each binding, we should look for ways to remove those around the list of bindings.
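The missing operation is trivial once it has a name. A Python sketch (`split_alternating` is a made-up name):

```python
def split_alternating(xs):
    """Split an alternating list (name1 init1 name2 init2 ...) into names
    and inits -- the job mapcar can't do on this representation."""
    return list(xs[0::2]), list(xs[1::2])

names, inits = split_alternating(["x", 2, "y", ["+", "x", "x"]])
print(names)   # ['x', 'y']
print(inits)   # [2, ['+', 'x', 'x']]
```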

Every Lisp form gets an implicit list for free: its tail. (Well, not for free - it costs a pair of parentheses, but those are already spent.) Many macros use the list as an implicit progn. This is obviously useful for stateful code, and for internal define (when available). But modern Lisp style is increasingly functional, and of course define is unlikely in a let. There is usually only one body form, so the implicit list is wasted.

We can use the implicit list as the list of bindings instead, for a form reminiscent of Haskell's where syntax (but as an expression, not a feature of top-level definitions):

(defmacro where (body &rest bindings)
  "Backwards LET* - one body form followed by zero or more bindings."
  `(let* ,bindings ,body))

(where (= y 5)
  (x 2)
  (y (+ x x)))

This puts the body first, which is often a natural order. Unfortunately it may be confusing in an eager language, since forms no longer appear in order of evaluation.

Update May 2012: Mitchell Wand proposed this in 1993.

It's possible to put the body-form last, and use the implicit list at the beginning. This is an unusual pattern, but not hard to implement:

(defmacro lets (&rest bindings-and-body)
  "LET* with fewer parentheses and only one body form."
  (if bindings-and-body
    (let ((body (car (last bindings-and-body)))
          (bindings (butlast bindings-and-body 1)))
      `(let* ,bindings ,body))))

(lets (x 2)
      (y (+ x x))
  (= y 5))

Just don't expect your editor to indent it properly. :)

There is something unsatisfying about all of these solutions, because the problem they solve is purely one of syntax. The problem with let isn't that it has a list of binding pairs - that's its natural structure. The problem is that we have to explicitly delimit that list and those pairs, and we use the same characters for both. Languages with more syntax can avoid both of these problems.

Alternatively, you can avoid let by using internal define instead. I have a post on that coming soon.

Pointless abstraction?

I've noticed an odd argument from the mathematical side of the Haskell/ML community: that languages like Lisp have their semantics tangled up with a particular class of implementations. Operations like eq and mutation and dynamic type are supposed to be examples of this. Here's an example from Tim Sweeney:

But be careful with expectations of extensible type features, metareflection, and dynamic typing. They conspire globally against type soundness and universal properties that are desirable for program verification and concurrency.

Taken to their logical conclusion, you get things like LISP, SmallTalk, C# with LINQ and lose sight of the extensional meaning of your code as it's intractably intertwined with its metareflective representation, pointer identities, the uncheckable exceptions it might throw, the interactions with macros and syntax extensions which may transform it into something entirely different, etc.

This sounds absurd at first, but I think there's something to it. Lisp semantics are explicitly in terms of values - that is, heap objects. They have identity and type and slots; they can even be mutated. This is a reasonable way to define a language, but it's not the only way, nor the most abstract. ML semantics are in terms of sum types, whose representation by objects is only an implementation detail. Object operations like eq are considered unnatural because they don't make sense on that level.

This is a difference of abstraction level, just like the difference between physical memory and object memory. It's just a less useful one. Abstracting from words to objects hides a lot of irrelevant, error-prone detail. Abstracting from objects to type-based values doesn't hide much, if any. Sometimes it even requires more work (to deal with explicit tagging), so it's not obviously beneficial. The added abstraction is still claimed to improve analyzability, but it isn't helping with expressiveness.

It's easy to be accustomed to abstraction being valuable, and expect that more is always better. But it's only better when it hides something irrelevant or usefully increases generality. It's easy to have pointless abstraction in programs, and it's possible in languages too.

It's okay to be impure

In addition to its two unrelated senses, there's a better-known confusion of meaning about "functional": some people prefer to call a program, or a language, "functional" only when it's purely functional - no state. I, like most functional programmers, think this is excessive - mutation is not so toxic that a little bit condemns whole programs! But the moderate view is a little hard to defend.

You might call a language "functional" if it supports the style (either one) reasonably well - but that can be awkwardly inclusive. (Do you really think of Perl as a functional language?) You could restrict it to languages that support nothing else - but that excludes the most common, and useful, functional languages, for the very thing that makes them useful: their flexibility.

In practice people classify languages by the style they support best, or by how they are commonly used. This is not a theoretically sound test - it has the perverse effect that improving a language's support for other styles makes it count as less functional, even though its ability to express functional code hasn't changed. But it does predict what sort of code you're likely to encounter in the language, which is useful enough to outweigh its impurity.

By the way, could we stop insisting that "pure" always means "pure functional"? It is useful to be able to talk about any unmixed style. Prolog is (almost) a pure logic language; most machine languages are purely imperative; Self is purely object-oriented. There are many ways to write programs, and it does the programming-languages community no favor to declare that one of them is "pure" and the rest, however undiluted, are dirty.

Simple enough that I could do it

I wrote:

slime autoloads, and no setup is needed. It's almost simple enough that I could do it.

I could do it, if I only knew how. But I didn't know how. This is the fundamental problem of usability. It's not that it's hard to use things correctly; the problem is that it's hard to tell how to use them. It's easy to get these confused, even if you know better, and I do whenever I'm not paying attention. It's no wonder most software is so hard to use, when it's a challenge to even be aware of the problem.

What do you mean by "functional"?

There's a lot to like about functional programming, and I do, but one thing annoys me: its name. "Functional" has two senses, both of which are important concepts, and which have nothing to do with each other.

One sense is using functions as first-class values. This is such a powerful feature that it's surprising it's not yet universal. You might unambiguously call it "high-order programming", but hardly anyone does, because "high-order" sounds so affected.

The other sense is programming by constructing new values instead of modifying existing ones. This is not so much a feature (although it does depend on automatic memory management) as a failure to abuse one, so it's odd that it needs mentioning at all. State is a powerful, versatile tool, but you don't have to use it for everything! (Well, implementationally you do have to use it for everything, which is how the habit got started, but it doesn't have to be exposed.)

Both senses are important, and each one is valuable independently of the other. It's perfectly reasonable to wrangle state with combinators, or to avoid mutation in a first-order language. But in practice they're found in the same languages, and called by the same name, so it's easy to get them confused.
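The independence of the two senses is easy to demonstrate. A Python sketch:

```python
from functools import reduce

# Sense 1: functions as first-class values (reduce takes one as data):
total = reduce(lambda acc, x: acc + x, [1, 2, 3], 0)
print(total)          # 6

# Sense 2: constructing new values instead of mutating old ones:
xs = [1, 2, 3]
ys = xs + [4]         # a fresh list; xs is untouched
print(xs, ys)         # [1, 2, 3] [1, 2, 3, 4]

# Sense 1 without sense 2 is a first-class function that mutates;
# sense 2 without sense 1 is copying data in a first-order style.
```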

Here's a discussion which exhibits this confusion, starting with an extra bonus: some people appear to think "functional" implies static typing! Fortunately this is not widespread. I think it's just a result of sloppy ML advocacy; I haven't seen anyone defend it after the mistake was pointed out.

Memorable flags

Josh Parsons grades various countries' flags, which mostly means mocking them. For the Central African Republic: "Do not attempt to disprove the four-colour theorem on your flag!" Montserrat: "Features a picture of a woman crucifying a harp." Libya (plain green): "Did you even try?"

I don't agree with a lot of his grades, but that's fine; it's a matter of taste. There is a less subjective issue here, though: flags need to be recognizable, and they generally aren't.

Most flags use one of the small number of forms that are easy to describe in the language of heraldry. This means there are a lot of tricolors and bicolors, often decorated with stars or illegibly tiny coats of arms. Heraldic flags have the advantage that they're generally easy to make, but the language's vocabulary is limited to a small set of patterns and colors, all of which are already heavily used.

It is possible to construct other forms in heraldic language. The Union Jack, for example, builds its eight-rayed star out of crosses: "Azure, the Crosses Saltire of St Andrew and St Patrick, quarterly per saltire, counterchanged Argent and Gules, the latter fimbriated of the second, surmounted by the Cross of St George of the third, fimbriated as the saltire." The language, however, expects the norms it grew up with, and doesn't make it easy. It might be interesting to try creating flags in a language with different biases, such as Pan. There are probably many memorable designs waiting to be discovered.

Sometimes originality isn't wanted. Countries are prone to copying each other's flags, especially those of their former colonial masters. Some flags are even deliberately similar: many Slavic, African, and Arab countries share the same colors (and sometimes the same shapes) to show their relatedness. Of course this only makes it harder to tell them apart.

Color, at least, can be easily made distinctive, because only a few colors are traditional in heraldry. A bright purple flag, or a fluorescent pink one, will be remembered. Black and white is also underused, even though it offers the highest contrast. There are a lot of bad flags that try to make a boring pattern memorable with garish colors. It doesn't work, because bright colors are so common. Really, most flags have too much color. Especially red and green, which fly over most of Africa. Those are both perfectly respectable colors, but they're overused, and rather unpleasant to look at side by side. If you must use them, at least separate them with a white border. Please?

Via Tyler Cowen - now I have a headache.

How to make a fast machine unresponsive

This is not news, but Mac OS X has a problem with gratuitous animation. In several places an animation appears before the system responds to user input. This is the worst time to animate, because it increases response time. The user just has to wait, as if machines had not gotten a hundred times faster in the last twenty years. The fastest computer in the world won't make a 300ms animation finish in 10ms, and the machine will feel slow, because it's not responsive to the user.

The most egregious animation is in the screensaver. At least in 10.4, it tries to soften the transitions between itself and whatever else was on screen by fading out and back in again. It's nice and pretty, but this is a terrible time to fade! When the screen saver is disturbed, this means the user is about to use the machine. They don't want to wait, not even for a third of a second. They especially don't want to wait for a fade that makes the machine look less snappy!

There's also a 200ms animation when opening some dialogs, which has the same problem. Fortunately you can turn that off:

defaults write NSGlobalDomain NSWindowResizeTime .01

The .01 is because 0 means default, which means 200ms. .01 is effectively zero.

I don't know how to turn off the screensaver's fade, so I increased the delay before it starts, which minimizes the most annoying case: when I'm staring vacantly at the screen, and haven't touched the mouse for a few minutes, and the screensaver interrupts me, and keeps interrupting for another second, while it tries to soften the jarring return to what I was already looking at.

I'm not a big fan of chrome, because the user soon stops seeing it, but its cost remains. But there are plenty of places where animation is not a problem. Places where I don't quickly learn to ignore it. Places where I don't have to wait for it. Places like screensavers.

Does infix notation have better constituent order?

Everyone knows the usual argument for infix notation: that everyone's already used to it. That's a reasonable point, but it's not very satisfying — it's arbitrary, and it's an advantage of conventional notation, not of infix per se. But there are some real advantages of infix: it gets arguments closer to the operator than {pre,post}fix, and it can save tokens by using lower-precedence operators as punctuation. There are also some advantages related to argument order, which deserve more attention than they usually get.

Consider natural languages. They frequently work in topic-comment order: you mention some topic (a noun phrase), then make some comment about it. (I just did that, by saying “consider natural languages”, and then saying something about them.) Some languages, such as Japanese, even have special grammatical features for this. Topic-comment order is useful because (among other reasons) the topic provides context for understanding the comment. This applies in programming languages too. I suspect it's one of the attractions of writing message sends as receiver.m(a, b): the receiver is often the topic, so it's convenient to write it first and separately. Unfortunately, topic-comment order is impractical in prefix notation, because the topic needs to be first. But it works fine in infix or postfix.

Natural languages also prefer to put large constituents last (or, failing that, first). This is for a simple practical reason: so you don't have to remember as much grammatical state across a large subtree. It works in code too: it's easier to read (map args (lambda (x) (do lots of stuff with x))) than (map (lambda (x) (do lots of stuff with x)) args). By the way, this is a common complaint about map — e.g. David Rush says:

I also find it awkward to have the function argument be the first argument to map. It is nearly always the most complex value to express in the call and the length of the expression tends to obscure the connection with the collection arguments.

It's also easier to format a form when the large constituent is last — think of the awkwardness of indenting a function call that has several short arguments after a big lambda. Infix doesn't make indenting easy, but it does avoid large arguments in the wrong place — it fills the difficult middle position with the small, easy-to-parse operator.

I know other people have discussed both of these argument-order effects in general before, but my search-fu is failing me. Larry Wall's article on natural-language features in Perl mentions topicalization but not right-heaviness, surprisingly.

The common thread here is that the first and last positions of a form are special: certain arguments often want to be there. The middle position is the only place the operator can go without displacing an argument from its preferred place. This is a nice story, but it has a little flaw: operators also want to be in the end positions. They want to be first to provide context for the arguments (especially for macros), and to make the tree structure more transparent; they want to be last to reflect execution order (in eager languages). So putting the operator in the middle is not an unmitigated win. But it may be enough of an advantage to explain why, even to people used to prefix or postfix, infix sometimes feels more comfortably like natural language.

Partial pedantry

“Currying” or “partial application”?

Consider this piece of the Haskell Standard Prelude:

-- curry converts an uncurried function to a curried function;
-- uncurry converts a curried function to a function on pairs.

curry            :: ((a, b) -> c) -> a -> b -> c
curry f x y      =  f (x, y)

uncurry          :: (a -> b -> c) -> ((a, b) -> c)
uncurry f p      =  f (fst p) (snd p)

This is formally, pedantically correct. Surprisingly, it's also useful from time to time, mostly when composing functions that return pairs. But since most Haskell functions are already curried, it's not as heavily used as other Prelude functions such as flip.

Dylan has a different idea. It uses curry(\+, 1) to mean (λ x → 1 + x). The same thing (partial application as a function) has been done in Python and C++, but without the misleading name.
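In Common Lisp, which has neither curried functions nor Dylan's curry built in, the same operation is a two-liner. (The name partial is mine, chosen to stay neutral; it's not a standard function.)

```lisp
(defun partial (f &rest args)
  "Partial application as a function: fix the first arguments of F now,
   supply the rest later."
  (lambda (&rest more) (apply f (append args more))))

;; (funcall (partial #'+ 1) 2) => 3, like Dylan's curry(\+, 1)
```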

Some people don't like this. They consider the Dylan usage a reckless abuse of words. Early versions of SRFI-26 proposed calling partial application curry, to which Dave Mason said:

It would also be really nice at ICFP or other places where Schemers and Haskell types get together if the Schemers don't look like complete idiots when they misuse the term `curry' because of this SRFI's poor choice of name.

I used to think this was silly. Curried functions exist to make partial application easy, and other tools for partial application accomplish the same goal in a different way. (The practical difference is that they specify partial application at the call site, while curried functions specify it at the function's definition.) Why use two different words? Is there ever a time when the distinction matters?

There is: when you're designing a language, you have to think about these things. While considering a partial application feature in my pet lisp, I found myself needing to talk about several terse ways of obtaining partial application. Should I use curried functions? A currying function? A partial-apply function? A macro like op? I needed to talk about all of these at once.

But most users don't, and it's futile to try to force people to make distinctions they don't care about. They just want partial application, and if they get it from a function like Dylan's, they will inevitably adopt the short, distinctive name curry for it. The abuse of the term might be a problem in other contexts, but in a language without curried functions, it is not likely to confuse.

And since partial application is so often useful, I did decide to include it in my pet lisp. I used op, and built it into the Form Without A Name. That way it doesn't require a name of its own, so the unwieldiness of “partial application” is not a big problem. Which is a good thing, because I use it a lot. Terse partial application makes functional programming easier, which means it is used more often than its lambda equivalent. I'm glad I went to the trouble of adding it.

And I won't be upset when users call it “currying”.

Composition with custom composers

Here is an improved version of composition, with less unnecessary η-expansion, flexible user-defined operators (macros, actually), and a higher documentation-to-implementation ratio:

(defun eta (sym)
  "Eta-expand SYM to a lambda-expression, unless it's in CL and therefore unlikely to change."
  (if (eq (symbol-package sym) (find-package "CL"))
    `#',sym
    `(lambda (&rest args) (apply #',sym args))))

(defmacro composition (body)
  "Defines a function in an abbreviated combinator language.
   A symbol names a function. A list is a composition of functions.
   Normally functions are composed with B* (or H if there are more than two),
   but if the first one is a symbol with a COMPOSER property, that property
   is used instead - it may be a symbol to prefix, or a function returning
   a replacement form.
   For example, (COMPOSITION (= CAR 2)) expands to (H #'= #'CAR (K 2)),
   which is equivalent to (LAMBDA (X) (= (CAR X) 2)).
   See also the SAMEFRINGE and FACTORIAL examples."
  (labels ((translate (term)
             (typecase term
               (symbol (eta term))
               (cons (let ((composer (and (symbolp (car term))
                                          (get (car term) 'composer))))
                       (typecase composer
                         (null (cons (if (cddr term) 'h 'b*)
                                     (mapcar #'translate term)))
                         (symbol (cons composer (mapcar #'translate (cdr term))))
                         (function (funcall composer term)))))
               (t `(k ',term)))))
    (translate body)))

;;;Some composer definitions:
(setf (get 'and 'composer) 'hand)
(setf (get 'or 'composer) 'hor)
(setf (get 'if 'composer) 'hif)
(setf (get 'funcall 'composer) 'funcall)
(setf (get 'quote 'composer) (lambda (term) `(k ,term)))
(setf (get 'while 'composer) 'while)

(Why is the null branch of the typecase first instead of last? Because nil is a symbol in CL, so it can't go after the symbol clause. I've had the same problem before, because of this ugly bit of design.)

The new composition can handle the clearer version of rotate-right:

(defcomposition rotate-right
  (while (consp car) (cons caar (cons cdar cdr))))

as well as a slightly more complex standard example:

(defcomposition next (if oddp (+ (* 3 identity) 1) (/ identity 2)))

(defcomposition iter (reverse ((while (> car 1) (cons (next car) identity)) list)))

(defcomposition try (format 't "~&~S: ~S" identity (1- (length iter))))

composition is not too awkward to program in, at least for simple functions. On the other hand, I wanted to add (loop for i from 1 to n do (try i)), but there is no good way to say it. Iteration in general is often hard to write with combinators. (Actually it's often messy in any style, because complex loops are common and diverse, and they aren't easily composed from smaller parts.) Composition suffers from its inability to handle lexical nesting or multiple return values. Both of these features could be straightforwardly added with suitable composer definitions.

While I was working on composition, I had a bug: I wrote (symbolp term) instead of (symbolp (car term)). SBCL's type inference happily concluded that composer was always nil, and spit out six "deleting unreachable code" warnings (excuse me, "notes") like this:

; note: deleting unreachable code

As it happened, I found the problem right away once I knew it existed, but it could easily have been a mystery. Program checkers are nice and all, but they're more useful when their output is comprehensible.

Composition that works

My earlier implementation of composition without combinators was lacking in detail, and did not support recursion, even though samefringe requires it. Here is a more complete version in Common Lisp.

First, the combinators we're trying to hide:

(defun b* (f g)
  "Multiargument composition: apply G to every argument of F."
  (lambda (&rest args) (apply f (mapcar g args))))

(defun h (f &rest funs)
  "Multiargument composition: apply a different function to every argument of F.
   (The name is from 'high-order'.)"
  (lambda (&rest args)
    (apply f (mapcar (lambda (f) (apply f args)) funs))))

(defun and* (a b) "AND as a function." (and a b))

(defun hand (f g)
  "High-order AND (conjunction of functions)."
  (lambda (&rest args) (and (apply f args) (apply g args))))

(defun hor (f g)
  "High-order OR (disjunction of functions)."
  (lambda (&rest args) (or (apply f args) (apply g args))))

(defun hif (testf thenf &optional (elsef #'identity))
  "High-order if: conditional function."
  (lambda (&rest args) (if (apply testf args)
                           (apply thenf args)
                           (apply elsef args))))

(defun k (x) "The K combinator." (lambda (&rest ignore) x))

(defun while (test step)
  "Iteration combinator.
   Returns a function that iterates STEP until TEST returns false."
  (labels ((iterate (init)
             (if (funcall test init)
               (iterate (funcall step init))
               init)))
    #'iterate))

Now for the composition macro. This version η-expands symbols, to allow recursion. (This makes the expansion ugly, which could be mostly avoided by not expanding symbols from the CL package. Ideally, it should only η-expand symbols that aren't fbound or are being redefined now. But that would excessively complicate the example.) It also adds a funcall special form, to turn off composition, so you can usefully call combinators like while.

(defmacro composition (body)
  "Defines a function in an abbreviated combinator language.
   For example, (COMPOSITION (= CAR 2)) expands to
   (H #'= #'CAR (K 2)), which is equivalent to
   (LAMBDA (X) (= (CAR X) 2)).
   See also the SAMEFRINGE and FACTORIAL examples."
  (labels ((translate (term)
             (typecase term
               (symbol `(lambda (&rest args)
                          (apply #',term args)))
               (cons (case (car term)
                       (or `(hor ,@(mapcar #'translate (cdr term))))
                       (and `(hand ,@(mapcar #'translate (cdr term))))
                       (if `(hif ,@(mapcar #'translate (cdr term))))
                       (funcall `(,(car term) ,@(mapcar #'translate (cdr term))))
                       (quote `(k ',(cadr term)))
                       (t (cons (if (cddr term) 'h 'b*)
                                (mapcar #'translate term)))))
               (t `(k ',term)))))
    (translate body)))

(defmacro defcomposition (name exp &optional docstring)
  "Shorthand defining macro for COMPOSITION."
  `(progn (setf (symbol-function ',name) (composition ,exp))
          ,@(if docstring
                `((setf (documentation ',name 'function) ,docstring)))))

That's all we need to define samefringe and factorial:

(defcomposition rotate-right
  (funcall while (consp car) (cons caar (cons cdar cdr))))

(defcomposition samefringe (or eq (and (and* consp)
                                       ((and (eq car) (samefringe cdr)) rotate-right))))

;;Samefringe expands (η-expansion elided, except for the recursive SAMEFRINGE) to:
;; (HOR #'EQ
;;      (HAND (B* #'AND* #'CONSP)
;;            (B* (HAND (B* #'EQ #'CAR)
;;                      (B* (LAMBDA (&REST ARGS) (APPLY #'SAMEFRINGE ARGS))
;;                          #'CDR))
;;                #'ROTATE-RIGHT)))

(defun test-samefringe ()
  (assert (samefringe nil nil))
  (assert (not (samefringe '(1 2) '(1 . 2))))
  (assert (samefringe '(1 nil 2 3) '((1) (2 . 3))))
  (assert (not (samefringe '(1 2 3 (4 . 5) 6 . 7) '(1 2 3 4 5 6 7)))))

(defcomposition factorial (if zerop 1 (* identity (factorial 1-))))
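A quick check in the style of test-samefringe above (my addition, not one of the original examples):

```lisp
(defun test-factorial ()
  (assert (= (factorial 0) 1))
  (assert (= (factorial 5) 120)))
```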

This minilanguage is quite convenient, except for the long composition and defcomposition names. Presumably you'd shorten those names if you intended to use it much.

It would also be nice to be able to define new noncomposing operations like and. If while were recognized as one, it would make rotate-right even simpler:

(defcomposition rotate-right
  (while (consp car) (cons caar (cons cdar cdr))))

The game of carnivorous math

You know how sometimes something cool comes along and wows you with its novelty, and then you look at the date and realize it's eight months old? And everyone already knows about it but you? This is one of those times.

But just in case it's not, and you don't know about it yet: Alligator Eggs is a children's game about - well, read it and see.

OK. Read it? Now for the criticism.

1. Old alligators are a wart. They're a wart in the form of parentheses too, but they're even worse here. In either case, the problem is that they arise purely from syntax - there's no corresponding feature in the math; all they do is patch up an ambiguity in the surface representation.

There has to be a better way. Perhaps predator-prey pairs could be disambiguated by just putting a piece of paper underneath, to serve as a visual box and a way to move them. This could also help delimit very large families, which are easy to lose track of when playing with bits of paper. It's hard when your alligators don't magically change size to cover their nests.

2. After only a few η-adoptions and β-devourings, I missed combinators and global variables. I'm not sure they would make a better game (adding random features doesn't usually improve a game), so maybe this is just my programmerhood talking, but I think they would at least increase the range of concepts you can teach with the game. It shouldn't be hard to get players to give names to a few of the simpler alligator families. Indeed, I suspect they'd eagerly adopt them, because they save so much paper-shuffling. They could be represented as non-alligator animals - the ibis, who is replaced by whatever she eats; the kangaroo, who hops away and leaves his dinner in the care of an alligator; the ωombat, who feeds anything to itself; the kitten, who turns anything into an ibis; and so on. (How do kittens turn whole families of alligators into ibises? Well, have you ever seen what a kitten does to a board game? That's how.)

3. You could add non-carnivorous data like integers, and operators on them. But I have a better idea: despite my functional preference, I think side-effecting operators could be fun. A puzzle that does something is more interesting than one that only puzzles. State makes reduction order matter, but that's not a bad thing - the order exists anyway, so it might as well mean something. A suitable choice of initial configuration and state objectives offers the bizarre prospect of controlling a game by choosing evaluation order.

Another application of state is in multiplayer play. Suppose players take turns, each independently performing one operation on their own menagerie. This game becomes much more interesting if the players can interact - and stateful operators provide a natural way to do that. Operations could have side effects on the opponent's alligators - feeding them things, for example. But their power is restrained by the difficulty of handling them.

And you could draw animals from a deck of cards... Okay, this is getting too complicated. But the complexity does have one advantage: it makes it easier to not think too hard about the biology.

(Via Philip Wadler.)

An advantage of extensions

I usually don't like filenames with extensions. The file type is rarely worth the user's notice, and it obscures the much more important filename with an ugly irrelevance. But the other day I didn't have extensions, and missed them. I was sorting through files on an old machine, and many were of types I hadn't seen in years. This was on a Mac, so the types were indicated by icons. And I had trouble telling what the types were.

Icons, as a rule, are easy to recognize when they're familiar, but inscrutable when they're not. (The early Mac UI designers had a saying: "a word is worth a thousand pictures", to a naive user.) This applies to icons for file types too. It's easier to distinguish unfamiliar types by name than by picture, because you don't know what the pictures mean yet.

This isn't really an advantage of extensions - it's just an advantage of text over icons. Of course there is a textual filetype column in the Finder, but it's too far away from the filename, and often too verbose, to be easy to read. ("Application program"? "ClarisWorks text document"?) Back when I used that machine, I hadn't needed that column, so I had turned it off in most folders, so it wasn't there to help me. But it wouldn't have been as helpful as extensions if it had been there. Extensions have three advantages: they're terse, they're easy to learn, and they're close to where you're already looking.

Okay, four advantages: they're also easy to change.

But so ugly.

A glance is too far

Recently I tried Sauerbraten, a free first-person shooter. (It's like Quake, with an emphasis on network play and in-game map editing.) I got schooled, of course. And in addition to the lesson in how (not) to dodge, I got a lesson in user interfaces.

In any such game, your current hit points are one of the most important pieces of information. Your remaining ammunition is nearly as important. Therefore Sauerbraten shows both numbers in big numerals in the corners, where you can see them at a glance.

But a glance takes too long. Sauerbraten is a fast and twitchy game, and you can't afford to take your eyes off a fight, even for a fraction of a second. Looking at the number and back to the fight takes perhaps a third of a second, but that's enough time to lose track of an opponent, or to fatally delay your reaction to a shot.

So hit points, ammo, and other information needed in the heat of battle should be displayed in a form that can be read with peripheral vision, so you don't need to move your eyes. A large bar graph - at least half as tall as the screen - works fine.

One of the rules for presenting information more efficiently is to let the users navigate with their eyes, not their hands, because the eyes are faster. In general, it's faster to look at something that's already visible than to push a button to display it. An efficient interface minimizes the number of places the user's eye needs to look, just as it minimizes the number of operations their hands need to do. But in the real-time extreme of a game, even one glance is too many. The most urgent information needs to be visible without moving the eye, because the eye is too slow.

Speaking of emacs...

Today I played M-x dunnet for the first time in ten years. I'm surprised by how much I remembered. I remembered what the coconuts and the sauna do; I remembered where to look for the password to Pokey; I even remembered how to turn an essential item into a worthless pile of protoplasm.

I didn't remember the daze of thirsty little cabbages, all alike, but I remembered the problem it exhibits. Dunnet forces the player to guess far too much. In the maze, this is just tedious - you have to exhaustively try to move in every direction and draw a map. But there are also many deathtraps which are impossible to anticipate, and these inevitably force the player to restart the game, or to give up in frustration. (Once you die, there is no obvious way to restart the game without restarting the whole emacs!) Despite its cute bits, Dunnet is not much good as a game.

It is, however, good as a teaching tool. Ten years ago, I reacted to its frustration in the obvious way: I read the source for hints. (You might call that cheating, if you like text adventures more than I do.) I didn't know much Lisp at the time, so this was quite educational. I was well aware of this, and didn't mind, although the everything-is-a-list data structures were confusing. However, its effectiveness was limited for an ironic reason: the code is too clean.

Not that it's a shining example or anything - rooms are represented by a bunch of parallel lists in globals, for one thing - but data about rooms and such is mostly separated from code. So I was able to figure out most of what I wanted without having to understand much of the code. A similar game with less separation of concerns might be a better teacher, because it would force students to read more code. It would help to have more puzzles which depend on understanding the code, not just looking at literal strings for the answer. It would also be nice if early parts of the game had simpler, easier-to-read implementations, so the student could learn gradually.

By the way, here's an interesting use of eval. Dunnet uses eval to move object names from its data structures to its environment, so its code can refer to objects by name instead of by indexes into those parallel arrays:

(let (a)
  (setq a 0)
  (dolist (x dun-room-shorts)
    (eval (list 'defconst (intern x) a))
    (setq a (+ a 1))))

It's not very clean, but it is convenient. It would be more convenient if every name didn't have to have a dun- or obj- prefix, to avoid collisions. This is not a habit you want to be teaching students, but it's necessary, because Elisp doesn't have any kind of module system. That's something else to avoid in an educational version of Dunnet.

But keep the frustration, so players have a reason to look at the source. And by all means keep the unsettling leakage between different realities (in the game, I mean, not in the names of objects). That's what makes Dunnet fun. And it's definitely something you want to be teaching your students.

In which I am too stupid to use emacs

About a year ago, I got a new machine, installed Aquamacs, and wondered why my .emacs wasn't working. I checked the documentation: yes, it claimed it would load .emacs. I checked the file: yes, it had the right name, and its contents did indeed work when run. So I supposed the documentation was wrong and Aquamacs doesn't actually run .emacs. And I gave up and thought no more about it. After all, I start emacs about as often as I reboot the machine, so it's not that much trouble to load .emacs by hand.

I am stupid. Aquamacs displays the *Messages* buffer when it starts. For the last year, every single time, it contained an error:

eval-buffer: Symbol's function definition is void: slime-setup

And it gave up loading the file. This was the first thing in my .emacs:

(setq inferior-lisp-program "sbcl")

Of course it didn't work. Because, you know, Slime wasn't loaded yet.

In addition to being blind, I was blindly following someone's instructions, perhaps ones intended for some earlier version of Slime. It's not necessary to run slime-setup - just set inferior-lisp-program and then M-x slime when you want some CL. slime autoloads, and no setup is needed. It's almost simple enough that I could do it.

Why Lisp has eq

Visitors to Lisp from ML-land often ask: “Why does eq work on immutable objects? ‘The same object’ isn't meaningful without mutation!”

The reasoning goes like this:

  1. Lisp supports mutable data, because state is a useful tool (even if we don't like to use it too much). So identity is meaningful.
  2. Since values are usually represented as tagged pointers, identity comparison can be implemented as simple pointer comparison.
  3. This happens to work for immutable objects as well.
  4. The Lisp Way is to be permissive, so we won't disable eq in situations where it's not thought to be interesting. (Besides, it would take additional work to disable it.)

It turns out to be useful. For one thing, eq is very fast. We can arrange to make it equivalent to another predicate, without losing its speed, by interning the values: only keeping one of each set of equivalent values (as determined by the other predicate). This is how symbols work.
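Interning is easy to sketch with a hash table (the names here are my own; this is roughly how symbols and string pools are implemented):

```lisp
(defvar *interned* (make-hash-table :test #'equal))

(defun intern-value (x)
  "Return the canonical copy of X. EQUAL-equivalent values share
   one object afterwards, so plain EQ suffices to compare them."
  (or (gethash x *interned*)
      (setf (gethash x *interned*) x)))

;; (eq (intern-value (copy-seq "abc")) (intern-value (copy-seq "abc"))) => T
```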

eq is also the only general way to detect shared structure. This is essential for things like *print-circle*, which makes printing circular data safe.

eq is total: it works on everything. This makes it handy for implementing general-purpose library code. In particular, it's a good default method for equality predicates. It's also handy when you want to distinguish one value from all others, of any type.
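That last use is the familiar sentinel idiom (names here are my own):

```lisp
;; A fresh uninterned symbol is EQ to nothing but itself, so it can
;; mark "no entry" even when NIL is a legitimate stored value:
(defvar +none+ (gensym "NONE"))

(defun lookup (key table)
  (let ((v (gethash key table +none+)))
    (if (eq v +none+)
        (error "no entry for ~S" key)
        v)))
```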

Finally, object identity is a basic fact of implementations. It shouldn't be hidden without a good reason. Allowing programs to see it helps you learn to “feel the bits between your toes”. In addition to being fun, that's important to writing efficient code.

Yes, eq exposes a detail of representation which usually doesn't matter. But it's a harmless exposure. No program has a bug because it is possible to compare its values for identity; neither do any interesting properties of programs cease to be true. So we get a variety of uses for free.

That's why Lisp has eq.

Language-oriented programming

What do you call the style of programming in which the language is extended to support the program, rather than the other way round?

Paul Graham has suggested calling it bottom-up programming, on the grounds that one builds the language up toward the program, rather than translating the program down to the language. This hasn't caught on, because bottom-up order is not the point at all. Sure, programs that extend the language are often written bottom-up, especially for exploratory programming - but so are programs in other styles. And when I program by changing the language, I often do it top-down. I write the program I want to read, and then extend the language to make that pseudocode executable. (This may take several iterations until I reach the level the existing language understands.) So "bottom-up" is neither distinctive to the style nor an accurate description of it.

I call it language-oriented programming. This describes the essence of the style without misleading.

Several people have already used the term, as a little googling shows. Steve Heath has used it in roughly the same sense. M. P. Ward has used it for a related concept, programming using formally specified DSLs. A few years ago, Sergey Dmitriev also started using it for programming using DSLs. Helmut Leitner has used it to mean formalised naming conventions, but I can't see a useful idea in that. The LanguageOrientedProgramming page on the c2 wiki credits the term to M. P. Ward, but is a bit vague about the meaning, because they're trying to accommodate all of these senses.

None of these usages have caught on, so I have little concern about recycling the term. I have little expectation that my sense will catch on either, but it doesn't need to. Just as it's useful to extend a programming language to express a program, even though the extension will find no use outside that program, it's useful to extend a natural language to talk about programming, even if the extension won't be used outside a small circle. Expression, not standardisation, is what language-oriented programming is about.