In math, notations are designed to make statements about the problem domain concise. Once you pass a certain degree of concision, longer names impede readability rather than enhancing it. That is because the ability to take in an entire complex expression or subexpression at a glance tells you things—and lets you see patterns—that wouldn't be as apparent if longer names were used. Programmers in the APL tradition understand this, but most programmers do not. (Many refuse to believe it's possible when they hear about it!)
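
A toy example, just to illustrate the tradeoff (nothing deep, and purely my own invention): the two Python functions below compute the same correlation coefficient. In the short-named version the whole formula, and the symmetry between the two arguments, is visible at a glance; in the long-named version you have to read token by token to recover the same structure.

    import numpy as np

    def corr(x, y):
        # center both vectors, then take the cosine of the angle between them
        xc, yc = x - x.mean(), y - y.mean()
        return xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))

    def correlation_coefficient(observed_values, predicted_values):
        centered_observed_values = observed_values - observed_values.mean()
        centered_predicted_values = predicted_values - predicted_values.mean()
        numerator = centered_observed_values @ centered_predicted_values
        denominator = (np.linalg.norm(centered_observed_values)
                       * np.linalg.norm(centered_predicted_values))
        return numerator / denominator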

In software, programmers have grown accustomed to a notion of readability that derives from large, complicated codebases where, unless you have constantly repeated reminders of what is going on at the lowest levels (i.e. long descriptive names), there is no hope of understanding the program. In such a system, long descriptive names are the breadcrumbs without which you would be lost in the forest. But that is not true of all software; rather, it's an artifact of the irregularity and complexity of most large systems. It's far less true of concise programs that are regular and well-defined in their macro structure.

In the latter kind of system, there's a different tradeoff: macro-readability (the ability to take in complex expressions or subprograms at a glance) becomes possible, and it turns out to be more valuable than micro-readability (spelling out everything at the lowest levels with long names).

It also turns out that consistent naming conventions give you back most of what you lose by trading away micro-readability, and consistent naming conventions are possible in small, dense codebases. That of course is also how math is written: without consistent naming conventions and symmetries carefully chosen and enforced, mathematical writing would be less intelligible.
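
To make that concrete, here's a sketch (in Python, with conventions I just made up for the example) of what a small, dense codebase can do: state the convention once, and then every short name means the same thing everywhere, so the reader pays the naming cost a single time rather than on every line.

    # Conventions used throughout this (hypothetical) module:
    #   xs, ys : sequences     f, g : functions
    #   n, m   : lengths       i, j : indices

    def scan(f, xs):
        # running fold: ys[i] = f(ys[i-1], xs[i]); assumes xs is non-empty
        ys = [xs[0]]
        for x in xs[1:]:
            ys.append(f(ys[-1], x))
        return ys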

Edit: The fact that readability without descriptive names is widely thought to be impossible is probably because of how little progress we've made so far in developing good notations, and tools for developing good notations, in software. This may not be so hard to understand: it took many centuries to develop the standard mathematical notations and good ways of inventing new ones to suit new problems. Mathematics is the most advanced culture we have in this respect, and in computing we're arguably still just beginning to retrace those steps. If we wrote math the way we write software, mathematics as we know it wouldn't be possible.

Edit 2: The best thing on this is Whitehead's astonishingly sophisticated 1911 piece on the importance of good notation: http://introtologic.info/AboutLogicsite/whitehead%20Good%20N.... If you read it and translate what he's saying to programming, you can glimpse a form of software that would make what people today call "readable code" seem as primitive as mathematics before the advent of decimal numbers seems to us. The descriptive names that people today consider necessary for good code are examples of what Whitehead calls "operations of thought"—laborious mental operations that consume too much of our limited brainpower—which he contrasts to good notations that "relieve the brain of unnecessary work".

Applying Whitehead's argument to software suggests that we'll need to let go of descriptive names at the lowest levels in order to write more powerful programs than we can write today. But that doesn't mean writing software like we do now, only without descriptive names; it means developing better notations that let us do without them. Such a breakthrough will probably come from some weird margin, not from mainstream work in software, for the same reason that commerce done in Roman numerals didn't produce decimal numbers.



You're buying into a false dichotomy. Descriptive names should always exist. If you think there are too many characters, then by all means apply a transformation on your personal copy to whatever symbols you prefer, but don't deprive everyone else of valuable context.


Dang, that was a good comment. Thanks.


> If you read it and translate what he's saying to programming, you can glimpse a form of software that would make what people today call "readable code" seem as primitive as mathematics before the advent of decimal numbers seems to us.

This is an extraordinary (and enticing and often advocated) claim that has, so far, failed to produce the extraordinary evidence. It says something that a person as concerned with notation as Knuth used mathematical notation for the analysis of algorithms and a primitive imperative machine language to describe behaviour.


I see no connection here to what I wrote, which has nothing to do with functional vs. imperative programming. I'm talking about names and readability in code.

Imperativeness is a separate matter. One can easily have it without longDescriptiveNames, and although I don't have Knuth handy, I imagine he did.
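
For instance, a perfectly imperative Euclid's algorithm needs nothing long (my sketch in Python, not Knuth's text; if I remember right, TAOCP's Algorithm E uses only m, n, and r):

    def gcd(m, n):
        # imperative, destructive updates, no long names in sight
        while n != 0:
            m, n = n, m % n   # replace (m, n) by (n, m mod n)
        return m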


At first read, the idea you propose is very attractive, but I think you do need to address why APL didn't take off. Perhaps they chose a poor vocabulary. Are there better ways to represent algorithms?


I'm sorry I didn't reply to this during the conversation, but am traveling this week. IMO the short answer is that questions like "why didn't APL take off" presuppose an orderliness to history that doesn't really exist. Plenty of historical factors (e.g. market dynamics) can intervene to prevent an idea from taking off. Presumably if an idea is really superior it will be rediscovered many times in multiple forms, and one of them will eventually spark.



