iPhone 4 FaceTime/Infinite Jest mashup — June 7, 2010

iPhone 4 FaceTime/Infinite Jest mashup

Apple’s introduction of FaceTime, their videophone protocol in the forthcoming iPhone 4, reminds me of this great passage in David Foster Wallace’s [book: Infinite Jest]:

> (1) It turned out that there was something terribly stressful about visual telephone interfaces that hadn’t been stressful at all about voice-only interfaces. Videophone consumers seemed suddenly to realize that they’d been subject to an insidious but wholly marvelous delusion about conventional voice-only telephony. They’d never noticed it before, the delusion – it’s like it was so emotionally complex that it could be countenanced only in the context of its loss. Good old traditional audio-only phone conversations allowed you to presume that the person on the other end was paying complete attention to you while also permitting you not to have to pay anything even close to complete attention to her. A traditional aural-only conversation – utilizing a hand-held phone whose earpiece contained only 6 little pinholes but whose mouthpiece (rather significantly, it later seemed) contained (6²) or 36 little pinholes – let you enter a kind of highway-hypnotic semi-attentive fugue: while conversing, you could look around the room, doodle, fine-groom, peel tiny bits of dead skin away from your cuticles, compose phone-pad haiku, stir things on the stove; you could even carry on a whole separate additional sign-language-and-exaggerated-facial-expression type of conversation with people right there in the room with you, all while seeming to be right there attending closely to the voice on the phone. And yet – and this was the retrospectively marvelous part – even as you were dividing your attention between the phone call and all sorts of other idle little fuguelike activities, you were somehow never haunted by the suspicion that the person on the other end’s attention might be similarly divided. During a traditional call, e.g., as you let’s say performed a close tactile blemish-scan of your chin, you were in no way oppressed by the thought that your phonemate was perhaps also devoting a good percentage of her attention to a close tactile blemish-scan. It was an illusion and the illusion was aural and aurally supported: the phone-line’s other end’s voice was dense, tightly compressed, and vectored right into your ear, enabling you to imagine that the voice’s owner’s attention was similarly compressed and focused … even though your own attention was *not*, was the thing. This bilateral illusion of unilateral attention was almost infantilely gratifying from an emotional standpoint: you got to believe you were receiving somebody’s complete attention without having to return it. Regarded with the objectivity of hindsight, the illusion appears arational, almost literally fantastic: it would be like being able both to lie and to trust other people at the same time.

This is only the beginning of a several-pages-long discussion of why videophones (from the future-retrospective stance) failed. People notice first that they look really gross on camera. Then they get self-conscious, so they wear masks when they’re on their videophones. This makes them terrified to meet people in real life, because those people will discover that they’ve been lied to during their videophone chats. So people stay indoors. There are a few other steps in there that I forget (and Google Books is no help), but the end result is that society eventually makes one big coordinated move to drop its videophones.

(You really need to read [book: Infinite Jest]. It’s one of those books that everyone knows about but few read. You should be one of the few to read it. I reviewed it on Amazon back in 2001.)

By the way: I’d been considering switching to one of the new Android phones when my AT&T contract expires in August, but the new iPhone seals the deal for Apple.

jQuery and XMLHttpRequest objects — May 31, 2010

jQuery and XMLHttpRequest objects

I’m trying to find an answer to this, but the web has been remarkably unforthcoming. This includes the usually stellar Stack Overflow. Here’s what’s going on:

* I’ve gotten comfortable in jQuery over the past few months. It is awesome.
* I’ve just started playing with XMLHttpRequest objects (i.e., “Ajax”).
* jQuery has a few methods to help you do XMLHttpRequest calls, e.g., .get() and the lower-level .ajax().
* You can attach a success callback to a .get() request, which fires if the request succeeds. The arguments handed to the callback include the XMLHttpRequest object itself.
* XMLHttpRequest objects, according to the spec, have a getAllResponseHeaders() method.
* It would seem to follow, then, that the XMLHttpRequest object handed to the jQuery.get() and jQuery.ajax() callbacks would also have a getAllResponseHeaders() method. If it does, I can’t get Firebug to print its results; it only tells me “null.” Null is a kind of sadness. (There’s a sketch of the failing call after this list.)
* Other pages, including the Stack Overflow one that I mentioned, indicate that other people have had this same problem. But no one seems to have solved it. So I’m putting the description up here; when I solve it, I will contribute to the world’s stock of knowledge, and a great light will shine forth upon the land. Hosannas, etc.
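
For concreteness, here’s a minimal sketch of the kind of call I mean. The URL is just a placeholder; in recent versions of jQuery the success callback’s third argument is supposed to be the XMLHttpRequest object:

$.get('/some/page.html', function(data, textStatus, xhr) {
    // Per the XMLHttpRequest spec, this should return the whole header
    // block as one big string. In my testing, Firebug just prints "null".
    console.log(xhr.getAllResponseHeaders());
});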

Methods of measuring GDP — May 30, 2010

Methods of measuring GDP

A colleague the other day mentioned his annoyance with the Hans Rosling TED talks on global poverty. His annoyance generally stems from the talks’ treating developing nations’ GDP estimates as anything more than numerical hocus-pocus.

A few things I know basically nothing about:

* I really have no idea how hocus-pocusy these GDP estimates are.
* I also have no idea how hocus-pocusy the U.S.’s own GDP estimates are.
* Another thing I have no idea about is whether every year’s GDP estimates from a given country are mangled in the same way, so that (estimated GDP in year 2) minus (estimated GDP in year 1) is actually a reasonably accurate measure of year-over-year change in GDP.
* Per-capita GDP estimates might introduce another source of uncertainty, namely uncertainty in the population estimates. I likewise have no idea how accurate most nations’ population estimates are. And I have no idea whether per-capita-GDP estimates come from sampling individual people on their incomes, or estimating the country’s aggregate GDP and dividing by an estimate of the population.

I guess what I’d like, then, is a good introduction to the problems of measurement in countries with not-very-well-established economic-measurement systems — and for that matter, an introduction to how the U.S. statistical-measurement agencies do their work. Paging Chris Blattman.

The iPhone is not on the side of the angels — May 24, 2010

The iPhone is not on the side of the angels

One of the very infuriating things about reading John Gruber is his constant “Apple rocks, open-source sucks” mantra. If you didn’t know that constant refrain, it might seem as though he’s linking to “open is for losers” without comment; knowing Gruber, you know that he’s either doing it approvingly, or as a stick in the eye of his non-Apple-fanboy readership. He’s, honestly, a dick like that.

Paul Graham says in the linked piece that “Of course [he would invest in] iPhone. I’m talking about what I hope will set us free, not what will generate opportunities.” This is a perfectly sound point, and doesn’t take away from the fact that *the iPhone is an anti-freedom device.* I say this as a happy iPhone owner. Or rather, I say it as a *conflicted* iPhone owner: I realize that by using this device, I am harming the cause of freedom. But it’s also a spectacular piece of consumer technology.

The open-source movement has always treated software as speech: if it’s not free, it doesn’t matter how good it is. If all the books that you could read had to be personally vetted by Barack Obama, you’d never stand for it. Open-source advocates feel the same way about software that needs to pass through a censor first to make sure it doesn’t conflict with what Apple is trying to sell.

That said, I used Linux exclusively for years, and no longer use it as my everyday computing environment; I use a Mac. Macs and iPhones are designed with a level of polish that you don’t appreciate until you suddenly realize that your computing experience has been painless for the first time in decades — that everything works as it should, and that you’re actually giddy at your ability to experiment without fear.

So I’m conflicted. And I’m not going to take the (as it has always seemed to me) lazy way out and say “Do I contradict myself? / Very well then I contradict myself, / (I am large, I contain multitudes.)” I think one is actually obligated to bring one’s life into harmony with one’s principles, so long as one has principles. I’m the first to admit that I suck at doing this. But it’s a conflict, and it’s an *obvious* conflict: I believe in free speech, I believe that regulated speech is not speech worth having, and it’s obvious that Apple peddles regulated speech. Yet they make operating systems that are head and shoulders above everyone else’s, despite the fact that they’ve been *sitting out there*, just *begging* for someone to make a comparable interface. No one has. Surely Apple deserves to be rewarded for making the best product.

When my iPhone contract expires in September or October, I am seriously considering switching to an Android phone of some sort. Maybe an HTC Incredible, maybe a Nexus One, maybe something else. Before I do that, I will probably pick up an Android device purely for development purposes. It may turn out that I love Android devices, and the contradiction in my life melts away. I hope so, because in the meantime it is uncomfortable.

“Less thinking. More testing.” — May 22, 2010

“Less thinking. More testing.”

(__Attention conservation notice:__ nearly 2,000 words that start with test-driven development in software, skip along to application prototyping, then take a big leap to an attack on libertarianism.)

I’ve meant for a long while to write about Kent Beck’s [book: Test-Driven Development By Example], but — as you can see from this blog in general — I’ve had a lot less time to write recently. The book hasn’t yet changed my life, but it should, and it will. And I think the idea has far broader applicability than just software development, which I’ll try to get into below.

The basic premise of test-driven development is to write your tests before you write your code. The structure is like so:

1. Write a test asserting something about the code that you’ve not yet written. For instance, if you intend to write code computing the number of days between two dates, you might make a few assertions: that the number of days between a date and itself should be zero; that the number of days between March 1 of 2009 and March 1 of 2012 is one day more than three times 365; and so forth. (There’s a sketch of this after the list.) The more assertions you can make about this as-yet-unwritten code, the better.
2. Since you’ve not written the code, the assertions will fail. In fact, the code won’t even compile.
3. Write the simplest version of the code that will pass the test. Write this as quickly as possible.
4. Tests pass! Joy!
5. Refactor.
6. Having accomplished the task you were on, continue to bigger and better things: go back to step 1 and repeat until you’ve achieved whatever you were trying to do.

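Here’s a minimal sketch of steps 1 and 3 for the date example, in JavaScript. The daysBetween function and the console.assert harness are my own stand-ins, not anything from Beck’s book; any real test framework would do:

// Step 1: assertions written before daysBetween() exists.
function testDaysBetween() {
    console.assert(daysBetween(new Date(2009, 2, 1), new Date(2009, 2, 1)) === 0,
                   "a date minus itself should be zero days");
    console.assert(daysBetween(new Date(2009, 2, 1), new Date(2012, 2, 1)) === 3 * 365 + 1,
                   "March 1, 2009 to March 1, 2012 spans one leap day");
}

// Step 3: the simplest thing that could possibly pass, written only after
// the tests above exist. (Deliberately naive; refactoring comes later.)
function daysBetween(a, b) {
    var msPerDay = 24 * 60 * 60 * 1000;
    return Math.round((b - a) / msPerDay);
}

testDaysBetween(); // no assertion failures: joy!
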
The advantages of having unit tests are well known, and they’re best understood if you know what their absence is like. If you’re like me, you’ve worked before on code bases that had absolutely no tests, and the experience is terrifying. You can’t change one bit of code without worrying that you’ve broken something in some far-off part of the code. If you’re like me, this experience makes work actually unbearable: the more code you dip your fingers into, the wider the potential swath of destruction. Again if you’re like me, this can turn your stomach into a big ulcer, which actually makes it hard to sleep. On the wrong day, it can lead you to excessive caution, which keeps you from doing work. Which is bad and makes your bosses hate you. You want your bosses to love you, don’t you? Of course you do.

Imagine instead that the code is entirely covered by unit tests. (This nirvana is known as “100% code coverage.”) Now, if you change the code, you just run the tests. Do all the tests pass? Joy and rapture! You can keep changing code to your heart’s content. When you break a test, figure out why you broke it, fix it, confirm that the tests now all pass, and move on. Continue to add tests for all the code that you add. Again, if you’re like me, this gives you a feeling of calm and confidence, which makes you work faster, which makes your bosses like you more.

Of course, sometimes your code will break for reasons that you didn’t test against. This is unfortunate but expected. When this happens, add another unit test to guard against the heretofore-unanticipated case. In this way, the unit tests document your knowledge about the particular problem domain. If done well, people should be able to understand your code by reading the unit tests. A unit test can essentially be read as “the code is expected to respond like so when it encounters a world shaped like so.”

Striving for 100% code coverage leads you to write smaller functions, because it’s easier to write unit tests to cover a smaller, more-specialized function. This is a happy side-effect: smaller, more-specialized functions are a good thing, whether or not you’re writing unit tests.

Another way that test-driven development contributes to a fearless coding experience is that — per the title of this post — it encourages you to think less and code more. If you’re like me, you can get stuck inside your own mind, wondering whether the particular path you’re going down will work. The TDD approach is to move past this state of mind as fast as you can, by writing tests. Don’t speculate idly about whether your code will do what you expect; think about how it should respond to known inputs, then write code that responds appropriately to those inputs.

Any number of conclusions might come out of this testing discipline:

* your speculation proved correct; the code works.
* it proved incorrect, and you need to pursue another line of development.
* it proved partly correct, partly incorrect, and you need to course-correct.

This institutionalized course-correction is, I think, the greatest virtue of test-driven development. And it’s why some variant of test-driven development applies in much broader contexts.

Take one context that’s only slightly broader, namely the process of building an entire app from the ground up. We recently did this at work; the task for one of our sprints was to build a prototype of an app. I didn’t entirely understand what “prototype” meant going in, but now I think I get it. A few important aspects of prototyping stand out for me from this experience:

1. Build something with a terrible user interface but with the broad, rough structure of what we think users will want. Explicitly *do not* make it pretty. If you make it pretty, the users who beta-test it and the designers who make it pretty will focus on the visual details rather than on how it functions. In order to keep their eyes on the prize, write a computer program that is only one or two steps up from a sketch on paper.
2. Write a prototype that exercises the necessary backend code, like databases and API calls and so forth. You might find that your API calls take too long to return, and therefore can’t fit into the application that you’re building. Or you might find that your database doesn’t have indexes where it needs them. Or you might find that you need to restructure the entire app to work with APIs and backend databases that are beyond your control.
3. By putting the code in front of users, you might find that they don’t actually want the program that you envisioned. Or they want it, but they’d *really* want it if you just added a little something extra.

When I first mentioned “less thinking, more testing” to my friends, one friend raised the absolutely valid point that this approach doesn’t rid you of the need for design. That’s absolutely true. First of all, you need actual hypotheses to put in front of users; you can’t put a blank piece of paper in front of them and ask them to draw what they want. You need to focus their attention in a particular direction. When you’re building the backend architecture, you likewise aren’t starting from a blank slate.

But the point is *course-correction as quickly and as often as possible*. That fundamental message is why I think test-driven development and rough prototyping are applicable far beyond software development. It’s more than a little applicable to ideologies. Take, for instance, the recent kerfuffle over Rand Paul’s opposition to the Civil Rights Act. Matt Yglesias pulls on this thread and attacks the very idea that adhering to consistent principles even when they drive you off a cliff is somehow admirable. I completely agree. Consistency is a fine virtue, and a belief system that’s not consistent can’t be entirely true. But there are many virtues apart from consistency; among the greatest is non-insanity.

To keep your beliefs from veering off into the insane, you need to course-correct as often as possible. We’re not playing some game where the purpose is to start with reasonable-seeming principles and derive hilarious conclusions that clearly make no sense; the point is to build ethics that work for you in conducting your daily life, and to build policies that work for your countrymen. If it looks like you’ve built a chain of reasoning that led from obvious-seeming premises to ridiculous conclusions, you probably need to reconsider the premises. If Goldwater believed that opposition to civil-rights legislation was obviously right, then the principles were so much chin music to defend conclusions that he would have come to anyway; if, on the other hand, he adhered to those conclusions with great regret because he believed that the principles were correct, then he should have taken stock of how he arrived at his conclusions.

It’s not exactly new wisdom that blind adherence to principles can lead you astray. Holmes said as much in [book: The Common Law], whose opening words are

> The object of this book is to present a general view of the Common Law. To accomplish the task, other tools are needed besides logic. It is something to show that the consistency of a system requires a particular result, but it is not all. The life of the law has not been logic: it has been experience. … The law embodies the story of a nation’s development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics.

Earlier in my life, I thought it was very important which ethical principles one had. I thought consistency was the most important thing. (Blame college; maybe blame academia more generally? College is a great place to pick up ideas at an impressionable age and run with them until everyone smirks at you with the amusement of the non-cloistered.) I still think it’s important, but there are many more important things. Constantly clinging to reality is among the most important. Constant course-correction, with input from the real world at every branch, is extremely helpful at keeping you moored in reality.

This, by the way, is why I’ve never been able to get far into Nozick’s [book: Anarchy, State, and Utopia]. It always feels like a shell game: “Let’s suppose you believe some premises about liberty and side constraints … We’ll just shuffle around the shells a little bit and … here we go: clearly you must believe this thing about government non-intervention.” I no longer trust long chains of reasoning from seemingly self-evident low-level principles. I want principles nowadays that are closer to daily life, whence the jump from them to concrete action is smaller and less fraught with the potential for insanity.

That said, of course there’s room to move in the other direction: I tell you that I believe X and Y, and you reply that X and Y are only instances of Z. (With some fear of stretching an analogy too far, this is akin to refactoring.) So now I believe Z instead, which is a generalized version of X and Y. Or maybe you ask me whether I believe A; if I say yes, you point out that A contradicts X. If I agree with you that there’s a contradiction, I now have a choice: continue to believe X, or change my belief in X. I believe the endpoint of this process is what Rawls called reflective equilibrium.

And of course when you course-correct frequently, you still need principles. Principles help determine the path that you start down, and help determine which experiments to perform to correct your course. But the goal should be to experiment at every possible step.

Am I just calling for more use of the scientific method here? I think I am. It works at small scales like software, and it also works at large scales like philosophy.

svn commit “unable to lock” error under Mac OS X — May 3, 2010

svn commit “unable to lock” error under Mac OS X

This will be interesting to approximately none of you, but I feel I need to spread the knowledge of hell to a wider world.

I will perhaps go into greater detail at a future date about all the *other* problems preceding this one while I used svn (and Maven, and Eclipse) under OS X. They all essentially stem from the fact that OS X’s filesystem is case-insensitive. … Or, more precisely, case-insensitive but case-preserving: you can’t do

(16:41) slaniel@Steve-Laniels-MacBook-Pro:~$ mkdir test_dir
(16:42) slaniel@Steve-Laniels-MacBook-Pro:~$ mkdir test_Dir
mkdir: test_Dir: File exists

But you can do

(16:42) slaniel@Steve-Laniels-MacBook-Pro:~$ mv test_dir test_Dir
(16:42) slaniel@Steve-Laniels-MacBook-Pro:~$ rm -rf test_dir

Since it’s case-insensitive but case-preserving, it is *sometimes* — but not always — a problem to have both ‘foo’ and ‘Foo’ in the same svn checkout. More to the point here: what if you have ‘foo’ and want to rename it to ‘Foo’? You’d want to do `svn mv foo Foo`. There will be bequeathed unto you a sadness:

(16:45) slaniel@Steve-Laniels-MacBook-Pro:~/svn/sandbox/slaniel$ svn mkdir foo
A foo
(16:45) slaniel@Steve-Laniels-MacBook-Pro:~/svn/sandbox/slaniel$ svn mv foo Foo
svn: Unable to lock 'Foo'

The reason it can’t lock ‘Foo’ is that it already has ‘foo’ locked, and it thinks that ‘foo’ is the same as ‘Foo’. So it can’t move, in other words, because the filesystem is case-insensitive.

To redress this just now, I had to do something like

svn mv foo bar &&
svn ci -m "Temporarily moving foo to bar" &&
svn mv bar Foo &&
svn ci -m "Moving bar to Foo"

You need to do the commit after each move; you can’t just do

svn mv foo bar && svn mv bar Foo

Maybe all of this was obvious to all OS X svn users other than me. I assure you that it was *not* obvious to me.

Mac filesystem case-insensitivity just wasted an inordinate quantity of time and money from some of my company’s smartest engineers. I am displeased. Perhaps this post will save someone else some time in the future.

At a later date, after I’ve actually completed some work, perhaps I will explain all the *other* badness that resulted from this case-insensitivity.

A Subversion problem: who was first responsible for an errant line? — May 2, 2010

A Subversion problem: who was first responsible for an errant line?

Suppose you want to find the first Subversion checkin where a particular string appeared. `svn blame [filename]` gets you some distance toward that goal, but it doesn’t entirely work: `svn blame` will tell you the person *most recently* responsible for tweaking the particular line of code where that string appears. If someone came along between when that line was introduced and now and, say, changed all Unix line endings to DOS ones, `svn blame` will suggest that the interloper is the one responsible for that line.

So what you want is the *first* revision in which that string appeared. So far as I can tell, there’s no built-in svn command to give you this information. This shell-scripting business is the best I could come up with:

#!/bin/bash
string=$1
filename=$2

if test -z "$string"; then
    echo "Must supply a string to search for"
    exit 1
fi

if test -z "$filename"; then
    echo "Must supply a filename to search"
    exit 1
fi

# Get all svn revision numbers in which $filename
# was involved, in ascending order.
all_rev_nums=$(svn log "$filename" \
    | grep -Eo '^r[0-9]+' \
    | grep -Eo '[0-9]+$' \
    | sort -n)

for revnum in $all_rev_nums; do
    if svn cat -r "$revnum" "$filename" | grep -q "$string"; then
        svn blame -r "$revnum" "$filename" | grep "$string"
        # Since they're in ascending order, we've found
        # the first one. So we can quit now.
        exit 0
    fi
done

# If we never found $string in any revision
# of $filename, return an error.
exit 1
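
If you save that as, say, first-blame.sh, usage looks like this (the script name, the string, and the filename below are all placeholders):

chmod +x first-blame.sh
./first-blame.sh some_errant_string path/to/file.c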

Tom Slee slaps down some new-economy nonsense — May 1, 2010

Tom Slee slaps down some new-economy nonsense

Tom Slee is a fabulous author; his [book: No One Makes You Shop At Wal-Mart] is one of my favorite books of the last five years. He’s made a second career (which maybe he’ll turn into a book? I’d buy it) out of dispensing with a lot of new-economy nonsense; his latest salvo is against Clay Shirky. Shirky is a fine, provocative writer, but his love of technology leads him to some silly techno-idealism. Slee looks at the abstract structure of a Shirkian argument; it turns out that a lot of “Web 2.0”-inspired authors follow the same structure.

Slee’s series of responses to Chris Anderson’s “long tail” idea is in the same vein. I’d use the word “contrarian” for these, if I didn’t think that word had been sullied by Christopher Hitchens, and if I didn’t think it implied opposition for the sake of opposition. In a lot of the Web 2.0 nonsense, it would be hugely instructive for the Web 2.0 folks to be forced to argue the contrary of whatever it is that they’re claiming at that moment. Argue that “social media” won’t actually have any world-changing effect on anything. Argue that blockbusters will have just as much of a place in the 21st-century economy as they did in the 20th-century economy, and that those folks living on the long tail will have just as much trouble making a living as they ever have. Argue that the structure of a lot of economic processes is a classical arms race: my side adopts some new technology and temporarily moves ahead, but eventually your side does the same thing; the net effect is a wash. (You could have predicted, similarly, that even if sabermetrics was a valuable technology and initially helped teams with small budgets, its value to those teams would eventually disappear as the Yankees caught wind of it.) Argue that while the Internet makes distributed teams more feasible and reduces transaction costs, and so might temporarily help small businesses, it will eventually be adopted by large companies as well. And so forth. I’d love to see the Shirkys of the world forced to write books arguing these positions with all the passion that they apply to their chosen viewpoints.

The trouble may be that the incentives are all wrong. It is much sexier to argue that some new flavor of the month will change the world than to argue that the world of the future will look a lot like the world of today. When everyone around you is swept up in talk of Twitter and FourSquare, you’re likely to do better if you assert with everyone else that these are the waves of the future. You will be invited to conferences; you will be asked to write books. Likewise, newspapers and policymakers will do a lot better if they talk about something sexy like terrorism than if they pledge to end 600,000 annual heart-disease deaths. The old and stable and known is boring, though it may well be true.

Perhaps Slee and I should team up and write a book about all this nonsense. Or maybe we should both sell out and write a book about how FourSquare Will Change Everything. It will sell reasonably well and earn us both a decent middle-class income, which we can then convert into a second book entitled [book: Ha Ha We Were Just Kidding, Or: Your Latest Technology Idea Sucks].

Exporting an MP3 mix from iTunes — April 24, 2010

Exporting an MP3 mix from iTunes

Suppose you make an MP3 mix for someone in iTunes. It’s easy enough to export the *list* of songs in that mix. And it’s easy enough to burn those songs to a CD. But it seems nontrivial to export *the actual MP3s*. Am I just not seeing an obvious way to do this?

Using a little shell-scripting, you can do this fairly easily:

#!/bin/bash
m3u_file=$1
export_dir=$2

function check_for_executables() {
    # Paths to executables
    TEST=/bin/test
    MKDIR=/bin/mkdir
    GREP=/usr/bin/grep
    CP=/bin/cp
    BASENAME=/usr/bin/basename

    bad_execs=""
    # Test that all executables are there
    for executable in $TEST $MKDIR $GREP $CP $BASENAME; do
        if ! $TEST -x $executable; then
            bad_execs="$bad_execs $executable"
        fi
    done

    if ! $TEST -z "$bad_execs"; then
        echo "Some required executables are missing: $bad_execs"
        exit 1
    fi
}

function process_args() {
    if $TEST -z "$m3u_file"; then
        echo "Must give the path to an M3U file."
        exit 1
    fi

    if $TEST -z "$export_dir"; then
        export_dir=$(pwd)
    fi

    echo "Exporting to $export_dir"

    if ! $TEST -d "$export_dir"; then
        echo "$export_dir does not exist or is not a directory; trying to create it"
        if ! $MKDIR -p "$export_dir"; then
            echo "Failed to create export directory $export_dir; quitting."
            exit 1
        fi
    fi
}

function export_playlist() {
    i=0
    cat "$m3u_file" | while IFS= read -r line; do
        # Strip a trailing carriage return, in case the playlist
        # was written with DOS line endings.
        line=${line%$'\r'}
        # Skip comment lines and blank lines
        if echo "$line" | $GREP -Eq '^#|^[[:space:]]*$'; then
            continue
        fi
        let i=i+1

        filename=$(printf '%02d' $i)_$($BASENAME "$line")
        export_path=${export_dir}/$filename
        $CP "$line" "$export_path"
    done
}

function main() {
    check_for_executables
    process_args
    export_playlist
}

main

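To run it, you need the mix as an M3U file (a plain list of paths to the MP3s, one per line, with optional # comment lines). Assuming you’ve saved the script as export_mix.sh — the script name, playlist path, and destination below are just placeholders:

chmod +x export_mix.sh
./export_mix.sh ~/Desktop/my_mix.m3u ~/Desktop/mix_export
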
(Obvious improvements:

1. The `printf` statement prepends each track’s filename with its track number in the mix, and assumes that there are fewer than 100 songs in your mix. Have the script figure out the number of tracks in the mix, then compute the appropriate number of digits from that. The appropriate number of digits would be something like floor(log10(n))+1, where n is the number of tracks in the mix and floor(x) is the greatest integer that’s no larger than x. (There’s a sketch of this after the list.)
2. If the script is longer than about 10 lines, contains nontrivial logic, could usefully stand to compute logarithms, and can be meaningfully decomposed into functions, shell is no longer an appropriate language; use something like Perl or Python instead.

Exercises for the reader.)
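
A minimal sketch of improvement 1, assuming the track count has already been counted into a variable n (which the script above doesn’t currently compute): the number of characters in n is exactly floor(log10(n))+1, so you can skip the logarithm entirely.

# Zero-pad track numbers to the width of the largest track number.
# ${#n} is the number of characters in $n, i.e. floor(log10(n)) + 1.
digits=${#n}
filename=$(printf "%0${digits}d" $i)_$($BASENAME "$line")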

Google to acquire my former employer for $1 billion? — April 21, 2010

Google to acquire my former employer for $1 billion?

If so, then holy fucking shit. Seriously.

A while back, I mentioned that Google was now in the business of giving multimodal travel directions: MBTA, say, to mass-transit system to mass-transit system to Amtrak. I mentioned that it would only be a matter of time before they’d connect up to airlines, etc.

What I didn’t mention there, but have always believed, is that the airline piece is something that only ITA could handle. They’ve been working on the problem of finding the cheapest flight between two cities for their entire history; if Google wanted to add airlines to its route-finding software, it would either have to reinvent what ITA did, or acquire ITA. Given that it took ITA the better part of a decade, and a team of the smartest people you could find, to solve this problem, it’s always been obvious that Google would acquire ITA rather than build the technology itself.

ITA is sitting on the best kind of monopoly you can hope for: they’ve solved a problem that no one else can solve. They deserve any success that comes their way. And they can name their price if Google comes knocking. If Google decides not to buy, but they want to add air travel to their software, they’ll have to spend at least five years trying to do it. They’ll probably have to poach large numbers of ITA employees. They’ll need to hire people away from the airlines themselves. I’m no business strategist, but it certainly seems like ITA has them over a barrel here.

I, for one, bow deeply in the direction of ITA’s headquarters on Portland Street in Cambridge. If this works out the way it’s looking, then congratu-fucking-lations to you folks.

I’m still kind of in shock, even though this all makes perfect sense.