Google to acquire my former employer for $1 billion? — April 21, 2010

Google to acquire my former employer for $1 billion?

If so, then holy fucking shit. Seriously.

A while back, I mentioned that Google was now in the business of giving multimodal travel directions: MBTA, say, to one mass-transit system, to another, to Amtrak. I also mentioned that it would only be a matter of time before they connected up to airlines and the like.

What I didn’t mention there, but have always believed, is that the airline piece is something that only ITA could handle. They’ve been working on the problem of finding the cheapest flight between two cities for their entire history; if Google wanted to add airlines to its route-finding software, it would either have to reinvent what ITA did, or acquire ITA. Given that it took ITA the better part of a decade, and a team of the smartest people you could find, to solve this problem, it’s always been obvious that Google would buy rather than build.

ITA is sitting on the best kind of monopoly you can hope for: they’ve solved a problem that no one else can solve. They deserve any success that comes their way. And they can name their price if Google comes knocking. If Google decides not to buy, but they want to add air travel to their software, they’ll have to spend at least five years trying to do it. They’ll probably have to poach large numbers of ITA employees. They’ll need to hire people away from the airlines themselves. I’m no business strategist, but it certainly seems like ITA has them over a barrel here.

I, for one, bow deeply in the direction of ITA’s headquarters on Portland Street in Cambridge. If this works out the way it’s looking, then congratu-fucking-lations to you folks.

I’m still kind of in shock, even though this all makes perfect sense.

A self-taught master’s in CS — April 3, 2010

A self-taught master’s in CS

Perhaps this is a far-fetched idea, but let’s toss it out anyway: I’ve wanted for a long time to get a master’s degree in computer science, but I probably don’t have time in my schedule for one. I don’t have time because, as it is, I routinely work at my job until 8 or 9 at night. As I get more proficient at what I do, I expect I’ll work less, but at least for a while there’s just no way that I could do a job *and* get a master’s *and* be a reasonably good boyfriend *and* take regular trips up to New Hampshire to spend time with my girlfriend’s kids. Oh, *and* sleep.

So. That having been determined, I should spend the next few months/years building up master’s-level proficiency on my own: reading books, watching videos, and writing a lot of code. A good master’s program will include a lot of theory as well, which means a lot of math. Historically, I’ve not been very good at learning math at an abstract level — but if I can code it up (e.g., writing a crypto algorithm), I can probably internalize it quickly.
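For concreteness, here’s the sort of exercise I have in mind: a toy, textbook-RSA sketch in Python. (The parameters and names are mine, purely for illustration; real RSA uses primes hundreds of digits long.)

# Toy, textbook RSA: the "code it up to internalize the math" idea.
p, q = 61, 53
n = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient of n: 3120
e = 17                     # public exponent, coprime to phi

def modinv(a, m):
    """Extended Euclid: return d such that (a * d) % m == 1."""
    r0, r1, t0, t1 = m, a, 0, 1
    while r1:
        quot = r0 // r1
        r0, r1 = r1, r0 - quot * r1
        t0, t1 = t1, t0 - quot * t1
    return t0 % m

d = modinv(e, phi)         # private exponent: 2753

msg = 42
cipher = pow(msg, e, n)            # encrypt: msg^e mod n
assert pow(cipher, d, n) == msg    # decrypting recovers the message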

Can anyone recommend a curriculum for my self-taught computer-science master’s degree? Recommend books? Recommend the sort of programming projects that one would encounter in a good master’s program?

The iPhone is a gateway Apple product —

The iPhone is a gateway Apple product

I bought an iPhone a year and a half ago. This made me really want to develop an iPhone app — something I’ve not yet done, but which I intend to start very soon. [1].

Now, the thing about the iPhone’s UI, which I don’t think you can fully grasp until you’ve actually used one, is that nearly everything — at every scale — works as it should. There is not a single sharp corner left in the product; everything has been rounded for your pleasure. The first place you see this is when you scroll to the end of a long list. When you hit the end, the iPhone doesn’t jarringly stop there. Instead, it bounces you, as though you just ran into a rubber wall on the way to catch a fly ball in the outfield.

It turns out that there is *nothing* jarring in the iPhone. If you’re rocking out to some tunes and a call comes in, the iPhone doesn’t just turn off your music and start ringing. Instead it gradually fades out the music, then starts ringing. When you’re done with the call, it gradually fades the music back in. Nothing about the product will ever put you ill at ease. That’s why I say you have to use it to understand this: in principle, written out like that, it seems like most products should and do behave like that, right? But they don’t. Getting all the details right — every detail, at every scale — is apparently so difficult that virtually no one does it. You really don’t notice how rare it is until you find yourself absolutely pleased with the iPhone, in a way that you’ve never been pleased with a piece of consumer technology before.

Having decided to do iPhone development, my terrific employer very graciously offered to buy me a MacBook Pro. The first couple days were a little difficult for me: I do a lot of command-line stuff, so I needed to get MacPorts or Fink going. And I had to get used to all the Mac OS X keyboard shortcuts.

With that out of the way, I fell into the same feeling of comfort with OS X that I have with the iPhone. The first step was realizing that every piece of the Mac UI is exactly as it should be. The second step, having gained such confidence in Apple’s UI design, was to ponder how I would do something in OS X, then ask, “If everything worked as it should, how would I perform this task?” It turns out, uniformly, that OS X’s UI always behaves the way it should. This gives you the confidence, as a friend pointed out to me last night, to go forth and try new things, and to really engage with the product in a way that you wouldn’t with some (forgive me) Microsoft piece of shit.

The first time this really took hold for me was when I asked whether I could plug my iPhone headset into my MacBook Pro and use it there the way I use it on my iPhone. For those who don’t know, the iPhone headset has a little clicky piece that performs two functions: it’s the microphone through which you carry out phone conversations, and it’s a control device for iTunes on the phone. Click the microphone once to pause the music you’re listening to or hang up the phone call you’re on; click it twice to advance to the next track. (It probably does other things as well.) If Apple designed its products the way you should expect (but which you’ve come *not* to expect from any consumer-electronics company), you should be able to pause iTunes on the laptop, advance to the next track and so forth using the iPhone headset. It turns out that you can do exactly that. And I’m not even using an Apple-manufactured headset: I’m using an incredible pair of Sennheiser MM 50 earbuds. It must just be that Apple requires single clicks to issue a certain signal and double-clicks to issue another, which both the iPhone and the MacBook Pro are programmed to respond to in the same way (namely firing off an event in iTunes). I don’t trust any other company to manage this amount of integration.

Stephanie and I discovered yesterday that there’s an app called Keynote Remote that lets you control Apple’s presentation software from your iPhone via WiFi. This is integration that everyone can use, and of course it helps Apple: the more Apple products you buy, the more value you get out of any one of them.

I’m probably going to buy a Time Capsule, which (so I gather) is so thoroughly integrated with OS X that you never even have to think about backing up; it just does it automatically. I gather that you could use other remote-backup devices in place of a Time Capsule (I believe the Time Machine software works with any number of devices), but — again — experience shows that Apple has integrated its devices spectacularly well; why would I want to use anyone else’s? Yes, I know that this is Kool-Aid drinking, but it’s Kool-Aid drinking based on a lot of positive experiences.

The final step in Apple fanboydom is to proselytize, which I unashamedly do now. But it’s proselytizing to those who could actually get a lot out of the product. Take my girlfriend, for instance (not literally; I enjoy dating her). She needed to make a movie on her ThinkPad, running Vista, so she used the built-in Windows Movie Maker. She spent a large fraction of a day trying to convert the QuickTime-formatted movie that nearly every point-and-shoot camera generates into something that Movie Maker could process. Having never used iMovie, I nonetheless knew its reputation as the product you use when you want to make movies. So I brought the MacBook Pro up to New Hampshire one night, we plugged her camera into the USB port, and within a minute she was editing video. 24 hours later, she had bought a MacBook Pro of her own. People want to get shit done; they don’t care that Movie Maker lacks QuickTime support because Microsoft wants to screw one of its competitors.

As a friend pointed out: Apple knows that you want to look cool. Even if Microsoft had made it easy to import .MOV files into Movie Maker, you know that it would have botched the execution after that; you would not look cool when it was done. It would offer you “wizards,” which would lead to very boring videos resembling animated PowerPoint. And those wizards would somehow, miraculously, not make your life any easier. They’d be a needless abstraction piled on top of a crummy user experience. Apple would fix the user experience so you wouldn’t *need* the wizard.

A coworker was giving me some good-natured ribbing the other day about using a Mac. He, like me, grew up during the Mac-versus-Windows wars of the early 90s. News flash: those wars are over, and the Mac has unquestionably won. I would be shocked if anyone who’s considered the matter actually believed that Windows was more usable, or more technically well-assembled, than OS X. (Though I’m fairly certain that Windows is easier to manage for enterprise installations than either OS X or Linux. But that’s not the realm that my coworker was arguing in.)

If there is still an OS battle going on, it is Linux-versus-Mac. But that battle has nothing to do with UI; again, no one could seriously assert that Linux’s UI is better than Apple’s. If there’s a Linux-Mac battle, it’s a battle over the open Linux model versus the closed OS X model. Windows is not seriously in competition with OS X for its end-user experience; it succeeds because it has succeeded. Windows is the Martha Coakley of operating systems: you hold your nose and use it because you have to, not because you want to.

[1] — One thing I’ve realized about my work style, and maybe about work styles more generally, is that I need to get something utterly trivial but functional done ASAP, and can move from there to getting something real working. As of now, I know nothing about iPhone development, so the field seems vast and intimidating. The point is to kill that feeling of intimidation as fast as I can. The way to kill it is to just get something, anything, done in the platform so that it no longer seems beyond my grasp. Had I used this technique in college, I think I would be a far better mathematician than I am.

Three progressively better ways to generate a random string in Python — March 23, 2010

Three progressively better ways to generate a random string in Python

Version 1 here is what I started with. Version 2 comes from a colleague, version 3 from another colleague. Version 4 just makes things a little more succinct.

I enjoyed watching this simple function get cleaner as the days went by.

#!/usr/bin/python
import random
import string

def random_str_1(n):
    """
    Generate a pseudorandom string of
    length n.
    """
    out_str = ""
    letters = ['a','b','c','d','e','f','g','h','i','j','k',
               'l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']
    for i in xrange(n):
        out_str += random.choice(letters)
    return out_str

def random_str_2(n):
    """
    An improvement, using the fact that Python
    strings are sequences of characters, so
    random.choice can pick from them directly.
    """
    out_str = ""
    letters = 'abcdefghijklmnopqrstuvwxyz'
    for i in xrange(n):
        out_str += random.choice(letters)
    return out_str

def random_str_3(n):
    """
    A further improvement, using an existing
    library attribute.
    """
    out_str = ""
    for i in xrange(n):
        out_str += random.choice(string.ascii_lowercase)
    return out_str

def random_str_4(n):
    """
    Adding a bit of concision.
    """
    return "".join([random.choice(string.ascii_lowercase)
                    for x in xrange(n)])

def test_all():
    """
    Not really much of a unit test. Just confirms that
    each of the generated strings is as long as specified.
    Doesn't test that strings are sufficiently random.
    """
    methods = [random_str_1, random_str_2, random_str_3, random_str_4]
    for n in xrange(40):
        for method in methods:
            out_str = method(n)
            assert len(out_str) == n

An update on Diamond DVI-to-USB adapters and Belkin USB hubs — March 12, 2010

An update on Diamond DVI-to-USB adapters and Belkin USB hubs

All is not rosy in the land of multi-monitor MacBook Pros. As I mentioned there, I’m driving two large external monitors over USB, using Diamond adapters to connect the monitors’ DVI cables to USB; the USB plugs run into a Belkin USB hub, which runs into a single USB port on the side of the MacBook Pro with which my employer generously supplied me. (I will give you guys an iPhone app very soon; promise.) The dream, then, is that I can run a bunch of other USB devices off the hub as well: iPhone, mouse, camera, etc.

It sadly hasn’t worked out that well, for reasons that illustrious Stevereads commentator mrz explained in comments to that post:

1. There’s just not enough bandwidth in USB — much less in a USB hub, which has to split one USB port’s worth of bandwidth across seven devices — to drive a high-resolution monitor, let alone two (see the back-of-envelope arithmetic after this list). My monitors would periodically slow to a crawl, slowly repainting the screen from top to bottom. At that point I had to unplug the USB hub so that OS X would shift everything onto the built-in monitor; once it did that, the speed returned to where it should have been.

2. The USB hub — possibly because of item 1 — has died. None of the components plugged into it work, individually or together. When I unplug any of them — say, the mouse or a monitor — and plug them directly into the MacBook Pro, they return to life.
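To put rough numbers on item 1, here’s a back-of-envelope sketch. It assumes one 1920×1200 monitor refreshing at 60 Hz (my numbers, not mrz’s):

# Back-of-envelope: uncompressed bandwidth for one external monitor.
width, height = 1920, 1200   # assumed resolution
bits_per_pixel = 24
refresh_hz = 60

needed = width * height * bits_per_pixel * refresh_hz   # bits/sec
usb2_bus = 480 * 10**6       # the entire USB 2.0 bus, bits/sec

print "need %.1f Gbit/s; USB 2.0 offers %.2f Gbit/s" % (
    needed / 1e9, usb2_bus / 1e9)
# need 3.3 Gbit/s; USB 2.0 offers 0.48 Gbit/s

The adapters presumably only work at all because they compress what they send, which is why big screen updates crawl; and the hub splits that 0.48 Gbit/s across everything plugged into it.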

I can live with item 1: sure, I move windows around, and the rendering doesn’t really keep up with the movement, but it’s better by far to have two slow monitors than only a built-in MacBook Pro screen. Obviously I can’t live with item 2: I can’t stand to have a hub die after only a few days of use.

I tried to call Belkin support, but it’s another Indian call center. I find few things more disheartening than finding Indian tech support on the other end of the call; it speaks of a tech company that wants to save money (hmm: flimsy, cheap product?) and doesn’t care at all about helping its customers.

I may try to find another, better, more reliable USB hub, but the Belkin one gets fine reviews on Amazon. I’ll have to look around more closely.

Driving two external monitors off a MacBook Pro — March 8, 2010

Driving two external monitors off a MacBook Pro

Thanks to my employer for hooking me up with a beautiful MacBook Pro and two huge external monitors.

If you’re trying to do the thing mentioned in the title of this post, you’ve probably already found the perfectly comprehensive post I’m going to link to. If not, it’s this guy right here. The Cliffs Notes version is as follows:

* Your MacBook Pro has only one video-out port (Mini DisplayPort, on recent models). You want to drive two external monitors. *Problem*.
* So buy two Diamond BVU195 USB display adapters. These allow you to connect DVI cables to USB cables, of which your MacBook Pro has a few.
* “But wait!” you might say here, “I only have two or so USB ports, and I want to drive two external monitors. How will I plug in an external mouse *and* an iPod/iPhone, *and* those two monitors?” Fear not: here’s where you buy a USB hub. I got a 7-port Belkin external USB hub for $28. I run a cable from there to a USB port on the MacBook Pro, and I’m done.

To review: up to here, you’re running one DVI cable from each of your monitors into a Diamond DVI-to-USB adapter, then running the resulting USB cables into a USB hub, then running one cable from the hub into your MacBook Pro. In summary: both your monitors are now driven off a single USB port on your MacBook Pro. *Sexy*.

The final step, again as detailed in that article, is

* Download and install the DisplayLink OS X drivers. Now you can use System Preferences to arrange your three monitors — two external, one built-in — in any configuration you like.

The end.

I would include pictures of how these things all work on my end, but the fellow who wrote that piece included everything I would have.

My only question now is how to get control of the ridiculous quantities of cabling I have lying on my desk at work as a result of these contortions:

Messy desk, lots of cables

Microsoft and its critics — February 5, 2010

Microsoft and its critics

There’s a very odd exchange between a former Microsoft VP and the official Microsoft blog. What’s odd is that Microsoft essentially tells the former VP that he’s right: when Microsoft says that “what matters is innovation at scale, not just innovation at speed,” what that says to me is “We take innovations that others have come up with, once we know that the market is established, and make that market bigger.”

In fact, this is just how I’ve heard Microsoft’s business model described, and there’s nothing wrong with that approach: little companies innovate; big companies scale up innovation. So that’s fine.

It’s just weird, though, that Microsoft even bothered to respond, if essentially their entire point was to affirm the truth of the op-ed. I’m 100% with John Gruber on this:

> Why in the world did they respond to this? And even worse, without refuting any of his claims, most especially his core premise that Microsoft is divided into dozens of bureaucratic fiefdoms that fight against each other to protect their turf?

__P.S.__: Microsoft *really* didn’t need to include a fucking smiley face in the middle of their blog post.

Automatic memoization: cleverness to solve the wrong problem —

Automatic memoization: cleverness to solve the wrong problem

This is the first time in my career that I’ve used JavaScript extensively, so I’m trying to learn some best practices. A few people have recommended Crockford’s [book: JavaScript: The Good Parts], so I picked it up. While skimming through it looking for something else, I ran into his description (on page 44) of using memoization (essentially, caching) to speed up the computation of certain functions.

The example people always use here — and Crockford is no exception — is the Fibonacci sequence. The shortest way to implement it is recursively. As Crockford points out, the recursive solution doesn’t scale: the number of recursive calls necessary to compute the nth Fibonacci number is proportional to the nth power of the golden ratio. (From what I can see, the constant of proportionality converges very rapidly to about 1.11803, which turns out to be [math: √5/2].) I’ve coded both versions up in Python; the recursive version keeps track of how many function calls it had to make.
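My code isn’t reproduced here, but a minimal sketch of the two versions (the naive recursive one, with a call counter bolted on, and the iterative one that comes up below) might look like this:

def fib_recursive(n, counter):
    """Naive recursion; counter[0] tallies the total number of calls."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_recursive(n - 1, counter) + fib_recursive(n - 2, counter)

def fib_iterative(n):
    """Bottom-up; only ever keeps two consecutive values around."""
    a, b = 0, 1
    for _ in xrange(n):
        a, b = b, a + b
    return a

calls = [0]
print fib_recursive(20, calls), calls[0]   # 6765 21891
print fib_iterative(20)                    # 6765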

So then Crockford’s solution, and the solution written in lots of places ([book: Higher-Order Perl], for instance) is to use memoization: cache the results of fib(n) for smaller n, then use the cached results rather than computing those values anew when you need them later on.

This isn’t really a solution, though, as my friend Seth pointed out to me some months ago. It’s certainly clever, and languages like Perl or JavaScript make it very easy. In Perl, you can automatically memoize any of your functions with the Memoize module: just do

use Memoize;
memoize('fib');

and voilà: all of your calls to `fib()` will be memoized from then on. Pretty cool, really. Manipulating the symbol table is kind of neat.
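Python makes the same trick nearly as easy. Here’s a minimal hand-rolled version (my sketch, not Crockford’s, whose book does it in JavaScript):

def memoize(fn):
    """Cache fn's results; assumes fn takes one hashable argument."""
    cache = {}
    def wrapper(n):
        if n not in cache:
            cache[n] = fn(n)
        return cache[n]
    return wrapper

@memoize
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print fib(100)   # instant; the naive version would take geological time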

But this has only temporarily disguised the problem. Let’s look at the time and space requirements for all three of our `fib(n)` implementations:

__Recursive, non-memoized__: exponentially many function calls, hence exponential running time. Memory usage is linear, not exponential: the recursion is depth-first, so the stack never holds more than about [math: n] frames at once.

__Recursive, memoized__: linear memory usage (one cache entry for every [math: i], for [math: i] less than or equal to [math: n]). Linear running time.

__Iterative, non-memoized__: constant memory usage (must keep the [math: n]th, [math: (n-1)]st, and [math: (n-2)]th values of the sequence in memory at all times, but that’s it), linear running time.

By using some language cleverness, you’ve made the problem harder than it needs to be: compared with the straightforward iterative version, memoization increases your memory usage from constant to linear, and merely matches that version’s linear running time. By thinking harder about the problem, you can improve both performance aspects and not need any clever language business.

Seth has told me for quite a long time that recursion and higher-order programming (closures and so forth) are interesting but make debugging a lot harder. His contention would probably be that you can often replace a clever recursive solution with a non-clever, easier-to-debug, better-performing one.

That said, at least some higher-order-programming tools make my life easier. In Python, I love list comprehensions:

print [x**2 for x in xrange(1,11) if (x**2 % 2 == 0)]

or the older-school but perhaps more familiar `map()` and `filter()`:

print filter(lambda x : x % 2 == 0, map(lambda x : x**2, xrange(1,11)))

(Because I favor readability over concision, in practice I would expand this into several lines: a `map()` line, a `filter()` line, and so forth.)
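Presumably something like this (my expansion, not from the original post):

squares = map(lambda x: x**2, xrange(1, 11))          # [1, 4, 9, ..., 100]
even_squares = filter(lambda x: x % 2 == 0, squares)  # [4, 16, 36, 64, 100]
print even_squares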

As I’ve mentioned, I lament the absence of first-class functions in Java. Then again, I’ve seen enough people shoot themselves (and me) in the foot with unreadable Perl or Python, using all the cleverness that the languages provide to them, that I think I’d be okay with a less expressive language that forces programmers to be really boring.