jQuery and XMLHttpRequest objects — May 31, 2010

I’m trying to find an answer to this, but the web has been remarkably unforthcoming. This includes the usually stellar Stack Overflow. Here’s what’s going on:

* I’ve gotten comfortable in jQuery over the past few months. It is awesome.
* I’ve just started playing with XMLHttpRequest objects (i.e., “Ajax”).
* jQuery has a few methods to help you do XMLHttpRequest calls, e.g., .get() and the lower-level .ajax().
* If a .get() request succeeds, you can attach a success callback to it. The arguments handed to the callback include the XMLHttpRequest object itself.
* XMLHttpRequest objects, according to the spec, have a getAllResponseHeaders() method.
* It would seem to follow, then, that the XMLHttpRequest object handed to the jQuery.get() and jQuery.ajax() callbacks would also have a getAllResponseHeaders() method. If it does, I can’t get Firebug to print its results; it only tells me “null.” Null is a kind of sadness. (A sketch of the call I’m making follows this list.)
* Other pages, including the Stack Overflow one that I mentioned, indicate that other people have had this same problem. But no one seems to have solved it. So I’m putting the description up here; when I solve it, I will contribute to the world’s stock of knowledge, and a great light will shine forth upon the land. Hosannas, etc.
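
Here, for concreteness, is a sketch of the kind of call I mean. The URL is made up, and the callback signature is the jQuery-1.4-era one, in which the third argument is supposed to be the XMLHttpRequest object:

$.get('/some/url', function (data, textStatus, xhr) {
    // Per the spec, this should return every response header in one
    // CRLF-separated string. What actually gets logged is null.
    console.log(xhr.getAllResponseHeaders());
});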

Automatic memoization: cleverness to solve the wrong problem — January 23, 2010

Speaking, as we were, of JavaScript: this is the first time in my career that I’ve used JavaScript extensively, so I’m trying to learn some best practices. A few people have recommended Crockford’s [book: JavaScript: The Good Parts], so I picked it up. While skimming through it looking for something else, I ran into his description (on page 44) of using memoization (essentially, caching) to speed up the computation of certain functions.

The example people always use here — and Crockford is no exception — is the Fibonacci sequence. The shortest way to implement it is recursively. As Crockford points out, the recursive solution doesn’t scale: the number of recursive calls needed to compute the nth Fibonacci number is proportional to the nth power of the golden ratio. (From what I can see, the constant of proportionality converges very rapidly to about 1.11803, suspiciously close to [math: √5/2]. That number must have some special name.) I’ve coded up both the recursive and iterative versions in Python; the recursive one keeps track of how many function calls it has to make.
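
(My code was Python, but since we’re talking about Crockford, here is the same idea sketched in JavaScript, using the fib(0) = 0, fib(1) = 1 convention:)

var calls = 0;
function fib(n) {
    calls += 1;
    if (n < 2) { return n; }
    return fib(n - 1) + fib(n - 2);
}

fib(20);             // 6765
console.log(calls);  // 21891 calls to compute the 20th number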

So then Crockford’s solution, and the solution written in lots of places ([book: Higher-Order Perl], for instance) is to use memoization: cache the results of fib(n) for smaller n, then use the cached results rather than computing those values anew when you need them later on.
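
Hand-rolled, a memoized version might look something like this (a sketch, not Crockford’s code verbatim):

var memo = {0: 0, 1: 1};  // seed the cache with the base cases
function fib(n) {
    if (!(n in memo)) {
        memo[n] = fib(n - 1) + fib(n - 2);
    }
    return memo[n];
}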

This isn’t really a solution, though, as my friend Seth pointed out to me some months ago. It’s certainly clever, and languages like Perl or JavaScript make it very easy. In Perl, you can automatically memoize any of your functions with the Memoize module: just do

use Memoize;
memoize('fib');

and voilà: all of your calls to `fib()` will be memoized from then on. Pretty cool, really. Manipulating the symbol table is kind of neat.
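
JavaScript will do the same trick without any symbol-table surgery: wrap the function in a caching closure and rebind the name. A sketch:

function memoize(fn) {
    var cache = {};
    return function (n) {
        if (!(n in cache)) {
            cache[n] = fn(n);
        }
        return cache[n];
    };
}

// Because the recursive calls inside fib resolve through the global
// name, rebinding fib routes even the inner calls through the cache.
fib = memoize(fib);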

But this has only temporarily disguised the problem. Let’s look at the time and space requirements for all three of our `fib(n)` implementations:

__Recursive, non-memoized__: exponentially many function calls, hence exponential running time. Memory usage is only linear, though, since the call stack never holds more than about [math: n] frames at once.

__Recursive, memoized__: linear memory usage (one cache entry for every [math: i], for [math: i] less than or equal to [math: n]). Linear running time.

__Iterative, non-memoized__: constant memory usage (must keep the [math: n]th, [math: (n-1)]st, and [math: (n-2)]nd values of the sequence in memory at any given moment, but that’s it). Linear running time.
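
For completeness, a sketch of the iterative version:

function fib(n) {
    if (n === 0) { return 0; }
    var prev = 0, curr = 1;  // fib(0) and fib(1)
    for (var i = 2; i <= n; i += 1) {
        var next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}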

By using some language cleverness, you’ve made the problem harder than it needs to be: next to the plain iterative version, memoization raises your memory usage from constant to linear and does nothing for your running time. By thinking harder about the problem, you can improve on both counts and skip the clever language business altogether.

Seth has told me for quite a long time that recursion and higher-order programming (closures and so forth) are interesting but make debugging a lot harder. His contention would probably be that you can often replace a clever recursive solution with a non-clever, easier-to-debug, better-performing one.

That said, at least some higher-order-programming tools make my life easier. In Python, I love list comprehensions:

print [x**2 for x in xrange(1,11) if (x**2 % 2 == 0)]

or the older-school but perhaps more familiar `map()` and `filter()`:

print filter(lambda x : x % 2 == 0, map(lambda x : x**2, xrange(1,11)))

(Because I favor readability over concision, in practice I would expand this over several lines: a `map()` line, then a `filter()` line, and so forth.)

As I’ve mentioned, I lament the absence of first-class functions in Java. Then again, I’ve seen enough people shoot themselves (and me) in the foot with unreadable Perl or Python, using all the cleverness that the languages provide to them, that I think I’d be okay with a less expressive language that forces programmers to be really boring.