David Foster Wallace, “Consider the Lobster” and Other Essays — September 20, 2010


Cover of _Consider the Lobster_: stark white background, title and subtitle in black, author in red, then 'Author of Infinite Jest' below the author's name. Finally, a photo of a deeply red lobster at the bottom of the page

(Attention conservation notice: 1700 words, having reached the end of the line with David Foster Wallace’s brand of free-associative rambling.)

I’ve spoken with a great many people by now who’ve found Weezer’s last few albums so terrible that it’s made them reconsider whether the Blue Album and [album: Pinkerton] were as great as we all thought at the time. I’m sad to say that “Consider the Lobster” has made me do the same for David Foster Wallace.

What makes Wallace really charming is him, as a person. His best essays are really about him. Take the title essay in A Supposedly Fun Thing I’ll Never Do Again, for instance; it’s one of the most enjoyable essays I’ve ever read, and what makes it so is a) that Wallace is funny, b) that Wallace is neurotic and aware of his neurosis, and to a much lesser extent c) the funny commentary Wallace deploys about society in general and what cruise ships have to say about life in late-20th-century America among upper-middle-class folks whose every want is basically already taken care of. Even on that last point, though, Wallace is at his best when he talks about his own experience as a microcosm of the larger point. He’s spoiled on a cruise ship, and he finds himself getting more and more annoyed at the little deviations from perfection that would, land-side, never have bothered him in the slightest — e.g., that all they have is Dr. Pepper rather than Mr. Pibb, when everyone knows that the former is just no goddamn substitute at all for the latter. Being spoiled beyond comprehension has made Wallace sensitive about far too much. I submit that almost none of what’s memorable in “A Supposedly Fun Thing” has to do with the world beyond Wallace’s own head.

That’s not true of that entire earlier essay collection, though. A Supposedly Fun Thing has some neat thoughts about the role of television on fiction writing (I believe that was in “E Unibus Plurum” [sic]), has an obsessive little essay about David Lynch, and so forth. Wallace is definitely a smart guy. But he’s really just run out of steam in “Consider the Lobster”. There’s an obscenely long essay reviewing an English-usage guide, ably torn to shreds 8 years ago on the Languagehat blog; most of that takedown can be reduced to “Wallace just goes on and on and on, but he doesn’t actually know what he’s talking about.” And that critique extends to most of the rest of what’s in “Consider the Lobster”. Much of it sounds like a college bull session committed to paper. For instance, on page 85, in the middle of “Authority and American Usage” (the essay that Languagehat took down), we have Wallace saying that

> Even in the physical sciences, everything from quantum mechanics to Information Theory has shown that an act of observation is itself part of the phenomenon observed and is analytically inseparable from it.

Well … I’m no physicist, but I’m fairly certain that this is what happens when you get a guy who’s trained in critical theory and let him read In Search of Schrödinger’s Cat. I invite physicists to critique my interpretation here, but I believe QM says that only at very small scales does the act of observation change the thing observed. That’s because when you, e.g., shine light on a particle, you impart momentum to the particle and thereby move it. So the act of observing the particle has changed the state of the particle. Our observing the Sun has no effect at all on the Sun.

The extra-special irony here is that on page 56, in an otherwise great essay on John Updike’s self-centric, penis-centric writing, Wallace takes Updike to the woodshed for similar sins:

> [One of Updike’s characters] is particularly keen on subatomic physics and something he calls the theory of “many worlds” — which actually dates from 1957 and is a proposed solution to certain quantum paradoxes entailed by the principles of Uncertainty and Complementarity, and which is unbelievably abstract and complicated but which Turnbull seems to think is roughly the same thing as the Theory of Past-Life Channeling, apparently thereby explaining the set pieces where Turnbull is somebody else. The whole quantum setup ends up being embarrassing the way something pretentious is embarrassing when it’s also wrong.

(I’ve assumed all along that Wallace’s Everything and More, which purports to cover Georg Cantor and the various shocking, counterintuitive results about infinity, would be more Wallace bull-session wankery. Nothing in “Consider the Lobster” encourages me to read Wallace’s thoughts on higher math.)

Wallace’s demeanor is so folksy and charming that I found myself not normally paying attention to whether what he says makes any sense at all. Then the Languagehat blog comes along and pricks the balloon, and suddenly I realize that Wallace just doesn’t have much to say in a lot of this book. Much of it starts to feel like a man who’s talking and talking and talking to delay something that’s not clear to the reader (and may not be clear to the author).

And talk he does. He needs an editor more than ever. Infinite Jest apparently started out as a 1,500-page work, which eventually got chopped down to just over 1,000, according to David Lipsky’s biography. Infinite Jest was great, but it would have been even greater had it been half as long. “Consider the Lobster” could be reduced from 300 pages to maybe 200 without a lot of substantive loss.

While I’m here, I have to comment on Wallace’s footnotes; they’re one of the most noticeable features of his writing. They are terrible. I have always found them terrible, especially in Infinite Jest. There, the footnotes were mostly endnotes, so one had to keep two bookmarks going and continually interrupt the flow of the novel to read some 20-page excursus about the director’s [foreign: oeuvre]. It made Infinite Jest actually cause mental pain, of exactly the same sort that you feel when you’re trying to think hard about some important problem at work and get interrupted every couple minutes by some well-intentioned but annoying coworker.

Come to find out, in Lipsky’s book and dramatically confirmed in “Consider the Lobster”, that this similarity was not coincidental. Wallace’s contention in Lipsky is that the world we live in is so fragmented, with so many streams of information coming at us at once, that literature has to reflect this somehow. There don’t exist enough capital letters, enough bolding, and enough italics in this world for me to express just how terribly wrong I think this is. The world is fragmented and saturated with news, yes, which is precisely why literature — and for that matter, the rest of our institutions — needs to provide filtration, perspective, and order. When I read a book, I want to get lost; I want to forget, for a time, the maddening flicker and noise of the outside world. I want to submerge myself in the author’s world. Wallace’s strategy, and apparently his philosophy, are to keep me from ever getting immersed in his work. The strongest evidence I can amass for this claim is the very final essay in “Consider the Lobster”, whose final two pages look like this:

Two pages from _Consider the Lobster_. There are boxes offset from the text, with arrows pointing to boxes from inside other boxes on different pages. It's a recursive, distracting mess.


This takes Wallace’s footnote habit and runs off a cliff with it. Like the footnotes, which sometimes have sub-footnotes, the boxes and arrows sometimes have their own sub-boxes and sub-arrows; as you can see from this example, sometimes you need to follow arrows onto other pages, then trace your way back to the page where you started. I don’t believe this image captures one further annoyance of the boxes-and-arrows system, namely that sometimes a box precedes the text it refers to, so you have to train yourself to skip the boxes until the arrows tell you it’s time to read them.

Maybe you find the notes charming. After all, they’re a natural extension of what’s often charming about Wallace: you feel like you’re getting direct access to his mind and the funny things that he thinks from moment to moment. Clearly his own mind is fragmented, so his writing is the same way.

Me, I just find it lazy, and I’ve found it lazy as far back as Infinite Jest. A more disciplined writer would find a way either to flow the content of the notes into the body of the text, or would just strike out those digressions that don’t add to the content of the work. That Wallace clearly disagrees with me here, and that this isn’t laziness but is entirely deliberate, is exactly the problem: Wallace believes that the digressions and the footnotes are absolutely crucial to the body of the work.

This particular final essay, with the structural experimentation and the arrows and boxes, features Wallace sitting for a night or a few nights in a Los Angeles-area conservative talk-radio station, telling us all sorts of things: the particular mechanics of beaming a story from the station to the millions of L.A. listeners, with particular reference to which machines get used for which purposes; the sound engineers and their mastery of special devices that speed up and slow down sounds to fit within a precise window and give advertisers their allotted on-air time; the radio host himself, and what he’s like when the mic is turned off; some notes on the Fairness Doctrine and what its end had to do with the rise of talk radio; and some college-bull-session-level out-loud meditations on What It All Means.

A lot of this stuff is good, but a lot is just needless digression. When Wallace applies the same formula to John McCain, in what became “Up, Simba!”, it essentially has one through-line with a lot of useless ornamentation. The story is that John McCain spent five years in a box in Vietnam, and explicitly refused to be released from prison just because his father was a bigshot in the military; he waited to be released after others who’d gone in before him. Wallace asks us to imagine the psychological and physical torment McCain underwent, and the sense of duty that must exist inside McCain to make that sacrifice for his brothers. McCain has become a politician since then, so it’s hard to know whether what he says is just salesman bullshit, or whether maybe he really is the Leader that he wants us to believe he is. In the world we live in, it’s hard not to impart cynical motives to everyone around us — especially politicians — but Wallace holds out hope that McCain might be the real deal.

All of that is wonderful. Adorning it, though, are pages and pages of Wallace’s ramblings. I’ve reached the end of my patience for that. Much of “Consider the Lobster” feels like I’m reading a series of blog posts, albeit written by a very smart friend. The world supplies me with enough blogs; when I read a book, I want to read a book.

Karl Polanyi time-travels and addresses the Great Recession — September 17, 2010


Basically a generic cover, with one little cute bit: the background is a giant dollar sign, where the inside and outside of the 'S' are composed of blocks of grey and brown.
(Attention conservation notice: Just under 1100 words, plus a long quote from the book, about one of those rare books that makes sense out of the long sweep of history, and takes your breath away in the process. And this isn’t even the final review!)

I am going to enjoy reviewing [book: The Great Transformation] once I’ve finished it. In the meantime, it suffices to note that every few pages I run into a new idea that either brings a major swath of history into clear focus, or that clarifies my side of a debate.

Before quoting something that falls into the latter category, I should explain Polanyi’s overall goal in [book: The Great Transformation]. He starts with quite a long introduction, trying to explain at a high level how Europe went through 100 years of peace between Napoleon and World War I (an era whose beginning Kissinger covered brilliantly in [book: A World Restored]). To do that, Polanyi needs to cut back to the beginning of the Industrial Revolution and work his way forward. A large part of the intellectual suspense in [book: The Great Transformation] is curiosity over how he’ll get from there back to where he started. What kept the peace together, and what broke it apart?

Among the guiding structures in [book: The Great Transformation] are that

  1. The Industrial Revolution, by its very logic, required that labor, land, and money each be turned into commodities. The implication of this is that the most basic parts of any society — its people, and nature itself — must be made fungible. (Wheat and other commodities on the market aren’t actually all identical to one another; they’ve been cut and shaved and folded and spindled and mutilated — and, more concretely, contracted — into a uniform shape so that they may be treated as though they were identical. I read a recent blog post on this, referencing a book on the topic that seems interesting; I can’t find it on a quick skim now.)
  2. Every European nation discovered on its own that it needed to slow the societal destruction that the Industrial Revolution inevitably caused. The Revolution led to a great deal of good eventually, but a shift of this magnitude destroys everything in its wake.

In presenting these ideas, Polanyi brings a style like wind through an open window to the kind of arid economic talk that fills all of our minds nowadays. If someone tells us, for instance, that “the recession is caused by people not taking lower-paying jobs,” we’re apt to come back with mini-lectures on the economic benefits that accrue to the world when unemployment insurance gives people time to find a better-fitting job.

Fie to all that, says Polanyi:

> Economically, English and Continental methods of social protection led to almost identical results. They achieved what had been intended: the disruption of the market for that factor of production known as labor power. Such a market could serve its purpose only if wages fell parallel with prices. In human terms such a postulate implied for the worker extreme instability of earnings, utter absence of professional standards, abject readiness to be shoved and pushed about indiscriminately, complete dependence on the whims of the market. Mises justly argued that if workers “did not act as trade unionists, but reduced their demands and changed their locations and occupations according to the requirements of the labor market, they could eventually find work.” This sums up the position under a system based on the postulate of the commodity character of labor. It is not for the commodity to decide where it should be offered for sale, to what purpose it should be used, at what price it should be allowed to change hands, and in what manner it should be consumed or destroyed.
>
> “It has occurred to no one,” this consistent liberal wrote, “that lack of wages would be a better term than lack of employment, for what the unemployed person misses is not work but the remuneration of work.” Mises was right, though he should not have claimed originality: 160 years prior to him Bishop Whately said: “When a man begs for work he asks not for work but for wages.” Yet, it is true that technically speaking “unemployment in the capitalist countries is due to the fact that the policy both of the government and of the trade unions aims at maintaining a level of wages which is out of harmony with the existing productivity of labor.” For how could there be unemployment, Mises asked, but for the fact that the workers are “not willing to work at the wages they could get in the labor market for the particular work they were able and willing to perform?” This makes clear what the employers’ demand for mobility of labor and flexibility of wages really means: precisely that which we circumscribed above as a market in which human labor is a commodity.

Such clarity: when Mises and all the other heroes of [foreign: laissez-faire] tell us this sort of thing, they’re treating us like bushels of apples or bales of hay. We’ve become a nameless thing called Labor which can be infinitely subdivided and used for whatever purpose the factory-owner decides on. When the fundamental assumptions beneath [foreign: laissez-faire] are laid bare, it becomes so obvious that we wonder why we never thought of it before. And it becomes immediately clear just how odious those assumptions are. Polanyi reminds us that we still need to think about ethics, even in a world dominated by economics. Yet this “just accept a lower-paying job” argument is still with us.

It was with us during the Great Depression, too, when I believe Keynes addressed it in the [book: General Theory]. Part of the great clarifying joy that comes from Polanyi is the realization that there really aren’t that many new arguments about fundamental economic problems.

Much of what’s astonishing and literally breathtaking about [book: The Great Transformation] falls under this category of “humanizing the economic”. To give a taste: colonization destroyed the colonized peoples, at least for a time, but they got a lot of money. So what’s the problem? Well, in order to get a lot of money, the colonized countries typically had to radically industrialize. This meant moving people out of the agrarian lifestyles they’d been used to for hundreds or thousands of years and relocating them into urban factories. Yes, they got money, but in the process they were uprooted, their lives were destroyed, and millions died. In a few generations they typically adjusted. And that’s exactly the point: industrializing Western democracies knew enough to lay on the brakes to prevent utter social collapse; they weren’t so generous with their colonies. Our modern focus on the economic, rather than the social, obscures our view of the Industrial Revolution’s ravages. Its gains were substantial, but so were its costs.

This is far more than just an academic look back at the way the world was and where it settled after some initial torment. You can’t wade an inch into a debate about economics today without running into the [foreign: laissez-faire] point of view. “Just let the market settle where it will,” they say, “and you’ll do far better than any central planner could.” The fact is that Western societies have never allowed the market to manage itself unimpeded, and it’s to our everlasting benefit that they haven’t; if they’d let the market manage itself, we would most likely not have a society anymore.

This gets at a conceptual distinction that Polanyi emphasizes, which (again, par for the course with this book) I hadn’t previously kept straight in my head: [foreign: laissez-faire] — the doctrine that the market should be entirely left alone — is different from support of the free market. The West has long realized that, in order to get a well-functioning market, we often need to intervene to make it work. War is too important to be left to the generals, as Clemenceau put it, and free markets are too important to be left unmanaged. Indeed, as Polanyi spends a great deal of time detailing, the very birth of the Industrial Revolution owes everything to state intervention.

I’ve already gone on longer than I intended to. When I get to writing it up in full, I’ll fill in more details and hopefully make the connection back to World War I and the 19th-century post-Napoleonic peace.

P.S.: When I do get around to writing this up for real, I’m going to have to include references to Ernest Gellner’s [book: Nations and Nationalism] (another “let’s take in the big picture” examination of capitalism and its effects), Eric Hobsbawm’s epic, multi-volume “long 19th century” series, and James Scott’s [book: Seeing Like A State]. Thinking of these other books, when you’re in the middle of Polanyi, is unavoidable. They’re all well worth your time.

A neat identity I remembered from college calculus — September 15, 2010


\begin{eqnarray*}
\tan\left(\frac{\pi}{4}\right) &=& 1 \\
\frac{\pi}{4} &=& \tan^{-1}(1) \\
&=& \sum_{n=0}^{+\infty} \frac{(-1)^n}{2n+1} \\
&=& 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots
\end{eqnarray*}

(Proofs of any of the individual steps are available upon request, should you find yourself thinking that I’m pulling a 1=0 trick.) So then

\begin{displaymath}
\pi = 4\left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots\right).
\end{displaymath}

This converges very slowly, though, because for every two steps forward you take a step back. (More precisely: for every 1 step forward, you take $(4n+1)/(4n+3)$ steps back.) You can make it converge faster by combining the forward step and the smaller backward step into a single, smaller, forward step:
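To see just how slowly the alternating series converges, here’s a quick numerical check (a Python sketch; the function name is my own):

```python
import math

def leibniz_pi(terms):
    """Approximate pi with 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4 * sum((-1) ** n / (2 * n + 1) for n in range(terms))

# The error shrinks only like 1/terms: even after 10,000 terms,
# we've pinned pi down to just a few decimal places.
for terms in (10, 100, 1000, 10000):
    print(terms, leibniz_pi(terms), abs(leibniz_pi(terms) - math.pi))
```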

\begin{eqnarray*}
\frac{\pi}{4} = \sum_{n=0}^{+\infty} \frac{(-1)^n}{2n+1} &=& \sum_{n=0}^{+\infty} \left( \frac{1}{4n+1} - \frac{1}{4n+3} \right) \\
&=& \sum_{n=0}^{+\infty} \frac{2}{(4n+1)(4n+3)}
\end{eqnarray*}

whence

\begin{displaymath}
\pi = 8\left( \frac{1}{3} + \frac{1}{35} + \frac{1}{99} + \cdots\right).
\end{displaymath}
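A quick sanity check of the paired series (again a Python sketch, with a function name of my own choosing). Note that each term here combines two terms of the original series:

```python
import math

def paired_pi(terms):
    """Approximate pi with 8 * sum of 1 / ((4n+1)(4n+3)) for n = 0, 1, 2, ..."""
    return 8 * sum(1 / ((4 * n + 1) * (4 * n + 3)) for n in range(terms))

# Every term is positive, so the partial sums now climb monotonically
# toward pi instead of oscillating around it.
for terms in (10, 100, 1000):
    print(terms, paired_pi(terms), abs(paired_pi(terms) - math.pi))
```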

A free market for traffic — September 14, 2010


Matt Yglesias has been making the point consistently for a long time, and he should keep drilling it in until its obviousness becomes apparent: traffic is caused by improperly priced roads.

As he’s said on a few occasions (can’t find the exact posts right now): when we see video of long bread lines in the former Soviet Union, we know why they exist. They exist because the price of bread has been set artificially low. Since the price is artificially low, more people go hunting for bread than would go if the price were allowed to find its level. Since the price is artificially low, companies produce less bread than they would if the price were allowed to find its level. There’s a mismatch between the number of people seeking bread and the number of loaves of bread available for sale.

But we never think of that when we see traffic, for some reason. Clearly, though, the same mechanism is at work: there’s a scarce resource (slots for cars on the road), there’s a fixed supply, and there’s a certain demand. If people had to pay more to ride on the roads, they’d presumably drive less. What you want is that the last person to get on the road is just barely willing to do so — if the price were just a tiny bit higher, he’d find other means of getting to work.

So the roads are clogged, and traffic is unbearable, because prices aren’t being set by the market. They’re being set artificially — at zero, in fact — by the government. As Jamie Galbraith put it in another context, “this process is so simple that the mind recoils from it.”

If libertarians should be shrieking about anything, it should be the artificially low price of driving. The environmental consequences of this are stark: more people get on the road than would otherwise, so more smoke goes in the air; more people drive than probably would otherwise, so development patterns change to accommodate cars; so the landscape gets scarred with new, wider highways, new subdivisions, new Wal-Marts, etc.

I can’t let my speculation run too rampant there. Suppose roads were privately run. Maybe people would have no problem paying more; as it is, they seem to tolerate longer and longer commutes, which impose a less-obvious but still great price on drivers (less time with their families, more frustration, road rage). Maybe private companies would build extremely wide roads, thereby incurring a large fixed cost but basically negligible marginal cost for each driver on the road. The cost of driving might then look like the fixed cost amortized over the expected number of drivers, plus some profit margin for the corporation. And maybe that cost wouldn’t be so high.

But the point is that we don’t know, and the continually worsening state of traffic gives us reason to believe that the price hasn’t been set appropriately. If it were any other commodity being allocated, the presence of long lines would indicate a failure of central planning. Somehow Americans don’t jump to that conclusion with their roads, and they should.

Yglesias seems to have taken up a [foreign: Carthago delenda est] approach with traffic pricing. I’ll join him in it.

What are the long-lasting truths from your discipline?


That’s a question I’d like to ask of many different disciplines. Right now, for instance, I’m reading a Matt Yglesias piece about how poorly tax breaks for the wealthy work as stimulus. I believe this has been known for a very long time — I’m tempted to say since Keynes — though I’m not certain. It would be interesting to go around to lots of different disciplines and ask them which truths have been established for 50 years or more. (Mathematics, you’ll have to sit this one out; all your truths are permanent, and many are very old. You’d win by default.)

I’m reminded here of the famous story, which appears to come from “The Way of An Economist”, though I don’t have access to it, in which the mathematician Stanislaw Ulam asked Paul Samuelson to name an idea from economics that is both true and not a tautology. It took him some years to respond with the doctrine of comparative advantage. That’s been accepted as true for 200+ years, so it seems to count as a disciplinary truth. I wonder what others might be.

I probably have to narrow this down, because I’d want truths that aren’t based on obviously unrealistic premises. For instance, there are a lot of results in economics that depend on constant returns to scale — i.e., that if you double the quantity of inputs, you double the quantity of outputs. But many phenomena in our world depend on *increasing* returns. Doubling the number of users in AT&T’s network more than doubles the value of that network. The very existence of cities is proof of increasing returns to scale: Silicon Valley isn’t the world capital of software development because it’s somehow better equipped to build software; it’s the capital because it had some initial burst of development, which led to a snowball effect that drew more developers to it. (The classic economic example here is Dalton, Georgia, which by historical accident has become the carpeting capital of America.) If you can’t understand the existence of cities using easy economic assumptions, then you need to re-examine your assumptions. Yes, I realize that constant returns to scale are easier to model, but in any case: if we’re looking for accepted wisdom from a discipline, theorems about the real world cannot be accepted as true if they’re based on premises that are known to be far off the mark. (That gives another reason why mathematics has to sit this out: strictly speaking, mathematics isn’t *about* anything. It’s a collection of tautologies.)

I’ll start bugging people about this.

Daniel Dennett, Brainchildren: Essays on Designing Minds — September 12, 2010


Cover of _Brainchildren_: repeated image of a robotic dog with its 'skin' removed so that you can see its innards. It's a friendly-looking robotic dog, with red LED eyes. The author's name is in red, and the title is in black, inside a black-bordered box at the bottom of the cover

(__Attention conservation notice__: Seriously, I don’t intend to sit down and write 1,400 words (plus about 600 quoted words) about artificial intelligence, but … it just happens that way.)

Daniel Dennett is the philosopher whom geeks love. In this, and in a couple other respects, he is the heir to Bertrand Russell’s throne. One of Russell’s many claims to fame was a philosophical program built around applying scientific and mathematical methods to philosophical questions, in the hopes of giving them definite answers. That didn’t really work out so well, but, as I’ve mentioned before, mistakes are valuable; we tend to sneer at them more than we should.

Dennett’s approach is based around natural selection, whereas Russell’s was based around mathematical logic. For my money, natural selection is more likely to tell us enduring truths about human knowledge than the predicate calculus. Dennett ran as far as he could with the implications of natural selection in [book: Darwin’s Dangerous Idea], where he contended that Darwin’s discovery is “universal acid”: if you accept it at the level of speciation, then you’re forced to accept it at every other layer — all the way up from the structure of atoms to the large-scale structure of the universe.

In [book: Darwin’s Dangerous Idea], and in [book: Brainchildren], Dennett continues to repeat his refrain that philosophers are deeply uncomfortable with the idea of bringing Darwin to the last holdout, namely the human mind. Dennett says philosophers hate the idea that the mind might be seated in the brain, hate that they might lose their hold over another corner of the intellectual universe when another part of the world loses its mystery, and hate that what separates the human mind from that of other species might just be a difference of degree rather than of kind.

When judging whether other humans, or machines, or animals are intelligent, Dennett advocates taking what he calls “the intentional stance”. I can’t do better than Dennett at explaining what this is:

> Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do.

Note the empirical content of this stance. If you want to judge whether a robot is rational, you make some predictions about how it would behave if it were acting rationally; then compare the predictions to the actual behavior. There’s no [foreign: a priori] assumption in here about how a robot “in principle” could never act rationally. There is only the comparison of predicted rational behavior to actual behavior.

Dennett’s claim throughout [book: Brainchildren] is that artificial intelligence has brought much of philosophy to the “put up or shut up” stage: if you want to argue about cognition, you will soon enough have to compare your arguments to the results of programming a mind on a computer.

Of course, the potential from computer experimentation doesn’t just extend to making philosophers look bad. When we speculate that various problems in human cognition are “easy,” we get to try to solve those problems on a computer and see (as it turns out) how wrong we were. The most interesting difficulty of this sort in [book: Brainchildren] is the “frame problem”. It’s best to illustrate this with a delightful example from the book:

> Once upon a time there was a robot, named R1 by its creators. Its only task was to fend for itself. One day its designers arranged for it to learn that its spare battery, its precious energy supply, was locked in a room with a time bomb set to go off soon. R1 located the room and the key to the door, and formulated a plan to rescue its battery. There was a wagon in the room, and the battery was on the wagon, and R1 hypothesized that a certain action which it called PULLOUT(WAGON, ROOM) would result in the battery being removed from the room. Straightaway it acted, and did succeed in getting the battery out of the room before the bomb went off. Unfortunately, however, the bomb was also on the wagon. R1 *knew* that the bomb was on the wagon in the room, but didn’t realize that pulling the wagon would bring the bomb out along with the battery. Poor R1 had missed that obvious implication of its planned act.
>
> Back to the drawing board. “The solution is obvious,” said the designers. “Our next robot must be made to recognize not just the intended implications of its acts, but also the implications about their side effects, by deducing these implications from the descriptions it uses in formulating its plans.” They called their next model, the robot-deducer, R1D1. They placed R1D1 in much the same predicament that R1 had succumbed to, and as it too hit upon the idea of PULLOUT(WAGON, ROOM) it began, as designed, to consider the implications of such a course of action. It had just finished deducing that pulling the wagon out of the room would not change the color of the room’s walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon — when the bomb exploded.

Lots of problems turn out to be much harder than they look at first sight, precisely because we humans solve them without any apparent effort. Build a mind from scratch, though, and you realize the awesome complexity of what we do “without thinking.” (My friend Dan Milstein captured and expanded upon this idea in a captivating talk a few months back. I only wish it had been recorded.)
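Dennett’s R1 story can be sketched in a few lines of code. Everything here (the world dictionary, the function names) is invented for illustration; it’s a toy, not a planner:

```python
# Toy version of Dennett's R1 story. All names (pullout, r1_predicts,
# the world dictionary) are invented for this sketch.

def pullout(world):
    """What actually happens: everything riding on the wagon leaves the room."""
    new = dict(world)
    for obj, place in world.items():
        if place == "wagon":
            new[obj] = "outside"
    new["wagon"] = "outside"
    return new

def r1_predicts(world):
    """R1's model tracks only the intended effect: the battery gets out."""
    new = dict(world)
    new["battery"] = "outside"
    return new

world = {"wagon": "room", "battery": "wagon", "bomb": "wagon"}

print(pullout(world)["bomb"])      # prints "outside": the bomb comes along
print(r1_predicts(world)["bomb"])  # prints "wagon": R1 never considered it
```

R1’s failure is the easy half; the hard half, which sank R1D1, is that a robot which instead deduces *every* implication of an act (wall colors, wheel revolutions, and on and on) never finishes deducing. That is the frame problem.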

There is one final, ultimate test for robot intelligence that no one has really succeeded in dislodging, namely the Turing test. This famous test places a human in one room with a teletype machine, a robot in another, and a judge in a third. The judge is asked to have a conversation with both the robot and the human, and to guess which one is the robot. If the judge can’t do so with better-than-chance odds, the robot is deemed intelligent. The basic insight here is that carrying on an intelligent conversation brings in so many other elements of human intelligence that it would be impossible to sound intelligent without actually *being* intelligent. You’d need lots of experience from living in the world, some humor, the ability to understand the context behind what someone asks you, and millions of other things besides — things that we’re not even aware of because we do them so easily (the frame problem again).
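The pass criterion, the judge doing no better than chance, can be made concrete with a little simulation. The setup below is entirely hypothetical; it just shows what “better-than-chance odds” cashes out to:

```python
# Hypothetical sketch of the Turing-test pass criterion. A judge is a
# function that guesses which room ("A" or "B") holds the robot.
import random

def run_sessions(judge, n=10_000, seed=0):
    """Return the fraction of sessions where the judge fingers the robot's room."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        robot_room = rng.choice(["A", "B"])  # robot randomly assigned a room
        correct += (judge(rng) == robot_room)
    return correct / n

def coin_flip_judge(rng):
    """A judge who can't tell the difference is reduced to flipping a coin."""
    return rng.choice(["A", "B"])

print(run_sessions(coin_flip_judge))  # hovers around 0.50
```

A robot passes the test when it forces every judge into the coin-flip judge’s position: accuracy statistically indistinguishable from 50%.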

Needless to say, we’re nowhere near writing a computer program that passes the Turing test — a caveat that Dennett lays out up front. Indeed, the least satisfying part of [book: Brainchildren] is that, despite the book’s pretension to replace a lot of vague philosophy with scientifically grounded results about human minds, Dennett delivers surprisingly few such results. Those he does lay out are

* a bunch of robots failing the Turing test
* the schematic — prior to any building — of a robot he and colleagues are working on called “Cog”
* a trip he took into the wilds of Kenya with a couple of animal researchers, wherein Dennett claims to have taught them the intentional stance. This struck me as unbelievable:

> But what does the [Moving-Into-The-Open] grunt [from vervet monkeys] mean? I suggested to Robert and Dorothy that we sit down and make a list of possible translations and see which we could eliminate or support on the basis of evidence already at hand.

Robert and Dorothy hadn’t already thought of this on their own? I’d be curious to interview Robert and Dorothy, to see whether they find the intentional stance quite as novel as Dennett makes out.
* Douglas Hofstadter’s Fluid Analogies Research Group, described in Hofstadter’s [book: Fluid Concepts and Creative Analogies] and in Melanie Mitchell’s [book: Analogy-Making as Perception]. These sound like real steps forward, and I’m excited to read them. I’ve violated my earlier promise, and have gone ahead and reserved both from the library. (This is how it always goes.)

If [book: Brainchildren] needs anything, it is more science and less philosophy. The book whets a desire for empirical results that it never satisfies, which makes [book: Brainchildren] somewhat self-undermining, given how much time Dennett spends castigating his colleagues in philosophy for their lack of empirical rigor. (He really has it in for Jerry Fodor, about whom I know nothing.) I have to assume that this was deliberate on Dennett’s part, and I expect that he views his role as more of a ground-clearer: dispense with a lot of silly ideas (e.g., the “Chinese Room”) about why intelligent robots are impossible, so that others can then move in and get actual work done.

[book: Brainchildren] suggests that a lot of this work has divided between “top-down” and “bottom-up” approaches. Top-down approaches would begin, in essence, from the intentional stance: we want to solve a particular problem (vision, or boundary detection, or understanding context-filled sentences), so we write a program that can do this. The bottom-up approach would instead start at the level of neural hardware: build a device that looks like a brain — maybe a large collection of McCulloch-Pitts neurons — and don’t work on the top-level problem until the lower levels have been established. If we want any model of the mind to be complete, and we want it to say something about actual human minds (rather than about, say, artificial intelligence considered as an abstract problem), the top level will need to be consistent with a bottom layer that looks like a human brain; that is, if the top-level program could only be implemented on a supercomputer the size of the Pentagon, it probably doesn’t have much to say about human minds. So the top-down and bottom-up approaches both have their merits.
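The McCulloch-Pitts neuron mentioned above is simple enough to write down in full: binary inputs, fixed weights, and a hard threshold. This is the textbook 1943 unit, nothing specific to Cog or to Dennett:

```python
# The classic McCulloch-Pitts unit: fire (output 1) iff the weighted sum
# of binary inputs reaches a fixed threshold.

def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With the right weights and thresholds, single units compute basic logic:
def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_neuron([a], [-1], threshold=0)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
print(NOT(0), NOT(1))        # 1 0
```

The bottom-up wager is that intelligence emerges from enormous assemblies of units this dumb — which is exactly why that approach refuses to start from the top-level problem.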

It seems likely that any progress toward an artificially intelligent machine will involve some intermediate steps where the machine doesn’t act like a full-scale human, but acts like what you might call a toy human. It can’t carry on an intelligent conversation about any topic that might reasonably come up, say, but maybe it can talk about wombats. (Imagine writing a computer program that simulates a conversation with a kid who has behavioral problems.) We’ll learn some things from this, which we’ll fix in the next iteration.

Indeed, one of the great lessons I learned from [book: Brainchildren] was the further wisdom of “less thinking, more testing”: a lot of rather silly arguments could be short-circuited by developing rough, ugly prototypes that solve a small corner of a problem; instead of talking about “machine intelligence in principle”, we could then talk about performance *relative to an existing benchmark*. Let’s not talk about a phantom; let’s talk about this, here, now, and how we could improve upon it.

While others push toward that goal, Dennett clears some room for them to work. He deserves our thanks.

__P.S.__: Cosma Shalizi has, of course, a brilliant review of [book: Brainchildren] as well.

Jamie Galbraith and the NAIRU — September 10, 2010

Jamie Galbraith and the NAIRU

I linked on Twitter to Jamie Galbraith’s old NAIRU paper, but explaining why it’s important to people who don’t care about economics, in 140 characters or fewer, turns out to be really hard. Here’s a quick note.

Basically, economists envision that there’s a tradeoff between the rate of unemployment and the rate of inflation. Suppose unemployment is very low. Now workers have more bargaining power. So they can demand higher wages. Enough of them do this, and prices rise. Eventually one can even end up with the dread “embedded inflation”: workers anticipate lots of inflation in forthcoming years, so they ask for wage contracts that guard against that inflation. Let’s say inflation was 10% per year. Now their contracts command, say, 11% raises per year. Now, inasmuch as prices depend on the costs of labor, prices will rise even more. And so the spiral goes.
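The spiral in the last few sentences is just compounding arithmetic. Here’s a minimal sketch, with the one-point wage markup and the one-for-one pass-through from wages to prices both assumed purely for illustration (this is a toy, not an economic model):

```python
# Toy wage-price spiral: each year workers demand raises of
# (expected inflation + markup), and prices, assumed fully
# labor-cost-driven, follow wages one-for-one.

def spiral(inflation, years, markup=1.0):
    path = [inflation]
    for _ in range(years):
        inflation = inflation + markup  # wage contracts beat expected inflation
        path.append(inflation)
    return path

print(spiral(10.0, 3))  # [10.0, 11.0, 12.0, 13.0]
```

Under these assumptions inflation ratchets up a point every year, which is the “embedded” part: expectations alone keep the spiral going.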

There’s supposed to be a “natural rate” of unemployment, an idea which apparently goes back to Milton Friedman’s 1968 presidential address to the American Economic Association and Phelps’s paper from the preceding year. This natural rate is identified with the NAIRU, the “non-accelerating-inflation rate of unemployment”. As the name suggests, it’s supposed to be the rate of unemployment at which inflation stays where it is.

The only problem, says Galbraith, is that no one knows where the NAIRU is, and what economists say about it changes over time. Oh wait, there’s another problem: it’s not clear that labor costs have actually been responsible for inflation; it may just be that we got inflation when an “external shock” like a war or an oil embargo intervened. [1]

Most importantly, the focus on the tradeoff between inflation and unemployment takes our eyes off other, more-important things, like the unemployment-inequality tradeoff. Galbraith presents an alternative to Friedman’s “natural” rate of unemployment, which, again, is a rate above which inflation is supposed to start accelerating; Galbraith’s “natural” rate is the one above which inequality is supposed to start increasing, and he estimates it “quite stably” at 5.5 percent.

I need to emphasize just how important this is. The Federal Reserve emphasizes one goal — price stability — to the exclusion of others. Which would be fine — price stability is in the Congressional mandate — if the Fed weren’t using a phantom to achieve that goal. And the Fed is too cautious, too worried about the effects of labor costs, which likely keeps unemployment higher than it needs to be. Which, in turn, is a weapon to maintain increasing inequality.

[1] — It’s oddly unremarked-upon that the U.S. government took very active control over the U.S. economy during World War II. With the government printing so much money and dumping so much of it into the economy to get war production going, inflation would be inevitable. To avoid that end, the government had to enforce strict price controls. Jamie Galbraith’s father, the great John Kenneth Galbraith, was one of the folks in charge of these controls; he writes a bit about this in [book: Money: Whence It Came, Where It Went], and probably in other works.

At another time, I will write about how silly I find the usual American mythologizing of World War II. Yes, maybe it had something to do with “Americans coming together, as never before, to defeat a common enemy.” A more straightforward explanation is that World War II was the natural endpoint of two centuries of capitalist development, centralized control, and the deployment of industrial processes toward warmaking. “Total war,” the idea that an entire nation’s resources are devoted toward destroying one’s enemies, and that war should naturally be brought to bear against the civilians of other countries, helped. There may be room for patriotically beating hearts in here, but these other explanations seem more fruitful.

Perhaps my favorite quote of all time — September 6, 2010

Perhaps my favorite quote of all time

It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilisation advances by extending the number of operations we can perform without thinking about them.

— Alfred North Whitehead in his [book: An Introduction to Mathematics]

(which reminds me that Bertrand Russell’s [book: Principles of Mathematics], which predates his and Whitehead’s [book: Principia Mathematica] by 7 years or so, has been on my queue for a long while; anyone who’d like to read it along with me is hereby welcome to)

Joshua Ferris, The Unnamed — September 5, 2010

Joshua Ferris, The Unnamed

Cover of _The Unnamed_: blue-sky background, black birds flying around it, book title in white, author's name in black (sans serif throughout)

This is the first nontrivial (as compared to, e.g., [book: Ant Farm]) book that I’ve read in a long while in a single day. It’s a strange kind of captivating, in that you wonder “what’s going to happen to this poor guy next?”

The single idea in [book: The Unnamed], drawn out brilliantly by Joshua Ferris, is that Tim Farnsworth sometimes just can’t stop walking. His legs start moving, and at that point he can’t stop. If he’s lucky, when it happens he’ll have a backpack already packed containing all the supplies he’ll need for hours, possibly days. During the winter, if he’s even luckier, he’ll already be wearing a parka and a hat. No matter how he’s prepared, his body will send him walking, and there’s not a damn thing he can do about it. Eventually he’ll grow exhausted and collapse into blissful sleep on a park bench or in a forest or wherever he happens to have landed. He’s often mistaken for a bum, except that he’s a high-powered New York lawyer who drops hundred-dollar bills here and there to get done whatever he needs to get done.

Is this a mental problem? Tim insists that it’s not. He’s at war with his own body: as his feet set off down the road, his rational mind tries to patch up the world around him. He asks the security guard at his law office to walk alongside him and do some favors for him while he’s off on his walk; he dictates orders to fellow lawyers as he’s marching away from them, while they stand confused and expect him to come chat.

Eventually the world is going to catch on to this. Tim is defending one of the firm’s longtime clients on a murder charge, but Tim can’t sit down long enough to talk with him. He makes up an excuse that his wife is dying of cancer and that he has to be by her side. But even that doesn’t explain why, in the middle of a client meeting, Tim stands up, unprompted by any phone call or text message, grabs his backpack, and heads off into the streets of New York.

The murder case goes to trial, and its lead attorney has been AWOL for weeks. The client is found guilty. Tim is dismissed from his job. He keeps walking. His wife considers leaving him, after so many years of waking in the middle of the night to find him gone, desperately driving around looking for him asleep under bridges or behind dumpsters. His life continues to swirl down the drain.

It’s not clear what any of this is “about,” really. Initially I thought it might be an allegory about American corporate life: a man keeps walking, and for what? But it’s not; his time as a lawyer ends, and he comes to enjoy the little things in life, but his life continues to collapse. It could well have some broader religious meaning about overcoming the absurdities of the body through the mind’s discipline.

But I don’t think any of that is why you read [book: The Unnamed]. You read it because of Ferris’s gift for pulling you into the story and never letting you go. On the basis of [book: The Unnamed], and of many friends’ recommendations (not to mention Jay McInerney’s), Ferris’s [book: Then We Came to the End] moves high up in the queue … after I’ve worked through 106 others, that is.

Thomas Pynchon, Inherent Vice — September 4, 2010

Thomas Pynchon, Inherent Vice

Inherent Vice cover: old beater of a car, with surfboards on its roof, sitting in front of a surf shop on the beach. Lots of loud pink colors, almost neon. The title of the book is in fact written in neon-type letters.

This is the book that would result if [film: The Big Lebowski], Chevy Chase’s [film: Fletch] series, and 1940s-era noir films were combined, if the classic noir blonde bombshell were updated into a 60s-free-love hippie chick, if the private-detective aspect of [film: Fletch] were kicked up a notch and made somewhat more serious, and if the whole thing were then novelized and relocated to 1969.

Our hero, a stoner private eye, ambles around late-60s L.A. after the Sixties have run their course and everyone in authority — cops especially — hates the hippies. Nixon is in the White House and Reagan is in the California governor’s mansion. His ex-girlfriend visits him on the very first page, announcing that her new boyfriend has disappeared and she needs help finding him. I’m not giving away terribly much if I tell you that they eventually have sex but it’s all, like, whatever, man? In any case, he can’t resist his dame (as an earlier generation of private-eye novels might have put it), so he stumbles around looking for clues. Despite smoking a really overwhelming quantity of pot (scarcely a page goes by when he doesn’t light up), he somehow manages to know a lot of people and ask the right questions. The cops hate him, but also seem to respect him. One of the pervasive mysteries in this book is how our hero manages to be both quite the slacker and also a reasonably competent PI.

People are always ending sentences with question marks? Even when it’s, you know, a declarative utterance? The endless mocking of California, and California stoner culture more specifically, is quite funny, but does bleed over into slapstick at points. This initially bothered me, but I suspect it’s Pynchon letting us know that we shouldn’t take any of it too seriously. So I didn’t. It’s a private-eye comedy, and it’s tons of fun.

This is the first Pynchon that I’ve managed to make it all the way through. I gave [book: Gravity’s Rainbow] a shot many years ago. I was a teenager at the time, which probably means I wasn’t ready for it in any case. But I think that would also be a tough read no matter when I read it. I distinctly remember giving up when the narrator dives into a toilet, goes for a long swim, and gets a turd stuck in his nose.

Not sure what made me grab this Pynchon when the author’s very name had been, for me, a mark of self-indulgent wankery. It could be the positive [mag: New Yorker] review. More likely it was that I stumbled into the Harvard Book Store while happily buzzed off strong Craigie on Main cocktails one night, and I couldn’t help myself. The purchase, given that background, was entirely appropriate.