Rosecrans Baldwin, You Lost Me There — September 23, 2010

Cover of _You Lost Me There_: crimson background, white type, everything looking hand-drawn. There are also antlers studding the page. (It's possible that they're crossing axons.)
This is the debut novel from Rosecrans Baldwin, who in 1999 cofounded the always-excellent Morning News; it’s a charming first work. I’ve spent a couple of days trying to figure out what’s so captivating about it. I’ve not entirely worked it out, but herewith, some thoughts.

The narrator, Victor Aaron, is an old-ish (I’m not sure he ever mentions his age, but it’s somewhere in his 60s) Alzheimer’s researcher at a presumably fictional research lab on Maine’s Mount Desert Island. His wife Sara has died in a car crash at some point in the recent past, though you wouldn’t know it by watching how people interact with him. Sara’s aunt, with whom Victor spends a lot of his time, hardly mentions Sara’s death, and Victor himself has been getting private performances from a curvaceous 25-year-old burlesque performer for a good long while — possibly even while he and Sara were married, or maybe just soon after she died.

That’s part of my confusion: is everyone just really selfish? Maybe Victor himself is just selfish? Maybe, as the narrator, he just doesn’t mention those things that don’t occur to him, and maybe he’s not thinking terribly much about his late wife. If others are bestowing sympathy on him, maybe he’s just not seeing it.

Victor is a busy researcher, spending most of his time writing grants and attending meetings and so forth. He’s working 20-hour days, and one gets the sense that he worked that much when he and Sara were married, too. For long stretches of their childless marriage, she hardly saw him. Somewhere along the way, though, Sara got her own stellar career: her screenplays took off, and one of them — [film: The Hook-Up] — got turned into a movie starring Bruce Willis. (The scenes where Victor chats with Willis at cocktail parties, or dreams about the man’s wisdom, are hilarious little snippets.) The tables turned: now *Sara* was the jet-setting one whom Victor never saw, and his jealousy got the better of him. They drifted further and further apart.

We find out about all this through a work of inspired narrative brilliance: Victor hunts through Sara’s office after her death, and finds a set of index cards that she prepared for her psychologist, describing important turning points in her life; each chapter of [book: You Lost Me There] corresponds to Victor’s reading the next index card. This serves three purposes. First, it’s just suspenseful. Second, it helps you get to know Sara; you wouldn’t have gotten to know her otherwise, because the narrator is off in his own world in which Sara may as well never have existed. Finally, and connected to the second: it gives you and Victor a glance at what others thought of him. What Victor discovers about himself is often ugly. And the characters’ solipsism disappears for a few minutes, as Victor realizes that there are others in the world whom he’s wronged and ignored.

So in a way, this is the first novel I’ve read that’s written from two distinct perspectives at once: Victor’s own, and the view of Victor that emerges from Sara’s cards. We learn as we go along that Victor just cannot be trusted as a judge of his own life. As he realizes this, he slowly falls apart.

If such a flashlight were turned on any of the book’s other characters, it’s likely they’d feel just as much pain as Victor. Everyone in [book: You Lost Me There] seems selfish in his or her own way. Everyone’s drifting, from Victor’s teenage goddaughter who comes to stay with him for the summer, to the goddaughter’s father who hops from one bed to the next, to Victor himself, reclaiming his youth in a young woman’s bed. Everyone’s flailing around, trying to figure out what he’ll be when he grows up.

(Baldwin is in some ways the anti-Philip Roth, by the way. Victor can’t attain an erection despite several tries throughout [book: You Lost Me There], whereas you can’t read a Roth novel without some male character — typically old, transparently a stand-in for Roth himself — having completely implausible sex with a beautiful woman who’s helpless before the narrator’s powers. Even when Roth writes about the aging man’s loss of potency, as in [book: The Dying Animal], the Roth-stand-in still ends up having sex with voluptuous women young enough to be his daughter. Victor can’t get it up by the time we meet him, and he can’t get it up by the end.)

There are touches of enlightenment as we go along. Our characters get smacked around some, and come out bruised but maybe a little smarter and a little less self-involved. It’s never schmaltzy or sentimental, though: [book: You Lost Me There] is a realistic look at getting your head straightened out.

David Foster Wallace, “Consider the Lobster” and Other Essays — September 20, 2010

Cover of _Consider the Lobster_: stark white background, title and subtitle in black, author in red, then 'Author of Infinite Jest' below the author's name. Finally, a photo of a deeply red lobster at the bottom of the page

(Attention conservation notice: 1700 words from a reader who has reached the end of the line with David Foster Wallace’s brand of free-associative rambling.)

I’ve spoken with a great many people by now who’ve found Weezer’s last few albums so terrible that it’s made them reconsider whether the Blue Album and [album: Pinkerton] were as great as we all thought at the time. I’m sad to say that “Consider the Lobster” has made me do the same for David Foster Wallace.

What makes Wallace really charming is him, as a person. His best essays are really about him. Take the title essay in A Supposedly Fun Thing I’ll Never Do Again, for instance; it’s one of the most enjoyable essays I’ve ever read, and what makes it so is a) that Wallace is funny, b) that Wallace is neurotic and aware of his neurosis, and to a much lesser extent c) that Wallace deploys funny commentary about society in general and about what cruise ships have to say about life in late-20th-century America among upper-middle-class folks whose every want is basically already taken care of. Even on that last point, though, Wallace is at his best when he treats his own experience as a microcosm of the larger point. He’s spoiled on a cruise ship, and he finds himself getting more and more annoyed at the little deviations from perfection that would, land-side, never have bothered him in the slightest — e.g., that all they have is Mr. Pibb rather than Dr Pepper, when everyone knows that the former is just no goddamn substitute at all for the latter. Being spoiled beyond comprehension has made Wallace sensitive about far too much. I submit that almost none of what’s memorable in “A Supposedly Fun Thing” has to do with the world beyond Wallace’s own head.

That’s not true of that entire earlier essay collection, though. A Supposedly Fun Thing has some neat thoughts about the influence of television on fiction writing (I believe that was in “E Unibus Pluram”), an obsessive little essay about David Lynch, and so forth. Wallace is definitely a smart guy. But he’s really just run out of steam in “Consider the Lobster”. There’s an obscenely long essay reviewing an English-usage guide, ably torn to shreds eight years ago on the Languagehat blog; most of that takedown can be reduced to “Wallace just goes on and on and on, but he doesn’t actually know what he’s talking about.” And that critique extends to most of the rest of what’s in “Consider the Lobster”. Much of it sounds like a college bull session committed to paper. For instance, on page 85, in the middle of “Authority and American Usage” (the essay that Languagehat took down), we have Wallace saying that

> Even in the physical sciences, everything from quantum mechanics to Information Theory has shown that an act of observation is itself part of the phenomenon observed and is analytically inseparable from it.

Well … I’m no physicist, but I’m fairly certain that this is what happens when you get a guy who’s trained in critical theory and let him read In Search of Schrödinger’s Cat. I invite physicists to critique my interpretation here, but I believe QM says that only at very small scales does the act of observation change the thing observed. That’s because when you, e.g., shine light on a particle, you impart momentum to the particle and thereby move it. So the act of observing the particle has changed the state of the particle. Our observing the Sun has no effect at all on the Sun.

The extra-special irony here is that on page 56, in an otherwise great essay on John Updike’s self-centric, penis-centric writing, Wallace takes Updike to the woodshed for similar sins:

> [One of Updike’s characters] is particularly keen on subatomic physics and something he calls the theory of “many worlds” — which actually dates from 1957 and is a proposed solution to certain quantum paradoxes entailed by the principles of Uncertainty and Complementarity, and which is unbelievably abstract and complicated but which Turnbull seems to think is roughly the same thing as the Theory of Past-Life Channeling, apparently thereby explaining the set pieces where Turnbull is somebody else. The whole quantum setup ends up being embarrassing the way something pretentious is embarrassing when it’s also wrong.

(I’ve assumed all along that Wallace’s Everything and More, which purports to cover Georg Cantor and the various shocking, counterintuitive results about infinity, would be more Wallace bull-session wankery. Nothing in “Consider the Lobster” encourages me to read Wallace’s thoughts on higher math.)

Wallace’s demeanor is so folksy and charming that I found myself not really paying attention to whether what he says makes any sense at all. Then the Languagehat blog comes along and pricks the balloon, and suddenly I realize that Wallace just doesn’t have much to say in a lot of this book. Much of it starts to feel like a man who’s talking and talking and talking to delay something that’s not clear to the reader (and may not be clear to the author).

And talk he does. He needs an editor more than ever. Infinite Jest apparently started out as a 1,500-page work, which eventually got chopped down to just over 1,000, according to David Lipsky’s book. Infinite Jest was great, but it would have been even greater had it been half as long. “Consider the Lobster” could be reduced from 300 pages to maybe 200 without a lot of substantive loss.

While I’m here, I have to comment on Wallace’s footnotes; they’re one of the most noticeable features of his writing. They are terrible. I have always found them terrible, especially in Infinite Jest. There, the footnotes were mostly endnotes, so one had to keep two bookmarks going and continually interrupt the flow of the novel to read some 20-page excursus about the director’s [foreign: oeuvre]. It made Infinite Jest actually cause mental pain, of exactly the same sort that you feel when you’re trying to think hard about some important problem at work and get interrupted every couple minutes by some well-intentioned but annoying coworker.

It turns out, per Lipsky’s book and dramatically confirmed in “Consider the Lobster”, that this similarity was not coincidental. Wallace’s contention in Lipsky is that the world we live in is so fragmented, with so many streams of information coming at us at once, that literature has to reflect this somehow. There don’t exist enough capital letters, enough bolding, and enough italics in this world for me to express just how terribly wrong I think this is. The world is fragmented and saturated with news, yes, which is precisely why literature — and for that matter, the rest of our institutions — needs to provide filtration, perspective, and order. When I read a book, I want to get lost; I want to forget, for a time, the maddening flicker and noise of the outside world. I want to submerge myself in the author’s world. Wallace’s strategy, and apparently his philosophy, are to keep me from ever getting immersed in his work. The strongest evidence I can amass for this claim is the very final essay in “Consider the Lobster”, whose final two pages look like this:

Two pages from _Consider the Lobster_. There are boxes offset from the text, with arrows pointing to boxes from inside other boxes on different pages. It's a recursive, distracting mess.

This takes Wallace’s footnote habit and runs off a cliff with it. Like the footnotes, which sometimes have sub-footnotes, the boxes and arrows sometimes have their own sub-boxes and sub-arrows; as you can see from this example, sometimes you need to follow arrows onto other pages, then trace your way back to the page where you started. I don’t believe this image captures one further annoyance of the boxes-and-arrows system, namely that sometimes a box precedes the text it refers to, so you have to train yourself to skip the boxes until the arrows tell you it’s time to read them.

Maybe you find the notes charming. After all, they’re a natural extension of what’s often charming about Wallace: you feel like you’re getting direct access to his mind and the funny things that he thinks from moment to moment. Clearly his own mind is fragmented, so his writing is the same way.

Me, I just find it lazy, and I’ve found it lazy as far back as Infinite Jest. A more disciplined writer would find a way either to flow the content of the notes into the body of the text, or would just strike out those digressions that don’t add to the content of the work. That Wallace clearly disagrees with me here, and that this isn’t laziness but is entirely deliberate, is exactly the problem: Wallace believes that the digressions and the footnotes are absolutely crucial to the body of the work.

This particular final essay, with the structural experimentation and the arrows and boxes, features Wallace sitting for a night or a few nights in a Los Angeles-area conservative talk-radio station, telling us all sorts of things: the particular mechanics of beaming a story from the station to the millions of L.A. listeners, with particular reference to which machines get used for which purposes; the sound engineers and their mastery of special devices that speed up and slow down sounds to fit within a precise window and give advertisers their allotted on-air time; the radio host himself, and what he’s like when the mic is turned off; some notes on the Fairness Doctrine and what its end had to do with the rise of talk radio; and some college-bull-session-level out-loud meditations on What It All Means.

A lot of this stuff is good, but a lot is just needless digression. When Wallace applies the same formula to John McCain, in what became “Up, Simba!”, it essentially has one through-line with a lot of useless ornamentation. The story is that John McCain spent five years in a box in Vietnam, and explicitly refused to be released from prison just because his father was a bigshot in the military; he waited to be released after others who’d gone in before him. Wallace asks us to imagine the psychological and physical torment McCain underwent, and the sense of duty that must exist inside McCain to make that sacrifice for his brothers. McCain has become a politician since then, so it’s hard to know whether what he says is just salesman bullshit, or whether maybe he really is the Leader that he wants us to believe he is. In the world we live in, it’s hard not to impart cynical motives to everyone around us — especially politicians — but Wallace holds out hope that McCain might be the real deal.

All of that is wonderful. Adorning it, though, are pages and pages of Wallace’s ramblings. I’ve reached the end of my patience for that. Much of “Consider the Lobster” feels like I’m reading a series of blog posts, albeit written by a very smart friend. The world supplies me with enough blogs; when I read a book, I want to read a book.

Karl Polanyi time-travels and addresses the Great Recession — September 17, 2010

Basically a generic cover, with one little cute bit: the background is a giant dollar sign, where the inside and outside of the 'S' are composed of blocks of grey and brown.
(Attention conservation notice: Just under 1100 words, plus a long quote from the book, about one of those rare books that makes sense out of the long sweep of history, and takes your breath away in the process. And this isn’t even the final review!)

I am going to enjoy reviewing [book: The Great Transformation] once I’ve finished it. In the meantime, it suffices to note that every few pages I run into a new idea that either brings a major swath of history into clear focus, or that clarifies my side of a debate.

Before quoting something that falls into the latter category, I should explain Polanyi’s overall goal in [book: The Great Transformation]. He starts with quite a long introduction, trying to explain at a high level how Europe went through 100 years of peace between Napoleon and World War I (an era whose beginning Kissinger covered brilliantly in [book: A World Restored]). To do that, Polanyi needs to cut back to the beginning of the Industrial Revolution and work his way forward. A large part of the intellectual suspense in [book: The Great Transformation] is curiosity over how he’ll get from there back to where he started. What kept the peace together, and what broke it apart?

Among the guiding ideas in [book: The Great Transformation] are the following:

  1. The Industrial Revolution, by its very logic, required that labor, land, and money each be turned into commodities. The implication of this is that the most basic parts of any society — its people, and nature itself — must be made fungible. (Wheat and other commodities aren’t actually all identical to one another; they’ve been cut and shaved and folded and spindled and mutilated — and, more concretely, contracted — into a uniform shape so that they may be treated as though they were identical. I read a recent blog post on this, referencing a book on the topic that seems interesting; I can’t find it on a quick skim now.)
  2. Every European nation discovered on its own that it needed to slow the societal destruction that the Industrial Revolution inevitably caused. The Revolution led to a great deal of good eventually, but a shift of this magnitude destroys everything in its wake.

In presenting these ideas, Polanyi brings a style like wind through an open window to the kind of arid economic talk that fills all of our minds nowadays. If someone tells us, for instance, that “the recession is caused by people not taking lower-paying jobs,” we’re apt to come back with mini-lectures on the economic benefits that accrue to the world when unemployment insurance gives people time to find a better-fitting job.

Fie to all that, says Polanyi:

> Economically, English and Continental methods of social protection led to almost identical results. They achieved what had been intended: the disruption of the market for that factor of production known as labor power. Such a market could serve its purpose only if wages fell parallel with prices. In human terms such a postulate implied for the worker extreme instability of earnings, utter absence of professional standards, abject readiness to be shoved and pushed about indiscriminately, complete dependence on the whims of the market. Mises justly argued that if workers “did not act as trade unionists, but reduced their demands and changed their locations and occupations according to the requirements of the labor market, they could eventually find work.” This sums up the position under a system based on the postulate of the commodity character of labor. It is not for the commodity to decide where it should be offered for sale, to what purpose it should be used, at what price it should be allowed to change hands, and in what manner it should be consumed or destroyed.
>
> “It has occurred to no one,” this consistent liberal wrote, “that lack of wages would be a better term than lack of employment, for what the unemployed person misses is not work but the remuneration of work.” Mises was right, though he should not have claimed originality: 160 years prior to him Bishop Whately said: “When a man begs for work he asks not for work but for wages.” Yet, it is true that technically speaking “unemployment in the capitalist countries is due to the fact that the policy both of the government and of the trade unions aims at maintaining a level of wages which is out of harmony with the existing productivity of labor.” For how could there be unemployment, Mises asked, but for the fact that the workers are “not willing to work at the wages they could get in the labor market for the particular work they were able and willing to perform?” This makes clear what the employers’ demand for mobility of labor and flexibility of wages really means: precisely that which we circumscribed above as a market in which human labor is a commodity.

Such clarity: when Mises and all the other heroes of [foreign: laissez-faire] tell us this sort of thing, they’re treating us like bushels of apples or bales of hay. We’ve become a nameless thing called Labor which can be infinitely subdivided and used for whatever purpose the factory-owner decides on. When the fundamental assumptions beneath [foreign: laissez-faire] are laid bare, it becomes so obvious that we wonder why we never thought of it before. And it becomes immediately clear just how odious those assumptions are. Polanyi reminds us that we still need to think about ethics, even in a world dominated by economics. Yet this “just accept a lower-paying job” argument is still with us.

It was with us during the Great Depression, too, when I believe Keynes addressed it in the [book: General Theory]. Part of the great clarifying joy that comes from Polanyi is the realization that there really aren’t that many new arguments about fundamental economic problems.

Much of what’s astonishing and literally breathtaking about [book: The Great Transformation] falls under this category of “humanizing the economic”. To give a taste: colonization destroyed the colonized peoples, at least for a time, but they got a lot of money. So what’s the problem? Well, in order to get a lot of money, the colonized countries typically had to radically industrialize. This meant moving people out of the agrarian lifestyles they’d been used to for hundreds or thousands of years and relocating them into urban factories. Yes, they got money, but in the process they were uprooted, their lives were destroyed, and millions died. In a few generations they typically adjusted. And that’s exactly the point: industrializing Western democracies knew enough to lay on the brakes to prevent utter social collapse; they weren’t so generous with their colonies. Our modern focus on the economic, rather than the social, obscures our view of the Industrial Revolution’s ravages. Its gains were substantial, but so were its costs.

This is far more than just an academic look back at the way the world was and where it settled after some initial torment. You can’t wade an inch into a debate about economics today without running into the [foreign: laissez-faire] point of view. “Just let the market settle where it will,” they say, “and you’ll do far better than any central planner could.” The fact is that Western societies have never allowed the market to manage itself unimpeded, and it’s to our everlasting benefit that they haven’t; if they’d let the market manage itself, we would most likely not have a society anymore.

This gets at a conceptual distinction that Polanyi emphasizes, which (again, par for the course with this book) I hadn’t previously kept straight in my head: [foreign: laissez-faire] — the doctrine that the market should be entirely left alone — is different from support of the free market. The West has long realized that, in order to get a well-functioning market, we often need to intervene to make it work. War is too important to be left to the generals, as Clemenceau put it, and free markets are too important to be left unmanaged. Indeed, as Polanyi spends a great deal of time detailing, the very birth of the Industrial Revolution owes everything to state intervention.

I’ve already gone on longer than I intended to. When I get to writing it up in full, I’ll fill in more details and hopefully make the connection back to World War I and the 19th-century post-Napoleonic peace.

P.S.: When I do get around to writing this up for real, I’m going to have to include references to Ernest Gellner’s [book: Nations and Nationalism] (another “let’s take in the big picture” examination of capitalism and its effects), Eric Hobsbawm’s epic, multi-volume “long 19th century” series, and James Scott’s [book: Seeing Like A State]. Thinking of these other books, when you’re in the middle of Polanyi, is unavoidable. They’re all well worth your time.

Daniel Dennett, Brainchildren: Essays on Designing Minds — September 12, 2010

Cover of _Brainchildren_: repeated image of a robotic dog with its 'skin' removed so that you can see its innards. It's a friendly-looking robotic dog, with red LED eyes. The author's name is in red, and the title is in black, within a black-bordered box at the bottom of the cover.

(__Attention conservation notice__: Seriously, I don’t intend to sit down and write 1,400 words (plus about 600 quoted words) about artificial intelligence, but … it just happens that way.)

Daniel Dennett is the philosopher whom geeks love. In this, and in a couple other respects, he is the heir to Bertrand Russell’s throne. One of Russell’s many claims to fame was a philosophical program built around applying scientific and mathematical methods to philosophical questions, in the hopes of giving them definite answers. That didn’t really work out so well, but, as I’ve mentioned before, mistakes are valuable; we tend to sneer at them more than we should.

Dennett’s approach is based around natural selection, whereas Russell’s was based around mathematical logic. For my money, natural selection is more likely to tell us enduring truths about human knowledge than the predicate calculus. Dennett ran as far as he could with the implications of natural selection in [book: Darwin’s Dangerous Idea], where he contended that Darwin’s discovery is “universal acid”: if you accept it at the level of speciation, then you’re forced to accept it at every other layer — all the way up from the structure of atoms to the large-scale structure of the universe.

In [book: Darwin’s Dangerous Idea], and in [book: Brainchildren], Dennett continues to repeat his refrain that philosophers are deeply uncomfortable with the idea of bringing Darwin to the last holdout, namely the human mind. Dennett says philosophers hate the idea that the mind might be seated in the brain, hate that they might lose their hold over another corner of the intellectual universe when another part of the world loses its mystery, and hate that what separates the human mind from that of other species might just be a difference of degree rather than of kind.

When judging whether other humans, or machines, or animals are intelligent, Dennett advocates taking what he calls “the intentional stance”. I can’t do better than Dennett at explaining what this is:

> Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do.

Note the empirical content of this stance. If you want to judge whether a robot is rational, you make some predictions about how it would behave if it were acting rationally; then compare the predictions to the actual behavior. There’s no [foreign: a priori] assumption in here about how a robot “in principle” could never act rationally. There is only the comparison of predicted rational behavior to actual behavior.

Dennett’s claim throughout [book: Brainchildren] is that artificial intelligence has brought much of philosophy to the “put up or shut up” stage: if you want to argue about cognition, you will soon enough have to compare your arguments to the results of programming a mind on a computer.

Of course, the potential from computer experimentation doesn’t just extend to making philosophers look bad. When we speculate that various problems in human cognition are “easy,” we get to try to solve those problems on a computer and see (as it turns out) how wrong we were. The most interesting difficulty of this sort in [book: Brainchildren] is the “frame problem”. It’s best to illustrate this with a delightful example from the book:

> Once upon a time there was a robot, named R1 by its creators. Its only task was to fend for itself. One day its designers arranged for it to learn that its spare battery, its precious energy supply, was locked in a room with a time bomb set to go off soon. R1 located the room and the key to the door, and formulated a plan to rescue its battery. There was a wagon in the room, and the battery was on the wagon, and R1 hypothesized that a certain action which it called PULLOUT(WAGON, ROOM) would result in the battery being removed from the room. Straightaway it acted, and did succeed in getting the battery out of the room before the bomb went off. Unfortunately, however, the bomb was also on the wagon. R1 *knew* that the bomb was on the wagon in the room, but didn’t realize that pulling the wagon would bring the bomb out along with the battery. Poor R1 had missed that obvious implication of its planned act.
>
> Back to the drawing board. “The solution is obvious,” said the designers. “Our next robot must be made to recognize not just the intended implications of its acts, but also the implications about their side effects, by deducing these implications from the descriptions it uses in formulating its plans.” They called their next model, the robot-deducer, R1D1. They placed R1D1 in much the same predicament that R1 had succumbed to, and as it too hit upon the idea of PULLOUT(WAGON, ROOM) it began, as designed, to consider the implications of such a course of action. It had just finished deducing that pulling the wagon out of the room would not change the color of the room’s walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon — when the bomb exploded.

Lots of problems turn out to be much harder than they look at first sight, because we humans solve them without any difficulty. Build a mind from scratch, though, and you realize the awesome complexity of what we do “without thinking.” (My friend Dan Milstein expanded upon this idea in a captivating talk a few months back. I only wish it had been recorded.)

There is one final, ultimate test for robot intelligence that no one has really succeeded in dislodging, namely the Turing test. This famous test places a human in one room with a teletype machine, a robot in another, and a judge in a third. The judge is asked to have a conversation with both the robot and the human, and to guess which one is the robot. If the judge can’t do so with better-than-chance odds, the robot is deemed intelligent. The basic insight here is that carrying on an intelligent conversation brings in so many other elements of human intelligence that it would be impossible to sound intelligent without actually *being* intelligent. You’d need lots of experience from living in the world, some humor, the ability to understand the context behind what someone asks you, and millions of other things besides — things that we’re not even aware of because we do them so easily (the frame problem again).

Needless to say, we’re nowhere near writing a computer program that passes the Turing test — a caveat that Dennett lays out up front. Indeed, the least satisfying part of [book: Brainchildren] is that, despite its pretensions to replacing a lot of vague philosophy with scientifically grounded results on human minds, Dennett lays out surprisingly few such results. Those he does lay out are:

* a bunch of robots failing the Turing test
* the schematic — prior to any building — of a robot he and colleagues are working on called “Cog”
* a trip he took into the wilds of Kenya with a couple animal researchers, wherein Dennett claims that he taught them the intentional stance. Now, this struck me as unbelievable:

> But what does the [Moving-Into-The-Open] grunt [from vervet monkeys] mean? I suggested to Robert and Dorothy that we sit down and make a list of possible translations and see which we could eliminate or support on the basis of evidence already at hand.

Robert and Dorothy hadn’t already thought of this on their own? I’d be curious to interview Robert and Dorothy, to see whether they find the intentional stance quite as novel as Dennett makes out.
* Douglas Hofstadter’s Fluid Analogies Research Group, described in Hofstadter’s [book: Fluid Concepts and Creative Analogies] and in Melanie Mitchell’s [book: Analogy-Making as Perception]. These sound like real steps forward, and I’m excited to read them. I’ve violated my earlier promise, and have gone ahead and reserved these from the library. (This is how it always goes.)

If [book: Brainchildren] needs anything, it is more science and less philosophy. The shortage of empirical results makes [book: Brainchildren] somewhat self-undermining, given how much time Dennett spends castigating his own colleagues in philosophy for their lack of empirical rigor. (He really has it in for Jerry Fodor, about whom I know nothing.) I have to assume that this was deliberate on Dennett’s part, and I expect that he views his role as more that of a ground-clearer: dispense with a lot of silly ideas (e.g., the “Chinese Room”) about why intelligent robots are impossible, so that others can then move in and get actual work done.

[book: Brainchildren] suggests that a lot of this work has divided between “top-down” and “bottom-up” approaches. Top-down approaches would begin, in essence, from the intentional stance: we want to solve a particular problem (vision, or boundary detection, or understanding context-filled sentences), so we write a program that can do this. The bottom-up approach would instead start at the level of neural hardware: build a device that looks like a brain — maybe a large collection of McCulloch-Pitts neurons — and don’t work on the top-level problem until the lower levels have been established. If we want any model of the mind to be complete, and we want it to say something about actual human minds (rather than about, say, artificial intelligence considered as an abstract problem), the top level will need to be consistent with a bottom layer that looks like a human brain; that is, if the top-level program could only be implemented on a supercomputer the size of the Pentagon, it probably doesn’t have much to say about human minds. So the top-down and bottom-up approaches both have their merits.
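The bottom-up approach’s basic building block is easy to sketch. Here’s a minimal McCulloch-Pitts neuron in Python — the weights and thresholds are my own illustrative choices, not anything from the book. The unit fires when the weighted sum of its binary inputs crosses a threshold, and with the right parameters a single unit computes a logic gate:

```python
# A McCulloch-Pitts neuron: outputs 1 when the weighted sum of its
# binary inputs meets or exceeds a threshold, else 0.
def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, single units compute logic gates
# (illustrative parameter choices):
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b)  for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 1]
```

The point of the bottom-up program is that networks of units this dumb, wired together in vast numbers, are supposed to eventually yield the top-level behavior — nothing about any single unit hints at intelligence.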

It seems likely that any progress toward an artificially intelligent machine will involve some intermediate steps where the machine doesn’t act like a full-scale human, but acts like what you might call a toy human. It can’t carry on an intelligent conversation about any topic that might reasonably come up, say, but maybe it can talk about wombats. (Imagine writing a computer program that simulates a conversation with a kid who has behavioral problems.) We’ll learn some things from this, which we’ll fix in the next iteration.

Indeed, one of the great lessons I learned from [book: Brainchildren] was the further wisdom of “less thinking, more testing”: a lot of rather silly arguments could be short-circuited by developing rough, ugly prototypes that solve a small corner of a problem; instead of talking about “machine intelligence in principle”, we could then talk about performance *relative to an existing benchmark*. Let’s not talk about a phantom; let’s talk about this, here, now, and how we could improve upon it.

While others push toward that goal, Dennett clears some room for them to work. He deserves our thanks.

__P.S.__: Cosma Shalizi has, of course, a brilliant review of [book: Brainchildren] as well.

Joshua Ferris, The Unnamed — September 5, 2010

Joshua Ferris, The Unnamed

Cover of _The Unnamed_: blue-sky background, black birds flying around it, book title in white, author's name in black (sans serif throughout)

This is the first nontrivial book (as compared to, e.g., [book: Ant Farm]) that I’ve read in a single day in a long while. It’s a strange kind of captivating, in that you keep wondering “what’s going to happen to this poor guy next?”

The single idea in [book: The Unnamed], drawn out brilliantly by Joshua Ferris, is that Tim Farnsworth sometimes just can’t stop walking. His legs start moving, and at that point he can’t stop. If he’s lucky, when it happens he’ll have a backpack already packed containing all the supplies he’ll need for hours, possibly days. During the winter, if he’s even luckier, he’ll already be wearing a parka and a hat. However well he’s prepared, his body will send him walking, and there’s not a damn thing he can do about it. Eventually he’ll grow exhausted and collapse into blissful sleep on a park bench or in a forest or wherever he happens to have landed. He’s often mistaken for a bum, except that he’s a high-powered New York lawyer who drops hundred-dollar bills here and there to get done whatever he needs to get done.

Is this a mental problem? Tim insists that it’s not. He’s at war with his own body: as his feet set off down the road, his rational mind tries to patch up the world around him. He asks the security guard at his law office to walk alongside him and do some favors for him while he’s off on his walk; he dictates orders to fellow lawyers as he’s marching away from them, while they stand confused and expect him to come chat.

Eventually the world is going to catch on to this. Tim is defending one of the firm’s longtime clients on a murder charge, but Tim can’t sit down long enough to talk with him. He makes up an excuse that his wife is dying of cancer and that he has to be by her side. But even that doesn’t explain why, in the middle of a client meeting, Tim stands up, unprompted by any phone call or text message, grabs his backpack, and heads off into the streets of New York.

The murder case goes to trial, and its lead attorney has been AWOL for weeks. The client is found guilty. Tim is dismissed from his job. He keeps walking. His wife considers leaving him, after so many years of waking in the middle of the night to find him gone, desperately driving around looking for him asleep under bridges or behind dumpsters. His life continues to swirl down the drain.

It’s not clear what any of this is “about,” really. Initially I thought it might be an allegory about American corporate life: a man keeps walking, and for what? But it’s not; his time as a lawyer ends, and he comes to enjoy the little things in life, but his life continues to collapse. It could well have some broader religious meaning about overcoming the absurdities of the body through the mind’s discipline.

But I don’t think any of that is why you read [book: The Unnamed]. You read it because of Ferris’s gifts for pulling you into the story and never letting you go. On the basis of [book: The Unnamed], and of many friends’ recommendations (not to mention Jay McInerney’s), Ferris’s [book: And Then We Came to the End] moves high up in the queue … after I’ve worked through 106 others, that is.

Thomas Pynchon, Inherent Vice — September 4, 2010

Thomas Pynchon, Inherent Vice

Inherent Vice cover: old beater of a car, with surfboards on its roof, sitting in front of a surf shop on the beach. Lots of loud pink colors, almost neon. The title of the book is in fact written in neon-type letters.

This is the book that would result if [film: The Big Lebowski], Chevy Chase’s [film: Fletch] series, and 1940s-era noir films were combined, if the classic noir blonde bombshell were updated into a 60s-free-love hippie chick, if the private-detective aspect of [film: Fletch] were kicked up a notch and made somewhat more serious, and if the whole thing were then novelized and relocated to 1969.

Our hero, a stoner private eye, ambles around late-60s L.A. after the Sixties have run their course and everyone in authority — cops especially — hates the hippies. Nixon is in the White House and Reagan is in the California governor’s mansion. His ex-girlfriend visits him on the very first page, announcing that her new boyfriend has disappeared and she needs help finding him. I’m not giving away terribly much if I tell you that they eventually have sex but it’s all, like, whatever, man? In any case, he can’t resist his dame (as an earlier generation of private-eye novels might have put it), so he stumbles around looking for clues. Despite smoking a really overwhelming quantity of pot (scarcely a page goes by when he doesn’t light up), he somehow manages to know a lot of people and ask the right questions. The cops hate him, but also seem to respect him. One of the pervasive mysteries in this book is how our hero manages to be both quite the slacker and also a reasonably competent PI.

People are always ending sentences with question marks? Even when it’s, you know, a declarative utterance? The endless mocking of California, and California stoner culture more specifically, is quite funny, but does bleed over into slapstick at points. This initially bothered me, but I suspect it’s Pynchon letting us know that we shouldn’t take any of it too seriously. So I didn’t. It’s a private-eye comedy, and it’s tons of fun.

This is the first Pynchon that I’ve managed to make it all the way through. I gave [book: Gravity’s Rainbow] a shot many years ago. I was a teenager at the time, which probably means I wasn’t ready for it in any case. But I think that would also be a tough read no matter when I read it. I distinctly remember giving up when the narrator dives into a toilet, goes for a long swim, and gets a turd stuck in his nose.

Not sure what made me grab this Pynchon when the author’s very name had been, for me, a mark of self-indulgent wankery. It could be the positive [mag: New Yorker] review. More likely it was that I stumbled into the Harvard Book Store while happily buzzed off strong Craigie on Main cocktails one night, and I couldn’t help myself. The purchase, given that background, was entirely appropriate.

Books I own that I haven’t read — September 2, 2010

Books I own that I haven’t read

The other day I went through one of my periodic bouts of OCD, wherein I grab all the books off my shelves that I’ve not yet read. Turns out there are 100 books on that list; they’re below. It occurred to me that I went through the same exercise back in the day; I looked and found that indeed, I made such a list in 2006. There were 27 books on the list at that time. Of those, I’ve read five. I’ve decided that I’ll never read nine of them. That means I’ve added … uh … 87 books to the list. Yikes. This is not working out well. Here’s what the new, shameful pile looks like:

Five stacks of books, each probably 20 inches high

The list of books in those piles is below the fold.

Then there’s the to-read list, which contains 540 books at the moment and which overlaps quite a bit with the list below. Oh, and I have a shelf full of math books that I will, in truth, probably never read (e.g., Munkres’s [book: Topology]). Not a hopeful scene at all.

I also have three books checked out of the library at the moment:

* [book: Probability with Martingales], on the recommendation of a Twitter follower
* [book: Google’s PageRank and Beyond], just to refresh my memory
* Joshua Ferris’s novel [book: The Unnamed], on Adam Kessel’s recommendation.

Finally, I’m in queue for four books at the library:

* [book: Were You Born on the Wrong Continent?], by Tom Geoghegan (author of the heartbreaking and awe-inspiring [book: Which Side Are You On?: Trying To Be For Labor When It’s Flat On Its Back], and lamentably defeated candidate to replace Rahm Emanuel as the Congressman from Illinois’ fifth district)
* Rosecrans Baldwin’s novel [book: You Lost Me There]
* [book: Diary of a Very Bad Year], by the possibly-fake hedge-fund manager profiled on [mag: n+1] (favorably reviewed by Ezra Klein)
* Karl Polanyi’s [book: The Great Transformation] (which I could have sworn that Cosma Shalizi reviewed somewhere, but apparently not)

Objectively speaking, this collection of books bespeaks almost pathological packratitude. So now I have my unread-but-owned book stack out in the open, sitting guiltily on the floor in front of my bookshelves. I’d like to say that I won’t buy another book until I’ve polished all of those off. A boy can dream about the end of his own pathologies.


Upon hearing that a four-story New York City Barnes & Noble is closing — August 31, 2010

Upon hearing that a four-story New York City Barnes & Noble is closing

(…as reported in the [newspaper: New York Times]), it is perhaps appropriate to include a graphic which, while it doesn’t prove anything, is *suggestive*:

Stock prices of AMZN (Amazon), BGP (Borders), and BKS (Barnes & Noble) from late 2005 to now. Barnes & Noble has fallen about 58.92% in that time; Borders has fallen 95.1%; Amazon has risen 198.21%

“BGP” here is Borders, “AMZN” is of course Amazon, and “BKS” is Barnes & Noble.

Borders’ market cap — the total value of its stock — is $67.1 million. Barnes & Noble’s is $838.49 million. Amazon’s is $55.44 *billion*. Granted, Amazon is not just a bookstore. But these numbers aren’t, I think, all that misleading.

While we’re on the topic of Amazon, it occurred to me the other day: there’s essentially nothing that makes Wal-Mart distasteful while making Amazon desirable. Both try to squeeze their suppliers as much as possible to get the cheapest prices for their customers. Both use their size as a weapon to get those low prices from their suppliers. Both are killing off neighborhood stores; it just happens that Wal-Mart does it rather more obviously than Amazon. Amazon would probably have labor troubles as well, if it had as many low-paid employees as Wal-Mart does.

And yet I buy from Amazon sometimes [1], while I wouldn’t be caught dead inside a Wal-Mart. This is partly an unconscious class thing: Wal-Mart is associated, among folks in my particular urban milieu (a friend calls us “SWPLs”, from Stuff White People Like), with trashy suburbs and poorer folks. Until recently, it hadn’t consciously occurred to me that that might be the issue, but I think it is.

That said, in both Amazon’s and Wal-Mart’s case we shouldn’t romanticize what came before. Amazon didn’t replace a nation of Harvard Book Stores; it replaced a nation of Barnes & Nobles and Borderses. B&N and Borders may have begun the trend of eliminating local bookstores [2], but they also eliminated Waldenbooks. Waldenbooks, in turn, had been a K-Mart property since 1984 (according to the Wikipedia). Likewise, Wal-Mart — at least in my limited experience — didn’t replace a nation of shopkeepers; it replaced a nation of K-Marts.

I have no real point here. I’d recommend that you buy books where you enjoy buying books. Everyone’s going to have his own tradeoff between price and localness. My cutoff is around $20: if the same book is $20 cheaper on Amazon, I’m likely to buy it there. I hope there comes a point when my income is such that I don’t pay attention to differences of that magnitude, but I’m not there yet.

Harvard Book Store facade

Around here, the Harvard Book Store is such an institution, and adds such color to the area, that its disappearance would be an incalculable loss. It’s hard to imagine Harvard Square without that beautiful black-and-gold façade; I hope I never have to imagine it.

A friend suggested a couple years ago that all the damage had been done: the market-share division between Amazon and the rest of the world was about where it was going to settle. I was hopeful then. I’m less hopeful now. Electronic books look like a real killer; Amazon made waves recently when it noted that it sells more electronic books than it does hardcovers (*new* hardcovers, presumably). That market is only going to grow, and there’s no reason to think that the Harvard Book Stores of the world can compete there.

So I’m worried. All I can do is continue to buy local when possible, and hope for the best. I’m lucky to live in a town where a Harvard Book Store is even possible; most places aren’t nearly so lucky. And even the local-bookstore market has thinned dramatically around here in recent years. When Wordsworth, the Harvard Square institution, closed six years ago, its founder bitterly noted:

> “In the 1980s … on Memorial Drive, you’d see people coming out of dorms and heading toward Harvard Square. In the 1990s, what you’d see in the windows of dorms was a Doppler effect of blue lights from computer screens, and you knew students were at their computer, hitting a key to order from Amazon.com. The only reason they’d come out of their dorms was to have Chinese food and mate.”

[1] — I normally get my books from the library. If I buy new books, I buy them from the Harvard Book Store just up the street. If I buy used books, I buy them off Amazon when they’re significantly cheaper than the Harvard Book Store’s copies. Also, Amazon’s used-book selection is just much better than any local store’s would be, particularly if I’m looking for obscure academic texts; HBS doesn’t carry those at all.

[2] — Those of us who grew up near Burlington, Vermont remember Chasman & Bem. In retrospect, it was a beautiful bookstore. At the time, I remember the service being terrible. If it were still around, and I still lived in Vermont, I’d be shopping there rather than at the Borders or Barnes & Noble up the street. It’s too late for that, though: Barnes & Noble moving in a couple miles up the street killed them off.

I visited my brother in Boston back in probably 1993 or 1994 and hit up a gorgeous bookstore with him in Faneuil Hall (the last time I actually hung out in Faneuil). I believe that was Waterstone’s, part of a British bookstore chain. It, too, is gone.

Karen Armstrong, The Case For God — August 29, 2010

Karen Armstrong, The Case For God

Cover of The Case For God: title, author, etc. in sans-serif font, emerald-green background, big stack of books from famous philosophers and theologians. Running from bottom to top, the books are: Aquinas's condensed Summa, Plato's complete works, New Testament, Talmud, Basic Works of Aristotle, Augustine's Confessions, 'On The Kabbalah and Its Symbolism', Civilization and Its Discontents, Origin Of Species, Koran

It is a very important and very modern error, says Karen Armstrong, to read the Bible as though it contained factual content. The Greeks knew that there were two kinds of knowledge, [foreign: logos] that concerns itself with facts, and [foreign: mythos] that concerns itself with things like love and courage and coping with hardship; the Greeks never confused these domains. Moving briskly through religious and philosophical history, Armstrong argues that no one confused these domains until the Scientific Revolution: Plato, Aristotle, Jesus, Augustine, Aquinas, Luther, Descartes, Pascal, and on and on — they all realized that there was a domain to which science would never properly have access. This separation seems to have disappeared in our modern scientific era, when all ideas are thought to be amenable to scientific analysis.

They also all realized that there was a realm of the literally unspeakable, which one could only reach through long study and ritual. The ritual was important: merely reading texts wouldn’t get you there. And the texts — for instance, the Bible and the Torah — don’t just stand on their own; they’re meant to be read along with a teacher. And they’re meant to be read metaphorically rather than literally ([foreign: logos] versus [foreign: mythos] again).

The reader will naturally wonder: if the Bible is meant to be read metaphorically, does that mean that Jesus did not literally perform the miracles and did not literally ascend to Heaven? Armstrong says explicitly that few people ever took the miracles literally, and in any case that they don’t matter to faith. Jesus, along with some other famous holy men, was probably very good at healing certain psychosomatic disorders, but she seems to argue that walking on the water and so forth are meant to be understood metaphorically.

What, then, of things like the Nicene Creed, which asserts that Christ “suffered, and … ascended into heaven”? Did he literally ascend into heaven, or metaphorically? Armstrong asserts that the ignorant Emperor Constantine forced this creed on Christians, who then returned to their homes and pretty much continued as they had, treating the whole thing as a metaphor. If not in the specifics, then in the general approach, Armstrong here meshes with what I know of Aquinas: when we say that God is a rock, we don’t literally mean that He is a rock; we’re supposed to be able to identify what’s metaphor and what’s fact in the Bible.

Armstrong essentially contends that all religions, going as far back as Buddhism and Hinduism, have believed in an ineffable realm that’s only accessible through prayer and worship and charity. She contends that Christianity, Judaism, and Islam have all gone after the same basic peaceful approach to humanity. For example (p. 79):

> In a famous Talmudic story, it was said that Hillel had formulated a Jewish version of Confucius’s Golden rule. One day, a pagan had approached Hillel and promised to convert to Judaism if Hillel could teach him the entire Torah standing on one leg. Hillel replied: “What is hateful to yourself, do not to your fellow man. That is the whole of the Torah and the remainder is but commentary. Go learn it.”

Here we have a line connecting Confucius and Judaism. Armstrong connects Judaism and Christianity, and Christianity and Islam, in the same way. They’re all essentially teaching us to be good to our neighbors, and all giving us a set of rituals to tap into the ineffable.

Armstrong’s writing is so clear, and her message of universal love so captivating, that you have to step out of it periodically and wonder if she’s missing a part of the story. Why do we have separate religions, if they’re all chasing after the same basic ineffable truths? (That’s the thing: she seems to be arguing that, if you follow the rituals of each religion, you’ll eventually land on *the same* ineffable truths.) Unless I misunderstand the New Testament, Christians really do believe that they’ve replaced Abraham’s covenant with a new one; Christianity isn’t just Judaism with a new face. Islam really did take over Constantinople and turn the Hagia Sophia from an Orthodox or Latin cathedral into a mosque. I certainly do hope that a message of universal love lies beneath every world religion, but then what have they all been fighting for?

In Armstrong’s eyes, the uglier parts of the various religions are perversions of the one true idea. She regrets Augustine’s doctrine of original sin:

> …Original Sin, one of his less positive contributions to Western theology. He produced an entirely novel exegesis of the second and third chapters of Genesis, which claimed that the sin of Adam had condemned all his descendants to eternal damnation. Despite the salvation wrought by Christ, humanity was still weakened by what Augustine called “concupiscence,” the irrational desire to take pleasure in beings instead of God itself. It was experienced most acutely in the sexual act, when our reasoning powers are swamped by passion, God is forgotten, and creatures revel shamelessly in one another. … Born in grief and fear, this doctrine has left Western Christians with a difficult legacy that linked sexuality indissolubly with sin and helped to alienate men and women from their humanity.

She gives so little time to ideas like these that you’d almost forget there’s any content to each religion on its own. Instead, she focuses on the act of learning each religion, which universally involves studying alongside a teacher (rabbi, priest, imam, whatever) and performing charitable works. You’d almost forget that many thousands (millions?) of people have died from “perversions” of religion.

Armstrong’s gentle humanity — which wants to find the good in all religion — is endearing and infectious, and her scholarship is breathtaking: [book: The Case For God] covers a vast swath of intellectual history, from the authors of the Lascaux cave paintings up to the new atheists (Dawkins, Dennett, Hitchens). She’s made me want to go back and reread Augustine and Aquinas with an eye to metaphor rather than to factual content. Her description of Aquinas’s [book: Summa] is gripping intellectual history, and I quote it at length below the fold.

I haven’t yet established to my satisfaction whether the use of reason in religion — which Aquinas made most famous — even makes sense, and I’m not sure that Armstrong answers this question. Armstrong’s tool throughout [book: The Case For God] is apophatic theology, the method of defining God by what it is not: God is not a mortal man; God is not just an infinite version of you and me; God is not made of a substance at all; and so forth. One sees in this method something akin to the Zen Buddhist koans: a different state of understanding achieved by coming to grips with paradox.

Around the Scientific Revolution, religion undermined itself by trying to make something scientific of itself. God became something whose existence one could document; “He” eventually became something a lot like a human, only infinitely large and infinitely wise and infinitely patient and so forth. The metaphor fell away, as did the awestruck stance before a fundamentally ineffable thing. When God is understood in a factual way — “He” created the Universe some fixed number of years ago, and He reached down to smite Onan, and so forth — we lose both grandeur and believability. When compared to a scientific standard, *of course* the Bible will collapse; it was never meant to be read that way. By trying to appeal to the scientists, modern Christians have doomed their religion to be laughed at and discarded.

As I also read in Diarmaid MacCulloch’s [book: The Reformation], Armstrong notes that the Catholic Church’s reaction against Galileo — and against the Copernican revolution — was really an unfortunate accident of timing: the Church had been losing to the Protestants, and consequently tightened its grip on dogma. Again, the Church should have no position on cosmology; the Bible can’t be read as a factual map of the Creation.

Then again, of course, a book like the Bible — or even a much shorter document like the Constitution — will be subject to changing interpretations as the years pass. Armstrong sounds like a strange kind of self-negating fundamentalist at points: the Bible’s, and the Torah’s, and the Koran’s meanings must change as each new learner understands it in context, but *it was never meant to be understood in my one disfavored way*. Essentially Armstrong is arguing that apophatic theology is one of the only ways to understand a religious text. She asserts with evidence that every important theologian has approached his or her book apophatically. Still, I’m sure she could have filled another book of the same size with theologians of the opposite stripe.

I’ve only scratched the surface of the evidence that Armstrong has amassed. The best thing I can say about it is that you really owe it to yourself to pick up [book: The Case For God] and devour it like I did. Armstrong’s brilliant writing is going to bring me back to theology to see if I can understand the masters better.


Albert O. Hirschman, Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States — August 23, 2010

Albert O. Hirschman, Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States

There is basically nothing on this cover but some 70s-style text: author, title, subtitle

There should be a category on bookstore shelves for “little, dense, incisive, ingenious books.” [book: Exit, Voice, And Loyalty] would be one; Herbert Simon’s [book: The Sciences of the Artificial] would be right next to it.

Hirschman here examines three methods for addressing an organization’s deterioration: “exit,” the option of leaving the organization, not buying its products anymore, etc.; “voice,” the option of sticking with the organization and protesting in the hopes of improving it; and “loyalty,” which is really more a property that encourages you to stay with the organization longer (thereby probably delaying exit and giving voice more of a chance). Normally we think of exit and voice as separate powers: exit is what you use in a market economy, while voice is what you use when dealing with a government (which is typically hard or even impossible to exit). Hirschman asks a simple question: how do these powers interact?

The interaction turns out to be fascinating. Consider parents dissatisfied with the performance of their children’s public school. The parents who are most focused on school quality are likely to leave first and put their kids in private schools, leaving behind only the less quality-focused parents who are less likely to speak up. So the exit option (parents abandoning the school) diminishes the use of the voice option (speaking up). This is likely to accelerate the school’s decline.

Or consider a product (maybe a brand of automobile is a canonical example here) whose quality has diminished over time. Again, exit is likely to be used before voice: the customers most concerned about quality will bail first. If too many customers bail too fast, the company won’t have enough time to fix the product before it’s gone out of business. Whereas if too many customers don’t bail out quickly enough, the company will never learn that its products need to improve. So one can postulate a certain optimal level of quality-sensitivity in customers: not so high as to kill the company before it fixes things, but not so low that the company stagnates. Modeling this formally would naturally use an analogue of price elasticity; whereas price elasticity measures the percentage decrease in sales that results from a small percentage increase in price, quality elasticity measures the percentage change in sales for a small percentage change in quality.
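To make the analogy concrete, here’s a rough sketch of that elasticity calculation in Python; the numbers are purely hypothetical, not drawn from Hirschman:

```python
def elasticity(q0, q1, x0, x1):
    """Arc elasticity: percentage change in quantity sold per percentage
    change in x (x = price for price elasticity, x = quality for its analogue)."""
    return ((q1 - q0) / q0) / ((x1 - x0) / x0)

# Hypothetical numbers: quality slips 10% (100 -> 90) and sales fall 20%
# (1000 -> 800 units), so quality elasticity is 2.
e = elasticity(q0=1000, q1=800, x0=100, x1=90)
print(e)  # 2.0: sales fall 2% for every 1% decline in quality
```

On Hirschman’s account, a market whose customers have very high quality elasticity kills the firm before it can recover, and one with very low elasticity never signals the problem at all.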

Indeed, one of the neat little accomplishments in this neat little book is that it uses formal economic models to study the interaction of exit, voice, and loyalty even when the context is, say, the decline of a government rather than that of a company. The indifference curves are mostly confined to appendices, and Hirschman’s writing is clear enough that normally you can get the gist just as easily — if less rigorously — from his words as from his charts.

A government would be a classic case where exit is basically unused: yes, you could leave the United States in protest — and some large fraction of our countrymen promise to move to Canada at every presidential election — but would you? So the main option available to you in a democracy is voice. Similarly, you’re likely to raise your voice in a political party before you’d exit it, particularly if the alternative party is far away from you ideologically.

Here Hirschman consults a famous model by Harold Hotelling, metaphorically applied to ice-cream stands lined up along a beach. Imagine that beachgoers are distributed uniformly along the beach, and two ice-cream stands are trying to decide where to place themselves to capture the most customers. Imagine that the ice-cream stands initially start on opposite ends of the beach, but are free to relocate. The leftmost ice-cream stand realizes that if it moves a little bit toward the center, it can continue to pick up customers to its left (because those customers are still closer to the leftmost ice-cream stand than to the rightmost) while picking up those customers that lie less than halfway between the leftmost and rightmost stands. The rightmost stand realizes, similarly, that it can move to the center and continue picking up those customers to its right while picking up a few more in the middle. This process continues until both ice-cream stands are almost exactly at the middle, separated by a tiny sliver of distance.
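That convergence is easy to watch in a toy simulation — my own sketch, not anything from Hirschman’s book. Each stand greedily shifts one step at a time toward whichever position captures more of the beach:

```python
# Toy simulation of Hotelling's beach: two stands take turns moving one
# step at a time toward whichever position captures more customers.
def share(a, b, beach=100):
    """Length of a uniform beach [0, beach] closer to stand a than to stand b."""
    if a == b:
        return beach / 2  # tie: split the beach evenly
    mid = (a + b) / 2
    return mid if a < b else beach - mid

def best_move(me, rival, beach=100):
    # Consider staying put or shifting one unit left or right.
    return max((me - 1, me, me + 1),
               key=lambda pos: share(pos, rival, beach) if 0 <= pos <= beach else -1)

left, right = 0, 100  # start at opposite ends of the beach
for _ in range(200):
    left = best_move(left, right)
    right = best_move(right, left)

print(left, right)  # 50 50 — both stands end up at the center
```

Each one-step move is individually rational, and the only stable outcome is both stands sitting at the midpoint — exactly the creep-to-the-center dynamic described above.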

Replacing the beach with some measure of political ideology, the conclusion is that it’s always in political parties’ interests to move to the center. There are some obvious reasons why the analogy isn’t quite right. It’s not obvious that voters are uniformly distributed across the ideological spectrum, for one. For another, the Hotelling model assumes that customers don’t care about the cost of walking down the beach: they’ll pick the nearest stand, regardless of how far they have to walk to get there. In a political context, this would imply that voters don’t especially care which beliefs their parties hold; they just want the party to be “closer to me than the other party is.” Which isn’t obvious: perhaps people on the right or left ends of the political spectrum really want parties on the extremes, and will opt out of party politics altogether if they’re forced to pick among centrist alternatives.

Hirschman ends up rejecting the Hotelling model as it applies to political parties, for the reasons laid out above (more to the point: because it fails to work empirically). Here he’s rejecting what is apparently known as Hotelling’s Law, not to mention the median-voter theorem.

For such a small book and so few atoms (three, in fact: exit, voice, and loyalty), Hirschman’s is remarkably dense with interesting ideas. Highly recommended.