Ben Greenman, What He’s Poised To Do: Stories — July 9, 2010

Ben Greenman, What He’s Poised To Do: Stories

Oil painting of a woman in negligee and knee-high stockings or boots, looking away from the camera toward a table lamp situated on a night-table. You can't see it from this photo, but the image continues over the spine and onto the back cover of the book; on the back cover, we see a man in a somewhat rumpled suit with a turned-down mouth. The woman sits on a rumpled bed, and the man stands on the bed's other side; looks like he's on his way out after a hotel-room assignation

Reading this collection of short stories, I felt a lot like John Mayer narrating a baseball game that he didn’t understand. This was a series of short stories that ended with my saying, “Aaaaand *that* happened!” One of them, for instance, features characters who have — yes, right — moved to the moon. But it’s like … they haven’t *really* moved to the moon. Or maybe they have, but the moon behaves a lot like Nebraska. So … that happened.

Most every story involves people at some emotional distance from their loved ones. There’s the husband away on an extended business trip and listlessly sleeping around while he’s there; he tries to have a phone conversation with his wife, but both of them burst into tears almost immediately. So they settle for writing letters or postcards to one another. That’s how the book works in general: people write letters to one another that sound listless, distant, and a little broken. Sometimes people are so distant from one another that all they can manage is a postcard.

But then sometimes the stories are just plain funny. A guy and his wife head off to a cottage for a romantic getaway; the guy is overwhelmed by the beauty of his surroundings, so he flounces about sniffing honeysuckle and taking in the natural whateverwhatever of it all. Meanwhile his wife is dragging their luggage along to the cottage, glaring at him. He thinks he knows what she means by that glare: sex is on. He rushes up to the bedroom of their cottage, leaving her to deal with the luggage. He throws the bedroom window open, takes in the perfection of the natural scene, gets naked, and reclines upon the bed to await the inevitable carnality.

The fellow who reviewed [book: What He’s Poised To Do] for The Bookslut was overwhelmed. He had the same experience with this book that I had with Nabokov’s [book: Lolita]: he was so affected by it that he had to step away often, take a breath, and think about what he’d just read. I did not feel that way. It took me a couple hours to tear through [book: What He’s Poised To Do] — not wasted hours, certainly, but basically ho-hum hours. Greenman’s characters are so beaten down by life, and (except for one, an African-American man who revisits his roots in Malawi in the late 60s) have such flat affects, that I suspect it would be hard for [book: What He’s Poised To Do] to quicken anyone’s pulse.

Ian Frazier, Lamentations of the Father — June 27, 2010

Ian Frazier, Lamentations of the Father

Cartoon of a father slipping on a child's wheeled toy and falling backwards, a vexed look in his eyes.

The [mag: New Yorker] sort of humor gets its canonical expression in S.J. Perelman, whose style is probably best captured in his classic essay “Insert Flap ‘A’ And Throw Away.” I can’t find any full copies of that essay on the web, though Language Log grabs some choice quotes.

At the risk of analyzing [mag: New Yorker] humor to death, it tends to combine a) an excerpt from a real-life newspaper article, which it then expands into b) an absurdist interpretation of that same article, typically including c) the narrator making an ass of himself. It may also include d) the male householder trying and failing to grasp some shred of dignity (see Perelman quote, above).

Ian Frazier does all of these things. What makes him different from Perelman or Woody Allen or any of a long line of absurdist [mag: New Yorker] writers is that Frazier is not funny. I am sorry to declare this. I laughed a few times during [book: Lamentations of the Father], but mostly I had no choice but to step outside the frame and note, “Yes, I see what you’re doing there. I see that you want me to laugh.” Whereas when you read a Perelman or an Allen or a Steve Martin essay, you’re too busy doubled over laughing, in tears, to think about what the author is trying to do.

Perelman brings something else to the enterprise, something Frazier just does not have in him: a crazy, effortless, ridiculous command of the English language. Perelman is the man who uses the word “firkin” in two of my favorite sentences ever:

He is a hearty trencherman, as befits a man of his girth, and has been known to consume a firkin of butter and a hectare of gherkins in less time than it takes to say ‘Bo’ to a goose.

and

Of course, five cents in those days bought a good deal more than it does now; it bought a firkin of gherkins or a ramekin of fescue or a pipkin of halvah…

These are sentences that don’t need to exist. They are very, very silly. They add up, through steady and deliberate accretion, to endless belly laughs. They are cleverer than anything I will probably ever come up with in my life. By writing for [mag: The New Yorker], Ian Frazier has placed himself beside these sainted authors; he cannot avoid a negative comparison.

So my advice is to skip [book: Lamentations of the Father] and go straight to what Frazier was aiming for in his cargo-cult-comedy exercise. Read any of Woody Allen’s short collections ([book: Without Feathers], [book: Side Effects]); [book: Most of the Most of S.J. Perelman]; or the [mag: New Yorker]’s own collection, [book: Fierce Pajamas].

Peter-Paul Koch, ppk on JavaScript —

Peter-Paul Koch, ppk on JavaScript

Cover of 'ppk on JavaScript': the letters 'ppk' spelled out hacker-like, with ']{' supposed to form a 'k' and so forth.

This is a book for beginning JavaScript developers. If they come to this book with any pre-existing software-development experience, and they have any choice in the matter, the most sensible response will be to run away from the field as fast as possible.

There’s no *elegance* in this book. It is a collection of ways to hack around browser defects. This is expected, given that its author runs the famous QuirksMode website, documenting in glorious detail all the ways that browsers vary in their support for web standards. You’ll find, for instance, that Internet Explorer calls the target of an event its `srcElement`, while the standard calls it `target`. So then you’re required to write a little shim like ppk’s `doSomething()` method. Or you’ll find that the XMLHttpRequest object behaves differently under different browsers, requiring another abstraction like ppk’s `createXMLHTTPObject()`.
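The shape of that first shim, as I understand it, is roughly this. To be clear, this is a sketch of the idea, not ppk's actual code; the handler body and the `return` are mine, added to make the sketch self-contained:

```javascript
// A normalizing event handler: old IE puts the event on window.event and
// calls its target "srcElement"; standards browsers pass the event in as
// an argument and call it "target".
function doSomething(e) {
  e = e || window.event;                 // old-IE fallback
  var target = e.target || e.srcElement; // standard name first, then IE's
  // ...do the real work with `target` here...
  return target;                         // returned only to make this testable
}
```

Wire it up with `element.onclick = doSomething;` and the same function works in both worlds, which is exactly the flavor of tedium the book catalogs.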

None of this is actually interesting. At best, when you’re done using every one of these abstractions, you will have overcome some silly impediments to doing what you actually want to do. What is interesting about software development is *actually solving problems*. When a language — or, in this case, an ill-specified language with competing frameworks — gets in the way of getting the task done, it forces you to gnash your teeth just to accomplish basic chores, not to speak of the challenge that you entered the profession to solve. Syntax hurdles are not interesting; actual substantive problems are.

This sort of problem is why libraries like jQuery exist. Instead of dealing with every browser’s strange implementation of XMLHttpRequest, you deal with a normalized jQuery object that looks like `$(someObject)`. That `$(…)` business is where all the `target`-versus-`srcElement`, `ActiveXObject`-versus-`XMLHttpRequest` nastiness gets hidden.
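For the XMLHttpRequest side, the kind of fallback jQuery buries looks something like this. This is a sketch in the spirit of ppk's `createXMLHTTPObject()`, not his actual code:

```javascript
// Try each way of constructing an XHR object in turn: the standard
// constructor first, then the two ActiveX flavors that old versions of
// Internet Explorer shipped.
function createXHR() {
  var factories = [
    function () { return new XMLHttpRequest(); },                  // the standard
    function () { return new ActiveXObject("Msxml2.XMLHTTP"); },   // old IE
    function () { return new ActiveXObject("Microsoft.XMLHTTP"); } // older IE
  ];
  for (var i = 0; i < factories.length; i++) {
    try {
      return factories[i]();
    } catch (e) {
      // that flavor doesn't exist in this browser; try the next one
    }
  }
  return null; // no XHR support at all
}
```

Every single ajax call, in every browser, is sitting on top of a dance like this one; jQuery just does the dance once, in one place.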

Indeed, throughout [book: ppk on JavaScript], all I could think was that much of what Koch was writing should be hidden behind frameworks like jQuery. (And if Koch were writing this book today, I’m fairly certain he’d help you skip all that.) An ungodly fraction of the rest of [book: ppk on JavaScript] is devoted to the basic syntax details of JavaScript — for-loops, while-loops, and the rest. Any experienced developer is going to skip right over these.

It *is* a valuable book if you want to understand the fundamentals beneath your jQuerys and Node.jses and such. I’m sure it’ll be good to have on my shelf to grab when I encounter some corner case. But over time, the difficulties covered in [book: ppk on JavaScript] are getting hidden more and more beneath frameworks, and pushed out of existence as browsers become more standardized and websites drop support for old browsers. So [book: ppk on JavaScript] starts to look like a dated catalog of the bad old days.

__P.S.__: It’s a small nit, but I really did enjoy this line of Koch’s: “Frankly I don’t believe that Internet over mobile phones will ever amount to much in Europe and North America”. [book: ppk on JavaScript] came out in late 2006; the iPhone came out in mid-2007. Talk about bad timing. This convinces me (as if I needed any more convincing on this point) that playing the tech prognosticator is a mug’s game.

iPhone 4 FaceTime/Infinite Jest mashup — June 7, 2010

iPhone 4 FaceTime/Infinite Jest mashup

Apple’s introduction of FaceTime, their videophone protocol in the forthcoming iPhone 4, reminds me of this great passage in David Foster Wallace’s [book: Infinite Jest]:

> (1) It turned out that there was something terribly stressful about visual telephone interfaces that hadn’t been stressful at all about voice-only interfaces. Videophone consumers seemed suddenly to realize that they’d been subject to an insidious but wholly marvelous delusion about conventional voice-only telephony. They’d never noticed it before, the delusion – it’s like it was so emotionally complex that it could be countenanced only in the context of its loss. Good old traditional audio-only phone conversations allowed you to presume that the person on the other end was paying complete attention to you while also permitting you not to have to pay anything even close to complete attention to her. A traditional aural-only conversation – utilizing a hand-held phone whose earpiece contained only 6 little pinholes but whose mouthpiece (rather significantly, it later seemed) contained (62) or 36 little pinholes – let you enter a kind of highway-hypnotic semi-attentive fugue: while conversing, you could look around the room, doodle, fine-groom, peel tiny bits of dead skin away from your cuticles, compose phone-pad haiku, stir things on the stove; you could even carry on a whole separate additional sign-language-and-exaggerated-facial-expression type of conversation with people right there in the room with you, all while seeming to be right there attending closely to the voice on the phone. And yet – and this was the retrospectively marvelous part – even as you were dividing your attention between the phone call and all sorts of other idle little fuguelike activities, you were somehow never haunted by the suspicion that the person on the other end’s attention might be similarly divided. During a traditional call, e.g., as you let’s say performed a close tactile blemish-scan of your chin, you were in no way oppressed by the thought that your phonemate was perhaps also devoting a good percentage of her attention to a close tactile blemish-scan.
> It was an illusion and the illusion was aural and aurally supported: the phone-line’s other end’s voice was dense, tightly compressed, and vectored right into your ear, enabling you to imagine that the voice’s owner’s attention was similarly compressed and focused … even though your own attention was *not*, was the thing. This bilateral illusion of unilateral attention was almost infantilely gratifying from an emotional standpoint: you got to believe you were receiving somebody’s complete attention without having to return it. Regarded with the objectivity of hindsight, the illusion appears arational, almost literally fantastic: it would be like being able both to lie and to trust other people at the same time.

This is only the beginning of a several-pages-long discussion of why videophones (from the future-retrospective stance) failed. People notice first that they look really gross on camera. Then they get self-conscious, so they wear masks when they’re on their videophones. This makes them terrified to meet people in real life, because those people will discover that they’ve been lied to during their videophone chats. So people stay indoors. There are a few other steps in there that I forget (and Google Books is no help), but the end result is that society eventually makes one big coordinated move to drop its videophones.

(You really need to read [book: Infinite Jest]. It’s one of those books that everyone knows about but few read. You should be one of the few to read it. I reviewed it on Amazon back in 2001.)

By the way: I’ve been considering switching to any of the new Android phones when my AT&T contract expires in August, but the new iPhone seals the deal for Apple.

“Less thinking. More testing.” — May 22, 2010

“Less thinking. More testing.”

(__Attention conservation notice:__ nearly 2,000 words that start with test-driven development in software, skip along to application prototyping, then take a big leap to an attack on libertarianism.)

I’ve meant for a long while to write about Kent Beck’s [book: Test-Driven Development By Example], but — as you can see from this blog in general — I’ve had a lot less time to write recently. The book hasn’t yet changed my life, but it should, and it will. And I think the idea has far broader applicability than just software development, which I’ll try to get into below.

The basic premise of test-driven development is to write your tests before you write your code. The structure is like so:

1. Write a test asserting something about the code that you’ve not yet written. For instance, if you intend to write code computing the number of days between two dates, you might make a few assertions: that the number of days between a date and itself should be zero; that the number of days between March 1 of 2009 and March 1 of 2012 is one day more than three times 365; and so forth. The more assertions you can make about this as-yet-unwritten code, the better.
2. Since you’ve not written the code, the assertions will fail. In fact, the code won’t even compile.
3. Write the simplest version of the code that will pass the test. Write this as quickly as possible.
4. Tests pass! Joy!
5. Refactor.
6. Having accomplished the task you were on, continue to bigger and better things: return to step 1, and repeat until you’ve achieved whatever you were trying to do.

The advantages of unit tests are well known, and they are best understood if you know what their absence is like. If you’re like me, you’ve worked on code bases that had absolutely no tests, and the experience is terrifying. You can’t change one bit of code without worrying that you’ve broken something in some far-off part of the code. If you’re like me, this experience makes work actually unbearable: the more code you dip your fingers into, the wider the potential swath of destruction. Again if you’re like me, this can turn your stomach into a big ulcer which actually makes it hard to sleep. On the wrong day, it can lead you to excessive caution, which keeps you from doing work. Which is bad and makes your bosses hate you. You want your bosses to love you, don’t you? Of course you do.

Imagine instead that the code is entirely covered by unit tests. (This nirvana is known as “100% code coverage.”) Now, if you change the code, you just run the tests. Do all the tests pass? Joy and rapture! You can keep changing code to your heart’s content. When you break a test, figure out why you broke it, fix it, confirm that the tests now all pass, and move on. Continue to add tests for all the code that you add. Again, if you’re like me, this gives you a feeling of calm and confidence, which makes you work faster, which makes your bosses like you more.

Of course, sometimes your code will break for reasons that you didn’t test against. This is unfortunate but expected. When this happens, add another unit test to guard against the heretofore-unanticipated case. In this way, the unit tests document your knowledge about the particular problem domain. If done well, people should be able to understand your code by reading the unit tests. A unit test can essentially be read as “the code is expected to respond like so when it encounters a world shaped like so.”

Striving for 100% code coverage leads you to write smaller functions, because it’s easier to write unit tests to cover a smaller, more-specialized function. This is a happy side-effect: smaller, more-specialized functions are a good thing, whether or not you’re writing unit tests.

Another way that test-driven development contributes to a fearless coding experience is that — per the title of this post — it encourages you to think less and code more. If you’re like me, you can get stuck inside your own mind, wondering whether the particular path you’re going down will work. The TDD approach is to move past this state of mind as fast as you can, by writing tests. Don’t speculate idly about whether your code will do what you expect; think about how it should respond to known inputs, then write code that responds appropriately to those inputs.

Any number of conclusions might come out of this testing discipline:

* your speculation proved correct; the code works.
* it proved incorrect, and you need to pursue another line of development.
* it proved partly correct, partly incorrect, and you need to course-correct.

This institutionalized course-correction is, I think, the greatest virtue of test-driven development. And it’s why some variant of test-driven development applies in much broader contexts.

Take one context that’s only slightly broader, namely the process of building an entire app from the ground up. We recently did this at work; the task for one of our sprints was to build a prototype of an app. I wasn’t entirely sure going in what “prototype” meant, but now I think I get it. A few important aspects of prototyping stand out for me from this experience:

1. Build something with a terrible user interface, but the broad rough structure of what we think users will want. Explicitly *do not* make it pretty. If you make it pretty, the users who beta-test it and the designers who make it pretty will focus on the visual details rather than on how it functions. In order to keep their eyes on the prize, write a computer program that is only one or two steps up from a sketch on paper.
2. Write a prototype that exercises the necessary backend code, like databases and API calls and so forth. You might find that your API calls take too long to return, and therefore can’t fit into the application that you’re building. Or you might find that your database doesn’t have indexes where it needs them. Or you might find that you need to restructure the entire app to work with APIs and backend databases that are beyond your control.
3. By putting the code in front of users, you might find that they don’t actually want the program that you envisioned. Or they want it, but they’d *really* want it if you just added a little something extra.

When I first mentioned “less thinking, more testing” to my friends, one friend raised the absolutely valid point that this approach doesn’t rid you of the need for design. That’s absolutely true. First of all, you need actual hypotheses to put in front of users; you can’t put a blank piece of paper in front of them and ask them to draw what they want. You need to focus their attention in a particular direction. When you’re building the backend architecture, you likewise aren’t starting from a blank slate.

But the point is *course-correction as quickly and as often as possible*. That fundamental message is why I think test-driven development and rough prototyping are applicable far beyond software development. It’s more than a little applicable to ideologies. Take, for instance, the recent kerfuffle over Rand Paul’s opposition to the Civil Rights Act. Matt Yglesias pulls on this thread and attacks the very idea that adhering to consistent principles even when they drive you off a cliff is somehow admirable. I completely agree. Consistency is a fine virtue, and a belief system that’s not consistent can’t be entirely true. But there are many virtues apart from consistency; among the greatest is non-insanity.

To keep your beliefs from veering off into the insane, you need to course-correct as often as possible. We’re not playing some game where the purpose is to start with reasonable-seeming principles and derive hilarious conclusions that clearly make no sense; the point is to build ethics that work for you in conducting your daily life, and to build policies that work for your countrymen. If it looks like you’ve built a chain of reasoning that led from obvious-seeming premises to ridiculous conclusions, you probably need to reconsider the premises. If Goldwater believed that opposition to civil-rights legislation was obviously right, then the principles were so much chin music to defend conclusions that he would have come to anyway; if, on the other hand, he adhered to those conclusions with great regret because he believed that the principles were correct, then he should have taken stock of how he arrived at his conclusions.

It’s not exactly new wisdom that blind adherence to principles can lead you astray. Holmes said as much in [book: The Common Law], whose opening words are

> The object of this book is to present a general view of the Common Law. To accomplish the task, other tools are needed besides logic. It is something to show that the consistency of a system requires a particular result, but it is not all. The life of the law has not been logic: it has been experience. … The law embodies the story of a nation’s development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics.

Earlier in my life, I thought it was very important which ethical principles one had. I thought consistency was the most important thing. (Blame college; maybe blame academia more generally? College is a great place to pick up ideas at an impressionable age and run with them until everyone smirks at you with the amusement of the non-cloistered.) I still think it’s important, but there are many more important things. Constantly clinging to reality is among the most important. Constant course-correction, with input from the real world at every branch, is extremely helpful at keeping you moored in reality.

This, by the way, is why I’ve never been able to get far into Nozick’s [book: Anarchy, State, and Utopia]. It always feels like a shell game: “Let’s suppose you believe some premises about liberty and side constraints … We’ll just shuffle around the shells a little bit and … here we go: clearly you must believe this thing about government non-intervention.” I no longer trust long chains of reasoning from seemingly self-evident low-level principles. I want principles nowadays that are closer to daily life, whence the jump from them to concrete action is smaller and less fraught with the potential for insanity.

That said, of course there’s room to move in the other direction: I tell you that I believe X and Y, and you reply that X and Y are only instances of Z. (With some fear of stretching an analogy too far, this is akin to refactoring.) So now I believe Z instead, which is a generalized version of X and Y. Or maybe you ask me whether I believe A; if I say yes, you point out that A contradicts X. If I agree with you that there’s a contradiction, I now have a choice: continue to believe X, or change my belief in X. I believe the endpoint of this process is what Rawls called reflective equilibrium.

And of course when you course-correct frequently, you still need principles. Principles help determine the path that you start down, and help determine which experiments to perform to correct your course. But the goal should be to experiment at every possible step.

Am I just calling for more use of the scientific method here? I think I am. It works at small scales like software, and it also works at large scales like philosophy.

Tom Slee slaps down some new-economy nonsense — May 1, 2010

Tom Slee slaps down some new-economy nonsense

Tom Slee is a fabulous author; his [book: No One Makes You Shop At Wal-Mart] is one of my favorite books of the last five years. He’s made a second career (which maybe he’ll turn into a book? I’d buy it) out of dispensing with a lot of new-economy nonsense; his latest salvo is against Clay Shirky. Shirky is a fine, provocative writer, but his love of technology leads him to some silly techno-idealism. Slee looks at the abstract structure of a Shirkian argument; it turns out that a lot of “Web 2.0”-inspired authors follow the same structure.

Slee’s series of responses to Chris Anderson’s “long tail” idea are in the same vein. I’d use the word “contrarian” for these, if I didn’t think that word had been sullied by Christopher Hitchens, and if I didn’t think it implied opposition for the sake of opposition. In a lot of the Web 2.0 nonsense, it would be hugely instructive for the Web 2.0 folks to be forced to argue the contrary of whatever it is that they’re claiming at that moment. Argue that “social media” won’t actually have any world-changing effect on anything. Argue that blockbusters will have just as much of a place in the 21st-century economy as they did in the 20th-century economy, and that those folks living on the long tail will have just as much trouble making a living as they ever have. Argue that the structure of a lot of economic processes is a classical arms race: my side adopts some new technology and temporarily moves ahead, but eventually your side does the same thing; the net effect is a wash. (You could have predicted, similarly, that even if sabermetrics was a valuable technology and initially helped teams with small budgets, that its value to those teams would eventually disappear as the Yankees caught wind of it.) Argue that while the Internet makes distributed teams more feasible and reduces transaction costs, and so might temporarily help small businesses, it will eventually be adopted by large companies as well. And so forth. I’d love to see the Shirkys of the world forced to write books arguing these positions with all the passion that they apply to their chosen viewpoints.

The trouble may be that the incentives are all wrong. It is much sexier to argue that some new flavor of the month will change the world than to argue that the world of the future will look a lot like the world of today. When everyone around you is swept up in talk of Twitter and FourSquare, you’re likely to do better if you assert with everyone else that these are the waves of the future. You will be invited to conferences; you will be asked to write books. Likewise, newspapers and policymakers will do a lot better if they talk about something sexy like terrorism than if they pledge to end 600,000 annual heart-disease deaths. The old and stable and known is boring, though it may well be true.

Perhaps Slee and I should team up and write a book about all this nonsense. Or maybe we should both sell out and write a book about how FourSquare Will Change Everything. It will sell reasonably well and earn us both a decent middle-class income, which we can then convert into a second book entitled [book: Ha Ha We Were Just Kidding, Or: Your Latest Technology Idea Sucks].

Lewis Carroll, Alice’s Adventures in Wonderland and Through the Looking-Glass — April 3, 2010

Lewis Carroll, Alice’s Adventures in Wonderland and Through the Looking-Glass

Cover of Alice's Adventures in Wonderland and Through the Looking-Glass, packaged together in one book published by Oxford. Cover is a painting of Alice looking at the queen and king of some suit that I forget; queen and king of cards, in any case.

These are books that I had inexplicably not read. I don’t know how that even happened. I certainly knew them by reputation, and I’d certainly known where many of the cultural allusions — the Red Queen running with all her might and yet still not moving, impossible things before breakfast, the Jabberwocky, etc., etc. — came from. There’s still lots of cleverness that I didn’t know about, like this bit:

> ‘[…] The name of the song is called “HADDOCKS’ EYES.”’
>
> ‘Oh, that’s the name of the song, is it?’ Alice said, trying to feel interested.
>
> ‘No, you don’t understand,’ the Knight said, looking a little vexed. ‘That’s what the name is CALLED. The name really IS “THE AGED AGED MAN.”’
>
> ‘Then I ought to have said “That’s what the SONG is called”?’ Alice corrected herself.
>
> ‘No, you oughtn’t: that’s quite another thing! The SONG is called “WAYS AND MEANS”: but that’s only what it’s CALLED, you know!’
>
> ‘Well, what IS the song, then?’ said Alice, who was by this time completely bewildered.
>
> ‘I was coming to that,’ the Knight said. ‘The song really IS “A-SITTING ON A GATE”: and the tune’s my own invention.’

This is a terrific introduction to the use-mention distinction, I gather. Though normally when people talk about the use-mention distinction, they’ll differentiate between “Mount Everest” (a string containing 13 characters) and Mount Everest, a mountain in Asia. Here Carroll is making the split more decisive: the use is one thing (the song’s name), and the mention is entirely another (a string that doesn’t look at all like a representation of the song, if that makes sense). It’s clever, and not a little bewildering. I’d love to hear how much of it little Alice Liddell could process when Carroll was spinning his tales for her.

Nothing really *happens* in Carroll’s books. Alice ambles about, weird things happen, animals speak paradoxically to her, and that’s that. These are still very fun reads, though.

I love the packaging on the particular Oxford World’s Classics edition of [book: Alice in Wonderland] and [book: Through the Looking-Glass] that I read, but I think the endnotes are useless. Most often they point you to a reference that Carroll *may* have been making in the text, but most of the time I strongly doubted that he *was* making such a reference. Even if he had been, the endnotes pull you away from the text to make a point that does absolutely nothing to improve your understanding. It’s not as though [book: Alice in Wonderland] needs an exegesis on the level of [book: Ulysses]. So I’d advise buying this lovely edition, then ignoring the endnotes altogether.

Haruki Murakami, Dance Dance Dance — March 6, 2010

Haruki Murakami, Dance Dance Dance

Cover of Dance Dance Dance: at the top of the cover, a seductive Japanese girl's eyes, staring at the reader; in the middle 2/3 or 3/4, an apartment complex viewed at night with the title overlaid; at the bottom, the author's name and a blurb

Now *that* is what I’m *talking* about: a classic Murakami novel, with

* a disaffected narrator fumbling vaguely through his days
* the wall between our world and a much darker one — a world which *may be within our own souls OMG* — falling away
* some sex (though less than you might expect from Murakami)
* a semi-pulpy story that pulls you along effortlessly

In [book: The Wind-Up Bird Chronicle], the dude’s cat disappears, *and then shit gets real*. (Oh, and by the way: he spends a lot of time at the bottom of a well. So there’s that.)

In [book: Kafka on the Shore], cats start disappearing from this one neighborhood, and a retarded guy with special powers chats up the cats’ cat friends, *and then shit gets real*.

In [book: After the Quake], people’s empty lives contribute in some undefined way to the Kobe earthquake; at the very end we see that our characters are tentatively bringing themselves out into the world again, becoming the sort of people they know they should be.

In [book: After Dark], the boundary between the Dark World and this one is paper thin, and sometimes disappears altogether. Sometimes that boundary exists on a physical device that gets left in a convenience-store refrigerator (for instance).

So you see a pattern forming. There may have been a time in my life when I would have derisively called Murakami “formulaic,” but that word is actually an insult to the power of a good formula. Philip Roth succeeded for a good forty years by adhering to the Jewish-author-with-inexplicable-sex-appeal-exploring-masculinity formula. (All right, that’s kind of a complicated formula. Roth was a [foreign: sui generis] author.) Jazz music has evolved from one formula to another. Country music, from what I can tell, has been the same formula interpreted in different ways for half a century (cheatin’ woman, lonesome highway, etc.). I’ve listened to a lot of Frank Sinatra, and it was all — down to particular trills — the same formula. But *man* was that a good formula. The formula is essentially arbitrary; I have a book in queue — Georges Perec’s [book: A Void] — whose guiding premise, if I’m not mistaken, is that all such formulas are arbitrary, so why not pick one that’s *truly* arbitrary (don’t use the letter ‘e’ at all) and see what you can do within it?

As ever, it’s what you *do* with the formula. Murakami knows how to fold, spindle, mutilate, and combine genres like no one else. [book: Dance Dance Dance] should maybe be called a supernatural mystery novel, in a way that really only makes sense if you’ve read a bunch of other Murakami. Our narrator — really a pretty excellent guy, which you can’t often say about Murakami protagonists — wakes up more than a little freaked out one day after an old lover calls to him in his sleep. She haunts him, but from where? From beyond the grave? Is she dead? Is that her ghost?

Anyway, this woman — Kiki is her name — is calling to him, and he knows exactly what he has to do: he must return to the hotel where he and Kiki spent the formative months of their relationship. It is a creepy, bizarre hotel, where everything is just a bit askew. I couldn’t help picturing an old ramshackle house, lightning flickering behind it during deepest nighttime. Our narrator returns to the hotel to find that it’s been replaced with a gleaming skyscraper of a hotel whose name is the same as the old one it replaced. Why would they bother to keep the name the same?

Here we spin off in a few directions. First of all, we run down the “I don’t know what this monkey business is, gumshoe, but I’m sure as George Peppard gonna find out what happened to that old hotel” direction. Might there be a [foreign: yakuza] connection? *Only time will tell.*

(Actually time won’t tell: it’s a Murakami novel, and I still have no idea by the end whether the mafia were involved. Just throwing that out there so that I don’t mis-set expectations.)

Second, our narrator sees a beautiful 13-year-old girl sitting in the bar with her mother. Hijinks ensue. Turns out that the mother is brilliant but spacey, and leaves her daughter all over the world while she — the mother — hops on planes to Kathmandu or wherever. The daughter is left to fend for herself. She is, as you might expect, world-weary and vulnerable and … well … motherless.

The friendship that develops between the narrator and this girl is the most convincing character development I’ve found in Murakami. He feels tenderly toward her — sort of fatherly, but more like a wise older friend. She’s a classic teenager: sullen, believing everyone else is *so lame*, smacking gum loudly and wearing her headphones whenever everyone else just gets *too lame for words*. Their relationship is captivating, perhaps because I put myself back in the mode of a teenage boy who absolutely would have fallen in love with this gorgeous girl; back in those days, I wanted so badly for the uninterested girl to *be* interested. The narrator puts himself in that mode, too. Somehow it’s never creepy: our 34-year-old narrator doesn’t lust after a 13-year-old girl at all. They’re Butch Cassidy and the Sundance Kid, with a window into the underworld.

We start out wanting to know where Kiki is, but we end up wanting to know so much besides. The old hotel, for instance, seems to have been reincarnated on the 16th floor of the new hotel, but you can’t always see it; what’s *that* about? To take another example, our narrator has transcendent — may I say *otherworldly?* — sex with a beautiful, high-priced prostitute (oh Haruki, I can’t quit you), who subsequently ends up dead, strangled with a black stocking. How does our narrator — a journalistic hack, who describes his job as “shoveling cultural snow” — afford such an exclusive call girl? Well, turns out he’s reconnected with a high-school friend of his who’s become a big movie star. (I pictured my friend Ben in this role — Ben of the million-watt smile and charm to match.) They’re hanging out, drinking, when the movie star suggests that they “get a couple of girls.” Yadda yadda yadda, so on and so forth, our narrator and the escort are washing up together. Soon enough she’s dead. What’s *that* about?

Often in these sorts of situations, Murakami would take the lazy route out: put some balls in the air, then walk off to see what’s on TV. Here he finishes the juggling routine. The result is an uncannily gripping story that’s also emotionally affecting. I can’t recommend it strongly enough.

I am not alone in loathing Richard Epstein’s book —

I am not alone in loathing Richard Epstein’s book

apparently. Thanks to my friend Paul for passing along that link.

I think a rather enormous swath of libertarian arguments deserves the following response: you think aggregate economic output is important, but you care much less about the distribution of society’s wealth than I do. You seem to be concerned about spending money on, say, health care, and you make a lot of noise about how society can’t afford this or that. But when you drill down from the abstract principle to the details, it all falls apart: society *can* afford to give free vaccinations to poor American children, or anti-malarial netting to African villages, or free lunches to every American schoolchild. Everyone knows we can afford this, because we afford spectacular amounts of waste on lots of things that do nothing to improve the lot of humankind.

Do I want to play the self-interest game here? No, I don’t, but I will for a moment. I could make a plausible argument, occupying just as many pages as Epstein’s [book: Mortal Peril], that if we help out the poor in this country, we’ll make life better even for the wealthy. Poor people spend a larger fraction of their income than the wealthy do. Give a poor person an extra dollar, and more of that dollar will go back into the economy than if a wealthy person gets that dollar. Help poor countries build sustainable infrastructure, and maybe they’ll be able to start buying cars — our cars! — rather than subsistence goods. I could bring in bits of the theory behind microfinance: the increased productivity from loaning someone a sewing machine, when all she’s previously had is a needle and thread, is much greater than the increase when you step up from a fleet of sewing machines to an industrial sewing operation. So investment in the poor may, in principle anyway, be better for investors than investment in wealthier folks. (One would have to take lots of detours along the way to explain why Citibank isn’t in a rush to fund sewing operations in remote Indian villages. I hinted in that direction in my review of that microfinance book.)

You know the counterarguments here just as well as I do: money to poor people will just go to drink and drugs; money to poor countries will just go to feather the nests of corrupt warlords. I could fill up my notional book responding to these arguments. I could fill it up with other arguments besides; I might, for instance, take up the thread that Jacob Hacker started in [book: The Great Risk Shift]: in the decades after World War II, corporations and the government bore more risk on our behalf, and the result was the greatest economic expansion the world has ever seen; in the last three decades, Americans have had to handle more of that risk on their own, which makes them frightened, which makes them hoard money and avoid things that capitalist economies are supposed to treasure, like starting new businesses. I might pull in some of Paul Krugman’s movement-defining [book: Conscience of a Liberal], and place the blame for this risk shift on the decline of unions. Then I might bow in the direction of Tom Geoghegan’s [book: Which Side Are You On?], exploring the causes and consequences of this union decline (hint: the decline was not accidental, and it’s not irreversible, though things certainly don’t look good for unions).

The general arc of this notional book might be that people like Epstein focus far too much on what individual economic actors do, too little on the economic institutions that make their actions possible, and too little on how interdependent our economic lives are. I might bring in one of my favorite books of recent years, Tom Slee’s (ironically titled, if it’s not clear) [book: No One Makes You Shop At Wal-Mart], which argues these points more clearly than anything else I’ve read. You can’t afford health insurance? Neither can a lot of your countrymen; insurance suffers from a well-known death spiral that makes this entirely predictable. It’s not safe for your kids to walk to school? It may well be because other parents decided it wasn’t safe for *their* kids to walk to school, so they drove their kids to school instead — thereby leaving unprotected the kids who still chose to walk.

In the face of this economic picture that suggests the need for coordinated action, all Epstein and his libertarian ilk can give us is the purported Ultimate Justice of the contract that makes us all equal before the law. “The law, in its majestic equality,” wrote Anatole France, “forbids the rich as well as the poor to sleep under bridges, to beg in the streets, and to steal bread.”

This notional book of mine, based on certain core beliefs I hold about our responsibility to the least fortunate, would have just as much inherent plausibility as Richard Epstein’s. I’m not convinced that either his book or mine would sway anyone. I suspect that you either come at the world thinking that people get what they deserve (hence that their suffering is their own fault), and that no one else can hand you bootstraps to pull yourself up by; or that there’s a great measure of chance in everything we do, and that it’s the job of a just society to insulate people from risks beyond their control. I fall squarely into the latter camp. When phrased that way, I think most Americans would come along with me. Maybe this book should be something more than notional.

There’s my Murakami — February 28, 2010

There’s my Murakami

As part of my 2K10 Re-Engage With Reading! program, I’m reading a few novels in a row to get the reading muscles back from atrophy. [book: The Sea], [book: Norwegian Wood], and now [book: Dance Dance Dance]. This one’s getting off to a great start: part Murakami Weird, part Murakami Disaffected Narrator, part supernatural detective novel. Just great.

A couple quotes that have made me laugh so far (and really, I don’t laugh out loud at books very often at all):

First, about the narrator’s schoolmate, who’s now a movie star:

> Although, come to think of it, in real life the guy had been pretty much like the parts he played. He was nice enough, but who actually knew anything about him? We were in the same class during junior high school, and once we shared the same lab table on a science experiment. We were friendly. But even back then he was too nice to be real — just like in his movies. Girls were already falling all over him. If he talked to them, their eyes would go moist. If he lit a Bunsen burner with those graceful hands of his, it was like the opening ceremony of the Olympics.

His mind is wandering off, constructing a movie scene in Egypt for some reason (after he’s, for some similarly unknown reason, told a woman about swim clubs in ancient Egypt):

> Cut to a spectacle scene on the order of [film: The Bathing Beauty] or [film: The King and I]. My classmate and the princes and princesses in a grand synchronized swim routine in celebration of the Pharaoh’s birthday. The Pharaoh is overjoyed, which further boosts the youth’s stock. Still, he doesn’t let it go to his head. He’s a paragon of humility. He smiles the same as ever, and pisses elegantly. When a lady-in-waiting slips under the covers with him, he spends a full one hour on foreplay, brings her all the way to climax, then afterward strokes her hair and says, “You’re the best.” He’s a good guy.

Oh, and right now — page 84 — he’s talking with a guy (incorporeal being? I don’t know) who’s dressed in a sheep outfit. The narrator refers to him as “Sheep Man.” Soooo … that happened.