(__Attention-conservation notice__: 1,600 words on why it may be a good idea not to read cranks. Also some words on academic orthodoxy. Scattered thoughts on building institutions for seeking the truth.)

My friend, who blogs pseudonymously and will hereafter be known as “PB” (for Pseudonymous Blogger), takes me to task for suggesting the existence of cranks. (Note that I didn’t dismiss any specific people out of hand.) PB has long invited me to read the heterodox folks that he follows, including some “with a large collection of old John Birch Society literature.” As my readers know, I tend to read more from the academic wing; PB attacks academia like so:

> Academics in the social sciences do not get fired or demoted if they get things wrong. They do not get additional grad students if they are right. The grad-school and peer-review processes reward one thing – conforming to the current intellectual fashions.

I believe I’ve heard about other places where leaders reward servants less for their objective correctness and more for hewing to what the leaders believe. I believe that’s called *every single human institution ever*.

I tease. If the problem is as PB describes, it’s an institutional-incentives problem, and the question is how to build better institutions. Think of the various institutions we have in this world whose purpose is (ostensibly, anyway) to seek out truth. We have juries, to find truth in the legal realm; academia, to find it within more abstract domains; the media, to uncover what our leaders are hiding and to explain policy to the broader public; and many others. Every one of them is guilty in some way of confirmation bias. Cass Sunstein, who’s now the head of OIRA, is famous for documenting how groups of people who believe the same things and speak only with one another are likely to arrive at a more extreme conclusion than if they had some dissenters in their midst. He’s also famous for taking this line of research and using it to suggest that the Internet needs “general-interest intermediaries”, like the [newspaper: New York Times] and the [newspaper: Wall Street Journal], to help soothe society’s emergent extremism. Of course it’s an open question whether those general-interest intermediaries serve the purpose he thinks they do.

PB focuses on a couple of successes from his heterodox sources, but recall the proverb about stopped clocks. The question is how well an institution works overall. We statisticians talk about “type I” and “type II” errors, or “false positives” and “false negatives,” respectively. A false positive, in the context PB and I are discussing, is accepting something as true when it’s actually false; a false negative is rejecting something as false when it’s actually true. Suppose, for instance, that I adopt a decision rule never to read anything written by someone who’s been a member of the KKK. I may well reject some smart writers because my rule is too crude; those would be false negatives. The basis for the rule is that I expect most of what a KKK member writes to be false; by rejecting KKK writers out of hand, I’m trying to minimize my rate of false positives (again, accepting something as true when it’s in fact false). There are costs associated with false positives and costs associated with false negatives. To compute the total expected cost of a decision rule, multiply the cost of a false positive by its probability and add the cost of a false negative times its probability. Going along these lines eventually gets you to the Neyman–Pearson lemma, which is fundamental to statistics.
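
To make that arithmetic concrete, here’s a minimal sketch in Python. The probabilities and costs are invented purely for illustration (they come from neither PB nor any data); the point is only how the two kinds of error trade off under a decision rule.

```python
# Toy illustration of the expected-cost calculation above.
# All numbers are invented for the sake of the example.

def expected_cost(p_false_positive, cost_false_positive,
                  p_false_negative, cost_false_negative):
    """Expected cost of a decision rule:
    (cost of a false positive x its probability) +
    (cost of a false negative x its probability)."""
    return (p_false_positive * cost_false_positive
            + p_false_negative * cost_false_negative)

# Hypothetical rule: "never read writing by a former KKK member."
# Say it wrongly accepts a falsehood 2% of the time (cost 10)
# and wrongly rejects a truth 20% of the time (cost 3).
strict_rule = expected_cost(0.02, 10.0, 0.20, 3.0)   # 0.80

# Hypothetical rule: "read everyone and sort it out myself."
# Fewer missed truths, but more falsehoods slip through.
open_rule = expected_cost(0.15, 10.0, 0.05, 3.0)     # 1.65

print(f"strict rule: {strict_rule:.2f}, open rule: {open_rule:.2f}")
```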

Rejecting too many people as cranks may give you a high rate of type-II errors: you may reject some good people out of hand. If PB’s right, mislabeling people as cranks is *also* likely to give you too high a rate of type-I errors: you’re just confirming the conventional wisdom, which has a terrible track record. If I’m reading PB right, then, his claim is that academia’s error rate is terrible in both directions, and hence that academia is “dominated,” in the game-theoretic sense, by some alternative. My response would be twofold: first, is there an institution that balances type-I and type-II errors better than academia? This isn’t a rhetorical question; if there is such an institution, I’d like to find it. But the point is not to focus on isolated instances where someone predicted something better than someone else; the point is to look at overall error rates in both directions. Second, I’d ask PB to suggest institutional improvements that would make academia — or juries, or the media, or pick-your-favorite-institution — do its job better.
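
To flesh out the “dominated” language a bit: one rule (or institution) dominates another only if it’s no worse on *both* error rates and strictly better on at least one. A toy check, again with invented numbers, of why a single celebrated success doesn’t establish domination:

```python
# Toy check of "domination": b dominates a if b's false-positive and
# false-negative rates are both no worse than a's, and at least one is
# strictly better. The rates below are invented for illustration.

def dominates(b, a):
    """True if rule/institution b dominates a on (FP rate, FN rate)."""
    b_fp, b_fn = b
    a_fp, a_fn = a
    return b_fp <= a_fp and b_fn <= a_fn and (b_fp < a_fp or b_fn < a_fn)

academia   = (0.30, 0.25)   # hypothetical (false-positive, false-negative) rates
heterodoxy = (0.25, 0.40)   # better on one axis, worse on the other

print(dominates(heterodoxy, academia))  # False: neither dominates the other,
                                        # so you have to weigh the costs, as above
```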

Based on what PB wrote, I suspect we’d both look for changes in the incentive structure. If it’s empirically true that academia hires on the basis of confirming what the incumbents already believe, how do we change that? To pick one example out of the air: is there any way to make academics put their money where their mouths are? The examples PB cites from macroeconomics, for instance … is there any way to make Ben Bernanke suffer financially if the economy goes south and benefit if GDP rises? You can look to what corporations do — stock options, for instance — to put some skin in the game, but we also know all the sorts of gaming that go along with those incentives. Unless you structure them properly, you have the epidemic of “I’ll Be Gone, You’ll Be Gone.” Structuring incentives is an incredibly nontrivial problem. To pick but one book on the subject out of the air, take a look at [book: Managerial Dilemmas: The Political Economy of Hierarchy]. Or, from another angle, read Herbert Simon’s paper on “The Proverbs of Administration”, wherein Simon notes that most every managerial proverb has an equal and opposite proverb that gets thrown around just as confidently.

So in short, I’m not at all confident in my ability to construct incentives that reward the right behavior within institutions, and I certainly don’t feel as though, if I were made Dictator of Academia, I could build better incentives than those that are already in place.

Also, I’m fairly convinced that PB is just empirically wrong about the ideological homogeneity of academia. I think he may be confusing what happens *within one institution* with what happens *in the academy as a whole*. Does PB really contend that the University of Chicago and Princeton University are hiring the same economists? No, of course not: they argue bitterly. Just look at Princeton’s Nobel laureate Paul Krugman denouncing Chicago-school Nobel laureate Ed Prescott. Or look at a good century of arguing in statistics over whether we ought to be Bayesians or frequentists. And that’s in statistics, where empirical and mathematical confirmation are, at least in principle, much more readily available than in the social sciences or the humanities. I’m curious what PB’s standard for homogeneity is. Are academic disciplines homogeneous whenever they avoid pistols at dawn?

If academia is argumentative, it may well be so because the incentives encourage it. Judge Richard Posner, in [book: Public Intellectuals: A Study of Decline], argues that academics have every incentive to be contentious — at least within the public sphere — because rejecting the status quo gets them attention. I see very few books entitled [book: Most Everything You Know About The World Is Basically Correct].

All of that said, PB and I would surely agree that the conventional wisdom often gets things disastrously wrong. To take but one example, you can look at conventional views of market regulation. From the New Deal through World War II and up until the 1960s, the conventional wisdom — which took canonical form, perhaps, in the great Paul Samuelson’s 1948 textbook — was that the job of economic policy was to steer markets toward desired ends. Eventually the conventional wisdom switched to the idea that markets were best left on their own. You can argue both sides of this — and, importantly in this context, academia *has* argued both sides of it, continuously, for half a century. What made the switch happen? Well, it’s complicated, but surely a part of it is that it’s convenient for businessmen to argue that they’re best left unregulated. They were going to argue this anyway; academic economics just offered them some tools. But a whole set of entirely orthodox economic results says something quite different: what individual actors do rationally on their own can lead to a disastrous, unwanted result in the aggregate. You can find confirmation of this idea all over orthodox economics (see Bowles, [book: Microeconomics: Behavior, Institutions, and Evolution]; and Schelling, [book: Micromotives and Macrobehavior]). The problem probably isn’t academic rejection of heterodoxy; it’s that economics can be used as a tool of ideology in a more direct way than mathematics can, so it *is* used as such a tool.
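
For readers who haven’t seen that kind of result, here’s a minimal sketch of the familiar prisoner’s-dilemma logic, with invented payoffs: each actor’s individually rational move is to free-ride, whatever the other does, yet when everyone free-rides the outcome is worse for everyone than mutual contribution.

```python
# Toy two-player "contribute or free-ride" game, in the spirit of the
# coordination-failure results cited above (payoffs invented).

payoffs = {  # (my choice, other's choice) -> my payoff
    ("contribute", "contribute"): 3,
    ("contribute", "free-ride"):  0,
    ("free-ride",  "contribute"): 4,
    ("free-ride",  "free-ride"):  1,
}

def best_response(other_choice):
    """The choice that maximizes my payoff, given the other player's choice."""
    return max(["contribute", "free-ride"],
               key=lambda mine: payoffs[(mine, other_choice)])

# Free-riding is individually rational either way...
print(best_response("contribute"), best_response("free-ride"))  # free-ride free-ride
# ...but the resulting outcome (1, 1) is worse for both than (3, 3).
```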

Of course PB is right that there were big glaring warning signs that we were in an unsustainable bubble. Dean Baker flagged a lot of these in [book: Plunder and Blunder]. Lots of very intelligent keepers of the conventional wisdom, like Bernanke and Greenspan (Ph.D.s both), who should have known better, got it wrong. All this tells me is that, when the economy’s booming and lots of people are making money, it’s very hard to be the guy who (as the conventional saying goes) “takes the punch bowl away.” Now that everything’s collapsed, we’ll have more people honoring the conventional wisdom. The wisdom was always there; the will to follow it was not.

As for PB’s invitation to read along with him on one or more topics: it’s a generous offer, but take a look at how much other stuff is either in my queue or sitting on my floor, tempting me. Add to that a chapter-by-chapter read of Adam Smith, a heretofore-unannounced chapter-by-chapter read of Gérard Debreu’s [book: Theory of Value], and a couple bits of big news that I’m waiting to fully ferment before I mention them here; the result is that I don’t have the time to read in what sound like fascinating areas. But I appreciate the offer.