Jonah Lehrer in The New Yorker (Dec. 13, 2010) carefully investigates reasons why “the truth wears off” as scientific findings, vivid at one moment, gradually cease to be so demonstrable. He looks particularly to “publication bias” and “selective reporting” and “sheer randomness” as explanations. All have force, and perhaps they account for most of the problem that Lehrer examines with the aid of Professor Jonathan Schooler.
But I’m not sure they fully account for the example of Dr. Rhine’s extrasensory perception research, which the article offers as one of its striking instances of data decline. (These factors may not entirely explain Prof. Schooler’s own data problems either, but I won’t pursue that here.) It seems from Lehrer’s account that Rhine’s data collapsed – that is, the undergraduate who appeared to have phenomenal E.S.P. powers stopped producing his feats of perception – “just as he [Rhine] began to believe in the possibility of extrasensory perception.” That strongly suggests that Rhine’s earlier data weren’t somehow biased by his own belief in E.S.P. Lehrer reports that Rhine prepared papers for publication that would have reported the undergraduate’s remarkable achievements; it’s not clear from the article whether or when those were published, but at any rate no publication bias seems to have prevented Rhine himself from discerning the deterioration of his data.
It remains possible, certainly, that randomness is the real culprit – anything can happen, and it might just have been sheer randomness that this student got lucky, rather than psychic, for a time. But the problem with “randomness” is that it functions as a wild card, explaining only by declaring something inexplicable. What if, instead, the data pointed to something real and not random at all? For example, suppose that people do sometimes have psychic abilities, but not very often or very dependably. Perhaps psychic abilities burn out quickly – so that what happened to Dr. Rhine was that he observed the actual tendency of human E.S.P. to burn out (and not just in one undergraduate; Lehrer reports that the same fall-off took place “in nearly every case” where a subject initially showed E.S.P. ability).
So perhaps most instances of the truth wearing off reflect various forms of scientific error or sheer, random mystery. But if the data are measuring a phenomenon that burns out, then the data quite straightforwardly reflect not error but actual reality. It seems important to keep in mind the possibility that the data aren’t wrong, while also remaining sensitive to the many reasons why they might be.
Lehrer’s article happened to precede by just a few days another moment of media attention to E.S.P. research: Benedict Carey, “Journal’s Article on ESP Is Expected to Prompt Outrage,” New York Times, Jan. 5, 2011. Whether the article in The Journal of Personality and Social Psychology is well-founded or not I cannot say, though it’s impossible not to like the idea that people can ever so slightly predict the future when it involves their viewing erotic photographs. But it is striking that at least one “longtime critic of ESP research,” quoted in the Times article, felt that the publication of the study was “craziness, pure craziness. I can’t believe a major journal is allowing this work in.” This is, of course, a call for “publication bias.” Personally, I think the psychology journal’s decision to publish the piece along with a rebuttal is a much better response than refusing to publish, precisely because it opens the issues up to examination and debate in the marketplace of ideas. But I expect that in almost every field there are propositions that simply cannot be contradicted within the boundaries of the field itself. We are in favor of freedom, but not friends of error, and policing error always risks sliding into restricting freedom.