In response to my post about out-of-body experiences, my skeptical friend Jon sent me a link to a wonderful article just posted on the Scientific American website, by Marina Krakovsky, called "Losing Your Religion: Analytic Thinking Can Undermine Belief."
The article describes research by two University of British Columbia psychologists, Will Gervais and Ara Norenzayan, which indicates both that analytic thinking correlates with unbelief and that it "actually causes disbelief." For example, experimental subjects asked to answer test questions about religious belief that were printed in a hard-to-read font -- presumably requiring analytic effort just to read them -- expressed less religious belief than those who answered the same questions printed in a regular font.
Even better, experimental subjects who looked at a photo of Rodin's sculpture The Thinker expressed less belief in God in a subsequent test than those who had first looked at a picture of another sculpture, Discobolus, which portrays "an athlete poised to throw a discus." Apparently The Thinker has become, in our culture, "an iconic image of deep reflection" -- just seeing it, another experiment indicated, "improved how well subjects reasoned through logical syllogisms."
The impact of the image of The Thinker is engaging all by itself. Does it make us more likely to think that the natural thing to do is deep thinking, by some mechanism of "framing"? Does it cause us to want to be deep thinkers, out of admiration or guilt? Does it cause us to feel that deep thinking will be fun rather than burdensome? Does it overcome our sense of inability to think this way? Whatever the mechanism, it's nice to see it at work -- and it seems worth thinking (deeply) about how to bottle this effect and use it more often.
But what about the point of these experiments, that analytic thinking and religion don't go well together? Well, actually, that's not precisely what the experiments show. For example, in the sculpture-viewing experiment, apparently the belief scores varied widely, whether the subjects had seen The Thinker or Discobolus. The average belief scores for the two groups were indeed quite different: 61.55 on a 100-point scale (100 presumably being utter faith) for the Discobolus viewers, and 41.42 for the Thinker observers. But 61.55 isn't overwhelmingly high, and 41.42 isn't overwhelmingly low: there seems to have been a lot of belief among those whose analytic inclinations had just been stimulated, and a lot of unbelief among those who received no such impetus.
So it seems more precise to say that analytic thinking tends, somewhat, to undermine religious belief, and I don't have any trouble conceding that proposition. Moreover, if a score of 100 represents a kind of belief in which analytic faculties are essentially turned off, I agree that that's not a life thinking beings should embrace (though moments of religious ecstasy or enlightenment are quite another matter).
But as to the potential argument, a much more sweeping one, that clear analytic thinking compels disbelief, I have two responses. The first is that I'm not certain that human beings altogether emancipated from intuition (the friend of belief, evidently) can be imagined. Intuitive thought processes, and emotionally shaped thought processes, are integral to our mental lives, and if they are valuable for us in the ordinary moments of our lives, it seems possible they're of some use in thinking about the truth of religion as well.
The second is that it's not clear to me that clear thinking does support the conclusion of disbelief. Personally I don't understand the idea of a universe without a beginning or a source, a first mover. I admit that the idea of an unmoved first mover (one of the old characterizations of God, I believe) is paradoxical too, but it seems to me that it is analytically appropriate to infer the existence of something/someone beyond our logic to explain why our logic fails us.
That's a pretty thin rationale for religion, I admit. I'd be much happier to see a miracle (and, by the way, it seems to me that if you did see an actual, straight-out miracle -- the burning bush, or the loaves and fishes -- that would be a very good logical reason for belief). But for that I'm still waiting ....
Friday, April 27, 2012
Near-death experiences, revisited
I'm afraid I was too easily impressed by Mario Beauregard's account of an odd, and seemingly compelling, story of an out-of-body experience of seeing a shoe on an inaccessible hospital ledge. PZ Myers, a biologist, has now responded to Beauregard's post, calling it "yet another piece of evidence ... that Beauregard is a crank." (PZ Myers, Near-death, distorted: Taking aim at a recent Salon story about the science of out-of-body experiences, Salon.com, Apr. 26, 2012. I haven't linked to the Salon posting because it seems to be broken now (April 27), but the original piece is available, under the title The NDE [Near-Death Experience] Illusion, at Myers' blog, Pharyngula.)
Myers is pretty convincing in arguing that Beauregard has assumed without proof that people's recollections of perceptions during periods when their brains were completely out of action really are recollections -- rather than after-the-fact mental constructs of the sort we create all the time and call "recollections." That wouldn't account for the shoe case I was impressed by, but Myers finds that unpersuasive as well. In fact, he writes, this story "has been totally demolished," and he links to a critique of it at a blog called "The Secular Web: a drop of reason in a pool of confusion."
I think the problems with this story, as delineated at The Secular Web, really are too big to get around -- assuming, of course, that the evidence in the debunking account is accurate. The social worker who reported it did not actually do so (at least in public) till seven years later. The patient who supposedly had the experience can't be found and may now be dead.
But the debunkers seem to accept that the original report of the story was honest, albeit "subconsciously embellished." Their main point is that the patient could have learned of the shoe's existence from conversations she overheard during her three days in the hospital before the cardiac arrest that precipitated her out-of-body experience. That seems to be right. But it is not exactly "demolition"; there is, after all, no proof that she did hear about the shoe this way.
The debunkers' theory requires that the hypothetical conversation the patient overheard include not only the shoe's existence but also details about its appearance. To be sure, the debunkers imply (reasonably) that those details might have been later embellishments. If they weren't, they might indeed have been discernible and so perhaps spoken of -- but one might wonder how detailed this imagined conversation is likely to have been.
The problems with this story are big enough that I can't think of it as reliable. But has it been demolished? Only in the sense that it's clear that it's not clearly true. But it's not been proven that it's false. The reason that Myers sees it as having been demolished, it seems to me, is that he starts from the premise that all human perception is based on the physical brain and body, and therefore regards any assertion to the contrary as unlikely to be correct. That's a reasonable approach. The problem is that if you start from the opposite premise, that the human mind or soul has an existence beyond the confines of the material body, then you might say that reports of out-of-body experiences are evidence to support your premise unless they're proven to be false. Again, the shoe story hasn't been proven to be false; it's been proven not to be provably true, which isn't the same thing.
Which premise should one start from? The trouble with asking that question is that the whole point of looking at the stories of out-of-body experiences is to try to figure out what the right premise is. You can't pick the premise definitively until you've done the testing, and as of now, at least in this instance, it seems to me that the testing can't produce a definitive result until you've picked a premise. And that means we are, in truth, in doubt.
Sunday, April 22, 2012
Near-death experiences and rationalism
Could there possibly be life after death? A good materialist might feel obliged to say no, since we know so clearly that those who have died are no longer in this world. In fact, materialists might say no with such emphasis that to write a blog post considering another answer is a rash act. Still, that absolute no may be too quick, even for someone without religious faith, since we also know pretty clearly that we don't understand huge aspects of the material world (dark matter and dark energy, for example, which together make up most of the universe), and so it's not really possible to prove that the laws of physics rule out life after death.
But for anyone who would like to believe that life continues after death, but not to do so just as a leap of faith, it would be nice to have some positive evidence. For some time there's been some, in the form of "near-death experiences." There seems to be no doubt that people do have these experiences, and they have a notably heavenly flavor (e.g., being welcomed to the light) -- but perhaps these are just the illusory experiences of brains running out of oxygen.
Or, on the other hand, perhaps not. In a long article posted yesterday on Salon.com, a research psychologist named Mario Beauregard says not. (Near Death, explained: New science is shedding light on what really happens during out-of-body experiences -- with shocking results, Apr. 21, 2012.) Here's one of the points Beauregard makes:
One of the best known of these corroborated veridical NDE [near-death experience] perceptions—perceptions that can be proven to coincide with reality—is the experience of a woman named Maria, whose case was first documented by her critical care social worker, Kimberly Clark.
Maria was a migrant worker who had a severe heart attack while visiting friends in Seattle. She was rushed to Harborview Hospital and placed in the coronary care unit. A few days later, she had a cardiac arrest but was rapidly resuscitated. The following day, Clark visited her. Maria told Clark that during her cardiac arrest she was able to look down from the ceiling and watch the medical team at work on her body. At one point in this experience, said Maria, she found herself outside the hospital and spotted a tennis shoe on the ledge of the north side of the third floor of the building. She was able to provide several details regarding its appearance, including the observations that one of its laces was stuck underneath the heel and that the little toe area was worn. Maria wanted to know for sure whether she had “really” seen that shoe, and she begged Clark to try to locate it.
Quite skeptical, Clark went to the location described by Maria—and found the tennis shoe. From the window of her hospital room, the details that Maria had recounted could not be discerned. But upon retrieval of the shoe, Clark confirmed Maria’s observations. “The only way she could have had such a perspective,” said Clark, “was if she had been floating right outside and at very close range to the tennis shoe. I retrieved the shoe and brought it back to Maria; it was very concrete evidence for me.”
Of course it is possible to be skeptical about this story. The version posted in Salon, and quoted here, is obviously incomplete -- we don't know when it was supposed to have happened, when it was reported, or much at all about the people involved. And we do know that lots of things that lots of people have reported have turned out to be not only false but absurd.
But this is an interesting story. If we merely assume that the participants were not lying, we at once face evidence that is much more difficult to dismiss than the coming-to-the-light category of reports. "Coming to the light" fits smoothly into religious imagery that is powerful in our culture, and so may be discounted as the product of brain distress channelled into familiar cultural channels. But the details of the appearance of a sneaker on a building ledge?
If the participants weren't lying, and if there wasn't some overlooked source for the cardiac patient's knowledge of this shoe, then it does seem that she saw something that she could not have seen with her eyes. That in turn opens up the possibility of existence outside our physical bodies, and the next step -- not the same step, but clearly a related one -- would be the possibility of existence after our physical bodies are gone.
Vast numbers of people believe in life after death -- offhand, I'd guess most people now alive hold this belief. But it is not easy to square this belief with rationalism, and the great appeal of rationalism is that it helps us not to believe in the various false claims that have so often plagued humanity and led to so much suffering. And yet, perhaps, it is important not to make rationalism too much an article of faith itself, and to consider with an open mind the possibility that, to paraphrase Hamlet, there are more things in heaven and on earth than are dreamt of in this philosophy.
Friday, April 6, 2012
Brainstorming -- and why it isn't what it was cracked up to be
It turns out that brainstorming doesn't work.
This favorite approach of countless meetings rests on the generous idea that encouraging people to think freely without fear of criticism will liberate their creativity. The problem is that this idea, stated this baldly, is mistaken.
In fact, according to Jonah Lehrer, in his extremely interesting article "Groupthink" (The New Yorker, Jan. 30, 2012, at 22-27), brainstorming produces fewer ideas than the same number of people would generate if each worked alone.
This, incidentally, is not an argument against "crowdsourcing" -- even if people actually performed best alone, pooling many people's individual insights might well produce a greater whole than any one person, however talented, could produce on his or her own. That's what a psychologist named Keith Sawyer, whom Lehrer quotes (at 23), seems to confirm when he says that "[d]ecades of research have consistently shown that brainstorming groups think of far fewer ideas than the same number of people who work alone and later pool their ideas."
Lehrer does not say whether people ever are more creative in groups than individually. He does say, however, that people in our world have to do a lot of work in groups, because so much of what we do today requires more expertise than any one person can possess.
So what produces ideas in groups? The answer, apparently, is challenge and critique. "Maybe debate is going to be less pleasant, but it will always be more productive," says Charlan Nemeth, another psychologist quoted by Lehrer (at 24). We aren't made to be entirely nice.
But we all know that criticism can be devastating. What distinguishes helpful criticism from unhelpful? That point the article doesn't altogether resolve, but here's what seems to be the answer: we need critique to be modulated by personal connections that are significant but not overpowering.
Actually, even this formulation may overstate the role of "critique." The studies Lehrer describes seem to me (based just on his descriptions, at 24) to suggest that "challenge" may be a better word for what's needed -- an encounter with alternative views. That encounter might not take the form of critique, as long as group members actually engage with views other than their own. What may be happening in brainstorming is that the injunction against critique is felt as a directive just not to "engage" with other group members' ideas.
The need for modulated engagement with others' ideas seems to be why, as a study of Broadway shows reported by Lehrer (at 24-25) revealed, groups composed entirely of people who are longtime colleagues, or of people who don't know each other well at all, may be less productive than those whose members have mixed degrees of closeness with each other. It may also be why buildings designed so that otherwise unconnected colleagues tend to just run into each other may be breeding grounds of innovation (Lehrer cites striking examples of this as well, at 25-27).
I do still wonder about the finding that brainstorming is less productive than working alone. Why would brainstorming actually diminish creativity? Perhaps it's because when we're alone we are harder on our own ideas -- hence more likely to come up with better ones -- than we are in a group committed to "no criticism." Perhaps that's not just because criticism seems out of style in the group but also because the group ethos congratulates all the members too readily, leading to unjustified feelings of productivity. Or perhaps it's because a group with no debate gets boring pretty fast.
Again, we're not entirely nice.
Why, then, does brainstorming have such a good reputation? One reason, at least in law schools, is that the anti-critique ethos intersects with a wider skepticism about the male-dominated, competitive, even humiliating style of Socratic classrooms of a generation or two ago.
But no matter how appealing its ideology is, a process that lots of people engage in with limited results ought to gradually lose its attraction. I suspect the reason brainstorming hasn't fallen into disuse is that it's rarely the only technique groups employ. If a no-critique hour is followed by dissection of the newly emerged ideas, perhaps the net result is better than if critique went on all the time. In fact, even if brainstorming has no intrinsic value, it may be a useful step in a group effort just because it is a change of pace. And it may be that even though most people don't actually become more creative in brainstorming sessions, some people do. Then putting some time into this process is a way to tap the ideas that those group members will have; the critiquers can resume their favored approach soon enough.
But I do still wonder -- having been in quite a few groups that employed brainstorming -- why didn't we learn about this social science long ago? Our ignorance may not have hurt us, since we probably were employing brainstorming as one tool among many. But it's unsettling to learn how little we knew about a process we quite eagerly employed. It's proof, at any rate, of the importance of not taking things for granted.
Tuesday, April 3, 2012
Stopping genocide and the law of war
Just a brief thought about reading Philip Gourevitch's book on the Rwandan genocide, We wish to inform you that tomorrow we will be killed with our families (1998).
The brief thought begins with this: a lot of people look pretty bad in this book, beginning of course with the murderous practitioners of Hutu Power politics, but continuing to include the French (patrons of the Hutu rulers of the country), the United States (fiddling while Rome burned), and human rights organizations (unable to distinguish victims from murderers).
But in one sense the most unsettling feature of these events is what they seem to have taught the United States, and perhaps the world as well. If it is intolerable to permit genocide to occur, then when it is threatened you must act, if necessary by the use of military force. Otherwise you tolerate the intolerable.
Tolerating the intolerable is what Bill Clinton declined to do when the US bombed Yugoslavia to protect people in Kosovo from their own state. Clinton acted despite the lack of authorization from Congress, in the clearest violation of our War Powers Resolution ever, and Barack Obama's support for intervention in Libya, similarly motivated, is the next clearest case. And it is not simple to discern in the United Nations charter, which appears to permit the use of force only in self-defense, a lawful basis for the use of force to block evil governments from murdering their own people -- yet such interventions seem increasingly to have found a rationale in a duty to protect.
The prospect of genocide compels such steps. But one result is to lower the always modest legal barriers, national and international, against war.
Labels: Clinton, genocide, Kosovo, law of war, Libya, Obama, Philip Gourevitch, Rwanda, United Nations charter