We now know quite a lot about
seeming deficiencies in human reasoning – departures from pure rationality that
appear to be regular features of how we think. If these are regular but
erroneous features of our thinking, why do we have them? The basic answer, as I
understand it, is that they are generally serviceable, and enable us to handle
the bulk of what we must address each day quickly and reasonably well. The fact
that they are misleading in circumstances where we actually need to think
carefully is a cost that’s worth bearing for the gains these thinking patterns
give us most of the time.
But this is a bit of an odd
explanation. If a particular tool of thinking is in fact mistaken, how can it
be helpful most of the time? This defense sounds rather like the explanation of
the merchant who is selling each item at a loss – that “we’ll make it up in
volume.” No, we won’t; we’ll just (back in the field of cognition) be regularly
wrong.
So what is it about the heuristics
that makes them usually helpful?
That’s a big question, and I certainly don’t have a
general answer – and I’m far from an expert on human cognition. There may be an
extensive literature on this very question that I’m overlooking. But here, in any event, is something I’ve thought about it, focusing on one cognitive trait in particular: the tendency people have to treat a risk of loss as more significant than an equally large potential gain. Tomer Broude and Shai Moses of Hebrew
University in a recent paper describe an implication of “Prospect Theory” in these terms: people
“will often invest more in
the prevention of loss than in the generation of gains of the same amount.” (Broude
& Moses, at 5.) So, it seems, people will spend more on public health if
they are told “Failure to spend X will result in 25% more illness than there would otherwise be” – a loss – than if they are told “Spending X will enable 25% more people to avoid illness” – a gain. (Broude & Moses, at 10, cite a
study to this effect, though the phrasing of the “loss” and “gain” that I’ve
offered here is mine.)
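For readers who like to see the machinery, here is a minimal sketch of the value function usually used to formalize prospect theory, with the parameter estimates commonly attributed to Tversky and Kahneman’s 1992 follow-up work. The exact numbers are illustrative; they do not come from Broude & Moses.

```python
# A minimal sketch of the prospect-theory value function, using the
# parameter estimates reported by Tversky & Kahneman (1992); treat
# the exact numbers as illustrative, not as Broude & Moses's figures.

ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # loss aversion: losses loom roughly 2.25x larger

def value(x: float) -> float:
    """Subjective value of a gain (x >= 0) or a loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

print(value(25))   # ~ 17.0: felt benefit of gaining $25
print(value(-25))  # ~ -38.2: felt harm of losing $25
```

On these numbers a $25 loss feels more than twice as bad as a $25 gain feels good, which is just the “invest more in the prevention of loss” pattern the paper describes.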
In this imagined choice the two options are in fact the same, merely described differently, and it is irrational not to see them as the same. Any decision made without grasping that equivalence will be flawed, because the decisionmaker doesn’t understand the situation. So it’s clear that the tendency to care more about avoiding losses than about achieving gains can be a source of erroneous decisionmaking.
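To see the equivalence concretely, here is a toy arithmetic sketch; the population and illness figures are invented numbers of my own, chosen only so that the two framings describe one and the same underlying outcome.

```python
# A toy illustration with invented numbers: one underlying outcome,
# two descriptions. Nothing here comes from the cited study.
at_risk = 1_000
ill_if_we_spend = 200   # illnesses if the public-health money is spent
ill_if_we_dont = 250    # illnesses if it is not

# Loss framing: "failing to spend causes 25% more illness."
extra_illness = (ill_if_we_dont - ill_if_we_spend) / ill_if_we_spend

# Gain framing: "spending lets 50 more people stay healthy."
extra_healthy = (at_risk - ill_if_we_spend) - (at_risk - ill_if_we_dont)

# Both sentences pick out exactly the same pair of outcomes, so a
# decision that flips with the framing is responding to the words,
# not to the facts.
assert extra_healthy == ill_if_we_dont - ill_if_we_spend
print(extra_illness, extra_healthy)  # 0.25 50
```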
Then why
is this tendency something that’s part of our cognitive make-up? Actually, it’s
probably not part of everybody’s cognitive make-up; those who care more about
avoiding losses than about making gains are, more or less by definition,
risk-averse, and not everyone is that risk-averse. There may well be
opportunities available for those who are not risk-averse, but why are the rest
of us (certainly including me) inclined the other way? How can this tendency
serve us well, most of the time?
It seems to me that, most of the time, a prospective loss of X really will do us more harm than a prospective gain of X will bring us benefit. Why
would the risk of losing $25 matter more than a putatively equal chance of
gaining $25? For at least two reasons.
First, if
the stakes really are $25 each way, then we need to assess how much we need our
last $25 versus how much good it would do us to gain $25 more. Obviously the
answer will vary depending on our life circumstances. As a general matter, people who are able to do so probably plan their spending in light of their current income and wealth. That means that losing $25 may immediately impair
the balance in our life, since we’ve planned our spending in light of having
that money. But we probably haven’t
planned our spending in light of getting an extra $25, since after all
we don’t know that we’ll get the extra money. That means the benefits of that
gain are naturally more nebulous, and certainly not essential to maintaining
the basic balance we’ve achieved. People with an appetite for risk may weigh
these costs and benefits differently, and I’m not saying that being risk-averse
is the only reasonable approach. But it is one reasonable approach, and
apparently the more common one – and it’s hard, I think, to say that it is
“irrational.”
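One way to make this first reason concrete is with a concave, “diminishing marginal utility” curve. The sketch below assumes textbook log utility and an invented wealth level; both choices are mine, made only to illustrate the point.

```python
import math

# A minimal sketch of the diminishing-marginal-utility point,
# assuming textbook log utility and an invented wealth level;
# both assumptions are mine, not the post's.
wealth = 500.0

pain_of_loss = math.log(wealth) - math.log(wealth - 25)      # losing $25
pleasure_of_gain = math.log(wealth + 25) - math.log(wealth)  # gaining $25

print(round(pain_of_loss, 4))      # 0.0513
print(round(pleasure_of_gain, 4))  # 0.0488

# Under any concave (diminishing-returns) utility curve the loss
# outweighs the equal-sized gain, so declining a 50/50 bet of
# +/- $25 is not irrational at all.
```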
There’s a second reason, however, that may be even more important: the future is actually not very predictable.
When we are told that we might gain $25, we must know that whoever is telling
us this could be wrong. Unless we are experts on the steps to be taken to
achieve that gain, we inevitably face the possibility that problems will
surface that we never thought of. (If something sounds too good to be true, as
the saying goes, it probably is.) Now of course it is possible that the chance
of losing $25 has been overestimated – but then it’s also conceivable that the
risks, either the amount we might lose or the likelihood of our losing it, have
been underestimated instead! Since in real life we generally don’t know what
the exact probabilities of either gain or loss are, if we’re initially inclined
to care more about potential losses these uncertainties give us additional
reason to follow this inclination.
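To dramatize this second reason, here is a toy simulation. Every parameter in it is an assumption of mine, chosen to capture the “too good to be true” tilt just described: advertised gains run a little high, advertised losses a little low.

```python
import random

# A toy Monte Carlo sketch; every parameter here is an assumption of
# mine, chosen to dramatize the "too good to be true" tilt: advertised
# gains run a little high, advertised losses run a little low.
random.seed(0)
N = 100_000

def realized_payoff() -> float:
    """One play of a gamble advertised as a fair +/- $25 coin flip."""
    if random.random() < 0.5:
        return random.gauss(20.0, 5.0)   # advertised +$25, true mean +$20
    return -random.gauss(30.0, 5.0)      # advertised -$25, true mean -$30

mean_payoff = sum(realized_payoff() for _ in range(N)) / N
print(round(mean_payoff, 2))  # about -5.0: the "fair" bet loses money

# An agent who distrusts the advertised gain and pads the advertised
# loss declines this gamble, and so does better than the agent who
# takes the stated odds at face value.
```

On these invented numbers, the supposedly fair bet loses about $5 per play, so the instinct to discount promised gains pays for itself.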
The rich literature about cognitive
heuristics suggests that human reasoning is much more fallible than we might
like to think. But if other heuristics also have sound bases in experience, as I’ve tried to show the “over”-valuing of risks does, then maybe we are not as fallible as we have come to believe. Perhaps we need to understand, instead, why particular traits of human reasoning actually often work out well in the world, however flawed they may appear in laboratory studies. Of course, even
if this is so, we’ll still know that we are capable of a lot of irrationality!