Saturday, September 12, 2015

When is a cognitive heuristic actually right?

We now know quite a lot about seeming deficiencies in human reasoning – departures from pure rationality that appear to be regular features of how we think. If these are regular but erroneous features of our thinking, why do we have them? The basic answer, as I understand it, is that they are generally serviceable, and enable us to handle the bulk of what we must address each day quickly and reasonably well. The fact that they are misleading in circumstances where we actually need to think carefully is a cost that’s worth bearing for the gains these thinking patterns give us most of the time.

But this is a bit of an odd explanation. If a particular tool of thinking is in fact mistaken, how can it be helpful most of the time? This defense sounds a bit like the merchant’s explanation for selling each item at a loss – that “we’ll make it up in volume.” No, we won’t; back in the field of cognition, we’ll just be regularly wrong.

So what is it about the heuristics that makes them usually helpful?

That’s a big question, and I certainly don’t have a general answer – I’m far from an expert on human cognition, and there may be an extensive literature on this very question that I’m overlooking. But here is something I’ve thought about it, focusing on one cognitive trait in particular: the tendency people have to treat a risk of loss as more significant than an equal prospect of gain. In a recent paper, Tomer Broude and Shai Moses of Hebrew University describe an implication of “Prospect Theory” in these terms: people “will often invest more in the prevention of loss than in the generation of gains of the same amount.” (Broude & Moses, at 5.) So, it seems, people will spend more on public health if they are told “Failure to spend X will result in 25% more illness than there would otherwise be” – a loss – than if they are told “Spending X will enable 25% more people to avoid illness” – a gain. (Broude & Moses, at 10, cite a study to this effect, though the phrasing of the “loss” and “gain” that I’ve offered here is mine.)
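For readers who want the mechanics, here is a minimal sketch of the standard prospect-theory value function, using the functional form and parameter estimates from Tversky and Kahneman’s 1992 paper (the $25 stakes are my illustration; none of this is drawn from Broude & Moses):

```python
# A minimal sketch of the prospect-theory value function, with
# Tversky & Kahneman's (1992) estimates: alpha = 0.88, lambda = 2.25.
# The $25 amounts are illustrative assumptions, nothing more.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of an outcome x relative to the status quo."""
    if x >= 0:
        return x ** alpha            # gains: concave, diminishing returns
    return -lam * (-x) ** alpha      # losses: scaled up by lambda

print(round(value(25), 1))    # ~17.0  -- felt value of gaining $25
print(round(value(-25), 1))   # ~-38.2 -- felt value of losing $25
```

On these numbers, a $25 loss carries more than twice the felt weight of a $25 gain – the asymmetry the framing experiments exploit.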

In this imagined choice, the two options are in fact the same, though they are being described differently, and it is irrational not to see them as the same. Any decision made without grasping that the two options are the same will be a flawed one because the decisionmaker doesn’t understand the situation. So it’s clear that the tendency to care more about avoiding losses than about achieving gains can be the source of erroneous decisionmaking.

Then why is this tendency part of our cognitive make-up? Actually, it’s probably not part of everybody’s; those who care more about avoiding losses than about making gains are, more or less by definition, risk-averse, and not everyone is risk-averse to that degree. There may well be opportunities available for those who are not, but why are the rest of us (certainly including me) inclined the other way? How can this tendency serve us well most of the time?

It seems to me that, most of the time, it is simply accurate that risking a loss of X will do us more harm than pursuing a gain of X will do us good. Why would the risk of losing $25 matter more than a putatively equal chance of gaining $25? For at least two reasons.

First, if the stakes really are $25 each way, then we need to weigh how much we need our last $25 against how much good an extra $25 would do us. Obviously the answer will vary with our life circumstances. As a general matter, people who are able to do so probably calibrate their spending to their current income and wealth. That means that losing $25 may immediately upset the balance in our lives, since we’ve planned our spending in light of having that money. But we probably haven’t planned our spending in light of getting an extra $25, since after all we don’t know that we’ll get it. The benefits of that gain are naturally more nebulous, and certainly not essential to maintaining the basic balance we’ve achieved. People with an appetite for risk may weigh these costs and benefits differently, and I’m not saying that being risk-averse is the only reasonable approach. But it is one reasonable approach, and apparently the more common one – and it’s hard, I think, to call it “irrational.”
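One conventional way to formalize this first point (my gloss, not anything in the post’s sources) is diminishing marginal utility: with any concave utility function, the last $25 is worth more than the next $25. A quick sketch, assuming log utility and an arbitrary $1,000 baseline:

```python
import math

# Diminishing marginal utility, illustrated. Log utility and the
# $1,000 baseline are assumptions chosen for the example.

baseline = 1000
utility_lost   = math.log(baseline) - math.log(baseline - 25)  # ~0.02532
utility_gained = math.log(baseline + 25) - math.log(baseline)  # ~0.02469

print(utility_lost > utility_gained)  # True: the loss hurts more
```

The gap widens as the stake grows relative to the baseline, which matches the intuition that the smaller one’s cushion, the more reasonable the aversion.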

There’s a second reason, however, that may be even more important: the future is actually not very predictable. When we are told that we might gain $25, we must know that whoever is telling us this could be wrong. Unless we are experts on the steps needed to achieve that gain, we inevitably face the possibility of problems we never thought of. (If something sounds too good to be true, as the saying goes, it probably is.) Now of course it is possible that the chance of losing $25 has been overestimated – but it is equally conceivable that the risks, either the amount we might lose or the likelihood of losing it, have been underestimated instead. Since in real life we generally don’t know the exact probabilities of either gain or loss, if we’re initially inclined to care more about potential losses, these uncertainties give us additional reason to follow that inclination.
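Putting the two reasons together – again just a sketch, reusing the assumed value function from above – a nominally fair 50/50 bet of plus or minus $25 already has negative subjective value, and any suspicion that the stated odds are optimistic only pushes it further down:

```python
# Subjective value of a "fair" 50/50 bet on +/- $25 under the assumed
# prospect-theory parameters. If the win probability has been oversold,
# the picture only gets worse.

def value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

for p_win in (0.5, 0.4):  # stated odds, then a mildly pessimistic revision
    bet = p_win * value(25) + (1 - p_win) * value(-25)
    print(p_win, round(bet, 1))   # 0.5 -> ~-10.6;  0.4 -> ~-16.1
```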

The rich literature about cognitive heuristics suggests that human reasoning is much more fallible than we might like to think. But if other heuristics also have sound bases in experience, as I’ve tried to show the “over”-valuing of risks does, then maybe we are not as fallible as we have come to believe. Perhaps we need to understand, instead, why particular traits of human reasoning often work out well in the world, however flawed they may appear in laboratory study. Of course, even if this is so, we’ll still know that we are capable of a lot of irrationality!

Saturday, September 5, 2015

Are good scholars more likely to be good teachers?

What should we make of a recent study, The Teaching/Research Trade-Off in Law: Data From the Right Tail (January 2015), which finds that, at the University of Chicago Law School, faculty members' teaching evaluations rise with how much they publish (69)?

The study's authors, Tom Ginsburg and Thomas J. Miles, both members of the Chicago law faculty, are open-minded and careful in their reading of the data, and readily acknowledge that the University of Chicago might be atypical. (78) They report that many studies of the impact of scholarship on teaching in academia have concluded that the two have little or no relationship to each other. (52) Meanwhile, Chicago's faculty members are so productive as scholars that the entire environment there may be unique. The average number of publications per year per faculty member is 4.63; the median is 3 (56-57) -- and that's a lot of publishing. Perhaps Chicago's students are so accustomed to having ardent scholars as teachers, and so convinced of the value of scholarship themselves, that they are an especially welcoming audience for this group of professors -- a possibility the authors note. (72) Perhaps Chicago's scholars are unusually good teachers as well, another possibility the authors allude to (but don't assert is correct). (57-58)

But suppose what is true at Chicago is true more generally. What would that tell us?

First, it would be clear that being a productive scholar does not prevent someone from being a strong teacher as well. We don't know if this is in fact generally the case, but if it is, that's important. 

Second, it would remain likely, nonetheless, that time matters. The Chicago faculty in this survey don't seem to teach very much; the study tracked each professor's work over roughly five years on average, and during that time "[t]he average professor taught an average of 10 courses and the median professor taught 9." (53) That appears to mean that the typical Chicago professor teaches two courses a year, a load that I think faculty at many other schools would envy. (The authors note that "[a]ll of the regressions in [one aspect of their study] suggest that teaching multiple courses in a single term may reduce the quality of teaching." (74)) Moreover, Chicago faculty in general have limited administrative responsibilities. (56) They have a lot of time to perfect both their scholarship and their teaching. Less time, one suspects, leads to less perfection -- though it appears from the data that Chicago's faculty/administrators sacrifice scholarly productivity (63) but not teaching quality (73).

Third, it would remain quite unclear whether scholarship promotes teaching. I find intuitively appealing the idea that writing about issues deepens understanding, and that this understanding improves exposition in class. But it may not be correct, and of course the correlations the study's authors have found do not by themselves prove causation.

Perhaps there is some other factor that makes people both productive scholars and effective teachers. Once this idea is stated, it strikes me as having intuitive appeal too. Maybe Chicago's faculty have a combination of intelligence, demonstrativeness (to catch students' and readers' interest), and focused energy. Traits like these would make people good at a range of occupations; teaching and scholarship would be two of them, but not necessarily because teaching and scholarship actually build on each other, or (more to the point) build on each other in some specially powerful way.
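The statistical point is worth making concrete: a common cause can generate exactly the kind of correlation the study found with no causal link at all between scholarship and teaching. Here is a toy simulation of that possibility (all numbers invented; nothing here comes from the Ginsburg & Miles data):

```python
import random
random.seed(0)

# One latent trait ("talent") drives both outputs; publications and
# teaching scores never influence each other, yet they correlate.
n = 10_000
talent   = [random.gauss(0, 1) for _ in range(n)]
pubs     = [t + random.gauss(0, 1) for t in talent]
teaching = [t + random.gauss(0, 1) for t in talent]

mp, mt = sum(pubs) / n, sum(teaching) / n
cov  = sum((p - mp) * (q - mt) for p, q in zip(pubs, teaching)) / n
sd_p = (sum((p - mp) ** 2 for p in pubs) / n) ** 0.5
sd_t = (sum((q - mt) ** 2 for q in teaching) / n) ** 0.5
print(round(cov / (sd_p * sd_t), 2))   # ~0.5, purely from the shared trait
```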

Fourth, the study does not tell us anything about the relationship between scholarship and clinical teaching. The Chicago clinicians weren't included in the study at all; the authors explain that this is because clinical "courses differ from traditional academic teaching, and clinical faculty often do not publish scholarly articles." (78 n.1) I'm not familiar with Chicago's clinical faculty rules, but it may be that clinicians there are not expected to be scholars as well as teachers and case supervisors. 

Suppose it turns out that Chicago's clinicians are also very effective teachers, but that their effectiveness doesn't correlate at all with scholarship. That would offer more reason to believe that the true source of faculty members' teaching effectiveness is, as I suggested earlier, a special combination of intelligence, demonstrativeness, and focused energy. The clinicians may build their teaching effectiveness through their simultaneous engagement in the cases their students are handling under their supervision; that practice engagement may have the same synergistic impact on teaching that scholarship may have for their classroom colleagues.

Fifth, we would not know whether the courses the Chicago faculty in the study are teaching are the best ones for the students to be taking. So, as the authors also acknowledge (48), what the study tells us is that Chicago's non-experiential courses are well taught by its scholarly faculty, not that "law schools are teaching the right things that students need to practice." We wouldn't know, in particular, that the balance of doctrinal and experiential, skills-focused courses is what it should be.

In short, we would -- and do -- have a lot still to learn.