Wednesday, December 26, 2012

Moral thinking -- and how we ask about it


I found myself thinking the other day about how important the framing of the question is to the answers you’ll get – even in such a subtle area as the understanding of people’s moral thinking.

Here’s a famous moral question, framed to help gauge the answerer’s level of moral development: the Heinz dilemma posed by Lawrence Kohlberg. As quoted in Wikipedia from Kohlberg’s Essays on Moral Development, Vol. 1 (1981), it reads:

Heinz's wife was near death, and her only hope was a drug that had been discovered by a pharmacist who was selling it for an exorbitant price. The drug cost $20,000 to make, and the pharmacist was selling it for $200,000. Heinz could only raise $50,000 and insurance wouldn't make up the difference. He offered what he had to the pharmacist, and when his offer was rejected, Heinz said he would pay the rest later. Still the pharmacist refused. In desperation, Heinz considered stealing the drug. Would it be wrong for him to do that?

Should Heinz have broken into the store to steal the drug for his wife? Why or why not?

Carol Gilligan, in her book In A Different Voice: Psychological Theory and Women’s Development (1993), famously contrasted 11-year-old Jake’s perception of this problem as “‘sort of like a math problem with humans,’” to be resolved using an ethic of rights, with 11-year-old Amy’s response, which framed the issue as arising in “a narrative of relationships that extends over time,” to be addressed using an ethic of care. The two responses are strikingly different, though whether that difference reflects a gender difference in ethical thinking is another, and complex, question. What’s striking to me now is a point that Gilligan may also recognize, but as far as I now recall does not make central: that the presentation of the problem itself potentially shapes the answers it elicits.

To put the matter more directly, Kohlberg’s question is precisely designed to pose an ethics question that is like a math problem. It’s meant, as many a law professor’s Socratic question is, to exclude all possible issues except one: in this case, the sheer conflict between two claims of moral right (respect for property and respect for life).

Those questions have their uses, in particular for encouraging students to practice skills of precision in identifying issues and reasoning about them. The price of asking such questions, however, is that if they work they narrow discussion and thought down to whatever line of reasoning the professor wants to focus on. They may also implicitly devalue, and they certainly aim to disregard at least for the moment, the many other thoughts and concerns that students may want to bring to bear on the matter at hand.

Perhaps these questions also reflect something true about the world – that sometimes stark choices must be made. But this claim is debatable. It’s been debated, in fact, in connection with the “ticking bomb” scenarios often advanced as the basis for moral argument about torture. If the ticking bomb scenario appeared in the actual world, its resolution might be a matter of constructing the right hierarchy of rights: weighing the terrorist’s right not to be tortured against his imminent victims’ right not to be killed. But in the real world, there may never be a question so stark as the ticking bomb scenario’s assumed facts – which imagine that we know exactly who might have to be tortured, under circumstances so urgent as to admit of no alternative except immediate action. As some very thoughtful observers have argued, if the real world is messier than the scenario, then thinking about the ticking bomb scenario may be a beguiling distraction.

But whatever the virtues and defects of these questions, for pedagogical or truth-seeking purposes, their power as questions is important to recognize. If we ask an 11-year-old, or a 45-year-old, a math problem about morality, it seems reasonable for us to predict that he, or she, will respond with a math answer about morality. Does that mean that the person answering actually views morality as a math problem? Perhaps – that would be one reason to respond this way. But perhaps not. Maybe he, or she, understands the question as ruling out any choices except (to use the Heinz dilemma in particular) to steal or not to steal. The question as phrased doesn’t quite do that, and it might take a much longer problem to explicitly exclude all other options. Still, the problem does seem meant to be understood this way. Maybe the person answering the question views questioners as entitled to answers that address the sort of question they meant to ask. Maybe he, or she, also assumes that math problems are problems to be responded to with math answers.

One might say a lot about the psychological traits these inclinations reflect – deference to authority, possibly, or maybe a generous desire to help the questioner. But whatever one might say on those scores, and whatever those observations might have to do with gender, they wouldn’t necessarily have much to do with whether the person being questioned thought about morality in terms of rights or relationships.

Here as elsewhere it’s very important to ask the right question. Otherwise the chance you’ll get the wrong answer has to increase.

Sunday, December 2, 2012

Remembering Arthur Chaskalson


Arthur Chaskalson, a truly great man, died yesterday, December 1, 2012, in Johannesburg. The list of his achievements is almost unbelievable: Fifty years ago, as a young advocate (that is, a courtroom lawyer), he helped represent Nelson Mandela in the case in which Mandela was sentenced to life in prison – a victory, since the alternative was death, and a victory that meant a great deal to South Africa’s future. In 1979, along with the distinguished lawyer Felicia Kentridge, he founded the Legal Resources Centre, which distilled the lessons of the NAACP Legal Defense Fund’s practice in the United States to become South Africa’s leading public interest law organization – and to win cases challenging apartheid, in apartheid South Africa’s courts. Then he took up the task of representing the African National Congress as one of its principal negotiators in the drafting of South Africa’s first post-apartheid constitution. That constitution created South Africa’s Constitutional Court, the first court in South Africa with authority to enforce a constitution that genuinely protected human rights. Arthur became the Constitutional Court’s first President and then, as this Court’s centrality to South Africa’s legal system became evident, he became Chief Justice of South Africa. And after he retired as Chief Justice, he served as President of the International Commission of Jurists, and in that position he led the ICJ’s incisive examination of the US “war against terror” and its uneasy relationship to law and human rights.

I had the great good fortune to be one of Arthur’s friends for the past 25 years. Our friendship began when we taught a course on “Legal Responses to Apartheid” together at Columbia Law School in 1987. Arthur’s own scholarly approach to South Africa’s law – he was a passionate opponent of apartheid who achieved results in part by being a dispassionate scholar of the law as well – helped me to realize that if I was going to talk about South African law I had to study it as hard as any other body of law, because South African law was easy to denounce, but not so easy to understand. Then he invited me out to South Africa, and I went, in the summer of 1988, and spent three weeks, mostly staying at the Chaskalsons’ home and meeting anti-apartheid lawyers whose work I admired immensely. Those experiences (and other wonderful opportunities I had to teach with and get to know South Africans opposed to apartheid) shaped my professional career, setting me on a course of research and writing about South Africa that remains a central part of what I do, and connecting me to people I’ve remained friends with ever since.

I remember Arthur for the profound impact he had on my professional life, and of course for the extraordinary series of achievements of his own career – enough for several successful lifetimes! But like many others, I also remember him for his humanity. He and his wife Lorraine, also a dear friend of mine, opened their house to their friends. I remember discussing the wellbeing of the many cats living in their backyard, the impolite meanings South Africans and Americans gave to certain Yiddish words, and the important question of how much to wash dishes before putting them in the dishwasher (I believe he and I both belonged to the “a lot” school). I also remember the phone service going dead, presumably in an effort by the apartheid police to prevent Arthur from planning legal strategy, during my first stay in their home. Arthur stayed the course despite that kind of pressure – and the last time we visited in South Africa, he took my wife Teresa, my son Dave and me to the Constitutional Court, and Dave sat next to him in the chairs the justices of that Court use to hear the issues that arise under a democratic constitution.

Flags will be at half-mast in South Africa all this week in remembrance of Arthur Chaskalson. He will be very much missed there, and here.

Friday, November 23, 2012

Thanksgiving


To celebrate Thanksgiving with family and friends, with an abundance of food and in a house with power, is to have a great deal to be thankful for. The assault of Hurricane Sandy on New Jersey and New York has been a reminder, an all too vivid reminder, of how fragile the elaborate social and technological systems are that maintain us. Some people lost their lives as a result of that fragility. Many more people suffer daily around the world in circumstances that are fragile, or worse, all the time. As we enjoy the return of our normal life, we have plenty of reason to recommit ourselves to building a world in which everyone’s normal life is worth giving thanks for.

Saturday, November 10, 2012

"For Martin Chanock: Essays on Law and Society"


Hot off the presses: “For Martin Chanock: Essays on Law and Society,” Volume 28, Number 2 of the Australian journal Law in Context, available here. I edited this issue, with Heinz Klug and Penelope Andrews, and all of us were very pleased to have the chance to help celebrate the work of Martin Chanock, a remarkable historian of African and South African law, and someone we’ve known and liked for many years.

For those who are interested, the editors’ introduction frames the issue and discusses the eight articles which appear in it, all of which respond in one way or another to Martin’s wide-ranging work. We also quote the eloquent personal tribute to Martin from Jianfu Chen, the former Head of School at La Trobe University School of Law, where Martin is now an Emeritus Professor; Jianfu said that Martin exemplified “decency,” and explained that “the seemingly easy task of being a decent person demands the output of the highest quality of human beings: honesty, integrity, passion, and compassion.” (Page 6)

I also wrote one of the eight articles, “A Bittersweet Heritage: Learning from The Making of South African Legal Culture.” Martin’s book, whose full title is The Making of South African Legal Culture 1902-1936: Fear, Favour and Prejudice (2001), is (I said) a “deeply unsettling … argument that race was at the heart of the entire enterprise of South African judging, not only the regrettable decisions but also the admirable ones.” (Page 76) I am inclined to think that this argument is correct, provided it is understood as a systemic observation rather than an appraisal of each and every judge – since there were individual, remarkable judges who waged legal battle against apartheid even as they held office under it. In the article I sought first to understand how Chanock’s argument could indeed be true, or more precisely to understand how even upright judges, capable of decisions that helped preserve the claims of human rights through very dark days in South Africa, were nevertheless people of their time and not somehow disconnected from its appalling problems.

But then I asked whether it followed, if Chanock’s appraisal was correct, that the right response today, as South Africa seeks to eliminate the taint of racism in its law and its life, is to disestablish entirely the institution of judging as it was practiced before the end of apartheid. My answer to this question was and is “no.” The old system’s formalism, with its “austere, independent judiciary, engaged in determination of outcomes through the application of a highly rationalised and complex logical process” (page 84), certainly needs reshaping. Its elitist manner should be diminished and its substantive reasoning made to rest on the new egalitarian liberty embodied in the constitution – changes that the Constitution, and the Constitutional Court, have aimed to accomplish. But the fundamental stance of judicial objectivity, the aim of judging “without fear, favour or prejudice,” the commitment to the idea of judges as experts on the law – all these, I urged, are both a kind of formalism and integral to liberty, in South Africa and throughout the world.

I’ll set out here the last few lines of the piece (page 88):

If the courts are to listen, and to help shape a country in which other government actors also listen, then perhaps what South Africa needs is not to beware of formalism but to beware of formulas. Let us seek a constitution of no slogans, in which courts – continuing their historic role of providing a measure of independent judgment about society – deepen their contribution by being as sensitive as possible to the entitlements, and imperfections, of all who come before them.

And the other seven articles are interesting too!

Fox News gives Romney the shove


After the network I was watching had declared Obama the winner on election night, I thought I’d see what Fox News had to say. I was pleased to find that they too had called the race for Obama. But Romney had not conceded. Initially that wasn’t startling, but I began to worry that he really might not give up and that we might be in for weeks of wrangling and litigation. We now know that he did consider exactly this course of action – his aides reportedly had their suitcases packed and were ready to depart on waiting planes to pursue challenges to the apparent results. While Romney weighed his options, what was Fox News doing?

The answer is that Fox was growing increasingly impatient. Their anchor interviewed the Fox correspondent at the Romney party in Boston, and pushed him to acknowledge that the delay was more than normal. That wasn’t all. Not much later, the anchor expressed at some length the idea that an essential part of the ritual of elections was the gracious concession, followed by the gracious victory speech, meant to enact the symbolism of bringing us all together after the divisions of the campaign. And, the anchor said, it was time for this to happen. I had the strong sense that the Fox anchor believed that Romney or his aides were watching Fox right then and there, and that the anchor was telling him that it was over. There was even a suggestion, though only a brief one, that Romney hadn’t been such a good candidate in the first place – and, again, now it was time for him to go.

A little while later, Romney went. More precisely, another network (I think it was CBS) reported that Romney had made the required concession phone call to Obama. Then, Fox said, the campaign “pool” reporters got the same news. And then Fox got confirmation too. It’s interesting that Fox seems to have been the last, or at any rate definitely not the first, to be told. Was that because the Romney people were angry about having been lectured to over the airwaves?

I haven’t seen this aspect of the Fox coverage discussed since Tuesday – though I’m not reading the conservative sites whose writers might have been the most likely to actually be watching Fox that night. But this moment when Fox helped give Romney the shove shouldn’t be forgotten.

What did client-centeredness teach us?


In October I had the honor of participating in a remarkable day-long conference at UCLA School of Law, organized by Scott Cummings in honor of David Binder, Paul Bergman, Gary Blasi, Sue Gillig and Al Moore, all of whom are retiring or have recently retired from the faculty there. Here’s a version of what I said, focusing on the impact of client-centeredness, the approach to lawyering spearheaded by David Binder and Paul Bergman:


What did client-centeredness teach us? I’ll talk about its conceptual, pedagogical, and normative implications.

Conceptually:

There was a time – that is, there still is a time in some circles – when it was often said that skills could not be taught, or learned. What skillful practitioners had was, most likely, acknowledged to be something, but what that something was remained ineffable and, really, not that interesting.

It’s integral to the client-centered approach to interviewing and counseling, as I think to all of the skills thinking done by David Binder and Paul Bergman and others who have shaped the UCLA approach, that skills can be analyzed. They have component parts, from the micro level of individual questions or words to overall structures and plans. Others have shared this conviction, but I think no one has been as influential as they have in actually accomplishing this analysis and demonstrating to teachers and students that it made sense.

Moreover, because skills have component parts, it follows that it is possible to assess the performance of these skills by determining whether those component parts were present, and executed correctly, or not. Skills become measurable. Performance becomes subject to evaluation.

As a result, academics have a contribution to make to the profession’s understanding of skills. If skills are to be understood only in the crucible of practice, then only those who are in the arena can speak with authority about what they do. Academics’ role, if they have one, would just be to repeat the distilled lessons imparted to them by practitioners. And of course those lessons might not be very profound, since practitioners might be unable to speak very coherently, however authoritative they are, given what we’ve learned (from Gary Blasi and other students of cognition) about how inaccurate people often are at describing their own thought processes.

But if skills can be analyzed, it becomes entirely possible that academics’ analysis will be superior to that of practitioners – or, more precisely, that academics who are also closely engaged with practice will be able to understand practice in ways that full-time practitioners do not. One of our comparative advantages as academics is time; another is the discipline of academic analysis itself. We have our disadvantages, not least that we may be less deeply immersed in the realities and necessities of practice than those who do it full time, but time and rigor are important assets. Practice becomes an academic subject.

Pedagogically:

This of course brings me to pedagogy. What can be analyzed and understood by academics can, at least potentially, be taught by them too.

But how? Broadly speaking, perhaps, in the same way that they can be understood. It seems to me that David, Paul and their UCLA colleagues have insisted that if skills can be broken down into their component parts, the way for students to learn them is to start with those component parts, practice them, and gradually combine them in tasks of increasing complexity. I take it to be a corollary of their thinking that – as in the Depositions course about which David, Al Moore and Paul wrote not long ago  – the targeted practice and equally targeted feedback possible in simulations are integral. Correspondingly, live-client clinical teaching that actually means to teach particular skills needs to be very carefully targeted as well. Not everyone agrees; some clinicians put more weight on the experience of client representation and the opportunity for reflection as foundations for later learning of more specific skills. But I would say that David and Paul’s pedagogy is implemented, in greater or lesser degree, in “skills” courses around the country. It may have influenced the development of legal writing pedagogy as well, and it may be affecting the ongoing debate over the elements of instruction in the traditional doctrinal classroom too.

I’ll have more to say about pedagogy, but first I need to shift focus.

Normatively:

What I’ve said so far is incomplete in a very important way, because it might suggest that the contributions David and Paul have made are just about the analysis and teaching of technique. But this isn’t true at all, and so now I want to really talk about client-centeredness specifically.

Let me start this way: client-centeredness did not take shape as a response to an academic problem. I believe, which is to say I recall David saying, that client-centeredness was a response to a problem of value: that lawyers had been exercising unjustified power over their clients. To this day the profession's official rules of ethics (I’m thinking of Model Rule 2.1) speak only opaquely about how lawyers and clients should actually interact with each other, but client-centeredness helped us see the play of power - and its potential channeling and restraint - in each moment of interaction between lawyer and client.

In discerning this moment-by-moment potential for just and unjust relations between lawyer and client (just as in articulating techniques for achieving just relations) client-centeredness has been enormously influential. Exactly what client-centeredness calls for has, to be sure, become almost as debated a question as, say, what utilitarian ethical theory requires – as Kate Kruse has demonstrated – but that’s really proof of its influence. Similarly, there are now schools of clinical thought that claim different labels, such as collaborative lawyering, but I think these share a great deal of common ground with client-centeredness. So, for example, Bob Dinerstein, another panelist at the UCLA event, Isabelle Gunning, Kate Kruse, Ann Shalleck, and I recently wrote a book in which we positioned ourselves, in Bob’s happily chosen phrase, as endorsing “engaged client-centeredness.” That phrase reflects what I think is true for all clinicians today, regardless of the particulars of label: we are all client-centered now. And of course it’s also important to see that in this respect as well, David and Paul taught that academics had a distinctive contribution to make to discussions of practice, because they brought not only analytical rigor but normative challenge to the forms of practice that were once prevalent.

I think it’s appropriate to underline here the technique that may be the signature of client-centeredness: active listening. Simply to tell lawyers that a crucial part of engaging with clients was not talking was, of course, of value. But active listening is, as probably everyone here knows, much more than not talking. In fact, active listening involves a certain amount of speaking! The speech, however, is focused on conveying a particular emotional response from the lawyer to the client, a response that incorporates attentive understanding but goes beyond it to express a specific relation and connection to the client: nonjudgmental empathetic regard.

I once wrote an article arguing that sometimes more than empathy is called for between lawyer and client, but empathy, if not always sufficient, is surely always necessary. And empathy is more than a skill; I think it rests on values of acceptance, and ultimately respect, for clients. Respect, in turn, is integral to client-centeredness. The specific techniques of client-centeredness reflect a belief in the capacity of clients to arrive at thoughtful decisions if they are helped to see matters clearly – and a commitment to protecting clients’ right to make those decisions, their right of self-determination.

Just two more points about this norm of respect, this time in connection with pedagogy again: First, one of the important themes of current commentary about legal education suggests that skills and values are separate things and thus prompts concerns about whether success in teaching skills alone is a sufficient preparation for practice. Client-centeredness, however, is an approach to skills that rests on values at every step; if we teach client-centeredness, we are teaching both skills and values. Client-centeredness has a normative kick from the get-go.

Second, the implication of respect for clients is that students also should be treated with respect, and that their capacity to learn should both be recognized and assisted, with the same sort of careful attention to promoting student learning that client-centeredness gives to promoting client decisionmaking. The client-centered lawyer is not passive, nor is the student-centered teacher – they both have a lot of important work to do. But they both do that work as an expression, and a vindication, of respect.

Let me just add my personal thanks to David Binder for his own living of the norm of respect. I first showed up out here at the 1986 Arrowhead conference, where I gave a paper called “Lawyers and Clients” – a title whose rhythm I borrowed from Turgenev’s “Fathers and Sons,” with my father, then dying of Lou Gehrig’s disease, in my mind. Though I admired client-centeredness then, as I do still, in the nature of academic papers I focused on what I found to critique in it. A lesser person would have treated that paper as a reason for distance; David treated it as a basis for what’s become a quarter-century of collegial friendship. I was grateful then, and I’ve only become more grateful since.

Saturday, October 13, 2012

African masks and the costs of authenticity


Having admired and in a small way collected African masks for years, I've recently begun to study them. Isn't this a bit late? Well, no: I've followed the advice a wise friend gave me long ago -- that I should just buy what I liked. But now I find I want to know more about these things that I like.

Unfortunately a lot of what I'm learning is unsettling. For instance, the question of authenticity: is an authentic mask one that was carved for use in ritual practices embedded in African custom? If that definition is right, then two things would follow. (I say this based in small part on my own experiences looking for masks, but mostly on what I’ve been learning from reading, particularly in the fascinating book by Christopher B. Steiner, African Art in Transit (1994), which studies the African art trade in Côte d’Ivoire.)

First, it's very unlikely that many of the masks for sale in African markets today are authentic in this sense. The traditional practices of which the masks were a part are fading – though I doubt that these practices are entirely gone – and so presumably fewer and fewer masks are being made for actual use in ritual. Moreover, there are a lot of masks for sale, so many that there probably just aren't enough villages to support the commerce from their own ritual stock. Most masks, instead, are probably being made right now, and mainly for the tourist trade. They may look old, but that’s because they’re specially treated to appear that way.

Second, if a mask actually is authentic, how did it make its way from ritual to commerce? Steiner offers this description of bargaining “[a]t the village level of the art trade”: “[M]uch art is obtained during times of personal or regional crisis…. Bargaining here is less concerned with price as it is with the negotiation of a sale – i.e., convincing someone to sell something.” (64) On the same page, Steiner quotes an African art trader on this process:

When buying in villages, you have to be very careful about what you say. You have to be gentle and polite. You have to explain to the elders that these objects are things which people want to learn about. “Your children,” you must tell them, “won’t be able to appreciate or understand these things unless we take them and preserve them in museums and in books.”

I recently heard Michael Sandel speak about his current work on the intrusion of market thinking into areas of life that used to be regulated, at least on the surface, by other forces. (For example: government programs that pay children for reading books during summer vacation.) He was worried about the moral corrosion that might be caused by market values' "crowding out" of other human impulses, and that's an important concern. But the moment when an authentic mask is pulled from the world of its creation into the world of art and commerce seems worse -- an act of cultural destruction rather than mere corrosion.

Paradoxically, almost all masks that are really old (say, a century or more) seem to be in Western museums. There they have been preserved against climate and pests, which are not kind to wooden masks. There too, it seems, they get treated as objects for preservation rather than for use and in due course replacement.

But now, today, it seems right to say that those Westerners who simply like African masks should not seek masks that fit the definition of authenticity I started this post with, because if they succeed they will be contributing to the disintegration of a culture. Instead, they – we – should embrace masks made today, for trade, as today's expression of this traditional art. And we should buy what we like.

Sunday, October 7, 2012

From Kol Nidre

Kol Nidre, the opening service of Yom Kippur and also the name of the central prayer of that service, was last month. I was given the honor of speaking briefly at my synagogue's Kol Nidre service; what follows is a much revised version of what I said that night:


The ancient Kol Nidre prayer, which Jews recite at the beginning of Yom Kippur’s twenty-four hours of reflection and atonement, declares that all vows and obligations we have entered into shall not bind us nor have power over us. The prayerbook says that when efforts were made to drop this language, because it seemed so problematic morally, congregations resisted. Why? How can moral people embrace such a declaration?

One answer is that this is a prayer for the peace that passeth understanding. For people who are incapable of perfection – that would be all of us – only relief from the burden of seeking perfection can sustain us.

But perhaps we should understand it not as a plea but as a pathway. Like an amnesty after a war, Kol Nidre seeks a way to re-admit each of us to the community, when otherwise our past commitments and our past failures might overwhelm us. Yom Kippur calls on us to be better people in the year to come, but it does so in part by authorizing us to be merciful to ourselves.

This is no moral free pass. Actually (as one of my children pointed out to me) it must be a moral error to ask more of ourselves than we can do – as it is also a moral error to ask more of others than they can do. Instead this declaration is an assertion of our actually being moral persons, who can judge what is right and wrong, which duties are real and which are false, and who thus can freely live the most faithful and committed lives we can achieve.

Sunday, August 26, 2012

Magic and violence in South Africa

Susan Njanji, in an August 25, 2012 Mail and Guardian article called "Lonmin tragedy lays bare violent inter-union rivalry in SA," begins by discussing the extent to which the violent strike at the Lonmin platinum mine grew out of a conflict between two unions. That's certainly worrying, but the strangest feature of her article comes at the end. Under the heading "black juju" she reports:
Belief in black juju has also taken root and was partly blamed for the workers' defiance during a standoff with police before 34 of them were gunned down.
Local media report that a video report shot by the police from a helicopter during the strike, showed naked men lining up to be rubbed with herbs that were believed would make them bullet-proof.
"The use of muti has become so institutionalised in everything they (unions) do," said [Crispen] Chinguno [described as "an industrial relations researcher at the University of Witwatersrand"].
He said some of the 17 000 workers sacked and later reinstated at Impala [another platinum mine where a "violent strike" took place earlier in 2012] believed they regained their jobs thanks to juju.
Chinguno might be mistaken. The local media report about the police video might too. But the Mail and Guardian is, I believe, a quite reliable news source, and so I think it is quite likely these reports are at least in good part correct. Though it is not often discussed, my impression is that belief in magic or witchcraft remains an important feature of South African culture.

It is, of course, a risky business to discuss anyone's religious or cultural practices. None of us can really be sure of the answers to the ultimate questions; there's a long and nasty history of Western condescension towards African beliefs in particular; and of course there are many mainstream Western beliefs that nonbelievers might see as bizarre.

But equally it is a mistake to attempt to understand people's choices and actions while deliberately disregarding beliefs that they themselves hold dear.

So this at least should be said: if South African strikers are now embracing magic as a source of invulnerability to weapons, then they will be less deterred from violence in the future, because they will believe themselves protected from the risks of retaliation. And if union leaders or politicians find it in their interests to fuel violence as a strategy, then they will have reason to ally with the purveyors of magic so as to encourage their followers. Perhaps the terrible shootings at Lonmin will short-circuit this process, but believers quite often seem capable of withstanding empirical refutation of their beliefs. Finally, this: if there is a gathering storm of believers in magic and violence, that is a frightening prospect for South Africa. 

Thursday, August 23, 2012

The impact of scholarship on teaching -- not much!

I've just encountered a very nicely executed and insightful study by Ben Barton of the University of Tennessee College of Law, on the question: "Is There a Correlation between Law Professor Publication Counts, Law Review Citation Counts, and Teaching Evaluations? An Empirical Study." It's available for download on SSRN and was published in 2008 in the Journal of Empirical Legal Studies.

Barton studied the scholarly productivity and impact, and the teaching evaluations received from students, for every tenured or tenure track faculty member at a diverse group of 19 law schools in the United States -- 623 faculty members in total. I'm no statistician, but my impression is that his statistical analysis is done with care, and with a recognition of the many imperfections of all the data -- which nonetheless remain the data that we have. The result is a finding that "there is either no correlation between teaching evaluations and these measures of scholarly output, or a very slight positive correlation" (16). Moreover, this finding appears to be quite consistent with the results of studies elsewhere in higher education (which Barton summarizes at 2-3), and his data appear to cast doubt on one much smaller study of law school teachers that had found a greater positive impact (described at 3; for Barton's contrasting analysis, see 18).

What should we make of this finding? Barton points out that his results are inconsistent with two quite opposite hypotheses, each with its adherents. (19) One group presumed that the impact of scholarship on teaching would be positive, on the theory that it is through scholarly work that teachers master their subject. But the impact was at most slight. The other group believed that the impact of scholarship on teaching would be negative, because the time required for doing scholarship would inevitably take away from the time a professor could devote to improving his or her teaching. But this effect also turns out to be absent.

This pair of results is actually quite odd. Scholars do learn about their subject as they write about it, or at least they feel they do (I personally feel I do), and it would make sense that they would. But this increased learning has little impact on their teaching. At the same time, scholarship takes time, and time is scarce, yet this substantial claim on scholars' time doesn't turn out to demonstrably impair their teaching.

One possibility is that both hypotheses are right, and that they are mostly invisible in the data because they cancel each other out. That is, scholarship does enhance scholars' knowledge, and it does take away time from their work on their teaching -- and so what is gained on the one hand is mostly lost on the other, with the net result (this is what Barton's data say) that productive scholars are at most only slightly better teachers than their nonproductive colleagues.
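
To make that cancellation argument concrete, here is a minimal Python simulation sketch -- entirely my own illustration, not anything drawn from Barton's data; the effect sizes, the noise, and the sample of 623 are placeholders that simply mirror the numbers mentioned above. It shows how a real positive effect (knowledge gained) and a real negative effect (time lost) of similar size can leave the measured correlation between scholarly output and teaching evaluations close to zero.

    import random
    import statistics

    # Toy model (not Barton's data): teaching evaluations get a boost from the
    # subject mastery that scholarship brings, and take a hit from the
    # preparation time it costs. When the two effects are comparable in size,
    # the correlation between output and evaluations washes out.
    random.seed(0)

    def simulated_correlation(knowledge_gain, time_cost, n=623):
        output = [random.uniform(0, 10) for _ in range(n)]   # scholarly productivity
        evals = [5.0 + (knowledge_gain - time_cost) * x + random.gauss(0, 1)
                 for x in output]                            # teaching evaluations
        return statistics.correlation(output, evals)

    print(simulated_correlation(0.3, 0.3))  # offsetting effects: correlation near zero
    print(simulated_correlation(0.3, 0.0))  # knowledge effect alone: clearly positive

Of course nothing in this toy model tells us which story is true; it only shows why a near-zero correlation is consistent with both effects being real.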

But what if actually both hypotheses are wrong? In that case the reason that scholarship has little impact on teaching would be that (a) scholars don't learn that much from their scholarly work that can help them in their teaching and (b) the time scholars spend on their scholarship doesn't much impair their efforts to be good teachers. So again the two effects, or rather non-effects, balance each other out. Could these two propositions be correct?

As to the first, it might be argued that although scholars do learn about their subjects as they write about them, they don't learn much that they would want to convey to their students. If most scholars today are engaged in various forms of esoteric theory, then it might indeed be the case that while they learn a lot from their writing, what they learn is not what they teach. In fact, Barton finds some evidence that "practice-oriented scholarship" has the greatest impact on teaching evaluations (15) -- an ironic result, since the kinds of scholarship Barton quite reasonably appraises (see 8) as the most practice-oriented (treatises, casebooks, and "practitioner article[s] or chapter[s]") are probably not those viewed as most prestigious among scholars today.

But how could it be that the time spent on scholarship -- if it's not a positive benefit to teaching -- doesn't wind up actually impairing teaching by taking time away from scholars' focus on it? Two possibilities immediately suggest themselves. One is that the tenured and tenure-track people who don't do scholarship also don't spend much time on their teaching -- so the scholars are as attentive to their teaching as the nonscholars. The other, a much happier possibility, is that although the scholars spend less time on their teaching than they otherwise could, they (and their less productive colleagues) still spend enough time to do a good job. It may well be that tenured and tenure-track law faculty -- busy as they may feel at times -- in fact have so much time to devote to their teaching, even after they finish their research, that they can and do prepare themselves well for teaching.

I'm inclined, however, to reject all of these explanations, or rather to say that they are all unproven. I wonder if what we are seeing is a different phenomenon at work. There surely are better and worse law teachers, but I'm inclined to think we do not yet know much about how to describe who the better and worse teachers are, or about how to convey to less effective teachers the skills that will make them better. The result, I suspect, is that we cannot really measure the impact of scholarly work on teaching, because we are still at such an early stage in developing ways to improve our teaching.

One last point is important to make. Barton studied only tenured and tenure-track law faculty, because (at least usually) only they are expected, as part of their jobs, to produce scholarship. We know even less, therefore, about the impact of scholarship on the teaching of those faculty who aren't required to write, but choose to do so nonetheless. We also don't know very much about the impact of scholarship on the teaching of those faculty whose principal teaching responsibilities are in "lawyering skills" rather than in legal doctrine, since skills teachers are probably still much less likely to be tenured or on tenure-track than their doctrinal colleagues.

In short, there's a lot we don't know. I wouldn't take Barton's study as demonstrating that scholarship is without value to teaching -- though it does demonstrate that scholarship has little demonstrable impact on teaching. Instead, I would urge that we focus most directly on what seem to me to be the central uncertainties: how to be, and how to help others to become, better teachers. If we can work on these issues, I think we can safely put to one side for now (and probably for the foreseeable future) the seeming tensions between scholarship and teaching.

Tuesday, August 21, 2012

The shootings in South Africa

Last week's killings in Marikana, South Africa -- "34 dead and 78 wounded in the bloodiest day of protest since apartheid," as an article in South Africa's Mail and Guardian summed it up -- are appalling, and that may be the most important thing to say about them. Who would have believed, in 1994, that some years down the road the police of post-apartheid South Africa would shoot into a crowd of striking workers and leave dead and wounded strewn across the ground? A ghastly event.

But what can we make of it? One answer surely is that the police were not well trained. But while the exact events that led to the shootings remain to be clarified, it's been reported by Devon Maylie in the online Wall Street Journal that:
Police said they fired live ammunition into the crowd, after a group of protesters shot at and charged them. The police said they had tried to disperse the crowd with water cannons, stun grenades and rubber bullets, to no avail.
The same article says that in the course of the ongoing industrial dispute that led to the shootings, 10 other lives had also been lost -- 8 employees and 2 police officers. Another Mail and Guardian article, by Kwanele Sosibo, reports that:
A man found lying in crucifixion position on the edge of the koppie on Tuesday with his head split open and stab wounds to the torso, had apparently committed the cardinal sin of "fishing for information". His lifeless body was left on display the entire day as a warning to non strikers.
Though the police were not well enough trained to deal with it -- and plainly they weren't -- still this was no easy crowd-control situation.

How did such a situation ever arise in post-apartheid South Africa? One answer is that the workers who went on strike were deeply frustrated by the ANC's failure to redress the ferocious economic injustices that remained even after apartheid ended. No doubt this is true, but by itself it is not illuminating. What did the ANC's "failure" consist of? Was it a failure to move towards a more truly redistributive state (and would other policies have better negotiated the tension between domestic need and world economic pressures)? Or was it a failure to keep moral faith with the people of the country, as leaders came to seem more interested in their own power and privilege than in the grinding suffering of millions of South Africans?

The ANC's failings certainly must have contributed to the violent frustrations on display at the Lonmin platinum mine. But this explanation also misses some of what makes this situation so troubling. The workers at the mine reportedly sought a wage hike from 4000 Rands per month to 12,500 per month, or roughly from $484 to $1513 per month. (Perhaps the wage hike was somewhat smaller; I've seen multiple figures.) These are not generous salaries, viewed from an American perspective, and it may well be that they should simply be described as exploitative. But $1513 per month, or $18,156 annually, would appear to be well above the gross domestic product per capita of South Africa, estimated at $11,100 in 2011 according to the CIA World Fact Book. There are a lot of people in South Africa poorer than these workers. In fact, in a country with an estimated unemployment rate of 24.9 % (also according to the CIA's World Fact Book), these unionized workers might even be described as relatively privileged -- which is not to deny how hard their work evidently was, or how bad their living conditions reportedly are.
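
(For readers who want to check the arithmetic, here is the back-of-the-envelope conversion behind those dollar figures -- a rough Python sketch of my own, assuming an exchange rate of about 8.26 rand to the US dollar, which is roughly what the quoted conversions imply for mid-2012; the rate is my assumption, not a figure taken from the cited reporting.)

    # My own rough check of the wage figures above. The exchange rate is an
    # assumption (about 8.26 rand per US dollar), not a number from the cited articles.
    ZAR_PER_USD = 8.26

    current_monthly_zar = 4000       # reported current wage, rand per month
    demanded_monthly_zar = 12500     # reported demand, rand per month

    current_monthly_usd = round(current_monthly_zar / ZAR_PER_USD)      # about 484
    demanded_monthly_usd = round(demanded_monthly_zar / ZAR_PER_USD)    # about 1513
    demanded_annual_usd = 12 * demanded_monthly_usd                     # about 18,156

    print(current_monthly_usd, demanded_monthly_usd, demanded_annual_usd)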

There is one more deeply depressing feature of this situation. I mentioned just now that the workers were unionized, but that was an oversimplification. What seems to have happened at this mine is that workers became dissatisfied with what had been their union, the National Union of Mineworkers (NUM). NUM was an important contributor to the struggle against apartheid, but apparently it has lost the faith of many of the workers it represented, and Sosibo, in the Mail and Guardian, reports  that a new and more militant union, the Association of Mineworkers and Construction Union (Amcu), now has the allegiance of many employees -- 21 %, according to a management representative.  Much of the violence, in turn, appears to have been between different groups of workers. Sosibo writes that after alleged sniper killings by people wearing NUM T-shirts, other workers "embarked on a retaliation campaign." Perhaps some of the violence was also simply labor militancy in the service of the strike; Sosibo cites a doctoral candidate "studying patterns of violence in platinum mines in the Rustenburg area," who says that "violence had become routine in strikes in the region."

So the violence is part of a pattern of labor struggles with management, and of internecine struggles among workers and unions. It is also, the same doctoral student suggests, a result of
the fact that workers have become more fragmented than before. Some are residing in informal settlements outside of the mines, some still live in hostels and some black workers occupy more skilled positions than others. Violence is used as a way of enforcing solidarity.
At this point the passions and divisions of South African society begin to seem intractably deep. I very much hope that that is not in fact the case. 

Monday, July 23, 2012

Are law students who take clinics doing "pro bono" work?


New York's Chief Judge, Jonathan Lippman, has recently announced a new requirement for admission to the NY bar: that each applicant first complete 50 hours of pro bono legal work.

As a step to find legal resources to meet the massive need for legal service for people who cannot otherwise obtain it, this proposal has a lot to recommend it. But it raises surprisingly difficult questions of definition. Many of these have recently been discussed among clinicians, and this post grows out of that discussion (in which I participated and from which I learned a lot).

"Pro bono" work, in its purest sense, is work done purely for the sake of the good -- the "public good," pro bono publico -- that it does. Generally, such work is truly admirable. (Generally, but not always: some people's understanding of the public good may be horrendously flawed; some people may do perfectly good work but harmfully disregard their loved ones in the process; and so on.)

But the more important problem is that a great deal of pro bono work probably isn't done just for the sake of the public good.  A lawyer may take pro bono cases in part to gain valuable experience, or to put his or her name in the public eye. A law firm may make itself a more attractive place to work by allowing its members to do pro bono cases; the firm's motive then is at least partly to prosper in the hiring market. As part of this strategy, firms may  (as others have pointed out) count lawyers' pro bono hours towards the annual targets each attorney needs to achieve (and may pay the lawyer's salary while she does her pro bono work). Here the "pro bono" work actually counts toward the lawyer's employment success, and compensation, at the firm.

It's worth pausing here to think a bit about the logic of defining pro bono work as solely for the public good in the first place. We might compare the work of two imaginary lawyers, unusually named Lawyers A and B.

Lawyer A is a partner in a private firm, earning $500,000 per year. Every year she devotes 40 hours to pro bono work, which we can assume (in her favor) is in no way credited to her work for the firm. It is true, and admirable, pro bono service.

Meanwhile Lawyer B is a staff attorney at a legal services clinic. She works full-time representing poor clients who cannot afford to pay for a lawyer. But she is paid, say, $50,000 per year. As another commenter pointed out, the fact that she is paid for her work means that -- if pro bono work must be done solely for the public good -- she is doing no pro bono work at all. Yet she is spending her entire working life representing poor people, and makes one tenth the income of Lawyer A.

As a general matter (leaving aside special cases of individual psychology), it's clear, isn't it, that of these two lawyers, it is Lawyer B who has made the more profound commitment to public service? And that points to a general proposition: while working without reward is certainly morally relevant, it's not the only measure of what we are ultimately concerned with, which is contribution to the public good. We value such contributions when they are made, and we value experiences which tend to encourage people to make such contributions over their lives.

Meanwhile, and most clearly, pro bono work isn't purely for the public good if it is required. If a lawyer must do 50 hours of pro bono work to keep her law license, or if a bar applicant must do 50 hours of pro bono work to be admitted, it is very likely that for their 50 hours they are not working solely for the public good -- because they are also working to meet the requirements for being a lawyer and having all the possibilities of income, status and power that a law degree can support.

Now I'm definitely not saying that the presence of mixed or multiple motivations makes "impure" pro bono work valueless. I really mean the opposite: most work of any kind is done for multiple reasons, and mixed-motive pro bono work can be very valuable.

All of this brings us to the question of whether students' work in for-credit law school classes should count towards New York's soon-to-be-instituted requirement of 50 hours of pro bono work as a condition of admission to the bar. I think the answer is yes, for several reasons, partly of definition and partly of underlying purpose.

First, while it's true that clinic students get a reward for their work, that doesn't distinguish them from many other lawyers whose pro bono work, as I’ve just argued, is in some way rewarded. In particular, it doesn't distinguish them from all the other applicants to the NY bar who will be rewarded for their 50 hours of pro bono work with eligibility for admission. "Pure" motivation is rare, and is not the central issue anyway; public service, and a commitment to it, are the key points.

Second, the reward clinic students receive is notably modest. Most strikingly, as a colleague pointed out to me, clinic students have to pay to get it -- because clinic courses are part of the very expensive law school education they are paying for. Moreover, students who choose to take clinics generally must forego taking equivalent numbers of credits of other courses (though to be sure some of them may be eager to make this trade-off). The paradigm case of pro bono work is work for a good cause without remuneration; typical clinic students fit all of that plus they pay out of pocket (or from loan indebtedness) for the privilege. Their work is, in this sense, the most pro bono of all.

It's worth adding, as others have pointed out, that the 50-hour requirement will fall on a group -- new law graduates -- who are already very stretched economically. We ought to avoid adding further economic burdens if we can, and one way to do that is to let students earn their pro bono hours as part of the law school study they are already paying for.

Third (and the points in this paragraph are ones others emphasized), a central purpose of most clinics is to provide effective representation to people who cannot afford to hire a lawyer. To do this is not easy; clinical teaching and learning are intense. To disregard the contribution this work makes to meeting the needs of underserved people -- to, literally, not count it -- seems to miss the value of this work towards meeting the pro bono program's goals.

Or the impact may be worse than that: not counting clinical work may actually hurt the overall pro bono effort law students make. If students cannot count their clinical work towards their pro bono requirement, presumably the result will be to discourage students, to some degree, from allocating their scarce time toward clinics -- and to push them, to that same degree, into forms of pro bono work that are not so carefully structured and guided.

In short, "pure" pro bono should not be our touchstone: pro bono work purer than clinic students' work does not often exist, and seeking it may undercut our achievement of the real goals at issue: helping underserved people, and encouraging future lawyers to commit themselves to providing such help in the many years of their legal careers.

All of this doesn't answer all the definitional questions. In making this argument, I've meant to use  the term "clinic" broadly, to include not only the classic "live-client clinic" taught at the law school by full-time faculty, but also other experiential learning such as "externship" placements in outside law offices, and other forms of guided law-related experience as well. There are many in-house clinics, externships and related programs, and it's possible that some of them -- not many -- do not involve public service work but instead involve students doing the tasks of private practice. If so, this work may not be "pro bono" (which is not a critique of its educational value). There may be other such lines to be drawn, and certainly insight to be gained from those who've focused on such issues over many years. My point is only to advocate one part of the answer to the problem of definition -- namely, that students' work in clinics, broadly understood, should count as pro bono hours.

Thursday, July 19, 2012

Just how good is empathy after all?

Is empathy a good thing in judges? I would say that the answer is yes -- but that's not because it always leads judges to wise decisions, or specifically to liberal ones. If there were any doubt about these qualifications, here's the penultimate paragraph from Justice Scalia's opinion in Arizona v. United States, decided by the Supreme Court on June 25, 2012 (and available here). He writes:
        As is often the case, discussion of the dry legalities that are the proper object of our attention suppresses the very human realities that gave rise to the suit. Arizona bears the brunt of the country's illegal immigration problem. Its citizens feel themselves under siege by large numbers of illegal immigrants who invade their property, strain their social services, and even place their lives in jeopardy. Federal officials have been unable to remedy the problem, and indeed have recently shown that they are unwilling to do so. Thousands of Arizona's estimated 400,000 illegal immigrants--including not just children but men and women under 30 -- are now assured immunity from enforcement, and will be able to compete openly with Arizona citizens for employment.
This is the language of empathy, here directed to Arizona citizens' feel[ing] that they are "under siege by large numbers of illegal immigrants." While Justice Scalia asserts that the human realities he portrays so forcefully are not "the proper object of our attention," it is hard not to sense that in fact those realities do occupy some part of his attention. Advocates of empathy in judging would not complain of this, though apparently Justice Scalia himself might. But the familiar point that Justice Scalia's words vividly illustrate is that the impact of empathy on judicial decisions depends on who the judge empathizes with.

Even those who see empathy as a legitimate, or integral, part of judging likely agree that there is a point at which a judge's empathy obscures his or her judgment. Was that true here? It is clear that Justice Scalia is very angry about the majority's decision; he ends his opinion by saying that if Arizona can't do what it was trying to do in this case (which he describes as "securing its territory" and "protect[ing] its sovereignty"), then "we should cease referring to it as a sovereign State." He also refers at some length, both in the passage I've quoted above and earlier, to the President's decision not to enforce the immigration laws against a class of people who came to this country when they were under 16 (and, the point Scalia refers to in the quotation above, are not yet over 30 years old) -- a decision that was not directly at issue in the case and that was made, as Justice Scalia notes, after the case was argued to the Supreme Court. I am not sure how to measure whether any of this reflects too much feeling on Justice Scalia's part, and I'm pretty confident we could find similarly outspoken comments in the opinions of some of the great liberal justices of the past.

But in saying I'm not sure about this, I don't mean to dismiss the question. It seems to me that those (like me) who do approve of empathy in judging need to find a way to discuss, and then if possible measure, when empathy does distort judgment. Is the issue simply a matter of quantity -- too much of a good thing? Or is the quantity of empathy important only in the context of the structure of the rest of the empathetic judge's character -- so that some judges can be thoughtful and empathetic, while others can only manage one or the other? Or are there forms of empathy that support judgment and others that interfere with it? All these and no doubt other questions arise, and call out for answers, once we acknowledge (correctly) that judging isn't simply an exercise in objective reasoning.

Saturday, July 14, 2012

Affordable Care Act Part IV: When, if ever, does offering a state money amount to coercion?


After dealing with the commerce clause and the tax power, the Supreme Court in the Affordable Care Act case (available here) turned to the spending clause. (There can't be many cases that have addressed so many of the central federal powers under the Constitution.)

The text of the Constitution tells us that Congress can tax and spend for the “general welfare,” Art. I, § 8, cl. 1. Does that mean Congress can tax and spend on matters that it could not otherwise reach under the rest of its constitutional powers? The answer, the Supreme Court decided in United States v. Butler, 297 U.S. 1, 66 (1936), is yes. So Congress can raise money, and spend it, even on matters that otherwise would be the concern of the states rather than the national government. Moreover, as a general matter Congress can choose what it will spend on; that is, it can attach conditions to its spending, and if it proposes to provide money to states, it can require them to abide by those conditions.

But a year after Butler the Supreme Court suggested a limit on this authority, when it said, in Steward Machine Co. v. Davis, 301 U.S. 548, 590 (1937), that “[n]othing in the case suggests the exertion of a power akin to undue influence, if we assume that such a concept can ever be applied with fitness to the relations between state and nation.” That language is quite a bit short of a firm statement of a constitutional rule, and evidently no case until the health care decision ever found such coercion. Nevertheless, seven justices do find it here. That includes two of the court's liberals, Justices Breyer and Kagan, and their votes may have caused liberal observers a measure of the same disappointment conservatives have vitriolically expressed about Chief Justice Roberts.

What was the coercive aspect of the law? The statute provided for a dramatic expansion of Medicaid, which would now cover everyone under the age of 65 with an income up to 133 % of the federal poverty line. (Currently, Chief Justice Roberts writes, Medicaid covers “only certain discrete categories of needy individuals – pregnant women, children, needy families, the blind, the elderly, and the disabled…. There is no mandatory coverage for most childless adults, and the States typically do not offer any such coverage.” Moreover, states’ definitions of which families are “needy” typically draw the eligibility line well below the federal poverty level. Roberts at 45.) Medicaid is a program largely funded by the federal government, but operated by the states, and states can decline to have a Medicaid program within their borders. Arizona didn’t join the program till 16 years after federal law created it (opinion of Justice Ginsburg, at 59 n.26). States could also decline to take part in the expansion of Medicaid under the ACA, but if they did so then the statute authorized (though it didn't require) the Secretary of Health and Human Services to withhold from the state not only the new federal money that would have paid for the expansion but also the rest of the state's federal Medicaid funds. 42 U.S.C. § 1396c.

That's a big stick. But is it a "coercive" one?

One way to answer that is to consider whether Congress believed any states would choose not to participate in the Medicaid expansion. The answer seems to be no; state participation is an integral part of the ACA's effort to assure near-universal health coverage. But does this mean the statute is coercive or that it is attractive? After all, the ACA funds 100 % of all expansion costs through 2016, and after that “gradually decrease[] to a minimum of 90 percent.” (Roberts at 46.) What state concerned to support its people's health would want to resist such a sweet offer?

But it must be said (as the joint dissenters do, at 45) that Congress didn't just make an offer. It also added a penalty for rejecting the offer -- namely the risk of losing all current Medicaid funds, those already being disbursed by the state in existing health care arrangements. Moreover, existing federal Medicaid funds are a major part of many states' total budgets: between 10 and 15 % of the average state’s entire budget, according to Roberts (at 51); between 16 and 22 % of total state expenditures, according to the joint dissenters (at 39 & n.14). Loss of this money would be extremely painful.

But suppose a state said "we want to run an industrial development fund with our Medicaid money, and we're going to stop using those funds for health purposes." I don't think anyone would contend that the state was entitled to take the federal money and run. It is entirely legitimate, as a general matter, for Congress to say "we will spend only for X, not for Y." And that's true even though it means that the only way to get the money is for a state to use it on the programs Congress specifies. Even the joint dissenters (who are part of the majority in finding a violation of Congress’ spending clause powers) observe that “[w]hen Congress makes grants to the States, it customarily attaches conditions, and this Court has long held that the Constitution generally permits Congress to do this.” (Joint dissent at 31.)

What this points to is the proposition that what makes a financial penalty coercive is not its size per se, but its fairness. With this idea perhaps in mind (though I think not put in these terms), the justices debate whether the Medicaid expansion is or is not sufficiently akin to the current Medicaid program that the expansion, and the penalties for declining it, fall within the existing law's specific declaration that Congress may enact changes at any time (42 U.S.C. § 1304). The justices seem to agree that some changes, and penalties, are covered by this provision, but they disagree about whether the very large changes wrought by the ACA were (with a majority saying they weren't).

But whether the changes were sufficiently predictable is not the whole of a fairness analysis. Congress can always change its laws, whether or not it reminds us of that in advance. Here, as Justice Ginsburg says (at pages 38 & 51 of her opinion), in theory it could have repealed "old Medicaid" and passed a brand-new statute encompassing both old and new Medicaid, conditioning receipt of all Medicaid funds on compliance with the whole of the newly enacted law. Chief Justice Roberts responds that that would have been politically difficult (Roberts at 54 n.14); maybe so, but why does that matter -- either way -- to the measure of Congress' powers?

The justices finding a spending clause violation emphasize the idea that spending clause legislation is “in the nature of a contract” between the federal government and the states. (Roberts quotes this phrase from earlier precedents at 46; the joint dissenters use almost the same language at 33.) To my mind, however, this metaphor is quite imprecise. Congress may be setting the terms for contractual relations with the states, and it may (as cases have held) be essential that those terms be spelled out clearly. But Congress is also exercising its constitutional authority to tax and spend, and that authority should not be improperly undercut by the contract metaphor. Even with the aid of this metaphor, in any case, it remains a matter for debate just how much advance notice the states are fairly entitled to. In fact, Justice Ginsburg cites a Social Security case that invoked the same “right to repeal or amend” statutory provision that applied to Medicaid to say that “Congress put States on notice that the ‘Act created no contractual rights.’” (Ginsburg at 55, quoting Bowen v. Public Agencies Opposed to Social Security Entrapment, 477 U.S. 41, 51-52 (1986).)

I think it is not possible to say what is unfair pressure without some baseline judgment about the respective roles of the federal and state governments. (This is an application of the insight of scholars considering the general concept of "coercion.") As Ginsburg says, the conservative joint dissenters (who are 4 of the 7 justices making up the majority on this point) at times seem to imply that a federal spending program is more likely to be coercive the larger it is: “On this logic, any federal spending program, sufficiently large and well-funded, would be unconstitutional.” (Ginsburg at 57 n.24.) This idea isn't absurd -- since state taxpayers fund the federal program, for a state to decline its share of the federal funds is a painful loss, more painful with each dollar. But it is also, from the national government's perspective, perverse – the more vigorously the government uses its spending power to achieve important purposes, the more it may run into constitutional trouble.

Meanwhile, it seems quite possible that for Justice Ginsburg essentially any spending amounts and conditions would be permissible so long as they aim at a legitimate governmental purpose and do not violate individuals' constitutional rights. She declares (at 59) that “[t]he coercion inquiry, therefore, appears to involve political judgments that defy judicial calculation.” If that is right, then the coercion test is a matter for politicians in Congress and the White House, and not the business of the courts. At one point, some decades ago, the Supreme Court did take the view that the federal system could be relied upon to protect the states – since all federal officials come from the states – but that is no longer the law.

Between these two possible extremes, Chief Justice Roberts, joined by Justices Breyer and Kagan, seems to be looking for a common sense understanding of coercion – though the “gun to the head” rhetoric (at 51) obscures this point. The amount of money matters; the degree of advance warning matters; the degree of states' dependence on the status quo (here, the existing Medicaid programs and their funding) matters. Perhaps the essence of their position is that states are entitled to a meaningful choice -- a standard that is a long ways from the idea that states might be entitled to an "unfettered" choice, but also quite a ways from the idea that Congress is entitled to unfettered discretion in the conditions it attaches to its money. As Roberts puts it, at 49:

In the typical case we look to the States to defend their prerogatives by adopting “the simple expedient of not yielding” to federal blandishments when they do not want to embrace the federal policies as their own…. The States are separate and independent sovereigns. Sometimes they have to act like it.

In fact, even the joint dissenters speak in these terms, saying that the test of coercion is whether “States really have no choice” (joint dissent at 35), and affirming that “courts should not conclude that legislation is unconstitutional on this ground unless the coercive nature of an offer is unmistakably clear” (at 38) – though there is room in such language for quite a spectrum of concrete results in future cases.

In all of this, we are a long ways from the nation of our past. Chief Justice Roberts calls the states “separate and independent sovereigns,” but the “sovereignty” of the 13 original states, in 1776 when we declared independence or in 1787 when the draft constitution was put before the states for ratification, has little connection to our world. Still, in that world there is room for debate about just how preeminent the national government should be, and just how independent the states should be. The ACA decision seems to strengthen the hand of the states somewhat. I'm not sure that's the best thing to do, but I'm not unhappy with this aspect of the case -- which strikes me as a reasonable approach to a hard constitutional issue.