Monday, February 28, 2011

The Vaccine Injury Act case -- a problem for textualists

On February 22, 2011, by a vote of 6-2 in the case of Bruesewitz v. Wyeth LLC, the Supreme Court decided that the National Childhood Vaccine Injury Act of 1986 absolutely bars lawsuits based on allegedly defective vaccine design, as long as the vaccine was manufactured to its own specifications and came with proper directions and warnings about any risks. This result may be a good one. The potential litigation -- in particular, the many pending cases about a possible relation between the DTP (diphtheria, tetanus & pertussis) vaccine and childhood autism -- might well be so burdensome that private manufacturers would simply abandon the making of badly needed vaccines. But the various opinions in the Bruesewitz case leave me with the impression that whether or not this result is a good one, it wasn't the one Congress intended. The need for legislation does not enact it, as Justice Frankfurter once remarked.

Here, in any event, I want to put to one side the question of Congress' intentions (as manifested in the legislative history, whose various elements the justices scrutinize), in order to focus just on the text of the statutory provision at issue. A textualist must come to grips with the text. To be sure, a textualist can rightly consider other parts of a statute (or even of other laws) in order to determine the meaning of the particular bit of text at issue in a case, and I'm going to leave to one side as well the question of whether other parts of the Vaccine Injury Act weighed in the Court's favor. It seems to me as a general proposition, however, that a textualist must find something very strong in other parts of a statute or other laws to justify avoiding the apparent meaning of the words directly at issue, and I doubt that there was anything strong enough to justify the extent of the avoidance in this case.

Let's look at the statute. As Justice Sotomayor says in dissent, the act asserts as a general rule that state law -- this would be the body of law governing liability for defective products -- governs vaccine cases. (42 U.S.C. 300aa-22). It goes on to state exceptions, including this one, 42 U.S.C. 300aa-22(b)(1):

"No vaccine manufacturer shall be liable in a civil action for damages arising from a vaccine-related injury or death associated with the administration of a vaccine after October 1, 1988, if the injury or death resulted from side effects that were unavoidable even though the vaccine was properly prepared and was accompanied by proper directions and warnings."

What's wrong with the majority's argument that this statute bars all lawsuits for vaccine design defects, as long as the vaccine was "properly prepared" and came with "proper directions and warnings"? Here are several answers (most or all covered by Justice Sotomayor in her very effective dissent, which to my mind persuasively refutes the majority opinion by Justice Scalia, himself a master of argument about statutory interpretation):

(1) The statute never says anything explicitly about design defects. If Congress had wanted to preclude all litigation based on design defects, it could have said "There is no liability for design defects." Indeed, the statute never even mentions design defects in so many words. But I believe it does address them: evidently vaccine litigation dealt with three issues -- warnings, preparation and design -- and the first two are explicitly referred to, suggesting that the reference to side effects that were "unavoidable" is meant to cover side effects from the third source, design. And while it is true that the statute fails to say "There is no liability for design defects," it also fails to say, "Liability may be found for design defects." So the argument from lack of explicitness is not conclusive, though I think the statute does come closer to explicitly affirming liability for design defects that aren't "unavoidable" than it does to explicitly denying such liability.

(2) The "if" clause (as Justice Sotomayor labels it): The statute says that manufacturers aren't liable "if the injury or death resulted from side effects that were unavoidable...." The use of the word "if" suggests that "if not" is also conceivable -- in other words, that some side effects are unavoidable but some are avoidable. But for the majority, as long as proper manufacture and warning are taken care of, there is no possibility of liability for design defect. In other words, there is no such thing as an "avoidable" side effect from a vaccine that is properly prepared and accompanied by proper warnings. This is an odd idea at best -- although evidently some courts in the years leading up to this statute had taken essentially this view -- and it's an idea that is very poorly conveyed by a clause beginning with "if."

(3) The meaning of the word "unavoidable": The majority maintains that if the design of a vaccine results in the risk of side effects, those side effects are unavoidable. But what if they could have been avoided by a different design? In that case, they just aren't "unavoidable" and so, for the statute to mean that all such side effects count as unavoidable, the word "unavoidable" has to have taken on some very odd definition, such as "avoidable, but not by this design" -- and textualists seek ordinary usage as a general rule, not idiosyncratic definitions, especially ones that are never spelled out.

To be sure, there's an exception to this focus on ordinary usage, for words that have become terms of art. There is in fact considerable discussion in the case of whether "unavoidable" was a term of art, but no one asserts that if it was a term of art, the consensus meaning it had acquired was "avoidable, but not by this design."

Somewhat remarkably, the majority claims that if "unavoidable" is read to mean "not avoidable by a different design" then "the word 'unavoidable' would do no work" (majority opinion at 7), on the ground that "[a] side effect of a vaccine could always have been avoidable by use of a differently designed vaccine not containing the harmful element." But Justice Sotomayor responds that "the harmful element" might be essential to the vaccine's efficacy, and that it's precisely in such cases that the side effects deserve to be called "unavoidable" -- whereas in other cases the side effects might have been avoided by better design, and would then be "avoidable." (Dissent at 14-15.)

(4) As Justice Sotomayor emphasizes, and the majority concedes, the net effect of the majority's reading is that 13 words of the statute -- set off in brackets below -- turn out to have no meaning and to be completely superfluous:

"No vaccine manufacturer shall be liable in a civil action for damages arising from a vaccine-related injury or death associated with the administration of a vaccine after October 1, 1988, if [the injury or death resulted from side effects that were unavoidable even though] the vaccine was properly prepared and was accompanied by proper directions and warnings."

This is just a huge problem for a textualist. Though I don't claim to have read every case where such issues have arisen, I've never encountered a case where a textualist such as Justice Scalia accepted so extensive a violation of the well-known interpretive principle that statutes should be read so that every word has meaning. The majority's answer is that this violation isn't determinative, because on the dissent's reading another set of 15 words becomes superfluous -- the "even though" clause of the statute, set off in brackets here:

"No vaccine manufacturer shall be liable in a civil action for damages arising from a vaccine-related injury or death associated with the administration of a vaccine after October 1, 1988, if the injury or death resulted from side effects that were unavoidable even though the vaccine was properly prepared and was accompanied by proper directions and warnings."

Scalia's argument is that for the dissent there's only one question -- were the side effects "unavoidable"? (Majority opinion at 12.) But Sotomayor responds that the "even though" clause does have a function: it establishes that "unavoidable" side effects resulting from design defects are exempt from liability only if the vaccine was properly prepared and came with proper directions and warnings. If a manufacturer fails to prepare the vaccine properly or provide the directions and warnings that should accompany it, then even if the side effects of its design really are unavoidable, the manufacturer remains liable for them. It seems to me that this reading gives content to the "even though" clause, and therefore that the interpretive rule against finding portions of a statute superfluous quite clearly favors Sotomayor's reading.

(5) But the "even though" clause may also provide the strongest textual argument in favor of the majority's position. To say that "side effects ... were unavoidable even though the vaccine was properly prepared and was accompanied by proper directions and warnings" seems to say that what might have made them avoidable was proper preparation and/or proper directions and warnings. The side effects are unavoidable even though -- despite, as Scalia says -- proper preparation, directions and warnings. Sotomayor's reading, on the other hand, seems to make "even though" mean "provided that" -- manufacturers aren't liable for unavoidable side effects provided that the vaccines were properly prepared and came with proper directions and warnings. Scalia gives this point a grammatical tag, telling us that the "even though" clause is "called a concessive subordinate clause by grammarians." (Opinion at 11.)

I think Scalia is right that Sotomayor's reading isn't "concessive." But the "even though" clause is awkward for all sides in this debate. After all, what makes the side effects unavoidable, on the majority's reading of the statute? The answer might be that we know these side effects are unavoidable "because" neither better preparation nor better directions and warnings could have avoided them. Otherwise their unavoidability is altogether undefined. "Because" is usually quite a ways from "even though," as Sotomayor argues (dissent at 17 n.14), but here, oddly, the meaning of these words seems to coincide. I think that on this score Scalia's reading is the more natural. Yet it's worth noting, as Sotomayor does, that Scalia's reading -- as discussed above -- actually means that the words "even though," along with the 11 words preceding them, lose all meaning, so that his emphasis on fidelity to the import of a concessive subordinate clause seems somewhat unsatisfactory. I think the main lesson to be drawn is that the statute is, truly, badly drafted. On balance, I also think Sotomayor's reading does a better job of giving meaning to as many words of the text as possible.

In short, I don't think the majority's reading of these words is easy to sustain on textualist grounds -- yet Justice Scalia is committed to textualist interpretation. Justice Sotomayor says that the majority's decision is "policy-driven," though she does so only in a footnote almost at the end of her opinion (dissent at 27 n.25). If I am right about the relative weakness of the textualist arguments in favor of the majority's position, that does suggest that something else -- policy -- drove the Court's thinking. It's possible to defend policy-driven statutory interpretation -- but not on textualist grounds. Indeed, textualists have been outspoken in objecting to other methods of interpretation, in which judges read statutory language in light of other evidence of legislators' specific intentions or broad purposes, as giving judges too much room to enact their own preferences into law. All of which makes Bruesewitz a problem for textualists.

Saturday, February 26, 2011

Al Qaeda and the rise of freedom in the Middle East

Among Muammar Qaddafi's bizarre comments as his fall from power in Libya approaches, one of the most interesting was his accusation that the rebels were in the thrall of Al Qaeda. His notion was that Al Qaeda had imposed its will by enticing young people to take hallucinogenic drugs, but -- that bizarre idea aside -- Qaddafi's basic point was not crazy: Al Qaeda has indeed opposed the autocratic governments of the Middle East. Not because Al Qaeda opposes autocracy as such -- it has no quarrel with autocracy in the form of theocracy -- but because it sees these governments as tools of the West and obstacles to Islamist change.

So is the fall of several authoritarian Mideast regimes good for Al Qaeda? That is one possibility. A commentator on the news today (Saturday, February 26) mentioned that one of the Libyan tribes that has now turned against Qaddafi practices a fundamentalist form of Islam and has declared Islamic rule in its part of Libya. More generally, it's clearly possible that fundamentalists will prove the best organized and most determined citizens in the countries that have thrown off their previous rulers, and will install themselves in place of the old, more pro-Western autocrats.

But something else is possible too. It's often been said, and I think correctly, that Franklin D. Roosevelt helped save American capitalism -- even though many capitalists hated him, and even though he sharply attacked them in turn. Precisely because FDR's government paid attention to the needs and the voices of the American people, he was able to generate reforms that contributed to reestablishing a workable social contract in this country.

Something of the same thing may turn out to be true in at least some countries of the Middle East. (It's a very diverse place, so I don't want to generalize too much -- and even so I'm speculating!) It may be that the best way to stem the tide of Al Qaeda is for governments to come to power that listen to their people. If so, the fall of the governments that we relied on may in the end turn out to be not Al Qaeda's victory but the source of its eventual downfall.

Monday, February 21, 2011

The material witness statute and the rule of law

On March 2, 2011 the Supreme Court will hear a case brought by an American citizen, Abdullah al-Kidd, who maintains that the government used the "material witness" statute to hold him without trial. (See Adam Liptak, Supreme Court to Hear Material Witness Case, N.Y. Times, Feb. 20, 2011). As Liptak explains, Kidd's argument is "that policies put in place by Mr. Ashcroft [George W. Bush's first Attorney General] twisted the federal material witness law -- which allows the government to arrest people with knowledge of others' crimes to make sure they are available to testify -- into a preventive detention measure of the sort used abroad to hold and investigate citizens who are themselves suspected of terrorism."

There are indeed other countries that explicitly authorize preventive detention. The practice has a grim history -- apartheid South Africa's slide into a security state featured a number of such laws, for example -- but it also clearly has adherents. This case, however, poses a special rule-of-law question: if the material witness statute really was written to authorize holding only those known to be actual witnesses, rather than to authorize holding people for investigation into whether they might turn out to be witnesses, did the fears of terrorism after 9/11 make it legitimate to turn this law to uses which, on a fair reading of its terms, just weren't authorized?

It's possible, of course, that the statute doesn't spell out its intended application with much clarity, or that its intended uses weren't as narrow as I've just suggested. No doubt these points will be argued in full. But suppose it turns out that the issue is just as I've described: can a statute be twisted away from its original meaning because of the danger of terrorism?

It is, of course, possible to say that the answer to that question is "yes." It's especially possible to say so in a true, unmistakable emergency, when there simply isn't time to use the processes of law. Since I'm writing on Presidents' Day, it's particularly appropriate to remember that Abraham Lincoln apparently took this view when he claimed for himself the power to suspend the writ of habeas corpus. The Supreme Court never ruled on whether Lincoln actually had the constitutional authority he claimed. But after the Civil War, in Ex parte Milligan, 71 U.S. (4 Wall.) 2, 109 (1866), the Court emphatically insisted on the limits on governmental power even during wartime, in a decision that candidly acknowledged that a wartime judgment might have been different:

"During the late wicked Rebellion, the temper of the times did not allow that calmness in deliberation and discussion so necessary to a correct conclusion of a purely judicial question. Then, considerations of safety were mingled with the exercise of power; and feelings and interests prevailed which are happily terminated. Now that the public safety is assured, this question, as well as all others, can be discussed and decided without passion or the admixture of any element not required to form a legal judgment."

We are not yet through with the war against Al Qaeda. But we have been at it long enough to know how corrosive the logic of national security can be. Moreover, Mr. al-Kidd was not the victim of a panic-stricken move a few days after September 11, 2001. He was arrested in March 2003. We had had time to move somewhat beyond panic. More important, as a matter of legal and constitutional reasoning, we had had time for Congress to legislate to provide the Administration with any new powers that it wanted. If it didn't seek, or couldn't get, a preventive detention law applicable to U.S. citizens, then there was no such law. And if we are to continue fighting this war, while maintaining our liberties, then we need to be able to rely on the bedrock rule that if there is no such law, then what the government did was without law -- and illegal.

In one respect, however, the issue is more complicated. It could be argued -- and has been -- that the President's duty to protect the nation authorized him to take emergency steps that went beyond otherwise applicable law. Once this argument is accepted, it is hard to see what outer limits there are on it, but it is possible to make the argument nonetheless. What may have happened to Mr. al-Kidd, however, is something else: the President did not assert some special emergency authority to act beyond law, but rather, through Attorney General Ashcroft, twisted a law that did exist and purported to act under it. If there is some sort of residual emergency power that Presidents can wield, our liberty is certainly at risk. But it is less at risk if the President must call that power by its name -- rather than hiding its exercise under the forms of ordinary law. It will be up to the Supreme Court to defend the boundaries of the ordinary law, even if the question of extraordinary law still waits for debate some other day.

Saturday, February 12, 2011

Freedom in Egypt

I don't know whether the overthrow of Mubarak will lead to democracy or to some new form of tyranny. But I do know that this moment, right now, is a flowering of human liberty. It's a beautiful thing to see.

In the short term, it's certainly good. In the medium term, the consequences are far from clear. Perhaps that's why the US aligned itself with so many dictators in the Arab world -- there was no way to see a path that led dependably out of tyranny towards stability and freedom. That remains true. But to put the same truth in different words, it seems clear now that there was no way to move toward freedom except by sweeping away the old order.

So now we are in a state of uncertainty, but one graced with the beauty of an act of self-liberation. There's a moment like this in the movie Pleasantville, a story about the irresistible wish for freedom and about its costs. What will happen next, one of the characters asks another, after their life together has been turned upside down, and she answers that she doesn't know.

What will happen to us? We just don't know. We never know. But if we are to move forward towards an ideal of free people living justly together, we have to move. And now we -- that is, the people of Egypt -- have.

Monday, February 7, 2011

While feeling ice crack under my feet

The other day, in the midst of this snowy winter, I found myself walking over ice on my back porch. The ice was cracking under me, and I could feel and hear it with each step. I realized that the sound and the sensation were familiar, and that I had liked them ever since I was a child. And it occurred to me that while ice would crack even if no one was there to witness it, only living beings could find it beautiful.

What would the universe be without us to observe it and admire it? Life is a gift, we often say, and this is part of what is given -- the chance to see all the beauty and wonder of the world. Perhaps in some way our admiring is our own gift in return -- we look on the world and praise it.

Did Someone feel that the universe should be a place whose beauty would not be wasted? I'm inclined to think so. How to explain the suffering of innocents in this same world, I don't know -- perhaps the only universe that could be created was one in which joy and sorrow were always intermingled, and perhaps Someone looks at our suffering with compassion and hopes we will find ways to ease it.

Perhaps not. It may all be accident. But what a lucky accident indeed, to have the gift, or opportunity, of life.

Saturday, February 5, 2011

On why we don't know much

One of many reactions to Amy Chua's account of "Chinese parenting" was Elizabeth Kolbert's observation, in The New Yorker, that American young people test far behind their peers in other countries but stand out on at least one index: self-esteem. Much as one can (rightly) deplore a demand for excellence that falls into cruelty, calling for achievement by one's kids -- and kids in general -- is good rather than bad.

Is it true that Americans no longer call for achievement by their kids, and that that is the reason our kids seem not to be learning very much? Well, maybe. But these are the same Americans who, it's been found, work quite long hours every year. (Wikipedia presents a table of Organization for Economic Co-operation and Development (OECD) data from 2002 showing the average annual work year in the US at 1777 hours. Only six countries in the table had longer work years, none of them in Western Europe, and workers in a number of West European countries, including France and Germany, annually worked 400 hours less -- as if they had all taken 10-week vacations that their US counterparts didn't get.) We're not, as far as I can tell, a lazy nation. So why would we be indifferent to how much our kids learn?
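For what it's worth, here is a minimal sketch of the back-of-the-envelope arithmetic behind the "10-week vacation" comparison, assuming a 40-hour full-time workweek (that workweek figure is my assumption, not something reported in the OECD table):

```python
# Rough check of the "10-week vacation" comparison above.
# The 40-hour workweek is an assumption; the OECD table reports only annual hours.

us_hours_per_year = 1777          # US figure cited above from the 2002 OECD table
gap_hours = 400                   # approximate US-vs-France/Germany gap cited above
hours_per_week = 40               # assumed full-time workweek

european_hours = us_hours_per_year - gap_hours
gap_in_weeks = gap_hours / hours_per_week

print(f"Roughly {european_hours} hours per year in France or Germany,")
print(f"a gap of about {gap_in_weeks:.0f} full-time weeks -- i.e., a 10-week 'vacation'.")
```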

In fact, I don't think we are indifferent to how much our kids learn. I think many parents, teachers, and kids -- from all sorts of social backgrounds -- are deeply committed to academic excellence. But I do suspect that as a nation we may have become somewhat ambivalent about academic achievement -- even that many of us who cannot stop caring about it and guiding our children to care about it are, at the same time, uneasy about what we are accomplishing. Why?

No doubt there are many reasons, but here is one possibility: that we are ambivalent because we tend to associate high academic achievement with claims to privilege. We have very good reason to want to dismantle the embedded privileges of particular groups within our society. But it is entirely predictable that those who have privilege will tend to be high academic achievers -- since they enjoy the benefit of all the forms of capital, cash and cultural, that fuel high achievement in school. Obviously many unprivileged people achieve marvellously too; I don't mean to deny that in the least, but rather to try to discern why our country seems distracted from fostering high achievement more widely -- and my thought is that it is tempting to discount high achievement, because it appears as a corollary of privilege.

If we say often enough that test scores are not a mark of true talent (and we do say this a lot, and what's more there's a lot of truth to it), perhaps eventually we communicate to the test-takers that their test scores don't matter so much, because those scores don't measure their true talent. And perhaps we also tend to communicate to teachers that fostering high test scores isn't a central objective, because, again, those scores don't measure true talent (or, to extend the idea, true learning). Then we set about to find what might better measure or elicit true talent, and as admirable and valuable as that search is, it may divert attention from more traditional educational steps which likely have considerable power in imparting the knowledge that our young people need to acquire in order to be educated.

Here's another data point. Richard Arum and Josipa Roksa, authors of Academically Adrift: Limited Learning on College Campuses (2010), report in The Chronicle of Higher Education -- based on testing and retesting of students using a standardized test called the Collegiate Learning Assessment -- that over a third of US college students "did not show any significant improvement over four years" in the "higher-order cognitive skills that it is widely assumed college students should master."

If their finding is correct -- and I'm sure there is much still to be debated on this score -- then it is remarkable on two grounds. First, it forces one to ask what all those students, and their professors, were doing in their years together. But, second, and perhaps even more pointedly, it raises the question of what the accreditation agencies that approved all these schools were doing. My impression is that in universities over the past decades elaborate effort has gone into articulating teaching objectives and assessing their achievement, and as I understand it no college or university today can get accredited without having in place such an apparatus for demonstrating the success of the educational process. So it appears that it is quite possible to have objectives for higher education and systematic assessment of their attainment -- institutional features that incidentally take plenty of time and resources to create -- and ... not educate.

It is possible to argue that the assessment apparatus undercuts true education. This is roughly the case widely made against elaborate objective testing (and I think this case has merit). Yet it also seems possible that actually the assessment apparatus winds up playing a different role -- that it functions to create the illusion of education, while masking the fruitless churning that's actually going on. This function would fit comfortably with Arum and Roksa's sense that despite their findings, higher education is not in crisis, because "the institutional actors implicated in the system are receiving the organizational outcomes that they seek, and therefore neither the institutions themselves nor the system as a whole is in any way challenged or threatened." Arum and Roksa note that there is, apparently, a long tradition of college not educating very much. But perhaps something has also changed. I wonder whether our assessment systems have failed to catch this lack of education in part because we created our methods of assessing our colleges at the same time that we were growing ambivalent about simple "achievement" as a central function of schools.

I think that there is a sense in which all our children truly are above average. Human beings are marvelously talented, and a world that allows those talents to flourish will be one with many virtues. If "average" means "just okay," we may all have the potential to be better than that. But not if we don't learn what's needed to make our way in the world.