Saturday, January 29, 2011

When will the 2001 Authorization for Use of Military Force expire (if ever)?

In "Did Congress approve America's longest war?," The Guardian, Jan. 27, 2011, Bruce Ackerman and Oona Hathaway ask whether the Authorization for Use of Military Force (AUMF) passed in 2001, which provided the legal basis for our war in Afghanistan, still authorizes the fighting we are engaged in. We have fought for a decade, and not just in Afghanistan, and it appears (as Ackerman and Hathaway note) that no more than 100 members of Al Qaeda -- our central target under the AUMF -- are actually still in Afghanistan. Yet we fight on and on. Ackerman and Hathaway ask, in essence, whether the war that Congress authorized is actually still continuing, or has morphed into a new war shaped solely by the President.

I am inclined to think, however, that what we are seeing is not a usurpation of power but a reflection of the fact that our Constitution simply does not tightly cabin the power to wage war. Even if we deny (and we should) that Presidents have authority to start wars on their own, the Supreme Court's decision in Bas v. Tingy in 1800 recognized that Congress can authorize the initiation of war by statute rather than by declaration of war. Congress certainly passed such a statute here: the AUMF. So the immediate question is, "How much war did Congress authorize?"

One way to answer that question is to parse the words of the AUMF itself. But I want to approach the point from a somewhat different angle. Let's say that Congress only expressly authorized a limited war, targeted against Al Qaeda and its Taliban allies in Afghanistan. We fought that war, forced the Taliban from power, and largely drove Al Qaeda out of Afghanistan; let's say also that at that point, whenever it was, that war ended. But the end of that war didn't mean peace; on the contrary, that war flowed without interruption into the next one, the direct result of the first, in which we are fighting to establish a government in Afghanistan stable enough to withstand the assault of the Taliban and other Afghan rebel groups, and thus stable enough to keep Al Qaeda from returning in force to Afghanistan.

So when we crossed from the first war to the second, did our actions cease to be authorized by Congress? If peace had broken out, then when a new threat arose we might say that dealing with that threat required a fresh authorization. Perhaps if Al Qaeda had regrouped not across the border in Pakistan, but somewhere altogether different -- say, Malaysia -- then again we might say that pursuing Al Qaeda in this new location required a separate decision by Congress. But peace didn't break out; fighting never stopped; and Al Qaeda and the Taliban found refuge nearby in Pakistan, and the Taliban, at least, have used that refuge to pursue the war back in Afghanistan. To put the question starkly, then, did Congress authorize fighting the war and dealing with the aftermath, or only fighting the war?

War is full of perils, and we've always known this. I'm inclined to think that as a general proposition an authorization for war is an authorization to deal with the immediate conflict and with its aftermath. It would be possible to limit this idea by saying that the authorization extends only to the reasonably foreseeable aftermath, not to any and all terrible things that may grow out of a conflict -- but the trouble with this idea is that if anything is clear about war, it is that war's consequences are hard to foresee. It's also arguable that everything that has happened in Afghanistan was reasonably foreseeable, even if we didn't foresee it. Congress certainly could write an authorization that was very strictly limited -- but it's not likely to do that, because it too knows that war is full of unpredictable dangers. (As a case in point, the Authorization for Use of Military Force that began our war in Afghanistan uses language that, whatever its exact breadth, certainly is not "very strictly limited.")

Then is there any limit at all, in conflicts where peace does not ever arrive and fighting spreads but does not leap to altogether separate arenas? I wouldn't want to say no, but it isn't easy to see legal rules that mark out these boundaries. What is easier to say is that there are political limits: if and when Congress decides matters are out of hand, the Constitution clearly gives it power to act on that conviction. Doing so isn't easy, since Congress cannot legislate over the President's opposition unless it musters a supermajority of its members to support what it is doing -- and few members of Congress are eager to undercut a President while our troops are exposed to danger. And that's my point: once a war is authorized, I think the constitutional system we have does not make it easy either to mark out the limits of that war or to end a war that the President believes must continue.

Wednesday, January 26, 2011

On students' misjudging how much they are learning

Perhaps the most counterintuitive result from the experiments described in Pam Belluck, "To Really Learn, Quit Studying and Take a Test," N.Y. Times, Jan. 20, 2011, was the gap between students' perception of their own learning and reality. Apparently the students who did closed-book recall essays were the least confident about how much they had learned, when in fact they had learned more than their fellow students who used other methods of learning.

Why? The article quotes a psychologist, Nate Kornell, who says that "The struggle helps you learn, but it makes you feel like you're not learning."

If this is what's going on, it should be encouraging to students who feel they are struggling. If that struggle is not too intense -- there must be levels of anxiety that are simply disabling -- it may be just what they need in order to learn.

At the same time, this finding offers at least some reason to believe that professors should not be misled by students' own responses to learning exercises. That students say they learn from a technique such as concept mapping may reflect only that this process is more comfortable and less unsettling for them.

But skepticism about students' evaluations of their learning experiences has its own pitfalls. A teacher faced with negative evaluations would like them to be wrong. That students misjudge their own learning (assuming that they do make this mistake in real life as well as in this experiment) doesn't necessarily mean they misjudge their teachers. We may all be better judges of others than of ourselves.

Monday, January 24, 2011

On the intellectual significance of "recall" and "memory"

The report of experiments demonstrating that taking a test on what you recall tends to make you actually recall more than other methods of learning do (Pam Belluck, "To Really Learn, Quit Studying and Take a Test," N.Y. Times, Jan. 20, 2011) notes that it's not yet known why this effect exists. Belluck quotes one psychologist, Robert Bjork, saying that "'when we use our memories by retrieving things, we change our access' to that information.... 'What we recall becomes more recallable in the future.'"

While there may be more going on, this explanation certainly makes sense. It intersects with what modern research (and human wisdom going back many years) demonstrates, namely that memory itself is shifting and malleable and, indeed, creative. This understanding of memory is, I believe, now widely held -- but if memory is like this, isn't recall (to use a different term for the particular form of memory involved in the process of study) probably also like this? If recall and memory in general are similar, then recall in study is not, at least not necessarily, a process of rote repetition or regurgitation.

Moreover, recall would be more than just a change in our access to information, of the sort Professor Bjork discusses. Recall would be an ongoing process of fitting the information in question with other information we already have or subsequently acquire. Recall, then, is a form of working with information -- and efforts to replace students' practice of recall with processes of working with ideas (such as concept mapping, one of the alternatives examined in these experiments) miss the point that recall itself is working with ideas.

There may be memory chores that have no other mental significance, though I'm not sure we can assume this without actually examining what those chores' impact on the mind may be. But even if there are drills so unenlightening as to be "mindless," the import of these experiments seems to be that tests that demand the exercise of memory can be quite deeply mindful.

Learning for -- and from -- the test

Pam Belluck's article, "To Really Learn, Quit Studying and Take a Test," N.Y. Times, Jan. 20, 2011, reports on a study asking what enables students to recall material best. In one experiment, the researchers compared four techniques: "One [group] did nothing more than read the text for five minutes. Another studied the passage in four consecutive five-minute sessions. A third group engaged in 'concept mapping,' in which, with the passage in front of them, they arranged information from the passage into a kind of diagram, writing details and ideas in hand-drawn bubbles and linking the bubbles in an organized way. The final group took a 'retrieval practice' test. Without the passage in front of them, they wrote what they remembered in a free-form essay for 10 minutes. Then they reread the passage and took another retrieval practice test." (paragraph breaks omitted).

It turned out that the students in the last group retained more information -- 50% more, evidently -- than those in the second and third groups (from the Times' report it seems the students in the first group, who read the passage once for five minutes, must have served as something like a no-recall control group). A second experiment, comparing just concept-mapping and testing, also found that the tested students "did much better."

These are very interesting experiments, but for now I want to make just one observation: What these experiments appear to show is that the way to increase recall is to practice recalling. Put that way, these experiments really shouldn't be surprising. If they are -- and apparently they are -- that's a measure of how confused our thinking about learning has become.

Tuesday, January 18, 2011

Can dogs be citizens?

Today's New York Times has a wonderful article (Nicholas Wade, "Sit. Stay. Parse. Good Girl!," N.Y. Times, Jan. 18, 2011, at D1) about a border collie named Chaser who has learned over 1000 words. Thanks to training for 4-5 hours every day, during which she learned one or two new words a day, she has now acquired what appears to be the largest vocabulary any dog has ever been shown to possess. What's most interesting is that her vocabulary includes verbs as well as nouns, and she can, apparently, tell the difference, as she can show by following commands that link one verb to one noun, another verb to another.

John W. Pilley, the psychologist who trained her, says that "[w]e are interested in teaching Chaser a receptive, rudimentary language." In this language, it seems quite conceivable that the verb "help" could be taught, along with nouns such as "man" or "woman" or "dog." So then it would be possible, presumably, for Chaser to understand the sentence, "Help the dog." And then it seems possible she could understand, or conceive, "Help the dog -- me."

She couldn't say those words, of course. But babies can't say words either, yet they evidently can be taught to communicate by sign language some months before they begin to be able to articulate words. Perhaps Chaser too could be taught a simple language of barks, or barks and signs -- the kinds of tools with which dogs already, and obviously, communicate with us all the time.

All of which leads to this: What is our response when a dog says to us, "Help me"?

Such a dog would not qualify as a citizen in the terms Bruce Ackerman defined some time ago in Social Justice in the Liberal State (1980). Ackerman argued that a citizen must be able to make a claim of right. (73) "Help me" is quite a ways from a claim of right. Ackerman entertains the possibility that dogs (his example is lions) might actually be asking us, "Why don't I get what I want instead of you?" (71), and there's some experimental evidence suggesting that monkeys have a sense of justice (or at least of being the victims of injustice) -- but still Chaser has quite a ways to go before she meets this standard of citizenship.

That standard, however, may not be the right one. To ask for our help, in words (or code substituting for words), is to enter human dialogue. It isn't so easy to see how we can refuse to answer just because the dialogue is at a very simple level. Certainly we don't refuse with our own children. Of course, they are human, dogs aren't. But if we think that what defines our fellow beings is not their species but their capacities, then we may be hard-pressed to deny that a dog with the capacity to talk is, in fact, a person. And as a purely emotional matter, lots of people will be very strongly inclined to heed an appeal from a dog. A person with claims that we believe we should heed might be ... some sort of citizen.

Saturday, January 15, 2011

Connie Willis' "Blackout" and "All Clear" -- and the meaning of our lives

Connie Willis' two-volume book, Blackout and All Clear, takes us via time travel back to the Second World War and the Nazi bombings of London. It's a long book -- over 1100 pages, published in 2010 as two volumes that really are not two books but separately published halves of one. But it rewards reading, for Willis' vivid portrait of a remarkable part of our actual past, and for the way she brings her weave of plot lines to its transporting conclusion.

The book is also a work of faith. Though Willis does not assert that God's hand is at work, and in fact the vehicle for the plot is time travel and the workings of the time "continuum," nevertheless the essence of the book, I feel, is that God is in the machine (here, continuum).

Yet the practical moral of the book is not that all will be well, but rather that acts of loving kindness are worth undertaking. The second volume begins by quoting Churchill: "You will make all kinds of mistakes; but as long as you are generous and true, and also fierce, you cannot hurt the world or even seriously distress her." It is worth doing the best we can, and each of us should join in the world's efforts with the hope of achieving good results. No one is an island, and no one is disqualified.

Perhaps it is surprising, perhaps it is proof that we are all part of one culture, but these lessons are very much like those David Brooks draws in an essay called "Social Animal: How the new sciences of human nature can help make sense of a life," in The New Yorker, Jan. 17, 2011. There Brooks tells an extended parable about the life we now lead, culminating in the words of a (possibly fictional) neuroscientist, who says that his studies of the brain led him to give up an image of himself "as a lone agent." Now he feels that "we inherit a great river of knowledge, a flow of patterns coming from many sources." When we become "immersed directly in that river," we flourish. "I've come to think," says the neuroscientist, that "[h]appiness isn't really produced by conscious accomplishments. Happiness is a measure of how thickly the unconscious parts of our minds are intertwined with other people and with activities. Happiness is determined by how much information and affection flows through us covertly every day and year."

Brooks' neuroscientist also mentions that for some people this experience comes "when they feel enveloped by God's love." For Willis, the experience of God's love doesn't seem to be one of absence or contemplation. Nor is it a feeling that even believers always enjoy. I think she would say, rather, that the times when we feel this divine love sustain us during the daily struggles of life, as we make our many mistakes.

Now is that feeling illusory? A conventional secular answer, certainly consistent with the views of Brooks' neuroscientist, is that connection is real, while envelopment by God's love is just a confused perception of that reality. Perhaps so. But it appears that the capacity to feel a "mystical sense of oneness" has a concrete foundation in our brain circuitry. (So Dr. Kevin Nelson, author of a recent book on "near-death experiences," explains in an interview on Salon.) Like the capacity to feel connection with particular people, the capacity to feel connection with some larger whole is part of our biological makeup.

That neither proves nor disproves the truth of these experiences, as Dr. Nelson rightly points out. Perhaps, again, the biological capacity for connection is innate, the particular transposition to religion a misperception. The turn to religion might be useful, in the same way that overconfidence seems to be useful in enabling people to move through life, but still objectively mistaken. Perhaps. But it is interesting that our brains are primed for experiences like this. We are in some ways very good at perceiving the world despite our physical limitations -- compare the feats of baseball players discerning the path of pitches that, as they approach the plate, are literally moving too fast for the eye to follow. It seems at least possible that our capacity for this kind of feeling reflects that there is indeed some greater whole to be perceived.

Wednesday, January 12, 2011

"The Truth Wears Off" and evidence of E.S.P.

Jonah Lehrer in The New Yorker (Dec. 13, 2010) carefully investigates reasons why "the truth wears off" as scientific findings, vivid at one moment, gradually cease to be so demonstrable. He looks particularly to "publication bias" and "selective reporting" and "sheer randomness" as explanations. All have force, and perhaps they account for most of the problem that Lehrer examines with the aid of Professor Jonathan Schooler.

But I'm not sure they fully account for the example of Dr. Rhine's extrasensory perception research, which the article offers as one of its striking instances of data decline. (These factors may not entirely explain Prof. Schooler's own data problems either, but I won't pursue that here.) It seems from Lehrer's account that Rhine's data collapsed -- that is, the undergraduate who appeared to have phenomenal E.S.P. powers stopped producing his feats of perception -- "just as he [Rhine] began to believe in the possibility of extrasensory perception." That strongly suggests that Rhine's earlier data weren't biased somehow by his own belief in E.S.P. Lehrer reports that Rhine prepared papers for publication that would have reported the undergraduate's remarkable achievements; it's not clear from the article whether or when those were published, but at any rate no publication bias seems to have prevented Rhine himself from discerning the deterioration of the data.

It remains possible, certainly, that randomness is the real culprit -- anything can happen, and it might just have been sheer randomness that this student got lucky, rather than psychic, for a time. But the problem with "randomness" is that it functions as a wild card, explaining only by declaring something inexplicable. What if, instead, the data pointed to something real and not random at all? For example, suppose that people do sometimes have psychic abilities, but not very often or very dependably. Perhaps psychic abilities burn out quickly -- so that what happened to Dr. Rhine was that he observed the actual tendency of human E.S.P. to burn out (and not just in one undergraduate; Lehrer reports that the same fall-off took place "in nearly every case" where a subject initially showed E.S.P. ability).

So perhaps most instances of the truth wearing off reflect various forms of scientific error or sheer, random mystery. But if the data are measuring a phenomenon that burns out, then the data reflect, and quite straightforwardly, not error but actual reality. It seems important to keep in mind the possibility that the data aren't wrong, while also being sensitive to the many reasons why they might be.
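
A small illustration may make the "sheer randomness" possibility concrete. This is my own toy sketch, not anything drawn from Lehrer's article or from Rhine's actual protocol; the number of subjects, the scoring threshold, and the simple 20%-chance guessing model are all made up for the example. If many people simply guess at a five-symbol card deck, a few will post impressive scores by luck alone, and when those apparent stars are retested their scores tend to drift back toward chance -- a pattern that looks very much like an ability "burning out."

```python
# Toy simulation (my own illustration, not Rhine's actual protocol):
# many subjects guess a 25-card, five-symbol deck purely at random; we pick
# out the apparent "stars" from the first run and retest them. The retest
# average falls back toward the chance rate of 5 hits per run.
import random

CARDS_PER_RUN = 25        # one run through a 25-card deck
CHANCE_P = 1 / 5          # five symbols, so a 20% hit rate by pure guessing
SUBJECTS = 1000           # hypothetical number of subjects
STAR_THRESHOLD = 9        # first-run score we treat as "impressive"

def run_score(rng):
    """Correct guesses in one run, each guess independent with 20% chance."""
    return sum(rng.random() < CHANCE_P for _ in range(CARDS_PER_RUN))

rng = random.Random(0)
first = [run_score(rng) for _ in range(SUBJECTS)]
stars = [i for i, score in enumerate(first) if score >= STAR_THRESHOLD]
retest = [run_score(rng) for _ in stars]

print(f"apparent stars: {len(stars)} of {SUBJECTS} subjects")
print(f"stars' first-run average: {sum(first[i] for i in stars) / len(stars):.1f} hits")
print(f"stars' retest average:    {sum(retest) / len(retest):.1f} hits (chance is 5.0)")
```

Of course, a simulation like this only shows that randomness can mimic burn-out; it cannot tell us whether Rhine's decline was of that kind or, as suggested above, the trace of something real that faded.
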
________________

Lehrer's article happened to precede by just a few days another moment of media attention to research into E.S.P., by Benedict Carey, "Journal's Article on ESP Is Expected to Prompt Outrage," New York Times, Jan. 5, 2011. Whether the article in The Journal of Personality and Social Psychology is well-founded or not I cannot say, though it's impossible not to like the idea that people can ever so slightly predict the future when it involves their viewing erotic photographs. But it is striking that at least one "longtime critic of ESP research," quoted in the Times article, felt that the publication of the study was "craziness, pure craziness. I can't believe a major journal is allowing this work in." This is, of course, a call for "publication bias." Personally, I think the psychology journal's decision to publish the piece along with a rebuttal is a much better response than refusing to publish, precisely because it opens the issues up to examination and debate in the marketplace of ideas. But I expect that in almost every field there are propositions that simply cannot be contradicted within the boundaries of the field itself. We are in favor of freedom, but not friends of error, and policing error always risks sliding into restricting freedom.

Saturday, January 1, 2011

On the absence of truly "foundational" lawyering skills

A question for 2011:

What are the foundational skills of law practice?

And a possible answer: There aren't any.

Of course, there are a lot of skills that competent lawyers need to possess and employ. Although one law job is by no means the same as the next, there are probably even some skills -- an understanding of how cases and statutes work, for example -- that almost all practicing lawyers make use of.

But I mean "foundational" in a different sense -- not what do you need to be a competent attorney, but what do you need in order to learn how to be a competent attorney. Are there some skills, in other words, which lawyers need to learn first, before they can learn other elements of the total repertoire they will ultimately need?

This is the question to which I think the answer is no. We can test the correctness of this arguably counterintuitive answer by asking ourselves whether the only way someone could begin his or her preparation for a legal career is by studying legal doctrine. American law schools in a sense are organized as if that were so -- hence the immersion in legal doctrine in the first year. But many of our students come to law school after years of experience in jobs such as paralegal positions. What these students have done, presumably, is to begin their "preparation" by engaging in circumscribed forms of the practice of law itself -- reviewing documents, perhaps, or organizing trial files, or conducting initial interviews of potential clients. Some paralegals no doubt also get initial training in the meaning and manipulation of legal doctrine, but for those who don't, the fact is that they were able to begin their acquisition of legal expertise with something other than the study of legal doctrine.

To say that no particular starting point is essential leaves many possibilities still to be explored. Perhaps some particular starting point is desirable even though not essential. Perhaps some set of skills, or some combination of knowledge and skills, needs to be assembled in the course of "preparation" -- though not necessarily right at the start of the process -- for a lawyer to develop successfully. These are issues worth close examination (look for further posts this year).

But for the reasons I've just expressed, I think it is a mistake, in considering how to structure legal education or the broader process of lawyers' "preparation," to start from the idea that there is some single foundational point that must be the rock on which everything else is later assembled.