Daniel Kahneman, the famous cognitive theorist, has an article in the Oct. 23, 2011 New York Times Magazine called (in the online version) "Don't Blink! The Hazards of Confidence." His point, illustrated with striking stories from soldiering and high finance, is that people often work very hard at a task they believe will lead to some result, even when they encounter evidence that the task has no connection to the result they desire. So, for instance, Kahneman himself, with colleagues, evaluated military officer candidates' performance in a one-hour group leadership exercise as a way of telling whether the candidates would become good officers -- and he and his colleagues continued to make confident predictions based on the exercise even after obtaining data showing that their predictive ability was almost zero. They suffered from what he calls an "illusion of validity."
He doesn't argue that all expertise is equally self-deluding. Instead, he writes that "[t]rue intuitive expertise is learned from prolonged experience with good feedback on mistakes." In addition, the environment in which the judgment is made must be "sufficiently regular" to enable predictions from the available evidence. Some settings -- apparently the stock market is one -- are simply too irregular for prediction. Presumably the best course in such contexts is to make no predictions at all, for example by investing in index funds, which ensure that you do just as well as the market as a whole.
Is it possible to predict who will be a good, or a successful, lawyer? (I realize those two adjectives have meanings that overlap but aren't the same.) Sometimes the answer is surely yes. A student who does very well at a very highly regarded law school has a strong chance of getting a job in a law office with high standards of practice and vigorous training, and he or she probably will go on to a successful career. But that example is a misleading one, because what makes the prediction possible may not be any direct function of the student's ability (reflected here only in his or her high grades and attendance at a highly regarded school) so much as it is the result of a set of social factors that mark out a relatively smooth path to success. And even here, if we asked which of our promising students would turn out to be the best, or the most successful, I think we would have a lot more difficulty answering.
I implied just now that grades reflect ability. But do law school exams measure future legal success? I know of no study that finds evidence of this, perhaps in part because it's so difficult even to formulate an objective and usable measure of success. (Nevertheless, as a colleague has pointed out to me, many legal employers have historically placed a great deal of weight on grades.)
Rather than contending that law school exams actually measure the chance of future success, it seems more plausible to say that they measure legal reasoning ability, which is, in turn, a component of success. But it isn't certain that exams measure legal reasoning ability very well: as has often been pointed out, it's not self-evident that the ability to rapidly deploy knowledge crammed just before an exam is a good measure of one's ability to perform all or most of the various reasoning tasks that practicing lawyers actually undertake. Doing well on law school exams is a pretty good predictor of doing well on the bar exam -- which is essentially a very big and long law school test -- but the bar exam's ability to measure reasoning ability in general, or in other contexts, is also questionable.
All of this does raise the possibility that law school grades, collectively, are like the officer candidate screening exercise Kahneman administered. Until we can measure success, we must accept the possibility that the grades we law professors give really don't correlate much with anything except each other. (Even that correlation isn't perfect, but I think it is probably true that over three years of law school a pattern of grades usually emerges.)
But I would not suggest giving up altogether and resorting solely to the law school equivalent of an index fund (would that be universal pass-fail grading?). That's partly because I think grades do roughly measure some aspect of reasoning ability, and I'm inclined to think that ability shown on exams does to some extent reflect ability in other reasoning contexts as well. (This may be my illusion of validity.) But I also think that grades probably measure -- imperfectly -- more than legal reasoning ability. To be specific, they measure success in law school: not success on a one-hour, artificial exercise (as in Kahneman's military screening) but success in the principal occupation of a student's life over a three-year period. Good grades are a measure of "successful-ness," which in turn is a reflection of a host of factors, including ability, determination, self-discipline and adaptability, all of which are probably relevant to the course of one's future life. It's often said that law school is a process of socialization, and "successful-ness" might be rephrased as the accomplishment of socialization.
So in the end law school grades may be a predictor of future success (and not just because so many advantages, such as Law Review membership, accrue to those who get good grades -- and in turn promote their chances of success elsewhere). Anyone hiring a young employee would like a measure of his or her successful-ness, and we have reason to think that law school grades may be a genuine measure of that. We also have ample reason to think that they're far from a perfect measure of this complex set of qualities, let alone of how the current student will grow and develop on these scores over time. That, I think, does counsel in favor of modesty of prediction: we may know something, but we don't know too much. It is tempting to say that we should still rely on whatever we do know, because that's better than nothing (and also may be a cheap way to carry out an otherwise very difficult task). But to borrow again the language of finance, perhaps the legal profession really should diversify its portfolios, and put a healthy chunk of its resources -- that is, its hiring predictions -- into considering criteria other than grades. Perhaps employers should even make some use of "index funds" -- in other words, should acknowledge that seeming differences in credentials may actually have no predictive value. Many employers may already do this. For those that don't, the next problem for law schools is how to persuade employers to consider, but not be seduced by, the grades we give.