Wednesday, July 27, 2011

A further look: Do clinics help law students develop professionalism? -- Part II

As I wrote in my previous post, Carole Silver, Amy Garver and Lindsay Watkins report in their wonderfully rich paper (available from a link on the Best Practices blog) that, among third-year law students surveyed with beta questions in the Law School Survey of Student Engagement (LSSSE), those who took clinics felt that law school contributed more to their growth as professionals than any other group did. But what do these students say they learned in clinics?

One way to answer that question is to look at the surveyed students' responses to another of the beta questions put to them:

In your experience at this law school, to what extent have you found the following settings effective for learning legal ethics? (Very much, quite a bit, some, very little, not applicable)

(a) Doctrinal classes

(b) Professional responsibility classes

(c) Clinics

(d) Paid legal work

(e) Externships or summer internships

In what follows, I interpret students' answers to this question as giving their judgments on how much, or how effectively, these five different "settings" contributed to their learning legal ethics, though the phrasing of the question is slightly different.

How valuable do students find clinics as settings for learning legal ethics?

Here is a first answer to this question, derived from the students’ answers to this beta survey inquiry: 51 percent of the responding students felt clinics contributed "very little" to their learning legal ethics. (11) That's startling. In fact, it is the highest negative score for any of the five settings. But what it turns out to mean is, in large part, that students who haven't taken clinics -- including in particular first-year students, who likely haven't yet been eligible to do so (since clinics are usually upper-year courses) -- don't recognize their value. 1L's rate clinics as a 1.46, about half-way between "very little" and "some." 3L's, in contrast, rate them at 2.63, a bit over half-way between "some" and "quite a bit." (13)

Interestingly, the higher 3L rating is not a result of a general impression shared by all 3L's. Those 3L's who've never taken a clinic still think poorly of them as settings for learning legal ethics (average scores of 1.51 or 1.67, the latter from students who didn't take a clinic but did have paid legal work experience). 3L's who did take clinics, on the other hand, score them much higher (average scores of 2.93 or 2.77, the latter from students who both took a clinic and had paid legal work experience). (15)

(A parenthetical issue of definition: The authors refer to students who "did a clinical experience." I take that to mean "took a clinic course," but the phrase could be read to encompass non-course experiences such as volunteer legal work. I also take it to refer, at least primarily, to live-client clinics, but "clinic" is a term with potentially broad definitions. At a couple of points (14, 21) the authors mention clinics using simulations, and these comments may imply that simulation courses fall in the "clinic" category here. “Externships or summer internships” are a separate “setting” category in this study, as reflected in the beta question quoted above.)

Let’s look a little more closely at those ratings of clinics from students who did take clinics. 3L's who took a clinic but didn't have a paid legal job gave clinics an average score of 2.93 as a setting for learning legal ethics, just below the "quite a bit" level. Their peers who also had a paid legal job gave clinics an average score of 2.77, a little further below "quite a bit." 2.93 and 2.77 are definitely good scores, but they aren't a 4, which would have meant that students saw clinics as contributing "very much" to their learning professional ethics. Nor are these the highest scores students gave to any setting, as we will see below.

Given the concentration on issues of ethics and values in many clinics, these scores are a bit surprising. What might account for the scores not being higher? Perhaps what's clearest is that 3L's do not see any law school setting, including clinics and jobs, as overall contributing "very much" to their learning legal ethics. The highest score any group of these students gave to any setting was in fact a 3.0 (“quite a bit”), the rating given to the professional responsibility course by students who had taken a clinic and had paid legal work. (15)

One possible explanation is that law students fail to realize how much they are learning. The reason need not be any fault on the students' part -- though one could of course speculate about students' yearning for "the answer," quick and clear -- but instead might simply reflect the incremental nature of learning and understanding. The authors emphasize a finding that students increasingly see professional ethics benefit in all settings -- except doctrinal courses -- as they go through law school, and cite this as "some evidence that law students learn to 'connect the dots' during their time in school with regard to understanding that the notion of 'legal ethics' transcends a variety of settings." (13) Perhaps this process isn't complete in the third year, and students will only recognize the full value of what they've learned years later.

Another possibility, of course, is that none of the settings law students experience does teach as much about professional ethics as we might like. Professional responsibility classes may be too dry and legalistic. Clinics may be too focused on the complexities of individual cases to give students a sense of broad understanding. Student jobs, as the authors point out (14), may be focused on legal research rather than on a "sufficiently big-picture exposure." We don't know.

What we do know is that 3L's who took clinics viewed their classroom professional responsibility courses as having as great an impact on their learning professional responsibility as clinics, or even slightly more. Specifically, 3L's who took clinics and didn't have paid legal work experience rated clinics and PR courses almost identically (2.94 for PR classes, 2.93 for clinics). Those who had both clinical and paid legal work experience rated the PR course at 2.98, clinic at 2.77, and their paid job at 2.89. (15) And 3L’s who took clinics didn’t even report that clinics were vastly more effective in teaching them about legal ethics than regular doctrinal courses (other than the professional responsibility course), which they rated at a 2.5. (15)

And so we come to the study's very intriguing result: 3L's who took clinics felt their law school experiences were more valuable in helping them grow as professionals than their peers did, but they did not report that clinics were the way they learned the most about ethics. How can this be? The authors are well aware of the problem; as they put it, “our findings suggest that clinical experience may enhance learning legal ethics, but more research is necessary to confirm the direct relationship.” (15) The next section of this post offers some hypotheses to explain these findings.

If clinics were not uniquely valuable for teaching legal ethics, why were they uniquely valuable in enhancing students’ appreciation of all settings’ value for this purpose?

One possible explanation is that these students understood ethics as a narrower field than the range of professionalism factors, as the authors speculate at 17. If this is so, then it might be that if they had been asked which of their experiences contributed most on these broader factors, their answer would have been clinics.

But another possibility is that the effect clinics had was not so much to teach students particular lessons as it was to alert them to the importance and value of learning. If, we might guess, what clinics did above all was to convince students that they had a huge amount to learn in order to undertake the work of a lawyer, then that realization might have caused those students to take every course in which they were enrolled more seriously. At that point, the students might have been right to value the professional responsibility course slightly more than clinics themselves for professional ethics -- probably few clinics undertake comprehensive reviews of the rules of ethics, and professional responsibility courses can be precisely designed to do just this. Once a student realizes that the subject-matter of professional responsibility is important, the PR class may be the place to learn most of it. As the authors say, "[g]enerally, these findings point to the importance of law school classes for effective learning about legal ethics, and also to the role of clinical legal education as a means for deepening the effectiveness of these lessons." (21) We should remember, though, that even the PR course rates at only "quite a bit" of value; providing students with multiple opportunities to learn this subject may be essential, given that no single setting seems to provide all of what students may be looking for.

It’s striking, too, to remember the point with which we began: Those who took clinics liked almost everything better; those who did not, in comparison, found almost everything less helpful. In other words, those who got engaged in the study of law realized that their schools were delivering a lot to them; those who held themselves apart missed what was going on around them. Most of the 3L students surveyed at the 38 schools where these beta questions were asked did take clinics, and many took clinics and worked in paid legal jobs (in Figure 8, at 20, 2134 3L's had taken clinics, including 1012 who had taken clinics and held paid legal jobs). The minority who did neither or whose only exposure to practice was in paid jobs (in this same figure, 431 had done neither, along with 348 who had jobs but no clinical experience) were, according to their own reports, losing a lot of what was on offer.

Was this energizing effect simply the result of contact with the world of practice? As already suggested, the answer seems to be "no." Both clinics and paid legal work expose students to the world of practice, but as the authors observe (21), students who were exposed only via paid legal work were not as energized as those whose only exposure was via clinics. Clinic-only 3L's, compared to job-only 3L's, found more value in their doctrinal courses and in their professional responsibility courses – as well as in clinics, unsurprisingly. Also unsurprisingly, job-only students saw more value in jobs than clinic-only students did, but the clinic-only students saw more value even in jobs than did 3L's who did neither a clinic nor paid legal work. (15)

Admittedly, these data do not tell us clearly which way the cause-effect sequence runs. Could it be that students who are energized tend to take clinics, rather than that students who take clinics tend to get energized? In the nature of things, no doubt both processes are at work, but it is worth considering which process is more dominant. If clinics are more the pathway than the impetus, after all, we should consider whether other pathways could also be devised; one possibility, for example, is the increased use of externships and summer internships. (The study reports limited, but tantalizing, data on externships and internships (13): for 3L’s as a whole (including those who had no clinic or paid legal work experience), externships and internships ranked as somewhat more valuable settings for learning legal ethics than either clinics or jobs. The data available now do not permit the authors to correlate students’ assessments of settings with whether or not they themselves had externship/internship experience, but the authors hope to add questions in the future to permit such analysis. (14 n.56)) Still, in another sense this issue isn’t so important; even if clinics are merely a pathway, they are that, and law schools urgently need pathways that their energized students can follow.

All that said, I would be very surprised if clinics were merely the pathway; that idea would imply that students arrive at their upper years in law school either “energized” or “unenergized,” and nothing that happens thereafter affects that. But people change all the time. It’s implausible to imagine that the opportunities in clinics aren’t affecting those who take advantage of them. The 1L’s who view clinics so negatively include those students who will subsequently take clinics and will, by their third year, view clinics quite positively; those more positive 3L’s, therefore, are not simply the people who recognized the value of clinics from the start. If students learn to value clinics more as a result of taking them, it seems reasonable to infer that the more general energizing effect I’ve hypothesized is also something that largely results from taking clinics rather than the other way around. Indeed, the authors have observed that their data “indicate that in each year of law school, students increase their evaluation of effectiveness for every setting except doctrinal classes.” (13) It appears – though the authors do not speak to this precise point – that the greatest such increase is the increase that occurs among the students who take clinics.

Clinics thus do appear to be specially energizing. It’s worth adding that there is some evidence also of a synergy between clinics and paid legal work. Those students who had both experiences were slightly more positive than anyone else about doctrinal classes and professional responsibility classes, and considerably more positive about the value of paid legal work. The only setting for which this group was not the most positive was clinics, but this group was still decidedly affirmative about clinics too (2.77 compared to clinic-only students' 2.93). The paper does not provide aggregate overall effectiveness scores for students with different experiences, but it seems that the students with both clinic and paid work were the most satisfied group.

If there is such an energizing effect, how does it operate? The authors suggest that students “learn to ‘connect the dots’ during their time in school” (13), and I would agree that part of what is happening is this sort of cognitive change. But few things happen without motivation, and so I am inclined to see the energizing as resulting also from students’ excitement, focused ambition and, no doubt, anxiety – all of which, along with new intellectual appreciation, the experience of the clinic may foster. If that’s what clinics do, how do they do it? The authors ask many valuable questions about this (21), and it is a very important issue. If we can figure out which clinics produce these effects, and from what aspects of their work, we’ll be better positioned to shape the curricula of the future.

Saturday, July 23, 2011

Do clinics help students develop professionalism?

What do clinics teach, and how? These are important and mysterious questions. A forthcoming article by Carole Silver, Amy Garver and Lindsay Watkins, Unpacking the Apprenticeship of Professional Identity and Purpose: Insights from the Law School Survey of Student Engagement (available from a link on the "Best Practices in Legal Education" blog), confirms the value of clinics but underlines how much we don't know about how that value is created. Relying on "beta" questions posed to students at 38 accredited law schools as part of the Law School Survey of Student Engagement (LSSSE), the authors sought to measure students' own judgments about how much "your experience at this law school contributed to your development" in several areas of professionalism. (7, 23)

What they found, focusing on data concerning full-time third-year students (3L's), was that "students with a clinical experience, whether or not they also had paid legal work experience, reported higher positive gains across each item of development." (21) 3L's with clinical experience reported higher gains than their peers who had no clinical coursework, even if the no-clinic students had held paid legal jobs. In fact students who took clinics but didn't have paid legal work experience reported almost the same impact on their growth as students who took clinics and did have paid legal work experience as well; the paid jobs didn't seem to add much at all to the impact of the clinic by itself. On these measures, it seems, quite simply, that the most effective way to foster students' professional growth is for them to take clinics.

Now one might quarrel with the idea of self-reported development. Will anyone say that he or she failed to develop? Perhaps not, but the question did not ask that; it asked whether "your experience at this law school contributed to your development" -- so a student could report that her experiences had not contributed without thereby saying that she herself had not developed. In any event, the responding students did give nuanced answers. On average, students rated their development on each of the five criteria put to them at between 2 and 3 on a 4-point scale -- and some students reported less development than others did. Those 3L's who never held a paid job or took a clinic reported development scores on the five different criteria ranging from 2.05 to 2.58; those who took clinics reported scores from 2.39 to 2.81. (21) On each question, moreover, there were sizable percentages, sometimes majorities, of students who reported that their experience at law school had contributed only "some" or "very little" to their development. (18)

Finally, it's important to remember that even if all the students overestimated their own development, those 3L's who took clinics reported more impact on their development than those who did not have a clinic course. It might be argued that these data show only that students who took clinics developed particularly unrealistic impressions of their own development -- but there's no apparent reason why that should be so. (The authors don't provide similar tabulations for 2L's, but 3L's are the students most likely to have actually had either clinical or work experience, and so are the ones most likely to manifest the impact of the full range of possible law school experiences.)

Moreover, the five professionalism criteria seem to capture some important aspects of growth towards a professional identity. Students were asked (23) how much their experience at their law school had contributed to their development in:
(a) Building positive relationships with your future clients
(b) Deepening your capacity for moral reasoning
(c) Preparing you to handle the stresses of law practice
(d) Strengthening your commitment to serving the public good
(e) Acting with integrity in both personal and professional settings
It's not absolutely clear to me that all students should "strengthen their commitment to serving the public good"; some students might already be fully committed and not need that commitment strengthened. It's even possible that some students might come to feel they had been too self-sacrificing and might -- perhaps even rightly -- conclude that they should pay more attention to their own needs and less to the abstract "public good." But my hesitation on this score may not have been shared by many students, for 83% said that their experience in law school had strengthened this commitment "some," "quite a bit" or "very much." (18) And overall these five criteria certainly mark significant achievements for students moving into professional life.

Since the LSSSE data suggest that clinical experience lays the groundwork for improvement on important aspects of professionalism, these data offer an important reason to value clinics. But in my next post I'll turn to the puzzling question of exactly what students learn in their clinics.

Friday, July 22, 2011

Learning the feel of success in law school

What do students need to learn in law school to prepare them for the practice of law? Often the answer to this question is a listing of areas of knowledge or of practical skills -- and certainly those are important. But perhaps what students learn about themselves is also important. Again, it could certainly be argued that self-understanding and psychological insight are part of the wisdom that people need for their adult lives, but in this case I'm not referring to such forms of knowledge. Rather, I have in mind a much blunter form of self-understanding: the knowledge that you are a winner. The importance of this knowledge is a central lesson of Richard Sander and Jane Yakowitz's essay, The Secret of My Success: How Status, Prestige and School Performance Shape Legal Careers.

Sander & Yakowitz would endorse this point, I suspect, but their focus is somewhat different. The importance of this self-understanding emerges, however, from the authors' striking argument that grades matter. This position actually isn't self-evident. Sander & Yakowitz note, for instance, "the (correct) insistence by many observers that firms do not consult law school transcripts in making partnership decisions." (24 n.35) There's little doubt that firms consider grades when making initial hiring decisions, but the point of those observers' "insistence" is that after the initial hire, success or failure is determined by performance and grades are forgotten. The authors point out (24), however, that if success (measured in admittedly "conventional" terms (42) such as making partner in a law firm) actually does correlate markedly with grades, then apparently the grades are capturing something that fuels lawyers' lifetime achievements even though no one actually pays attention to the grades themselves after the lawyer gets his or her first job. Much of their article, in turn, makes the case that grades actually predict future success more powerfully than all other indicators, notably including the prestige of the student's law school.

What is that "something" that grades reflect? Sander & Yakowitz don't try to tease this out fully, but what they say makes sense. First, "high grades are shaped by individual characteristics that perhaps no other easily measured characteristic of lawyers can capture: drive, energy, clarity of thought, and perhaps a facility for good legal analysis that isn't captured well by the LSAT." (41) This explanation by itself might imply that these characteristics are fixed qualities that the future lawyer brought with him or her to law school; all that grades do is reflect their presence.

That, however, is not Sander & Yakowitz's full answer. They also say (again at 41) that "grades reflect one's relative intellectual location in a law school's incoming student body, and how that location influences what one learns and with what level of analytic mastery and confidence one emerges from law school." This statement implies that where a law student stands compared to his or her fellow students on the first day of school affects the grades he or she will earn. A relatively stronger student, the authors seem to be saying, is likely to become part of a "virtuous cycle," in which success builds on success, and to emerge from law school with grades that affirm his or her confidence, and with confidence that will shape the coming steps of the new lawyer's career.

Does it really matter that much how a student compares with his or her colleagues on day one? Using a large longitudinal database from a study of law students in the 1990s, Sander & Yakowitz look at those students who were "admitted to their first choice school but did not attend it, often for geographic or financial reasons." The authors infer that these students probably have the same strengths and weaknesses as those students who were also admitted to their first choice schools and did choose to attend them. But it turns out that the students who go to their first-choice school, generally more "elite" than the lower-choice schools, get somewhat worse grades than those who didn't choose to attend what the authors refer to as their "reach" schools. (25-26) Perhaps the competition at the first-choice schools tends to be harder -- student bodies at American law schools are evidently sharply stratified on measurable criteria such as LSAT (4-5) -- and so the students who "reach" to the more elite schools are less likely to do well there than their equally able peers who choose to attend less elite institutions.

Sander & Yakowitz draw from this finding the conclusion that the net result of attending the reach school -- despite its higher prestige -- may be to weaken the student's future prospects as a lawyer. (36-37) This may be so, though the authors acknowledge that the exact extent of the grade-depressing impact is uncertain, and emphasize that they are comparing only plausible choices (a 20th-ranked school against a 40th, for instance -- not #1 against #200).

I'm more interested, however, in the apparent indication that the experience of success in law school -- even at a less elite law school -- propels the law graduate's professional trajectory. Whether or not that fact should shape applicants' choices of which school to attend, it may also have implications for schools' thinking about what programs to offer. The experience of success, even at a less elite school, fuels the graduate's achievements even after graduation, when he or she must meet the competition of the graduates of every law school. Something about success seems to make students stronger, lastingly stronger, than they otherwise would have been.

So if we ask, what is it that students need to learn in law school to prepare for the practice of law, one answer seems to be that they need to learn that they are people who succeed. Once they learn this, it's more likely to be true. They will use their abilities better -- and if (as I believe) human abilities are not simply fixed quantities but are a combination of initial capacities and lifelong refinement, they will actually come to have greater abilities as well.

And what does this mean for law school programs? It must be admitted that it counsels against such beguiling reforms as widespread use of pass-fail grading; if students cannot perceive their success, as compared to their peers, they perhaps cannot earn this success boost. But I do not think it follows that schools must relentlessly produce a single hierarchy so as to give their successful students that boost. One might well argue about the morality of such a system, in which a few were assured advantage at the expense of the many, and I hope that no such choice is necessary.

Demanding standards are necessary -- but there are many such standards, and many avenues for achievement. A law school whose students win moot court competitions is giving its students a hard-earned experience of success. A law school whose students win cases in their clinics is doing the same. A law school that insists on drafts and redrafts until a paper achieves a measure of excellence is too. Or so, at least, we may reasonably assume: provided that multiple avenues of success do not result in devaluing each individual measure of success, we should aim for multiple paths on which students of diverse skills and inclinations can find their strengths.

Finally, of course, we should not simply assume this happy harmony, but should examine whether it in fact can be achieved. Whether multiple channels of success actually have the same kind of effect as the single hierarchy of grades is an empirical question, and one that -- like so many others bearing on legal education -- deserves research.

Wednesday, July 20, 2011

Being the subject of the New York Times' attentions

My school, New York Law School, recently was. To my mind, the article is noteworthy for its seeming indifference to what we, and other schools, do -- namely to teach law. The article's focus is on the business of law schools, a topic that certainly deserves attention, but unfortunately that broad topic doesn't get very much attention, because the author seems preoccupied with a critique of our dean, Rick Matasar. The result is painful to read, not only because the author gets so much wrong about New York Law School, but because I don't know of anyone who cares more deeply about reforming American legal education, and protecting the interests of law students, than Rick Matasar. For Rick's response, which speaks both to issues of general philosophy and to what we at NYLS are actually doing, see this post on our school website.

It is not easy to refute an article in the New York Times. Americans are fortunate, really, that we have some newspapers with the institutional resources and reputation of the Times, and fortunate as well to have a constitution that protects the freedom of the press as firmly as ours does. I applaud the overall institutional picture. But when a powerful voice speaks, those less powerful -- not just ordinary individuals, but even ordinary institutions such as a law school -- must struggle to speak back. And when an institution with power makes mistakes, those mistakes are felt and not easily corrected. This post is one contribution to that effort at correction, but also simply an observation that power matters, even in the marketplace of ideas.

Tuesday, July 19, 2011

The House of Representatives' "affirmation" of our armed conflict with Al Qaeda and others

Here is the text of section 1034 of the National Defense Authorization Act for Fiscal Year 2012. That bill was passed (the vote was 322-96) on May 26, 2011 by the House of Representatives; a proposed amendment that would have deleted this particular section was defeated by a much closer vote of 234-187:


Congress affirms that--

(1) the United States is engaged in an armed conflict with al-Qaeda, the Taliban, and associated forces and that those entities continue to pose a threat to the United States and its citizens, both domestically and abroad;

(2) the President has the authority to use all necessary and appropriate force during the current armed conflict with al-Qaeda, the Taliban, and associated forces pursuant to the Authorization for Use of Military Force (Public Law 107-40; 50 U.S.C. 1541 note);

(3) the current armed conflict includes nations, organization, and persons who--

(A) are part of, or are substantially supporting, al-Qaeda, the Taliban, or associated forces that are engaged in hostilities against the United States or its coalition partners; or

(B) have engaged in hostilities or have directly supported hostilities in aid of a nation, organization, or person described in subparagraph (A); and

(4) the President's authority pursuant to the Authorization for Use of Military Force (Public Law 107-40; 50 U.S.C. 1541 note) includes the authority to detain belligerents, including persons described in paragraph (3), until the termination of hostilities.

This proposed legislation has been sharply attacked by the ACLU as a sweeping expansion of the war on terror. Since the bill was passed by the House, it now awaits Senate action -- presumably postponed by the ferocious fight over the debt ceiling. But once we escape (or fall into) bankruptcy, this issue will be back. It's been the subject of a lot of discussion already, but I think it's still worth a close look.

To begin, I want to focus on an odd feature of the legislation: It isn't, in so many words, an authorization for the use of military force as provided for in the War Powers Resolution (WPR). The WPR specifies that an authorization must actually declare, in its text, that "it is intended to constitute specific statutory authorization" for our engaging in hostilities. This section doesn't. What it does is to "affirm" that the Authorization for Use of Military Force passed in 2001 after the 9/11 attacks -- a statute that does contain the necessary WPR specification -- actually applies to the various targets listed in the new Act. The White House has "strongly" objected to this section, saying that "in purporting to affirm the conflict, [the provision] would effectively recharacterize its scope and would risk creating confusion regarding applicable standards. At a minimum, this is an issue that merits more extensive consideration before possible inclusion."

What difference does the odd wording make? It means that as an authorization for war, this proposed statute isn't in compliance with the War Powers Resolution. One possible result is that those interpreting and applying the law would have to conclude that the statute added nothing to the authorization contained in the 2001 statute's words; to whatever extent the new language goes beyond the old, it would simply not be binding. It might still be persuasive as to the correct reading of the old words, but it wouldn't have binding legal force as an authorization for fighting in its own right.

Another possible legal result is that it would be necessary to determine whether the War Powers Resolution's specification -- in 1973 -- of the methods by which Congress can authorize fighting is actually binding on subsequent Congresses. As a general proposition, no one Congress can deprive later Congresses of full lawmaking authority. So it might be argued that if Congress does enact this section, that will evidence Congress' decision that it prefers to use a new method -- namely what the ACLU called a "sleeper section" within a huge defense bill -- for authorizing war. Presumably Congress can do that if it wishes. The question will be whether Congress did wish to do that, or whether instead it merely wished to voice its interpretation of what the 2001 AUMF meant.

But perhaps its political purpose is clearer than its legal effect. I suspect section 1034's obscurity (both in wording and in placement in this larger statute) blunts debate -- although certainly it did not escape sharp public criticism. It also attenuates responsibility -- Congress (specifically, the Republican-controlled House) does not treat the modern equivalent of a declaration of war as a profound decision but rather as simply a subpart of defense spending. And if the result of this statute is a dramatic expansion of the President's authority to wage war, and of the nation's entanglement in war (whether this is in fact the result is a question I'll return to in later posts), these effects have been accomplished with as little fanfare as possible.

Given the indifference that this proposed provision suggests with regard to the requirements of the War Powers Resolution, it is ironic, to say the least, that the House has become vocally attentive to War Powers Resolution considerations as our involvement in Libya has continued. If this section winds up before the House again before its final enactment, I wonder whether legislators who in June displayed a lot of anxiety about war and Presidential warmaking will still be happy with their handiwork from late May.

Saturday, July 16, 2011

Speed-interviewing as a measure of social skills

Here's an item from the New York Times that manages to be both encouraging and unnerving at the same time: a report (Gardiner Harris, "New for Aspiring Doctors, the People Skills Test," July 10, 2011) on a new medical school's use of "the admissions equivalent of speed-dating: nine brief interviews that forced candidates to show they had the social skills to navigate a health care system in which good communication has become critical."

This report is encouraging for two reasons: First, it reflects medical schools' concern with doctors' personal skills. According to the article, research suggests that lack of such skills can and does lead to many treatment errors, as one might expect in a setting where communication and interaction among the patient, the patient's family, and a large medical staff are essential to effective care. Needless to say, other professions need personal skills as well -- my own, law, high on the list.

Second, the report offers a way of identifying people who actually have such skills. It's not so easy to do this, since we don't have many absolute rules about the best ways for people to treat each other. Is someone caring, or dissembling? Stern and unfeeling, or gruff but kind? Creative or opinionated? The mini-interview system's answer is pragmatic: 9 interviewers are better than 1. This approach is, essentially, crowd-sourcing: if most people feel you (the interviewee) have good personal skills, you probably do. Moreover, you have to demonstrate these skills not by discussing yourself -- the domain of pre-planned, "practiced responses" -- but rather by discussing ethical problems in exchanges in which "[t]he most important part of the interviews are [sic] often not candidates' initial responses -- there are no right or wrong answers -- but how well they respond when someone disagrees with them, something that happens when working in teams."

The crowd, to be sure, wasn't completely random: evidently the medical school in question, Virginia Tech Carilion, "trained 80 people to be interviewers, including doctors and businesspeople from the community." It's possible to wonder whether these 80 people sufficiently reflected the diversity of the world of patients and medical co-workers, and also whether different approaches to medical practice might have suggested different training parameters for the interviewers. If all the interviewers were trained in a particular conception of what constitutes constructive disagreement, for example, they might all be prone to missing the value of some different, yet reasonable, style of interaction.

And, oddly enough, it doesn't take long for your true self to become visible. The article reports that "[t]he system grew out of research that found [among other things] that interviewers rarely change their scores after the first five minutes." Here we see "thin-slicing" in action -- the remarkable ability people have to form an impression of each other, an impression that's not only lasting but quite accurate, in a very short period of time.

Even so, this is a major undertaking. Virginia Tech Carilion chose 239 applicants to interview, for 42 positions. If a 6:1 ratio makes sense, then a school with an entering class of around 300 would need to organize interviews for 1800 applicants. Nine mini-interviews for each applicant would mean 16,200 interviews. Virginia Tech Carilion's 80 interviewers provided an interviewer:applicant ratio of about 1:3, so this larger school would need about 600 interviewers for those 1800 applicants. Even if all the interviewers participated on a volunteer basis, the whole enterprise would clearly entail a substantial investment of resources of all sorts by the school.
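The scaling arithmetic above can be laid out explicitly. Here is a back-of-the-envelope sketch in Python, using only the figures reported in the article (the hypothetical class size of 300 is my illustration, not anything Virginia Tech Carilion has announced):

```python
# Scaling the mini-interview logistics from Virginia Tech Carilion's
# reported numbers to a hypothetical larger school.
interviewed, seats, interviewers = 239, 42, 80

interview_ratio = round(interviewed / seats)        # about 6 applicants per seat
coverage_ratio = round(interviewed / interviewers)  # about 3 applicants per interviewer

class_size = 300                                    # hypothetical entering class
applicants = class_size * interview_ratio           # applicants to interview: 1800
mini_interviews = applicants * 9                    # nine stations each: 16,200
interviewers_needed = applicants // coverage_ratio  # roughly 600 interviewers

print(applicants, mini_interviews, interviewers_needed)  # → 1800 16200 600
```

Even on these rough assumptions, the volunteer pool alone would have to grow by an order of magnitude over Virginia Tech Carilion's 80 interviewers.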

That's a reason to look for other methods of carrying out these assessments, but it may be that the speed interview system is in fact the best possible approach. There's evidence, moreover, that it actually works: the article states that "candidate scores on multiple mini interviews have proved highly predictive of scores on medical licensing exams three to five years later that test doctors' decision-making, patient interactions and cultural competency," according to the professor who developed the approach, Dr. Harold Reiter of McMaster University. These results do not directly show that those who do well on the mini interviews -- or on the medical licensing exams -- in fact become better doctors (and whether they do would be an important subject for further research), but it seems entirely reasonable to expect that the skills they show on these tests are skills they also display in practice.

But here's where we reach the discouraging aspect of this article. The schools these students attended after their mini interviews presumably made efforts to teach them good practice skills. Perhaps those efforts paid little attention to the "human" side of medical practice -- but it seems odd if a school that invested in mini interviews did not also invest in subsequent focused training in such skills as patient interviewing. Yet despite this intervening training, whatever it consisted of, the pre-medical school results are "highly predictive" of the post-medical school test scores. That finding unfortunately suggests that whatever the schools did in the intervening years didn't much alter the distribution of social skills their new students had as of the day of their admission. That proposition, in turn, would be consistent with the idea that social skills can't be taught; at least by the time adults apply to medical school, they either have these skills or they don't.

That conclusion isn't irresistible, however. Perhaps what happened in the intervening years is that all of the students got better at the social skills their medical practice would require; the ones with strong skills improved, and so did the ones with weak skills. Even if the relative distribution of skills remained unchanged, the absolute level of everyone's skills rose -- and that would be an important accomplishment. It isn't easy to change a person, we might conclude, and so teaching that does not transform people but helps them make the best of the abilities and skills they already have is well worth undertaking.

But I still think that something more is possible. The Times report says that the mini interview results are "highly predictive" of later test scores -- but in social science, "highly predictive" is itself rather a relative term. Suppose, for instance, that 75% of those who do well on the mini interviews also do well on the tests years later (a level of consistency that I'd expect would count as "highly predictive"). That still leaves a fourth of the good interviewers who don't do so well years later; why not? And it leaves a considerable number of not-so-good pre-admission interviewers who later do well on the tests. That reassuring unpredictability of human life marks out the space in which teachers can seek not just to help students improve but to assist some in remaking themselves.