Saturday, July 16, 2011

Speed-interviewing as a measure of social skills

Here's an item from the New York Times that manages to be both encouraging and unnerving at the same time: a report (Gardiner Harris, "New for Aspiring Doctors, the People Skills Test," July 10, 2011) on a new medical school's use of "the admissions equivalent of speed-dating: nine brief interviews that forced candidates to show they had the social skills to navigate a health care system in which good communication has become critical."

This report is encouraging for two reasons: First, it reflects medical schools' concern with doctors' personal skills. According to the article, research suggests that lack of such skills can and does lead to many treatment errors, as one might expect it would in a setting where communication and interaction between the patient, the patient's family, and a large medical staff is essential to effective care. Needless to say, other professions need personal skills as well -- with my own, law, high on the list.

Second, the report offers a way of identifying people who actually have such skills. It's not so easy to do this, since we don't have many absolute rules about the best ways for people to treat each other. Is someone caring, or dissembling? Stern and unfeeling, or gruff but kind? Creative or opinionated? The mini-interview system's answer is pragmatic: 9 interviewers are better than 1. This approach is, essentially, crowd-sourcing: if most people feel you (the interviewee) have good personal skills, you probably do. Moreover, you have to demonstrate these skills not by discussing yourself -- the domain of pre-planned, "practiced responses" -- but rather by discussing ethical problems in discussions in which "[t]he most important part of the interviews are [sic] often not candidates' initial responses -- there are no right or wrong answers -- but how well they respond when someone disagrees with them, something that happens when working in teams."

The crowd, to be sure, wasn't completely random: evidently the medical school in question, Virginia Tech Carilion, "trained 80 people to be interviewers, including doctors and businesspeople from the community." It's possible to wonder whether these 80 people sufficiently reflected the diversity of the world of patients and medical co-workers, and also whether different approaches to medical practice might have suggested different training parameters for the interviewers. If all the interviewers were trained in a particular conception of what constitutes constructive disagreement, for example, they might all be prone to missing the value of some different, yet reasonable, style of interaction.

And, oddly enough, it doesn't take long for your true self to become visible. The article reports that "[t]he system grew out of research that found [among other things] that interviewers rarely change their scores after the first five minutes." Here we see "thin-slicing" in action -- the remarkable ability people have to form an impression of each other, an impression that's not only lasting but quite accurate, in a very short period of time.

Even so, this is a major undertaking. Virginia Tech Carilion chose 239 applicants to interview, for 42 positions. If a 6:1 ratio makes sense, then a school with an entering class of around 300 would need to organize interviews for 1800 applicants. Nine mini-interviews for each applicant would mean 16,200 interviews. Virginia Tech Carilion's 80 interviewers provided an interviewer:applicant ratio of about 1:3, so this larger school would need about 600 interviewers for those 1800 applicants. Even if all the interviewers participated on a volunteer basis, the whole enterprise would clearly entail a substantial investment of resources of all sorts by the school.
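The scaling arithmetic above can be checked with a few lines of Python. The Virginia Tech Carilion figures (239 applicants, 42 seats, 80 interviewers) come from the Times report; the 300-student school is, as in the text, a hypothetical for comparison.

```python
# Figures reported for Virginia Tech Carilion's mini-interview system.
vtc_applicants = 239       # applicants chosen for interviews
vtc_seats = 42             # entering-class positions
vtc_interviewers = 80      # trained interviewers

# Roughly 6 applicants interviewed per seat, and roughly 3 applicants
# per trained interviewer.
applicants_per_seat = vtc_applicants / vtc_seats              # about 5.7
applicants_per_interviewer = vtc_applicants / vtc_interviewers  # about 3

# Hypothetical school with a 300-student entering class, same ratios:
seats = 300
applicants = seats * 6          # 1,800 applicants to interview
interviews = applicants * 9     # 9 mini-interviews each: 16,200
interviewers = applicants // 3  # about 600 trained interviewers

print(applicants, interviews, interviewers)  # 1800 16200 600
```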

That's a reason to look for other methods of carrying out these assessments, but it may be that the speed interview system is in fact the best possible approach. There's evidence, moreover, that it actually works: the article states that "candidate scores on multiple mini interviews have proved highly predictive of scores on medical licensing exams three to five years later that test doctors' decision-making, patient interactions and cultural competency," according to the professor who developed the approach, Dr. Harold Reiter of McMaster University. These results do not directly show that those who do well on the mini interviews -- or on the medical licensing exams -- in fact become better doctors (and whether they do would be an important subject for further research), but it seems entirely reasonable to expect that the skills they show on these tests are also the skills they display in practice.

But here's where we reach the discouraging aspect of this article. The schools attended by the students who went through the mini interviews presumably made efforts to teach their students good practice skills. Perhaps those efforts paid little attention to the "human" side of medical practice -- but it seems odd if a school that invested in mini interviews did not also invest in subsequent focused training in such skills as patient interviewing. Yet despite this intervening training, whatever it consisted of, the pre-medical school results are "highly predictive" of the post-medical school test scores. That finding unfortunately suggests that whatever the schools did in the intervening years didn't much alter the distribution of social skills that their students brought with them on the day of admission. That proposition, in turn, would be consistent with the idea that social skills can't be taught; at least by the time adults apply to medical school, they either have these skills or they don't.

That conclusion isn't irresistible, however. Perhaps what happened in the intervening years is that all of the students got better at the social skills their medical practice would require; the ones with strong skills improved, and so did the ones with weak skills. Even if the relative distribution of skills remained unchanged, the absolute level of everyone's skills rose -- and that would be an important accomplishment. It isn't easy to change a person, we might conclude, and so teaching that does not transform people but helps them make the best of the abilities and skills they already have is well worth undertaking.

But I still think that something more is possible. The Times report says that the mini interview results are "highly predictive" of later test scores -- but in social science, "highly predictive" is itself rather a relative term. Suppose, for instance, that 75% of those who do well on the mini interviews also do well on the tests years later (a level of consistency that I'd expect would count as "highly predictive"). That still leaves a fourth of the good interviewers who don't do so well years later; why not? And it leaves a considerable number of not-so-good pre-admission interviewers who later do well on the tests. That reassuring unpredictability of human life marks out the space in which teachers can seek not just to help students improve but to assist some in remaking themselves.
