Student evaluation day need not live in infamy

Yesterday, December 7, 2014 — the anniversary of a date which will live in infamy. Seventy-three years ago the United States was suddenly and deliberately attacked by naval and air forces of the Empire of Japan.

More recently, many of my colleagues at Augustana (on a trimester calendar) felt suddenly and deliberately attacked by pen-and-paper missives of the Empire of Their Students. All too soon, those of you on semester calendars will have your own students complete the dreaded SRIs (student ratings of instruction), and you will immediately start speculating on whether that surly, detached kid in the back row will nuke you as you anticipate he will.

Student course evaluations can be a valuable source of information — not just as summative assessment for department chairs and T&P committees who evaluate your work as a teacher, but as formative assessment for you, the teacher, who can use the data you receive to reflect on your classes, locate your current strengths, and revise and tweak where you can. But all the positive data in the world, all the bright and shiny open-response affirmations from students, can be overshadowed by the one or two negative responses that invariably turn up like bad pennies. All too often it is this “bring on the rage, bring on the funk” moment of reading student course evaluations that keeps us from engaging them with open-minded inquiry as education professionals.

After a brief hiatus, APP is back with what is hopefully a timely chunk of advice! Isis Artze-Vega, an educational developer from Florida International University, provides a healthy perspective and valuable tips for engaging your SRI responses productively in the latest Faculty Focus. The day you get your summary report and completed forms need not live in infamy… indeed, it may be the first day of the rest of your continuing improvement as a teacher.


DECEMBER 8, 2014

Cruel Student Comments: Seven Ways to Soothe the Sting

By Isis Artze-Vega, in Faculty Evaluation

Reading students’ comments on official end-of-term evaluations — or worse, on public online ratings sites — can be depressing, often even demoralizing. So it’s understandable that some faculty look only at the quantitative ratings, others skim the written section, and many others have vowed never again to read the public online comments. It’s simply too painful.

How else might you respond? Here are seven suggestions for soothing the sting from even the most hurtful student comments:

1. Analyze the data. First, look for outliers: anomalous negative views. In research, we would exclude them from our analyses, so do the same for uniquely mean-spirited or outlandish comments.

Next, find the ratio of positive to negative comments to get an overall picture of student impressions. Better yet, categorize remarks: Are students responding negatively to your assignments? The course readings? A particular behavior? Identifying themes will help you determine whether they warrant a response. If multitudes of students note that they didn’t know what was expected of them or that you were disorganized, you’ll want to reflect on the area(s) identified. What might have given students that impression? And what steps might you take to improve or to alter their perception?

The recent New York Times piece “Dealing with Digital Cruelty” offers additional ideas for responding to mean-spirited online comments. Some of those suggestions are woven into numbers 2-5 below.

2. Resist the lure of the negative. “Just as our attention naturally gravitates to loud noises and motion, our minds glom on to negative feedback,” the article explains, adding that we also remember negative comments more vividly. This finding itself is comforting. If we catch ourselves dwelling on students’ negative feedback, we can consider: Am I focusing on this because it’s “louder,” or because it’s a legitimate concern? If it’s the latter, revisit the ideas in suggestions 1 and 3. Otherwise, skip to 4 and 5 below.

3. “Let your critics be your gurus,” suggests the New York Times piece. It explains that we often brood over negative comments because we suspect they may contain an element of truth. Rider University psychology professor John Suler advises us to “treat them as an opportunity.” Ask yourself, “Why does it bother you? What insecurities are being activated in you?” “It’s easy to feel emotionally attacked,” adds Bob Pozen, a senior lecturer at the Harvard Business School and senior research fellow at the Brookings Institution, “but that doesn’t mean your critics don’t have a point.”

4. Find counter-evidence. When you encounter a negative comment, look for (or recall) comments that contradict it — whether positive feedback from other students or a colleague. “Disputing to yourself what was [written]” can make “harsh comments… feel less potent” (Rosenbloom, 2014).

5. Dwell on the positive ones. Because “it takes more time for positive experiences to become lodged in our long-term memory” (Rosenbloom, 2014), we should devote at least as much time to students’ positive comments as to their negative ones. Plus, remembering your teaching strengths can motivate you to keep exhibiting those traits and designing your courses as you do. These positive sentiments, often heart-warming and gratifying, will also help you maintain a positive outlook toward students.

The New York Times article proposes another strategy in its brief segment on student evaluations. Psychology professor James O. Pawelski jokes that “bars would make a killing if at the end of each semester they offered ‘professor happy hours’ where teachers could bring their evaluations and pass the negative ones around.” He cautions that “Nobody should be alone when they’re reading these things.” That advice leads us to our next tip.

6. Read them with a friend. Whether a departmental colleague, relative, or a trusted center for teaching and learning staff member, a more objective party can help you make sense of or notice the absurdity of the comments because they’re not as personally invested in them.

7. Be proactive, especially if these comments will be the primary data used in decisions about your hiring, re-hiring, promotion, etc. In this case, revisit suggestion 1 above. If you don’t conduct this analysis yourself, you’ll be at the mercy of whoever is charged with your evaluation — and they probably won’t be as thorough. They too may focus on negative comments or outliers. Also, take the time to provide explanations of any off-the-wall student complaints, so that your reviewers don’t draw their own conclusions.

Ultimately, all parties involved—particularly academic leaders—should remember that, important as they are, student comments offer only one perspective on teaching. Thorough evaluation of teaching effectiveness requires that each of us reflect on our practices, examine artifacts from our courses (assignments, syllabi, etc.), and look closely at what our students know and can do upon completion of our courses. The proof, after all, is in the pudding.

Rosenbloom, S. (2014, August 24). Dealing with digital cruelty. The New York Times.

Dr. Isis Artze-Vega, associate director of the Center for the Advancement of Teaching, Florida International University.


“Do the Best Professors Get the Worst Ratings?”

Higher ed faculty often agonize about student course evaluations, and with good reason: while evaluations are an important source of data for both formative and evaluative assessment of teaching, there are serious limitations to what they can tell us. I appreciate being on a faculty that requires assessment of evidence of student learning independent of student evaluations… because the evals should be one of several data points, not the be-all, end-all.

For instance, many of us have groused at one point, “my evals stink because I push my students to work hard.” And there’s something to that: many students conflate ease of activity with increased learning and difficult struggle with less learning — when the best evidence suggests the exact opposite is likely the case.

Many thanks to Facebook friend and Augie VP Kent Barnds for the heads-up on this blog post from Nate Kornell’s “Everybody Is Stupid Except You” on the Psychology Today website. I need to think about this one a while… there are lots of things left unexplained that perhaps the underlying study of USAF cadets could reveal (e.g., what kind of student evaluations are being used? what kind of teachers are teaching the follow-up course?). And the speculated explanation — that older, more experienced but less charismatic and polished professors instill deeper learning than less experienced, more polished profs who get better student evals — needs serious follow-up study. But this is a great place to start!


Do the Best Professors Get the Worst Ratings?

Do students give low ratings to teachers who instill deep learning?
Published on May 31, 2013 by Nate Kornell, Ph.D. in Everybody Is Stupid Except You

My livelihood depends on what my students say about me in course evaluations. Good ratings increase my chances for raises and tenure. By contrast, there is no mechanism in place whatsoever to evaluate how much my students learn–other than student evaluations (and, here at Williams, peer evaluations). So is it safe to assume that good evaluations go hand in hand with good teaching?

Shana Carpenter, Miko Wilford, Nate Kornell (me!), and Kellie M. Mullaney recently published a paper that examined this question. Participants in the study watched a short (one minute) video of a speaker explaining the genetics of calico cats. There were two versions of the video.

  • In the fluent speaker video, the speaker stood upright, maintained eye contact with the camera, and spoke fluidly without notes.
  • In the disfluent speaker video, the speaker stood behind the desk and leaned forward to read the information from notes. She did not maintain eye contact and she read haltingly.

The participants rated the fluent lecturer as more effective. They also believed they had learned more from the fluent lecturer. But when it came time to take the test, the two groups did equally well.

As the study’s authors put it, “Appearances Can Be Deceiving: Instructor Fluency Increases Perceptions of Learning Without Increasing Actual Learning.” Or, as Inside Higher Ed put it, when it comes to lectures, Charisma Doesn’t Count, at least not for learning. Perhaps these findings help explain why people love TED talks.

What about real classrooms?

The study used a laboratory task and a one-minute video (although there is evidence that a minute is all it takes for students to form the impressions of instructors that will end up in evaluations). Is there something more realistic?

A study of Air Force Academy cadets, by Scott E. Carrell and James E. West (2010), answered this question (hat tip to Doug Holton for pointing this study out). They took advantage of an ideal set of methodological conditions:

  • Students were randomly assigned to professors. This eliminated potential data-analysis headaches like the possibility that the good students would all enroll with the best professors.
  • The professors for a given course all used the same syllabus and, crucially, final exam. This created a consistent measure of learning outcomes. (And profs didn’t grade their own final exams, so friendly grading can’t explain the findings.)
  • The students all took mandatory follow-up classes, which again had standardized exams. These courses made it possible to examine the effect of Professor Jones’s intro calculus course on his students’ performance in future classes! This is an amazing way to measure deep learning.
  • A study like this needs a lot of data, and this one had it: over 10,000 students in all.

The authors measured value-added scores for each of the professors who taught introductory calculus.

The results

When you measure performance in the courses the professors taught (i.e., how intro students did in intro), the less experienced and less qualified professors produced the best performance. They also got the highest student evaluation scores. But more experienced and qualified professors’ students did best in follow-on courses (i.e., their intro students did best in advanced classes).

The authors speculate that the more experienced professors tend to “broaden the curriculum and produce students with a deeper understanding of the material.” (p. 430) That is, because they don’t teach directly to the test, they do worse in the short run but better in the long run.

To summarize the findings: because they didn’t teach to the test, the professors who instilled the deepest learning in their students came out looking the worst in terms of student evaluations and initial exam performance. To me, these results were staggering, and I don’t say that lightly.

Bottom line? Student evaluations are of questionable value.

Teachers spend a lot of effort and time on making sure their lectures are polished and clear. That’s probably a good thing, if it inspires students to pay attention, come to class, and stay motivated. But it’s also important to keep the goal–learning–in sight. In fact, some argue that students need to fail a lot more if they want to learn.

I had a teacher in college whose lectures were so incredibly clear that it made me think physics was the easiest thing in the world. Until I went home and tried to do the problem set. He was truly amazing, but sometimes I think he was TOO good. I didn’t struggle to understand his lectures–but maybe I should have.