Make a great first impression… with your syllabus!

The old cliché reminds us that we never get a second chance to make a first impression. So true.

This is particularly true for the first day of class, and that all-important document that goes along with it: the syllabus. Sure, the syllabus fulfills some specific course information and management functions. But it can also play a crucial part in how you come across as a teacher, and how your course is framed and received by students.

Just in time for your last-minute syllabus completion crunch, here’s an oldie-but-goodie post from the Teaching Professor Blog by Maryellen Weimer. There’s good stuff here to consider as you work to make your syllabus part of an effective first impression, and to keep that impression strong for as long as students continue to use the syllabus in your course.


AUGUST 24, 2011

What Does Your Syllabus Say About You and Your Course?

By Maryellen Weimer, in Teaching Professor Blog

A colleague shared an excellent but not yet published paper on the syllabus. It got me thinking as this is the time of year when most of us are revisiting these venerable documents. Oh, I know, some of you finished yours back in May when the semester ended. And then there are the rest of us who are working on them feverishly as the beginning of new academic year quickly approaches.

Whether yours are ready to go or just being developed, all our syllabi merit a critical review on a regular basis. I’d like to share some questions to think about as you take a contemplative look at your syllabus.

How would you characterize the tone of your syllabus? Is it friendly and inviting or full of strongly worded directives? Is the focus on what students will be learning or on all those various things that they should and shouldn’t be doing? Why do we feel so strongly that we have to lay down the law in the syllabus? Do we need a policy to cover every possible contingency? Do multiple prohibitions, rules and pointed reminders develop student commitment to the course?

Does your syllabus convey the excitement, intrigue and wonder that’s inherently a part of the content you teach? Does it hint at or openly state the enthusiasm you feel about teaching this great subject? Does it mention the many things students will know and be able to do as a consequence of their engagement with the content? If you read this syllabus, would you say the course is taught by somebody who loves learning?

Does your syllabus indicate that all the decisions about the course have been made? Or does it leave some options up to students or identify some areas where they might have a hand in deciding some of the details associated with the course? Is it really necessary for the teacher to make all the decisions about the course? When the teacher decides everything, how does that affect the motivation to learn? Does teacher decision-making help students develop as independent learners?

Have you ever asked students for feedback on your syllabus? Try this: wait until three or four weeks into the course, then ask students to take out the syllabus and, in a five-minute free write, tell you anonymously what they thought about the course and the instructor on that first day when you went over the syllabus. Or ask them to describe their sense of an ideal syllabus. Or ask them to write about the most unusual syllabus they’ve ever encountered. Or inquire why so many students don’t read their syllabi; if you’re really daring, find out whether they have read the syllabus in your course, and ask why or why not.

The authors of the paper I mentioned think we’re too oriented to the syllabus as a contract, and I have to agree. When the focus is on all the logistical details, all the terms of this particular learning deal, we miss an opportunity to generate enthusiasm for the course and, indeed, for learning.

Syllabi can convey messages that build rapport between the teacher and students, and they can help create community among students. I know courses need policies, students need guidelines, and some students take advantage of teachers. But I wonder if we don’t err on the side of being too defensive in our syllabi. We could all benefit from discussion of these syllabus-related issues, and I encourage you to share your thoughts in the comments below.

It’s also a great discussion to have with a colleague. Give a trusted colleague your syllabus and ask what they conclude about the course and the instructor based on it. If you’re not comfortable doing that with your own syllabus, there are plenty available online, and you can consider one of those in light of these questions.

I have three final questions for you: Have you ever thought about creating a syllabus that invites students to a learning event they just might want to attend? What would that syllabus look like? How different would it be from the syllabi you’re polishing and posting for this Fall?


Don’t check out yet; check in… with yourself.

I know, it’s close to the end, you’re ready for the beach. I get it. This late in the academic year, with finals looming (or, for some of you, finals completed! You people can shut up now), it’s easy to check out and get some well-deserved rest.

But my colleague David Gooblar at Augustana College, blogger for Pedagogy Unbound (featured in the Chronicle of Higher Education’s Vitae career website), suggests that investing just a little time at the conclusion of a course can reap serious benefits for your formative assessment and self-improvement as a teacher (as well as for more effective learning outcomes for your future students). This kind of reflection can also form the basis for a self-evaluation report that may be part of a faculty review in your future.

Following his short piece below are reflection prompts from the self-evaluation form he references.

Best of luck to all of us at the end!


The Semester’s Just About Over! Now Grade Your Own Teaching. 

It’s been a long semester. We’ve all worked hard, tried out new things, adapted on the fly, managed to keep our heads above an ocean of work while still being present for our students. We’ve made it through the mid-semester doldrums. Depending on how much grading we’ve got left, we’re now within sight of the end. If you’re anything like me, to say that you’re looking forward to the end is an understatement. Does anyone else visualize entering that last grade, closing your folder of class notes, and then throwing that folder into the sea?

Today I’d like to suggest that you not be so quick to move on from this term, no matter how desperately you long for a summer away from teaching.

I learn a lot every semester: Trying out ideas in the crucible of the classroom is really the only way to improve as a teacher. I always feel better about my pedagogy at the end of the term than I do at the beginning. Curiously, though, these gains don’t always carry over from semester to semester. By the time that next semester rolls around—particularly if it’s the fall term—the lessons I’ve learned have been mostly forgotten. Did that new approach to a familiar text produce the results I’d hoped for? How did that new topic go over with the students? Was the multi-part assignment too much of a headache, or was it worth it? A few months later, it can all get kind of hazy.

Of course, some of you may have better recall than I do. But I think it’s valuable to take note of the semester’s gains and losses while they are still fresh in our minds. I’m suggesting giving yourself a course evaluation at the end of every term.

[details after the jump!]


“Do the Best Professors Get the Worst Ratings?”

Higher ed faculty often agonize about student course evaluations, and with good reason: while evaluations are an important source of data for both formative and evaluative assessment of teaching, there are serious limits to what they can and can’t tell us. I appreciate being on a faculty that requires assessment of evidence of student learning independent of student evaluations… because the evals should be one of several data points, not the be-all, end-all.

For instance, many of us have groused at one point, “my evals stink because I push my students to work hard.” And there’s something to that: many students conflate ease of activity with increased learning and difficult struggle with less learning, when the best evidence suggests the exact opposite is likely the case.

Many thanks to Facebook friend and Augie VP Kent Barnds for the heads-up on this blog post from Nate Kornell’s “Everybody Is Stupid Except You” on the Psychology Today website. I need to think about this one a while… there’s a lot left unexplained that the underlying study of Air Force Academy cadets could perhaps address (e.g., what kind of student evaluations were used? what kind of teachers taught the follow-up course?). And the speculated explanation (that older and more experienced, but less charismatic and polished, professors instill deeper learning than less experienced, more polished profs who get better student evals) needs serious follow-up study. But this is a great place to start!


Do the Best Professors Get the Worst Ratings?

Do students give low ratings to teachers who instill deep learning?
Published on May 31, 2013 by Nate Kornell, Ph.D. in Everybody Is Stupid Except You

My livelihood depends on what my students say about me in course evaluations. Good ratings increase my chances for raises and tenure. By contrast, there is no mechanism in place whatsoever to evaluate how much my students learn–other than student evaluations (and, here at Williams, peer evaluations). So is it safe to assume that good evaluations go hand in hand with good teaching?

Shana Carpenter, Miko Wilford, Nate Kornell (me!), and Kellie M. Mullaney recently published a paper that examined this question. Participants in the study watched a short (one minute) video of a speaker explaining the genetics of calico cats. There were two versions of the video.

  • In the fluent speaker video, the speaker stood upright, maintained eye contact with the camera, and spoke fluidly without notes.
  • In the disfluent speaker video, the speaker stood behind the desk and leaned forward to read the information from notes. She did not maintain eye contact and she read haltingly.

The participants rated the fluent lecturer as more effective. They also believed they had learned more from the fluent lecturer. But when it came time to take the test, the two groups did equally well.

As the study’s authors put it, “Appearances Can Be Deceiving: Instructor Fluency Increases Perceptions of Learning Without Increasing Actual Learning.” Or, as Inside Higher Ed put it, when it comes to lectures, Charisma Doesn’t Count, at least not for learning. Perhaps these findings help explain why people love TED talks.

What about real classrooms?

The study used a laboratory task and a one-minute video (although there is evidence that a minute is all it takes for students to form the impressions of instructors that will end up in evaluations). Is there something more realistic?

A study of Air Force Academy cadets, by Scott E. Carrell and James E. West (2010), answered this question (hat tip to Doug Holton for pointing this study out). They took advantage of an ideal set of methodological conditions:

  • Students were randomly assigned to professors. This eliminated potential data-analysis headaches like the possibility that the good students would all enroll with the best professors.
  • The professors for a given course all used the same syllabus and, crucially, final exam. This created a consistent measure of learning outcomes. (And profs didn’t grade their own final exams, so friendly grading can’t explain the findings.)
  • The students all took mandatory follow-up classes, which again had standardized exams. These courses made it possible to examine the effect of Professor Jones’s intro calculus course on his students’ performance in future classes! This is an amazing way to measure deep learning.
  • A study like this needs a lot of data, and this one had it: over 10,000 students in all.

The authors measured value-added scores for each of the professors who taught introductory calculus.

The results

When you measure performance in the courses the professors taught (i.e., how intro students did in intro), the less experienced and less qualified professors produced the best performance. They also got the highest student evaluation scores. But more experienced and qualified professors’ students did best in follow-on courses (i.e., their intro students did best in advanced classes).

The authors speculate that the more experienced professors tend to “broaden the curriculum and produce students with a deeper understanding of the material.” (p. 430) That is, because they don’t teach directly to the test, they do worse in the short run but better in the long run.

To summarize the findings: because they didn’t teach to the test, the professors who instilled the deepest learning in their students came out looking the worst in terms of student evaluations and initial exam performance. To me, these results were staggering, and I don’t say that lightly.

Bottom line? Student evaluations are of questionable value.

Teachers spend a lot of effort and time on making sure their lectures are polished and clear. That’s probably a good thing, if it inspires students to pay attention, come to class, and stay motivated. But it’s also important to keep the goal, learning, in sight. In fact, some argue that students need to fail a lot more if they want to learn.

I had a teacher in college whose lectures were so incredibly clear that it made me think physics was the easiest thing in the world. Until I went home and tried to do the problem set. He was truly amazing, but sometimes I think he was TOO good. I didn’t struggle to understand his lectures–but maybe I should have.