They chatted… now what? “Evaluating online discussions”

I’m joining a couple of colleagues in piloting a few online courses this summer. Because interactive discussion is a big part of what most of us do, figuring out how to handle online discussions is an important challenge. But this challenge isn’t just for online-only teachers: many of my colleagues are experimenting with various “flipping the classroom” techniques, including blogging, chats, and forum discussions outside the classroom.

Figuring out how to evaluate class participation can be tricky enough… how should we handle assessing an online conversation?

Maryellen Weimer from The Teaching Professor reports some results from a recent study that examined a number of rubrics used by online teachers, looking for major patterns. Here are a few thoughts, then, on possible criteria for assessing these discussions.

____________________________________________________

Evaluating Online Discussions

Written by: Maryellen Weimer, Ph.D.
Published On: March 8, 2014

Discussions in class and online are not the same. When a comment is typed, the writer can take more time deciding what to say. Online comments have more permanence: they can be read more than once and responded to more specifically. And because online commentary isn’t delivered orally, it evokes fewer of the fears associated with speaking in public. These features begin the list of what makes online discussions different, and they have implications for how online exchanges are assessed. What evaluation criteria are appropriate?

Two researchers offer data helpful in answering the assessment question. They analyzed a collection of 50 rubrics, located online using various search engines and keywords, that were being used to assess online discussions. All the rubrics in the sample were developed to assess online discussions in higher education, and together they contained 153 different performance criteria. Based on a keyword analysis, the researchers grouped these criteria into four major categories. Each is briefly discussed here.

Cognitive criteria—Forty-four percent of the criteria were assigned to this category, which loosely represented the caliber of the intellectual thinking displayed by the student in the online exchange. Many of the criteria emphasized critical thinking, problem solving and argumentation, knowledge construction, creative thinking, and course content and readings. Many also attempted to assess the extent to which the thinking was deep and not superficial. Others looked at the student’s ability “to apply, explain and interpret information; to use inferences; provide conclusions; and suggest solutions.” (p. 812)

Mechanical criteria—Almost 20 percent of the criteria were assigned to this category. These criteria essentially assessed the student’s writing ability, including use of language, grammatical and spelling correctness, organization, writing style, and the use of references and citations. “Ratings that stress clarity … benefit other learners by allowing them to concentrate on the message rather than spend their time trying to decipher unclear messages.” (p. 813) However, the authors worry that the emphasis on the mechanical aspects of language may detract from the student’s ability to contribute in-depth analysis and reflection. They note the need for more research about the impact of this group of assessment criteria.

Procedural/managerial criteria—The criteria in this group focused on the students’ contributions and conduct in the online exchange environment. Almost 19 percent of the criteria belonged to this category. More specifically, many of these criteria dealt with the frequency and timeliness of postings. Others assessed the degree of respect shown and the extent to which students adhered to specified rules of conduct.

Interactive criteria—About 18 percent of the criteria were placed in this category, and they assessed the degree to which students reacted to and interacted with each other. Were students responding to what others said, answering the questions of others, and asking others questions? Were they providing feedback? Were they using the contributions of others in their comments? 

This work is not prescriptive; it does not propose which criteria are right or best. However, it does give teachers a good sense of which aspects of online interaction are most regularly assessed, which can be helpful in creating or revising a set of assessment criteria. Beyond what others are using, a teacher’s decisions should be guided by the goals and objectives of the online discussion activity. What should students know and be able to do as a result of interacting with others in an online exchange?

Reference

Penny, L., & Murphy, E. (2009). Rubrics for designing and evaluating online asynchronous discussions. British Journal of Educational Technology, 40(5), 804–820.
