Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study

ANNOTATION: Ertmer et al. (2007). Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study. Journal of Computer-Mediated Communication 12: 412–433.

Ertmer et al.’s study is a case study of 15 students designed to assess whether peer feedback improves the quality of online postings in an online course. Using numerical, rubric-driven grades, participant interviews, and entry and exit questionnaires, the researchers examined students’ assessments of the educational value of giving and receiving peer feedback. Their specific goal was to determine whether peer feedback resulted in an improvement in the quality of discussion posts as measured by Bloom’s taxonomy. Students numerically ranked the quality of peer posts and provided some text-based feedback. Researchers compared the quality of student posts from early and late in the course to determine whether quality improved, where quality is defined as comments that reflect higher-order thinking. Though students reported no change in their preference for instructor feedback over peer feedback, they did find that both giving and receiving peer feedback were helpful to their learning.

Regarding the paper’s quality: through a well-structured theoretical framework, the authors cite considerable research supporting the value of feedback in the learning process. They define what constitutes helpful feedback, noting that feedback is a frequently cited catalyst for learning and that lack of feedback is a primary reason for withdrawal from online courses. (Interestingly, they provide no citations to support those particular assertions.) The researchers note that responding to discussion posts can be labor-intensive for faculty, so peer feedback may help relieve some of that pressure. However, the way they structured the research seemed to make the process even more intensive for faculty: subjects were trained to use a Bloom’s-based rubric to evaluate their peers, but peer evaluations went through a faculty vetting process before feedback was returned. This created a two-week delay, so feedback arrived too late to be incorporated into subsequent work. Because the research process was labor intensive in general, it made sense to use a case study approach with an appropriately sized sample of 15. However, the delay also undermined the study’s test-retest reliability. A similar project delivered over the course of several semesters with a more widely representative sample of students might yield more reliable results.

A few key questions are left unanswered after my first read of this work, and I’m sure more will emerge following the upcoming Critical Review of Research assignment on this article. First, is a case study format appropriate for this kind of research? It is certainly convenient, as the research process seemed labor intensive, but is it reliable? Second, how representative is this sample group, and can results be reliably applied in other contexts? The sample group consisted of mostly graduate students in educational research, including educational administrators who presumably are already highly trained in providing effective feedback. To me, this was in no way an unbiased representative sample. Third, what were the discussion questions that were asked? The quality of a question helps to determine the quality of the answer (Meyer, 2004), and though the researchers provided one good example of a discussion question that would seem to yield “higher order” thinking and synthesis, it’s hard to know whether all of the questions were of equal quality.

Tangentially related to this, I learned two things through the lit review that are worthy of further reading: 1) Online discussion forums need to be carefully designed, as they typically require students to communicate complex ideas through written text rather than conversation, which can be a barrier for some. 2) Some students feel anxiety about giving feedback to peers.
