Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study

ANNOTATION: Ertmer et al. (2007). Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study. Journal of Computer-Mediated Communication 12: 412–433.

Ertmer et al.’s study is a case study of 15 students designed to assess whether peer feedback improves the quality of online postings in an online course. Using numerical, rubric-driven grades, participant interviews, and entry and exit questionnaires, the researchers considered students’ assessments of the educational value of giving and receiving peer feedback. Their specific goal was to determine whether peer feedback resulted in an improvement in the quality of discussion posts according to Bloom’s taxonomy. Students numerically ranked the quality of peer posts and provided some text-based feedback. The researchers compared the quality of student posts from early and late in the course to determine whether quality improved, where quality is defined as comments that reflect higher-order thinking. Though students reported no change in their preference for instructor feedback over peer feedback, they did report that both giving and receiving peer feedback were helpful to their learning.

Regarding the quality of the paper: Through a well-structured theoretical framework, the authors cite a substantial body of research supporting the value of feedback in the learning process. They define what constitutes helpful feedback, noting that feedback is a frequently cited catalyst for learning and that lack of feedback is a primary reason for withdrawal from online courses. (Interestingly, they provided no citations to support those particular assertions.) The researchers indicated that responding to discussion posts can be labor-intensive for faculty, and so peer feedback may help relieve some of the pressure. However, the way they structured the research seemed to make the process even more intensive for faculty. Subjects were trained in the use of a Bloom’s-based rubric to evaluate their peers, but peer evaluations went through a faculty-vetting process before feedback was returned. This created a two-week delay, so the feedback arrived too late to be incorporated into subsequent work. The research process seemed labor-intensive in general, so it made sense that they used a case study approach with an appropriately sized sample of 15. However, because of this delay, the study’s test-retest reliability is questionable. A similar research project delivered over the course of several semesters, with a more widely representative sample of students, may yield more reliable results.

A few key questions are left unanswered after my first read of this work, and I’m sure more will emerge following the upcoming Critical Review of Research assignment on this article. First, is a case study format appropriate for this kind of research? It is certainly convenient, as the research process seemed labor intensive, but is it reliable? Second, how representative is this sample group, and can results be reliably applied in other contexts? The sample group consisted of mostly graduate students in educational research, including educational administrators who presumably are already highly trained in providing effective feedback. To me, this was in no way an unbiased representative sample. Third, what were the discussion questions that were asked? The quality of a question helps to determine the quality of the answer (Meyer, 2004), and though the researchers provided one good example of a discussion question that would seem to yield “higher order” thinking and synthesis, it’s hard to know whether all of the questions were of equal quality.

Tangentially related to this, I learned two things through the lit review that are worthy of further reading: 1) Online discussion forums need to be carefully considered, as they typically require students to communicate complex ideas through written text, rather than a conversation, and this can be a barrier for some. 2) Some students feel anxiety around giving feedback to peers.

What tools or technologies are used to improve engagement?

FOCUSED LITERATURE REVIEW #8: Exploring the potential of LMS log data as a proxy measure of student engagement

SOURCE: Henrie, C. R., Bodily, R., Larsen, R., & Graham, C. R. (2018). Exploring the potential of LMS log data as a proxy measure of student engagement. Journal of Computing in Higher Education, 30(2), 344-362. https://doi.org/10.1007/s12528-017-9161-1

Engagement—defined in this study as focused, committed, energetic involvement in learning—has been shown to be directly correlated with academic success, and it is particularly crucial in technology-mediated distance learning. This study examined whether learning management system (LMS) log data can serve as a proxy measure of engagement. The researchers specifically sought a correlation between self-reported engagement survey scores and log data tracking student activity in the Canvas learning management system.

While many kinds of engagement are identified in the literature, the researchers focused on cognitive and emotional engagement because these types in particular have a strong empirically and theoretically supported connection to learning. The researchers found that LMS log data, which would seemingly be a strong indicator of learning activity in a course, did not have a statistically significant relationship to students’ self-reported measures of cognitive and emotional engagement in online courses.

The researchers looked at data from 153 students in 8 individual sections of three undergraduate courses at a single university in the western US. All courses were offered through Canvas in a blended format. Evaluating log data was chosen as a measure of engagement because it was minimally disruptive to each learner’s process since it is automatically tracked behind the scenes. They measured: URLs visited, page views, time spent per page, time stamps of page visitation, number of discussion replies, punctuality of assignment submission, and grades. Each course included discussion boards, quizzes, online videos, and projects, all of which took place within the LMS.

This data was compared against the survey, which consisted of 7 Likert-style questions. Self-report was chosen as a measure of student engagement because cognitive and emotional states are hard to measure by observation and conclusions drawn via observation are highly inferential.
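
To make the comparison concrete for myself, here is a minimal sketch (Python, not from the paper) of the kind of correlation analysis this design implies; the file and column names are hypothetical stand-ins for the aggregated log features and the survey score.

    # Minimal sketch, not the authors' code: correlate aggregated Canvas log
    # features with self-reported engagement scores. Column names are hypothetical.
    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("engagement_data.csv")  # hypothetical: one row per student

    log_features = ["page_views", "time_on_pages_min",
                    "discussion_replies", "on_time_submissions"]

    for feature in log_features:
        r, p = pearsonr(df[feature], df["self_reported_engagement"])
        # A non-significant p-value (p >= .05) would mirror the study's finding of
        # no statistically significant relationship between log data and survey scores.
        print(f"{feature}: r = {r:.2f}, p = {p:.3f}")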

Their unexpected results, they concluded, point to the complexity of learning and the difference between observed measures of engagement and students’ self-reported intellectual and emotional states. The researchers noted a long list of limitations that may have contributed to these results, and they concluded that behavior captured through log data may be far more complicated than they realized. To yield more reliable results, they suggest that other factors need to be accounted for to better understand what it means to be engaged in online learning, such as prior knowledge, motivation to learn, or level of confusion or frustration.

They suggest further work with other methods for measuring student engagement, such as mouse tracking, physiological instruments, and human observers. A more complex and longer survey may also help gather additional nuance related to time spent on pages and on assignments. For example, high engagement and motivation may actually require less time of students, while frustration and confusion may mean more time spent on pages. Time spent on pages is also hard to quantify: without direct observation, it is difficult to determine whether a student was actually focused on their work while a page happened to be open on screen.

What are strategies that enhance engagement in online courses?

FOCUSED LITERATURE REVIEW #7: “Engagement matters: Student perceptions on the importance of engagement strategies in the online learning environment”

Martin, F., & Bolliger, D. U. (2018). Engagement matters: Student perceptions on the importance of engagement strategies in the online learning environment. Online Learning, 22(1), 205–222. doi:10.24059/olj.v22i1.1092

In their research study, Martin and Bolliger issued a 38-question survey to 155 students in a variety of programs at 8 geographically and structurally diverse U.S. universities, asking online learners about engagement strategies in recent courses across three categories: learner-to-learner, learner-to-instructor, and learner-to-content. The researchers sought to answer three questions:

  1. Which strategies do students perceive to be most important in the three categories of engagement?
  2. Which strategies do learners identify as most valuable and least valuable to engaging them in the online environment?
  3. Are there differences in responses based on age, gender, and years of online learning experience?  

These questions led to the following findings:

  • In the learner-to-learner category, in which students feel a dynamic sense of community, students rated introduction discussions, icebreakers, and collaborative projects using online tools as most beneficial. Least beneficial were “virtual lounges” for informal discussions outside of structured class activities.
  • In the learner-to-instructor category, students rated regular announcements, email reminders, timely feedback, and the provision of grading rubrics for all assignments as most beneficial. Reflection activities were rated least favorable for engagement—though this is inconsistent with much prior research on the topic.
  • In learner-to-content—which refers to the intellectual interaction with content, including reading online, watching videos, taking online quizzes, and completing assignments—students rated real-world projects, case studies, and discussions with structured or guiding questions as most beneficial to their engagement. Many students rated synchronous meetings as least beneficial, though this contradicts much prior research on the topic.

Definitions of engagement vary widely in the literature (Halverson & Graham, 2019), and the terms “interaction” and “engagement” are often used interchangeably. Accordingly, the authors chose Moore’s (1993) framework of three kinds of interaction to measure engagement. In presenting their framework, they cite subsequent research specifically in online learning that supports the validity and value of each of the three kinds of engagement. As such, implementing strategies to increase engagement is critical to improving student learning and student satisfaction in online courses, even more so in an age where engagement has dethroned content as king (Banna et al., 2015). Citing a wide body of research, Martin and Bolliger conclude that “Interactivity and sense of community result in high-quality instruction and more effective learning outcomes.”

Participation in this study was voluntary. The research sample was 67% female, and, not surprisingly, the survey respondents were primarily studying education. More than half were graduate students, and all were adults.

Notably, among adult learners, the simultaneously most and least favored activity was online discussion forums. One student commented that online forums felt like busywork, even when well designed. Two other common strategies, synchronous meetings and videos, were rated as most valuable by some and least valuable by others. For videos, this clearly depends on the quality of the video and its applicability to the work; for synchronous meetings, it may be related to students’ reasons for taking online courses in the first place. In particular, those trying to manage an already busy schedule found it counterproductive to have to schedule and attend synchronous meetings. Additional strategies rated very important were a variety of instructional materials (video, readings, web resources, multimedia), structured discussions, and real-world application.

Regarding age, gender, and experience, the only statistically significant results were the following: females appreciated having access to additional resources to explore in more depth; younger students appreciated more frequent check-ins from the instructor; and students with less online learning experience appreciated having more opportunities for online “hangouts,” check-ins from the instructor, and greater variety of content.

Interestingly, the most appreciated strategy across the board was icebreaker activities, followed by collaborative activities using online tools. The least important was the idea of a virtual lounge for informal discussions outside of class—but note that this study was of mostly adult students who likely have busy lives outside of school. It is also important to note that most of the courses reported on in this study were entirely asynchronous.

It is important to note the study’s limitations: a relatively small sample size, the fact that data was self-reported, and the somewhat limited list of strategies included in the survey. Also, the researchers had no control over the design of the courses, their delivery, or the instructors. Despite these limitations, this 2018 study had been cited in 1,462 other works as of 11/8/2023.

The most important aspect of this study confirms what much other research has also shown: Instructor presence is everything. In short: students “want to know that someone ‘on the other end’ is paying attention. Online learners want instructors who support, listen to, and communicate with them.”

SOURCES NOTED:

Banna, J., Lin, M.-F. G., Stewart, M., & Fialkowski, M. K. (2015). Interaction matters: Strategies to promote engaged learning in an online introductory nutrition course. Journal of Online Learning and Teaching, 11(2), 249–261.

Halverson, L.R., & Graham, C.R. (2019). Learner engagement in blended learning environments: A conceptual framework. Online Learning, 23(2), 145-178. doi:10.24059/olj.v23i2.1481

Moore, M. J. (1993). Three types of interaction. In K. Harry, M. John, & D. Keegan (Eds.), Distance education theory (pp. 19–24). New York: Routledge.

The New Literacy Studies and the Resurgent Literacy Myth

ANNOTATION: Graff, H.J. (2022). The New Literacy Studies and the Resurgent Literacy Myth. In: Searching for Literacy: The Social and Intellectual Origins of Literacy Studies. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-96981-3_9

Graff’s chapter in Palgrave Macmillan’s 2022 release Searching for Literacy is a blazing takedown of New Literacy Studies. He claims that much of the research in this area never defines what it means by “literacy,” lacks empirical or theoretical evidence, and disregards the significant research and seminal works in the field, ignoring the rich history of thinking that had its roots in the educational reform movement of the 1960s. Worse, in many cases, the concept of “multiple” literacies today is loosely and baselessly thrown around by corporate interests as a way to sell educational and other types of products.

Graff’s “literacy myth” takes aim at the “unique and innate power of ‘literacy by itself.’” His position is that writers in the “new literacy” rarely define what they mean by the term “literacy.” It is, he writes, a problematic term in that there is no freestanding entity called “literacy,” and “literacy”—and indeed education aimed at producing it—can never be free of context. Literacy is inextricably tied to a value system and to the complex web of conditions associated with a sense of advancement, superiority, and progress or “success,” all of which are culturally and/or socially determined by those with the power to define them. The “myth” he refers to is the belief that “the acquisition of literacy is a precursor to and inevitably results in economic development, democratic practice, cognitive enhancement, and upward social mobility.” The use of the term “myth” is not to suggest that literacy does not lead to advancement (it can, in many but not all cases). Rather, the “myth” is the concept of literacy itself as autonomous, when in fact it is contextual and ideological.

Graff’s definition of literacy rests on the foundations of reading, writing, and sometimes arithmetic. In contrast, the new literacies refer to skills in many and multiple domains. He lists 36 types of literacies found in a simple online search—from reading and writing to data, multimodal, media, civic and ethical, financial, health and medical, and many more. It is problematic, he writes, that the different literacies are rarely compared, interrelated, or evaluated. A sense of chaos results, blurring the lines between “scholarship and education on the one hand, and promotion and sales, on the other.”

The reader is left to ponder the difference between lower-case literacy (a general term used to describe the possession of a specific set of knowledge and skills) and uppercase Literacy, Graff’s “foundational reading, writing, and in some cases arithmetic” definition. Graff’s is a tightly written chapter that makes a great deal of sense, though the reader (at least this reader) is also left to wonder how much of his position is flavored by sour grapes: His own book, The Literacy Myth (1979), was never cited in the 2020 Routledge Handbook of Literacy Studies.

He likes this new Palgrave Macmillan book so much better.

What is engagement and is it different from motivation?

FOCUSED LITERATURE REVIEW #6: “The Relationship Between Student Motivation and Class Engagement Levels”

Nayır, F. (2017). The Relationship between Student Motivation and Class Engagement Levels. Eurasian Journal of Educational Research, 17(71), 59–78.

This post is a little different from previous ones. EDU 811 “Motivation in Online and Blended Learning” requires weekly-ish deep reads of one article, accompanied by a 300-word annotation. The assignment is somewhat similar to EDU 800’s weekly annotations, but these are focused on answering a teacher-led prompt. (Naturally, mine are regularly 600+ words… “I would have written a shorter letter, but I didn’t have enough time.”) Because my writing in these has been of varied “just get it done on time” quality and posting is not required, I haven’t posted them, but I’m going to start revisiting them and will post each one as I get around to revising it.

This week’s article was particularly inspiring to me in that it made clear connections among a variety of theoretical frameworks and clarified many terms commonly used in studies of motivation. So, forgive me, but this entry is kind of list-heavy. For a reason: It helped me organize my thoughts.

Motivation, engagement, and learning are discrete concepts, yet in an educational setting they are mutually interdependent. Motivation is the driving force that spurs students to act; engagement is the observable, behavioral evidence of that motivation;[1] and learning is directly correlated with engagement. The equation is straightforward: To increase learning, increase engagement, and to increase engagement, increase motivation. But how—especially with teens?

An abundance of research indicates that as students advance to higher grades, they become less engaged in school. A Gallup poll found that by high school the share of “engaged” students shrinks to 33%, from 74% in fifth grade (Parrish, 2017), and the research presented in this focused literature review, “The Relationship between Student Motivation and Class Engagement Levels,” confirms this (Nayir, 2017). Nayir’s findings suggest that it is particularly important for high school educators to focus on motivational factors in order to engage students and thus improve learning outcomes.

Nayir’s study involved 500 students in a random sample from public high schools across Ankara, the capital of Turkey. The study set out to determine the relationship between student engagement and motivation using the theoretical framework of Self-Determination Theory (Deci & Ryan, 2000) and student engagement levels as defined by Schlechty (2001). Through a relational study using the Patterns of Adaptive Learning Scales developed by Midgley et al. (2000), the study found that mastery-oriented learning (intrinsic motivation focused on mastering a topic) predicts all levels of classroom engagement, that vocational students are affected more by motivational factors, and that motivation level declines as grade level rises.
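
As a note to myself, a “relational” design like this essentially tests whether motivation scores predict engagement scores. Here is a minimal sketch of that idea in Python; the column names and the simple linear model are my assumptions, not Nayir’s actual procedure.

    # Minimal sketch of a relational analysis: do goal-orientation scores predict
    # classroom engagement? Column names and the model are assumptions, not the
    # paper's actual statistics.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("nayir_style_survey.csv")  # hypothetical: one row per student

    # Predict an engagement score from the three goal orientations measured by
    # the Patterns of Adaptive Learning Scales (PALS).
    model = smf.ols(
        "engagement ~ mastery + performance_approach + performance_avoidance",
        data=df,
    ).fit()

    # A positive, significant coefficient on 'mastery' would echo the finding that
    # mastery orientation predicts engagement.
    print(model.summary())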

Most interesting to me in this study was the author’s theoretical framework setup, which served as a very helpful overview of the connections between motivation and engagement. The author cites research that shows that the more students are engaged in academic activities, the more successful they are.

Engagement has been categorized into three dimensions: emotional, behavioral, and cognitive (Fredricks, Blumenfeld, and Paris, 2004).

  • Emotional. Positive and negative reactions to classmates, student attitude, perception of the value of learning, interest and enjoyment, happiness, sense of belonging at school.
  • Behavioral. Participation, presence, compliance with rules, effort, persistence, concentration, involvement.
  • Cognitive. Learning by choice, investment, self-regulation, goal setting, thoughtfulness, mastery orientation, resiliency, persistence, self-efficacy.

Within each of these categories, the student assigns a value to this engagement, which also can be hierarchically stratified (Schlechty, 2001):

  • Authentic engagement: Students find personal meaning in activities.
  • Ritual engagement: Students do what is required.
  • Passive compliance: Students expend minimum effort to avoid punishment.
  • Retreatism: Students reject learning activities and emotionally disengage.
  • Rebellion: Students reject class activities and substitute them with their own objectives, which may be disruptive.

According to Ryan and Deci (2009), the founders of the widely cited Self-Determination Theory (SDT), motivation is a prerequisite of student engagement in the learning process. According to SDT, motivation emanates from three universal dimensions of human need: competence (the desire to be good at something or adaptive to the environment), autonomy (choice in the matter and self-direction), and relatedness (a feeling of connection and belonging). Motivation can be measured along a hierarchical continuum from amotivation to extrinsic to intrinsic.

  • Amotivation: No value is attributed to actions.
  • Extrinsic: External influences or reward-driven actions.
  • Intrinsic: Enjoyment or interest-driven actions.

These levels of motivation correlate with the five levels of engagement:

  • Intrinsically motivated students show the highest level of engagement, authentic engagement.
  • Extrinsic motivation manifests as ritual engagement, passive compliance, and retreatism.
  • Amotivation typically leads to rebellion.

What these frameworks reveal is, again, that the best way to increase learning is to seek ways to increase intrinsic motivation. Ryan and Deci’s (2002) work suggests that attributing meaning to learning is Job 1 for motivation, and thus engagement, and thus learning. They further suggest that goal orientation is critical toward this end, which is confirmed in research from Midgley et al. (2000), who studied goal orientation and described three kinds:

  • Mastery goal orientation. Individuals have self-efficacy, are aware of their strengths, and believe in their ability to succeed… and they want to.
  • Personal performance-approach goal orientation. These individuals compare themselves to others and are motivated by competition.
  • Personal performance-avoidance goal orientation. Individuals try to hide their failures, fear mistakes, and expect very little success.

Not surprisingly, much research supports a strong connection between intrinsic motivation and mastery goal orientation. Nayir’s study and the research it cites show that offering mechanisms to spur intrinsic motivation is job #1 to improve learning, especially with high school students.

Citations:

The article cites many excellent resources on the topics of motivation and engagement. The list below indicates the most significant among them to this particular focused literature review.

Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74, 59–109.

Midgley, C., Maehr, M., Hruda, L., Anderman, E., Anderman, L., Freeman, K., et al. (2000). Manual for the patterns of adaptive learning scales. Ann Arbor, MI: University of Michigan.

Parrish, N. (2017, November 22). To Increase Student Engagement, Focus on Motivation. Edutopia. https://www.edutopia.org/article/to-increase-student-engagement-focus-on-motivation

Ryan, R. M., & Deci, E. L. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25, 54-67. http://dx.doi.org/10.1006/ceps.1999.1020

Ryan, R. M., & Deci, E. L. (2009). Promoting self-determined school engagement: Motivation, learning, and well-being. In K. R. Wentzel & A. Wigfield (Eds.), Handbook of motivation at school (pp. 171–196). New York: Routledge.

Schlechty, P. C. (2001). Okulu yeniden kurmak (Çev. Özden, Y., 2012). Ankara: Nobel Yayıncılık.

Schlechty, P. C. (2002). Working on the work. San Francisco, CA: Jossey-Bass.


[1] Schlechty’s research would indicate otherwise.

Self-determination theory: An approach to motivation in music education

ANNOTATION: Evans, Paul (2015). Self-determination theory: An approach to motivation in music education. Musicae Scientiae, 19(1), 65–83. doi: 10.1177/1029864914568044.

Though numerous motivational models have been previously applied to understand motivation in music learning, no theoretical framework has been universally accepted. In this article, Evans provides an argument for Self-Determination Theory as an ideal theoretical model for understanding and describing why students take up an instrument, how they persist through the many challenges encountered over the long time required to acquire competence, and how they either achieve success or quit.

Deci and Ryan’s widely cited self-determination theory (SDT) suggests that motivation arises from a tendency towards personal growth and a unified sense of self (Maslow’s “self-actualization” comes to mind), supported by three universal psychological needs: competence, autonomy, and relatedness. Evans provides an SDT-based conceptual overview of motivation in music learning by presenting a wide variety of research projects that have applied it successfully to issues in music education.

His conclusion is that many of the typical ways that teachers and parents have encouraged their students and children to practice are misguided, in that they use external reward/punishment, coercion, excessive praise, and competition, all of which have been shown repeatedly in research to be demotivating over the long term. The best solution, he writes, is for parents and teachers to create social environments in music where students are more apt to generate their own interest and enjoyment by identifying the value of musical practice, aligning it with their sense of self, and finding intrinsic motivation in music making for the enjoyment of the activity in and of itself.

This article is worthy of a deeper read and would serve as a helpful launch point for further study, as its theoretical framework and literature review point to a rich treasure trove of research on motivation in instrumental education. The most interesting ideas I culled from this confirmed my own observations as a teacher:

  1. Achievement “star charts” are demotivating for many students, and for the highest-achieving “star gatherers,” the motivation to continue declines rapidly the moment the final star is earned. I witnessed over and over that young boys especially were eager to reach the final star at the end of the chart, and then their interest dropped almost immediately once they had it. Likewise, students who did not achieve as quickly typically dropped out, not because they were not interested in music but because their relatively slower pace on the star chart made them question their self-efficacy.
  2. Kindness matters in early music education. Under the SDT motivator of “relatedness,” Bloom (1985) found three stages in teacher-student relationships: in the early years, students enjoy and thus persist with teachers whose lessons are fun, informal, and enjoyable; in the middle years, with teachers who are slightly more demanding; and in the later years, with teachers who hold much higher standards and engage with the student in a shared pursuit of mastery.
  3. Students who could choose their repertoire typically showed higher motivation.

Evans’ application of self-determination theory to music learning has some implications for teaching an instrument in an online context. Because motivation is so central to instrument learning in general, online instrumental teachers might consider the following: 1) pay triple attention to activities that can build relationships to support the “relatedness” factor, 2) provide support and carefully scaffolded lessons (not just effusive praise) to continually build students’ confidence through their growing sense of competence, and 3) provide choice, self-direction, and autonomy in creative projects.

Understanding feedback in online learning—A critical review and metaphor analysis

ANNOTATION: Jensen, L. X., Bearman, M., & Boud, D. (2021). Understanding Feedback in Online Learning: A Critical Review and Metaphor Analysis. Computers & Education, 173, 104271. doi.org/10.1016/j.compedu.2021.104271

Jensen et al.’s paper is a critical review of online learning research that explores the way researchers define and conceive of the concept of “feedback” in e-learning. The researchers completed a qualitative analysis of the language used to describe feedback in four leading research journals and identified six discrete meanings or “understandings” of the term based on what they refer to as conceptual metaphors. Metaphors are helpful in understanding complex concepts in that they provide a simpler, more concrete representation of a term that is by nature abstract and complex. However, in simplifying a complex concept, metaphors create conceptual entailments—that is, they constrain a researcher’s view of a concept, because the metaphor used to approximate a thing shapes how that thing is seen.

To complete their work, they analyzed 17 articles published between January 2017 and February 2019 in four of the leading journals in e-learning: Computers & Education, British Journal of Educational Technology, Journal of Computer Assisted Learning, and The Internet and Higher Education. They identified six dominant metaphors to help organize the great disparity that exists in the numerous applications of the term “feedback.”

The six metaphors for feedback that they identify are:

  1. Feedback is a treatment. (11/17 papers) Feedback serves as an intervention and learning improvement is an effect caused by feedback.
  2. Feedback is a costly commodity. (5/17 papers) Feedback is positioned as time-consuming and burdensome on faculty.
  3. Feedback is coaching. (7/17 papers) The main purpose of feedback is to motivate learners.
  4. Feedback is a command. (5/17 papers) Feedback is controlling and directive.
  5. Feedback is a learner tool. (7/17 papers) Here, agency lies with the learner to take the feedback and apply it to further learning.
  6. Feedback is a dialogue. (6/17 papers) The most in line with contemporary thinking on the value of feedback, here feedback is a productive discussion between the learner and peers or the instructor, and the learner then applies the feedback to improve performance.

The findings of this study indicate that only the last two of the six feedback metaphors used in the research relate specifically to known learner-centered best practices. The first four metaphors reflect feedback practices that are considered inappropriate among researchers because they position the instructor as the main agent in the feedback process and assume that this feedback automatically leads to learning.

Overall, they found little agreement among instructors and students about what the purpose of feedback is. They conclude that if researchers are attempting to improve practice for educators and learners, they need to be clear about their definition of feedback in order to be specific about the effect of that feedback and about any improvements that need to be made as a result. Researchers also need to focus their work on the kinds of feedback that are widely considered to be good feedback practices.

The Effect of Individualized Online Instruction on TPACK Skills and Achievement in Piano Lessons

ANNOTATION: Kaleli, Y. S. (2021). The Effect of Individualized Online Instruction on TPACK Skills and Achievement in Piano Lessons. International Journal of Technology in Education 4(3), 399-412. doi.org/10.46328/ijte.143

This study worked with a control group and an experimental group of music education majors to compare student learning outcomes and performance in online piano instruction, using a pre-test and a post-test given before and after 20 hours of piano instruction delivered over 10 weeks.

The work was conducted during the pandemic, in response to the rapid transition to online learning. According to the author, that transition brought to light just how little some universities know about effective online learning, especially in arts courses. Further, the author wrote, as the pandemic wound down, it became clear that online learning is here to stay and there is still not enough understanding of the best ways to teach online. The author also pointed to a number of studies arguing that “the positive contribution of technology to education is undeniable.”

This particular study of piano learning was conducted with 30 music education majors at a four-year university in Turkey around 2020. Piano skills are required of all music education majors, and the author suggested that a particular challenge with music learning is that, traditionally, teachers teach private lessons in a highly individualized style that suits the learning needs, level, and pace of each individual student. Thus, any system to teach music should be designed with individualization in mind, but current online learning models typically are not.

The experimental group received one hour of online education and one hour of face-to-face instruction, while the control (distance learning) group received no face-to-face instruction. The material covered was the same in both groups, reflecting the first-level national standards for piano skills: knowledge of C major and A minor, four octaves, major and minor sounds, a few simple cadences, and staccato and legato articulation. Overall, the study showed that the experimental group, which combined some online learning with face-to-face learning, performed far better on a Piano Lesson Achievement Test post-test. The researchers also used a TPACK model of self-reported measures of self-efficacy and technological skills, determined via 47 items on a 5-point Likert scale post-test. The study defined the TPACK (Technological Pedagogical Content Knowledge) model as “dynamic, procedural integration knowledge between technology, pedagogy and content, and how this interaction affects student learning in the classroom.” [sic; the article’s translation was not well edited in English.] As such, the study also sought to determine whether technological training in combination with pedagogical and content knowledge would yield better student outcomes.
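
For my own reference, the core comparison here boils down to two groups and a pre/post test; below is a minimal sketch of that kind of analysis, assuming a simple gain-score t-test. The file, column names, and test choice are my assumptions, not necessarily what Kaleli reports.

    # Minimal sketch, not the author's code: compare pre-to-post gains between the
    # blended (experimental) group and the distance-only (control) group.
    import pandas as pd
    from scipy.stats import ttest_ind

    df = pd.read_csv("piano_achievement.csv")  # hypothetical: one row per student

    blended = df[df["group"] == "experimental"]   # online + face-to-face
    distance = df[df["group"] == "control"]       # online only

    # Use gain scores (post minus pre) so differing starting points are accounted for.
    gain_blended = blended["posttest"] - blended["pretest"]
    gain_distance = distance["posttest"] - distance["pretest"]

    t, p = ttest_ind(gain_blended, gain_distance)
    print(f"t = {t:.2f}, p = {p:.3f}")  # a small p-value would support the reported advantage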

There are a few issues that limit the reliability of the study results. For one, the paper does not clearly define the difference between online education and distance education, so one can only assume from the various mentions of it that “distance education” meant teaching entirely via Zoom, as the author referred to online video conferencing apps as the means by which distance education is conducted. Second, the paper does not include any description of the instructional design of the online portion of the course, but since the same design was used for both the control and experimental groups, it may not be all that significant that this was not mentioned. Though there is a lack of clarity on many terms, the study does support the conclusion that a combination of asynchronous and synchronous learning produced better results than entirely asynchronous online learning—a finding that has been reported in a wide variety of previous studies in other fields, as noted by the author.

“Using Learner Reviews to Inform Instructional Video Design in MOOCs”

ANNOTATION: Deng, R., & Gao, Y. (2023). Using Learner Reviews to Inform Instructional Video Design in MOOCs. Behavioral Sciences, 13(4), 330. https://doi.org/10.3390/bs13040330

Video is the most common instructional resource in MOOCs, and many studies have been completed to determine students’ impressions and preferences. However, according to the authors, most of these have been small studies, so these authors set out to analyze learner behavior at Chinese University MOOC, which offers more than 10,000 courses from 700+ Chinese institutions of higher education. The goal of the research was to characterize which video features higher-ed learners favored most, which types of resources learners valued as supplements or in-video features, and which video production features learners liked best. The conclusions were that learners favored videos that were, in order: organized, detailed, comprehensible, interesting, and practical. They deemed slides, readings, post-video assessments, questions embedded within the video, and case studies helpful. And finally, LENGTH of video was learners’ most important factor for video satisfaction, more important than editing quality, video resolution, presence of subtitles, music, or voice.

The researchers collected data from 1,648,747 MOOC reviews in fourteen categories that included courses in sciences, humanities, law, the arts, sports, and psychology. Learners provided ratings and text-based reviews, all of which were publicly available. Numerous entries were excluded from the data for a variety of reasons, such as being too brief, unclear, or generic. The final data set was reduced to 4,534 reviews, and these were analyzed using MAXQDA Analytics software. The authors used a grounded approach, allowing themes to emerge during the data analysis rather than imposing a theoretical framework on the body of data. Two teams of knowledgeable researchers coded the data separately and cross-referenced results to adjust for discrepancies in coding.
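
The two-team coding process caught my attention; purely as an illustration, agreement between two coders is often quantified with Cohen’s kappa, sketched below. The labels are hypothetical, and the authors describe cross-referencing codes rather than reporting this particular statistic.

    # Illustrative only: quantifying agreement between two coding teams with
    # Cohen's kappa. The example labels are hypothetical; the study describes
    # cross-referencing of codes, not necessarily this statistic.
    from sklearn.metrics import cohen_kappa_score

    team_a = ["organized", "detailed", "interesting", "practical", "organized"]
    team_b = ["organized", "detailed", "practical", "practical", "organized"]

    kappa = cohen_kappa_score(team_a, team_b)
    print(f"Cohen's kappa = {kappa:.2f}")  # values near 1.0 indicate strong agreement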

Some key takeaways suitable for the development of educational video in any online asynchronous model:

  • Organization of information is highly valued, above all. Learners appreciated carefully designed videos with adequate amounts of detail and clear organizational structure that is easy to follow and to understand.  Learners want to understand the logical connections between key concepts. According to the authors, “Our analysis suggest that learners likely prefer shorter videos due to the prominence of key points and their attentional state being optimized.”
  • “Interesting” videos are those that are fun to learn from, and students appreciate humor, storytelling, and other engagement tactics.
  • Practicality matters: learners seek information that is relevant to their present life and work.
  • Length of video is critical. Learners prefer short videos because that allows the key points to stand out. The researchers suggested that, often, shorter videos mean that sufficient pre-production time was devoted to clarifying the content and no information was extraneous. Learners felt that they could stay focused more readily in shorter videos and could fit short-video viewing into pockets of time they had available for their online studies.

Having read an article on video-watching behavior in EdX MOOCs from 2015, I sought the most current study I could find to determine whether viewing behavior in MOOCs had changed much since the advent of short-form video. I found this study and discovered that its findings were very much in line with EdX’s 2015 conclusions. However, this study was done with MOOCs in China, so it is unclear how much those viewers may have been affected by current American video-watching trends among young audiences, including TikTok and Instagram Reels. More research is needed.

“How Video Production Affects Student Engagement: An Empirical Study of MOOC Videos”

ANNOTATION: Philip J. Guo, Juho Kim, and Rob Rubin. 2014. How video production affects student engagement: an empirical study of MOOC videos. In Proceedings of the first ACM conference on Learning @ scale conference (L@S ’14). Association for Computing Machinery, New York, NY, USA, 41–50. https://doi.org/10.1145/2556325.2566239

This study evaluated engagement via student behaviors while watching video within edX® MOOCs. The authors used data from 6.9 million video-watching sessions across four edX® courses: Intro to Computer Science and Programming (MIT), Statistics for Public Health (Harvard), Artificial Intelligence (Berkeley), and Solid State Chemistry (MIT). As of 2014, when this study was published, it represented the largest-scale study of video engagement to date. The goal was to measure engagement, defined by two conditions: engagement time and problem attempt (whether the student completed an assessment question after the video). The authors noted that they did not measure true engagement, which is difficult to measure because it would require direct observation and questioning to determine, for example, whether a student was actively watching an entire video or playing it in the background while multitasking.

Data was gathered using a mixed methods approach, combining quantitative data from video-watching sessions in the four courses with qualitative data from interviews with six edX® staff who were part of the production team for those courses. The quantitative data included start and end times, video play speed, the number of times the student pressed play/pause, and whether the student completed an assessment question after watching the video. The researchers also looked at video properties such as length, speaking rate of the instructor, video type (lecture or tutorial), and production style (slides, code, Khan-style whiteboard, live classroom, studio, office desk). The six staff members interviewed included four video experts at edX®, as well as two program managers who coordinated between edX® and university faculty.
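
To make the two engagement measures concrete for myself, here is a minimal sketch of how they might be computed from session logs; the field names and aggregation choices are hypothetical, not edX®’s actual schema or the authors’ code.

    # Minimal sketch: compute engagement time per session and a per-video
    # problem-attempt rate from hypothetical session-log fields.
    import pandas as pd

    sessions = pd.read_csv("video_sessions.csv")  # one row per video-watching session

    # Engagement time: elapsed time between session start and end.
    sessions["engagement_time_sec"] = (
        pd.to_datetime(sessions["end_time"]) - pd.to_datetime(sessions["start_time"])
    ).dt.total_seconds()

    # Aggregate per video: median engagement time and problem-attempt rate.
    per_video = sessions.groupby("video_id").agg(
        median_engagement_sec=("engagement_time_sec", "median"),
        problem_attempt_rate=("attempted_problem", "mean"),  # boolean column assumed
        video_length_sec=("video_length_sec", "first"),
    )

    # Normalize by video length so short and long videos can be compared.
    per_video["fraction_watched"] = (
        per_video["median_engagement_sec"] / per_video["video_length_sec"]
    )
    print(per_video.sort_values("fraction_watched", ascending=False).head())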

The results provide a helpful guide to producing video for any asynchronous course. The findings are as follows:

  1. Shorter videos are more engaging, with a target of six minutes. Videos of nine minutes or longer were rarely watched more than 50% through.
  2. Videos that intersperse the author’s “talking head” along with slides were more engaging than slides alone.
  3. Videos that had a more personal feel, such as a low-production personal phone camera, can be more engaging than high-production-value shoots.
  4. Khan-style live whiteboard type of videos are more engaging than static slides with narration. Motion and continuous visual flow are engaging.
  5. Classroom lectures, even when specifically recorded for a MOOC, were not engaging. Chopping them up into well-planned parts did not make a significant difference.
  6. Students are highly engaged by instructors who speak somewhat fast and have high enthusiasm.
  7. Students engage differently with lecture videos than with tutorial videos, so for lectures, focus on the first-watch experience, but for tutorials, include support for rewatching and skimming.

It would be well worth looking at similar data from humanities- and arts-focused courses at edX®, Coursera, or other large-scale MOOCs to determine whether students of non-scientific domains exhibit similar behavior and tendencies. Also, it would be telling to see whether younger viewers’ video-watching behavior in an online educational setting has or has not been affected by the revolution in short-form video such as TikTok and Instagram.