Improving Student Peer Feedback Through Instructional Design
Introduction
The ability of students to effectively evaluate themselves and their peers through written feedback is a valuable skill that carries into full-time employment (Lal, 2020). Activities in which students provide feedback are commonly used in higher education (Huisman et al., 2018), especially in team-based projects, because “peers bring diverse perspectives, experiences and insights” (Jiang & Ironsi, 2024), which could, in turn, improve performance in the specific content domain (Sluijsmans et al., 2002). Providing quality feedback to peers can help improve group performance, address issues before they escalate, and improve motivational outcomes (Li et al., 2021). However, students’ ability to provide quality feedback is often lacking, making training pivotal in developing that ability (Gyamfi et al., 2024; Sluijsmans et al., 2002; Topping, 1998; van Zundert et al., 2010).
Many studies understandably focus on empirical evaluations of training to improve student peer feedback (see Gyamfi et al., 2024; Prilop & Weber, 2023; van Blankenstein et al., 2021), but very few demonstrate or explain the design decisions behind the training interventions themselves. This paper examines the literature on peer feedback and the factors that influence it, followed by an overview, description, and analysis of two training interventions implemented to improve peer feedback in a university engineering capstone course.
Literature Review
Feedback and Learning Outcomes
Feedback offers numerous benefits for improving individual performance, quality of work, and overall career growth (Lal, 2020), and is common in the workplace, often in the form of regular performance reviews in which a supervisor provides direct feedback to an employee. Peer feedback is a common activity in higher education (Huisman et al., 2018) and can be used to improve teamwork, resulting in higher grades and higher-quality work (Planas-Lladó et al., 2020). Peer feedback can be a valuable tool for instructors to address issues in team dynamics or the project before problems become unmanageable, which can improve learning processes and outcomes (Greisel et al., 2025) and potentially positively affect students’ performance in the content domain (Sluijsmans et al., 2002). In addition to these benefits, peer assessment and feedback allow students to receive more immediate feedback than they would if they had to wait for the instructor to provide feedback to all students individually (Liu & Carless, 2006; Topping, 2009).
Peer feedback fosters social skills between peers, offering an opportunity to learn from each other, as social cognitive theory suggests (Locke & Bandura, 1987). Liu and Carless (2006) suggested that the social aspect of peer assessment is beneficial because “learning is likely to be extended from the private and individual domain, to a more public (i.e., to one or more peers) domain” (p. 281), bringing more opportunities for all students in a group to learn from each other both in the specific domain (i.e., engineering) and in the soft skills of teamwork, time management, and communication.
Barriers to Peer Feedback in the Classroom
Despite the benefits of peer feedback mentioned above, several factors affect the adoption of peer feedback in the classroom (Adachi et al., 2018). One common concern among instructors is the reliability of the feedback students provide: because instructors do not view students as experts in the particular course content domain, they doubt that students can provide quality feedback (Liu & Carless, 2006). Gyamfi et al. (2024) similarly noted, “if peers provide inaccurate feedback, it may misguide the recipients, hinder their learning, and strain relationships between students” (p. 1420). Conversely, students may feel uneasy about providing feedback because they believe they lack the required knowledge or expertise, or they fear that providing feedback will make their peer(s) more successful than them.
Student attitude toward giving and receiving feedback poses another barrier to quality feedback. If students are going to accept feedback, they “must be receptive and accurately understand the meaning and veracity of the feedback” (Fulham et al., 2022, p. 1). Greisel et al. (2025) found that attitudes toward feedback were not associated with feedback quality; what did affect the adequacy of the feedback was how receptive students were to it and how much they valued peer feedback as a skill. A student’s sense of self can also shape their attitude toward feedback: students with negative self-perceptions may feel that positive feedback is undeserved, while students who are overconfident in their abilities may feel attacked by feedback critical of those abilities (Fulham et al., 2022). Training covering the importance and value of peer feedback is therefore just as important as training on how to write quality feedback.
Providing and Receiving Quality Feedback
Many of the downsides of peer assessment activities can be lessened by ensuring students can provide quality feedback, and several factors affect that quality. Gyamfi et al. (2024) noted that various studies treat the length of feedback as a good indicator of its quality, but cautioned that length may not be the most accurate metric. In contrast, Yu and Schunn (2023) found that “learning benefits are more closely associated with providing feedback rather than receiving it, and are influenced more by the length of peer feedback rather than its helpfulness” (p. 1). Though Yu and Schunn (2023) tied learning benefits most closely to providing feedback, Strijbos et al. (2010) found that the provider’s ability and level of expertise can affect perceived quality, noting that “feedback from a person with a high level of expertise is assumed to be perceived as more positive than from a person with low expertise” (p. 293). Aside from the perceived usefulness of the feedback, individual student motivation may play a role in feedback quality, although it may not be as important as previously thought (Greisel et al., 2025).
Providing quality feedback is only one side of the equation. Students must also implement the feedback they receive, but “implementation of feedback comments is likely to depend on how the feedback message is perceived” (Greisel et al., 2025, p. 2). If the feedback is too vague, such as “Great job,” there is very little practical or actionable information for the recipient. Conversely, if the feedback is too lengthy and detailed, the receiver may feel intimidated and fail to read or implement it. Van Blankenstein et al. (2021) suggested that students provide better feedback when presented with a model or principles of good feedback. Therefore, students need a model or framework to reference in order to provide quality feedback.
Improving Feedback Quality
The quality of feedback significantly impacts how the recipient utilizes it: recipients of low-quality feedback do not benefit as much as they would from high-quality feedback (Daou et al., 2020; Greisel et al., 2025). Exposing students to high-quality feedback and training them in giving and receiving feedback can therefore yield significant benefits in the classroom. For example, Sluijsmans et al. (2002) investigated an intervention to enhance pre-service teachers’ peer feedback and found that the training produced positive results in students’ development of peer assessment skills. Gyamfi et al. (2024) found that training positively impacted the quality of feedback, with individuals who received the training more likely to utilize the evaluation criteria and provide constructive comments than those in the control group.
Providing Feedback Using CATME
Several studies showcasing the advantages of feedback training have been mentioned earlier, but what does a peer feedback system entail in a higher education classroom? One such system, used at more than 2,600 institutions worldwide (CATME User Institutions, n.d.), is the Comprehensive Assessment of Team Member Effectiveness (CATME) instrument, which measures individual team members’ contributions to the team. Loughry et al. (2007) developed CATME by reviewing the literature on effective teamwork and conducting interviews with college students, producing a pool of 218 items related to effective teamwork. Exploratory and confirmatory factor analyses were then employed to develop a final instrument comprising 87 items that measure 29 teamwork behaviors (see Ohland et al., 2012).
These 29 teamwork behaviors are grouped into five dimensions, which form the core of the CATME tool: contributing to the team’s work, interacting with teammates, keeping the team on track, expecting quality, and having relevant knowledge, skills, and abilities. The five dimensions are highly intercorrelated (r = .76), and a composite index representing the mean rating across all five dimensions shows high reliability (α = .94) (Ohland et al., 2012). The focus of this project was the CATME rubric, which includes a performance scale for each of the five dimensions. Students use this rubric to evaluate their peers, providing a numerical rating along with written feedback based on the criteria outlined in the rubric (Peer Evaluation - CATME, n.d.).
Context
The context for this intervention is a senior-level capstone course at a research-intensive university in the Midwest region of the United States. This capstone course is mandatory for undergraduate students in the School of Engineering Technology and consists of a two-course sequence offered in the Fall and Spring semesters, referred to as cohorts. Typically, the Fall cohort includes approximately 250 students, while the Spring cohort consists of 50 to 80 students. In this course, students are organized into teams (using CATME Teammaker) and collaborate with faculty members and industry mentors to tackle problems presented by program partners.
CATME has been used for student peer evaluations in this course since 2016. However, the lead instructor found that students frequently assessed themselves and their peers inaccurately, often giving either all fives or all low scores. To address this issue, the course has implemented various interventions to enhance the accuracy of these assessments (Berry et al., 2021, 2024; Huang et al., 2022).
In 2020, when the lead instructor collaborated with the research team to identify ways to enhance students' learning experiences, it became evident that students struggled to provide written feedback for peer assessments. This discovery presented an opportunity to create an intervention that assists students in acquiring essential professional skills for providing constructive, detailed, and impactful feedback on peer evaluations, aligning with CATME’s five dimensions.
Training Design and Interventions
The literature suggests that training for providing feedback (Gyamfi et al., 2024; van Blankenstein et al., 2021) along with improving student motivation for feedback reception (Greisel et al., 2025) are critical to improving the quality of feedback students provide. CATME offers peer evaluation practice exercises (Teamwork Training Toolbox - CATME, n.d.), but they are solely focused on the numerical rating for each CATME dimension and not on the written feedback students provide to team members. Therefore, to fill this gap, the research team developed training for students using the CATME system in a year-long senior-level capstone course. This section will review the content of the training and its evolution over time.
Face-to-Face Training
The face-to-face CATME training was initially designed and implemented in the 2021 Fall cohort. It covered four main topics: 1) an introduction to CATME and its five dimensions; 2) an overview and discussion of “The Five Components of a Helpful Recommendation” (see Figure 1), adapted from Quality Matters (2012); 3) sharing actual examples of students’ CATME written feedback (see Figure 2); and 4) evaluation and discussion of these examples using CATME’s five dimensions and the QM components. The topics included in this training were based on discussions with the lead instructor of the capstone course. Additionally, the team decided to implement the training one week before the students’ first peer assessment, a deliberate choice aimed at helping students apply the insights gained in the workshop to the peer assessment.
Figure 1
The Five Components of a Helpful Recommendation

Figure 2
Actual Written Feedback Examples

The training lasted approximately 20 to 25 minutes, during which all students actively participated in the activities. They particularly thrived during discussions and evaluations of real feedback examples, articulating their thoughts on the feedback and suggesting enhancement ideas. Moreover, they discovered that the five components of helpful recommendations provided a valuable framework for giving peer feedback.
Feedback from the lead instructor and student group mentors indicated that the training significantly impacted students’ self-assessment and peer assessment written feedback. Our data analysis (Huang, Wynkoop, et al., 2022) demonstrated that students’ written feedback entries were better aligned with the CATME dimensions compared to the previous year. The team also identified the need to incorporate a team-building activity to enhance student interactions, particularly during the challenges posed by COVID-19. Consequently, a team-building activity, the Marshmallow Challenge (Wujec, 2010), was added to the end of the training when it was implemented in the Fall 2022 cohort. The Marshmallow Challenge gives participants 20 sticks of spaghetti, one yard of tape, one yard of string, and one marshmallow, and asks them to build the tallest structure with the marshmallow on top within 18 minutes. Figure 3 features a collage of photos capturing how students collaborated to construct a tall tower. It was a fun and engaging experience for the students.
Figure 3
Photos of Marshmallow Challenge

E-learning Module
The successful implementation of the face-to-face training led the team to explore the creation of an asynchronous e-learning module designed in Articulate Storyline. The module replicated the face-to-face training content, incorporated interactivity to keep students engaged, and provided opportunities to practice identifying quality feedback. The module covered three objectives:
Summarize the CATME dimensions
List characteristics of good feedback
Identify quality feedback
The module was integrated into undergraduate and graduate-level courses. The undergraduate course was conducted in person, but the module was completed asynchronously. The graduate-level course was entirely online and asynchronous.
Figure 4
Opening Page of the E-learning Module

Videos
The module integrates videos produced by CATME that explain each of the five dimensions (Figure 5). These videos are approximately one minute each and must be viewed to proceed through the module; we required viewing because each video provides an in-depth explanation of its dimension and the associated criteria. CATME hosts the videos on a YouTube channel. Each video is embedded in the slide and appears as a pop-up, so students can view it within the module or on YouTube, depending on their preference. Each video button changes to black to indicate which videos have been viewed.
Figure 5
Overview of CATME Dimensions in the E-learning Module


Interactivity
The module offers learners opportunities to engage with the training, thereby enhancing their attention and motivation. For example, the first slide of the training asks learners to enter their name (Figure 6); the name is then used throughout the training to add personalization.
Figure 6
Name Entry Page

Resources
A key component of providing good feedback in CATME is continually referring to the Teamwork Rating Scale rubric. Students are encouraged to refer to the rubric as they provide feedback to their peers and use language directly from it when offering feedback. Since the rubric is crucial for students to provide feedback, they must download a copy before proceeding with the module (Figure 7). The rubric is used later in the module for an activity identifying the quality of feedback.
Figure 7
CATME Rubric Download Page

Quality Feedback Practice
The core of this module is the chance to identify quality feedback using actual examples from comments provided by students in prior semesters. The designers analyzed over 2,000 individual entries to find examples of high-quality feedback (Figure 8) and less effective feedback (Figure 9). Students are presented with the feedback and shown how it aligns with the CATME dimensions. They then have the opportunity to practice identifying quality feedback.
Figure 8
Example of High-Quality Feedback Presented to Students

Figure 9
Examples of Unhelpful Feedback Provided by Students

Figure 10
Sample Question to Practice CATME Alignment with Feedback

Results
Analysis of student feedback from Fall 2019 (without intervention) compared to Fall 2021 (with intervention) revealed an increase in feedback aligning with each of the CATME dimensions, as shown in Table 1. Beyond the increase in mentions of each dimension, it is notable that the share of comments that were “Too General to Know” (meaning the comment did not mention any CATME dimension) decreased from 34.10% to 19.00% with the intervention.
Table 1
Feedback Alignment with CATME With and Without Intervention
| CATME Dimension | Fall 2019 (without intervention) | Fall 2021 (with intervention) | Percent Change |
|---|---|---|---|
| Contributing to Team’s Work | 46.40% | 74.60% | +60.78% |
| Interacting with Teammates | 25.10% | 28.80% | +14.74% |
| Keeping the Team on Track | 11.10% | 20.20% | +81.98% |
| Expecting Quality | 2.70% | 4.50% | +66.67% |
| Having Related Knowledge, Skills, and Abilities | 12.20% | 17.30% | +41.80% |
| Too General to Know | 34.10% | 19.00% | -44.28% |
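As a sanity check, the Percent Change values for the five dimensions can be reproduced directly from the two cohort columns. A minimal sketch (percentages transcribed from Table 1; the helper name is ours):

```python
def percent_change(old: float, new: float) -> float:
    """Relative change between two cohort percentages, rounded to 2 decimals."""
    return round((new - old) / old * 100, 2)

# (dimension, Fall 2019 %, Fall 2021 %), transcribed from Table 1
rows = [
    ("Contributing to Team's Work", 46.40, 74.60),
    ("Interacting with Teammates", 25.10, 28.80),
    ("Keeping the Team on Track", 11.10, 20.20),
    ("Expecting Quality", 2.70, 4.50),
    ("Having Related Knowledge, Skills, and Abilities", 12.20, 17.30),
]

for name, old, new in rows:
    # e.g. "Contributing to Team's Work: +60.78%"
    print(f"{name}: {percent_change(old, new):+.2f}%")
```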
Student feedback also became more robust, as shown in Table 2. The proportion of feedback containing no mention of a CATME dimension decreased after the intervention, and the proportion of feedback mentioning two or more dimensions increased. Most notably, no individual piece of feedback encompassed all five CATME dimensions in Fall 2019, whereas 0.64% did following the intervention. This suggests the training provided tangible ways for students to provide more robust feedback when using CATME.
Table 2
Number of Dimensions Mentioned in Student Feedback
| Number of Dimensions Mentioned | Fall 2019 (without intervention) | Fall 2021 (with intervention) |
|---|---|---|
| 0 | 34.10% | 19.12% |
| 1 | 40.20% | 35.10% |
| 2 | 20.50% | 30.84% |
| 3 | 4.60% | 11.81% |
| 4 | 0.60% | 2.49% |
| 5 | 0.00% | 0.64% |
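The shift toward multi-dimension feedback can be summarized as a cumulative share. A small sketch (distribution transcribed from Table 2; the helper name is ours) computing the share of entries that mention at least two dimensions:

```python
# (dimensions mentioned, Fall 2019 %, Fall 2021 %), transcribed from Table 2
dist = [
    (0, 34.10, 19.12),
    (1, 40.20, 35.10),
    (2, 20.50, 30.84),
    (3, 4.60, 11.81),
    (4, 0.60, 2.49),
    (5, 0.00, 0.64),
]

def share_at_least(k: int, col: int) -> float:
    """Percentage of feedback entries mentioning at least k dimensions.

    col=1 selects the Fall 2019 column, col=2 the Fall 2021 column.
    """
    return round(sum(row[col] for row in dist if row[0] >= k), 2)

print(share_at_least(2, 1))  # Fall 2019 share of entries citing 2+ dimensions
print(share_at_least(2, 2))  # Fall 2021 share of entries citing 2+ dimensions
```

By this summary, the share of entries citing two or more CATME dimensions roughly doubled, from 25.7% in Fall 2019 to 45.78% in Fall 2021.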
Affordances and Limitations of the Designs
Face-to-face Training
One limitation of face-to-face training is its limited reach to the many institutions, instructors, and students who use CATME. It would be possible to create standardized in-person training for instructors to implement in their courses. However, the face-to-face training utilized an entire class session, and some instructors may not be willing to give up that time, particularly in a hands-on capstone course. Furthermore, if the training were implemented at institutions worldwide, maintaining fidelity could be challenging, potentially reducing the effectiveness of the training if it is not implemented correctly.
The Marshmallow Challenge team-building activity is a key component of the in-person training. The in-person training was implemented early in the first semester of the capstone course, providing a first opportunity for student teams to work together and begin to understand each other’s leadership and communication styles. Team members need to begin understanding how each other works and interacts so that they have context on which to base their feedback. For example, if one team member observes that another is dominant and takes charge of everything during the Marshmallow Challenge, it may help them understand why that team member behaves in a certain way when working on the capstone project. This understanding could inform the type of feedback provided.
E-learning Module
The e-learning module can address some of the limitations of the in-person training, but it has limitations of its own. The asynchronous module provides the same content as the in-person training except for the hands-on Marshmallow Challenge. Though the e-learning module cannot replace the team-building activity, it still provides an engaging way to present critical information and core concepts about quality feedback when using CATME, which is the primary goal of both trainings.
The e-learning module can be added as a SCORM package to many learning management systems, allowing the instructor to make it a graded assignment and track completion. The asynchronous nature of the module means students can complete it outside of the classroom, reducing the need to use an entire class period for feedback training. Like the in-person training, the module includes opportunities to identify quality feedback; in addition, it allows students to revisit and review the content at any time.
A Hybrid Option
Despite the differences in affordances and limitations of the designs, they can be implemented together, particularly in the context in which they were first implemented. Utilizing the in-person training early in the first semester introduces the teams to CATME, explores the criteria of quality feedback, and allows students to begin developing their team through the Marshmallow Challenge. Then, because the course spans one school year (Fall and Spring semesters), the e-learning module could be implemented as an asynchronous assignment to refresh students’ memories of quality feedback and the CATME dimensions.
Both designs can be packaged and distributed to institutions that use CATME, which includes over two million students across more than 2,600 institutions worldwide (CATME User Institutions, n.d.). However, the training would likely need to be revised if used in countries outside of the United States. Both trainings were developed through the lens of higher education and business in the United States. The training would need to be evaluated and updated to address cultural and social differences in giving and receiving feedback and working in teams in other countries.
Conclusion
Ample opportunities exist to explore how students provide peer feedback and how to train them to provide high-quality feedback. In the future, the research team will continue to investigate innovative pedagogical methods to enhance the quality and effectiveness of peer feedback in the context of CATME. Given the preliminary positive outcomes from these interventions, the team intends to refine the training module based on student feedback and assess the efficacy of both in-person and online training formats. Should the online module demonstrate significant effectiveness as an intervention, the team plans to develop it into an open educational resource for CATME to distribute to participating institutions.
In addition to our training interventions, the team will examine student attitudes toward the feedback they receive. We will investigate why students choose to accept or ignore this feedback, drawing on the insights from Greisel et al. (2025). This research contributes to the ongoing discussion about student perceptions of feedback and offers valuable insights on enhancing training to align with students' attitudes and motivations related to giving and receiving feedback, which can positively influence educational outcomes.
References
Adachi, C., Hong-Meng Tai, J., & Dawson, P. (2018). Academics’ perceptions of the benefits and challenges of self and peer assessment in higher education. Assessment and Evaluation in Higher Education, 43(2), 294–306. https://doi.org/10.1080/02602938.2017.1339775
Berry, F. C., Huang, W., & Exter, M. (2021). Interventions for improving accuracy of self- and peer reviews in an engineering technology capstone course. Annual Meeting of the American Educational Research Association.
Berry, F. C., Huang, W., & Exter, M. (2024). Continuous improvement in an engineering technology capstone sequence. International Journal of Designs for Learning, 15(3), 34–44. https://doi.org/10.14434/IJDL.V15I3.36678
CATME User Institutions. (n.d.). Retrieved February 22, 2025, from https://info.catme.org/instructor/history-research/our-user-base/catme-user-institutions-alphabetical/
Daou, D., Sabra, R., & Zgheib, N. K. (2020). Factors that determine the perceived effectiveness of peer feedback in collaborative learning: A mixed methods design. Medical Science Educator, 30(3), 1145–1156. https://doi.org/10.1007/s40670-020-00980-7
Fulham, N. M., Krueger, K. L., & Cohen, T. R. (2022). Honest feedback: Barriers to receptivity and discerning the truth in feedback. Current Opinion in Psychology, 46, 101405. https://doi.org/10.1016/J.COPSYC.2022.101405
Greisel, M., Hornstein, J., & Kollar, I. (2025). Do students’ beliefs and orientations toward peer feedback predict peer feedback quality and perceptions? Studies in Educational Evaluation, 84, 101438. https://doi.org/10.1016/j.stueduc.2024.101438
Gyamfi, G., Hanna, B. E., & Khosravi, H. (2024). Impact of an instructional guide and examples on the quality of feedback: insights from a randomised controlled study. Educational Technology Research and Development. https://doi.org/10.1007/s11423-024-10346-0
Huang, W., Exter, M., Wynkoop, R., & Berry, F. (2022). What did students focus on in their self- and peer-written feedback when using the comprehensive assessment of team member effectiveness? American Educational Research Association Conference.
Huang, W., Wynkoop, R., Exter, M., & Berry, F. C. (2022). Feedback matters: Self-and-peer assessment made better with instructional interventions. ASEE 2022 Annual Conference.
Huisman, B., Saab, N., van Driel, J., & van den Broek, P. (2018). Peer feedback on academic writing: undergraduate students’ peer feedback role, peer feedback perceptions and essay performance. Assessment and Evaluation in Higher Education, 43(6), 955–968. https://doi.org/10.1080/02602938.2018.1424318
Jiang, X., & Ironsi, S. S. (2024). Do learners learn from corrective peer feedback? Insights from students. Studies in Educational Evaluation, 83, 101385. https://doi.org/10.1016/J.STUEDUC.2024.101385
Lal, M. M. (2020). Peer feedback: A tool for growth. The Journal of Nursing Administration, 50(1), 3–4. https://doi.org/10.1097/NNA.0000000000000829
Li, H., Bialo, J. A., Xiong, Y., Hunter, C. V., & Guo, X. (2021). The effect of peer assessment on non-cognitive outcomes: A meta-analysis. Applied Measurement in Education, 34(3), 179–203. https://doi.org/10.1080/08957347.2021.1933980
Liu, N. F., & Carless, D. (2006). Peer feedback: The learning element of peer assessment. Teaching in Higher Education, 11(3), 279–290. https://doi.org/10.1080/13562510600680582
Locke, E. A., & Bandura, A. (1987). Social foundations of thought and action: A social-cognitive view. The Academy of Management Review, 12(1), 169. https://doi.org/10.2307/258004
Loughry, M. L., Ohland, M. W., & Moore, D. D. (2007). Development of a theory-based assessment of team member effectiveness. Educational and Psychological Measurement, 67(3), 505–524. https://doi.org/10.1177/0013164406292085
Ohland, M. W., Loughry, M. L., Woehr, D. J., Bullard, L. G., Felder, R. M., Finelli, C. J., Layton, R. A., Pomeranz, H. R., & Schmucker, D. G. (2012). The comprehensive assessment of team member effectiveness: Development of a behaviorally anchored rating scale for self- and peer evaluation. Academy of Management Learning and Education, 11(4), 609–630. https://doi.org/10.5465/amle.2010.0177
Peer Evaluation - CATME. (n.d.). Retrieved April 9, 2025, from https://info.catme.org/features/peer-evaluation/
Planas-Lladó, A., Feliu, L., Arbat, G., Pujol, J., Suñol, J. J., Castro, F., & Martí, C. (2020). An analysis of teamwork based on self and peer evaluation in higher education. Assessment & Evaluation in Higher Education, 0(0), 1–17. https://doi.org/10.1080/02602938.2020.1763254
Sluijsmans, D. M. A., Brand-Gruwel, S., & van Merriënboer, J. J. G. (2002). Peer assessment training in teacher education: Effects on performance and perceptions. Assessment and Evaluation in Higher Education, 27(5), 443–454. https://doi.org/10.1080/0260293022000009311
Strijbos, J. W., Narciss, S., & Dünnebier, K. (2010). Peer feedback content and sender’s competence level in academic writing revision tasks: Are they critical for feedback perceptions and efficiency? Learning and Instruction, 20(4), 291–303. https://doi.org/10.1016/j.learninstruc.2009.08.008
Teamwork Training Toolbox - CATME. (n.d.). Retrieved April 9, 2025, from https://info.catme.org/instructor/teacher-materials/teamwork-training-tools-toolbox/
Topping, K. J. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249–276. https://doi.org/10.3102/00346543068003249
Topping, K. J. (2009). Peer assessment. Theory Into Practice, 48(1), 20–27. https://doi.org/10.1080/00405840802577569
van Blankenstein, F. M., O’Sullivan, J. F., Saab, N., & Steendijk, P. (2021). The effect of peer modelling and discussing modelled feedback principles on medical students’ feedback skills: a quasi-experimental study. BMC Medical Education, 21(1), 1–9. https://doi.org/10.1186/S12909-021-02755-Z/TABLES/5
van Zundert, M., Sluijsmans, D., & van Merriënboer, J. (2010). Effective peer assessment processes: Research findings and future directions. Learning and Instruction, 20(4), 270–279. https://doi.org/10.1016/j.learninstruc.2009.08.004
Wujec, T. (2010). Build a tower, build a team. https://www.youtube.com/watch?v=H0_yKBitO8M
Yu, Q., & Schunn, C. D. (2023). Understanding the what and when of peer feedback benefits for performance and transfer. Computers in Human Behavior, 147, 107857. https://doi.org/10.1016/J.CHB.2023.107857