College Assessment

Inter-rater Reliability

You and each IA will participate as raters. You, as the lead faculty in the course, will be the primary rater; each IA will be listed as Rater 1, Rater 2, and so on.

Choose the assessment in your current course that you wish to evaluate through the IRR process. Pull two student submissions from a previous semester: one that illustrates superior student performance and one that illustrates poor student performance. You and each IA should evaluate the student samples using the rubric for your current class. After all of the sample scores have been entered and an agreement percentage generated, identify any samples with less than 90% agreement. Visit with each rater whose agreement falls below the 90% threshold: look for small disparities in scoring, offer feedback, and clarify any parts of the rubric that may be confusing. Continue to consult with your raters until you reach at least 90% agreement.
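In its simplest form, the agreement percentage is the share of rubric criteria on which two raters assign the same score. The sketch below illustrates that calculation for one student sample. It is a minimal Python illustration only: it assumes simple per-criterion percent agreement (rather than a chance-corrected statistic such as Cohen's kappa), and the function name and sample scores are hypothetical, not taken from the IRR Calculator itself.

# Minimal sketch of percent agreement between the primary rater and one IA.
# Assumes simple per-criterion percent agreement; the scores below are
# hypothetical examples for a five-criterion rubric scored on a 1-4 scale.

def percent_agreement(primary_scores, rater_scores):
    """Return the percentage of rubric criteria scored identically."""
    if len(primary_scores) != len(rater_scores):
        raise ValueError("Both raters must score the same rubric criteria.")
    matches = sum(1 for p, r in zip(primary_scores, rater_scores) if p == r)
    return 100 * matches / len(primary_scores)

primary = [4, 3, 4, 2, 3]   # lead faculty (primary rater)
rater_1 = [4, 3, 3, 2, 3]   # IA (Rater 1)

print(f"Agreement: {percent_agreement(primary, rater_1):.0f}%")
# Prints "Agreement: 80%" -- below the 90% threshold, so a consultation is needed.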

In cases of substantial disagreement, the disputed rubric construct should be revisited and possibly revised based on rater feedback.

Consider these questions:

1. Was the rubric able to differentiate between high and low student performance?

2. Did the rubric provide actionable feedback for students?

3. How many consultations did it take to reach consensus with your raters?

Download the IRR Calculator

Submit your IRR data for each course in the part of term in which you are teaching.