How is comment quality determined?


Your instructor may include the quality of your comments as part of your Perusall assignment score. This page shows example comments at various quality levels so you have an idea of how comment quality is determined.


Perusall uses a machine learning algorithm that draws on linguistic features of the text to build a predictive model of the score a human instructor would give. In other words, instead of trying to hand-craft a set of rules to measure comment quality, we create a "training set" consisting of a large number of comments along with grades assigned by multiple expert human graders working from the rubric, and then fit an algorithm that combines the linguistic features to best predict the scores those graders gave. In our validation work, we found that Perusall agreed with the expert human graders about as often as two humans agreed with each other. You remain in full control of how your students are evaluated.
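Perusall has not published its model, but the general workflow described above can be illustrated with a short sketch. The code below is a hypothetical stand-in, not Perusall's implementation: it assumes scikit-learn is available, uses TF-IDF n-grams as a proxy for "linguistic features," and uses ridge regression as the predictive model; the comments and grades are toy data invented for illustration.

```python
# Minimal sketch (assumptions: scikit-learn, TF-IDF features, ridge
# regression). Perusall's actual features and model are not public;
# this only illustrates the train-on-human-grades workflow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Training set: comments paired with grades from expert human graders
# (toy data; real training sets contain many thousands of comments).
comments = [
    "This passage connects the author's argument to last week's reading.",
    "I don't get it.",
    "The derivation in paragraph two skips a step: why does x imply y?",
    "lol",
]
grades = [2.0, 0.5, 2.0, 0.0]  # rubric scores averaged across graders

# Extract simple linguistic features (word unigrams and bigrams).
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(comments)

# Fit a model that maps the features to the human-assigned scores.
model = Ridge(alpha=1.0)
model.fit(X, grades)

# Predict a quality score for a new, unseen comment.
new_comment = ["How does this result generalize beyond the sample studied?"]
predicted = model.predict(vectorizer.transform(new_comment))
print(f"Predicted quality score: {predicted[0]:.2f}")
```

In a real validation, predictions on held-out comments would be compared against human grades, checking that model-to-human agreement is roughly as high as human-to-human agreement, which is the result Perusall reports above.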


From Perusall's perspective, the goal is to save instructors time by suggesting a score. By default, we do not show scores to students until you are ready to review and release them. To see how a score was calculated, click the student's grade in the gradebook; you can change the score there if needed. To adjust when scores are released to students, go to Settings > Scoring and change the "Release scores to students" setting. Under Settings > General > Analytics, you can also choose how our algorithm interacts with your course.
