Santa Clara University

Office of Assessment

Create and Use Rubrics

What is a rubric?*

A rubric is an assessment tool in the form of a matrix that describes levels of achievement in a specific area of performance, understanding, or behavior. Faculty often use rubrics for assessment of, or feedback on, specific assignments in their courses, and rubrics are also valuable for program-level assessment. Rubrics are criterion-referenced rather than norm-referenced: raters ask, "Did the student meet the criteria for acceptable work on the rubric?" rather than "How well did this student do compared to other students?"

Two types of rubrics

Analytic Rubric: An analytic rubric specifies at least two characteristics to be assessed at each performance level and provides a separate score for each characteristic. For example, several dimensions of critical thinking can be assessed separately by isolating each dimension on its own row, which allows faculty to evaluate student performance in a more nuanced way. As another example, students may score highly on content development in a written communication assessment but lower on their use of sources and evidence.

Holistic Rubric: A holistic rubric provides a single score based on an overall judgment of a student's performance on a task. It is used when a program wants a less differentiated assessment and a single dimension is adequate to define quality.

Steps in developing a rubric

Step 1: Identify what you want to assess (e.g., Learning Outcome)

Step 2: Identify the characteristics to be rated (rows). These are also called "dimensions."

  • Specify the skills, knowledge, and/or behaviors that you will be looking for.
  • Limit the characteristics to those that are most important to the assessment.

Step 3: Identify the levels of mastery/scale (columns).

  • Often four levels of mastery are used: Exceeds, Meets, Approaches, and Does Not Meet the learning outcome.

Step 4: Describe each level of mastery for each characteristic/dimension (cells).

  • Describe the best work you could expect using these characteristics. This describes the top category.
  • Describe an unacceptable product. This describes the lowest category.
  • Develop descriptions of intermediate-level products for the intermediate categories, making sure you can clearly differentiate between levels 2 and 3, since they lead to different conclusions (levels 3 and 4 "meet or exceed" the outcome, while levels 1 and 2 "fall short").

Important: The descriptions within each characteristic should be mutually exclusive across levels, and the characteristics themselves should not overlap.

Step 5: Test rubric.

  • Apply the rubric to an assignment. Modify as needed.
  • Share with colleagues.

Tip: Faculty members often find it useful to establish the minimum score needed for student work to be deemed passable. For example, they may decide that a "1" or "2" on a 4-point scale (4 = exemplary, 3 = proficient, 2 = marginal, 1 = unacceptable) does not meet minimum quality expectations, and then set their criterion for success as 90% of students scoring 3 or higher. If assessment study results fall short of that criterion, action will need to be taken.
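To see how such a criterion works out in practice, here is a minimal sketch in Python for checking it against a set of scores; the score list, the threshold of 3, and the 90% target are illustrative assumptions, not values prescribed by this guide.

    # Hypothetical rubric scores for one dimension, on the 4-point scale above,
    # one entry per student work sample.
    scores = [4, 3, 2, 3, 4, 3, 1, 3, 4, 2]

    threshold = 3   # minimum score considered acceptable (proficient)
    target = 0.90   # criterion for success: 90% of students at or above the threshold

    proportion = sum(1 for s in scores if s >= threshold) / len(scores)
    print(f"{proportion:.0%} of students scored {threshold} or higher")

    if proportion >= target:
        print("Criterion for success met.")
    else:
        print("Criterion for success not met; plan follow-up action.")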

Step 6: Discuss with colleagues. Review feedback and revise.

Important: When developing a rubric for program assessment, enlist the help of colleagues. Rubrics promote shared expectations and consistent grading practices, which benefit both faculty members and students in the program.

Tips for developing a rubric

  • Find and adapt an existing rubric! It is rare to find a rubric that is exactly right for your situation, but you can adapt one that has worked well for others and save a great deal of time. A faculty member in your program may already have a good one.
  • Evaluate the rubric. Ask yourself: A) Does the rubric relate to the outcome(s) being assessed? (If yes, success!) B) Does it address anything extraneous? (If yes, delete.) C) Is the rubric useful, feasible, manageable, and practical? (If yes, find multiple ways to use it: program assessment, assignment grading, peer review, student self-assessment.)
  • Collect samples of student work that exemplify each point on the scale or level. A rubric will not be meaningful to students or colleagues until the anchors/benchmarks/exemplars are available.
  • Expect to revise.
  • When you have a good rubric, SHARE IT!

Scoring rubric group orientation and calibration

When using a rubric for program assessment purposes, faculty members apply the rubric to pieces of student work (e.g., reports, oral presentations, design projects). To produce dependable scores, each faculty member needs to interpret the rubric in the same way. The process of training faculty members to apply the rubric is called "norming." It's a way to calibrate the faculty members so that scores are accurate and consistent across the faculty. Below are directions for an assessment coordinator carrying out this process.

Suggested materials for a scoring session:

  • Copies of the rubric
  • Copies of the "anchors": pieces of student work that illustrate each level of mastery. Have at least 3 anchor pieces (1 low, 1 middle, 1 high). Even if most of the student work will be accessed electronically, paper copies of the norming materials are helpful.
  • Extra pens, tape, post-its, paper clips, stapler, etc.
  • White board or large paper for writing down scores

Process:

  1. Describe the purpose of the activity, stressing how it fits into program assessment plans.  Explain that the purpose is to assess the program, not individual students or faculty, and describe ethical guidelines, including respect for confidentiality and privacy.
  2. Describe the nature of the products that will be reviewed, briefly summarizing how they were obtained.
  3. Describe the scoring rubric and its categories. Explain how it was developed.
  4. Analytic: Explain that readers should rate each dimension of an analytic rubric separately, and they should apply the criteria without concern for how often each score (level of mastery) is used. Holistic: Explain that readers should assign the score or level of mastery that best describes the whole piece; some aspects of the piece may not appear in that score and that is okay. They should apply the criteria without concern for how often each score is used.
  5. Give each scorer a copy of the first student work sample and a rubric. It is best not to begin with a "high performance" artifact.
  6. Once everyone has finished, collect the ratings and display them so everyone can see the degree of agreement. The facilitator may ask for a show of hands for the "1's," "2's," "3's," and "4's" for each row of the rubric and record the counts on a whiteboard.
  7. Guide the group in a discussion of their ratings. There will be differences, and this discussion is important for establishing shared standards. Attempt to reach consensus on the most appropriate rating for each product by inviting people who gave different ratings to explain their judgments; encourage raters to make explicit references to the rubric. Usually consensus is possible, but sometimes a split decision emerges, e.g., the group may agree that a product is a "3-4" split because it has elements of both categories. This is usually not a problem. You might allow the group to revise the rubric to clarify its use, but avoid letting the group drift away from the rubric and the learning outcome(s) being assessed.
  8. Repeat this process for the next two work samples. The standards on which consensus is reached should begin to emerge.
  9. Once the group is comfortable with how the rubric is applied, the rating begins and reviewers start scoring.
  10. If you can quickly summarize the scores (see the sketch after this list), present a summary to the group at the end of the reading. It is useful to leave about ½ hour at the end of the session for discussion. You might end the meeting with a discussion of five questions:
    • Are results sufficiently reliable?
    • What do the results mean? Are we satisfied with the extent of students' learning?
    • Who needs to know the results?
    • What are the implications of the results for curriculum, pedagogy, or student support services?
    • How might the assessment process, itself, be improved?
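For steps 6 and 10, the tally of ratings and the end-of-session summary can be done by hand on a whiteboard, or with a short script if scores are recorded electronically. The sketch below, in Python, shows one way to display the score distribution and the level of rater agreement for each rubric dimension; the dimension names, the ratings, and the use of agreement on the most common score are illustrative assumptions rather than part of the process described above.

    from collections import Counter

    # Hypothetical ratings for one student work sample: each rater assigns a
    # 1-4 score to each dimension of an analytic rubric.
    ratings = {
        "Content development": [3, 3, 4, 3],
        "Sources and evidence": [2, 3, 2, 2],
        "Organization": [4, 4, 3, 4],
    }

    for dimension, scores in ratings.items():
        tally = Counter(scores)                       # how many raters gave each score
        modal_score, count = tally.most_common(1)[0]  # the most frequently assigned score
        agreement = count / len(scores)               # share of raters agreeing on it
        distribution = ", ".join(f"{s}: {n}" for s, n in sorted(tally.items()))
        print(f"{dimension}: {distribution} (agreement on {modal_score}: {agreement:.0%})")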

* We gratefully acknowledge the University of Hawaii, Manoa, for their excellent web page on Rubrics. Much of their content is directly reproduced here.

Additional resources:

Rcampus.com has a searchable collection of rubrics across many disciplines.

Mary Allen, Handout on rubrics (pdf)

 