
Understanding Student Ratings

This introduction to the student ratings tool helps faculty and administrators capitalize on its features to promote student learning and to evaluate faculty progress toward that aim.

Watch this video for detailed instructions on understanding student ratings. Links to specific video segments, with their associated topics, are provided below the video.

The context for the video is found at the bottom of this page.

Student Ratings Presentation

An index to the contents of the video recording of the presentation and discussion

0:00 Brent Webb provides background on the work of the task force

1:35 Issues around previous instrument that gave rise to new survey

3:11 Survey overview (Burlingame)

4:42 7-year timeline overview

6:04 Task force charge

7:51 How the old system guided formation of the new

9:43 Issues addressed by the task force I

12:33 Issues addressed by the task force II

13:02 Sample “instructor” item from the New Instrument

14:17 Sample “AIMs” item from the New Instrument

14:51 Improvements in Reporting

15:18 Sample semester report for a class

18:15 “AIMs items” discussed

19:55 Questions

19:58 What does the composite score “cover”?

20:47 Why does the form on my device look different from the one on today’s screen?

21:58 What does the composite score for the course “cover”?

23:28 What does the composite score represent?

24:44 How do we deal with the reliability band; is there any guidance on that?

26:45 How will the data (and uncertainty band considerations) be used in the CFS process?

28:35 How do we “frame” and “sell” this to our students correctly?

29:25 Why not make survey participation required of students?

31:01 You have bands on the “section and department”, but not on “course, college, and university” and I wonder why.

32:54 Comment: “I’m still worried about the response rate, because… 90% participation is needed for the data to be really reliable.”

34:36 Two slides to provide insight into how “trends” might appear (shown partly in response to the comment and concern raised at 32:54)

34:54 Multi-year report: Positive Change

35:14 Multi-year report: Negative Change

35:40 Questions addressed by W-2015 pilot (Reese)

  • Can there be a single reliable composite score?
  • How does the new composite score relate to the old global instructor item?

40:53 Summary of what the pilot looked like (participation and response rate)

41:50 Can 5 items be summed into a composite? (slide)

42:02 What is the estimated reliability of the composite? (slide)

43:07 Relationship of Items/Composite to old global

43:18 Comment from Brent Webb about course size

44:14 Comment about reliability improving with aggregation over time

45:04 Response pattern affects shape of uncertainty band

46:28 Summary recommendations (future)

47:20 Questions

47:26 How do you handle a class that has a large enrollment, but perhaps 10 separate lab sections? How do we aggregate the sections and improve certainty?

49:00 How do we deal with the bias resulting from any one student commenting over and over?

51:42 When will faculty receive training on this?

53:19 How do we deal with pre-CFS faculty who have spent 4 or 5 years on the old system, and now we’re switching to the new one?

55:00 What to do in a small department when there’s only one person to use as a comparison?

56:54 How can we compare the old “single point” rating to the new “range of values”?

59:33 How are we to view the uncertainty band; what are its components?

1:00:43 What do you do about uncertainty bands in a department where there’s a lot of heterogeneity (odd duck courses)?

1:03:47 Will there eventually be a “trend” component built into the instrument?

1:05:09 Can you assure us that students’ comments in the new instrument will be better than those we received in the past?

1:08:10 What do you do about an uncertainty band for an instructor when only one person teaches the course?

1:08:56 Comment/caution about how the new instrument is presented

1:10:44 Question about how the grades component of the new survey is derived

1:12:23 Have you done any analysis on the individuals who are more likely to make comments?

1:12:56 What are the sources of variability in the uncertainty band?

1:13:39 Is there going to be a summary sheet for all the courses in our department?

1:14:27 Comment: it would be helpful to be able to summarize by core courses versus service courses

1:15:20 Have you checked on the relationship between GPA (or ACT score) and student ratings?

1:16:14 Do you have any sense for what kind of bias occurs with a low response rate?

1:18:16 What do we do about inappropriate or even profane student comments?

1:20:47 Most of our classes are small – what is the magic number at which reliability is no longer a problem?

1:23:04 What are the quality concerns about the other approaches to teacher evaluations, such as peer review?

1:25:21 Brent Webb provides concluding comments

1:31:52 End

On September 18, 2015, department chairs were given a preview of the new student ratings tool. Two members of the Student Ratings Task Force, Professors Gary Burlingame (Psychology) and Shane Reese (Statistics), as well as members of the Academic Vice President’s Council, presented an overview of the development process, discussed the new instrument, and answered questions. The recording has been edited to reduce the nearly two-hour meeting to a reasonable length for viewing.