Understanding Student Ratings
This introduction to the student ratings tool is intended to help faculty and administrators capitalize on its features to promote student learning and to evaluate faculty progress toward that aim.
Watch this video for detailed instructions on understanding student ratings. Links to specific video segments, with their associated topics, are provided below the video.
The context for the video is found at the bottom of this page.
Understanding and interpreting uncertainty bands
26:45 How will the data (and uncertainty band considerations) be used in the CFS process?
45:04 Response pattern affects shape of uncertainty band
59:33 How are we to view the uncertainty band; what are its components?
1:12:56 What are the sources of variability in the uncertainty band?
An index to the contents of the video recording of the presentation and discussion
0:00 Brent Webb provides background on the work of the task force
1:35 Issues around previous instrument that gave rise to new survey
3:11 Survey overview (Burlingame)
7:51 How the old system guided formation of the new
9:43 Issues addressed by the task force I
12:33 Issues addressed by the task force II
13:02 Sample “instructor” item from the New Instrument
14:17 Sample “AIMs” item from the New Instrument
14:51 Improvements in Reporting
15:18 Sample semester report for a class
19:58 What does the composite score “cover”?
20:47 Why does the form on my device look different from the one on today’s screen?
21:58 What does the composite score for a course “cover”?
23:28 What does the composite score represent?
24:44 How do we deal with the reliability band; is there any guidance on that?
26:45 How will the data (and uncertainty band considerations) be used in the CFS process?
28:35 How do we “frame” and “sell” this to our students correctly?
29:25 Why not make survey participation required of students?
34:36 Two slides to provide insight into how “trends” might appear (partly in response to the comment and expression of concern at 32:54)
34:54 Multi-year report: Positive Change
35:14 Multi-year report: Negative Change
35:40 Questions addressed by W-2015 pilot (Reese)
- Can there be a single reliable composite score?
- How does the new composite score relate to the old global instructor item?
40:53 Summary of what the pilot looked like (participation and response rate)
41:50 Can 5 items be summed into a composite? (slide)
42:02 What is the estimated reliability of the composite? (slide)
43:07 Relationship of items/composite to the old global item
43:18 Comment from Brent Webb about course size
44:14 Comment about reliability improving with aggregation over time
45:04 Response pattern affects shape of uncertainty band
46:28 Summary recommendations (future)
49:00 How do we deal with the bias resulting from any one student commenting over and over?
51:42 When will faculty receive training on this?
55:00 What to do in a small department when there’s only one person to use as a comparison?
56:54 How can we compare the old “single point” rating to the new “range of values”?
59:33 How are we to view the uncertainty band; what are its components?
1:03:47 Will there eventually be a “trend” component built into the instrument?
1:08:56 Comment/caution about how the new instrument is presented
1:10:44 Question about how the grades component of the new survey is derived
1:12:23 Have you done any analysis on the individuals who are more likely to make comments?
1:12:56 What are the sources of variability in the uncertainty band?
1:13:39 Is there going to be a summary sheet for all the courses in our department?
1:14:27 Comment: it would be helpful to be able to summarize by core courses versus service courses
1:15:20 Have you checked on the relationship between GPA (or ACT score) and student ratings?
1:16:14 Do you have any sense for what kind of bias occurs with a low response rate?
1:18:16 What do we do about inappropriate or even profane student comments?
On September 18, 2015, department chairs were given a preview of the new student ratings tool. Two members of the Student Ratings Task Force, Professors Gary Burlingame (Psychology) and Shane Reese (Statistics), as well as members of the Academic Vice President’s Council, presented an overview of the development process, discussed the new instrument, and answered questions. The recording has been edited to reduce the nearly two-hour meeting to a reasonable length for viewing.