Faculty Portal

NEWS


Colleges Are Getting Smarter About Student Evaluations. Here’s How.

Emily Wu and Kenneth Ancell, two students at the University of Oregon, approached their honors research professor, Bill Harbaugh, a few years ago about studying the relationship between student evaluations and grade inflation. Harbaugh, a professor of economics, was enthusiastic. Wu and Ancell dived into the university’s extensive data on evaluations and transcripts, focusing on its two largest schools, journalism and business.
What they found surprised them.
“Having a female instructor is correlated with higher student achievement,” Wu said, but female instructors received systematically lower course evaluations. In looking at prerequisite courses, the two researchers found a negative correlation between students’ evaluations and learning. “If you took the prerequisite class from a professor with high student teaching evaluations,” Harbaugh said, “you were likely, everything else equal, to do worse in the second class.”
The team found numerous studies with similar findings. “It replicates what many, many other people found,” said Harbaugh. “But to see it at my own university, I sort of felt like I had to do something about it.” Click here to read more.

A University Overhauled Its Course Evaluations to Get Better Feedback. Here’s What Changed.

Until recently, the University of Southern California took a conventional approach to student course evaluations. At the end of the semester, students answered about a dozen questions, including ones asking them to rate the instructor and the course. Their feedback was used as the primary — or in some cases, sole — evidence of professors’ teaching effectiveness in formal performance reviews, including those to determine tenure and promotion.
For years, professors at USC had expressed concerns about how their course evaluations were designed and used, echoing similar worries across the country. Research has found that course evaluations are a poor measure of learning, prone to bias, and often interpreted in ways that make little statistical sense.
The research — particularly the evidence on gender bias — persuaded the university’s provost, Michael W. Quick, to end the use of course evaluations as a direct measure of teaching effectiveness this past spring. Students are still providing feedback, but now they’re using a new tool that asks them to weigh in on the learning experience more than on the instructor. Their feedback will be used differently, too. It will no longer serve as the main mechanism for evaluating teaching. Instead, it will help individual instructors improve, and help their schools observe larger patterns. Click here to read more.

ANNOUNCEMENTS


Diversity and Inclusivity Week 2019


FACULTY HONORS AND AWARDS

ASHSS
Associates Awards
Faculty Honors and Distinctions