Every quarter, the end of Week 9 rolls around and students are on their toes, itching to cram for their finals. Students are in class jotting down study guides, pleading for exam hints from professors, preparing reference sheets, and most importantly, calculating baselines for their desired grades. At the same time, the university executes its quarterly student-teacher evaluation system, CAPE (Course and Professor Evaluations). Though CAPE is a largely forgotten program on campus, there are indications that a program of this sort reveals some interesting biases in students. In the age of behind-the-screen bullying, this warrants a closer look.
A recent study from the American Political Science Association sheds some light on the subject of anonymous student-teacher evaluations like CAPE. Researchers found that the language used by students in evaluations differs significantly depending on whether the instructor is male or female. The results also showed that “students tend to comment on a woman’s appearance and personality far more often than a man’s” and “women are referred to as ‘teacher’ more often than men, which indicates that students generally may have less professional respect for their female professors.” These findings are troubling but not surprising: What system is more ripe for the imposition of implicit bias than one of advertised anonymity?
The system also presents a recurring free rider problem: Students can look at CAPE results when signing up for classes, but don’t have to contribute to do so. This naturally leads to an abundance of opinions from the relative extremes in support or disapproval of instructors. In layman’s terms, you’re really only filling out that CAPE if you’re the cathartic student who just blew your final or the beholden teacher’s pet. To make matters worse, professors assign extra credit for completing the survey, which can, in clear ways, skew the pool of responses. Throw in a mix of angry and overly anxious students, and CAPE evaluations sound more and more like the oft-inaccurate comments on online forums like Rate My Professor.
To understand whether these biases tangibly manifest, it’s important to be clear about how these evaluations weigh on the faculty. Unfortunately, determining this at UC San Diego is nearly impossible. The FAQ section of the CAPE website provides no indication of how the data may be used, although it firmly asserts that no teacher can opt out. The university lacks transparency in its use of the data accumulated by the CAPE survey, and thus I’d have difficulty arguing that it has any tangible effect on teachers’ pay, class assignment, or tenure. But these decisions are largely administrative calls, and a standardized faculty evaluation system would surely be an easy reference to justify, say, promotions or pay cuts. If biased evaluations are impacting these decisions, many professors, particularly women, should be questioning whether the practice as a whole promotes discrimination in the workplace.
The fact of the matter is that if CAPE evaluations draw any similarities to the decorum of Rate My Professor forums, it’s time for more information on how they’re being used and how they affect instructors. There are problems of fairness and accuracy that stories like this one are incapable of addressing because the university simply won’t tell us anything about CAPE. But that will take time. For now, students should be warier as they fill out their evaluations, leaving their biases under the Sun God statue.
Arsham Askari is a Staff Writer for the Opinion section of The Triton.
The positions stated here do not necessarily represent the opinions of The Triton, any of its members, or any of its affiliates. We welcome responses to opinion pieces. If you’d like to submit a response, or comment on a different issue affecting the UC community, please submit here.