Student Experiences of Teaching Revision Project

UCLA is following the national trend of reframing the language around what are typically referred to as student evaluations of teaching, shifting instead to student experiences of teaching (SETs).  SETs are generally administered for three reasons:

  1. to improve the quality of teaching,
  2. to provide input for personnel decisions (e.g., tenure/promotion), and
  3. to establish institutional accountability (Spooren, Brockx, & Mortelmans, 2013).

Recent research suggests that SETs often capture student satisfaction rather than student learning (Uttl, White, & Wong Gonzalez, 2017), and they are not without other criticisms and potential biases (e.g., Boring, 2017).  Even so, decades of extensive literature reviews have shown that, when considered appropriately, SETs are relatively reliable and valid, useful and valuable, and multidimensional (Benton & Cashin, 2014; Marsh, 2007; Spooren, Brockx, & Mortelmans, 2013).

The initiative to improve SETs has gained significant traction at UCLA due to mounting interest in documenting teaching effectiveness.  Accordingly, after a comprehensive review of practices at nine major research universities, a committee of distinguished faculty (the Faculty Committee on Student Experiences of Teaching), in conjunction with the Center for the Advancement of Teaching (CAT), developed a new form.  The goal is to provide a more reflective assessment of student achievement and inclusive teaching practices, and to give instructors more meaningful feedback on their teaching effectiveness.

The new form aims to provide evidence of course and faculty effectiveness based on the articulation, understanding, and realization of the goals set forth by the instructor.  It incorporates more open-ended questions throughout the form to capture richer qualitative data and provide more contextualized, constructive feedback to instructors.  The new form also removes overall rating questions, since such rankings do not necessarily reflect learning and can be biased by factors such as instructor “charisma,” instructor gender and ethnicity, and students’ anticipated grades.  Removing these questions is a first step toward a campus-wide shift in attitudes regarding the use and interpretation of student feedback for purposes such as merit advancement and teaching awards.

The new questions focus on three major themes:

  • Did the course structure allow students to achieve the learning goals and demonstrate this achievement?
  • Were students actively engaged, and did they feel included?
  • Did the course provide an opportunity for growth?

Another feature of the proposed form allows instructors to add customized learning questions (both closed- and open-ended) to solicit feedback on specific aspects of their course.  Because of challenges in scaling up this feature with our current software, it has not been offered in all versions of the pilot, but future plans involve developing a strategy for widespread use.

If you have questions regarding the SET revision project, please contact Brit Toven-Lindsey at btovenlindsey@teaching.ucla.edu.

The new form was piloted in three upper division courses: a psychology course, a biochemistry course, and a history course.

The new form consisted of 5 general closed-ended questions and 6 general open-ended questions interspersed throughout, as well as 1-2 course-specific closed-ended questions, each followed by an open-ended question inviting students to elaborate on their responses (custom questions for psychology and biochemistry only).  The previous form consisted of 17 closed-ended questions and 1 open-ended question.  The Likert scale for the closed-ended questions on the pilot form was changed from 9 points to 6 points, as research on scale construction suggests that adding more response options mainly increases noise without substantial gains in reliability (Preston & Colman, 2000), and fewer options may also reduce bias (Rivera & Tilcsik, 2019).

In each course, half of the students were assigned to complete the standard form while the other half were assigned to complete the new form (based on student ID number).  Both quantitative and qualitative data from the two forms were compared.  Reports sent back to instructors were approximately 2.5 times longer due to the increase in student comments.  Student comments offered more specific feedback to instructors rather than broad summary statements about the course or instructor.  Instructors reported greater satisfaction with the quality of the comments on the new pilot form than with those made on the standard form.
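For illustration only, the short Python sketch below shows one way such an ID-based, split-half assignment could be implemented.  The even/odd rule and the example roster are assumptions introduced here; the pilot specifies only that assignment was based on student ID number.

    # Hypothetical sketch of a split-half assignment by student ID.
    # The even/odd rule is an assumption; the pilot only states that
    # assignment was "based on student ID number."

    def assign_form(student_id: int) -> str:
        """Return which SET form a student receives."""
        return "pilot" if student_id % 2 == 0 else "standard"

    roster = [904123457, 904123458, 904123459, 904123460]  # example IDs
    assignments = {sid: assign_form(sid) for sid in roster}

    for sid, form in assignments.items():
        print(sid, form)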

All students who participated were sent a survey asking them to comment on the form.  Respondents (N = 51) reported liking the open-ended questions and felt that the comment boxes gave them the opportunity to leave all of the feedback they wanted.  Overall, the response to the new form was positive.

Willing students were interviewed to provide more in-depth comments on the form and on whether it helped them reflect on their learning in the course.  Eleven students were interviewed in person or by phone during the following quarter.  The interview protocol focused on students’ interpretations of the questions, whether the questions matched the intended topics, whether students liked the increased number of open-ended questions, whether those questions made them more thoughtful in their responses, and their thoughts on the idea of custom questions.

Students provided comments on individual questions and generally reported satisfaction with the new form, feeling that the questions focused more on learning and that the form was an improvement over the old one.  They felt more engaged with the form and less likely to select answers without thinking.  Students reported a strong preference for including custom learning questions specific to their course, as these made them feel that the instructor cared about and was willing to listen to their feedback.  Although all of the students interviewed reported that giving feedback was important and that they typically completed their SETs, they felt that including custom questions might increase participation rates, another key issue affecting SETs.

The committee used feedback from the interviews to refine the instrument, and the updated version was piloted in fall 2018 across a broader range of disciplines.

Department chairs recruited 22 instructors (from 23 courses ranging in size from 35 to 747 students) in 8 departments across campus (Chemistry and Biochemistry, History, Near Eastern Languages and Cultures, English, Life Sciences, Physics and Astronomy, Computer Science, and Mechanical and Aerospace Engineering) to use the revised version of the form in their courses.  This version consisted of 7 closed-ended questions and 5 open-ended questions (plus additional course-specific open- and closed-ended questions for courses that opted to include them).  A question more directly focused on diversity and inclusivity in the classroom was also added.

For each course that participated in the pilot study, half of the students were randomly assigned to complete the standard form and the other half to complete the new form.

A total of 1,492 students completed the new form.  Again, the report summaries generated for instructors were substantially longer than those generated by the standard form.  In winter 2019, all participating students and instructors were sent a brief survey about the instrument, touching on its length, the questions themselves, the type of feedback generated, the comment boxes, and whether the new form appeared to match the committee’s goals of providing a more reflective assessment of student achievement, inclusive teaching practices, and more meaningful feedback regarding instructors’ teaching effectiveness.  Preliminary data analysis suggests that both students (N = 231) and instructors (N = 9) are generally satisfied with the new form and the feedback it generated, and believe that the form achieves its main goals.

Selected comments from students:

  • I think it guides meaningful responses and gives students an opportunity to address any positives or negatives.
  • I like being able to give more commentary on my answers and explain more specifically why I feel the way I do about instructors.
  • By being able to express with words how I felt about certain aspects of the class, specific issues or congratulatory efforts can be made.
  • I felt like you genuinely had to think about the course and professor rather than just clicking numbers which could be assign[ed] arbitrarily. It also helps gauge how students are evaluation [sic] as what one person considers a 5 can be a 3 to another.
  • Multiple choice is okay but every answer is subjective to a student’s perspective. If the evaluation is actually done so professors can learn then the choice to explain the answer would clear up the variation between students.
  • I feel that there is so little that can be interpreted from yes or no answers compared to being able to write out why you say yes or no to a certain question. Did you feel you learned something new? Yes or no? A yes or no does not give you a good enough answer, so I appreciated the extra lines of being able to say I feel this because or why is it that I felt this way.
  • The questions were directed more towards the instructor’s teaching style and were more detailed.
  • The questions are more specific, which allow you to comment on specific areas of the class, instead of just having a giant paragraph at the end of the survey.

Selected comments from instructors:

  • In the new form the students reflect more about the course and what they got out of it, rather than the personality of the instructor.
  • The more tailored questions seems [sic] to give the students more direction in their comments, so that they don’t end up saying things like “He sucks” or “He’s awesome” without any real context.
  • I think that before, students were not lead [sic] to think about inclusiveness or teaching methods. If you read comments from the old one, it’s more about popularity than learning.
  • Even as someone who already attempted to make their objectives clear, the question on learning objectives encouraged me to be even more transparent and explicit about course learning objectives, and I think this may have translated into an improved ability of students to be metacognitive and practice more deliberately.
  • The kind of feedback I got was much more focused than what I used to get from the previous form. Apparently, students have more to say when asked structured questions, which the new form did. I got some detailed comments on my teaching style that I never heard before, especially with regards to the class atmosphere and how students felt they were treated (e.g., encouragement to ask questions and participate in class discussion). This was rewarding. It also made it clear to me what students tend to appreciate, what inspires them, and what they look to in an educational experience.

Using feedback from the fall 2018 pilot participants as well as from the Committee on Teaching and Undergraduate Council, the Faculty Committee on Student Experiences of Teaching further refined the instrument and began planning to scale up use across campus.  The updated version of the form consisted of 6 closed-ended questions and 4 open-ended questions (plus custom questions when applicable).  Rather than inviting individual instructors to participate, entire departments and schools were recruited through their leadership.  In winter 2020, the Life Sciences and Urban Planning departments (39 courses) used the form in classes with more than 30 students, following the same procedure as in fall 2018, in which half of each class completed the standard form and half completed the pilot form.  The School of Engineering and the English department had agreed to participate in a larger spring 2020 pilot.  Winter 2020 SETs, however, took place during the transition to remote instruction due to COVID-19.  Because of the unique timing and circumstances, those data have not been analyzed, and the Engineering and English pilots planned for spring 2020 were put on pause.

In spring 2021, the leadership of the School of Arts and Architecture (SoAA) approached CAT and requested that all of the school’s courses use the pilot form, with department-level custom questions, given the school’s interest in gaining feedback on issues related to diversity and inclusivity.  The form will continue to be used and tested throughout the 2024-25 school year.  Plans to use the form in other schools and departments are currently being discussed.