HET FAQs

Isn’t HET a one-size-fits-all approach to evaluating teaching?

There are two answers to this question. First, in departments that decide to use HET, faculty spend a lot of time tailoring the “off-the-shelf” HET resources to fit their disciplinary context. So, HET tools in Nursing might refer to students’ clinical skills, HET tools in Engineering might refer to problem sets or design projects, and HET tools in English might refer to essays or term papers. Second, the HET tools do not prescribe a single, lockstep way to teach. Instead, they help instructors and evaluators focus on general principles of good pedagogy, like the idea that students benefit from feedback about their learning as that learning is underway. For example, we would discourage a department from using an evaluation criterion like, “instructor grades and returns student work within a week and with at least three comments per student.” A more useful criterion could be, “instructor provides regular and timely feedback on student work,” along with the opportunity for the instructor to explain what this criterion looks like in their teaching practice.

What prevents an instructor from cherry-picking their strongest materials or misrepresenting their teaching?

In HET, as in most teaching evaluation systems in higher education, an instructor might present a student assignment that is carefully scaffolded and aligned to learning objectives, has a detailed grading rubric, and incorporates student choice… but this might be atypical in their teaching practice. Or an instructor might write in their statement, “I use a range of strategies to promote equitable airtime among students” when, in fact, a few talkative students dominate every class session.

HET is grounded in the idea that principles of sound pedagogy play out differently in different contexts. An evaluator might have their own idea of what it looks like to “incorporate student choice” into a course, but the instructor they’re reviewing might use different strategies. One way that HET prioritizes the instructor’s autonomy and unique pedagogical identity is by soliciting self-reported data. The limitations of these data are precisely what the question above names: the risk of cherry-picking and misrepresentation. This tension (between the value of hearing directly from an instructor and the risk of misleading data) is not unique to HET. It is inherent in any evaluation system that includes self-reported data, and HET does not claim to have resolved it.

How does HET account for evaluators whose personal relationships with an instructor might bias their judgments?

Job performance in higher education is typically evaluated through peer review, and a clear limitation of this approach is the potential for preexisting interpersonal relationships to color evaluator judgments. HET does not eliminate this concern, but it does aim to mitigate it. HET provides an explicit structure and process for evaluation, one familiar to both the instructor and the evaluator. In simplest terms, evaluators are tasked with looking for evidence of teaching practices X, Y, and Z… criteria of which instructors are aware as they prepare their materials. The idea is that this precision and transparency reduce opportunities to make vague, unfairly negative assessments (or unfairly positive ones, sometimes referred to as “departmental love letters” about teaching).

Is HET more work than what a department has done in the past?

Whether HET is more work than what a department has done in the past depends on two factors: what the department has been doing to evaluate teaching and what the department’s version of HET looks like. Consider a hypothetical department that previously required a lengthy teaching statement and a classroom observation by a colleague as part of teaching evaluation. If this department adopted a streamlined version of HET (e.g., without these requirements and with simple rubrics), teaching evaluation might take less time. Alternatively, a department that previously evaluated teaching primarily by considering student evaluations of teaching might adopt a version of HET that asks instructors to provide more and richer evidence of their teaching. This will likely require more work the first time an instructor uses HET, though the work should decrease in subsequent HET reviews.

As departments adapt the HET materials and process to fit their needs, one of their central tasks is to balance the desire for fairer, more substantive evaluation that supports instructors’ improvement against the time such evaluation requires. Each department determines the best balance for its context.

What role do student evaluations of teaching play in HET?

One thing the word “Holistic” is meant to convey in “Holistic Evaluation of Teaching” is that different perspectives on an instructor’s teaching are important, including those of the instructor, their peers, and students. This means that student evaluations are part of the evidence an instructor may use to showcase their teaching in HET — but they’re not the only evidence. Put differently, HET encourages evaluators to consider a “basket of evidence” that contains more than just student evaluations.

Does HET require peer observation of teaching?

No. The UCLA campus requires peer evaluation, but not necessarily peer observation. Peer evaluation might take the form of peers reviewing an instructor’s teaching materials, for example. Departments that adapt and use a version of HET decide whether to require peer observation as part of teaching evaluation.
