Using the student perspective involves gathering information from students. In addition to Student Experience Surveys, instructors might also collect and analyze information from mid-semester student surveys, pre/post-testing to measure outcomes, and instructor-created or research-based surveys.

Using the student perspective can achieve important goals when implemented through robust practices.

Decision Guide

The following guiding questions are for consideration by units as they develop processes for using Student Experience Surveys (SES).

Standardizing Use of SES Data

Standardizing the ways SES data are collected, analyzed, and interpreted will help ensure fairness among faculty and produce more trustworthy information for evaluations. To standardize, consider:

  • What additional questions, if any, will your department add to UGA’s mandatory Student Experience Survey?
  • How will your department encourage high response rates from students while maintaining their confidentiality?
  • How will your department support faculty in analyzing and reporting quantitative data appropriately?
    For example, Likert-type response options (strongly agree, agree, neutral, etc.) are ordinal data and should be analyzed as distributions rather than averages.
  • How will your department support faculty in summarizing and reporting student comments systematically?
    Responses should be summarized and trends identified; comments should not be cherry-picked to support a particular conclusion.
  • Which SES results will your department expect faculty to report on for annual review? For promotion?
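To illustrate the ordinal-data point above, here is a minimal Python sketch that reports a response distribution rather than an average. The scale labels and responses are hypothetical examples, not drawn from UGA's actual survey instrument.

```python
from collections import Counter

# Illustrative Likert scale; actual SES items may use different labels.
LIKERT_SCALE = ["Strongly disagree", "Disagree", "Neutral",
                "Agree", "Strongly agree"]

def summarize_item(responses):
    """Return (count, percentage) for each scale point, in scale order.

    Reporting the full distribution preserves information that an
    average of ordinal codes would obscure (e.g., polarized responses).
    """
    counts = Counter(responses)
    total = len(responses)
    return {level: (counts.get(level, 0),
                    round(100 * counts.get(level, 0) / total, 1))
            for level in LIKERT_SCALE}

# Hypothetical responses to a single survey item:
responses = ["Agree", "Strongly agree", "Agree", "Neutral",
             "Strongly agree", "Disagree", "Agree"]

for level, (n, pct) in summarize_item(responses).items():
    print(f"{level}: {n} ({pct}%)")
```

A report built this way shows, for example, that 3 of 7 respondents chose "Agree," rather than collapsing the item to a single mean score.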

Interpreting SES Data to Minimize Bias

Inequities can result from biases in SES data. SES ratings can be biased by:

  • instructor characteristics (e.g., race & ethnicity, gender, age, nation of origin)
  • course characteristics (e.g., upper versus lower division, lab versus “lecture”)
  • student characteristics (e.g., major, year in school, performance in the course)
  • other contextual factors (e.g., discipline, fall versus spring semester)

Because of these known biases, comparisons across instructors are not valid: we cannot conclude that differences in SES ratings between instructors reflect differences in teaching effectiveness. Comparisons of SES ratings for the same instructor teaching the same course over time, however, provide valuable information about growth.

To help minimize bias, consider the following questions:

  • How will you ensure that instructors are compared only to themselves?
  • How will you ensure that faculty discussions about SES data focus on comparisons of instructors to themselves, not cross-instructor comparisons, and that the potential for bias is kept in mind?
  • How will you educate department members about the appropriate ways to analyze and interpret student experience survey data?

Using SES Data to Document Change

Designing processes to document change recognizes effort toward improvement, rather than just excellence:

  • What items from the Student Experience Survey will you use to document change over time?

Using Data Beyond SES

Student experience surveys provide just one source of information from students. Mid-semester formative surveys, research-based surveys, instructor-generated surveys, and student interviews can all be used effectively to learn about students' experiences and outcomes.

  • What additional student data will the department encourage faculty to collect?
  • What departmental data about student learning outcomes could be leveraged for teaching evaluation?

Departmental Quick Start Guide

Additional Resources

Guidance for Appropriate Use of Data From Students

Example Questions to Include on Student Experience Surveys

Increase Response Rates and Elicit Constructive Student Feedback

Selected Papers Demonstrating Bias in Student Evaluations
