Rethinking Teaching Evaluation Reports: Designing AI-transformed Student Feedback for Instructor Engagement
Shang, R., Mallari, K., Au Yeong, W. B., Yasuhara, K., Tang, A., and Hsieh, G. (2025). Rethinking Teaching Evaluation Reports: Designing AI-transformed Student Feedback for Instructor Engagement. Proc. ACM Hum.-Comput. Interact. 9, 7, Article CSCW320 (November 2025), 40 pages. https://doi.org/10.1145/3757501
Abstract
Student feedback is critical for improving teaching, yet instructors often avoid reading evaluations due to emotional burden and information overload. We present a systematic exploration of how language models can distill and transform student evaluations into adaptive, actionable insights. Through a systematic design space exploration combining 4 feedback strategies (removing harmful content, paraphrasing criticism, sandwiching negatives, adding constructive suggestions) with 4 presentation formats (themes, cards, letters, chatbots), we created six AI-augmented prototypes of teaching evaluations. Interviews with 16 post-secondary instructors revealed that effective use of AI in feedback processing should: (1) support action formation through focused views and divergent thinking, (2) reduce emotional costs while enabling celebration and sharing, (3) facilitate longitudinal engagement and re-contextualization across terms, and (4) maintain transparency and preserve access to original context to build trust. Our work provides design guidelines for AI-augmented feedback systems and demonstrates how language models can adaptively process and present information based on feedback receivers' specific needs and contexts.
Materials
PDF File (https://ink.library.smu.edu.sg/context/sis_research/article/11614/viewcontent/Eval_Study.pdf)
DOI (https://doi.org/10.1145/3757501)
Keywords
educational technology, human-AI interaction, interface design, language models, student evaluations of teaching
BibTeX
@article{shang2025rethinking,
  author = {Shang, Ruoxi and Mallari, Keri and Au Yeong, Wei Bin and Yasuhara, Ken and Tang, Anthony and Hsieh, Gary},
  title = {Rethinking Teaching Evaluation Reports: Designing AI-transformed Student Feedback for Instructor Engagement},
  journal = {Proc. ACM Hum.-Comput. Interact.},
  year = {2025},
  volume = {9},
  number = {7},
  articleno = {CSCW320},
  numpages = {40},
  month = {October},
  issue_date = {November 2025},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  doi = {10.1145/3757501},
  url = {https://doi.org/10.1145/3757501},
  pdfurl = {https://ink.library.smu.edu.sg/context/sis_research/article/11614/viewcontent/Eval_Study.pdf},
  type = {journal},
  keywords = {educational technology, human-AI interaction, interface design, language models, student evaluations of teaching},
  abstract = {Student feedback is critical for improving teaching, yet instructors often avoid reading evaluations due to emotional burden and information overload. We present a systematic exploration of how language models can distill and transform student evaluations into adaptive, actionable insights. Through a systematic design space exploration combining 4 feedback strategies (removing harmful content, paraphrasing criticism, sandwiching negatives, adding constructive suggestions) with 4 presentation formats (themes, cards, letters, chatbots), we created six AI-augmented prototypes of teaching evaluations. Interviews with 16 post-secondary instructors revealed that effective use of AI in feedback processing should: (1) support action formation through focused views and divergent thinking, (2) reduce emotional costs while enabling celebration and sharing, (3) facilitate longitudinal engagement and re-contextualization across terms, and (4) maintain transparency and preserve access to original context to build trust. Our work provides design guidelines for AI-augmented feedback systems and demonstrates how language models can adaptively process and present information based on feedback receivers' specific needs and contexts.},
}