Publications

Selected papers and projects representing recent directions in the lab’s research.

Illustration of assessing student code explanations with an LLM

Enhancing Intelligent Tutoring Systems with Instruction-Tuned LLMs: Automated Assessment of Student Code Comprehension

Evaluating students’ natural-language explanations of code in real time is a key challenge for tutoring feedback. This paper fine-tunes open-source LLMs to automatically assess line-by-line explanations of Java code, achieving strong correlations with human judgments and outperforming few-shot prompting baselines.

Illustration of mapping program logic steps to code blocks

Evaluating Logical Structure in Computer Programs Using LLMs

This work examines how well LLMs can identify and explain a program’s logical steps and corresponding code blocks in well-structured tasks. Similarity between LLM and expert annotations reaches up to 64.4% under multiple alignment strategies.

Illustration of a novice student using an LLM for code comprehension

A Study on How Well LLMs Can Assist Novices with Code Comprehension Tasks

We study how introductory programming students prompt LLMs for help with code comprehension, without any prior training in prompting techniques. The LLM explanations are generally accurate and complete; students rely on three main prompt types, and LLM access is associated with increased confidence and improved performance.

Illustration suggesting productive learning effort versus quick AI answers

Are LLMs actually good for learning?

In our view, LLMs are bad for learning when they eliminate productive struggle: unrestricted access is fundamentally at odds with effective learning because the model does too much of the work for the learner.
