In our recent paper "From Assistance to Misconduct: Unpacking the Complex Role of Generative AI in Student Learning" (published at the 2024 IEEE Frontiers in Education Conference), we explore how computing students use tools like ChatGPT—and where they draw the line between help and cheating. The paper is co-authored by Andreas Axelsson, Åsa Cajander, Daniel Tomas Wallgren, Mats Daniels, Udit Verma, Anna Eckerdal (Uppsala University), and Roger McDermott (Robert Gordon University, UK).

Based on interviews with nine students, we found that GenAI is deeply integrated into their workflows: they use it to debug code, to quiz themselves before exams, and even to get through motivational dips. One student noted they had started consistently getting top grades thanks to AI-assisted study strategies.

What stood out most was the students’ own ethical reasoning. Most agreed that copy-pasting AI outputs felt like misconduct—but using GenAI for inspiration, feedback, or clarification was generally seen as acceptable. Still, the boundaries were blurry and often context-dependent.

The study raises questions not just about tools, but about pedagogy and policy. As educators, we need to better support students in navigating this grey zone—through clearer guidance and learning designs that promote reflection, not just results.

Read the full paper here