An editorial for the Academic Integrity Digest
ChatGPT: It’s the subject of social media posts, institutional emails, and click-bait news stories prophesying the rise of the machines and the death of essays, word problems, and the social contract in general. If software can identify language patterns, pass standardized tests, and produce intelligible (if not always accurate) responses in grammatically correct prose, how can we know who – or, rather, what – is the source of a student’s work? Will AI – that is, artificial intelligence – totally undermine our attempts to foster AI, as in academic integrity? Or does its arrival instead present us with opportunities?
Even this early in ChatGPT’s lifecycle, it is clear that language-generating artificial intelligence presents significant challenges for teaching and learning that require quick – and dynamic – responses, including changes to pedagogy and policy. Faced with this challenge, some institutions have moved to ban the bots outright (as have some academic journals), while some educators have pivoted in the other direction, exploring how instructors can adopt and adapt AI as a teaching tool.
These responses – forbidding AI and embracing it – represent opposite ends of a spectrum of options for acknowledging and addressing this new reality. As academic integrity scholars and educators, we see both the challenges and the potential that ChatGPT presents to engage students in reciprocal learning and productive dialogue about what it means to work with integrity.
We do want to acknowledge one reason educators may feel defeated rather than inspired by this situation. We recognize that ChatGPT and similar artificial intelligence software are game-changers, rightfully prompting new and urgent considerations for how we uphold a culture of academic integrity in our courses and institutions. We also recognize that instructors are facing “change fatigue” after the constant reactive pedagogical shifts of the pandemic. Having to rethink aspects of our teaching yet again certainly seems daunting. But that experience, and the conversations about academic integrity inspired by Covid-era teaching, have given us tools, strategies, and resources we can adapt to this latest challenge, as well as the confidence to know that we are capable of taking up emerging learning technologies ourselves (with some practice).
We find it reassuring that what we’ve learned through the pandemic about effective assessment design strategies that reward integrity will also address the issue of text-generating apps. Assessments that include specific rather than generic topics, comparative approaches, scenario-based problem-solving, timed in-person writing or computational exercises, and oral responses make it difficult for students to substitute AI for their own efforts. Of course, the feasibility and applicability of these methods vary, and we need to consider what works best in different courses, disciplines, and professions. This rethinking is another opportunity: a chance to review how we ask students to demonstrate their learning and why, and to check that those practices reflect intentional assessment design.
Preparing ourselves and our students to work with integrity in this new context means thinking together – crucially, with, rather than against, our students – about how we can make responsible, ethical use of technology in professional (and personal) applications. Such a collaborative approach brings students in as partners in integrity, rather than (re)introducing an adversarial dynamic of policing (assumed) misconduct, and opens opportunities for broader reflection on the ethical implications that AI raises beyond the classroom. In keeping with this spirit of collaboration and transparency, if you don’t want your students to use ChatGPT in their work in your course, tell them explicitly. But first, discuss with them why not: how its use is at odds with how they are expected to learn in your course, and thus how it impedes your ability to assess their learning. This can also be an opportunity to review other learning technologies the course uses and how they serve its learning goals.
As we respond, it is essential that we not forget the important lessons of pandemic teaching about accessibility, as well as about technology and its very human designers and constraints: the recognition of the implicit bias and racist, ableist norms built into online proctoring platforms stands out in this regard. Academia’s response to ChatGPT must avoid perpetuating harmful stereotypes and assumptions about who we expect to commit misconduct, or what it looks like. Though user testing has demonstrated that ChatGPT has limitations – for instance, it lacks true understanding, common sense, and creativity, and demonstrates bias – these qualities can also be found in students’ work at different stages in their development.
For those deep in the middle of an academic year, it might be hard to see any of this as an opportunity rather than as additional complications and more hard work in an already busy semester. We recommend taking a lesson from AI: the job is a lot easier when we aren’t tackling it alone. Globally, educators are creating online resource banks and sharing strategies. At UBC, CTLT and CTL are actively producing resources and recommendations for instructors, and UBC’s Academic Integrity website has information and materials for both students and faculty – this existing expertise is ready for you to use. Collaborate with colleagues in your department to generate discipline-specific responses and quick fixes. Most importantly, invite your students to share their ideas on how they can use AI with integrity. The key strategy here is (human) conversation, another opportunity we should take.
About the authors
Dr. Laurie McNeill is Professor of Teaching and Associate Head, Undergraduate, in the Department of English Language and Literatures, Faculty of Arts at UBC’s Vancouver campus.
Dr. Anita Chaudhuri is Assistant Professor of Teaching in the Department of English and Cultural Studies, FCCS and Faculty Advisor on Academic Integrity at UBC’s Okanagan campus.