AI & Academic Integrity

Although the artificial intelligence (AI) landscape is evolving rapidly, one impact is already clear: this technology is being integrated into the academic lives of both students and faculty. As a result, every instructor should consider how to provide clear guidelines for the use of generative AI in academic work in every class they teach.

Here are a few things to keep in mind when addressing AI and academic integrity in your course.

Clearly Communicate Your Expectations

Students benefit from clearly communicated expectations for their coursework, including transparency about when and how generative AI may be used in their assignments and exams.

To reduce the chance of violations of academic integrity, explicitly communicate your course’s generative AI policies by:

  • Including in your syllabus clear expectations for the use of generative AI tools, noting any assignment-specific differences in policy.
    • Clearly identify in what situations generative AI use is prohibited or permitted.
    • When generative AI is permitted, be clear about expectations for documentation and attribution, what work students are expected to produce themselves, and how students should validate or verify output from generative AI. See Crediting Generative AI in an Assignment, below.
  • Discussing the expectations spelled out in the syllabus during a class session.
  • Reviewing expectations and communicating how they apply to specific assignments.
  • Engaging in ongoing conversations about academic integrity, emphasizing that its basic principles still apply regardless of the existence of generative AI tools.

Example Course Policy Language for Generative AI Use

As you plan your syllabus and course policies with respect to generative AI, consider adapting the following language to communicate your general position. Please note that the sample language below reflects general, course-level positions on broadly permitting or prohibiting the use of generative AI tools. For sample statements at the assignment level, see AI in Assignment Design.

Prohibiting Generative AI Use in Your Course

Permitting Generative AI Use with Attribution in Your Course

Encouraging Generative AI Use with Attribution in Your Course

Permitting Generative AI Use on an Assignment-by-Assignment Basis

Crediting Generative AI in an Assignment 

When material generated by a large language model (LLM) appears in course materials or in assignments that students submit, transparency is key: these instances should be properly referenced. When possible, select citation guidelines from the style guide appropriate to your discipline and share them with your students. The APA style guide, the MLA style guide, and The Chicago Manual of Style all include recommendations and examples for citing LLM-generated material.

For example, consider:

  • Including an in-text citation that references the LLM, e.g., placing the generated text in quotation marks and citing the author: (OpenAI, 2023).
  • Asking students to document AI-generated sources in the methods section of their research papers.
  • Having students provide the prompt they gave the LLM, along with the model name, version, and date of use.
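
As an illustration, a reference entry following the APA Style guidance for citing ChatGPT might look like the example below; check the current edition of your chosen style guide, as recommendations continue to evolve.

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat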

Similarly, if instructors use LLMs to help prepare course materials, that use should be acknowledged and attributed.

If you have specific questions or would like to schedule an individual or group consultation, please contact us.

Detecting AI-Generated Content

Generative AI increases the risk that students will submit work that is not their own, and it is tempting to seek technology-based solutions for identifying inappropriate use of tools such as ChatGPT.

Unfortunately, detection technologies are unlikely to provide a workable solution to this problem, because AI-generated content is very difficult to detect accurately. Although some instructors have reported that the tone and feel of AI-generated content differ from the student work they normally receive, in some cases it is nearly indistinguishable from work produced by students. Detection tools claim to identify work as AI-generated, but they cannot provide evidence for that claim, and tests have revealed significant error rates. This creates a substantial risk that students will be wrongly accused of improper use of generative AI. For a more in-depth look at the reliability of detection technologies, we recommend the article "How Reliable Are AI Detectors? Claims Versus Reality" (Leechuy, 2023).

Given their unreliability and their inability to provide definitive evidence of a violation, we do not currently recommend using automatic detection tools to pursue academic integrity cases involving generative AI. We believe that establishing trusting relationships with students and designing authentic assessments will likely be far more effective than policing students.
 

References:
Leechuy, Jasmine. (2023, August 16). "How Reliable Are AI Detectors? Claims Versus Reality." The Blogsmith.