AI & Academic Integrity
To best support student learning and reduce violations of academic integrity, be sure to clearly communicate your policies regarding the use of generative AI in your syllabus, in assignment instructions, and verbally in class. When generative AI is permitted, clarify expectations for documentation and attribution, as well as what aspects of the work should be produced by the students themselves. Discuss with students how generative AI output can be incorrect or problematic and that they are responsible for verifying the output and references if AI use is allowed for an assignment.
A set of course policy icons has been developed by the GenAI Advisory Council to help Cornell instructors communicate with students about appropriate AI use for different courses and assignments. The icons are downloadable and can be incorporated into your syllabus or assignment instructions to give students clear guidance about course expectations.
Elizabeth Karns, in her role as a Provost Fellow and in consultation with University Counsel, has developed the following recommendations for instructors on communicating expectations and addressing possible academic integrity violations.
Prevention
Set Clear Expectations in the Syllabus
To reduce ambiguity around appropriate uses of generative AI, instructors should clearly define both their expectations and students' responsibilities when using generative AI tools.
Here are a few recommended steps you can take; making these explicit in your syllabus and assignment instructions helps clarify expectations for students.
- Require that students verify the accuracy of all citations and references they include in their work.
- Request that students verify their references or methods on request; a student's response can then inform whether a formal academic integrity notification is warranted.
- Clearly limit allowable methods, tools, or references, and specify that deviations from these restrictions could constitute evidence of potential integrity violations.
- Inform and remind students that they should be prepared to verbally explain the work they submit.
Canvas Course Module on Academic Values & Integrity
Dr. Karns has also developed a Canvas Course module to help students recognize academic integrity concerns that are specific to your course. This resource is designed to encourage students to reflect on their personal values and learning goals within the context of your course.
Violation Awareness
Inquire About Unusual Submissions
Instructors should use their initial impressions of potential violations involving generative AI as a starting point for deeper inquiry, supported by objective and verifiable evidence, before proceeding with academic integrity hearings. A conversation with the student in question may garner new information that clarifies the problem.
Instructors can follow up, in person or by email, on any submissions that include unexpected methods, references, or approaches that appear to surpass course expectations. Students can be asked to explain how they arrived at such content. If the explanation raises concerns, the instructor can document the concern and proceed with further investigation.
Evidence of Violation
Gather Objective Evidence
Faculty should follow up their initial concerns about a generative AI-related violation with tangible evidence, such as:
- Submitted work includes citations or references to content that does not exist or cannot be verified.
- Inclusion of advanced references, methods, or approaches that exceed the scope of the course or seem implausible without external assistance.
- The student is unable to verbally explain, expand upon, or defend their submitted work when queried.
- The work is inconsistent with previously submitted work or preparatory steps submitted by the student, such as outlines, drafts, or prior discussions during office hours.
Evidence of an academic integrity violation needs to meet a clear and convincing standard. This is an intermediate level of proof: above the "more likely than not" standard and below "beyond a reasonable doubt." The instructor will need to show that the allegation is far more likely to be true than false. This standard does not require first-hand observation of the act, and combining multiple forms of evidence helps to meet it.
Crediting Generative AI in an Assignment
When using material generated from an LLM in course materials or in assignments that students submit, transparency is key, and these instances should be properly referenced. The APA style guide, MLA style guide, and Chicago Manual of Style include recommendations and examples for citing LLM-generated materials that you can share with students.
For example, consider:
- Including an in-text citation that references the LLM, e.g., placing the generated text in quotation marks and citing the model's developer as the author: (OpenAI, 2023).
- Asking students to include an AI-generated source in the methods section of their research paper.
- Having students provide the text or prompt they used for the LLM to generate a response and include what LLM model, date, and version they used.
Similarly, if instructors choose to use LLMs to help in the preparation of course materials, such work should be acknowledged and attributed.
Detecting AI-Generated Content
Generative AI increases the risk that students will submit work that is not their own. It is tempting to seek technology-based solutions to identify inappropriate use of generative AI tools such as ChatGPT.
Unfortunately, it is unlikely that detection technologies will provide a workable solution to this problem, because AI-generated content is very difficult to detect accurately. Although some instructors have reported that the tone and feel of AI-generated content differ from the student work they normally receive, in some cases it is almost indistinguishable from work produced by students. Detection tools claim to identify work as AI-generated, but they cannot provide evidence for that claim, and tests have revealed significant margins of error. This raises a substantial risk that students may be wrongly accused of improper use of generative AI. For a more in-depth look at the reliability of detection technologies, we recommend the article "How Reliable are AI Detectors? Claims Versus Reality."
Given their unreliability and their inability to provide definitive evidence of violations, we do not recommend using automatic detection tools to identify academic integrity violations involving generative AI. We believe that establishing trusting relationships with students and designing authentic assessments will be far more effective than policing students.
For more information on promoting academic integrity in your classroom, view our strategies.
References
Leechuy, Jasmine (2023, August 16). "How Reliable are AI Detectors? Claims Versus Reality." The Blogsmith.