CU Committee Report: Generative Artificial Intelligence for Education and Pedagogy
In Spring 2023, the Cornell administration assembled a committee to develop guidelines and recommendations for the use of Generative AI for education at Cornell. The committee's final report, which evaluates the feasibility, benefits, and limitations of using generative AI technologies in an educational setting and their impact on learning outcomes, appears below.
To download "Generative Artificial Intelligence for Education and Pedagogy" as a PDF, see: Full CU Committee Report.
Chairs: Kavita Bala, Alex Colvin
Committee members: Morten H. Christiansen, Allison Weiner Heinemann, Sarah Kreps, Lionel Levine, Christina Liang, David Mimno, Sasha Rush, Deirdre Snyder, Wendy E. Tarlow, Felix Thoemmes, Rob Vanderlan, Andrea Stevenson Won, Alan Zehnder, Malte Ziewitz
- Executive summary
- Section 1: Introduction
- Section 2: Generative AI capabilities and limitations
- Section 3: Guidelines and recommendations for educational settings
- Section 4: Recommendations for faculty and administration
- Section 5: Conclusions
- Appendix A: State of the art in Generative AI
- Appendix B: Courses that develop writing as a skill
- Appendix C: Creative courses for music, literature, and art
- Appendix D: Courses in social sciences
- Appendix E: Mathematics, physical sciences, and engineering
- Appendix F: Courses in programming
- Appendix G: Courses in law
Executive summary
Educators must take generative artificial intelligence (GAI) into account when considering the learning objectives for their classes, since these technologies are not only going to be present in the future workplace, but are already being used by students. While these tools have the potential to customize the learning experience for individual students and could increase accessibility, they also pose risks. The most obvious risk is that GAI tools can be used to circumvent learning, but they may also hide biases, inaccuracies, and ethical problems, including violations of privacy and intellectual property. To address the risks of GAI while maximizing its benefits, we propose a flexible framework in which instructors can choose to prohibit GAI use, to allow it with attribution, or to encourage it. We discuss this framework, taking into consideration academic integrity, accessibility, and privacy concerns; provide examples of how it might apply across different learning domains; and make recommendations for both faculty and administration.
Section 1: Introduction
Generative artificial intelligence (GAI) has attracted significant attention with the introduction of technologies like ChatGPT, Bard, and DALL-E, among others. This new technology has spurred major investments by Amazon, Google, and Microsoft, and spawned many new startups. While there is much excitement about GAI’s potential to disrupt various industries, many have voiced significant concerns about its potential for harmful use [1]. Both the excitement and the concern have been echoed in the context of education.
In Spring 2023, the Cornell administration assembled a committee to develop guidelines and recommendations for the use of GAI for education at Cornell, with the following charges:
- Evaluate the feasibility, benefits, and limitations of using AI technologies in an educational setting and their impact on learning outcomes.
- Assess the ethical implications of use of AI technologies in the classroom.
- Identify best practices for integrating AI technologies into curriculum and teaching methodology.
- Recommend guidelines for the safe and effective use of AI technologies in an educational setting.
- Provide recommendations for ongoing evaluation and improvement of the use of AI technologies in an educational setting.
The committee included a broad spectrum of educators who span disciplines across the university. Over a series of meetings in Spring 2023, the committee developed the guidelines shared in this report.
Opportunity: GAI has been touted as a potential paradigm shift in education. Proposed benefits include providing a customized learning experience matched to each learner's individual needs; increasing accessibility for students with learning disabilities, anxiety, or language barriers; allowing instructors to scale constructive critiques for iterative learning and improvement in writing; and assisting with tasks in a number of domains, including coding, creative composition, and more.
Concerns: Currently, GAI output can include inaccurate information, toxic output, biases embedded in the model through the training process, and infringement of copyrights on material and images. Students can use GAI to circumvent the process of learning and assessment in classes. In cases when GAI tools can serve learning outcomes, lack of affordable access for all students could exacerbate systemic inequalities. In addition, overreliance on these tools risks atrophying students’ ability and willingness to interact with instructors and peers.
Recommendations
- Rethink learning outcomes. GAI requires rethinking our goals in teaching and the learning outcomes we hope to achieve. With the tremendous projected impact of GAI in many industries, students will be working in a GAI-enabled world. We strongly encourage instructors to integrate GAI into learning outcomes to focus student education on higher-level learning objectives, critical thinking, and the skills and knowledge that they will need in the future.
- Address safety and ethics. We expect that GAI technology will continue to improve, and Cornell will play an important role in enabling its ethical and safe use. However, instructors must educate their students about the pitfalls of current technology. They must teach them to approach GAI critically and to validate GAI-produced information rigorously.
- Explicitly state policies for use of GAI. Instructors should clearly and consistently communicate to students their expectations on the use of GAI in their assignments and classes, including when it is and is not allowed, and what uses of GAI are considered violations of academic integrity. Further, when GAI is permitted, it should be correctly attributed, and instructors should discuss the importance of student validation of the information generated. Many foundational skills will still need to be developed without the use of GAI. In such cases, instructors must directly explain to students why the process of achieving the specified learning outcomes for a class, without reliance on tools that create “shortcuts,” is integral to a student’s academic and personal growth.
We recommend that instructors consider three kinds of policies, applied either to individual assignments or to their courses generally.
- To prohibit the use of GAI where it interferes with the student developing foundational understanding, skills, and knowledge needed for future courses and careers.
- To allow with attribution where GAI could be a useful resource, but the instructor needs to be aware of its use by the student and the student must learn to take responsibility for accuracy and correct attribution of GAI-generated content.
- To encourage and actively integrate GAI into the learning process where students can leverage GAI to focus on higher-level learning objectives, explore creative ideas, or otherwise enhance learning.
Roadmap. We structure this report as follows. In Section 2, we present a brief, high-level overview of Generative AI technologies in various domains, with links to detailed resources in Appendix A. In Section 3, we present our guidelines for the three policies described above, including discussions of academic integrity and accessibility issues, along with examples and recommendations of use cases for various styles of teaching in diverse disciplines (further expanded in Appendices B-G). In Section 4, we make specific recommendations for faculty and administration, and we summarize our discussion in Section 5.
Section 2: Generative AI capabilities and limitations
The last year has seen qualitative leaps in the capability of Generative AI that took even experienced AI researchers by surprise. These tools share certain commonalities in their use and training, but also have critical differences in capabilities, use cases, and availability. Generative AI models of natural language, such as ChatGPT or Bard, are known as Large Language Models (LLMs) and are trained on massive corpora of textual data. A different method, diffusion, is used to generate images with models trained on massive collections of image data, including artwork. The capabilities of these systems will likely change drastically in the coming years. While we provide resources in Appendix A to survey existing models, we do not intend these to be comprehensive, but rather to provide a snapshot of capabilities and common models to ground the report in specific approaches. Appendix A gives more details on how these models are trained for various domains like text, code, images, and other modalities, and how the technology is currently made available to users.
The current generation of GAI is based on large-scale, deep neural networks. While this technology is not new, recent developments in model design, hardware, and the ability to aggregate large amounts of training data have made it possible to create models that are larger by orders of magnitude. The models are pretrained, meaning they are fit to a large number of human-generated examples. The more data available, the better the generated outputs become, both in the precision, detail, and memory of the training data and in the ability to generalize based on abstract patterns. Sourcing data to train better models has led to many of the ethical concerns about GAI, including (but not limited to): violations of copyright; bias, or even toxicity, deriving from the training data; and forms of plagiarism within the model outputs. However, it is critical to note that these models are not simply memorizing and regurgitating these data: outputs are rarely identical to any single observed training instance.
Current GAI provides capabilities that would not have been thought possible even at the beginning of this decade. The most popular use patterns allow users to "ask" for output using natural language, and then refine those requests based on the system's output. Systems can produce fluent, grammatical, detailed, and mostly accurate descriptions of historical events, in the style of a Wikipedia page, without including any sentence that actually occurs in an original page. When prompted with a description of a computational process and user interface, systems can generate executable program code that could have come from a Stack Overflow post or a GitHub repository, but does not actually occur in either of those services. Diffusion models can produce images of "a hedgehog driving a train in the style of Botticelli" that have never actually existed. The compelling, immediately satisfying nature of these tools led to the explosion of GAI from a niche research area to popular culture, almost overnight.
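To illustrate this ask-then-refine pattern, here is a minimal sketch assuming the OpenAI Python SDK (openai>=1.0) with an API key configured in the environment; the model name and prompts are illustrative only.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# First request, phrased in natural language.
messages = [{"role": "user", "content":
             "Describe the 1969 Apollo 11 landing in the style of a Wikipedia page."}]
draft = client.chat.completions.create(model="gpt-4", messages=messages)
text = draft.choices[0].message.content

# Refine the request based on the system's output.
messages += [
    {"role": "assistant", "content": text},
    {"role": "user", "content": "Shorten that to two paragraphs for a general audience."},
]
revised = client.chat.completions.create(model="gpt-4", messages=messages)
print(revised.choices[0].message.content)
```

Note that the entire prior exchange is resent with each request; the system has no memory beyond the conversation it is shown.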
Understanding the limitations of GAI technologies.
New users often trust GAI models more than they should. While chatbots are increasingly being trained to avoid low-confidence statements, LLMs answer questions and justify their answers using plausible and confident-sounding language, regardless of the quality of the available evidence. LLMs have only one function: given the history of a conversation, they predict the next word. Thus, if asked to justify a sensible response, an LLM will often provide a reasonable-sounding answer based on its training data. However, when asked to justify a nonsensical response, it will use the same techniques, resulting in a reasonable-sounding but false answer.
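To make the “next word” claim concrete, the following minimal sketch runs that single prediction step using the small, open GPT-2 model via the Hugging Face transformers library (not one of the proprietary systems named above); the prompt is our own illustrative example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The model's one primitive operation: score every candidate next token
# given the text (conversation history) so far.
prefix = "The capital of France is"
inputs = tokenizer(prefix, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, sequence_length, vocab_size)

next_id = int(logits[0, -1].argmax())    # most likely next token
print(tokenizer.decode([next_id]))       # typically " Paris"
```

Everything a chatbot produces, including its justifications, is assembled by repeating this one step, which is why a fluent, confident-sounding rationale can be generated for a nonsensical claim just as readily as for a true one.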
Users also often overestimate GAI abilities by assuming these models can handle computational tasks that conventional software performs well; for example, mathematical calculations or finding references. At the time of writing, however, GAI models are unable to reason mathematically about floating-point numbers, yet they offer confident-sounding (but incorrect) answers to complex trigonometric questions. GAI models also fluently produce plausible but false academic and other references. Developers are actively working to augment GAI with more traditional tools (calculators, search engines) to address these issues, so GAI models may become more reliable over time.
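The tool-augmentation pattern these developers are pursuing can be sketched as follows: route arithmetic to an exact, conventional calculator and leave everything else to the model. This is a minimal sketch under stated assumptions; the `llm` parameter is a hypothetical stand-in for any chat model, and real systems use structured “function calling” rather than the plain-text handshake shown here.

```python
import ast
import operator

# A small, safe calculator "tool" for +, -, *, /, ** on numeric literals.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
       ast.Div: operator.truediv, ast.Pow: operator.pow, ast.USub: operator.neg}

def calculate(expr: str) -> float:
    """Evaluate a bare arithmetic expression exactly, without eval()."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(question: str, llm) -> str:
    # Ask the model to emit either a bare arithmetic expression or NONE.
    plan = llm("If this question is pure arithmetic, reply with only the "
               "expression to compute; otherwise reply NONE.\n" + question)
    return str(calculate(plan.strip())) if plan.strip() != "NONE" else llm(question)
```

The calculator never guesses, so the arithmetic it handles is exact; the open problem is getting the model to route requests to it reliably.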
Section 3: Guidelines and recommendations for educational settings
Educational settings vary significantly across Cornell: from large lecture halls to seminars, and from lab, field, or studio courses to clinical and practicum-based settings. Generative AI has potential uses in almost all of these settings. For example, educators can use GAI to develop lecture outlines and materials, generate multiple versions of assignments, or compose practice problems. Students can use GAI to research topics, iterate on text to improve written work, and design and create code, art, and music, among many other uses. Depending on how students use these tools, however, GAI could end up “doing the learning” that an assignment is designed to elicit, or distort the value of assessments.
To address these risks and take advantage of these opportunities, instructors should reassess learning outcomes for each class in light of the advent of GAI. To do so, they should consider the expectations of future courses that may build on the understanding and knowledge their course is expected to develop in the student; the expectations that future workplaces will have for workers’ responsible and ethical use of GAI tools; and the new opportunities for the purposeful use of GAI tools to aid learning.
The introduction of calculators into mathematics education provides a useful, if imperfect, analogy. Students in elementary school still learn how to do long division and multiplication. However, once students have mastered these skills, calculators are used in higher level classes to allow students to solve complex problems without being slowed down by the minutiae of arithmetic, and students are taught to use computational tools to assist in solving hard problems. Education curricula in mathematics have adapted to the increasing availability of calculators to prioritize students’ learning basic skills without calculators at first, and later, allow and/or encourage their use to support higher level learning.
Section 3.1: Academic integrity and accessibility
One major concern around the use of AI is the potential for academic integrity violations in which students use these technologies to complete assignments whose purpose is to develop skills (for example, practice problem sets) or to assess basic skills before students proceed to higher levels of learning. In these cases, the use of GAI tools may appropriately be prohibited. A second area of concern is students using these tools without properly citing them, and/or without questioning the underlying mechanisms or assumptions that produce the content. In this case, the key is for students to learn to appropriately attribute and critique these tools. However, there are also cases in which the use of GAI should be encouraged; for example, to promote the universal accessibility of assignments, or to provide tools that enhance high-level learning and allow students to be more creative and more productive. Below, we discuss each of these options in turn, bearing in mind that clear policies and processes are critical for a constructive learning environment and for trust between instructor and student.
Section 3.1.1: Prohibiting GAI tools
Free access to at least limited versions of GAI tools greatly heightens the concern that students may be tempted to violate academic integrity principles by turning in AI-generated content as their own. ChatGPT’s ability to complete assignments almost instantly makes it significantly more tempting for students facing competing priorities or a last-minute deadline crisis.
Given the widespread use of LLMs, a natural demand has arisen for tools that detect LLM-generated content. Many tools have been developed for this task, including Turnitin, GPTZero, and OpenAI's classifier. However, in the absence of other evidence, technical methods are not currently very helpful for regulating AI usage in the classroom. The objective of LLMs is to produce text with the same statistical properties as natural language, making detection an adversarial problem that grows harder as GAI improves. LLMs rarely output long snippets of text that are verbatim copies of existing content, which is the basis of traditional plagiarism detection. Attempts to identify GAI-generated text can therefore only be made statistically. Such methods will likely continue to produce both false positives and false negatives, and cannot decisively provide evidence of academic integrity violations. Using these methods now risks unfairly flagging students for academic integrity violations (for example, through bias against non-native speakers), creating a lack of trust between instructor and students, and damaging the learning environment.
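For intuition, statistical detectors typically measure how “surprising” a text is to a language model (its perplexity), since machine-generated text tends to score lower. The sketch below shows the core measurement, assuming the Hugging Face transformers library and the open GPT-2 model; any threshold placed on this number will, as noted above, produce both false positives and false negatives.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token surprise of `text` under the model. Lower values
    are weak statistical evidence of machine-like text, never proof."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean cross-entropy per token
    return float(torch.exp(loss))

print(perplexity("The committee met over a series of meetings in Spring 2023."))
```

Commercial detectors combine this signal with others, but the approach remains statistical, and so do its failure modes.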
Another potential remedy would be for GAI providers to explicitly limit the outputs of their systems to prevent them from answering common basic homework questions, analogous to the way question forums handle homework questions [2]. While it is technically possible for providers to implement these restrictions, or even for universities to run their own restricted services, such barriers have not proven foolproof. There are now many documented examples of GAI "jailbreaks" that allow adversarial parties to work around the restrictions placed on GAI models. A jailbreak works by constructing a long and complex prompt that causes the model to ignore its constraints and generate output in an alternative manner. Jailbreaking has been used to circumvent prohibitions on certain topics or behaviors, such as advocating violence; however, it could just as easily be used to obtain answers to homework assignments.
Beyond fairness to other students and to faculty, an overarching concern is that if students rely on GAI, they will not put in the practice needed to learn nor gain confidence in their ability to master needed knowledge or skills. Instructors should communicate to students why completing assignments without “shortcuts” is necessary to meet learning outcomes; why meeting specific learning outcomes is necessary to a student’s academic and personal growth; and why academic integrity violations are so harmful to both the individual student and to the larger learning communities at Cornell and beyond.
Faculty may take other measures to reduce the risk of academic integrity violations by moving to assessments and assignments less suited to GAI models; for example, aligning assessments more closely with class content, or moving assessments from take-home to in-class formats such as timed oral and written exams or in-class essays. These forms of assessment could disproportionately impact students with disabilities, although accommodations such as extended time and distraction-free testing zones might help address these issues. We examine below the potential for GAI to support students with diverse disabilities; however, we also note that overreliance on these tools can put these students at an even greater disadvantage. Students with disabilities may come to prefer these tools over interactions with faculty and other support systems, and become dependent on GAI to meet their needs, especially in the absence of fuller classroom access. They may also face greater vulnerability in proving that they did not violate academic integrity standards.
Section 3.1.2: Attribution, authorship, access, and accountability in the use of GAI
GAI is a rapidly evolving technology in a significant state of flux. While companies are working swiftly to identify and fix issues as they are discovered, it is important for instructors to be aware of the risks of GAI as it is currently available. Instructors must educate their students about these risks, and develop plans to mitigate their negative impact in the classroom if they decide to use or allow GAI in their teaching.
GAI tools pose potential privacy risks because data that is shared may be used as training data by the third-party vendor providing the service. Therefore, any information that educators are obligated to keep private, for example, under the Family Educational Rights and Privacy Act (FERPA) or the Health Insurance Portability and Accountability Act (HIPAA), should not be shared with such tools or uploaded to these third party vendors of GAI.
GAI tools also have implications for intellectual property rights. Original research or content that is owned by Cornell University, our students, or employees should not be uploaded to these tools, since they can become part of the training data used by the GAI tools. These include student assignments, data produced in projects or research groups, data that contains personally identifiable information, data from research partners (for example, companies) that may contain proprietary information, data that could be protected by copyright, etc.
If a class expects the use of GAI, it is important to ensure that all students have equal access to the technology, without cost barriers resulting in differential access. The institution should provide or negotiate licensing agreements for GAI tools, ensuring that the tools do not limit the university’s educational activities and academic freedom, that they respect privacy and intellectual property rights, and that they do not impose cost barriers or constraints.
At the time of writing, the US Department of Education has issued a new report [3] encouraging the development of policies that advance learning outcomes while protecting human decision making and judgment; that focus on the data quality of AI models to ensure fair and unbiased decisions in educational applications; and that understand the impact on equity and increase the focus on advancing equity for students. We agree with these important considerations and expect Cornell researchers and educators to contribute to these improvements in GAI.
Section 3.2: Encouraging the responsible use of GAI tools
GAI will inevitably be part of the future workplace, and thus a tool that all students will eventually need to learn to use appropriately. Consequently, instructors now have the duty to instruct and guide students on ethical and productive uses of GAI tools that will become increasingly common in their post-Cornell careers. GAI also has the potential to provide support for students with disabilities, particularly for individuals who experience difficulties with cognitive processing and focusing, “social scripting” (i.e., neurotypical means of communication), and anxiety. We have discussed above some of the dangers of reliance on GAI in ways that are counterproductive to learning. However, it is also important for faculty to recognize the barriers that students with disabilities face, and how GAI tools can help implement and sustain fuller modes of access and inclusion for all students in the classroom.
Below, we identify settings across a range of different areas of study where use of GAI could advance teaching goals and learning objectives, and make recommendations based on the different needs of each category. Appendices B-G provide detailed examples. This list is not exhaustive, but can help identify immediate practical use cases to instructors in a wide range of disciplines. While we see a range of possible ways in which GAI can be useful in teaching, common themes include:
- Use of GAI for individualized practice, help, and tutoring
- Use of GAI to generate material for students to analyze, organize, and edit
- Use of GAI for routine, preparatory work leading to higher order thinking and analysis
- Analysis of GAI’s use and impact in a domain
- Practice of the use of GAI as a tool
We now describe the uses within various disciplines and areas of study.
Section 3.2.1: Courses that develop writing as a skill (e.g., the writing seminars)
GAI tools offer opportunities to help students develop their writing skills by assisting with planning, outlining, and editing, and by providing individualized feedback. However, the use of GAI to generate and edit text in the writing process raises major concerns about attribution of work, academic integrity, plagiarism, and failure to develop foundational and advanced writing skills and judgment. Creativity and originality in writing are key learning outcomes that could be threatened by dependency on GAI in the writing process. Guided use of GAI is encouraged as the best approach to furthering, rather than undermining, learning outcomes. Examples could include the use of GAI to: generate an outline for a written report that students practice revising; summarize themes from a meeting transcript that students organize and prioritize; brainstorm ideas that students then evaluate; or generate lists of sources that students validate and assess. Appendix B outlines detailed examples and scenarios.
Section 3.2.2: Creative courses for music, literature, and art
Creative fields such as art and music have long engaged in discussions of what constitutes “original work” and how technology can enhance creativity. Practitioners, including students, are highly motivated to develop their skills, but may also be eager to create with new technologies. While there are many opportunities, concerns exist around ethical attribution of sources and copyright violations. This is an evolving field, with companies attempting to add attribution into their processes. There are already cases of GAI being used for creative brainstorming, for developing final artifacts, and as a partner enabling higher-level creation by the artist, among many other uses. The rest of the academy might look to creative fields for help working through their own disciplinary considerations of how GAI tools might change notions of original work. Appendix C discusses these considerations in more detail.
Section 3.2.3: Courses in the social sciences
In the social sciences, the advent of GAI raises particular concerns for the written assignments, homework, papers, and exams that are a core component of student work in many courses. Passive reliance on GAI by students to generate literature reviews or written work risks undermining the learning objectives of assignments, producing poor quality work, and violating academic integrity standards. Instructors are also encouraged to explore ways to purposefully incorporate GAI into social science courses in ways that enhance student learning. This can include having students evaluate GAI output and explore ways to test its validity. Appendix D and Appendix B discuss use cases.
Section 3.2.4: Mathematics, physical sciences, and engineering
Technical and mathematical courses have adjusted well in the past to incorporate new technologies, such as computing and visualization tools. GAI may provide similar opportunities to enhance education in mathematics, the physical sciences, and engineering. For example, students and instructors may use LLMs in an explanatory capacity, or use them to synthesize code supporting data analysis and visualization. Among the biggest concerns with current systems are their inaccuracy (“hallucinations”) and circular reasoning. Instructors should make themselves aware of the capabilities of current systems and their fast-changing behavior on mathematical and engineering problems. We recommend educating students on the capabilities and limitations of these systems, prohibiting their use where basic skills need to be developed, and encouraging their use where LLMs can improve student learning. Appendix E gives detailed examples of use cases for courses in this domain.
Section 3.2.5: Courses in programming
GAI is already used heavily in industry to assist coding, through applications like GitHub Copilot. Opportunities for LLMs in programming education lie in the generation of code from specifications (text to code), the generation of ancillary artifacts such as tests (code to code), and the generation of explanations or suggestions (code to text), as sketched below. The concern, however, is that students will rely on GAI and not learn the skills necessary to produce working, understandable, and updatable code. They may be unable to move beyond the solutions favored by an AI system, to identify and fix problems, or, at worst, even to recognize that alternatives exist. We recommend using GAI in advanced courses as a tutor or helper in programming, but not as the sole creator of code. See Appendix F for details.
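As a sketch of these three directions, the snippet below routes one example of each through a single helper. It assumes the OpenAI Python SDK as in the example in Section 2; the helper name `ask_llm`, the model choice, and the sample function are our own illustrations, not a standard API.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; model name is illustrative

def ask_llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

CODE = """def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
"""

# Text to code: generate an implementation from a specification.
text_to_code = ask_llm("Write a Python function median(xs) returning the median of a list.")
# Code to code: generate ancillary artifacts such as tests.
code_to_code = ask_llm("Write pytest unit tests for this function:\n" + CODE)
# Code to text: generate an explanation, as a tutor might.
code_to_text = ask_llm("Explain, line by line, what this function does:\n" + CODE)
```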
Section 3.2.6: Courses in law
For law, GAI threatens the integrity of the take-home exams that are a common feature of many courses. For foundational courses, particularly the first-year core, we recommend in-person written exams with restricted access to the internet and to GAI, to ensure the validity and integrity of exams. At the same time, GAI tools are in increasingly widespread use in the practice of law, and it is important that legal education address this shift. This could be done by addressing the use of GAI in legal practice explicitly in second- and third-year classes, including examination of these tools in legal research and writing courses. The strong ethical core of the discipline and practice of law should be reflected in how GAI is addressed. Appendix G elucidates further.
Section 3.3: Use of GAI by instructors for course content creation
Instructors can use GAI to create content; for example, as a first draft for course structure/syllabi, lecture structure, examples, figures and diagrams, etc. Instructors can also generate large banks of practice problems or assessment questions, though it is important to validate any questions assigned to students for accuracy and appropriateness.
We recommend that instructors also follow the guidelines of attribution if they choose to use GAI to produce course materials. In this way, faculty can model for students how to use GAI with attribution. This will also give students clarity about where GAI is not being used, and prevent them from assuming that they are being given educational material that the instructor has not personally created or vetted.
While GAI may have selective utility in providing feedback for low-stakes formative assessment (for example, on practice problems), we currently do NOT recommend its use in the summative evaluation of student work. Evaluation and grading of students is among the most important tasks entrusted to faculty, and the integrity of the grading process relies on the primary role of the faculty member.
Section 4: Recommendations for faculty and administration
Based on the aforementioned opportunities and concerns, we believe that GAI technologies can be integrated into teaching in ways that enhance learning objectives, but that these implementations must be accompanied by strategies to improve students’ understanding and practice of academic integrity. Such strategies may include: 1) instructing students on what academic integrity constitutes and why it is necessary; 2) guiding students toward scholarly and applied practices consistent with academic integrity; and 3) clarifying faculty’s intentions around learning outcomes. Students should be taught why using GAI in prohibited ways is not just unethical, but also counterproductive to learning essential content and skills. In addition, faculty must instruct students in best practices for using GAI.
We make the following recommendations for faculty:
- Faculty should be explicit in identifying expectations regarding the use of GAI tools in each course, and potentially for individual assignments. Cornell resources such as the Center for Teaching Innovation may be helpful in identifying standardized language and clear examples.
- Faculty are encouraged to identify well-defined learning outcomes to provide rationales for how and when GAI can/cannot be used in a particular course.
- When GAI is permitted, faculty should be clear about what is expected of students in terms of documentation and attribution, what work students are expected to produce themselves, and how students are expected to validate or verify output from GAI.
- Faculty members are encouraged to engage in ongoing conversations about the importance of academic integrity, including the fact that basic academic integrity principles remain important and still apply regardless of the existence of GAI tools. (See "Communicating Why Academic Integrity Matters").
- Integrating critique of current practices and uses of GAI, including ethical issues, into all stages of learning is vital.
- We currently discourage the use of automatic detection algorithms for academic integrity violations using GAI, given their unreliability and current inability to provide definitive evidence of violations.
- While faculty may use GAI as a tool for developing teaching materials, we encourage them to adhere to the same standards of attribution that they require of their students.
- We do not recommend the use of GAI for student assessment.
The Center for Teaching Innovation is available to consult with departments and individual faculty on how to best implement these recommendations.
We make the following recommendations for university administrators:
- The Code of Academic Integrity should be updated with clear and explicit language on the use of GAI, specifically indicating that individual faculty have authority to determine when its use is prohibited, attributed, or encouraged, and that use of GAI on assignments by students is only allowed when expressly permitted by the faculty member.
- When considering a move (back) to more in-person assignments and assessments, administrators should consider how doing so could have a disproportionate impact on students with disabilities and other marginalized students.
- The university should recognize the additional burden on instructors to adapt to the rapidly changing effects of GAI on education, and provide additional support to teaching faculty and staff.
- The university administration, in consultation with faculty and academic staff, should develop and issue best practices on assessments, in light of the growing tension between the need to ensure academic integrity and the need to ensure access and inclusion for marginalized students.
Specifically, aligning pedagogical practices with Universal Design for Learning (UDL) can promote fuller access and inclusion for all students. While this does necessitate rethinking the current design of classroom and assessment practices, doing so can achieve the dual goals of greater access for students and appropriate integration of AI tools into the classroom.
Finally, GAI technology continues to increase in capability and ubiquity. Tech companies are actively working to incorporate GAI in every aspect of their products, making it increasingly difficult to avoid or even identify its use. The recommendations in this report should provide a framework for immediate and future actions, but they are not the last word. There must be procedures in place to monitor new advances, communicate new capabilities widely, and adapt policies and course technologies.
Section 5: Conclusions
The impact of Generative AI tools on education is likely to grow over time. The use of these tools already threatens some standard educational approaches and poses challenges for academic integrity. At the same time, GAI is likely to become an important tool across many domains and students must learn about its strengths and limitations. If used thoughtfully and purposefully, GAI has the potential to enhance educational outcomes. For these reasons, we recommend that Cornell adopt a forward-looking approach that incorporates the use or non-use of GAI specifically into learning objectives.
Our core recommendations to faculty are that they reconsider their learning objectives in light of GAI tools, and incorporate explicit directions regarding the use of GAI into their syllabi and assignments. We recommend that faculty formally adopt one of three approaches, depending on the learning objectives of the course or assignment.
- Prohibit use of GAI where its use would substitute for or interfere with core learning objectives, particularly in courses where students are developing foundational knowledge or skills.
- Allow with attribution the use of GAI where it can serve as a useful resource to support higher level thinking or skill development.
- Encourage use of GAI in courses or assignments where it can be used as a tool to allow exploration and creative thinking, or level the playing field for students with disparate abilities and needs.
Our core recommendation to the administration is to provide material support to faculty as they grapple with adapting individual courses to the new reality of GAI tools. For example, the administration should provide assistance in implementing accommodations for new assignment and assessment mechanisms, provide additional TA support when needed for course redesigns, and support faculty as they implement new teaching techniques that may be unfamiliar, and initially perhaps unwelcome, to students.
To guide students to obtain the potential benefits from GAI in enhancing higher order thinking and learning, and to avoid the dangers of GAI undermining the gain of key skills and knowledge, Cornell must take a proactive approach to the use of GAI in education. Our students need to understand both the value and limitations of GAI, not only because they will encounter it on a regular basis in their future careers and lives, but also because many of them are likely to guide its development and use in the future.