FAQ for Faculty: GAI & Experimental AI Programs

In your role as an instructor, you may be interested in exploring AI tools with your students. You may also be wondering how best to engage with these tools in your teaching, how to design effective assignments that incorporate AI, or how to talk to your students about their AI use.

We put this FAQ together in the hopes that it might help answer some foundational questions (and maybe spark new ones!) as you investigate using established or new, experimental AI systems. As always, please reach out if you have any questions or concerns, or just want to get together to chat about your ideas or experiences.

Using AI in your course

Q: How do I get started? What Cornell-supported platforms can I use?
A: The University provides access to Microsoft Copilot for use with low-risk data. You and your students can log in using your NetID and password to access a protected space where your data is not stored or used to train the underlying model.

You may also want to try other AI tools such as OpenAI's ChatGPT, Anthropic's Claude, or any number of discipline-specific technologies; there are hundreds on the market now. As always, we remind you that using third-party software comes with additional risks to you and your students. If you are considering an unsupported technology, please submit a Statement of Need through your local IT department. This triggers a set of reviews to ensure that security, privacy, accessibility, and other data protection standards are met.

Q: What policies exist at Cornell that can help guide my use of AI in the classroom?
A: While Cornell has no “official” policies, it offers a series of recommendations for instructors, which can be found in the teaching and learning with AI report.

Q: How do I determine what is an appropriate use of AI for my students?
A: Certainly, there are fundamental skills and types of knowledge that students need to experience and master in order to progress as learners, professionals, and scholars. No one is better off for missing this foundational learning. At the same time, the world of work is quickly evolving around the use of AI (World Economic Forum, 2023). Students will need to learn effective and ethical workflows that incorporate AI in discipline-specific ways that support their intellectual growth and professional transformation.

As you look at your course content and its assessments, identify which skills, learning objectives, and outcomes are critical for students to master. The goal is to imagine ways that AI can align with, rather than supplant, this critical learning. Harvard’s AI Pedagogy project or CTI’s work on assignment redesign might offer inspiration for reworking your assignments. Remember, AI can be a mentor, a tutor, a coach, a teammate, a simulator, or a tool that accomplishes tasks. It might even lighten the mood with a harmless joke or two.

Q: How do I communicate what is and is not OK with students?
A: Your syllabus should include a statement that clearly expresses your beliefs on AI and outlines your policy of acceptable use. Individual assignment directions can also communicate to students whether or not AI can be used – and how.

Q: How do I evaluate assignments that incorporate AI?
A: Evaluating AI-generated content requires clear rubrics that distinguish between student and AI contributions. Focus on your students’ learning process rather than just the final product. Scaffold large projects into smaller, measurable ones that can perhaps be tackled in class. You might consider having students document and cite their AI use, and then reflect on their experience, explaining how they integrated it into their work and what decisions they made about its use.

Q: What AI tools does Cornell provide?
A: Cornell has a license for Microsoft Copilot, which provides a protected space for working with low-risk data.

Q: Who at Cornell can help me learn how to use AI technology?
A: CTI can assist to an extent. We have experience using several different LLMs and proprietary tools, and we can help you imagine ways to incorporate AI into your teaching practice. We can also point you toward resources and tutorials to guide development of your own online or offline RAG-LLM for higher-risk data. As always, we suggest reaching out to your community or discipline (including students!).
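
If you are wondering what building a RAG-LLM actually involves, the sketch below illustrates the core idea in plain Python: retrieve the most relevant passages from your own course materials, then hand them to a language model alongside the student's question. It is a deliberately simplified illustration; the keyword-overlap retriever stands in for real vector-embedding search, and the local_llm.generate() call at the end is a hypothetical placeholder for a properly secured, locally hosted model.

    # A toy sketch of the "retrieval" half of a RAG workflow, for illustration only.
    # Real systems use vector embeddings and a secured, locally hosted model;
    # the local_llm.generate() call at the end is a hypothetical placeholder.
    from collections import Counter

    def tokenize(text):
        return [w.lower().strip(".,;:!?") for w in text.split()]

    def retrieve(query, documents, top_k=2):
        """Rank course documents by simple word overlap with the student's question."""
        query_words = set(tokenize(query))
        scores = {name: sum(Counter(tokenize(text))[w] for w in query_words)
                  for name, text in documents.items()}
        ranked = sorted(scores, key=scores.get, reverse=True)
        return [name for name in ranked[:top_k] if scores[name] > 0]

    def build_prompt(query, documents):
        """Combine the retrieved passages and the question into one grounded prompt."""
        context = "\n\n".join(documents[name] for name in retrieve(query, documents))
        return f"Answer using only the course materials below.\n\n{context}\n\nQuestion: {query}"

    course_docs = {
        "syllabus": "Office hours are Tuesdays at 3pm.",
        "week1_reading": "Photosynthesis converts light energy into chemical energy.",
    }
    prompt = build_prompt("When are office hours?", course_docs)
    print(prompt)
    # response = local_llm.generate(prompt)  # hypothetical call to a locally hosted model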

Q: How can AI help to make courses more accessible?
A: AI can support multiple means of representation, engagement, and expression. For example, AI tools can provide alternative formats for reading materials, translate texts, suggest schedules for accomplishing coursework, or assist in brainstorming and organizing ideas. AI can also assist students by co-creating personalized learning paths. See a focused list of tips here: AI & Accessibility.  

Q: How do I level the playing field in my course? If some students have high levels of AI literacy and others do not, will they be at a disadvantage?
A: Try co-creating and sharing knowledge about using AI by building a space where students can articulate their use cases and effective practices. You can explore GenAI together if you choose, and build literacy as a larger group. At the same time, including ethical discussions about AI's risks, tips on how to verify information, and conversations about when and how people can (and should) stay in the loop as decision-makers can help students nurture critical thinking and build AI literacy.

Using “experimental” AI in your courses

A number of students are developing AI bots designed to be deployed in courses here at Cornell, with the approval of course instructors. These experimental projects vary, and might include AI bots designed to help students navigate course content within or outside of Canvas; bots built to generate quiz questions, sample problem sets, or study guides; or bots that act as 24/7 tutors by drawing on course materials (PowerPoint slides, readings, syllabus, etc.) to discuss learning content and provide feedback. Bots can often discuss material in a variety of ways, increasing access to complex concepts. See FAQs below for more information.

Q: Why should I consider deploying student-made AI bots into my course?
A: Instructors who incorporate experimental learning bots into their courses can benefit from a set of opportunities. They may find themselves co-designers of these new technologies, working with student teams to develop and tweak tools tailored to their course content, discipline, or student group. They may choose models, or limit bots to certain data sets or collections of materials. This gives instructors greater control over what students are accessing and studying – and over what outside companies have access to. In addition, models can become more transparent or more ethically sourced. Developing a class-specific tool can also help level the playing field between students with varying levels of expertise and access to commercial AI.

For assessing student learning, AI bots can deliver new insights into student engagement with material, or provide a deeper understanding of where students are struggling. For example, bots can produce new analytic “dashboards” with interaction metrics and reports of how students interacted with the bot (e.g., questions asked).
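
As a purely illustrative example (the log format below is hypothetical, and any real deployment would need to handle student data in line with FERPA), a dashboard summary of that kind often boils down to simple aggregation over an interaction log:

    # Hypothetical interaction log; real bots define their own export formats.
    from collections import Counter

    interaction_log = [
        {"student": "s1", "topic": "recursion", "question": "What is a base case?"},
        {"student": "s2", "topic": "recursion", "question": "Why does my call never stop?"},
        {"student": "s1", "topic": "big-O", "question": "Is O(n log n) better than O(n^2)?"},
    ]

    questions_per_student = Counter(entry["student"] for entry in interaction_log)
    most_asked_topics = Counter(entry["topic"] for entry in interaction_log)

    print("Questions per student:", dict(questions_per_student))
    print("Most-discussed topics:", most_asked_topics.most_common(2))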

While these technologies have the potential to assist learning and student engagement, they do come with some potential risks that all users should be aware of.

Q: What are the risks of deploying an experimental learning bot into my course?
A: Risks include potentially sharing FERPA-protected data, proprietary or copyrighted information, and user interaction and engagement metrics with an unknown service provider. Also, models are not always transparent, and they can return biased or harmful language that affects different communities in different ways. Bias can also be felt implicitly when models leave groups or communities out of the conversation. Further, despite the potential to assist student learning in the short term, there may be long-term negative outcomes (Bastani et al., 2024). For these reasons, faculty should take care to consider the longitudinal impacts on student learning, particularly when mastering foundational knowledge and processes is critical to students’ disciplinary success.

Q: How is Cornell balancing the risks of using experimental AI with opportunities for student learning?
A: Balancing risks is a task we must all attend to. For its part, Cornell IT has implemented a set of standards that experimental AI providers must meet, along with a pathway to mentor new AI groups. These include technical standards around security and privacy, as well as benchmarks for providing support and maintenance of their bots. This helps ensure that the AI tools faculty may choose to integrate into their teaching are secure. By setting these standards, Cornell helps safeguard the educational environment, allowing faculty to focus on enhancing student learning without compromising data privacy or educational quality. It also places the responsibility of supporting the tools on the software providers, who are aware of these expectations and have a plan in place, ensuring that faculty are not burdened with troubleshooting and maintenance.

Q: How can I minimize risk when adopting an experimental technology in my class?
A: To balance innovation with risk, start by piloting AI tools in a small, controlled setting, perhaps in one assignment or module. Gather feedback from students and reflect on the outcomes before scaling up. Consider documenting the process, including any challenges and how they were addressed. Remember that care should be taken when introducing AI into your course: one study found that students may use AI as a “crutch,” leading to dependence that reduces learning and productivity over the longer term (Bastani et al., 2024).

Q: What else do I need to know?
A: These technologies are young and may contain bugs to varying degrees. Because they come from third parties not supported by Cornell, there is no commitment to long-term access for students and faculty, there are no contractual obligations, and access does not imply endorsement by the university. Student and faculty users may lose data they have entered into a bot (e.g., conversations). Students may choose to opt out. Student learning may be affected in unexpected ways.

Q: How can I determine whether GenAI is helping my students with their learning?
A: We may be able to help you along your journey! Please contact CTI to schedule a conversation about evaluating student learning and engagement.


References

Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024, July 15). Generative AI can harm learning. SSRN. https://ssrn.com/abstract=4895486 or http://dx.doi.org/10.2139/ssrn.4895486

World Economic Forum. (2023, April 30). The Future of Jobs Report 2023 [Annual report]. https://www.weforum.org/publications/the-future-of-jobs-report-2023/