Ethical AI for Teaching and Learning

For both instructors and students, it is important to understand, evaluate, and become familiar with generative artificial intelligence (AI) tools, whether or not you decide to incorporate them into your course. Engaging with generative AI tools means applying a thoughtful, critical, and ethical lens to determine whether their use will benefit your assignments and assessments. It also means considering how your students may independently be engaging with these tools in their learning, whether productively or in ways that may compromise their work’s academic integrity.

For faculty and students alike, this engagement encompasses recognizing when and how generative AI is used across domains, assessing the reliability and validity of AI-generated outputs, identifying the ethical and social implications of the design and use of generative AI applications, and creating and communicating with generative AI systems in appropriate ways.

Building literacy in generative AI includes addressing ethics, privacy, and equity with intention.

There are many open questions, including legal ones, regarding the ethical design, development, use, and evaluation of generative AI in teaching and learning. While generative AI may prove powerfully useful, concerns and sensitivities surround a number of key issues, including:

  • Transparency and oversight: because many generative AI tools are developed and owned by corporations, how can we know how the tools are trained, or what safeguards protect users against inaccurate information or harmful interactions?
  • Political impact: what protections exist against generative AI being used to spread inaccurate or discriminatory content?
  • Environmental impact: as generative AI tools are trained on ever-larger data sets, requiring ever more energy, what is the environmental cost of that energy consumption?
  • Diversity, non-discrimination, and fairness: how can we ensure that tools avoid unfair bias and are universally accessible?
  • Privacy, data governance, technical robustness, and safety: how is user data or copyrighted material used, stored, or shared? Who has access to user data? (European Commission, 2022)

Researchers and educators have begun outlining multiple ways to encourage, practice, and support ethical and responsible use of generative AI in the classroom (Dwivedi et al., 2023). Drawing from these resources, the sections that follow spotlight different ways to think through the use of generative AI in teaching and learning.

Ethics & Equity

Not all technologies affect all users in the same way, and some student populations may be at greater risk of harm than others (Gašević et al., 2023). Human and systemic biases, both in generative AI algorithms and in the data used to train large language models (LLMs), shape the output of AI tools and can perpetuate inequities when those biases are not addressed.

LLMs use statistical algorithms to analyze vast amounts of text and identify patterns and connections. A model’s output mimics the data used to train it: if the majority of the training data relates to a certain industry, language, demographic, or time period, the output the model generates will reflect that same skew. That is, the content generated is based on learned language patterns and on the examples the model has received.
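
To make this concrete, here is a minimal sketch, in Python, of a toy bigram model trained on an invented five-sentence corpus. This is not how production LLMs are built, but it illustrates the underlying dynamic: the model can only reproduce patterns present in its training data, so a corpus skewed toward one domain yields output skewed the same way.

    from collections import Counter, defaultdict

    # Toy "language model": learn which word tends to follow which.
    # The corpus is deliberately skewed toward finance to show how
    # output mirrors whatever data the model was trained on.
    corpus = [
        "the market closed higher today",
        "the market opened lower today",
        "investors watched the market closely",
        "the fund outperformed the market",
        "the classroom was quiet today",  # the lone non-finance example
    ]

    # Count bigram frequencies across the corpus.
    bigrams = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            bigrams[prev][nxt] += 1

    def generate(start, length=5):
        """Continue from `start`, always picking the most frequent next word."""
        words = [start]
        for _ in range(length):
            followers = bigrams.get(words[-1])
            if not followers:
                break
            words.append(followers.most_common(1)[0][0])
        return " ".join(words)

    # "market" dominates the data, so continuations of "the" gravitate
    # toward finance vocabulary rather than classroom vocabulary.
    print(generate("the"))  # -> "the market closed higher today"

Scaling the corpus up to the web-scale data behind real LLMs does not remove this dynamic; it only makes the skews harder to see.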

As a result, LLMs can carry inherent biases that students should be aware of. These models can provide inaccurate, misleading, or unethical information; they can impersonate people and organizations, share intellectual property without attribution, and influence users through the way information is presented (Antoniak, 2023). It is therefore important to evaluate what AI generates through a critical lens.

Here are a few questions for instructors to consider, and to invite students to consider, when critically evaluating AI-generated content:

  • Is the AI-generated content accurate? How can you test or assess the accuracy?
  • Can other credible sources (outside of generative AI) validate the data or item produced? 
  • How does the information generated impact or influence your thinking on this topic?
  • Who is represented in this data? Is the data inclusive in terms of the material’s scope and the perspectives that it presents? 
  • Knowing that LLMs may also collect the data your students input (i.e., their prompts), how will you make students aware of this practice so they can safeguard their own privacy?

Again, we stress that the LLMs that drive generative AI are only as inclusive and equitable as the information informing them. If only certain works inform a generative AI tool (e.g., English-language works only, U.S.-centric works, or works from one specific demographic), the output you receive from that tool will also be both limited to and influenced by the voices or works it has been trained on. 

With this in mind, it is important that students are made aware of these potential limitations, and that they are given flexible options for completing assignments. It is equally important to provide transparency in your assignment rubric and grading criteria to account for these limitations and options (OpenAI, 2023; Frąckiewicz, 2023). 

For more information on how you can provide equitable learning activities in your classroom, refer to Universal Design for Learning (UDL).

Privacy & Equity

As you and your students interact with AI, it is important to consider privacy. According to Antoniak (2023), "LLMs store your conversations and can use them as training data," which means any input and any materials uploaded to an LLM can become part of the model’s training set and may later be shared without attribution. Your resources and intellectual property may thus be used in unexpected ways. Any uploaded data also flows through an assortment of technology providers that together make up a tool’s ecosystem or infrastructure, each with its own privacy policy and terms of use, and opting out of those terms may not be possible if you intend to use the technology. As a result, you may wish to share only open information or data that does not need to remain private. It is also essential to never upload or share any student information protected under the Family Educational Rights and Privacy Act (FERPA) or similar regulations.
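
As one practical safeguard, prompts can be screened for obvious identifiers before they are sent to a third-party service. The Python sketch below is illustrative only: the regular-expression patterns and the [STUDENT_ID] format are assumptions for demonstration, and pattern matching is no substitute for following your institution's FERPA guidance.

    import re

    # Illustrative pre-processing: strip obvious identifiers from text before
    # it is sent to any third-party generative AI service. These patterns are
    # assumptions for demonstration; real FERPA compliance requires
    # institutional policy review, not regex alone.
    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),    # email addresses
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),        # U.S. SSN format
        (re.compile(r"\b[A-Z]{2}\d{7}\b"), "[STUDENT_ID]"),     # hypothetical ID format
    ]

    def redact(text):
        """Replace each matched identifier pattern with a placeholder."""
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    prompt = "Summarize feedback for jane.doe@university.edu (ID AB1234567)."
    print(redact(prompt))
    # -> "Summarize feedback for [EMAIL] (ID [STUDENT_ID])."

Even with screening in place, the safest default remains the one described above: treat anything entered into a generative AI tool as potentially public.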



References

Antoniak, M. (2023, June 22). Using large language models with care. AI2 Blog, Medium. https://blog.allenai.org/using-large-language-models-with-care-eeb17b0aed27

De Vynck, G. (2023, June 28). ChatGPT maker OpenAI faces a lawsuit over how it used people’s data. Washington Post. https://www.washingtonpost.com/technology/2023/06/28/openai-chatgpt-lawsuit-class-action/

Dwivedi, Y. K., et al. (2023). Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

European Commission, Directorate-General for Education, Youth, Sport and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756

Frąckiewicz, M. (2023). OpenAI and the Risks of AI Bias: Addressing Stereotypes and Discrimination. TS2 SPACE. https://ts2.space/en/openai-and-the-risks-of-ai-bias-addressing-stereotypes-and-discrimination

Gašević, D., Siemens, G., & Sadiq, S. (2023). Empowering learners for the age of artificial intelligence. Computers & Education: Artificial Intelligence, 4, 100130. https://doi.org/10.1016/j.caeai.2023.100130

Uzzi, B. (2020, November 4). A simple tactic that could help reduce bias in AI. Harvard Business Review. https://hbr.org/2020/11/a-simple-tactic-that-could-help-reduce-bias-in-ai