AI Literacy & Help Guide

AI Literacy & Help Guide

 

Ethical Considerations

Ethical Considerations

  • The AI Literacy Framework developed by Mills et al. (2024) outlines several ethical considerations for using AI, which are emphasized in the Framework's core values.

  • AI tools are trained on historical data, which may contain biases and assumptions that carry over into the tool's output. A tool can also generate output that is completely wrong, and AI tools have recently been used to create deepfakes.

  • Do not anthropomorphize an AI tool; refer to the tool as “it” rather than any other pronoun.

Biases and Filter Bubbles   

  • Biases exist in all of the information used to train an AI tool, which often results in biased or exclusionary outputs (Stewart, 2024).
  • AI tools can reinforce biases by creating a filter bubble: an algorithm filters the content a user sees based on what that user clicks on, which can isolate the user within a single viewpoint (GCF Global, 2019). A minimal sketch of this dynamic appears after this list.
  • There have been numerous examples of AI tools being developed and deployed before a bias in the training data was discovered. For example, Amazon built an AI tool to screen job applicants and soon found that it was biased against female candidates.
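To make the filter-bubble idea concrete, here is a minimal, hypothetical Python sketch (not any real platform's recommendation algorithm; the article names and weighting scheme are invented for illustration). Content is weighted by how often the user has clicked on its topic, so a user who only ever clicks one viewpoint is shown that viewpoint more and more:

```python
# A minimal, hypothetical sketch of click-based filtering (not any real
# platform's algorithm). Articles are weighted by how often the user has
# clicked their topic, so repeated clicks on one viewpoint crowd out others.
from collections import Counter
import random

ARTICLES = [
    {"title": "Viewpoint A, op-ed 1", "topic": "viewpoint_a"},
    {"title": "Viewpoint A, op-ed 2", "topic": "viewpoint_a"},
    {"title": "Viewpoint B, op-ed 1", "topic": "viewpoint_b"},
    {"title": "Viewpoint B, op-ed 2", "topic": "viewpoint_b"},
]

def recommend(click_history, n=3):
    """Weight each article by how often the user has clicked its topic."""
    topic_counts = Counter(article["topic"] for article in click_history)
    weights = [1 + topic_counts[article["topic"]] for article in ARTICLES]
    return random.choices(ARTICLES, weights=weights, k=n)

# Simulate a user who only ever clicks "viewpoint_a" articles.
history = []
for _ in range(20):
    shown = recommend(history)
    history.extend(a for a in shown if a["topic"] == "viewpoint_a")

# After 20 rounds, recommendations skew heavily toward viewpoint A.
print(Counter(a["topic"] for a in recommend(history, n=10)))
```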

Academic Integrity Concerns

Since the unveiling of ChatGPT, there have been several concerns within the academic community. Chief among them is the possibility of students cheating with ChatGPT. Surovell (2023, February 8) notes that similar consternation has accompanied many technological advancements throughout the 21st century. However, several resources address cheating with artificial intelligence, including the latest update to Turnitin.

No Concept of Equality or Justice 

  • AI systems could impact nearly everyone in modern society, yet they cannot represent everyone's interests equally because of the biases inherent in them.

  • Researchers have observed that even children can identify race and ethnicity biases within these systems. For example, facial recognition systems have been shown to recognize white faces more accurately than the faces of people of other races; a minimal sketch of how such a disparity might be measured appears after this list.

  • Researchers identify multiple causes of bias in AI systems. The first is the training data itself: data from marginalized communities is often underrepresented. This is not easily fixed, because efforts to collect more data can cause further harm to those communities.

  • Researchers and instructors should be aware that AI systems can collect and use data unethically, since many are trained on large public data sets, sometimes scraped from internet forums. The second cause of bias lies in the variables and information built into the AI model's algorithm, and the people in leadership roles who decide which algorithms are used are disproportionately white and male.
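To illustrate how a disparity like the facial recognition example above can be surfaced, here is a minimal, hypothetical Python sketch of a bias audit. The group labels and results are invented purely for illustration; a real audit would evaluate a real model on a benchmark dataset and compare accuracy across demographic groups:

```python
# A minimal, hypothetical bias audit: compare a model's accuracy across
# demographic groups. The groups and results below are invented purely for
# illustration; a real audit would use a benchmark dataset and a real model.
from collections import defaultdict

# (group, was the face recognized correctly?) pairs from an evaluation run
results = [
    ("group_1", True), ("group_1", True), ("group_1", True), ("group_1", False),
    ("group_2", True), ("group_2", False), ("group_2", False), ("group_2", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [number correct, number evaluated]
for group, correct in results:
    totals[group][0] += int(correct)
    totals[group][1] += 1

for group, (correct, total) in sorted(totals.items()):
    print(f"{group}: accuracy = {correct / total:.0%}")
# A large gap between groups (here 75% vs. 25%) is the kind of disparity
# researchers report when facial recognition systems are evaluated across races.
```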

Environmental Impact 

  • The computational power required to run AI systems can significantly impact the environment (Stewart, 2024). 

  • An increase in the use of AI tools will lead to the construction of more data centers. These large, climate-controlled data centers house thousands of servers to support computing (Zewe, 2025).

  • When OpenAI trained the LLM GPT-3, the process produced around 500 tons of carbon dioxide; smaller models produce correspondingly smaller emissions (Coleman, 2023).

  • AI tools can also help during environmental disasters, for example by assessing building damage without putting first responders in harm's way (Coleman, 2023).

  • Considerable energy demand is expected from powering data centers and running AI systems.

  • The impact on local environments can be significant; data centers that train AI models can deplete fresh water through evaporation.

  • AI will impact the environment differently in different areas, and this could lead to inequity (Ren & Wierman, 2024). 

Privacy  

  • AI systems continuously collect data, including personal information that a user enters. There are often challenges in how data is collected, used, and protected (Stewart, 2024).

  • Privacy is a major concern with many internet tools; however, AI tools consume data at such a high rate that it is nearly impossible to control what information about your interactions is collected (Miller, 2024).

Lack of Human Judgement  

  • AI may resemble human intelligence, but it cannot account for context and emotion. When discussing AI systems with students, emphasize that they lack human judgment.

  • AI cannot detect human sarcasm, nor can it understand the context of a question across different situations.

Transparency  

  • Researchers refer to AI tools as “black boxes” because their algorithms are difficult to understand or interpret, which makes it difficult for companies to ensure transparency and accountability (Stewart, 2024).

  • It is important for the stakeholders of an AI tool to understand the AI model and the data being used in training. A robust understanding of how content is collected and created can ensure transparency (Jonker et al., 2024). 

The Output of Generative AI

Can you copyright something you made with AI?
OpenAI says:
"... you own the output you create with ChatGPT, including the right to reprint, sell, and merchandise – regardless of whether output was generated through a free or paid plan."

The U.S. Copyright Office says:
The term “author” ... excludes non-humans.

However, if you select or arrange AI-generated material in a sufficiently creative way, that selection or arrangement may be protected; copyright will only protect the human-authored aspects of the work. For an example, see this story of a comic book. The U.S. Copyright Office determined that the selection and arrangement of the images IS copyrightable, but not the images themselves (made with generative AI).

In other countries, different rulings may apply; see:
Chinese Court’s Landmark Ruling: AI Images Can be Copyrighted


The Input to Generative AI (training data)
Should it be considered fair use? This is widely debated.

Argument A. No, it's a copyright violation
Copyright law is AI's 2024 battlefield - "Copyright owners have been lining up to take whacks at generative AI like a giant piñata woven out of their works. 2024 is likely to be the year we find out whether there is money inside," James Grimmelmann, professor of digital and information law at Cornell, tells Axios. "Every time a new technology comes out that makes copying or creation easier, there's a struggle over how to apply copyright law to it." 

This will affect not only OpenAI, but Google, Microsoft, and Meta, since they all use similar methods to train their models.
 

Argument B. Yes, it's fair use

Thoughts from the Association of Research Libraries
Training Generative AI Models on Copyrighted Works Is Fair Use

Thoughts from Creative Commons:
Fair Use: Training Generative AI - Stephen Wolfson
Better Sharing for Generative AI - Catherine Stihler

Thoughts from UC Berkeley Library Copyright Office
UC Berkeley Library to Copyright Office: Protect fair uses in AI training for research and education

Thoughts from EFF: Electronic Frontier Foundation:
AI Art Generators and the Online Image Market - Katharine Trendacosta and Cory Doctorow
How We Think About Copyright and AI Art - Kit Walsh
“Done right, copyright law is supposed to encourage new creativity. Stretching it to outlaw tools like AI image generators—or to effectively put them in the exclusive hands of powerful economic actors who already use that economic muscle to squeeze creators—would have the opposite effect.”

Other countries

Several corporations have offered to pay the legal bills of users of their tools
Adobe, Google, Microsoft, and Anthropic (for Claude) have offered to pay any legal bills from lawsuits against users of their tools.


Copyright is only one lens through which to consider generative AI

Creative Commons defends better sharing and the commons in WIPO conversation on generative AI 

“In particular, since all creativity builds on the past, copyright needs to continue to leave room for people to study, analyze and learn from previous works to create new ones, including by analyzing past works using automated means.

Mr. Chair, copyright is only one lens through which to consider generative AI. Copyright is a rather blunt tool that often leads to black-and-white solutions that fall short of harnessing all the diverse possibilities that generative AI offers for human creativity. Copyright is not a social safety net, an ethical framework, or a community governance mechanism — and yet we know that regulating generative AI needs to account for these important considerations if we want to support our large community of creators who want to contribute to enriching a commons that truly reflects the world’s diversity of creative expressions.”

Is A.I. the Death of I.P.?
Generative A.I. is the latest in a long line of innovations to put pressure on our already dysfunctional copyright system.
The New Yorker, Louis Menand, January 15, 2024
An interesting review of the book Who Owns This Sentence?: A History of Copyrights and Wrongs, by Bellos and Montagu.

Balancing AI, Copyright, and Data Privacy in Education: A Guidebook for Educators

Author: Ilana Grimes (Eastern Florida State College, Dean of Education)

This book explores Generative AI through the lenses of copyright, intellectual property rights, data privacy, and academic integrity.

Each chapter is linked to the original open-access eBook for easier navigation.

Introduction

I. Copyright

1. Copyright Overview

2. The History of Copyright

3. What is Copyright?

4. The Purpose of Copyright

5. What is Copyrightable?

6. The Relationship Between Copyright and Other Methods of Protecting Intellectual Property

7. Copyright Protections

8. Public Domain

9. Public Domain Essential Knowledge

10. Public Domain Guidelines

11. Copyright and AI Part 2. Copyrightability

II. Creative Commons

12. Creative Commons Overview

13. The Creation of Creative Commons

14. The Sonny Bono Copyright Term Extension Act

15. Eldred vs Ashcroft

16. CC: Organization, Licenses, Movement

III. Anatomy of a CC License

17. Three Layers of the CC Licenses

18. Four License Elements and the Icons

19. The Six Creative Commons Licenses

20. Combined Licenses

21. Public Domain Tools and Differences from the CC Licenses

22. Exceptions and Limitations to Copyright with CC Licensed Works

IV. Remixes, Adapted Works, and Derivative Works

23. Attribution

24. Collections

25. Licensing Considerations for Collections

26. Remixes, Adapted Works, and Derivative Works

27. Licensing Considerations for Adaptations

28. Combining CC Licenses

V. Generative AI, Copyright, and Creative Commons

29. Copyright and Generative AI

30. Non-Human Authorship

31. Update: Part 2.- Copyrightability 2025

32. Creative Commons and Generative AI

33. Questions to Consider

34. Training Input and Consent

35. AI Outputs and Intellectual Property

VI. Property Rights and Data Privacy

36. Data Privacy in Education

37. Data Risks

38. Extrapolation

39. Extrapolation in Higher Education

40. FERPA and Data Security

41. Submitting Student Work to AI

42. FERPA and Transparency

43. Violating Intellectual Property Rights

44. Data Privacy and Academic Integrity

References

Coleman, J. (2023, December 7). AI’s Climate Impact Goes beyond Its Emissions. Scientific American. https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/ 

GCF Global. (2019). Digital Media Literacy: How Filter Bubbles Isolate You. GCFGlobal.org. https://edu.gcfglobal.org/en/digital-media-literacy/how-filter-bubbles-isolate-you/1/ 

Gibson, R. (2024, September 10). The Impact of AI in Advancing Accessibility for Learners with Disabilities. EDUCAUSE Review. https://er.educause.edu/articles/2024/9/the-impact-of-ai-in-advancing-accessibility-for-learners-with-disabilities 

Jonker, A., Gomstyn, A., & McGrath, A. (2024, September 25). AI transparency. IBM. https://www.ibm.com/think/topics/ai-transparency

Miller, K. (2024, March 18). Privacy in an AI Era: How Do We Protect Our Personal Information? Stanford HAI, Stanford University. https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information

Reilley, M. (2024, November 10). Opinion: AI and Copyright — Can We Protect Original Works of Authorship? https://redlineproject.news/2024/11/10/opinion-ai-and-copyright-can-we-protect-original-works-of-authorship/

Ren, S., & Wierman, A. (2024, July 15). The Uneven Distribution of AI’s Environmental Impacts. Harvard Business Review; Harvard Business Publishing. https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts  

Stewart, K. (2024, March 21). The ethical dilemmas of AI. USC Annenberg. https://annenberg.usc.edu/research/center-public-relations/usc-annenberg-relevance-report/ethical-dilemmas-ai 

Zewe, A. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News, Massachusetts Institute of Technology. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117

