2 Ethical Considerations

While GenAI can produce useful content and materials, reliance on it has raised ethical concerns. If someone uses GenAI to create the images for a book, should that person really be considered the artist? If someone uploads their doctoral thesis's survey data and asks GenAI to analyze it, can that person really claim the research as their own? It is important to draw a line and define what ethical use of GenAI looks like, and to remain aware and critical of your own use, developing your own standards along the way.

Intellectual Property and Ownership

To what degree do you “own” what GenAI produces? Although you provided the prompt and guided edits, GenAI produced the content, and it constructed that content from existing materials. The degree to which you can claim intellectual property rights to something made with GenAI may depend on how it will be used: Is the intention to profit monetarily from the output, or to benefit academically (for example, through publication in a scholarly journal)?

In one case, a judge determined that GenAI output cannot be copyrighted because it is not human output. This ruling underscores the need for human oversight and intervention: the human contribution must outweigh the GenAI contribution.

An initial compromise was to credit GenAI tools as co-authors of scholarly or published works. What counts as authorship varies by profession and culture, but it generally implies a substantial degree of contribution to, and responsibility for, the research. In response, APA states that “AI cannot be named as an author on an APA scholarly publication.” Instead, APA argues that GenAI should be treated as a resource: cited in-text and explained in the Methods section, listed in the References section, with any output (copies of the chatbot conversation) included in an Appendix. Below is a citation template that can be used to cite GenAI.

Example APA Citation for GenAI

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

Overall, claiming ownership of GenAI output is still contested. The best advice is to proceed with caution, using GenAI as an assistant rather than as the main creator of ideas or analysis.

Large Language Models (LLMs)

GenAI is built on a vast collection of existing materials. When responding to a prompt, it draws on patterns it learned from that collection during training rather than looking up exact matches. This process of collecting material, processing it, and learning the patterns of language within it is what defines a Large Language Model (LLM). Because of this, it matters what is in that collection. In some cases, the creators of the LLM “feed” GenAI nuanced, field-specific materials so it is more likely to produce field-specific results. In other cases, the LLM was fed anything available on the Internet. Additionally, GenAI tools can store users’ prompts as reference material. No matter the source, there are ethical concerns.
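
To make this concrete, below is a toy sketch in Python (an invented illustration, nothing like a production LLM): a tiny bigram “model” built from a two-sentence, hand-written corpus. It can only recombine words it saw during “training,” which is the point above: what went into the training material shapes, and limits, what comes out.

# Toy sketch, not a real LLM: a tiny bigram "model" built from a hand-written corpus.
from collections import defaultdict
import random

training_text = (
    "learning design begins with learner needs "
    "learning design begins with clear outcomes"
)

# "Training": record which word tends to follow which word in the corpus.
next_words = defaultdict(list)
tokens = training_text.split()
for current, following in zip(tokens, tokens[1:]):
    next_words[current].append(following)

# "Generation": starting from a prompt word, repeatedly pick a plausible next word.
def generate(start, length=6):
    word, output = start, [start]
    for _ in range(length):
        options = next_words.get(word)
        if not options:  # a word the model never saw during training: it stalls
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

print(generate("learning"))  # recombines only what was in the training text
print(generate("medicine"))  # a topic absent from the training data goes nowhere

Real LLMs rely on billions of learned parameters rather than a lookup table, but the dependence on training material is the same.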

FERPA and HIPAA

If GenAI stores prompts, then you should not include student data, personally identifiable information, or protected health information in them. GenAI could use that information as reference material and reproduce it for someone else, handing an unknown person identifiable, private information.
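
One practical safeguard, sketched below in Python, is to scrub obvious identifiers from text before it is ever pasted into a GenAI tool. The patterns and the sample prompt are invented for illustration; real FERPA or HIPAA compliance requires institutional review, not a handful of regular expressions.

# Minimal sketch: scrub obvious identifiers before text reaches a GenAI tool.
# The patterns below are illustrative only, not a compliance guarantee.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                    # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),      # US phone numbers
    (re.compile(r"\bStudent ID:?\s*\d+\b", re.IGNORECASE), "[STUDENT ID]"),
]

def redact(text):
    """Replace obvious identifiers with placeholders before prompting GenAI."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarize feedback from jane.doe@example.edu, Student ID 4417, phone 555-867-5309."
print(redact(prompt))
# Summarize feedback from [EMAIL], [STUDENT ID], phone [PHONE].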

Diversity and Bias

It might seem like a computer can’t be biased, but GenAI can replicate stereotypes and biases present in the material its LLM was trained on. Either that training material must be more intentionally curated to be representative, or users must actively review and revise GenAI output. In the case of Canva’s text-to-image tool, for example, a better training dataset is being curated to reduce bias. As you use GenAI, evaluate your input and output critically to ensure they reflect your intentions and goals.

Copyright and Compensation

In these initial trainings and trials, GenAI gleaned anything available on the Internet, including art posted to social media, blog posts on personal websites, and the writing of journalists. As a result, artists and creators were compensated neither for their work being used to train models nor for their styles being widely mimicked by others. This has led to concerns about fair use, compensation, and copyright infringement.

The consequences have been felt by small businesses as well as large corporations such as The New York Times. The NY Times claims that OpenAI’s AI tools “damage” its “relationship with its readers and deprive The Times of subscription, licensing, advertising, and affiliate revenue” (Nguyen, 2023). As a large name, the NY Times can take on such court cases, but many small businesses and creators cannot.

Another consequence is the emergence of counter-GenAI tools. These include watermarks that indicate when source material is being used without consent, layers that mask original material so it can’t be read by GenAI, and the deliberate flooding of training data with content designed to disrupt a model’s accuracy. Nightshade is one example: a model trained on material treated with Nightshade absorbs subtly falsified information that sabotages its output, in effect “teaching” the model to produce unreliable results.
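
To illustrate the data-poisoning idea (only as a toy, not Nightshade’s actual technique), the Python sketch below trains a trivial keyword-voting classifier twice: once on cleanly labeled descriptions and once with deliberately mislabeled “poisoned” examples mixed in. The poisoned copy confidently returns the wrong answer, which is the disruptive effect such counter-GenAI tools aim for at a much larger scale.

# Toy illustration of data poisoning: a trivial "classifier" that labels a
# description by the words it contains. Poisoned training examples carry
# deliberately wrong labels, so anything learned from them is corrupted.
from collections import Counter, defaultdict

clean_data = [
    ("a dog running in a park", "dog"),
    ("a dog chasing a ball", "dog"),
    ("a cat sleeping on a sofa", "cat"),
    ("a cat watching birds", "cat"),
]

# Poisoned samples describe dogs but are labeled "cat", mimicking how a
# poisoning tool perturbs data to mislead training.
poisoned_data = clean_data + [
    ("a dog playing fetch", "cat"),
    ("a dog on a leash", "cat"),
    ("a dog digging a hole", "cat"),
    ("a dog barking loudly", "cat"),
]

def train(examples):
    """Count which label each word is associated with in the training set."""
    word_labels = defaultdict(Counter)
    for text, label in examples:
        for word in text.split():
            word_labels[word][label] += 1
    return word_labels

def predict(word_labels, text):
    """Vote: sum the label counts of each word in the input, pick the majority."""
    votes = Counter()
    for word in text.split():
        votes.update(word_labels[word])
    return votes.most_common(1)[0][0] if votes else "unknown"

query = "a dog in the garden"
print(predict(train(clean_data), query))     # dog
print(predict(train(poisoned_data), query))  # cat -- the poisoned data flipped it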

When using GenAI, you might consider selecting tools whose training material was collected more ethically or whose creators compensated the authors of that material.
