3 Current IU Policies and Guidelines

Key Takeaways

  • Microsoft Copilot (formerly Bing Chat Enterprise) and Adobe Firefly are the only enterprise tools widely available at IU.
    • Copilot at IU is currently available only to faculty and staff; it is not available to students.
    • Note: Copilot (a general text-based system like ChatGPT, marketed as ‘Your everyday AI companion’) is different from Copilot for Microsoft 365, a deeper integration of Copilot into the Microsoft 365 ecosystem; the latter is being piloted at IU and is available for purchase by individual teams.
  • Microsoft Copilot is approved to interact with data classified up to and including university-internal data. No other generative AI tools have been approved for data beyond public classification. See IU’s Data Classification Matrix.
  • Individuals should not submit data classified as university-internal or higher to public versions of GenAI tools, even when anonymized. Individuals should also not submit to public GenAI systems any data that may be considered student, faculty, or staff intellectual property, unless the individual submitting that intellectual property created it.


Copilot and Approved Programs

From the KB: About Microsoft Copilot at IU

At IU, Microsoft Copilot is approved to interact with data classified up to and including University-Internal data if you are logged into Copilot with your Microsoft 365 at IU account. Because of the protections in place, Copilot is the recommended way to use generative AI within the IU environment.

Make sure you are logged in with your IU account (your @iu.edu email address and IU passphrase) to access Copilot and not the consumer-based Copilot service.

From the UITS AI Page

As part of IU’s software license with Adobe, you have access to the Firefly web app and generative AI features inside apps like Photoshop and Illustrator as well as Adobe Stock.

Acceptable Uses

From the KB: Acceptable uses of generative AI services at IU

Specific examples that are not appropriate for the public versions of generative AI tools include:

  • Sharing names and information about a real student, employee, research participant, or patient
  • Asking an AI service to summarize and grade a student paper or assignment
  • Sharing employee-related data such as performance or benefit information for communication drafting or analysis
  • Asking an AI service to generate code for IU systems protecting institutional data or sharing IU source code for editing
  • Sharing grant proposals still under review

Academic Integrity and AI

From the KB: Acceptable uses of generative AI services at IU

Students should use generative AI in ways that align with university academic integrity policies and communicate with their instructors before using generative AI in their coursework. Schools and departments may elect to further restrict generative AI.

From the CITL: How to Productively Address AI-Generated Text in Your Classroom

Note that Turnitin has released their own AI detection product, but at the current time, IU’s implementation of Turnitin does not include this new tool. Before including this tool in our suite of supported products, IU is examining its accuracy and efficacy, as well as how it handles student data. Further, current university guidelines indicate instructors should not upload student work to ChatGPT or AI-detection tools, due to privacy and intellectual property concerns.

From the KB: About AI detection tools

Instructors should consider several precautions before requesting AI detection tools via the SSSP. AI detection tools have demonstrated low rates of accuracy. For example, OpenAI, the creator of ChatGPT, has shut down its own AI writing detector due to a low rate of accuracy. AI detection tools are known to return false positives, identifying content written by students as AI-generated.

More About Students and Plagiarism

Instructors and SMEs might be concerned that students are using GenAI to complete assignments for a course. This fear has led some schools, departments, and instructors to ban all GenAI tools. Banning GenAI, however, might not be so easy. For example, the app/browser extension Grammarly uses AI to check grammar and spelling. More recently, Grammarly added a GenAI function to help students paraphrase, write thesis statements, and summarize sources. Because Grammarly is a recognizable and widely used (and endorsed) tool, students might not realize the GenAI feature is off-limits. Additionally, since Grammarly uses student submissions to train its LLM, a plagiarism detector might mistake a submission for GenAI-generated text when the student only used the grammar-checker feature. The student submitted their paper to Grammarly assuming it would be used for a grammar check, not that it would be used to inspire other writings. Some plagiarism might be more intentional: according to an Inside Higher Ed article, students have developed various methods, and have access to more tools, to avoid detection.

GenAI use and the tools available are ever evolving, and banning GenAI might be difficult to fully achieve; there are larger issues at the root of the problem. When working with instructors, your choice to use GenAI might be critiqued, or you might be asked to help an instructor navigate the GenAI field. Here, transparency and communication can be an effective approach: you can model responsible and accountable use of GenAI, as well as critical evaluation of it. Knowing how to have these conversations can help instructors more confidently determine how to accept, integrate, or mediate GenAI in the classroom.

License

The GenAI Cookbook: GenAI Recipes for eLearning Design and Services Copyright © by Emilie Schiess; Tori Abram-Peterson; and Dalton Gilo. All Rights Reserved.
