Resources & Guidance
Before using any tool with artificial intelligence capabilities, it is important to learn more about the technology that powers these innovative tools and about the university’s policies that govern and guide the use of our tools and data.
Understanding Models and University Data
Always consider data protection and liability risk before using data with AI tools or services. A key step in assessing that risk is determining the type of AI model a tool uses, especially for AI-powered tools that leverage large language models (LLMs). AI models are typically either public or private:
- A public model is a tool or service that uses data input by individual users to help train the model and to provide more comprehensive outputs to other users in the future.
- A private model keeps data used as an input to the tool within the organization or institution using the model.
Determining the sensitivity of the university data or information you will use is an equally important step when considering AI tool use. Is the data confidential, highly confidential, or moderate/high impact? Knowing this will guide appropriate use; for example, sensitive data or protected information (confidential, highly confidential, or moderate/high impact) should never be uploaded to or used in a public AI model. If you need help determining the sensitivity of your data, please see the University of Colorado data classification levels section below for guidance on the university’s data classification and impact standards.
It is important to note:
- Only private models approved by the university are vetted for use with sensitive or protected data sets.
- Most free and widely available AI tools operate on a public model and therefore should never be used with sensitive data as an input. As a general rule, if a data set can and should be published to a public-facing web page, then it can be used in a public model that meets the minimum security standards for public data (see the illustrative sketch following this list).
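To make that decision rule concrete, here is a minimal sketch in Python of how a script might gate a data set before it is sent to a public AI model. The classification labels mirror the categories described in this section; the function and variable names are hypothetical and are not part of any university-provided tool or API.

```python
from enum import Enum

class DataClassification(Enum):
    """Classification levels described in the university's data classification standard."""
    PUBLIC = "public"
    CONFIDENTIAL = "confidential"
    HIGHLY_CONFIDENTIAL = "highly confidential"

def allowed_in_public_model(classification: DataClassification) -> bool:
    """Only data that could be published to a public-facing web page
    (i.e., public information) should go into a public AI model."""
    return classification is DataClassification.PUBLIC

# Hypothetical usage: check the classification before pasting data into a public tool.
dataset_classification = DataClassification.CONFIDENTIAL
if allowed_in_public_model(dataset_classification):
    print("OK to use this data set with a public AI model.")
else:
    print("Do not use this data set with a public model; "
          "use a university-approved private model and request a compliance review.")
```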
Finally, keep in mind that the terms of use for AI tools may grant the third-party provider the right to reuse your data to further develop their services and tools, so consider a product’s terms of use carefully before proceeding.
Sensitive university data must be protected from compromise, such as unauthorized or accidental access, use, modification, destruction, or disclosure. The University of Colorado system classifies data into one of the following categories:
- Highly Confidential Information
- Confidential Information
- Public Information
Please refer to the University of Colorado data classification standards for specific criteria on how data is classified.
The use of Confidential or Highly Confidential information with an AI-based service (or any service, for that matter) requires that the service be reviewed for compliance. Submit a digital technology compliance review request form to initiate a compliance review for your use case.
AI Limitations & Considerations
Although AI tools have made significant strides in the areas of text and image generation, data analysis, and personal productivity, it is important to recognize their limitations. Many tools are still prone to mistakes or inaccuracies. Generative AI, even with recent advancements, is still susceptible to producing false outputs known as “hallucinations,” in which an output includes incorrect facts or citations. Additionally, AI algorithms and tools have been created by humans and trained on human-generated data, and therefore can encode biased human decisions or reflect historical or social inequities. Any output received from an AI-powered tool should be thoroughly reviewed by a human for inaccuracy and bias.
Because copyright law and legal precedent around artificial intelligence are evolving globally, CU Boulder recommends caution when using AI-powered tools that may put faculty, staff, or students at risk of copyright infringement. Please review CU Boulder’s copyright resources for additional details on the use of academic materials, or contact CU Boulder’s copyright team with further questions.
The development of increasingly capable artificial intelligence has the potential for major impacts within the classroom. Faculty and instructors should consider the implications of generative AI for their existing syllabi, whether to guard against unintended AI use by students or to intentionally integrate the new possibilities that AI allows. Instructors are encouraged to review the resources made available by the Center for Teaching and Learning, Teaching & Learning in the Age of AI.
For students, the acceptability of AI use within a curriculum may be unclear and can differ from course to course. If it is ever unclear what level of AI use is acceptable, students should ask the course instructor to clarify. The unapproved use of AI is considered academic misconduct and could violate the Honor Code. Please review AI and the Honor Code for more clarification.
Campus Access
All CU Boulder Information Technology purchases and adoptions must undergo the Information Technology (IT) Accessibility and Security Review Process, regardless of cost. For any substantive change to an existing IT offering, such as new AI functionality, technology service managers should contact the ICT team to review the product changes before they are implemented.
AI is rapidly evolving with new AI-powered tools being made available every day. Contact OIT about adding an AI tool that is not already offered in our list of AI-capable tools.
Teaching & Learning with AI
The Center for Teaching & Learning's Technology & AI page offers resources to help you make the most of technology and AI in your teaching. You’ll find:
- Tips for integrating tech tools into your classroom.
- Strategies for crafting clear AI and tech policies in your syllabus.
- Guidance on maintaining academic honesty.
- Creative ideas for engaging students with technology.
As artificial intelligence tools like ChatGPT and DALL-E 2 become increasingly accessible, it is important for educators to clearly communicate their policies on AI use in coursework. The AI Syllabus Statements webpage provides key considerations to help you align your AI policy with the pedagogical goals of your course. Whether you choose to prohibit, allow, or conditionally permit AI usage, a clear statement can prevent misunderstandings and support student accountability.
Learn how educators can intentionally establish trust with students by providing them with clear guidelines around generative AI use. The AI Dialogue with Students page includes reflections to prepare you to engage with students, questions to foster dialogue, and other tips and resources to build engagement.
A student-focused resource is also available, designed to help students learn how to use AI comfortably, effectively, safely, and ethically: learn AI’s capabilities and limitations, and understand when and how it can augment your work and when your unique human expertise and creativity are invaluable.