
Teach-In on AI held by professors in media and computer science in the CASE building on the University of Colorado Boulder campus on Tuesday, March 3, 2026. (Camryn Montgomery/CU Independent)
The University of Colorado announced a three-year agreement with OpenAI that will soon give students, faculty and staff across all four campuses access to ChatGPT Edu, a version of ChatGPT “built for universities to responsibly deploy AI to students, faculty, researchers, and campus operations,” according to OpenAI.
The $2 million contract will be paid by the CU System Office in the first year; each campus will have to fund its share during the following two years of the agreement. The agreement goes further than acknowledging the increasing presence of AI in daily life: it classifies the technology as a “needed investment.”
The announcement came on Feb. 11 in a letter from CU President Todd Saliman and the chancellors of all four CU campuses, who wrote that “access to emerging technologies is increasingly important for teaching, learning, research and administrative work…”
The letter says that the initiative aims to provide equitable access and help students engage with new technology in a “rapidly evolving workforce.” It also outlines an intent to help faculty access “tools that may support their scholarly and instructional work…”
The announcement comes shortly after the Boulder Valley School District banned ChatGPT for students on school devices and Wi-Fi networks over safety concerns.
Use of ChatGPT and AI
While many college students and others across the country have begun using ChatGPT and other AI tools, the free plans come with usage limits. Free users of ChatGPT get 10 messages every five hours on the GPT-5.2 Instant model; after 10 messages, users are downgraded to the GPT-5 Mini model.
The free version of ChatGPT also saves conversations by default and uses that data to train OpenAI’s large language models (LLMs) and improve the chatbot’s responses. There is an option to turn off data training, but many users are unaware of it because it is buried deep in ChatGPT’s settings.
ChatGPT Edu will give each CU campus its own data environment, which OpenAI says is protected from the data collection used to train its LLMs. It will also give users higher conversation limits, more memory storage and a broader range of tools, such as Deep Research.
According to the timeline and rollout plan provided by the Office of Information Technology, a limited test group already has access to ChatGPT Edu, and its activity is being monitored to test functionality and guide campus readiness efforts.
The CU System anticipates access will be extended to the broader CU community in late March. Unlike the free version of ChatGPT, which anyone can use instantly, CU’s version will require users to complete a brief training session on Percipio, an online training platform, before gaining access.
The guiding principles for implementing artificial intelligence on CU’s campuses came from an AI Working Group of two faculty members from across the four campuses, convened in the fall of 2025. Its main concerns included privacy and data protection, fairness and access, security and safety, ethical use and societal benefit, transparency, staying human-first and explainability.
However, many professors and students at CU Boulder say the deal was made without shared governance or input from those with expertise in the field.
University of Colorado Regent Law 5A establishes the principle of shared governance, which makes collaboration between faculty and administrators across the campuses central to major decisions affecting the university.
“This was all in violation of shared governance,” said Jed Brown, an associate professor of computer science at CU Boulder. “It did not involve consent of the faculty council or the faculty assemblies on any of the campuses.”
A “GPT Agreement Dissent Letter” written by third-year computer science Ph.D. student Aaron Gluck and other students and faculty in natural language processing also expresses concern about the lack of involvement from CU Boulder professors with expertise in AI. The letter has gathered more than 450 signatures opposing the agreement with OpenAI.
“When you’re trying to implement something like this that’s going to affect students, staff and faculty across all of our campuses, I think it’s really important to make sure that those people have a say in what’s going on and are able to work with decision makers,” said Gluck.
Among the main concerns raised in the dissent letter are privacy and a loss of student engagement with the complex ideas that drive critical thinking and education.
While privacy and data protection were among CU’s stated priorities entering the agreement, OpenAI will still have access to all of the data; the company simply claims it will not use that data to train its LLMs.
“The point is really that they have access to it, and we just have to take their word for it when they say they’re not going to train models on it,” said Gluck.
Implementation at CU
Though not required, it has become common for course syllabi to include a professor’s guidelines for AI use, with many professors asking students not to use it at all or to disclose how they used it to guide their thinking.
The informational page CU released for the launch of ChatGPT Edu says that “faculty remain in control of their curriculum and learning environment.” However, Brown believes the lack of communication from administrators undermines the faculty’s ability to set policies that establish “mutual trust.”
“I am very straight with my students that I don’t want them to use these products,” Brown said. “But I have no enforcement mechanism.”
Scott Ritner, a lecturer in political science at CU Boulder, raised concerns about the lack of campus policy regulating AI and about the implicit encouragement of its use that comes when the university adopts, pays for and provides the system to the CU community.
“There’s less and less ability (to regulate AI use) for faculty like myself who don’t want it in our classroom,” Ritner said. “Because if I now say starting next semester, ‘all AI use, all LLM use… is banned in my classroom,’ a student can say, ‘well, the university is providing it for us.’ And like, what am I going to say to that? Because I’m not going to ban students from going to the library, but I’m still going to ban students from using AI so long as I’m allowed to.”
Ritner and Brown both prohibit AI in their coursework, in part out of a desire to have students critically engage with educational materials. The dissent letter from students and faculty in natural language processing shares these concerns about the disengagement that comes with AI use.
Bryan Turns, a junior at CU Boulder studying computer science, expressed concern over how this deal seemed to come “in the middle of the night,” with no prior warning given to students or faculty.
“I worry a lot about how students are already regularly relying on this, and by putting a CU stamp of endorsement on it…” Turns said, adding that he worries the deal will further encourage students to use the model.
Gluck shares similar concerns about increased AI use and the tendency for users to become over-reliant on, and overly trusting of, the models.
“When you do offload lots of different tasks to these kinds of models, there is research that shows that you have the potential to become over reliant on these tools, and there is a shift in cognitive task from actually completing whatever task you’ve been given to really just trying to verify the outputs of a model,” Gluck said.
In a learning environment like a college campus, Gluck worries that students lack the expertise to accurately verify the information these models give them, and in turn may gain a false sense of proficiency in their subject matter.
Reliance on chatbots affects not only the learning environment but also the individual psyche. Brown and Gluck raised concerns about chatbot psychosis, citing three deaths by suicide in Colorado and the failure of AI guardrails to prevent harm.
While many students and instructors take issue with the individual harms that come with the encouragement of AI use, they also have concerns about the environmental impacts.
CU’s informational page acknowledges that AI technologies use significant resources and computing power. The data centers that house AI supercomputers require large amounts of electricity and water to operate, creating problems in drought-ridden communities, where residents are forced to compete with data centers for access to fresh water.
CU Boulder maintains that it is committed to “environmentally responsible stewardship of the campus and its resources.” The informational page encourages ChatGPT Edu users to “optimize AI prompts” and “turn off unneeded AI integrations” in order to use the tool more sustainably.
Brown calls this “classic greenwashing,” not a real solution to the problem.
CU Boulder’s 2024 Climate Action Plan (CAP) aims for a 50% reduction in Scope 1 and 2 emissions by 2030. Scope 1 covers direct emissions, primarily from burning natural gas; Scope 2 covers indirect emissions from purchased electricity; and Scope 3 accounts for emissions from goods and services not directly controlled by the university but resulting from its activities.
According to the CAP, in 2019 CU produced 130,471 metric tons of carbon dioxide equivalent (tCO2e) from its Scope 1 and 2 activities and 129,625 tCO2e from Scope 3 sources.
With roughly 100,000 potential users in CU’s ChatGPT Edu system, the contract would cover about 0.03% of the US population. By Brown’s estimate, a proportional 0.03% share of US carbon emissions comes to about 138 ktCO2e, effectively adding 50% to CU Boulder’s total carbon impact.
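Taking Brown’s 138 ktCO2e figure as given, the arithmetic checks out against the CAP’s own numbers: 100,000 users out of roughly 335 million US residents is about 0.03% of the population, and adding 138,000 tCO2e to CU’s combined 2019 footprint of about 260,000 tCO2e (130,471 from Scopes 1 and 2, plus 129,625 from Scope 3) would be an increase of roughly 53%, or about half again.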
Teach-In on AI
In response to CU’s deal with OpenAI, professors in media and computer science held a teach-in on AI Tuesday on the CU Boulder campus to educate students and faculty about the hidden costs of the agreement.
Speakers addressed the environmental impacts of AI, trust between students and faculty in relation to educational integrity, data transparency, bias within chatbots, political implications and alternative forms of AI.
Attendees were encouraged to sign the dissent letter, spread information about the consequences of AI implementation, urge professors not to use or allow generative AI in their courses and, going forward, demand shared governance in decisions affecting the student body.
Nathan Schneider, founder of the Media Economies Design Lab at CU Boulder, encouraged students to think about alternative forms of AI and reimagine relationships with technology in intentional, healthy and balanced ways.
To counter growing unhealthy uses of AI, Schneider says he assigns less busywork and focuses more on student interaction and communication. While he acknowledges that AI will be an important tool, he emphasizes that people need choices in how they interact with it.
“What’s going to matter most is the degree to which we are able to double down on what makes us human. Discernment, relationships, understanding differences, working across differences, thinking around problems, identifying what problems are really important,” Schneider said. “That’s what education should always be about.”
Contact CU Independent Assistant News Editor Camryn Montgomery at camryn.montgomery@colorado.edu.
