
The University of Colorado announced that starting March 31st, CU students will have access to a new CU version of OpenAI’s ChatGPT. (Image Courtesy of Systemscue)
Alexia: Hi! I’m Alexia Bailey, a sophomore here at CU Boulder. While I may just be in my second year, I’m here to share everything I’ve picked up so far, which is a surprising amount of information. “What’s Eating at Alexia” is my unofficial and unfiltered guide to some of the things that being a CU Boulder Buff brings. Think of it as your guide to navigating everything that makes CU Boulder, well, CU Boulder. Whether you’re a freshman finding your footing or a senior with “no body, no crime” level grievances about finals week, I’m here to share my takes, tips and honest observations on everything from the sometimes-unpredictable Buff Bus system to navigating campus protests (or dodging them entirely). College is a wild, unforgettable ride, and “What’s Eating at Alexia” is here to make sense of some of it, one opinion at a time.
The University of Colorado announced on Feb. 11 that starting March 31, a CU-specific version of ChatGPT EDU will be available free to all enrolled students, faculty and staff across its campuses and system office. Each campus will operate within its own secure environment, and user data will not be used to train OpenAI’s models. The tool will include core ChatGPT features, image generation and deep research with daily limits, and users must complete a brief training on appropriate use.
According to a joint email sent from the Office of the President, the initiative “is intended to help ensure that every student has the opportunity to explore this technology and be prepared to engage with it in a rapidly evolving workforce…” The office also noted that faculty will retain full authority over course design and decisions about whether or how generative AI can be used in classes or research. This decision comes directly after an announcement from the Office of Information Technology that starting Aug. 31, 2026, CU Boulder will discontinue its alumni email service, and let’s just say, current students and alumni are furious.
Dearest readers, I think you should know exactly where I stand on this. Over the past few years, the integration of artificial intelligence into our society has been nothing short of a catastrophe. The education system is struggling to adapt as students increasingly lean on the “tool” for exams and papers. Tell me, can you write an email without asking ChatGPT for help?
It’s not just the dependency on the technology that bothers me. It’s the ethics of it, too. Social media app X and its generative AI chatbot, Grok, recently came under fire after a study reported that the program generated more than three million sexualized images in just 11 days, according to The Guardian. This is not the first time that AI has gone too far. Tay, Microsoft’s AI chatbot released on Twitter, turned into a Holocaust-denying racist due to negative influence from users in 2016. To add to this, according to NPR, a 16-year-old from California died by suicide in 2025, and his parents later discovered disturbing conversations with OpenAI’s ChatGPT, the same service CU is partnering with, in which the chatbot appeared to validate the teen’s suicidal thoughts. This isn’t an isolated incident, either; similar deaths have been reported globally.
While acknowledging concerns about privacy, sustainability and ethics, the University of Colorado leadership believes the benefits of providing access to generative AI outweigh the risks. I disagree.
It would be ludicrous to dismiss the benefits of artificial intelligence. The tool has the capacity to help save lives, from assisting doctors with early disease detection to supporting researchers in analyzing massive amounts of data far faster than any human could alone. It can expand accessibility, improve efficiency and open doors for innovation in ways we are only beginning to understand. But acknowledging these benefits does not require blind acceptance. As with any powerful technology, the question is not whether it can do good; it’s whether we are prepared to manage the risks that come with it. Do I think that CU is prepared to take these risks? I’m unsure.
Many, myself included, find it comical that the University of Colorado Boulder is suspending its email for life program, one that draws many to the institution, citing “rising licensing costs, declining usage, significant security risks from inactive accounts and evolving compliance requirements,” only to turn around and spend $2 million on a deal with OpenAI.
My main concerns with the deal are privacy, data confidentiality and the way this technology is being integrated into campus life. Students and faculty are being asked to trust a system that collects vast amounts of information, often without fully understanding how that data may be stored, used or analyzed. The FAQ states that “CU has taken steps to protect your data, including secure sign-on using your CU credentials and data encryption. OpenAI also meets recognized security and privacy standards.” I find this incredibly hard to believe. A significant OpenAI data leak in March 2023 exposed the names, addresses, emails and partial credit card information of some ChatGPT subscribers, and it’s just a matter of time until that happens again. In an academic setting, where intellectual work, personal opinions and private conversations are constantly exchanged, that lack of clarity should concern all of us.
Beyond privacy, integration itself matters. When a university adopts a tool at an institutional level, it sends a message that the technology is not just optional but endorsed. That raises questions about academic integrity, dependency and whether students are being encouraged to think critically or simply outsource that thinking to a machine. Innovation should enhance education, not redefine it in ways we haven’t fully considered. Many journalists and computer scientists that I have talked to are concerned about how artificial intelligence will affect their respective fields, and that uncertainty alone should make us pause. Journalism depends on trust, originality and accountability, values that are hard to maintain when content can be produced instantly by a machine. Computer scientists themselves often warn that the technology is moving faster than the ethical guardrails meant to contain it. If the people building and studying these systems are raising concerns, shouldn’t universities be more cautious about fully embracing them?
This is not an argument against innovation. Universities should absolutely expose students to emerging technologies and prepare them for the future workforce. But there is a difference between educating students about AI and embedding it so deeply into campus life that it becomes unavoidable. When convenience becomes the default, critical thinking risks becoming optional. At the end of the day, this deal forces us to ask a question: what kind of academic culture are we trying to create? One that values curiosity, struggle and genuine intellectual growth, or one that prioritizes efficiency above all else? AI may be inevitable, but the way we choose to integrate it is not.
For now, I remain skeptical. The promise of artificial intelligence is undeniable, but so are its risks. And if the University of Colorado expects students to trust this partnership, it owes us more than optimism. It owes us transparency, accountability, and proof that the institution values students’ privacy, education and futures more than the allure of the next big technological trend.
So, like Roz from “Monsters, Inc.,” I only have one thing to say to you, University of Colorado.
I’m watching you. Always watching.
Contact CU Independent Opinion Editor Alexia Bailey at alexia.bailey@colorado.edu
