
Eliyahu Gasson | editor-in-chief
You wouldn’t let someone drive your car on the freeway if they’d never driven before. So why do we let average people have access to AI?
I’ve written in The Duke before about generative AI models, particularly those capable of generating images. In that article, I was concerned about the sidelining and plagiarism of artists whose creations are fed into AI models to be synthesized into “new” images.
I still hold that it’s unfair, and in the years since I wrote that article, we’ve seen large companies use AI to create promotional materials and advertisements in lieu of hiring true-blue, red-blooded artists. It’s no good, in my opinion. But AI has demonstrated that it is capable of harm far beyond cheapening visual art.
Large language models (LLMs) have been around for quite some time. Industry leader OpenAI released its flagship product, ChatGPT, in November 2022, nearly three years ago, giving us plenty of time to see the disastrous effects LLMs can have on us, both individually and socially.
According to OpenAI, ChatGPT sees a spike in inquiries during the school year and a dip during the summer, suggesting that students are among its primary users. Users of the chatbot generated an average of 79.6 billion tokens (the units of text by which LLMs measure usage) in May and a comparatively low 36.7 billion in June, when schools typically go on break.
Bad news for students, according to a study from MIT. Researchers used electroencephalography (EEG) to record the brain activity of participants as they wrote essays. They split 54 participants into three groups: One used LLMs, one used search engines such as Google and the last relied on their brains alone.
The study found that the convenience of an LLM came at a “cognitive cost,” negatively impacting participants’ inclination to question the LLM’s responses. This, the study said, “highlights a concerning evolution of the ‘echo chamber’ effect.”
Programs like ChatGPT will sometimes generate two responses to an inquiry and ask the user to pick the more “helpful” one. Users tend to pick the answer that more closely agrees with their preexisting opinions.
LLMs take that feedback and assume the user’s preference reflects usefulness and factuality rather than emotional resonance. And BAM!
An echo chamber forms as the LLM tries to make its future responses match the ones the user preferred. The study isn’t conclusive, but it speaks to a major issue with cognitive training, satisfaction and echo chambers.
In late August, NBC News reported on the story of 16-year-old Adam Raine, who died by suicide.
His parents, Matt and Maria Raine, are suing OpenAI and its CEO Sam Altman, blaming ChatGPT for exacerbating Adam’s suicidal ideation and calling it a “suicide coach.” After his death, Raine’s parents gained access to their son’s chat logs with ChatGPT, where the chatbot had replied to Adam’s prompts, at first with resources for his mental health problems and later with step-by-step methods for suicide.
OpenAI told NBC News it was “deeply saddened by Mr. Raine’s passing” and explained the safeguards programmed into ChatGPT to try to prevent these sorts of outcomes.
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” they told NBC News.
OpenAI went on to promise that it is working on changes that would connect users directly to support services.
But the damage has already been done. The launch of LLMs has played out like one long beta test, with anyone who has an internet connection serving as a guinea pig.
The developers have failed to put the required research and safety considerations into these products before foisting them onto the general public. Everything from search engines to operating systems to social media websites has thrown its own AI into the laps of users in an attempt to profit off of them.
According to Stanford University’s Institute for Human-Centered Artificial Intelligence, corporate AI investment reached $252.3 billion in 2024. What’s mildly funny is that most of those corporations report that they’ve lost a combined $35 to $40 billion in the last year, according to another MIT study.
We’re clearly in the midst of a new gold rush — every tech company is pouring billions of dollars into their in-house LLMs in the hopes that it will pay off and give them a leg up on their competition.
LLMs are dangerous machines. There’s little harm in letting companies develop their software — who am I to get in the way of innovation (or self-sabotage)? My concern is with billionaires using people as unwitting test subjects.
The answer is more regulation of LLMs. But the political will to do so seems unlikely, considering all of the money that could be made.
Eliyahu Gasson can be reached at gassone@duq.edu
