FEB 14, 2023 10:00 AM PST

ChatGPT: Overhyped or cause for concern?

WRITTEN BY: Ryan Vingum

Unless you’ve been living under a rock, you’ve probably heard about the latest AI chatbot, ChatGPT, that took the internet by storm when it was first released to the public at the end of 2022.

As a natural language processing tool, ChatGPT is designed to work with language as humans do. It learns from real, written human responses to questions and from vast quantities of text gathered from the internet, which has allowed its prototype release to produce writing that is surprisingly human-like, sometimes comical, sometimes moving, and sometimes unsettling. For example, people using ChatGPT found that it did an exceptional job producing coherent, seemingly high-quality writing on just about any topic or theme a user requests. Want a thorough analysis of The Great Gatsby? ChatGPT, at first glance, could probably do that.

But as the proverbial quote goes, the devil is in the details. With all the buzz also comes a great deal of skepticism and worry. Education professionals, in particular, are concerned about how this tool could affect student learning and writing. So, are these concerns about ChatGPT valid, or is it just another passing trend?

ChatGPT represents an impressive step forward in AI language processing

On a technical level, ChatGPT is an impressive advancement in language-based AI technology; in particular, it’s a remarkable step up from GPT-3, the older AI model that ChatGPT is based on. The new version, GPT-3.5, was trained on real human writing, including human-written responses to questions, which is what lets it do such miraculous-looking things with language. Engineers also used a “reinforcement” mechanism to help the AI learn to produce better responses, a process similar to positive and negative reinforcement in human learning. ChatGPT also remembers earlier inputs within a conversation, allowing it to build on what users have already said.
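To give a rough sense of how that reinforcement idea works, here is a minimal toy sketch, not OpenAI’s actual training pipeline: a human rater repeatedly prefers one of two candidate responses, and the preferred response’s score is nudged upward. The candidate responses, the rater function, and the learning rate below are all illustrative assumptions.

```python
# Toy illustration of reinforcement from human feedback (NOT OpenAI's
# actual training code): candidate responses get a score, and human
# preferences nudge the preferred response's score up and the other down.
import random

# Hypothetical candidate responses to the same prompt, each with a score.
candidates = {
    "Paris is the capital of France.": 0.0,
    "France's capital is probably Lyon.": 0.0,
}

def human_prefers(a: str, b: str) -> str:
    """Stand-in for a human rater: here we hard-code the preferred answer."""
    return a if "Paris" in a else b

LEARNING_RATE = 0.1

# Repeatedly show the rater two candidates and reinforce the winner.
for _ in range(20):
    a, b = random.sample(list(candidates), 2)
    winner = human_prefers(a, b)
    loser = b if winner == a else a
    candidates[winner] += LEARNING_RATE   # positive reinforcement
    candidates[loser] -= LEARNING_RATE    # negative reinforcement

# After enough feedback, the system favors the response humans preferred.
print(max(candidates, key=candidates.get))
```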

However, as large language models (LLMs), tools like ChatGPT are largely predictive. In other words, they use patterns learned from vast quantities of text to predict, word by word, the most plausible response to a question.
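To make “predictive” concrete, here is a minimal sketch of next-word prediction using a toy bigram model over a made-up corpus. Real LLMs like GPT-3.5 use neural networks over subword tokens rather than word counts, but the underlying idea of choosing the most probable continuation is the same.

```python
# Minimal next-word prediction with a bigram model: count which word
# tends to follow which, then always pick the most frequent follower.
from collections import Counter, defaultdict

# A tiny, made-up corpus purely for illustration.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most likely word to follow `word` in the toy corpus."""
    return followers[word].most_common(1)[0][0]

# Generate a short continuation by repeatedly predicting the next word.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the"
```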

Still, ChatGPT represents an important step towards a key goal for AI chatbots: the ability to sound human. AI chatbots have become more prominent as scientists and engineers seek to realize this goal. Chatbots are becoming more popular in the field of mental health, for example, with tools like Woebot seeking to replicate the cognitive behavioral therapy you might receive in person. You may have also seen job postings for “conversational designers,” roles that focus on crafting the language chatbots use so they sound more human.

Educators are sounding the alarm

While ChatGPT is still a fledgling technology, its potential impact in educational circles was spotted almost immediately, with several school districts banning the use of any type of AI chatbot over worries that students could use them to cheat or plagiarize work.

The concerns raised by educators bring to mind a quote circulating in some corners of the internet about how technology impacts schooling:

“Students today depend on paper too much. They don’t know how to write on a slate without getting chalk dust all over themselves. They can’t clean a slate properly. What will they do when they run out of paper?”

Though debunked as a fabrication, the quote’s general meaning is still relevant here. You’ve undoubtedly heard complaints that using computers will ruin student penmanship or that texting affects students’ ability to spell and write correctly. That’s because new technology means new ways of doing things, ways that are often at odds with the status quo. The quote also has an air of alarmist sentiment, suggesting that the newest technology will always destroy something sacred about the student learning experience. The question is: are these worries over ChatGPT in schools overblown?

The answer? Depends on who you ask.

Some critics suggest that the tool makes it easier for students to avoid doing schoolwork. For example, when it comes to writing short responses or even essays, students may be tempted to turn to ChatGPT to complete these assignments. However, others point out that ChatGPT’s responses are not always accurate, a reminder of its limitations as a predictive LLM. That’s because many AI-driven language processing systems, like chatbots, do not learn the way humans do. As a result, ChatGPT could be used to produce writing that is inaccurate, flawed, or full of unexamined assumptions, even if it “sounds” good.

Others see these shortcomings as opportunities for learning. Using ChatGPT could be a chance to critically examine these technologies or the text they produce, offering students new ways to revise their writing or to understand language-processing technologies and the role they could play in society.

Language-processing tools are here to stay

Despite the range of responses to ChatGPT’s launch, the tool is still in the early stages of testing. OpenAI, the company behind the GPT AI technology, has even tried to temper expectations, highlighting that a lot of work is still needed.

Still, it’s an interesting and exciting development for language-processing AI tools. With how common they are becoming, it would be naïve to think they’re going anywhere anytime soon. But the hype highlights hard questions that learning institutions may have to grapple with and address soon. Perhaps it’s overhyped at this stage, but the growing prevalence of chatbot AIs is reason enough to pay attention.

Sources: IBM; Wired; USA Today; Medium; LinkedIn

About the Author
Science writer and editor, with a focus on simplifying complex information about health, medicine, technology, and clinical drug development for a general audience.