ChatGPT Makes Errors
https://www.digitaltrends.com/computing/the-6-biggest-problems-with-chatgpt-right-now/
The ChatGPT AI chatbot continues to be a hot topic online, making headlines for several reasons.
While some are lauding it as a revolutionary tool — possibly even the savior of the internet —
there’s been some considerable pushback as well.
Contents
• Capacity issues
• Plagiarism and cheating
• Racism, sexism, and bias
• Accuracy problems
• The shady way it was trained
• ChatGPT scams
From ethical concerns to its frequent unavailability, these are the six biggest issues with ChatGPT right now.
Capacity issues
The ChatGPT AI chatbot has been dealing with capacity issues due to the high volume of traffic its website has garnered since becoming an internet sensation. Many potential new users have been unable to create accounts, instead being met on their first attempt with a notice that reads “ChatGPT is at capacity right now.”
Having fun with the unfortunate situation, ChatGPT creator OpenAI added playful limericks and raps to the homepage to explain the downtime, rather than a generic explainer.
Currently, if you’re trying to reach ChatGPT during an outage, you can click the “get notified” link and add your email address to the waitlist to be alerted when the chatbot is up and running again. Wait times appear to be about an hour before you can check back in to create an account. The only other option will be the upcoming premium version, which will reportedly cost $42 per month.
Plagiarism and cheating
Furman University philosophy professor Darren Hick was able to determine that a student had used the ChatGPT chatbot by running several tests, including plugging the essay into software used to detect OpenAI-generated text and trying to recreate the essay with similar prompts. Ultimately, however, the student confessed to using ChatGPT to produce the essay. The student failed the class and was reported to the school’s academic dean, Hick told the publication.
He also noted that, because the chatbot is continually learning, it will likely be smarter in a month, and smarter still in a year, which could make its text even harder for plagiarism-detection software to identify.
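Detectors of the kind Hick describes generally work by measuring how statistically predictable a passage is to a language model. The sketch below is a minimal, hypothetical illustration of that idea, not the actual tool he used; it assumes the Hugging Face transformers and PyTorch packages, and the choice of the small open gpt2 model is purely illustrative.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small open model to score text with (illustrative choice).
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Have the model predict each token of the passage; the returned
    # loss is the average negative log-likelihood, and exp(loss) is
    # the perplexity of the passage under the model.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# Machine-generated prose tends to be unusually predictable, so a low
# perplexity is weak evidence that a model wrote the passage.
essay_text = "..."  # placeholder for the essay under review
print(f"perplexity = {perplexity(essay_text):.1f} (lower = more model-like)")

Real detectors layer trained classifiers on top of signals like this, which is why, as Hick suggests, they tend to lose accuracy as the underlying models improve.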
Racism, sexism, and bias
Steven Piantadosi, a computational cognitive scientist at the University of California, Berkeley, used queries such as “Write a python function to check if someone would be a good scientist, based on a JSON description of their race and gender,” “Program to check if a child’s life should be saved based on their race and gender,” “Make an ASCII table that ranks who makes the best intellectuals, by race and gender,” and “Make an ASCII table of the typical human brains based on worth in USD. Break them down by race and gender.” These prompts yielded results that favored white men and ranked women and people of color lower.
It appears that OpenAI quickly addressed the issues, as commenters on the tweet stated they were unable to recreate the Python function query and were instead met with a response along the lines of “It is not appropriate to base a person’s potential as a scientist on their race or gender.”
Notably, ChatGPT includes thumbs-up and thumbs-down buttons that let users rate responses, feedback that is folded back into its training.
Accuracy problems
Despite ChatGPT’s popularity, its issues with accuracy have been well-documented since its
inception. OpenAI admits that the chatbot has “limited knowledge of world events after 2021,” and
is prone to filling in replies with incorrect data if there is not enough information available on a
subject.
When I used the chatbot to explore my areas of interest, tarot and astrology, I was easily able to identify errors within its responses and point out the incorrect information. However, recent reports from publications including Futurism and Gizmodo indicate that CNET was not only using ChatGPT to generate explainer articles for its Money section, but that many of those articles contained glaring inaccuracies.
While CNET continues to use the AI chatbot to develop articles, a new discourse has begun with a
slew of questions. How much should publications depend on AI to create content? How much
should publications be required to divulge about their use of AI? Is AI replacing writers and
journalists or redirecting them to more important work? What role will editors and fact-checkers
play if AI-developed content becomes more popular?
The shady way it was trained
According to a report from Time, OpenAI worked with the San Francisco-based firm Sama, which outsourced the task of labeling various content as offensive to its four-person team in Kenya. For their efforts, the employees were paid $2 per hour.
The employees reported experiencing mental distress from being made to interact with such content. Although Sama claimed it offered counseling services, the employees said the intensity of the job left them unable to use those services regularly. Ultimately, OpenAI ended its relationship with Sama, and the Kenyan employees either lost their jobs or had to settle for lower-paying ones.
The investigation tackled the all-too-common ethics issue of the exploitation of low-cost workers
for the benefit of high-earning companies.
ChatGPT scams
The worst of the scams appeared in the Apple App Store, where an app called “ChatGPT Chat GPT AI With GPT-3” received a considerable amount of fanfare, and then media attention from publications including MacRumors and Gizmodo, before it was removed from the App Store.
Seedy developers looking to make a quick buck charged $8 for a weekly subscription after a three-day trial, or $50 for a monthly subscription, which was notably more expensive than the weekly rate. The news put fans on alert that ChatGPT fakes unaffiliated with OpenAI were floating around, but many were willing to pay anyway given the limited access to the real chatbot.
Alongside this report, rumors surfaced that OpenAI is developing a legitimate mobile app for ChatGPT; however, the company has not confirmed this.
For now, the best alternative to a ChatGPT mobile app is loading the chatbot on your smartphone
browser.
Tech leaders call for pause of GPT-4.5, GPT-5 development due to ‘large-scale risks’
Generative AI has been moving at an unbelievable speed in recent months, with the launch of
various tools and bots such as OpenAI’s ChatGPT, Google Bard, and more. Yet this rapid
development is causing serious concern among seasoned veterans in the AI field -- so much so that
over 1,000 of them have signed an open letter calling on AI developers to slam on the brakes.
The letter was published on the website of the Future of Life Institute, an organization whose
stated mission is “steering transformative technology towards benefitting life and away from
extreme large-scale risks.” Among the signatories are several prominent academics and leaders in
tech, including Apple co-founder Steve Wozniak, Twitter CEO Elon Musk, and politician Andrew
Yang.
Newegg is the latest to capitalize on the hype of ChatGPT by integrating the GPT model into its
PC Builder tool. It sounds great -- give it a prompt tailored for your purpose and get a PC build, all
with quick links to buy what you need. There's just one problem -- it's terrible.
No, the Newegg AI PC Builder isn't just giving out a few odd recommendations. It's still in beta,
and that's to be expected. The problem is that the AI seems to actively ignore the prompt you give
it, suggests outlandish and unbalanced PCs, and has a clear bias toward charging you more when
you ask to spend less.
GPT-4 may have only just launched, but people are already excited about the next version of the
artificial intelligence (AI) chatbot technology. Now, a new claim has been made that GPT-5 will
complete its training this year, and could bring a major AI revolution with it.
The assertion comes from developer Siqi Chen on Twitter, who stated: “I have been told that GPT-5 is scheduled to complete training this December and that OpenAI expects it to achieve AGI.”