Ana Mendoza
28 April 2024
(Revised) Algorithm and AI Chatbot Technologies and the Threats They Pose
Since its inception, the internet has drastically changed the course of human history, allowing us
to seamlessly connect and share our ideas and information across the globe. While this still holds for the
present-day internet, the internet, like anything in life, has evolved. Much as the printing press and its
by-product, the newspaper, once started with a similar goal of spreading information, the internet has become a platform
for businesses to push advertisements for profit. Unlike the newspaper, however, technology has allowed
for the personalization of ads through algorithmic recommendation technologies. Along with this change,
companies can push out their products in innovative new ways, which has led to the newest technology, AI
chatbots, being introduced on many social media platforms. However, without proper regulation of
algorithmic technologies along with this latest technological achievement, the world at large
could be looking at a new dystopian era of misinformation coupled with the complete violation of privacy.
As algorithmic technologies continue to improve, large companies such as Google and
Facebook have begun using them to personalize their apps and websites for a better
user experience. While at face value this may seem harmless, since a more detailed profile of a user
means an overall better experience on these sites, the potential risk this technology creates cannot be
ignored, as S. J. De and A. Imine argue in their article examining whether Facebook's recommendation
algorithm complies with GDPR requirements. The article defines these GDPR
requirements as a set of rules and guidelines that companies must follow when designing a website or app,
ensuring that a user's privacy is not put at risk and that users are given the option to withhold sensitive
information from large companies. In the article, the authors discover that Facebook finds ways to bypass the
Guzman III 2
requirements put in place to keep users' data safe by using third-party software that collects data
regardless of whether users opt out of collection on the site. They also note that Facebook
does not make clear how a user's data is used, and when data leaks occur, it does not take
responsibility for the theft of personal data, claiming that users always had the choice to opt out
of data collection, which is not true. As the article argues, the collection of personal data by algorithmic
recommendation systems can create privacy risks that users may not even be aware of,
which is why it is important that we regulate the usage of these algorithmic systems to ensure the
safety of users' personal data.
Privacy risks are but one of the glaring issues that come with the use of algorithmic
recommendation systems in popular apps and websites. Another issue stemming from this technology
can be observed when the system is fed misinformation by malicious parties, which it then
proceeds to spread by the very nature of its design. One prime example occurred
during the COVID-19 pandemic in 2020, when the world was plunged into fear and uncertainty due to
the lack of reliable information available at the time. Many people trapped in their homes
took to the internet, and more specifically to social media, to gather information about the virus to help better
protect themselves against it. As panic continued to ensue, the unregulated recommendation systems on
social media began to spread misinformation fed to them both by malicious groups seeking to sow further
panic and discord among the public and by misinformed people who may have been influenced by
those groups, as Gabarron, Oyeyemi, and Wynn describe in their article.
Doing what they were designed to do, the recommendation algorithms of popular social media sites began to rapidly
spread this misinformation about the virus. The unregulated systems went about this process by
profiling users based on their race, gender, age, or political affiliation and using these profiles to
recommend groups that might share similar interests. Unfortunately, the political aspect of this
profiling led recommendation algorithms to direct users toward radical groups, causing a
feedback loop that resulted in the formation of dangerous bubbles of misinformation called Echo
Chambers. While CDC officials tried to alleviate this issue by posting facts about the virus on social
media sites, the damage was already done, and the repercussions of this event can still be felt to this very
day. If we are to prevent such an unfortunate event from ever occurring again, we must ensure that
these systems are properly regulated so that misinformation does not corrupt them and cause them to
wreak havoc on not only the internet but the world at large.
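The self-reinforcing feedback loop described above can be illustrated with a minimal simulation. This is a hypothetical sketch, not any platform's actual algorithm: the function name, rates, and update rule are all assumptions chosen for illustration. A user's leaning is a number from -1 to 1; the recommender serves content slightly more extreme than the user's current position, and consuming that content pulls the user further in the same direction.

```python
import math
import random

def simulate_echo_chamber(steps=50, step_size=0.15, adapt=0.5, seed=7):
    """Toy feedback loop: the recommender serves content slightly more
    extreme than the user's current leaning (-1..1), and the user's
    leaning shifts partway toward each item consumed."""
    rng = random.Random(seed)
    # user starts with a small but nonzero leaning in a random direction
    leaning = rng.choice([-1, 1]) * rng.uniform(0.05, 0.3)
    history = [leaning]
    for _ in range(steps):
        # content is one notch more extreme in the user's current direction
        content = max(-1.0, min(1.0, leaning + math.copysign(step_size, leaning)))
        # consuming the content moves the user partway toward it
        leaning += adapt * (content - leaning)
        history.append(leaning)
    return history

traj = simulate_echo_chamber()
print(f"initial leaning {traj[0]:+.2f} -> final leaning {traj[-1]:+.2f}")
```

Under these assumed parameters the leaning drifts steadily toward one pole and saturates there, which is the "bubble" behavior the paragraph describes: the system never pushes back toward the center.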
However, not all companies are unaware of the dangers that may come with the use of
algorithmic technologies for user personalization. The research provided by Mark
Ledwich and Anna Zaitsev in their paper "Algorithmic extremism: Examining YouTube's rabbit
hole of radicalization" demonstrates that the popular website YouTube has taken precautions
against the formation of echo chambers and, more importantly,
radicalization on its platform. Radicalization is an issue that forms when certain groups of people,
especially online, begin to form small echo-chamber bubbles that feed not only misinformation but
hate within their networking spheres. This can range from political radicalization around a certain party to
outright hate groups.
outright hate groups. YouTube ensures that radicalization does not occur on their site through the usage of
careful moderation of channels and their content as well as clever tweaking of the recommendation
algorithms used on its site that allow it to identify what may be deemed radical content which it then
actively avoids recommending to users. YouTube’s Algorithm and its moderators use a variety of different
restrictions on channels and their content such as demonetization, the removal of a channel's content from
the recommendation page, and the removal of both accounts and their content to prevent the spread of
radicalization on their platform. As shown in the case of YouTube, with careful moderation and
consideration, it is possible to combat the possible dangers that come with the usage of algorithmic
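The escalating restrictions described above can be sketched as a simple rules pipeline. This is an illustrative toy, not YouTube's actual (and non-public) moderation system; the `Channel` fields, the strike threshold, and the action names are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    strikes: int           # confirmed policy violations (hypothetical field)
    flagged_radical: bool  # content flagged as radical by review (hypothetical field)

def moderate(channel: Channel) -> list[str]:
    """Apply escalating restrictions of the kind described above:
    demonetize, pull from recommendations, then remove entirely.
    Illustrative rules only -- not YouTube's actual policy."""
    actions = []
    if channel.flagged_radical:
        actions.append("demonetize")
        actions.append("exclude_from_recommendations")
    if channel.strikes >= 3:  # assumed threshold for outright removal
        actions.append("remove_channel")
    return actions

print(moderate(Channel("borderline", strikes=1, flagged_radical=True)))
# -> ['demonetize', 'exclude_from_recommendations']
```

The design point is the layering: softer measures (demonetization, de-recommendation) limit reach without deleting speech, while removal is reserved for repeat violations.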
While careful moderation may help with the issues present on social media sites, a new danger has
presented itself with this technology: news sites and apps have begun to employ
algorithmic technology on their platforms, which has the potential to be even more dangerous and harmful
than the problems faced by social media sites. Ying Roselyn Du, in the paper "Personalization, Echo
Chambers, News Literacy, and Algorithmic Literacy: A Qualitative Study of AI-Powered News App
Users," published in the Journal of Broadcasting & Electronic Media, shows that many people may
not be aware of the heavy personalization found on modern news sites, and that those who are
aware are prone to simply brushing it off, feeling it merely improves the user experience. The
problem with this is that, unlike a social media site, a news site's sole purpose is to spread what people
assume to be factual information, with little to no bias, about current world events and issues. Believing
this, many people assume that the information provided to them on these sites is completely
accurate, which is dangerous when algorithms may only show users the information they would want
to see rather than the whole picture. This can leave the user with only half the information on a story
or, worse yet, completely false information on the very site that is supposed to provide the news.
The misinformation and disinformation inherent in algorithmic technologies on news sites can
therefore pose an even greater threat than on social media, making proper regulation all the more urgent.
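The partial-picture effect described above can be made concrete with a toy personalization routine. The scoring scheme, story data, and function name here are hypothetical; real news-app rankers are proprietary and far more sophisticated. The key behavior is that stories outside the user's interest profile are silently dropped, never shown.

```python
def personalized_feed(stories, interests, k=3):
    """Toy personalization: rank stories by tag overlap with the user's
    interests and show only the top-k. Everything else is silently
    dropped -- which is how partial coverage arises.
    (Hypothetical scoring; not any real app's ranker.)"""
    scored = sorted(stories,
                    key=lambda s: len(interests & s["tags"]),
                    reverse=True)
    return [s["title"] for s in scored[:k]]

# Assumed sample data for illustration
stories = [
    {"title": "Economy grows",           "tags": {"economy"}},
    {"title": "Economy critics respond", "tags": {"economy", "opposition"}},
    {"title": "Local election results",  "tags": {"politics"}},
    {"title": "Sports final tonight",     "tags": {"sports"}},
]

# A user profiled only as interested in "economy" never sees the
# election or sports coverage at all.
print(personalized_feed(stories, {"economy"}, k=2))
```

Nothing in this sketch is malicious; the narrowing is a side effect of optimizing for engagement, which is exactly why readers tend not to notice it.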
The dangers of algorithmic technologies are not completely new and foreign to us, as they have
been around for over a decade; however, the same cannot be said of AI chatbots, which took
the internet by storm in 2022 with the introduction of ChatGPT. Although this technology is
still in its infancy, the impact it has had on the world cannot be overstated. From generating
entire texts to creating realistic images and art that cannot easily be identified as AI-generated, the
possibilities of this technology seem endless. With endless possibilities, however, come endless problems,
as shown by Victor Galaz and colleagues in their paper "AI could create a perfect storm
of climate misinformation," which demonstrates how this technology can be used to spread believable
misinformation about important topics such as climate change. This can be done by creating fake images
related to climate change as well as by easily generating believable but false text about it.
People who are not very internet literate may not know this and thus are unaware of the
importance of double- and triple-checking whether information presented to them online may or
may not be true. With the birth of this new technology comes the birth of a whole new era of
misinformation, one that could lead to a dystopian nightmare in which one can never be sure that what
they are seeing online is real.
Like the recommendation algorithms, AI chatbots present not one but two major problems,
as stated in "Social Bots and the Spread of Disinformation in Social
Media: The Challenges of Artificial Intelligence" by Nick Hajli, Usman Saeed, Mina Tajvidi,
and Farid Shirazi. Their article reveals that chatbots can be made to run fake
social media accounts en masse in order to spread large amounts of disinformation on these
sites. This poses a huge issue not only because of the amount of damage these bots can do in a short
time, but also because as the bots multiply, the ratio of real users to bots becomes skewed, creating,
for lack of a better term, a "Dead Internet" in which sites are completely flooded with bots and AI while
real users are drowned out by this technology. While this is still a long way from happening and could be
prevented with careful monitoring of social media sites, it is a very real possibility that the
technology that was meant to connect us can be used by malicious entities to alienate and
further divide us. For these reasons, it is imperative that we properly regulate the usage of
chatbots on the internet, as the dangers they present could be catastrophic to the way we use it.
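The skewed user-to-bot ratio behind the "Dead Internet" worry can be made concrete with a toy growth model. All rates and starting counts below are assumptions chosen purely for illustration, not measured data: human accounts grow slowly while an unchecked bot farm doubles every year.

```python
def bot_share_over_time(years=5, humans=1_000_000, bots=10_000,
                        human_growth=0.05, bot_growth=1.0):
    """Toy model: human accounts grow 5%/year while unchecked bot
    accounts double yearly. Returns the bot share of all accounts
    at each year. (Illustrative rates, not measured data.)"""
    shares = []
    for _ in range(years + 1):
        shares.append(bots / (humans + bots))
        humans *= 1 + human_growth
        bots *= 1 + bot_growth
    return shares

for year, share in enumerate(bot_share_over_time()):
    print(f"year {year}: bots are {share:.1%} of accounts")
```

Even starting from a 1% bot share, exponential replication overtakes linear-ish human growth within a few years under these assumed rates, which is why the article's authors stress early detection and removal rather than cleanup after the fact.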
It can be reasonably inferred from the information provided above about algorithmic technologies and AI
chatbots that it is vital that we properly regulate these technologies, as failure to do so could lead to an
uncontrollable spread of misinformation and privacy risks across the internet. If we are able to properly
regulate these technologies and solve the problems that come with them, we can use their limitless
potential to help push humanity into a new age of technological innovation and discovery.
Citations
Ledwich, Mark, and Anna Zaitsev. "Algorithmic extremism: Examining YouTube's rabbit hole of
radicalization."
Du, Ying Roselyn (2023). "Personalization, Echo Chambers, News Literacy, and Algorithmic Literacy: A
Qualitative Study of AI-Powered News App Users." Journal of Broadcasting & Electronic Media.
Gabarron, E., Oyeyemi, S. O., and Wynn, R. COVID-19-related misinformation on social media: a systematic
review. Bull World Health Organ. 2021 Jun 1;99(6):455-463A. doi:10.2471/BLT.20.276782.
De, S.J., Imine, A. Consent for targeted advertising: the case of Facebook. AI & Soc 35, 1055–1064
(2020). https://doi.org/10.1007/s00146-020-00981-5
Galaz, Victor, et al. "AI could create a perfect storm of climate misinformation." arXiv preprint
arXiv:2306.12807 (2023).
Hajli, N., Saeed, U., Tajvidi, M. and Shirazi, F. (2022), Social Bots and the Spread of Disinformation in
Social Media: The Challenges of Artificial Intelligence. Brit J Manage, 33: 1238-
1253. https://doi.org/10.1111/1467-8551.12554