Ana Mendoza
25 March 2024
Draft of: Algorithm and AI Chatbot Technologies and the Threats They Pose
Since its inception, the internet has drastically changed the course of human history, allowing us to seamlessly connect and share ideas and information across the globe. Today, the internet is dominated by large companies that seek to profit from their sites through personalization and through the allure of offering the newest and most innovative technologies for all to use. The former has led to the heavy use of algorithmic technology on many popular websites, while the latter has led to the inclusion of the newest and most controversial piece of technology: AI chatbots. Without the proper regulation of these technologies, the internet risks becoming a breeding ground for misinformation and division.
As algorithmic technologies continue to improve, large companies such as Google and Facebook have begun using them to personalize their apps and websites for a better user experience. While at face value this may seem harmless, since a more detailed profile of a user means a smoother experience on these sites, the potential risks this technology creates cannot be ignored, as Ljubiša Bojić et al. argue in their article "The Scary Black Box: AI-Driven Recommender Algorithms As The Most Powerful Social Force." A striking real-world example of algorithmic technology posing a risk to public safety occurred during the COVID-19 pandemic of 2020, when the world was plunged into fear and uncertainty due to the lack of reliable information about the virus at the time. Many people confined to their homes turned to the internet, and more specifically to social media, to gather information that might help them protect themselves. Exploiting this information vacuum, many malicious actors began to spread false information about the
COVID-19 virus, largely as a political move to pit one party against another. Because the virus became politically charged, much of the false information surrounding COVID-19 came to be associated with far right-wing parties. This was a problem because the algorithms of social media sites such as Facebook, Twitter, and TikTok began to pick up on this pattern, and according to the information provided by Gabarron, Oyeyemi, and Wynn, it can reasonably be inferred that the algorithmic technologies on these sites began to act on it, spreading an alarming amount of disinformation tailored to users' political views and standpoints. This fed and amplified discord, producing a chain reaction that formed what can be described as an "echo chamber" of ideas and viewpoints, as people began to see only the information they wanted to see about the virus rather than the unbiased, factual statements about COVID-19 provided by medical professionals and the CDC. If we wish to prevent this level of chaos from happening again, we must regulate these algorithmic technologies to ensure that they do not inadvertently spread misinformation that could lead to civil unrest, not just online but in the real world as well.
However, not all companies are unaware of the dangers that come with using algorithmic technologies for user personalization. The research of Mark Ledwich and Anna Zaitsev in their paper "Algorithmic extremism: Examining YouTube's rabbit hole of radicalization" demonstrates that the popular website YouTube has taken precautions against the formation of echo chambers and radicalization on its platform. It does so through careful moderation by YouTube staff to keep radical content off the platform, as well as through deliberate tweaking of its algorithmic technology, which actively steers users away from harmful misinformation that may lurk in the far corners of the site. Videos and channels are carefully reviewed by staff to ensure that they do not pose a threat to public safety by spreading radical ideas or misinformation to a large audience. If a channel or video is deemed dangerous or unhealthy, YouTube can adjust its algorithm to hide the content from recommendations or can even remove the account altogether. With careful
moderation and consideration, it is possible to combat the dangers that come with the usage of algorithmic technologies.
While such measures may mitigate the issues present on social media sites, a new danger has emerged: news sites and apps have begun to employ algorithmic technology on their platforms, which has the potential to be even more dangerous and harmful than the problems faced by social media. Information provided by Ying Roselyn Du in the paper "Personalization, Echo Chambers, News Literacy, and Algorithmic Literacy: A Qualitative Study of AI-Powered News App Users" shows that many people are unaware of the heavy personalization found on modern news sites, and that those who are aware tend to brush it off, feeling that it simply improves the user experience. The problem is that, unlike social media sites, a news site's sole purpose is to spread what people assume to be factual information, with little to no bias, about current world events and issues. Believing this, many people assume that the information provided on these sites is completely accurate, which is dangerous when algorithms may show only the information a user wants to see rather than the whole picture. This can leave the user with only half of a story or, worse yet, completely false information, on the very site that is supposed to provide the news. The misinformation and disinformation problems inherent in algorithmic technologies on news sites can only be resolved through strict moderation of these platforms.
The dangers of algorithmic technologies are not completely new and foreign to us, as they have been around for over a decade; the same cannot be said for AI chatbots, which took the internet by storm in 2022 with the introduction of ChatGPT. Although this technology is still in its infancy, the impact it has already had on the world cannot be overstated. From generating entire texts to creating realistic images and art that cannot be easily identified as AI-generated, the possibilities of this technology seem endless. With endless possibilities come endless problems, however, as shown by Victor Galaz et al.'s study in their paper "AI could create a perfect storm
of climate misinformation," which demonstrates how this technology can be used to spread believable misinformation about important topics such as climate change, both by creating fake images related to climate change and by easily generating convincing false text about it. Another issue posed by these chatbots is described in "Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence" by Nick Hajli, Usman Saeed, Mina Tajvidi, and Farid Shirazi, which finds that chatbots can be made to run fake social media accounts en masse in order to spread large amounts of disinformation on these sites. This poses a huge problem, not only because of the damage these bots can do in a short time, but also because as they multiply, the ratio of real users to bots becomes skewed, creating, for lack of a better term, a "dead internet" in which sites are so flooded with bots and AI that real users are drowned out. While this scenario is still a long way off and could be prevented with careful monitoring of social media sites, it is a very real possibility that the very technology meant to connect us could be used by malicious actors to further divide us, whether by generating misinformation on the internet or by flooding it with fake accounts and bots until we can no longer distinguish what or who is real online. For the aforementioned reasons, it is imperative that we properly regulate the usage of chatbots on the internet, as the dangers they present can be severe and far-reaching.
It can reasonably be inferred from the evidence previously provided about algorithmic technologies and AI chatbots that it is vital that we properly regulate these technologies, as failure to do so could lead to the spread of disinformation on the internet on an unheard-of scale. If we are able to properly regulate these technologies and solve the problems that come with them, we can use their limitless potential to help push humanity forward.
Citations
Ledwich, Mark, and Anna Zaitsev. "Algorithmic extremism: Examining YouTube's rabbit hole of radicalization."
Du, Ying Roselyn (2023). "Personalization, Echo Chambers, News Literacy, and Algorithmic Literacy: A Qualitative Study of AI-Powered News App Users." Journal of Broadcasting & Electronic Media.
Gabarron E, Oyeyemi SO, Wynn R. "COVID-19-related misinformation on social media: a systematic review." Bull World Health Organ. 2021 Jun 1;99(6):455-463A. doi:10.2471/BLT.20.276782. Epub 2021
Bojić, Ljubiša, et al. "The Scary Black Box: AI-Driven Recommender Algorithms As The Most Powerful Social Force." Etnoantropološki Problemi / Issues in Ethnology and Anthropology, vol. 17, no. 2, Oct.
Galaz, Victor, et al. "AI could create a perfect storm of climate misinformation." arXiv preprint
arXiv:2306.12807 (2023).
Hajli, N., Saeed, U., Tajvidi, M. and Shirazi, F. (2022), Social Bots and the Spread of Disinformation in
Social Media: The Challenges of Artificial Intelligence. Brit J Manage, 33: 1238-
1253. https://doi.org/10.1111/1467-8551.12554