BACKGROUND GUIDE
UN GENERAL ASSEMBLY
WEAPONIZATION OF SOCIAL
MEDIA
LETTER FROM THE EXECUTIVE BOARD
Greetings Delegates,
Welcome to this simulation of the United Nations General Assembly: Disarmament and
International Security Committee. The agenda for the committee is “Weaponization of Social
Media”.
The main aim of this background guide is to provide you with a starting point in your preparation
for the conference. By no means should you consider it an exhaustive document; it is prepared
only to give you direction, and nothing should limit you to the sphere of this guide.
You are all expected to use your own research and analysis to come up with logical arguments and
viable solutions. Even though the conference is online due to the present conditions, the aim is to
make it as enriching and intellectually stimulating an experience as an offline conference.
Please note that you may read some opinionated articles in the background guide; these are included
only to provide you with a perspective, and in no way do they represent the Executive Board
members' personal opinions. That being said, please feel free to get in touch with me via e-mail if
you have any questions or wish to seek any clarifications.
Aryan Agarwal (Chairperson)
TABLE OF CONTENTS
1. Letter From the Executive Board
2. Committee Mandate
3. Accepted Sources of Evidence in the Committee
4. Background to the Agenda
● Ways and Processes of Weaponization
5. Relevant Conventions and Frameworks
● Budapest Convention
● General Data Protection Regulation (GDPR)
6. Relevant Case Studies
● US Capitol Riots
● Islamic State of Iraq and the Levant
7. Case Studies: International Response
● The Sentinel Project (Una Hakika)
● Digital Storytelling Initiative (Sri Lanka)
● Dangerous Speech Project's Nipe Ukweli (Kenya)
8. Content Moderation
9. Use of Dark Web
10. The Role of Artificial Intelligence
11. Questions to Consider
12. Bibliography
COMMITTEE MANDATE
Established in 1945 under the United Nations Charter, the United Nations General Assembly is the
main deliberative, policymaking and representative organ of the United Nations. It acts as a
recommendatory body to both the member states and the Security Council, and it may discuss any
matter within the scope of the Charter or relating to the powers of any organ established under
the Charter, with the exception set out in Article 12 of the Charter itself.
The General Assembly may deliberate upon the general principles of international co-operation in
the maintenance of international peace and security, including the principles governing
disarmament and the regulation of armaments, and it may make recommendations on these to the
members or the Security Council. It may also bring before the Security Council matters that are
likely to endanger international peace and security.
The General Assembly shall initiate studies and make recommendations for the purpose of:
a) promoting international co-operation in the political field and encouraging the progressive
development of international law and its codification;
b) promoting international co-operation in the economic, social, cultural, educational, and
health fields, and assisting in the realization of human rights and fundamental freedoms for all
without distinction as to race, sex, language, or religion.
Furthermore, it shall receive annual reports from the Security Council and all other organs, which
account for all measures taken or decided upon to maintain international peace and security.
The General Assembly is also responsible for considering and approving the budget for all
organs of the United Nations.
ACCEPTED SOURCES OF EVIDENCE IN THE
COMMITTEE
Whenever presenting any kind of fact, it is important that the given information comes from a
credible source. The following kinds of sources are acceptable in the committee.
1) Government Reports: Reports published by the different departments and ministries of a
government are considered credible sources of information. They can be refuted or denied by
opposing states, but will nevertheless be taken into consideration when the point conveyed is
assessed.
Some government websites-
● India- National Portal of India
● United States of America-Official Guide to Government Information and Services |
USAGov
● United Kingdom- Welcome to [Link]
Reports and information from government departments and ministries shall also be considered as
credible sources of information. For Example,
● Home Ministry (India)- Home | Ministry of Home Affairs | GoI
● US State Department- [Link]
● United Kingdom Home Ministry- contact Home Office
2) State-operated News Agencies: Reports published by official news agencies of a country
(which are owned by the respective governments) can be used in the support of or against the
state that owns the news agency.
Some examples of state-owned news agencies are-
● RIA Novosti (Russia) [Sputnik News - World News, Breaking News & Top Stories]
● BBC (United Kingdom) [BBC - Home]
● Xinhua News Agency (PR China) [Xinhua – China, World, Business, Sports, Entertainment,
Photos and Video | [Link]]
Note- Private news agencies, such as the Guardian, are not usually considered credible.
However, information from certain sources such as the Associated Press and Reuters will be
accepted in the committee.
3) United Nations Reports: All UN Reports are considered credible sources of evidence for the
Executive Board. These include, but are not limited to
a) UN Bodies like the UNSC [United Nations Security Council]
b) UN Affiliated Bodies like the International Atomic Energy Agency [International Atomic
Energy Agency | Atoms for Peace and Development]
c) Treaty Based Bodies like the Antarctic Treaty System [Antarctic Treaty]
Note- Sources like Wikipedia, Amnesty International and Human Rights Watch are not considered
appropriate sources of evidence; however, you may use them for reading and research to gain a
better understanding of the agenda at hand.
BACKGROUND TO THE AGENDA
Social media has emerged as a powerful tool for communication and connection in recent times;
however, it has also become a cause of conflict in the modern world. It gives people a highly
accessible platform on which to spread misinformation, disharmony and divisiveness, and this has
been seen to transcend the virtual platform into the offline world, where riots, communal
violence and violence in general have increased. Prima facie, social media may seem a rather
harmless tool, but it in fact has an immensely large impact on our lives and the world, and it is
complex and rapidly evolving. This impact stretches across borders and countries, and poses a
challenge to international development and peacebuilding.
Diplomacy, and with it freedom from manipulation, has been thrown into disarray with respect to
global conflicts, electoral politics, and much more. Social media's expanded horizons reach not
only teenagers but 62% of adults, according to a 2016 Pew Research Center study. Disinformation,
hate speech, and recruitment to violent groups through social manipulation are not new phenomena.
These activities have long been identified as drivers or triggers of conflict and a focus of
violence-prevention efforts. In the past, they happened through traditional media and in-person
communication. Social media changes the game.
Here’s how:
1) Social media platforms increase communication power. The international reach and ease of
access to social media mean that a higher volume of weaponized information can reach more
people faster, and via multiple channels.
2) The personalization of social media targets individuals and amplifies impact. Social media
platforms tailor information to individual users’ preferences. Machine learning takes that
personalization further, serving up more targeted content, and narrowing the scope of
information an individual receives to topics and viewpoints that confirm and reinforce one
another.
3) Personalization increases polarization and exacerbates conflict risk. Social media platforms
organize users into groups that share preferences and demographic characteristics, creating
“bubbles” or “echo chambers” that align with ethnic, ideological, linguistic, or
other societal divisions. Rumors can seem like credible facts, and collective online outrage can
quickly trigger real-world violence.
4) Weaponized information is difficult to police. Online conversations and the offline actions
that flow from them can evolve quickly, and it can be nearly impossible to identify individual
wrongdoers among the billions of social media users. And because the same qualities that make
social media prone to weaponization also make it a powerful driver of positive engagement,
regulations and technology companies' own policies have struggled to isolate threats and keep
pace with them.
Ways and Processes of Weaponization
This section explores how weaponized social media can contribute to offline conflict by
examining real-world case studies. These examples are not exhaustive. Rather, they surface a
range of concepts and implications that can help humanitarian, development and peacebuilding
organizations — as well as technology companies and policymakers — understand what’s
happening and develop effective responses.
1) Information operations (IO): Coordinated disinformation campaigns are designed to disrupt
decision-making, erode social cohesion and delegitimize adversaries in the midst of interstate
conflict. IO tactics include intelligence collection on specific targets, the development of
inciteful and often intentionally false narratives, and systematic dissemination across social and
traditional channels. The Russian government used such tactics to portray the White Helmets
humanitarian organization operating in Syria as a terrorist group, which contributed to violent
attacks against the organization.
2) Political manipulation (PM): Disinformation campaigns can also be used to systematically
manipulate political discourse within a state, influencing news reporting, silencing dissent,
undermining the integrity of democratic governance and electoral systems, and strengthening the
hand of authoritarian regimes. These campaigns play out in three phases: 1) the development of
core narratives, 2) onboarding of influencers and fake account operators, and 3) dissemination
and amplification on social media. As an example, the president of the Philippines, Rodrigo
Duterte, used Facebook to reinforce positive narratives about his campaign, defame opponents
and silence critics.
3) Digital hate speech (DHS): Social media platforms amplify and disseminate hate speech in
fragile contexts, creating opportunities for individuals and organized groups to prey on existing
fears and grievances. They can embolden violent actors and spark violence — intentionally or
sometimes unwittingly. The rapid proliferation of mobile phones and Internet connectivity
magnifies the risks of hate speech and accelerates its impacts. Myanmar serves as a tragic
example, where incendiary digital hate speech targeting the minority Muslim Rohingya people
has been linked to riots and communal violence.
4) Radicalization & recruitment (RR): The ability to communicate across distances and to share
user-generated multimedia content inexpensively and in real time has made social media a channel
of choice for some violent extremists and militant organizations, as a means of recruitment,
manipulation and coordination. The Islamic State (ISIS) has been particularly
successful in capitalizing on the reach and power of digital communication technologies.
RELEVANT CONVENTIONS AND
FRAMEWORKS
Budapest Convention
The Budapest Convention, opened for signature on 23 November 2001, is the first and, to date,
only binding international treaty that deals with cybercrime. It serves as a guideline for any
country developing comprehensive national legislation against cybercrime and as a framework for
international cooperation between the State Parties to the treaty. The main aims of this
Convention are:
(1) harmonizing the domestic substantive criminal law elements of offences and connected
provisions in the area of cybercrime;
(2) providing for the domestic criminal procedural law powers necessary for the investigation and
prosecution of such offences, as well as other offences committed by means of a computer system
or for which evidence is in electronic form;
(3) setting up a fast and effective regime of international cooperation.
With the signing of this convention, a substantive outline and legal framework was formed that
allows and helps countries in legislating and forming domestic law pertaining to cybercrime. The
convention further gives clarity on terms and matters that were previously ambiguous and lacked a
widely accepted definition. It also outlines the measures that countries must take at the
domestic level, covering both substantive and procedural law, and it provides measures to build
and encourage the international cooperation that will ultimately help countries tackle
cybercrime.
General Data Protection Regulation (GDPR)
The General Data Protection Regulation (GDPR) is the toughest privacy and security law in
the world. Though it was drafted and passed by the European Union (EU), it imposes obligations
onto organizations anywhere, so long as they target or collect data related to people in the EU.
The regulation went into effect on May 25, 2018. The GDPR levies harsh fines against
those who violate its privacy and security standards, with penalties reaching into the tens of
millions of euros.
With the GDPR, Europe is signaling its firm stance on data privacy and security at a time when
more people are entrusting their personal data with cloud services and breaches are a daily
occurrence. The regulation itself is large, far-reaching, and fairly light on specifics, making
GDPR compliance a daunting prospect, particularly for small and medium-sized enterprises
(SMEs). The GDPR defines an array of legal terms at length. Below are some of the most
important ones:
Personal data — Personal data is any information that relates to an individual who can be
directly or indirectly identified. Names and email addresses are obviously personal data.
Location information, ethnicity, gender, biometric data, religious beliefs, web cookies, and
political opinions can also be personal data. Pseudonymous data can also fall under the definition
if it’s relatively easy to ID someone from it.
Data processing — Any action performed on data, whether automated or manual. The examples
cited in the text include collecting, recording, organizing, structuring, storing, using, erasing, etc.
Data subject — The person whose data is processed. These are your customers or site visitors.
Data controller — The person who decides why and how personal data will be processed. If
you’re an owner or employee in your organization who handles data, this is you.
Data processor — A third party that processes personal data on behalf of a data controller. The
GDPR has special rules for these individuals and organizations. They could include cloud servers
like Tresorit or email service providers like ProtonMail.
RELEVANT CASE STUDIES
US Capitol Riots
The US Capitol attack is evidence of how social media can act as a breeding ground for violence
and acts of aggression. With the results of the United States presidential election looming
large, and the expectation that Joe Biden of the Democratic Party would beat Donald Trump, social
media platforms like Facebook and Twitter saw discussions of violence and aggression aimed at
stopping or delaying the announcement of the next United States president, with the common theme
being "Stop the Steal", a slogan framing Joe Biden's victory over Donald Trump as stolen.
The aim of these pro-Trump groups was to infiltrate the Congressional session, delay the
certification of Biden's presidency and extend Trump's stay in the White House. Considering the
sensitivity of the event and the importance of uncovering its root causes, the United States
Congress requested that platforms like Facebook and Twitter hand over documents relating to the
event. Subpoenas were issued to around 15 companies; the subpoenas, however, did not have the
power to force private companies to hand over private documents to the government, which created
a further barrier between the parties.
Social media platforms have long been criticized for failing to curb violent extremist and racist
sentiments, and the US Capitol attack further fueled debate and controversy on the subject. This
attack on a symbol and historic monument of the United States, the most violent in over two
centuries, resulted in the death of four people, and it is testament enough that social media has
proven to be a cause for concern. It begs the question: what must be the plan of action for
states and enterprises to curb any similar events that may occur in the future?
Islamic State of Iraq and the Levant
On 3 February 2015, ISIS uploaded a video showing a Royal Jordanian Air Force pilot, Moaz
al-Kasasbeh, being burned to death by Islamic State extremists. The killing outraged the
Jordanian population, and the captured jihadists held by Jordan were condemned to death.
Through social media, ISIS has recruited fighters from the Western world. Around 6,000 citizens
of Europe and North America have joined ISIS since 2014, encouraged to join either by
participating in its cyber war or as 'boots on the ground' in Syria and Iraq. According to the
Institute for National Security Studies (INSS), the technological progress of ISIS surpasses
that of al-Qaeda and other jihadist movements. It is known that movements such as Hezbollah,
Hamas, al-Qaeda, and ISIS are all well familiar with the power behind social media, particularly
for exerting strong political messages.
ISIS was able to exploit the media at scale. It has shown an unparalleled degree of manipulation
of Twitter and Facebook accounts, showing that a new era of cyber warfare has appeared, combining
physical and cyber jihad.
ISIS has also used cyberspace for a newer technique: psychological warfare. It has flooded the
internet with videos showing brutal acts of beheading and mass executions, in addition to victory
parades, in order to demonstrate its power and strength.
CASE STUDIES: INTERNATIONAL RESPONSE
The Sentinel Project (Una Hakika)
In Kenya’s Tana Delta, the Sentinel Project’s Una Hakika program counters rumors that have
contributed to inter-ethnic violence by creating a platform for community members to report,
verify and develop strategies to address misinformation. Una Hakika is an information service
which provides subscribers with neutral, accurate information in response to rumors that arise
in the Tana Delta. Most of Una Hakika's communication takes place through SMS, along with voice
calls and the engagement of volunteer community ambassadors. People who hear rumors can report
them by sending a toll-free SMS, which essentially acts as a rumor verification hotline. Once
Una Hakika receives a report about a rumor, a team goes into action to verify it and report back
to the community on whether the rumor is true. This process involves gathering information from
many different sources and trying to make sense of it, while mapping subsequent reports of
rumors to see how they develop and flow through the area.
Digital Storytelling Initiative (Sri Lanka)
The Digital Storytelling initiative in Sri Lanka seeks to build skills in citizen storytelling as a
way to balance polarizing online rhetoric, while also helping individuals become more
responsible consumers of online information. The core of its mission is empowering youth to tell
the stories of their communities and to take ownership of their narratives using the tools of
visual storytelling.
Dangerous Speech Project's Nipe Ukweli (Kenya)
The Dangerous Speech Project's Nipe Ukweli (Kiswahili for "gimme truth") in Kenya is a campaign
developed with the Umati project to educate Kenyans about dangerous speech and what individuals
can do to combat its effects. The campaign published fliers for distribution in Kenya, in
Kiswahili and in English, which provided public information on dangerous speech as well as
mechanisms to report and remove such speech online during the height of electoral tensions. The
Dangerous Speech Project was founded in 2010 to study speech (any form of human expression) that
inspires violence between groups of people, and to find ways to mitigate this while protecting
freedom of expression. Its work focuses primarily on four areas:
1. tracking and studying dangerous speech in many countries;
2. researching effective responses to dangerous speech and other forms of harmful expression;
3. advising social media and other tech companies on their policies, and encouraging them to
engage in transparent research;
4. teaching dangerous speech concepts to a variety of people who use them to study and counter
dangerous speech.
In each of these areas, the project works closely with a diverse group of partners to maximize
the quality and impact of its efforts, sharing its work through articles, reports, blog posts,
and op-eds, and giving frequent talks.
CONTENT MODERATION
Content Moderation is the practice of monitoring user-generated submissions and applying a
predetermined set of rules and guidelines to determine whether a submission (a post, in
particular) is permissible or not.
There are five common types of moderation:
1. Pre-moderation
When someone submits content to a website and it is placed in a queue to be checked by a
moderator before it is visible to all, it’s pre-moderation. Pre-moderation has the benefit of
ensuring (in the hands of a good moderator) that undesirable content is kept off the visible
community sections.
While pre-moderation provides a high degree of control over what community content ends up being
displayed, it has many downsides. Commonly thought to cause the death of online communities, it
denies participants instant gratification, leaving them waiting for their submission to be
cleared by a moderator. In turn, conversational content becomes stilted and judders to a halt if
the delay between submission and display is too long. The other disincentive to using
pre-moderation is the high cost involved if and when the community grows and the volume of
submissions exceeds what your team of moderators can manage.
2. Post Moderation
In an environment where active moderation must take place, post-moderation is a better
alternative to pre-moderation from a user experience perspective, as all content is displayed on
the site immediately after submission, but replicated in a queue for a moderator to pass or
remove afterwards.
The main benefit of this type of moderation is that conversations take place in real time, which
makes for a faster paced community. People expect a level of immediacy when interacting on the
web, and post moderation allows for this whilst also allowing moderators to ensure security.
3. Reactive moderation
Reactive moderation is defined by relying on community members to flag up content that is
either in breach of the House Rules, or that the members deem to be undesirable. It can be
utilized alongside pre- and post- moderation, as a 'safety net' in case anything gets through the
moderators, or more commonly as the sole moderation method.
The members themselves essentially become responsible for reporting content that they feel is
inappropriate as they encounter it on the site or community platform. The usual process is to
include a reporting button on each piece of user-generated content that, if clicked, files an
alert with the administrators or moderator team so the content can be reviewed and, if found in
breach of the site's rules of use, removed (a minimal sketch of this flow follows below).
The main advantage of this method of moderation is that it can scale with community growth
without putting extra strain on moderation resources or cost, while theoretically shielding the
site from responsibility for defamatory or illegal content uploaded by its users, so long as a
process is in place for removing such content within an acceptable time frame upon notification.
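To make this flow concrete, the sketch below models a report button feeding a review queue. The
report threshold, names and in-memory structures are illustrative assumptions, not any platform's
actual implementation.

```python
# Illustrative sketch of a reactive-moderation flow: user reports feed a
# queue that human moderators work through. All names and the threshold
# are assumptions made for this example.
from collections import defaultdict, deque

REPORT_THRESHOLD = 3              # escalate after this many user reports
report_counts = defaultdict(int)  # content_id -> number of reports so far
review_queue = deque()            # flagged items awaiting human review
removed = set()                   # content taken down after review

def report_content(content_id: str) -> None:
    """Called when a user clicks the report button on a piece of content."""
    report_counts[content_id] += 1
    if report_counts[content_id] == REPORT_THRESHOLD:
        review_queue.append(content_id)  # file an alert for the moderators

def moderate_next(breaches_rules) -> None:
    """A moderator reviews the next flagged item; `breaches_rules` stands in
    for the human judgement applied against the site's rules of use."""
    if review_queue:
        content_id = review_queue.popleft()
        if breaches_rules(content_id):
            removed.add(content_id)

# Example: three users flag the same post; a moderator then removes it.
for _ in range(3):
    report_content("post-42")
moderate_next(lambda cid: True)
print(removed)  # {'post-42'}
```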
4. Distributed moderation
Distributed moderation is still a somewhat rare method of moderating user-generated content. It
usually relies on a rating system which members of the community use to vote on whether
submissions are in line with community expectations or within the rules of use. It allows
control of comments or forum posts to reside mostly within the community, usually with guidance
from experienced senior moderators.
Expecting the community to self-moderate is very rarely a direction companies are willing to
take, for legal and branding reasons. For this reason, a distributed moderation system can also
be applied within an organization, using several members of staff to rate contributions and
aggregating their ratings into an average score that determines whether content stays public or
goes to review (see the sketch below).
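A rough sketch of that in-house scoring variant might look as follows; the rating scale and
approval threshold here are invented for illustration.

```python
# Illustrative sketch of distributed moderation: several reviewers rate a
# submission and the average decides its fate. The 1-5 scale and the
# threshold are assumptions for this example.
APPROVAL_THRESHOLD = 3.0  # average rating needed for content to stay public

def decide(ratings: list[float]) -> str:
    """Aggregate reviewers' ratings into a single moderation decision."""
    if not ratings:
        return "pending"  # no votes yet: hold for later review
    average = sum(ratings) / len(ratings)
    return "public" if average >= APPROVAL_THRESHOLD else "review"

print(decide([4, 5, 3]))  # 'public' (average 4.0)
print(decide([1, 2, 2]))  # 'review' (average ~1.7)
```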
5. Automated moderation
In addition to all of the above human-powered moderation systems, automated moderation is a
valuable weapon in the moderator's arsenal. It consists of deploying various technical tools to
process and apply defined rules to reject or approve submissions.
The most typical tool is the word filter: a list of banned words is entered, and the tool either
stars the word out, replaces it with a defined alternative, or blocks or rejects the message
altogether (a sketch follows below). A similar tool is the IP ban list. A number of more recent
and sophisticated tools are also being developed, such as those supplied by Crisp Thinking,
including engines for automated conversational pattern analytics and relationship analytics.
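A word filter of the kind described above can be sketched in a few lines; the banned-word list
and the block-versus-star policy are assumptions made for illustration.

```python
# Illustrative word filter: stars out banned words, rejects messages with
# too many violations. The word list and cut-off are example assumptions.
import re

BANNED_WORDS = {"badword", "slur"}  # placeholder list for the example

def filter_message(text: str) -> str | None:
    """Return the filtered text, or None to reject the message outright."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, BANNED_WORDS)) + r")\b",
        re.IGNORECASE,
    )
    if len(pattern.findall(text)) > 2:  # too many violations: block it
        return None
    return pattern.sub(lambda m: "*" * len(m.group()), text)

print(filter_message("that badword again"))  # 'that ******* again'
```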
USE OF DARK WEB
The dark web is a term for online content that has been encrypted so that it is not available to
the average internet surfer. Websites and content hosted on the dark web do not appear on normal
search engines, and a special kind of browser is required to access them.
With that, we can establish that the dark web is a gray area, secretive and hidden from the
ordinary user; with that secrecy, however, comes the problem of illegal and shady activities. The same
has been witnessed through the dark web. The dark web has emerged as an important hub of
criminal commerce, a fully functional marketplace where hidden customers can buy from hidden
sellers with relative confidence, often with customer ratings available, just as on the public-
facing web. The criminal side of the dark web relies on anonymizing technology and
cryptocurrency to hide its trade in an assortment of contraband such as opioids and other drugs,
bomb parts, weapons large and small, child pornography, social security numbers, body parts —
even criminal acts for hire. The dark web’s anonymity not only encourages illegal activities, it
keeps many law enforcement agencies largely unaware of its existence, even while their
jurisdictions are impacted by online transactional crimes.
The dilemma that cyber law enforcement authorities face is that, even though the dark web is
known to be a place where illicit activities are common, and its growing accessibility has driven
even more activity, authorities are unable to track down those activities and their perpetrators.
There is also a lack of quantitative data with which to analyze and decide what plan of action
would best limit the illicit activities on the dark web.
This stands out as a point of concern, and we must deliberate in depth upon how the dark web is
used for weaponization and how to stop it.
THE ROLE OF ARTIFICIAL INTELLIGENCE
Most people active on social media are aware of 'bots': accounts that are either partially or
fully automated and are modelled to mimic human users. In the contemporary world, where the
internet has become a key tool in political and other campaigns, AI-driven tools like bots have
started playing major roles in shaping public opinion.
The European Union funds a project called the Programme on Democracy & Technology (previously
the Project on Computational Propaganda, or COMPROP), based at the University of Oxford. It
researches the use of algorithms, automation and computational propaganda in public life, that
is, how bots and bot scripts can be used as tools by political entities.
Research conducted by the aforementioned COMPROP and other organizations found that AI played an
important role in spreading disinformation and misinformation and in swaying political opinion
during significant events such as the Brazilian presidential campaigns, the Brexit referendum and
the 2016 US presidential election, among others. The project found that computational propaganda
spread through bots was present across applications including, but not limited to, Facebook,
Instagram and YouTube, each of which enjoys massive popularity. More recently, algorithms,
accounts and chatbots have also been used to spread misinformation about COVID-19.
As social media operates largely without control over who is circulating what content, its
networks become easy targets for such propaganda. Social media firms have started employing
their own AI to control the spread of such bots, meant to catch false information and to flag
and remove content. However, it does not appear to be very effective as of today.
Targets are picked through many methods. Algorithms are designed to build on the things users
buy, the hashtags they use, and the type of content they interact with. Projects like COMPROP
also attempt to identify bots and map their networks.
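Bot identification typically combines simple behavioural signals such as posting volume and
duplication. The sketch below is a toy heuristic with invented thresholds; it is not the actual
methodology of COMPROP or any platform.

```python
# Toy bot-scoring heuristic; the signals and thresholds are invented for
# illustration and do not reflect any real project's methodology.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    followers: int
    following: int
    duplicate_post_ratio: float  # share of posts that are exact duplicates

def bot_score(a: Account) -> float:
    """Return a score in [0, 1]; higher means more bot-like behaviour."""
    score = 0.0
    if a.posts_per_day > 50:                    # inhuman posting volume
        score += 0.4
    if a.following > 10 * max(a.followers, 1):  # mass-follow pattern
        score += 0.3
    score += 0.3 * a.duplicate_post_ratio       # copy-paste amplification
    return min(score, 1.0)

suspect = Account(posts_per_day=120, followers=12, following=4000,
                  duplicate_post_ratio=0.9)
print(bot_score(suspect))  # roughly 0.97
```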
The research at hand prompts vast concerns: if AI can affect day-to-day political activity to
this extent, can it entirely undermine the democratic process? Is there an immediate need for
people to be more vigilant about who, or what, they interact with on social media? And if
deployed at a much larger scale, what could the impact of weaponized AI be?
QUESTIONS TO CONSIDER
1. How can we eliminate the problems existing with content moderation? What
role can Artificial Intelligence play in the process?
2. How can we eliminate Political Manipulation and Terror Activities taking
place via the medium of social media?
3. What are the implications of the sudden rise in dark web usage and what are
the suitable measures to counter the same?
4. How can we improve transparency in the operations of private social media
platforms like Facebook?
5. How can we ensure that critical data sharing from private enterprises to
governments doesn't result in leaks?
6. What are the inconsistencies and problems with the existing frameworks and
how do we negate them?