WEF AI Agents
Capgemini
Contents
Foreword
Executive summary
Introduction
1 Definition of an AI agent
3 Looking ahead
Conclusion
Contributors
Endnotes
Disclaimer
This document is published by the
World Economic Forum as a contribution
to a project, insight area or interaction.
The findings, interpretations and
conclusions expressed herein are a result
of a collaborative process facilitated and
endorsed by the World Economic Forum
but whose results do not necessarily
represent the views of the World Economic
Forum, nor the entirety of its Members,
Partners or other stakeholders.
© 2024 World Economic Forum. All rights
reserved. No part of this publication may
be reproduced or transmitted in any form
or by any means, including photocopying
and recording, or by any information
storage and retrieval system.
Foreword

Fernando Alvarez
Chief Strategy and Development Officer, Capgemini

Jeremy Jurgens
Managing Director, World Economic Forum
In the contemporary world, where technology is rapidly reshaping every aspect of our lives, AI agents are emerging as transformative tools that are redefining human interactions and the operation of our society. These agents, which began as simple computer programs, have evolved into sophisticated systems with the capability for autonomous decision-making. This evolution signifies a major shift, positioning AI agents as active participants in crucial sectors such as healthcare, education, financial services and beyond.

The advancement of AI agents brings with it a wealth of exciting possibilities and transformative potential. Their ability to manage complex tasks with minimal human intervention offers the promise of significantly increased efficiency and productivity.

However, as we step into this AI-driven era, it is essential to not only harness the immense benefits these technologies offer, but also to address the challenges they present. Issues such as ethical considerations require careful attention and proactive management. Ensuring that AI development aligns with societal values and aspirations is paramount to its successful integration into daily life. The aim of these innovations is to amplify human ingenuity, not to replace it, within our economy.

This comprehensive overview serves as an important resource for those involved in shaping the future of AI technology. By exploring the capabilities and implications of AI agents, stakeholders can better understand how to leverage the power of these systems to drive meaningful progress across various sectors. It is through this understanding that we can ensure AI technologies are developed responsibly and used in ways that enhance human well-being. With careful stewardship, AI agents can become invaluable allies in fostering innovation and improving quality of life worldwide.

In partnership, the World Economic Forum and Capgemini have joined forces through the AI Governance Alliance to advance this critical topic in collaboration with the AI community.
Executive summary

Defined as autonomous systems that sense and act upon their environment to achieve goals, artificial intelligence (AI) agents are being deployed in a wide range of roles in different industries. This requires the adaptation of governance frameworks to ensure responsible adoption.

AI agents, comprising components such as sensors and effectors, have evolved from rule-based systems to advanced models capable of complex decision-making and independent operation. Enabled by breakthroughs in deep learning, reinforcement learning and the transformer architecture,1 AI agents span applications from workflow automation to personal assistants. This progression now encompasses more sophisticated utility-based AI agents that incorporate memory, planning and tool integration, broadening their capabilities and relevance.

The benefits of AI agents include productivity gains, specialized support and improved efficiency in sectors such as healthcare, customer service and education. However, AI agents also present novel risks, including potential misalignment,2 along with ethical concerns about transparency and accountability.

Future advances in the area are likely to involve multi-agent systems (MAS), where AI agents collaborate to address complex challenges such as urban traffic management. More advanced systems introduce new demands for interoperability and communication standards to function effectively, while these protocols still need to be debated and agreed upon by a wider community.

This paper highlights the need for robust governance, ethical guidelines and a cross-sectoral consensus to integrate AI agents safely into society. As more advanced AI agents continue to proliferate, it is imperative that their transformative potential remains balanced with essential safety, security and governance considerations.
FIGURE 1: The core components of an AI agent (percepts, sensors, effectors and actions) within its environment of digital or physical infrastructure
Figure 1 highlights how an agent is made up of several core components, including:

– User input: the external (e.g. human, another agent) input that the AI agent receives. This could be instructions such as typing via a chat-based interface, voice-based commands or pre-recorded data.

– Environment: the bounds in which the AI agent operates. It serves as the area in which the agent applies its sensors and effectors to perceive and modify its surroundings based on the inputs received and the actions decided upon by the control centre. The environment can be physical infrastructure such as the mapped area of an autonomous vehicle or digital infrastructure such as the intranet of a business for a coding agent.

– Sensors: mechanisms through which the agent perceives its environment. Sensors can range from physical devices (e.g. cameras or microphones) to digital ones (e.g. queries to databases or web services).

– Control centre: the component that processes information, makes decisions and plans actions. Based on the capabilities of the AI agent, the control centre involves complex algorithms and models that allow the agent to evaluate different options and choose the best course of action.

– Percepts: the data inputs that the AI agent receives about its environment, which could come from various sensors or other data sources. They represent the agent's perception or understanding of its environment.

– Effectors: the tools an agent uses to take actions upon its environment. In physical environments, effectors might include robotic arms or wheels, while in the digital environment, they could be commands sent to other software systems, such as generating a data visualization or executing a workflow.

– Actions: represent the alterations made by effectors. In physical environments, actions might be pushing an object, whereas in digital environments they could be linked to updating a database.
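The components above can be illustrated with a minimal sense–decide–act loop. This is a sketch only: the `Agent` class, its rule table and the dictionary-based environment are illustrative assumptions, not an API prescribed by this paper.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    rules: dict                      # control centre: maps percepts to actions
    log: list = field(default_factory=list)

    def sense(self, environment: dict) -> str:
        # Sensor: read one observation (a "percept") from the environment.
        return environment["signal"]

    def decide(self, percept: str) -> str:
        # Control centre: choose an action for the current percept.
        return self.rules.get(percept, "wait")

    def act(self, action: str, environment: dict) -> None:
        # Effector: the chosen action alters the environment.
        self.log.append(action)
        environment["last_action"] = action

env = {"signal": "obstacle"}
agent = Agent(rules={"obstacle": "turn", "clear": "advance"})
agent.act(agent.decide(agent.sense(env)), env)
print(env["last_action"])  # → turn
```

The same loop underlies every agent type discussed below; only the sophistication of the decide step changes.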
Over the past 25 years, the increase in computing capacity, the availability of large quantities of data on the internet and novel algorithmic breakthroughs have enabled significant developments in the base technologies behind recent advances in the capabilities of AI agents. These are briefly described below.

Large models

Large language models (LLM) and large multimodal models (LMM) have revolutionized the capabilities of AI agents, particularly in natural language processing and the generation of text, image, audio and video.

The emergence of large models has been driven by several technological advances and by the transformer architecture, which has paved the way for a deeper understanding of context and word relationships, considerably improving the efficiency and performance of natural language processing tasks.7 In summary, advanced AI models have enabled better understanding, generation and engagement with natural language.

Machine learning and deep learning techniques

A range of techniques have greatly improved AI models through increased efficiency and greater specialization. Some examples of machine- and deep-learning techniques include:

1. Supervised learning: facilitates learning from labelled datasets, so the model can accurately predict or classify new, previously unseen data.8

2. Reinforcement learning: enables agents to learn optimal behaviours through trial and error in dynamic environments. Agents can continuously update their knowledge base without needing periodic retraining.9

3. Reinforcement learning with human feedback: enables agents to adapt and improve through human feedback, specifically focusing on aligning AI behaviour with human values and preferences.10

4. Transfer learning: involves taking a pretrained model, typically trained on a large dataset (e.g. to recognize cars), and adapting it to a new but related problem (e.g. to recognize trucks).11

5. Fine-tuning: involves taking a pretrained model and further training it on a smaller, task-specific dataset. This process allows the model to retain its foundational knowledge while improving its performance on specialized tasks.12

These and other learning paradigms are often used in combination and have dramatically expanded the problem-solving capabilities of AI agents in various areas of application. The evolution of AI agents is detailed in Figure 2, while the agent types are further expanded in the following section.
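As an illustration of the reinforcement-learning paradigm, here is a minimal tabular Q-learning sketch of trial-and-error learning. The two-state environment, reward and parameter values are assumptions chosen for brevity, not taken from this paper.

```python
# Minimal tabular Q-learning: an agent learns by trial and error which
# action leads to reward. Environment, reward and parameters are illustrative.
states, actions = [0, 1], ["left", "right"]
q = {(s, a): 1.0 for s in states for a in actions}  # optimistic start encourages trying both actions
alpha, gamma = 0.5, 0.9                             # learning rate, discount factor

def step(state, action):
    # Moving "right" from state 0 reaches the goal state 1 (reward 1.0).
    if state == 0 and action == "right":
        return 1, 1.0
    return 0, 0.0

for _ in range(50):
    s = 0
    a = max(actions, key=lambda x: q[(s, x)])        # greedy action choice
    s2, r = step(s, a)
    # Q-update: move the estimate towards reward + discounted future value.
    q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in actions) - q[(s, a)])

print(max(actions, key=lambda x: q[(0, x)]))  # → right
```

After repeated trials the rewarded action accumulates the highest value, so the agent's greedy policy shifts towards it without any labelled training data.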
FIGURE 2: The evolution of AI agents

Agent type:          Simple reflex | Model-based | Goal-based | Utility-based | Future type
Agent examples:      Basic anti-virus software | Smart thermostat | Advanced chess AI | Autonomous driving | Smart city traffic planner
Key characteristics: Condition–action rules | Internal model of the environment | Transfer learning and reinforcement learning | Evaluating scenarios to choose the best outcome | Collaborative methodologies that represent the current state of the art

Source: World Economic Forum
This section outlines different types of AI agent and traces their evolution, highlighting the key technological advances that have supported their development. AI agents can be considered as either deterministic or non-deterministic, based on their defining characteristics, which are outlined below.

Deterministic AI agents:

– Rule-based: operate with fixed rules and logic, meaning the same input will always produce the same output.

– Predictable behaviour: the decision-making process is transparent and consistent, which makes the outcomes predictable.

– Limited adaptability: these systems cannot learn from new data or adjust to changes; they follow only predefined paths.

Non-deterministic AI agents:

– Data-driven and probabilistic: make decisions based on statistical patterns in data, with outcomes that are not fixed but instead are probabilistic.

– Flexible and adaptive: able to learn from data, adapt to new situations and handle uncertainty, often resulting in varied outcomes for similar inputs.

– Complex decision-making: use algorithms that factor in probabilities, randomness or other non-deterministic elements, allowing for more nuanced and complex behaviours.
Simple reflex agents operate based on a perception of their environment, without consideration of past experiences.13 Instead, they follow predefined rules to map specific inputs to specific actions. The implementation of condition–action rules allows for rapid responses to environmental stimuli.

Examples:

– Basic spam filters using keyword matching

– Simple chatbots with predefined responses

– Automated email responders that send prewritten replies following specific triggers
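The condition–action pattern can be sketched in a few lines. The keyword rules and messages below are illustrative assumptions; a production spam filter would be considerably more elaborate.

```python
# A simple reflex agent: fixed condition–action rules, no memory, no model.
RULES = {"free money": "spam", "winner": "spam"}  # condition → action

def classify(message: str) -> str:
    # The same input always yields the same output: pure input-to-action mapping.
    for keyword, action in RULES.items():
        if keyword in message.lower():
            return action
    return "inbox"

print(classify("You are a WINNER!"))   # → spam
print(classify("Meeting at 10am"))     # → inbox
```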
Model-based reflex agents are designed to track parts of their environment that are not immediately visible to them.14 They do this by using stored information from previous observations, allowing them to make decisions based on both current inputs and past experiences. By basing their actions on both current perceptions and their internal model, these agents are more adaptable than simple reflex agents, even though they are also governed by condition–action rules.

Examples:

– Smart thermostats that optimize energy usage by adjusting to current and historical temperature data, as well as user preferences

– Smart robotic vacuum cleaners that use sensors and maps to navigate efficiently, avoiding obstacles and optimizing cleaning paths

– Modern irrigation systems that use sensors to collect real-time data on environmental factors such as soil moisture, temperature and precipitation, to optimize water dispensation
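A minimal sketch of such an internal model, assuming a hypothetical thermostat that stores a short temperature history and decides from both the current reading and the observed trend. The thresholds and window size are illustrative.

```python
from collections import deque

# A model-based reflex agent: decisions depend on current input AND an
# internal model built from stored past observations.
class Thermostat:
    def __init__(self, target: float = 21.0):
        self.target = target
        self.history = deque(maxlen=5)   # internal model: recent readings

    def update(self, reading: float) -> str:
        self.history.append(reading)
        trend = self.history[-1] - self.history[0]  # rising or falling?
        if reading < self.target and trend <= 0:
            return "heat"        # cold and not warming up on its own
        if reading > self.target and trend >= 0:
            return "cool"
        return "idle"            # trust the trend to reach the target

t = Thermostat()
for r in [19.0, 18.5, 18.0]:
    action = t.update(r)
print(action)  # → heat
```

A simple reflex agent with the same target would react only to the latest reading; here the stored history lets the agent stay idle when the trend is already moving towards the target.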
Goal-based agents are able to take future scenarios into account. This type of agent considers the desirability of actions' outcomes and plans to achieve specific goals.15 The integration of goal-oriented planning algorithms allows the agent to make decisions based on future outcomes, making them suitable for complex decision-making tasks.

Examples:

– Advanced chess AI engines that have the goal of winning the game, planning moves that maximize the probability of success and considering a long-term strategy

– Route optimization systems for logistics that set goals for efficient delivery and plan optimal routes by setting clear priorities

– Customer service chatbots that set goals to resolve customer issues and plan conversation flows to achieve their goals efficiently
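Goal-oriented planning can be sketched as a search over future states, here with a breadth-first search on a small assumed road network: the agent evaluates sequences of actions and keeps the first path that reaches its goal.

```python
from collections import deque

# A goal-based agent sketch: plan a route to a goal state by searching
# future states. The road network is an illustrative assumption.
ROADS = {"depot": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["customer"]}

def plan(start: str, goal: str) -> list:
    # Breadth-first search: expand paths until one ends at the goal.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in ROADS.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return []  # goal unreachable

print(plan("depot", "customer"))  # → ['depot', 'a', 'c', 'customer']
```

Unlike the reflex agents above, the decision here is justified by a predicted future outcome (reaching the customer), not by the current percept alone.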
Utility-based agents employ search and planning algorithms to tackle intricate tasks that lack a straightforward outcome, thereby going beyond simple goal achievement. They use utility functions to assign a weighted score to each potential state, facilitating optimal decision-making in scenarios with conflicting goals or uncertainty. Rooted in decision theory, this method allows for more advanced decision-making in complex environments. These agents can balance multiple, possibly conflicting objectives according to their relative significance.16

Examples:

– Autonomous driving systems that optimize safety, efficiency and comfort while evaluating trade-offs such as speed, fuel efficiency and passenger comfort

– Portfolio management systems such as robo-advisers that make financial decisions based on utility functions that weigh risk, return and client preferences

– Healthcare diagnosis assistants that analyse patient medical records, label patient data (e.g. tumour detection) and optimize treatment strategy recommendations in cooperation with doctors
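A utility function of this kind can be sketched as a weighted sum over competing objectives. The attributes, weights and candidate driving plans below are illustrative assumptions, not values from any deployed system.

```python
# A utility-based agent sketch: score candidate plans with a weighted
# utility function that trades off conflicting objectives.
WEIGHTS = {"safety": 0.6, "speed": 0.25, "comfort": 0.15}

def utility(plan: dict) -> float:
    # Weighted sum over attributes, each scored in [0, 1].
    return sum(WEIGHTS[k] * plan[k] for k in WEIGHTS)

plans = {
    "overtake":  {"safety": 0.4, "speed": 0.9, "comfort": 0.5},
    "hold_lane": {"safety": 0.9, "speed": 0.5, "comfort": 0.8},
}
best = max(plans, key=lambda name: utility(plans[name]))
print(best)  # → hold_lane
```

Because safety carries the largest weight, the slower but safer plan wins; changing the weights changes how the conflicting objectives are balanced, which is the core of the decision-theoretic approach described above.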
The architecture of many current AI agents is often based on or linked to LLMs, which are configured in complex ways. Figure 3 presents a simplified overview of the key components leading to current breakthroughs in AI agents and their growing range of capabilities.

FIGURE 3: Simplified architecture of an AI agent (user input, control centre, model, environment, effectors and actions)
The AI agent begins with user input, which is directed to the agent's control centre. The user input could be a prompt given to carry out an instruction. The control centre directs the user input to the model, which forms the core algorithmic foundation of the AI agent. This model could be an LLM or an LMM, depending on the application's needs. The model then processes the input data from the user's instructions to generate the desired result.17

At the core of the architecture is the control centre, a crucial component that manages the flow of information and commands throughout the system. It acts as the orchestration layer, directing inputs to the model and routing the output to appropriate tools or effectors. In simple terms, this layer orchestrates the flow of information between 1) user inputs, 2) decision-making and planning, 3) memory management, 4) access to tools and 5) the effectors of the system enabling action in digital or physical environments.18

The decision-making and planning component of an AI agent uses the model's outputs to assist in decision-making and planning of multistep processes. In this segment, advanced features such as chain-of-thought (CoT) reasoning are implemented, which allows the AI agent to engage in multistep reasoning and planning. CoT is a technique where an AI agent systematically processes and articulates intermediate steps to reach a conclusion, which enhances the agent's ability to solve complex problems in a transparent manner, as each step of the model's underlying reasoning is reproduced in natural language.19

Memory management is vital for the continuity and relevance of operations. This component ensures that the AI agent remembers previous interactions and maintains context. This is essential for tasks that require historical data to inform decisions or for maintaining conversational context in chatbots.

Tools enable the AI agent to access and interact with multiple functions or modalities. For example, in an online setting, an AI agent could have access to external tools such as web searches to gather real-time information and scheduling tools to manage appointments and send reminders, as well as project management software to track tasks and deadlines. In terms of modalities, an AI agent could use natural language processing tools alongside image recognition capabilities to perform tasks that require understanding of text-based as well as visual-based data sources.

Once decisions are made or plans set, the effectors component of the AI agent executes the required actions. This could involve interacting with other software systems or with the physical environment.
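The orchestration flow described in this section can be sketched as follows. Every function here is a stand-in assumption: `model` imitates an LLM call, `calendar_tool` a scheduling service; a real system would wrap actual model and tool APIs.

```python
# A control-centre sketch: route user input to a model, consult memory,
# optionally call a tool, then hand the result onwards. All names are
# illustrative stand-ins, not a real agent framework.
def model(prompt: str, context: list) -> dict:
    # Stand-in for an LLM/LMM call: decide whether a tool is needed.
    if "schedule" in prompt:
        return {"tool": "calendar", "args": prompt}
    return {"answer": f"response to: {prompt}"}

def calendar_tool(args: str) -> str:
    return "meeting booked"

TOOLS = {"calendar": calendar_tool}
memory = []  # memory management: prior turns kept for context

def control_centre(user_input: str) -> str:
    decision = model(user_input, memory)          # 2) decision-making and planning
    if "tool" in decision:                        # 4) access to tools
        result = TOOLS[decision["tool"]](decision["args"])
    else:
        result = decision["answer"]
    memory.append((user_input, result))           # 3) memory management
    return result                                 # 5) handed to the effectors

print(control_centre("please schedule a meeting"))  # → meeting booked
```

The numbered comments mirror the five information flows listed in this section; the control centre itself contains no intelligence, only routing between them.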
Multi-agent systems (MAS) consist of multiple independent AI agents as well as AI agent systems that collaborate, compete or negotiate to achieve collective tasks and goals.25 These agents can be autonomous entities, such as software programs or robots, each typically specialized with its own set of capabilities, knowledge and decision-making processes. This allows agents to perform tasks in parallel, communicate with one another and adapt to changes in complex environments.

The architecture of a MAS is determined by the desired outcomes and the goals of each participating agent or system. There are several architectural types,26 for example:

– Network architecture: In this set-up, all agents or systems can communicate with one another to reach a consensus that aligns with the MAS's objectives. For example, when autonomous vehicles (AVs) park in a tight space, they communicate to avoid collision. In this case, the MAS objective to prevent accidents aligns with each AV's goal of safe navigation, allowing them to coordinate effectively and reach consensus.

– Supervised architecture: In this model, a "supervisor" agent coordinates interactions among other agents. It is useful when agents' goals diverge, and consensus may be unattainable. The supervisor can mediate and prioritize the MAS's objectives while considering each agent's unique goals, thereby finding a compromise. An example could be when a buyer and seller agent cannot reach agreement on a transaction, which is then mediated by an AI agent supervisor.
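The supervised mediation pattern can be sketched as follows, assuming hypothetical buyer, seller and supervisor classes and a simple midpoint rule for finding a compromise.

```python
# A supervised-architecture sketch: a supervisor mediates between two
# agents whose goals diverge. Agent behaviours and the midpoint rule
# are illustrative assumptions.
class PriceAgent:
    def __init__(self, limit: float, is_buyer: bool):
        self.limit, self.is_buyer = limit, is_buyer

    def accepts(self, price: float) -> bool:
        # A buyer accepts prices at or below its limit; a seller, at or above.
        return price <= self.limit if self.is_buyer else price >= self.limit

class Supervisor:
    def mediate(self, buyer, seller):
        # Propose the midpoint; succeed only if both agents accept it.
        proposal = (buyer.limit + seller.limit) / 2
        if buyer.accepts(proposal) and seller.accepts(proposal):
            return proposal
        return None  # no compromise exists

buyer, seller = PriceAgent(100.0, True), PriceAgent(80.0, False)
print(Supervisor().mediate(buyer, seller))  # → 90.0
```

When the limits do not overlap, the supervisor reports that no compromise exists rather than forcing one, mirroring its role of prioritizing the MAS objective while respecting each agent's own goal.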
FIGURE 4: Network and supervised multi-agent architectures, composed of AI agents and AI agent systems
While current efforts largely focus on developing AI agents within closed environments or specific software ecosystems, the future is likely to see multiple agents collaborating in different domains and applications. In MAS, different types of agent could work together to tackle increasingly complex tasks that require multistep processes, integrating expertise from various fields to achieve more sophisticated outcomes. These agents can communicate and interact within a broader adaptive system, enabling them to handle both specific tasks and complex situations more efficiently than a single agent, or even an AI agent system, could on its own.

In some cases, multi-agent systems address the limitations of single-agent systems, such as scalability issues and lack of resilience in the event of failures.
FIGURE 5: The structure and relationships among the AI agent, AI agent system and multi-agent system
In a smart city, a multi-agent system (MAS) manages traffic flow in real time, using vehicle-to-everything (V2X) communication, enabling vehicles to interact with other vehicles, pedestrians and road infrastructure.27 Each traffic signal is controlled by an AI agent system that communicates with nearby signals, public transport systems, emergency services and parking services to check availability. Vehicles, equipped with their own AI agent system, share data such as speed, location and road conditions, allowing for coordinated actions to enhance road safety, traffic efficiency and energy usage. For example, if an accident occurs, AI agents can reroute traffic, adjust signal timings, notify emergency services and communicate with vehicles and pedestrians to avoid the area, all with minimal human intervention. This system optimizes traffic flow, improves road safety and reduces energy consumption by dynamically adapting to real-time conditions. For instance, if a parking lot is full, the system can direct vehicles to available parking further away, even if it conflicts with the driver's and the onboard AI agent's preference for proximity.
One technical challenge in multi-agent systems is associated with enabling effective communication between different AI agents and AI agent systems.28 In some cases, interactions are limited by the boundaries of native application environments, restricting the potential of AI agents to narrower and more specialized subdomains, where control is more easily retained.

– Emergent protocols: these allow agents to learn how to communicate effectively based on their experiences, often using reinforcement learning techniques. This enables agents to adapt their communication strategies to changing environments and tasks.30 However, decoding and understanding emergent communication remains an ongoing research challenge.31
Software development
AI agents can help generate, run and check code and other needed artefacts, allowing software developers to focus on higher value-added activities.
Healthcare
AI agents could improve diagnostics and personalized treatment, reducing hospital
stays and costs through data analysis and decision-making support. For example, in
under-resourced areas, AI agents could help alleviate the workload of clinical specialists
by assisting doctors in developing tailored treatment plans.33
Education
AI agents could help personalize learning experiences by adapting content to each
student’s needs, offering real-time feedback and supporting teachers with grading and
administrative tasks. This allows educators to focus more on creative and interactive
learning experiences.
Finance
AI agents could help enhance fraud detection, optimize trading strategies and offer
personalized financial advice. They can analyse large datasets to identify patterns and
trends, providing faster and more accurate insights for decision-making.
While AI agents have the potential to offer numerous benefits, they also come with inherent risks, as well as novel safety and security implications. For example, an AI system independently pursuing misaligned objectives could cause immense harm, especially in scenarios where the AI agents' level of autonomy increases while the level of human oversight decreases. AI agents learning to deceive human operators, pursuing power-seeking instrumental goals or colluding with other misaligned agents in unexpected ways could pose entirely novel risks.35

Agent-specific risks can be both technical and normative. Challenges associated with AI agents stem from technical limitations, ethical concerns and broader societal impacts often associated with a system's level of autonomy and the overall potential of its use when humans are removed from the loop. Without a human in the loop at appropriate steps, agents may take multiple consequential actions in rapid succession, which could have significant consequences before a person notices what is happening.36

AI agents can also amplify known risks associated with the domain of AI and could introduce entirely new risks that can be broadly categorized into technical, socioeconomic and ethical risks.

Technical risks

Examples of technical risks include:

– Risks from malfunctions due to AI agent failures: AI agents can amplify the risks from malfunctions by introducing new classes of failure modes. LLMs, for example, can enable agents to produce highly plausible but incorrect outputs, presenting risks in ways that were not possible with earlier technologies. These emerging failure modes add to traditional issues such as inaccurate sensors or effectors and encompass capability- and goal-related failures, as well as increased security vulnerabilities that could lead to malfunctions.37

– Goal misgeneralization: When AI agents apply their learned goals inappropriately to new or unforeseen situations.39

– Deceptive alignment: When AI agents appear to be aligned with the intended goals during training or testing, but their internal objectives differ from what is intended.40

– Malicious use and security vulnerabilities: AI agents can amplify the risk of fraud and scams, increasing both their volume and sophistication. More capable AI agents can facilitate the generation of scam content at greater speeds and scale than previously possible, and AI agents can facilitate the creation of more convincing and personalized scam content. For example, AI systems could help criminals evade security software by correcting language errors and improving the fluency of messages that might otherwise be caught by spam filters.41 More capable AI agents could automate complex end-to-end tasks that would lower the point of entry for engaging in harmful activities. Some forms of cyberattacks could, for example, be automated, allowing individuals with little domain knowledge or technical expertise to execute large-scale attacks.42

– Challenges in validating and testing complex AI agents: The lack of transparency and non-deterministic behaviour of some AI agents creates significant challenges for validation and verification. In safety-critical applications, this unpredictability complicates efforts to assure system safety, as it becomes difficult to demonstrate reliable performance in all scenarios.43 While failures in agent-based systems are expected, the varied ways in which they can fail add further complexity to safety assurance. Failsafe mechanisms are essential but could be harder to design due to uncertainty about potential failure modes.44

Socioeconomic risks

Examples of socioeconomic risks include:
To enable the autonomy of AI agents for cases where it would greatly improve outcomes, several challenges must be addressed. These challenges include safety and security-related assurance, regulation, moral responsibility and legal accountability, data equity considerations, data governance and interoperability, skills, culture and perceptions.47 Addressing these challenges requires a comprehensive approach throughout the stages of design, development, deployment and use of AI agents as well as changes across policy and regulation. As advanced AI agents and multi-agent systems continue to evolve and integrate into various aspects of digital infrastructure, associated governance frameworks that take increasingly complex scenarios into consideration need to be established.

In assessing and mitigating the risks of potential harm from AI agents, it is essential to understand the specific application and environment of the AI agent (including stakeholders that may be affected). The risks of potential harm from an AI agent stem largely from the context in which it is deployed.48 In high-stakes environments such as healthcare or autonomous driving, even small errors or biases can lead to significant consequences for the users of such systems. Conversely, in low-stakes contexts, such as customer service, the same AI agent might pose minimal risks, as mistakes are less likely to cause serious harm.

Within the context of a specific application and environment, it is important to adopt a risk analysis methodology that systematically identifies, categorizes and assesses all of the risks associated with the AI agent. Such an approach helps ensure that appropriate and effective mitigation mechanisms and strategies can be implemented by relevant stakeholders at the technical, socioeconomic and ethical levels.

Technical risk measures

Examples of technical risk measures:

– Improving information transparency: Where, why, how, and by whom information is used is critical for understanding how a system operates and why certain decisions are made by the agent. Measures can be implemented to improve the transparency of AI agents, such as the integration of behavioural monitoring and the implementation of thresholds, triggers and alerts that involve continuous observation and analysis of the agent's actions and decisions. Implementing behavioural monitoring helps to ensure that failures are better understood and properly mitigated when they occur.49
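The thresholds, triggers and alerts described in this measure can be sketched as a small monitoring loop. The action categories, limits and alert format below are illustrative assumptions.

```python
from collections import Counter

# A behavioural-monitoring sketch: log each agent action and raise an
# alert when a category exceeds its threshold. Categories and limits
# are illustrative assumptions.
THRESHOLDS = {"external_call": 3, "data_write": 5}
alerts, counts = [], Counter()

def monitor(action: str) -> None:
    counts[action] += 1
    limit = THRESHOLDS.get(action)
    if limit is not None and counts[action] > limit:
        # Continuous observation: flag unusual volumes for human review.
        alerts.append(f"{action} exceeded threshold of {limit}")

for _ in range(4):
    monitor("external_call")
print(alerts)  # → ['external_call exceeded threshold of 3']
```

In practice the log and alerts would feed a review dashboard, so that failures are observed and understood rather than silently accumulating.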
– Public education and awareness: Developing and executing strategies to inform and engage the public are essential to mitigate the risks of over-reliance and disempowerment in social interactions with AI agents. These efforts should aim to equip individuals with a solid understanding of the capabilities and limitations of AI agents, allowing for more informed interactions, along with healthy integrations.

– A forum to collect public concerns: Acceptance and involvement, trust and psychological safety are crucial to tackle societal resistance and for the proper adoption and integration of AI agents into various processes. Without sufficient human "buy-in", the implementation of AI agents would face significant challenges. In addressing societal resistance and creating wider trust in AI agents and autonomous systems, it is important that public concerns are heard and addressed throughout the design and deployment of advanced AI agents.50

– Thoughtful strategies for deployment: Organizations can embrace deliberate strategies around increased efficiency and task augmentation rather than focusing on outright worker replacement efforts. By prioritizing proactive measures such as retraining programmes, workers can be supported in transitioning to new or changed roles.

– Clear ethical guidelines: Prioritizing human rights, privacy and accountability are essential measures to ensure that AI agents make decisions that are aligned with human and societal values.51

– Behavioural monitoring: Implementing measures that allow users to trace and understand the underlying reasoning behind an AI agent's decisions is necessary to mitigate transparency challenges.52 Behavioural monitoring can make system behaviour and decisions visible and interpretable, which enhances overall user understanding of interactions. This approach also strengthens the governance structure surrounding AI agents and helps increase stakeholder accountability.53

As the adoption of AI agents increases, critical trade-offs need to be made. Given the complex nature of many advanced AI agents, safety should be regarded as a critical factor alongside other considerations such as cost and performance, intellectual property, accuracy and transparency, as well as implied social trade-offs when it comes to deployment.

The level of autonomy of advanced AI agents is likely to continue to increase due to ever more capable models and reasoning capabilities.54 The complexities of more advanced systems call for a multidisciplinary approach that includes diverse stakeholders, from scientists and researchers to psychologists, developers, system and service integrators, operators, maintainers, users and regulators, all of whom are needed to establish appropriate risk management frameworks and governance protocols for the deployment of more sophisticated AI agent systems.
The development of AI agents has been marked by significant milestones, from the early days of simple reflex agents to sophisticated multi-agent systems. Recent advances in LLMs and LMMs have resulted in the next evolution of AI agents, which have moved from basic systems that react to immediate stimuli to complex entities capable of planning, learning and making decisions based on a comprehensive understanding of their environment and user needs.

The ongoing development of AI agents is fundamentally linked to increased autonomy, improved learning capabilities, enhanced decision-making abilities and multi-agent collaboration. As the architecture and emerging use cases for AI agents continue to proliferate, the shift towards multi-agent systems that can collaborate in increasingly complex environments is likely to continue.

Increased autonomy plays an important part in the evolution of AI agents and creates novel opportunities for new applications while also presenting unique risks to society. The introduction of AI agents will likely reduce the need for human involvement and oversight in some areas, bringing a more efficient approach to tedious tasks. However, a reduction in human oversight could also increase the risk of accidents. Furthermore, increased automation of workflows could be a way for malicious actors to exploit novel vulnerabilities, while also exacerbating socioeconomic and ethical risks.

The rapid advance of AI agent capabilities is set to be followed by a wave of innovation in AI agents, which could have the ability to transform the global economy and the roles of human labour in new and significant ways.

Further research is necessary to explore the safety, security and societal impacts of AI agents and multi-agent systems, emphasizing both technical solutions and organizational governance frameworks. These efforts are critical for mitigating risks associated with the ongoing development, deployment and increasing use of more sophisticated AI agents in a range of domains.

At this point, it is vital for stakeholders to come together throughout technical, civil society, applied and governance-facing communities to research, discuss and build consensus on novel governance mechanisms.

This white paper has offered an initial exploration of the rapidly evolving landscape of AI agents, aiming to promote deeper understanding of this emerging field and spark conversation on responsible adoption and diffusion practices. Through equitable development, deployment and governance, the growing presence of advanced AI agents holds the promise of driving positive societal transformation for many years to come.
Acknowledgements

Jun Seita
Team Leader (Principal Investigator), Medical Data Deep Learning Team, RIKEN

Li Tieyan
Chief AI Security Scientist, Huawei Technologies