Superagency in the Workplace
Empowering people to unlock AI’s full potential
Hannah Mayer
Lareina Yee
Michael Chui
Roger Roberts
January 2025
Contents
Introduction
Chapters
Acknowledgments
Methodology
Glossary
Introduction
Almost all companies invest in AI, but just 1 percent believe they are at
maturity. Our research finds the biggest barrier to scaling is not
employees—who are ready—but leaders, who are not steering fast enough.
Artificial intelligence has arrived in the workplace and has the potential to be as transformative
as the steam engine was to the 19th-century Industrial Revolution.1 With powerful and capable
large language models (LLMs) developed by Anthropic, Cohere, Google, Meta, Mistral, OpenAI,
and others, we have entered a new information technology era. McKinsey research sizes the
long-term AI opportunity at $4.4 trillion in added productivity growth potential from corporate use cases.2
Therein lies the challenge: the long-term potential of AI is great, but the short-term returns are unclear. Over
the next three years, 92 percent of companies plan to increase their AI investments. But while nearly all
companies are investing in AI, only 1 percent of leaders call their companies “mature” on the deployment
spectrum, meaning that AI is fully integrated into workflows and drives substantial business outcomes. The
big question is how business leaders can deploy capital and steer their organizations closer to AI maturity.
This research report, prompted by Reid Hoffman’s book Superagency: What Could Possibly Go Right with
Our AI Future,3 asks a similar question: How can companies harness AI to amplify human agency and unlock
new levels of creativity and productivity in the workplace? AI could drive enormous positive and disruptive
change. This transformation will take some time, but leaders must not be dissuaded. Instead, they must
advance boldly today to avoid becoming uncompetitive tomorrow. The history of major economic and
technological shifts shows that such moments can define the rise and fall of companies. Over 40 years ago,
the internet was born. Since then, companies including Alphabet, Amazon, Apple, Meta, and Microsoft have
attained trillion-dollar market capitalizations. Even more profoundly, the internet changed the anatomy of
work and access to information. AI now is like the internet many years ago: The risk for business leaders is
not thinking too big, but rather too small.
1. “Gen AI: A cognitive industrial revolution,” McKinsey, June 7, 2024.
2. “The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023.
3. Reid Hoffman and Greg Beato, Superagency: What Could Possibly Go Right with Our AI Future, Authors Equity, January 2025.
Key findings from our survey:
— 3× more employees are using gen AI for a third or more of their work than their leaders imagine; more than 70% of all employees believe that within 2 years gen AI will change 30% or more of their work.
— Millennials are 1.4× more likely to report extensive familiarity with gen AI tools than peers in other age groups; they are also 1.2× more likely to expect workflows to change within a year.
— 47% of the C-suite say their companies are developing gen AI tools too slowly, even though 69% started investing more than a year ago.
— Employees are 1.3× more likely to trust their own companies to get gen AI deployment right than they are to trust other institutions.
— Companies are investing in gen AI but have not yet achieved maturity: 92% of companies plan to invest more in gen AI over the next 3 years, yet only 1% believe their investments have reached maturity.
— The C-suite is 2.4× more likely to cite employee readiness as a barrier to adoption vs their own issues with leadership alignment, despite employees currently using gen AI 3× more than leaders expect.
— 48% of employees rank training as the most important factor for gen AI adoption, yet nearly half feel they are receiving moderate or less support.
Chapter 1 looks at the rapid advancement of technology over the past two years and its implications for
business adoption of AI.
Chapter 2 delves into the attitudes and perceptions of employees and leaders. Our research shows that
employees are more ready for AI than their leaders imagine. In fact, they are already using AI on a regular
basis; are three times more likely than leaders realize to believe that AI will replace 30 percent of their work
in the next year; and are eager to gain AI skills. Still, AI optimists are only a slight majority in the workplace; a
large minority (41 percent) are more apprehensive and will need additional support. This is where millennials,
who are the most familiar with AI and are often in managerial roles, can be strong advocates for change.
Chapter 3 looks at the need for speed and safety in AI deployment. While leaders and employees want to
move faster, trust and safety are top concerns. About half of employees worry about AI inaccuracy and
cybersecurity risks. That said, employees express greater confidence that their own companies, versus
other organizations, will get AI right. The onus is on business leaders to prove them right, by making bold and
responsible decisions.
Chapter 4 examines how companies risk losing ground in the AI race if leaders do not set bold goals. As the
hype around AI subsides, companies should put a heightened focus on practical applications that empower
employees in their daily jobs. These applications can create competitive moats and generate measurable
ROI. Across industries, functions, and geographies, companies that invest strategically can go beyond using
AI to drive incremental value and instead create transformative change.
Chapter 5 looks at what is required for leaders to set their teams up for success with AI. The challenge of AI
in the workplace is not a technology challenge. It is a business challenge that calls upon leaders to align
teams, address AI headwinds, and rewire their companies for change.
All the survey findings discussed in the report, aside from two sidebars presenting international nuances,
pertain solely to US workplaces. The findings are organized in this way because the responses from US
employees and C-suite executives provide statistically significant conclusions about the US workplace.
Analyzing global findings separately allows a comparison of differences between US responses and
those from other regions.
Many breakthrough technologies, including the internet, smartphones, and cloud computing, have
transformed the way we live and work. AI stands out from these inventions because it offers more than
access to information. It can summarize, code, reason, engage in a dialogue, and make choices. AI can lower
skill barriers, helping more people acquire proficiency in more fields, in any language and at any time. AI
holds the potential to shift the way people access and use knowledge. The result will be more efficient and
effective problem solving, enabling innovation that benefits everyone.
Over the past two years, AI has advanced in leaps and bounds, and enterprise-level adoption has
accelerated due to lower costs and greater access to capabilities. Many notable AI innovations have
emerged (Exhibit 1). For example, we have seen a rapid expansion of context windows, or the short-term
memory of LLMs. The larger a context window, the more information an LLM can process at once. To
illustrate, Google’s Gemini 1.5 could process one million tokens in February 2024, while its Gemini 1.5 Pro
could process two million tokens by June of that same year.4 Overall, we see five big innovations for business
that are driving the next wave of impact: enhanced intelligence and reasoning capabilities, agentic AI,
multimodality, improved hardware innovation and computational power, and increased transparency.
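To make the context window idea concrete, here is a minimal sketch (an illustration, not an example from our research) that uses the open-source tiktoken tokenizer to check whether a long prompt fits a model’s context window before it is sent. The model names and token budgets below are hypothetical placeholders, not vendor specifications.

import tiktoken  # open-source tokenizer library, used here purely for illustration

# Hypothetical per-model token budgets; real limits vary by provider and model version.
CONTEXT_LIMITS = {
    "short-context-model": 128_000,
    "long-context-model": 2_000_000,
}

def fits_in_context(prompt: str, token_budget: int, reserve_for_output: int = 4_096) -> bool:
    """Return True if the prompt, plus room reserved for the model's reply, fits the window."""
    encoding = tiktoken.get_encoding("cl100k_base")  # a general-purpose byte-pair encoding
    prompt_tokens = len(encoding.encode(prompt))
    return prompt_tokens + reserve_for_output <= token_budget

# A long document that overflows a smaller window may still fit a long-context model.
document = "quarterly results discussion " * 50_000
print(fits_in_context(document, CONTEXT_LIMITS["short-context-model"]))  # checked against the 128k budget
print(fits_in_context(document, CONTEXT_LIMITS["long-context-model"]))   # checked against the 2M budget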
AI superagency
What impact will AI have on humanity? Reid Hoffman and Greg Beato’s book Superagency: What
Could Possibly Go Right with Our AI Future (Authors Equity, January 2025) explores this question.
The book highlights how AI could enhance human agency and heighten our potential. It envisions a
human-led, future-forward approach to AI.
Superagency, a term coined by Hoffman, describes a state where individuals, empowered by AI, super-
charge their creativity, productivity, and positive impact. Even those not directly engaging with AI can
benefit from its broader effects on knowledge, efficiency, and innovation.
AI is the latest in a series of transformative supertools, including the steam engine, internet, and
smartphone, that have reshaped our world by amplifying human capabilities. Like its predecessors, AI
can democratize access to knowledge and automate tasks, assuming humans can develop and deploy
it safely and equitably.
4. The Keyword, “Our next-generation model: Gemini 1.5,” blog entry by Sundar Pichai and Demis Hassabis, Google, February 15, 2024; Google for Developers, “Gemini 1.5 Pro 2M context window, code execution capabilities, and Gemma 2 are available today,” blog entry by Logan Kilpatrick, Shrestha Basu Mallick, and Ronen Kofman, June 27, 2024.
Exhibit 1
Gen AI capabilities have evolved rapidly over the past two years.
[Timeline of model releases from Google (Gemini), Meta, Microsoft, and OpenAI.]
Note: Exhibit is not intended as an evaluation or comparison but as an illustration of the rapid progress in capabilities. Initial models released between Mar 2022 and Mar 2023.
The advent of reasoning capabilities represents the next big leap forward for AI. Reasoning enhances AI’s
capacity for complex decision making, allowing models to move beyond basic comprehension to nuanced
understanding and the ability to create step-by-step plans to achieve goals. For businesses, this means they
can fine-tune reasoning models and integrate them with domain-specific knowledge to deliver actionable
insights with greater accuracy. Models such as OpenAI’s o1 or Google’s Gemini 2.0 Flash Thinking Mode
are capable of reasoning in their responses, which gives users a human-like thought partner for their
interactions, not just an information retrieval and synthesis engine.7
5. GPT-4 technical report, OpenAI, March 27, 2023.
6. Dana Brin, Vera Sorin, Akhil Vaid, et al., “Comparing ChatGPT and GPT-4 performance in USMLE soft skill assessments,” Scientific Reports, October 1, 2023.
7. “Learning to reason with LLMs,” OpenAI, September 12, 2024; “Gemini 2.0 Flash Thinking Mode,” Google, January 21, 2025.
Software companies are embedding agentic AI capabilities into their core products. For example,
Salesforce’s Agentforce is a new layer on its existing platform that enables users to easily build and
deploy autonomous AI agents to handle complex tasks across workflows, such as simulating product
launches and orchestrating marketing campaigns.8 Marc Benioff, Salesforce cofounder, chair, and CEO,
describes this as providing a “digital workforce” where humans and automated agents work together to
achieve customer outcomes.9
8. Sammy Spiegel, “The future of AI agents: Top predictions and trends to watch in 2025,” Salesforce, December 2024.
9. Marc Benioff, “How the rise of new digital workers will lead to an unlimited age,” Time, November 25, 2024.
10. Ivan Solovyev and Shrestha Basu Mallick, “Gemini 2.0: Level up your apps with real-time multimodal interactions,” Google, December 23, 2024.
11. “OpenAI releases AI video generator Sora but limits how it depicts people,” Associated Press, December 10, 2024.
Beyond LLMs, other forms of AI and machine learning (ML) are improving explainability, allowing the outputs
of models that support consequential decisions (for example, credit risk assessment) to be traced back to
the data that informed them. In this way, critical systems can be tested and monitored on a near-constant
basis for bias and other everyday harms that arise from model drift and shifting data inputs, which happen
even in systems that were well calibrated before deployment.
All of this is crucial for detecting errors and ensuring compliance with regulations and company policies.
Companies have improved explainability practices and built necessary checks and balances, but they must
be prepared to evolve continuously to keep up with growing model capabilities.
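As a concrete illustration of this kind of routine monitoring, the short sketch below computes a population stability index (PSI) to flag when the distribution of a model input or score has drifted away from the data the model was calibrated on. It is a generic example under assumed data, not a technique prescribed in this report, and the 0.2 alert level is a common rule of thumb rather than a standard.

import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Compare the current distribution of a feature or score with its calibration-time baseline."""
    # Bin edges come from the baseline data; assumes a continuous feature so edges are distinct.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    base_counts, _ = np.histogram(baseline, bins=edges)
    # Clip new values into the baseline range so extreme observations land in the outer bins.
    curr_counts, _ = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)
    base_pct = np.clip(base_counts / len(baseline), 1e-6, None)  # avoid log(0) in empty bins
    curr_pct = np.clip(curr_counts / len(current), 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Illustrative check on a hypothetical credit-score input: calibration data vs recent production data.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(650, 50, 10_000)
current_scores = rng.normal(630, 60, 10_000)
psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:  # a commonly used rule-of-thumb alert level, not a regulatory threshold
    print(f"Significant input drift detected (PSI = {psi:.2f}); trigger a bias and accuracy review.")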
Achieving AI superagency in the workplace is not simply about mastering technology. It is every bit as much
about supporting people, creating processes, and managing governance. The next chapters explore the
nontechnological factors that will help shape the deployment of AI in the workplace.
12. “The Foundation Model Transparency Index,” Stanford Center for Research on Foundation Models, May 2024.
The good news is that our survey suggests three ways companies can accelerate AI adoption and move
toward AI maturity.
Exhibit 2
Employees are three times more likely to be using gen AI today than their leaders expect.
US employees’ and C-suite’s timeline for employees using gen AI for >30% of daily tasks, % of respondents

                         C-suite   Employees
Already using                4        13 (3×)
Less than a year            16        34
1–5 years                   56        37
Over 5 years                11         5
Don’t anticipate it         10         7
Not sure                     3         4
Even those with a skeptical take on AI are familiar with it; 94 percent of Gloomers and 71 percent of Doomers say
they have some familiarity with gen AI tools. Furthermore, approximately 80 percent of Gloomers and about half
of Doomers say they are comfortable using gen AI at work.
Exhibit
Employee segments differ, but all indicate a high familiarity with gen AI.
% of respondents, by archetype

                                               Doomers   Gloomers   Bloomers   Zoomers
Has extensive familiarity with gen AI¹            16        42         55        67
Has at least some familiarity with gen AI²        71        94         96        96
Is comfortable using results from gen AI          47        79         91        91
Expects 30% of workflows to change
  in the next year                                19        38         50        64
Share of respondents in archetype group, %         4        37         39        20

¹Defined as those who have “extensive experience (use several tools for complex tasks)” and “experts.”
²Defined as those who have “some familiarity (use 1–2 tools a few times),” “extensive experience (use several tools for complex tasks),” and “experts.”
Yet employees are not getting the training and support they need. More than a fifth report that they have
received minimal to no support (Exhibit 3). Outside the United States, employees also want more training
(see sidebar “Global perspectives on training”).
Exhibit 3
Share of US employees agreeing that a company initiative would make them more likely to increase day-to-day usage of gen AI tools, %. Initiatives and channels shown include: use is mandated; manager; peers; C-suite leadership; developer of AI tool; generic communications; providing feedback in the tool itself; providing feedback via other channels; beta testing or pilot programs; submitting specific requests for features; and not being involved.
US employees’ perceived level of support for gen AI capability building at their organizations, % of respondents, currently and in 3 years.
Source: McKinsey international employee survey, Oct–Nov 2024 (Australia and New Zealand, n = 139; India, n = 134; Singapore, n = 140; UK, n = 201); McKinsey US employee survey, Oct–Nov 2024 (n = 3,002)
Exhibit 4
Share of US employees who have extensive familiarity with gen AI,¹ are comfortable using gen AI at work, provide feedback on gen AI tools, and want to participate in the design of gen AI tools, %
¹Defined as those who have “extensive experience (use several tools for complex tasks)” and “experts.”
Source: McKinsey US employee survey, Oct–Nov 2024 (n = 3,002)

Exhibit 5
Frequency of team inquiries about using new gen AI tools at work, % of US manager respondents (n = 1,440)
Use of gen AI tools to resolve a team member’s challenge, % of US manager respondents (n = 1,440)
Since many millennials are managers, they can support their teams to become more adept AI users. This
helps push their companies toward AI maturity. Two-thirds of managers say they field questions from their
team about how to use AI tools at least once a week, and a similar percentage say they recommend AI tools
to their teams to solve problems (Exhibit 5).
It’s critical that leaders meet this moment. It’s the only way to increase the odds that their companies
will reach AI maturity. But they must move with alacrity, or they will fall behind.
The majority of employees describe themselves as AI optimists; Zoomers and Bloomers make up 59 percent
of the workplace. Even Gloomers, who are one of the two less-optimistic segments in our analysis, report
high levels of gen AI familiarity, with over a quarter saying they plan to use AI more next year.
Business leaders need to embrace this speed and optimism to ensure that their companies don’t get left
behind. Yet despite all the excitement and early experimentation, 47 percent of C-suite leaders say their
organizations are developing and releasing gen AI tools too slowly, citing talent skill gaps as a key reason for
the delay (Exhibit 6).
Exhibit 6
Half of business leaders believe the development and release of gen AI tools is too slow in their organizations.
US C-suite categorization of the pace at which their organizations are developing and releasing gen AI tools, % of respondents

Too slow       47
About right    45
Too fast        9
US C-suite’s perceived top reason organizations are developing and releasing gen AI tools too slowly, % of respondents reporting development pace as “too slow”: talent and skill gaps lead at 46 percent; the remaining reasons are cited by 38, 8, and 8 percent of these respondents.
13. Hayden Field, “OpenAI’s active user count soars to 300 million people per week,” CNBC, December 4, 2024.
14. Krystal Hu and Dawn Chmielewski, “OpenAI’s Altman pitches ChatGPT Enterprise to large firms, including some Microsoft customers,” Reuters, April 12, 2024.
We are at a turning point. The initial AI excitement may be waning, but the technology is accelerating. Bold
and purposeful strategies are needed to set the stage for future success. Leaders are taking the first step:
One quarter of those executives we surveyed have defined a gen AI road map, while just over half have a
draft that is being refined (Exhibit 7). With technology changing this fast, all road maps and plans will evolve
constantly. For leaders, the key is to make some clear choices about what valuable opportunities they
choose to pursue first—and how they will work together with peers, teams, and partners to deliver that value.
Exhibit 7
Most C-suite respondents have road maps to guide their gen AI strategies and have begun identifying use cases.

Gen AI road map status, % of US C-suite respondents
No road map yet                  21
Draft road map being refined     53
Defined gen AI road map          25

Level of identifying revenue-generating use cases for gen AI, % of US C-suite respondents
Not identified (1): Have not yet identified gen AI use cases for revenue generation or cost reduction
Minimally identified (10): Have only just begun exploring how gen AI can drive revenue or reduce cost
Partially identified (38): Have identified a few gen AI use cases, but more exploration is needed to map to value generation
Mostly identified (39): Have several promising gen AI use cases, but there is still room for discovery and optimization
Fully identified (12): Ready to implement value-generating gen AI use cases, with no plans for other use cases
According to our research, this is in line with a broader trend in which employees show higher trust in their
employers to do the right thing in general (73 percent) than in other institutions, including the government
(45 percent). This trust should help leaders act with confidence as they tackle the speed-versus-safety
dilemma. That confidence also applies outside the United States, even though employees in other regions
may have more desire for regulation (see sidebar “Global perspectives on regulation”).
Exhibit 9
Employees trust their employers most for a safe rollout of gen AI.
Share of US employees who highly trust institution to deploy gen AI tools responsibly, safely, and ethically,¹ %
¹High trust is defined as a 4 or 5 on a scale of 1 to 5.
Source: McKinsey US employee survey, Oct–Nov 2024 (n = 3,002)
However, our research shows that attitudes about regulation are not inhibiting the economic expectations of business leaders outside the United States. More than half of the international executives (versus 41 percent of US
executives) indicate they want their companies to be among the first adopters of AI, with those in India and
Singapore being especially bullish (exhibit). The desire of international business leaders to be AI first movers can
be explained by the revenue they expect from their AI deployments. Some 31 percent of international C-suite
leaders say they expect AI to deliver a revenue uplift of more than 10 percent in the next three years, versus just 17
percent of US leaders. Indian executives are the most optimistic, with 55 percent expecting a revenue uplift of 10
percent or more over the next three years.

Exhibit
% of respondents

                                                      Australia and
                                                      New Zealand   India   Singapore   UK   US
Want to be one of the first to try new technologies        54         84        76      55   41
Prefer to research and understand new technologies
  in depth before trying them                              71         85        51      59   59
Adopt new technologies only once they have been
  proven and thoroughly tested                             51         49        28      41   40

Source: McKinsey international employee survey, Oct–Nov 2024 (Australia and New Zealand, n = 139; India, n = 134; Singapore, n = 140; UK, n = 201); McKinsey US employee survey, Oct–Nov 2024 (n = 3,002)
One powerful control mechanism is respected third-party benchmarking that can increase AI safety and
trust. Examples include Stanford CRFM’s Holistic Evaluation of Language Models (HELM) initiative—which
offers comprehensive benchmarks to assess the fairness, accountability, transparency, and broader societal
impact of a company’s AI systems—as well as MLCommons’s AILuminate tool kit on which researchers from
Stanford collaborated.16 Other organizations such as the Data & Trust Alliance unite large companies to
create cross-industry metadata standards that aim to bring more transparency to enterprise AI models.
While benchmarks have significant potential to build trust, our survey shows that only 39 percent of C-suite
leaders use them to evaluate their AI systems. Furthermore, when leaders do use benchmarks, they opt to
measure operational metrics (for example, scalability, reliability, robustness, and cost efficiency) and
performance-related metrics (including accuracy, precision, F1 score, latency, and throughput). These
benchmarking efforts tend to be less focused on ethical and compliance concerns: Only 17 percent of
C-suite leaders who benchmark say it’s most important to measure fairness, bias, transparency, privacy,
and regulatory issues (Exhibit 10).
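To show how modest the starting point for such benchmarking can be, the sketch below scores a classification-style gen AI task against a small labeled evaluation set and records latency. It is a generic illustration with placeholder test cases and a toy stand-in model, not an excerpt from HELM, AILuminate, or our survey, and fairness and compliance checks would still need to be layered on top of these operational metrics.

import time
from sklearn.metrics import accuracy_score, f1_score, precision_score

def run_benchmark(model_fn, eval_cases):
    """Score a model callable against labeled cases and capture simple performance metrics."""
    predictions, latencies = [], []
    for prompt, _expected in eval_cases:
        start = time.perf_counter()
        predictions.append(model_fn(prompt))  # model_fn is any callable that returns a label
        latencies.append(time.perf_counter() - start)
    expected = [label for _, label in eval_cases]
    return {
        "accuracy": accuracy_score(expected, predictions),
        "precision": precision_score(expected, predictions, average="macro", zero_division=0),
        "f1": f1_score(expected, predictions, average="macro", zero_division=0),
        "p95_latency_s": sorted(latencies)[int(0.95 * (len(latencies) - 1))],
    }

# Placeholder evaluation set and a trivial stand-in "model" for illustration only.
eval_cases = [("refund request", "billing"), ("password reset", "account"), ("invoice error", "billing")]
toy_model = lambda prompt: "billing" if "invoice" in prompt or "refund" in prompt else "account"
print(run_benchmark(toy_model, eval_cases))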
Exhibit 10
More than a third of C-suite respondents use benchmarks for gen AI,
but with less focus on ethical metrics.
15. Reid Hoffman and Greg Beato, Superagency: What Could Possibly Go Right with Our AI Future, Authors Equity, January 28, 2025.
16. “MLCommons launches AILuminate, first-of-its-kind benchmark to measure the safety of large language models,” Business Wire, press release, December 4, 2024.
Even companies that excel at all three categories of AI readiness—technology, employees, and safety—are
not necessarily scaling or delivering the value expected. Nevertheless, leaders can harness the power of big
ambitions to transform their companies with AI. The next chapter examines how.
Pilots fail to scale for many reasons. Common culprits are poorly designed or executed strategies, but a lack
of bold ambitions can be just as crippling. This chapter looks at patterns governing today’s investments in AI
across industries and suggests the potential awaiting those who can dream bigger.
Exhibit 11
Note: Not all industries represented in the survey are shown; only industries in the top quartile based on self-reported gen AI spend as % of revenue are represented. Economic potential was determined using the following mapping of industries listed in the McKinsey report The economic potential of generative AI: (advanced industries: advanced manufacturing), (technology: high tech), (financial services: banking), (hardware engineering and construction: advanced electronics and semiconductors and construction).
1. Includes biotechnology, healthcare equipment and services, life sciences, and pharmaceuticals.
2. Includes aerospace and defense and automotive and assembly.
Source: The economic potential of generative AI: The next productivity frontier, McKinsey, June 14, 2023; McKinsey US CxO survey, Oct–Nov 2024 (n = 118)
Employees in the social sector, aerospace and defense, and public sector are least optimistic about gen AI.
Measures shown, by industry, % of US employees: has extensive familiarity with gen AI; perceives gen AI results as being ≥70% accurate; is comfortable using gen AI results in work; expects workflows to change by 30% in the next year; receives support from organization on gen AI; has a high level of trust that employer will safely roll out gen AI; and believes gen AI will have a net benefit in the next 5 years.
Industries shown: real estate; telecommunications; technology; pharmaceuticals and medical products; automotive and assembly; engineering, construction, and building materials; semiconductors; chemicals; consumer and packaged goods; advanced electronics; financial services; travel, logistics, and infrastructure; retail; agriculture; business, legal, and professional services; healthcare; public sector; and social sector.
Note: Level of familiarity is defined as those who have “extensive experience (use several tools for complex tasks)” and “experts.” High trust is “Level 4” and “Level 5” on a scale of 1 to 5. Perceived accuracy is based on past gen AI usage in a workplace setting.
Source: McKinsey US employee survey, Oct–Nov 2024 (n = 3,002); The economic potential of generative AI: The next productivity frontier, McKinsey, June 14, 2023
Exhibit 14
The employees most optimistic about gen AI do not represent the most economic value potential.
Potential economic value from gen AI ($ trillion) plotted against the share of employees with a positive outlook on gen AI (%),¹ by function. Functions shown include sales and marketing, software engineering, customer service, R&D, engineering, strategy, and IT; an inset indicates that sales and marketing alone accounts for 28 percent of the total potential economic value.
Note: For AI outlook data from survey, strategy function includes business development. Economic potential of functions was determined by grouping corporate use cases listed in the McKinsey report The economic potential of generative AI (eg, sales and marketing, legal and risk and compliance); others were aligned as follows: (customer service: customer operations), (procurement: procurement and management), (HR: talent and organization, including HR), (IT: corporate IT).
1. Optimism defined as share of Bloomer (believes in gen AI’s potential and supports iterative deployment) and Zoomer (believes in speedy gen AI deployment by technologists) respondents.
Source: The economic potential of generative AI: The next productivity frontier, McKinsey, June 14, 2023; McKinsey US employee survey, Oct–Nov 2024 (n = 3,002)
Despite this, company leaders are optimistic about the value they can capture in the coming years. A full 87
percent of executives expect revenue growth from gen AI within the next three years, and about half say it
could boost revenues by more than 5 percent in that time frame (Exhibit 16). That suggests quite a lot could
change for the better over the next few years.
Exhibit 15
Gen AI has not yet delivered significant return on investment for enterprises.
US C-suite’s perception of how gen AI has affected revenues and costs, % of respondents. Response categories include: decreased; no change; increased by 1–10%; increased by 11–19%; increased 20% or more; not tracking revenue or cost changes related to gen AI; and don’t know.
US C-suite’s perception of how gen AI will affect revenue over the next 3 years, % of respondents: decrease, 0; increase of 1–5%, 36; increase of 6–10%, 34; increase of more than 10%, 17; the remainder expect no change or are not tracking revenue related to gen AI. In total, 51 percent expect an increase of more than 5 percent.
To assess how far along companies are in this shift, we examined three categories of AI applications:
personal use, business use, and societal use (see sidebar “AI’s potential to enhance our personal lives”). We
mapped over 250 applications from our work and publicly shared examples to understand the spectrum of
impact levels, from localized use cases to transformations with more universal impact. Our conclusion?
Given that most companies are early in their AI journeys, most AI applications are localized use cases still in
the pilot stages (Exhibit 17).
Over the past two years, personal and business gen AI applications have often focused on localized impact.
Gen AI use cases by impact level, illustrative. Primary impact areas: personal, business, and societal.

Use cases (boost productivity by automating a specific task or job):
— Conducting “smarter searches” for everyday information
— Planning events, including personalized invites, tracking of guests, run of show
— Assessing potential candidates’ recruiting performance
— Accelerating contract generation
— Processing customer information faster

Domains (reshape multiple roles across an area of operations):
— Developing and executing data-based campaigns, including content creation
— Conducting synthetic customer research
— Real-time monitoring for supply chain management and inventory control
— Accelerating coding processes and improving efficiency

Transformations (fundamentally reshape industries, fields, and lives):
— Accelerating discovery and manufacturing in material science
— Predicting natural disasters, while supporting crisis management strategy
— Accelerating drug development by reducing cost and time
In many cases, that’s perfectly appropriate. But creating AI applications that can revolutionize industries and
create transformative value requires something more. Robotics in manufacturing, predictive AI in renewable
energy, drug development in life sciences, and personalized AI tutors in education—these are the kinds of
transformative efforts that can drive the greatest returns.17 These weren’t created from a reactive mindset.
They are the result of inspirational leadership, a unique concept of the future, and a commitment to
transformational impact. This is the kind of courage needed to develop AI applications that can
revolutionize industries.
17. Dario Amodei, “Machines of loving grace: How AI could transform the world for the better,” Dario Amodei website, October 2024.
Outside of the business context, individuals are increasingly using AI in their personal lives. In previous
research, we analyzed the potential impact of AI across 77 personal activities and across age, gender,
and working status in the United States. While individuals have limited desire to automate certain
personal activities, including leisure, sleeping, and fitness, the data shows significant opportunity for AI
combined with other technologies to help with chores or labor-intensive tasks. Already in 2024, our
research identified about an hour of such daily activities with the technical potential to be automated. By
2030, expansion of use cases and continued improvements in AI safety could increase automation
potential up to three hours per day. When people use AI-enabled tools—say, an autonomous vehicle for
transportation or an interactive personal finance bot—they can repurpose time for personal fulfillment
activities or being productive in other ways.
Using human-centric design and tapping into gen AI’s potential for “emotional intelligence” are unlocking
new personal AI applications that go beyond basic efficiencies. Individuals are beginning to use conversational and reasoning AI models for counseling, coaching, and creative expression. For example, people are using conversational AI for advice and emotional support or to bring their artistic visions to life with only verbal cues. Further, lending weight to the notion that AI superagency will advance society, AI has the potential to
become a democratizing force, making experiences that were previously expensive or exclusive—such
as animation generation, career coaching, or tax advice—available to much wider audiences.
To make their companies part of the minority that succeed, C-level executives must turn the mirror on
themselves. They need to embrace the vital role their leadership plays. C-suite leaders participating in our
survey are more than twice as likely to say employee readiness is a barrier to adoption as they are to blame
their own role. But as previously noted, employees indicate that they are quite ready.
This chapter looks at how leaders can take the reins, recognizing and owning the fact that the AI opportunity
requires more than technology implementation. It demands a strategic transformation. There is no denying
that companies face a set of AI headwinds. To tackle these challenges, leadership teams will need to commit
to rewiring their enterprises.
These AI-specific headwinds are formidable but addressable. Companies are pushing ahead. For example,
they might use dynamic cost planning or look at procuring NVIDIA clusters to secure the infrastructure they
expect to need.18 Chief HR officers (CHROs) are developing training programs to upskill their current
workforces and support some employees in job transitions. But lasting success will take more than that.
18. “AWS and NVIDIA announce strategic collaboration to offer new supercomputing infrastructure, software and services for generative AI,” NVIDIA, press release, November 28, 2023.
While these six elements are universally applicable, AI has introduced a few important wrinkles for leaders
to address:
— Adaptability. AI technology is advancing so rapidly that organizations must adopt new best practices
quickly to stay ahead of the competition. Best practices may come in the form of new technologies,
talent, business models, or products. For example, a modular approach helps future-proof tech stacks.
As natural language becomes a medium for integration, AI systems are becoming more compatible,
allowing businesses to swap, upgrade, and integrate models and tools with less friction. This modularity
allows enterprises to avoid vendor lock-in and put new AI advancements to use quickly without constantly reinventing their tech stacks (a minimal illustration of such a provider-agnostic layer follows this list).
— Federated governance models. A federated approach to managing data and models can give teams the autonomy to develop new AI tools while keeping risk under central control. Leaders can directly oversee high-risk or high-visibility issues, such
as setting policies and processes to monitor models and outputs for fairness, safety, and explainability.
But they can set direction and delegate other monitoring to business units, including measuring
performance-based criteria such as accuracy, speed, and scalability.
— AI benchmarks. These tools can serve as powerful means to quantitatively assess, compare, and
improve the performance of different AI models, algorithms, and systems. If technologists come together
to adopt standardized public benchmarks—and if more C-level executives start employing benchmarks,
including ethical ones—model transparency and accountability will improve and AI adoption will
increase, even among more skeptical employees.
— AI-specific skill gaps. Notably, 46 percent of leaders identify skill gaps in their workforces as a
significant barrier to AI adoption. Leaders will need to attract and hire top-level talent, including AI/ML
engineers, data scientists, and AI integration specialists. They will also need to commit to creating an
environment that is attractive to technologists. For example, this can mean providing them with plenty of
time to experiment, offering access to cutting-edge tools, creating opportunities to engage in open-
source communities, and promoting a collaborative engineering culture. Upskilling existing employees is
just as critical: Research from McKinsey’s People and Organizational Performance Practice underscores
the importance of tailoring training to specific roles, such as offering technical team members
bootcamps on library creation while offering prompt engineering classes to specific functional teams.19
— Human centricity. To guarantee both fairness and impartiality, it is important that business leaders
incorporate diverse perspectives early and often in the AI development process and maintain transparent
communication with their teams. As it stands, less than half of C-suite leaders (48 percent) say they would
involve nontechnical employees in the early development stages of AI tools, specifically ideation and
requirement gathering. Agile pods and human-centric development practices such as design-thinking and
reinforcement learning from human feedback (RLHF) will help leaders and developers create AI solutions
that all people want to use. In agile pods, technical team members sit alongside employees from business
functions such as HR, sales, and product, and from support functions such as legal and compliance.
Further, leaders can empathize with employees’ uneasiness about AI’s impacts on potential job losses by
being honest about new skill requirements and head count changes. Forums where employees can
provide input on AI applications, voice concerns, and share ideas are valuable for maintaining a
transparent, human-first culture.
19. People & Organization Blog, “Upskilling and reskilling priorities for the gen AI era,” blog entry by Sandra Durth, Kiera Jones, Lisa Christensen, and Naveed Rashid, McKinsey, September 30, 2024.
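As referenced in the adaptability point above, the sketch below illustrates one way to keep an AI stack modular: business workflows depend on a thin, provider-agnostic contract, and vendor-specific adapters can be swapped behind it. The class and vendor names are hypothetical, and the adapters return canned strings rather than calling any real SDK.

from typing import Protocol

class TextModel(Protocol):
    """Minimal contract that every model adapter must satisfy."""
    def generate(self, prompt: str) -> str: ...

class VendorAAdapter:
    """Hypothetical adapter wrapping one provider's API behind the shared contract."""
    def generate(self, prompt: str) -> str:
        return f"[vendor A response to: {prompt}]"  # a real call to that provider's SDK would go here

class VendorBAdapter:
    """Hypothetical adapter for a second provider; swapping it in requires no workflow changes."""
    def generate(self, prompt: str) -> str:
        return f"[vendor B response to: {prompt}]"

def summarize_contract(model: TextModel, contract_text: str) -> str:
    """Business workflow code depends only on the TextModel contract, not on any vendor."""
    return model.generate(f"Summarize the key obligations in this contract:\n{contract_text}")

# Upgrading or swapping models becomes a one-line configuration change, which limits vendor lock-in.
active_model: TextModel = VendorAAdapter()
print(summarize_contract(active_model, "…contract text…"))

In practice, each adapter would wrap a specific provider’s SDK and handle authentication, retries, and logging; the workflow code above would not need to change when a model is upgraded or replaced.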
‘… for tomorrow.’
–Albert Einstein, theoretical physicist

They might notice that their employees are already using AI and want to use it even more. They may find that millennial managers are powerful change champions ready to encourage their peers. Instead of focusing on the 92 million jobs expected to be displaced by 2030, leaders could plan for the projected 170 million new ones and the new skills those will require.20
This is the moment for leaders to set bold AI commitments and to meet employee needs with on-the-job
training and human-centric development. As leaders and employees work together to reimagine their
businesses from the bottom up, AI can evolve from a productivity enhancer into a transformative
superpower—an effective partner that increases human agency. Leaders who can replace fear of
uncertainty with imagination of possibility will discover new applications for AI, not only as a tool to optimize
existing workflows but also as a catalyst to solve bigger business and human challenges. Early stages of AI
experimentation focused on proving technical feasibility through narrow use cases, such as automating
routine tasks. Now the horizon has shifted: AI is poised to unlock unprecedented innovation and drive
systemic change that delivers real value.
To meet this more ambitious era, leaders and employees must ask themselves big questions. How should
leaders define their strategic priorities and steer their companies effectively amid disruption? How can
employees ensure they are ready for the AI transition coming to their workplaces? Questions like the
following ones will shape a company’s AI future:
20. Future of jobs report 2025, World Economic Forum, January 7, 2025.
— For leaders:
• Is your strategy ambitious enough? Do you want to transform your whole business? How can you
reimagine traditional cost centers as value-driven functions? How do you gain a competitive
advantage by investing in AI?
• What does successful AI adoption look like for your organization? What success indicators will you use
to evaluate whether your investments are yielding desired ROI?
• What skills define an AI-native workforce? How can you create opportunities for employees to develop
these skills on the job?
— For employees:
• What does achieving AI mastery mean for you? Does it extend to confidently using AI for personal
productivity tasks such as research, planning, and brainstorming?
• How do you plan to expand your understanding of AI? Which news sources, podcasts, and video
channels can you follow to remain informed about the rapid evolution of AI?
• How can you rethink your own work? Some of the most innovative ideas often emerge from within
teams, rather than being handed down from leadership. How would you redesign your work to drive
bottom-up innovation?
These questions have no easy answers, but a consensus is emerging on how to best address them. For
example, some companies deploy both bottom-up and top-down approaches to drive AI adoption. Bottom-
up actions help employees experiment with AI tools through initiatives such as hackathons and learning
sessions. Top-down techniques bring executives together to radically rethink how AI could improve major
processes such as fraud management, customer experience, and product testing.
These kinds of actions are critical as companies seek to move from AI pilots to AI maturity. Today only
1 percent of business leaders report that their companies have reached maturity. Over the next three years,
as investments in the technology grow, leaders must drive that percentage way up. They should make the
most of their employees’ readiness to increase the pace of AI implementation while ensuring trust, safety,
and transparency. The goal is simple: capture the enormous potential of gen AI to drive innovation and
create real business value.
Acknowledgments
The authors wish to thank Alex Panas, a senior partner in the Boston office; Eric Kutcher, a senior partner in
the Bay Area office; Kate Smaje, a senior partner in the London office; Noshir Kaka, a senior partner in the
Mumbai office; Robert Levin, a senior partner in the Boston office; and Rodney Zemmel, a senior partner in
the New York office, for their contributions to this report.
The authors were inspired by the impact delivered by our QuantumBlack, AI by McKinsey, colleagues, led by
Alex Singla and Alex Sukharevsky, and our gen AI lab leaders, especially Carlo Giovine and Stephen Xu.
The research was led by consultants Akshat Gokhale, Amita Mahajan, Begum Ortaoglu, Estee Chen, Hailey
Bobsein, Katharina Giebel, Mallika Jhamb, Noah Furlonge-Walker, and Sabrina Shin. This report was edited
by executive editors Kristi Essick and Rick Tetzeli.
Thank you to Reid Hoffman, along with his chief of staff Aria Finger and representatives at Authors Equity, the publisher of Superagency: What Could Possibly Go Right with Our AI Future, especially Deron Triff, for their ongoing collaboration. Working together with Hoffman, who brings the distinctive perspective of being both
an investor in and a mentor to the creators of AI, we looked at a central question: How can businesses win
with AI in the medium and long terms? We benefited from working sessions with Hoffman, CEOs, and AI
industry thought leaders.
We would like to thank the members of the Stanford Institute for Human-Centered Artificial Intelligence
(HAI) who challenged our thinking and provided valuable feedback.
This report contributes to McKinsey’s ongoing research on AI and aims to help business leaders understand
the forces transforming ways of working, identify strategic impact areas, and prepare for the next wave of
growth. As with all McKinsey research, this work is independent and has not been commissioned or
sponsored in any way by any business, government, or other institution. The report and views expressed
here are ours alone. We welcome your comments on this research at [email protected].
Learn more about our gen AI insights and sign up for our newsletter.
Methodology
All the survey findings discussed in the report, aside from two sidebars presenting international nuances,
pertain solely to US workplaces. The findings are organized in this way because the responses from US
employees and C-suite executives provide statistically significant conclusions about the US workplace.
Analyzing global findings separately allows a comparison of differences between US responses and those
from other regions.
Three-quarters of survey respondents in the United States work at organizations generating at least $100
million in annual revenue, and half work at companies with annual revenues exceeding $1 billion. All US
C-suite leader respondents work at organizations with annual revenues of at least $1 billion. Looking at
workforce size, 20 percent of US respondents work at companies with fewer than 10,000 employees, 49
percent work at companies with between 10,000 and 50,000, and 31 percent work at companies with more
than 50,000.
The analysis extended far beyond surveys. The researchers also conducted interviews with dozens of
C-level executives and industry experts to understand their perspectives on AI’s transformative potential
and the steps they are taking to lead their organizations through this transition. The report was further
enriched by discussions with experts at Stanford HAI, the Digital Economy Lab at HAI, and McKinsey’s
leading AI experts. Our survey and research primarily focus on gen AI; however, it is important to note that
participants in the survey may not have consistently differentiated between gen AI and other forms of AI.
Additionally, we developed a comprehensive database featuring more than 250 potential AI use cases,
building on the 63 gen AI use cases identified by McKinsey’s Digital Practice. This database also
incorporates proprietary McKinsey research on personal productivity as well as industry reports, along
with secondary research from the US government’s Federal AI Use Case Inventories, NASA, press articles,
and public interviews with technology leaders.
Glossary

Application programming interface (API): Intermediary software components that allow two applications to talk to each other; a structured way for AI systems to programmatically access (usually external) models, data sets, or other pieces of software.

Artificial Intelligence (AI): The ability of software to perform tasks that traditionally require human intelligence, mirroring some cognitive functions usually associated with human minds.

Deep learning: A subset of machine learning that uses deep neural networks, which are layers of connected “neurons” whose connections have parameters or weights that can be trained. Deep learning is especially effective at learning from unstructured data such as images, text, and audio.

Generative AI (gen AI): AI that is typically built using foundation models and has capabilities that earlier forms lacked, such as the ability to generate content. Foundation models can also be used for nongenerative purposes (for example, classifying user sentiment as negative or positive based on call transcripts).

Graphics processing units (GPUs): Computer chips originally developed for producing computer graphics, such as for video games, that are also useful for deep learning applications. In contrast, traditional machine learning usually runs on central processing units (CPUs), normally referred to as a computer’s “processor.”

Large language models (LLMs): A class of foundation models that can process massive amounts of unstructured text and learn the relationships between words or portions of words, known as tokens. This enables LLMs to generate natural-language text, performing tasks such as summarization or knowledge extraction. GPT-4 (which underlies ChatGPT) and the Llama family of models from Meta are examples of LLMs.

Modality: A high-level data category such as numbers, text, images, video, and audio.

Multimodal capabilities: The ability of an AI system to process and generate various types of data (text, images, audio, video) simultaneously, enabling complex tasks and rich outputs.

Productivity (from labor): The ratio of GDP to total hours worked in the economy. Labor productivity growth generally comes from increases in the amount of capital available to each worker, the education and experience of the workforce, and improvements in technology.

Reasoning AI: AI systems that perform logical thinking, step-by-step planning, problem solving, and decision making using structured or unstructured data, going beyond pattern recognition to draw conclusions and solve complex problems.

Superagency: A state where individuals, empowered by AI, amplify their creativity, productivity, and positive impact. Even those not directly engaging with AI can benefit from its broader effects on knowledge, efficiency, and innovation.

Unstructured data: Data that lack a consistent format or structure (for example, text, images, video, and audio files) and typically require more advanced techniques to extract insights.
January 2025
Copyright © McKinsey & Company
Designed by McKinsey Global Publishing
www.mckinsey.com