First Annual Generative AI Study
Business Rewards
vs. Security Risks
Contents
By the Numbers
Executive Summary
Survey Results
Conclusions
Introduction
Welcome to this report summarizing the First Annual
Generative AI Study: Business Rewards vs. Security Risks.
More than just survey results, this report offers expert analysis of
what organizations perceive to be the main security challenges and
business opportunities associated with the introduction of generative
AI. This report benchmarks what your competitors are doing so that
you can use these results to help enhance your own defenses and
identify the productivity opportunities that GenAI presents.
Tony Morbin
Executive News Editor, EU
Information Security Media Group [email protected]
cloud.google.com exabeam.com
clearwatersecurity.com onetrust.com
microsoft.com/en-us/security
By the Numbers
Statistics that jump out from the First Annual Generative AI Study:
Business Rewards vs. Security Risks:
[Chart: headline figures from the survey – 62%, 48%, 15%, 13%]
More than half of all respondents who say they are actually deploying AI report more than 10% productivity gains, and some report substantially more. At the lower end of productivity gain, twice as many cybersecurity professionals as business leaders report gains of 5% or less.

Throughout the survey, more cybersecurity professionals than business leaders give the answer “Don’t know,” which is unsurprising, since business leaders would be more expected to know their organization’s plans.
Fifteen percent of all respondents say they currently implement generative AI and it is in production, while 28% say it is in the pilot phase, so roughly 42% have some current use. Twenty-seven percent say they plan to implement it, while another 27% neither use it nor plan to do so – a figure potentially pushed up to 30% if we add in the 3% who say they don’t know.

The business leaders are between 5% and 10% ahead of cybersecurity professionals when it comes to reporting implementation of AI, until it comes to those with no plans. There, cybersecurity professionals are at 34%, compared to 19% for business leaders.
4. Who in your organization is responsible for securing generative AI productivity solutions (job title)?

For cybersecurity leaders, the CIO and CTO have roughly equal representation, with even more answering “don’t know,” “undecided” or “We’re figuring it out in the pilot.”
[Chart: tasks where AI is applied – simulation/testing of apps/processes; find/fix vulnerabilities; infrastructure management/server management; network management; non-automated processes, e.g., signal processing]

[Chart: reported productivity gains, in bands of 0-5%, 6%-10%, 11%-20%, 21%-30%, 31%-40%, 41%-50%, and 51% or more]
The results are impressive. Fifty-one percent of all respondents report more than 10% productivity gains. The most frequently reported figure is an 11% to 20% productivity gain, reported by 27% of respondents. More than twice as many business leaders (8%) as cybersecurity leaders (3%) reported productivity gains of 51% or more.

Among the 20% of respondents who report gains of 5% or less, cybersecurity professionals are over-represented at 27%, compared to 14% of business leaders. This could be due to the increased likelihood of non-deployment of generative AI due to operational restrictions, plus fewer use cases, since administration, sales and marketing – significant leaders in AI deployment – more often fall into the business leader category.

Where AI is implemented, productivity gains are significant, but business leaders report higher gains across a wider range of tasks.
[Chart: generative AI use by function, currently in use vs. intended to use, for CIO & business leaders and CISO & security leaders – legal/regulatory compliance; software development; fraud; sales; marketing; AML; customer service; cybersecurity, from threat detection to incident response; document automation/customer or patient data processing; enterprise knowledge management; foundation technology/infrastructure; production; medical/pharmaceutical research]
Thirteen percent of both business leaders and cybersecurity professionals say yes, but 14% of cybersecurity professionals say they don’t know, compared to 8% of business leaders. An average of 76% of the respondents say no.
Sixty percent of business leaders and 49% of cybersecurity professionals say yes. Over the next year, this would represent a quadrupling of organizations that have a specific AI budget.
[Chart: anticipated AI budget change – decrease; no change; increase of 1%-5%; increase of 6%-10%; increase of 11%-20%]
Twenty percent of business leaders and 35% of cybersecurity professionals say there will be no change in their budget. None of the business leaders foresee a reduction in their budget for AI, but 3% of cybersecurity professionals do.
Thirty-eight percent of business leaders and 24% of cybersecurity professionals say yes. The number
who don’t know is also high, at 20% for business leaders and 38% for cybersecurity leaders.
14. If “yes,” please list up to the top five desired use cases generative AI will address.

Responses from business leaders include security detection and prevention, marketing content creation, marketing automation, sales decision support, sentiment and behavioral analysis, back office productivity, and media post-production (speech to text, tagging, and image generation).

Responses from cybersecurity professionals include asset management and patching, vulnerability management, legal and regulatory compliance, SOC operations, effective business continuity management, risk management, incident management, coding, marketing and other communications, report writing, research, diagnosis and treatment of medical conditions, speed for code writing, and newsletters and blog publishing. Cybersecurity professionals also list use cases for chatbots for customer support, language translation and localization, more accurate and context-aware language translation, art and design, and software development.

Although cybersecurity leaders were less likely to have specific purchase plans than business leaders, the 24% who did (see Chart 13) had a wider range of specific planned purchases than business leaders.
[Chart: concerns about generative AI – including AI bias/ethical concerns; lack of transparency of data sources used/chosen; existential threats]
Although there are differences between the two groups regarding concerns about particular threats,
the top concern for both groups is leakage of sensitive data by staff using AI, cited by 80% of
business leaders and 82% of cybersecurity professionals.
Second for both groups is ingress of inaccurate data (hallucinations), cited by 71% of business leaders and 67% of cybersecurity professionals. In third place for both groups is AI bias/ethical concerns, cited by 61% of business leaders and 57% of cybersecurity professionals.
[Chart: security controls currently in use vs. intended – encryption of data; pseudoanonymization of data; blocking software to prevent export of specified data types; blocking software to prevent ingress of specified data/software categories; whitelisting of specified generative AI; blacklisting of specified generative AI; banning use of all generative AI; staff education and training around secure use of AI; managed security service provider offerings; banning certain personae/departments from using generative AI; only allowing specified personae/departments to use generative AI]
Fifty-eight percent of business leaders and 48% of cybersecurity professionals currently use AI for pseudoanonymization of data.

It is significant that 38% of business leaders and 48% of cybersecurity leaders intend to continue banning the use of generative AI in the workplace, and that 73% of business leaders and 78% of cybersecurity professionals intend to take a walled garden/own AI approach going forward. Both suggest a return …

In comments, one business leader says: “We have a policy on the use of generative AI in place,” and one cybersecurity leader says: “Currently - no controls in place or planned until after something bad happens to peers.” Another says, “While currently banned, GenAI will be governed by policy requiring human intervention/review of any generated work product.”
Thirty percent of business leaders and 31% of cybersecurity professionals say that they do have
playbooks for AI deployment.
Thirty-five percent of business leaders and 31% of cybersecurity leaders say their competitors use
generative AI. An exceptionally large number of respondents - 56% - say they do not know.
A worryingly low 38% of business leaders say they do understand these regulations, as do 52% of
cybersecurity leaders. Yet, given the pace of change and the lack of global standard regulations, this
is perhaps not surprising.
MCNALLY: At the companies that I was working with that banned ChatGPT, other tools such as Aha were using it, so people were using that. A ban just pushes the problem down to somewhere else. The guidelines for organizations ultimately will come down to: Do you have the mechanisms in place to know what the risks of using AI are, and do you have the people and processes in place to address it? Some data scientists are …

POVOLNY: The concerns about employee data and company data making it into models, about the way that attacks are being deployed and used, the strengthening of common and legacy types of attacks, such as social engineering and phishing, are obviously being dramatically improved through some of these tools. All of those hold true and are some of the risks that we see inherently. This is an explosion of technology much in the same way as the development of the iPhone, or maybe the personal computer. There is going to be a red-hot period where the world innovates and decides how they’re going to use and explore and push the boundaries of AI. Even though it’s 70 years old as a concept, it has a rebirth now. It’s no-holds-barred …

CHUVAKIN: In the survey, sensitive data leakage is number one, ingress of inaccurate data (hallucinations) is number two, and then the third bucket is broad bias/ethical concerns. And that makes sense. The only slight change is the ingress of copyrighted IP. For some reason, they’re talking about ingress, not egress. IP in certain copyright being produced is not coming up in the surveys. Google just announced indemnification for the enterprise AI models. It comes up a lot, and it doesn’t come up at all in the survey. It’s not about your IP showing up in the AI; it’s about whose IP is the stuff that the AI produced. If somebody points at it and says, “Hey, I recognize this code. I wrote it,” then suddenly problems happen.
Contact
(800) 944-0401 • [email protected] • [email protected]