
FIRST ANNUAL GENERATIVE AI STUDY:

Business Rewards
vs. Security Risks



Table of Contents
Introduction
By the Numbers
Executive Summary
Survey Results
Conclusions
Introduction
Welcome to this report summarizing the First Annual
Generative AI Study: Business Rewards vs. Security Risks.

This survey of over 400 business and cybersecurity professionals,
conducted in Q3 2023, comprises responses from two cohorts:
business leaders – CIOs, board members, executives and other
business leaders – and CISOs and other cybersecurity professionals.
Both groups represent a wide range of vertical sectors from around
the world, with the largest group coming from North America.

In the survey, we look at the differences in perspective between
business leaders and cybersecurity professionals when it comes
to their current and intended use cases for generative AI. Where
generative AI is deployed, we look at measuring productivity gains,
and where it is not currently used, we look at the anticipated gains
and intended deployment. This includes current and intended
allocation of expenditure and its projected growth, as well as areas
for investment going forward.

We also compare prioritization of concerns: what the concerns are
for each group, where they align and where they differ. Then we
consider what mitigation strategies are being used or could be
deployed to address these concerns.

Also, the survey seeks to get a snapshot of current understanding
of generative AI, including the range of generative AI tools being
explored/trialed, as well as respondents' understanding of current
regulation.

More than just survey results, this report offers expert analysis of
what organizations perceive to be the main security challenges and
business opportunities associated with the introduction of generative
AI. This report benchmarks what your competitors are doing so that
you can use these results to help enhance your own defenses and
identify the productivity opportunities that GenAI presents.

Tony Morbin
Executive News Editor, EU
Information Security Media Group
[email protected]

Morbin is a veteran cybersecurity and tech journalist, editor,
publisher and presenter working exclusively in cybersecurity for
the past decade – at ISMG, SC Magazine and IT Sec Guru. He
previously covered computing, finance, risk, electronic payments,
telecoms and broadband, including at the Financial Times. Morbin
spent seven years as an editor in the Middle East and worked on
ventures covering Hong Kong and Ukraine.



About the Sponsors

cloud.google.com exabeam.com

clearwatersecurity.com onetrust.com

microsoft.com/en-us/security

By the Numbers
Statistics that jump out from the First Annual Generative AI Study:
Business Rewards vs. Security Risks:

15% currently implement GenAI.

13% have a specific budget for GenAI solutions.

62% of business leaders and 48% of cybersecurity leaders do not
understand AI regulations that apply to their sector.



Executive Summary
When comparing the responses of business leaders and
cybersecurity professionals in relation to their views on
implementation of generative AI, this report finds that business
leaders – while aware of the risks – are generally more enthusiastic
about adopting generative AI than their cybersecurity compatriots.
They are more likely to report using or trialing GenAI, and they are
doing so via a wider variety of AI iterations. They are also less likely
to say that GenAI has no place in their operation.

In contrast, cybersecurity professionals – while aware of the
productivity opportunities for deployment in their own sector – have
a higher level of concern about the risks entailed and how they
might be mitigated.

Among all respondents, there is roughly a 70/30 split between those
keen to adopt AI and those currently rejecting its use or who are in
organizations/roles where its use is not allowed. Outright bans on
use of generative AI are reported more frequently among
cybersecurity professionals than business leaders, but it is not an
uncommon response to tackling the risk.

More than half of all respondents who say they are actually
deploying AI report more than 10% productivity gains, and some
report substantially more. At the lower end of productivity gain, twice
as many cybersecurity professionals - 27% - report gains of less
than 5%, compared to business leaders at 14%.

For both business leaders and cybersecurity professionals, 13%
report having a specific budget for generative AI, so it is clearly still
at an early stage in enterprise rollout and budget cycles.

The top concerns about use of AI are leakage of sensitive data by
staff using AI, cited by 80% of business leaders and 82% of
cybersecurity professionals. Second for both groups is ingress of
inaccurate data - hallucinations, which is cited by 71% of business
leaders and 67% of cybersecurity professionals.

Particularly significant is that 38% of business leaders and 48% of
cybersecurity leaders expect to continue banning all use of
generative AI in the workplace. Also, 73% of business leaders and
78% of cybersecurity professionals say they intend to take a walled
garden/own AI approach going forward.

Regarding understanding of AI regulations, a worryingly low 38% of
business leaders say they do understand these regulations, as do
52% of cybersecurity leaders. Yet these figures should not be
surprising given the rate of change and lack of universally accepted
standards and regulations.

Throughout the survey, more cybersecurity professionals than
business leaders give the answer “Don’t know,” which is
unsurprising since business leaders would be more expected to
know their organization’s plans.



Survey Results
1. Does your company currently use generative AI?

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – Yes, implemented and in production; Yes, in pilot phase only; No, but we have plans to do so; No; Don't know]

Fifteen percent of all respondents say they currently implement generative AI and it is in production,
while 28% say it is in the pilot phase. So, 42% have some current use.

Twenty-seven percent say they plan to implement it, while another
27% neither use it nor plan to do so – a figure potentially pushed up
to 30% if we add in the 3% who say they don't know.

The business leaders are between 5% and 10% ahead of
cybersecurity professionals when it comes to reporting
implementation of AI until it comes to those with no plans. There,
cybersecurity professionals are at 34% compared to 19% for
business leaders.

The FUD – the fear, uncertainty and doubt – surrounding generative
AI shouldn't be a reason to holistically ban it. We should control and
educate and enforce the usage of it effectively.
- Steve Povolny



2. Does your organization allow staff to use generative AI for
work purposes on their own initiative?

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – Yes; No; Don't know]

Sixty-three percent of business leaders reported that it is allowed,
compared to 47% of cybersecurity professionals.

3. Who in your organization is responsible for deploying
generative AI productivity solutions (job title)?

The most frequent answer from business leaders is CIO. CTO and
CEO are also mentioned. Other titles mentioned included IT, COO
and various heads of projects/products – plus the poignant "nobody"
and the more enigmatic "Still a bit of a mystery."

Most cybersecurity professionals answer CTO, with CIO not far
behind, followed by CEO and CISO. Other responses include "not
allowed," "not decided," "no one" and "don't know."

4. Who in your organization is responsible for securing
generative AI productivity solutions (job title)?

The leading title mentioned by business leaders was CISO/CSO,
followed by CIO, CTO and CEO or president at 6%.

Cybersecurity professionals answer CTO, with CIO not far behind,
followed by CEO. IT gets several mentions and CISO also comes
up.

5. Who in your organization will be responsible for ongoing
management of generative AI productivity solutions (job title)?

Among business leaders, the answers are led by CIO, followed by
CTO. There are just a few CEOs and IT departments mentioned,
and a lot more say "don't know" or "undecided."

For cybersecurity leaders, the CIO and CTO have roughly equal
representation, with even more answering "don't know," "undecided"
or "We're figuring it out in the pilot."



6. Which of the generative AI tools/platforms do you use, or are
aware of? (Please list)

Both groups say Chat GPT/GPT4, followed by Google Bard and
Bing. Midjourney was also often mentioned by business leaders. It
appears that they are experimenting a lot with new GenAI entrants
as each offering scrambles to establish itself as a niche leader,
looking to see how they can grasp the productivity gains that might
be delivered.

Cybersecurity leaders also most frequently cite Chat GPT, Google
Bard and Bing, and other providers are rarely mentioned. A likely
explanation is that generative AI is not yet robust enough for many
critical cybersecurity applications, and the operational nature of
cybersecurity tasks demands more proven and tested solutions.

Also, many respondents say that no generative AI is currently used,
as generative AI tools are not approved for use or not allowed.

7. What are the main productivity gains you get/envision your
organization getting from use of generative AI? (Check all that
apply - there will be some overlap/duplication)

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – Automate repetitive tasks; Increase speed of production/service/results analysis; Perform routine and administrative tasks; Help write code/app development process; Write policies/courses, e.g., for security awareness/training/education; Reduce staffing requirement; Reduce non-staff costs/budget; Simulation/testing of apps/processes; Find/fix vulnerabilities; Infrastructure management/server management; Network management; Strengthen our own defenses, including choose better passwords; Non-automated processes, e.g., signal processing; Other (please specify)]



Business leaders show greater support for all options than
cybersecurity professionals, except when the task is explicitly part of
a cybersecurity professional's workload.

In both groups, the most chosen option is to automate repetitive
tasks, cited by 67% of business leaders and 58% of cybersecurity
leaders – thus 62% for all respondents. This is followed by
increasing the speed of production/service/results analysis at 65%
and 52%, respectively, and 59% overall. Performing routine and
administrative tasks comes in third at 58% and 45%, respectively,
and 52% overall.

Forty-one percent of security professionals choose "Write
policies/courses, e.g., for security awareness/training/education,"
compared to 38% of business leaders. And one cybersecurity
professional correctly commented, "prefer you say 'draft policies' vs.
'write policies' and in general switch to the concept that it's assistive
but a human is still responsible."

It may seem surprising, but reducing staffing requirement is not
mentioned until sixth, and then only by 24% of respondents.

Reducing non-staff costs/budget, at 18% total, is more of a concern
for business leaders, at 23%, compared to cybersecurity
professionals at 13%. Conversely, strengthening our own defenses,
including choosing better passwords, is more of a concern for
cybersecurity professionals, at 20%, compared to just 10% of
business leaders.

Comments indicate that there are AI skeptics in both groups. One
business leader says, "I don't trust generative AI to produce
anything without human supervision yet," and a cybersecurity
professional describes generative AI as more of a risk than a
benefit.

While organizations may have business leaders that want to
embrace the use of AI, they do not yet have in place the right
governance, the right stakeholders identified or the right
understanding of what it takes to address the impacts of AI.
- David Bailey



8. If you currently use AI systems, what productivity gains do
you estimate you achieve compared to the systems they
replace?

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – 0-5%; 6%-10%; 11%-20%; 21%-30%; 31%-40%; 41%-50%; 51% or more]

The results are impressive. Fifty-one percent of all respondents
report more than 10% productivity gains. The most frequently
reported figure is 11% to 20% productivity gain, reported by 27% of
respondents. More than twice as many business leaders - 8% -
than cybersecurity leaders - 3% - reported productivity gains of
more than 51%.

Among the 20% of respondents who report gains of 5% or less,
cybersecurity professionals are over-represented at 27%, compared
to 14% of business leaders. This could be due to the increased
likelihood of non-deployment of generative AI due to operational
restrictions, plus fewer use cases, since administration, sales and
marketing – significant leaders in AI deployment – more often fall
into the business leader category.

Where AI is implemented, productivity gains are significant, but
business leaders report higher gains across a wider range of tasks.



9. For what use cases/environments do you use/envision your
organization using generative AI? (Select what applies to your
organization)

[Chart: CIO & Business Leaders vs. CISO & Security Leaders, each option showing Currently in use vs. Intended to use – Sales; Marketing; Customer Service; Legal/Regulatory Compliance; Enterprise knowledge management; Software development; Fraud; AML; Cybersecurity from threat detection to incident response; Document automation/Customer or patient data processing; Foundation technology/infrastructure; Production; Medical diagnosis & treatment; Medical results analysis; Medical/pharmaceutical research]

There is a complete divergence in the answers from both groups,
which is entirely understandable given the wide-ranging remit of
business leaders compared to the more focused remit of
cybersecurity leaders.

Among business leaders, the leading envisioned future use case is
preventing fraud, at 84%. Legal/regulatory compliance is ranked
second at 80%, and medical diagnosis and treatment and medical
results analysis are tied for third place at 75%.

Among cybersecurity professionals, the top envisioned future use
cases are a tie in first place for medical results analysis, e.g.,
imaging, and medical/pharmaceutical research, both at 90%.
Second is medical diagnosis and treatment at 85%, and third is
legal/regulatory compliance at 79%.

Traditional security teams don't know what to do about [securing AI].
That is an exciting challenge. The expansion of the mandate is what
freaks a lot of people out - not that they have to deal with
adversarial prompts.
- Anton Chuvakin
10. Do you have a specific budget for generative AI solutions?

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – Yes; No; Don't know]

Thirteen percent of both business leaders and cybersecurity
professionals say yes, but 14% of cybersecurity professionals say
they don't know, compared to 8% of business leaders. An average
of 76% of the respondents say no.

11. If "no," do you expect to have one within 12 months?

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – Yes; No]

Sixty percent of business leaders and 49% of cybersecurity
professionals say yes. Over the next year, this would represent a
quadrupling of organizations that have a specific AI budget.
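The quadrupling claim can be sanity-checked with simple arithmetic. A minimal sketch, assuming the two cohorts are roughly equal in size (an assumption; the report does not publish exact respondent counts per cohort):

```python
# Rough check of the "quadrupling" claim, assuming the two cohorts
# are about the same size (an assumption; exact counts are not given).
have_budget_now = 0.13                   # Q10: 13% of both groups say yes
say_no_now = 0.76                        # Q10: average "no" response
expect_within_12m = (0.60 + 0.49) / 2    # Q11: "yes" rates, averaged

# Projected share with a budget = current holders + converts from the "no" group.
projected = have_budget_now + say_no_now * expect_within_12m
growth = projected / have_budget_now
print(f"projected: {projected:.0%}, growth: {growth:.1f}x")  # projected: 54%, growth: 4.2x
```

Current 13% plus just over half of the 76% without a budget lands in the mid-50s, roughly four times today's figure, consistent with "quadrupling."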



In terms of risk level, not all AI is of the same risk.
If you have an employee using ChatGPT or Bard
internally to help them draft an email, that’s very
different than an AI system that’s predicting loans
or being used in healthcare.
- Laurence McNally

12. If "yes," what % increase in budget for generative AI
solutions do you expect in 12 months' time?

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – Increase of more than 20%; Increase of 11%-20%; Increase of 6%-10%; Increase of 1%-5%; No change; Decrease]

Twenty percent of business leaders and 35% of cybersecurity
professionals say there will be no change in their budget. None of
the business leaders foresee a reduction in their budget for AI, but
3% of cybersecurity professionals do.



13. Do you have specific plans to purchase AI-driven solutions
over the next 12 months for any of the use case options
mentioned earlier?

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – Yes; No; Don't know]

Thirty-eight percent of business leaders and 24% of cybersecurity professionals say yes. The number
who don’t know is also high, at 20% for business leaders and 38% for cybersecurity leaders.

14. If "yes," please list up to top 5 desired use cases generative
AI will address.

Responses from business leaders include security detection and
prevention, marketing content creation, marketing automation, sales
decision support, sentiment and behavioral analysis, back office
productivity, and media post-production - speech to text, tagging,
and image generation.

Responses from cybersecurity professionals include asset
management and patching, vulnerability management, legal and
regulatory compliance, SOC operations, effective business
continuity management, risk management, incident management,
coding, marketing and other communications, report writing,
research, diagnosis and treatment of medical conditions, speed for
code writing, newsletters and blog publishing.

Cybersecurity professionals also list use cases for chatbots for
customer support, language translation and localization, more
accurate and context-aware language translation, art and design,
and software development.

Although cybersecurity leaders were less likely to have specific
purchase plans than business leaders, the 24% who did - see Chart
13 - had a wider range of specific planned purchases than business
leaders.
management, incident management, coding,



15. What are your main concerns when it comes to
implementing generative AI by yourself and/or by others?
(Please select your top 6)

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – Leakage of sensitive data by staff using AI; Ingress of inaccurate data - hallucinations; AI bias/ethical concerns; Lack of transparency of data sources used/chosen; Lack of understanding of the algorithm's decision-making process; Potential compromise of compliance with regulations, standards, contracts - including PI leakage; AI use by malicious actors, from vulnerability search to improved phishing lures/deepfakes and automated attacks; Ingress of malicious data/malware, where AI learning has been poisoned or is created by malicious actors; Ingress of copyrighted IP poisoning new build software; Loss of skills/understanding of underlying processes by staff - inability to revert to manual; Technical requirements, e.g., processing power; Existential threats; Other (please specify)]

Although there are differences between the two groups regarding concerns about particular threats,
the top concern for both groups is leakage of sensitive data by staff using AI, cited by 80% of
business leaders and 82% of cybersecurity professionals.

Second for both groups is ingress of inaccurate data -
hallucinations, cited by 71% of business leaders and 67% of
cybersecurity professionals. In third place for both groups is AI
bias/ethical concerns, cited by 61% of business leaders and 57% of
cybersecurity professionals.



16. What do you view as the biggest risk within code
repositories in cybersecurity when it comes to generative AI
use?

Business leaders are most concerned about loss of code. They also
mention unintended consequences and privacy concerns; the
embedding of malicious or dysfunctional code; misuse by bad
actors, e.g., deepfakes or misleading information; loss of
confidentiality; ethics/bias; ransomware; and phishing.

Cybersecurity professionals are most concerned about visibility of
where code comes from, i.e., is it proprietary, open source, poisoned
or malicious? They also share many of the concerns of business
leaders, including introduction of malware or copyrighted source
code, skills loss, information getting into the wrong hands, and code
getting corrupted.

Comments on this question include: Employees need enough skills
in the area to know when the AI is hallucinating or returning bad
code. There is also concern about staff not understanding the code
but using it because it works; leaks of data; ransomware created by
generative AI; deepfakes; copyright issues; access; leakage of
sensitive and proprietary data; difficulty in auditing the actions of
individuals vs. AI; and incorrectly configuring/implementing AI
products.

Then comes the deliberate misuse of AI products; poisoned
concepts or poisoned inference of decision vectors; algorithms that
don't actually work but appear to; and accidental usage of
open-source code in proprietary code creation.

One respondent says certificate management, which is already very
hard to do well, will become critically essential to maintain
confidence. Most organizations are not ready to do certificate
management even poorly, let alone at the level required to provide
assurance in data and systems.

Another comment: "We're starting from the business end with AI
and haven't yet considered generative AI's access to code
repositories ... that'll come late next year at the earliest."

This is an explosion of technology much in the same way as the
development of the iPhone, or maybe the personal computer. There
is going to be a red-hot period where the world innovates and
decides how they're going to use and explore and push the
boundaries of AI.
- Steve Povolny



17. What tools, processes or approaches do you currently use and
intend to use to mitigate the concerns around use of AI by your
own organization or your supply chain or partners? (Select what applies
to your organization)

[Chart: CIO & Business Leaders vs. CISO & Security Leaders, each option showing Currently use vs. Intend to use – Encryption of data; Pseudonymization of data; Walled garden - own AI; Blocking software to prevent export of specified data types; Blocking software to prevent ingress of specified data/software categories; Whitelisting of specified generative AI; Blacklisting of specified generative AI; Banning use of all generative AI; Staff education and training around secure use of AI; AI-driven automated software from third party; Managed Security Service Provider offerings; Ban certain personae/departments from using generative AI; Only allow specified personae/departments to use generative AI]



Seventy-three percent of business leaders and 69% of
cybersecurity professionals currently use encryption of data.

Fifty-eight percent of business leaders and 48% of cybersecurity
professionals currently use pseudonymization of data.

It is significant that 38% of business leaders and 48% of
cybersecurity leaders intend to continue banning the use of
generative AI in the workplace and that 73% of business leaders
and 78% of cybersecurity professionals intend to take a walled
garden/own AI approach going forward. Both suggest a return to the
wall and moat of the past as businesses strive to regain control of
the AI genie that has been let loose from its bottle.

In comments, one business leader says: "We have a policy on the
use of generative AI in place," and one cybersecurity leader says:
"Currently - no controls in place or planned until after something
bad happens to peers." Another says, "While currently banned,
GenAI will be governed by policy requiring human
intervention/review of any generated work product."
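To make the "blocking software to prevent export of specified data types" control concrete, here is a minimal, hypothetical sketch of an outbound-prompt filter. The pattern names and regexes are illustrative assumptions, not any specific DLP product's rules; real products use far richer detection than simple pattern matching.

```python
import re

# Hypothetical DLP-style control: scan an outbound GenAI prompt for
# sensitive data types before it leaves the organization. The patterns
# below are deliberately simple examples for illustration only.
BLOCKED_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of blocked data types detected in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

hits = check_prompt("Please summarize: customer SSN 123-45-6789")
print(hits)  # ['us_ssn'] - this prompt would be blocked before export
```

A gateway sitting between staff and external GenAI services could refuse (or redact) any prompt for which `check_prompt` returns a non-empty list, which is one way the 80%-plus "leakage of sensitive data" concern can be addressed without an outright ban.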

18. Is there a process/playbook/guidelines/policy in place to
ensure that all generative AI usage/deployment in your
organization complies with agreed security policies?

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – Yes; No; Don't know]

Thirty percent of business leaders and 31% of cybersecurity professionals say that they do have
playbooks for AI deployment.



19. Do your competitors currently use generative AI?

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – Yes; No; Don't know]

Thirty-five percent of business leaders and 31% of cybersecurity leaders say their competitors use
generative AI. An exceptionally large number of respondents - 56% - say they do not know.

20. Do you know and understand what regulatory
restrictions/guidance applies to your use of generative AI in
your geography/industry vertical?

[Chart: CIO & Business Leaders vs. CISO & Security Leaders – Yes; No]

A worryingly low 38% of business leaders say they do understand these regulations, as do 52% of
cybersecurity leaders. Yet, given the pace of change and the lack of global standard regulations, this
is perhaps not surprising.



Standout Survey Results
TONY MORBIN: What particularly stood out for you in the results,
and what's your take on that?

ANTON CHUVAKIN: I saw some adoption anomalies as I was
reading the report, but the report made sense in most cases. The
contradictions between security and business leaders made sense,
but some of the adoption numbers or perceived adoption numbers
looked really high. These are maybe slightly biased.

STEVE POVOLNY: The results of the report are pretty on point with
what we see in industry. Some of the largest standout surprises
were the discrepancies between business leaders and cybersecurity
professionals.

DAVID BAILEY: I was pleased to see the different respondents from
the types of leaders within the organization. It's good to want
business leaders to be able to utilize technology to be successful,
and AI is going to help that, which is great. The downside of that is
the concern and apprehension from security professionals as well
as those that need to manage risk within the organization. But I'm
glad that some of that apprehension is there, because there are a
lot of unknowns yet to be decided on how organizations have to
manage their risk.

LAURENCE MCNALLY: The survey results correlate to what I'm
seeing as I talk to businesses on using AI. Business leaders are
more bullish as opposed to our cybersecurity folks, who are
definitely more skeptical and thinking about the trustworthiness and
side effects of the AI. Another thing that stood out to me was the
number of people that said they understood the regulations.

Anton Chuvakin
Security Adviser at Office of the CISO, Google Cloud

Dr. Anton Chuvakin works for the Office of the CISO of Google
Cloud, where he arrived via the Chronicle Security (an Alphabet
company) acquisition in July 2019. Anton was, until recently, a
Research Vice President and Distinguished Analyst on the Gartner
for Technical Professionals (GTP) Security and Risk Management
Strategies team.

Anton is a recognized security expert in the field of log
management, SIEM and PCI DSS compliance. He is an author of
the books "Security Warrior," "Logging and Log Management: The
Authoritative Guide to Understanding the Concepts Surrounding
Logging and Log Management" and "PCI Compliance, Third Edition:
Understand and Implement Effective PCI Data Security Standard
Compliance," and a contributor to "Know Your Enemy II,"
"Information Security Management Handbook" and other books.

Anton has published dozens of papers on log management, SIEM,
correlation, security data analysis, PCI DSS and security
management. His blog "Security Warrior" was one of the most
popular in the industry.

In addition, Anton has presented at many security conferences
across the world; he has addressed audiences in the United States,
UK, Australia, Singapore, Spain, Russia and other countries. He
works on emerging security standards and serves on the advisory
boards of several security start-ups.

Before that, Anton ran his own security consulting practice, focusing
on logging, SIEM and PCI DSS compliance for security vendors and
Fortune 500 organizations. Dr. Anton Chuvakin was formerly a
Director of PCI Compliance Solutions at Qualys. Previously, Anton
worked at LogLogic as a Chief Logging Evangelist, tasked with
educating the world about the importance of logging for security,
compliance and operations. Before LogLogic, Anton was employed
by a security vendor in a strategic product management role. Anton
earned his Ph.D. degree from Stony Brook University.

Why Banning AI Usage Won't Work

MORBIN: Quite a few respondents, particularly the cybersecurity
professionals, say that they were banned from using AI in their
organization. Is banning the use of generative AI for employees or
the business an effective way to mitigate threats?

22 FIRST ANNUAL GENERATIVE AI STUDY


POVOLNY: We know how things work when you ban holistically or make a broad-strokes approach like a ban: People find ways to work around it. This is one of the most commonplace and polarizing issues around generative AI – how to use it appropriately. Do we get aggressive with something like a ban? This is a personal decision and a business decision, and it's hard to be too judgmental of either of those. It can be an effective way to mitigate risk holistically.

But on the flip side, employees will actively find ways to work around it, which can be more damaging than just training them effectively on how to use it, or limiting, controlling or having some oversight on the approach to usage. The FUD – the fear, uncertainty and doubt – surrounding generative AI shouldn't be a reason to holistically ban it. We should control and educate and enforce the usage of it effectively.

CHUVAKIN: Bans ultimately cause usage to increase – sometimes in all sorts of insecure ways. I'm against banning because ultimately, banning often produces the opposite effect.

MCNALLY: At the companies that I was working with that banned ChatGPT, other tools such as Aha were using it, so people were using that. A ban just pushes the problem down to somewhere else.

Guidelines for AI Usage

MORBIN: Is there a lack of guardrails for the use of AI because people don't know what the best options are or because they don't have the skills to implement them? Or is the issue of security just not high enough up the priority list compared to getting the benefits of being an early adopter?

BAILEY: One of the foundations of a really strong security program is to ensure that you've got good governance, guidelines and standards. Security is not just an IT problem or a security problem; it's a business problem. While organizations may have business leaders that want to embrace the use of AI, they do not yet have in place the right governance, the right stakeholders identified or the right understanding of what it takes to address the impacts of AI – the trustworthiness and the risks associated with it – and then implement that throughout an entire system or software development life cycle. A lot of the organizations we deal with are struggling to just get the maturity that is required for today, let alone using AI.

The guidelines for organizations ultimately will come down to: Do you have the mechanisms in place to know what the risks of using AI are, and do you have the people and processes in place to address it? Some data scientists are
excited about using AI for outcomes and look at AI as an enabler of their process, but some security professionals look at it as a disruptor. They know AI is not going anywhere and that they are going to have to embrace it, but they are concerned about all of the things that are required to do it in a way that is reasonable and appropriate from a security standpoint and a risk standpoint. We're going to have to develop some new processes in order to make sure we're doing that effectively.

Securing AI

MORBIN: What exactly can we do to mitigate the risk of generative AI being used for malicious purposes?

CHUVAKIN: This is my main goal. We recently published a paper called "Securing AI: Similar or Different?" which answers some of these questions. Let me give you a broad framework. First, some of my colleagues rush to thinking that to secure AI you need AI. In reality, one of the guardrails may be improved data governance. Some of the recent breaches involving AI, including losses of training data, had nothing to do with actual AI workloads; they had to do with processes related to training data being broken.

Think about whether the controls that you have always had and used are relevant in their intact form. For infrastructure security, if you are securing where you prepare the data or where you run the AI workloads, this applies verbatim. ChatGPT or Bard or commercial enterprise-type AI solutions are ultimately software-as-a-service products, so much of the SaaS security applies. This bucket is called "Ultimately, there's no difference." Some of the controls are the same.

But there's also a more exciting bucket called "These controls are different," and these controls may have a different emphasis. For example, think about data filtering. If you have a massive CRM application, a traditional data-intensive enterprise app, you filter data. You want to not have malicious data coming in, but ultimately whatever comes out is only what you put in. With AI, what comes out is not what you put in. It may be something else. So, filtering inputs is a great idea. But filtering outputs is new. That's an example of how a data security control morphs quite a bit when you add AI.

If I think of data governance, I think of decisions to be made more tightly coupled to the data life cycle, like, "What data goes into training? How do you secure prompts? Who can see the prompts?" All this is a blend of traditional and novel controls. With threat detection and response, if you stick to security scope, there are some changes, but not dramatic changes. But when you start thinking about content safety, a whole world opens up that you may not have encountered as a CISO. You've dealt with threats, badness, hackers and insiders, but you haven't dealt with machine-produced content that harmed your company. Now, you need to think about it. The CISO team's responsibilities expand to areas that they're not familiar with.

The final example I'll give on the controls is: Some people say that the number one problem they have with AI is intellectual property. My reaction is, "But your job is security, right? You are a CISO. Why is this your problem? Can you shove it to somebody else's inbox?" And the person says, "Guess how I ended up with the problem? Everybody shoved it off their inboxes, and it ended up in my inbox because it vaguely connects to risk." We have to solve these problems, but they're unfamiliar problems. Traditional security teams don't know what to do about [securing AI]. That is an exciting challenge. The expansion of the mandate is what freaks a lot of people out – not that they have to deal with adversarial prompts.
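Chuvakin's input-versus-output filtering point can be sketched in a few lines. This is a minimal, hypothetical illustration (the patterns and redaction policy are assumptions, not anything described in the report): a traditional app validates what goes in, while an AI system also needs a screen on what comes out, because the output is not simply what was put in.

```python
import re

# Hypothetical patterns for material that should never leave the model
# boundary: API-style secret keys and e-mail addresses.
SENSITIVE_PATTERNS = [
    re.compile(r"(?:api|secret)[-_]?key\s*[:=]\s*\S+", re.IGNORECASE),
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
]

def filter_output(text: str) -> str:
    """Redact sensitive matches from model output before it reaches the user.

    With a traditional data-intensive app, whatever comes out is only what
    was put in; an LLM may emit anything from its training data or context,
    so egress needs its own filter, not just ingress.
    """
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(filter_output("Contact ops@example.com, secret_key: abc123"))
```

In practice the egress side would combine pattern matching with content-safety classifiers, but the asymmetry is the point: the output filter has no equivalent in the pre-AI version of the same application.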
BAILEY: I totally agree. It's important to focus on data governance – understanding what data you have and then knowing how AI will impact that data. Most people put data in and then want to interact with it to get an outcome. Well, the outcome may be completely new, and that requires determining trustworthiness, potential harm and potential impact. We may have to adapt new things for existing processes in order to effect that good data outcome. Data governance is extremely important in your AI journey.

David Bailey
Vice President of Consulting Services
Clearwater

Driven by the belief that veterans represent the embodiment of resilience, duty, and sacrifice, Bailey is grateful to be the VP of Consulting Services at Clearwater. He has the opportunity to work alongside men and women for the only company combining deep healthcare security and compliance expertise with comprehensive service and technology solutions to help organizations become more secure, compliant, and resilient. Bailey is honored to serve integrated delivery networks, digital health companies, and the defense industrial base in achieving their missions.

POVOLNY: A lot of the threats that we think of surrounding AI in general as a concept aren't fundamentally new in the way that we protect and monitor data. Data protection extends to protecting your models and your training data from poisoning. Data validation and explainability are very similar to code reviews and code auditing. A lot of techniques that we know already just have a new application here.

Cybercriminals are going to find ways to deploy and exploit AI-based attacks regardless of how well we do that, so when we can simulate research and have a deep understanding of what those attack methods look like, it really helps us to identify and determine what the tools and techniques will look like when we see them in the wild.

This is one of those rare times as an industry that we’re


on equal footing with the cybercriminals. We’re just as far
into the research and development of techniques and
applications as they are for the malicious counterpoints. We
have at least an even footing there, if not a step up, and
that’s exciting.

MCNALLY: Even outside of cybersecurity or cybercriminals,


when your own data scientists are putting data into the
model, especially LLMs, people are exposing all their
Confluence documents without looking through and doing
data discovery and redaction. Then, they’re surprised that
sensitive information is coming out of the model. There
should be a gatekeeper between what gets fed into these
LLMs and whether you put any security keys or tokens
into the model. If you put all Confluence in and there were
some security tokens in that, the LLM model can give an
output of the security token.
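McNally's "gatekeeper" between source documents and the model can be sketched as a pre-ingestion check. The detectors below are illustrative assumptions (a real deployment would use a proper secret scanner): any document containing a token-like string or a private-key header is kept out of the LLM corpus entirely.

```python
import re

# Hypothetical detectors for credential material that ends up in wikis:
# long token-like strings and PEM private-key headers.
TOKEN_RE = re.compile(r"\b[A-Za-z0-9_\-]{32,}\b")
KEY_HEADER_RE = re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----")

def admit_document(doc: str) -> bool:
    """Return True only if the document is safe to feed into the corpus."""
    return not (TOKEN_RE.search(doc) or KEY_HEADER_RE.search(doc))

docs = [
    "Q3 release notes: improved search latency.",
    "Deploy with token ghp_a1B2c3D4e5F6g7H8i9J0k1L2m3N4o5P6",
]
safe_docs = [d for d in docs if admit_document(d)]
```

Redaction (replacing the match and admitting the rest) is the gentler variant of the same gate; either way, the check has to run before training or indexing, because once a token is in the model, it can come back out.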

POVOLNY: The risk of poisoning your own models is


higher, or at least equal, internally as it is externally.



Regulations for AI

Laurence McNally
AI Governance, Data Discovery and Technical Product Manager
OneTrust

McNally leads governance of AI products and is responsible for launching the latest at OneTrust, specifically the OT Global Data Platform products, which consist of internal products (developer microservices) and consumer-facing products (a low-code/no-code, LCNC, platform).

MORBIN: Fewer than 40% of business leaders in the report say they understand the regulations relevant to their geography or industry. You may be skeptical of even the 40% figure, but how can organizations catch up, and how can we ever hope to have globally agreed regulations, given cultural perspectives on privacy and security and where the balance is? (Note: This discussion took place prior to the US Biden executive order on AI.)

MCNALLY: It's a really hard question in terms of the agreeability of all the different regulatory bodies. I'll keep that piece out because that's a very long rabbit hole. But in terms of a business leader trying to adhere to whatever regulation they choose, I go back to the example of GDPR. It was one of the trendsetters for the privacy space, and we see this happening again with the EU AI Act in Europe. So, getting companies up to speed and getting prepared for the EU AI Act is one piece where they can get ahead of the curve.

In terms of risk level, not all AI is of the same risk. If you have an employee using ChatGPT or Bard internally to help them draft an email, that's very different than an AI system that's predicting loans or being used in healthcare. That's way more risk. We can help organizations build out an inventory, rank the riskiest AI systems, go after the high risks and put the regulations or policies and procedures on those high risks.
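McNally's inventory-and-rank exercise can be sketched very simply. The systems, risk factors and weights here are hypothetical, chosen only to show the shape of the approach: catalogue every AI system, score each against a few risk factors, and apply policies and procedures to the top of the list first.

```python
# Each entry: (name, uses personal data?, makes consequential decisions?,
# customer facing?). All systems listed are hypothetical examples.
inventory = [
    ("email-drafting assistant", False, False, False),
    ("loan-approval model",      True,  True,  True),
    ("support chatbot",          True,  False, True),
]

def risk_score(personal_data: bool, consequential: bool, customer_facing: bool) -> int:
    """Crude additive score; consequential decisions (loans, healthcare)
    weigh heaviest, echoing the EU AI Act's tiered view of risk."""
    return 2 * personal_data + 3 * consequential + 1 * customer_facing

# Highest-risk systems first, so governance effort lands there first.
ranked = sorted(inventory, key=lambda entry: risk_score(*entry[1:]), reverse=True)
```

The scoring model itself matters far less than the discipline of keeping the inventory complete and revisiting it as systems and models change.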

CHUVAKIN: I am super skeptical about the respondents


saying they have full understanding of regulations because
I don’t think that’s the case. We have a team that tracks
regulations affecting AI, and they’re about to overflow the
spreadsheet maximum row number with all the entries. A
lot of stuff is being reapplied or refocused on AI, and the
future is going to be very freaky.

I don’t know if small startups will build it for their own


regions and then hope for the best. I don’t know how
we’re going to deal with that, especially when it comes
to contradicting regulations. Which one do we follow?
The lack of understanding combined with high speed of



adoption is a hugely explosive combination. I have no idea what will happen in this area, and I don't know anybody who does.

MCNALLY: Regarding the right for your personal data to be forgotten, once a model has been trained, deployed and shipped, your data is in there. The right to be forgotten goes away. You can't put in a DSAR and expect them to train the model again. That is a whole other rabbit hole of technicalities there.

POVOLNY: To be fair, we lost that right with the advent of social media as well. Privacy is a complete fallacy nowadays. But this is another application of it.

Top-Priority Concerns

MORBIN: Are the priorities that the respondents express around their concerns broadly in line with what you or your organization sees as the most important risks, or what do you see as the most important risks?

POVOLNY: The concerns about employee data and company data making it into models, about the way that attacks are being deployed and used, and about the strengthening of common and legacy types of attacks, such as social engineering and phishing, which are obviously being dramatically improved through some of these tools – all of those hold true and are some of the risks that we see inherently.

This is an explosion of technology much in the same way as the development of the iPhone, or maybe the personal computer. There is going to be a red-hot period where the world innovates and decides how they're going to use and explore and push the boundaries of AI. Even though it's 70 years old as a concept, it has a rebirth now. It's no-holds-barred, but we have to be prescient about what the applications and risks are. We have to think about how to control them and apply them without putting handcuffs on the capabilities of AI.

MCNALLY: An interesting point from the survey was the consensus around the leakage of IP. In two questions, people say leaking the IP of a company is one of their main concerns. A company uses so many different vendors, you don't know what data of yours they're using to retrain. Jira and Aha have introduced generative AI within their applications. Are they using our documents to train their model that's being shared with an organization? Even worse: Are they using some of our customer data? In the example of Salesforce Einstein, is our CRM being used to train Einstein, which is being shared with other organizations? That threat goes beyond an employee going on ChatGPT and putting in something that they shouldn't. When it involves the vendors that you're using, there's a huge level of risk there.

CHUVAKIN: In the survey, sensitive data leakage is number one, ingress of inaccurate data/hallucinations is number two, and then the third bucket is broad bias/ethical concerns. And that makes sense. The only slight change is the ingress of copyrighted IP. For some reason, they're talking about ingress, not egress. Copyrighted IP being produced is not coming up in the surveys. Google just announced indemnification for the enterprise AI models. It comes up a lot, and it doesn't come up at all in the survey. It's not about your IP showing up in the AI; it's about whose IP is in the stuff that the AI produced. If somebody points at it and says, "Hey, I recognize this code. I wrote it," then suddenly problems happen.



Generative AI and Healthcare

MORBIN: David, looking at organizations that Clearwater


works with within the healthcare sector, what are the
concerns there? Is generative AI already being deployed in
medical applications, and how do they manage to do that
given the potential liabilities in that sector?

BAILEY: There is a level of awareness at the industry level. The industry understands and knows that AI is here. We're dealing with many organizations that are struggling with the knowledge that they have to implement the governance aspect. Healthcare today is all about patient engagement, patient experience and clinical outcomes. AI applies well to patient engagement, patient experience and productivity, and you can see where the vendors can utilize productivity and outcome. When you're dealing with a true medical application, where you're at the bedside with the patient and you're at some level of use of AI for clinical outcome, there's still a lot of concern about trustworthiness and knowing how to address the right outcomes.

In research, AI is being used in imaging to address and look at images, process images, find tumors and scan. There's so much applicable use. The full-level adoption is not there yet, but the concern is real. Organizations will struggle over the next year or two to ensure that they have the right stakeholders and processes in place and that they can look at what that outcome is, especially from a clinical outcome perspective, and know that they can trust the outcome to make good decisions for their patients and use that technology with good clinical care in mind.

Steve Povolny
Director of Security Research
Exabeam

Steve Povolny is a distinguished cyber security leader with more than 15 years of experience leading global teams of security researchers, data scientists and developers. He brings diverse technical expertise and is an effective people leader with a track record of building high-performing teams. Steve has a deep understanding of the latest developments in cyber security and is a frequent subject matter expert for the media. As a regular speaker at industry conferences, Steve often shares insights on emerging trends, attack surfaces and cutting-edge vulnerability and malware research.

In his role as Director of Security Research at Exabeam, Steve and team have a singular focus: integrating world-class research into the industry's top cyber security solutions to disrupt cybercrime and defend customers' critical assets.

MORBIN: It’s the difference between strategy and


operations. Our cybersecurity professions, the people
who have to implement it, have a bigger struggle than
our business leader respondents, who are talking largely
of intended use or expected use. A lot of them put
the medical applications very high up on their list, but
implementation is a little bit harder.

BAILEY: We’re seven to 10 years into a network-connected


medical device, and it has been a struggle to ensure that
there is an appropriate level of security and reasonable
and appropriate controls with network-connected medical
devices, knowing the threats that exist on the network. So
now, when you add generative AI, learning models and



machine learning to the process, we've got a long way to go. There's a lot of risk to identify and mitigate.

MCNALLY: The EU AI Act is aiming to bucket these systems into four different categories, starting with unacceptable. Unacceptable things include something that's a threat to someone's safety or livelihood or the rights of people. Companies can't build that. Under the EU AI Act, all of the healthcare uses would be seen as high-risk. That's why the inventory of AI systems is so important. The company needs to know all the systems that they have in place and the models that roll up into that.

AI for Vulnerability Discovery and Mitigation

MORBIN: One of the biggest issues with AI is trust, with hallucinations potentially impacting the validity of results. Twenty-three percent of respondents say they are using generative AI to find and fix vulnerabilities, and Steve has said he is skeptical about implementing vulnerability discovery and mitigation via generative AI. Steve, please explain that.

POVOLNY: I'm skeptical of the concept of 23% of respondents truly discovering and mitigating code-based vulnerabilities in any kind of automated and effective fashion using generative AI. What I'm not skeptical of is that there is probably frequent use of generative AI to aid code review, basic bug fixes and development processes, where classical software bugs and configuration issues can likely be discovered and mitigated. We're seeing research leading the effort, but this is very much prior to any kind of market application in things like zero-day discovery, deep reverse engineering of code, and complex bugs that still require a lot of human intervention and human knowledge to discover. So it's probably more about a broader set of terminology around vulnerability discovery and mitigation.

The Need for Human Intervention

MORBIN: Others, what can we trust, or where do we need to get humans involved?

CHUVAKIN: For a lot of answers we want, AI will give you a candidate answer, but if you treat it as the right answer every time, you're going to go very badly and spectacularly wrong. Ultimately, human skills are very much needed. But if you use these AI models for ideas – for things to try, things to do, candidate answers – they're really good.

MCNALLY: There are two types of AI systems. One is the really cool applications that have democratized AI to users to help them draft an email and help them with ideas. That's one sense of AI, and there are regulations for that. But the systems you're talking about are really complicated, with core data scientists involved and very different procedures and policies. There is the LLM world that is not really high-risk, but these other applications, like regression models and all of these other vision models, are very different from the LLM world.

POVOLNY: That's a super important distinction to make: Is there a fundamental difference between generative AI, which is the creation of computer-driven or computer-aided content in some form of media, at least in most uses today, and traditional AI and ML, which might be GANs or AGNs or the creation or recognition of content, pattern recognition and creation, and classification algorithms? These things don't tend to overlap, but they do get conflated in the concept of generative AI versus AI in general. We need to be super careful when we use these definitions that we don't overlap them.



Other Survey Results

MORBIN: Did any of the other results we haven't mentioned so far stand out or surprise you?

BAILEY: For the one in which 31% say that they already had plans to purchase AI-driven solutions in the next 12 months, what is an AI-driven solution? You would hope that 31% had gone through some level of risk analysis and understanding of what that means, what the risks and impacts are to the organization, and how it feeds into the entire business impact to that organization. It's great that you can go buy some AI-driven system, but how it fits into the whole life cycle and trustworthiness and risk acceptance is where we're lacking.

CHUVAKIN: The real surprise is not just the high usage, but the question: For which use cases do you either use or predict the use of AI? Legal and compliance is at 80%, with current use at 20%. So while I can understand the desire to use, say, an LLM-based summarizer to understand certain inscrutable compliance mandates, and I can imagine a very safe, very auditor-proof, very tame usage for compliance use cases, I have the deep suspicion that's not what they mean. I have a suspicion that they're going to answer compliance questionnaires with LLM bots. They're going to write attestation statements with machines and with light review by humans, and that's going to produce incredibly fun-to-watch disasters for them.

When lawyers tried to argue cases using ChatGPT reasoning, it ended up being 90% faulty and based on made-up data. So, the compliance usage predicted at 80% of all respondents is exciting, fun and probably very failure-prone, and to me it is a surprise.

POVOLNY: One of the things that really stood out to me was that most of the survey respondents indicate that the businesses think the C-level staff is responsible for deploying and maintaining generative AI solutions. That makes no sense to me, except when you think about it, the C-level staff is ultimately writing the check and is responsible for the strategy behind it. We're going to see an evolution as companies start to realize that they're missing skill sets and capabilities in the data science realm. They'll need to make sure that they have a chief data scientist and a data science organization that can effectively deploy and maintain these solutions, obviously rolling up to the C-level staff.

MCNALLY: At the very start, I mentioned the overall bullishness of the business leaders to use these solutions versus the concerns of cybersecurity. That stood out to me because the use of AI across businesses is so distributed. You have different teams using different ML ops tooling, and you have employees using vendors that might buy some shadow IT that has AI. You have a distorted view across the whole landscape of an organization. If you go into an organization and ask, "Where are you using AI systems and why?" they can't quickly pull out a report. There's a lot of confusion among the C-level executives and the higher-ups on where AI is actually being used and why it's being used.



The Future of AI

MORBIN: What are your predictions for the future of AI and security, particularly whether it's going to be more of an ally for security or a threat?

BAILEY: I'm not a "sky is falling" security professional. I try to apply reason to this. Organizations that are not in front of this train will get run over by the train, and it's important for organizations to focus on this now. AI is here to stay, and we have to start addressing it.

MCNALLY: The different types of AI, your regression models and legacy AI, won't change in the next couple of years. The hallucination space of LLMs, that fearmongering, will reduce. You Google something today and if it's not the right article, you use your common sense to figure out what's right and what's not; you don't just take everything verbatim. The risk of models hallucinating will die down a little bit. A lot of the scare around deepfake images is warranted – what's actually AI-generated content versus what's not? So, we should have labels or some system that has to add them. But then, there are ways around that too. But I don't buy into the idea that the sky is falling because of it. It's a net positive on productivity.

POVOLNY: Misinformation is the biggest risk that I see coming out of this. We'll have to have systems in place for identification, defense in depth, validation and additional checks to ensure that the content that we're consuming is actually the content that we think we're consuming. The world is badly trained on that front, and generative AI is going to make that problem more difficult – no question about it.

But I'm definitely on the ally side of things. I think it's going to be revolutionary; it already is revolutionary. The applications have a profound impact on nearly every industry vertical worldwide, and the pros will outweigh the cons so long as we can get past the idea that it's a silver bullet that fixes everything and find out where the real applications are.

CHUVAKIN: Our CISO, Phil Venables, makes a good argument that ultimately, in the long run, AI will favor defenders, not attackers, because ultimately defenders are the side with more data. That generates a lot of very exciting optimism for using AI for security, because if the technology revolution inherently favors defenders, the security of other things will improve because of AI. It is a useful prediction to say that AI favors the defenders over attackers because of the amount of data, but what about the other side – securing the AI used for business for other purposes?

That prediction is a long slog. Let's say we're going to secure mobile: Likely in 10 years, we more or less know what we are doing. Despite all this noise about AI, we do see companies that just encountered cloud for the first time, and they're marveling at how different things are in the cloud. For them, the revolution of securing a new venue is now, but the venue is cloud, not AI. And we roughly know what will happen. They will go through a journey and normalize their relationship with this new terrain to secure.

We are at the beginning of that journey for AI. We know what to do. We know which data security controls are more relevant. We know what governance tricks work. We know how to detect and respond to new threats. But ultimately, securing AI for business is a long slog. Some people, like Google, will be there first, but many others will encounter AI for the first time in 10 years. That's my prediction. There is no magic in this area; you just need to work hard and learn it and then secure it.



Conclusions

The use of generative AI is expanding, and so are expenditures for it.

Utilization of generative AI is exploding, and though only 15% of respondents are actively deploying AI, when those conducting trials or planning to implement it are included, the figure reaches 70%, hence the high growth projections.

Expenditure specifically on generative AI is multiplying rapidly. Our research shows that the number of respondents reporting specific budgets for GenAI is set to increase fourfold, and allocated budgets are expected to increase by 10%; however, it is likely that these are minimum figures.

Surprisingly, only 38% of business leaders, and even fewer cybersecurity leaders – 24% – have specific plans to purchase AI for any of the use cases covered. The difference reflects both the more cautious approach of security professionals and the wider range of deployments expected by business leaders. Consequently, a significant proportion of businesses expects multiples of growth in expenditure on a technology where they are not sure what they will buy or how they will use it when they do. But they expect to buy it anyway.

The growth in deployment and expenditure is expected to be much higher than even our respondents' projections, as the introduction of new GenAI use cases, increasing familiarity and proven productivity gains all drive wider and deeper adoption. These expenditure growth figures will be further masked by the adoption of generative AI within the tools and services of existing suppliers.

Use cases for generative AI are growing, and so is productivity.

What is clear is that, notwithstanding concerns around security, privacy and safety, generative AI represents a paradigm shift in how business works, and it is currently seeing unprecedented, accelerating adoption. This is being driven by business leaders, who are experimenting across a wide range of GenAI tools and a plethora of use cases. While cybersecurity leaders are more cautious, they too recognise the gains and are experimenting, albeit in a narrower range of use cases and tools.

The productivity gains exceed 10% in most cases, though they appear to be higher for business leaders than cybersecurity professionals.

Both business leaders and cybersecurity professionals are aware of the potential pitfalls and are largely in agreement about the prioritization of the potential negative consequences of inherent flaws and accidental or deliberate misuse. In particular, data loss, ethical concerns/bias and ingress of inappropriate/poisoned data need to be prevented or mitigated, and cybersecurity professionals tasked with achieving this tend to prioritize the need for security above the need to improve productivity.

Generative AI is still being banned, and a walled garden approach is coming.

Approaches to mitigating threats vary, and outright bans on the use of generative AI are more common among cybersecurity leaders, but the number advocating an outright ban on use in their organization is surprisingly
32 FIRST ANNUAL GENERATIVE AI STUDY


high. Thirty-eight percent of business leaders generative AI deployment affords their
and 48% of cybersecurity leaders expect to company and their own profession, and
continue banning the use of generative AI in therefore, the need to embrace deployment.
the workplace – which contradicts the 70%
planning to use AI. Also, this approach is not While the perspectives of business
considered viable by many in the industry, professionals and cybersecurity professionals
as our expert analysis shows, since it could differ, it appears that they are cooperating to
replicate the “shadow IT” issue in AI as users implement guardrails to ensure productive
circumvent the rules with less known and and secure deployment of generative AI. But
potentially less secure AI variants. knowledge and understanding of how best
to do that has not been established when it
The need to address risks is also reflected in comes to the details of what approaches will
the statistic that 73% of business leaders and be most effective, and we are currently in a
78% of cybersecurity professionals intend to period of trial and error.
take a walled garden/own AI approach going
forward. These approaches may create issues
about limiting the ability of generative AI to
learn, but the respondents did not name this
as a concern.
Organizations that
Understanding of AI regulation is low. are not in front of
Understanding of regulations in any particular this train will get run
vertical or geography is low, as 38% of
business leaders say they do understand
these regulations, and 52% of cybersecurity
over by the train,
leaders say the same. Our expert panel feels
that even these low figures are probably
and it’s important
higher than reality given how quickly
regulations are developing and the fact that for organizations to
they are not standardized internationally and
are potentially contradictory. focus on this now.
Guardrails are needed. AI is here to stay,
It broad terms, it appears that business leaders
understand that generative AI represents an and we have to start
addressing it.
unprecedented opportunity for increased
productivity, and cybersecurity professionals
see the unprecedented risks posed by
generative AI. But at the same time, business - David Bailey
leaders know the risks and the need to
engage their cybersecurity professionals
to mitigate that risk. And cybersecurity
professionals recognize the opportunities

FIRST ANNUAL GENERATIVE AI STUDY 33


About ISMG
Information Security Media Group (ISMG) is the world’s largest media organization devoted solely to information
security and risk management. Each of our 28 media properties provides education, research and news that is
specifically tailored to key vertical sectors including banking, healthcare and the public sector; geographies from
North America to Southeast Asia; and topics such as data breach prevention, cyber risk assessment and fraud.
Our annual global Summit series connects senior security professionals with industry thought leaders to find
actionable solutions for pressing cybersecurity challenges.

Contact
(800) 944-0401 • [email protected][email protected]

902 Carnegie Center • Princeton, NJ • 08540 • ismg.io


34 FIRST ANNUAL GENERATIVE AI STUDY