8 Questions About Using AI Responsibly, Answered
Summary. Generative AI tools are poised to change the way every business
operates. As your own organization begins strategizing which to use, and how,
operational and ethical considerations are inevitable. This article delves into eight
of them, including how your organization should prepare to introduce AI
responsibly, how you can prevent harmful bias from proliferating in your systems,
and how to avoid key privacy risks.
While the question of how organizations can (and should) use AI
isn’t a new one, the stakes and urgency of finding answers have
skyrocketed with the release of ChatGPT, Midjourney, and other
generative AI tools. Everywhere, people are wondering: How can
we use AI tools to boost performance? Can we trust AI to make
consequential decisions? Will AI take away my job?
1. How should I prepare to introduce AI at my organization?
To start, it’s important to recognize that the optimal way to work
with AI is different from the way we’ve worked with other new
technologies. In the past, most new tools simply enabled us to
perform tasks more efficiently. People wrote with pens, then
typewriters (which were faster), then computers (which were even
faster). Each new tool allowed for more-efficient writing, but the
general processes (drafting, revising, editing) remained largely
the same.
2. How can we ensure transparency in how AI makes decisions?
Leaders need to recognize that it is not always possible to know
how AI systems are making decisions. Some of the very
characteristics that allow AI to quickly process huge amounts of
data and perform certain tasks more accurately or efficiently than
humans are also what make it a black box: We can’t see how the
output was produced. However, we can all play a role in increasing
transparency and accountability in AI decision-making processes
in two ways:
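Whatever form those efforts take, one concrete, widely used transparency tactic is post-hoc explanation: probing a trained model to see which inputs most influence its outputs. Below is a minimal sketch using permutation importance from scikit-learn; the dataset and model are synthetic stand-ins for illustration, not a recommendation of any particular system.

```python
# A minimal sketch of post-hoc explanation: permutation importance
# with scikit-learn. The data and model here are synthetic stand-ins;
# the point is the probing technique, not this particular model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy
# drops: big drops flag the inputs the model leans on most.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

Techniques like this don’t open the black box entirely, but they give stakeholders something auditable to discuss when a model’s decision is questioned.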
3. How can we erect guardrails around LLMs so that their responses are true and consistent with the brand image we want to project?
LLMs come with several serious risks. They can:
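Whatever the specific risks in your context, one common guardrail pattern is to screen a model’s output before it ever reaches users. Below is a minimal sketch; call_llm is a hypothetical placeholder for your provider’s API, and the keyword filter is purely illustrative. Production systems typically layer moderation services, retrieval grounding, and human review on top.

```python
# A minimal sketch of an output guardrail: screen the model's reply
# before it reaches the user. `call_llm` is a hypothetical placeholder
# for a real provider API; the keyword list is illustrative only.
BANNED_PHRASES = {"guaranteed returns", "medical diagnosis", "legal advice"}
FALLBACK = "I can't help with that. Let me connect you with a specialist."

def call_llm(prompt: str) -> str:
    # Placeholder: swap in your provider's chat/completions call here.
    return "Our fund offers guaranteed returns with zero risk!"

def guarded_reply(prompt: str) -> str:
    reply = call_llm(prompt)
    lowered = reply.lower()
    if any(phrase in lowered for phrase in BANNED_PHRASES):
        # Off-brand or risky claim detected: substitute a safe fallback.
        return FALLBACK
    return reply

print(guarded_reply("Should I invest in your fund?"))
```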
4. How can we ensure that the dataset we use to train AI models is representative and doesn’t include harmful biases?
Sanitizing datasets is a challenge that your organization can help
overcome by prioritizing transparency and fairness over model
size and by representing diverse populations in data curation.
First, consider the trade-offs you make. Tech companies have
been pursuing larger AI systems because they tend to be more
effective at certain tasks, like sustaining human-seeming
conversations. However, if a model is too large to fully
understand, it’s impossible to rid it of potential biases. To fully
combat harmful bias, developers must be able to understand and
document the risks inherent to a dataset, which might mean
using a smaller one.
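As a starting point for that kind of documentation, you can audit how well key subgroups are represented before training. The sketch below is a minimal illustration; the column names and the 25% threshold are assumptions made for the example, not a standard.

```python
# A minimal sketch of a representation audit before training. The
# column names and the 25% threshold are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "M", "M", "M", "M"],
    "region": ["EU", "US", "US", "US", "US", "EU", "US", "US"],
})

THRESHOLD = 0.25  # flag any subgroup below 25% of rows (arbitrary cutoff)

for column in df.columns:
    shares = df[column].value_counts(normalize=True)
    print(f"\n{column} distribution:")
    print(shares.to_string())
    for group, share in shares.items():
        if share < THRESHOLD:
            print(f"  WARNING: '{group}' is only {share:.0%} of the data")
```

An audit like this doesn’t remove bias by itself, but it makes the dataset’s gaps visible and documentable before a model is trained on them.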
5. What are the potential risks of data privacy violations with AI?
AI that uses sensitive employee and customer data is vulnerable
to bad actors. To combat these risks, organizations should learn as
much as they can about how their AI has been developed and
then decide whether it’s appropriate to use secure data with it.
They should also keep tech systems updated and earmark budget
resources to keep the software secure. This requires continuous
action, as a small vulnerability can leave an entire organization
open to breaches.
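One concrete precaution is to redact obvious identifiers before any text leaves your systems for a third-party model. The sketch below uses simple regular expressions and is a first line of defense only; the patterns shown will not catch every form of personal data.

```python
# A minimal sketch of PII redaction before data leaves your systems.
# These regexes are illustrative and will NOT catch every identifier;
# treat this as a first line of defense, not a complete solution.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace each match with a labeled placeholder, e.g. [EMAIL].
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or 555-867-5309."))
```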
6. How can we encourage employees to use AI for productivity purposes and not simply to take shortcuts?
Even with the advent of LLMs, AI technology is not yet capable of
performing the dizzying range of tasks that humans can, and
there are many things that it does worse than the average person.
Using each new tool effectively requires understanding its
purpose.
7. How worried should we be that AI will replace jobs?
Historically, technological revolutions have created more jobs than
they destroyed. Automobiles put horse-and-buggy drivers out of
business but led to new jobs building and fixing cars, running gas
stations, and more. The novelty of AI technologies makes it easy
to fear they will replace humans in the workforce. But we should
instead view them as ways to augment human performance. For
example, companies like Collective[i] have developed AI systems
that analyze data to produce highly accurate sales forecasts
quickly; traditionally, this work took people days or weeks to
pull together. But salespeople aren’t losing their jobs as a result.
Rather, they’ve gained time to focus on more important parts of their
work: building relationships, managing, and actually selling.
The long-term effects on jobs are complex and uneven, and there
can be periods of job destruction and displacement in certain
industries or regions. To ensure that the benefits of technological
progress are widely shared, it is crucial to invest in education and
workforce development to help people adapt to the new job
market.
8. How can my organization ensure that the AI we develop or use won’t harm individuals or groups or violate human rights?
The harms of AI bias have been widely documented. In their
seminal 2018 paper “Gender Shades,” Joy Buolamwini and Timnit
Gebru showed that popular facial recognition technologies
offered by companies like IBM and Microsoft were nearly perfect
at identifying white, male faces but misidentified Black female
faces as much as 35% of the time. Facial recognition can be used
to unlock your phone, but is also used to monitor patrons at
Madison Square Garden, surveil protesters, and identify suspects in
police investigations — and misidentification has led to wrongful
arrests that can derail people’s lives. As AI grows in power and
becomes more integrated into our daily lives, its potential for
harm grows exponentially, too. Here are strategies to safeguard
AI.
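One such strategy is the disaggregated evaluation that “Gender Shades” pioneered: report a model’s accuracy per subgroup rather than as a single overall number. The sketch below uses synthetic, purely illustrative data to show how an impressive aggregate figure can mask a failing subgroup.

```python
# A minimal sketch of disaggregated evaluation in the spirit of
# "Gender Shades": report accuracy per subgroup, not just overall.
# The data below is synthetic and purely illustrative.
import pandas as pd

results = pd.DataFrame({
    "subgroup": ["lighter_male", "lighter_female",
                 "darker_male", "darker_female"] * 25,
    "correct": [1, 1, 1, 0] * 25,  # toy outcome: one group fails often
})

overall = results["correct"].mean()
per_group = results.groupby("subgroup")["correct"].mean()

print(f"Overall accuracy: {overall:.0%}  <- a single number hides gaps")
print(per_group.to_string())
```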
...
Billions of people around the world are discovering the promise of
AI through their experiments with ChatGPT, Bing, Midjourney,
and other new tools. Every company will have to confront
questions about how these emerging technologies will apply to
them and their industries. For some it will mean a significant
pivot in their operating models; for others, an opportunity to scale
and broaden their offerings. But all must assess their readiness to
deploy AI responsibly without perpetuating harm to their
stakeholders and the world at large.