
The Big Idea Series / Ethics in the Age of AI

8 Questions About Using AI Responsibly, Answered
by Tsedal Neeley
May 09, 2023

Summary. Generative AI tools are poised to change the way every business
operates. As your own organization begins strategizing which to use, and how,
operational and ethical considerations are inevitable. This article delves into eight
of them, including how your organization should prepare to introduce AI
responsibly, how you can prevent harmful bias from proliferating in your systems,
and how to avoid key privacy risks.
While the question of how organizations can (and should) use AI
isn’t a new one, the stakes and urgency of finding answers have
skyrocketed with the release of ChatGPT, Midjourney, and other
generative AI tools. Everywhere, people are wondering: How can
we use AI tools to boost performance? Can we trust AI to make
consequential decisions? Will AI take away my job?

The power of AI introduced by OpenAI, Microsoft, and Nvidia —
and the pressure to compete in the market — make it inevitable
that your organization will have to navigate the operational and
ethical considerations of machine learning, large language
models, and much more. And while many leaders are focused on
operational challenges and disruptions, the ethical concerns are
at least — if not more — pressing. Given how regulation lags
technological capabilities and how quickly the AI landscape is
changing, the burden of ensuring that these tools are used safely
and ethically falls to companies.

In my work at the intersection of occupations, technology, and
organizations, I’ve examined how leaders can develop digital
mindsets and the dangers of biased large language models. I have
identified best practices for organizations’ use of technology and
highlighted the issues they must address to ensure that AI
implementations are ethical. To help you better identify how you
and your company should be thinking about these issues — and
make no mistake, you should be thinking about them — I
collaborated with HBR to answer eight questions posed by readers
on LinkedIn.

[1] How should I prepare to introduce AI at my organization?
To start, it’s important to recognize that the optimal way to work
with AI is different from the way we’ve worked with other new
technologies. In the past, most new tools simply enabled us to
perform tasks more efficiently. People wrote with pens, then
typewriters (which were faster), then computers (which were even
faster). Each new tool allowed for more-efficient writing, but the
general processes (drafting, revising, editing) remained largely
the same.

AI is different. It has a more substantial influence on our work
and our processes because it’s able to find patterns that we can’t
see and then use them to provide insights and analysis,
predictions, suggestions, and even full drafts all on its own. So
instead of thinking of AI as the tools we use, we should think of it
as a set of systems with which we can collaborate.

To effectively collaborate with AI at your organization, focus on
three things:

First, ensure that everyone has a basic understanding of how
digital systems work. A digital mindset is a collection of attitudes
and behaviors that help you to see new possibilities using data,
technology, algorithms, and AI. You don’t have to become a
programmer or a data scientist; you simply need to take a new
and proactive approach to collaboration (learning to work across
platforms), computation (asking and answering the right
questions), and change (accepting that it is the only constant).
Everyone in your organization should be working toward at least
30% fluency in a handful of topics, such as systems architecture,
AI, machine learning, algorithms, AI agents as teammates,
cybersecurity, and data-driven experimentation.

Second, make sure your organization is prepared for continuous
adaptation and change. Bringing in new AI requires employees to
get used to processing new streams of data and content, analyzing
them, and using their findings and outputs to develop a new
perspective. Likewise, to use data and technology most efficiently,
organizations need an integrated organizational structure. Your
company needs to become less siloed and should build a
centralized repository of knowledge and data to enable constant
sharing and collaboration. Competing with AI not only requires
incorporating today’s technologies but also being mentally and
structurally prepared to adapt to future advancements. For
example, individuals have begun incorporating generative AI
(such as ChatGPT) into their daily routines, regardless of whether
companies are prepared or willing to embrace its use.

Third, build AI into your operating model. As my colleagues
Marco Iansiti and Karim R. Lakhani have shown, the structure of
an organization mirrors the architecture of the technological
systems within it, and vice versa. If tech systems are static, your
organization will be static. But if they’re flexible, your
organization will be flexible. This strategy played out successfully
at Amazon. The company was having trouble sustaining its
growth and its software infrastructure was “cracking under
pressure,” according to Iansiti and Lakhani. So Jeff Bezos wrote a
memo to employees announcing that all teams should route their
data through “application programming interfaces” (APIs), which
allow various types of software to communicate and share data
using set protocols. Anyone who didn’t would be fired. This was
an attempt to break the inertia within Amazon’s tech systems —
and it worked, dismantling data siloes, increasing collaboration,
and helping to build the software- and data-driven operating
model we see today. While you may not want to resort to a similar
ultimatum, you should think about how the introduction of AI
can — and should — change your operations for the better.
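
To make the idea concrete, here is a minimal sketch of what
routing a team’s data through an API can look like. It is not
Amazon’s actual implementation; the endpoint name, the figures,
and the choice of Python with Flask are all assumptions made for
illustration.

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Invented in-memory figures standing in for a team's internal records.
    SALES_BY_REGION = {"emea": 1_250_000, "apac": 980_000, "americas": 2_100_000}

    @app.route("/api/v1/sales", methods=["GET"])
    def get_sales():
        # Other teams call this endpoint over HTTP instead of reaching into
        # this team's database, so the data contract stays explicit and shared.
        return jsonify(SALES_BY_REGION)

    if __name__ == "__main__":
        app.run(port=5000)

Any consumer that can speak HTTP and JSON can now use the data
without knowing how it is stored, which is what helps break down
silos.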

[2] How can we ensure transparency in how AI makes decisions?
Leaders need to recognize that it is not always possible to know
how AI systems are making decisions. Some of the very
characteristics that allow AI to quickly process huge amounts of
data and perform certain tasks more accurately or efficiently than
humans can also make it a black box: We can’t see how the output
was produced. However, we can all play a role in increasing
transparency and accountability in AI decision-making processes
in two ways:

Recognize that AI is invisible and inscrutable and be transparent
in presenting and using AI systems. Callen Anthony, Beth A.
Bechky, and Anne-Laure Fayard identify invisibility and
inscrutability as core characteristics that differentiate AI from
prior technologies. It’s invisible because it often runs in the
background of other technologies or platforms without users
being aware of it; for every Siri or Alexa that people understand to
be AI, there are many technologies, such as antilock brakes, that
contain unseen AI systems. It’s inscrutable because, even for AI
developers, it’s often impossible to understand how a model
reaches an outcome, or even identify all the data points it’s using
to get there — good, bad, or otherwise.

As AIs rely on progressively larger datasets, this becomes
increasingly true. Consider large language models (LLMs) such as
OpenAI’s ChatGPT or Microsoft’s Bing. They are trained on
massive datasets of books, webpages, and documents scraped
from across the internet — OpenAI’s GPT-3 model, for example, has
175 billion parameters and was built to predict the likelihood that
something will occur (a character, word, or string of words, or
even an image or tonal shift in your voice) based on either its
preceding or surrounding context. The autocorrect feature on
your phone is an example of the accuracy — and inaccuracy — of
such predictions. But it’s not just the size of the training data:
Many AI algorithms are also self-learning; they keep refining their
predictive powers as they get more data and user feedback,
updating their parameters along the way.
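
As a rough illustration of that prediction idea (not of how
ChatGPT itself is built), the sketch below counts which words
follow which in a tiny invented corpus and then suggests the most
likely next word, much as a phone’s autocomplete does.

    from collections import Counter, defaultdict

    # Toy training text; real LLMs learn from vastly larger corpora.
    corpus = "the report is due friday the report is late the meeting is friday"

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the most frequently seen follower of `word`, if any."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("report"))  # -> "is"
    print(predict_next("is"))      # -> "due" (all followers tied; first seen wins)

Real models condition on far more context and on billions of
parameters, but the underlying move, guessing what plausibly comes
next, is the same.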

AIs often have broad capabilities because of invisibility and
inscrutability — their ability to work in the background and find
patterns beyond our grasp. Currently, there is no way to peer into
the inner workings of an AI tool and guarantee that the system is
producing accurate or fair output. We must acknowledge that
some opacity is a cost of using these powerful systems. As a
consequence, leaders should exercise careful judgment in
determining when and how it’s appropriate to use AI, and they
should document when and how AI is being used. That way
people will know that an AI-driven decision was appraised with
an appropriate level of skepticism, including its potential risks or
shortcomings.

Prioritize explanation as a central design goal. The research brief
“Artificial Intelligence and the Future of Work,” by MIT scientists,
notes that AI models can become more transparent through
practices like highlighting specific areas in data that contribute to
AI output, building models that are more interpretable, and
developing algorithms that can be used to probe how a different
model works. Similarly, leading AI computer scientist Timnit
Gebru and her colleagues Emily Bender, Angelina McMillan-
Major, and Margaret Mitchell (credited as “Shmargaret
Shmitchell”) argue that practices like premortem analyses that
prompt developers to consider both project risks and potential
alternatives to current plans can increase transparency in future
technologies. Echoing this point, in March of 2023, prominent
tech entrepreneurs Steve Wozniak and Elon Musk, along with
employees of Google and Microsoft, signed a letter advocating for
AI development to be more transparent and interpretable.
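
One way to act on the probing idea is with a surrogate model: fit
a small, readable model to imitate a black box’s predictions and
then inspect its rules. The sketch below, written in Python with
scikit-learn on synthetic data, is a generic illustration of that
practice, not a method prescribed by the researchers cited above.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic data stands in for a real decision problem.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

    # The "black box" whose internal reasoning we cannot directly read.
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # A shallow, readable surrogate trained to mimic the black box's outputs.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # The surrogate's rules approximate how the black box behaves, giving
    # reviewers something concrete to question before the system is trusted.
    print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))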

[3] How can we erect guardrails around LLMs so that their
responses are true and consistent with the brand image we want
to project?
LLMs come with several serious risks. They can:

- perpetuate harmful bias by deploying negative stereotypes or
  minimizing minority viewpoints
- spread misinformation by repeating falsehoods or making up
  facts and citations
- violate privacy by using data without people’s consent
- cause security breaches if they are used to generate phishing
  emails or other cyberattacks
- harm the environment because of the significant
  computational resources required to train and run these tools

Data curation and documentation are two ways to curtail those
risks and ensure that LLMs will give responses that are more
consistent with, not harmful to, your brand image.

Tailor data for appropriate outputs. LLMs are often developed
using internet-based data containing billions of words. However,
common sources of this data, like Reddit and Wikipedia, lack
sufficient mechanisms for checking accuracy, fairness, or
appropriateness. Consider which perspectives are represented on
these sites and which are left out. For example, 67% of Reddit’s
contributors are male. And on Wikipedia, 84% of contributors are
male, with little representation from marginalized populations.
If you instead build an LLM around more-carefully vetted
sources, you reduce the risk of inappropriate or harmful
responses. Bender and colleagues recommend curating training
datasets “through a thoughtful process of deciding what to put in,
rather than aiming solely for scale and trying haphazardly to
weed out…‘dangerous’, ‘unintelligible’, or ‘otherwise bad’ [data].”
While this might take more time and resources, it exemplifies the
adage that an ounce of prevention is worth a pound of cure.
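
In practice, “deciding what to put in” can start with something as
plain as an allowlist of vetted sources plus simple quality checks
applied before any text reaches the training set. The Python
sketch below illustrates that filtering step; the source names,
sample documents, and threshold are invented.

    # Hypothetical documents gathered before training.
    raw_documents = [
        {"source": "company_knowledge_base",
         "text": "How to file a travel expense report with receipts attached."},
        {"source": "anonymous_forum",
         "text": "lol no idea, just wing it"},
        {"source": "licensed_encyclopedia",
         "text": "Expense policies typically require itemized receipts."},
    ]

    # Sources a review team has actually vetted for accuracy and tone.
    APPROVED_SOURCES = {"company_knowledge_base", "licensed_encyclopedia"}
    MIN_WORDS = 5  # crude quality floor; real pipelines use richer checks

    def curate(documents):
        """Keep only documents from vetted sources that pass basic checks."""
        return [
            doc for doc in documents
            if doc["source"] in APPROVED_SOURCES
            and len(doc["text"].split()) >= MIN_WORDS
        ]

    training_ready = curate(raw_documents)
    print(f"{len(training_ready)} of {len(raw_documents)} documents kept")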

Document data. There will surely be organizations that want to
leverage LLMs but lack the resources to train a model with a
curated dataset. In situations like this, documentation is crucial
because it enables companies to get context from a
nonproprietary model’s developers on which datasets it uses and
the biases they may contain, as well as guidance on how software
built on the model might be appropriately deployed. This practice
is analogous to the standardized information used in medicine to
indicate which studies have been used in making health care
recommendations.

AI developers should prioritize documentation to allow for safe
and transparent use of their models. And people or organizations
experimenting with a model must look for this documentation to
understand its risks and whether it aligns with their desired brand
image.
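
What that documentation might record can be sketched as a simple,
structured datasheet that travels with the model. The fields and
entries below are illustrative assumptions, not a standard any
particular vendor uses.

    from dataclasses import dataclass, field

    @dataclass
    class ModelDatasheet:
        """A lightweight record of what a model was trained on and how to use it."""
        model_name: str
        training_sources: list[str]
        known_biases: list[str]
        intended_uses: list[str]
        prohibited_uses: list[str] = field(default_factory=list)

    # Illustrative entries only; real answers come from the model's developers.
    datasheet = ModelDatasheet(
        model_name="vendor-llm-v2",
        training_sources=["public web crawl (2022)", "licensed news archive"],
        known_biases=["underrepresents non-English content",
                      "skews toward U.S. sources"],
        intended_uses=["drafting internal summaries"],
        prohibited_uses=["automated hiring decisions"],
    )

    print(datasheet.known_biases)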

[4] How can we ensure that the dataset we use to train AI models
is representative and doesn’t include harmful biases?
Sanitizing datasets is a challenge that your organization can help
overcome by prioritizing transparency and fairness over model
size and by representing diverse populations in data curation.
First, consider the trade-offs you make. Tech companies have
been pursuing larger AI systems because they tend to be more
effective at certain tasks, like sustaining human-seeming
conversations. However, if a model is too large to fully
understand, it’s impossible to rid it of potential biases. To fully
combat harmful bias, developers must be able to understand and
document the risks inherent to a dataset, which might mean
using a smaller one.

Second, if diverse teams, including members of underrepresented
populations, collect and produce the data used to train models,
then you’ll have a better chance of ensuring that people with a
variety of perspectives and identities are represented in them.
This practice also helps to identify unrecognized biases or
blinders in the data.

AI will only be trustworthy once it works equitably, and that will
only happen if we prioritize diversifying data and development
teams and clearly document how AI has been designed for
fairness.

[5] What are the potential risks of data privacy violations
with AI?
AI that uses sensitive employee and customer data is vulnerable
to bad actors. To combat these risks, organizations should learn as
much as they can about how their AI has been developed and
then decide whether it’s appropriate to use secure data with it.
They should also keep tech systems updated and earmark budget
resources to keep the software secure. This requires continuous
action, as a small vulnerability can leave an entire organization
open to breaches.

Blockchain innovations can help on this front. A blockchain is a
secure, distributed ledger that records data transactions, and it’s
currently being used for applications like creating payment
systems (not to mention cryptocurrencies).
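
As a toy, single-machine sketch of why such a ledger is hard to
tamper with, the Python snippet below chains records together with
hashes; it leaves out the distribution and consensus that real
blockchains add, and the records themselves are invented.

    import hashlib
    import json
    import time

    def make_block(data, previous_hash):
        """Create a record that commits to its contents and to the prior block."""
        block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        return block

    ledger = [make_block("genesis", previous_hash="0" * 64)]
    ledger.append(make_block({"event": "model accessed customer table"},
                             ledger[-1]["hash"]))

    def chain_is_valid(chain):
        """Altering any earlier record breaks every hash that follows it."""
        for prev, current in zip(chain, chain[1:]):
            if current["previous_hash"] != prev["hash"]:
                return False
            body = {k: v for k, v in current.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != current["hash"]:
                return False
        return True

    print(chain_is_valid(ledger))  # True until any record is altered
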
When it comes to your operations more broadly, consider this
privacy by design (PbD) framework from former Information and
Privacy Commissioner of Ontario Ann Cavoukian, which
recommends that organizations embrace seven foundational
principles:

1. Be proactive, not reactive — preventative, not remedial.
2. Lead with privacy as the default setting.
3. Embed privacy into design.
4. Retain full functionality, including privacy and security.
5. Ensure end-to-end security.
6. Maintain visibility and transparency.
7. Respect user privacy — keep systems user-centric.

Incorporating PbD principles into your operation requires more
than hiring privacy personnel or creating a privacy division. All
the people in your organization need to be attuned to customer
and employee concerns about these issues. Privacy isn’t an
afterthought; it needs to be at the core of digital operations, and
everyone needs to work to protect it.

[6] How can we encourage employees to use AI for productivity
purposes and not simply to take shortcuts?
Even with the advent of LLMs, AI technology is not yet capable of
performing the dizzying range of tasks that humans can, and
there are many things that it does worse than the average person.
Using each new tool effectively requires understanding its
purpose.

For example, think about ChatGPT. By learning about language
patterns, it has become so good at predicting which words are
supposed to follow others that it can produce seemingly
sophisticated text responses to complicated questions. However,
there’s a limit to the quality of these outputs because being good
at guessing plausible combinations of words and phrases is
different from understanding the material. So ChatGPT can
produce a poem in the style of Shakespeare because it has learned
the particular patterns of his plays and poems, but it cannot
produce the original insight into the human condition that
informs his work.

By contrast, AI can be better and more efficient than humans at
making predictions because it can process much larger amounts
of data much more quickly. Examples include predicting early
dementia from speech patterns, detecting cancerous tumors
indistinguishable to the human eye, and planning safer routes
through battlefields.

Employees should therefore be encouraged to evaluate whether
AI’s strengths match up to a task and proceed accordingly. If you
need to process a lot of information quickly, it can do that. If you
need a bunch of new ideas, it can generate them. Even if you need
to make a difficult decision, it can offer advice, providing it’s been
trained on relevant data.

But you shouldn’t use AI to create meaningful work products
without human oversight. If you need to write a large number of
documents with very similar content, AI may be a useful
generator of what has long been referred to as boilerplate
material. Be aware that its outputs are derived from its datasets
and algorithms, and they aren’t necessarily good or accurate.

[7] How worried should we be that AI will replace jobs?
Every technological revolution has created more jobs than it has
destroyed. Automobiles put horse-and-buggy drivers out of
business but led to new jobs building and fixing cars, running gas
stations, and more. The novelty of AI technologies makes it easy
to fear they will replace humans in the workforce. But we should
instead view them as ways to augment human performance. For
example, companies like Collective[i] have developed AI systems
that analyze data to produce highly accurate sales forecasts
quickly; traditionally, this work took people days and weeks to
pull together. But no salespeople are losing their jobs. Rather,
they’ve got more time to focus on more important parts of their
work: building relationships, managing, and actually selling.

Similarly, services like OpenAI’s Codex can autogenerate
programming code for basic purposes. This doesn’t replace
programmers; it allows them to write code more efficiently and
automate repetitive tasks like testing so that they can work on
higher-level issues such as systems architecture, domain
modeling, and user experience.

The long-term effects on jobs are complex and uneven, and there
can be periods of job destruction and displacement in certain
industries or regions. To ensure that the benefits of technological
progress are widely shared, it is crucial to invest in education and
workforce development to help people adapt to the new job
market.

Individuals and organizations should focus on upskilling and
scaling to prepare to make the most of new technologies. AI and
robots aren’t replacing humans anytime soon. The more likely
reality is that people with digital mindsets will replace those
without them.

[8] How can my organization ensure that the AI we develop or use
won’t harm individuals or groups or violate human rights?
The harms of AI bias have been widely documented. In their
seminal 2018 paper “Gender Shades,” Joy Buolamwini and Timnit
Gebru showed that popular facial recognition technologies
offered by companies like IBM and Microsoft were nearly perfect
at identifying white, male faces but misidentified Black female
faces as much as 35% of the time. Facial recognition can be used
to unlock your phone, but is also used to monitor patrons at
Madison Square Garden, surveil protesters, and tap suspects in
police investigations — and misidentification has led to wrongful
arrests that can derail people’s lives. As AI grows in power and
becomes more integrated into our daily lives, its potential for
harm grows exponentially, too. Here are strategies to safeguard
AI.

Slow down and document AI development. Preventing AI harm
requires shifting our focus from the rapid development and
deployment of increasingly powerful AI to ensuring that AI is safe
before release.

Transparency is also key. Earlier in this article, I explained how
clear descriptions of the datasets used in AI and potential biases
within them help to reduce harm. When algorithms are openly
shared, organizations and individuals can better analyze and
understand the potential risks of new tools before using them.

Establish and protect AI ethics watchdogs. The question of who
will ensure safe and responsible AI is currently unanswered.
Google, for example, employs an ethical-AI team, but in 2020 the
company fired Gebru after she sought to publish a paper warning
of the risks of building ever-larger language models. Her exit from
Google raised the question of whether tech developers are able, or
incentivized, to act as ombudsmen for their own technologies and
organizations. More recently, an entire team at Microsoft focused
on ethics was laid off. But many in the industry recognize the
risks, and as noted earlier, even tech icons have called for
policymakers working with technologists to create regulatory
systems to govern AI development.

Whether it comes from government, the tech industry, or another
independent system, the establishment and protection of
watchdogs is crucial to protecting against AI harm.

Watch where regulation is headed. Even as the AI landscape
changes, governments are trying to regulate it. In the United
States, 21 AI-related bills were passed into law in 2022. Notable
acts include an Alabama provision outlining guidelines for using
facial recognition technology in criminal proceedings and
legislation that created a Vermont Division of Artificial
Intelligence to review all AI used by the state government and to
propose a state AI code of ethics. More recently, the U.S. federal
government moved to enact executive actions on AI, which will
be vetted over time.

The European Union is also considering legislation — the
Artificial Intelligence Act — that includes a classification system
determining the level of risk AI could pose to the health and
safety or the fundamental rights of a person. Italy has temporarily
banned ChatGPT. The African Union has established a working
group on AI, and the African Commission on Human and Peoples’
Rights adopted a resolution to address implications for human
rights of AI, robotics, and other new and emerging technologies in
Africa.

China passed a data protection law in 2021 that established user
consent rules for data collection and recently passed a unique
policy regulating “deep synthesis technologies” that are used for
so-called “deep fakes.” The British government released an
approach that applies existing regulatory guidelines to new AI
technology.

...
Billions of people around the world are discovering the promise of
AI through their experiments with ChatGPT, Bing, Midjourney,
and other new tools. Every company will have to confront
questions about how these emerging technologies will apply to
them and their industries. For some it will mean a significant
pivot in their operating models; for others, an opportunity to scale
and broaden their offerings. But all must assess their readiness to
deploy AI responsibly without perpetuating harm to their
stakeholders and the world at large.

Tsedal Neeley is the Naylor Fitzhugh Professor
of Business Administration and senior
associate dean of faculty and research at
Harvard Business School. She is the coauthor of
the book The Digital Mindset: What It Really
Takes to Thrive in the Age of Data, Algorithms,
and AI and the author of the book Remote Work
Revolution: Succeeding from Anywhere.
