
MBA Case Study

CASE STUDY ESSAY

This Case Study is part of the application process for the 2025 full-/part-time MBA programme at Frankfurt School of Finance & Management. Please carefully read the article from the Financial Times on the following pages and answer the questions below in the field provided. Please write no more than one page (500 words).

Applicant Name: Nguyen Thi Thanh Huyen

Date: 27 November 2024

Please read the attached article from the Financial Times.

What are the main take-aways?
What AI measures / initiatives have been implemented in your organisation?
What risks and opportunities do you see for business moving forward?

1. The main take-aways are:

AI as a Strategic Asset: Artificial Intelligence (AI) represents a pivotal innovation, driving efficiency and providing businesses with unparalleled data analysis capabilities. With AI, organizations can automate complex tasks, identify trends, and even predict future outcomes, creating opportunities to enhance competitiveness.

Managing AI-Driven Risks: Despite its potential, AI carries significant risks, such as data privacy
concerns, potential biases in decision-making, and reputational damage if deployed
irresponsibly. Businesses must balance the drive for innovation with a strong commitment to
ethical and transparent practices to safeguard stakeholder trust.

Establishing Robust AI Governance: To leverage AI responsibly, companies should implement comprehensive frameworks that include safety protocols, security measures, and ethical guidelines. This ensures compliance with evolving regulations and mitigates legal and reputational risks, especially in sectors like healthcare and finance.

Human Oversight as a Necessity: Even the most advanced AI systems require human
intervention to validate and oversee decisions. 'Human-in-the-Loop' (HITL) frameworks are
essential to maintain control, prevent errors, and ensure AI systems align with organizational
values and societal norms.

AI's Dual Impact on Sustainability: While AI can be resource-intensive, it also plays a crucial
role in advancing environmental, social, and governance (ESG) objectives. For example,
AI-driven innovations can optimize energy use, accelerate the development of eco-friendly technologies, and
provide insights to combat climate change, making sustainability a strategic imperative.

2. AI measures / initiatives that have been implemented in my organization:

AI-Powered EdTech Talent Matching

At Skale Works Pty. Ltd., we expanded our HR tech expertise to create an AI-powered EdTech platform designed to address the Australian market for Vietnamese international students, a

WHAT DOES AI MEAN FOR A RESPONSIBLE BUSINESS?
How to navigate the opportunities and challenges posed by a technology few can afford to ignore
Sarah Murray, 27 March 2024

It was what many called an iPhone moment: the launch in late 2022 of OpenAI's ChatGPT, an artificial intelligence tool with a humanlike ability to create content, answer personalised queries and even tell jokes. And it captured the public imagination. Suddenly, a foundation model — a machine learning model trained on massive data sets — thrust AI into the limelight.

But soon this latest chapter in AI's story was generating something else: concerns about its ability to spread misinformation and "hallucinate" by producing false facts. In the hands of business, many critics said, AI technologies would precipitate everything from data breaches to bias in hiring and widespread job losses.

"That breakthrough in the foundation model has got the attention," says Alexandra Reeve Givens, chief executive of the Center for Democracy & Technology, a Washington and Brussels-based digital rights advocacy group. "But we also have to focus on the wide range of use cases that businesses across the economy are grappling with."

The message for the corporate sector is clear: that any company claiming to be responsible must implement AI technologies without creating threats to society — or risks to the business itself, and the people who depend on it.

Companies appear to be getting the message. In our survey of FT Moral Money readers, 52 per cent saw loss of consumer trust as the biggest risk arising from irresponsible use of AI, while 43 per cent cited legal challenges.

"CEOs have to ensure AI is trustworthy," says Ken Chenault, former chief executive of American Express and co-chair of the Data & Trust Alliance, a non-profit consortium of large corporations that is developing standards and guidelines for responsible use of data and AI.

"AI and machine learning models are fundamentally different from previous information technologies," says Chenault. "This is a technology that continuously learns and evolves, but the underlying premises must be constantly tested and monitored."

Some have warned that inappropriate use of AI technologies could prevent companies from meeting their promises around social and environmental challenges — not least because of AI's hefty carbon footprint, which arises from the energy consumed in training chatbots or producing content.

A 2020 analysis conducted by the journal Nature found that high energy use, along with a lack of transparency and poor safety and ethical standards, could cause AI to erect obstacles to meeting 59 of the 169 targets in the UN's Sustainable Development Goals.

However, the Nature research also brought positive news: that AI could help progress towards 134 of the SDG targets by enabling innovations in areas from sustainable food production to better access to health, clean water and renewable energy.

With its ability to analyse millions of data points at speed and to identify patterns that humans would miss, AI can certainly help to drive positive impact.

For example, by creating "digital twins", it can analyse data from sensors, along with historical and real-time data, to find energy and other efficiencies in building systems. It also offers speed in the development of everything from life-saving drugs to alternative materials for electric vehicle batteries that could reduce reliance on scarce resources such as lithium.

Some see AI as supercharging progress on climate goals through everything from enhancing electric grid efficiency to applying analytics to satellite imagery to map deforestation and carbon emissions in real time.

"It's a very big deal," says Mike Jackson, managing partner at San Francisco-based Earthshot Ventures, which invests in climate tech start-ups. "Things are going to change much faster than people realise — and that's going to be a significant boon for the climate."

With AI holding both promise and peril, the challenge for companies across all sectors will be to temper the instinct to race ahead with appropriate caution. Businesses will need to commit to thorough testing of AI models, and introduce policies and procedures to address risks of accidental harm, increased inequity and something every organisation fears: loss of control.
www.frankfurt-school.de Copyright The Financial Times Limited 2024. All rights reserved.
Handle with care

In 2023, New York lawyer Steven Schwartz was ridiculed in court when it emerged that his brief included fake citations and opinions generated by ChatGPT. For Schwartz, the revelations were deeply embarrassing. But they also raised awareness of the fact that AI programs can make glaring errors, something that is worrying when considering their possible use in industries such as nuclear power or aviation, where mistakes can be fatal.

Even where physical safety is not at risk, AI can introduce bias into decisions such as who to hire, who to arrest or who to lend to. In healthcare, concerns range from data breaches to relying on models trained on data sets that ignore marginalised communities.

For companies, among the biggest risks of getting it wrong is losing public trust. When KPMG polled 1,000 US consumers on generative AI, 78 per cent agreed on the responsibility of organisations to develop and use the technology ethically — but only 48 per cent were confident they would do so.

"You're going in with a level of scepticism," says Carl Carande, US head of advisory at KPMG. "That's where the frameworks and safeguards are critical."

Approaches to AI governance will vary by sector and company size, but Carande sees certain principles as essential, including safety, security, transparency, accountability and data privacy. "That's consistent regardless of whatever sector you're in," he says.

In practical terms, a responsible approach to AI means not only creating the right frameworks and guidelines but also ensuring that data structures are secure, and that employees are given sufficient training in how to use data appropriately.

But responsible AI does not always mean reinventing the wheel. The UN Guiding Principles on Business and Human Rights provide a ready-made means of assessing AI's impact on individuals and communities, says Dunstan Allison-Hope, who leads the advisory group BSR's work on technology and human rights.

"There's been all kinds of efforts to create guidelines, policies and codes around artificial intelligence, and they're good," he says. "But we suggest companies go back to the international human rights instruments and use them as a template."

Some have not yet implemented any governance structures at all. While 30 per cent of FT Moral Money readers said their organisations had introduced enterprise-wide guidelines on the ethical use of AI, 35 per cent said their organisations had not introduced any such measures.

Reid Blackman, founder and CEO of Virtue, an AI ethics consultancy, sees no excuse for inaction. A rigorous approach to AI does require companies to make change, which takes time and effort, he says. "But it's not expensive relative to everything else on their budget."

While some might turn to the services of consultancies like Virtue or products such as watsonx.governance, IBM's generative AI toolkit, another option is to build internal capabilities.

This was the approach at Walmart, which has a dedicated digital citizenship team of lawyers, compliance professionals, policy experts and technologists. "Given our scale, we often build things ourselves because the bespoke model is the only one that's going to work for our volume of decision making," says Nuala O'Connor, who leads the team.

Whether turning to internal or external resources, there is one element of a responsible approach to AI so widely agreed on that it has its own acronym: HITL, or human in the loop — the idea that human supervision must be present at every stage in the development and implementation of AI models.

"Let's not give up on human expertise and the ability to judge things," says Ivan Pollard, who as head of marketing and communications at The Conference Board leads the think-tank's development of online guidance on responsible AI.

For Walmart, putting humans front and centre also means treating AI systems used for, say, managing trucks and pallets differently from AI programs that can affect the rights and opportunities of employees. "Those tools have to go through a higher order of review process," says O'Connor.
A vendor in the loop

If companies are still grappling with how to manage AI responsibly, their efforts must extend beyond their own four walls. "The vast majority of companies won't develop their own AI," says Chenault. "So they need to ensure they have the right governance and controls in procurement."

Without these controls, the exposure is both legal and reputational, says Reeve Givens from the Center for Democracy & Technology. "This is a hugely important piece of the AI governance puzzle — and not enough people are thinking about it," she says. "Because it's the downstream customers that will have the most at stake if something goes wrong."

Not all organisations appear to be aware of this. When ranking the risks posed by the adoption of AI and big data, only 11 per cent of the 976 institutional investors polled in a 2022 CFA Institute survey highlighted reliance on third-party vendors.

It was for this reason that one of the first publications of the Data & Trust Alliance was a guide to evaluating the ability of human resources vendors to mitigate bias in algorithmic systems.

The evaluation includes questions on the data that vendors use to train their models and steps taken to detect and mitigate bias, as well as measures vendors have put in place to ensure their systems perform as intended — and what documentation is available to verify this.

The alliance focused on HR vendors for the guidance because many companies' first foray into AI is for recruitment purposes. "But those guidelines could be adopted for other tech vendors," says JoAnn Stonier, a member of the Data & Trust Alliance leadership council and chief data officer at Mastercard, which helped develop the guidelines.

"When we're using third-party vendors, we interrogate them heavily," she says. "Because we're ultimately responsible for the outcome of their solutions."

To make things even more complicated, because AI technologies learn and evolve, vendors cannot know what will happen to their models when trained on the data sets of their clients.

This means that vendor-customer partnerships need to be far more collaborative and long-lasting than in the past. "That will change the supply chain relationship," says Reeve Givens. "They have a shared responsibility to get this right."

However, she also points to gaps, particularly on standards. "We can't expect the average manager of a factory or supermarket chain to run a deep analysis on how an AI system is working," she says. "So what is the approach to certification? That's a massive global conversation that needs to happen."

Watchful eyes

Companies may be trying to demonstrate that they can be responsible stewards of AI technologies. But governments are not leaving it to chance. In fact, for once policymakers seem to be acting relatively swiftly to bring order to an emerging technology.

First out of the regulatory gate was the EU, which in December agreed on its wide-ranging Artificial Intelligence Act, which many see as the world's toughest rules on AI.

While the UK's version is still a work in progress, the AI Safety Summit, convened by Prime Minister Rishi Sunak in November, sent a signal that regulating the technology would be taken seriously.

A month earlier, US President Joe Biden sent a similar message in an executive order directing government agencies to ensure AI is safe, secure and trustworthy. "To realise the promise of AI and avoid the risk, we need to govern this technology, there's no way around it," Biden said at the time.

The desire to create safeguards around AI technologies has even prompted a rare moment of collaboration between the US and China. In January, Arati Prabhakar, director of the White House Office of Science and Technology Policy, told the Financial Times that the two countries had agreed to work together on mitigating the risks.

"All around the world we're seeing policymakers feel the need to respond," says Reeve Givens. "I haven't seen a moment that is as concentrated as this AI policy moment."

Debates continue over whether new regulations are either appropriately tough or risk stifling innovation. Meanwhile, it appears that they have not yet had much impact on corporate behaviour, at least among FT Moral Money readers, 92 per cent of whom said they had not had to change their use of AI to meet emerging regulations or standards.
Yet there are signs that, having failed to act to prevent the worst effects of social media, policymakers are determined not to let the same thing happen with AI.

"If we let this horse get out of the barn, it will be even more difficult to contain than social media," Richard Blumenthal, the Democratic senator from Connecticut, said in his opening remarks at a December hearing on AI legislation.

Investing with an AI lens

Regulators are not alone in keeping a watchful eye on how companies use AI. Investors are also starting to ask tough questions. For asset managers and asset owners, responsible AI is partly about building internal governance systems. But it also means finding out whether the companies in their portfolios are using AI responsibly — particularly when investors are applying environmental, social and governance criteria to those portfolios.

"In pretty much every ESG conversation I have, AI is a topic," says Caroline Conway, an ESG analyst at Wellington Management. "And mostly what I'm trying to get at is governance — how well the company is doing at managing the risk, pursuing the potential benefits and thinking about the trade-off between benefit and risk at a high level."

Yet if FT Moral Money readers are anything to go by, it is early days for investors: only 19 per cent who identified as corporate executives said investors were asking their company about the use of AI. And 63 per cent of investors in the same survey said AI use did not affect decisions on whether or not to invest in companies.

The responses are perhaps unsurprising given the difficulties investors face in assessing the risks AI poses to portfolios. "They are seeking basic understanding of how it can be used, which few of them and us truly have, to be honest," one reader told us.

To help investors navigate this new risk landscape, a group of asset managers has formed Investors for a Sustainable Digital Economy, an initiative to pool resources and generate research on digital best practices in asset management. Members include Sands Capital, Baillie Gifford and Zouk Capital, and asset owners such as the Church Commissioners for England.

Karin Riechenberg, director of stewardship at Sands Capital, suggests investors start by identifying high-risk sectors, which range from technology, healthcare and financial services to hiring and defence. Then, she says, they should identify high-risk use cases — those where AI will have a significant impact on aspects of people's lives, such as credit scores, safety features in self-driving cars, chatbots, and surveillance and hiring technologies.

"It's important to look at each company individually and ask what AI tools they are using, what they are intended for and who might be affected by them and how," she says.

Where AI meets ESG

ESG ratings have frequently come under fire for being inconsistent, unreliable and part of a confusing "alphabet soup" of acronyms. Now, however, two more letters of the alphabet — A and I — offer assessment tools that some believe could transform the way investors evaluate the ESG credentials of the companies in their portfolios.

Axa Investment Managers, for example, has developed a natural language processing tool that the firm runs over large volumes of corporate documents, including sustainability reports, to enable analysts to assess whether companies' business activities are helping advance the UN's SDGs.

"AI can bring super-useful solutions in digesting huge quantities of data," says Théo Kotula, an ESG analyst at the firm. "That's not to say it will replace ESG analysts. But it could make our jobs easier and quicker."

FT Moral Money readers agree. When asked to select the biggest benefits of AI to their organisation's sustainability goals, the largest group picked the ability to measure and track their positive or negative social and environmental impact.

AI could also improve ESG decision-making for asset managers by incorporating a far broader set of data points. These range from news reports, blogs and social media to data from satellites and sensors that can monitor pollution, deforestation and water scarcity in real time.

At Amundi Investment Institute, the research arm of the Amundi group, Marie Brière says AI harnesses these new forms of data to assess companies' environmental impact, physical risks, social controversies and potential costs while also uncovering greenwashing.

"You could do this before," says Brière, who is head of investor intelligence and academic partnerships at the institute. "But it's now much quicker and uses quantitative tools."
Serving people and planet

If AI technologies are helping to measure social and environmental impact, they are also enabling innovators to create businesses that drive positive change in everything from healthcare to clean technology.

"We see it as a really amazing tool for engineers," says Jackson of Earthshot Ventures. "It allows us to tease out correlations, to run through millions of simulations much faster and to model things in software before building them in hardware or biology."

Given these capabilities, it is no surprise that AI technologies are permeating the portfolios of impact-focused venture capitalists and accelerators.

Jackson says AI is being used by almost every company in its portfolio and is at the core of the strategy for at least one-third. The same is true of the portfolio companies at Hawaii-based Elemental Excelerator, says Dawn Lippert, its founder and CEO.

Jackson points to Mitra Chem, which is using machine learning to speed up the development of the iron-based cathode materials needed in energy storage and transport electrification. The company says its technology and processes cut lab-to-market time by about 90 per cent.

Also in the portfolio is California-based KoBold Metals, backed by Bill Gates and Jeff Bezos. The company uses AI to scrape the world's geological data (even including old hand-painted maps on linen) and deploys algorithms to find deposits of minerals such as lithium, nickel, copper and cobalt.

"To facilitate the transition to electric vehicles, we're going to need to find a lot more of these resources," explains Jackson. "Through that ingestion of a tremendous amount of data, AI is helping predict where these resources might be."

Decarbonising the economy also involves making better use of existing resources — something AI technologies are particularly good at.

The technologies can be used to optimise energy use in buildings or adjust traffic lights to keep cars on the move rather than idling. "AI technologies find those marginal gains — and they find so many of them that the cumulative value is massive," says Solitaire Townsend, co-founder of sustainability consultancy Futerra.

AI can also help keep valuable resources in circulation for longer. For example, San Francisco-based Glacier, one of Elemental's portfolio companies, is using AI technologies to bring greater efficiency and precision to waste sorting, a job for which it is hard to find human workers.

Equipped with computer vision and AI, its robots can identify and remove more than 30 recyclable materials from general waste at 45 picks per minute, a speed far greater than even legions of human workers could achieve. "Recycled aluminium, for instance, generates about 95 per cent fewer emissions than new aluminium," says Lippert, who is also founding partner at Earthshot Ventures. "So it has a huge climate impact."

By enabling new efficiencies, AI is also spawning a generation of young businesses that aim to expand access to essential services. At 25madison, a New York-based venture capital firm, the portfolio includes companies in the healthcare sector that are using AI to drive operational efficiency.

They include Midi, a virtual clinic specialising in perimenopause and menopause that uses AI to manage patient records and billing. The start-up aims to fill the large gap in access that women have to this kind of care, explains Jaja Liao, a principal at 25m Ventures, a fund at 25madison that invests in early-stage companies.

She says AI relieves specialists of time-consuming administrative tasks, allowing them to spend more time with patients. "That's how they make care more equitable."

As these and other companies are demonstrating, AI technologies can be used for good. But as is the case with KoBold Metals, now valued at $1.15bn, using AI to benefit people and the planet can also create highly successful businesses.

Moving fast and slow

AI may be ushering in an exciting new era in technological innovation and potential solutions to social and environmental challenges. But as the University of Oxford's Colin Mayer points out, it is also a gold rush with similarities to previous booms.

"At the moment it's clear the motive is to become as profitable as possible," says Mayer, who has spent many years exploring the purpose of business. "The only way to solve this is to align the interests of companies with what we as humans and societies want."

But with corporate leaders anxious to seize opportunities ahead of the competition, is this alignment possible? "There's pressure to get it done first," says The Conference Board's Pollard. "But with that comes risk — the risk of doing the wrong thing with the wrong tool in the wrong way."

And while many organisations have appointed chief ethics officers to maintain ethical behaviour and regulatory compliance, they may need to go further. One solution, says Virtue's Blackman, is to put someone in charge of responsible approaches to AI. "If you're the chief innovation officer, you want to move fast, but if you're the chief ethics officer, you don't want to break things — so there's tension," he says. "Someone with a dedicated role doesn't have that conflict of interest."

And while large, well-established companies may need to do some organisational retrofitting to put appropriate guardrails around their use of AI, young companies have an opportunity to get it right from the start.

This is something Responsible Innovation Labs, a coalition of founders and investors, is promoting among the next generation of high-growth tech companies. "Responsible AI should be an essential mindset and operating norm in the earliest stage of company building," says Gaurab Bansal, executive director of the San Francisco-based non-profit.
For Bansal, the right approach is to assess the potential impact of products and technologies on customers and society more broadly. "We think responsible innovation is about designing and accounting for that," he says. "It's not about putting your head in the sand or worrying about it some other time."

Unfortunately, as sluggish progress on meeting climate goals has shown, putting its head in the sand is something business does all too well. The question is whether it will take the same approach with AI. Or can capitalism harness AI for good and use awareness of its risks to prioritise long-term thinking over short-term gain?

So far, the jury is out. Yet there is a sense that, at this early stage of what is expected to be the next great tech revolution, this is a moment when it is still possible to get the governance right.

"We'll constantly have to tweak it," says Riechenberg at Sands Capital. "But if we start doing that now, we have the potential to make the most of this technology — to control it and not be controlled by it."