JKU - S. Bolda - MS Thesis - Navigating The EU AI Act - Proposed Compliance Measures For AI Providers and Deployers (2024)
Submission
Institute of Business Informatics – Information Engineering
Thesis Supervisor
Dr. Barbara Krumay
October 2024
Master’s Thesis
to confer the academic degree of
Master of Science
in the Master’s Program
Business Informatics
JOHANNES KEPLER
UNIVERSITÄT LINZ
Altenberger Straße 69
4040 Linz, Österreich
jku.at
DVR 0093696
Table of Contents
1. Introduction
2. State of the Field
   2.1. Understanding Artificial Intelligence
      2.1.1. Definitions
      2.1.2. Types of AI
      2.1.3. Machine Learning
      2.1.4. Generative AI
   2.2. EU AI Act
      2.2.1. Introduction
      2.2.2. Background and Milestones
      2.2.3. Scope and Applicability
      2.2.4. Risk Classification
      2.2.5. Enforcement and Penalties
      2.2.6. Critiques and Anticipated Challenges
   2.3. Other Regulatory AI Frameworks and Principles
   2.4. Compliance with the EU AI Act
   2.5. NIST AI RMF – complementary framework
3. Methodology
   3.1. Compliance Measure Survey
      3.1.1. Survey Design
      3.1.2. Data Collection
   3.2. Integrated compliance approach with NIST AI RMF
   3.3. Proposed Action for Identifying EU AI Act Compliance Measures
4. Results
5. Discussion
6. Conclusion
   6.1. Limitations of the Study
   6.2. Future Research
7. References
8. List of Figures
9. List of Tables
10. Appendices
   10.1. LinkedIn Post by Hans Baldinger
Abstract
This study focuses on identifying and suggesting compliance measures for the EU AI Act, a pioneering piece of legislation regulating AI that entered into force in August 2024. The law imposes significant requirements and obligations on AI providers and deployers. AI systems classified as ‘unacceptable risk’ will be banned six months after the law entered into force. AI systems classified as ‘high-risk’ are subject to stringent requirements and obligations, such as risk management systems, data governance, human oversight, technical documentation, and record-keeping. As there is a lack of concrete compliance measures in the literature, a survey on this matter was conducted to provide suggested actions for complying with the law. The literature suggests that the law might hinder innovation in the EU; SMEs in particular might struggle to overcome compliance challenges despite so-called ‘regulatory sandboxes’. However, the literature also suggests that this regulation can lay the foundation for a trustworthy AI landscape. The lack of participation in the survey may indicate that there have so far been few compliance efforts related to the EU AI Act. It is hypothesized that there is a lack of awareness of the EU AI Act and its regulations and obligations; however, the ratio of ‘partial’ to ‘completed’ survey responses may indicate an interest in the law.
Information gathered from participation in a webinar Q&A about standardizing high-risk AI systems shows that experts suggest creating interdisciplinary teams consisting of legal and technical experts to take on the challenge of compliance. Additionally, complying with standards and norms in the provider’s field of activity can simultaneously lead to compliance with the EU AI Act. The ‘compliance by design’ method is proposed by experts to embed compliance from the start and reduce the risk of future violations. Further research in the literature revealed an integrated approach of using existing risk management frameworks, such as the NIST AI RMF, to complement compliance efforts with the EU AI Act. Mapping key principles of the NIST AI RMF to the requirements and obligations of the EU AI Act can be used to identify suggested actions from companies, such as Google DeepMind, which have published their NIST AI RMF template on the internet. This approach was tested on the ‘human oversight’ requirement, which has been criticized for its vague definition. By researching the appropriate NIST AI RMF principle and the corresponding data from Google DeepMind, a measure called human-in-the-loop could be identified, which can support efforts to comply with the human oversight requirement. With this proposed workflow of mapping the EU AI Act requirements to the NIST AI RMF, AI providers and deployers who are subject to high-risk AI system regulations can identify appropriate compliance measures for the EU AI Act.
Disclaimer
For the purposes of readability, ‘Regulation (EU) 2024/1689 of the European Parliament and of
the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending
various Regulations and Directives’ will be referred to as the ‘EU AI Act’, ‘AI Act’, ‘AIA’ or just ‘the
Act’. These abbreviations are used throughout the document to simplify references to the
regulation.
“[…] if one were to draw a parallel with the immediate aftermath of the
adoption of the General Data Protection Regulation (“GDPR”) back in 2016,
one thing is clear: both compliance and enforcement are bound to take
time […]” (Dewitte, 2024)
This thesis aims to provide answers on how companies and organizations have adapted or will adapt their business processes to the EU AI Act, the world’s first legislation that provides a regulatory framework for the use of AI technology inside the EU. It is estimated that, globally, 63% of organizations are about to adopt AI technology in their business processes within the next three years, which indicates a significant need for companies to implement regulatory frameworks and manage AI risks (Securiti, n.d.-b).
The research question arose in mid-2024 from correspondence with the Information Engineering
Institute at the Johannes Kepler University in Linz.
As AI is rapidly evolving in the technological landscape, policymakers around the world try to
create regulatory frameworks to mitigate risks that are associated with the use of artificial
intelligence. The EU AI Act is the first comprehensive legislative framework specifically designed
to regulate AI technology inside the European Union. Its main goal is to ensure safe and lawful use of AI systems that aligns with the EU’s fundamental values of human dignity, freedom, democracy, equality, the rule of law, and human rights.
The main research objective of this thesis is to introduce the reader to the fundamentals of AI, summarize the key points of the EU AI Act, and present the current state of the field regarding possible compliance measures and strategies for this novel regulatory framework. As compliance efforts for the world’s first comprehensive AI law are still ongoing, research in this field is essential to offer companies guidance in navigating this complex legislation.
Furthermore, an empirical survey aimed to gather data on how organizations and companies that use AI technologies are preparing themselves to fulfill the requirements and obligations set out in the AI Act.
What measures are companies and organizations which use or provide AI systems in their
business processes taking to comply with the regulatory framework of the EU AI Act?
According to the Munich Business School, compliance means adherence to the legislation, guidelines, norms or standards, and ethical principles that apply to an organization. Compliance encompasses all actions or measures that ensure an organization reaches this objective. There are different categories of compliance: legal compliance, ethical compliance, and operational compliance. For example, compliance measures for complying with the GDPR include following data protection guidelines, conducting audits, or appointing a data protection officer. Compliance measures for complying with consumer protection laws include following consumer protection guidelines and continuously reviewing the product catalogue to ensure consumer satisfaction and safety. By taking compliance measures, an organization effectively avoids the risk of being subject to penalties or fines and maintains its reputation (Munich Business School, n.d.).
In this thesis, compliance measures for AI organizations and companies that seek compliance with the EU AI Act are researched and proposed.
According to Nikolinakos (2023), the European Commission stated in 2018 that AI ‘refers to
systems that display intelligent behaviour by analysing their environment and taking actions – with
some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-
based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines,
speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced
robots, autonomous cars, drones or Internet of Things applications). We are using AI on a daily
basis, e.g. to translate languages, generate subtitles in videos or to block email spam. Many AI
technologies require data to improve their performance. Once they perform well, they can help
improve and automate decision making in the same domain. For example, an AI system will be
trained and then used to spot cyberattacks on the basis of data from the concerned network or
system’ (European Commission, 2018, as cited in Nikolinakos, 2023).
2.1.2. Types of AI
Bartneck et al. (2021) mentioned that John Searle divided AI into two distinct categories in 1980: weak and
strong AI. Weak AI is created to solve specific, narrowly defined tasks and is not capable of solving
related problems. Most modern AI systems fall into the category of weak AI. Strong AI would have
a mind equivalent to that of a human person. Most researchers in the field are not mainly focused
on developing strong AI. Their goal is to develop and create machines that are capable of solving
a wide range of problems, leading to general intelligence. However, no AI system has yet reached
this level of general intelligence (Bartneck et al., 2021).
2.2. EU AI Act
2.2.1. Introduction
The Artificial Intelligence Act of the European Union (EU AI Act) is a regulation concerning artificial
intelligence technology. It provides a common regulatory and legal framework for all member
states of the EU (EU AI Act Proposal, 2021). As of now, the EU is the first jurisdiction in the world
to create and pass a law which regulates the supply and use of AI systems. The goal of the AI Act
is to accelerate innovation and employment and simultaneously to emphasize the need to defend
democratic principles and the rule of law, to protect the environment and to ensure protection of
health, safety and fundamental rights of the EU (Ho & Caals, 2024). In order to ensure proper
enforcement of the law, governing entities were established. For example, an ‘AI Office’ was
created within the European Commission, which consists of a scientific panel of independent
experts. Furthermore, an ‘AI Board’ was formed with representatives of the member states, along with an
advisory forum for stakeholders (Council of the EU, 2024).
The law defines different requirements and obligations which certain AI systems have to adhere
to. Some AI systems or practices will be banned, like certain biometric categorization and
identification systems (e.g. social scoring). On the 2nd of August 2025 (one year after the law entered into force), all obligations for general-purpose AI models become binding. On the 2nd of August
2026 the bulk of the remaining requirements for high-risk AI systems become binding (Garrod et
al., 2024). Non-compliance with the law can expose companies and organizations to fines of up
to a maximum of 30 million euros, or up to 6% of the total yearly global turnover for the previous financial year, whichever is higher. However, the member states of the EU have to ensure that the penalties are properly implemented by the date of application and have to take the size and interests of small and medium-sized enterprises into account (Nikolinakos, 2023).
The initial proposal of the AI Act also responded to proposals made by the Conference on the
Future of Europe (COFE), which were published in the report on the final outcome of May 2022.
The 12th proposal of the report on enhancing the EU’s competitiveness in strategic sectors had
significant influence. Its objective was to highlight the importance of strengthening the
competitiveness and resilience of the economy in the EU by creating an entrepreneurial culture of
innovation. The 5th proposed measure states: “Promoting policies for a strong industrial base and
innovation in key enabling technologies, […]”. Furthermore, the EU AI Act is also a response to
the 33rd proposal for a safe and trustworthy digital society with regards to cyber security and
disinformation. It states measures should be taken like ensuring sanctions and quick effective
enforcement in the EU states in case of cybercriminal activity, countering disinformation for
example on social media platforms and ensuring that humans are ultimately in control. The 35th
proposal promotes digital innovation and states that human oversight, trustable and responsible
use of AI, and safeguards for transparency have to be ensured. Proposal 37 aims to ensure citizens’ access to information, including for persons with disabilities, by using digital tools and AI
(Conference on the Future of Europe, 2022). On the 6th of December 2022, the European Council
adopted its position on the artificial intelligence act. The goal of the new proposed regulation is to
ensure that AI systems used inside the EU are safe and align with existing laws and fundamental
values of the EU (Council of the European Union, 2024).
In 2023, the European Parliament adopted its negotiating position on the AI Act, with 499 votes in favor
of the changes. The European Council and the European Parliament reached a provisional
agreement later that year (Future of Life Institute, n.d.-b). The European Parliament finally passed
the EU AI Act on the 13th of March 2024, with 523 votes in favor, 46 votes against and 49
abstentions (Kroet, 2024). On the 21st of May 2024, the European Council approved the law (Council of the European Union, 2024). On the 12th of July 2024, the AI Act was published in the Official Journal of the European Union, which meant that it entered into force on the 1st of August 2024.
According to Werkmeister et al. (2024) the term provider is relevant in relation to AI systems but
also to general purpose AI (GPAI) models. An organization becomes a provider when they develop
a GPAI model or an AI system. They also become providers when they let third parties develop
a GPAI model or AI system and then place the product on the market under their own name or trademark. The authors state that problems with determining whether an entity is a provider can arise when organizations alter an already existing AI system to the extent that it can be declared a ‘new’ AI system. The term ‘deployer’ applies to entities that use AI systems (but not
GPAI models) under their authority and are located inside the EU. Distributors are actors that
place an AI system on the EU market. Importers are organizations that place an AI system on the
EU market, which carries the name or trademark of a legal or natural entity from a country outside
of the EU. The definitions of importers and distributors are only relevant when it comes to AI systems, not GPAI models (Werkmeister et al., 2024).
Due to the broad geographical scope of the EU AI Act, it applies not only to entities based within
the EU, but also to those outside of the EU that provide AI systems used within the territory of the
EU. The goal of this extraterritorial scope is to ensure that AI systems that impact EU citizens are subject to the same regulatory standards, regardless of whether the provider is situated inside or outside of the EU. Werkmeister et al. (2024) describe that, as with other EU legislation, the EU AI Act is not limited to actors who are located within the EU. The law is also applicable to providers and deployers of AI which are located in a third country when the output produced by their AI systems is used inside of the EU. Generally speaking, obligations and requirements also apply to
providers of AI and GPAI systems outside of the territorial scope of the EU, when their systems
have an effect on citizens of the EU (Werkmeister et al., 2024).
There are different regulations in the EU AI Act that apply according to the risk level of the AI
system. Limited and minimal risk AI systems are subject to less stringent requirements but still
must comply with transparency obligations. The requirements and voluntary best practices for these systems are mainly found in Chapter IV (Article 50) and Chapter X. High-risk AI systems, which have significant implications for the health, safety, or fundamental rights of EU citizens, are subject to a set of obligations and regulations defined in Chapter III (Articles 6-49).
The act contains a broad territorial and personal scope. However, it doesn’t apply to areas outside
the scope of EU laws and doesn’t have influence on member states’ competence regarding
national security. Additionally, it excludes AI systems that are used, put into service, or placed on the market exclusively for military, defence, or national security purposes.
In order to support innovation in the field of AI, regulatory sandboxes have been established. These
regulatory sandboxes have been particularly created for SMEs (Small and medium-sized
enterprises) and start-ups. The Commission encourages national authorities to create controlled
environments, which are called AI regulatory sandboxes, to allow developers to test and validate
AI technologies under supervision before market release (Nikolinakos, 2023).
Truby et al. (2022) state that innovative technology can be validated and demonstrated in a real-
world environment with real consumers. Additionally, direct communication between policymakers
and developers creates a more collaborative and supportive environment in the AI field. Testing
an AI system in a controlled environment like regulatory sandboxes also mitigates unintended
consequences and risks, like undetected security vulnerabilities. Another example of a regulatory sandbox can be found in the financial sector, where sandboxes can help avoid flaws that could have negative effects on the global economy.
The authors claim that supplementing AI development within regulated sandboxes can promote
innovation. Strict liability alone could have negative effects on investments in AI due to its
unpredictable risks and the involvement of multiple stakeholders in AI innovation and
development. The complex nature of the technology and the independent nature of AI make liability less effective (Truby et al., 2022).
These risk levels have corresponding articles in the EU AI Act, in which the majority of the
respective regulation is stated:
• Unacceptable Risk → Art. 5
• High Risk → Art. 6-49
• Limited or Transparency Risk → Art. 50
• Minimal Risk → Art. 95-96
Important to note is that some sources in the literature label specific AI risk-levels in different ways.
For example, in Figure 1: EU AI Act risk-based approach (Madiega, 2024), the author labels limited-risk AI systems as ‘transparency risk’ systems, whereas Barenkamp (2024) labels the corresponding systems differently.
According to Edwards (2022) the legislation is mostly concerned with ‘high-risk AI’. Although the
categorization is described as a ‘risk-based’ scheme, it lacks a sliding scale of risk. He
mentions that merely one category, the high-risk category, is highly regulated, with limited risk
levels only having to comply with some minor transparency requirements. There are a number of
‘red-lines’, which have rhetorical effects but will have limited application in practice (Edwards,
2022).
Compared to the original proposal of the EU AI Act from 2021, the final regulation that entered
into force also takes General-purpose AI models into account. Madiega (2024) reimagined the
pyramid by also adding the General-purpose AI models (GPAI) to the bottom of the illustration.
GPAI systems in general have to comply with transparency requirements, whereas GPAI systems
with systemic risks additionally have to follow risk assessment and mitigation.
These transparency obligations require up-to-date technical documentation to be available for
providers of AI systems that use the GPAI model. GPAI model providers are obliged to put policies
in place to respect Union copyright law, for example by ‘watermarking’. They also have to provide
a publicly available and detailed summary of the data they used to train the model. If the provider
is located outside of the EU, a representative has to be appointed inside the EU. Open source and
free AI systems are exempted from some obligations, such as ‘disclosure of documentation’, as
they have positive effects on research, innovation, and competition.
Madiega (2024) continues by stating that GPAI models that are trained using a cumulative amount of computation exceeding 10^25 floating-point operations (FLOPs) are considered to have ‘high-impact capabilities’. These GPAI models have to be constantly assessed and their risks must be mitigated. Systemic-risk GPAI models are, according to Madiega (2024), systems that can have negative effects on
public health, safety, public security, fundamental rights, or society in general.
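To make the compute threshold concrete, the following is a minimal sketch of how a provider might roughly check whether a model falls above the 10^25 FLOP mark. The 6 × parameters × training-tokens approximation for cumulative training compute is a common rule of thumb and is used here only as an assumption for illustration; it is not prescribed by the AI Act or by the sources cited in this section.

```python
# Minimal sketch: estimating whether a GPAI model's cumulative training compute
# crosses the 10^25 FLOP threshold associated with 'high-impact capabilities'.
# The 6 * parameters * tokens heuristic is an illustrative assumption, not a
# method prescribed by the EU AI Act or by this thesis.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # cumulative training compute threshold

def estimated_training_flop(num_parameters: float, num_training_tokens: float) -> float:
    """Rough estimate of cumulative training compute for a dense model."""
    return 6.0 * num_parameters * num_training_tokens

def presumed_high_impact(num_parameters: float, num_training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the threshold."""
    return estimated_training_flop(num_parameters, num_training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOP

if __name__ == "__main__":
    # Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
    params, tokens = 70e9, 2e12
    print(f"Estimated training compute: {estimated_training_flop(params, tokens):.2e} FLOP")
    print("Presumed high-impact capabilities:", presumed_high_impact(params, tokens))
```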
GPAI can process a multitude of types of data, like audio, video, text, and physical data. Examples
of influential general-purpose AI systems are AlphaStar, Chinchilla, Codex, DALL-E 2, Gopher,
GPT-3, MuZero, PaLM and Wu Dao 2.0. GPAI are also often called ‘foundation models’ because,
for example, a single language processing AI system can be used as the foundation for several
other applied AI systems like chatbots, ad generation, decision assistants, spambots, etc. (Future of Life Institute, 2022).
Chapter 2, Article 5 of the AI Act focuses on unacceptable risk AI systems. It states that the
legislation prohibits specific uses of artificial intelligence, namely systems which:
• Exploit vulnerable people groups (e.g. disabled or elderly people) or manipulate people’s
decisions.
• Evaluate or classify people based on their social behavior or personal traits.
• Generate predictions on a person’s risk of committing a crime.
• Scrape facial images from public surveillance systems or the internet.
• Infer data on emotions in the workplace or educational institutions.
• Classify people based on their biometric information.
Exceptions are made for law enforcement purposes, like searching for missing people or
preventing terrorist attacks.
Zhong et al. (2024) stated in their study that the terms ‘subliminal’, ‘manipulative’, and ‘deceptive’
in Article 5 are ambiguous and therefore might pose challenges for practical application.
Subliminal techniques are methods that may influence individuals below their ‘conscious
perception threshold’ like tachistoscopic presentation, masked stimulus, and conceptual priming.
Tachistoscopic presentation is a technique where visual stimuli are presented for a very short time
with the goal of unconsciously influencing an individual. Conceptual priming is used to expose
individuals to stimuli that convey a certain meaning. According to Zhong et al. (2024), there is yet
no evidence for individuals being exploited by subliminal techniques of AI systems. Manipulative
techniques will distort the decision-making process of individuals which may subsequently lead to
decisions against their best interest. These techniques can include representativeness,
availability, anchoring effect, status quo bias, and social conformity. Deceptive techniques provide
false information and can also distort decision-making.
The entirety of Chapter III: High-Risk AI systems, Article 6-49 of the EU AI Act is dedicated to high-
risk AI systems. If an AI system is used as a safety component of a product, or if it is the product
itself that is covered by EU regulation, it can be considered a high-risk system. These products
have to be assessed by a third party before they can be utilized or sold. If a system doesn’t pose
a risk to citizens’ safety, health, or rights, then it might not be considered as a high-risk system. A
provider who considers a system not to be high-risk has to put forth a request and provide
documentation of assessment. The European Commission can change the conditions for a high-
risk classification; however, any changes must not decrease the level of protection for rights,
safety, and health. Article 6, which defines the classification of high-risk AI systems, will apply from the 2nd of August 2026 (EU AI Act, 2024).
Systems mentioned in Annex III of the EU AI Act are considered high-risk systems and therefore
additional obligations apply to them. These systems include:
• Biometrics:
o Biometric categorization systems based on sensitive or protected attributes
(religion, race, political opinion, etc.).
o Systems that detect emotions.
• Critical infrastructure:
o AI systems that manage digital infrastructure, road traffic, or utilities (electricity,
gas, heating, or water).
• Education and vocational training:
o AI systems that can aid in determining entry into educational institutions like
universities.
o Systems that evaluate learning results or guide the learning process.
The European Commission can change the list of high-risk AI systems if systems meet specific conditions. This applies to AI systems that are used in sectors already listed and pose a risk to health, safety, or rights on the same level as or greater than current high-risk systems. The EU Commission will consider the following aspects of AI systems: the purpose of a system, how much it
is used, the data it processes, its autonomy, previous harm history, potential harm, and the ability
to reverse or correct outcomes. However, the Commission can also remove systems from the list
if these factors do not apply anymore (EU AI Act, 2024).
Furthermore, Article 6 for classification of high-risk AI systems states that a system is high-risk if
“(a) the AI system is intended to be used as a safety component of a product, or the AI system is
itself a product, covered by the Union harmonisation legislation listed in Annex I;”
In the journal article of Wagner et al. (2024) ‘Navigating the Upcoming European Union AI Act’,
the requirements for high-risk AI systems are illustrated. The authors explain that there are nine
key requirements for high-risk systems. However, in the AI Act there are seven key requirements
(Articles 9-15) for high-risk systems. The following information is retrieved from the EU AI Act and
summarized:
Furthermore, there are specific obligations for providers of high-risk AI systems. The key
obligations that have to be met before placing a high-risk system on the market are found in
Articles 16-18, 49, 72.
Obligations for high-risk AI deployers and users can be found in Article 26. It states that use of the
AI system has to be in accordance with the instructions of the AI system provider. This prevents
any misuse that could lead to safety hazards. Furthermore, continuous monitoring helps in early
risk or malfunction detection. The obligation for record-keeping ensures that logs and records
provide a traceable history of the system. This is important for accountability, compliance audits,
and understanding the system’s decision-making processes. Additionally, as with high-risk AI
providers, users or deployers have to ensure human oversight of the system as well. This
obligation prevents harmful decisions from being made by the system. Transparency obligations
ensure that stakeholders, such as users and affected individuals, of the system are thoroughly
informed and understand the AI system’s operation.
AI systems labeled as limited or transparency risk systems have to comply with transparency
regulations. Limited risk AI systems are generally defined in Article 50: Transparency Obligations
for Providers and Users of Certain AI Systems and GPAI Models of the EU AI Act. These
transparency regulations include labelling, or disclosing to the user that content has been
manipulated. According to Edwards (2022), the utility of this categorization is debatable, both
technically and in terms of the overlap with the General Data Protection Regulation, as the GDPR
already has certain similar transparency requirements for the use of profiling and automated
decision-making. The author states that Article 52 covers three main types of limited-risk AI systems: chatbots, emotion recognition systems, and deepfakes.
When it comes to chatbots, solely providers, but not the users, have the obligation of transparency.
Providers of these systems have to design them in such a way that the user is aware that they are
interacting with a machine and not a human person. So, if a provider sells a chatbot application to
a customer, only the provider has to ensure this transparency regulation. In contrast to chatbots, for emotion identification systems and deepfakes the transparency obligation falls on the user (Edwards, 2022). Article 50 states that companies must inform the users of their AI system, unless it is obvious or the AI is used for legal purposes like crime detection.
Minimal risk AI includes systems like spam filters or AI-enabled video games. The European
Commission proposes that these are mainly regulated by voluntary codes of conduct (Edwards,
2022). Systems that present only a minimal risk to citizens will not be subject to additional
regulations and are only subject to already applicable EU regulations, such as the GDPR (General
Data Protection Regulation) (Madiega, 2024).
According to Madiega (2024), a number of actors at national and EU level are responsible for
implementing and enforcing the legislation. The member states of the EU must establish or
designate at least one market surveillance authority and one notifying authority to ensure the application and implementation
of the EU AI Act. Non-compliance will result in large fines. The European Commission, the AI
Board, the AI Office, EU standardization bodies like CEN and CENELEC, an advisory forum and
independent experts in the field of AI will aid the implementation of the law (Madiega, 2024).
Penalties range from market restrictions to substantial fines. Fines of up to 7% of annual global
turnover can be imposed for the use of prohibited AI systems. Non-compliance with the
requirements for high-risk systems is punishable by fines of up to 3% of annual global turnover
(Barenkamp, 2024). However, the final regulation states that fines for prohibited AI systems are
either 7% of the annual global turnover or €35,000,000, whichever is higher. For high-risk AI
systems, the fines are either 3% of the annual turnover or €15,000,000, depending on which
amount is greater. Additionally, providers of general-purpose AI (GPAI) systems, in case of non-
compliance, also face fines of up to 3% of their total worldwide turnover from the previous year or
€15,000,000, whichever is higher. There are special considerations for small and medium-sized enterprises, including startups: for them, any fine imposed will not exceed the lower of the two values (the percentage of turnover or the fixed amount) outlined in the regulation (EU AI Act, 2024).
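As a simple illustration of how these ceilings combine, the following sketch computes the upper bound of a fine from the figures named above. The turnover values in the example are hypothetical, and actual fines are set case by case by the competent authorities.

```python
# Minimal sketch of the fine ceilings described above: for most actors the cap is
# the HIGHER of the fixed amount and the turnover percentage; for SMEs and
# startups it is the LOWER of the two. Turnover figures below are hypothetical.

def fine_cap(annual_global_turnover_eur: float, fixed_cap_eur: float,
             turnover_share: float, is_sme: bool) -> float:
    """Upper bound of an administrative fine for one infringement category."""
    percentage_cap = turnover_share * annual_global_turnover_eur
    return min(fixed_cap_eur, percentage_cap) if is_sme else max(fixed_cap_eur, percentage_cap)

if __name__ == "__main__":
    turnover = 2_000_000_000  # hypothetical 2 billion EUR annual global turnover
    print("Prohibited practices cap:", fine_cap(turnover, 35_000_000, 0.07, is_sme=False))
    print("High-risk obligations cap:", fine_cap(turnover, 15_000_000, 0.03, is_sme=False))
    print("Same infringement, SME:", fine_cap(10_000_000, 15_000_000, 0.03, is_sme=True))
```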
2.2.6. Critiques and Anticipated Challenges
Barenkamp (2024) hypothesized that, while the law addresses specific risks when regulating generative AI models, it also harbors the danger of potential distortions of competition for small
and medium-sized companies in the EU. The AI Act contains a large number of strict and
complicated regulations that could jeopardize the competitiveness of the EU in the technology
sector. AI is one of the key enablers of digital transformation, and the law could lead to the EU lagging behind development in other regions. At the current stage, the AIA still contains a lot of ambiguous terms and provisions.
Shahlaei & Berente (2024) stated that the ACEA (European Automobile Manufacturers'
Association) criticized the definition of artificial intelligence in the EU AI Act and by the High-Level
Expert Group on Artificial Intelligence (AI HLEG) as too vague and broad. The definition is based
on a ‘phenomenological’ description that aims to provide a comprehensive definition applicable in different contexts and industries. The ACEA warns that this could lead to traditional software systems and narrowly defined AI applications being regulated as AI. To avoid these
misclassifications, the ACEA calls for a precise and narrow definition of high-risk AI systems that
solely applies to specific AI applications and not generally to all applications that include AI
systems (Shahlaei & Berente, 2024).
According to Zhong (2024), the AI Act lacks precise definitions for terms mentioned like ‘subliminal
stimulus’, ‘manipulative’, or ‘deceptive’. This may lead to challenges when it comes to practical
application of the regulation. Another study by Zhong clarifies and interprets these ambiguous terms in Article 5 of the EU AI Act, as discussed earlier in this chapter. The complexity of banning harmful AI practices under the EU AI Act is addressed, particularly when it comes to psychological harm, which can be more difficult to define and measure than other harm categories. However, the author also states that techniques like social
conformity, which would be considered manipulative in the EU AI Act, can have positive benefits.
For example, it can help build trust in AI but also pose risks if used for spreading misinformation.
Navigating nuances in the AI Act will require industry-specific legal standards and expert
consultation. Zhong (2024) proposes therefore that in order to effectively implement the EU AI Act
in real life, an interdisciplinary governance framework is needed. An interdisciplinary workflow
mechanism would coordinate experts from diverse fields, similar to an orchestra conductor who is
directing a piece of music. This would for example involve psychologists and decision scientists,
who define prohibited AI practices. Ethicists could determine ethical lines that should not be
crossed. Computer scientists have the ability to assess technical feasibility and detect prohibited
tactics. Clinical psychologists can identify psychological harm and evaluate impacts. Legal experts
can ensure compliance with regulations by recommending measures. Compliance and ethical
safeguards can additionally ensure robust governance, like musical instruments in an orchestra.
These safeguards could be AI assurance mechanisms or user education (Zhong, 2024).
One key critique of the EU AI Act mentioned in a webinar of the Merantix AI Campus called “The
Finalised EU AI Act: Implications for Businesses, Engineers and Entrepreneurs” was that the regulation causes disadvantages for startups and SMEs. The EU AI Act could disproportionately
impact startups and SMEs, as these smaller companies may struggle with the compliance costs
compared to larger enterprises. Furthermore, definitions, particularly of what constitutes an AI
system, are criticized for being too broad and vague. This can lead to uncertainty and makes it
difficult for companies to determine whether their system falls under the Act, which subsequently
can complicate compliance efforts. Additionally, there is concern that the stringent requirements
of the AI Act could stifle innovation, particularly in high-risk AI applications. The regulatory burden
of the Act might deter companies and providers of AI systems, especially within the EU, from developing such applications in the first place.
In February 2024, the California Senate Bill 1047 (SB-1047), called “Safe and Secure Innovation for Frontier Artificial Intelligence Systems”, was introduced. According to Murray (2024), this bill
aims to regulate the development and deployment of advanced AI systems and models to ensure
public security and safety. Compared to the EU AI Act, the main focus of the bill is on the developer
rather than the end-user. It includes several key provisions:
• Scope of AI Systems Covered:
o It targets models whose training requires more than 10^26 floating-point operations (FLOPs) of compute.
• Safety Assessment Requirement:
o Developers have to conduct safety assessments before the training process. This
should prevent hazardous activities of the model.
• Third-Party Model Testing:
The White Paper “proportionate and pro-innovation regulatory framework” for AI was published by
the UK on the 29th of March 2023. There have been early efforts to govern AI technology in the
UK dating back to 2018. A framework called National AI Strategy was published in September
2021, which proposed a 10-year vision for maintaining the UK’s status as an AI superpower. This framework aims to create an environment for trustworthy AI systems without hindering innovation.
The goal of the National AI Strategy is to develop the “most trusted and pro-innovation system of
AI governance in the world”. The White Paper of March 2023 focuses on context-specific and
initially non-statutory governance. This means that regulators and policymakers have no new
enforcement powers (Roberts et al., 2023). As seen in Figure 4, ‘a pro-innovative approach to AI
regulation’ is categorized as ‘Principles’. Therefore, these principles are not legally binding or mandatory.
However, this proposal can support further policymaking decisions.
The regulatory AI framework of the UK has several key principles. These principles are designed
to be cross-sectoral and adaptable by various policymakers.
In order to implement and coordinate these principles the AI White Paper established several main
government functions to manage and monitor AI risks. These measures include a central
regulatory guidance to help implement the principles, a cross-economy AI risk register, which
supports risk evaluation, ‘horizon scanning’ techniques to identify AI risks in the future,
coordination functions to clarify responsibilities of regulators and promote cohesive guidance,
innovation support to help companies navigate through these regulatory complexities and
international alignment, which focuses on aligning the UK’s regulatory frameworks with global
initiatives (Roberts et al., 2023). Furthermore, the White Paper proposes a sector-led governance
approach, which allows for context-specific regulation. This means that the regulation can adapt
to the diverse risks associated with different AI capabilities and abilities. This flexibility is essential
regarding the different ethical implications and risks associated with different AI sectors, from high-risk applications such as medical devices to low-risk ones such as logistics systems (Roberts et al., 2023).
Compared to the EU AI Act, the proposed UK framework for AI regulation proposes non-statutory
principles which are not legally binding and focuses on a sector-specific instead of a risk-based
approach. The AI White Paper emphasizes the clear focus on innovation and minimal regulatory
burden to increase competitiveness on the global market.
In the report GAO-21-519SP, the U.S. Government Accountability Office (2022) presents its AI Accountability Framework. It is a framework that provides
accountability structures for federal agencies and other entities to ensure responsible and
transparent use of artificial intelligence technologies. It may have complementary characteristics
to the EU AI Act, particularly in the areas of governance, data quality, transparency, and risk
management. It is centered around U.S. accountability standards, which may align with the goals
of the EU AI Act. It is organized around four key principles:
1. Governance:
This key principle emphasizes the need for clear goals, roles, and responsibilities for AI
systems, promoting values and ethical standards, involving multidisciplinary
stakeholders, and implementing a comprehensive risk management plan. It ensures
organizational and system-level accountability through internal controls, compliance, and
transparency.
This key principle closely corresponds to the requirement of risk management systems for high-risk AI systems in the EU AI Act.
2. Data:
High-quality, reliable, and representative data are crucial for AI model development and
operation. This principle focuses on documenting data sources, assessing reliability,
addressing biases, and ensuring data security and privacy.
This principle aligns with the data governance requirement of the EU AI Act, in which bias mitigation, data security, and privacy are regulated.
Each key principle of the framework provides a set of practices and questions for auditors and third-
party assessors to help maintain accountability throughout the AI systems lifecycle (U.S.
Government Accountability Office, 2022). While the framework itself does not explicitly mention
the EU AI Act, its suggested actions provide a thorough basis for actors within the scope of the
EU to ensure compliance with its requirements and obligations. The focus of both AI regulation
frameworks is providing trustworthy AI.
A study was conducted by Walters et al. (2024) to research and determine on which aspects
organizations should focus in order to comply with the EU AI Act. A questionnaire was developed
to collect quantitative but also qualitative data on this matter. The study also indicated that
organizations struggle with training their staff on data and model bias. The authors also state that,
as of now, most research is theoretical in nature and focuses on the content quality of the EU AI
Act rather than practical application. There is a notable research gap regarding how organizations
will comply with the AIA and their preparedness for it. Current state-of-the-field literature covers compliance with already existing legislation like the GDPR and critical analysis of the EU AI Act’s
content but there is a lack of insight into how organizations will navigate compliance with this
regulation (Walters et al., 2024).
The authors continued by hierarchically breaking down the relevant key subjects of EU AI Act
compliance for high-risk AI providers. This categorization was the foundation of their
questionnaire. Furthermore, focus was put on the most essential parts for organizations.
A three-point range was used to rate the fifteen responses from the questionnaire. The data was
rated utilizing a rule-based system. The following categories were used: data and model internals, technical documentation, user communication, model monitoring, and risk management. The
authors used an automated scoring process.
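The exact scoring rules of Walters et al. (2024) are not reproduced here, so the following is only a rough sketch, under assumed keyword rules, of what an automated, rule-based three-point scoring per category could look like. The category names follow the study, while the rules and example answers are hypothetical.

```python
# Hedged sketch of a rule-based, three-point (0-2) scoring of questionnaire
# responses per compliance category, loosely following Walters et al. (2024).
# The keyword rules and example answers below are hypothetical placeholders.

CATEGORIES = [
    "data and model internals",
    "technical documentation",
    "user communication",
    "model monitoring",
    "risk management",
]

# Hypothetical rules: 2 points if a "strong" keyword appears, 1 for a "partial"
# keyword, otherwise 0.
RULES = {
    "risk management": {"strong": ["risk register", "iso 31000"], "partial": ["ad hoc"]},
    "technical documentation": {"strong": ["annex iv"], "partial": ["informal notes"]},
}

def score_response(category: str, answer: str) -> int:
    """Return 0, 1, or 2 for one free-text answer in one category."""
    rules = RULES.get(category, {"strong": [], "partial": []})
    text = answer.lower()
    if any(kw in text for kw in rules["strong"]):
        return 2
    if any(kw in text for kw in rules["partial"]):
        return 1
    return 0

def score_questionnaire(answers: dict) -> dict:
    """Score one questionnaire response: one 0-2 score per category."""
    return {cat: score_response(cat, answers.get(cat, "")) for cat in CATEGORIES}

if __name__ == "__main__":
    example = {
        "risk management": "We maintain a risk register per ISO 31000.",
        "technical documentation": "Only informal notes so far.",
    }
    print(score_questionnaire(example))
```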
Figure 8: Potential impact of MBSE on complying with requirements (Vereno et al., 2024)
As seen in Figure 8, compliance with the EU AI Act requirement for technical documentation can be greatly supported by using MBSE (Model-Based Systems Engineering). The documentation must be prepared before market placement or service initiation and must detail the system’s complexity, including software architecture and algorithms. MBSE is
a methodology suitable for this documentation due to its focus on creating detailed models that
integrate requirements, use cases, and technical architecture. However, research has shown that
there is a lack of a unified approach to AI-specific modelling within MBSE (Vereno et al., 2024).
Schuett (2023) describes measures for risk management in order to comply with the risk
management requirement of the EU AI Act. These risk management measures are derived from
Article 9 of the EU AI Act itself but also referenced from ISO/IEC Guide 51 to provide additional
context. The risk management process must be repeated until all risks can be described as
acceptable. Therefore, it must be designed in an iterative manner, as shown in Figure 9.
Firstly, risk identification must be conducted by using information to identify potential sources of
harm or hazards. The AI Act does not specify how risks should be identified. However, techniques
to achieve this are taxonomies, incident databases, and scenario analysis. Risk identification is followed by risk analysis. According to the author, it is unclear what the AIA means by risk analysis; however, the term typically encompasses both risk identification and risk estimation.
Risk estimation is the process of estimating the probability and severity of hazards using
techniques like Bayesian networks and influence diagrams. In the risk evaluation process, it is
determined whether a risk is acceptable or not. Schuett (2023) states that this step only covers
risks from intended uses, foreseeable misuse, and those identified during post-market monitoring.
If a risk is not deemed acceptable, actions are taken to reduce the identified and assessed risks. This process is also called risk response or risk treatment; it is iterative in nature and requires continuous reassessment and measures.
The risk management process should run throughout the entire life cycle of a high-risk AI
system. It must be planned and executed at various stages of the AI system’s lifecycle. Providers
should perform an initial risk evaluation at the beginning of the development and continue doing
so in iterations (Schuett, 2023).
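To illustrate the iterative nature of this process, the following minimal sketch loops through identification, estimation, evaluation, and treatment until all residual risks fall below an acceptability threshold. The data model, the probability-times-severity risk level, the threshold, and the example hazards are assumptions made for illustration; they are not part of Article 9 or of Schuett (2023).

```python
# Minimal sketch of the iterative risk management loop described by Schuett (2023):
# identify -> estimate -> evaluate -> treat, repeated until every remaining risk is
# deemed acceptable. Data model, thresholds, and example hazards are assumptions.

from dataclasses import dataclass

@dataclass
class Risk:
    hazard: str
    probability: float  # 0.0 - 1.0, estimated
    severity: float     # 0.0 - 1.0, estimated

    @property
    def level(self) -> float:
        return self.probability * self.severity

ACCEPTABLE_LEVEL = 0.05  # assumed acceptability criterion

def identify_risks() -> list:
    # In practice: taxonomies, incident databases, scenario analysis.
    return [Risk("biased credit scoring output", 0.4, 0.6),
            Risk("log tampering", 0.1, 0.3)]

def treat(risk: Risk) -> Risk:
    # Placeholder mitigation: assume a control halves the probability of harm.
    return Risk(risk.hazard, risk.probability * 0.5, risk.severity)

def risk_management_cycle(max_iterations: int = 10) -> list:
    risks = identify_risks()
    for _ in range(max_iterations):
        if all(r.level <= ACCEPTABLE_LEVEL for r in risks):
            break  # all residual risks are acceptable
        risks = [treat(r) if r.level > ACCEPTABLE_LEVEL else r for r in risks]
    return risks

if __name__ == "__main__":
    for r in risk_management_cycle():
        print(f"{r.hazard}: residual level {r.level:.3f}")
```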
Scantamburlo et al. (2024) recommend certain measures companies should take to prepare
for the EU AI Act. Firstly, they must make sure their legal department studies the AI Act thoroughly
and focuses on definitions of the AI risk levels and their use cases. Integrate the new EU AI Act
compliance process into already existing compliance processes to minimize the impact on the
resources and workload. All stakeholders and interest groups must be informed about the
importance of the EU AI Act through training and awareness initiatives. Policies must be defined
for actors who are responsible for the AI system. A traceability framework must be introduced
during AI design processes, so entities responsible for essential decisions can be identified.
Furthermore, organizations must be cautious when using personal data. Documentation of AI use
cases, systematic monitoring, log-keeping, and periodic reviews must be conducted. All dependencies and integrations with third-party software and services, as well as copyright aspects, must be reviewed and documented.
The Future of Life Institute (2022) recommends several measures for GPAI providers. Providers
must ensure that their GPAI systems comply with the fundamental requirements mentioned in the EU AI Act, especially those specified in Article 15 of Chapter 2. The system must be accurate, robust, and secure. As with high-risk AI systems, risk assessments must be conducted to anticipate potential misuse of the system. Identification of risks related to health, safety, and
fundamental rights must take place before placing the product on the market. Risk identification
and assessment must be a continuous and regular process. Clear and comprehensive instructions
and information must be provided for downstream users. GPAI providers should take steps to
mitigate potential legal liabilities. For example, by creating contractual agreements with
downstream users to clarify responsibilities and liabilities related to performance and safety. A
focus on ethical implications must be set, to avoid biases and to ensure that the AI does not
propagate harmful and discriminatory content. Regular audits and updates to training data can
support providers in maintaining ethical standards. In some cases, if a GPAI system is modified
or adapted for specific applications, the user or integrators might also be considered providers.
This creates a shared responsibility, which makes close cooperation of the stakeholders essential
in order to achieve compliance with the law (Future of Life Institute, 2022).
In a webinar of IBM about what the EU AI Act means for businesses and how to prepare, specific
measures are mentioned. Hans-Petter Dalen, leader of IBM’s AI governance initiative in Europe,
Middle East, and Africa, mentions that the seven requirements for high-risk use cases are
formulated quite loosely. This led to ten requests for technical standards, which two of the European standardization organizations are currently developing. Implementation of these
technical standards will be the most efficient way to achieve conformity.
Practical steps for companies are suggested, such as starting the conformity process by creating an inventory of all AI systems in use and assessing their risk levels accordingly. Establishing cross-company AI governance frameworks and an AI Ethics Board (such as the AI Ethics Board IBM established in 2017) is crucial for ongoing compliance.
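A possible starting point for such an inventory is sketched below. The record fields, the example systems, and their risk assignments are illustrative assumptions, not a format suggested by IBM or prescribed by the AI Act.

```python
# Hedged sketch of an AI system inventory with assigned EU AI Act risk levels,
# as suggested in the IBM webinar. Fields and example entries are illustrative
# assumptions, not a prescribed format.

from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"   # Art. 5
    HIGH = "high"                   # Art. 6-49
    LIMITED = "limited"             # Art. 50
    MINIMAL = "minimal"             # Art. 95-96

@dataclass
class AISystemRecord:
    name: str
    business_owner: str
    purpose: str
    risk_level: RiskLevel

inventory = [
    AISystemRecord("cv-screening", "HR", "rank job applications", RiskLevel.HIGH),
    AISystemRecord("support-chatbot", "Customer Care", "answer FAQs", RiskLevel.LIMITED),
    AISystemRecord("spam-filter", "IT", "filter inbound mail", RiskLevel.MINIMAL),
]

# High-risk entries drive the bulk of the conformity work (Articles 9-15, 16-18, 49, 72).
high_risk = [s.name for s in inventory if s.risk_level is RiskLevel.HIGH]
print("High-risk systems requiring conformity work:", high_risk)
```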
Additionally, the hosts of the webinar emphasized the importance of starting compliance efforts
early, upskilling teams, and integrating AI governance into the company’s overall strategy. They
also discussed the potential rise of roles like Chief AI Officer as companies increasingly focus on
AI governance.
IBM’s Watson X governance platform was presented as a tool to help businesses manage AI
responsibly, ensuring compliance through lifecycle management, risk governance, and
monitoring. The platform supports various technical measures to address biases and ensure
transparency (Dalen et al., 2024).
In the webinar of the Merantix AI Campus about the EU AI Act, it is suggested that the development
of a strong compliance team for this matter is crucial. This team should have diverse expertise,
including individuals knowledgeable in machine learning (ML) and those with a strong
understanding of legal frameworks. The team doesn’t necessarily have to consist solely of lawyers; it can also include professionals in the AI field with a background in privacy, technology, or even political
science. The importance of thorough documentation of AI systems is emphasized. This includes
information on how the AI model has been trained, the data that has been used, and any decisions
made during the development process. A two-page memo format is suggested as a practical
approach to keep documentation concise but also comprehensive; it can later be updated as the AI system evolves.
Sources like Drum (2024), Dotan (2024), and Securiti (n.d.) claim that the NIST AI RMF can be a
complementary governance framework to achieve compliance with the EU AI Act. Both
frameworks are two of the most influential and comprehensive artificial intelligence governance
frameworks in the world. The NIST AI Risk Management Framework (RMF) was conceptualized
by the government of the United States and has gained popularity in the AI industry (Dotan, 2024).
The NIST AI RMF is a voluntary industry standard and it provides a comprehensive array of
controls and a clear roadmap to comply with AI governance standards (Securiti, n.d.-a). NIST, or the National Institute of Standards and Technology, is a non-regulatory federal agency within the US Department of Commerce. Its main goals are to promote domestic innovation and industrial competitiveness by advancing standards, measurement science, and technology to improve economic security and quality of life. It was founded in 1901, and since then its standards have influenced technologies like power grids, electronic health records, nanomaterials, and computer chips (NIST, 2008).
The NIST AI RMF emphasizes practical steps for managing AI risks throughout the entire lifecycle
of AI systems. It provides guidance on identifying and assessing risks, prioritizing risks based on
potential impact, mitigating risks with specific actions and controls, and continuously monitoring
for new or emerging risks. It is highly adaptable and prescriptive, offering templates, models, and
actionable steps that AI providers can implement depending on the complexity of their AI systems
(NIST, 2024). The two frameworks complement each other, with the EU AI Act providing the
regulatory ‘what’ and the NIST AI RMF offering the practical ‘how’.
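The following sketch illustrates what such a ‘what-to-how’ mapping could look like as a simple lookup from the high-risk requirements (Articles 9-15) to NIST AI RMF core-function categories. The specific pairings are illustrative assumptions, not an official crosswalk published by NIST or the European Commission.

```python
# Illustrative sketch: mapping EU AI Act high-risk requirements (Articles 9-15,
# the regulatory "what") to NIST AI RMF core-function categories (the practical
# "how"). The pairings are illustrative assumptions, not an official crosswalk.

EU_AI_ACT_TO_NIST_RMF = {
    "Art. 9  Risk management system":         ["GOVERN 1", "MAP 1", "MANAGE 1"],
    "Art. 10 Data and data governance":       ["MAP 2", "MEASURE 2"],
    "Art. 11 Technical documentation":        ["GOVERN 1", "MAP 3"],
    "Art. 12 Record-keeping":                 ["MEASURE 3"],
    "Art. 13 Transparency / instructions":    ["GOVERN 4", "MAP 3"],
    "Art. 14 Human oversight":                ["GOVERN 3", "MANAGE 2"],
    "Art. 15 Accuracy, robustness, security": ["MEASURE 2", "MANAGE 2"],
}

def rmf_entry_points(requirement: str) -> list:
    """Return the NIST AI RMF categories to consult for one EU AI Act requirement."""
    return EU_AI_ACT_TO_NIST_RMF.get(requirement, [])

if __name__ == "__main__":
    for req, rmf in EU_AI_ACT_TO_NIST_RMF.items():
        print(f"{req:42s} -> {', '.join(rmf)}")
```

Used this way, a provider would start from the requirement it must satisfy, follow the mapping to the relevant RMF categories, and then consult published RMF templates (such as the Google DeepMind material discussed later in this thesis) for concrete suggested actions.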
The AI RMF 1.0 aims to offer guidance for the design, development, deployment, and use of AI
systems. It emphasizes managing risks of AI technology to ensure trustworthiness, ethical use
and alignment with societal values. However, compared to the EU AI Act which is an EU regulation
and therefore law across EU member states, it is voluntary and sector-agnostic to provide flexibility
to organizations. Due to the vast potential of AI technologies, they can pose risks that may affect
individuals, organizations, and societies negatively if not managed properly. As with the EU AI Act,
the NIST AI RMF focuses on managing AI risks through a trustworthy perspective. The framework
defines several characteristics which an AI system should possess in order to be defined as
trustworthy AI (Tabassi, 2023).
In the EU AI Act, mitigating biases is mentioned in the data governance requirement for high-risk
AI systems. AI systems that lead to discrimination based on protected characteristics are
prohibited in Article 5.
The framework encourages users to continuously evaluate whether the RMF has improved their ability to manage risks involved with AI, including policies, processes, practices, implementation plans, indicators, measurements, and expected outcomes. The NIST AI RMF also promises benefits like increased trustworthiness of AI systems and an improved organizational risk culture.
The main components of this regulatory framework are the core functions. These functions provide
a structured approach to help companies or organizations manage their AI-related risks and
promote trustworthiness. These functions are Govern, Map, Measure, and Manage as seen in
Figure 11: Core functions of NIST AI RMF (Tabassi, 2023). Each of these functions includes specific
actions and outcomes. The framework allows flexibility in how organizations or companies
implement them based on their needs and resources. The following information about the core
functions is retrieved from the official NIST AI RMF paper.
The ‘Govern’ function focuses on establishing and maintaining the infrastructure and processes
needed to manage AI risks effectively across the company or organization. Its main purpose
is to create a risk-aware environment and align AI system development with organizational
principles, policies, and goals. A risk management culture must be established, with policies and
procedures that embed risk management across AI system design, development, deployment,
and use. Roles and responsibilities must be clearly defined to properly manage AI risk. Diversity
and inclusion can help assess and evaluate risk more comprehensively due to the different
experiences and perspectives involved. Strong governance can ‘drive and enhance internal practices and
norms to facilitate organizational risk culture’. The core function ‘Govern’ consists of six main
categories and several subcategories.
• Govern 1: Procedures across the organization related to identifying and managing risks
are implemented.
• Govern 2: Teams are held accountable and trained for identifying and managing AI risks.
• Govern 3: Workforce diversity is prioritized in identifying and managing AI risk throughout the
life cycle.
• Govern 4: Teams are committed to the organizational culture.
• Govern 5: Processes to ensure robust interaction with essential AI actors are
implemented.
• Govern 6: Processes are implemented to mitigate risks from third-party systems.
The ‘Govern’ function is a cross-cutting function which is distributed across the entire AI risk
management process and enables the other key functions of the framework (Securiti, n.d.-b).
The ‘Map’ function focuses on framing and identifying the context for AI risks. The intended purpose,
settings, and potential impacts of an AI system, including both positive and negative
consequences, must be understood. The lifecycle of an AI system consists of several
interdependent processes involving a varied set of actors, and this complexity may introduce
uncertainty into risk management systems. The outcomes of this function provide the fundamental
data on which the ‘Measure’ and ‘Manage’ functions are conducted. The ‘Map’ function
consists of 5 main categories.
The ‘Map’ function essentially offers context to frame risks related to an AI system, which allows
it to be categorized by comparing all essential factors like benefits, costs, appropriate
benchmarks, or impacts on individuals or groups (Securiti, n.d.-b). This function can have
complementary effects on the risk classification of AI systems in the context of the regulatory
framework of the EU AI Act.
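To illustrate how outputs of the ‘Map’ function could feed into an EU AI Act risk classification, the following Python sketch derives a provisional risk tier from a system’s intended purpose and domain. It is purely indicative and not a legal assessment; the keyword lists and the function are illustrative assumptions, not part of either framework.

```python
# Illustrative only: using 'Map' outputs (intended purpose, domain) to flag a provisional
# EU AI Act risk tier for further review. Keyword lists are simplified assumptions.
ANNEX_III_HINTS = {"employment", "education", "credit scoring", "law enforcement",
                   "critical infrastructure", "migration", "essential services"}
PROHIBITED_HINTS = {"social scoring", "real-time biometric identification in public"}

def provisional_risk_tier(intended_purpose: str, domain: str) -> str:
    """Map the system context onto a provisional EU AI Act tier (indicative, not legal advice)."""
    text = f"{intended_purpose} {domain}".lower()
    if any(hint in text for hint in PROHIBITED_HINTS):
        return "potentially prohibited practice - seek legal review"
    if any(hint in text for hint in ANNEX_III_HINTS):
        return "potentially high-risk - full high-risk requirements likely apply"
    return "limited or minimal risk - check transparency obligations"

print(provisional_risk_tier("rank job applicants", "employment"))
```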
The objective of the ‘Measure’ function is to quantify and assess the risks identified during
the ‘Map’ function. Qualitative and quantitative metrics are required to monitor and track an AI
system’s trustworthiness, performance, and potential impacts on individuals, groups, and
stakeholders in general. The ‘Measure’ function also includes rigorous software testing and
performance evaluation methodologies. Measuring methods can support decision-making
processes. The ‘Measure’ function consists of 4 main categories:
• Measure 1: Suitable measuring methods and metrics are identified and utilized.
• Measure 2: AI systems are assessed for characteristics of trustworthiness.
• Measure 3: Processes for continuously tracking and monitoring AI risks are implemented.
• Measure 4: Feedback on efficacy of the measurement process is collected and evaluated.
This core function puts emphasis on developing risk metrics, continuously monitoring and testing
the system, evaluating robustness, involving internal experts or independent third parties in reviews
of the system to avoid biases in risk assessment, and keeping clear records of assessment
processes.
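As a concrete picture of the kind of quantitative tracking the ‘Measure’ function calls for, the following Python sketch records a single trustworthiness metric over time and flags measurements that drift beyond a tolerance threshold. The metric name, threshold, and values are hypothetical and only illustrate the pattern.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MetricSeries:
    """Tracks one trustworthiness metric over time (hypothetical 'Measure' function example)."""
    name: str
    threshold: float                      # assumed organizational risk tolerance
    history: list = field(default_factory=list)

    def record(self, measured_on: date, value: float) -> None:
        self.history.append((measured_on, value))

    def breaches(self) -> list:
        """Return all measurements that exceed the tolerance threshold."""
        return [(d, v) for d, v in self.history if v > self.threshold]

# Hypothetical example: monitoring the accuracy gap between demographic subgroups.
gap = MetricSeries(name="subgroup_accuracy_gap", threshold=0.05)
gap.record(date(2024, 7, 1), 0.03)
gap.record(date(2024, 8, 1), 0.07)        # drift above tolerance should trigger a review

for measured_on, value in gap.breaches():
    print(f"{gap.name} breached tolerance on {measured_on}: {value:.2f}")
```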
The ‘Manage’ function focuses on responding to risks and taking appropriate actions to mitigate or address
them throughout the lifecycle of an AI system. This includes preventive actions and continuous
tracking to adjust risk measures and strategies as new risks emerge. As mentioned above,
this function uses the data gathered by the ‘Measure’ function to actively mitigate risks, respond to
incidents, use insights to continuously improve the system, track and document any remaining
risks, and implement feedback loops. It consists of 4 main categories.
• Manage 1: AI risks identified and assessed in the ‘Map’ and ‘Measure’ functions are
prioritized, responded to, and managed.
• Manage 2: Strategies for maximizing the benefits and minimizing the negative impacts of the
system are established, implemented, and documented with input from essential
stakeholders of the system.
• Manage 3: Risks and benefits from external software of third-party entities are managed.
• Manage 4: Continuous monitoring and documentation of risk treatments, responses, and
communication plans for the identified and measured AI risks are in place.
Risk management resources are allocated continuously to the mapped and measured risks, as defined by
the ‘Govern’ function.
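To make the interplay of the four core functions more tangible, the sketch below represents a handful of their categories as a simple data structure and tracks implementation status for one AI system. The structure and the status values are assumptions for illustration; the category texts are abbreviated from the descriptions above.

```python
# Hypothetical internal representation of selected NIST AI RMF categories.
# Category texts are abbreviated from the descriptions above; the structure itself
# is an illustrative assumption, not part of the framework.
CORE_FUNCTIONS = {
    "Govern": {
        "GOVERN 1": "Risk management procedures are implemented across the organization",
        "GOVERN 6": "Processes to mitigate risks from third-party systems are implemented",
    },
    "Map": {
        "MAP 1": "Context and intended purpose of the AI system are established and understood",
    },
    "Measure": {
        "MEASURE 1": "Suitable measuring methods and metrics are identified and utilized",
        "MEASURE 3": "Processes for continuously tracking and monitoring AI risks are implemented",
    },
    "Manage": {
        "MANAGE 1": "Mapped and measured risks are prioritized, responded to, and managed",
    },
}

def open_items(status_by_category: dict) -> list:
    """Return the categories that are not yet marked as implemented."""
    return [cat for cat, status in status_by_category.items() if status != "implemented"]

# Example status tracking for a single AI system.
status = {"GOVERN 1": "implemented", "MEASURE 1": "in progress", "MANAGE 1": "not started"}
print(open_items(status))   # ['MEASURE 1', 'MANAGE 1']
```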
AI RMF profiles allow customizable applications of the core functions, tailored to the specific needs,
requirements, use cases, or risk scenarios of an organization. These profiles allow users
of the NIST AI RMF to adjust the framework to their unique context, goals, and risk tolerance,
which offers the flexibility to implement risk management systems in a way that fits the operational
environment. Profiles are designed to address the specific contexts and scenarios in which
the AI system is being deployed, considering industry, sector, or organizational needs. Their
customizable nature allows organizations to prioritize the core functions most relevant to their
requirements. Profiles are scalable for organizations of different sizes and capabilities, ensuring
that organizations of all sizes can implement effective risk management measures. Each
profile is driven by specific outcomes relevant to the AI system’s intended purpose, which helps
organizations align their risk management efforts with their strategic long-term goals. Overall, AI
RMF profiles offer flexibility and adaptability, allowing organizations to customize the framework
to the specific risks and benefits associated with providing AI systems.
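A profile can be pictured as a small configuration object that selects and prioritizes the subcategories most relevant to one deployment context. The sketch below uses an entirely hypothetical credit-scoring profile; the chosen subcategories, priorities, and field names are assumptions, not part of the framework.

```python
# Hypothetical AI RMF profile: a small configuration that tailors the framework to one use case.
# The profile name, selected subcategories, priorities, and field names are illustrative assumptions.
credit_scoring_profile = {
    "use_case": "credit scoring",
    "risk_tolerance": "low",
    "prioritized_subcategories": {
        "GOV 1.6": "high",     # inventory of AI systems
        "MAP 3.5": "high",     # human oversight practices
        "MEA 2.10": "high",    # privacy risk evaluation
        "MANAGE 4": "medium",  # continuous monitoring and documentation
    },
}

def work_plan(profile: dict, level: str = "high") -> list:
    """List the subcategories a team should address first for a given priority level."""
    return [sub for sub, prio in profile["prioritized_subcategories"].items() if prio == level]

print(work_plan(credit_scoring_profile))  # ['GOV 1.6', 'MAP 3.5', 'MEA 2.10']
```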
To address the research question – “What measures are companies and organizations which use
or provide AI systems in their business processes taking to comply with the regulatory framework
of the EU AI Act?” – a survey was developed as the primary data collection tool, due to its capacity
to efficiently collect quantitative and, in this case, qualitative data from a wide range of
organizations. Additionally, surveys can be efficiently distributed online and through networking
with relevant stakeholders. Given that companies seem to struggle with new regulations, for
example the GDPR, it is essential to conduct research in this field and collect data on compliance
for knowledge sharing across the AI field. A survey therefore allows for the structured collection of
information on compliance measures, EU AI Act awareness, and challenges in a way that is
comparable across different participants.
During the data collection efforts, a multi-vocal literature review (MLR) was conducted as well.
Due to the very recent nature of the EU AI Act, traditional literature lacks concrete
compliance measures for AI organizations, particularly AI providers. Therefore, so-called grey
literature, e.g., blog posts, videos, and whitepapers such as the whitepaper from Securiti (n.d.-a),
is used to synthesize information and offer guidance and suggested actions on how to
achieve compliance with the EU AI Act. According to Garousi et al. (2019), MLRs can be useful for
both practitioners and researchers, as they provide summaries of the state of the art and of practices in
a given area. This approach is popular in other fields and has recently started to appear in the field
of software engineering (Garousi et al., 2019). Based on the data collected in this multi-vocal literature
review, a workflow is suggested to identify compliance measures and to navigate this
novel regulatory landscape in the field of AI.
Tools like hunter.io, snov.io, or GMass were tested for mass survey distribution and e-mail
campaigning. Additionally, LinkedIn, and especially its Sales Navigator, was used to find
appropriate leads and request correspondence. Broadcasting was also done through individually
targeted e-mail enquiries, and the survey was posted on the platform SurveyCircle for additional
outreach. The base language of the survey is English, but a German version was created as well.
Participants were informed that their involvement in this survey could not only advance this novel
field of research but also raise awareness of the EU AI Act, potentially benefiting future compliance
strategies. Hyperlinks which lead to the specific articles in the AI Act Explorer of the Future of Life
Institute are implemented throughout the survey. At the beginning of the survey, participants are
asked to complete the ‘EU AI Act Compliance Checker’ of the Future of Life Institute first if they
are unaware of their AI system’s risk level.
3.1.1. Survey Design
For data protection purposes, the answers to the survey are anonymized. However, participants
are given the choice at the beginning of the survey to optionally provide their name.
The survey is divided into seven main sections. However, some sections are only reachable if
certain criteria are met. Participants are met with an informational ‘welcome’ text element, where
the main points of the EU AI Act are explained. The deadlines for completed compliance for the
different risk levels are emphasized to invoke a sense of urgency within the affected participants.
Generally, the questions of the survey are designed to answer the research question as they
comprehensively address key areas of compliance required by the EU AI Act. First of all, the
survey tries to identify the type of AI system that the organization is either deploying or providing,
e.g. limited-risk, high-risk, or GPAI. This classification is essential to understand the specific
regulatory obligations under the EU AI Act, which has different requirements depending on the
type of risk category of the system. These questions ensure that the participants reflect on the
regulations which their system is subject to. Questions about the industry of the participant aim to
research the impact of the regulation on different industries, such as healthcare, finance, or
education, which might face varying challenges. The majority of questions ask about the
specific actions or measures the participating organization is taking to comply with different
aspects of the EU AI Act. The key compliance areas have been identified and focused on, most
notably the following aspects:
• risk management systems
• data governance
• technical documentation
• human oversight
The survey structure can be seen in Figure 12: Overview of survey sections with number of
questions. First of all, the ‘general information’ section focuses mainly on what risk type the
provided or deployed AI system is categorized in, with additional questions about the country of
origin, the industry in which the organization is operating, name and a short description of the use
case of the deployed or provided AI system.
The EU AI Act section focuses on the participant’s awareness of the EU AI Act and its
requirements. Depending on the answers in ‘general information’, the corresponding sections are
enabled. If the participant indicated being a provider of high-risk AI systems, the section
‘Compliance Measures for High-Risk AI Systems and Providers’ is shown. In this section, the
participant is asked about compliance measures which lead to fulfilling the requirements for high-
risk AI systems, as well as the obligations specific to providers, such as conducting a
conformity assessment for the provided AI product. If the participant indicated being a
deployer of high-risk AI systems, the section ‘Compliance Measures for High-Risk AI System
Deployers (or Users)’ is shown. Deployers of high-risk AI systems have to meet certain obligations,
such as thorough documentation, use in accordance with the instructions of the provider, or
transparency obligations. If the participant indicated being a limited-risk deployer or provider,
the section ‘Compliance Measures for Limited-Risk AI Systems’ is shown. This section focuses on
the main transparency obligation of the limited-risk tier. Additionally, the participant is asked
whether any voluntary codes of conduct were incorporated into the process of
deploying or providing the limited-risk AI system.
General-purpose AI providers have certain obligations depending on the type of GPAI system.
A flowchart was created to illustrate which GPAI type must meet which specific requirements, and
this conditional logic was implemented in the survey subsection for GPAI providers. Questions are
shown according to whether the GPAI is released under a free and open license and whether it is
considered to pose systemic risk. If a GPAI system is free and open license and not considered systemic
risk, only two requirements have to be met. However, if it is considered systemic risk,
the remaining requirements for non-open or free GPAI systems and the specific four requirements
for systemic-risk GPAI have to be met. Non-open or free GPAI systems have to meet four
requirements in total, as sketched below.
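The conditional logic behind this survey subsection can be expressed as a small branching function, as sketched below. The requirement labels are paraphrased from the GPAI provider obligations of the EU AI Act; the function itself is an illustrative assumption and not part of the survey tool.

```python
def gpai_requirements(free_and_open_license: bool, systemic_risk: bool) -> list:
    """Sketch of the branching used in the GPAI provider survey subsection (labels paraphrased)."""
    baseline = [
        "technical documentation of the model",
        "information and documentation for downstream providers",
        "policy to respect EU copyright law",
        "public summary of the training content",
    ]
    systemic = [
        "model evaluations including adversarial testing",
        "assessment and mitigation of systemic risks",
        "serious incident reporting",
        "adequate cybersecurity protection",
    ]
    if free_and_open_license and not systemic_risk:
        # Free and open-licence GPAI without systemic risk keeps only two baseline requirements.
        return baseline[2:]
    if systemic_risk:
        return baseline + systemic
    return baseline

print(len(gpai_requirements(True, False)))   # 2
print(len(gpai_requirements(False, False)))  # 4
print(len(gpai_requirements(False, True)))   # 8
```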
The hunter.io e-mail campaign yielded the following results:
• Sent: 45 leads
• Opened: 31 leads
• Clicked: 17 leads
• Unsubscribed: 4 leads
• Bounced: 2 leads
• Replied: 2 leads
‘Sent’ means that the e-mail was sent but not opened by the receiver. ‘Opened’ means that the receiver at
least clicked on the e-mail. ‘Clicked’ means that the receiver clicked on the survey link provided in the
e-mail. ‘Unsubscribed’ means that the receiver used the hunter.io feature to unsubscribe from receiving
any further e-mails regarding this e-mail campaign; however, only one initial e-mail was created,
with no intent to send any follow-up e-mails. ‘Bounced’ means that the e-mail address provided
was wrong and the attempt to send the e-mail failed. ‘Replied’ means that the receiver of the e-mail
responded to it.
In order to broadcast the survey among domestic but also international companies, cooperation
with the economic chamber of Upper Austria (WKO) was sought. The survey was mentioned in a
newsletter of the WKO as well as in a LinkedIn posting of Hans Baldinger (WKO, Technische
Universität Graz), which increased the views of the survey significantly. The post can be found in
the annex of this thesis. Additionally, the digitization and e-government department of the Austrian
federal chancellery was contacted to collect available data on how Austrian AI companies, which
use AI in their business processes, prepare themselves to comply with the EU AI Act. However,
instead of receiving information, the office expressed interest in the outcomes of this thesis,
suggesting that they may not have comprehensive knowledge of existing data on the subject.
This interaction highlights a gap in the public office’s awareness concerning this area, further
underscoring the importance, relevance, and novelty of the research question.
Fifteen leads were contacted using the LinkedIn feature called LinkedIn Sales Navigator. A different
approach was chosen to contact these leads, by creating more individual and personally
addressed e-mails. These leads consisted mostly of experts in the field found through webinars
or postings, as well as staff members of companies which are most likely affected by the regulatory
requirements and obligations of the EU AI Act. One lead from an Austrian AI consulting company
answered by stating that he would gladly participate in the survey to increase his awareness of the EU AI
Act.
Securiti (n.d.-a) and Dotan (2024) proposed an integrated approach to achieving compliance with
the EU AI Act using the NIST AI Risk Management Framework. This integrated approach was
developed by synthesizing obligations and requirements from the EU AI Act and the NIST AI RMF
into common categories. The obligations depend on whether an organization is an AI provider or a
deployer. This synthesis offers a more hands-on approach to AI compliance for companies in their
adaptation to the EU AI Act.
However, there are certain regulatory obligations or requirements of the EU AI Act that are not
fully covered by the NIST AI RMF. For example, a conformity assessment is not part of the NIST
AI RMF. The EU AI Act mandates that high-risk systems undergo a formal conformity assessment
– in certain cases involving third-party evaluation – to demonstrate compliance with the requirements
for high-risk systems before they can be placed on the market. Subsequently, acquiring the CE
marking is not mentioned in the NIST AI RMF.
Furthermore, prohibited AI practices, which are explicitly banned like social scoring or the use of
certain AI systems for real-time biometric identification in public spaces, are not covered by the
NIST AI RMF.
On the official National Institute of Standards and Technology website, detailed sets of
compliance measures and strategies from industry leaders such as Google DeepMind are available.
These documents provide insights into current approaches to compliance with AI regulation.
The NIST AI RMF use cases can be found under the following link: https://airc.nist.gov/Usecases
The following table maps key compliance areas of the EU AI Act with the corresponding sub-
categories in the NIST AI RMF, according to the questionnaire in the study of Dotan (2024) and
data from Securiti (n.d.-a). Context to the NIST AI RMF sub-categories is added from the Google
DeepMind NIST AI RMF template.
• (Compliance area continued from the previous page) – The organization inventories data on the AI
system in a repository. – NIST AI RMF: MAP 2.2: Data about the AI system’s knowledge limits and
how the generated output may be used and overseen by humans is documented. Risks and
transparency requirements are mapped throughout the development process.
• Documentation (questions 11, 12, 18, 20) – Detailed technical documentation has to be maintained.
This includes information about system operations, benefits, costs, and alignment with intended use,
as well as information about the risk management process and accountability. – NIST AI RMF:
GOV 1.6: Mechanisms and processes are in place to inventory the AI systems and are resourced
according to organizational risk priorities. MAP 2.2, 2.3, 3.1, 3.3, 3.4. MEA 2.8: Risks that can be
associated with the aspects of transparency and accountability, as identified in the MAP key function,
are documented and examined.
• Data Governance (questions 10, 50, 17) – Providers of AI technology must ensure that their AI
systems are designed to protect data privacy, and that robust data governance practices are in place.
– NIST AI RMF: MEA 2.10: Privacy risks are evaluated; AI systems are designed to mitigate these
privacy risks. GOV 1.1: Governance ensures that privacy and data security processes are in line with
organizational policies. MEA 3.1: Trustworthiness characteristics, like privacy, are regularly assessed
to ensure continuous compliance.
Table 1: Key EU AI Act compliance areas matched with NIST AI RMF actions. Data by Dotan (2024) and Securiti
(n.d.-a)
To identify suggested actions, these NIST AI RMF actions can be located in the Excel sheet of
Google DeepMind. For example, the mapping for ‘Human Oversight’ shows that the appropriate
NIST AI RMF actions are MEA 1.1, MAP 3.5 and GOV 3.2. Suggested measures from Google for
MAP 3.5 are emphasizing the need to identify which aspects require human supervision,
especially considering societal impacts and risks. They also suggest creating practices in line with
existing governance policies and developing comprehensive training materials to educate AI
actors about system performance, limitations, risks, and appropriate warnings. Their suggested
actions also stress the importance of including relevant stakeholders in the prototyping and testing
stages, ensuring that testing conditions closely resemble real-world scenarios. Oversight practices
must be continually evaluated for reliability and accuracy and updated when necessary. Lastly,
clear and understandable documentation of AI system mechanisms should be provided to support
informed, risk-based decision-making by oversight personnel (DeepMind Google, n.d.).
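As a minimal sketch of how this mapping could be operationalized, the following snippet encodes the pairings discussed in this section and looks up the NIST AI RMF actions for a given EU AI Act compliance area. The dictionary only contains the examples mentioned here and is not a complete mapping.

```python
# Partial mapping of EU AI Act compliance areas to NIST AI RMF actions, limited to the
# pairings discussed in this thesis (based on Dotan, 2024 and Securiti, n.d.-a).
EU_AI_ACT_TO_NIST = {
    "Human Oversight": ["MEA 1.1", "MAP 3.5", "GOV 3.2"],
    "Documentation": ["GOV 1.6", "MAP 2.2", "MAP 2.3", "MAP 3.1", "MAP 3.3", "MAP 3.4", "MEA 2.8"],
    "Data Governance": ["MEA 2.10", "GOV 1.1", "MEA 3.1"],
}

def nist_actions(compliance_area: str) -> list:
    """Return the NIST AI RMF actions mapped to a given EU AI Act compliance area."""
    try:
        return EU_AI_ACT_TO_NIST[compliance_area]
    except KeyError:
        raise ValueError(f"No mapping recorded for '{compliance_area}'") from None

# Example: the actions for 'Human Oversight' can then be looked up in the Google DeepMind template.
print(nist_actions("Human Oversight"))  # ['MEA 1.1', 'MAP 3.5', 'GOV 3.2']
```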
Additionally, Google listed appropriate sources for the respective NIST AI RMF action. For
example, in MAP 3.5 the research paper ‘Humans in the Loop’ is mentioned.
Human-in-the-loop (HITL) is an approach that involves a human entity directly in a specific
decision-making process that integrates algorithms or AI systems. A human in the loop can have
various roles in controlling, overseeing, or altering the decisions made by an AI system. The
individual is not just overseeing the entire process but participating in specific instances of the
decision-making process in collaboration with the AI. This contrasts with approaches that are ‘off-
the-loop’, where there is no human involvement, or ‘on-the-loop’, where the human entity is
overseeing the system without participating in the decision-making process. The HITL approach
is often critiqued for being deployed without clarity on the human’s role (Crootof et al., 2022).
The HITL approach can be used as the basis of a compliance strategy to meet the ‘human
oversight’ requirement for high-risk AI systems.
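A minimal sketch of what a human-in-the-loop control point might look like for a high-risk system is given below. The risk threshold, the review interface, and the decision fields are hypothetical; the example only illustrates the ‘in-the-loop’ pattern, in which a person takes part in individual decisions rather than merely supervising the system.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    ai_recommendation: str   # e.g. "approve" or "reject"
    risk_score: float        # hypothetical model confidence or impact score

def human_review(decision: Decision) -> str:
    """Placeholder for a real review interface; here the reviewer simply confirms the suggestion."""
    print(f"Review required for {decision.subject_id}: AI suggests '{decision.ai_recommendation}'")
    return decision.ai_recommendation     # a real reviewer could override this value

def decide(decision: Decision, review_threshold: float = 0.7) -> str:
    """Human-in-the-loop pattern: a person participates in individual high-impact decisions."""
    if decision.risk_score >= review_threshold:
        return human_review(decision)     # in-the-loop: the human takes part in this decision
    return decision.ai_recommendation     # lower-impact cases pass through automatically

print(decide(Decision("applicant-42", "reject", risk_score=0.9)))
```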
Suggested actions from the Google DeepMind template for the transparency requirement area of
the EU AI Act are to establish and continuously review documentation policies that address
information related to essential stakeholders of the AI system, business justification, scope and
usage of the system, limitations, description and characterization of the training data, algorithms
used, output data, explanatory visualizations and information, down- and up-stream
dependencies, deployment and monitoring strategies and stakeholder engagement plans.
Furthermore, it is suggested to establish policies for a model documentation inventory system and
continuously monitor its completeness and efficacy. These actions and measures ensure that the
organization documents information about the system and is therefore capable of describing to
downstream users how it works, as well as its limitations and risks.
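A single entry in such a model documentation inventory could be sketched as follows, using the information items listed above as fields. The field names and example values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelDocumentationRecord:
    """Illustrative entry in a model documentation inventory; fields follow the items listed above."""
    system_name: str
    business_justification: str
    scope_and_usage: str
    limitations: List[str]
    training_data_description: str
    algorithms_used: List[str]
    output_description: str
    dependencies: List[str] = field(default_factory=list)
    monitoring_strategy: str = ""

record = ModelDocumentationRecord(
    system_name="resume-screening-assistant",              # hypothetical example system
    business_justification="pre-sort incoming applications",
    scope_and_usage="internal HR support; a human makes the final decision",
    limitations=["not validated for non-EU labour markets"],
    training_data_description="anonymized historical applications, 2018-2023",
    algorithms_used=["gradient-boosted trees"],
    output_description="ranking score between 0 and 1",
    dependencies=["internal HR data warehouse"],
    monitoring_strategy="quarterly bias and drift review",
)
print(record.system_name, "-", len(record.limitations), "documented limitation(s)")
```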
Despite all broadcasting efforts, the survey yielded no substantial data on concrete compliance
measures of companies with regards to the EU AI Act. There were a total of 13 participations, with
12 being partially completed and 1 completed. The AMS, which is the Austrian labor market
service, provided data on their transparency measures for their AI supported system called
‘Berufsinfomat’. This tool provides information on jobs and training. As they answered that they are
an AI deployer, the AI deployer section of the survey was shown. The question ‘Are you
aware about the EU AI Act and its regulations on AI?’ was answered with ‘Mostly aware’ and the
question ‘How clear are the requirements and obligations of the EU AI Act to you?’ was answered
with ‘Slightly clear’. This shows a lack of awareness of the regulations of the EU AI Act. The
question asking for the type of limited-risk AI system was answered with ‘Education System (for
informal use)’. The AMS is aware of the transparency measures for limited-risk AI systems defined
in the EU AI Act. However, the question of whether the participant’s transparency measures were
motivated by another regulation showed that they introduced these measures due to the GDPR (General
Data Protection Regulation). They use text labels and notifications to inform the user that they are
interacting with an AI system. Additionally, they provide a link to a data protection information site
and inform the user not to provide any personal or sensitive information. They mentioned that it
will take a few more cases to be judged under the GDPR in order to have clarity about the classification,
impact, and measures that are actually necessary. It is unclear why they referred to the GDPR in
response to the question ‘What is unclear to you about the EU AI Act?’.
The rest of the data from the survey is insufficient, as it consists mostly of randomly typed
characters entered into the mandatory questions in order to reach the next sections of the survey. Due to
the inauthenticity of these attempts, the data from the multiple-choice questions can be deemed
invalid.
While researching AI providers and deployers in the healthcare sector, a website called ai-
derm.com was found. The provided system is called ‘AI Dermatologist’ and can be classified as
high-risk as it is used for detecting skin diseases. The CE-marking for conformity is shown on the
main page, implying that a conformity assessment has been conducted for the high-risk AI system.
However, no information was provided by the provider of this system despite attempts to initiate
correspondence. Research focused on high-risk AI systems with CE marking, as they have
undergone a conformity assessment and, therefore, meet the requirements of the EU AI Act. In
order to find AI products with the CE-marking, V. Murovec, an author of the article ‘A new CE
marking for European healthcare: when and why?’ from Osborne Clarke, was contacted.
According to Murovec, ‘[…] please note that no high-risk AI systems are currently CE-marked
under the new regulation (EU) 2024/1689.
For AI systems requiring the involvement of third-party conformity assessment bodies (notified
bodies), no notified body has been designated to certify products under this regulation yet. The
provisions on notified bodies will only apply from 2 August 2025.
For AI systems that can be CE-marked based on internal control without a notified body, the
regulation's provision allowing internal control (Article 43 – Conformity Assessment) is applicable.
However, substantial standards and/or specifications are still missing for providers to conduct this
self-assessment effectively.’
5. Discussion
The findings of this thesis suggest that compliance with the EU AI Act is still in its early stages.
Companies show little awareness of the requirements and obligations of the regulations of the EU
AI Act. Interest in the contents of the law was shown by different stakeholders, such as the
Economic Chamber of Upper Austria (WKO) or the Austrian federal chancellery.
Conducting a survey about concrete compliance measures from AI providers and deployers
generated insufficient real-world data. The lack of participation in the survey overall may indicate
a lack of awareness about the regulations of the EU AI Act. The regulatory framework of the EU
AI Act overlaps significantly with other regulations, like the GDPR. This indicates that transparency
measures may already be taken by the majority of AI providers or deployers as compliance with
the GDPR is significantly more advanced compared to compliance with the EU AI Act among
companies and organizations.
Results of this study show that compliance measures of high-risk AI providers or deployers can
be identified by researching other risk management frameworks that cater to AI technology. Only
a few studies propose this integrated approach due to the novel nature of this field. However, it
can be seen as an indicator of how future compliance strategies will be proposed.
This research challenges the initial research question by showing a lack of completed survey
participation despite rigorous broadcasting efforts. As partial participation is significantly higher
than completed participation, this may indicate an interest of AI providers or deployers and their
stakeholders to gather information on the regulations of the EU AI Act. Therefore, the results
support the hypothesis that despite the growing interest in this pioneering legislation, there is a
lack of concrete compliance measures and awareness.
This study contributes to an understanding of the theoretical frameworks behind compliance with
AI regulation, which can then be used to develop and conduct practical applications in business
processes. Furthermore, information and data from this thesis can act as a foundation for other
research efforts that compare tested and proven risk management frameworks and apply their
suggested actions to the requirements and obligations of the EU AI Act.
6. Conclusion
The literature suggests that there is little information about concrete compliance measures for the
EU AI Act. However, some compliance measures can be derived from other certificates or
regulations, such as the ISO/IEC standards or the GDPR. Furthermore, an iterative risk
management process has been proposed as a compliance measure for the risk management
requirement of the EU AI Act. For risk identification, techniques such as taxonomies, incident
databases, and scenario analysis can be used, while risk estimation can be performed using
methods like Bayesian networks and influence diagrams. MBSE has been identified as a suitable
measure to comply with the requirements for technical documentation in the power grid sector.
The literature review showed that many organizations lack procedures for technical documentation
and trained staff for compliance requirements with the EU AI Act. Furthermore, organizations
struggle with training staff on data and model bias. Metrics on user communication in the literature
also indicate challenges. Model monitoring varies greatly among organizations. Preparation
measures for the EU AI Act have been proposed, which include stakeholder training on
compliance, having legal experts study the AI Act, integrating AI Act compliance processes into
existing processes, integrating a traceability framework into the AI system design, and conducting
systematic and periodic monitoring to ensure transparency.
Recommendations from experts to AI providers and deployers of high-risk systems include developing
a strong compliance team with interdisciplinary expertise, maintaining thorough documentation
throughout the system’s lifecycle, automating compliance wherever possible, for example through
automated documentation and monitoring, informing downstream users when AI is being used, and
seeking legal and financial support. Furthermore, it is suggested to research norms and standards in the
respective area of activity of the AI provider or AI deployer. AI systems, especially in the high-risk
area, should be developed with compliance in mind. This approach is called ‘compliance by
design’.
Recent whitepapers suggest achieving compliance with the EU AI Act by complementing risk
management with other regulatory frameworks such as the NIST AI RMF. The more practical
nature of the NIST AI RMF can aid companies by providing more concrete compliance measures
which subsequently can lead to a smoother conformity process. Experts consider the current
requirements and obligations to be broad and vague. Regulation can hinder innovation by imposing
heavy compliance burdens on SMEs.
The conducted survey yielded little information on what concrete compliance measures are being
taken by companies affected by the EU AI Act through their use of AI in their business processes. One
hypothesis for the lack of real-world data is that companies have yet to take any compliance measures
for the EU AI Act, as the requirements and obligations for high-risk AI systems are being enforced
in August 2027. Furthermore, as compliance strategies are considered sensitive information by
companies, it is hypothesized that this is a contributing factor to the limited data generated
by the survey.
The proposed survey can be used for further research, as it was designed in an explanatory way
which can help in navigating the EU AI Act and its requirements and obligations. Once the
law is enforced more strictly, the probability of generating useful data on the topic will
significantly increase. Furthermore, the survey structure and logic could be converted into a
reflective and interactive guideline for evaluating and assessing the current compliance status of an AI
provider or AI deployer.
The research field of creating symbioses between complementary frameworks like the NIST AI
RMF and the EU AI Act can be expanded upon. As these frameworks are more concrete and
tested, they can support the compliance efforts of AI providers of high-risk AI
systems. The efficiency and effectiveness of this integrated approach still have to be evaluated and
assessed thoroughly.
https://doi.org/10.1365/s35764-024-00520-7
Bartneck, C., Lütge, C., Wagner, A. R., & Welsh, S. (2021). An introduction to ethics in robotics
Botunac, I., Parlov, N., & Bosna, J. (2024). Opportunities of Gen AI in the Banking Industry with
regards to the AI Act, GDPR, Data Act and DORA. 2024 13th Mediterranean Conference
https://doi.org/10.1109/MECO62516.2024.10577936
Chamberlain, J. (2023). The Risk-Based Approach of the European Union’s Proposed Artificial
Intelligence Regulation: Some Comments from a Tort Law Perspective. European Journal
Cheong, I., Caliskan, A., & Kohno, T. (2024). Safeguarding human values: Rethinking US law for
Computer & Communications Industry Association (Director). (2024, July 3). How to Master AI
Conference on the Future of Europe. (2022). Report on the Final Outcome. Conference on the
followup.europarl.europa.eu/cmsdata/267078/Report_EN.pdf#page=55
Council of the EU. (2024, May 21). Artificial intelligence (AI) act: Council gives final green light to
https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-
intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/
Council of the European Union. (2024, May 21). Timeline—Artificial intelligence. Consilium.
https://www.consilium.europa.eu/en/policies/artificial-intelligence/timeline-artificial-
intelligence/
Crootof, R., Kaminski, M. E., & Price Ii, W. N. (2022). Humans in the Loop. SSRN Electronic
Journal. https://doi.org/10.2139/ssrn.4066781
Dalen, H.-P., VanDodick, J., & Simons, D. (Directors). (2024, August 2). What the EU AI Act
https://mediacenter.ibm.com/media/What+the+EU+AI+Act+means+for+you+and+how+to
+prepare/1_rs0wipdn
https://airc.nist.gov/docs/Template_Google_DeepMind_gap_analysis-
NIST_AIRMF_1.0.xlsx
Dewitte, P. (2024). Better alone than in bad company: Addressing the risks of companion chatbots
through data protection by design. Computer Law & Security Review, 54, 106019.
https://doi.org/10.1016/j.clsr.2024.106019
Dotan, R. (2024, August 7). The EU AI Act Meets NIST AI RMF A unified AI governance
framework. https://www.techbetter.ai/post/the-eu-ai-act-meets-nist-ai-rmf
Drum, S. (2024, April 9). EU: Navigating the AI Act - a comparative analysis: EU AI Act vs NIST’s
comparative-analysis-eu-ai-act
https://www.adalovelaceinstitute.org/resource/eu-ai-act-explainer/
European Commission. (n.d.). Digital transition—European Commission. Retrieved July 19, 2024,
from https://reform-support.ec.europa.eu/what-we-do/digital-transition_en
European Commission. (2024, June 26). AI Act | Shaping Europe’s digital future. https://digital-
strategy.ec.europa.eu/en/policies/regulatory-framework-ai
European Parliament. (2024, March 13). Artificial Intelligence Act: MEPs adopt landmark law |
room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
Future of Life Institute. (n.d.-a). The AI Act Explorer | EU Artificial Intelligence Act. Retrieved
Future of Life Institute. (n.d.-b). Timeline of Developments | EU Artificial Intelligence Act. Retrieved
Future of Life Institute. (2022, May). General Purpose AI and the AI Act.
Garrod, D., Arlington, J., Jamooji, J., Odubanjo, O., Kohne, N. G., Rickhoff, C., Babin, R., & Dowell,
https://www.akingump.com/en/insights/alerts/eu-ai-act-published-in-the-eu-official-journal
Australia, February 2—14, 2003, Tübingen, Germany, August 4—16, 2003, Revised
Gilbert, S. (2024). The EU passes the AI Act and its implications for digital medicine are unclear.
Ho, C. W.-L., & Caals, K. (2024). How the EU AI Act Seeks to Establish an Epistemic Environment
Jacob, T. (2019). Robot Rules: Regulating Artificial Intelligence. Springer Berlin Heidelberg.
Kroet, C. (2024, March 13). Lawmakers approve AI Act with overwhelming majority. Euronews.
https://www.euronews.com/my-europe/2024/03/13/lawmakers-approve-ai-act-with-
overwhelming-majority
Langer, P. (2020). Lessons from China - The Formation of a Social Credit System: Profiling,
https://doi.org/10.1145/3396956.3396962
https://www.iisf.ie/files/UserFiles/cybersecurity-legislation-ireland/EU-AI-Act.pdf
Merantix AI Campus (Director). (2024, February 27). The Finalised EU AI Act: Implications for
https://www.youtube.com/watch?v=_LP_WFZ6aqA
https://www.munich-business-school.de/en/l/business-studies-dictionary/financial-
knowledge/compliance
Murray, M. D. (2024). Legislating Generative Artificial Intelligence: Can Legislators Put a Box
Nikolinakos, N. T. (2023). EU policy and legal framework for artificial intelligence, robotics and
information
Intelligence Profile (NIST AI 600-1). National Institute of Standards and Technology.
Novelli, C., Casolari, F., Rotolo, A., Taddeo, M., & Floridi, L. (2024). AI Risk Assessment: A
Scenario-Based, Proportional Methodology for the AI Act. Digital Society, 3(1), 13.
https://doi.org/10.1007/s44206-024-00095-1
OECD. (n.d.). About the OECD. OECD. Retrieved August 1, 2024, from
https://www.oecd.org/en/about.html
implementation-principles
Outeda, Prof. C. C. (2024). The EU’s AI Act: A Framework for Collaborative Governance. Internet
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying
(EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial
http://data.europa.eu/eli/reg/2024/1689/oj/eng
Roberts, H., Babuta, A., Morley, J., Thomas, C., Taddeo, M., & Floridi, L. (2023). Artificial
Intelligence Regulation in the United Kingdom: A Path to Good Governance and Global
Scantamburlo, T., Falcarin, P., Veneri, A., Fabris, A., Gallese, C., Billa, V., Rotolo, F., & Marcuzzi,
F. (2024). Software Systems Compliance with the AI Act: Lessons Learned from an
Schuett, J. (2023). Risk Management in the Artificial Intelligence Act. European Journal of Risk
Securiti. (n.d.-a). Navigating AI Compliance: An Integrated Approach to the NIST AI RMF & EU AI
Act. https://securiti.ai/whitepapers/an-approach-to-nist-ai-rmf-and-eu-ai-act/
Securiti. (n.d.-b). Tips for Implementing the NIST AI RMF. Securiti. Retrieved September 30, 2024,
from https://securiti.ai/implement-nist-ai-rmf/
Shahlaei, C. A., & Berente, N. (2024). An Analysis of European Data and AI Regulations for
Tabassi, E. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1).
National Institute of Standards and Technology. https://doi.org/10.6028/NIST.AI.100-1
Truby, J., Brown, R. D., Ibrahim, I. A., & Parellada, O. C. (2022). A Sandbox Approach to
U.S. Government Accountability Office. (2022). OECD Framework for the Classification of AI
systems (OECD Digital Economy Papers 323; OECD Digital Economy Papers, Vol. 323).
https://doi.org/10.1787/cb6d9eca-en
Systems: The Role of Model-Based Systems Engineering in Complying with the EU AI Act:
Wagner, M., Borg, M., & Runeson, P. (2024). Navigating the Upcoming European Union AI Act.
Walters, J., Dey, D., Bhaumik, D., & Horsman, S. (2024). Complying with the EU AI Act. In S.
Dimitrova (Eds.), Artificial Intelligence. ECAI 2023 International Workshops (pp. 65–75).
Werkmeister, C., Ehlen, T., Roos, P., & Voget, J. (2024, May 13). EU AI Act unpacked #3:
https://technologyquotient.freshfields.com//post/102j7cj/eu-ai-act-unpacked-3-personal-
and-territorial-scope
Yohe, G., & Leichenko, R. (2010). Chapter 2: Adopting a risk-based approach. Annals of the New
6632.2009.05310.x
Zhong, H., O’Neill, E., & Hoffmann, J. A. (2024). Regulating AI: Applying insights from behavioural
The compliance measures and strategies adopted by DeepMind Google, as well as other industry
leaders, are detailed in a publicly available Excel document. This document can be accessed at
the following link: https://airc.nist.gov/Usecases
The Excel file includes comprehensive information on compliance approaches that are relevant to
the discussions in this thesis. For further exploration, please visit the link above.