AI Governance Framework
– Implementation and Self-Assessment Guide for Organizations
Prepared in collaboration with the Info-communications Media Development Authority of Singapore
January 2020
World Economic Forum
91-93 route de la Capite
CH-1223 Cologny/Geneva
Switzerland
Tel.: +41 (0)22 869 1212
Fax: +41 (0)22 786 2744
Email: [email protected]
www.weforum.org
Contents
Foreword
Introduction
Annex
Acknowledgements
Endnotes
The Model Framework is published by the PDPC and provides guidance to private sector organizations deploying AI at
scale on how to do so in a responsible manner. The Model Framework translates ethical principles into implementable
practices, applicable to a common AI deployment process. It covers four key areas:
A. Internal governance structures and measures
B. Determining the level of human involvement in AI-augmented decision-making
C. Operations management
D. Stakeholder interaction and communication
The Guide sets out a list of questions, based on and organized according to the four key areas described in the Model Framework, for organizations to consider in a systematic manner. Hence, this Guide should be read in conjunction with the Model Framework. Organizations should refer to the Model Framework for definitions of terms and explanations of concepts used in this Guide.

The Guide also provides references and examples on how organizations could implement the considerations and practices set out in the Model Framework. These references and examples include publications by the PDPC (e.g. advisory guidelines and guides), and industry use cases and practices that have been shared with the PDPC. We have also included a list of international AI standards that are being developed (Annex). Organizations are free to implement other measures that best fit the purpose and context of their AI deployment, as appropriate.

When using the Guide, organizations should consider whether the questions and practices are relevant to their unique business context and industry. Organizations would also need to consider their business needs, resource constraints, regulatory requirements and specific use cases. Generally, an organization should consider adopting a risk-based approach to AI governance that is commensurate with the potential harm of the AI solution deployed. The scope of the questions in the Guide may overlap and could reinforce concepts that are important in ensuring responsible deployment of AI. Last but not least, organizations are encouraged to document the development of their governance process as a matter of good practice.
Guiding questions, with useful industry examples, practices and guides for consideration
1.1 Has your organization defined a clear purpose in using the identified AI solution (e.g. operational efficiency and cost reduction)?
–– Consider whether AI is able to address the identified problem or issue
1.3 Did your organization consider whether the decision to use AI for a specific application/use case is consistent with its core values and/or societal expectations?
–– Consider developing a set of ethical principles that is in line with or can be incorporated into the organization’s mission statement. In addition, it would be useful to outline how to adopt (e.g. contextualise) them in practice
–– Consider developing a Code of Ethics for the use of AI. Relevant areas to consider include:
Guiding questions, with useful industry examples, practices and guides for consideration
2.1 Does your organization have an existing governance structure that can be leveraged to oversee the organization’s use of AI?
2.2 If your organization does not have an existing structure to tap on, has your organization put in place a governance structure to oversee the organization’s use of AI?
–– Consider whether it is useful to adapt existing governance, risk and compliance (GRC) structures to incorporate AI governance processes
To provide oversight on the use of data and AI within an organization:
–– Consider a sandbox type of governance to test-bed and deploy AI solutions, before fully-fledged governance structures are put in place
–– Consider whether it is necessary to establish a committee comprising representatives from relevant departments (e.g. legal/compliance, technical, and sales and communication) to oversee AI governance in the organization, with proper terms of reference (e.g. refining the organization’s AI governance frameworks to ensure they meet the organization’s commercial, legal, ethical and reputational requirements)
If there are strong concerns about how AI is being used for a project, neither team can unilaterally terminate the project, but they can conduct further testing and validation.
2.3 Did your organization’s board and/or senior management sponsor, support and participate in your organization’s AI governance?
–– Consider whether it is useful to form a committee/board that is chaired by the senior management and includes senior leaders from the various teams (e.g. chief data officer, chief privacy officer and chief information security officer). Including key decision-makers is critical for efficiency and the credibility of the committee/board
2.4 Are the responsibilities of the personnel involved in the various AI governance processes clearly defined?
–– Consider whether it is useful or practical for the board and senior management to champion responsible AI deployment and ensure that all employees are committed to implementing the practices:
  –– Strategic level: Board to be responsible for risk and corporate values, and C-suites translate them into strategies. Committee comprising senior management to approve the AI models
C. Equipped with the necessary resources and guidance to perform their duties?
–– Consider educating key internal stakeholders to increase awareness of the implications of AI development/deployment as well as the need for guidelines (e.g. AI engineering guidelines)
2.6 Are the relevant staff dealing with AI systems properly trained to interpret AI model output and decisions as well as to detect and manage bias in data?
–– Consider whether it is useful to conduct general training for personnel involved in various AI governance processes. For staff dealing with AI systems, consider whether it is necessary to conduct specialized training
–– Consider developing or partnering with an education institution to create a suite of online learning modules to support AI skill development for employees
2.7 Are the other staff who interact with the AI system aware of and sensitive to the relevant risks when using AI? Do they know who to raise such issues to when they spot them (e.g. subject-matter experts within their organizations)?
–– Consider educating employees at all levels, particularly those using the AI system or with customer-facing roles, to identify and report potential ethical concerns relating to AI development and deployment
2.8 Does your organization have an existing risk management system that can be expanded to include AI-related risks?
2.9 Did your organization implement a risk management system to address risks involved in deploying the identified AI solution (e.g. personnel risk or changes to commercial objectives)?
–– Consider implementing an internal policy explanation process to retain details of how decision-making on the deployment of AI was undertaken
–– Consider implementing a knowledge management registry to archive relevant documents to ensure proper knowledge transfer
Guiding questions, with useful industry examples, practices and guides for consideration
3.1 Did your organization conduct an impact assessment (e.g. probability and/or severity of harm) on individuals and organizations who are affected by the AI solution?
–– Consider whether it is necessary to list all internal and external stakeholders, and the impact on them accordingly
–– Consider whether it is necessary to assess risks from a technical perspective (e.g. system integrity tests) and from a personal data protection perspective (e.g. the PDPC’s Guide to Data Protection Impact Assessments3)
3.2 Based on the assessment, did your organization implement the appropriate level of human involvement in AI-augmented decision-making?
–– Consider a human-in-the-loop approach when human judgement is able to significantly improve the quality of the decision made (e.g. pricing recommendation of million-dollar commodity bids) or when a human subjective judgment is required (e.g. market share forecasting for long-term decisions)
–– Risk appetite. For example, organizations could have varying risk appetite in interrupting a transaction made by a retail customer as compared to a transaction made by a corporate customer that could result in more serious consequences (e.g. stopping a payroll)
–– Consider tracking the characteristics of the data that the AI is using, versus the data the AI was trained on, and alerting relevant staff when the data drifts too much (e.g. new categories appear, new values outside historical values appear, or the distribution of the values changes); a minimal sketch of such a drift check follows below
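The drift check described in the last bullet can be automated. The sketch below is illustrative only and is not part of the Model Framework: the column names, thresholds and the use of a two-sample Kolmogorov-Smirnov test are assumptions chosen for the example, and a real deployment would tune these to its own data and alerting channels.

```python
# Minimal data-drift check: compare live (scoring) data against the data the
# model was trained on and report issues a human should review.
# Column names and thresholds are illustrative assumptions.
import pandas as pd
from scipy.stats import ks_2samp

def check_drift(train: pd.DataFrame, live: pd.DataFrame,
                categorical: list[str], numeric: list[str],
                p_threshold: float = 0.01) -> list[str]:
    alerts = []
    for col in categorical:
        # Categories that were never seen during training
        new_values = set(live[col].dropna()) - set(train[col].dropna())
        if new_values:
            alerts.append(f"{col}: unseen categories {sorted(new_values)}")
    for col in numeric:
        # Values outside the historical (training) range
        lo, hi = train[col].min(), train[col].max()
        out_of_range = live[(live[col] < lo) | (live[col] > hi)]
        if len(out_of_range) > 0:
            alerts.append(f"{col}: {len(out_of_range)} values outside training range [{lo}, {hi}]")
        # Shift in the overall distribution of the values
        stat, p_value = ks_2samp(train[col].dropna(), live[col].dropna())
        if p_value < p_threshold:
            alerts.append(f"{col}: distribution shift (KS statistic {stat:.2f}, p={p_value:.4f})")
    return alerts

if __name__ == "__main__":
    train = pd.DataFrame({"channel": ["web", "app"] * 50, "amount": range(100)})
    live = pd.DataFrame({"channel": ["web", "kiosk"] * 5, "amount": [150 + i for i in range(10)]})
    for alert in check_drift(train, live, categorical=["channel"], numeric=["amount"]):
        print("ALERT:", alert)  # in practice, route these to the relevant staff
```

In practice such a check would run on a schedule against recent scoring data, with alerts routed to the staff identified in the organization’s governance structure.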
3.4 For safety-critical systems, did your organization ensure that:
A. The relevant personnel will be able to assume control where necessary?
B. The AI solution provides sufficient information to assist the personnel to make an informed decision and take actions accordingly?
–– Consider whether it is necessary and feasible to put in place controls to allow the graceful shutdown of an AI system and/or bring it back to a safe state, in the event of a system failure
–– When an AI model is making a decision for which it is significantly unsure of the answer/prediction, consider designing the AI model to be able to flag these cases and triage them for a human to review. This may occur when the data contains values that are outside the range of the training data, or data regions where there were insufficient training examples to make a robust estimate (see the sketch below)
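A minimal sketch of such confidence-based triage is shown below. It is illustrative only: the logistic regression model, the 0.8 confidence threshold and the training-range check are assumptions made for the example, not requirements of the Model Framework.

```python
# Minimal sketch: flag cases for human review when the model is unsure of its
# prediction or the input lies outside the range of the training data.
# The model, threshold and example data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

CONFIDENCE_THRESHOLD = 0.8  # organization-specific; tune per use case

def predict_or_escalate(model, X_train, X_new):
    train_min, train_max = X_train.min(axis=0), X_train.max(axis=0)
    probabilities = model.predict_proba(X_new)
    results = []
    for row, probs in zip(X_new, probabilities):
        out_of_range = bool(np.any(row < train_min) or np.any(row > train_max))
        confidence = float(probs.max())
        if out_of_range or confidence < CONFIDENCE_THRESHOLD:
            results.append(("HUMAN_REVIEW", confidence))  # triage to a person
        else:
            results.append((int(model.classes_[probs.argmax()]), confidence))
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 2))
    y_train = (X_train[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
    model = LogisticRegression().fit(X_train, y_train)
    X_new = np.array([[2.0, 0.0],    # clear case, handled automatically
                      [0.05, 0.1],   # near the decision boundary: low confidence
                      [9.0, -7.0]])  # outside the training range: escalate
    print(predict_or_escalate(model, X_train, X_new))
```

Escalated cases would be routed to the personnel identified under question 3.4A so that a human can assume control of the decision.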
Guiding questions, with useful industry examples, practices and guides for consideration
4.1 Did your organization implement accountability-based practices in data management and protection (e.g. the PDPA and OECD Privacy Principles)?
–– Consider adopting industry best practices and engineering standards to ensure compliance with relevant data protection laws, such as the PDPA. It is important for organizations to implement proper personal data-handling practices, such as having policies for data storage, deletion and processing, particularly when the data deals with personally identifiable information
–– Consider which data an AI system should have access to, and which sensitive data it should not have access to
–– Consider applying for the PDPC’s Data Protection Trustmark and Asia Pacific Economic Cooperation Cross Border Privacy Rules and Privacy Recognition for Processors (APEC CBPR & PRP) Systems certifications
4.2 Did your organization implement measures to trace the lineage of data (i.e. backward data lineage, forward data lineage and end-to-end data lineage)?
–– Consider developing and maintaining a data provenance record (a minimal sketch of such a record follows below)
–– Consider whether it is useful to create a data inventory, data dictionaries, data change processes and document control mechanisms
–– Consider whether data can be traced back to the source at each stage
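As an illustration of the first bullet, the sketch below shows one possible shape for a machine-readable provenance record. The field names and the JSON output are assumptions invented for the example; organizations may equally rely on existing data catalogue or lineage tooling.

```python
# Minimal sketch of a data provenance record: each dataset version records
# where it came from and how it was transformed, so lineage can be traced back
# to the source. Field names are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ProvenanceRecord:
    dataset_id: str                 # e.g. "customer_txn_v3"
    source: str                     # upstream system, vendor or parent dataset
    collected_at: str               # when the data was obtained
    transformations: list = field(default_factory=list)  # ordered processing steps
    owner: str = "unassigned"       # person accountable for this dataset

    def add_step(self, description: str) -> None:
        self.transformations.append({
            "step": description,
            "at": datetime.now(timezone.utc).isoformat(),
        })

if __name__ == "__main__":
    record = ProvenanceRecord(
        dataset_id="customer_txn_v3",
        source="crm_export_2020_01",
        collected_at="2020-01-15",
        owner="data-governance-team",
    )
    record.add_step("removed records with missing account_id")
    record.add_step("pseudonymized customer names")
    print(json.dumps(asdict(record), indent=2))  # archive alongside the dataset
```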
4.3 If your organization obtained datasets from a third party, did your organization assess and manage the risks of using such datasets?
–– Consider obtaining datasets only from trusted third-party sources that are certified with proper data protection practices
–– Consider adopting the practices within IMDA’s Trusted Data Sharing Framework5 when establishing data partnerships (e.g. create a common “data-sharing language”)
4.4 Is your organization able to verify the accuracy of the dataset in terms of how well the values in the dataset match the true characteristics of the entity described by the dataset?
–– Consider reviewing data in detail against its metadata
–– Consider whether it is useful to develop a taxonomy of data annotation to standardize the process of data labelling
4.5 Is the dataset used complete in terms of attributes and items?
–– Consider whether it is useful to conduct validation schema checks (i.e. testing whether the data schema accurately represents the data from the source to ensure there are no errors in formatting and content); a minimal sketch of such a check follows below
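The sketch below illustrates one way such a schema check could look. The expected columns, types and value ranges are assumptions invented for the example; in practice the expected schema would come from the dataset’s documentation or a dedicated schema-validation library.

```python
# Minimal sketch of a schema validation check: verify that a received dataset
# matches the expected columns, types and value constraints before it is used.
# The expected schema and example data are illustrative assumptions.
import pandas as pd

EXPECTED_SCHEMA = {
    "customer_id": {"dtype": "int64", "nullable": False},
    "age":         {"dtype": "int64", "nullable": False, "min": 0, "max": 120},
    "country":     {"dtype": "object", "nullable": False},
}

def validate_schema(df: pd.DataFrame) -> list[str]:
    errors = []
    for column, rules in EXPECTED_SCHEMA.items():
        if column not in df.columns:
            errors.append(f"missing column: {column}")
            continue
        if str(df[column].dtype) != rules["dtype"]:
            errors.append(f"{column}: expected {rules['dtype']}, got {df[column].dtype}")
        if not rules["nullable"] and df[column].isna().any():
            errors.append(f"{column}: contains missing values")
        if "min" in rules and (df[column] < rules["min"]).any():
            errors.append(f"{column}: values below {rules['min']}")
        if "max" in rules and (df[column] > rules["max"]).any():
            errors.append(f"{column}: values above {rules['max']}")
    extra = set(df.columns) - set(EXPECTED_SCHEMA)
    if extra:
        errors.append(f"unexpected columns: {sorted(extra)}")
    return errors

if __name__ == "__main__":
    data = pd.DataFrame({"customer_id": [1, 2], "age": [34, 150], "country": ["SG", "SG"]})
    print(validate_schema(data))  # flags the out-of-range age value
```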
4.6 Is the dataset used credible and from a reliable source?
–– Consider whether it is necessary to put in place processes to identify possible errors and inconsistencies at the exploratory data analysis stage, before the dataset is used for training
4.7 Is the dataset used up-to-date?
4.8 Is the dataset used relevant?
–– Consider whether it is necessary and/or operationally feasible to implement data monitoring and reporting processes to remove and record all compromising data
4.9 Where personal data is involved, is it collected for the intended purposes?
–– Consider whether it is relevant to create internal data classification principles developed based on legal and data governance frameworks and standards (e.g. the International Organization for Standardization (ISO) guidelines)
4.12 If any human has filtered, applied labels, or edited the data, did your organization implement measures to ensure the quality of the dataset used?
–– Consider whether it is necessary to assign roles to the entire data pipeline to enforce accountability. This would allow an organization to trace who manipulated data and by which rule
4.13 Did your organization take steps to mitigate unintended biases in the dataset used for the AI model, especially omission bias and stereotype bias?
4.14 Did your organization use a complete dataset by not removing data attributes prematurely to minimize risk of inherent bias?
–– Consider taking steps to mitigate inherent bias in datasets, especially where social or demographic data is being processed for an AI system whose output directly impacts individuals
–– Consider defining which data fields contain sensitive or protected attributes. In addition, consider checking for indirect bias by measuring which data fields are predictive of protected and sensitive attributes, and which of those data fields are causative of the target outcomes versus mere proxies for protected and sensitive attributes (a sketch of such a proxy check follows below)
–– Consider whether it is useful to auto-mosaic any consumer physical features (e.g. face) and other personally identifiable information to prevent this information from being collected if it is not necessary. This could minimize potential risk for bias based on personal data instead of transactional behaviour
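A minimal sketch of a proxy check is shown below: for each candidate field, a simple model predicts the protected attribute from that field alone, and fields with high predictive power are flagged for closer review. The column names, synthetic data and the 0.7 AUC threshold are illustrative assumptions, and a score of this kind indicates association rather than causation.

```python
# Minimal sketch of an indirect-bias (proxy) check: measure how well each
# candidate input field predicts a protected attribute. Fields that predict it
# strongly may act as proxies even if the attribute itself is excluded.
# Column names, data and the 0.7 AUC threshold are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def proxy_scores(df: pd.DataFrame, protected: str, candidates: list[str]) -> dict:
    scores = {}
    y = df[protected].to_numpy()
    for col in candidates:
        X = df[[col]].to_numpy()
        model = LogisticRegression().fit(X, y)
        scores[col] = roc_auc_score(y, model.predict_proba(X)[:, 1])
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    gender = rng.integers(0, 2, n)                        # protected attribute (0/1)
    postcode_income = gender * 2.0 + rng.normal(size=n)   # correlated with gender
    tenure_months = rng.normal(size=n)                    # unrelated to gender
    df = pd.DataFrame({"gender": gender,
                       "postcode_income": postcode_income,
                       "tenure_months": tenure_months})
    for col, auc in proxy_scores(df, "gender", ["postcode_income", "tenure_months"]).items():
        flag = "possible proxy" if auc > 0.7 else "ok"
        print(f"{col}: AUC={auc:.2f} ({flag})")
```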
Relevant only in limited scenarios:
4.15 Did your organization take steps to mitigate biases that may result from data collection devices (e.g. cameras and sensors)?
–– Consider whether it is necessary to identify potential biases of data annotation
–– Consider whether not to remove data attributes and data items from the datasets prematurely
4.17 Did your organization use different datasets for training, testing and validation of the AI model?
–– After training of the AI model, consider validating the AI model using a separate validation dataset (see the sketch below)
–– Consider conducting statistical tests (e.g. area under the Receiver Operating Characteristic (ROC) curve, stationarity and multi-collinearity tests) to evaluate and validate the AI model’s ability to predict results
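The sketch below illustrates a train/validation/test split and an area-under-the-ROC-curve check on the held-out sets. The synthetic data, split sizes and choice of logistic regression are assumptions made for the example.

```python
# Minimal sketch of evaluating a model on held-out validation and test data
# using the area under the ROC curve, one of the statistical checks above.
# The synthetic data and model choice are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

# Keep training, validation and test data separate (question 4.17)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=0.5, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Validation AUC: {val_auc:.3f}")  # use the validation score for model selection
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {test_auc:.3f}")       # report the test score only once, at the end
```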
4.18 Did your organization test the AI model used on different demographic groups to mitigate systematic bias? (A sketch of a per-group error-rate check follows below.)
–– Consider whether it is necessary to check for data drift between the different datasets and to make the AI robust to any differences
–– Consider whether it is necessary to test the results of different AI models to identify potential biases produced by a certain model
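Below is a minimal, illustrative sketch of comparing error rates across groups. The group labels, synthetic data and model are assumptions made for the example; a real analysis would use the organization’s own protected attributes and agreed fairness metrics.

```python
# Minimal sketch of checking model error rates across demographic groups to
# surface systematic bias. The group labels, data and metric are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 3000
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])   # e.g. a demographic attribute
x = rng.normal(size=(n, 3))
# Group B is under-represented and noisier in this toy data, so it is harder to predict
noise = np.where(group == "B", 1.0, 0.4)
y = (x[:, 0] + noise * rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(x[:2000], y[:2000])
pred = model.predict(x[2000:])

report = pd.DataFrame({"group": group[2000:], "error": pred != y[2000:]})
print(report.groupby("group")["error"].agg(["mean", "count"]))
# Large gaps in per-group error rates warrant investigation of the data and model
```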
4.20 Did your organization periodically review and update datasets to ensure their accuracy, quality, currency, relevance and reliability?
4.21 Did your organization implement measures to minimize reinforcement bias?
To ensure data accuracy, quality, currency, relevance and reliability, consider:
–– Whether it would be useful to schedule regular reviews of datasets
–– Whether it would be necessary to update the dataset periodically with new data that was obtained from the actual use of the AI model deployed in production or from external sources
–– Allocating the responsibility to relevant personnel to monitor on a regular basis whether new data is available
–– Exploring if there are tools available that can automatically notify your organization when new data becomes available
–– Deploying a new challenger model that shadows all of the predictions and decisions made by the main AI model, and training the challenger model on newer data than the main AI model. Flag when the challenger model is consistently outperforming the main deployed AI model, as this indicates that the patterns in the data have changed and that the old data is no longer valid. This would be a trigger for a review of the data, and your organization would need to consider if the challenger model should become the new main deployed model (a minimal sketch follows this list)
–– Testing for error rates of the AI model when applied to different subgroups of the target population
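The champion/challenger comparison above can be prototyped in a few lines. The sketch below is illustrative: the synthetic drift, the ROC AUC metric and the 0.02 promotion margin are assumptions, and a real deployment would compare the models over many scoring windows before flagging a review.

```python
# Minimal sketch of a challenger model that shadows the deployed (champion)
# model: both score the same recent data, and a review is triggered when the
# challenger outperforms the champion by more than a margin. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_data(n, drift=0.0):
    X = rng.normal(size=(n, 3))
    # `drift` shifts the relationship between features and outcome over time
    y = (X[:, 0] + drift * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)
    return X, y

X_old, y_old = make_data(2000, drift=0.0)     # data the champion was trained on
X_new, y_new = make_data(2000, drift=1.5)     # more recent data
X_live, y_live = make_data(1000, drift=1.5)   # incoming data both models shadow

champion = LogisticRegression().fit(X_old, y_old)
challenger = LogisticRegression().fit(X_new, y_new)

champ_auc = roc_auc_score(y_live, champion.predict_proba(X_live)[:, 1])
chall_auc = roc_auc_score(y_live, challenger.predict_proba(X_live)[:, 1])
print(f"champion AUC={champ_auc:.3f}, challenger AUC={chall_auc:.3f}")

MARGIN = 0.02  # how much better the challenger must be before flagging
if chall_auc > champ_auc + MARGIN:
    print("Review trigger: patterns in the data appear to have changed;")
    print("consider promoting the challenger after validation and sign-off.")
```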
4.26 Did your organization ensure that the AI model deployed is sufficiently robust?
–– Consider designing, verifying and validating the AI model to ensure that it is sufficiently robust
–– Consider whether it is relevant to conduct adversarial testing on the AI model to ensure that it is able to handle a broader range of unexpected input variables (e.g. unexpected changes or anomalies)
–– Using the same version of the AI model for testing and in products
–– Creating test cases and running several model scenarios (i.e. what-ifs) to test model efficacy. This might be relevant for applications where the AI model is solving a puzzle (e.g. assigning resources to create a plan or a schedule)
4.29 Did your organization assess the degree to which the identified AI solution generalized well and failed gracefully?
–– To monitor the degradation of models, consider setting up an automated tool that will alert data scientists when the model performance is subpar or below an acceptable threshold
To assess whether the AI solution failed gracefully, consider:
4.30 Did your organization document the relevant information such as datasets and processes that yield the AI models’ decisions in an easily understandable manner?
Where practical and/or relevant, consider:
–– Whether it is useful to track the AI model’s decision-making process and performance using standard documentation (e.g. dashboard). Examples of information to track could include:
  –– Project objectives
  –– Research approach
  –– Error logs and error rate metrics (e.g. false acceptance rate and throughput metrics)
–– Keeping a copy of training data and documenting how the data was processed
4.31 Did your organization engage an independent team to check if they can produce the same or very similar results using the same AI method, based on the documentation relating to the model made by your organization?
Where practical and/or relevant, consider:
–– Whether it is relevant to take into account specific contexts or particular conditions that have an impact on the results produced by the AI method
–– Whether it is useful to make available replication files (i.e. files that replicate each step of the AI model’s developmental process) to facilitate the process of testing and reproducing behaviour
4.32 Has your organization put in place relevant documentation, procedures and processes that facilitate internal and external assessments of the AI system?
Where practical and/or relevant, consider:
–– Whether the AI system can be evaluated by internal or external assessors
–– Whether it is useful to keep a comprehensive record of data provenance, procurement, pre-processing, how the data has been processed, lineage of the data, storage and security
Guiding questions, with useful industry examples, practices and guides for consideration
5.1 Has your organization identified the various internal and external stakeholders that will be involved and/or impacted by the deployment of the AI solution?
5.2 Did your organization consider the purpose and the context under which the explanation is needed?
5.3 Did your organization tailor the communication strategy and/or explanation accordingly after considering the audience, purpose and context?
Where practical and/or relevant, consider:
–– Customizing the communication message for the different stakeholders who are impacted by the AI solution
–– Providing different levels of explanation at:
  –– Data (e.g. types and range of data used in training the algorithm)
  –– Model (e.g. features and variables used and weights)
  –– Human element (e.g. nature of human involvement when deploying the AI system)
  –– Inferences (e.g. predictions made by the algorithm)
  –– Algorithmic presence (e.g. if and when an algorithm is used)
Relevant only in limited scenarios:
5.5 In circumstances where technical explainability/explicit explanations may not be useful to the audience, did your organization provide implicit explanation (e.g. counterfactuals)?
–– Whether it is relevant to provide information at an appropriate juncture on what AI is and when, why and how AI has been used in decision-making about the users. Organizations could also document and explain the reason for using AI, how the AI model training and selection processes were conducted, the reasons for which decisions were made, as well as steps to mitigate risks of the AI solution on users. By having a clear understanding of the possible consequences of the AI-augmented decision-making, users could be better placed to decide whether to be involved in the process and anticipate how the outcomes of the decision may affect them
–– Informing users if an interaction involves AI, and how the AI-enabled features are expected to behave during normal use. For example, your organization could consider informing users on the website landing page that they are interacting with an AI-powered chatbot
5.7 Did your organization evaluate whether your AI governance structure and processes are in line with changing standards?
5.8 Did your organization make available the outcome of the evaluation to relevant stakeholders?
–– Consider whether it is relevant to keep abreast of local and international developments relating to AI governance
–– Consider whether it is necessary to also provide an explanation on how/why an ethical evaluation was conducted
5.9 Did your organization develop a policy on explanations to be provided to individuals, and implement the policy accordingly?
–– Consider whether it is applicable to publish an explanation of when AI is used
–– Consider identifying educational tools (e.g. leaflets, newsletters, user guides and white papers) and conducting briefing sessions or information campaigns that could help clients/customers understand the explanation
5.10 Did your organization address usability problems and test whether user interfaces served their intended purposes?
–– Consider whether it is useful to conduct user testing
–– Consider placing clients/consumers at the centre when designing the user interface and deploying the AI solution by:
5.12 If users’ responses are used to train the AI model, did your organization implement measures to filter out misleading and/or inaccurate responses?
–– Consider designing the AI model to identify abnormal behaviour and prevent manipulation (e.g. for chatbots, identify users who appear to respond too fast, or trigger parts of the bot code that other users do not); a minimal sketch of such a filter follows below
–– For bots that employ automatic or supervised learning techniques, consider whether it is necessary to ensure that the AI system is able to distinguish between maliciously-introduced data and data that is rare, yet valid and important
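Below is a minimal, illustrative sketch of such a pre-training filter. The response-time and rare-path thresholds, the field names and the quarantine step are assumptions for the example; quarantined responses would be reviewed by a human before any retraining.

```python
# Minimal sketch of filtering user responses before they are used to retrain a
# chatbot model: responses that arrive implausibly fast, or from users who
# repeatedly trigger rare bot paths, are quarantined for review rather than
# fed straight into training. Thresholds and fields are illustrative.
from collections import Counter

MIN_RESPONSE_SECONDS = 1.0     # faster than a human plausibly types
MAX_RARE_PATH_TRIGGERS = 5     # repeated triggering of paths other users rarely hit

def filter_responses(responses, rare_paths):
    accepted, quarantined = [], []
    rare_hits = Counter()
    for r in responses:
        rare_hits[r["user_id"]] += int(r["bot_path"] in rare_paths)
    for r in responses:
        too_fast = r["response_seconds"] < MIN_RESPONSE_SECONDS
        suspicious_user = rare_hits[r["user_id"]] > MAX_RARE_PATH_TRIGGERS
        (quarantined if too_fast or suspicious_user else accepted).append(r)
    return accepted, quarantined

if __name__ == "__main__":
    responses = [
        {"user_id": "u1", "response_seconds": 4.2, "bot_path": "faq", "text": "ok thanks"},
        {"user_id": "u2", "response_seconds": 0.2, "bot_path": "faq", "text": "spam spam"},
    ]
    accepted, quarantined = filter_responses(responses, rare_paths={"debug_menu"})
    print(len(accepted), "accepted;", len(quarantined), "quarantined for human review")
```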
Option to opt out
5.13 Did your organization offer the option to opt out of the identified AI solution by default or only on request?
–– Consider informing users of the consequences of choosing to opt out, if such an option is available
5.14 Did your organization provide a channel for feedback or queries?
5.15 Is the feedback channel managed by appropriate personnel?
–– Consider providing an avenue for individuals to submit updated data about themselves
–– Consider whether it is necessary to set expectations as to whether the user will receive any response to feedback provided
–– Consider providing a hotline or email contact of relevant personnel such as a data protection officer or quality service manager on the organization’s website
5.16 Did your organization provide an avenue for users to request a review of material AI decisions that have affected them?
–– Consider whether it is useful to describe the process for appealing a decision
–– Consider whether it is useful to keep a record of chatbot conversations with users
The International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE) are
developing relevant AI standards. Organizations may consider referring to them, as and when they become available.
IEEE P7000™ Model Process for Addressing Ethical Concerns During System Design
IEEE P7001™ Transparency of Autonomous Systems
IEEE P7002™ Data Privacy Process
IEEE P7003™ Algorithmic Bias Considerations
IEEE P7004™ Standard on Child and Student Data Governance
IEEE P7005™ Standard for Transparent Employer Data Governance
IEEE P7006™ Standard for Personal Data Artificial Intelligence (AI) Agent
IEEE P7007™ Ontological Standard for Ethically Driven Robotics and Automation Systems
IEEE P7008™ Standard for Ethically Driven Nudging for Robotic, Intelligent, and Automation Systems
IEEE P7009™ Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
IEEE P7010™ Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems
The Personal Data Protection Commission, Info-communications Media Development Authority and World Economic
Forum’s Centre for the Fourth Industrial Revolution express their sincere appreciation to the following for their valuable
feedback to this Implementation and Self-Assessment Guide for Organizations:
Endnotes
1. The PDPC's Second Edition of the Model AI Governance Framework can be downloaded at Go.gov.sg/ai-gov-mf-2
3. The PDPC's Guide to Data Protection Impact Assessments can be downloaded at https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Other-Guides/guide-to-dpias---011117.pdf
4. These terms are used as defined in the PDPC Anonymization Advisory Guidelines and Technical Companion Guide