AI-Driven Governance: Enhancing Transparency and Accountability in Public Administration
Abstract
Artificial Intelligence (AI) is transforming public administration by improving efficiency, en-
hancing transparency, and facilitating data-driven decision-making. This paper explores the
integration of AI in government decision-making, examining its applications, challenges,
and policy recommendations. The study highlights the role of AI in policy analysis, admin-
istrative efficiency, financial management, and public participation. Key challenges such as
algorithmic bias, data privacy concerns, explainability issues, and regulatory gaps are dis-
cussed. The paper proposes a set of policy recommendations, including the establishment
of AI ethics councils, implementation of explainability standards, enhancement of public AI
literacy, and the creation of AI transparency laws. Future research should focus on hybrid
human-machine governance models to ensure AI adoption aligns with democratic principles
and accountability standards. Ultimately, responsible AI implementation can democratize
governance, increase public trust, and lead to more effective and transparent public adminis-
tration.
1 Introduction
Artificial Intelligence (AI) has emerged as a transformative force in numerous sectors, including
healthcare, finance, and business. However, its potential impact on public administration and
governance is still in its early stages of exploration. Governments worldwide are increasingly
looking to AI-driven tools to enhance efficiency, optimize service delivery, and improve trans-
parency in decision-making processes. This paper seeks to examine how AI can be leveraged
to ensure greater transparency in governmental decision-making, focusing on the challenges,
opportunities, and policy implications of AI-driven public administration.
Governments have traditionally relied on human expertise, historical data, and bureaucratic pro-
cesses to make policy decisions. While these approaches have been effective in many instances,
they are often subject to inefficiencies, delays, and biases. AI, with its ability to process vast
amounts of data, identify patterns, and generate predictive models, has the potential to revolu-
tionize decision-making by providing evidence-based insights, improving responsiveness, and
reducing human biases.
Countries such as Estonia, Singapore, and the United States have already begun integrating
AI into various aspects of governance. For instance, Estonia’s e-Government system utilizes AI
for automated public services, while Singapore employs AI for urban planning and real-time
crisis management. These implementations highlight the potential of AI to enhance governance
efficiency and transparency when properly deployed.
The significance of AI in governance extends beyond efficiency: it has the potential to redefine
how transparency itself is implemented in decision-making. AI can analyze vast datasets in real
time, producing evidence-based recommendations that can be shared with the public. Explainable
AI models address the black-box problem, making automated decisions easier to scrutinize.
Automated public reporting systems give citizens access to accurate, up-to-date information about
policy decisions and governmental actions. AI-powered engagement tools let governments analyze
public sentiment and feedback from many sources, incorporating citizen perspectives into
policymaking. Finally, AI can detect anomalies in financial transactions, procurement processes,
and budget allocations, reducing corruption risks.
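The anomaly-detection use just mentioned can be sketched with an unsupervised detector. The data, features, and threshold below are invented for illustration; a real deployment would need audited, representative data and human review of every flag.

```python
# Illustrative sketch: flagging unusual procurement payments with an
# unsupervised anomaly detector (scikit-learn's IsolationForest).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic ledger: (amount, days_to_approval) for routine payments...
routine = rng.normal(loc=[10_000, 14], scale=[2_000, 3], size=(500, 2))
# ...plus a few outliers (very large amounts, approved unusually fast).
suspicious = np.array([[95_000, 1], [120_000, 2]])
ledger = np.vstack([routine, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(ledger)
flags = model.predict(ledger)          # -1 = anomalous, 1 = normal

flagged = ledger[flags == -1]
print(f"{len(flagged)} transactions flagged for human review")
```

The detector only surfaces candidates; the corruption-risk reduction comes from routing those candidates to human auditors rather than acting on them automatically.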
As governments continue to adopt AI technologies, it is crucial to establish regulatory frame-
works and ethical guidelines that balance transparency with privacy, accountability, and fairness.
This paper will explore how governments can effectively leverage AI for transparent decision-
making while addressing the associated risks and challenges.
This paper aims to examine the role of AI in enhancing government transparency and account-
ability. It will identify key AI applications in public administration and their impact on decision-
making. The paper will analyze the ethical, legal, and technical challenges associated with AI-
driven governance and provide policy recommendations for implementing AI in a way that max-
imizes transparency while minimizing risks.
The following sections will provide an in-depth discussion of these topics, beginning with an
overview of the theoretical foundations of AI in public governance.
AI in public governance refers to the application of machine learning, natural language process-
ing, predictive analytics, and automation to improve the efficiency and effectiveness of govern-
ment operations. Unlike traditional administrative processes that rely heavily on manual inputs,
AI-driven governance utilizes data analytics and algorithmic decision-making to enhance pol-
icy implementation and service delivery. AI can automate repetitive bureaucratic tasks, generate
predictive insights for policymakers, and streamline regulatory compliance efforts.
The integration of AI into public governance is rooted in key technological principles, in-
cluding data-driven decision-making, computational modeling, and adaptive learning systems.
Governments are investing in AI solutions that enable dynamic policy responses, real-time data
monitoring, and proactive governance models. These innovations contribute to an evidence-
based approach to policymaking, reducing subjectivity and enhancing policy consistency.
Theoretical models of governance provide critical insights into how AI can be integrated into
public administration. The bureaucratic governance model, for instance, emphasizes hierarchical
structures and standardized, rule-bound procedures.
One of the most significant theoretical underpinnings of AI in governance is the concept of algo-
rithmic transparency. Algorithmic governance refers to the use of AI-driven algorithms to inform
and implement policy decisions. While algorithmic governance enhances efficiency and objec-
tivity, it also raises concerns about the explainability of AI decisions. Explainable AI models aim
to bridge this gap by providing interpretable and traceable decision-making processes, allowing
policymakers and citizens to understand how AI-generated recommendations are derived.
Transparency in algorithmic governance is essential for maintaining public trust and account-
ability. Governments are exploring transparency-enhancing mechanisms such as open-source
AI models, algorithmic audits, and public disclosure of AI decision criteria. These measures help
mitigate risks associated with opaque AI systems and reinforce the legitimacy of AI-driven gov-
ernance structures.
AI’s integration into public governance raises ethical considerations related to fairness, account-
ability, and human rights. Bias in AI algorithms can perpetuate discrimination, leading to unjust
policy outcomes. Governments must implement rigorous bias detection and mitigation strategies
to ensure that AI applications uphold principles of equity and justice. Additionally, AI-driven gov-
ernance should prioritize human oversight to prevent excessive reliance on automated decision-
making.
The social impact of AI in governance extends beyond administrative efficiency. AI-driven
public services, digital identity verification, and AI-powered social programs have the potential
to enhance citizen well-being. However, concerns surrounding data privacy, surveillance, and
algorithmic bias must be addressed through comprehensive regulatory frameworks and public
consultations.
The integration of AI into government decision-making processes has led to significant transfor-
mations in policy formulation, administrative efficiency, and public service delivery. AI-driven
tools offer governments the ability to process large datasets, identify patterns, and generate pre-
dictive models that enhance decision-making accuracy. This section explores key AI applications
in public governance, focusing on their implications for transparency, accountability, and effi-
ciency.
AI is increasingly being used to analyze complex policy issues and predict future outcomes. By
leveraging predictive analytics, governments can design data-driven policies that are more effec-
tive and targeted at societal needs.
Predictive analytics for social policies enables policymakers to assess the potential impact of
new regulations before implementation. AI models analyze historical data, economic indicators,
and social trends to generate forecasts that guide decision-making. This approach allows gov-
ernments to anticipate challenges, allocate resources effectively, and mitigate risks associated with
policy changes.
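A minimal sketch of the forecasting step described above: fit a trend to historical data and project it forward to guide resource allocation. The caseload figures are invented; real models would use many indicators and proper validation.

```python
# Toy predictive-analytics example: project a social-service caseload
# (in thousands) from a historical trend.
import numpy as np

years = np.arange(2015, 2025)                                   # history
caseload = np.array([41, 43, 44, 47, 50, 54, 55, 58, 61, 63])   # thousands

slope, intercept = np.polyfit(years, caseload, deg=1)

def forecast(year: int) -> float:
    """Project caseload (in thousands) for a future year."""
    return slope * year + intercept

for y in (2025, 2026, 2027):
    print(f"{y}: ~{forecast(y):.0f}k cases expected")
```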
AI-powered simulations for scenario planning are another crucial application in policy anal-
ysis. By simulating different policy scenarios, governments can evaluate potential consequences
and identify the most effective solutions. These simulations use machine learning algorithms to
model various economic, environmental, and social factors, allowing decision-makers to explore
multiple policy options in a controlled setting before implementation.
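A toy Monte Carlo comparison in the spirit of such simulations: each policy option has an uncertain payoff, and repeated sampling estimates which option performs best on average. The distributions here are hypothetical placeholders, not real policy parameters.

```python
# Toy scenario-planning simulation: compare two policy options under
# uncertainty by averaging many simulated outcomes.
import random

random.seed(42)

def simulate(option: str) -> float:
    """One draw of net benefit (arbitrary units) for a policy option."""
    if option == "subsidy":
        return random.gauss(mu=100, sigma=40)   # higher mean, higher risk
    return random.gauss(mu=80, sigma=10)        # "status quo": safer

def expected_benefit(option: str, runs: int = 10_000) -> float:
    return sum(simulate(option) for _ in range(runs)) / runs

for opt in ("subsidy", "status_quo"):
    print(f"{opt}: expected benefit ~ {expected_benefit(opt):.1f}")
```

Real policy simulations model many interacting economic, environmental, and social variables, but the structure is the same: sample uncertain inputs, compute outcomes, compare options.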
AI-powered chatbots and virtual assistants streamline communication between governments and the public. Moreover, natural language processing
(NLP) allows chatbots to understand and respond to complex queries, enhancing the overall ac-
cessibility of public services.
AI is enhancing democratic processes by enabling greater public participation and fostering open
governance. By utilizing AI-driven platforms, governments can engage with citizens more ef-
fectively and incorporate public opinions into policymaking.
AI-driven platforms for public consultation allow governments to gather input from citizens
on policy proposals, urban planning, and legislative initiatives. These platforms use AI-powered
data analysis to aggregate feedback, identify key concerns, and generate insights that inform
decision-making. Additionally, AI can facilitate participatory budgeting, where citizens con-
tribute to budget allocation decisions, promoting greater transparency and accountability.
Sentiment analysis and public opinion mining help governments understand public sentiment
on various issues. AI algorithms analyze data from social media, surveys, and news articles to
gauge public perception of policies and governance. This real-time feedback mechanism enables
policymakers to respond proactively to public concerns, adjust policies accordingly, and enhance
trust between governments and citizens.
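The sentiment-mining step can be illustrated with a toy lexicon-based scorer that aggregates citizen comments by policy topic. Production systems use trained NLP models; the lexicon and comments below are invented for demonstration.

```python
# Toy sentiment aggregation: score comments against a small lexicon and
# average the scores per policy topic.
from collections import defaultdict

POSITIVE = {"support", "great", "helpful", "transparent", "fair"}
NEGATIVE = {"oppose", "unfair", "slow", "confusing", "costly"}

def score(comment: str) -> int:
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = [
    ("transit", "great plan, very helpful and fair"),
    ("transit", "I support the new routes"),
    ("zoning", "the process is slow and confusing"),
    ("zoning", "unfair and costly for small owners"),
]

by_topic = defaultdict(list)
for topic, text in comments:
    by_topic[topic].append(score(text))

for topic, scores in by_topic.items():
    avg = sum(scores) / len(scores)
    print(f"{topic}: mean sentiment {avg:+.1f} over {len(scores)} comments")
```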
By integrating AI into government decision-making, public administration can become more
transparent, efficient, and responsive. However, the successful implementation of AI-driven gov-
ernance requires robust ethical frameworks, regulatory oversight, and continued technological
advancements. The following sections will explore the challenges and risks associated with AI in
public governance, along with policy recommendations to ensure responsible AI adoption.
While AI offers numerous benefits for public governance, it also introduces significant chal-
lenges and risks. These concerns must be addressed to ensure that AI-driven decision-making
is transparent, fair, and accountable. This section explores key challenges associated with AI in
government, including explainability issues, data privacy risks, algorithmic bias, and regulatory
challenges.
One of the most significant challenges in AI-driven government decision-making is the lack
of explainability in AI models. Many machine learning algorithms, particularly deep learning
models, operate as "black boxes," meaning their decision-making processes are opaque and difficult
to interpret. This lack of transparency poses serious risks for public administration, where
accountability and fairness are paramount.
Issues with opaque AI models in public policy arise when governments rely on complex algo-
rithms to make decisions without fully understanding how those decisions are reached. This can
lead to unintended biases, incorrect predictions, and reduced public trust in AI systems. For ex-
ample, AI-driven welfare distribution systems have been criticized for denying benefits to eligible
recipients due to unexplainable model predictions.
Strategies for explainable AI (XAI) involve developing AI models that provide human-interpretable
explanations for their decisions. XAI techniques include rule-based models, decision trees, and
attention mechanisms that highlight key factors influencing AI predictions. Governments can
also adopt transparent AI policies, requiring agencies to use interpretable models for high-stakes
decisions such as criminal sentencing, social welfare allocation, and public resource management.
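The interpretable-model strategy can be sketched with a shallow decision tree whose rules can be printed and audited, in contrast to a black-box model. The eligibility data and feature names below are synthetic and hypothetical.

```python
# XAI sketch: a depth-limited decision tree for a benefit-eligibility
# decision, with its rules exported in human-readable form.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [household_income_k, dependents]; label: 1 = benefit granted.
X = [[12, 3], [18, 2], [22, 1], [35, 2], [48, 0], [55, 1], [15, 0], [60, 3]]
y = [1, 1, 1, 0, 0, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every decision path is a readable rule an auditor or citizen can inspect.
print(export_text(tree, feature_names=["income_k", "dependents"]))
```

The trade-off is that such models are less expressive than deep networks; the recommendation in the text is to accept that trade-off for high-stakes public decisions.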
As governments increasingly rely on AI systems to process large amounts of data, concerns about
data privacy and security have become more pressing. AI systems often require access to sensitive
personal information, which raises ethical and legal issues related to data protection.
Risks of government surveillance and data misuse are among the primary concerns associated
with AI-driven governance. Governments have the power to collect and analyze vast amounts of
data from citizens, which can lead to surveillance practices that infringe on individual freedoms.
AI-enhanced surveillance tools, such as facial recognition and predictive policing, have sparked
debates over their potential misuse and threats to civil liberties.
Policy recommendations for data governance emphasize the need for stringent data protection
laws and ethical AI guidelines. Governments should implement transparency measures, such as
public disclosures on how AI systems process data and who has access to it. Additionally, adopting
data minimization principles—only collecting the necessary data for AI applications—can reduce
privacy risks. Independent oversight bodies should be established to ensure compliance with data
protection laws and prevent government overreach.
Algorithmic bias is a critical issue in AI-driven public decision-making, where biased datasets
and flawed training processes can result in discriminatory outcomes. Biases in AI models can
disproportionately affect marginalized communities, leading to unfair policy decisions.
Case studies of biased AI in public decision-making highlight the real-world consequences
of algorithmic discrimination. For example, in the United States, an AI-powered risk assessment
tool used in the criminal justice system was found to predict higher recidivism rates for Black
defendants compared to White defendants, despite similar offense histories. Similarly, automated
hiring systems have been criticized for favoring male candidates over female candidates due to
biased training data.
Mitigation strategies and ethical AI frameworks focus on reducing bias and promoting fair-
ness in AI applications. Governments should ensure that AI models undergo rigorous bias testing
before deployment. This includes auditing training datasets for representational fairness, imple-
menting fairness-aware algorithms, and incorporating human oversight in AI-driven decision-
making. Ethical AI frameworks, such as the OECD AI Principles, stress the importance of fair-
ness, transparency, and accountability in AI governance.
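One of the bias tests named above can be made concrete: comparing selection rates across demographic groups (demographic parity). The predictions and group labels below are fabricated; real audits examine many metrics on real outcomes.

```python
# Minimal demographic-parity check: compare the rate of favourable
# decisions across two groups and measure the gap.
def selection_rate(preds, groups, group):
    chosen = [p for p, g in zip(preds, groups) if g == group]
    return sum(chosen) / len(chosen)

# 1 = favourable decision (e.g., benefit approved) from some model.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = selection_rate(preds, groups, "A")   # 3/5
rate_b = selection_rate(preds, groups, "B")   # 2/5
gap = abs(rate_a - rate_b)

print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, gap={gap:.2f}")
# A gap above a policy threshold (say 0.1) would trigger further review.
```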
The rapid advancement of AI technologies has outpaced existing legal and regulatory frame-
works, creating significant challenges for AI governance in the public sector. Governments must
establish clear guidelines to regulate AI applications and ensure that they align with ethical and
legal principles.
Existing AI governance frameworks, such as the EU AI Act and OECD AI Principles, pro-
vide a foundation for responsible AI regulation. The EU AI Act classifies AI systems based on
risk levels and imposes strict requirements for high-risk applications, including those used in law
enforcement and social services. The OECD AI Principles emphasize transparency, fairness, and
accountability in AI development and deployment.
The need for standardized AI regulations in public administration is becoming increasingly
urgent. Governments must develop comprehensive AI policies that address ethical concerns, data
privacy, and accountability. Standardized guidelines can help ensure that AI-driven decision-
making aligns with democratic values and human rights. Additionally, international cooperation
is essential for creating harmonized AI regulations that prevent regulatory gaps and inconsisten-
cies.
Addressing these challenges requires a collaborative effort between policymakers, technolo-
gists, legal experts, and civil society. By implementing robust regulatory frameworks, promoting
explainable AI, and prioritizing ethical considerations, governments can harness the benefits of
AI while mitigating its risks. The following section will present policy recommendations for
ensuring responsible AI adoption in public governance.
To maximize the benefits of AI in public administration while ensuring transparency and account-
ability, governments must establish comprehensive regulatory and ethical frameworks. This sec-
tion outlines key policy recommendations that can guide the responsible implementation of AI in
governance. These recommendations focus on ethical oversight, explainability standards, public
participation, legal frameworks, and multi-stakeholder collaboration.
Governments should establish AI ethics councils that bring together experts in technology, law, and
ethics, addressing concerns such as bias, data privacy, and accountability. By fostering collaboration
between experts and stakeholders, governments can ensure that AI applications in public
administration align with ethical principles and the public interest.
AI explainability is critical for building public trust and ensuring transparency in government
decision-making. Many AI models function as black boxes, making it difficult to understand how
decisions are made. To address this challenge, governments should implement AI explainability
standards that require AI systems to provide clear, interpretable justifications for their outputs.
Explainability standards should mandate the use of Explainable AI models that allow human
oversight and interpretation. Public agencies should be required to document the reasoning be-
hind AI-driven policy decisions. Guidelines should be developed to establish clear thresholds for
AI decision accountability. Regular audits should be conducted to assess whether AI recommen-
dations align with ethical and policy standards.
By ensuring that AI-driven governance decisions are interpretable, governments can enhance
accountability and allow citizens to challenge or appeal decisions that affect them.
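One way to implement the documentation requirement above is a structured decision record written alongside every AI-assisted decision, so it can later be audited or appealed. The field names below are illustrative, not a standard.

```python
# Sketch of a per-decision audit record for AI-assisted public decisions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    model_version: str
    inputs: dict
    recommendation: str
    key_factors: list      # human-readable reasons surfaced by the model
    reviewed_by: str       # the human official accountable for the call

record = DecisionRecord(
    case_id="2025-00042",
    model_version="eligibility-tree-1.3",
    inputs={"income_k": 14, "dependents": 2},
    recommendation="approve",
    key_factors=["income below threshold", "dependents present"],
    reviewed_by="case_officer_17",
)

entry = asdict(record) | {"logged_at": datetime.now(timezone.utc).isoformat()}
print(json.dumps(entry, indent=2))   # appended to an append-only audit log
```

Records like this give citizens something concrete to cite when challenging a decision, and give auditors a trail linking each outcome to a model version and a responsible official.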
Public awareness and understanding of AI are essential for fostering trust and encouraging civic
engagement in AI governance. Governments should invest in AI literacy programs that educate
citizens on how AI systems work, their potential benefits, and the risks involved.
Key initiatives to enhance AI literacy and public participation include creating AI education
campaigns targeting diverse populations, including students, workers, and policymakers. Online
platforms should be developed where citizens can learn about AI governance and provide feedback
on AI-driven policies. Town halls, public forums, and consultations on AI governance should be
hosted to promote transparency and engagement. Citizen participation in AI ethics discussions
and decision-making processes should be encouraged.
By empowering citizens with AI knowledge, governments can increase public confidence in
AI-driven governance and foster greater civic engagement in digital policymaking.
Governments should establish robust legal frameworks that regulate AI use in public adminis-
tration. These laws should outline specific transparency and accountability requirements for AI
deployment in governance.
AI transparency and accountability laws should require government agencies to disclose AI use
in decision-making processes. Mandatory impact assessments should be implemented to evaluate
the potential risks and biases of AI models before deployment. AI auditing mechanisms should be
established to review AI-driven decisions and ensure compliance with ethical standards. Citizens
should be provided with legal avenues to challenge AI-based decisions that impact their rights or
access to services.
AI laws should also align with international standards, such as the EU AI Act and OECD AI
Principles, to ensure consistency in AI governance across jurisdictions.
Collaboration between governments, industry, academia, and civil society is essential for ensur-
ing the responsible use of AI in governance. Multi-stakeholder AI audits can enhance oversight
and accountability by involving independent organizations in evaluating AI applications used in
public administration.
Governments should also promote open-source AI initiatives that encourage transparency and
innovation. Open-source AI models allow researchers and policymakers to scrutinize algorithmic
decision-making processes, reducing the risks associated with proprietary black-box models.
Multi-stakeholder engagement should include establishing independent AI auditing bodies
that conduct regular assessments of government AI systems. Partnerships between governments
and research institutions should be encouraged to develop ethical AI frameworks. Open-source
AI development should be promoted to ensure transparency and enable external reviews of AI al-
gorithms. Private companies that supply AI solutions to government agencies should be required
to disclose algorithmic methodologies and data sources.
By fostering a culture of transparency and collaboration, governments can ensure that AI-
driven governance remains accountable to the public and aligned with ethical principles.
7 Conclusion
The adoption of AI in governance presents both opportunities and challenges. While AI has
the potential to enhance efficiency, policy effectiveness, and transparency, it also raises concerns
related to explainability, privacy, bias, and accountability. To maximize the benefits of AI while
mitigating its risks, governments must establish clear policies and regulatory frameworks that
uphold democratic values and public trust.
As AI technology continues to evolve, future research should focus on hybrid governance models
that integrate human oversight with AI-driven decision-making. Hybrid models can combine
the efficiency of AI with human judgment, ensuring that AI applications align with ethical con-
siderations and societal values. Research should explore mechanisms for human-AI collaboration,
including augmented intelligence systems that support policymakers rather than replace them.
Additionally, studies should investigate how AI can be leveraged to enhance civic engagement
and participatory governance. Future AI models should prioritize inclusivity, allowing diverse
stakeholders to contribute to decision-making processes. This research can provide insights into
how AI can be used to strengthen democratic institutions and improve governance outcomes.
7.3 Final Thoughts on AI’s Role in Democratizing Governance and Improving Transparency
AI-Driven Governance: Enhancing Transparency and Accountability in Public Administration
Changkui LI
Creative Publishing Co., Limited
Abstract Artificial Intelligence (AI) is transforming public administration by improving efficiency, enhancing transparency, and promoting data-driven decision-making. This study examines the application of AI in government decision-making, analyzing its advantages, challenges, and related policy recommendations. It focuses on the role of AI in policy analysis, administrative efficiency, fiscal management, and public participation, and discusses key challenges including algorithmic bias, data privacy, explainability, and regulatory gaps. The paper proposes a set of policy recommendations, including establishing AI ethics councils, implementing AI explainability standards, improving public AI literacy, and enacting AI transparency laws. Future research should focus on hybrid human-machine governance models to ensure that AI applications conform to democratic principles and accountability standards. Ultimately, responsible AI adoption can advance the democratization of governance, strengthen public trust, and improve the transparency and efficiency of public administration.
Keywords Artificial Intelligence; public administration; transparency; accountability; AI ethics
To Cite This Article Changkui LI. (2025). AI-Driven Governance: Enhancing Transparency
and Accountability in Public Administration. Digital Society & Virtual Governance, 1(1), 1-16.
https://doi.org/10.6914/dsvg.010101
Digital Society & Virtual Governance, ISSN 3079-7624 (print), ISSN 3079-7632 (online), DOI
10.6914/dsvg, is a quarterly founded in 2025 and published by Creative Publishing Co., Limited.
Email: [email protected]; https://dsvg.cc; https://cpcl.hk.
Article History Received: November 16, 2024 Accepted: January 22, 2025 Published:
February 28, 2025
References
[1] Brynjolfsson, E., & McAfee, A. (2017). Machine, platform, crowd: Harnessing our digital
future. W. W. Norton & Company.
[2] Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.
[3] Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., ... &
Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477-486.
[4] Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI.
Harvard University Press.
[5] Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society.
Harvard Data Science Review, 1(1).
[6] European Commission. (2021). Proposal for a regulation on a European approach for Artifi-
cial Intelligence (AI Act). Brussels, Belgium: European Commission.
[7] Mittelstadt, B. D., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. Proceed-
ings of the Conference on Fairness, Accountability, and Transparency, 279-288.
[8] Binns, R. (2018). Algorithmic accountability and public reason. Philosophy & Technology,
31(4), 543-556.
[9] Sun, T. Q., & Medaglia, R. (2019). Mapping the challenges of Artificial Intelligence in the
public sector: Evidence from public servants. Government Information Quarterly, 36(2),
368-383.
[10] Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the
new frontier of power. PublicAffairs.
[11] Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish
the poor. St. Martin’s Press.
[12] Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelli-
gence. Yale University Press.
[13] Morozov, E. (2013). To save everything, click here: The folly of technological solutionism.
PublicAffairs.
[14] OECD. (2020). OECD principles on AI. Retrieved from https://www.oecd.org/going-digital/ai/principles/
[15] Balkin, J. M. (2017). Information fiduciaries and the first amendment. Harvard Law Review,
131(1), 1-53.
[16] Veale, M., & Edwards, L. (2018). Clarity, surprises, and further questions in the GDPR’s
provisions for algorithmic decision-making. Computer Law & Security Review, 34(2), 398-
404.
[17] Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated
decision-making does not exist in the General Data Protection Regulation. International Data
Privacy Law, 7(2), 76-99.
[18] Kitchin, R. (2014). The data revolution: Big data, open data, data infrastructures and their
consequences. SAGE.
[19] Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract.
Ethics and Information Technology, 20(1), 5-14.
[20] Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2019). Artificial Intelligence and the public
sector—Applications and challenges. International Journal of Public Administration, 42(7),
596-615.
[21] Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines.
Nature Machine Intelligence, 1(9), 389-399.
[22] Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation.” AI Magazine, 38(3), 50-57.
[23] Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Transparency in algorithmic
and human decision-making: Is there a double standard? Philosophy & Technology, 32(4),
661-683.
[24] Bryson, J. J. (2018). The limits of limited intelligence. Edge.org. Retrieved from https://www.edge.org/response-detail/27145
[25] Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the trans-
parency ideal and its application to algorithmic accountability. New Media & Society, 20(3),
973-989.
Editor Sophia LI [email protected]