Towards a new ISO standard to support compliance with the AI Regulation! 🤖

𝗔𝗙𝗡𝗢𝗥 is currently working on the creation of an 𝗜𝗦𝗢 𝘀𝘁𝗮𝗻𝗱𝗮𝗿𝗱 that will strengthen the framework for compliance with the Artificial Intelligence Regulation (#AIAct). This standard is set to become a key reference for European and international companies eager to ensure the responsible and ethical use of AI. 🌍

The standard under development will be divided into 𝟰 𝗺𝗮𝗷𝗼𝗿 𝘀𝗲𝗰𝘁𝗶𝗼𝗻𝘀, covering all the key issues related to the implementation and use of AI:

💼 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗮𝗻𝗱 𝗿𝗶𝘀𝗸 𝗺𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁: This section will address best practices for establishing strong governance frameworks to ensure transparency, accountability, and fairness in the development and use of AI systems.

⚙️ 𝗧𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗰𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲: The technical aspects of compliance will be detailed, including recommendations on algorithm design, data management, and regular technical audits to identify potential risks.

🔍 𝗧𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆 𝗮𝗻𝗱 𝗲𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗶𝗹𝗶𝘁𝘆: This pillar will focus on the obligation to inform end users and provide clear explanations of how AI systems work, in order to build trust and public acceptance of these technologies.

🛡️ 𝗢𝗻𝗴𝗼𝗶𝗻𝗴 𝗲𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝗺𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴: Finally, this section will provide guidelines for continuous evaluation of AI systems, with special attention to biases, social impacts, and updating AI models as data and usage contexts evolve.

📅 𝗣𝗿𝗼𝘃𝗶𝘀𝗶𝗼𝗻𝗮𝗹 𝘁𝗶𝗺𝗲𝗹𝗶𝗻𝗲: 𝗔𝗙𝗡𝗢𝗥 plans a stakeholder consultation process throughout 2024, with official publication of the standard expected by the end of 2025. This timeframe will allow experts, companies, and regulators to contribute to a strong standard applicable on an international scale.

This standard will complement the legal framework already defined by the AI Regulation, giving companies a practical tool to anticipate and manage their obligations.

#AIRegulation #ISOStandard #AFNOR #ArtificialIntelligence #Compliance #AIAct #DigitalEthics #ResponsibleInnovation #TechForGood
𝗧𝗵𝗲 𝗘𝗨 𝗔𝗜 𝗔𝗰𝘁: 𝗔 𝗖𝗮𝘁𝗮𝗹𝘆𝘀𝘁 𝗳𝗼𝗿 𝗦𝘂𝘀𝘁𝗮𝗶𝗻𝗮𝗯𝗹𝗲 𝗗𝗮𝘁𝗮 𝗮𝗻𝗱 𝗔𝗜

The EU AI Act is an important step towards ensuring that AI is developed and used in a way that benefits society. This comprehensive regulation underlines the importance of strong data governance, transparency and accountability in the development of AI systems.

𝗥𝗶𝘀𝗸-𝗯𝗮𝘀𝗲𝗱 𝗮𝗽𝗽𝗿𝗼𝗮𝗰𝗵: The Act categorizes AI systems into different risk levels, with stricter regulations for high-risk systems.
𝗗𝗮𝘁𝗮 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲: The Act emphasizes the importance of data quality, transparency, and accountability in AI development.
𝗧𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆 𝗮𝗻𝗱 𝗲𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗶𝗹𝗶𝘁𝘆: AI systems should be transparent and explainable, allowing users to understand how decisions are made.
𝗙𝗮𝗶𝗿𝗻𝗲𝘀𝘀 𝗮𝗻𝗱 𝗻𝗼𝗻-𝗱𝗶𝘀𝗰𝗿𝗶𝗺𝗶𝗻𝗮𝘁𝗶𝗼𝗻: AI systems should be designed and developed to avoid discrimination and bias.
𝗛𝘂𝗺𝗮𝗻 𝗼𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁: Human oversight should be maintained to ensure that AI systems are used responsibly and ethically.

𝗗𝗮𝘁𝗮 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗮𝗻𝗱 𝗦𝘂𝘀𝘁𝗮𝗶𝗻𝗮𝗯𝗹𝗲 𝗔𝗜: Data governance plays a crucial role in ensuring the ethical and sustainable development of AI systems. By establishing clear data management practices, organizations can:

𝗜𝗺𝗽𝗿𝗼𝘃𝗲 𝗱𝗮𝘁𝗮 𝗾𝘂𝗮𝗹𝗶𝘁𝘆: Ensure that data is accurate, complete, and reliable.
Reduce bias: Identify and mitigate biases in data and algorithms.
𝗘𝗻𝗵𝗮𝗻𝗰𝗲 𝘁𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆: Track data flows and decision-making processes.
Protect privacy: Safeguard personal data and comply with privacy regulations.
𝗣𝗿𝗼𝗺𝗼𝘁𝗲 𝘀𝘂𝘀𝘁𝗮𝗶𝗻𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Reduce the environmental impact of AI by optimizing data storage and processing.

𝗖𝗼𝗻𝗰𝗹𝘂𝘀𝗶𝗼𝗻: The EU AI Act sets a global standard for the responsible development of AI. By emphasizing data governance, transparency and fairness, the Act helps to ensure that AI is used for the benefit of society. Organizations should bring data governance practices in line with the EU AI Act and promote sustainable AI innovation.

#AI #Sustainability #EUAIACT #Data #AIRegulation #DataGovernance #DataManagement #DataPrivacy #DataEthics #AICompliance #MachineLearning #ArtificialIntelligence #Technology #Innovation #Business #Law #Regulation #EuropeanUnion #Industry #Leadership #Management #Strategy #DigitalTransformation
🚀 𝗕𝗿𝗲𝗮𝗸𝗶𝗻𝗴 𝗡𝗲𝘄𝘀: 𝗧𝗵𝗲 𝗘𝗨'𝘀 𝗔𝗜 𝗔𝗰𝘁 𝗥𝗲𝗰𝗲𝗶𝘃𝗲𝘀 𝗙𝗶𝗻𝗮𝗹 𝗔𝗽𝗽𝗿𝗼𝘃𝗮𝗹! 🚀

The European Union has officially adopted the world's first comprehensive regulation on Artificial Intelligence, setting a global standard for AI development and deployment. This landmark AI Act is designed to foster innovation while ensuring trust, safety, and fundamental rights.

𝗪𝗵𝘆 𝗧𝗵𝗶𝘀 𝗠𝗮𝘁𝘁𝗲𝗿𝘀:

🔍 Boosting Trust: The AI Act introduces strict compliance requirements, particularly for high-risk AI systems, ensuring they undergo rigorous assessment and meet transparency obligations before reaching the market. This fosters greater public trust in AI technologies by prioritizing user safety and ethical standards.

👨‍💼 Transforming Auditors' Work: Auditors will play a crucial role in this new regulatory landscape, verifying AI systems' adherence to the new standards, conducting conformity assessments, and overseeing ongoing compliance. This not only elevates the importance of our work but also requires new skills and expertise in AI technologies.

𝗞𝗲𝘆 𝗛𝗶𝗴𝗵𝗹𝗶𝗴𝗵𝘁𝘀:
1. Risk-Based Approach: AI systems are categorized into minimal, limited, high, and unacceptable risk levels, with corresponding regulatory requirements.
2. Transparency & Accountability: High-risk AI systems must be registered, carry a CE marking, and undergo continuous oversight.
3. Global Impact: The AI Act is expected to influence AI regulations worldwide, similar to the "Brussels Effect" seen with GDPR, prompting other regions to adopt similar standards.

The EU's bold move underscores the importance of ethical and trustworthy AI and sets a precedent for global AI governance. As we navigate this new era, the role of auditors and compliance professionals will be pivotal in ensuring these technologies are safe, trustworthy, and aligned with societal values.

#AI #ArtificialIntelligence #EUAIAct #TrustInTech #Regulation #FutureofAudit #Innovation #TrustworthyAI
Five Things To Know About the EU AI Act

The EU AI Act is set to revolutionise the way we develop and deploy AI systems.

1️⃣ Risk-Based Approach: The EU AI Act uses a risk-based approach to regulate AI systems. Depending on the level of risk, AI systems will be subject to different levels of regulatory scrutiny and compliance requirements.

2️⃣ Obligations for High-Risk Systems: AI systems that pose a high risk will face stringent controls, including data governance, transparency, and explainability. Make sure you're prepared to meet all the requirements if you're operating or launching AI solutions that may pose a high risk.

3️⃣ Transparency Requirements: AI applications that interact directly with consumers must adhere to transparency requirements. Ensure that users are aware they are interacting with an AI system and not a human.

4️⃣ (Almost) No Requirements for Low-Risk Applications: AI systems that pose low or limited risk face few or no requirements under this legislation. Make sure you can formally evidence that your AI system poses minimal or no risk.

5️⃣ Enforcement and Penalties: The EU AI Act will be enforced rigorously, with significant penalties for non-compliance. Don't wait to start strengthening your risk management, data governance, transparency, testing, and logging capabilities.

The EU AI Act is a landmark regulation that will shape the future of AI development and usage globally. Start preparing now to ensure your business is compliant and ready for the future of AI. You can visit our blog or download our AI one-pager from our website.

EU AI Act: Five Things You Need To Know (fit4privacy.com)

#EUAIAct #AICompliance #AIRegulation #AIEthics #AIFuture #punitbhatia #fit4privacy #privacycompliance #dataprotection
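💡 To make the risk-based logic above concrete, here is a minimal, illustrative Python sketch of how a team might map risk tiers to an internal obligation checklist. The tier names follow the post; the checklist entries are simplified assumptions for illustration, not a restatement of the Act's actual legal requirements.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # stringent controls
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # (almost) no requirements

# Hypothetical internal checklist per tier -- illustrative only, not legal advice.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not place on the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "data governance",
        "technical documentation & logging",
        "transparency & human oversight",
        "conformity assessment before market placement",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with an AI system"],
    RiskTier.MINIMAL: ["no mandatory requirements (document the classification)"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for item in obligations_for(RiskTier.HIGH):
        print("- " + item)
```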
🚀 The October edition of the AI Governance Newsletter is here! This month, we cover essential AI regulatory updates from around the world, including:

🌍 Global Coordination: The EU, US, and UK signed a historic AI standards agreement under the Framework Convention on AI, aligning AI development with human rights and the rule of law.
🇺🇸 US: Key developments like the AI Civil Rights Act of 2024, targeting algorithmic discrimination, and the veto of California's SB 1047 bill on regulating large AI models.
🇪🇺 EU: Progress on the Code of Practice for General-Purpose AI Models and the growing EU AI Pact, with over 100 companies pledging early compliance.
🇩🇪🇫🇷 Germany & France: Joint recommendations from ANSSI and BSI on secure AI coding assistants, highlighting both productivity benefits and security risks.
🇬🇧 UK: A new AI Bill on public decision-making algorithms and inquiries on scaling up AI and creative tech businesses.
🇧🇪 Belgium: The Belgian Data Protection Authority's guide on AI systems and GDPR alignment.

Plus, check out our free resources for EU AI Act compliance, advice on AI governance best practices, and details on upcoming events!

👉 Read the full edition and subscribe for updates! https://lnkd.in/de4b9Wya

#AIGovernance #AICompliance #EUAIAct #AIRegulation #AIStandards #SafeAINow #RiskManagement #AITraining
SAFE AI NOW Insights - Monthly Newsletter on AI Regulation
safeainow.com
𝐓𝐡𝐞 𝐄𝐔 𝐀𝐈 𝐀𝐜𝐭 𝐢𝐬 𝐧𝐨𝐰 𝐢𝐧 𝐞𝐟𝐟𝐞𝐜𝐭!

Starting today, the EU AI Act sets the rules for the development, market placement, implementation and use of AI in the EU. How can you stay ahead?

The "Top-20" is your roadmap to maturing AI governance. These actions ensure your AI initiatives are responsible, safe, and compliant. Think of them as your starting point and path to maturity.

𝘛𝘰𝘱-20 𝘊𝘰𝘯𝘵𝘳𝘰𝘭𝘴:
📊 Engage Executives
🤝 Align Organizational Values and Incentives
🧩 Activate Your AI Governance Team
🛠️ Integrate RAI into Roles & Responsibilities
👥 Engage Employees
📚 Continuous Education
🛡️ Establish AI Risk Management Strategy
🗂️ Inventory Your AI Assets
🔍 Conduct Impact Assessments
⚙️ Implement Adaptive Risk Controls
📈 Continuously Monitor Your AI Lifecycle
🏢 Manage Third Parties
📝 Manage (Emerging) Regulatory Compliance
🚨 Develop Incident Response Plan
🌐 Engage Impacted Stakeholders

𝘒𝘦𝘺 𝘋𝘢𝘵𝘦𝘴 𝘵𝘰 𝘞𝘢𝘵𝘤𝘩:
🔸 August 1, 2024: The EU AI Act comes into force
🔸 Early 2025: Prohibited AI uses become illegal
🔸 April 2025 (approx.): Codes of practice for AI developers apply
🔸 August 1, 2025: Rules for general-purpose AI transparency start
🔸 Mid-2026: Most provisions fully applicable
🔸 2027: Compliance deadline for some high-risk AI systems

𝘞𝘩𝘢𝘵 𝘵𝘰 𝘞𝘢𝘵𝘤𝘩:
🔹 Development of codes of practice by the EU's AI Office
🔹 Ongoing discussions about stakeholder involvement in drafting further guidelines

Ref: https://lnkd.in/gEvX2AxX

#AI #ResponsibleAI #GenAI #LLM #MLLM #VLM #EUAIACT #AIGovernance #AICompliance #AIRegulation #Regulations #Governance #Compliance
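💡 As a concrete starting point for "Inventory Your AI Assets" and "Conduct Impact Assessments" above, here is a hedged Python sketch of what one entry in an internal AI inventory could look like. The field names and the one-year review rule are illustrative assumptions, not requirements taken from the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAsset:
    """One entry in an internal AI asset inventory (illustrative structure)."""
    name: str
    owner: str                                 # accountable business owner
    purpose: str                               # intended use, in plain language
    risk_level: str                            # e.g. "minimal", "limited", "high"
    third_party_vendor: str | None = None      # for managing third parties
    last_impact_assessment: date | None = None
    monitoring_notes: list[str] = field(default_factory=list)

def needs_impact_assessment(asset: AIAsset, today: date) -> bool:
    """Flag assets never assessed, or not assessed within the last year (assumed cadence)."""
    if asset.last_impact_assessment is None:
        return True
    return (today - asset.last_impact_assessment).days > 365

# Example: a hypothetical high-risk system that is overdue for review.
inventory = [
    AIAsset(
        name="resume-screening-model",
        owner="HR Analytics",
        purpose="Shortlisting job applicants",
        risk_level="high",
        last_impact_assessment=date(2023, 6, 1),
    )
]
overdue = [a.name for a in inventory if needs_impact_assessment(a, date(2024, 8, 1))]
print(overdue)  # ['resume-screening-model']
```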
The European Commission released a policy briefing to provide insight into technical standardisation supporting the EU #AI Act. The #AI Act defines the essential requirements that high-risk AI systems must satisfy in order to guarantee their safety. These are supported by harmonised standards, developed by European Standardisation Organisations upon request from the European Commission, to support compliance with EU legislation. This document summarises the standardisation deliverables requested by the European Commission. Some key takeaways:

✅ Risk Management: Standards should specify a risk management system for products and services using #AI. The requirements captured by standards should be aimed at identifying and mitigating risks of AI systems to the health, safety and #fundamental #rights of individuals. Harmonised standards for other EU product safety legislation can provide a reference, as standards for the AI Act will also be product-oriented; this is in contrast to much existing ISO/IEC work, which often focuses on the organisations using AI. The requirements include technical ones, for example related to the software nature and lifecycle of #AI, as well as non-technical ones, such as considering fundamental rights in the #risk #assessment and mitigation plan.

✅ Data Governance and Quality: Standards should cover both data governance and management aspects, as well as dataset quality aspects for AI Act compliance, and they should explicitly take into account the risks identified as part of the risk management process.

✅ Record Keeping: Standardisation on record keeping for the AI Act includes the identification of situations that may result in risks and, in general, the recording of all operation and performance aspects of AI systems needed to monitor compliance with the full set of legal requirements, including after the system is placed on the market and deployed in operation.

✅ Transparency: Standards on AI transparency should define all the relevant transparency information required and cover a broad range of information elements, including those needed to support understanding of how the AI system works, its characteristics, capabilities, strengths, limitations and performance.

✅ Human Oversight: Standards should define a set of clear requirements on how the selection of human oversight measures has to be carried out, and how these have to be implemented and tested.

✅ Conformity Assessments: Standards should define the procedures and processes required to assess the conformity of high-risk AI systems with the Regulation prior to their placement on the market or being put into service. They should also define criteria for assessing the competence of persons tasked with conformity assessment activities, whether these are based on self-assessment by the providers or carried out by external third-party organisations.

By the European Commission.

#aiact #euregulations
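💡 To illustrate the record-keeping takeaway above, here is a small Python sketch of structured, append-only operational logging for an AI system. The event fields are assumptions chosen for illustration; the harmonised standards themselves will define what actually has to be recorded.

```python
import json
import logging
from datetime import datetime, timezone

# Structured event log for an AI system's operation -- a sketch of the kind of
# record keeping discussed above, not an implementation of any standard.
logger = logging.getLogger("ai_system_records")
logging.basicConfig(level=logging.INFO)

def log_decision_event(system_id: str, input_ref: str, output_summary: str,
                       risk_flags: list[str]) -> None:
    """Record one operational event with enough context for later review."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,          # reference to the input, not the raw data
        "output_summary": output_summary,
        "risk_flags": risk_flags,        # situations that may result in risks
    }
    logger.info(json.dumps(event))

# Example usage with hypothetical identifiers
log_decision_event(
    system_id="credit-scoring-v2",
    input_ref="application-2024-00817",
    output_summary="declined, score below threshold",
    risk_flags=["low confidence", "out-of-distribution feature values"],
)
```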
Important read. #AI #AIRegulation
🚨 [AI FRAMEWORK]: The Australian government publishes its "National framework for the assurance of AI in government," a MUST-READ for everyone in AI, public administration & policymaking.

Quotes:

"Existing decision-making and accountability structures should be adapted and updated to govern the use of AI. This reflects the likely impacts upon a range of government functions, allows for diverse perspectives, designates lines of responsibility and provides clear sight to agency leaders of the AI uses they are accountable for. Governance structures should be proportionate and adaptable to encourage innovation while maintaining ethical standards and protecting public interests." (page 7)

"During system development governments should exercise discretion, prioritising traceability for datasets, processes, and decisions based on the potential for harm. Monitoring and feedback loops should be established to address emerging risks, unintended consequences or performance issues. Plans should be made for risks presented by obsolete and legacy AI systems. Governments should also consider oversight mechanisms for high-risk settings, including but not limited to external or internal review bodies, advisory bodies or AI risk committees, to provide consistent, expert advice and recommendations." (page 8)

"Governments should also consider internal skills development and knowledge transfer between vendors and staff to ensure sufficient understanding of a system's operation and outputs, avoid vendor lock-in and ensure that vendors and staff fulfill their responsibilities. Due diligence in procurement plays a critical role in managing new risks, such as transparency and explainability of 'black box' AI systems like foundation models. AI can also amplify existing risks, such as privacy and security. Governments must evaluate whether existing standard contractual clauses adequately cover these new and amplified risks." (page 10)

➡ The framework also demonstrates how governments can practically apply Australia's eight AI Ethics Principles to their assurance of AI. These are the 8 principles:
1. Human, societal and environmental wellbeing
2. Human-centred values
3. Fairness
4. Privacy protection and security
5. Reliability and safety
6. Transparency and explainability
7. Contestability
8. Accountability

➡ Read the framework below.
➡ To stay up to date & learn more about AI compliance, regulation & governance, check out the links below.

#AI #AIframework #AIgovernance #AIregulation #AIcompliance
Today is the day: The EU #AIAct has officially taken effect, enforcing significant changes to the way organizations use and develop AI systems in the European Union.

❓ What does the EU AI Act mean for your organization? Various rules and requirements apply depending on the risk of your #AI system and your operating role, including:
• Ensuring your employees have proper AI #literacy
• Prohibiting certain AI systems within the EU
• Documenting general-purpose AI systems (#GPAI) and mitigating their risks
• Implementing quality & risk management and documentation processes when providing high-risk AI systems (#HRAIS)
• Informing end-users about AI interactions

🕚 Key dates for your preparation: The AI Act will be applied in multiple phases:
• February 2025: AI literacy requirements and rules on prohibited AI systems apply
• August 2025: New rules on general-purpose AI systems apply
• August 2026: All other rules apply
• August 2027: High-risk AI systems that are used as safety components in certain products must comply with the Act

Implementing the necessary mechanisms to comply with the EU AI Act takes time and effort, especially for large enterprises. At trail, we help you set up and operationalize efficient AI #governance to secure your AI innovation strength while facilitating compliance. AI governance is a strong competitive advantage and a selling point to your customers, who will now be more cautious about the AI systems they use and procure.

🏃 How to start? We've already gathered the most important information on the EU AI Act for you and provide a free self-assessment to help you understand your upcoming obligations and your role under the Act. Find the link in the comments below ⬇️

At trail, we simplify and automate your AI governance. Get in touch with us if you're interested in setting up the proper processes and tools for your EU AI Act compliance!

#trailML
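💡 A minimal Python sketch of the phased timeline above, assuming first-of-month approximations for the application dates listed in the post (the Act's exact dates differ slightly); it only illustrates how one might track which phases are already in force, not an authoritative compliance calendar.

```python
from datetime import date

# Illustrative phases taken from the post above; dates approximated to the 1st of the month.
PHASES = [
    (date(2025, 2, 1), "AI literacy requirements and rules on prohibited AI systems apply"),
    (date(2025, 8, 1), "New rules on general-purpose AI systems apply"),
    (date(2026, 8, 1), "All other rules apply"),
    (date(2027, 8, 1), "High-risk AI systems used as safety components in certain products must comply"),
]

def applicable_phases(today: date) -> list[str]:
    """Return the phases that are already in force on a given date."""
    return [label for start, label in PHASES if today >= start]

# Example: on 1 September 2025 the first two phases would already apply.
print(applicable_phases(date(2025, 9, 1)))
```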
𝗨𝗻𝗹𝗼𝗰𝗸 𝗰𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲 𝘄𝗶𝘁𝗵 𝘁𝗵𝗲 𝗘𝗨 𝗔𝗜 𝗔𝗰𝘁 𝗶𝗻 𝟭𝟬 𝘀𝘁𝗲𝗽𝘀 (𝗮𝗻𝗱 𝗳𝗼𝗰𝘂𝘀 𝗼𝗻 𝗼𝘁𝗵𝗲𝗿 𝘁𝗵𝗶𝗻𝗴𝘀) ✅

High-risk AI systems are subject to strict safety, transparency, and rights protection measures. Benefitting from a presumption of compliance looks like a solid plan, right? Harmonized European standards are on the way to help with that!

📌 𝗪𝗵𝘆 𝗦𝘁𝗮𝗻𝗱𝗮𝗿𝗱𝘀 𝗠𝗮𝘁𝘁𝗲𝗿
European standardisation organisations are in the process of drafting the necessary AI standards, following a request from the European Commission. 𝘞𝘩𝘪𝘤𝘩 𝘴𝘩𝘰𝘶𝘭𝘥 𝘣𝘦 𝘵𝘩𝘦𝘪𝘳 𝘬𝘦𝘺 𝘤𝘩𝘢𝘳𝘢𝘤𝘵𝘦𝘳𝘪𝘴𝘵𝘪𝘤𝘴? ⏬

💼 𝗜𝗺𝗽𝗮𝗰𝘁 𝗼𝗻 𝗜𝗻𝗱𝘂𝘀𝘁𝗿𝘆
These standards will be especially crucial for SMEs, providing structured compliance pathways and leveling the playing field across the AI landscape.

📅 𝗦𝘁𝗮𝘆 𝗣𝗿𝗲𝗽𝗮𝗿𝗲𝗱
As we look towards 2026, it's critical for AI stakeholders to stay updated on standardization progress. With over 37 standardization activities underway, the EU is committed to making these standards actionable, cohesive, and fit for the fast-paced AI ecosystem.

Source: SOLER GARRIDO, J., DE NIGRIS, S., BASSANI, E., SANCHEZ, I., EVAS, T., ANDRÉ, A. and BOULANGÉ, T., Harmonised Standards for the European AI Act, European Commission, Seville, 2024, JRC139430 (date available: October 24, 2024)