Enzai

AI Governance Made Easy

About us

Enzai provides an AI governance software platform that helps organisations understand and manage the risks that come with AI, while meeting their emerging regulatory obligations. Backed by some of the best venture capital and angel investors in the world, the company is on a mission to ensure that powerful AI technologies can reach their true potential.

Industry
Software Development
Company size
2-10 employees
Headquarters
Belfast, Northern Ireland
Type
Self-Employed
Founded
2021

Updates

  • Ready to take the guesswork out of AI governance? We often hear from practitioners that they feel overwhelmed by the pace of change in AI and the complexity of new laws – on top of unique organizational considerations. That’s why we’ve developed an actionable new guide on AI governance. It focuses on what to do – and in which order. The guide includes:
    ➔ Objectives of AI governance
    ➔ Developing and applying an AI framework
    ➔ AI inventory management
    ➔ Running assessments
    ➔ Monitoring the AI governance program
    👉 Get your copy today and set your AI governance effort up for success: https://lnkd.in/eAdnxy3W
    👉 And if you’re at #HumanX in Las Vegas this week, stop at Booth E1 and get your print copy!
    #ResponsibleAI #AIGovernance

  • Agentic AI is everywhere, yet it’s hard to get current information about which systems are available and how people are using them. So we are excited to see the release of the AI Agent Index from the Massachusetts Institute of Technology – the first public database to document the technical components, intended uses, and safety features of agentic systems. Three key takeaways from the index:
    ➔ There are 67 agents in the database, and the majority are focused on software engineering and computer use
    ➔ Deployers typically provide significant information about capabilities and applications, but more limited information on safety and risk management processes
    ➔ The index is populated using publicly available information and correspondence with deployers, and is current as of December 31, 2024
    👉 You can have a look at the index here: https://lnkd.in/dnxZC53v
    #ResponsibleAI #AIGovernance

  • We know the ‘why’ of AI governance – but what about the ‘how’? Even after finding the resources and buy-in to create an AI governance program, many organizations struggle with which policies, processes and assessments to adopt. And with the EU AI Act and other AI laws now in force, AI governance has taken on new urgency. Our latest guide cuts through the jargon, focusing on what to do. It includes:
    ➔ Objectives of AI governance
    ➔ Developing and applying an AI framework
    ➔ AI inventory management
    ➔ Running assessments
    ➔ Monitoring the AI governance program
    It also contains practical guidance, accessible case studies and quick-start tips. Get your copy today!
    👉 https://lnkd.in/eAdnxy3W
    #ResponsibleAI #AIGovernance

  • James (Kayliang) Ong, Founder of the Artificial Intelligence International Institute (AIII), joined the latest episode of our AI Governance Podcast. For over 35 years, James has bridged divides between scientists, businesses and impact investors. He is the co-author of 'AI for Humanity: Building A Sustainable AI for the Future.' James joined Enzai's Var Shankar to discuss:
    - The silver lining from the Paris AI Action Summit
    - The need to develop fields of AI other than generative AI
    - How Asian countries can best contribute to global AI norms
    - How sustainable AI is built on the three pillars of governance, technology, and commercialization
    - How to navigate incentives between scientists, startups, and investors
    👉 You can find the latest episode on Spotify, Apple Music, or here: https://lnkd.in/gK2kjQZD

  • 🇪🇺 Privacy and AI governance teams have noticed a point of friction between the EU AI Act and the EU GDPR. The AI Act allows the processing of special categories of protected data (under strict conditions) to identify and combat potential discrimination, while the GDPR does not allow it (without the data subject’s explicit consent). This week, the European Parliamentary Research Service released commentary outlining the issue and sharing some potential interpretations. It discusses, for example, the view that "fighting discrimination" may constitute a "substantial public interest" under the GDPR. The commentary suggests “a reform of the GDPR or further guidelines on its interplay with the AI Act” as ways to reduce uncertainty for companies. In the meantime, companies will need to interpret and navigate this friction skillfully as they put in place fundamental elements of their AI governance programs.
    👉 Read the commentary here: https://lnkd.in/dMyxPm8T
    #AIGovernance #ResponsibleAI

  • Can generative AI dramatically reduce the time and cost of scientific research, freeing scientists up to conduct more real-world experiments? We don’t know yet – but the scientific community’s response to tools like Google's AI Co-Scientist will be an early indicator. Co-Scientist is a multi-agent system (currently in pilot testing) “designed to aid scientists in creating novel hypotheses and research plans.” Some scientists are concerned about relying too much on an AI co-worker, deviating from longstanding protocols, or entering sensitive information into an AI system. Others believe that collaborating with an AI co-worker will allow scientists to be more creative and tackle bigger problems.
    👉 You can learn more about Co-Scientist here: https://lnkd.in/erDe3BsE
    #ResponsibleAI #AIGovernance

  • We've seen a lot of interesting developments in UK AI policy recently:
    🟩 The UK AI Safety Institute rebranding to the UK AI Security Institute
    🟩 The UK joining the US in not signing the Paris AI Action Summit's final statement
    🟩 The AI Opportunities Action Plan suggesting rapid AI adoption to power economic growth
    🟩 The Government Digital Service releasing a playbook for using AI in government
    🟩 The UK signing an MOU with Anthropic to explore using AI to improve public services
    These developments signal shifts in the UK's understanding of AI leadership and AI risk, with major implications for private sector players.
    #ResponsibleAI #AIGovernance

  • 🏛️ The New York Assembly is considering the proposed AI Consumer Protection Act (AICPA), which would:
    🟩 Require deployers to put in place risk management policies and programs equivalent or similar to the NIST AI RMF or ISO/IEC 42001
    🟩 Require deployers to track their high-risk AI systems and assess them annually
    🟩 Require developers of AI systems to provide detailed information to deployers
    🟩 Give developers a "rebuttable presumption" of meeting their duty of care if they undergo independent third-party audits by a government-approved auditor
    AICPA is one of many proposed laws steadily advancing through US state legislatures, despite policy uncertainty at the federal level.
    👉 Learn more about AICPA here: https://lnkd.in/eCEb_Ppf
    #ResponsibleAI #AIGovernance

Funding

Total rounds
1

Last round
Seed, US$4.0M
