Validaitor

Technology, Information and Internet

Karlsruhe, Baden-Württemberg · 3,690 followers

Safety and Trust for Artificial Intelligence | "Top 100 Deeptech Startups of Europe"

About

VALIDAITOR is the all-in-one Trustworthy AI platform. We're pioneering the development of automated tools that govern, validate, and manage the compliance of AI models. The VALIDAITOR platform brings together all the components needed to keep track of your AI inventory, red-team and audit AI systems, and manage and automate compliance processes, so that organisations can adopt trustworthy and responsible AI best practices while staying within the bounds of AI regulations and standards. VALIDAITOR is a spin-off from the prestigious Karlsruhe Institute of Technology (KIT) and is supported by the government as well as a strong consortium of international investors. VALIDAITOR is headquartered in Karlsruhe, Germany. Would you like to learn more? Don't hesitate to contact us: https://validaitor.com/contact-us Join our partnership program designed for AI auditors, consultancies and system integrators: https://validaitor.com/partnership

Industry
Technology, Information and Internet
Company size
11–50 employees
Headquarters
Karlsruhe, Baden-Württemberg
Type
Privately held
Founded
2022
Specialties
machine learning, deep learning, artificial intelligence, trustworthy AI, responsible AI, ethical AI, AI, Safety Auditing, Functional Testing, AI Governance, AI Risk Management, AI Compliance Management, EU AI Act, ISO/IEC 42001 and NIST AI Risk Management Framework

Products

Locations

Employees at Validaitor

Updates


    The White House has opened public comments on its AI Action Plan, aiming to shape the next steps for AI policy in the U.S. This is a chance for researchers, industry leaders, policymakers, and the public to weigh in on how AI should be developed, regulated, and integrated into society. With AI rapidly evolving, decisions made today will impact innovation, economic growth, and governance for years to come. Whether you're in tech, policy, or just passionate about AI's future, now is the time to have your say. 🗓️ Submissions are open until March 15, 2025 via the Federal Register: 🔗 https://lnkd.in/ez4wY2ui #ArtificialIntelligence #AIPolicy #FutureOfAI


    The Paris AI Action Summit was meant to be a defining moment in shaping AI for the public good. While it made progress in several areas, it also left critical gaps.

    ✅ Key Successes:
    • The launch of the Current AI Foundation, a public interest partnership with an initial €400M endowment and a €2.5B funding target.
    • New work observatories across 11 countries.
    • The first International Scientific Report on the Safety of Advanced AI.
    • Development of an AI Energy Score Leaderboard to track sustainability.

    ❌ Notable Gaps:
    • No proposals on risk thresholds or system capabilities that could pose severe risks.
    • No corporate accountability mechanisms.
    • No comprehensive education and reskilling programs.
    • No clear mechanisms to ensure meaningful inclusion of civil society at future Summits.

    Perhaps most concerning was the Summit's dismissal of systemic risks as “science fiction”, directly contradicting expert assessments. And without major players like the US and UK signing on, global coordination remains a challenge.

    As India prepares to host the next Summit in late 2025, the world will be watching. The challenge? Strengthening governance while ensuring AI serves everyone, not just a few.

    🔗 Read more: https://lnkd.in/ds3R9qPq

    #TrustworthyAI #ParisAIActionSummit


    🚨 Can We Trust AI Chatbots for Mental Health Support? 🚨

    AI chatbots as virtual friends and therapists? It sounds like the future, but a recent field test in the Netherlands revealed serious risks.

    🔍 The Test: Researchers tested multiple AI chatbot apps designed for companionship and therapy to understand the real-world risks they pose. The evaluation was based on 11 key questions, covering:
    1. Transparency & consistency
    2. Responses to mental health issues
    3. Moments of crisis

    ⚠️ The Results: The findings are alarming. Most chatbot apps scored negatively on over half of the evaluation criteria (see image below).
    ❌ The worst-performing app failed all 11 assessment criteria.
    ✅ The best-performing app succeeded in 9 out of 11 areas, proving that improvement is possible.

    The test makes one thing clear: without stronger regulations, better transparency, and more responsible AI development, these chatbots could do more harm than good.

    Validaitor helps AI companies ensure their technology is safe, compliant, and trustworthy. 🌟 If you're building AI solutions, let's talk about making them better.

    📖 Read more about the field test here: https://lnkd.in/eKGXEWfw

    #TrustworthyAI #AIChatbots #Validaitor

    • No alt text provided for this image

    On February 20, the EU AI Office hosted the 'Third AI Pact' webinar to shed light on how organizations can navigate AI literacy under Article 4 of the #EUAIAct. 🇪🇺 With enforcement in motion since 2 February 2025, understanding AI's risks and limitations isn't just good practice, it's a must.

    Key takeaways:
    • Private enforcement is already in effect. Individuals can file complaints if AI literacy obligations aren't met.
    • Public enforcement begins August 2025. ⏳
    • National authorities will determine enforcement actions and penalties, with variations based on AI use and risk level.
    • Not just for EU companies! 🌍 Any company offering AI-powered services or decisions within the EU must comply.
    • Employees using AI tools (yes, even ChatGPT) must be trained on AI risks, including bias, hallucinations, and flawed decision-making.
    • Tiered training is key. Many organizations, like EnBW, tailor AI education based on employee roles and AI interaction levels.
    • Best practice? 💡 Basic AI literacy for all, with deeper dives for high-impact users.

    📹 To learn more, watch the full webinar here: https://lnkd.in/eFe8d_ks
    📖 Or read more about it here: https://lnkd.in/eN9-SSjS

    🌟 Have questions about how your company can prepare? Reach out! Our experts at Validaitor are here to help.

    #AIAct #AILiteracy #ThirdAIPact

    Third AI Pact webinar on AI literacy



    🚀 We are thrilled to announce our upcoming webinar on Testing for Bias in Chat Assistants! This session will delve into the critical importance of identifying and mitigating biases in AI assistants to ensure fair and unbiased interactions. As AI becomes increasingly integral across various sectors, understanding and addressing bias is essential for developers, testers, and auditors committed to ethical AI practices. 📅 During the webinar, we will provide a comprehensive overview of how to effectively test AI assistants for different types of biases. The session will cover practical methodologies, tools, and best practices for detecting and correcting biases. Participants will gain valuable skills to enhance the fairness and accuracy of AI systems. ✨ Don't miss this opportunity to expand your knowledge and contribute to the development of unbiased AI. 🔗 Register now to secure your spot: https://lnkd.in/e5FqPpC6 #TrustworthyAI #ethicalAI #EUAIAct #webinar




    📣 Navigating AI regulations can feel overwhelming, but it doesn't have to be! Validaitor brings together everything you need into a single platform, making it easy to govern AI responsibly under the EU AI Act and ISO AI standards. 💡 With all the necessary tools in one place, you can focus on building AI that's not just powerful but also safe, transparent, and future-proof. Swipe to find out how Validaitor sets you up for success! 🔍➡️ #TrustworthyAI #ResponsibleAI #Validaitor #AICompliance


    🚨 Does GenAI Pose a Danger to Democratic Elections? 🚨

    With the German federal election around the corner, our latest blog post dives deep into how well LLM-powered applications align with major German political parties. Spoiler alert: Accuracy rates vary, and biases can creep in, making it crucial to approach AI-generated political insights with caution.

    🔍 Key Takeaways:
    1️⃣ All tested models achieved accuracy rates between 60% and 80%, indicating that LLMs struggle to provide reliable political alignment recommendations. The best-performing model was GPT-4o (version 2024-11-20), while ChatGPT, which is based on a GPT-4o model, reached a lower accuracy of just 65%.
    2️⃣ Accuracy varies by party.
    3️⃣ GPT-4o outperforms ChatGPT even though ChatGPT has access to the internet.

    🚀 As #AI continues to weave itself into the fabric of our decision-making processes, ensuring its trustworthiness is absolutely critical. Our findings underscore the vital importance of AI assurance and trustworthiness. 🌟

    🔗 Read the full article to learn more about our findings: https://lnkd.in/eRH2a-qD

    #AI #Politics #GermanElections #WahlOMat #AITrustworthiness #Validaitor


    🚨 Elections are approaching in Germany! 🚨 But how reliable are large language models (#LLMs) when it comes to staying neutral toward political parties? 🤔 We put some of the most popular LLMs to the test based on the "Wahl-O-Mat", and the results are striking! Almost all LLMs make mistakes when it comes to facts related to political parties. 🗳️

    Here are some findings from our study:
    1️⃣ GPT-4o 2024-11-20 (Azure): Accuracy 0.77
    2️⃣ Meta-Llama-3.1-405B-Instruct (AWS Bedrock): Accuracy 0.76
    3️⃣ GPT-4 (Azure): Accuracy 0.68
    4️⃣ Mistral Large (Bedrock): Accuracy 0.66
    5️⃣ Claude V2 (Bedrock): Accuracy 0.63
    6️⃣ GPT-3.5 Turbo (Azure): Accuracy 0.62

    These findings highlight the challenges in ensuring factual accuracy and neutrality in AI-driven content. Stay tuned for our upcoming blog post where we dive deeper into the analysis! 📊

    #AI #MachineLearning #Elections2025 #Germany #LLM #TrustworthyAI


    Big things are happening at Validaitor, and we’re so excited to share this one: Gert Paczynski is joining us as our new Chief Revenue Officer! 🎉 Gert has spent years at the forefront of disruptive tech, working with companies like MongoDB, Cloudera, and Confluent, shaping the Big Data era that laid the foundation for today’s AI. At VAST Data, he saw firsthand how AI is transforming our lives, and why it needs to be built on trust ‼️ Now, he’s bringing that expertise to Validaitor to help build a future where AI isn’t just powerful, but trustworthy and compliant. 💬 "Validaitor is at the forefront of ensuring AI systems are trustworthy and compliant, and I saw a huge opportunity to help drive that mission forward." We couldn’t be happier to have him on board. His passion, experience, and vision are going to be a game-changer as we continue to grow and help companies build AI that’s not just powerful, but responsible. 🌟 Welcome to the team, Gert! #Validaitor #TrustworthyAI



Funding

Validaitor · 1 funding round in total

Last round

Seed
More information on Crunchbase