Sec4AI4Sec’s Post

🚨 Is Your AI Learning from Vulnerable Code?

AI is revolutionizing software development, but what if the code it learns from is poisoned? 🤯

🔴 CVE-poisoning happens when AI models are trained on code that contains known vulnerabilities, either #unintentionally (scraping untrusted sources) or #intentionally (as an attack). This can silently weaken AI-driven development tools.

At FrontEndART Software Ltd., as part of the Sec4AI4Sec project, we're taking action:
🔍 Building a CVE knowledge base to track vulnerable code
🛡️ Developing AI-powered de-poisoning tools to clean training data
⚡ Enhancing the security of AI-driven development

As part of WP4, we're researching AI attack vectors and defense strategies to keep AI training data safe.

💡 AI is only as secure as the data it learns from. Are we doing enough to protect it? Let's discuss!

👉 Learn more here: https://lnkd.in/dfyZsNFt

#AI #CyberSecurity #MachineLearning #CVE #SoftwareSecurity #Innovation
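To make the idea concrete, below is a minimal Python sketch of one possible de-poisoning filter: fingerprint each training sample and drop those matching a knowledge base of known-vulnerable code. The snippet, knowledge-base format, and hash-based exact matching are simplifying assumptions for illustration, not the actual project design; real tooling would also need clone detection or learned code similarity to catch rewritten variants of a vulnerable snippet.

```python
import hashlib
import re

def fingerprint(code: str) -> str:
    """Hash a whitespace-normalized version of the code, so that
    trivial reformatting of a vulnerable snippet still matches."""
    normalized = re.sub(r"\s+", " ", code).strip()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Toy knowledge base: snippets known to embody CVE/CWE-listed flaws.
# A real knowledge base would be mined from CVE-linked fixing commits.
VULNERABLE_SNIPPETS = {
    "query = \"SELECT * FROM users WHERE name = '\" + name + \"'\"":
        "CWE-89 (SQL injection)",
}
KNOWN_BAD = {fingerprint(snippet): label
             for snippet, label in VULNERABLE_SNIPPETS.items()}

def depoison(samples: list[str]) -> list[str]:
    """Drop training samples whose fingerprint matches a known
    vulnerable snippet; keep everything else."""
    clean = []
    for code in samples:
        label = KNOWN_BAD.get(fingerprint(code))
        if label is not None:
            print(f"Dropping poisoned sample: {label}")
            continue
        clean.append(code)
    return clean

if __name__ == "__main__":
    corpus = [
        "def add(a, b):\n    return a + b",
        "query = \"SELECT * FROM users WHERE name = '\" + name + \"'\"",
    ]
    kept = depoison(corpus)
    print(f"{len(kept)} of {len(corpus)} samples kept")
```

Exact-match hashing is only the simplest baseline: poisoned samples in the wild are often mutated, which is what makes cleaning AI training corpora a genuinely hard research problem.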

Salvatore Della Torca

Ph.D. student at Università degli Studi di Napoli Federico II and Università degli Studi di Bergamo

3w

I completely agree: this is a timely and exciting topic! There's still so much to explore and improve, and I'm working on it now as well. Securing the training data is key to building truly reliable AI, especially for code-generation tasks.
