MLSecOps Community Podcast S2Ep10 "Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex" Watch or listen to the full episode here --> https://lnkd.in/d39JtN53 Hosts Neal Swaelens and Oleksandr Yaremchuk sit down with special guest Simon Suo, co-founder and CTO of LlamaIndex. The group discusses many topics, including: - Challenges and considerations of integrating LLMs into various applications, emphasizing the importance of contextualizing LLMs within specific environments. - The evolution of retrieval-augmented generation (RAG) techniques and the future trajectory of LLM-based applications, including the significance of balancing performance with cost and latency in leveraging LLM capabilities. - LLM security concerns and the critical need for robust input and output evaluation to mitigate potential risks. - The potential vulnerabilities associated with LLMs, including prompt injection attacks and data leakage, underscoring the importance of implementing strong access controls and data privacy measures. - Efforts within the community to address security challenges and foster a culture of education and awareness. Listen or read the transcript here to learn more about the ongoing innovation in large language model-based applications, while remaining vigilant about LLM security considerations ➡️ https://hubs.ly/Q02v63y60 Thanks again for joining and sharing your insights, Simon! Also, stop by and visit Protect AI at RSA and learn more about our MLSecOps community! #MLSecOps #aisecurity #MLSec #airisk #machinelearning #ai #llm h#genai #llamaindex #protectai #cybersecurity #rag
LlamaIndex on RAG and LLM Security
More Relevant Posts
-
❓How do we evaluate the cybersecurity knowledge of LLMs? Introducing CyberMetric-80, CyberMetric-500, CyberMetric-2000, and CyberMetric-10000—comprehensive Q&A benchmark datasets designed for this purpose. 📚 Using LLM and RAG, we created these datasets from NIST standards, research papers, and more, validated by experts over 200+ hours. We tested 25 top LLM models and involved 30 human participants in CyberMetric-80. 🔍 Results: GPT-4o, GPT-4-turbo, Mixtral-8x7B-Instruct, Falcon-180B-Chat, and GEMINI-pro 1.0 excelled, outperforming human participants, though experienced experts still surpassed smaller models like Llama-3-8B. 📖 Read the full paper here: https://lnkd.in/eaJvbmSK #Cybersecurity #AI #LLM #CyberMetric #Innovation
To view or add a comment, sign in
-
-
📖 LLM4Vuln: How good are LLMs at reasoning about vulnerabilities? 9 zero-days in smart contracts, and how much function calling, prompting, knowledge retrieval, etc. matter. The paper aims to decouple LLMs' vulnerability reasoning capability from their other capabilities (e.g. seeking additional info via function calling, retrieving vulnerability knowledge like via RAG, etc.). and proposes a unified evaluation framework named LLM4Vuln. They had GPT-4, Mixtral, and Code Llama analyze 75 ground-truth smart contract vulnerabilities as well as 4,950 different scenarios, and identified 9 zero-day vulnerabilities in two pilot bug bounty programs, earning >$1,000. The paper also examines the varying effects of knowledge enhancement, context supplementation, prompt schemes, and models. #cybersecurity #security #ai CC Caleb Sima, Daniel Miessler, Jason Haddix, Chris Hughes
To view or add a comment, sign in
-
Echoing our evaluation work on the Commission-funded initiatives, addressing the cyber resilience of digital and physical infrastructure, the use of AI systems in industry and public governance, the Center for Security and Emerging Technology (CSET) issued a brief dedicated to cybersecurity risks of AI-generated code. It addresses: ➡ Industry Adoption of AI Code Generation models and tools ➡ Associated risks, including producing insecure code, vulnerabilities to attacks, downstream Impacts ➡ Assessing the security of code generation models ➡ Methodology, evaluation, limitations, severity of generated bugs ➡ Policy implications. Brief -https://lnkd.in/e8mB8i42. Acknowledgments: Jessica J. Maggie Wu Rebecca Gelles Jenny Jun #ai #ethics #policy
To view or add a comment, sign in
-
I recently had the opportunity to attend the FS-ISAC EMEA Summit in Berlin. It was an amazing experience all-around. I thought the speeches and presentations struck the right balance between bringing key emerging risks to light and providing business/technical solutions to mitigate the risks. For example: a question every security/risk team needs to be grappling with is how do we use AI in a secure and ethical way? FS ISAC recently published a first-of-its-kind framework for AI use in the financial industry. The toolkit includes Generative AI Vendor Evaluation and a Qualitative Risk Assessment of AI use. #ai #informationsecurity #cybersecurity #fsisac
To view or add a comment, sign in
-
-
Planning to use LLMs in 2025? Join this webinar to learn from the experts, add their thoughts to your plan, you look like a genius!
📣 So excited to announce our next webinar! 📣 𝗧𝗵𝗲 𝟮𝟬𝟮𝟱 𝗢𝗪𝗔𝗦𝗣 𝗧𝗼𝗽 𝟭𝟬 𝗳𝗼𝗿 𝗟𝗟𝗠𝘀: 𝘞𝘩𝘢𝘵'𝘴 𝘕𝘦𝘸, 𝘞𝘩𝘢𝘵'𝘴 𝘊𝘩𝘢𝘯𝘨𝘦𝘥, 𝘢𝘯𝘥 𝘞𝘩𝘢𝘵'𝘴 𝘕𝘦𝘹𝘵 If you're in AI, cybersecurity, or just curious about the latest in LLM security, this is a MUST-ATTEND event! Join Jeremiah Salamon, a distinguished information security leader at one of the nation’s premier law firms and head of the Boston chapter of OWASP, along with Lee Weiner, CEO of TrojAI, as they break down the changes to the OWASP Top 10 for LLMs. In this webinar, you will learn: - What the OWASP Top 10 for LLMs is and how to apply it - What’s new in the 2025 list, and how has it changed from v1.1 - What’s on the horizon in AI security and risk management Join us: DATE: Wednesday, December 11, 2024 TIME: 11am ET | 8am PT REGISTER NOW: https://lnkd.in/eHjDphY2 Don’t miss out on this essential webinar for anyone working with AI, cybersecurity, or LLMs! #OWASPTop10 #AISecurity #TrojAI #LLMs #GenAI #Cybersecurity
To view or add a comment, sign in
-
-
📣 So excited to announce our next webinar! 📣 𝗧𝗵𝗲 𝟮𝟬𝟮𝟱 𝗢𝗪𝗔𝗦𝗣 𝗧𝗼𝗽 𝟭𝟬 𝗳𝗼𝗿 𝗟𝗟𝗠𝘀: 𝘞𝘩𝘢𝘵'𝘴 𝘕𝘦𝘸, 𝘞𝘩𝘢𝘵'𝘴 𝘊𝘩𝘢𝘯𝘨𝘦𝘥, 𝘢𝘯𝘥 𝘞𝘩𝘢𝘵'𝘴 𝘕𝘦𝘹𝘵 If you're in AI, cybersecurity, or just curious about the latest in LLM security, this is a MUST-ATTEND event! Join Jeremiah Salamon, a distinguished information security leader at one of the nation’s premier law firms and head of the Boston chapter of OWASP, along with Lee Weiner, CEO of TrojAI, as they break down the changes to the OWASP Top 10 for LLMs. In this webinar, you will learn: - What the OWASP Top 10 for LLMs is and how to apply it - What’s new in the 2025 list, and how has it changed from v1.1 - What’s on the horizon in AI security and risk management Join us: DATE: Wednesday, December 11, 2024 TIME: 11am ET | 8am PT REGISTER NOW: https://lnkd.in/eHjDphY2 Don’t miss out on this essential webinar for anyone working with AI, cybersecurity, or LLMs! #OWASPTop10 #AISecurity #TrojAI #LLMs #GenAI #Cybersecurity
To view or add a comment, sign in
-
-
I'm excited to invite you to the launch of our new study, "Cybersecurity Economics for Emerging Markets." This groundbreaking report uncovers the powerful connection between cybersecurity and economics and reveals cybersecurity's vital role in global development! 🗓️ Wed, September 18, 2024 🕘 9:00 – 10:15 AM EDT (Zoom) 💻 Register Here: https://lnkd.in/ehGT9i6w #cybersecurity #ai #digitaldevelopment #worldbank #newbook
To view or add a comment, sign in
-
-
In 2023, the U.K. launched the AI Safety Institute (AISI) to tackle AI risks. With a £100M budget, it tests models for threats like biological, chemical, and cybersecurity risks. While access to model data is limited, AISI’s work has gained respect in the AI community. With changes in government, stronger AI regulation could further bolster the institute’s impact https://lnkd.in/ddjigDCZ. Stay informed on AI developments—subscribe to the SuperDataScience newsletter at https://lnkd.in/ddFy65uK. #AISafety #AIRegulation #TechInnovation #ArtificialIntelligence #SuperDataScience #AIResearch #Cybersecurity #MachineLearning #DataScience
To view or add a comment, sign in
-
-
"Should AI be a critical infrastructure?" That's the question I ask legal expert David Simon, policy expert Jason Healey, and technical expert Jesper Johansson on the current episode of the Advancing Cyber Podcast. It's a good one - with a lot of debate and diversity of views. In this episode, we talk through a hypothetical nation state attack against AI providers and how government and industry would respond, weighing risks and benefits of regulation, the need for sector coordination and collaboration, and a potential AI-ISAC for future incidents. This episode pushes some boundaries, challenging the public and private sectors to prepare for the growing role AI is taking in our world. It's clear from the debate that this is a question that needs to be answered - before the attackers force our hand. Listen now on Spotify: https://lnkd.in/gwSBhQar Listen now on Apple Podcast: https://lnkd.in/g5bqX7iu #ArtificialIntelligence #Cybersecurity #IncidentResponse #ISACs Advancing Cyber Advanced Cyber Law #AdvancingCyber
To view or add a comment, sign in
-
What is an Indirect Prompt Injection Attack? -In common applications, multiple inputs can be processed by concatenating them together into a single stream of text. -However, the LLM is unable to distinguish which sections of the prompt belong to various input sources. -Indirect prompt injection attacks take advantage of this vulnerability by embedding adversarial instructions into untrusted data being processed alongside user commands. -Often, the LLM will mistake the adversarial instructions as user commands to be followed, creating a security vulnerability in the larger system. What are the Mitigations against Indirect Prompt Injection Attacks? 1)Reinforcement learning from human feedback (RLHF) 2)Filtering retrieved inputs. 3)An LLM moderator that can be leveraged to detect attacks beyond just fltering harmful outputs. Do note that there is no comprehensive solution for protecting models against adversarial prompting and there is scope for future work for improving defenses. Source: NIST and the paper: "Defending Against Indirect Prompt Injection Attacks With Spotlighting" #llm #indirectpromptinjection #llmattacks #ai #artificialintelligence #llmmitigation #cybersecurity #riskmanagement
To view or add a comment, sign in