So much potential here for security and risk management as we refine and improve the use of ML for code migrations and transformations (memory safety, for example) https://lnkd.in/gzjdTRBW
Royal Hansen’s Post
-
Now, THIS is what I'm talking about. I wonder if it can ingest from a single source, e.g., Datadog... [custom tools](https://lnkd.in/emfpHzTc)
If you paste an alert into an LLM and ask why it fired, the LLM will do a poor job. But so would a great engineer if you forced them to answer on the spot without troubleshooting first. What if you let the LLM investigate like a human? Let it look at the alert. Then query your observability data. Then think. Then fetch more data. For several months, we've secretly been building the world's first troubleshooting agent for alerts and incidents. Today we're releasing it as open source to the world.
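The investigate-then-answer loop described above can be sketched roughly as follows. This is a minimal sketch, not the released agent's actual API: `llm_decide` and `fetch_metrics` are hypothetical placeholders for the LLM call and the observability query.

```python
# Sketch of an alert-investigation loop: decide, fetch evidence, repeat,
# then answer. `llm_decide` and `fetch_metrics` are invented stand-ins.

def llm_decide(alert, evidence):
    # Placeholder policy: gather two pieces of evidence, then conclude.
    if len(evidence) < 2:
        return ("fetch", f"metrics related to {alert['name']}")
    return ("answer", f"{alert['name']} likely fired due to {evidence[-1]}")

def fetch_metrics(query):
    # Placeholder for a real observability query (e.g., to Datadog).
    return f"data for '{query}'"

def investigate(alert, max_steps=5):
    evidence = []
    for _ in range(max_steps):
        action, payload = llm_decide(alert, evidence)
        if action == "answer":
            return payload
        evidence.append(fetch_metrics(payload))
    return "inconclusive"

print(investigate({"name": "HighCPUAlert"}))
```

The point of the loop is that the model's conclusion is conditioned on fetched evidence rather than on the alert text alone.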
-
Many of the components that make up a model are attack vectors that organizations already focus on managing and securing. Because of this, organizations should make sure their data scientists and machine learning engineers are equipped with the same security tools and processes that their core development teams use.
-
In general, structured log events are mostly independent of each other, whereas unstructured log events collectively record an execution along the code paths a developer wants to track. Therefore, structured logs are typically used for monitoring (e.g., detecting system errors), whereas unstructured logs are used to debug why an error occurred. This suggests the two types of logs require different management solutions.
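A minimal illustration of the distinction, using only Python's standard `json` and `logging` modules (the event names and messages are invented):

```python
import io
import json
import logging

# Structured event: a self-contained key-value record, machine-parseable
# and suited to monitoring and alerting on its own.
structured = json.dumps({"event": "db_error", "code": 500, "latency_ms": 912})

# Unstructured events: free-text lines that only make sense together,
# tracing the code path a developer chose to record -- suited to debugging.
buf = io.StringIO()
logging.basicConfig(stream=buf, level=logging.DEBUG, force=True)
logging.debug("opening connection to replica-2")
logging.debug("retrying after timeout (attempt 2)")
logging.error("query failed: connection reset")

print(structured)
print(buf.getvalue())
```

The structured record can be queried in isolation; the unstructured lines carry their meaning only as a sequence, which is why the two call for different storage and search strategies.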
-
AI brings many benefits to software development, but they can only be achieved when risk is properly mitigated. This is why we at Sonar advocate for a “trust and verify” approach. 🤝 Our Sr. Director of Product and Solutions, Manish Kapur, recently spoke with ISMG's Tom Field about this, the latest SonarQube capability AI Code Assurance, and more at AWS re:Invent. Listen to their conversation: https://bit.ly/3DdaIzG Information Security Media Group (ISMG) #SonarAISolutions #AICodeAssurance #codequality #codesecurity #AWSreInvent
-
Code Governance: Why Every Organization Needs It Right Now

1. CTOs/VPs/Directors of Engineering: The biggest challenge is ensuring that code written by developers or AI adheres to company-specific best practices, maintains the security posture, and incorporates past learnings.
2. Developers: Governance and best-practice docs can be really long and boring. Developers aren't going to read through all of it. They face challenges with fast-paced development and resistance to new policies.

Solution: Integrating Code Governance into IDEs and PR Reviews. Imagine if developers had access to this intelligence right within their IDEs. In real time, they could see if they're violating coding standards, security policies, or past learnings, and fix it immediately. Even if issues slip past the IDE, they'd be flagged during the pull request review stage. CodeAnt AI (YC W24) fixes this; read more here: https://lnkd.in/g9kqBdnB
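The IDE/PR-review idea above can be sketched as a toy policy scanner over changed lines. The rules and messages here are invented for illustration; this is not CodeAnt AI's actual engine.

```python
import re

# Hypothetical governance rules derived from company policy docs.
POLICIES = [
    (re.compile(r"password\s*=\s*[\"'].+[\"']"), "hard-coded credential"),
    (re.compile(r"\beval\("), "use of eval() is banned by security policy"),
]

def review(diff_lines):
    """Flag policy violations in a list of changed lines."""
    findings = []
    for lineno, line in enumerate(diff_lines, 1):
        for pattern, message in POLICIES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

issues = review(['password = "hunter2"', 'result = eval(user_input)'])
for lineno, message in issues:
    print(f"line {lineno}: {message}")
```

Running the same check in the IDE (on save) and in CI (on the pull request) gives the two enforcement points the post describes.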
-
Large Language Model outages ☄️ K8s post mortem: Staging environments rarely match production perfectly, especially at scale. While “shift left” testing accelerates velocity and catches many bugs early, progressive rollouts in production remain one of the best safeguards against outages caused by conditions unique to production environments. https://lnkd.in/d2GBUQcP
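A progressive rollout of the kind described can be sketched as deterministic, percentage-based traffic splitting. The stage thresholds below are illustrative, not from the linked post-mortem.

```python
import hashlib

STAGES = [1, 5, 25, 100]  # percent of traffic on the new version per stage

def bucket(user_id: str) -> int:
    # Stable 0-99 bucket per user, so a user stays in one cohort
    # across requests while the rollout expands.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def uses_new_version(user_id: str, stage: int) -> bool:
    return bucket(user_id) < STAGES[stage]

share = sum(uses_new_version(f"user-{i}", 2) for i in range(10_000)) / 10_000
print(f"stage 2 routes ~{share:.0%} of traffic to the new version")
```

Because each stage exposes only a slice of production traffic, a bug that exists only under production conditions surfaces before it affects everyone, which is the safeguard the post argues for.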
-
😼The Mysterious Case of the Docker Architecture. In the sleepy town of Containerville, a group of detectives known as the "DevOps Squad" were tasked with solving the mystery of the Docker architecture. Their mission was to unravel the secrets of the main components that made up the enigmatic Docker architecture.

The first clue led them to the _Daemon_, a mysterious figure known only as "The Brain" who controlled the flow of containers and images. But The Brain was elusive, and the detectives had to use their wits to uncover its true identity.

Next, they encountered the _Client_, a charming interface who seemed to be hiding secrets of its own. But as they dug deeper, they discovered that the Client was merely a messenger, relaying instructions to The Brain.

The trail then led them to the _Dockerfile_, a cryptic text file containing instructions for building an image. It was written in a code that only the most skilled detectives could decipher. As they cracked the code, they uncovered the _Image_, a template for creating containers. But the Image was just a ghostly outline, and the detectives had to use their imagination to bring it to life.

Finally, they stumbled upon the _Container_, a running instance of the Image. But the Container was a fleeting entity, vanishing into thin air as soon as it was created. The detectives were stumped. But then, they received a tip about the _Registry_, a secret repository where images were stored. It was the key to unlocking the entire mystery.

#my_way_to_express_journey Savinder Puri sir. To continue reading, follow this 🔗 https://lnkd.in/dEgCvqU3
-
🎯 Qdrant in the Unstructured Platform This powerful combination helps data teams tackle the challenges of preparing unstructured data for RAG systems. Our Platform handles document processing, metadata preservation, and efficient batch uploads to Qdrant collections, all while maintaining enterprise-grade security standards. Now you can streamline your RAG pipeline development with automated data preprocessing, chunking, embeddings generation, and secure batch uploads to Qdrant collections. Learn more here: https://lnkd.in/ecQUiZEC
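The chunking step of such a preprocessing pipeline can be sketched in plain Python. The chunk size and overlap below are illustrative, and the actual Unstructured Platform pipeline handles far more (parsing, metadata, embeddings, batch upload to Qdrant).

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks with overlap, so context that
    straddles a boundary still appears intact in at least one chunk."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "RAG systems retrieve relevant context before generation. " * 20
pieces = chunk(doc)
print(f"{len(pieces)} chunks, first chunk {len(pieces[0])} chars")
```

Each chunk would then be embedded and upserted into a vector-store collection; the overlap is a common trade-off between retrieval recall and index size.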
-
Writeup of some important remarks from Chris Wysopal:

'The LLMs write code like software developers write code -- and developers don't write secure code ...'

'... only about 20% of applications showing an average monthly fix rate that exceeds 10% of their security flaws.'

'After five years, you see new vulnerabilities introduced about 37% of the time, meaning that three out of every eight new pieces of code are flawed.'

The problem isn't just that AIs write bad code, Wysopal pointed out. It's also that human developers believe that GenAIs write better code than they themselves can write, when in fact the opposite is true.

'... large-language models are trained on existing code that itself contains a lot of errors ... a large share of the training data is open-source software ...'

'Eventually, we will reach a point where the code-writing large-language models will be drowning in a sea of garbage, spouting out unusable gibberish that makes sense only to themselves.'

https://lnkd.in/gnWQfn-5
-
Comparison of different frameworks that provide structured output from LLMs. https://lnkd.in/gqziuN3Y