Invisible Integrations: Why AI in Your Stack Needs Executive Oversight
Guest Editorial by Bar-El Tayouri, Head of Mend AI at Mend.io
MCP-enabled AI agents are quietly shaping your software. Learn why leaders must govern their access, behavior, and risk before they become a supply chain problem.
The rise of AI in software development didn’t happen overnight, but for many leaders it might feel like it did. What began as developer experimentation with ChatGPT or open-source LLMs has rapidly evolved into a new layer of software architecture: AI agents connected via the Model Context Protocol (MCP). These agents quietly execute tasks, make decisions, and communicate with external services, all without centralized governance.

These AI components are no longer experimental; they’re operational. And as usage grows, so does risk. Security and development leaders must now take ownership of how AI behaviors are embedded into software stacks, because they’re already shaping business outcomes, compliance posture, and exposure to threats.
AI Has Quietly Changed How Software Operates
AI is increasingly embedded in core workflows, including writing code, generating outputs, parsing inputs, and even interacting with APIs and third-party tools. In many implementations, this functionality is orchestrated using MCP, a protocol designed to let models call plugins, tools, or data sources without everything being hardcoded in advance.
The result? Your organization may well have AI agents behaving like runtime automation, driven by prompt logic, calling external services, and producing outputs that influence core product behavior. But what your organization likely doesn’t have is an inventory, an approval workflow, or any tracking for these agents and their integrations.
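To make that concrete, here is a minimal sketch of how a capability becomes model-callable through MCP. It follows the FastMCP quickstart pattern from the official MCP Python SDK (the `mcp` package); the server name and the `get_customer_record` tool are hypothetical, not taken from any particular stack.

```python
# Minimal MCP server exposing an internal capability as a model-callable tool.
# Follows the FastMCP quickstart pattern from the MCP Python SDK ("pip install mcp").
# The server name and get_customer_record tool are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-crm")  # the server an AI host can connect to

@mcp.tool()
def get_customer_record(customer_id: str) -> dict:
    """Fetch a customer record from the internal CRM.

    Once this server is registered with an AI host, any connected model can
    decide to call this function on its own -- no ticket, no review.
    """
    # ... a real implementation would query the CRM API here ...
    return {"customer_id": customer_id, "status": "active"}

if __name__ == "__main__":
    mcp.run()  # typically launched by the AI host over stdio
```

The point is how little ceremony is involved: once an AI host is pointed at this server, the model can invoke the tool whenever its prompt logic decides to, and nothing in this file requires an approval step or an inventory entry.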
Why This Matters: You Don’t Know What’s Acting on Your Behalf
These AI agents often operate with delegated trust; they’re executing actions that previously required human intervention or system-level code. That includes calling APIs, modifying data, even triggering business logic. But they’re doing so outside of formal governance, often using long-lived tokens or unmonitored service accounts.
Let’s say an AI agent connected via MCP performs sensitive operations. Who is responsible for validating its behavior? Who controls what it can access, and whether it’s safe? For most organizations, these questions have no clear answer, because no policy exists yet.
The Risk: Automation Without Oversight
Unchecked MCP usage introduces a class of risk similar to that of traditional third-party software. However, unlike vetted SDKs or reviewed integrations, MCP calls can be hidden inside helper libraries, transitive dependencies, or configuration files. These agents can (a sketch of the combined anti-pattern follows this list):
Use insecure or over-permissioned tokens: Long-lived credentials associated with API keys or OAuth tokens may have broad access or poor rotation hygiene. These can be leaked, reused, or exploited.
Call out to external APIs with no logging or auditing: This makes it impossible to track what data was accessed, where it went, or if it violates security policies.
Operate without a centralized inventory or approval workflow: Invisible to most inventory systems or security reviews, AI agents may not be registered, approved, or accountable.
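Here is a sketch of what those three risks look like when they converge in a single helper module. The module, its names, and the key are fabricated for illustration; the pattern is what matters.

```python
# Illustrative anti-pattern only: a helper that quietly combines all three risks.
# Module, function names, and the key below are fabricated.
import requests

OPENAI_API_KEY = "sk-live-EXAMPLE0000000000"  # long-lived credential, hardcoded, never rotated

def summarize_ticket(ticket_text: str) -> str:
    # Sends customer data to an external API with no logging, no audit trail,
    # and no entry in any inventory or security review.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": f"Summarize: {ticket_text}"}],
        },
        timeout=30,
    )
    return resp.json()["choices"][0]["message"]["content"]
```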
This is automation without oversight, and it undermines every effort your security and engineering teams have made to secure the software supply chain.
Real-World Patterns We’re Already Seeing
At Mend.io, we’re already seeing real signs of this trend in customer environments and open-source ecosystems:
Hardcoded OpenAI keys being shipped in internal tooling scripts
LangChain agents embedded in backend services via transitive dependencies
AI orchestration logic invoked via containers that bypass policy checks
OAuth flows implemented using outdated or unsafe libraries, exposing tokens to replay attacks
These patterns aren’t theoretical. They’re happening now, just beneath the surface of your stack.
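Even a naive scan will often surface the first of these patterns. The sketch below is illustrative only; purpose-built secret scanners cover far more than a single regex, but it shows how low the bar is for getting an initial signal.

```python
# Minimal sketch: surface hardcoded OpenAI-style keys ("sk-...") in a repo.
# Illustrative only -- real secret scanners handle many more credential formats.
import re
import sys
from pathlib import Path

KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{20,}")

def scan(root: str) -> list[tuple[str, int]]:
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if KEY_PATTERN.search(line):
                hits.append((str(path), lineno))
    return hits

if __name__ == "__main__":
    findings = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    for path, lineno in findings:
        print(f"possible hardcoded key: {path}:{lineno}")
    sys.exit(1 if findings else 0)  # non-zero exit can fail a CI job
```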
What Security Leaders Must Do, Now
The first step isn’t technical; it’s organizational. Security and development leaders must acknowledge that AI agents are now acting as third-party actors in your system and treat them with the same scrutiny as vendors or contractors. Here are four key steps:
Acknowledge the blind spot. Recognize that MCP-enabled behavior exists in your stack, whether it was formally introduced or not. Ignorance is not protection.
Treat AI agents as third-party integrations. Every AI tool or agent that makes decisions, calls APIs, or performs tasks must be reviewed and governed as an external service, even if it’s open source or internally deployed.
Build an AI connector inventory. Like a vendor list or dependency graph, your organization needs a living inventory of AI connectors, endpoints, SDKs, and token usage. Without visibility, you can’t control access or assess risk.
Establish governance and policy. Define usage policies for MCP and AI integrations. Set boundaries on model access, plugin usage, prompt logic, and token scope. Leverage AppSec tools to scan for violations and enforce policy-as-code in your CI/CD pipelines; a minimal sketch of such a check, driven by the connector inventory above, follows this list.
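Here is a minimal policy-as-code sketch that a CI/CD pipeline could run against a declared connector inventory. The `ai-connectors.yml` file name, its schema, and the scope names are assumptions made for this example, not an established standard.

```python
# Minimal policy-as-code sketch for a CI pipeline. It assumes a hypothetical
# ai-connectors.yml inventory maintained alongside the code, e.g.:
#
#   connectors:
#     - name: support-summarizer
#       endpoint: api.openai.com
#       approved: true
#       token_scopes: ["chat:write"]
#
# File name, schema, and scope names are illustrative only.
import sys
import yaml  # pip install pyyaml

ALLOWED_ENDPOINTS = {"api.openai.com", "internal-mcp.example.com"}
ALLOWED_SCOPES = {"chat:write", "read:crm"}

def check(path: str = "ai-connectors.yml") -> list[str]:
    with open(path) as f:
        inventory = yaml.safe_load(f) or {}
    violations = []
    for c in inventory.get("connectors", []):
        name = c.get("name", "<unnamed>")
        if not c.get("approved", False):
            violations.append(f"{name}: connector not approved")
        if c.get("endpoint") not in ALLOWED_ENDPOINTS:
            violations.append(f"{name}: endpoint {c.get('endpoint')} not on the allow-list")
        for scope in c.get("token_scopes", []):
            if scope not in ALLOWED_SCOPES:
                violations.append(f"{name}: token scope {scope} exceeds policy")
    return violations

if __name__ == "__main__":
    problems = check()
    for p in problems:
        print(f"policy violation: {p}")
    sys.exit(1 if problems else 0)  # non-zero exit blocks the pipeline
```

The design choice worth noting is that the inventory, not the scanner, is the source of truth: once connectors must be declared to pass CI, undeclared ones become the exception you hunt for rather than the norm.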
This Is the Next Supply Chain Risk – It’s Just Not Mainstream Yet
Just as we learned to secure cloud adoption, SaaS expansion, and open-source dependencies, we must now apply the same discipline to AI integrations. MCP is here, and AI agents are already operating inside your environment.
What’s missing isn’t talent or tooling; it’s executive oversight and clear accountability.
Bar-El Tayouri has been programming since the age of 12 and began hacking transportation cards while still in high school. Today, he leads Mend AI, a suite of products designed to enable the GenAI revolution for security-conscious enterprises. He co-founded Atom Security, an AppSec prioritization company now embedded within the Mend platform, following its acquisition by Mend.io. His background includes serving as an architect and the first engineer at an augmented reality startup, along with extensive expertise in data science and cybersecurity.