If the term ‘NLWeb’ first brought to mind an image of a Dutch internet service provider, you’re probably not alone. What it actually is – or tries to become – is Microsoft’s vision of a parallel internet protocol with which website owners and application developers can integrate whatever LLM-based chatbot they desire. Unfortunately for Microsoft, the NLWeb protocol just suffered its first major security flaw.
The flaw is an absolute doozy: a basic path traversal vulnerability that allows an attacker to use specially crafted URLs to traverse the filesystem of the remote, LLM-hosting system and extract keys and other sensitive information. Although Microsoft has already patched it, no CVE was assigned, which raises the question of just how many more elementary bugs like this may be lurking in the protocol and its associated software.
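To make the bug class concrete, here is a minimal Python sketch (not the actual NLWeb code; the directory, endpoint, and parameter names are made up) of how a file-serving handler becomes vulnerable to path traversal, and the usual fix of resolving the requested path and verifying that it stays inside the served directory:

```python
# Hypothetical illustration of a path traversal bug and its fix.
# The paths and query parameter here are invented, not taken from NLWeb.
from pathlib import Path
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

STATIC_ROOT = Path("/srv/nlweb/static").resolve()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        requested = query.get("file", [""])[0]

        # Vulnerable pattern: joining user input directly onto a base path.
        # A request like "?file=../../../../etc/passwd" (or its
        # percent-encoded form, which parse_qs decodes) escapes the
        # intended directory and can read keys or config files.
        # unsafe = STATIC_ROOT / requested

        # Safer pattern: resolve the final path, then verify it is still
        # inside the allowed root before opening anything.
        candidate = (STATIC_ROOT / requested).resolve()
        if not candidate.is_relative_to(STATIC_ROOT):
            self.send_error(403, "Path outside served directory")
            return
        if not candidate.is_file():
            self.send_error(404)
            return

        self.send_response(200)
        self.end_headers()
        self.wfile.write(candidate.read_bytes())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

The point of the check is that rejecting obvious “..” substrings is not enough: resolving the final path and comparing it against the allowed root also catches encoded and indirect variants.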
As for why a website or application owner might be interested in NLWeb, the marketing pitch appears to position it as an alternative to integrating a local search function. This way, any website or app can have its own ChatGPT-style search functionality that is theoretically restricted to just that site, instead of chatbot-loving customers heading off to ChatGPT or an equivalent to ask their questions there.
Even aside from the strong ‘solution in search of a problem’ vibe, it’s worrying that right from the outset it seems to introduce pretty serious security issues that suggest a lack of real testing, not to mention ignorance of the fact that a lack of user input sanitization is a primary cause of widely exploited CVEs. Whether GitHub Copilot was used to write the affected codebase is unknown.
Yikes, the same companies touting AI integration into software and OS are the same ones telling us we have to have encrypted disks, secure boot and TPM. Weird.
Seems the AI revolution needs to learn some of the most basic things that websites and database users have learned over the years about input sanitization.
Path traversal exploits? *checks calendar* *checks calendar in another office* It’s still 2025, right? We have not travelled back into the late 90s or early aughts? I mean, if we actually did I’d have to make some phone calls.
What kind of imbecile writes stuff that is vulnerable to that? Or has unsanitised input fields? SQL injections? Don’t people learn about past mistakes any more?
Maybe not just Copilot, maybe even a middle manager vibing, with an “if it compiles, ship it!” mentality.
I find it really hard to believe that a big mega-corporation such as Microsoft would ship something riddled with security flaws…
How is this agentic?
Sounds more like RAG.