This Week In Security: Anime Catgirls, Illegal AdBlock, And Disputed Research

You may have noticed the Anime Catgirls when trying to get to the Linux Kernel’s mailing list, or one of any number of other sites associated with Open Source projects. [Tavis Ormandy] had this question, too, and even wrote about it. So, what’s the deal with the catgirls?

The project is Anubis, a “Web AI Firewall Utility”. The intent is to block AI scrapers, as Anubis “weighs the soul” of incoming connections and blocks the bots you don’t want. Anubis uses the user agent string and other indicators to determine what an incoming connection is, but the most obvious check is the in-browser hashing. Anubis puts a challenge string in the HTTP response header, and JavaScript running in the browser calculates a second string to append to this challenge. The goal is to find a value such that the first few bytes of the SHA-256 hash of the combined string are zero.
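To make the scheme concrete, here’s a minimal sketch of this style of proof-of-work in Python. The challenge string is made up, and we’re assuming the difficulty is expressed as a count of leading zero hex digits; Anubis’s real protocol differs in the details, but the idea is the same:

```python
import hashlib
import itertools

def solve_challenge(challenge: str, difficulty: int = 4) -> int:
    """Find a nonce such that SHA-256(challenge + nonce) starts with
    `difficulty` zero hex digits (i.e., difficulty / 2 zero bytes)."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

nonce = solve_challenge("example-challenge-from-header")
print(f"nonce {nonce} satisfies the challenge")
```

Each additional zero digit multiplies the expected work by 16, and that multiplier is exactly the economic knob [Tavis] argues is set far too low to inconvenience anyone running a data center.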

[Tavis] makes a compelling case that this hashing is security theatre: it makes things appear more secure, but doesn’t actually improve the situation. It’s only fair to point out that his observation comes from annoyance, as his preferred methods of accessing the Linux kernel git repository and mailing list are now blocked by Anubis. But the economics of compute costs clearly demonstrate that this SHA-256 hashing approach will only be effective so long as AI companies don’t bother adding the 25 lines of C it took him to calculate the challenge. The Anubis hashing challenge is literally security by obscurity.

Continue reading “This Week In Security: Anime Catgirls, Illegal AdBlock, And Disputed Research”

Sniffing 5G With Software-Defined Radio

The fifth generation mobile communications protocol (5G) is perhaps the most complicated wireless protocol ever made. Featuring wildly fast download speeds, beamforming base stations, and of course non-standard additions, it’s a rather daunting prospect to analyze for home hackers and researchers alike. But this didn’t stop the ASSET Research Group from developing a 5G sniffer and downlink injector.

The crux of the project is real-time sniffing, using one of two supported Universal Software Radio Peripheral (USRP) software-defined radios (SDRs) and a substantial quantity of compute power. The sniffed data can even be piped into Wireshark for filtering. The frequency is hard-coded into the sniffer for improved performance, with the n78 and n41 bands having been tested as of this writing. And while we expect most of you don’t have the supported USRP hardware, they’ve provided a sample capture file for anyone to analyze.
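If you’d like to poke at that sample capture yourself, a few lines of Python with the pyshark bindings (a wrapper around a locally installed tshark) will do it. Fair warning on the assumptions here: the filename is made up, and the mac-nr display filter presumes the capture is tagged for Wireshark’s 5G NR MAC dissector:

```python
import pyshark  # pip install pyshark; requires tshark on the system

# Open the sample capture with a display filter, exactly as you
# would type it into Wireshark's filter bar.
capture = pyshark.FileCapture(
    "sample_5g_capture.pcap",   # hypothetical filename
    display_filter="mac-nr",    # 5G NR MAC frames, assuming that tagging
)

for packet in capture:
    print(packet.highest_layer, packet.length)

capture.close()
```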

The other main feature of the project is an exploitation framework with numerous attack vectors developed by ASSET and others. By turning an SDR into a malicious 5G base station, numerous vulnerabilities and “features” can be exploited, with results ranging from downgrading the connection to 4G, to fingerprinting, and much more. It even includes an attack method we previously covered, 5Ghoul, which can cause device failure requiring removal of the SIM card. These vulnerabilities offer a unique look at the inner workings of 5G.

If you too are interested in 5G sniffing but don’t have access to the hardware needed, check out this hack turning a Qualcomm phone into a 5G sniffer!

This Week In Security: The AI Hacker, FortMajeure, And Project Zero

One of the current hot topics is using LLMs for security research. Poor quality reports written by LLMs have become the bane of vulnerability disclosure programs, but there is an equally interesting effort going on to put LLMs to work doing actually useful research. One such story comes from [Romy Haik] at ULTRARED, who is trying to build an AI Hacker. This isn’t an over-eager newbie naively asking an AI to find vulnerabilities; [Romy] knows what he’s doing. We know this because he tells us plainly that the LLM-driven hacker failed spectacularly.

The plan was to build a multi-LLM orchestra, with a single AI sitting at the top that maintains state through the entire process. Multiple LLMs sit below that one, deciding what to do next, working out exactly how to approach the problem, and actually generating commands for the underlying security tools. Then yet another AI takes the output and figures out whether the attack was successful. The tooling was assembled, and [Romy] set it loose on a few intentionally vulnerable VMs.
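The architecture is easier to see as code than as prose. Here’s a heavily simplified sketch of that loop; the llm() helper is a stand-in for whatever model API you’d wire up, and the roles and prompts are our guess at the shape of the system, not [Romy]’s actual implementation:

```python
import subprocess

def llm(role: str, prompt: str) -> str:
    """Stand-in for a real model call (hosted API, local model, etc.)."""
    raise NotImplementedError("wire up your model of choice here")

def attack_loop(target: str, max_steps: int = 20) -> list:
    findings = []
    state = f"Target: {target}. No actions taken yet."
    for _ in range(max_steps):
        # Top-level orchestrator: owns the state, decides the next move.
        plan = llm("orchestrator", f"State so far:\n{state}\nWhat next?")
        # Worker LLM: turns the plan into a concrete tool command.
        cmd = llm("operator", f"Emit one shell command to: {plan}")
        result = subprocess.run(cmd, shell=True, capture_output=True,
                                text=True, timeout=120)
        # Judge LLM: decides whether the output shows a real vulnerability.
        verdict = llm("judge", f"Command: {cmd}\nOutput: {result.stdout}\n"
                               "Did this demonstrate a vulnerability?")
        if verdict.lower().startswith("yes"):
            findings.append((plan, cmd, result.stdout))
        state += f"\n- {plan} -> {verdict}"
    return findings
```

As we’re about to see, it’s that judge step at the bottom of the loop that turns out to be the weak link.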

As we hinted at above, the results were fascinating but dismal. The LLM stack successfully found one Remote Code Execution (RCE), one SQL injection, and three Cross-Site Scripting (XSS) flaws. The whole post is sort of sneakily an advertisement for ULTRARED’s actual automated scanner, which uses more conventional methods to scan for vulnerabilities. But it makes for a useful comparison: the conventional scanner found nearly 100 vulnerabilities among the same collection of targets.

The AI did what you’d expect, finding plenty of false positives. Ask an AI to describe a vulnerability, and it will gladly do so, no real vulnerability required. But the real problem was the multitude of times that the AI stack did demonstrate a problem, and failed to realize it. [Romy] has thoughts on why this attempt failed, and two points stand out. The first is that while the LLM can be creative in crafting attacks, it’s really terrible at accurately analyzing the results. The second is one of the most important things to keep in mind regarding today’s AIs: it doesn’t actually want to find a vulnerability. One of the marks of security researchers is the near obsession they have with finding a great score.

Continue reading “This Week In Security: The AI Hacker, FortMajeure, And Project Zero”

This Week In Security: Perplexity V Cloudflare, GreedyBear, And HashiCorp

The Internet is fighting over whether robots.txt applies to AI agents. It all started when Cloudflare published a blog post detailing what the company was seeing from Perplexity crawlers. Of course, automated web crawling is part of how the modern Internet works, and almost immediately after the first web crawlers were written, one managed to DoS (Denial of Service) a web site back in 1994. The robots.txt file was first designed in response.

Make no mistake, robots.txt on its own is nothing more than a polite request for someone else on the Internet not to index your site. The more aggressive approach is to add rules to a Web Application Firewall (WAF) that detect and block a web crawler based on its user-agent string and source IP address. Cloudflare makes the case that Perplexity is not only intentionally ignoring robots.txt, but also actively disguising its web-crawling traffic by using IP addresses outside its normal range for these requests.
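The polite-request nature of the thing is obvious once you see how a well-behaved crawler consumes robots.txt: it fetches the file, asks it a question, and voluntarily honors the answer. Python ships a parser for the format in the standard library; the URLs and bot name below are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt.
robots = RobotFileParser("https://example.com/robots.txt")
robots.read()

# A well-behaved crawler asks permission before each fetch...
if robots.can_fetch("ExampleBot", "https://example.com/some/article"):
    print("allowed, crawling politely")
else:
    # ...but nothing except good manners stops it from fetching anyway.
    print("disallowed, backing off")
```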

This isn’t the first time Perplexity has landed in hot water over its web-scraping, AI-training endeavors. But Perplexity has published a blog post explaining that this is different!

And there’s genuinely an interesting argument to be made: robots.txt is aimed at indexing and AI-training traffic, and agentic AI requests are a different category. Put simply, Perplexity’s bots ignore robots.txt when a live user asks them to. Is that bad behavior, or exactly what we should expect? This question will have to be settled as AI agents become more common.

Continue reading “This Week In Security: Perplexity V Cloudflare, GreedyBear, And HashiCorp”

Why Names Break Systems

Web systems are designed to be simple and reliable, and designing for the everyday person is the goal. But if you don’t consider the odd one out, those users are going to run into problems. That’s everyday life for people whose names have features developers rarely consider, such as apostrophes or spaces. It’s the life of [Luke O’Sullivan], who even had to fly under a different name than his legal one.

[O’Sullivan] is far from a rare surname, but it presents an interesting challenge for many computer systems. Systems from the era when every bit was penny-pinched relied on ASCII, which defines only 128 characters, including a very small set of special characters. Some systems didn’t even support all of those, in the name of reducing loading times. Throw in the security features put in place to prevent injection attacks, and you have a very unfriendly field for many uncommon names.

Unicode is a newer standard with over 150,000 characters, allowing for nearly any character. However, many older systems are far from easy or cheap to convert to the new standard, which leaves many people having to adapt to the software rather than the software adapting to the user. While this is simply poor design in general, [O’Sullivan] makes sure to point out how demeaning it can be. Imagine being told that your name isn’t important enough to be included, or that it’s “invalid”.

One excuse that gets thrown about is the aforementioned injection attacks that can be used against these systems, which can cause crashes or even change settings. However, it’s not just these older systems that are affected: for the modern-day equivalent, prompt injection, check out how AI models can be manipulated!
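That excuse is thinner than it looks, because the standard fix has existed for decades: parameterized queries, which pass a name through as data rather than splicing it into the SQL string. A minimal sketch with Python’s built-in sqlite3 module (the table and queries are ours, purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (surname TEXT)")

surname = "O'Sullivan"

# Dangerous: splicing the name into the SQL string is exactly what
# makes an apostrophe look "invalid" (or exploitable):
#   conn.execute(f"INSERT INTO users VALUES ('{surname}')")

# Safe: the driver treats the apostrophe as plain data.
conn.execute("INSERT INTO users (surname) VALUES (?)", (surname,))

print(conn.execute("SELECT surname FROM users").fetchone())  # ("O'Sullivan",)
```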

Continue reading “Why Names Break Systems”

This Week In Security: Spilling Tea, Rooting AIs, And Accusing Of Backdoors

The Tea app has had a rough week. It’s not an unfamiliar story: Unsecured Firebase databases were left exposed to the Internet without any authentication. What makes this story particularly troubling is the nature of the app, and the resulting data that was spilled.

Tea is a “dating safety” application strictly for women. To enforce this, creating an account requires an ID verification process, where prospective users share their government-issued photo IDs with the platform. And that brings us to the first Firebase leak: 59 GB of photo IDs and other photos belonging to a large subset of users. This was not the only problem.

There was a second database discovered, and this one contains private messages between users. As one might imagine, given the subject matter of the app, many of these DMs contain sensitive details. This may not have been an unsecured Firebase database, but rather a separate problem, where any API key could access any DM from any user.
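For those who haven’t run into it, “unsecured Firebase database” means something embarrassingly concrete: a Firebase Realtime Database with wide-open rules will hand over its contents to a single unauthenticated GET. A sketch of the failure class, with a hypothetical project name, and the obvious caveat that you should only ever point this at databases you own:

```python
import json
from urllib.request import urlopen

# Hypothetical project; an open database answers with no credentials at all.
url = "https://example-project-default-rtdb.firebaseio.com/.json"

with urlopen(url) as response:
    data = json.load(response)

print(f"retrieved {len(data)} top-level records with zero authentication")
```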

This is the sort of security failing that is difficult for a company to recover from. And while it should be a lesson to users not to trust their sensitive messages to closed-source apps with questionable security guarantees, history suggests that few will learn it, and we’ll be covering yet another train-wreck of similar magnitude in another few months.

Continue reading “This Week In Security: Spilling Tea, Rooting AIs, And Accusing Of Backdoors”

When Online Safety Means Surrendering Your ID, What Can You Do?

A universal feature of traveling Europe as a Hackaday scribe is that when you sit in a hackerspace in another country and proclaim how nice a place it all is, the denizens will respond pessimistically with how dreadful their country really is. My stock response is to say “Hold my beer” and recount the antics of British politicians, but the truth is, the grass is always greener on the other side.

There’s one thing here in dear old Blighty that has me especially concerned at the moment though, and perhaps it’s time to talk about it here. The Online Safety Act has just come into force and is the UK government’s attempt to deal with what they perceive as the nasties on the Internet, and while some of its aspirations may be honourable, its effects are turning out to be a little chilling.

As might be expected, the Act requires providers to ensure their services are free of illegal material, and it creates some new offences surrounding sharing images without consent and online stalking. Where the concern lies for me is in the requirement for age verification to ensure kids don’t see anything the government thinks they shouldn’t, which is being enforced through online ID verification. There are many reasons why this is of concern, but I’ll name the three at the top of my list.

Continue reading “When Online Safety Means Surrendering Your ID, What Can You Do?”