recent reading
Mar. 11th, 2026 07:09 pm
These are all parts of ongoing series, and all fantasy (in significantly different styles):
Testament of Mute Things, by Lois McMaster Bujold (a Penric novella)
Apt to be Suspicious, by Celia Lake
To Ride a Rising Storm, by Moniquill Blackgoose: this doesn't just leave room for a sequel, it ends on a cliffhanger. Strongly recommended. Definitely start with her first novel, To Shape a Dragon's Breath, for world-building and if you care about spoilers. (I think the Bujold and Lake books would both work as starting points for reading those series.)
I am currently partway through Ada Palmer's Inventing the Renaissance, which is chewy nonfiction.
We just finished our latest read-aloud book, Half Magic by Edward Eager. Adrian and Cattitude had read this before, I hadn't, we all enjoyed it.
apparently we also need a new oven
Mar. 11th, 2026 10:40 pm
Via divers alarums and excursions we have established that the oven seems to trip All The Electrics... when it hits A Certain Temperature.
But. BUT. Today I SAW THE BAT for the first time this year (having been doing a questionable job of actually managing to watch for it at bat o'clock over the last several weeks); and my Special Interest In Moving My Body went surprisingly well; and A curled up on the sofa and did some more Reading About Special Interest with me; and I am actually doing alright.
Ymlaen i Gymru!
Mar. 11th, 2026 08:33 pm
I'm in south west Wales now, helping angelofthenorth get her stuff from storage so her nice flat will finally have her nice furniture and books and etc.
We're here with a church friend of hers who drove the rented van, and we'll get to meet local friends of hers tomorrow as we tackle it.
We had a little look when we got here and I can see why she's intimidated by the task at hand: there's a lot of stuff and while we don't want much of it, some of what she does want will be way at the back so everything else might have to get moved. I brought tape and scissors and a sharpie so boxes that have to be opened can be re-packed and labeled.
It's nice to have a few days off work, and to be only needed as a henchqueer. I've had a nasty headache most of the day, so my two wishes for tomorrow are that it fucks off and that we don't get the rain that is forecast here (the storage containers are open to the elements).
Freedom of speech
Mar. 11th, 2026 02:18 pm
“Free speech culture” has a natural tendency to discount the speech rights and interests of people who criticize speech.
This is important in Europe too, not just in the US, because it's a deliberate, specific Russian infowar tactic to promote far right events at UK universities and claim censorship if anyone objects. One report described a network based at [Cambridge] University and backed by Thiel, which it said was using the issue of free speech to “normalise white nationalism on UK campuses”.

Neither Putin nor Thiel has anyone's freedom at heart, and they're all too successful at distracting people with a toddler-like notion of "freedom" where you get to say the naughty words without being told off.
( shorter version of my original opinion, building on White's piece )
Canada Needs Nationalized, Public AI
Mar. 11th, 2026 11:04 am
Canada has a choice to make about its artificial intelligence future. The Carney administration is investing $2-billion over five years in its Sovereign AI Compute Strategy. Will any value generated by “sovereign AI” be captured in Canada, making a difference in the lives of Canadians, or is this just a passthrough to investment in American Big Tech?
Forcing the question is OpenAI, the company behind ChatGPT, which has been pushing an “OpenAI for Countries” initiative. It is not the only one eyeing its share of the $2-billion, but it appears to be the most aggressive. OpenAI’s top lobbyist in the region has met with Ottawa officials, including Artificial Intelligence Minister Evan Solomon.
All the while, OpenAI was less than open. The company had flagged the Tumbler Ridge, B.C., shooter’s ChatGPT interactions, which included gun-violence chats. Employees wanted to alert law enforcement but were rebuffed. Maybe there is a discussion to be had about users’ privacy. But even after the shooting, the OpenAI representative who met with the B.C. government said nothing.
When tech billionaires and corporations steer AI development, the resultant AI reflects their interests rather than those of the general public or ordinary consumers. Only after the meeting with the B.C. government did OpenAI alert law enforcement. Had it not been for the Wall Street Journal’s reporting, the public would not have known about this at all.
Moreover, OpenAI for Countries is explicitly described by the company as an initiative “in co-ordination with the U.S. government.” And it’s not just OpenAI: all the AI giants are for-profit American companies, operating in their private interests, and subject to United States law and increasingly bowing to U.S. President Donald Trump. Moving data centres into Canada under a proposal like OpenAI’s doesn’t change that. The current geopolitical reality means Canada should not be dependent on U.S. tech firms for essential services such as cloud computing and AI.
While there are Canadian AI companies, they remain for-profit enterprises, their interests not necessarily aligned with our collective good. The only real alternative is to be bold and invest in a wholly Canadian public AI: an AI model built and funded by Canada for Canadians, as public infrastructure. This would give Canadians access to the myriad benefits of AI without having to depend on the U.S. or other countries. It would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians.
Imagine AI embedded into health care, triaging radiology scans, flagging early cancer risks and assisting doctors with paperwork. Imagine an AI tutor trained on provincial curriculums, giving personalized coaching. Imagine systems that analyze job vacancies and sectoral and wage trends, then automatically match job seekers to government programs. Imagine using AI to optimize transit schedules, energy grids and zoning analysis. Imagine court processes, corporate decisions and customer service all sped up by AI.
We are already on our way to having AI become an inextricable part of society. To ensure stability and prosperity for this country, Canadian users and developers must be able to turn to AI models built, controlled, and operated publicly in Canada instead of building on corporate platforms, American or otherwise.
Switzerland has shown this to be possible. With funding from the federal government, a consortium of academic institutions—ETH Zurich, EPFL, and the Swiss National Supercomputing Centre—released the world’s most powerful and fully realized public AI model, Apertus, last September. Apertus leveraged renewable hydropower and existing Swiss scientific computing infrastructure. It also used no illegally pirated copyrighted material or poorly paid labour extracted from the Global South during training. The model’s performance stands at roughly a year or two behind the major corporate offerings, but that is more than adequate for the vast majority of applications. And it’s free for anyone to use and build on.
The significance of Apertus is more than technical. It demonstrates an alternative ownership structure for AI technology, one that allocates both decision-making authority and value to national public institutions rather than foreign corporations. This vision represents precisely the paradigm shift Canada should embrace: AI as public infrastructure, like systems for transportation, water, or electricity, rather than private commodity.
Apertus also demonstrates a far more sustainable economic framework for AI. Switzerland spent a tiny fraction of the billions of dollars that corporate AI labs invest annually, demonstrating that the frequent training runs with astronomical price tags pursued by tech companies are not actually necessary for practical AI development. The consortium focused on making something broadly useful rather than bleeding edge (it was not, like Silicon Valley, dubiously chasing “superintelligence”), so it could build a smaller model at much lower cost. Apertus’s training was at a scale (70 billion parameters) perhaps two orders of magnitude lower than the largest Big Tech offerings.
An ecosystem is now being developed on top of Apertus, using the model as a public good to power chatbots for free consumer use and to provide a development platform for companies prioritizing responsible AI use and rigorous compliance with laws like the EU AI Act. Instead of routing queries from those users to Big Tech infrastructure, Apertus is deployed to data centres across the national AI and computing initiatives of Switzerland, Australia, Germany, Singapore, and other partners.
The case for public AI rests on both democratic principles and practical benefits. Public AI systems can incorporate mechanisms for genuine public input and democratic oversight on critical ethical questions: how to handle copyrighted works in training data, how to mitigate bias, how to distribute access when demand outstrips capacity, and how to license use for sensitive applications like policing or medicine. Or how to handle a situation such as that of the Tumbler Ridge shooter. These decisions will profoundly shape society as AI becomes more pervasive, yet corporate AI makes them in secret.
By contrast, public AI developed by transparent, accountable agencies would allow democratic processes and political oversight to govern how these powerful systems function.
Canada already has many of the building blocks for public AI. The country has world-class AI research institutions, including the Vector Institute, Mila, and CIFAR, which pioneered much of the deep learning revolution. Canada’s $2-billion Sovereign AI Compute Strategy provides substantial funding.
What’s needed now is a reorientation away from viewing this as an opportunity to attract private capital, and toward a fully open public AI model.
This essay was written with Nathan E. Sanders, and originally appeared in The Globe and Mail.
[books, movement] A Physical Education, Casey Johnston
Mar. 10th, 2026 10:34 pm
Back at the beginning of January beadsbuttonslace wrote up some reflections on this book, which interested me enough that I put in a hold on my library's only digital copy, which was an audiobook, and then I managed to listen to it in under a week, and now I am subscribed to Johnston's newsletter (and reading its archives) and also trying to work out whether I want to buy a physical copy or a digital copy for my own library.
Which is to say: I liked it. A lot.
And some final notes:
- it was only earlier today that I realised that an article that did the rounds a little while ago, The new MacBook keyboard is ruining my life... is BY THIS SAME PERSON
- at least two of you will be delighted to know that in the Epilogue, she ( spoilers... )
Today's poem
Mar. 10th, 2026 09:00 am
with outraged, horrible noises.
The night is illegible,
the streetlights dead staves.
You move into each orbit of darkness
like an extinction.
Time the storyteller is tired.
She begins many stories
but loses track of the endings.
What will happen to the angry raccoons?
In the morning, count the cats,
count the birds, count the worms,
count the earth.
No doubt we will find all the endings
in the end.
Jailbreaking the F-35 Fighter Jet
Mar. 10th, 2026 09:50 am
Countries around the world are becoming increasingly concerned about their dependencies on the US. If you’ve purchased US-made F-35 fighter jets, you are dependent on the US for software maintenance.
The Dutch Defense Secretary recently said that he could jailbreak the planes to accept third-party software.
My day
Mar. 9th, 2026 10:43 pm
I had a lot to do today: a kinda tricky day at work, walking Teddy, making dinner, visiting a friend, and I wanted to go to the gym.
And I did all of it! And some chores like moving heavy things around, finalizing the grocery delivery that'll come tomorrow, and doing laundry.
Feels good.
New Attack Against Wi-Fi
Mar. 9th, 2026 10:57 am
It’s called AirSnitch:
Unlike previous Wi-Fi attacks, AirSnitch exploits core features in Layers 1 and 2 and the failure to bind and synchronize a client across these and higher layers, other nodes, and other network names such as SSIDs (Service Set Identifiers). This cross-layer identity desynchronization is the key driver of AirSnitch attacks.
The most powerful such attack is a full, bidirectional machine-in-the-middle (MitM) attack, meaning the attacker can view and modify data before it makes its way to the intended recipient. The attacker can be on the same SSID, a separate one, or even a separate network segment tied to the same AP. It works against small Wi-Fi networks in both homes and offices and large networks in enterprises.
With the ability to intercept all link-layer traffic (that is, the traffic as it passes between Layers 1 and 2), an attacker can perform other attacks on higher layers. The most dire consequence occurs when an Internet connection isn’t encrypted—something that Google recently estimated happens with as much as 6 percent of pages loaded on Windows and 20 percent on Linux. In these cases, the attacker can view and modify all traffic in the clear and steal authentication cookies, passwords, payment card details, and any other sensitive data. Since much company intranet traffic is sent in plaintext, it can also be intercepted.
Even when HTTPS is in place, an attacker can still intercept domain look-up traffic and use DNS cache poisoning to corrupt tables stored by the target’s operating system. The AirSnitch MitM also puts the attacker in the position to wage attacks against vulnerabilities that may not be patched. Attackers can also see the external IP addresses hosting webpages being visited and often correlate them with the precise URL.
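To make the cache-poisoning step concrete, here is a toy sketch of why an attacker who can forge answers in transit ends up controlling later lookups. This is entirely illustrative: `ToyDnsCache` and the addresses (drawn from the RFC 5737 documentation ranges) are invented for this example, and real resolvers do far more validation than this.

```python
import time

class ToyDnsCache:
    """Minimal illustration of why unauthenticated DNS answers are dangerous.

    A conceptual sketch, not a real resolver: it caches whichever answer
    arrives for a name and serves it until the TTL expires.
    """
    def __init__(self):
        self._cache = {}  # name -> (ip, expires_at)

    def store(self, name, ip, ttl):
        # Without DNSSEC or an authenticated channel, a forged reply from an
        # on-path attacker is cached exactly like a legitimate one.
        self._cache[name] = (ip, time.time() + ttl)

    def lookup(self, name):
        entry = self._cache.get(name)
        if entry and entry[1] > time.time():
            return entry[0]
        return None

cache = ToyDnsCache()
cache.store("bank.example", "203.0.113.7", ttl=300)      # legitimate answer
cache.store("bank.example", "198.51.100.66", ttl=86400)  # attacker's forgery wins
print(cache.lookup("bank.example"))  # the forged address, with a long TTL
```

The long TTL on the forged entry is the point: even after the attacker leaves the network, the victim's machine keeps resolving the name to the attacker's address until the poisoned entry expires.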
Here’s the paper.
vital functions
Mar. 8th, 2026 10:57 pm
Reading. I confess I have tripped and fallen into a special interest and am therefore currently primarily working my way through the archives of She's A Beast. BUT.
- This was all kicked off by A Physical Education: How I Escaped Diet Culture and Gained the Power of Lifting, Casey Johnston, inhaled; more comprehensive notes on this topic currently part way through being typed up.
- I am also about half way through (reading!) LIFTOFF: Couch to Barbell, also Casey Johnston, and am having fun starting to play with moving my body in ways.
- Continuing the theme of Moving Bodies In Ways and What Even Are Muscles, I have also started Science of Pilates (Tracy Ward).
- I also continue to work my way through What Is Queer Food?, John Birdsall, and am nearly done. Probably more thoughts on this at some point in the upcoming week.
Writing. Words continue to, very slowly, go up.
Listening. More Hidden Almanac. Very close to being caught up to the point I've theoretically listened to with A (some of which I wound up being asleep during)...
Playing. Inkulinati Exploders run on Master difficulty continues. We have now broken a quill (DEMONS :|) but we do continue to progress...
Another round (well, most of one) of The Little Orchard, this time with The Child deciding that we SHOULD turn the Bothersome Crows back over and put them back...
Cooking. New recipe! Meera Sodha's leek & chard martabak. Unlikely to make again but not sorry to have made.
Exploring. Adventures this week have included:
- Wood Green Mall, which contains PRIDE STAIRS, and the Community Diagnostic Centre, which contains GIANT WATERFOWL MURAL
- the walk between Wood Green underground station and Wood Green Mall, feat. ACORN BOLLARDS
- went for a bit of a Cross Walk one evening earlier this week (brain said AAAAAAH) and discovered along the way a fantastic white-with-pink-stripes camellia
- generally Going Out To Run Errands is currently accompanied by Many Flowers and that is nice, actually
Observing. flowersss.
Healthcare success
Mar. 8th, 2026 10:12 pm
This will be short because I need to go to bed, but I wanted to say -- particularly for our mutual friends here -- that D had his operation today; it all went just as planned and smoothly (in his family group chat, his mum said his back-on-the-ward selfie looked a bit woozy, and yes, but he also looked just like that before the op because he had to be there at 7 this morning!). He's home, tired and sore but able to watch TV, play video games, eat dinner, and watch baseball with me. It's been a nice evening.
I didn't get as much done today as I might have hoped, but I did a good job of prioritizing what needed to happen today vs. what can wait until tomorrow. Really hoping I get better sleep tonight; it's been kinda shitty for a couple weeks and that takes its toll on everything else; I've had a low-grade headache most of the day and I think it's largely the broken sleep and weird dreams.
Continued academic adventures
Mar. 8th, 2026 12:23 pm
Wait, did someone say sensible thing? How about instead I take that course (along with another one in Patristic Greek) as a standalone module - that's only 39 credits (compared to a standard of 30) this semester. What could possibly go wrong? My plan had been to start all the modules until a decision was made, and then drop at least one of the optional ones if I wasn't allowed to swap it with the compulsory one. The fatal flaw in that plan is that I am now having Way Too Much Fun to do that. I will keep the option of dropping one or the other in reserve if I feel like I'm burning out. The workload is a lot, and I am slightly behind compared to where my timetable says I should be, but if life holds off on curveballs then I think I should be able to get caught up in the next week.
The Midrash course in particular is really really good. We had a couple of introductory lectures on general background, one from an academic and theoretical perspective, and one in which we looked at what midrash says about itself. After that we got stuck into actually doing the reading and interpreting. We're studying the Petikot (a series of introductory comments) of Lam Rabbah, an exegesis of Lamentations. It's a completely different approach from that taken in traditional Christian Biblical Studies, somehow both more open to individual and non-literal interpretations and also more demanding of a rigorous justification based on the precise details of the words of scripture.
It's quite a small group - four students, and two professors - Rabbi Dr David Meyer, who is leading us, and Pierre van Hecke, my erstwhile teacher of Ugaritic and Hebrew, who is joining in more like a fifth student. It's really delightful, having spent a fair amount of time over the last 18 months learning to read Hebrew, to be actually putting that learning into practice. My command of the language is probably the weakest in the group, but I'm just about managing to keep up, and at least some of my hermeneutical suggestions in class have been meeting with positive responses, which is encouraging.
The roar of the crowd
Mar. 7th, 2026 10:08 pm
This afternoon, I watched the Nicaragua vs Dominicana game in the World Baseball Classic.
It's so loud. I love it. I kept looking up because I heard the kind of crowd noise that my white ass expected to mean someone had just hit a home run or something, and instead it's, like, a check swing or what's almost certainly going to be an infield out or whatever.
Tonight, D and I are watching Japan vs. Korea in the Tokyo Dome, so I'm hearing more chants and drums and clapping than I've ever heard, even at West Indies cricket matches.
I love it, gotta soak this up as much as I can.
Friday Squid Blogging: Squid in Byzantine Monk Cooking
Mar. 6th, 2026 10:03 pm
This is a very weird story about how squid stayed on the menu of Byzantine monks by falling between the cracks of dietary rules.
At Constantinople’s Monastery of Stoudios, the kitchen didn’t answer to appetite.
It answered to the “typikon”: a manual for ensuring that nothing unexpected happened at mealtimes. Meat: forbidden. Dairy: forbidden. Eggs: forbidden. Fish: feast-day only. Oil: regulated. But squid?
Squid had eight arms, no bones, and a gift for changing color. Nobody had bothered writing a regulation for that. This wasn’t a loophole born of legal creativity but an oversight rooted in taxonomic confusion. Medieval monks, confronted with a creature that was neither fish nor fowl, gave up and let it pass.
In a kitchen governed by prohibitions, the safest ingredient was the one that caused the least disturbance. Squid entered not with applause, but with a shrug.
Bonus stuffed squid recipe at the end.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Anthropic and the Pentagon
Mar. 6th, 2026 05:07 pm
OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government towards some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security. At issue is Anthropic’s insistence that the US Department of Defense (DoD) could not use its models to facilitate “mass surveillance” or “fully autonomous weapons,” provisions the defense secretary Pete Hegseth derided as “woke.”
It all came to a head on Friday evening when Donald Trump issued an order for federal government agencies to discontinue use of Anthropic models. Within hours, OpenAI had swooped in, potentially seizing hundreds of millions of dollars in government contracts by striking an agreement with the administration to provide classified government systems with AI.
Despite the histrionics, this is probably the best outcome for Anthropic—and for the Pentagon. In our free-market economy, both are, and should be, free to sell and buy what they want with whom they want, subject to longstanding federal rules on contracting, acquisitions, and blacklisting. The only factor out of place here are the Pentagon’s vindictive threats.
AI models are increasingly commodified. The top-tier offerings have about the same performance, and there is little to differentiate one from the other. The latest models from Anthropic, OpenAI and Google, in particular, tend to leapfrog each other with minor hops forward in quality every few months. The best models from one provider tend to be preferred by users to the second, or third, or 10th best models at a rate of only about six times out of 10, a virtual tie.
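Leaderboards that rank models from pairwise human preferences typically express them as Elo-style ratings. As a back-of-the-envelope illustration (mine, not the essay's), inverting the standard Elo expectation formula shows why a 6-in-10 preference rate really is "a virtual tie": it corresponds to a gap of only about 70 rating points.

```python
import math

def elo_gap(win_rate: float) -> float:
    """Elo rating gap implied by a head-to-head preference rate.

    Inverts the Elo expectation E = 1 / (1 + 10 ** (-gap / 400))
    to recover the gap from an observed win rate.
    """
    return 400 * math.log10(win_rate / (1 - win_rate))

print(round(elo_gap(0.60)))  # 60% preference -> gap of about 70 points
print(round(elo_gap(0.55)))  # 55% preference -> gap of about 35 points
```

For comparison, a 70-point gap is roughly the difference between adjacent models in a tightly clustered top tier, which is consistent with the essay's point that the leading offerings leapfrog each other by small margins.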
In this sort of market, branding matters a lot. Anthropic and its CEO, Dario Amodei, are positioning themselves as the moral and trustworthy AI provider. That has market value for both consumers and enterprise clients. In taking Anthropic’s place in government contracting, OpenAI’s CEO, Sam Altman, vowed to somehow uphold the same safety principles Anthropic had just been pilloried for. How that is possible given the rhetoric of Hegseth and Trump is entirely unclear, but seems certain to further politicize OpenAI and its products in the minds of consumers and corporate buyers.
Posturing publicly against the Pentagon and as a hero to civil libertarians is quite possibly worth the cost of the lost contracts to Anthropic, and associating themselves with the same contracts could be a trap for OpenAI. The Pentagon, meanwhile, has plenty of options. Even if no big tech company were willing to supply it with AI, the department has already deployed dozens of open-weight models—whose parameters are public and are often licensed permissively for government use.
We can admire Amodei’s stance, but, to be sure, it is primarily posturing. Anthropic knew what they were getting into when they agreed to a defense department partnership for $200m last year. And when they signed a partnership with the surveillance company Palantir in 2024.
Read Amodei’s statement about the issue. Or his January essay on AIs and risk, where he repeatedly uses the words “democracy” and “autocracy” while evading precisely how collaboration with US federal agencies should be viewed in this moment. Amodei has bought into the idea of using “AI to achieve robust military superiority” on behalf of the democracies of the world in response to the threats from autocracies. It’s a heady vision. But it is a vision that likewise supposes that the world’s nominal democracies are committed to a common vision of public wellbeing, peace-seeking and democratic control.
Regardless, the defense department can also reasonably demand that the AI products it purchases meet its needs. The Pentagon is not a normal customer; it buys products that kill people all the time. Tanks, artillery pieces, and hand grenades are not products with ethical guard rails. The Pentagon’s needs reasonably involve weapons of lethal force, and those weapons are continuing on a steady, if potentially catastrophic, path of increasing automation.
So, at the surface, this dispute is a normal market give and take. The Pentagon has unique requirements for the products it uses. Companies can decide whether or not to meet them, and at what price. And then the Pentagon can decide from whom to acquire those products. Sounds like a normal day at the procurement office.
But, of course, this is the Trump administration, so it doesn’t stop there. Hegseth has threatened Anthropic not just with loss of government contracts. The administration has, at least until the inevitable lawsuits force the courts to sort things out, designated the company as “a supply-chain risk to national security,” a designation previously only ever applied to foreign companies. This prevents not only government agencies, but also their own contractors and suppliers, from contracting with Anthropic.
The government has also, inconsistently, threatened to invoke the Defense Production Act, which could force Anthropic to remove contractual provisions the department had previously agreed to, or perhaps to fundamentally modify its AI models to remove in-built safety guardrails. The government’s demands, Anthropic’s response, and the legal context in which they are acting will undoubtedly all change over the coming weeks.
But, alarmingly, autonomous weapons systems are here to stay. Primitive pit traps evolved into mechanical bear traps. The world is still debating the ethical use of, and dealing with the legacy of, land mines. The US Phalanx CIWS is a 1980s-era shipboard anti-missile system with a fully autonomous, radar-guided cannon. Today’s military drones can search, identify and engage targets without direct human intervention. AI will be used for military purposes, just as every other technology our species has invented has been.
The lesson here should not be that one company in our rapacious capitalist system is more moral than another, or that one corporate hero can stand in the way of governments adopting AI as a technology of war, or surveillance, or repression. Unfortunately, we don’t live in a world where such barriers are permanent or even particularly sturdy.
Instead, the lesson is about the importance of democratic structures and the urgent need for their renovation in the US. If the defense department is demanding the use of AI for mass surveillance or autonomous warfare that we, the public, find unacceptable, that should tell us we need to pass new legal restrictions on those military activities. If we are uncomfortable with the force of government being applied to dictate how and when companies yield to unsafe applications of their products, we should strengthen the legal protections around government procurement.
The Pentagon should maximize its warfighting capabilities, subject to the law. And private companies like Anthropic should posture to gain consumer and buyer confidence. But we should not rest on our laurels, thinking that either is doing so in the public’s interest.
This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

