Software taketh away faster than hardware giveth: Why C++ programmers keep growing fast despite competition, safety, and AI

2025 was another great year for C++. It shows in the numbers

Before we dive into the data below, let’s put the most important question up front: Why have C++ and Rust been the fastest-growing major programming languages from 2022 to 2025?

Primarily, it’s because throughout the history of computing “software taketh away faster than hardware giveth.” There is enduring demand for efficient languages because our demand for solving ever-larger computing problems consistently outstrips our ability to build greater computing capacity, with no end in sight. [6] Every few years, people wonder whether our hardware has finally become faster than we need, until the future’s next big software demand breaks across the industry in a huge wake-up moment of the kind that iOS delivered in 2007 and ChatGPT delivered in November 2022. AI is only the latest source of demand to squeeze the most performance out of available hardware.

The world’s two biggest computing constraints in 2025

Quick quiz: What are the two biggest constraints on computing growth in 2025? What’s in shortest supply?

Take a moment to answer that yourself before reading on…

— — —

If you answered exactly “power and chips,” you’re right — and in the right order.

Chips are only our #2 bottleneck. It’s well known that the hyperscalers are competing hard to get access to chips. That’s why NVIDIA is now the world’s most valuable company, and TSMC is such a behemoth that it’s our entire world’s greatest single point of failure.

But many people don’t realize: Power is the #1 constraint in 2025. Did you notice that all the recent OpenAI deals were expressed in terms of gigawatts? Let’s consider what three C-level executives said on their most recent earnings calls. [1]

Amy Hood, Microsoft CFO (MSFT earnings call, October 29, 2025):

[Microsoft Azure’s constraint is] not actually being short GPUs and CPUs per se, we were short the space or the power, is the language we use, to put them in.

Andy Jassy, Amazon CEO (AMZN earnings call, October 30, 2025):

[AWS added] more than 3.8 gigawatts of power in the past 12 months, more than any other cloud provider. To put that into perspective, we’re now double the power capacity that AWS was in 2022, and we’re on track to double again by 2027.

Jensen Huang, NVIDIA CEO (NVDA earnings call, November 19, 2025):

The most important thing is, in the end, you still only have 1 gigawatt of power. One gigawatt data centers, 1 gigawatt power. … That 1 gigawatt translates directly. Your performance per watt translates directly, absolutely directly, to your revenues.

That’s why the future is enduringly bright for languages that are efficient in “performance per watt” and “performance per transistor.” The size of computing problems we want to solve has routinely outstripped our computing supply for the past 80 years; I know of no reason why that would change in the next 80 years. [2]

The list of major portable languages that target those key durable metrics is very short: C, C++, and Rust. [3] And so it’s no surprise to see that in 2025 all three continued experiencing healthy growth, but especially C++ and Rust.

Let’s take a look.

The data in 2025: Programming keeps growing by leaps and bounds, and C++ and Rust are growing fastest

Programming is a hot market, and programmers are in long-term high-growth demand. (AI is not changing this, and will not change it; see Appendix.)

“Global developer population trends 2025” (SlashData, 2025) reports that in the past three years the global developer population grew about 50%, from just over 31 million to just over 47 million. IDC forecasts that this growth will continue, to over 57 million developers by 2028. And which two languages are growing the fastest (highest percentage growth from 2022 to 2025)? Rust, and C++.

Developer population growth 2022-2025

To put C++’s growth in context:

  • Compared to all languages: There are now more C++ developers than the #1 language had just four years ago.
  • Compared to Rust: Each of C++, Python, and Java just added about as many developers in one year as there are total Rust developers in the world.

C++ is a living language whose core job to be done is to make the most of hardware, and it is continually evolving to stay relevant to the changing hardware landscape. The new C++26 standard contains additional support for hardware parallelism on the latest CPUs and GPUs, notably adding more support for SIMD types for intra-CPU vector parallelism, and the std::execution Sender/Receiver model for general multi-CPU and GPU concurrency and parallelism.

But wait — how could this growth be happening? Isn’t C++ “too unsafe to use,” according to a spate of popular press releases and tweets by a small number of loud voices over the past few years?

Let’s tackle that next…

Safety (type/memory safety, functional safety) and security

C++’s rate of security vulnerabilities has been greatly overblown in the press, primarily because some reports count only programming language vulnerabilities even though those are a shrinking minority every year, and because statistics conflate C and C++. Let’s consider those two things separately.

First, the industry’s security problem is mostly not about programming language insecurity.

Year after year, and again in 2025, in the MITRE “CWE Top 25 Most Dangerous Software Weaknesses” (mitre.org, 2025) only three of the top 10 “most dangerous software weaknesses” are related to language safety properties. Of those three, two (out-of-bounds write and out-of-bounds read) are directly and dramatically improved in C++26’s hardened C++ standard library which does bounds-checking for the most widely used bounded operations (see below). And that list is only about software weaknesses, when more and more exploits bypass software entirely.

Why are vulnerabilities increasingly not about language issues, or even about software at all? Because we have been hardening our software; this is why the cost of zero-day exploits has kept rising, from thousands to millions of dollars. So attackers pursue those less, and switch to targeting the next-slowest animal in the herd. For example, “CrowdStrike 2025 Global Threat Report” (CrowdStrike, 2025) reports that “79% of [cybersecurity intrusion] detections were malware-free,” not involving programming language exploits. Instead, there was huge growth not only in non-language exploits, but even in non-software exploits, including a “442% growth in vishing [voice phishing via phone calls and voice messages] operations between the first and second half of 2024.”

Why go to the trouble of writing an exploit for a use-after-free bug (an exploit that gets more expensive every year) to infect someone’s computer with malware, when it’s easier to do some cross-site scripting that doesn’t depend on a programming language, and easier still to ignore the software entirely and just convince the user to tell you their password on the phone?

Second, for the subset that is about programming language insecurity, the problem child is C, not C++.

A serious problem is that vulnerability statistics almost always conflate C and C++; it’s very hard to find good public sources that distinguish them. The only reputable public study I know of that distinguished between C and C++ is Mend.io’s, as reported in “What are the most secure programming languages?” (Mend.io, 2019). Although the data is from 2019, as you can see, the results are consistent across years.

Can’t see the C++ bar? Pinch to zoom. 😉

Although C++’s memory safety has always been much closer to that of other modern popular languages than to that of C, we do have room for improvement and we’re doing even better in the newest C++ standard about to be released, C++26. It delivers two major security improvements, where you can just recompile your code as C++26 and it’s significantly more secure:

  • C++26 eliminates undefined behavior from uninitialized local variables. [4] How needed is this? Well, it directly addresses a Reddit r/cpp complaint posted just today while I was finishing this post: “The production bug that made me care about undefined behavior” (Reddit, December 30, 2025).
  • C++26 adds bounds safety to the C++ standard library in a “hardened” mode that bounds-checks the most widely used bounded operations. “Practical Security in Production” (ACM Queue, November 2025) reports that it has already been used at scale across Apple platforms (including WebKit) and nearly all Google services and Chrome (100s of millions of lines of code) with tiny space and time overhead (fraction of one percent each), and “is projected to prevent 1,000 to 2,000 new bugs annually” at Google alone.

Additionally, C++26 adds functional safety via contracts: preconditions, postconditions, and contract assertions in the language, that programmers can use to check that their programs behave as intended well beyond just memory safety.

Beyond C++26, in the next couple of years I expect to see proposals to:

  • harden more of the standard library
  • remove more undefined behavior by turning it into erroneous behavior, turning it into language-enforced contracts, or forbidding it via subsets that ban unsafe features by default unless we explicitly opt in (aka profiles)

I know of people who’ve been asking for C++ evolution to slow down a little to let compilers and users catch up, much as we did for C++03. But we like all this extra security, too. So, just spitballing here, but hypothetically:

What if we focused C++29, the next release cycle of C++, on only issue-list-level items (bug fixes and polish, not new features) and the above “hardening” list (add more library hardening, remove more language undefined behavior)?

I’m intrigued by this idea, not because security is C++’s #1 burning issue — it isn’t; C++ usage is continuing to grow by leaps and bounds — but because it could address both the “let’s pause to stabilize” and “let’s harden up even more” motivations. Focus is about saying no.

Conclusion

Programming is growing fast. C++ is growing very fast, with a healthy long-term future because it’s deeply aligned with the overarching 80-year trend that computing demand always outstrips supply. C++ is a living language that continually adapts to its environment to fulfill its core mission, tracking what developers need to make the most of hardware.

And it shows in the numbers.

Here’s to C++’s great 2025, and its rosy outlook in 2026! I hope you have an enjoyable rest of the holiday period, and see you again in 2026.

Acknowledgments

Thanks to Saeed Amrollahi Boyouki, Mark Hoemmen and Bjarne Stroustrup for motivating me to write this post and/or providing feedback.


Appendix: AI

Finally, let’s talk about the topic no article can avoid: AI.

C++ is foundational to current AI. If you’re running AI, you’re running CUDA (or TensorFlow or similar) — directly or indirectly — and if you’re running CUDA (or TensorFlow or similar), you’re probably running C++. CUDA is primarily available as a C++ extension. There’s always room for DSLs at the leading edge, but for general-purpose AI most high-performance deployment and inference is implemented in C++, even if people are writing higher-level code in other languages (e.g., Python).

But more broadly than just C++: What about AI generally? Will it take all our jobs? (Spoiler: No.)

AI is a wonderful and transformational tool that greatly reduces rote work, including problems that have already been solved, where the LLM is trained on the known solutions. But AI cannot understand, and therefore can’t solve, new problems — which is most of the current and long-term growth in our industry.

What does that imply? Two main things, in my opinion…

First, I think that people who think AI isn’t a major game-changer are fooling themselves.

To me, AI is on par with the wheel (back in the mists of time), the calculator (back in the 1970s), and the Internet (back in the 1990s). [5] Each of those has been a game-changing tool to accelerate (not replace) human work, and each led to more (not less) human production and productivity.

I strongly recommend checking out Adam Unikowsky’s “Automating Oral Argument” (Substack, July 7, 2025). Unikowsky took his own actual oral arguments before the United States Supreme Court and showed how well 2025-era Claude can do as a Supreme Court-level lawyer, and with what strengths and weaknesses. Search for “Here is the AI oral argument” and click on the audio player, which is a recording of an actual Supreme Court session and replaces only Unikowsky’s responses with his AI-generated voice saying the AI-generated text argument directly responding to each of the justices’ actual questions; the other voices are the real Supreme Court justices. (Spoiler: “Objectively, this is an outstanding oral argument.”)

Second, I think that people who think AI is going to put a large fraction of programmers out of work are fooling themselves.

We’ve just seen that, today, three years after ChatGPT took the world by storm, the number of human programmers is growing as fast as ever. Even the companies that are the biggest boosters of the “AI will replace programmers” meme are actually aggressively growing, not reducing, their human programmer workforces.

Consider what three more C-level executives are saying.

Sam Schillace, Microsoft Deputy CTO (Substack, December 19, 2025) is pretty AI-ebullient, but I do agree with this part, which he says well and which resonates directly with Unikowsky’s experience above:

If your job is fundamentally “follow complex instructions and push buttons,” AI will come for it eventually.

But that’s not most programmers. Matt Garman, Amazon Web Services CEO (interview with Matthew Berman, X, August 2025) says bluntly:

People were telling me [that] with AI we can replace all of our junior people in our company. I was like that’s … one of the dumbest things I’ve ever heard. … I think AI has the potential to transform every single industry, every single company, and every single job. But it doesn’t mean they go away. It has transformed them, not replaced them.

Mike Cannon-Brookes, Atlassian CEO (Stratechery interview, December 2025) says it well:

I think [AI]’s a huge force multiplier personally for human creativity, problem solving … If software costs half as much to write, I can either do it with half as many people, but [due to] core competitive forces … I will [actually] need the same number of people, I would just need to do a better job of making higher quality technology. … People shouldn’t be afraid of AI taking their job … they should be afraid of someone who’s really good at AI [and therefore more efficient] taking their job.

So if we extend the question “what are our top constraints on software?” beyond just hardware and power, the #3 long-term constraint is clear: We are chronically short of skilled human programmers. Humans are not being replaced en masse, not most of us; we are being made more productive, and we’re needed more than ever. As I wrote above: “Programming is a hot market, and programmers are in long-term high-growth demand.”


Endnotes

[1] It’s actually great news that Big Tech is spending heavily on power, because the gigawatt capacity we build today is a long-term asset that will keep working for 15 to 20+ years, whether the companies that initially build that capacity survive or get absorbed. That’s important because it means all the power generation being built out today to satisfy demand in the current “AI bubble” will continue to be around when the next major demand for compute after AI comes along. See Ben Thompson’s great writing, such as “The Benefits of Bubbles” (Stratechery, November 2025).

[2] The Hitchhiker’s Guide to the Galaxy contains two opposite ideas, both fun but improbable: (1) The problem of being “too compute-constrained”: Deep Thought, the size of a city, wouldn’t really be allowed to run for 7.5 million years; you’d build a million cities. (2) The problem of having “too much excess compute capacity”: By the time a Marvin with a “brain the size of a planet” was built, he wouldn’t really be bored; we’d already be trying to solve problems the size of the solar system.

[3] This is about “general-purpose” coding. Code at the leading specialized edges will always include use of custom DSLs.

[4] This means that compiling plain C code (that is in the C/C++ intersection) as C++26 also automatically makes it more correct and more secure. This isn’t new; compiling C code as C++ and having the C code be more correct has been true since the 1980s.

[5] If you’re my age, you remember when your teacher fretted that letting you use a calculator would harm your education. More of you remember similar angsting about letting students google the internet. Now we see the same fears with AI — as if we could stop it or any of those others even if we should. And we shouldn’t; each time, we re-learn the lesson that teaching students to use such tools should be part of their education because using tools makes us more productive.

[6] This is not the same as Wirth’s Law, that “software is getting slower more rapidly than hardware is becoming faster.” Wirth’s observation was that the overheads of operating systems and higher-level runtimes and other costly abstractions were becoming ever heavier over time, so that a program to solve the same problem was getting more and more inefficient and soaking up more hardware capacity than it used to; for example, printing “Hello world” really does take far more power and hardware when written in modern Java than it did in Commodore 64 BASIC. That doesn’t apply to C++, which is not getting slower over time; C++ continues to be at least as efficient as low-level C for most uses. No, the key point I’m making here is very different: that the problems the software is tackling are growing faster than hardware is becoming faster.

Speaking in Singapore (Thursday 20 November) and Sydney (Sunday 23 November)

In the next week, I will be giving talks on C++ reflection in Singapore (Thursday night) and Sydney (Sunday night). If you’re local, please RSVP! I look forward to seeing many of you there.

Singapore C++ Users Group November Meetup

Thursday November 20, 6:30pm to 9:30pm
Clifford Pier, Fullerton Bay Hotel
80 Collyer Quay, Singapore 049326

Citadel Securities Sydney C++ Talk

Sunday November 23, 6:00pm to 10:00pm
Museum of Contemporary Art Australia
140 George Street, The Rocks, NSW, Australia 2000

In related news, see yesterday’s Reddit post “Reflection is coming to GCC sooner than expected!” — our major compiler implementers understand the importance of reflection and are racing to implement it. Favorite quote: “This patch implements C++26 Reflection as specified by P2996R13, which allows users to perform magic.”

Finally, here’s my talk description.

Reflection — C++’s Decade-Defining Rocket Engine

In June 2025, C++ crossed a Rubicon: it handed us the keys to its own machinery. For the first time, C++ can describe itself — and generate more. The first compile-time reflection features in draft C++26 mark the most transformative turning point in our language’s history by giving us the most powerful new engine for expressing efficient abstractions that C++ has ever had, and we’ll need the next decade to discover what this rocket can do.

This talk is a high-velocity tour through what reflection enables today in C++26, and what it will enable next. The point of this talk isn’t to immediately grok any given technique or example. The takeaway is bigger: to leave all of us dizzy from the sheer volume of different examples, asking again and again, “Wait, we can do that now?!” — to fire up our imaginations to discover and develop this enormous new frontier together, and chart the strange new worlds C++ reflection has just opened for us to explore.

Reflection has arrived, more is coming, and the frontier is open. Let’s go.

Trip report: November 2025 ISO C++ standards meeting (Kona, USA)

On Saturday, the ISO C++ committee completed the first of two final fit-and-finish meetings for C++26, in our meeting in Kona, USA. What we have in the C++26 working draft represents exactly the set of features we have consensus on so far; the goal of these last two meetings is to fix bugs and otherwise increase consensus for the C++26 standard. We are well on track to complete our work on C++26 and set it in stone at our next meeting in March 2026.

This meeting was hosted by the Standard C++ Foundation. Our hosts arranged for high-quality facilities for our six-day meeting from Monday through Saturday. We had about 200 attendees, about half in-person and half remote via Zoom, formally representing 21 nations. At each meeting we regularly have new guest attendees who have never attended before, and this time there were 17 new guest attendees, mostly in-person, in addition to new attendees who are official national body representatives. To all of them, once again welcome!

The committee currently has 23 active subgroups, 10 of which met in 6 parallel tracks throughout the week. Some groups ran all week, and others ran for a few days or a part of a day, depending on their workloads. Unusually, there were no major evening sessions this week as we focused on completing the feature set of C++26. You can find a brief summary of ISO procedures here.

Beyond feature freeze: No new features, one removed (trivial relocatability), and many improvements

We are beyond the C++26 feature freeze deadline, so no major new features were added this time.

This week we heard and worked through concerns about several features, including comments proposing to move features like contracts and trivial relocatability out of C++26 for further work. We resolved 70% of the official international comments on C++26, which puts us well on schedule to finish at our next meeting in March, and we fixed many other issues and bugs too.

Here are a few highlights…

For contracts (pre, post, contract_assert), we spent most of Monday and Tuesday reviewing feedback. The strong consensus was to keep contracts in C++26, but to make two important bug fixes and pursue a way to specify “must enforce this assertion”:

  • We discovered two bugs, and adopted fixes for those at this meeting. One was paper P3878R1 “Standard library hardening should not use the ‘observe’ semantic” by Ville Voutilainen, Jonathan Wakely, John Spicer, and Stephan T. Lavavej.
  • For the next meeting in March, we are also pursuing something like the first proposed solution in P3911R0 “[basic.contract.eval] Make Contracts Reliably Non-Ignorable” by Darius Neațu, Andrei Alexandrescu, Lucian Radu Teodorescu, and Radu Nichita (though not necessarily with that particular syntax). The idea is to add an option to express in source code that a pre/post/contract_assert must be enforced, which some view as a necessary option in the initial version of contracts. A group of interested persons will work on that design over the winter, including to answer questions like “what syntax?” and “should violations call the violation handler or not?”, and bring a proposal to the March meeting.

Based on the national members’ feedback in the room, these changes appear “likely” to satisfactorily address the most serious national body contracts concerns — fingers crossed, we’ll know for sure in March.

For trivial relocatability, we found a showstopper bug that the group decided could not be fixed in time for C++26, so the strong consensus was to remove this feature from C++26.

For erroneous behavior (EB), we adopted P3684R1 “Fix erroneous behavior termination semantics for C++26” by Timur Doumler and Joshua Berne to tighten and improve its behavior.

  • Recall: EB is a new C++26 concept that means “well-defined to be Just Wrong”… any code whose behavior is changed from undefined behavior (UB) to EB can no longer result in time travel optimizations or security vulnerabilities. The first place we applied EB is that in C++26 reading from an uninitialized local variable is no longer UB, but is EB instead.
  • Before this meeting, C++26 said that if a program performs EB by reading from an uninitialized local variable, then the program may be terminated immediately or at any later time in the program’s execution; the latter part was problematic and a bit too loose.
  • After this meeting, the program can still be terminated immediately, or “soon” thereafter when it actually tries to use the uninitialized value; it cannot just fail ‘sometime arbitrarily later’ anymore. In short, instead of “poisoning” the entire program after an uninitialized local read, C++26 now “poisons” only the specific uninitialized value that was read. This change makes EB much easier to reason about locally and is compatible with what the existing implementations in compilers and sanitizers actually do.

We adopted P1789R3 “Library Support for Expansion Statements” by Alisdair Meredith, Jeremy Rifkin, and Matthias Wippich. It makes integer_sequence support structured bindings, so that the type is easier to use with the “template for” and structured bindings pack expansion language features we added in C++26.

We also adopted P3391R2 “constexpr std::format” by Barry Revzin. This makes it easier to apply string formatting to strings that need to be used at compile time, notably so they can be passed to static_assert which can accept std::string messages in C++26.

And many more tweaks and fixes. Whew!

A new convenor team and secretary

I’ve been serving as convenor (chair) of the C++ committee since 2002 (with a brief hiatus in 2008-09 when P.J. (Bill) Plauger did it for a year, thanks again Bill!). It’s unusual for one person to serve so many terms, and I’ve been telling the committee for over a year now that it’s time to pick someone else when my current term expires on December 31. Nothing else is changing for me: I’ll continue participating actively in WG 21 as “convenor emeritus” and bringing evolution proposals to the committee, I’ll continue serving as chair and CEO of the Standard C++ Foundation and all roles related to that, and I’ll keep speaking and writing… including that I intend to keep writing these post-meeting trip reports. But it’s time for others to be able to step up to take on more of the committee’s organizational and administrative work, and I was glad to see we had three qualified and willing candidates!

ISO has selected Guy Davidson as the next WG 21 convenor, effective January 1. Thank you Guy for making yourself available, and thank you too to John Spicer and Jeff Garland who also volunteered themselves as convenor candidates! Prior to this new role, Guy was co-chairing the SG14 Gaming/Low-Latency subgroup. He also had other prominent C++ community leadership roles outside of WG 21, ranging from founding the #include diversity and inclusion group to being an organizer and/or track chair at multiple C++ conferences including ACCU and CppCon which I understand he will continue to do.

Realizing that being convenor of our large WG is quite a big job, Guy wisely immediately appointed two vice-convenors…

Nina Ranns is now a vice-convenor. Thank you, Nina! Nina has been serving as WG 21 secretary for many years and is chair of our SG22 C/C++ liaison subgroup. She also handles many C++ community responsibilities outside of WG 21, including being vice-chair of the Standard C++ Foundation and a compiler writer working on implementing the C++26 contracts feature in GCC.

Jeff Garland is also now a vice-convenor. Thank you, Jeff! Jeff has been serving as an assistant chair of our LWG Library Wording subgroup since 2019. In the wider C++ community, he is a longtime Boost library developer (date-time), coauthor of the original std::chrono proposal, and one of the original founders of the C++Now (then BoostCon) conference. More recently, he became executive director of the Boost Foundation and was a cofounder (and now a lead) of the new Beman project dedicated to providing future standard library implementations today.

Now that Nina is moving from secretary to vice-convenor, we also need a new WG 21 secretary: Braden Ganetsky has graciously volunteered to take on that role. Thank you, Braden! Already at this meeting, Braden once again took notes for all but one session of the week-long EWG Language Evolution subgroup sessions, which is a huge job that earned him a big round of applause on Saturday. Outside WG 21, he currently works on low latency trading systems (and designs and builds cool 3D puzzles).

Thank you very much again to Guy, Nina, Jeff, Braden, and everyone who volunteers their time and skills to help organize and run the ISO C++ committee!

During the meeting, John Spicer (who has long chaired our whole-committee plenary sessions, and leads Edison Design Group, EDG) also hosted a lovely reception for the whole committee. The reception was to celebrate the occasion of the new leadership changes, but also that John too is stepping back from chairing the U.S. C++ committee and running our WG 21 plenary sessions. John is one of the C++ committee’s longest-serving members since the early 1990s, and his company EDG has been a leading producer of compilers for C++ and other languages. John recently announced that, after a successful and storied career, it’s time for EDG to wind down, and EDG plans to open-source its world-class C++ compiler front-end within the next year.

And so I want to especially call out and congratulate John Spicer and everyone at EDG (including EDG’s retired founder Steve Adamczyk, other key long-time members such as William (Mike) Miller and Daveed Vandevoorde, and everyone else who’s been part of the EDG family over the years) as role models — not only for their high quality compilers but for being high integrity people. Perhaps the best example of their integrity was what happened with the C++98 “export template” feature over two decades ago: In the mid-1990s, John was the most vocal person pointing out technical problems with the feature; the committee decided it did not agree and standardized the feature anyway over his and EDG’s sustained objections; and then instead of sitting back and saying “we told you so,” EDG ended up being the only company in the world who ever implemented the feature! To add a sense of proportion: It took the same team longer to implement just the C++98 export template feature than to implement a compiler front-end for the entire Java language. But John and EDG went and did it, to support the committee’s consensus decision… only to have the committee remove export template again a few years later, because John and EDG had been right about the feature’s problems. That’s a model of solid professional behavior. Thank you again, very much, to John, Steve, Mike, Daveed, and everyone at EDG, past and present.

What’s next

Our next meeting will be in March in Croydon, London, UK hosted by Phil Nash. There is ongoing lively debate about whether this is the “Croydon meeting” or the “London meeting,” and whether Croydon is or is not part of London (despite being inside the M25 loop). I think there’s a good chance this will continue to be the biggest controversy; if so, we’re in excellent shape.

Thank you again to the about 200 experts who attended on-site and on-line at this week’s meeting, and the many hundreds more who participate in standardization through their national bodies! And thank you again to everyone reading this for your interest and support for C++ and its standardization.

Schneier on LLM vulnerabilities, agentic AI, and “trusting trust”

Last month, I was having dinner with a group and someone at the table was excitedly sharing how they were using agentic AI to create and merge PRs for them, with some review but with a lot of trust and automation. I admitted that I could be comfortable with some limited uses for that, such as generating unit tests at scale, but not for bug fixes or other actual changes to production code; I’m a long way away from trusting an AI to act for me that freely. Call me a Luddite, or just a control freak, but I won’t commit non-test code unless I (or some expert I trust) have reviewed it in detail and fully understand it. (Even test code needs some review or safeguards, because it’s still code running in your development environment.)

My knee-jerk reaction against AI-generated PRs and merges puzzled the table, so I cited some of Bruce Schneier’s recent posts to explain why.

This week, after my Tuesday night PDXCPP user group talk, similar AI questions came up again in the Q&A.

Because I keep getting asked about this even though I’m not an AI or security expert, here are links to two of Schneier’s recent posts, because he is an expert and cites other experts… and then finally a link to Ken Thompson’s classic short “trusting trust” paper, for reasons Schneier explains.


Last month, Schneier linked to research on “Indirect Prompt Injection Attacks Against LLM Assistants.” The key observation he added, again, was this:

Prompt injection isn’t just a minor security problem we need to deal with. It’s a fundamental property of current LLM technology. The systems have no ability to separate trusted commands from untrusted data, and there are an infinite number of prompt injection attacks with no way to block them as a class. We need some new fundamental science of LLMs before we can solve this.

My layman’s understanding of the problem is this (actual AI experts, feel free to correct this paraphrase): A key ingredient that makes current LLMs so successful is that they treat all inputs uniformly. It’s fairly well known now that LLMs treat the system prompt and the user prompt the same, so they can’t tell when attackers poison the prompt. But LLMs also don’t distinguish when they inhale the world’s information via their training sets: LLM training treats high-quality papers and conspiracy theories and social media rants and fiction and malicious poisoned input the same, so they can’t tell when attackers try to poison the training data (such as by leaving malicious content around that they know will be scraped; see below).

So treating all input uniformly is LLMs’ superpower… but it also makes it hard to weed out bad or malicious inputs, because to start distinguishing inputs is to bend or break the core “special sauce” that makes current LLMs work so well.


This week, Schneier posted a new article about how AI’s security risks are amplified by agentic AI: “Agentic AI’s OODA Loop Problem.” Quoting a few key parts:

In 2022, Simon Willison identified a new class of attacks against AI systems: “prompt injection.” Prompt injection is possible because an AI mixes untrusted inputs with trusted instructions and then confuses one for the other. Willison’s insight was that this isn’t just a filtering problem; it’s architectural. There is no privilege separation, and there is no separation between the data and control paths. The very mechanism that makes modern AI powerful—treating all inputs uniformly—is what makes it vulnerable.

…  A single poisoned piece of training data can affect millions of downstream applications.

… Attackers can poison a model’s training data and then deploy an exploit years later. Integrity violations are frozen in the model.

… Agents compound the risks. Pretrained OODA loops running in one or a dozen AI agents inherit all of these upstream compromises. Model Context Protocol (MCP) and similar systems that allow AI to use tools create their own vulnerabilities that interact with each other. Each tool has its own OODA loop, which nests, interleaves, and races. Tool descriptions become injection vectors. Models can’t verify tool semantics, only syntax. “Submit SQL query” might mean “exfiltrate database” because an agent can be corrupted in prompts, training data, or tool definitions to do what the attacker wants. The abstraction layer itself can be adversarial.

For example, an attacker might want AI agents to leak all the secret keys that the AI knows to the attacker, who might have a collector running in bulletproof hosting in a poorly regulated jurisdiction. They could plant coded instructions in easily scraped web content, waiting for the next AI training set to include it. Once that happens, they can activate the behavior through the front door: tricking AI agents (think a lowly chatbot or an analytics engine or a coding bot or anything in between) that are increasingly taking their own actions, in an OODA loop, using untrustworthy input from a third-party user. This compromise persists in the conversation history and cached responses, spreading to multiple future interactions and even to other AI agents.

… Prompt injection might be unsolvable in today’s LLMs. … More generally, existing mechanisms to improve models won’t help protect against attack. Fine-tuning preserves backdoors. Reinforcement learning with human feedback adds human preferences without removing model biases. Each training phase compounds prior compromises.

This is Ken Thompson’s “trusting trust” attack all over again.

Thompson’s Turing Award lecture “Reflections on Trusting Trust” is a must-read classic, and super short: just three pages. If you haven’t read it lately, run (don’t walk) and reread it on your next coffee break.

I love AI and LLMs. I use them every day. I look forward to letting an AI generate and commit more code on my behalf, just not quite yet — I’ll wait until the AI wizards deliver new generations of LLMs with improved architectures that let the defenders catch up again in the security arms race. I’m sure they’ll get there, and that’s just what we need to keep making the wonderful AIs we now enjoy also be trustworthy to deploy in more and more ways.

Speaking on October 21 at PDXCPP: Portland OR C++ meetup

In two weeks I’ll be giving a talk at the local C++ meetup here in peaceful, quirky, dog-walking, frisbee-throwing, family-friendly Portland, Oregon, USA.

PDXCPP – Monthly Meetup
October 21, 2025 @ 7:00pm
Location: Siemens EDA in Wilsonville

Which talk will I give? That’s a great question, and there’s a poll about that!

At CppCon last month, I gave two talks that each focused on one C++26 feature (including both its status in C++26 and also its future evolution): one on reflection, and one on contracts. I’ll give an updated version of one of them in person, and the organizers are letting you decide: If you are considering attending, please fill out the poll and let us know which talk you want me to give!

I’m looking forward to seeing many of you in person.

My other CppCon talk video is now available: The Joy of C++26 Contracts (and Some Myth-Conceptions)

I usually only give one new talk a year, but this year I volunteered to give a second new talk at CppCon on a topic I haven’t spoken on before: draft C++26 contracts.

Thank you to all the experts, including the actual implementers and people who are for and against having contracts in C++26, for their time answering questions and providing papers and examples! I’ve done my best to represent the current status as I understand it, including all major positive must-knows and all major outstanding concerns and objections; any remaining errors are mine, not theirs.

I hope you find it useful!

Here is a copy of the talk abstract…


This talk is all about the C++26 contracts feature. It covers the following topics:

  • Why defensive programming is a Good Thing (mainly for functional safety, but occasionally also for memory safety)
  • Brief overview of C++26 contracts, and why they’re way better than C assert (spoiler: writing them on declarations, being able to use them in release builds, and having real language support are just way better than macros)
  • The 3-page “Effective C++ Contracts book” — best practices you need to know to use them (spoiler: keep compound conditions together, don’t write side effects, understand the pros and cons of installing a throwing violation handler… that’s pretty much… it?)
  • Why they’re viable, because they address the key things we need in production (which we’ll list)
  • Why they’re minimal, because we actually need every part in C++26 to use them at scale (which we’ll do by systematically summarizing why each piece is necessary)
  • What the future evolution of contracts holds (spoiler: virtual functions! groups/labels!)
  • A review of Frequently Asked Questions

Yesterday’s talk video posted: Reflection — C++’s decade-defining rocket engine

My CppCon keynote video is now online. Thanks to Bash Films for turning around the keynotes in under 24 hours!

C++ has just reached a true watershed moment: Barely three months ago, at our Sofia meeting, static reflection became part of draft standard C++. This talk is entirely devoted to showing example after example of how it works in C++26 and planned extensions beyond that, and how it will dramatically alter the trajectory of C++ — and possibly also of other languages. Thanks again to Max Sagebaum for coming on stage and explaining automatic differentiation, including his world’s-first C++ (via cppfront) autodiff implementation using reflection!

I hope you enjoy the talk and live demos, and find it useful.

Here is a copy of the talk abstract…


In June 2025, C++ crossed a Rubicon: it handed us the keys to its own machinery. For the first time, C++ can describe itself—and generate more. The first compile-time reflection features in draft C++26 mark the most transformative turning point in our language’s history by giving us the most powerful new engine for expressing efficient abstractions that C++ has ever had, and we’ll need the next decade to discover what this rocket can do.

This session is a high-velocity tour through what reflection enables today in C++26, and what it will enable next. We’ll start with live compiler demos (Godbolt, of course) to show how much the initial C++26 feature set can already do. Then we’ll jump a few years ahead, using Dan Katz’s Clang extensions, Daveed Vandevoorde’s EDG extensions, and my own cppfront reflection implementation to preview future capabilities that could reshape not just C++, but the way we think about programming itself.

We’ll see how reflection can simplify C++’s future evolution by reducing the need for as many bespoke new language features, since many can now be expressed as reusable compile-time libraries—faster to design, easier to test, and portable from day one. We’ll even glimpse how it might solve a problem that has long eluded the entire software industry, in a way that benefits every language.

The point of this talk isn’t to immediately grok any given technique or example. The takeaway is bigger: to leave all of us dizzy from the sheer volume of different examples, asking again and again, “Wait, we can do that now?!”—to fire up our imaginations to discover and develop this enormous new frontier together, and chart the strange new worlds C++ reflection has just opened for us to explore.

Reflection has arrived, more is coming, and the frontier is open. Let’s go.

My C++ on Sea talk video posted: “Three Cool Things in C++26”

Thanks to C++ on Sea for inviting me to speak in June! The talk video is now live, linked below. It was recorded just 48 hours after the Sofia meeting ended, with key updates hot off the press.

Note: Next month at CppCon, I will be going even deeper and broader on reflection.
There, I’ll spend the full 90-minute talk surveying many different ways to use reflection, both what’s in C++26 and what we’ll add soon post-C++26. The point will be, not any one use, but the sheer number of uses, showing just how much gold lies in these new hills. The CppCon talk will include lots of new material I’ve never shown before, with live demos and hopefully (fingers crossed) a special guest on-stage to help showcase some advanced uses coming soon to a C++ compiler near you.

In the meantime, I hope you enjoy this video which includes a big section on that topic… thanks again to the C++ on Sea organizers and all the great attendees who gave me such a warm welcome in Folkestone.

Trip report: June 2025 ISO C++ standards meeting (Sofia, Bulgaria)

A unique milestone: “Whole new language”

Today marks a turning point in C++: A few minutes ago, the C++ committee voted the first seven (7) papers for compile-time reflection into draft C++26 to several sustained rounds of applause in the room. I think Hana “Ms. Constexpr” Dusíková summarized the impact of this feature best a few days ago, in her calm deadpan way… when she was told that the reflection paper was going to make it to the Saturday adoption poll, she gave a little shrug and just quietly said: “Whole new language.”

Mic drop.

Until today, perhaps the most momentous single feature poll of C++’s history was the poll in Toronto in July 2007 to adopt Bjarne Stroustrup’s and Gabriel Dos Reis’ first “constexpr” paper into draft C++11. Looking back now, we can see what a tectonic shift that started for C++.

I’m positive that for many years to come we’ll be looking back at today, the day reflection first was adopted for standard C++, as a pivotal date in the language’s history. Reflection will fundamentally improve the way we write C++ code, expand the expressiveness of the language more than we’ve seen in at least 20 years, and lead to major simplifications in real-world C++ toolchains and environments. Even with the first partial reflection capability we have today, we will already be able to reflect on C++ types and use that information plus plain old std::cout to generate arbitrary additional C++ source code that is based on that information and that we can compile and link into the same program as it’s being built. (In the future we’ll also get token injection to generate C++ source right within the same source file.) But we can generate anything: Arbitrary binary metadata, such as a .WINMD file. Arbitrary code in other languages, such as Python or JS bindings automatically generated to wrap C++ types. All in portable standard C++.

This is a Big Hairy Deal. Look, everyone knows I’m biased toward saying nice things about C++, but I don’t go in for hyperbole and I’ve never said anything like this before. Today is legit unique: Reflection is more transformational than any 10 other major features we’ve ever voted into the standard combined, and it will dominate the next decade (and more) of C++ as we complete the feature with additional capabilities (just as we added to constexpr over time to fill that out) and learn how to use it in our programs and build environments.

We now return you to our normal trip report format…

The meeting

Today the ISO C++ committee completed the feature freeze of C++26, at our meeting in Sofia, Bulgaria. This summer, draft C++26 will be out for its international comment ballot (aka “Committee Draft” or “CD”), and the final fit-and-finish is on track to be done, with C++26 set in stone, two more meetings after that, in March 2026.

This meeting was hosted by Chaos and C++ Alliance. Our hosts arranged for high-quality facilities for our six-day meeting from Monday through Saturday. We had about 200 attendees, about two-thirds in-person and the others remote via Zoom, formally representing nearly 30 nations. At each meeting we regularly have new guest attendees who have never attended before, and this time there were 25 new first-time guest attendees, mostly in-person, in addition to new attendees who are official national body representatives. To all of them, once again welcome!

The committee currently has 23 active subgroups, 13 of which met in 7 parallel tracks throughout the week. Some groups ran all week, and others ran for a few days or a part of a day, depending on their workloads. Unusually, there were no major evening sessions this week as we focused on completing the feature set of C++26. You can find a brief summary of ISO procedures here.

More things adopted for C++26: Core language changes/features

Note: These links are to the most recent public version of each paper. If a paper was tweaked at the meeting before being approved, the link tracks and will automatically find the updated version as soon as it’s uploaded to the public site.

In addition to fixing a list of defect reports, the core language adopted 10 papers, including the following… the majority were about reflection:

Reflection, part 1: P2996R13 “Reflection for C++26” by Wyatt Childers, Peter Dimov, Dan Katz, Barry Revzin, Andrew Sutton, Faisal Vali, and Daveed Vandevoorde. This is the “basic foundation” – it does not include reflecting everything yet, and doesn’t include generation (code injection), but it’s a solid and very usable first step.

Reflection, part 2: P3394R4 “Annotations for reflection” by Wyatt Childers, Dan Katz, Barry Revzin, and Daveed Vandevoorde adds the ability to reflect additional attribute information, which makes reflection much more customizable and flexible. Definitely check out the examples in the paper.

Reflection, part 3: P3293R3 “Splicing a base class subobject” by Peter Dimov, Dan Katz, Barry Revzin, and Daveed Vandevoorde adds better support for treating base class subobjects uniformly with member subobjects, again making reflection more usable.

Reflection, part 4: P3491R3 “define_static_{string,object,array}” by Wyatt Childers, Peter Dimov, Barry Revzin, and Daveed Vandevoorde adds functions that were split off from the main reflection paper P2996, which make it easier to convert reflected data to run-time data.

Reflection, part 5 (notice a theme yet?): P1306R5 “Expansion statements” by Dan Katz, Andrew Sutton, Sam Goodrick, Daveed Vandevoorde, and Barry Revzin adds “template for” to make it easy to loop over reflection data at compile time.

Reflection, part 6 (but wait there’s more): P3096R12 “Function parameter reflection in reflection for C++26”[sic] by Adam Lach, Dan Katz, and Walter Genovese adds support for, you guessed it, reflecting function parameters.

We also added a couple of other things, including that virtual inheritance is now allowed in constexpr compile-time code, and removed undefined behavior from the preprocessor as part of the current wave-in-progress of attacking and resolving undefined behavior in C++.

Interlude: A strong recommendation

You’ll notice that this time I didn’t cut-and-paste a few illustrative code examples for each paper. That’s because I strongly recommend you make time to read all the motivating code examples in all the above-linked reflection papers, to get a sense of just how game-changing this feature is, even in its current very-initial state. And those examples are just scratching the surface of what even this first step toward general reflection makes possible.

Thank you, very much, to everyone who worked so hard to bring reflection into the standard!

More things adopted for C++26: Standard library changes/features

Not to be outdone by the core language, in addition to fixing a list of defect reports, the standard library adopted a whopping 34 papers, including the following…

P3179R9 “C++ parallel range algorithms” by Ruslan Arutyunyan, Alexey Kukanov, and Bryce Adelstein Lelbach adds what it says on the tin: parallel algorithms for the C++ Ranges library.

P2830R10 “Standardized constexpr type ordering” by Nate Nichols and Gašper Ažman makes it possible for portable C++ code to sort types at compile time.

P3149R11 “async_scope – Creating scopes for non-sequential concurrency” by Ian Petersen, Jessica Wong, and a long list of additional contributors is about enabling RAII styles to work in code that isn’t sequential and stack-based, which makes resource handling much more convenient and robust even in a heavily async world using sender/receiver, C++26’s new async model.

P2079R10 “Parallel scheduler” by Lucian Radu Teodorescu, Ruslan Arutyunyan, Lee Howes, and Michael Voss provides a standard async execution context that portably guarantees forward progress, aka an interface for thread pools.

Reflection, part 7 (you didn’t think we were done yet, did you?): P3560R2 “Error handling in reflection” by Peter Dimov and Barry Revzin enables compile-time exception handling as the error handling model for reflection code.

P3552R3 “Add a coroutine task type” by Dietmar Kühl and Maikel Nadolski provides a task type to integrate coroutines with sender/receiver, C++26’s new async model.

And much more, including constexpr shared_ptr, a bunch of std::simd extensions including enabling it to be used with ranges, and lots of other nuggets and goodies. Whew!

What’s next

Thank you to all the experts who worked all week in all the subgroups to achieve so much this week!

Our next meeting will be this November in Kona, HI, USA hosted by Standard C++ Foundation.

Thank you again to the about 200 experts who attended on-site and on-line at this week’s meeting, and the many more who participate in standardization through their national bodies!

I’ll repeat what I said last time: Don’t assume C++“26” is very far away, because it sure isn’t… the C++26 feature freeze is past, and even before that compilers had already been aggressively implementing C++26, with GCC and Clang having already implemented about two-thirds of the C++26 language features adopted so far! C++ is a living language and moving fast. Thank you again to everyone reading this for your interest and support for C++ and its standardization.