
ENLARGING THE BOUNDS OF HUMAN EMPIRE

Taming Our Unaccountable Societal-Scale Machines: How Impossible Is It Going to Be?

"Management Cybernetics" may be the name of the only thing that can save us from our own systems for anthology-intelligence societal-scale coöperation among us East African Plains Apes. But does...

BRAD DELONG
MAR 23, 2025

"Management Cybernetics" may be the name of the only thing that can save us from our own systems for anthology-intelligence societal-scale coöperation among us East African Plains Apes. But can that discipline exist? It certainly does not really exist now...


Here is one of the many, many eggs I have been juggling—under variable gravity—that I have dropped on the floor in the past six months. It is my promise to myself to revise and expand the already 5,000-word review I wrote on my last birthday of Dan Davies's superb little book The Unaccountability Machine <https://www.blackwells.co.uk/bookshop/product/The-Unaccountability-Machine-by-Dan-Davies/9781788169547>:

A Return of "Management Cybernetics" as a Way Forward Out of Economics-Based Neoliberalism?
BRAD DELONG · JUNE 24, 2024

A taste of what I wrote then:

We have built a world of vast, interlocking systems that no one can fully understand. From corporate behemoths to government bureaucracies, these leviathan-like societal machines with human beings as their parts make decisions that shape our lives—often with disastrous consequences.

Can there be a way to tame these monsters of our own creation, to give them human faces?

Dan Davies thinks the forgotten discipline of "management cybernetics" might provide a way. That is the crux of his brand-new The Unaccountability Machine… [which provides] a much better road towards understanding our current societal-organizational environment than others currently being put forward….

Remember Henry Farrell’s setting the stage….

Human[s]… created a complex world of vast interlocking social and technological mechanisms… impervious to… understanding…. Our first instinct is to populate this incomprehensible new world with gods and demons…". And then [having done so], Henry says, technoutopianbros or technodystopianbros divide. The utopian… "AI rationalists… [suppose] the purportedly superhuman entities they wish to understand and manipulate are intelligent, reasoning beings… [with] comprehensible goals… [that] might… be trammel[led] through… subtle grammaries of game theory and Bayesian reasoning…. You [may] call spirits from the vasty deep… [and] entrap them in equilibria where they must do what you demand they do…

By contrast, the dystopians believe we should… simply welcome our AI-overlords:

We are confronted by a world of rapid change that will not only defy our efforts to understand it…. We might as well embrace this…. The stars are right, and dark gods are walking backwards from the forthcoming Singularity to remake the past in their image. In one of [Nick] Land's best-known and weirdest quotes: "Machinic desire… rips up political cultures, deletes traditions, dissolves subjectivities, and hacks through security apparatuses, tracking a soulless tropism to zero control…. The history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself…. Digitocommodification is the index of a cyberpositively escalating technovirus, of the planetary technocapital singularity: a self-organizing insidious traumatism, virtually guiding the entire biological desiring-complex towards post-carbon replicator usurpation…" And this "technocapital singularity" is to be celebrated!…

In my view, of course, Henry Farrell is completely right to judge all this as simply crazy:

For one thing, none of these processes have human intentionality.

We are primed to attribute human intentionality to them for lots of reasons.

But actually believing that they do have human intentionality will lead us far astray.

If these are our other potential guides—and they are—Dan Davies is vastly to be preferred. In the final analysis, therefore, The Unaccountability Machine is our best guide, at least to thinking about how to take action.


But not only have I failed to deliver, I now have to digest as well:

Programmable Mutter
Large AI models are cultural and social technologies
I've tried to use this newsletter to highlight ideas from various people who are thinking about AI without getting stuck on AGI. These include Alison Gopnik's argument that Large Language Models are best understood as "cultural technologies," Cosma Shalizi's…
Henry Farrell

My précis of what they say:

- Large AI models should be viewed as cultural and social technologies—definitely not intelligent agents…

- They are roughly of the same ilk as past information, communication, and coördination systems like pictures, writing, arithmetic, records, print, video, internet search, markets, bureaucracies, democracies, and ideologies…

- Thus we need to ignore fears of AGI and focus on the immediate societal effects of the coming of these cultural and social technologies…

- Their ability to do very high-dimensional, very big-data regression and classification with an attached natural-language front end makes them powerful tools to generate lossy summaries of massive human-generated data sets—thus transforming very complex assemblies of information into usable forms. (A toy sketch of this lossy-summarization idea follows the list.)

- This is analogous to the market system's summarization of a huge amount of information about production and demand into a single number: a price.

- This is analogous to a democracy's summarization of a huge amount of information about human collective-action goals and constraints into a single decision: a passed parliamentary motion…

- This is analogous to a bureaucracy's summarization of a huge amount of information about past procedures, successes, and failures into a single gate-keeping decision as to whether, for a particular action, established prerequisites have been satisfied…

- This is analogous to an ideology's summarization of a huge amount of information about the world and humanity's place in it into a single simplified picture of leaders, goals, friends, and enemies…

- Users engaging with these systems are not consulting another mind—an intelligent oracle—any more than the market system that directed the boule and demos of the Athenai in the year -456 to source the ten tons of tin for Phidias's statue of Athene Promakhis from Cornwall was an intelligent being...

- Nevertheless, the effects of the introduction of these MAMLMs on society may well rhyme with those of previous transformative cultural-social organizational technologies like movable-type print in the Enlightenment…

- The consequences will be the spread of misinformation, bias, and cultural homogenization in some spheres alongside new forms of cultural diversity and immense creativity in others...

- Their use will uncover non-obvious patterns across human knowledge, generating new avenues for scientific exploration and engineering progress...

- They will alter economic power dynamics, as the tension between information producers and distributors intensifies…

- The narrative of "AI" as superintelligent agents blocks coherent thought about the immediate and real social and economic opportunities and challenges…
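A hedged aside on the one genuinely technical claim in that précis, the lossy-summaries point. The sketch below is my own toy illustration, not anything from Farrell, Gopnik, Shalizi, and Evans: the five-sentence corpus, the two-component compression, and the query are all invented, and scikit-learn's TF-IDF-plus-truncated-SVD pipeline stands in, at trivial scale, for what a large model does with billions of parameters.

# Toy, hypothetical illustration of "lossy summarization" of human-generated
# text; the corpus, the two-component compression, and the query are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "markets summarize supply and demand into a single number, a price",
    "bureaucracies decide whether established prerequisites have been satisfied",
    "democracies compress collective goals into a single passed motion",
    "ideologies paint a simplified picture of leaders, friends, and enemies",
    "large models compress human writing into reusable statistical patterns",
]

# Step 1: a high-dimensional representation, one weight per word per document.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus)      # shape: (5 documents, ~35 word features)

# Step 2: lossy compression, keeping only two latent dimensions per document.
svd = TruncatedSVD(n_components=2, random_state=0)
Z = svd.fit_transform(X)             # shape: (5, 2) -- most detail is discarded

# The two numbers kept per document still support a crude, usable query:
# which existing documents does a new piece of text most resemble?
query = tfidf.transform(["what single number sums up supply and demand?"])
scores = cosine_similarity(svd.transform(query), Z)[0]
for doc, score in sorted(zip(corpus, scores), key=lambda pair: -pair[1]):
    print(f"{score:+.2f}  {doc}")

The point of the toy is the loss: almost everything about the original sentences is thrown away, yet the two retained numbers per document are still enough to answer a crude which-of-these-is-most-like-that question, just as a price discards nearly all the detail about production and demand while remaining usable.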

What do I think about this? I do not know yet. I do know that I would like to visit an alternative timeline in which Herbert Simon's The Sciences of the Artificial had become a multidiscipline- and multidepartment-sparking book.

References:

Davies, Dan. 2024. The Unaccountability Machine: Why Big Systems Make Terrible Decisions and How the World Lost Its Mind. London: Profile Books. <https://www.blackwells.co.uk/bookshop/product/The-Unaccountability-Machine-by-Dan-Davies/9781788169547>.

DeLong, J. Bradford. 2024. "A Return of 'Management Cybernetics' as a Way Forward Out of Economics-Based Neoliberalism?" Grasping Reality. June 24. <https://braddelong.substack.com/p/a-return-of-management-cybernetics>.

Farrell, Henry. 2024. "Vico's Singularity". Programmable Mutter. May 1. <https://www.programmablemutter.com/p/vicos-singularity>.

Farrell, Henry, Alison Gopnik, Cosma Shalizi, and James Evans. 2025. "Large AI Models Are Cultural and Social Technologies." Science 387: 6739 (March 13), pp. 1153–6. <https://www.science.org/doi/10.1126/science.adt9819>.

Farrell, Henry. 2024. "Cybernetics Is the Science of the Polycrisis". Programmable Mutter. April 17. <https://www.programmablemutter.com/p/cybernetics-is-the-science-of-the>.

Simon, Herbert. 1996. The Sciences of the Artificial. 3rd ed. Cambridge, MA: MIT Press. <https://archive.org/details/sciencesofartifi0000simo>.

Programmable Mutter
Cybernetics is the science of the polycrisis
One of the most interesting 'might have been' moments in intellectual history happened in the early 1970s, when Brian Eno traipsed to a dingy cottage in Wales to pay homage to Stafford Beer. Eno had written a fan letter to Beer after reading his book on management cybernetics…
Henry Farrell

Programmable Mutter
Vico's Singularity
Vernor Vinge died some weeks ago. He wrote two very good science fiction novels, A Fire Upon the Deep and A Deepness in the Sky (the other books vary imo from 'just fine' to 'yikes!'), but he will most likely be remembered for his arguments about the Singularity…
Henry Farrell


If reading this gets you Value Above Replacement, then become a free subscriber to this newsletter. And forward it! And if your VAR from this newsletter is in the three digits or more each year, please become a paid subscriber! I am trying to make you readers—and myself—smarter. Please tell me if I succeed, or how I fail…

Discussion about this post


Sarora Mar 23

The premise that "government bureaucracies" are "behemoths" is shaky. The US federal government is a case in point. Non-postal federal government employment has been roughly 2 to 2.4 million since circa 1970. Meanwhile, it has served an economy that has grown about fivefold over that time. That federal employment, the so-called "behemoth," is now about 1.5% of total non-farm employment. If you do not understand what the government does and how it does it, presumably because it is too large for you to understand, why not discuss it department by department? What is this overarching discussion about, especially when its premise is probably wrong?

Jay L Gischer Mar 24

My go-to example of AI at the present moment is The Algorithm, the thing that chooses what you are going to see on Facebook, Twitter, YouTube, and I'm sure other social media.

This AI is presented, and probably was also originally conceived, to serve customers better, to help them find what they are looking for. Given that I worked at Google very early on, I know that's what many people there thought they were looking for: something to improve search further.

But something changed. The Algorithm serves those who make it and pay for it. It is evaluated not on how satisfied the customers are, but on how engaged they are. How much time do they spend on the channel?

This is serving the interests of the master, which is a normal and expected development at one level. But it makes these intelligent systems into systems that are not the servants of the people they engage with, but methods of seduction and control over those people, in the service of those who paid for them to be made.

(To be slightly fair, satisfaction is much harder to measure than engagement. But it could be measured.)

These AIs are not "out of control." They are not an example of "machines taking over." They are an example of "doing what the oligarchs want" and "oligarchs taking over." I'm afraid that also applies to my beloved Google at this point. Though I think they are a lot less toxic than FB or X.

