
Artificial Intelligence and Architecture: From Research to Practice

Stanislas Chaillou

Birkhäuser, Basel

Stanislas Chaillou

Acquisitions Editor: David Marold, Birkhäuser Verlag, A-Vienna


Content & Production Editor: Bettina R. Algieri, Birkhäuser Verlag,
A-Vienna

Proofreading: Alun Brown


Layout: Stanislas Chaillou
Cover Design: Floyd Schulze
Image editing: Stanislas Chaillou
Printing and binding: Beltz, D-Bad Langensalza
Paper: Condat matt Périgord 135 g/m²
Typeface: Crimson Text, Neue Haas Grotesk

Library of Congress Control Number: 2021937064

Bibliographic information published by the German National Library


The German National Library lists this publication in the Deutsche
Nationalbibliografie; detailed bibliographic data are available on the
Internet at http://dnb.dnb.de.

This work is subject to copyright. All rights are reserved, whether the
whole or part of the material is concerned, specifically the rights of
translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in other ways, and storage in
databases. For any kind of use, permission of the copyright owner
must be obtained.

ISBN 978-3-0356-2400-7
e-ISBN (PDF) 978-3-0356-2404-5

© 2022 Birkhäuser Verlag GmbH, Basel


P. O. Box 44, 4009 Basel, Switzerland
Part of Walter de Gruyter GmbH, Berlin/Boston

9 8 7 6 5 4 3 2 1

www.birkhauser.com

Acknowledgments

Before anything else, I would like to take the opportunity to thank the people who made this book possible: our contributors, for their time and effort – Foster & Partners' ARD Group, the City Intelligence Lab, Kyle Steinfeld, Andrew Witt, Alexandra Carlson & Matias del Campo, Caitlin Mueller & Renaud Danhaive, Immanuel Koh, and Carl Christensen – and David Marold from Birkhäuser, for his judicious advice from the very beginning. Last but not least, I would like to dedicate this book to Reinier.

About the Author


Stanislas Chaillou is a Paris-based architect and data scientist. He is the co-founder of a software company building cloud-based solutions for the AEC industry. Stanislas received his Bachelor of Science in Architecture from the Swiss Federal Institute of Technology of Lausanne (EPFL, 2015) and his Master's degree in Architecture from Harvard University (GSD, 2019). Since 2018, his work has focused on the theoretical and experimental aspects of Artificial Intelligence in Architecture. Stanislas was the curator of the exhibition "Artificial Intelligence & Architecture", organized at the Arsenal Pavilion in Paris in 2020. He is also the author of a book entitled "L'Intelligence Artificielle au service de l'Architecture", published in 2021 by Le Moniteur Editions.

Table of Contents
8 Foreword

12 Artificial Intelligence, Another Field
    16 The Post-War Period
    20 Expert Systems & AI Winters
    24 The Deep Learning Revolution

32 The Advent of Architectural AI
    36 Modularity
    42 Computer-Aided Design
    48 Parametricism
    56 Artificial Intelligence

62 AI's Deployment in Architecture
    64 Artificial Intelligence 101
    82 Urban Scale
    86 Floor Plans
    90 Facades
    94 Perspectives
    98 Structures
    102 Predictive Simulations

106 The Outlooks of AI in Architecture
    108 The Contribution
    110 The Form
    118 The Context
    126 The Performance
    134 The Adoption
    136 The Practice
    146 The Model
    162 The Scale
    170 The Prospects
    172 The Style
    180 The Ecology
    188 The Language

198 Closing Remarks
203 Image Credits
205 Contributors' Biographies
206 Index

Foreword

The presence of Artificial Intelligence (AI) in Architecture may still be in its early days. While current research makes a strong case for its potential adoption, it also reaffirms the importance of the discussion surrounding its inception and its necessary adaptation to support the architectural agenda.

From its immediate technical benefits to its longer-term cultural implications, AI's dialogue with Architecture is unfolding today at multiple levels. To grasp the full magnitude of this technological shift, this book considers three complementary angles, together offering a pedagogical overview. By exploring the historical, experimental, and theoretical facets of AI's beginnings in Architecture, it provides its readers with the opportunity to contemplate both the tangible and the speculative nature of this encounter. Starting from a historical perspective, the first chapters place AI back into the past century's conversation between Technology and Architecture. Recent results of AI research then follow and ground the reflection in experimental, yet tangible, applications to Architecture. Finally, this book gives the stage to theorists, researchers and entrepreneurs working today at the forefront of this revolution. From Harvard to MIT, or companies like Spacemaker and Foster & Partners, this book's final segment presents a wide outlook of current theories and discourses surrounding AI's presence in the field.

More importantly, we hope to help bridge the gap that still exists between the state of AI research and architectural practice. As AI's gradual dissemination into Architecture's tools and methods is an ongoing reality, this book aims to clarify the terms and definitions, while providing the explanations needed to explore this fascinating topic. To that end, we hope to lay down the groundwork for a meaningful exchange between both disciplines, and to demystify what too often appears as a blurry technological maze. Finally, our hope is to unveil the diversity and excitement present in the current landscape. The intersection of AI and Architecture can be the source of a new momentum for the discipline, provided our collective work helps frame and develop this technology so as to truly serve architects.

The content of this book echoes the exhibition "Artificial Intelligence & Architecture" that took place in Paris in 2020. Curated by Stanislas Chaillou, and produced by the Arsenal Pavilion, this show originally presented an early overview of AI's application to Architecture. The exhibit is today available online as a virtual tour, accessible by scanning the QR code.

Moreover, this book makes extensive use of digital content, accessible through a system of QR codes that can be scanned at the end of each chapter. These various references, in the form of books, articles, videos, and others, offer readers the opportunity to continue exploring this fascinating topic beyond the sole content of this book.

QR Code for the "AI & Architecture" virtual exhibit at the Arsenal Pavilion (image on the opposite page).
Artificial Intelligence, Another Field
"Stories about the creation of machines having human qualities have long been a fascinating province in the realm of science fiction; yet we are about to witness the birth of such a machine – a machine capable of perceiving, recognizing and identifying its surroundings without any human training or control".
These words¹ in 1958 by the American psychologist Frank Rosenblatt are a telling testimony to the radical optimism of AI's early pioneers. However, nearly 70 years later, Rosenblatt's vision is still under development across the world. In hindsight, such assertions are a striking reminder that the history of computer science is far from being a linear journey. From the early days of AI in the 1940s-50s up until the deep learning revolution, this technology is the result of a slow sedimentation of scientific hypotheses and technological breakthroughs. Far from the siloed research of the 1950s, AI nowadays engages with countless other fields. Architecture is no exception. This is why framing its potential contributions to the discipline requires first an understanding – however rudimentary – of its early developments, of the challenges it faced along the way, and a short reminder of its concomitant adoption in other industries.

1. F. Rosenblatt, "The Design of an Intelligent Automaton", ONR Research Reviews, 1958.


Fig. 1: AI's Historical Timeline, from the post-war period up until the Deep Learning revolution.

- Post-War Period (1950s): Artificial Neuron, W. McCulloch & W. Pitts ('43); Transistor, Bell Labs ('47); Dartmouth Workshop ('56); Perceptron, F. Rosenblatt ('57); ELIZA, J. Weizenbaum ('66)
- First AI Winter (1970s): "Perceptrons", M. Minsky & S. Papert ('69); Lighthill Report, J. Lighthill ('73); R1 Program, J. McDermott ('78)
- Expert Systems (1980s): SID Program, DEC ('82); CYC Program, D. Lenat ('84); "Some Expert Systems Need Common Sense", J. McCarthy ('84)
- Second AI Winter (1990s): DeepBlue vs Kasparov ('97)
- Deep Learning Revolution (2010s): Stanley at DARPA Challenge ('05); AlexNet, A. Krizhevsky ('12); Generative Adversarial Net, I. Goodfellow ('14); AlphaGo vs Sedol ('16)

The Post-War Period

The 1940s are considered the crossroads of multiple significant breakthroughs, together providing the building blocks of our contemporary definition of AI. In 1943, American scientists Warren McCulloch and Walter Pitts first laid down an initial mathematical formulation of the biological neuron². Although theoretical, this model provided the scientific community of the time with an early definition of the "artificial network". In a nutshell, their model described the computation performed by a neuron to process a flow of information. This achievement would soon be paired with another experiment stemming from Bell Labs, a research institution run by the American telecom company AT&T. At this lab, in 1947, John Bardeen, Walter Houser Brattain and William Bradford Shockley together came up with a new type of semiconductor device: the transistor³ (Fig. 2). In brief, this device could modulate an electric signal by dimming or amplifying it. This new hardware generation soon enabled theoretical models like McCulloch and Pitts' to be materialized by actual functioning prototypes. A few years later, in 1957, the American psychologist Frank Rosenblatt best harvested this potential by successfully running a groundbreaking experiment at the Cornell Aeronautical Laboratory using custom-built hardware: the Perceptron (Fig. 3).

Designed to classify images, the Perceptron was built upon previous theoretical work and offered a functioning prototype of a "learning" machine. "Learning" here refers to the ability of the Perceptron to self-tune its settings when exposed to arrays of images, a process also referred to as "training". Through this trial-and-error procedure, the network would adjust its values to improve its ability to accurately predict each image's category. The following year, the New York Times covered Rosenblatt's experiment, describing it as a "New Navy Device [that] Learns By Doing"⁴. The Perceptron's specificity lay precisely in this ability to perform a self-corrective feedback loop. This process would set it apart from previous algorithmic theories while opening new research avenues for the forthcoming decades.

2. W. McCulloch & W. Pitts, "A logical calculus of the ideas immanent in nervous activity", Bulletin of Mathematical Biophysics 5, pp 115–133, 1943.
3. Bell Labs' website, "The 1956 Nobel Prize in Physics", https://www.bell-labs.com/about/awards/1956-nobel-prize-physics
4. "New Navy Device [that] Learns By Doing", New York Times, July 8th 1958.

Fig. 2: J. Bardeen, W. Shockley, W. Brattain and the transistor at Bell Labs in 1948.
Fig. 3: Frank Rosenblatt and the Mark I Perceptron.
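The trial-and-error adjustment described above can be sketched in a few lines of code. This is a minimal illustration of the classic perceptron learning rule, not Rosenblatt's hardware implementation; the toy dataset (a logical AND) and the learning rate are assumptions chosen for brevity.

```python
# Minimal sketch of the perceptron learning rule: weights are nudged after
# each misclassified example (trial and error), gradually improving the
# prediction of each input's category.

def predict(weights, bias, inputs):
    """Fire (1) if the weighted sum of the inputs crosses the threshold."""
    activation = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= 0 else 0

def train(samples, labels, epochs=10, lr=0.1):
    """Adjust weights on every error: the 'self-corrective feedback loop'."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in zip(samples, labels):
            error = label - predict(weights, bias, inputs)
            bias += lr * error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return weights, bias

# A linearly separable toy task: logical AND of two binary inputs.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
weights, bias = train(samples, labels)
print([predict(weights, bias, s) for s in samples])  # matches labels: [0, 0, 0, 1]
```

Because the task is linearly separable, the loop is guaranteed to converge; on more complex problems a single perceptron fails, which is precisely the limitation Minsky and Papert would later point out.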

Yet another foundational moment in AI's history took place during the same decade. In 1956, researchers gathered for the Dartmouth Summer Research Project – held at the eponymous university – formulated an initial definition of AI, and set the roadmap for future developments in the field. Among others, Marvin Minsky, John McCarthy, Ray Solomonoff, and Oliver Selfridge took part in the workshop. Their team put forth both the term "Artificial Intelligence"⁵ and its meaning: the use of the human brain as a model for machine logic. To them, emulating the human brain's mode of acquisition, its structure and its functioning principles would represent an alternate way of defining algorithmic logics.

5. J. McCarthy, M. L. Minsky, N. Rochester, C. E. Shannon, "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence", AI Magazine, August 31st 1955.

In the footsteps of these early experiments, applications started spreading across various domains. Natural language processing (NLP) probably offers one of the most interesting developments of the period. With ELIZA (1966), a project developed by the German-American scientist Joseph Weizenbaum, a computer was able to simulate an exchange with a person through a chat-based program⁶. Weizenbaum attempted to formalize some of the underlying patterns of casual conversations, which would then be used by ELIZA in the context of a textual exchange with the user. At the other end of the spectrum, robotics engineering saw in AI the possibility to offer given systems a partial autonomy. With applications in manufacturing as early as the 1950s, AI yielded large-scale results early on. Unimate (1961), a project developed by the American engineers George Devol and Joseph Engelberger for General Motors' assembly lines, perhaps best embodied this momentum: their robotic arm could perform tasks like transporting manufactured parts and welding.

6. J. Weizenbaum, "ELIZA, A Computer Program For the Study of Natural Language Communication Between Man and Machine", Communications of the ACM, Volume 9, Issue 1, pp 36–45, 1966.
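Weizenbaum's pattern-based approach can be evoked with a toy sketch. The rules below are illustrative inventions, not ELIZA's actual script: each pattern captures a fragment of the user's sentence and reuses it in the reply.

```python
# Minimal ELIZA-style sketch: a few hand-written patterns are matched
# against the user's sentence, and the reply echoes the captured fragment.
import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(sentence):
    text = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback keeps the conversation moving

print(respond("I am afraid of computers"))  # How long have you been afraid of computers?
print(respond("Hello there"))               # Please go on.
```

The illusion of understanding rests entirely on such surface patterns, a limitation Weizenbaum himself was quick to underline.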

ELIZA and Unimate are iconic examples of the optimism of the period; experiments would gradually spread beyond the realm of research institutions and would be applied to real-world problems. And as AI began to provide tangible results across the board, the scientific community's confidence was bolstered all the more. The American cognitive psychologist Herbert Simon perhaps best captured the period's zeitgeist: "The simplest way I can summarize is to say that there are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until […] the range of problems they can handle will be coextensive with the range to which the human mind has been applied"⁷. However, Simon's predictions would face a very different reality, as AI research soon reached a long-lasting plateau, putting a halt to the seemingly positive outlook of the 1960s.

7. H. Simon, "Heuristic Problem Solving: The Next Advance in Operations Research", Operations Research 6(1), pp 1-10, 1958.


Expert Systems & AI Winters

Throughout the 1960s-70s, and later in the 1990s, the field would undergo two acute periods of self-doubt, today known as the "AI winters". In both instances, the general mindset in the private sector and among research institutions would sharply contrast with the enthusiasm of the early days.

The first AI winter took place in the aftermath of Rosenblatt's experiments. Among many factors, two specific publications would be symptomatic of the period's growing skepticism. The first one was a book entitled "Perceptrons" (1969), authored by Marvin Minsky and Seymour Papert⁸. The two scientists laid down a critical view of Rosenblatt's Perceptron and derived research. To them, the Perceptron was limited to simple use cases, and could not address more complex problems. The second publication was the Lighthill Report (1973), directed by the British mathematician James Lighthill⁹. The report, initially called "Artificial Intelligence: a General Survey", assessed AI's results across the field. In this excerpt from the report, Lighthill established a rather pessimistic diagnosis: "Most workers in AI research and in related fields confess to a pronounced feeling of disappointment in what has been achieved in the past twenty-five years. Workers entered the field around 1950, and even around 1960, with high hopes that are very far from having been realized in 1972. In no part of the field have the discoveries made so far produced the major impact that was then promised"¹⁰. For Lighthill and his team, AI's seemingly negligible impact should call the entire discipline into question. The influence of these two publications was quite significant at the time: both public funding and private investments in R&D programs were momentarily frozen or reassigned to other scientific domains. AI would have to wait a short while before seeing confidence and funding come its way once again.

8. M. Minsky & S. Papert, "Perceptrons: An Introduction to Computational Geometry", MIT Press, 1969.
9. J. Lighthill, "Artificial Intelligence: a General Survey", Artificial Intelligence: a paper symposium, Science Research Council, 1973.
10. J. Lighthill, "Artificial Intelligence: a General Survey", Artificial Intelligence: a paper symposium, Science Research Council, Part 1, p. 8, 1973.

The 1980s would correspond to a revival. The advent of expert systems, a new generation of AI models fueled by the increasing availability of computing power, prompted this resurgence of confidence. As an immediate consequence, funding soared and flowed back into the field, giving it a sudden second chance. Expert systems were the signature of this period; these models allowed machines to reason based on a set of rules and collections of facts. In other words, from a given knowledge base, an expert system could infer the truth of new statements. The reliability of these models, when applied to specific domains, is what would explain their success throughout the 1980s.
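The inference step described above – deriving new statements from a base of rules and facts – can be sketched as a simple forward-chaining loop. The medical-flavored rules and facts below are purely illustrative assumptions, not drawn from MYCIN or any real system.

```python
# Minimal forward-chaining sketch of an expert system: rules of the form
# "if all premises are known facts, then conclude X" are fired repeatedly
# until no new statement can be derived.

def infer(facts, rules):
    """Return the full set of facts derivable from the knowledge base."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True  # a new fact may enable further rules
    return known

rules = [
    ({"gram-positive", "coccus", "chains"}, "streptococcus"),
    ({"streptococcus"}, "treat-with-penicillin"),
]
facts = {"gram-positive", "coccus", "chains"}
print(infer(facts, rules))  # derives "streptococcus", then "treat-with-penicillin"
```

Note that nothing is "learned" here: all knowledge is hand-encoded in the rules, which is exactly the design choice that made expert systems reliable in narrow domains and brittle outside them.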
The MYCIN project (1972), at Stanford University, stands as an essential milestone in the early days of expert systems. Meant to be used in medicine to identify infection-inducing bacteria, this AI model would reason on a knowledge base of roughly 600 rules (Fig. 4)¹¹. While MYCIN was in fact never used for actual cases, it remains a striking demonstration of the relevance of expert systems at that time. John P. McDermott's R1 program (1978) – also called XCON – is another example, yet this time actually applied to a real-world problem. McDermott's program was deployed in 1980 to assist DEC, an American computer manufacturer, in automating the ordering of computer components based on customer requirements. Given the specialized nature of the task, this rule-based model proved extremely successful at improving the general reliability of industrial processes. However, one of the most iconic expert systems remains the "Cyc" project, developed from 1984 onwards by the American AI researcher Douglas Lenat¹². With Cyc, Lenat wanted to model common-sense knowledge: concepts and rules about how the world works. It is to this day one of the most significant examples of the kind of experiment that took shape during this period. The project is in fact still under development today at Cycorp.

11. B. G. Buchanan & E. H. Shortliffe, "Rule-based expert systems: the MYCIN experiments of the Stanford Heuristic Programming Project", Addison-Wesley, 1984.
12. Matuszek et al., "An Introduction to the Syntax and Content of Cyc", AAAI Spring Symposium, 2006.

Fig. 4: Cover of MYCIN expert system's guide book, 1972.

By the end of the 1980s, however, expert systems reached a plateau, due to certain obvious limitations, prompting the beginning of a second AI winter. John McCarthy perhaps best formulated its causes in his article "Some Expert Systems Need Common Sense"¹³. In this publication, McCarthy reflected on expert systems' "difficulty to extend beyond the scope originally contemplated by their designers, [and inability to] recognize their own limitations"¹³. At the same time, Jacob T. Schwartz, then Director of DARPA ISTO – the Information Science & Technology branch of the Defense Advanced Research Projects Agency – came to the same realization, and decided to significantly reduce the funding dedicated to the field. General skepticism and a lack of investment would plague AI research for the decade to come, plunging the entire discipline into a new period of self-doubt.

13. J. McCarthy, "Some Expert Systems Need Common Sense", Annals of the New York Academy of Sciences, Volume 426, pp 129-137, 1984.


The Deep Learning Revolution

In the 1990s and 2000s, AI research would gradually pivot to embrace machine learning-based methods. Since expert systems had set aside the principle of "learning", their limitations gave rise to explorations in new directions: neural networks, Bayesian networks, evolutionary algorithms, etc. All these methods build upon the concept of a gradual acquisition of knowledge, through a trial-and-error learning process. In a seemingly quiet research landscape, investigations into these models would spread. A few events finally shook up the scientific community and revived this stalling field once again. In 1997, Deep Blue, an AI computer conceived at IBM Research, eventually beat Garry Kasparov, then chess world champion. This was an initial wake-up call for the entire community and beyond¹⁴. From the abstract world of chess to a real-life application, AI would soon benefit from another striking demonstration in 2005, at the DARPA Grand Challenge. This car race was won by Stanley, an autonomous car created by Stanford University and the Volkswagen Electronics Research Lab¹⁵. Through a feedback loop between sensors mounted on the car and a machine learning model, the vehicle was able to complete the race while securing first place.

14. M. Newborn, "Deep Blue: An Artificial Intelligence Milestone", Springer, 2002.
15. Stanford Artificial Intelligence Laboratory, "Stanley: The Robot that Won the DARPA Grand Challenge", The 2005 DARPA Grand Challenge, pp 1-43, Springer, 2006.


These two events put AI research once again under the spotlight: funding was back. This time, however, the revival was concurrent with a few other realities. First, with the rapid development of the Internet, data collection and curation had significantly improved. Large databases were being aggregated and curated, giving AI research a much broader variety and quantity of information to process. Then, GPUs (Graphics Processing Units) had started to become more accessible: this piece of hardware, used by computers to process images, was diverted from its initial purpose to train AI models. By parallelizing operations – i.e. computing operations in parallel rather than sequentially – GPUs could dramatically speed up computational time. This in turn made feasible AI projects considered impossible until then. Throughout the 2000s, this hardware progressively became more accessible, either natively on users' laptops, or on the "cloud" by using servers remotely.

Building on these foundations, the term "deep learning" emerged at the turn of the 2010s to refer to the ongoing shift happening within the AI community. This expression is an acknowledgment that artificial networks were the main focus from then on, as opposed to expert systems or other architectures previously employed in AI research. The concept of "depth" refers to the increasing complexity of AI models through the addition of more artificial neurons to their architecture. In return, this network depth allowed AI systems to tackle more challenging problems, although it rendered the training process computationally more expensive and tedious.
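The notion of depth can be made concrete with a toy forward pass: stacked layers of artificial neurons, each feeding the next. All layer sizes and weights below are arbitrary illustrative choices, not a trained model.

```python
# Minimal sketch of "depth": a value flows through successive layers of
# artificial neurons, each applying weights and a non-linearity.
import math

def layer(inputs, weights):
    """One layer: each neuron sums weighted inputs, then squashes the result."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

def forward(inputs, network):
    """A 'deep' model is simply a stack of such layers applied in sequence."""
    for weights in network:
        inputs = layer(inputs, weights)
    return inputs

# Three stacked layers: adding more of them increases the model's "depth".
network = [
    [[0.5, -0.2], [0.1, 0.8]],   # layer 1: 2 inputs -> 2 neurons
    [[0.3, 0.7], [-0.6, 0.4]],   # layer 2: 2 -> 2
    [[1.0, -1.0]],               # layer 3: 2 -> 1 output
]
print(forward([1.0, 0.5], network))
```

Training consists of adjusting all of these weights at once; the deeper the stack, the more weights there are to tune, which is why depth makes training more expensive.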
While AI had remained a relatively quiet field until this point, the relevance of this new generation of models would soon be evidenced by the work of certain research institutions. A few events accelerated this revolution while broadcasting its importance to the world. In 2009, the ImageNet project started at Stanford. By gathering the largest database of labelled images thus far (more than 14 million), the university organized a yearly classification competition: contestants were invited to test their model's prediction accuracy against the ImageNet database. In 2012, AlexNet¹⁶, a brand-new deep learning model, overshot every baseline; the team of scientists behind AlexNet proved by the same token the validity of deep architectures for complex problems, and set the bar much higher than any previous research on the topic. This event was eye-opening for the field and, more importantly, for the general public. In an entirely different domain, in 2016, Lee Sedol, world Go champion, lost to AlphaGo, an AI model developed by DeepMind¹⁷. While the game of Go can seem analogous to chess at first – it is played on a board with black and white stones – it is in fact far more complicated. Because of its combinatorial complexity, scientists had not believed until this point that AI could compete with human intuition in this game. It is precisely why Sedol's defeat was both a breakthrough and a signal to the research community at large that deep learning represented a quantum leap.

16. Krizhevsky et al., "ImageNet Classification with Deep Convolutional Neural Networks", Advances in Neural Information Processing Systems 25, pp 1097-1105, 2012.
17. Silver et al., "Mastering the game of Go with deep neural networks and tree search", Nature 529, pp 484–489, 2016.

Since ImageNet and AlphaGo, the deep learning era has blossomed into countless new breakthroughs and applications. First, the diversity and complexity of AI models have significantly increased: convolutional neural networks, graph neural networks, generative adversarial networks, variational auto-encoders, and many other new architectures have been developed since then, always pushing further previously set performance baselines and expanding AI's scope. The variety of input mediums has also considerably widened: from simple digits and images in the 1950s and 60s, AI can today analyze and generate films, sounds, texts and 3D geometries, to name only a few formats. This reality, combined with the democratization of computational power, has allowed a widespread dissemination of AI solutions across industries since the 2010s.

A few examples illustrate the striking diversity of AI applications today. In bioengineering, for instance, drug discovery has drastically improved. To either determine the solubility of given molecules or their compatibility, AI can generate a vast quantity of molecular structures while predicting their associated performance and properties (toxicity, metabolism, etc.)¹⁸; by the same token, the time spent searching for new drugs can be dramatically reduced, while scientists can explore more options than with traditional methods. In an entirely different field, mechanical engineering, the design of parts – given a set of constraints and material properties – has always been a key domain of investigation. The repartition of loads under stress is a complex problem to forecast, one that diverse optimization techniques have been trying to tackle for decades. AI today allows the speeding up of such optimizations¹⁹, so as to predict the unsuspected paths taken by loads and suggest entirely new patterns of material repartition.

18. Hwang et al., "Comprehensive Study on Molecular Supervised Learning with Graph Neural Networks", J. Chem. Inf. Model. 60, 12, pp 5936–5945, 2020.
19. Rawat et al., "A Novel Topology Optimization Approach using Conditional Deep Learning", 2019.

Image synthesis – a field concerned with the generation of images by computers – has seen recent developments yield surprising results, which Generative Adversarial Networks (GANs) may best exemplify. While these models are explained in more detail in the following chapters, a simple glance at their results gives an idea of the current performance of such generative AI models. Built on a new type of architecture, initially theorized by the researcher Ian Goodfellow (2014)²⁰, these models can be trained to synthesize images that are realistic in the extreme. Nvidia Research has evidenced their performance with StyleGAN (2018)²¹, a model able to generate a vast number of realistic human faces in high definition (Fig. 5).

More speculative experiments, at the interface of AI and linguistics, finally convey the magnitude of the latest improvements. OpenAI, an American research laboratory founded in 2015, recently published results of their language models, GPT-3²², DALL-E²³ and GLIDE²⁴. In essence, these architectures can perform the translation of textual information into potential associated visual representations. In simpler terms, a given sentence, fed to these models, returns a wide variety of images fitting the description conveyed by the input phrase. Figure 6 displays such results. Beyond the strict depiction of literal terms, OpenAI's projects tackle challenges such as references, analogies and other complexities found in human language. GPT-3, DALL-E and GLIDE simply illustrate the increasing levels of abstraction that current AI models are able to handle.

This non-exhaustive collection of examples only underlines the tangible results of AI's latest developments. They conclude this 70-year-long chronology and set the stage for a discussion between Architecture and AI. While the following chapters will offer a more thorough introduction to this technology and its conceptual underpinnings, for the time being this genealogy acts as a short reminder of how this discipline, foreign to Architecture, came to be.

20. Goodfellow et al., "Generative Adversarial Networks", Advances in Neural Information Processing Systems 27, 2014.
21. Karras et al., "A Style-Based Generator Architecture for Generative Adversarial Networks", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp 4401-4410, 2018.
22. Brown et al., "Language Models are Few-Shot Learners", 2020.
23. Ramesh et al., "Zero-Shot Text-to-Image Generation", 2021.
24. Nichol et al., "GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models", 2021.

Fig. 5: Portraits generated using StyleGAN, Nvidia Research, 2018.
Fig. 6: Results of DALL-E; each collection of images is generated by the model, based on an input sentence displayed above it in the figure. Example prompts: "A living room with two white armchairs and a painting of the Colosseum. The painting is mounted over a modern fireplace." / "A loft bedroom with a white bed next to a nightstand. There is a fish tank beside the bed." / "A photo of Alamo Square, San Francisco, from a street in the afternoon."

The Deep Learning Revolution

References & Resources

Artificial Intelligence: A General Survey. J. Lighthill, 1973.
The Society of Mind. M. Minsky, Touchstone, 1986.
Some Expert Systems Need Common Sense. J. McCarthy, Stanford University, 1984.
The Dartmouth Research Project on Artificial Intelligence. J. McCarthy, M. Minsky, N. Rochester, C.E. Shannon, 1955.
What is AI? Basic Questions with John McCarthy. Stanford University.
The Perceptron. The Machine That Changed the World, Documentary, 1992.
Artificial Intelligence, a discussion with Marvin Minsky. Edge Interview, 2002.
AI & Creativity: Using Generative Models To Make New Things. D. Eck, Google Brain, 2017.

The Advent of Architectural AI
A Historical Perspective
The ties between Architecture and Technology are neither recent, nor have they been a stable reality. Despite having quite distinct agendas, their respective histories display moments of alignment and mutual enrichment. Either by simply inspiring one another, or by sharing entire frameworks with each other, their discussion has brought significant contributions to both worlds.
This back-and-forth is rooted deep in the history of Architecture. From the systematization brought by the modular grid at the turn of the century to the advent of computer-aided design (CAD), and later of parametric modeling, the discipline has benefited from the gradual refinement of its technological means and methods over the past century. Today, AI appears as a potential fourth stage of this chronology. As Architecture’s relationship to technology has matured in parallel to AI’s development, understanding how AI might eventually land in the discipline’s technological landscape is essential. This chapter intends to tie both histories together, while setting the stage for AI’s presence in Architecture.


Fig. 1: A brief historical timeline of technological developments in Architecture since the 1920s.

Modularity (1940s): Baukasten, W. Gropius (’23); Dymaxion House, B. Fuller (’30); The Modulor, Le Corbusier (’46); “Unité d’habitation” in Marseille, Le Corbusier (’52).

Computer-Aided Design (1960s): PRONTO, P. Hanratty (’59); SketchPad, I. Sutherland (’63); UNISURF, P. Bézier (’66); Urban 5, N. Negroponte (’73); Generator, C. Price (’76).

Parametricism (2000s): CATIA first release, Dassault Systèmes (’82); AutoCAD first release, Autodesk (’82); Vectorworks first release, Nemetschek (’85); Pro/ENGINEER, S. Geisberg (’88); Rhinoceros version 1.0, McNeel (’98); Revit first release, RTC (’00); Grasshopper, D. Rutten (’07); Parametricism’s Manifesto, P. Schumacher (’09).

Artificial Intelligence (2010s).

AI’s History: Artificial Neuron (’43); Dartmouth Workshop (’56); Perceptron (’57); ELIZA (’66); Lighthill Report (’73); “Some Expert Systems Need Common Sense” (’84); DeepBlue vs Kasparov (’97); Stanley at the DARPA Challenge (’05); AlexNet (’12); Generative Adversarial Network (’14); AlphaGo vs Sedol (’16).

Modularity

Reflecting on the last century, and to set a salient starting point to this chronology, modularity can be considered both an important milestone for Architecture and a sudden leap in its systematization. At the turn of the 20th century, modularity’s advent mobilized both academics and practitioners to rapidly reshape some of the discipline’s core constructive principles and methodologies.

Modularity was first theorized at the Bauhaus by the German architect Walter Gropius. His initial aim was twofold: technically simplifying the construction process while significantly reducing its cost. In that spirit, Gropius first introduced, as early as 1923, the concept of the “Baukasten”1. With this new methodology, standard modules were meant to be assembled as a kit of parts according to strict assembly rules. As a result, the complexity of detail solving would be mitigated by the rigor of the modular system.

1. A. M. Seelow, “The Construction Kit and the Assembly Line: Walter Gropius’ Concepts for Rationalizing Architecture”, in Arts, Vol. 7, No. 4, p. 95, Multidisciplinary Digital Publishing Institute, 2018.

With the American architect and designer Buckminster Fuller, modularity then evolved towards a more integrated definition. In Fuller’s Dymaxion House (1930)2, systems such as water pipes, HVAC, and other networks were directly embedded within the very modules (Fig. 2).

2. M. M. Cohen, A. Prosina, “Buckminster Fuller’s Dymaxion House as a Paradigm for a Space Habitat”, in ASCEND, p. 4048, 2020.

Fig. 2: Sections of Buckminster Fuller’s Dymaxion House, 1933.


This attempt pushed modular logic to the extreme. The minute decomposition of the different functions into manufacturable assembly kits established the Dymaxion House as one of the first successful proofs of concept for the rest of the industry.

The same year, the Winslow Ames House, designed by the American architect Robert W. McLaughlin, constituted another successful experiment. In this project, McLaughlin put the modular principles under even more acute pressure in an attempt to demonstrate the affordability of modular dwellings. By significantly streamlining the manufacturing process, McLaughlin was able to bring the production cost of a single dwelling down to 7,500 dollars. This achievement would set a lasting precedent, demonstrating the obvious benefits of the modular approach.

This rationalization of Architecture into systems and kits rapidly found a broader echo within the discipline. Besides its strictly economic relevance, modularity gradually inspired theorists across the field.


Le Corbusier’s “Modulor” may best express this reality3. From 1946, Le Corbusier developed and implemented a more complete theory, where the rationalization of dimensions would factor into the architect’s broader agenda. In his work, the dimensions of the building were aligned on key metrics and ratios derived from the human body. Consequently, from the “Unité d'Habitation” in Marseille (1952) to the convent of La Tourette (1960), Le Corbusier systematized dimensions and spans to match the prescriptions of his Modulor.

3. Le Corbusier, “Le Modulor : essai sur une mesure harmonique à l’échelle humaine applicable universellement à l’architecture et à la mécanique”, Édition de l’Architecture d’Aujourd’hui, 1950.

In line with these early experiments, architects would increasingly adapt their work to the requirements of the modular principles. In essence, by transferring part of the design’s technicality to the systematic logic of the grid and the assembly systems, architects discovered a methodology allowing them to conceive affordable designs at scale. Two major benefits of modular construction were to contribute to its rapid adoption: on the one hand, it drastically reduced both the complexity and the cost of building conception and construction; on the other, it substantially increased the reliability of construction processes. Looking at more contemporary iconic projects, whether realized or speculative, one can still read the lasting influence of the modular principles. To mention only two examples, Moshe Safdie's Habitat 67 and Archigram's “Plug-In City” are striking examples of a fascination for modularity that was to continue long after the end of the Second World War.

In 1967, the Israeli-Canadian architect and urban planner Moshe Safdie (b. 1938) built the housing complex “Habitat 67” (Fig. 3). This project remains today a masterful modular demonstration, long after Gropius’ seminal work: prefabricated housing units were assembled on site with cranes, while the irregularity of the resulting assembly patterns created a vast array of different conditions across the development. With Habitat 67, Safdie achieved a singular combination, bringing together both the affordability of standardized modules and the richness of countless variations across his design4.

Fig. 3: Assembly of Moshe Safdie’s Habitat 67, 1967.

4. M. Safdie, “For Everyone a Garden”, MIT Press, 1974.

The influence of the modular principles would also impact the work of theoreticians at other scales. In the 1960s, Archigram’s “Plug-In City” envisioned a modular metropolis5. Through the constant assembling and dismantling of modules installed on a three-dimensional structural matrix, cities could experiment with the possibility of modular growth.

5. S. Sadler, “Archigram: Architecture Without Architecture”, MIT Press, 2005.

These principles, however, would rapidly exhibit obvious limitations. Restricting Architecture to a simple assembly of modules aligned on a rigid grid too often reduced the practice to a narrow definition. In many instances, architects could not resign themselves to merely acting as the assemblers of predefined design systems, abiding by stringent rules and processes. Moreover, modular production too often proved to be quite monotonous, while the early systems of assembly eventually revealed real constructive weaknesses. For these reasons, architects’ fascination with modularity, under its initial definition, would gradually fade away throughout the 20th century.

Modularity was, however, to have a profound effect on the architectural discipline, establishing a new rational mindset among practitioners, and a certain eagerness to envision buildings as actual systems. As a lasting testimony of this period, the concepts of grid, module, and assembly still deeply permeate some of Architecture’s core principles today.


References & Resources

The Modulor I & II. Le Corbusier, Harvard University Press, 1954.
Le Corbusier’s Modulor system. R. Meier’s Interview, 2017.
Towards A New Architecture. Le Corbusier, J. Rodker Publisher, 1931.
The New Architecture and the Bauhaus. W. Gropius, MIT Press, 1965.
Archigram’s Plug-In City. VDF, Dezeen, 2020.
Buckminster Fuller’s Dymaxion House. 1940s Futuristic Architecture, 1946.
The Dymaxion World of Buckminster Fuller. R. W. Marks, S.I. University Press, 1960.
Gropius & The Dessau Bauhaus. Architecture Collection, ARTE.


Computer-Aided Design

At the turn of the 1980s, the rapid increase of computing power and the availability of new hardware (microprocessors, memories, computer networks, etc.) triggered the advent of multiple design software programs relevant to architectural design. “Computer-Aided Design” (or “CAD”), as this generation of software would later be named, was to significantly impact the practice of Architecture.

In reality, reflections on the potential of CAD began as early as the mid-1950s within certain engineering firms. In 1959, the American computer scientist and businessman Patrick Hanratty released PRONTO6, the first CAD prototype, developed for designing engineering parts. The possibilities offered by this software marked the beginning of significant research efforts on the topic.

6. W. E. Carlson, “A Critical History of Computer Graphics and Animation”, The Ohio State University, 2005.

Shortly thereafter, in 1963, the American computer scientist Ivan Sutherland created SketchPad7 (Fig. 4), one of the first truly accessible, ergonomic, and simple CAD programs. Working at the Lincoln Laboratory of the Massachusetts Institute of Technology (MIT), Sutherland designed a software program that not only allowed for the precise 2D drafting of technical elements, but also offered a streamlined and intuitive interface for designers. With the use of a light pen and extremely simplified controls, SketchPad gave drafters an unprecedented level of comfort and flexibility.

7. I. Sutherland, “Sketchpad: A Man-Machine Graphical Communication System”, Simulation, 2(5), pp. R-3, 1964.

Fig. 4: Ivan Sutherland and SketchPad, 1963.
From 2D drafting to 3D modeling, CAD made a leap forward in France, thanks to the work of the mathematician and computer scientist Pierre Bézier. Bézier’s work on complex curvatures8 enabled drafters to draw increasingly challenging 3D shapes using computers, giving new momentum to CAD software. Released in 1966, Bézier’s UNISURF9 software was used by the car manufacturer Renault to model the shape of certain prototypes. This sudden leap forward did not limit itself to automotive design, but would have a lasting influence on design software across many other fields.

8. P. Bézier, “Essai de définition numérique des courbes et des surfaces expérimentales”, thèse de Doctorat d’État ès Sciences Physiques, 1977.

9. P. Bézier, “Example of an existing system in the motor industry: the Unisurf system”, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences, 321(1545), pp. 207-218, 1971.
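Bézier’s mathematics can be suggested with a short sketch. The following is of course not UNISURF code, but a minimal Python implementation of de Casteljau’s algorithm, the recursive interpolation scheme underlying Bézier curves; the function name and control points are purely illustrative.

```python
# Illustrative sketch: evaluating a Bézier curve by repeated linear
# interpolation (de Casteljau's algorithm). Not Bézier's actual software.

def de_casteljau(control_points, t):
    """Evaluate a Bézier curve of any degree at parameter t in [0, 1]."""
    points = [tuple(p) for p in control_points]
    while len(points) > 1:
        # Interpolate between each pair of consecutive points at ratio t.
        points = [
            tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
            for p0, p1 in zip(points, points[1:])
        ]
    return points[0]

# A cubic curve is defined by four control points.
curve = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
start = de_casteljau(curve, 0.0)   # coincides with the first control point
mid = de_casteljau(curve, 0.5)     # point halfway along the parameter range
end = de_casteljau(curve, 1.0)     # coincides with the last control point
```

De Casteljau’s recursive scheme is generally preferred over the explicit Bernstein polynomial form for its numerical stability, which is one reason the construction spread so widely through later design software.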


Thanks to Sutherland, Hanratty, Bézier, and many others, CAD increasingly stood as a new field of research in its own right. At the same time, CAD was being massively deployed across industries, and Architecture was no exception. At its core, this generation of software allowed the creation and editing of primitives (simple geometrical shapes), their aggregation and sorting using concepts such as blocks, groups, and layers, to finally output the results in various formats, digital or printed. Besides their obvious contribution to speeding up drafting tasks, CAD programs imposed a specific structure on the design process. Drawings were systematically organized across layers, blocks allowed for a certain replicability of module-like groups of shapes, and geometries would be tagged with consistent properties. Through its various conventions, CAD brought to Architecture a new way to rationalize and systematize the drafting process.

In parallel to this gradual dissemination, the work of the CAD pioneers would inspire a generation of computer scientists and architects to take more speculative and experimental directions. The Architecture Machine Group (AMG) at MIT, led by the Greek-American computer scientist and professor Nicholas Negroponte, is perhaps one of the most significant examples of this period. Negroponte's book, “The Architecture Machine” (1970)10, encapsulates the essence of the AMG's mission: to investigate how computers might improve architectural design in the decades to come. The Urban 2, and later the Urban 511, projects allowed him to demonstrate the potential of CAD specifically in its application to Architecture12, even before the industry had taken this path. Throughout the AMG’s projects, researchers investigated the potential interface between computers and designers, as well as the organization of future CAD programs.

10. N. Negroponte, “The Architecture Machine”, MIT Press, 1970.

11. N. Negroponte, “Toward a Theory of Architecture Machines”, Journal of Architectural Education, Vol. 23, No. 2, pp. 9-12, 1969.

12. A reconstituted demo of Urban 5, created by Erik Ulberg, can be accessed at the following URL: https://c0delab.github.io/URBAN5/

From then on, architects and industrialists would increasingly embrace CAD software in its various forms, and sometimes even innovate themselves. In this respect, the initiative of the American-Canadian architect Frank Gehry would pave the way for the following decades. For Gehry, the use of computers applied to architectural design could considerably relax the limits of assembly systems and allow for new forms and geometries in his designs. In 1989, businessman Jim Glymph teamed up with Gehry to initiate the use of Dassault Systèmes’ main computer-aided design and manufacturing (CAD/CAM) software, CATIA, to solve the extreme geometric complexity of some of their projects13. Among many designs, the Walt Disney Concert Hall in Los Angeles (Fig. 5) remains an iconic example of their success, one that would set a lasting precedent demonstrating the value of 3D CAD to architects14.

13. D. Narayanan, “Gehry Technologies, a Case Study”, 2006.

14. Haymaker & Fischer, “Challenges and Benefits of 4D Modeling on the Walt Disney Concert Hall Project”, 2001.

Fig. 5: The Walt Disney Concert Hall, Frank Gehry, 2003.

Between the 1980s and 2010, the growth of data storage and computing capabilities, combined with their drastic cost decrease, facilitated the development and adoption of CAD software such as CATIA (1982), AutoCAD (1982), Vectorworks (1985), and many others. Architects widely adopted this new design method as it allowed for the rigorous control of complex geometrical shapes, facilitated collaboration among designers, enabled more iterations than traditional hand-sketching, and limited the resulting costs. For all these reasons, CAD gradually became an industry standard.

However, as architects embraced this software, obvious limitations arose. The repetitiveness of drafting tasks, the lack of control over certain shapes, and the difficulty of specifying complex design rules prompted the industry to start looking elsewhere for complementary technologies.


References & Resources

SOFTWARE Show Catalog. The Jewish Museum, New York, 1970.
The Unisurf System. P. Bézier, 1971.
Ivan Sutherland & Sketchpad at MIT Lincoln Lab. MIT Science Report, 1963.
Sketchpad, a Thesis. I. Sutherland, 1963.
CAD systems. MIT, Architecture Machine Group, 1976.
CAD Lab at MIT. MIT Department of Mechanical Engineering, 1982.
Frank Gehry uses CATIA. 2011.
The Case for Process Change by Digital Means. D. R. Shelden, AD, 2006.


Parametricism

“Parametric modeling” refers to a design methodology that would gradually be integrated within mainstream architecture software (Rhino, Revit, etc.). Besides the manipulation of sketches using standard geometric editing tools, this methodology lets designers specify explicit rules as an alternate way of designing buildings. In reality, the use of such rules long predates the arrival of parametric tools in Architecture, whether through the early work of certain architects or through experiments realized within specific software in the second half of the 20th century.

As early as the 1960s, the emergence of parametric architecture had been announced by the Italian architect Luigi Moretti. His project, the Stadium N, constituted an early demonstration of parametric modeling’s potential15. By defining nineteen parameters, Moretti formulated a precise procedure, as a set of mathematical equations, directly responsible for the final shape of the structure. Each variation of this parameter set could yield a new shape for the stadium. Moretti’s resulting design not only offered a convincing proof of concept at the time, it also anticipated parametric modeling’s upcoming aesthetics.

15. L. Moretti, “Parametrica Architettura”, Dizionario Enciclopedico di Architettura e Urbanistica, Istituto Editoriale Romano, 1968.
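Moretti’s nineteen equations are not reproduced here, but the underlying idea can be sketched in a few lines of code: the design exists as a procedure, and each set of parameter values yields a different shape. The superellipse formula and every name below are hypothetical stand-ins, not Moretti’s actual mathematics.

```python
import math

# Hypothetical parametric procedure: the stadium is a function of its
# parameters, and varying them generates a family of design options.
def stadium_profile(length, width, corner_exponent, n_points=64):
    """Return plan-outline points of a superellipse 'stadium bowl'.

    corner_exponent controls the transition from a pure ellipse (2.0)
    toward a rounded rectangle (higher values).
    """
    points = []
    for i in range(n_points):
        t = 2 * math.pi * i / n_points
        # Signed superellipse equation: |cos t|^(2/n), |sin t|^(2/n).
        x = (length / 2) * math.copysign(abs(math.cos(t)) ** (2 / corner_exponent), math.cos(t))
        y = (width / 2) * math.copysign(abs(math.sin(t)) ** (2 / corner_exponent), math.sin(t))
        points.append((x, y))
    return points

# Each parameter set yields a new shape for the stadium.
variants = {n: stadium_profile(200.0, 150.0, corner_exponent=n) for n in (2.0, 3.0, 4.0)}
```

The point of such a procedure is exactly the one Moretti anticipated: the geometry is never drawn directly, only derived from the current values of its parameters.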


In the meantime, the development of experimental design software would facilitate early experiments on the same topic. Besides its CAD interface, Ivan Sutherland’s SketchPad (1963), mentioned earlier, already formulated certain parametric features. At the heart of this tool, the notion of the “atomic constraint”16 represented a translation of Moretti’s idea of the parameter. For any sketch made in SketchPad, each geometry would be translated for the machine into a set of atomic constraints or, in other words, variables accessible to the user. Not only could the designer modify these parameters, but the underlying set of relationships could also be changed, giving the end user the ability to set both the design rules and their different inputs. In retrospect, SketchPad appears as a precursor to most parametric design tools later invented throughout the industry.

16. I. Sutherland, “Sketchpad: A Man-Machine Graphical Communication System”, Simulation, 2(5), pp. R-3, 1964.
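The logic of constraints can be suggested with a toy example. The sketch below is in no way SketchPad’s actual implementation; it simply shows a drawing stored as variables (points) plus rules (distance constraints), re-solved by simple relaxation whenever the user edits either one. All names and the solver itself are illustrative.

```python
# Illustrative constraint relaxation, loosely in the spirit of SketchPad's
# "atomic constraints": geometry as user-editable variables plus rules.

def solve(points, constraints, iterations=200):
    """Iteratively relax distance constraints between named 2D points."""
    pts = {name: list(xy) for name, xy in points.items()}
    for _ in range(iterations):
        for (a, b, target) in constraints:
            (ax, ay), (bx, by) = pts[a], pts[b]
            dx, dy = bx - ax, by - ay
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            # Move both endpoints half the error each, along the segment.
            correction = 0.5 * (dist - target) / dist
            pts[a] = [ax + dx * correction, ay + dy * correction]
            pts[b] = [bx - dx * correction, by - dy * correction]
    return pts

points = {"A": (0.0, 0.0), "B": (3.0, 4.0)}   # variables the user can edit
constraints = [("A", "B", 10.0)]              # rules the user can also edit
solved = solve(points, constraints)
```

Editing either the variables or the rules and re-running the solver mirrors the two levels of access Sutherland gave his users: inputs and relationships alike remain open to change.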

Twenty-five years after Sutherland’s thesis, Samuel Geisberg, founder of the Parametric Technology Corporation (PTC), launched Pro/ENGINEER (1988), the first software program to provide users with complete access to geometric parameters. As the software was released, Geisberg perfectly summed up the parametric ideal: “The goal is to create a system that is flexible enough to encourage the engineer to easily consider a variety of designs. And the cost of design changes should be as close to zero as possible”17. Geisberg’s assertion corresponded to a key concern addressed by parametric modeling: the ability to rationalize shapes into strict rules, allowing for fast and reliable design explorations. This very characteristic would in fact explain parametric modeling’s success and dissemination across the industry over the following decades. These early experiments demonstrated to the discipline the potential relationship between architectural design and its parameterization.

17. J. Teresko, Industry Week, December 20, 1993.


In the footsteps of Sutherland and Geisberg, a new generation of “parametric” architects could finally flourish. Among many efforts to digest parametric modeling’s implications for Architecture, Patrik Schumacher, a German architect and collaborator at Zaha Hadid Architects (ZHA), attempted to provide a unified theory. For him, the discipline was gradually “converging” towards what he called “Parametricism”, understood as a design technique, but also as a distinct architectural style. In his manifesto, “Parametricism: A New Global Style for Architecture and Urban Design” (2009)18, Schumacher laid down the core principles of this new movement.

18. P. Schumacher, “Parametricism: A New Global Style for Architecture and Urban Design”, AD Architectural Design - Digital Cities, Vol. 79, No. 4, 2009.

Besides the discussion of its theoretical framework, parametric modeling would rapidly find a more visible manifestation in the work of key architecture offices, such as ZHA. Zaha Hadid, an Iraqi-British architect and urban planner who trained as a mathematician, grounded her practice early on in the intersection of Mathematics and Architecture. Her work, such as the master plan for the Kartal Pendik neighborhood in Istanbul (Fig. 6), would often be the result of rules encoded directly into the program, allowing an unprecedented level of control over building geometry. Throughout her work, many architectural decisions were formulated into parametric procedures whereby key variables drove the resulting design. The distinct organicity of Hadid’s work was in part due to this encoding methodology. Her projects’ organic appearance remains to this day the signature of both her own style and, more generally, Parametricism’s.

Fig. 6: Master Plan for Kartal Pendik, Zaha Hadid Architects, 2006.

Parametric modeling’s adoption would in fact accelerate as the development of visual programming platforms took off. Behind ZHA's work, for instance, Grasshopper, a program developed by the computer scientist David Rutten in the 2000s, significantly enabled Hadid’s design process. By using a simple graph-like interface, Grasshopper allows for the encoding of design rules. In the software, geometrical objects, functions and their associated parameters are woven together into sequential procedures. Thanks to this tool, architects get simplified access to programming logic, without the complication of learning any specific programming language or engaging with the hassle of code development. Nowadays, the simplicity of its interface, combined with the relevance of its multiple features and built-in components, allows Grasshopper (Fig. 7) to be an essential tool for an entire generation of designers. Evidently, Grasshopper built upon Sutherland’s and Geisberg’s intuitions, while opening the back door of design software even wider19. Using Grasshopper, the design process could effectively reach an entirely new level of systematization: it could now be conceived more programmatically, as designers invest part of their design time in the formulation of Architecture’s underlying rules, their replicability and applicability at scale. As visual programming interfaces quickly spread across the industry, such a mindset shift was to accompany their deployment.

19. D. Rutten, “Computing Architectural Concepts: Grasshopper Stories”, Lecture at the AA School of Architecture, 2010.

Fig. 7: Grasshopper, Visual Programming Software, 2018.

Beyond Grasshopper and its contributions to the profession, another revolution, initiated in the early 2000s, would be timely deployed to leverage the concept of the parameter: building information modeling (BIM)20. BIM’s intent is to document and manage the vast quantity of meta-information tied to building forms (quantities, materials, specifications, properties, etc.). At the same time, within any BIM software, such as Revit or ArchiCAD, architects can manipulate objects rather than simple geometric primitives. Besides their respective shapes, objects carry their own set of properties and behaviors relative to other objects. While CAD drawings are representations of the building, BIM models aspire to offer actual digital replicas of buildings and their respective systems21. This semantic enrichment heavily relies on the management of parameters and on the existence of underlying rules for each element, each family of objects, etc. From Revit to Sutherland’s SketchPad, these practices in their diversity followed a common thread: the explicit use of parameters as design drivers.

20. Autodesk, “Building Information Modeling, White Paper”, 2002.

21. C. Eastman, P. Teicholz, R. Sacks, K. Liston, “BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers and Contractors”, Wiley, 2011.
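The object-plus-parameters idea behind BIM can be suggested with a deliberately simplified sketch; it mimics neither Revit’s nor ArchiCAD’s actual object model, and all names are hypothetical. A wall here is not a drawing but an object carrying parameters, a rule relative to the doors it hosts, and quantities derived on demand.

```python
# Illustrative BIM-style objects (not a real BIM API): elements carry
# parameters, rules toward other objects, and derived quantities.

from dataclasses import dataclass, field

@dataclass
class Door:
    width: float   # meters
    height: float  # meters

@dataclass
class Wall:
    length: float                  # meters
    height: float                  # meters
    material: str = "concrete"
    doors: list = field(default_factory=list)

    def add_door(self, door: Door) -> None:
        # A rule relative to another object: a door must fit its host wall.
        if door.width >= self.length or door.height >= self.height:
            raise ValueError("door does not fit in host wall")
        self.doors.append(door)

    @property
    def net_area(self) -> float:
        # A derived quantity, recomputed from parameters on demand.
        return self.length * self.height - sum(d.width * d.height for d in self.doors)

wall = Wall(length=6.0, height=3.0, material="brick")
wall.add_door(Door(width=0.9, height=2.1))
area = wall.net_area  # gross wall area minus the door opening
```

Changing any parameter, the wall’s length, say, automatically updates every derived quantity, which is the semantic behavior the chapter attributes to BIM models as opposed to plain CAD drawings.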

However, since the 2010s, the parametrization of architectural design seems to have been running out of steam, both technically and conceptually. Several factors contribute to this situation. First, with these techniques, concerns about strict efficiency too often take precedence over the imperatives of space organization, style and other, more implicit considerations vital to the discipline. Then, Architecture requires the exploration of broad design spaces. Unfortunately, parametric modeling often fails to capture this reality: although an improvement over previous methodologies, the variety of generated design options often remains too narrow. Finally, finding the right balance among parameters can be a complex and computationally expensive exercise that often defeats the initial purpose of parametric design. Moreover, and independently of these technical shortcomings, parametric modeling is based on a theoretical premise questioned by some: that the important properties of a building could be described using a fixed set of explicit parameters, directly encoded as rigid design rules. In reality, certain essential architectural concerns (sociological, cultural, stylistic, etc.) cannot be so explicitly formulated, putting parametric modeling at odds with certain key aspects of Architecture.

AI, the fourth stage of this chronology, is eventually poised to improve on certain limitations of parametric modeling. Its encounter with the architectural profession is a decisive turning point, one which the last sixty years have been gradually preparing for.


References & Resources

An “Other” Aesthetic: Moretti’s Parametric Architecture. A. Imperiale, Log, 2018.
Parametricism, A New Global Style for Architecture and Urban Design. P. Schumacher, 2008.
Parametricism as Style. P. Schumacher, 2008.
The Kartal Pendik Masterplan. Z. Hadid Architects, 2015.
A History of Parametric. D. Davis, 2013.
The Challenges of Parametric Modelling. D. Davis, 2013.
The Future of Making Buildings. P. Bernstein, TEDxYale, 2015.
Digital Culture in Architecture. A Lecture by A. Picon, 2013.


Artificial Intelligence

AI’s relevance to Architecture was initially anticipated by a few theorists, who foresaw its potential early on. These precursors would initiate a discussion within the discipline on various aspects of AI’s future contributions. A short glance at some of these milestones can help better grasp the direction taken by current developments.

Nicholas Negroponte was to initiate the reflection. His work in the 1970s focused specifically on the notion of interaction with “intelligent” machines. He first introduced the concept of the “machine assistant” in his work at MIT’s Architecture Machine Group (AMG), with the aforementioned Urban 2 and 5. These programs were initially designed to help architects draw floor plans by adapting room layouts in order to optimize adjacencies and lighting conditions, while constraining the sketch to fit onto a modular grid. Besides providing an early expression of CAD, Urban 5 investigated the very notion of complementarity between the designer and an “intelligent” agent22. To that effect, the software played off the interaction between two distinct layers of information: the machine handled an array of implicit rules, while the user was in charge of specifying given explicit parameters. Urban 5’s division of tasks translated the “machine-human” complementarity desired by Negroponte. With this project, Negroponte put forward a new distribution of contributions between computers and architects. For instance, when users placed elements on the canvas, Urban 5 would issue warnings if clashes were detected: “TED, TOO MANY CONFLICTS ARE HAPPENING” would get flagged if blocks did not coincide. The machine could also suggest rough layouts, letting users tune and adapt them later. Negroponte’s work assigned to computers a more active role in the conception process, beyond the simplicity of other CAD research of the time. His work helped clarify and demonstrate the type of interaction architects could expect from “intelligent” design programs for the foreseeable future.

22. N. Negroponte, “The Architecture Machine”, MIT Press, 1970.

Around the same period, Negroponte’s British counterpart, Cedric Price, investigated another facet of AI: the principle of autonomy. To that effect, in 1976, Price, then Professor of Architecture at Cambridge University, invented the Generator (Fig. 8)23. With this project, initially conceived as a proposal for the Gillman Corporation, Price explored the concept of the self-adapting building. In the project, a floor plan, organized as an orthogonal grid, allowed for a system of partitions to be constantly modified. A computer was responsible for offering new partitioning layouts, either to adapt the plan to the users’ behaviors, or spontaneously, as a way to trigger new conditions. At its core, Price’s work addressed the potential of machines as autonomous design agents24. The Generator forecasted, very early on, how AI could find its place within architectural software, while playing a specific role in the design process.

Fig. 8: Detail view of the working electronic model of the Generator project, between 1976 and 1979.

23. S. Hardingham, “Cedric Price Works 1952-2003: A Forward-minded Retrospective”, pp. 447-470, AA Publications, 2016.

24. Furtado et al., “Cedric Price’s Generator and the Frazers’ systems research”, Technoetic Arts: A Journal of Speculative Research, 6, no. 1, 2008.

Both Price’s work and Negroponte's research have shaped the discussion in Architecture around the topic of AI. As explained in the first chapter of this book, AI itself has significantly improved since these early experiments. Price’s and Negroponte’s intuitions find a new echo today: no longer are these convictions limited to a handful of isolated research projects. On the contrary, the increasing affordability and accessibility of AI bring these considerations back to the center of the discussions in Architecture. The last decade has indeed seen a sharp increase in AI’s dissemination across the architectural field. At this point, estimating its current presence remains a challenging exercise, since the AI scene in Architecture seems as diverse as it is recent.

To mention only a few manifestations, we first notice a significant
increase in publications and applied research projects on the topic
across the field. Only browsing through the wealth of published pa-
pers and conference proceedings produced over the past decade
is sufficient to signal the importance this subject has taken in ac-
ademia; an importance even echoed by the 2021 Architecture Bi-
ennale through the many talks, panels and keynotes engaging with
this topic. Then, turning to the state of mainstream software, the
gradual introduction of AI capabilities has brought these technologies
closer to architects. The addition of generative design features to
Revit for instance, or the multiplication of Machine Learning libraries
for Grasshopper, represent as many opportunities for practitioners
to engage with this technology. In addition, a new generation of light-
er design tools has recently surfaced. Mostly browser based, they
offer cheap and simple access to AI-based design tools. Space-
maker, Archistar, Delve, XKool, CoveTool are only a few examples
of this recent web app ecosystem. More interestingly, a growing
number of architects are today being trained to understand, craft
and use this new technology. Throughout colleges and universi-
ties, a growing number of workshops, classes, or even degrees
offer to prepare architects to engage with AI. Finally, the influence
of AI’s momentum in other fields (engineering, computer science,
etc.) is to be taken under consideration. As this technology yields
promising results across these industries, certain AI applications
are being transposed and repurposed to match the architectural
agenda. This cross-pollination comes today from fields as varied
as self-driving cars or image recognition, where state-of-the-art
AI research is being conducted and is often open sourced. In this
sense, Architecture benefits today from a broader cross-disci-
plinary research effort, providing the discipline with many off-the-
shelf technological solutions.

As a matter of fact, AI’s emergence in Architecture leaves us with
a brand new fragmented landscape of applications, theories and
actors that has not yet crystallized into any single definition. This
chronology, therefore, remains as open-ended as the spectrum
of potential scenarios we are today facing. To reflect this reality,
the following chapters offer to present some of the most import-
ant facets of this emerging phenomenon. A first segment will lay
down various concepts and definitions to understand some of AI’s
most essential technical underpinnings. A landscape of existing
experiments and applications will then present AI’s tangible appli-
cations to various architectural scales. A collection of articles will
finally provide a snapshot of the diversity of current discussions.
This dual lens, halfway between application and theory, hopes to
convey and reconcile the heterogeneity of AI’s manifestations in
Architecture into a comprehensive corpus.

Artificial Intelligence

References & Resources

The Architecture Machine
N. Negroponte, MIT Press, 1970

Being Digital
N. Negroponte, Alfred A. Knopf, 1995

Houses that know the people who live in them
N. Negroponte, 1975

Cedric Price Archive
Canadian Center for Architecture, 1959-95

Information Archaeologies
Molly Wright Steenson on Cedric Price’s Generator Project, CCA

Soft Architecture Machines
N. Negroponte, MIT Press, 1975

Architectural Intelligence
M. W. Steenson, Talks at Google, 2018

The Creativity Code
A Lecture by M. du Sautoy, 2020

AI’s Deployment in Architecture
An Experimental Perspective

Looking back at History, AI’s presence in Architecture appears as the
result of a slow maturation. However, the past decade has witnessed
the sharp acceleration of this momentum. The recent results of
research and the wealth of current applications across Architecture’s
different scales together provide tangible signs of AI’s gradual
dissemination in the field.
This chapter attempts to contemplate the landscape
of ongoing applications. It first begins by laying down a simplified
definition of AI’s various facets. Rather than diving into any tech-
nical depth, the following pages intend to set the stage in acces-
sible terms. The following segment, then, showcases some of AI’s
recent contributions to Architecture. Either at different scales, or
for various tasks, current projects developed at the intersection of
both fields already bridge the gap between research and practice.
Although these results provide a snapshot of the state of current
investigations, meant to evolve and mature, they prefigure a prom-
ising future for AI in Architecture.


Artificial Intelligence 101

Since its early formulation in 1956 at the Dartmouth Workshop, AI has
taken different forms and matured in its definition. The past 60 years
have seen a vast diversity of approaches, all aiming at translating the
initial vision into functioning technologies; consequently, AI has today
blossomed into multiple distinct categories. As the rapid development
of research projects often outpaces the effort to exhaustively map out
AI’s ecosystem, this categorization is in fact at the center of intense
debate. Therefore, if the following figure (Fig. 1) offers a simplified
classification, it will certainly evolve within the coming decades.

1
A brief categorization of AI’s diverse fields of investigation:
Artificial Intelligence branches into Machine Learning, Expert
Systems, Robotics, Computer Vision and Natural Language Processing;
Machine Learning itself subdivides into Supervised, Unsupervised and
Reinforcement Learning.

At this stage, however, the sole purpose of this classification is
to position AI’s latest developments, gathered under the flagship
name of “machine learning”, within AI’s broader ecosystem. Machine
learning encapsulates different models that share a few common-
alities, setting them apart from other computational paradigms.

On the one hand, machine learning describes the bottom-up acqui-
sition of features through repeated observations. In simpler terms,
machine learning models can approximate a phenomenon through
an iterative exposure to vast quantities of data. This process, called
“learning” — or “training” — corresponds to a tuning phase, during
which the model will either succeed or fail at capturing some of the
observed phenomenon’s complexity. Once trained, the model can
be used to predict or mimic the same phenomenon under new set-
tings or different parameters.

On the other hand, machine learning operates a pivot: by embrac-
ing an observational approach, this methodology distances itself
from descriptive technics. A canonical example will both illustrate
and clarify this reality: the mathematical modeling of water’s boiling
point. It is common knowledge that the physical state of water de-
pends on its temperature and ambient pressure. Rather than taking
a descriptive approach by formulating the equation tying together
temperature and pressure, AI would instead browse through col-
lections of data points called “observations”, to approximate the
same phenomenon. Such observations can be obtained by re-
peatedly recording pairs of temperature and associated pressure
values, at different moments of water’s heating process. During the
learning phase, an AI model will try to improve its ability to predict


the temperature of the boiling point given the ambient pressure, us-
ing a feedback loop mechanism. To that effect, all along its training,
the model compares its estimates to the actual expected values
present in the data. Faced with a residual difference between both,
the model tries to recalibrate itself until it reduces this gap as much
as possible. Training finishes when the user believes the machine
has sufficiently well acquired the “mapping” between a variable and
an “objective value”. In other words, this learned mapping can be
conceived and visualized as the model’s gradual attempt at fitting
a curve best describing the distribution of observations (curves in
Fig. 2). If the above example remains fairly simple, it illustrates the
broader idea behind machine learning-based technics: iteratively
inducing certain characteristics found among the observed data
while encapsulating these approximations into a trained and
accurate-enough model.

2
Example of a machine learning model’s gradual attempt at matching
the distribution of water’s temperature/pressure data points;
successive curves, labeled from “Training Start” through intermediate
percentages to “Training End”, are plotted against Temperature and
Pressure axes.
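As a minimal sketch of this learning loop, the boiling-point example can be written in a few lines of Python. Everything below is illustrative rather than taken from this book: the observations are synthetic (pressure, temperature) pairs generated from an assumed linear trend, and a two-parameter model is recalibrated step by step until its residual gap shrinks.

```python
import random

# Hypothetical observations of (ambient pressure, boiling temperature),
# generated from an assumed linear trend plus measurement noise.
random.seed(0)
observations = [(0.6 + 0.05 * i,
                 100 + 28 * (0.6 + 0.05 * i - 1) + random.gauss(0, 0.5))
                for i in range(16)]

# Model: temperature ~ w * pressure + b. "Training" tunes w and b by
# comparing predictions to the observed values (the feedback loop).
w, b, lr = 0.0, 0.0, 0.02

def mean_squared_error():
    return sum((w * p + b - t) ** 2 for p, t in observations) / len(observations)

error_before = mean_squared_error()
for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    gw = sum(2 * (w * p + b - t) * p for p, t in observations) / len(observations)
    gb = sum(2 * (w * p + b - t) for p, t in observations) / len(observations)
    w, b = w - lr * gw, b - lr * gb
error_after = mean_squared_error()

print(error_after < error_before)
```

Once trained, `w * pressure + b` predicts the boiling temperature under new pressure values, which is the "mapping" described above.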

Finally, it is worth noticing yet another specificity of machine learn-
ing: the user’s control over the computation. With descriptive tech-
nics, like the ones Architecture is used to with parametric model-
ing, the user is entirely responsible for formulating the steps taken
by the computation and its associated parameters. With machine
learning, however, if the model’s architecture is at the user’s initia-
tive, the tuning of the parameters – and even, for certain models,
the very definition of these parameters – happens within the model
itself. Users retain control through a handful of high-level settings,
also called “hyperparameters”, allowing them to guide the general
direction of the learning process.

Neither a “white box” – a fully controllable algorithm –, nor a “black
box” – an airtight model leaving no control to the end-user –, ma-
chine learning stands as a “gray box”1 in the computation landscape.
This expression, coined by Andrew Witt, rounds up the description
of the balance that machine learning strikes between control and
computational complexity; with this technology, when the growing
intricacy of the models enables the approximation of ever more
challenging problems, the legibility and the interpretability of its
deeper computation can sometimes fade away. In machine learn-
ing, therefore, users constantly work along a threshold between
interpretability and complexity, striving to keep an adequate level
of transparency of their models, while leveraging the power of their
complex architectures.

1. A. Witt, “Grayboxing”, pp 69-77, Log #43, 2018.


The Mosaic of Machine Learning

Machine learning is a field rich in many recent breakthroughs and
initiatives. These applications span across industries, from speech
recognition to image synthesis, all the way to robotics. Consequently,
the variety of existing approaches that populate this domain is
both a chance and a source of confusion for the general public. If a
comprehensive categorization of machine learning remains a tedious
task, a few concepts can help organize a mental map of ongoing
investigations; learning strategy, model architecture and performed
task are three different lenses, enabling us to draw a somewhat
simplified classification.

Machine learning is commonly divided into three subcategories, de-
pending on their respective training strategy: supervised, unsuper-
vised, and reinforced. Supervised learning investigates the applica-
tion of machine learning to the mapping of known input-output data
pairs. The boiling water example explained earlier typically illustrates
such a learning process: the emulation of a mapping, using labeled
data. Unsupervised learning, on the other hand, attempts to model
patterns found among observed data, without having any specified
output values. In other words, the data presented to the algorithm is
not labeled; it is then up to the model to discover trends happening
within this raw stream of information. Reinforced learning, finally,
refers to a third way of conceiving the training process: this time,
rather than consuming preexisting data pairs, the model simulates a
sequence of steps, from which it collects rewards; on this ground, it
recalibrates its behavior to improve its score further. Over time, the


model “reinforces” its ability to perform, as a reaction to its quantified
accuracy across a broad sequence of steps.
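The contrast between these strategies can be made concrete with a short sketch of the unsupervised case. The routine below is a hypothetical illustration, not an example from this book: a bare-bones k-means clustering discovers two groups in unlabeled one-dimensional data, without ever being shown output values.

```python
import random

# Unlabeled 1-D observations drawn around two hypothetical centers
# (2.0 and 8.0); the model is never told which point belongs where.
random.seed(1)
points = ([random.gauss(2.0, 0.3) for _ in range(20)]
          + [random.gauss(8.0, 0.3) for _ in range(20)])

# k-means with k = 2: alternate between assigning each point to its
# nearest center and moving each center to the mean of its group.
centers = [min(points), max(points)]
for _ in range(10):
    groups = ([x for x in points if abs(x - centers[0]) <= abs(x - centers[1])],
              [x for x in points if abs(x - centers[0]) > abs(x - centers[1])])
    centers = [sum(g) / len(g) for g in groups]

print([round(c, 1) for c in centers])
```

The two recovered centers land near 2.0 and 8.0: the trend was present in the raw stream of data, and the model induced it without labels.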

Beyond these three distinct training strategies, families of models
can also be sorted according to their respective architecture.
This reality corresponds to the internal structure of algorithms
themselves. From this standpoint, a different and more granular
categorization can be achieved: deep neural networks, support
vector machines, Bayesian networks, etc. Each family, through its
architecture, performs learning differently. However, all families build
upon the same concept of artificial neural networks and therefore
share basic common principles worth detailing.

Artificial neural networks (ANN), as used in most models today, di-
rectly stem from the early architectures detailed by McCulloch &
Pitts, Rosenblatt and others. With ANNs (Fig. 3), computation is
conceived as the byproduct of a distributed and diffuse process, as
artificial networks aim at mimicking the human brain’s processing of
information. Artificial neurons, containing their own parameters – also
called weights – are nested into layered architectures to form entire
networks. The architecture of these networks can vary greatly, as
users can modify the number of neurons, layers, training parame-
ters, and other settings to adapt the model to specific tasks. Layers
can also be specialized to perform specific tasks such as filtering,
activating, normalizing, or pooling information: so many possibilities
expressing the vast diversity of potential ANN architectures.

3
Simplified schema of an ANN’s architecture: layered neurons passing
data from inputs to outputs.

During the training of an ANN, data flows through its network, while
the neurons’ weights are gradually tuned, using a feedback loop
mechanism. Learning proceeds in fact as a simple repetitive back-
and-forth (Fig. 4): first, the computation flows from input to output, in
a process called “feedforward”. Then, as this computation reaches
the end of the network to produce a prediction, the result’s accuracy
is assessed, triggering a corrective feedback loop also called
“backpropagation”. This time, the information flows in the opposite
direction through the network, while assigning a correction to certain
neurons. With ANNs, feedforward and backpropagation get repeated
multiple times so as to gradually tune the network and increase its
general accuracy. This simple mechanism today powers ANNs used
across countless research projects, from basic investigations to
larger deep learning experiments.

4
Training an ANN: feedforward (input to output) and backpropagation
(output back to input).
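This feedforward/backpropagation cycle can be condensed into a small hand-rolled network. The sketch below is illustrative only: the layer size, learning rate and epoch count are arbitrary choices, and the XOR task stands in for any mapping an ANN might learn.

```python
import math, random

# A minimal ANN with one hidden layer, showing feedforward and
# backpropagation explicitly. Task, sizes and rates are illustrative.
random.seed(42)
H = 8                                    # hidden neurons
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
sig = lambda v: 1 / (1 + math.exp(-v))

def forward(x):
    # Feedforward: the computation flows from input to output.
    h = [sig(w1[i][0] * x[0] + w1[i][1] * x[1] + b1[i]) for i in range(H)]
    y = sig(sum(w2[i] * h[i] for i in range(H)) + b2)
    return h, y

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
loss = lambda: sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = loss()
lr = 0.5
for _ in range(20000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: the error flows backward, correcting weights.
        dy = (y - t) * y * (1 - y)
        for i in range(H):
            dh = dy * w2[i] * h[i] * (1 - h[i])
            w2[i] -= lr * dy * h[i]
            w1[i][0] -= lr * dh * x[0]
            w1[i][1] -= lr * dh * x[1]
            b1[i] -= lr * dh
        b2 -= lr * dy
loss_after = loss()

print(loss_after < loss_before)
```

Each pass through the data repeats the same back-and-forth described above, and the total error drops as the weights are tuned.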

Considering the end task performed by a model can finally provide
an alternate way to sort existing machine learning-based technolo-
gies. To name only a few that this book will illustrate in upcoming
chapters: convolutional neural networks (CNN), graph neural networks
(GNN), generative adversarial networks (GAN) and variational auto-
encoders (VAE) represent a non-exhaustive list of machine learning
architectures, tailor-made for specific applications and data formats.

Convolutional neural networks (CNNs) are an essential category,
whose recent developments have profoundly changed the course
of machine learning. These architectures have been crafted for the
treatment of visual imagery. The notion of “convolution” is key to their
success; using a 2D patch of parameters called “kernel”, slid across
input images, convolutions are a better fit to process visual data than
standard neurons found in ANN architectures. Convolutions are at
the core of image recognition technologies, video feed analysis, and
numerous other applications; ImageNet, a groundbreaking research
project from the 2010s presented in a previous chapter, is in fact built
on this specific technology.
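The notion of a kernel slid across an image can be shown in a few lines. The image and the edge-detecting kernel below are invented for illustration; in a real CNN, the kernel values are learned during training rather than fixed by hand.

```python
# A "convolution" in its simplest form: a 3x3 kernel slid across a 2-D
# image, producing one value per position. This is the core operation
# CNNs stack and learn; here the kernel is fixed for clarity.
image = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]   # responds strongly to vertical edges

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            # Element-wise product of the kernel and the image patch.
            row.append(sum(img[i + a][j + b] * k[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

result = convolve(image, kernel)
print(result)  # → [[0, 3, 3, 0], [0, 3, 3, 0]]
```

The high values sit exactly where the dark-to-light edge runs through the image, which is why stacks of such learned filters are so effective for visual data.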

Graph neural networks (GNNs) are another avenue of research
within machine learning. Their purpose is to allow the processing
of graph data. Various topics necessitate working with topological
information; that is, data formatted as graphs, collections of nodes
and connections, where the 2D or even 3D layout in space of these
architectures is key. Molecules, structures, even architectural
programs can be represented using graphs, and GNNs have been
developed to better parse such complex topological information.
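A single round of neighbor aggregation, the elementary operation many GNNs build upon, can be sketched as follows. The four-node graph and its feature values are hypothetical; they could stand, say, for rooms in an architectural program graph.

```python
# One round of "message passing": each node updates its feature by
# averaging it with its neighbors' features. Graph and values are
# illustrative only.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # a 4-node cycle
features = {0: 1.0, 1: 0.0, 2: 1.0, 3: 0.0}

neighbors = {n: [] for n in features}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

def propagate(feat):
    # Aggregate each node's own feature with its neighbors', then average.
    return {n: (feat[n] + sum(feat[m] for m in neighbors[n]))
               / (1 + len(neighbors[n]))
            for n in feat}

features = propagate(features)
print(features)  # each node pulls toward its neighborhood's average
```

Repeating this step lets information travel further across the graph; a trained GNN additionally learns weights governing how the aggregation mixes each node's own feature with its neighbors'.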

Other architectures use a combination of multiple models to perform
even more challenging tasks. In this respect, generative adversarial
networks (GANs) are a recent revolution that still brings meaningful
results today. GANs focus on the generation of data across multiple
formats (images, graphs, etc). Their architecture was first theorized by
Ian Goodfellow in 2014: in order to synthesize images, GANs use two
competing models, a “generator” and a “discriminator”, to steer the
learning process (Fig. 5). Given a database of images, for instance,
the discriminator works on improving its ability to recognize the data,
while the generator works on creating synthetic images. At the same
time, the discriminator is used to provide feedback to the generator
on the quality of its output images.

5
Architecture of a standard GAN model: the discriminator classifies
training data as real and the generator’s output as fake, while its
feedback steers the generator.

This back-and-forth between the generator and the discriminator
allows for the progressive improvement of image generation
throughout the training phase of a GAN model. This technology
represents one of this decade’s most significant breakthroughs: it is a
drastically different approach to the very concept of learning, building
upon the feedback between two agents, rather than the self-
correcting loop of a single model. It is also a leap forward regarding
the quality of the results.

Another technique, finally, tackles a similar task: variational auto-
encoders (VAEs). They offer an alternative way of using AI for
generating information in various formats. This model approaches
learning as a process of synthesizing information: with VAEs, learning
is conceived as a task of condensing information so as to extract the
essential features, before decompressing it back into its initial form.
To that effect, VAEs combine two distinct models: an “encoder” and
a “decoder” (Fig. 6). The first one abstracts the data by compressing
it while keeping some of its essential dimensions. The decoder then
unpacks the information by bringing it back to its initial format. As it
performs this decompression, the decoder can generate variations
of the modeled phenomenon. In other words, VAEs can emulate
a given phenomenon by generating multiple different versions of it.
This ability of VAEs to model and render diversity found in the data
constitutes their “generative” potential. Over the past few years, VAEs
have found application for instance in certain creative fields, such
as furniture design, fashion, photography, architecture, and others,
providing in each domain large quantities of design options.

6
Typical architecture of a VAE model: input data passes through the
encoder into compressed information, then through the decoder back
to output data.
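The compress-then-decompress idea can be sketched with a stripped-down linear autoencoder. This is a simplification of a VAE (it omits the probabilistic sampling step), and all data and dimensions below are illustrative: two-dimensional points lying on a line are encoded to a single latent number, then decoded back.

```python
import random

# 2-D observations lying exactly on the line y = 2x; a single latent
# number should suffice to describe them. Values are hypothetical.
random.seed(3)
data = [(t, 2 * t) for t in (random.uniform(-1, 1) for _ in range(50))]

# Encoder: z = e1*x + e2*y.  Decoder: (d1*z, d2*z).
e1, e2, d1, d2 = 0.5, 0.5, 0.5, 0.5
lr = 0.01

def reconstruction_error():
    total = 0.0
    for x, y in data:
        z = e1 * x + e2 * y
        total += (d1 * z - x) ** 2 + (d2 * z - y) ** 2
    return total / len(data)

before = reconstruction_error()
for _ in range(500):
    for x, y in data:
        z = e1 * x + e2 * y                      # compress
        rx, ry = d1 * z, d2 * z                  # decompress
        gx, gy = 2 * (rx - x), 2 * (ry - y)      # reconstruction gradients
        gz = gx * d1 + gy * d2
        d1, d2 = d1 - lr * gx * z, d2 - lr * gy * z
        e1, e2 = e1 - lr * gz * x, e2 - lr * gz * y
after = reconstruction_error()

print(after < before)
```

Once trained, feeding the decoder new values of `z` generates fresh points along the learned line: the decompression step is what makes the architecture generative.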

The Latent Space

To complete this short introduction to some of AI’s most fundamen-
tal concepts, touching upon the notion of latent space, even briefly,
remains essential. In short, the latent space is a continuous domain,
sitting at the heart of most AI models today. It encapsulates a com-
pressed and simplified representation of the data presented to the
model during training.

At this point, it is worth mentioning a few of its most important char-
acteristics. First of all, if an AI model is properly trained, then each
dimension of its latent space will correspond to a feature of impor-
tance of the observed data. These dimensions will ideally also be in-
dependent of one another. Then, it is crucial to note that the feature
each dimension respectively encodes is not directly at the user’s
discretion, but rather gradually defined by the model during train-
ing. Finally, in latent space, similarity is translated into proximity: in
other words, things that look the same are close to one another in
this n-dimensional domain. Looking at an example will clarify the
latent space’s behavior and relevance. If a model were trained to
generate images of characters from various fonts, its latent space
would capture the different features of font making. Among many


of them, italic, size and thickness are criteria that the model could
pick up and assign to specific dimensions (Fig. 7). Fonts would be
placed in latent space with respect to these dimensions. A “walk”
in latent space, meaning the fact of choosing points along a path in
latent space, would yield different fonts as the model’s output (here
[1], [2], and [3] in Fig. 7). More interestingly, the balance of features
in the generated fonts would be consistent: [2], selected between
[1] and [3], would return a font blending together the properties of [1]
and [3]. The richness of the features that the latent space can cap-
ture, and the legibility of its structure, providing an easy-to-navigate
n-dimensional map, makes it both a powerful tool to control the gen-
eration of complex designs and a domain of investigation in itself.
The following chapters and articles will illustrate and explore these
characteristics in more depth.

7
Diagram of a “walk” in latent space, and the sampling of three
distinct font styles.
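In its simplest form, such a walk is a linear interpolation between two latent points. The three-dimensional vectors below are invented for illustration; with a trained decoder, each intermediate point would decode to a design blending the endpoints' features.

```python
# Two invented latent points; each dimension stands for a learned
# feature (e.g. italic, size, thickness in the font example).
z1 = [0.0, 1.0, -0.5]
z3 = [1.0, 0.0, 0.5]

def interpolate(a, b, t):
    # Linear interpolation: t = 0 returns a, t = 1 returns b.
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

z2 = interpolate(z1, z3, 0.5)   # the halfway point along the walk
print(z2)  # → [0.5, 0.5, 0.0]
```

Because similarity translates into proximity, `z2` sits between its neighbors and would yield a design mixing the properties of [1] and [3] in balanced proportions.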

For designers, the latent space can serve two complementary yet
distinct purposes; namely, imitation and exploration. Expanding on
the above example, Figures 8 and 9 display results of letters generated
using the latent space of a model trained to that end. The precision of
the images in Figure 8 shows how realistic this “imitation game” can
be. Rather than offering unique designs, this replication can serve de-
signers punctually, by providing adequate proposals across various
contexts. However, another reality, maybe more immediately relevant
to designers, is the possibility of exploring the formal richness lying in
the “margins”. In between the rigid categories of fonts, for instance,
a wealth of hybrid styles can be harvested. Figure 9 presents results
selected across the same latent space and leaves us with a vastly
different impression: by sampling specific moments of the latent
space, the collection of generated types challenges expected classi-
fications and typologies. The letters obtained blend together features
picked up from different fonts while merging them into new designs.
Far from merely replicating fonts, this exploration unveils alternative
font styles and letters, derived from the initial training set. In that way,
AI at times can become a source of inspiration, and a tangible tool set,
assisting practitioners in their search for new designs.

8
Imitation: letters with somewhat regular font styles, generated by
sampling the latent space of a trained model.

9
Exploration: letters displaying hybrid font styles, generated by
sampling the latent space of the same model.


From AI to Architecture

The architectural discipline has had the opportunity for the past few
years to benefit from the accelerated development of AI. Models,
conceived in other fields, for different applications, have been used
and repurposed by architects and researchers across various use-
cases. The complexity of certain concepts and tasks in Architecture
offers multiple potential avenues of exploration for the different
technologies presented earlier. With the aim to evidence this reality,
and in order to bridge the gap between a high-level understanding
of AI and the tangible reality of Architecture, the following example
will provide a didactic demonstration. In this experiment, an AI model
is taught to arrange rooms within a predefined apartment footprint,
while respecting the position of the entrance door and that of the
facade windows. Using a database of image pairs (Fig. 10), the model
progressively learns the mapping from one situation to the other,
from an empty footprint to a fully programmed apartment floor plan.
To evidence the gradual acquisition of this task by the model, Figure
11 displays results obtained all along the training phase. Each image
corresponds to an attempt by the model at organizing the space for
an input footprint, given its current learning stage. From the first steps
of the learning process (top left corner of the figure) all the way to
the last hours (bottom right corner), the synthesized image quality
gradually increases.

10
Typical pair of input-output images, taken from a training set. The
legend distinguishes Footprint, Entrance, Window, Corridor, Bedroom,
Bathroom, Closet, Living room and Kitchen.

11
Typical training sequence, from “Training Start” to “Training End”.

Figure 12 displays four snapshots, taken at four distinct moments of
the training. They provide a clearer overview of the model’s gradual
improvement over time. From Image [A] to Image [D], a progressive
improvement of the space layout can be noticed. The first attempt
([A]) only emulates the footprint of an apartment. Then the notions
of facade and program slowly emerge ([B]), without any spatial
coherence yet. Later ([C]), the model acquires the principle of space
enclosure, as partitions between rooms are almost systematically
added, and the adjacencies between them become clearer.

12
Four snapshots [A] to [D], sampled at various moments of the training
process, displaying the model’s gradual improvement over time.
Finally, once the training is completed ([D]), the model offers
a floor plan that seems to take into account basic space layout
rules: facade openings, almost valid adjacencies between rooms,
initial space partitioning, etc. Although it represents a major
improvement over previous generative methods, this process


is not free of obvious limitations. First of all, the generated
plans do not qualify as valid architectural floor plans as such:
obvious flaws are still noticeable (aspect ratio of given rooms,
relevance of certain adjacencies, etc), while these solutions
only answer to formal criteria; matters of contextual relevance
are set aside in this particular example. These generated floor
plans however can constitute a first draft, an initial proposal, to
be corrected and augmented by the architect. These images can
act as initial options, meant to nourish the design process early
on. Additionally, the resulting relevance of the generated forms
also greatly depends on the quality of the data provided to the
machine during its training.

This consideration is yet another reminder of the importance of
our expertise to both train and feed such models. Finally, AI is not
free of its own bias, inherent to its learning strategy. In simpler
terms, a trained model might have captured certain assumptions
found among the data. Since the learning process is not entirely
transparent to the end user, such bias can go unnoticed. Part
of the difficulty of training AI models lies in our ability to detect
these biases and correct training accordingly.

However, for all these challenges, the above example still sets the
stage for AI’s contribution to Architecture. Beyond its didactic
purpose, it shows the tangible results that this technology can
bring when applied to problems specific to Architecture. In the
following pages, this chapter will cover other use cases and
experiments, at many different scales, and present a curated
landscape of AI’s potential for the discipline.


Urban Scale

The study of the urban condition today is a thriving area of research
and experimentation. Yet the layered intricacy of urbanization
patterns confers a deep complexity to this topic. Although a wide
diversity of frameworks have already offered comprehensive
methodologies to describe the ramifications of the urban fabric,
significant improvements are still awaited. On this topic, AI’s recent
results have positioned this technology as a promising alternate
avenue of experimentation.

At the scale of the territory, urbanization patterns can take very
different forms. The variety of city fabrics, conditioned by the
surrounding landscape, infrastructure, and general location can
display significant diversity among urban scenarios. Even though
modeling this complexity generally represents an arduous task,
the gradual improvement of AI models over the past few years has
provided architects and urban planners with a renewed set of tools
to study city patterns. The Urban Fiction2 project represents a step
in this direction: a model, trained on the satellite imagery of major
cities, can adapt city-specific textures to new user-defined patterns.
As shown in Figure 13, the transposition of characteristics from one
context to another demonstrates the extent of this model’s agility
at mimicking specific urban fabrics. Although speculative, the
obtained results already forecast the potential contribution of this
approach.

2. Imaginary Plans, M. del Campo, S. Manninger, 2019.

13
City-specific generated urban patterns, from the Urban Fiction
project. By M. del Campo & S. Manninger.

14
Urban Grid – New York Style | Urban Grid – London Style.

As the urban condition merges together multiple layers of
information, other research initiatives attempt to disentangle
them as distinct levels to then explore AI’s relevance to each one,
in isolation. For instance, by focusing solely on the structure of
road networks, the Neural Turtle Graphics (NTG)3 model attempts
to learn and replicate the properties of circulation paths across
chosen cities. Figure 14 displays some of NTG’s results, where a
few urban grids have been generated, mimicking the road network
style of specific cities; namely here in New York and London.

3. Chu et al., “Neural Turtle Graphics for Modeling City Road
Layouts”, In Proceedings of the IEEE/CVF International Conference
on Computer Vision, pp 4522-4530, 2019.

14
City-specific street network generation using the NTG model.
By Nvidia Research.

Working as a negative of NTG, other initiatives have invested in the
study and generation of urban block typologies. From Barcelona
to New York or Paris, the form and organization of these blocks
can vary drastically and display distinct infill strategies. Similarly
to previously described experiments, these projects have investi-
gated the generation of urban blocks4 for given cities, but also for
specific immediate surroundings.

4. See Rhee et al, 2021 / Tian, 2021 / Fedorova, 2021.

As a matter of fact, the urban scale today witnesses a significant
number of applied AI research initiatives; among many factors,
this momentum benefits from the ever-increasing amount of data
documenting cities' multiple information layers: road networks,
built environment, topographical data, etc. Mainstream mapping
portals, like Google Maps and OpenStreetMap, as well as GIS
information gathered by institutional players, offer an almost
endless source of high-quality data; a dynamic that today bolsters
AI research's application to the urban condition.

References & Resources

Urban Fictions
M. del Campo & S. Manninger, 2019

Neural Turtle Graphics for City Road Layouts
Chu et al., 2019

Floor Plans

Closer to Price's Generator experiments, the challenge of internal
space planning stands as one of Architecture's core concerns. The set
of constraints pressuring this particular scale originates from various
directions: the program, the structure, the facade openings, the
building's circulation, etc. Consequently, the layout of internal spaces
tries to balance and resolve these diverse influences, while translating
the architect's intent; a degree of complexity placing any technology
aspiring to address internal space planning under acute pressure.

5. See Nauata et al., 2021 / Hu et al., 2020 / Chaillou, 2019.

In this respect, AI represents a leap forward. GAN models, for
instance, have proven to be surprisingly adequate. Using a broad
database of formatted internal layouts, recent research projects have
studied this model's ability to learn space programming and
furnishing patterns5. Figures 15 and 16 display some of their results;
these image-pairs showcase various mappings, from empty apartment
footprints to their programmed counterparts, with respect to specific
constraints (facade openings in Figure 15, facade openings and
bedroom position in Figure 16). With Figure 17, these models are
applied to space furnishing. Given a programmed layout, rough
furniture outlines are placed to emulate potential setups.

Figs. 15-17: Internal layout generation. "Input-output" pairs, for various user-specified constraints. By S. Chaillou. Legend: Footprint, Bedroom, Entrance, Window, Livingroom, Bedroom, Bathr./Restr., Kitchen, Circulation, Closet, Washing Room.

Fig. 18 legend: Footprint, Door, Window.

These generated floor plans provide an example of AI's ability to lay
out different functions under various input constraints, after only
a few hours of training. However, evaluating these results against
Architecture's multiple requirements calls for a more nuanced
assessment: given their imprecision, these floor plans should be
considered as first drafts or initial attempts at finding a space planning
strategy, rather than as final designs. In other words, these models,
in the hands of architects, can provide a form of drafting assistance
whose results call for further tuning and refinement.

To formalize this rather iterative design process, a basic web
interface can act as a simple yet efficient device. Figure 18 exhibits
such an experiment, where a trained model, running in the background
of a streamlined web app, reacts to users' graphical input. Each frame
displays a step of a typical design sequence. Each time, the architect's
intent is sketched in the left-side window while, simultaneously,
the machine computes a solution displayed on the right. By drafting
the constraints on the left, the architect iteratively regenerates
solutions to narrow down the search and find an adequate typology.

Fig. 18: 4 steps of a generation sequence, using a simple web app interface. By S. Chaillou.

Space planning is in fact today a growing area of application for AI
research. Many projects over the past 5 years have significantly
pushed the envelope, mostly tackling controlled environments like
apartment layout or office space zoning. However, as this area of
research matures, these models have the potential to be applied to
more complex programs with even more challenging constraints.

References & Resources

AI & Architecture, an Experimental Perspective
S. Chaillou, Towards Data Science, 2019

ArchiGAN: a Generative Stack for Apartment Building Design
Nvidia Developer Blog, 2019

Facades

More than a mere "wrapper", a building envelope is as much a source
of constraints and challenges for designers as it is an expressive
dimension of the built environment, conveying concepts such as style,
typology, program, etc. Addressing this complex yet essential scale
has therefore been on the roadmap of researchers over the past few
years. AI has thus gradually found its way to the generation of building
exteriors.

An early attempt has paved the way to current experiments on the
topic: the application of Pix2Pix6 (a GAN model developed in 2017) to
a dataset of annotated facades. This approach plays off the
discretization of facade design into a composition of simple
structuring elements (windows, cornices, pilasters, doors, balconies,
etc.). The model then learns the mapping from an image representing
the layouts of these elements, encoded using a vivid color code, to
the facade's real picture. Once trained, this network can texture a
color map into an almost realistic-looking building and harmonize its
style across the image (Fig. 19). At that point, an architect can use the
model by creating new compositions, and generating somewhat
realistic images of facades, prefiguring early on a given design's
potential appearance.

6. Isola et al., "Image-to-image translation with conditional adversarial networks", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125-1134, 2017.

Fig. 19: Series of generated facades. Each pair displays the "input" (left) and "output" (right) synthesized by the model. By Isola et al. Legend: Facade, Molding, Cornice, Pillar, Window, Door, Sill, Blind, Balcony, Shop, Deco, Background.
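The color-coding step at the heart of this pipeline is easy to make concrete. The sketch below uses a hypothetical palette (the annotated-facades dataset defines its own colors) to show how an integer label map becomes the color image a Pix2Pix-style generator consumes, and how that encoding is inverted by nearest-color lookup.

```python
import numpy as np

# Hypothetical colors, one per facade element class (illustrative only).
PALETTE = {
    0: (0, 0, 170),     # facade
    1: (0, 85, 255),    # window
    2: (0, 170, 255),   # door
    3: (255, 85, 0),    # balcony
}

def labels_to_rgb(labels):
    """Map an integer label image (H, W) to the color-coded
    image (H, W, 3) fed to the generator."""
    lut = np.zeros((len(PALETTE), 3), dtype=np.uint8)
    for class_id, color in PALETTE.items():
        lut[class_id] = color
    return lut[labels]

def rgb_to_labels(rgb):
    """Invert the encoding by nearest palette color."""
    colors = np.array(list(PALETTE.values()), dtype=np.int64)
    dists = np.linalg.norm(rgb[..., None, :].astype(np.int64) - colors, axis=-1)
    return dists.argmin(axis=-1)

labels = np.zeros((8, 8), dtype=np.intp)
labels[2:4, 2:4] = 1   # a window patch on a facade background
rgb = labels_to_rgb(labels)
```

Once trained, the generator maps such color images to photographic facades; composing new label maps is therefore the architect's entry point into the model.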


Fig. 20 panels: Initial Massing / Detailing / Texturing.

However, the exterior of buildings wraps around more than single
facades; entire city blocks often share a similar typology that adapts
to the massing's variations. Broadening the scope of generative AI
to tackle this reality has recently led to more comprehensive results.
Projects like FrankenGAN7, for instance, have provided promising
demonstrations in this area. By taking as inputs the raw 3D massing
of city blocks and a facade style reference, this model generates a
highly detailed and textured envelope for all buildings. This approach
is both style informed and geometry specific, which in turn creates
strikingly realistic building facades. Figure 20 details the generation
pipeline, from a raw massing to a detailed one, while Figure 21 shows
more results for different styles and city block typologies.

7. Kelly et al., "FrankenGAN: guided detail synthesis for building mass-models using style-synchronized GANs", 2018.

Fig. 20: Steps of FrankenGAN's generative pipeline. By Kelly et al.

Fig. 21: Various textured city blocks, results of FrankenGAN. By Kelly et al.

The subject of facade generation is in fact one of those areas
where the scientific literature shows potential alignments between
Architecture and other fields (the video game industry, satellite
imagery, etc.). If their underlying motivations are quite distinct
from Architecture's agenda, as these domains refine and
open-source their research, Architecture is likely to benefit from
new tools, while diverting their usage to serve the discipline.

References & Resources

Image-to-Image Translation with Conditional Adversarial Nets
Isola et al., 2017

FrankenGAN
Kelly et al., 2019

Perspectives

The representation in perspective of an architecture achieves more
than the transcription of its 2D projections into 3D. It is also the
translation of its textures, lighting, and general atmosphere,
conveying the project's potential perceived experience. When
computers are used to achieve this task, image quality and realism
remain closely tied to the availability of computing power and to the
extreme detailing of geometries by architects. By providing an
alternate approach, AI has recently proven its ability to drastically
reduce the computational time of renderings while allowing the
inference of certain levels of detail.

8. K. Steinfeld, "GAN Loci", 2019.

GAN Loci8, a project realized in 2019, represents a step in this
direction. This piece of research (Fig. 22) explores the possibility
of transforming perspective views of initially white and neutral
volumes into photorealistic urban scenes. Specifically, for a given
perspective view, GAN Loci attempts to add facade-like textures,
pathways, street furniture, pedestrians, cars, etc. More interestingly,
the project goes even further to train different models on specific
types of urban environment: suburban, public park, etc. To illustrate
this reality, Figure 23 displays the results of the two different models,
obtained for the same input image.

Fig. 22: Generated urban scenes, for different styles, given a similar input image. By Kyle Steinfeld.

Fig. 23: Various generated urban scenes, results of GAN Loci. By Kyle Steinfeld. Panels: Input / Output – Park Style / Output – Suburb Style.

However, since GAN Loci, projects like Pix2PixHD9 have addressed
the same types of representation and, using a different methodology,
represent a striking improvement in image quality (Fig. 24). More
recently, GauGAN (2019)10 scales this approach to a more
generalizable model, packaged into a streamlined interface (Fig. 25).
The user is in charge of laying out patches of colors, corresponding
to specific semantic categories (water, mountain, sky, etc.), while
GauGAN almost instantaneously offers a rendered translation in the
right-side window.

9. Wang et al., Nvidia Research, 2019.

10. Park et al., Nvidia Research, 2019.

Fig. 24: Urban scene, synthesized by Pix2PixHD. By Wang et al.

Fig. 25: Generated landscape (right), given input mask (left), in GauGAN's interface. By Nvidia Research.

Evidently, to fully support Architecture, these models still need
to improve the precision of their outputs. However, these early
experiments prefigure some of AI's potential contributions. On
the one hand, they dramatically reduce the computational time of
rendering, from sometimes multiple hours to a few seconds, or even
less. On the other, they allow simulating the detailing of scenes, with
respect to specific learned styles. This latter aspect perhaps opens
one of AI's most interesting contributions to Architecture.

References & Resources

GAN-Loci
K. Steinfeld, Towards Data Science, 2019

GauGAN Demo
Nvidia Research, 2019

Structures

Structural integrity is yet another challenge for architectural design.
Finding the right form, able to handle a given building's loads, can be
a considerable task. If the construction industry often defaults
to standard structural typologies (pre-set modules, frames, etc.),
custom structures can sometimes be a better fit for certain projects.
However, exploring new possibilities comes at a cost, as new
designs require studying and simulating their underlying structural
performance. In this respect, AI can significantly help architects
explore alternate structural options, while being able to afford their
respective analysis.

11. See Hoyer et al., 2019 / Miguel et al., 2019 / Mueller and Danhaive, 2020.

12. R. Danhaive, C.T. Mueller, "Design subspace learning: Structural design space exploration using performance-conditioned generative modeling", Automation in Construction, 2021.

The challenge of structural form finding is today a thriving area for
applied AI research. Multiple projects11 tackle AI's potential to
generate original structural designs, using models such as GAs
(genetic algorithms), VAEs, GANs and others. Using VAEs for
instance, research developed at MIT12 investigates how various
structures can be generated, while ensuring high-performance
standards. Figure 26 displays some of their results: the exploration
of the model's latent space can yield a collection of diverse, and at
times counterintuitive, truss structures. Each option, however, stays
within strict performance bounds.

Fig. 26: VAE-generated structures. By Mueller & Danhaive, MIT.
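The sample-decode-filter loop behind this kind of performance-conditioned exploration can be sketched in a few lines. Everything named below is a stand-in: a made-up decoder replaces the trained VAE and a made-up deflection proxy replaces the structural solver, but the loop mirrors how a latent space is searched under performance bounds.

```python
import numpy as np

rng = np.random.default_rng(11)

def decode(z):
    """Stand-in decoder: maps a 2-D latent vector to a 'design' described
    by two member depths (a trained VAE would decode full geometry)."""
    return 0.3 + 0.1 * np.tanh(z)

def max_deflection(design):
    """Toy performance proxy: deeper members make a stiffer structure."""
    return 1.0 / np.sum(design)

# Sample the latent space, decode each point, and keep only the options
# that stay within a chosen performance bound.
candidates = [decode(rng.standard_normal(2)) for _ in range(200)]
feasible = [d for d in candidates if max_deflection(d) < 1.8]
```

Replacing the random sampling with a walk along chosen latent directions turns the same loop into the kind of guided exploration shown in Figure 26.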


Fig. 27 panels: Initial Shell Structure / Material Distribution / Thickness Distribution (5 cm to 20 cm).

More generally, this approach to structural form finding provides
users with the ability to consider design options potentially far
removed from canonical patterns, while ensuring their respective
efficiency.

Additionally, as structural form finding is conditioned by the
repartition of loads in resulting shapes, predicting the structural
effort of a given design and the necessary distribution of material is
an area in which AI has proven more than relevant in recent years.
Instead of the traditional approach, using topology optimization, a
precise yet computationally expensive methodology to simulate a
design's shape given the path of its internal loads, AI models can
be used to predict lighter material repartitions much faster. Recent
research from MIT13 demonstrates this possibility. Figure 27 displays
some of their results: for a specific shell structure, an AI model
predicts an optimal material distribution pattern, which then informs
the layout of the shell's varying thicknesses.

13. From R. Danhaive's PhD thesis, MIT, 2020.

Fig. 27: AI-enabled prediction of an optimal material distribution for a shell structure. By R. Danhaive, MIT.

Both as a means of exploration and as an analytical lens, AI provides
structural design with a renewed set of tools. In Architecture, if the
pressure of structural concerns often hinders the design process,
current models being developed could help address this challenge.
Finally, if the results displayed in this segment are at an experimental
stage, their integration with mainstream design tools is already
underway. As a result, a more integrated approach to structural
analysis in Architecture could offer greater autonomy to practitioners.

References & Resources

Designing with Data
N. Brown & C. Mueller, 2017

Digital Structures Lab

Predictive Simulations

In times of growing ecological awareness, the built environment is
expected to keep close tabs on its carbon footprint and energy
performance. Part of Architecture's strategy in addressing this
concern has been to simulate expected building efficiency early in
the design process. Solar radiation, wind flows and indoor thermal
comfort are among the many dimensions that the industry's
simulations try to capture, to later inform the design phase as much
as possible. However, given the budget and knowledge required to
tap into these resources, such tools are almost exclusively used by
trained experts. In this respect, AI has been able to lower the
threshold and might be about to allow the dissemination of cheap,
fast and simple predictive models across the industry. These
"surrogate models" gradually represent a valid alternative to
standard simulation engines. Looking at tangible examples will set
the stage for this potential substitution.

The estimation of wind flows around a project, for instance, is an
essential part of assessing its impact on the immediate surroundings.
The traditional approach, using computational fluid dynamics (CFD)
simulations, sometimes lacks accessibility, given its cost and
complexity. To mitigate these drawbacks, researchers have been
training

Fig. 28 panels: Simulated Wind Flow (physics-engine simulated result) / Predicted Wind Flow (AI-generated result), each with wind direction indicated.

AI models to predict the map of potential wind flows in a specific
region, based on a simple site layout and its orientation (Fig. 28).
These models, although less accurate than actual simulations, are at
times sufficient to assess a design's efficiency, or to create
benchmarks across vast collections of potential design options.

Fig. 28: Comparison between actual and AI-predicted wind flow. By Spacemaker Research.
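The surrogate idea itself is simple to demonstrate. In the toy sketch below, a cheap analytic function stands in for the expensive CFD solver; a polynomial fitted to a handful of "simulated" samples then answers dense queries at negligible cost. Actual surrogates are neural networks trained on large libraries of CFD runs, not polynomials, and the function here is invented for illustration.

```python
import numpy as np

def cfd_simulation(angle):
    """Stand-in for an expensive physics solver: wind-speed factor at one
    probe point as a function of incoming wind direction (illustrative)."""
    return 1.0 + 0.4 * np.sin(2.0 * angle)

# Run the "expensive" solver on a sparse set of wind directions...
train_angles = np.linspace(0.0, np.pi, 12)
train_speeds = cfd_simulation(train_angles)

# ...then fit a cheap surrogate and query it densely at negligible cost.
surrogate = np.poly1d(np.polyfit(train_angles, train_speeds, deg=6))

dense = np.linspace(0.0, np.pi, 200)
max_error = np.max(np.abs(surrogate(dense) - cfd_simulation(dense)))
```

The trade-off stated in the text is visible here: the surrogate is not exact, but it is accurate enough for benchmarking many design options quickly.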

A handful of research projects have also been developed recently to
estimate internal building performance. DaylightGAN14, for instance,
aims at forecasting the potential reach of natural light within a
project, given a floor plan footprint and its facade openings (Fig. 29).
Another project, ComfortGAN15, investigates the challenges of
predicting a building's indoor thermal comfort. Overall, as indoor
conditions deeply impact buildings' final energy efficiency, research
aimed at their forecast constitutes a growing area of investigation
today.

14. For reference, see DaylightGAN: https://github.com/TheodoreGalanos/DaylightGAN

15. Quintana et al., "Balancing thermal comfort datasets: We GAN, but should we?", in Proceedings of the 7th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation, pp. 120-129, 2020.

Fig. 29: Typical result of DaylightGAN. By T. Galanos. Panels: Input / Actual / Predicted.

In parallel to these developments, serving such models to the end
user remains another pressing challenge, vital to ensure their actual
contribution to Architecture. Over the past decade, the deployment
of online platforms has provided the adequate infrastructure to that
end: Spacemaker (Fig. 30), Covetool, Giraffe or InFraReD are only a
few examples of this growing ecosystem, offering simplified access
to AI-based predictive models.

Fig. 30: Snapshot of a wind flow prediction in Spacemaker's web app.

References & Resources

Wind Flow Prediction through Machine Learning
T. Galanos, A. Chronis, O. Vesely, AIT, 2020

Pedestrian comfort: Why wind analyses are more relevant than ever
Spacemaker's Blog, 2020
The Outlooks of AI in Architecture

A Theoretical Perspective
Alongside the mosaic of current applications, the discourse
surrounding AI's presence in Architecture is as fragmented as it is
rapidly evolving. Since the intuitions of Negroponte and Price, AI
itself has matured and improved, thus forcing the discipline to
reinterrogate the early theorists' assumptions concerning its
presence and purpose in Architecture.

To address this reality, the following chapter orients the discussion
towards three distinct avenues: AI's contribution, adoption and
prospects in Architecture. Through a collection of short articles, this
triptych curates a broad landscape of perspectives, aggregating the
visions of researchers, practitioners and entrepreneurs. Together,
their perspectives frame, evidence or challenge AI's encounter with
the discipline.
The Contribution
AI's gradual inception in Architecture calls for a broader reflection
about its actual contribution. If the collection of current experiments
certainly unveils its immediate benefits, the debates among
theorists go further, addressing the deeper purpose of AI's presence
in our field. Among many avenues, at least three salient directions
are worth considering: the form, the context and the performance.

First, the form designates the importance of formal considerations,
a long-standing tradition in Architecture; each period or movement
offers a new approach and ethos to this burning topic. As AI can help
shape our built environment, this technology reactivates the
discussion. The context, then, addresses the relationship that any
architecture entertains with its physical, cultural or symbolic
surroundings. As AI enters the field, this essential dimension of the
practice is invited to potentially evolve. The performance, finally,
refers to the imperative of specifying, if not simulating, an
architecture's expected efficiency. Since AI allows for more
accessible predictions, the influence of performance on Design is an
important reality to consider.

The following segment will unfold this triptych, giving voice to
researchers and theorists whose work sheds a singular new light on
these different topics.

The Form

Architectural Plasticity: The Neural Sampling of Forms
by Immanuel Koh, Assistant Professor at Singapore University of Technology & Design

Architecture, as a discipline of form-making, often absorbs new
ways of form-thinking from other domains, just as it adapts new
technologies as tools for form-modeling. This disciplinary tendency
for formal appropriation has to do with its underlying plastic
conception of form. With the recent breakthrough in Artificial
Intelligence, architecture has again come to embrace the potential
formal shift that deep learning might bring. Most notable is the use
of a specific class of deep neural networks, known as generative
adversarial networks (GANs). It first appeared in 2014 in a computer
science research paper1, and within just a few years, mainly through
the works of artists experimenting with GAN imagery, it quickly
entered the public imagination. The generic nature of deep learning
models lends itself well for architects to likewise appropriate GANs
in their design explorations. However, unlike the abundance,
accessibility, and ease of creating 2D datasets, few have ventured
into experimenting with GANs on 3D datasets. This is mainly due to
factors related to higher complexity in designing 3D-GANs, greater
difficulty in encoding 3D geometries, larger computational load,
longer training time, and more complicated code implementation
using distributed cloud computing. Inevitably, the current
appropriation of GANs by architects, who typically lack a strong
understanding of their underlying mathematics, architectures and
code, has only led to a plethora of initially inspirational-looking
GAN-generated 2D images, but with limited or no 3D formal
consequences. This essay aims to address this conceptual and
technical lack in 3D form-making by introducing the notion of
architectural plasticity, elaborated with two key projects, to
articulate the neural sampling2 of three-dimensionality, exteriority,
interiority, and semantics.

1. Goodfellow et al., "Generative adversarial nets", Advances in Neural Information Processing Systems, 27, 2014.

2. I. Koh, "Architectural Sampling: A Formal Basis for Machine-Learnable Architecture", PhD thesis, École polytechnique fédérale de Lausanne, 2019.
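Since the essay leans on GANs throughout, it is worth recalling the mechanism of Goodfellow et al.'s formulation: a generator G and a discriminator D play a minimax game over the value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]. The toy numbers below only evaluate this value function; the 1D samples and the hand-made discriminator are stand-ins, and no training happens here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for data: "real" samples near 0, "generated" samples near 3.
real = rng.normal(0.0, 1.0, size=1000)
fake = rng.normal(3.0, 1.0, size=1000)

def discriminator(x):
    """A fixed logistic classifier playing the role of a trained D:
    probability that a sample is real (high near the real mode at 0)."""
    return 1.0 / (1.0 + np.exp(x - 1.5))

# The two expectations of the GAN value function V(D, G):
#   E_x[log D(x)] + E_z[log(1 - D(G(z)))]
value = (np.mean(np.log(discriminator(real)))
         + np.mean(np.log(1.0 - discriminator(fake))))

# If G fooled D completely, D would output 0.5 everywhere and the
# value would drop to the game's equilibrium, 2 * log(0.5).
equilibrium = 2.0 * np.log(0.5)
```

Training alternately pushes D to raise this value and G to lower it; at convergence, the generated distribution becomes indistinguishable from the data, which is what makes the latent spaces discussed below navigable.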
From Neo-Plasticism to Neuro-Plasticity
In May 1922, at the International Artists Congress of Düsseldorf, Theo
van Doesburg, leader of the De Stijl movement in Holland, announced:
"We are preparing the way for the use of an objective universal
means of creation"3. For van Doesburg, "the universal means" is by
way of a convergence of all artistic expressions into a single style,
governed initially by the general principles of Piet Mondrian's Neo-
Plasticism, and later by those of his own Elementarism. In his seminal
1920 essay "General Principles of Neo-Plasticism", Mondrian
not only defined Neo-Plasticism, but also laid out its six principles
(or rules). In the second section of the text, titled "Neo-Plasticism
and Form", he differentiates "morphoplasticism" from his neo-
plasticism. The former refers to traditional art that is figurative and
naturalistic in the use of recognizable forms, or as he calls it, painting
"in-the-way-of-nature", while the latter refers to the "representations
of relationships" that use only abstract and pure forms, or as he calls
it, painting "in-the-way-of-art". However, it was Theo van Doesburg's
1924 manifesto "Towards a Plastic Architecture" and 1925 book
"Principles of Neo-Plastic Art" that set the stage in envisioning a
neoplastic architecture. Quoting the second and fifth points of
his manifesto: "the new architecture is elemental; that is to say, it
develops out of the elements of building in the widest sense" and
"the new architecture is formless and yet exactly defined". It is thus
the "elemental" and the "formless" that would come to characterise
the formal generative capacity of Neo-Plasticism. Yet, this form-
generativity remains inherently rule-based. Fast-forward a hundred
years: today's artificial intelligence is again "preparing the way for
the use of an objective universal means of creation". This "universal
means" is The Master Algorithm, the title of a book by professor of
computer science Pedro Domingos, who also calls it the "general-
purpose learner" or "universal learning algorithm"4. It is through
learning that the machine is to "prepare the way" in abstracting data
into algorithms, which in turn serve as the "means of creation".
The formal generative capacity of deep learning models is a result
of their neuroplasticity, a general term used in neuroscience to refer
to the rewiring of the brain and the remapping of its functions.
Analogically, instead of training a GAN model from scratch, by
simply activating or deactivating a sample set of elemental neurons
of a pretrained GAN, it could be rewired and thus made to remap
its original functions, generating new images with different
concepts5. Rather than directly editing the individual pixels of a
given image, one would instead edit the individual neurons of the
GAN to indirectly manipulate the features of any given image. Theo
van Doesburg's "plastic architecture" is recast here as a result of
the plasticity afforded by deep neural networks: a conceptual shift
from neoplasticism to neuroplasticity.

3. U. Conrads, "Programs and Manifestoes on 20th-Century Architecture", MIT Press, 1971.

4. P. Domingos, "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World", Basic Books, 2015.

5. D. Bau et al., "Rewriting a Deep Generative Model", arXiv:2007.15646 [cs], 2020.
Form-Sampling, Not Form-Finding

The design of a chair has a long history of being used by architects as
a potent exercise to experiment with formal possibilities engendered
by new architectural concepts. The insights made would then be
somehow transferred to the building domain, typically through an
indirect mental analogical translation in scales and functions. The
project 3D-GAN-Ar-chair-tecture (2020) from the Neural Sampling
Series takes a similar formal trajectory but implements a custom
end-to-end 3D-GAN for a direct transference, simultaneously
short-circuiting the typical bottleneck encountered by architects in
trying to interpret 3D geometric interiority and exteriority from flat
2D GAN-generated images. Unlike existing small-data approaches
in form generation, such as parametric modelling with McNeel
Grasshopper3D or form finding with topology optimisation in
Autodesk Generative Design, GANs are data-intensive approaches.
A large input sample size is first required to effectively learn their
implicit features prior to generating any novel output samples. Three
3D-GAN models were trained: one with 10,000 chairs from ShapeNet,
another with 4,000 high-rise building massings, and a last one with a
combination of both. Once trained, their respective latent spaces
were used for sampling 3D forms. The latent space is a structured
exploratory space of probable forms in high dimensions, where
similar forms can be understood as being placed close together. This
is the basis for not only generating new chairs (Fig. 1) or buildings,
but for interpolating among chairs and buildings directly (Fig. 2). In
fact, the project demonstrates that the creative use of GANs need
not be constrained "in-the-way-of-nature" (e.g., scale, category,
material and structure), since the task at hand is to explore the
architectural plasticity of forms "in-the-way-of-art", and thus the
ideation of novel forms.

Fig. 1: An uncanny "chair-ness" expressed in this 3D-printed chair, sampled from a GAN latent space trained with a dataset containing different chair designs. (3D-GAN-Ar-Chair-tecture, 2020.)

Fig. 2: A smooth GAN interpolation across forms and scales, between "chair-ness" (leftmost) and "building-ness" (rightmost), directly translated and fabricated as an array of 3D prints. (3D-GAN-Ar-Chair-tecture, 2020.)
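The latent walk behind Figure 2 is usually implemented as a spherical interpolation between two latent codes, so that intermediate points stay in the region the generator was trained to decode (plain linear blending drifts toward the origin). In the sketch below, the 128-dimensional codes are random stand-ins; each intermediate vector would be decoded by the trained 3D-GAN generator into a voxel form.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between latent vectors z0 and z1 at 0 <= t <= 1."""
    u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0  # vectors already aligned; nothing to interpolate
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(3)
z_chair, z_building = rng.standard_normal(128), rng.standard_normal(128)

# Eight steps from "chair-ness" to "building-ness".
path = [slerp(z_chair, z_building, t) for t in np.linspace(0.0, 1.0, 8)]
```

Fabricating one decoded voxel model per step yields exactly the kind of array of 3D prints described in the caption of Figure 2.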


Semantics of Interiority and Exteriority

3D-GAN-Housing (2021) from the Neural Sampling Series is
part of a larger funded research project, AI Sampling Singapore,
recently exhibited at the 17th Venice Architecture Biennale. It
demonstrates the possibility of encoding specific semantics
and 3D configurations of architectural programmes directly
into the design of the training dataset, using a custom 3D-GAN
architecture. The initial dataset consists of 5,000 3D models
sampled from Singapore's Housing & Development Board (HDB)
flats, a ubiquitous high-rise public building typology in which
80% of the population reside. A statistical exploratory data
analysis of the dataset reveals that some key massing types
can be identified, such as cluster blocks, L/U-shaped blocks,
slab blocks and point blocks, which in turn serves as a useful
design intuition when evaluating the GAN's learning and
generative capacities during the training process. The GAN model
is designed to not only learn and infer, with increasing granularity,
the heterogeneity of the visible exterior form, but also the
occluded interior spatial relations. Due to the high computational
load, the GAN model was trained continuously for days with
cloud GPUs until it reached a legible convergence (Fig. 3).
The ability of GAN models to approximate a probabilistic
and continuous distribution of the training dataset is evident
from the smooth interpolation when sampling between different
building forms (Fig. 4). The semantic and configurational
coherence can likewise be observed in the plausible 3D
relationships generated among different functional zones (e.g.,
living units, circulation, service cores and communal spaces),
locally within each unit and globally within each building.

Fig. 3: A GAN training process (top left to bottom right) showing increasingly plausible 3D housing configurations that are semantically and structurally coherent. (3D-GAN-Housing, 2021.)

Fig. 4: A GAN latent walk showing composite renderings of exteriority and interiority being smoothly interpolated while maintaining the learnt "housing-ness". (3D-GAN-Housing, 2021.)

When forms are no longer modelled directly, but sampled
indirectly, the architect will have to harness the plasticity of
deep neural networks in thinking about forms, and perhaps also
in engendering a new aesthetic, again paving the way towards a
plastic architecture, but now in the age of Artificial Intelligence.

The Context

The Sorcerer's Apprentice
by Kyle Steinfeld, Associate Professor at U.C. Berkeley

Recent developments in AI threaten to dislodge some of our basic
assumptions about the nature and purpose of computational
design tools. Creative designers ought to welcome the disruption.
The use of computational tools in design has long been understood
through the metaphor of computer "assistance", wherein software
stands by as an aide while we work on our design tasks. Affirming
this metaphor, designers have come to expect that computers can
be effective assistants insofar as they facilitate what has been
described as a "reflective conversation" between an author and the
salient materials of a design situation. But if design activity may be
understood as a conversation with a willing assistant, then we must
ask: what is the nature of this conversation? What is the topic, the
context, and what are the terms by which participants converse?
The variety of possible answers to such questions reveals that
computational tools are not universal, but rather are cultural
products strongly related to the details of the contexts in which
they are produced and those in which they are applied. And yet,
despite the contextual nature of computational design, we find that
a single paradigm dominates the contemporary architectural toolkit.

The Contribution

For more than 50 years, design technologists have struggled to coax computers to become better design aides. While a survey across this
period reveals only mixed success, there are a number of notable
bright spots. These bright spots – those times at which design tech-
nology has enjoyed a deep and wide-ranging impact on the culture
and practice of architecture – have come when we presupposed that
design is a rational activity, and is therefore best supported by a ratio-
nal computational partner. This is how most contemporary tools for
computer-aided design are understood today, and we find all around
us examples of such an approach.

From parametric modeling, to BIM, to simulations of building performance, most CAD software seeks to extend our capacity for reasoning
about design in an analytical and deliberate way. Parametric modeling, for
example, is a particularly insistent conversation partner, demanding that
each idea flow from a more fundamental premise, thereby encouraging
us to compose increasingly elaborate chains of logic. Similarly, building information modeling (BIM) systems are meticulous bookkeepers that require us to elaborate on each tiny idea to an exhausting level of detail. As such, we may rightly understand contemporary CAD tools not as general design assistants, but rather as specific instruments of "machine-augmented rationalist design". Such tools are more lab-assistant than raconteur, and while
the conversations enabled by them are welcomed by some, when ap-
plied in the service of creative design, many of us find CAD to be boorish,
tiresome, and fatiguing. While rationalist tools excel in supporting rational
thinking, creative design requires tools for creative thinking. Late-stage
design, a phase in which designs are refined into viable products, may be
well-understood as a rational endeavor, and might therefore warrant the
support of something like a lab assistant. Early-stage design, a phase in which the contours of a design concept are not yet clear, is different. Activities that tend to guide early-stage design, such as sketching and formal
ideation, rely less on rationality and more on creativity, and therefore call
for a very different sort of partner. With this in mind, we might ask: What
are appropriate qualities for an early-stage design assistant? What sort
of conversations are useful for enhancing creativity, and which capacities
should such a tool facilitate? To address such questions, we must better
understand the nature of creative design.

In the early stages of a design, we are faced with a complex, confusing, and contradictory set of technical and social problems for which no well-defined solution approach exists. And yet, somehow in this moment, designers regularly manage to manifest solutions. These are drawn
from the materials of this problematic situation, yes, but are also assem-
bled along the lines of a new order that seemingly appears from nowhere
at all. Such creative leaps are less an act of reason and more an act of
imagination: of finding connections among ideas that were not previously
connected. While such solutions may appear to be conjured up in a “sud-
den illumination”, creativity is not magical, and many thinkers have sought
to account for the ground from which creative leaps might spring. Robin
Evans described creative design in terms of a “projective transmission”
among an author's imagination, the instruments of drawing, and the dic-
tates of geometric description. Nigel Cross referred to creative design as
an “abductive leap” that relies on connections made between the direct
experience of the author and the context surrounding a problem. What the
many accounts share is that creativity is highly contextual, that it thrives in
the recognition of new patterns, and that these new patterns are often im-
provisational modifications of those drawn from experience. With this in
mind, how might our computational assistants support creative action?


Insofar as creative design relies less on the mechanisms of reason and more on those of imagination, the failure of existing models of computer-aided design is clear. Our existing software programs are conceived of as tools of reason, not of imagination. For example, while we
have tools that help us account for the connections across a complex
program brief, we still require a tool for discovering those critical rela-
tionships that we're not yet aware of. While we have tools for finding
forms based on the optimization of structural performance, we still re-
quire a tool that allows us to discover forms that recall selections from
the enormous dataset of architectural precedent that we've inherited.
While we have techniques to collate and visualize the cacophony of
data related to an urban site, we still require a tool to identify those
qualitative patterns that lend our most successful cities a sense of
place. While we have tools conceived in the “lab assistant” model that
excel in supporting rational design, we lack a computational aide that
foregrounds the recognition of patterns, the application of precedent,
and the awareness of context. What creative design requires is a “sor-
cerer's apprentice”, and the new breed of tools based on machine
learning that are now being developed are just this.

To illustrate the potential of positioning computational tools in this way, I'll briefly present below two projects I have recently completed that rely on machine learning techniques. First, I would mention the GAN Loci project (Fig. 1), which applies generative adversarial networks (or GANs) to produce synthetic images that capture the predominant visual properties of urban places. Here, working across six cities in the US and Europe, urban image data was collected, processed, and formatted for training two well-known computational statistical models (StyleGAN and Pix2Pix) that identify visual patterns distinct to a given site and reproduce these patterns to generate new images. Imaging cities in this way represents the first computational attempt to document the Genius Loci of a city: those forms, spaces, and qualities of light that exemplify a particular location and set it apart from similar places. Seen as a design research tool, GAN Loci might better assist a general audience of designers in identifying tacit visual patterns found in our most successful cities.

Fig. 1: Generated urban scenes, results of GAN-Loci.

Next, I would mention a project still in development, currently being prototyped by a group of undergraduate students at
UC Berkeley. Sketch2Pix is an interactive application for supporting
augmented architectural sketching. Here, workflows are developed
that allow for novice creative authors to train an ML model on the
transformation from a “source” image depicting a sketch, consisting
primarily of hand-drawn lines, to a “target” image depicting an
architectural object that includes desired features such as color,
texture, material, and shade/shadow. Using this system, designers
can effectively train their own AI sketching assistant, trained on
their own data, and for their personal use in sketching architectural
forms. Some students in this course have trained assistants that
recall traditional architectural forms (Fig. 2), such as one that evokes the glazed tubular tile roof forms of traditional Chinese architecture. Others have selected influences drawn from a local site, such as a collection that is based on the forms and textures of the produce grown in the California Central Valley. Seen as a conceptual drawing tool, Sketch2Pix might allow a general audience of designers to intentionally "mix in" a specific set of formal precedents as influences in their design process, thereby allowing the discovery of new solutions inspired by a wealth of seemingly unrelated forms.

Fig. 2: Samples of the Sketch2Pix results.
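Training such a personal assistant hinges on (sketch, target) image pairs. A common trick, borrowed here for illustration rather than taken from the actual Sketch2Pix pipeline, is to derive the "sketch" side automatically from the target image with a crude edge detector:

```python
import numpy as np

def edge_map(image, threshold=0.1):
    """Gradient-magnitude edge detector: turns a rendered image into a
    line-drawing proxy, yielding (sketch, render) training pairs for an
    image-to-image model. Real pipelines use richer edge or stroke
    extraction; this only shows the pairing idea."""
    gray = image.mean(axis=2)     # (H, W) luminance
    gy, gx = np.gradient(gray)    # finite differences per axis
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.float32)

# A fake 64x64 RGB "render": a bright square on a dark ground.
render = np.zeros((64, 64, 3))
render[16:48, 16:48] = 1.0
sketch = edge_map(render)
pair = (sketch, render)   # one (input, target) example for training
print(sketch.shape, int(sketch.sum() > 0))  # (64, 64) 1
```

Each designer's personal dataset is then just a folder of such pairs, generated from whatever precedent imagery they choose.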


To conclude, while I do suggest such a metaphor for the production and application of software tools, I would like to be clear that machine
learning is not magic. One of the central roles of a design technologist
such as myself is the demystification of opaque technologies, and
the suggestion that there is anything magical about the operation of a
neural network would be malpractice. Neural nets are not “magic”, no,
but neither are computers our “assistants”. There remain, however,
some useful aspects to such ways of speaking and of thinking, and
perhaps a certain value in a designer positioning themselves as a
magician, so long as we remain willing to reveal the nature of our tricks.

The Performance
Artificial Intelligence
for Human Design in
Architecture
by Renaud Danhaive & Caitlin Mueller,
Digital Structures Lab, MIT



The Performance

Architecture as a discipline faces a growing challenge in the global climate crisis: approximately 40% of greenhouse gas emissions
are due to the built environment. This impact may only increase as
construction expands internationally to house the world’s exploding
populations. These emissions are mostly due to energy consumed in
the construction and operation of buildings and can vary significantly
based on decisions made during the design process: what materi-
als are used and how efficiently they are allocated, whether passive
thermal strategies are engaged, etc. Even the most basic choices of
a building’s massing, scale, shape, and constituent systems can have
a great influence on environmental performance. Until recently, the
climate impact of such design decisions was largely ignored, both be-
cause the scale of the problem was underemphasized and because
designers lacked tools to measure it.

The Contribution

Today, advances in computing and simulation have opened up new pathways for architects to design with data, especially data related
to building performance. Finite element analysis, building energy
modeling, and related methods are now accessible in software tools
used in architectural design, rather than merely specialist engineer-
ing packages. Additionally, methods of optimization and design
space exploration, which can guide architects towards better de-
sign options based on simulation data, are now available within the
same software frameworks. Ideally, these tools and methods should
be used as early in the design process as possible, so that the most
impactful decisions are made with performance information as a
key input. In reality, the integration of simulation data is often limit-
ed, sometimes relegated to just a validation of a crystallized design
once all major decisions have been made.

Since building performance is so important, and tools to measure and design with it are now available, why have architectural design
processes not adopted this approach en masse? Beyond general
disciplinary inertia, there are several important and fundamental
limitations to the computational design methods described above
that prevent wider adoption. First, many simulation software tools
and methods remain cumbersome to connect with and slow to run,
disrupting the pace of a fluid creative design process. Second, the
parametric design spaces needed for optimization and exploration
are often at odds with the more flexible and natural approaches
used in analog design, constricting design freedom compared to
methods architects are used to. Third, these design spaces often
contain such vast expanses of data that they are virtually impossible
for humans to understand and work with. Finally, due to the intrinsically human nature of architecture and design, there is strong resistance to any process which purports to fully automate it. Because of
these challenges, conventional computational design approaches
are underutilized in favor of processes that remain human-centric.
Recent advances in Artificial Intelligence and machine learning of-
fer a way forward, connecting the power of performance-driven
computing with the fluidity and creativity of human design. This may
appear counterintuitive: Artificial Intelligence is often thought of as a
means to replace humans in high-level tasks, for example in playing
chess or performing surgery. Indeed, in recent years some have pro-
posed AI-driven platforms that generate architectural artifacts, such
as floorplans or facades. However, when completely isolated from
human designers, such aspirations may be missing the point: the hu-
man experience of the built environment, arguably the most critical
component of architecture, will always be best understood by a hu-
man designer. In our view, the most compelling and valuable applica-
tions of AI for architecture lie instead in methods where AI systems
augment or collaborate with human intelligence. Several examples of
such methods developed in our research are described below.

Our first approach expands the method of surrogate modeling, originally developed to substitute a fast data-driven approximation for a slow simulation in black-box optimization processes (e.g. in aerospace engineering). In our work, we broaden the application beyond optimization to instantaneous performance feedback for designers in general1. Building on techniques developed for supervised learning of image-based content, such as convolutional neural networks, we have demonstrated tools that can accurately predict entire fields of simulation data in real time. For example, the entire displacement field of a structural surface or the radiant exposure of a building facade can be displayed instantaneously as a designer explores conceptual options2. Our surrogate models are also highly portable, giving non-experts access to real-time performance data in lightweight interfaces such as web portals, removing the need for cumbersome file conversions and data transfers between disconnected software programs.

1. Tseranidis, Brown, and Mueller, "Data-driven approximation algorithms for rapid performance evaluation and optimization of civil structures", Automation in Construction, 72, pp 279-293, 2016.
2. R. Danhaive and C.T. Mueller, "Structural metamodelling of shells", In Proceedings of IASS Annual Symposia, Vol. 2018, No. 25, pp 1-4, International Association for Shell and Spatial Structures, 2018.
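The surrogate idea reduces to a few lines: sample the slow solver offline, fit a cheap approximator, then query the approximator interactively. A deliberately tiny sketch, where the "simulation" is a stand-in polynomial (not our structural solver) and the surrogate is plain least squares rather than a neural network:

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a slow solver (e.g. FEA): design parameters in,
    scalar response (say, peak displacement) out."""
    return x[..., 0] ** 3 - 2.0 * x[..., 0] + 0.5 * x[..., 1] ** 2

def fit_surrogate(X, y, degree=3):
    """Least-squares polynomial surrogate: monomial features of each
    parameter up to `degree` (no cross terms in this toy version)."""
    feats = lambda A: np.hstack([A ** d for d in range(degree + 1)])
    coef, *_ = np.linalg.lstsq(feats(X), y, rcond=None)
    return lambda Xq: feats(Xq) @ coef

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))  # sampled design parameters
y = expensive_simulation(X)            # slow solver, run offline once
predict = fit_surrogate(X, y)          # instant feedback from here on

Xq = rng.uniform(-1, 1, size=(5, 2))
print(np.abs(predict(Xq) - expensive_simulation(Xq)).max() < 1e-6)
```

Swapping the least-squares fit for a convolutional network is what lets the same pattern predict entire displacement or radiance fields rather than a single scalar.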
Figs. 1 and 2: Exploration path in a latent synthetic design space constructed using a variational auto-encoder for a roof design example.

3. Brown and Mueller, "Design variable analysis and generation for performance-based parametric modeling in architecture", International Journal of Architectural Computing, 17, no. 1, pp 36-52, 2019.
4. R. Danhaive and C.T. Mueller, "Structural design space exploration with deep generative models: applications to shells and spatial structures", 2020.

We also tackle the question of design space parameterization, which traditionally requires a user to handcraft the parameters that drive variation across a design space. Using classical statistical methods and more recent developments in machine learning (including variational autoencoders), we can automatically synthesize a small number of variables that generate meaningful and large design variation in geometry while maintaining high performance3. We have demonstrated this approach on building-scale structures (Fig. 1), illustrating how, for example, the geometry of a long-span roof can be morphed in complex ways to generate many diverse forms (Fig. 2) that all perform very well, driven by a designer with only two synthesized parameters4. Finally, we are also actively developing design approaches that improve interfaces and modes of human-computer interaction. We have created tools that allow designers to collaborate with computing systems via interactive evolutionary algorithms, evolving
design options to capture both human intent and numerical performance5,6. We have developed techniques for designers to tame the wilderness of expansive design spaces by clustering families of designs7 and guiding exploration using statistical and optimization-based methods. Finally, we are designing systems that allow designers to interact with complex digital modeling environments using natural human processes such as sketching. Specifically, in these sketch-based interfaces, designers can intuitively interact with complex parametric models using quick sketches and generate three-dimensional models of structures or buildings and understand their associated performance8. In all of this research, our aspiration is to harness advances in machine learning and Artificial Intelligence to amplify human creativity in high-performance architectural design. Given the sustainability imperative now faced by the built environment, new tools and design approaches are needed to guide architects and engineers towards better solutions without limiting their imagination or freedom in this fundamentally human endeavor.

5. C.T. Mueller et al., "Combining structural performance and designer preferences in evolutionary design space exploration", Automation in Construction, pp 70-82, 2015.
6. R. Danhaive and C.T. Mueller, "Combining parametric modeling and interactive optimization for high-performance and creative structural design", In Proceedings of IASS Annual Symposia, Vol. 2015, No. 20, pp 1-11, IASS, 2015.
7. N. Brown and C.T. Mueller, "Designing With Data: Moving Beyond The Design Space Catalog", 2017.
8. Ong et al., "Machine learning for human design: Sketch interface for structural morphology ideation using neural networks", In Proceedings of the IASS Symposium, 2020.
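A full variational autoencoder is beyond a book page, but its classical cousin fits in one: principal component analysis already turns many raw geometric parameters into a few synthesized "sliders". A hypothetical sketch of that design-space compression:

```python
import numpy as np

def fit_design_space(designs, n_vars=2):
    """Compress a family of high-dimensional design vectors into a few
    synthesized variables via PCA (the classical counterpart of the
    VAE approach above). Returns encode/decode closures."""
    mean = designs.mean(axis=0)
    _, _, Vt = np.linalg.svd(designs - mean, full_matrices=False)
    basis = Vt[:n_vars]                       # principal directions
    encode = lambda d: (d - mean) @ basis.T   # design -> slider values
    decode = lambda z: mean + z @ basis       # sliders -> full design
    return encode, decode

# 200 roof-like designs, each 50 control-point heights, secretly
# driven by 2 hidden factors (so 2 synthesized sliders suffice).
rng = np.random.default_rng(2)
factors = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 50))
designs = factors @ mixing
encode, decode = fit_design_space(designs)
z = encode(designs[0])     # two intuitive slider values
print(z.shape, np.allclose(decode(z), designs[0], atol=1e-8))
```

A VAE replaces the linear basis with a learned nonlinear decoder, which is what allows the complex, non-planar morphing of forms the authors describe, but the encode/slide/decode loop is the same.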

The
Outlooks
of AI in
Architecture

The Adoption


AI may today stand more as a possibility than a certainty for architects. Whether the discipline at large will embrace, reject, or adapt this technology is still unclear and leaves the discussion wide open. Among many others, three facets of the current debate speak to the state of ongoing reflections and concerns: the practice, the model, and the scale.

Although the practice of Architecture could soon begin relying on AI-enabled tool sets, the path to adoption still remains to be defined.
The modalities of its integration and adaptation to the processes and
needs of practitioners constitute an important avenue to explore and
clarify. The very notion of “model” is also at the core of AI’s adoption in
Architecture. If the use of models is not a novelty for architects, with AI
their definition is invited to evolve towards a deeper anchoring in math-
ematics and logic. Consequently, the discipline’s reflection on this evo-
lution will condition AI’s inception in the field. Finally, the scale of AI’s
deployment in Architecture constitutes a topic in itself. Technology’s
successful dissemination is contingent upon its translation into ade-
quate tool sets. Finding its best expression to match architects’ needs
remains an open discussion and a topic of investigation.

This segment explores these three avenues through the contributions of architects, scholars, and entrepreneurs whose work addresses the peculiarities and challenges of each theme respectively.

135
The Practice
The Data Challenge
for Machine Learning
in AECO
by the Applied Research
& Development Group,
Foster + Partners



The Practice

After the recession of the latest AI winter, the past decade has seen a remarkable infiltration of Machine Learning (ML) techniques into various industries. Meanwhile, within the AECO industry (Architecture-Engineering-Construction-Operation), the question architects, engineers, and contractors alike are still asking is how ML could be used in the built environment in a meaningful (and profitable) manner.

Of course, anyone who has used ML knows that the success of the
system can only be as good as the quantity and quality of data to
which the system is exposed. And therein lies the problem with our industry: not a lack of data, but rather its abundance in formats that are incompatible with each other and do not match current
ML requirements. Practically, the issue is not how ML can be used
in AECO but rather how the industry can develop a structured data
pipeline tailored to and appropriate for ML workflows.


The Challenge of Original Datasets and the Value of Synthetic Data

The AECO industry is producing vast quantities of data during all stages of a building's life cycle. This is true not only through design
to construction, but also during operation (the rise of IoT and Smart
Buildings being the main contributor). Data produced or utilized in the
built environment can range from climate and geospatial information
through to brief requirements, sketches, drawings, images, perfor-
mance simulation analysis, 3D BIM models, construction logistics
and procurement or post-occupancy data gathered by sensors, 3D
scans or HVAC monitoring systems (just to name a few).

AECO data is derived from various sources, stored in different formats, and often includes high redundancy. Thus, it requires consid-
erable effort before it can be utilized for any sort of ML endeavor.
For it to become useful, it must be normalized. This process, which
leads to the creation of meaningful datasets, is where the bulk of
the work lies.

The challenge that the above process poses has led many researchers to the use of synthetic datasets – that is, data artificially generated – rather than original datasets – data collected from actual events or experiments. This has been an accepted practice within many industries, including automotive, healthcare, and financial services1. In the context of Architecture, this type of data can be derived from generative or parametric models, which could be accurate enough to replicate specific properties of the built environment. For large synthetic dataset production, distributed computing pipelines could be utilized – like Foster + Partners' bespoke system Hydra2.

1. S.I. Nikolenko, "Synthetic Data for Deep Learning", Springer, 2021.
2. Kosicki et al., "HYDRA Distributed Multi-Objective Optimization for Designers", In Design Modelling Symposium Berlin, pp 106-118, Springer, 2020.
3. Abdel-Rahman et al., "Design of thermally deformable laminates using machine learning", In Advances in engineering materials, structures and systems, pp 1016-1021, 2019.


Fig. 1: (a, b) Samples from the synthetic dataset showing different layerings of the laminates and their simulated deformations. (c) The model's input is the displacement values, while the target is the layering of the laminates; in the middle, the training progress (displacement after 2, 10, 50, 200 epochs) can be seen: as time passes, the model is able to predict a layering close to the target layering. (d) Comparison between the simulated (bottom) and predicted (top) deformation3.

In collaboration with Autodesk, the Applied Research + Development (ARD) group at Foster + Partners has generated such synthetic datasets to prototype two ML systems – one for designing passively actuated laminates3 (Fig. 1) and another for the rapid assessment of visual and spatial connectivity for office layouts4. Both experiments demonstrated the immense potential of applying ML methods to support certain design tasks, and the challenges these pose due to our industry's lack of appropriate datasets.

But while opting to generate a synthetic dataset provides great control over the quality and amount of what is being generated, it is an idealization of reality and should often only be used as a starting point. So, the question remains: how can the industry's abundant original data be leveraged?

4. Tarabishy et al., "Deep learning surrogate models for spatial and visual connectivity", International Journal of Architectural Computing, 18(1), pp 53-66, 2019.


Show Me the Data!

In recent years, ARD has been developing pipelines that could take advantage of both the agility and ease of use of synthesized
data, as well as the 55 years of original data produced at Foster +
Partners. One of the group’s earliest investigations was centered
around the extraction of furniture layouts from residential floor
plan data. This entailed the collection, labeling and augmentation
of floor plans (in an image format) and was mainly focused on how
this process could not only be mainstreamed, but also automated.

While that initial exercise focused on using this data to train de-
sign-assist ML models, subsequent research focused on the de-
velopment of surrogate ML models that could be used for an array
of analyses from in-house simulation tools, and more specifically
spatial and visual connectivity, a significant driver for planning the
layout of offices (Fig. 2). The input to those analyses were office floor plans, and despite having an abundance of those in-house, the decision was made to create a parametric model capable of mimicking spatial and furniture floorplate organization, which was subsequently analyzed and used to train the model.

Fig. 2: Sample floor plans from a synthetic dataset4: 4,000 images of open-layout and compartmentalized floor plans were generated along with their respective spatial connectivity and visual graph analysis, to be used to train a ML model.

This decision to use a synthetic dataset may seem strange given
the massive amount of data collated through the years in our arse-
nal. But retrieving any useful information from it is not a straightfor-
ward proposition. While (or because) data retention is straightfor-
ward and – nowadays – relatively cheap, not much thought is given
to which data is valuable or how to retain it in a consistent manner.
Additionally, for big companies operating for decades, there is
rarely a clear definition of the lifecycle of a piece of data within a
team, let alone cross-department or even cross-business.
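The parametric route to synthetic data described above can be miniaturized: a generator that emits labeled plans on demand, with tags decided at creation time. This toy grid-world version is purely illustrative and not the office's actual parametric model:

```python
import numpy as np

def synthetic_plan(rng, size=32, compartmentalized=True):
    """Generate one labeled synthetic floor plan on a coarse grid:
    0 = open space, 1 = wall. Compartmentalized plans get interior
    partitions; open layouts keep only the perimeter."""
    plan = np.zeros((size, size), dtype=np.uint8)
    plan[[0, -1], :] = 1   # perimeter walls
    plan[:, [0, -1]] = 1
    if compartmentalized:
        for _ in range(rng.integers(2, 5)):  # a few partition walls
            if rng.random() < 0.5:
                plan[rng.integers(4, size - 4), 1:-1] = 1
            else:
                plan[1:-1, rng.integers(4, size - 4)] = 1
    label = int(compartmentalized)           # classification target
    return plan, label

rng = np.random.default_rng(3)
dataset = [synthetic_plan(rng, compartmentalized=bool(i % 2))
           for i in range(8)]
print(len(dataset), dataset[1][0].shape, dataset[1][1])  # 8 (32, 32) 1
```

Because the label is assigned at generation time, the curation problem described above for archival data simply does not arise.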

In that sense, trying to dig through the archives looking for specif-
ic data, taking into consideration the number of formats produced
from various software, under ever-changing file naming conven-
tions has proven to be much more challenging than developing
the ML model itself. This is a hurdle that is removed from synthetic
datasets, where quite a lot of thought has been given in advance to
the way data will be created. This usually entails proper tagging, la-
beling, or captioning of different elements, comparable formats, and
consistent naming conventions.

The above investigations made one thing abundantly clear: special pipelines and tools need to be put in place to allow for the smart re-
trieval and labeling of data; the process of curating a task-oriented
dataset must not include excessive ceremony. One approach could
be processing data during its creation, allowing the encapsulation
of pre-existing knowledge about the data to the medium it is being stored or archived in at the source. Structuring data like that is both
challenging and time-consuming; how does one pick a system and
a structure that is rich enough, transdisciplinary, ages well and is
user-friendly?

At Foster + Partners, we are in the process of specifying data structures that allow for ease of traversability and cross-referencing. In the meantime, we have been developing tools that facilitate the retrieval of items of interest from our on-premises data stores. One such tool is our bespoke File-Seeker, a parallelized breadth-first tool for traversing file system directories on network drives. The way the tool was designed allows for plugging in different file format parsers to search the content of the files for elements or labels of interest. Although it allows the user to traverse millions of files in only a fraction of the time compared to publicly available search tools, the
application really depends on some assumptions associated with name conventions, file types and folder locations being true before starting the search. These limitations led us to our current investigations (Fig. 3), where we are evaluating different techniques for crawling and indexing data, using non-intrusive ways for "on-the-edge" labeling and tagging of newly created daily data. Adding a semantic layer on top of all data streams produced in the office can make it format-agnostic, which in turn would result in easy, company-wide access to information through an integrated desktop application ecosystem.

Fig. 3: A diagram showing the allocation of data under specific file format categories, both in terms of raw size and number of files, as a percentage of a total of around 74 million files on our warm storage, constituting data for only the currently active projects. Images and PDF/page-layout files are collectively the highest in size and number.


Future Outlook

Historically, the AECO industry is well known for being highly resistant to change. This characteristic, paired with the emerging nature of ML research, poses many challenges for the deep integration of ML. To take full advantage of ML, the AECO industry first needs to reassess its standing as a data-driven industry.

Being a data-driven organization requires a comprehensive approach, where its corporate culture and tools are tailored in such a way as to deliver high-quality products (designs) at a faster pace than traditional practices. With ML, this translates into building a data pipeline: a live ecosystem which collects and combines data coming from various sources and disciplines before it is used in predictive models. In a ML pipeline, incoming data is transformed through a series of steps, linking data and code to produce models and predictions5. Since new data is being gathered all the time, such systems need to be constantly updated and redeployed; there also needs to be a governance framework to evaluate and manage the lifecycle of data and models.

5. Treveil, Omont, Stenac, Lefevre, Phan, Zentici, Lavoillotte, Miyazaki & L. Heidmann, "Introducing MLOps", O'Reilly, 2020.
framework to evaluate and manage the lifecycle of data and models.

144
The Practice

This idea of continuous product deployment is encapsulated by Machine Learning Operations (MLOps)5. Those ML models, automatically performing specific design tasks, could be part of a bigger
system that still relies on designers’ experience. For this to work, data
must flow freely between design stages, departments and different
companies, which highlights the importance of data interoperability —
traditionally one of the biggest bottlenecks for the AECO industry.
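The pipeline idea described above can be sketched in a few lines. This is a deliberately minimal illustration, not any vendor's implementation: the step names, the record fields (`area`, `occupants`), and the trivial mean-predictor are all invented for the example.

```python
# A deliberately minimal sketch of an ML data pipeline: data is ingested,
# pushed through ordered transformation steps, and used to fit a model
# that can be refit ("redeployed") as new data arrives. All names and
# record fields here are invented for illustration.

class Pipeline:
    def __init__(self, steps):
        self.steps = steps  # ordered (name, transform) pairs

    def run(self, records):
        for _name, transform in self.steps:
            records = [transform(r) for r in records]
        return records

def clean(record):
    # normalize keys and coerce raw strings to numbers
    return {k.strip(): float(v) for k, v in record.items()}

def add_feature(record):
    # derived feature: floor area per occupant (hypothetical fields)
    record["area_per_occupant"] = record["area"] / record["occupants"]
    return record

class MeanModel:
    """Trivial stand-in model: predicts the mean of the training target."""
    def fit(self, rows, target):
        values = [r[target] for r in rows]
        self.value = sum(values) / len(values)
        return self

    def predict(self):
        return self.value

pipeline = Pipeline([("clean", clean), ("features", add_feature)])
batch_1 = [{" area ": "120", "occupants": "4"}, {"area": "90", "occupants": "3"}]
model = MeanModel().fit(pipeline.run(batch_1), "area_per_occupant")
print(round(model.predict(), 1))  # -> 30.0

# Governance and redeployment in miniature: new data arrives, the model
# is refit on the combined stream and swapped in for the old one.
batch_2 = [{"area": "200", "occupants": "4"}]
model = MeanModel().fit(pipeline.run(batch_1 + batch_2), "area_per_occupant")
print(round(model.predict(), 1))  # -> 36.7
```

The point of the sketch is the shape, not the model: each step is a pure transform over a stream of records, so new sources can be added or a better predictor swapped in without disturbing the rest of the pipeline.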

The processes mentioned above are as much an opportunity as they


are a challenge. Their success depends not only on the amount of available data, but also on other factors, such as the capacity and willingness to change, the availability of financial and other resources, the existence of implicit biases, or even corporate culture, all of which can provide either massive leverage or an insurmountable obstacle. But the bottom line is this: whoever succeeds in putting these structures in place will be in an incredibly advantageous position in the future; data is power, and ML goes a long way in providing the means to harness it.

145
The Model
Shadowplays: Models,
Drawings, Cognitions

by Andrew Witt,
Associate Professor at Harvard University
Initial publication in Log #50




“Model is a generalization,
form is a special case.”
– Buckminster Fuller, Synergetics 2, 1979

In its classical guise, an architectural model is both a tangible artifact


and a proxy for the imagined reality that it is designed to resemble. With
its volume, shadow, and tactility, a model stakes a claim for an architec-
tural idea as embodied fact. A model looks like a building and vice versa:
they share a common and specific form, conjoined as a dyad of original
and copy. They exist together in a zone between actual and ideal, fact
and fiction. A model is an intermediary between appearance and imag-
ination, anchored in the form of a specific object.

If an architectural model is a designed artifact, a 21st-century scientific model is more mathematical or logical than physical and spatial. As philosopher Ian Hacking notes, “A model in physics is something you hold in your head rather than your hands”1. Climatic, economic, and ecological models are all austere mathematical systems instead of thick objects. Visual resemblance is irrelevant to the behavior of systems that scientific models aim to encode and make operative. If the architectural model relies on resemblance, the scientific model rests on the numeric language of deep and hidden structures.

1. I. Hacking, “Representing and Intervening: Introductory Topics in the Philosophy of Natural Science”, Cambridge University Press, 1983.

When the two poles of architectural and technoscientific models are


juxtaposed, a spectrum of practices opens between them. Architectural models evoke specific imaginations through tangible material objects, while logical scientific models posit the mathematical
interactions of abstract entities and phenomena. Yet separating the
disparate tactics of architectural and scientific models is ever more
confounding due to the proliferation of digital instruments that sur-
reptitiously import the modus operandi of the exact sciences into
the practice of design. Abstract varieties of quasi-scientific digital
models are increasingly supplanting physical models whose func-
tion rests merely on appearance. The once tautological connection
between model, resemblance, and representation in architecture is
giving way to new relationships between epistemic abstraction and
technique. What is emerging is an increasingly scientific intuition – if
not an explicit understanding – of model as a creative and evaluative
matrix that exceeds the scaled specification of a single building.

By attending to models that are not precisely architectural but on


a continuum between architectural and technoscientific, the roles,
possibilities, and futures of architectural modeling can be critically
reframed. In particular, the dichotomization of visual resemblance
and instrumental abstraction that dominates discussion of models
can be critiqued and perhaps overcome. If the building is dislodged
as the exclusive focus of model representation, the more disciplinary
functions of the model emerge, such as its capacities for cultural en-
capsulation, propagation, and diffusion. Here I consider two types of
models – skiagraphic models of 19th-century shadow rendering and
image-based neural network models of 21st-century Artificial Intel-
ligence – that abandon any pretense of resemblance to buildings in
favor of more abstract roles within the knowledge culture of design.
Each type of model brings visual and mathematical rigor – geomet-
ric rigor for the skiagraphic model and statistical rigor for the neural model – to bear on systems of perception and representation. Their


function is not merely instrumental: both play critical roles in encoding
visual practices beyond the direct specification of buildings. In both
episodes, models are not scaled miniatures but tools for training archi-
tectural perception and creation by both humans and machines.

In their philosophical account of scientific models, the biomathematicians Philip Gerlee and Torbjörn Lundh remind us that the word model descends from the Latin “modulus, a diminutive form of modus, meaning a small measuring device”2. A model, then, is a ruler against which to gauge, to delimit, and to judge. Models are devices to dimension not only buildings but the culture of architecture – its practices, conventions, styles, and processes. Yet the appearance of buildings is never a distant concern of the architect, and even the relentless abstractions of scientific models can be hacked for freshly intense and unexpected kinds of design invention. New computational forms of neural vision open up strange kinds of glitched, warped, and liquid transformations. In this respect, visuality mediated by calculation models confirms the philosopher of science Bas C. van Fraassen’s observation that “distortion, infidelity, lack of resemblance in some respect, may in general be crucial to the success of a representation”3. These new models are engines to mutate representation itself.

2. P. Gerlee and T. Lundh, “Scientific Models: Red Atoms, White Lies and Black Boxes in a Yellow Book”, p 123, Springer, 2016.
3. B. C. van Fraassen, “Scientific Representation: Paradoxes of Perspective”, p 13, Oxford University Press, 2008.

Modeling Shadows

Architectural models are specific artifacts, but they are also evidence
of disciplinary vision and concretized conventions that invite the
user to behold the idea of a building in a particular way. The status
of architectural models in the larger pantheon of representations
is brought into relief through their sometimes peculiar and even competitive relationship with architectural drawings. Models rarely exist alone as the only representations of buildings. Instead, they
are one in an entourage of other representations – drawings chief
among them – that collectively delineate the world of a project.
Modeling and drawing are conjoined practices and nowhere is that
more apparent than in the atmospheric realm of shadows.

With its illusion of solidity, the shaded drawing seems to exceed the
merely documentary qualities of technical plans and adopt the ap-
pearance of a three-dimensional model. This conflation between
drawing and model is readily discernible in the artfully rendered
drawings of the 19th-century French Beaux-Arts architects. Build-
ings were drawn as if they were models, shadows cast as if the sec-
tional thickness were cut away. Among the most facile hands was
Jean-Jacques Lequeu (1757–1826), whose remarkable drawings
seem to close the gap between drawing and model. The shadows
cast in Lequeu’s interiors evoke his contemporary Jean-Baptiste
Rondelet’s monumental sectional maquette of the Pantheon in
Paris as much as they do actual buildings. Much scholarship on the enigmatic Lequeu’s work rightly focuses on his flamboyant imagination or meticulous craft. Yet shadows held a definite priority in Lequeu’s technique. His “Architecture Civile”, an unpublished drawing manual in which he claimed to outline “the rules of the science of natural shadows”, is substantially devoted to the tonal and geometric intricacies of rendering shade4.

4. “Les règles de la Science des Ombres Naturalles,” quoted in P. Duboy, “Lequeu: An Architectural Enigma”, p 14, MIT Press, 1987.

Fig. 1: Plans, elevations, and sections of two domed projects of the French architect Jean-Jacques Lequeu, assembled as a single drawing, and rendered with precise shadows.

In Lequeu’s drawings, we see a virtuosic manifestation of skiagraphy, the projective science of rendering shadow. As a disciplinary practice, skiagraphy altered the common precedence between drawings and models. To produce a skiagraphic drawing, the meticulous
draftsperson projectively constructs the shadows of a complex model – a building, a fragment, or an entirely contrived object – with exquisite precision. In its pure form, skiagraphy was an academic exercise to mold a visceral intuition of light. Between the early 19th and mid-20th centuries, skiagraphic drawings were de rigueur in architecture schools across Europe and the United States. The practice persisted well into the 20th century and was taught at reputable schools of architecture, such as the Bartlett, into the 1960s5. As much as almost any architectural drawing practice, skiagraphy defined a family resemblance among architectural drawings of a certain style and from a certain period.

Fig. 2: A skiagraphic construction from Jean-Jacques Lequeu’s manuscript Architecture Civile, ca. 1820, showing the classical technique derived from descriptive geometry. Source: Bibliothèque Nationale de France.

5. W. Muschenheim, “Curricula in Schools of Architecture: A Directory”, Journal of Architectural Education 18, no. 4, p 56, 1964.

What is the object of representation in a skiagraphic drawing? The


apparent focus of depiction would seem to be the model itself. But
in fact, skiagraphic models are more like props, merely incidental to
the object’s shadows, which are the proper focus. The true test of the
draftsperson’s skill is not the rendering of the model per se but rather
the rendering of the epiphenomenal shadows. The secondary effects
of the shadows are elevated to the primary object of attention. Draft-
ing these shadows entails a sophisticated and systemic analysis of
the atmospheric conditions that surround the model as well as the
occluded geometry of the model itself. In other words, skiagraphy re-
quires a concept of a world system that the rendered model inhabits.
Unmoored from representational obligations, the model becomes a
pretext for disciplinary training. The model can thus take on functions
that ignore the conventions of building per se and instead attend to systems of representation themselves. In his account of shadow projection, art historian Thomas DaCosta Kaufmann calls practices like skiagraphy “modeling shadows”6. Kaufmann notes that specially constructed models played an essential role in the training of Renaissance painters’ visual intuition for the construction of light and shade: “Vasari says that his contemporaries continued to use ‘rounded’ models of clay or wax before drawing their cartoons, in order to see shadows in sunlight. He says that Michelangelo had used models and that the sculptor Jacopo Sansovino had supplied wax models for a number of painters”7. The models Michelangelo and other painters used were physical props or maquettes that furnished the space of a painting8. These models were not intended as representations, except in the highly indirect fashion that they simulated the shadows of specific figures. It was the resemblance of shadows that mattered, not the resemblance of model to object. Instead, models were expedients to initiate practitioners and introduce a disciplinary way of seeing.

6. Thomas DaCosta Kaufmann, “The Perspective of Shadows: The History of the Theory of Shadow Projection”, Journal of the Warburg and Courtauld Institutes 38, p 258, 1975.
7. Ibid., p 260.
8. M. Hirst and C. Bambach Cappel, “A Note on the Word Modello”, The Art Bulletin 74, no. 1, pp 172–73, 1992.

In 19th-century France, skiagraphy spanned architectural and engi-


neering practices and thus inevitably impinged on technoscientific
culture. In Paris, then the European nucleus in the professionalization
of both architecture and engineering, skiagraphy was an indispens-
able part of both architectural training at the École des Beaux-Arts and
engineering training at the École Polytechnique. As the birthplace of
Gaspard Monge’s descriptive geometry, France was a fertile soil for
a quasi-scientific model of skiagraphy to take root. Monge developed
a theory of shadows that was an integral part of later editions of his
Géométrie descriptive and which led the way for a considerable liter-
ature of manuals, including Lequeu’s, for training aspiring draftspeople
in “les dessins des ombres”9. French sculptor Eugène Guillaume, who straddled the porous boundary between the arts and engineering as both director of the École des Beaux-Arts and professor of drawing at the École Polytechnique, framed the pedagogical philosophy of skiagraphic studies as a precise mathematical exercise: “The very essence of drawing is purely mathematical, since the only two modes by which it can be envisaged, the geometrical or the perspectival, both that which would be applied to draw lines and to trace shadows, rest on exact laws: the truths of mathematics. This manner of consideration is justified by language that the artist and the mathematician each employ in their own sphere, using the same words of line, plan, proportion, symmetry, equilibrium, and retaining the same meaning... drawing itself is a science”. Drawing shadows was not an act of pure intuition and perception, but rather a deliberate practice of exact construction.

9. See Gaspard Monge and Barnabé Brisson, Géométrie descriptive: augmentée d’une théorie des ombres et de la perspective extraite des papiers de l’auteur (Paris: Gauthiers-Villars, 1922). This publication followed Monge’s 1798 edition published by Baudouin in Paris.
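The “exact laws” behind such constructions are easy to state in modern terms. The sketch below is a hypothetical illustration, not drawn from Monge's or Lequeu's manuals: with parallel light rays, the shadow of a point is found by following its ray to the ground plane.

```python
# A hypothetical mini-construction of skiagraphy's exact laws: with
# parallel light rays of direction d (descending, so d[2] < 0), the
# shadow of a point p is where its ray meets the ground plane z = 0.

def cast_shadow(p, d):
    t = -p[2] / d[2]                      # ray parameter at the ground plane
    return (p[0] + t * d[0], p[1] + t * d[1], 0.0)

# The conventional 45-degree light of architectural shadow drawing.
light = (1.0, 1.0, -1.0)

# Top corners of a unit cube resting on the ground (an invented object).
corners = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
print([cast_shadow(p, light) for p in corners])
# -> [(1.0, 1.0, 0.0), (2.0, 1.0, 0.0), (2.0, 2.0, 0.0), (1.0, 2.0, 0.0)]
```

Under this conventional light, every point at height 1 casts its shadow offset by exactly (1, 1) on the ground, which is why the 45-degree convention lets a reader recover heights directly from shadow lengths.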


In a skiagraphic drawing, the raison d’être of the models was to provide


an entry into a representational system and a suitable challenge of skill
for the eye, mind, and hand of the composing draftsperson. Models func-
tioned as pretexts for a specific mode of disciplinary training. Through
them, the architect was viscerally and autonomically sensitized to the
behaviors, moods, and subtleties of shadow. Physically, skiagraphic
models could be abstract platonic forms, odd and awkward assemblag-
es, mechanical contrivances, mathematical maquettes, or fragments of
actual architectural models. In the Beaux-Arts context, Corinthian cap-
itals or fragmented entablatures arranged in still-life tableaux were
favorite examples. Other skiagraphic models were improbable piles
of odd forms, generative devices designed to induce as much variety
and difference in their shadows as possible. Skiagraphic models were
furniture in a regime of a highly mathematical representation. Not all
skiagraphic models were even physical maquettes. They could be,
and often were, more abstract entities fancifully imagined and projec-
tively constructed entirely within the drawing itself. Whether physical
objects or mathematical entities, the models’ highly contrived forms
were intended to probe the limits of representation, beholden not to the
demands of building but rather to the private and autonomous conven-
tions of architectural drawing.

The skiagraphic model was a parafactual entity that existed purely as


the linchpin of a specific practice of architectural seeing and drawing.
The model had a definite physical presence and form, yet that form was
secondary to its role as an object of initiation. It never served to clarify or
expedite the design of a building. On the contrary, it defined the architect,
her intuition, and her vision. In that sense it was an entirely cultural arti-
fact, a resolutely architectural model that nevertheless had nothing to do
with building, only with seeing and drawing.


Beaux-Arts Deepfakes

In the 1950s, a quite distinct kind of model began to emerge in the work
of mathematical psychologists interested in visual perception. These
were not models of individual things (like buildings) or even models of
explicit systems (like skiagraphic geometry) but models of perception
itself. In his remarkable 1950 paper “Mathematical Biophysics, Cyber-
netics, and Significs”, mathematical psychologist Anatol Rapoport was
among the first to use the term “model” to describe the function of neu-
ral networks, the reticulated structures posited as the basis of organic
nervous systems. He recounted the convergence between biological
perception and electronic calculation: “The two programs of research,
a mathematical theory of the nervous system on the one hand and
the development of electronic computers on the other, proceeded
along parallel lines . . . Workers from both fields soon found themselves
talking to each other in a language which was a curious mixture of psy-
cho-physiology (neurons, synapses, refractory periods, threshold, etc.)
and electronics (feedbacks, vacuum tubes, amplifiers, transformers,
etc.).”10 When encoded electronically, these new neural models took on the comportment of human or animal reflexes, reactions, and cognitions. They could be “trained” and generate new internal associations in response to serialized stimuli. In short, they could learn to sense.

10. A. Rapoport, “Mathematical Biophysics, Cybernetics and Significs”, Synthese 8, no. 3/5, p 189, 1950–1951.

One of the earliest computational neural networks was an optical


mechanism for the discretization of light and shadow. In 1958, psychol-
ogist Frank Rosenblatt introduced the perceptron, the mathematical
framework for an interconnected matrix of retinal sensors configured to detect gradients of illuminance11. Rosenblatt attacked the assumption of eidetic resemblance between model and modeled object in neural representation, claiming that “the images of stimuli may never really be recorded at all ... the central nervous system simply acts as an intricate switching network, where retention takes the form of new connections, or pathways, between centers of activity”12. Rosenblatt’s argument is suffused with multiple models: coded models, physiological models, conceptual models. As befitted an essentially electrical apparatus, the neurons of Rosenblatt’s network could be readily and continuously tuned to inculcate particular kinds of visual training.

11. F. Rosenblatt, “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain”, Psychological Review 65, no. 6, pp 386–408, 1958.
12. Ibid., p 386.
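Rosenblatt's learning rule can be illustrated in a few lines of code. The toy “retina” task below is invented for the example; it only sketches the general scheme of a threshold unit whose connection weights are reinforced after each error, not his original apparatus.

```python
# Illustrative sketch of the perceptron learning rule (a toy, not
# Rosenblatt's hardware): a unit sums weighted binary "retinal" stimuli
# and fires when the sum crosses a threshold; after each error, the
# connection weights are reinforced or weakened.

def predict(weights, bias, inputs):
    """Fire (1) if the weighted sum of stimuli exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Nudge each weight toward the missed target after every error."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Invented toy task: a 2x2 "retina" counts as lit (1) when at least two
# of its four cells receive light.
samples = [
    ([0, 0, 0, 0], 0), ([1, 0, 0, 0], 0),
    ([1, 1, 0, 0], 1), ([1, 0, 1, 0], 1),
    ([1, 1, 1, 0], 1), ([1, 1, 1, 1], 1),
]
weights, bias = train(samples)
print([predict(weights, bias, x) for x, _ in samples])  # -> [0, 0, 1, 1, 1, 1]
```

Nothing in the trained network resembles the stimuli it has seen; the “retention” is entirely in the numeric weights, which is precisely Rosenblatt's point about pathways rather than recorded images.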

In the 70 years since Rapoport’s account of neurocomputation, mod-


els that learn have made fitful but dramatic advancements toward a
distinctly novel fusion of perception, computation, and creation. New
tactics of deep learning like generative adversarial networks have
relentlessly pushed the limits of computational perception and affil-
iated techniques of image generation. Generative networks are mod-
els that are not explicitly and deliberatively theorized but instead
emerge autonomically from iteratively applied statistical encod-
ings. Taking enormous archives of images as feedstock for training,
neural models build matrices of statistical probabilities that encode
perceptual and creative behavior. On the one hand, the process of
encoding a neural network is akin to the use of compression and
encryption algorithms, distilling the vast array of images to numer-
ic correlations. On the other hand, neural training is like teaching a
child to recognize and draw shapes through the reinforced repetition
of countless examples of ascending complexity. Image by image, a
distinct intuition is formed through cybernetic feedback.

If classical architectural models were explicit physical representations,


neural net models are black-boxed codices of relational connections.
Though they are computationally deterministic, neural networks are
not rulesets per se. Instead of semantically articulated rules, neural models are vast ledgers of numeric correlations. Derived from probabilistic and statistical associations as opposed to explicit logical rules, when visualized in their raw form these models appear almost as noise even to the educated eye.

Fig. 3: A matrix of architectural drawings generated from a neural network trained on thousands of Beaux-arts drawings, including plans, sections, and elevations. In many cases those drawing types are hybridized or morphed into a new and ambiguous arrangement through the encoding of the neural network. Project team: Andrew Witt, Gia Jung, Claire Djang, 2020.

Fig. 4: A matrix of sections generated from a neural network trained on thousands of Beaux-arts drawings. Though derived entirely from unsupervised training, a characteristic technique of shadows begins to emerge. Project team: Andrew Witt, Gia Jung, Claire Djang, 2020.

Yet the imagery produced by suitably trained models is marvelously specific and inescapably legible. When trained on the Beaux-Arts drawings of Lequeu and thousands of others, a neural model begins to draw in the luxurious style of figured volumes, filigreed details, and crepuscular shadows as convincingly as any suitably trained draftsperson. What emerges in these drawings is a statistical skiagraphy: shadow rendered not with the constructive principles of descriptive geometry but with the stochastic processes of neurocomputation. The unmistakable forms of domes, colonnades, entablatures, and other telltale elements of the Beaux-Arts vocabulary appear in a mannered chiaroscuro calculated from tensors of probabilities. The images have impressionistic and atmospheric qualities, as if seeing the precise lines of the building through a light fog. There is a striking vagueness about these images; they are more sketches than renderings. Yet that belies their calculational precision, and what may glancingly appear as fluid washes of watercolors are actually grayscales calibrated by an exact science of machine learning. Neural networks not only capture and reproduce the specific atmospheric subtleties that elude more procedural means of computation, they also have the capacity to blur the line between distinct forms of architectural representation. If neural models are trained on several different genres of drawing, they fluidly hybridize all these disparate types in their generated images. Mutations of plan, section, and perspective are vivisected into strange new quasi-montages that seamlessly blend them all together. Plan melts into section, coalescing or dissipating through the technical intermediary of the neural model. Like skiagraphic models, the drawings do not
refer to specific buildings and, indeed, often depict something that
defies consistent interpretation. Instead, they are artifacts of pure
disciplinary representation. In this they resemble another Beaux-
Arts staple, the composite drawing juxtaposing plan, section, and el-
evation in one patchwork image. The neural model becomes a mixing
chamber to reformat all the myriad forms of modeling and drawing in
a common and continuous visual language. In circumscribing and re-
lating a corpus of images, neural networks delimit specific territories
of imagination. More than models, neural networks are maps. They
statistically interpolate disparate images and thereby plot a gradient
of interstitial architectures. Instead of modeling one form or a discrete
set of forms, they offer a model of visual invention itself, and with it,
a continuous and seamlessly variable terrain of endless and endless-
ly different forms. Like the Situationist dérive, latent walks through the
neural space allow the user to wander through the dream space of a
trained intuition, each step generating a new and surprising result. This
continuity is of a totally different order than parametric or combinatorial
variation, where incremental changes of scale or density retain the
essential topological organization of a space. Instead, neural varia-
tion ranges across type and topology, setting up liquid interpolations
between improbable and chimerical forms.
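Mechanically, a latent walk of this kind reduces to stepping along a line in the model's latent space and decoding each point. The sketch below is purely illustrative: the four-dimensional codes and their “plan”/“section” labels are invented, and a real generative model would decode each vector into an image rather than print it.

```python
# Purely illustrative sketch of a "latent walk": step along the segment
# between two latent codes and decode each intermediate point. The
# codes and their labels are invented; a trained generative network
# would map each vector to an image.

def lerp(a, b, t):
    # linear interpolation between two latent vectors
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

def latent_walk(z_start, z_end, steps):
    # evenly spaced points from z_start to z_end, endpoints included
    return [lerp(z_start, z_end, i / (steps - 1)) for i in range(steps)]

z_plan = [0.0, 1.0, -0.5, 0.2]      # hypothetical "plan-like" code
z_section = [1.0, -1.0, 0.5, 0.8]   # hypothetical "section-like" code

for z in latent_walk(z_plan, z_section, steps=5):
    # each z would be fed to the decoder; here we just show the path
    print([round(v, 2) for v in z])
```

Each intermediate code sits between the two source codes, which is why the decoded images morph continuously from one drawing type toward the other; practitioners often substitute spherical interpolation for the straight line used here, but the principle is the same.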

Modelers, Human, and Machine

Despite their mathematical origins, neural models transcend the strictly


deductive and largely instrumental methods of most contemporary
scientific modeling. As models of thought and perception, they expose
the qualitative vagaries of taste and style to exact modeling. In doing
so, neural models confound the direct relationship between visual resemblance and representation. Resemblance is merely a secondary effect of a model that can learn and manipulate entire representational
systems. In this paradigm, the architectural model of the future will take on
increasingly protean roles, representing not only specific buildings but
also the disciplinary intuitions and cultural nuances of architecture itself.

At one time, a finite and specific architectural model might have been
interpreted as a representational terminus, a definitive conclusion of a
search, or the birth of an architectural idea in an embodied reality. Even
today, most digital 3D models merely replicate or amplify the qualities of
eidetic resemblance characteristic of physical maquettes. In shifting from
embodiments of specific forms to generalized encodings of representa-
tional processes, models become ever more expansive and open-end-
ed vessels of design culture. Now even imaginations, aspirations, and
obsessions are amenable to modeling. Once proxies for buildings that
were manipulated by architects, models are on the verge of integrat-
ing the operative tastes and judgements of the architect herself.

Inanimate models and human authors have always maintained un-


confused and distinct places in architecture. As artificial intelligences
model, incubate, and encapsulate cognition, that careful distinction
between made and maker, thought and thinker may seem as anti-
quated as physical maquettes themselves. Between the maquette
and the architect there is a new actor and mediator, the quasi-intel-
ligent model that embeds human intuitions and hallucinates end-
lessly elastic images, drawings, and buildings. In the realm of imag-
ination, the gap between the neural-generated deepfake and the
human-imagined model is eroding. When models can download and contain patterns of thought, they cease to be distinct objects and simply become the way architecture is created.

161
The Scale
Scaling AI in the AEC:
The Spacemaker Case

by Carl Christensen,
Co-founder and CTO at Spacemaker AI




In late 2016, Spacemaker AI was incorporated in Norway. We, the founders — Havard Haukeland, Carl Christensen, Anders Kvale — formulated a bold vision to fundamentally change the way design,
engineering and project teams worked. Frustrated with the inefficien-
cies of the planning process, especially in the early stages, we saw an
opportunity to find a better way to design our cities. Did something like
this already exist, or if it didn’t, would it be impossible to build? We set
out to find a way to realize our ambition and, in the process, create
something that would have a global impact. Bringing Spacemaker
to life has been both a joy and an epic undertaking — not to mention the challenge of changing an entire industry that has largely remained unchanged for decades.

Central to our vision was a plan to create a game-changing AI tech-


nology platform that would empower practitioners all over the world.
However, utilizing AI to successfully solve these challenges would re-
quire designers (ie. primarily architects) to not only adopt, but to truly
embrace this technology into their workflow on a massive scale.


Impediments to the
Adoption of AI Technology
in AEC Design

By its very nature, AI has an air of magic and mystery to it: taking com-
plex information, processing it, then quickly and effortlessly providing
predictions or suggestions where humans and traditional computa-
tion struggle. While these properties contribute to making the idea of AI alluring and attractive, they can also become an impediment to adoption. We argue that this is particularly true when applying AI to a design process in AEC.

In many processes augmented by AI, there is a yardstick with which


to measure — a reference for quality. In applications like predicting
the likelihood of rain tomorrow, or the most effective route to take
while avoiding traffic jams, results can be discussed with a high de-
gree of objectivity.

The design process, however, is deeply subjective. There is no com-


monly accepted norm for perfect design. Any suggestion from an
AI (or a human) is inherently subjective. In such a process, a “Black
Box” AI quickly becomes a challenge. While a human can contextu-
alize and make a compelling argument for the merits of their propos-
al, an AI cannot.

Even if we were to overcome the challenges of subjectivity, we would still need the AI to capture intent. When observing a design process in AEC,


intent is driven by a multitude of local conditions, soft and hard con-


straints as well as subjective preferences and needs. Capturing and
describing all of these, and requiring the “operator” of the AI to enter
them all a priori is impractical and likely infeasible. This makes the AI
inherently hard to control.

An often underestimated challenge with AI is the technical complex-


ity of operating it. While interesting results found in singular exper-
iments are common and frequently published, these results often
require significant setup and fine tuning to work, with a human-in-the-
loop evaluating results for feasibility. The technical competence and resources required for such an undertaking make AI inaccessible to most practitioners.

Another challenge is intellectual property. If the AI learns from me,


does it also steal? How can I trust that as I use it, it does not become
smarter at my expense? When the sources of knowledge and effects
of training become unintuitive, it becomes harder to gain this trust.

But most importantly, to be adopted, workflows enabled by the AI


would need to be attractive and compatible with the creative pro-
cess of design. At its core, this process is both incremental and
iterative in nature. A designer wants to interact with and augment
a proposed design, and stakeholders want to have their say. Com-
promises must be made. An AI that creates “finished’’ design pro-
posals by taking in information and turning it into designs, is nei-
ther iterative nor incremental in nature. Rather than augmenting
the process, it replaces the process, becoming a competitor to the
designer, not a complement.


The Spacemaker Approach

Facing all these challenges, but determined to succeed, we started


out with a very conscious approach to the identity of the Space-
maker AI platform. If it had been a person, how would we describe
it? Rather than being perceived as an “oracle” providing opaque,
finished designs, Spacemaker was to be a sage; a knowledgeable
and wise advisor helping the user in their pursuit of better designs
and outcomes. For good measure, we added a sprinkle of magic to
the identity: enough to be exciting and alluring, but humble enough
to avoid detraction.

Fully embedding AI elements in a process requires meaningful ways to connect with the existing process. In Spacemaker's case, we did not see how it would be possible to leverage existing tools or workflows in making our value proposition viable. No cohesive platform for those workflows existed. Information was fragmented, and no design tools were built to collect or process the types of information we needed.

We decided that we needed to build a fully integrated design platform from scratch, encompassing the necessary data capture, design process and evaluation of desired outcomes. This included automatically setting up a detailed digital twin of the physical environment of the proposed design, and building fully automated, powerful simulations to predict the impact of design options and changes on sustainability, buildability, zoning and constraints (Fig. 1).

1
Spacemaker's various predictive analyses for a given site: facade daylight, noise, facade sunlight, and wind speed.

A crucial premise in building the platform was to be technology-agnostic. While, in principle, it was clear that AI would be a key enabler,


we needed to be focused on outcomes for users, not technology. Value creation for users would be the only priority. As a result, we would be very pragmatic about the definition of AI. The proof of value would be results and adoption, rather than academic scrutiny.

In order to make our platform accessible to users with minimal friction and technical competence, we built a fully web-based SaaS environment, powered by elastic, serverless cloud services. In so doing, we could "productize" AI, encapsulating the complexity and resource-intensive systems needed to deliver and continuously evolve our offering. In addition, an uncompromising focus on ease of use and accessibility that would allow non-technical users to adopt the platform with little or no training would be key.

To gain trust, we formulated terms of usage in clear language, describing ownership of data and how training is performed. In combination with the incremental nature of our design assistance, this instills confidence that contributing to the platform's learning does not unreasonably extract or replicate specific designs.


How to accomplish the goal of supporting and augmenting the designer rather than becoming a competitor? We realized that we needed to avoid the "Black Box" fallacy. Rather than view the design process as an "input/output" problem, we dived into the increments and iterations that make up the unpredictable journey from idea to finished product.

Thinking of automation and AI as design assistance rather than "generative design", providing an "AI on the shoulder" supporting the designer towards intended outcomes, we broke capabilities down into small parts, offering a multitude of different ways to "nudge" a design forward, rather than "pushing" too far.

The designer is always in control of selecting the appropriate level of control at any given time. Going wide, the designer can ask for a handful of options for a scope she controls, i.e. one part of a site (Fig. 2) or a floor (Fig. 3). Exploring an idea, she can sketch with simple lines (Fig. 4) or build on components of previous designs with embedded knowledge. Going deep, she can perform detailed freehand design. All of these methods can be fluently combined and iterated upon. At any given time, increments of "magic" are so small you can intuitively accept them as meaningful. Building blocks of logic empowering subjectivity. The AI disappears into the fabric of the creative process, and the user forgets it is there, helping her focus on intent and outcomes.

2
Multiple massing suggestions in the Spacemaker App for a given site and a set of user-specified constraints.

3
Multiple apartment layout suggestions in the Spacemaker App, given a program mix and selected typologies.

4
Sketching an apartment building in the Spacemaker App.

In focusing on outcomes, stakeholders are a key part of a successful iterative design process. To empower them, we built the platform to be deeply collaborative, where a team of stakeholders shares a common truth, providing input, understanding different perspectives and coming to terms with the many inevitable compromises.

The Scale

AI has the potential to empower designers to imagine and realize a better built environment for humanity and a better tomorrow for our
planet. For this to become reality, we believe that AI needs to be dis-
seminated into the architecture practice at scale. With Spacemaker,
we strive to do our part in contributing to this shift and as we observe
the increasing number of initiatives in academia and the industry do-
ing the same, the future of AI in design is brighter than ever!

The Outlooks of AI in Architecture

The Prospects


Independently of AI’s immediate contributions and potential adop-
tion, the architectural agenda is filled with longer-term prospects.
Among them, at least three seem about to evolve with AI’s advent:
the style, the ecology and the language.

The notion of style, first, belongs to Architecture's core concerns. For historical, cultural or functional reasons, style conditions the form of
any architecture. AI revives this discussion by offering new ways to
study the diversity of Architecture’s stylistic landscape. Ecology,
then, stands as yet another pressing contemporary matter for the
discipline. The significant impact of the built world on the environ-
mental balance sheet calls for a more informed design process. AI
can provide architects with the means to address certain critical
ecological dimensions of Architecture, and contribute to the disci-
pline’s broader environmental strategy. Finally, the language and its
analogy with Architecture is a long-standing discussion. Concepts
borrowed from linguistics now provide expressive frameworks to Ar-
chitecture. AI can renew this analogy by providing the discipline with
an alternate schema.

The upcoming segment rounds off this chapter's theoretical landscape. The articles gathered in this last part explore plausible scenarios for Architecture, as AI could soon shed new light on preexisting crucial discussions in our field.

The Style
Strange, but Familiar Enough: Reinterpreting Style in the Context of AI
by Matias del Campo & Alexandra Carlson, SPAN, Michigan University


Architecture has a very complicated relationship to the term "style". It is a charged term1; it's a loved term2; it's a despised term3. Ever since the German architect and writer Hermann Muthesius proposed to rid the discipline entirely of the term style4, in an attempt to cleanse the domain of the frivolous formalistic escapades of the 19th century and its historicism, the discussion has been ongoing whether style at large is a valid area of inquiry in the architectural discourse at all. Sigfried Giedion, the quintessential modern architecture critic, vehemently criticized the concept of style, proclaiming that "There is a word we should refrain from using to describe contemporary architecture. This is the word 'style'. The moment we fence architecture within a notion of 'style', we open the door to a formalistic approach"5. If style was a taboo for some – such as Hermann Muthesius – for others, such as the influential architecture critic Walter Curt Behrendt, it represented a cornerstone of the discipline, in that new styles were both intrinsic and necessary6,7. Others, like Peter Behrens, pondered the idea that style is nothing but the result of the design process: difficult, if not impossible, for contemporaries to discern8. There were even calls to abandon style to discover a new style (Style 2.0?), using negation to affirm the fundamental importance of style9.

1. L. Wright, "In the Cause of Architecture, II. What 'Styles' Mean to the Architect", Architectural Record, 1928.
2. M. Carpo, "Digital Style", Log No. 23, Anyone Corp, pp 41-52, 2011.
3. N. Leach, "There is No Such Thing as a Digital Building, A Critique of the Discrete", AD Architectural Design, No. 89, Issue 2, Wiley, London, UK, pp 136-141.
4. H. Muthesius, "Stilarchitektur und Baukunst", Verlag v. Schimmelpfeng, 1903.
5. S. Giedion, "Space, Time, Architecture", Cambridge University Press, 1941.
6. W. C. Behrendt, "Der Kampf um den Stil im Kunstgewerbe und in der Architektur", Deutsche Verlag, 1920.
7. W. C. Behrendt, "Der Sieg des neuen Baustils", Fritz Wedekind, 1927.
8. P. Behrens, "Stil?", Die Form: Zeitschrift für gestaltende Arbeit, 1: pp 5–8, 1922.
9. R. Hausmann et al., "Aufruf zur elementaren Kunst", De Stijl, 1921.


Despite these attempts to get rid of the term, the practice of categorizing buildings with specific similar features into a style has prevailed. Style is like a Zombie; it is undead – neither really dead nor really alive – it repeatedly emerges in conversations about architecture. For example, the notion of style is inescapable when dealing with questions about the history of architecture. Who would refuse well-established terms such as Baroque or Gothic? (Fig. 1) Even Muthesius himself recognized this. He rejected the Bauhaus, which he inspired and helped form, as "Just another Style"10. To this very day, this discussion rages on. Reject or accept that style is part of architectural inquiry?

1
A walk through the latent space (learned visual space) of a Gothic architecture dataset.

10. J. V. Maciuika, "Art in the Age of Government Intervention: Hermann Muthesius, Sachlichkeit, and the State, 1897–1907".

2
Four snapshots, taken during the training process (see previous figure), displaying the model's gradual improvement over time.

11. I. Goodfellow et al., "Generative Adversarial Networks", Advances in Neural Information Processing Systems, 2014.

The recent, impressive advances in the field of machine vision, specifically Deep Neural Networks, have thrown this discussion of both historical and new styles into a new light, as well as what "style" can be. Deep Neural Networks are algorithms loosely based upon the human visual system11. They can take in vast corpora of images (Fig. 2), more significant than any human or groups of humans can process, and learn to extract salient visual features from images that allow them to achieve an often greater-than-human level of performance on visual tasks like image classification. These algorithms are trained similarly to how architecture students are trained; they are shown a set of images, curated by a human, and are provided a supervisory signal that guides how they learn a style like "Baroque", etc.
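The supervised setup described here can be sketched in a few lines: curated examples, one style label per example, and a model adjusted only by the error signal from those labels. Everything below is a toy stand-in under stated assumptions — random feature vectors in place of images, a hypothetical two-style label set, and a linear softmax classifier in place of a deep network — not any real architectural dataset or model.

```python
import numpy as np

rng = np.random.default_rng(0)

STYLES = ["baroque", "gothic"]          # hypothetical, curated label set
n_per_class, n_features = 50, 16

# Two synthetic clusters standing in for curated image features.
X = np.vstack([rng.normal(+1.0, 1.0, (n_per_class, n_features)),
               rng.normal(-1.0, 1.0, (n_per_class, n_features))])
y = np.repeat([0, 1], n_per_class)      # the supervisory signal

W = np.zeros((n_features, len(STYLES)))
for _ in range(200):                    # gradient descent on cross-entropy
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = X.T @ (p - np.eye(len(STYLES))[y]) / len(y)
    W -= 0.5 * grad

accuracy = (np.argmax(X @ W, axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Note that nothing in the loop says what "Baroque" looks like; the weights absorb whatever regularities separate the labeled groups, which is exactly the point the text makes next.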

However, in contrast to architectural education, where a professor will tell the students which specific visual features define a particular style (for example, the presence of fluted columns, ellipses, and voluptuous figures define a Baroque object), the visual features that neural networks learn to extract are only constrained by what visual information is present in the training data and the network's task performance. Humans, as trainers of these algorithms, do not engineer or specify them beforehand.

Style can be defined by the statistical distributions of visual features that end up being learned by neural networks; their learned features capture the probability of specific texture distributions based upon how they exist in the training dataset or in a given image. This data-driven style does not consider the context through which to understand it, for example the lens of intellectual interrogation: neural networks lack the ability to engage in a crucial discussion around aspects of style referring to why a building has come into being. Motivations behind the design, such as a particular theory, ideology, or political conviction, are a priori missing when training/collecting datasets or in labeling.
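This statistical reading of style has a standard concrete form in machine vision, popularized by neural style transfer: summarize a layer's feature maps by their Gram matrix, the channel-to-channel correlations, which keeps how features co-occur while discarding where they occur. The sketch below assumes random arrays as stand-ins for real network activations.

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Style statistic of a (channels, height, width) activation stack:
    correlations between channels, averaged over all spatial positions."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return (flat @ flat.T) / (h * w)      # (channels, channels)

rng = np.random.default_rng(1)
feats = rng.normal(size=(8, 32, 32))      # stand-in conv-layer activations
G = gram_matrix(feats)

print(G.shape)               # (8, 8)
print(np.allclose(G, G.T))   # Gram matrices are symmetric: True
```

The spatial layout of the building image is gone from `G`; only the texture statistics remain, which is why this number-crunching notion of style says nothing about a design's motivations.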

While Architecture has been hauling the baggage of debating style around for the entirety of the last century, it seems that the debate about the term style is way more innocent in computer/machine vision circles. In machine vision, the term is clearly used to describe

the collection of visual features that capture a specific morphological quality of an object. Recognized features would include, for example, symmetry, proportion, and repetition, as well as spatial and compositional techniques. In the context of machine vision, style refers to how architecture is manifest; it indicates the specific ornamental motives, material palette, color, pattern, construction, and technical systems. This means that computer vision scientists can describe "Hans Hollein" or "Coop Himmelblau" as being a style, whereas art historians or architects would not. Or, as Deborah Ascher Barnstone put it: "When style refers to why a building has come into being, it alludes to the motivations behind design, such as satisfying functional imperatives, site conditions, a spiritual movement or a philosophical concept, or responding to societal circumstances"12.

However, through the lens of algorithmic, data-driven style (Fig. 3), the definition of style within the realm of Architecture starts to change, transform, mutate, and produce new and strange objects.

3
Exploring the "Style" of SPAN. A collection of 2243 images created by SPAN between 2010 and 2020 was used as a dataset for the StyleGAN2 neural network. Walking through the learned visual space of the SPAN design universe.

12. D. Ascher Barnstone, "Style Debates in Early 20th-Century German Architectural Discourse", pp 1–9, Architectural Histories, 2018.

4
This plan is the result of a StyleGAN interpolating/transforming between Baroque and Modern plans. The voluptuous pouches of the Baroque interpolated with the asymmetry of Modern plans.

13. G. Harman, "Weird Realism: Lovecraft and Philosophy", p 93, Zero Books, Hants, 2012.
14. Lecture on Creativity and AI by Demis Hassabis to the Royal Academy of Arts, September 17th 2018.
15. P. Behrens, "Stil?", Die Form: Zeitschrift für gestaltende Arbeit, pp 5–8, 1922.

These generated objects are not a copy of existing styles, even though those objects are based on existing data in the form of historical architecture images. It is not merely a copy; it falls into its own category. The result presents a provocation for the architect's mind: what are we seeing in the strange, defamiliarized13, and alien images resulting from this process? As Demis Hassabis, the CEO of DeepMind, explains, there are three categories that need to be observed in this case: aspects of interpolation within a dataset (which machines are very good at), aspects of extrapolation (a profoundly human ability), and invention – the last one being profoundly difficult to achieve even by humans, let alone machines14. This statement epitomizes the tension between style as it is known in machine vision and style as it is known in architecture: data-driven style is not a new architectural style; it is a mash-up of existing architectural styles, of textural and geometric features that have been captured by a given dataset or image. The results remind us of what we have seen; they are familiar but strange. Behrens observed that "every period has its unique style, including ours", although "a style is not recognizable in one's own time but rather can only be perceived at a later time"15. Riffing on Peter Behrens's argument, it would mean that it might not be up to us as contemporary witnesses to define a particular style – that might be the job of an art historian down the line – and it is also doubtful whether we have the necessary distance to evaluate the current actions that lead to the provocative imagery (Fig. 4) resulting from the use of Neural Networks as a design method.

However, what can be observed is the incredible influence that neural networks have on human designers; the images generated by Neural Networks can act as a stimulus for the human mind to interpret them in ways that ultimately push the architecture discourse further. Because they are based on existing information, they are familiar enough to be construed as architecture but strange enough to provoke us and challenge us as designers. Ultimately, neural networks as a design tool provoke questions about the boundaries of design or the value of the history of our discipline. Simultaneously these images explore aspects such as agency, authorship, and design ethos in a posthuman design ecology. Currently, many parts of this posthuman design ecology are blank spots – waiting to be charted.
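The "interpolation within a dataset" mentioned above, and the Baroque-to-Modern plan of Fig. 4, come down to walking between two latent codes and decoding every intermediate point. A minimal sketch, with random vectors standing in for real StyleGAN codes and no actual image generator attached:

```python
import numpy as np

def lerp(z_a: np.ndarray, z_b: np.ndarray, t: float) -> np.ndarray:
    """Linear interpolation between two latent vectors, t in [0, 1]."""
    return (1.0 - t) * z_a + t * z_b

rng = np.random.default_rng(2)
# Hypothetical 512-dimensional codes; in a real pipeline these would be
# latent vectors whose decodings resemble a Baroque and a Modern plan.
z_baroque, z_modern = rng.normal(size=512), rng.normal(size=512)

walk = [lerp(z_baroque, z_modern, t) for t in np.linspace(0.0, 1.0, 5)]

print(len(walk))                        # 5 stops along the walk
print(np.allclose(walk[0], z_baroque))  # endpoints are the inputs: True
print(np.allclose(walk[-1], z_modern))  # True
```

Every intermediate code lies inside the span of what the model has seen, which is precisely why the resulting imagery is a mash-up — familiar, but strange — rather than an invention.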
The Ecology
InFraReD: Accessible Environmental Simulations
by AIT's City Intelligence Lab, A. Chronis, T. Galanos, S. Duering, N. Khean


The impact of climate change on urban environments is no longer projected but measured. In 2020, 262 deaths and $98.9 billion worth of damages occurred due to extreme climate events in the United States alone1. The construction industry is still the largest contributor of greenhouse gas emissions, with more than 38% of the total annual emissions attributed to the construction and operation of buildings2; a metric which itself does not account for the significant impact of the construction industry on every aspect of our ecosystems, from urban heat islands to water and waste management, etc. If one considers that we are currently building more than 11,000 buildings per day3, with around 3,600 more projected to be built daily by 2050 if the urbanization rate continues, it's easy to conclude that we need to calculate and mitigate the environmental impact, both in terms of energy demand but also in terms of the direct effect that buildings have on their environment, such as for example their thermal, solar or wind properties. Despite the immense impact of the construction industry on the environment, we have very little understanding of how our constructions affect their environment, especially during the crucial phases of their conception.

1. A. Smith, "2020 U.S. Billion-Dollar Weather and Climate Disasters, In Historical Context", 10.13140/RG.2.2.25871.00166/1, 2021.
2. V. Bertollini, "Here's What Building the Future Looks Like for a 10-Billion-Person Planet", Redshift, 2018.
3. United Nations Environment Programme, "2020 Global Status Report for Buildings and Construction: Towards a Zero-emission, Efficient and Resilient Buildings and Construction Sector", Nairobi, 2020.

Early design stages, when most important design decisions are made, significantly affect the design outcomes; by the time the designs are finalized, massing volumes, orientations and other fundamental environmental design aspects can change very little. It is common knowledge that early design stages require fast but also accurate


environmental simulation feedback to have a maximal positive effect on the climatic aspects of design. The evaluation, however, of the environmental impact of both new and existing buildings is not trivial. Environmental simulations can be quite complex, time-consuming and difficult to set up. Moreover, they often require higher technical expertise, not commonly found in architecture and planning offices. As an example, a typical wind simulation – a Computational Fluid Dynamics (CFD) simulation – takes days to set up and many hours to run. A wind comfort simulation can take up to a few days simply to run. This complexity makes the inclusion of such simulations prohibitive for fast, early-stage design cycles, and in most cases impossible to include at any design phase.

The integration of environmental simulations in both computational and standard design systems has undoubtedly increased in recent years. Ladybug tools4, as an example, have made environmental simulations accessible to a much greater audience. However, the barriers of environmental simulations, specifically simulation speed and domain expertise, remain. To overcome these barriers, further to integration, faster simulation models are also needed. One way to do this is using Artificial Intelligence. The intense recent development of AI models has revolutionized simulation speeds in other domains, and environmental simulations can benefit from this development. ML models can be used to predict simulation results in a fraction of the time required to conventionally run them. InFraReD, the intelligent framework for resilient design, developed by the City Intelligence Lab (CIL) of the Austrian Institute of Technology, is aiming to do exactly that: to use AI to overcome the environmental simulation barriers in architectural and urban design5.

4. Ladybug Official Website: https://www.ladybug.tools/
5. T. Galanos & A. Chronis, "A deep-learning approach to real-time solar radiation prediction", Routledge, 2021.


1
A large solar radiation simulation dataset used to train InFraReD's machine-learning models.

2
An actual solar radiation simulation compared with the AI-predicted simulation result from InFraReD.

InFraReD is based on deep learning models, trained with large simulation datasets developed by the CIL. To produce these simulation datasets, the CIL has developed a distributed simulation pipeline that produces thousands of simulation results (Fig. 1), automating the simulation processes from the geometry input to the simulation output. These simulation results are then used to train deep learning models to learn the relationship between geometry input and simulation output. In doing so, the whole simulation workflow is overcome, and the result is directly produced. As an example, a CFD simulation result that takes 8 hours to produce is predicted within a few seconds (Fig. 2). InFraReD's models are trained using data from many cities around the world and simulation results from fundamental environmental models, such as wind comfort (CFD), solar radiation and sunlight hours calculations. The accuracy of the simulation predictions is quite high (ranging from 85 to 95%), making InFraReD very useful, especially for early design stages.
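The substitution at the heart of this workflow can be sketched in miniature: run an expensive simulator offline to collect (geometry, result) pairs, fit a model to that mapping, then answer new queries without simulating. Everything below is a toy stand-in; the "simulator" is a cheap made-up linear function and the surrogate is least squares, whereas InFraReD trains deep networks on real CFD and radiation datasets.

```python
import numpy as np

rng = np.random.default_rng(3)

def slow_simulator(geometry: np.ndarray) -> float:
    """Stand-in for an hours-long CFD or radiation run (instant here)."""
    return 3.0 * geometry[0] - 2.0 * geometry[1] + 0.5 * geometry[2]

# 1) Offline: run the expensive simulator to build a training set.
X_train = rng.normal(size=(200, 3))                  # encoded design options
y_train = np.array([slow_simulator(g) for g in X_train])

# 2) Fit the surrogate to the geometry -> result mapping.
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# 3) Online: predict a new design's result without simulating.
new_design = np.array([1.0, 0.5, -1.0])
predicted = float(new_design @ coef)
actual = slow_simulator(new_design)
print(f"predicted {predicted:.3f} vs simulated {actual:.3f}")
```

The one-off training cost is paid where time is cheap; every later query costs a matrix product, which is what makes instant feedback in early design stages possible.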

The main goal of the development of InFraReD is to make environmental simulations more accessible and to increase their integration at all stages of design. A simulation result that is available in seconds
can have a significant impact in early design stages by allowing designers to make well-informed decisions based on the environmental impact of their designs. However, for these results to be accessible, the integration of InFraReD's AI models in the design process is also needed. For that reason, InFraReD is developed as a modular, open-ended architecture that allows easy integration in both existing as well as new design systems. InFraReD's models can currently be accessed through three different approaches: as a cloud-based app (Fig. 3) that allows end users to design or upload their designs on the cloud and get instant environmental feedback; as a Grasshopper plugin that allows more expert users to directly integrate InFraReD's AI models in standard computational workflows such as Grasshopper; as well as through an API that allows other design platforms to integrate InFraReD and provide fast environmental feedback to their users. This open-ended deployment approach aims to maximize the accessibility of InFraReD's models and thus maximize the accessibility of environmental simulation to diverse users.

3
Wind analysis in InFraReD's web app interface.

Further to using AI to predict the simulation results, InFraReD also aims to address the barrier of the lack of domain expertise to understand and effectively use these results to steer design decisions, through a key performance


indicators (KPI) approach. InFraReD's models compute not only the standard performance maps and point values that a user would find in environmental simulation platforms, but also a series of useful KPIs that help drive design decisions (Fig. 4). These can be, for example, the percentage of unsafe areas in terms of pedestrian wind comfort, or the areas with excess solar radiation and thus extreme thermal conditions. Through a comprehensive design explorer that allows an intuitive comparison of different design options, and a performance tracker which helps the user understand how to improve the environmental performance based on specific KPIs, InFraReD aims to help the user focus on meaningful design metrics that can steer their design to improved performance.

4
InFraReD's KPI-based design explorer.
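A KPI of the kind described, such as "percentage of unsafe areas for pedestrian wind comfort", is a simple aggregation over a performance map. The wind-speed grid and the 5 m/s comfort threshold below are illustrative values for the sketch, not InFraReD's actual criteria.

```python
import numpy as np

def unsafe_area_pct(wind_speed_map: np.ndarray, threshold: float = 5.0) -> float:
    """Share of grid cells whose wind speed exceeds the comfort threshold."""
    return float((wind_speed_map > threshold).mean() * 100.0)

speeds = np.array([[2.0, 4.5, 6.1],
                   [7.3, 3.9, 5.2],
                   [1.8, 4.9, 2.2]])    # toy pedestrian-level wind map (m/s)

print(f"unsafe area: {unsafe_area_pct(speeds):.1f}%")   # 3 of 9 cells: 33.3%
```

Collapsing a whole performance map into one comparable number is what lets a design explorer rank options without demanding simulation expertise from the user.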

The increasing integration of environmental simulations in design systems, as discussed, can significantly help designers and planners understand the environmental impact of their designs. It can be argued that for most users, integration itself gives access to previously inaccessible environmental simulation models. Integration frameworks, such as for example Ladybug, enable a designer to incorporate results from state-of-the-art simulation models like Radiance6, EnergyPlus7 or OpenFOAM8, all being environmental simulation standards. The integration of these models, though, still does not reduce the speed and domain expertise burden which InFraReD is trying to overcome.

6. Radiance Official Website: https://www.radiance-online.org
7. EnergyPlus Official Website: https://energyplus.net
8. OpenFOAM Official Website: https://www.openfoam.com

Integration, however, also enables a fundamentally different way of optimizing the environmental impact of designs. The examples of computational optimization or algorithmic exploration of designs that couple advanced computational techniques – such as genetic algorithms, simulated annealing or self-organizing maps – with environmental simulations, mainly solar

radiation or energy simulations, are numerous. For these algorithmic design explorations and optimization methods, the computational demand of environmental simulation, which InFraReD's simulation prediction models overcome, is the biggest bottleneck. If we take as an example the simplest environmental simulation – a solar radiation calculation that takes only a few minutes to perform – and we assume a design search for a mere thousand different options to explore, we still need many hours for this optimization run. If we then consider the more complex wind simulations, which need at least a few hours to perform, it is easy to conclude that an optimization run is simply not possible. It is evident that the ability to obtain environmental simulation results of such complex simulation models in seconds can lead to unprecedented levels of fine-tuning of design problems, thus potentially significantly reducing the environmental impact of future constructions while still allowing designers a great amount of freedom on spatial configurations. The aim of InFraReD is exactly that: to make environmental simulations accessible to both traditional as well as advanced design processes and allow designers and planners to make more environmentally conscious design decisions.
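The back-of-envelope comparison above can be made explicit. The per-run timings are illustrative stand-ins for the article's rough figures (about a minute per conventional solar run, about a second per AI prediction), applied to a thousand-option sequential search:

```python
options = 1_000
conventional_run_s = 60    # ~1 minute per conventional solar radiation run
surrogate_run_s = 1        # ~1 second per AI-predicted result

conventional_h = options * conventional_run_s / 3600
surrogate_min = options * surrogate_run_s / 60
speedup = conventional_run_s / surrogate_run_s

print(f"conventional search: {conventional_h:.1f} h")    # 16.7 h
print(f"surrogate search:    {surrogate_min:.1f} min")   # 16.7 min
print(f"per-run speedup:     {speedup:.0f}x")            # 60x
```

The per-run speedup translates one-for-one into the size of the design space that can be explored within a working session, which is what makes optimization over wind-scale simulations feasible at all.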

The Language
Semanticism: Towards a Semantic Age for Architecture
by Stanislas Chaillou, Architect, Data Scientist


For a long time, Architecture has benefited from fruitful analogies with linguistics. Language is a rich matrix that provides both a system and a free canvas where creations are expressed according to rules and transgressions. Quite naturally, architects have harvested its lexicon and frameworks to describe and think about Architecture.

Over the past decades, the discipline has in fact considerably bor-
rowed from grammar and its concepts: the translation of Architec-
ture into formal languages has corresponded to a need to formulate,
organize, and replicate architectural information. Although this effort
has proven to be very instructive for the discipline, a strict grammat-
ical conversion does not fully account for many aspects of Architec-
ture: at the very least it represents a missed opportunity.

Today, we believe that semantics offers a new angle to revive the analogy between Architecture and linguistics. This alternate approach should allow for a more adequate dialogue between technology and the architectural agenda. Built upon the latest developments in Artificial Intelligence, we will call this new momentum for Architecture "Semanticism"1.

1. S. Chaillou, "Latent Architecture: a semanticist's perspective", Architectural Research Quarterly 24, pp 309-313, 2020.


Rules of Design, Design of Rules

A short glance at the past century reminds us how concepts borrowed from linguistics have made their way into many other disciplines. This chronology could in fact begin with Gottlob Frege's seminal work on formal languages. In his book "Begriffsschrift"2, the German philosopher attempts to ground logic in arithmetic. For Frege, the rigor of arithmetic would help formulate a "pure" language, so as to provide a powerful framework for thought processes. Frege, and later the British logician Bertrand Russell, are going to deploy an entire corpus where the formulation of complex sets of rules will offer an early expression of formal languages.

2. G. Frege, "Begriffsschrift", Louis Nebert Verlag, 1879.

Since then, the discussion has matured among linguists; the relevance of formal languages has also grown, as computer science came to adopt some of their characteristics for shaping many programming languages. By capillarity, Architecture has also found an interest in this approach, as theorists started to investigate the benefits of a rule-based design process. In this respect, shape grammar and parametric modeling represent a golden age for the formalization of design. The work of James Gips and Georges Stiny offers compelling examples of these attempts at defining a rule-based logic for the organization of compositions (Fig. 1). Their seminal publication in 19713 displays such systems and demonstrates the originality of this approach. With parametric modeling and the advent of computers, rules are formulated into scripts. Functions and parameters are then woven into entire procedures for the machine to follow. Patrik Schumacher's manifesto in 20084 reaffirms Parametricism's dependence on this type of rule-based approach.

1
Typical shape grammar procedure (three rules applied from an initial shape, steps 0–19). By J. Gips & G. Stiny.

3. J. Gips & G. Stiny, "Shape Grammars and the Generative Specification of Painting and Sculpture", in IFIP Congress, Vol. 2, No. 3, pp 125-135, 1971.
4. P. Schumacher, "Parametricism as Style - Parametricist Manifesto", 11th Architecture Biennale, Venice, 2008.


In a nutshell, with shape grammar and parametric modeling, architects have explored Design as a process built on logic and rules. Under this definition, Architecture could be translated into heuristics explicitly declared by architects, to then be communicated to computers as exact procedures to follow. This grammatical momentum in Architecture remains, to this day, a notable moment for theory and formal research.
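The explicit, rule-based procedure described above can be miniaturized as symbol rewriting. Real shape grammars, Gips and Stiny's included, operate on geometry; in this sketch each symbol stands in for a shape, and each hypothetical rule replaces one shape with a composition, exactly the kind of exact procedure a machine can follow.

```python
# Hypothetical rules: each maps one shape-symbol to its replacement.
RULES = {
    "S": ["A", "B"],    # Rule 1: split the initial shape
    "A": ["A", "a"],    # Rule 2: grow an 'A' by appending a motif
    "B": ["b"],         # Rule 3: terminate a 'B'
}

def derive(shape: list[str], steps: int) -> list[str]:
    """Apply the first applicable rule, left to right, `steps` times."""
    for _ in range(steps):
        for i, sym in enumerate(shape):
            if sym in RULES:
                shape = shape[:i] + RULES[sym] + shape[i + 1:]
                break
        else:
            break    # no rule applies: the derivation is finished
    return shape

print(derive(["S"], 1))   # ['A', 'B']
print(derive(["S"], 4))   # ['A', 'a', 'a', 'a', 'B']
```

Every line of the output is fully determined by the declared rules — the heuristics are explicit, which is both the power and, as the next section argues, the limit of the grammatical approach.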

A Semantic Momentum

The influence of grammatical concepts, however, is soon going to fade away, for the benefit of new frameworks grounded in semantics. In linguistics, semantics allows moving past the oversimplification of the relationship between language and meaning into a strict logic-based mapping. If the language appears to convey more than the sum of its parts, this new discipline hopes to help address the question of meaning and its deep complexity.

In computer science, this shift was later echoed by the development of new frameworks to represent information. Following the semantic principles laid down in linguistics, computer scientists would investigate the possibility of reflecting their language’s content in the very structure of their code. Object-Oriented Programming (OOP), that is, the organization of code around the abstraction of objects with attached properties, is a direct expression of this reality.
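As a minimal sketch of this idea, the structure of the code below mirrors the content it describes. The class names and properties (Room, Door) are hypothetical illustrations, not drawn from any particular schema.

```python
# Objects with attached properties: the code's organization reflects the
# architectural content it represents.

class Room:
    def __init__(self, name, program, area):
        self.name = name          # identifier, e.g. "R1"
        self.program = program    # high-level concept, e.g. "bedroom"
        self.area = area          # property, in square meters
        self.doors = []           # relationships to other objects

class Door:
    def __init__(self, room_a, room_b):
        self.rooms = (room_a, room_b)
        room_a.doors.append(self)  # register the relationship on both rooms
        room_b.doors.append(self)

bedroom = Room("R1", "bedroom", 12.0)
hallway = Room("R2", "hallway", 6.5)
Door(bedroom, hallway)
print(bedroom.doors[0].rooms[1].program)  # prints: hallway
```

Navigating from one object to another through named attributes is precisely the kind of content-bearing structure the semantic principles call for.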
5. T. Berners-Lee, “Semantic Web Road Map”, W3C, 1998.

The Semantic Web, as explained by Tim Berners-Lee⁵, inventor of the World Wide Web, is another manifestation of the same principles: the Web as we know it today is built on an infrastructure of nodes and connections whose denomination and organization reflect the

The Language

6. W3C OWL Working Group, “OWL 2 Web Ontology Language”, 2012.

content they host. The Web Ontology Language (OWL)⁶ maps out the entirety of this structure. In a nutshell, from OOP to OWL, the semantic principles and their benefits would go on to shape technology profoundly.
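The Semantic Web's nodes and connections can be loosely sketched as subject-predicate-object triples, where the naming itself carries the meaning. The vocabulary below is an illustrative assumption, not actual OWL syntax.

```python
# A toy triple store: labeled nodes and labeled connections, in the spirit of
# the Semantic Web. The predicates ("isA", "isPartOf", ...) are hypothetical.

triples = [
    ("Bedroom", "isA", "Room"),
    ("Room", "isPartOf", "Building"),
    ("Bedroom", "hasProperty", "sleepingArea"),
]

def query(triples, predicate):
    """Return all (subject, object) pairs linked by a given predicate."""
    return [(s, o) for s, p, o in triples if p == predicate]

print(query(triples, "isA"))  # → [('Bedroom', 'Room')]
```

Because both the nodes and the links are named after what they describe, the structure of the data doubles as a description of its content, which is the semantic principle at work.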

In Architecture, thinking would soon move along the same lines thanks to a few theorists. The British-American architect and design theorist Christopher Alexander remains a central figure of this movement. In his books, Alexander lays down his idea of comparing built forms to patterns, whose nesting and interlocking would explain the morphology of our built environment’s fabric. For him, this approach goes hand in hand with the attempt to exhaustively map the categories and types of systems composing the built world. In A Pattern Language (1977)⁷, Alexander goes through this process of declaring a quasi-ontology of built forms, category by category, type by type, so as to describe and explain their potential relationships. The underpinnings of BIM very much proceed from the same intuitions. The information in BIM models is indeed organized following an OOP schema⁸, declaring families, types, elements, their respective properties, and their ways of interacting with one another.

7. C. Alexander, “A Pattern Language”, Oxford University Press, 1977.

8. The ifcOWL initiative, by turning the universal BIM format (IFC) into a proper ontology, gives us the opportunity to contemplate how much the underlying BIM schema relies on a deeply semantic structure.

It is not unreasonable to say that semantic principles now profoundly permeate technology and, by extension, many creative fields. The porosity between the two spheres seems in fact considerable, and leads us today to anticipate a profound evolution in Architecture: Semanticism, that is, the application of semantic principles both as a descriptive and a generative framework for the discipline.


Semanticism

Today, semantics provides a robust descriptive framework that permeates most of Architecture’s tool set. Recent AI projects, however, are now demonstrating its generative capability. This new avenue of research is about to round out a “semantic momentum” for the discipline. “Semanticism” gives a name and a direction to this reality. As shape grammar did yesterday, the ambition of Semanticism is to help architects both describe and generate the shapes and forms that populate our built environment. If the former is well underway, the latter is still nascent and lacks a clear definition. In an effort to delineate its upcoming contribution to Architecture, we believe semantic generation differentiates itself from previous methodologies in at least three distinct ways.

First, for its abstraction potential: previous generative methodologies mainly operated on raw geometry or low-level numerical data. Semanticism, on the contrary, formulates architectural information so as to convey its content through its form. Through the wealth of potential abstractions – categorical, graphical, textual, etc. – information gets formatted to reflect high-level architectural concepts. Figure 2 presents this reality: in an image, room colors encode a program (categorical); in a graph, nodes represent rooms while connections denote doorways (graphical); in a sentence, expressions depict the general features of an actual space layout (textual)⁹. Semanticism’s approach to translation also represents a unique opportunity. Although it is straightforward to turn an architecture into a semantic abstraction, it is much harder to reverse this process. Where previous generative paradigms would employ explicit rule-based systems to do so, Semanticism relies on the learning process of certain AI models to achieve the mapping from abstraction to forms. This transformation is therefore induced rather than deduced, observed rather than described, learned rather than declared: this difference sets Semanticism apart from previous generative methodologies. As a result, the forms obtained have the potential to be better informed and well-rounded than with previous approaches (Fig. 2). Finally, Semanticism’s use of “multimodal” generation contributes to its relevance for Architecture: using certain AI models, one semantic abstraction can be translated into multiple designs, so as to render Architecture’s vast diversity. In simpler terms, one input maps to multiple outputs. Consequently, in Figure 2, four different options are obtained each time for a unique input. This “one-to-many” translation is an essential aspect of semantic generation that addresses the variety of built forms.

2
Conversion from semantic abstractions to space layouts using AI. Top: S. Chaillou; Middle: Nauata et al.; Bottom: images generated using OpenAI’s Glide model.

9. The ArchiText project offers an ideal example of this type of application and is accessible at the following address: https://architext.design/

[Figure 2: a categorical input (color-coded plan), a graph input, and a text input (“A housing floor plan with three bedrooms” / “A House with Three Bedrooms and Two Bathrooms”), each translated into four generated outputs.]
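The three abstractions of Figure 2 can be loosely sketched in code, applied to the same hypothetical layout. The encodings below are illustrative assumptions, not the formats used by the cited projects.

```python
# Three semantic abstractions of one hypothetical two-bedroom layout.

# Categorical: a room-to-program mapping (rendered as a color code in an image)
categorical = {"R1": "bedroom", "R2": "bedroom", "R3": "bathroom", "R4": "hallway"}

# Graphical: nodes are rooms, edges are doorways
graph = {
    "nodes": ["R1", "R2", "R3", "R4"],
    "edges": [("R1", "R4"), ("R2", "R4"), ("R3", "R4")],
}

# Textual: a sentence describing the layout's general features
bedrooms = sum(1 for p in categorical.values() if p == "bedroom")
text = f"A housing floor plan with {bedrooms} bedrooms."
print(text)  # → "A housing floor plan with 2 bedrooms."
```

Each abstraction deliberately discards raw geometry and keeps only high-level architectural concepts; it is this stripped-down representation that an AI model would then learn to translate back into actual forms.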

As with any new paradigm, however, we believe Semanticism is a double-edged sword, with epistemic gains and challenges alike. The formulation of abstractions is the first immediate challenge Semanticism faces. As shown in Figure 2, many representation modes can help abstract and encode Architecture semantically. This “game of formulation” is a challenge that will require extensive work and refinement over the next few years. The next important facet of Semanticism is its “polysemic” potential. In linguistics, polysemy refers to the fact that a single term can carry several meanings. By analogy, a semantic abstraction can be translated into a field of shapes, rather than into a single form. This polysemy can liberate the design process by providing architects with a wealth of designs. However, training AI models to achieve this “one-to-many” translation is an arduous technical challenge. Elaborating training processes able to keep


these models’ generative spectrum as wide as possible will be one of Semanticism’s most pressing imperatives. Context embedding, finally, confers on Semanticism a clear advantage over previous generative methodologies. As training sets can carry many implicit features – typological, cultural, or demographic information, to name only a few – they offer designers a unique opportunity to embed some crucial dimensions of Architecture in their generative tools. AI models, while operating the translation from abstractions to forms, can take these various influences into account. It therefore remains essential to control the training process, as these biases can either incorporate relevant contextual information or introduce irrelevant notions into the generation process.
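The “one-to-many” behavior described above can be caricatured with a stub generator: one semantic input, several latent samples, several distinct outputs. The function and its parameters are purely hypothetical stand-ins for a trained model.

```python
import random

def generate(semantic_input, latent_seed):
    """Stub generator: the same input with different latent seeds yields
    different layouts. A real system would parse the input; here the count
    of bedrooms is fixed for illustration."""
    rng = random.Random(latent_seed)  # the latent vector, reduced to a seed
    n_bedrooms = 3
    return {f"bedroom_{i}": round(rng.uniform(9.0, 16.0), 1)
            for i in range(n_bedrooms)}

prompt = "a floor plan with three bedrooms"
options = [generate(prompt, z) for z in range(4)]  # one input, four outputs
```

Every option satisfies the same semantic input, yet each differs in its particulars; keeping that spread of options wide is the training imperative discussed above.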

Towards a Technology of the Specific


Although Semanticism is in its early days, we believe a “semantic moment” is underway. This momentum foreshadows a deeper purpose and a greater potential. Semanticism is somewhat at odds with those theories that, in Architecture, have aimed at placing a generic style above the particularity of cultures or the singularity of locations. Semanticism offers the means to anchor Architecture back into its immediate context. And if technology often rhymes with the uprooting of our practice, Semanticism, in contrast, provides us with a renewed methodology to ground the forms we design in the specificity of a given site, a certain place, or a particular culture. No “international style 3.0” or “space design automation”; rather, a framework for architects, and maybe others, to observe, describe, and create architectures mindful of the particular, aware of the singular, closer to the peculiar. This potential constitutes, in essence, Semanticism’s greatest promise for Architecture.

Closing
Remarks

The rapid pace of innovation presents architects with an ever-growing technological landscape. The “disruption” rhetoric, however, too often prevents practitioners from understanding the actual dynamic between Technology and its applications. Taking the opposite route, this book has tried to clarify and illustrate AI’s distinct potential with regard to Architecture. To conclude, we wish to condense its message into a few final assertions.
Most evidently, AI aspires to democratize the analytical and assist the sensitive in Architecture. Simpler, faster and cheaper predictions, coupled with the ability to process a wide and diverse array of mediums – from textual to geometrical or visual inputs – confer on AI a distinct relevance to the many facets of the architectural agenda. The experiments and research projects presented in this book speak to this immediate contribution. AI is also an invitation to reestablish observation as creativity’s springboard. As seen earlier, building upon the notion of statistical learning, AI-enabled tools can derive their functioning principles from the information collected across multiple observations. Rather than modeling Architecture using explicit context-agnostic rules, AI can help study architectural patterns in context. Consequently, an age of AI in Architecture could correspond to an increased understanding of, and proximity to, the unique character of singular conditions. Far from uprooting the practice, AI can give architects new means to refine the adequacy between their designs and the specificity of contextual or cultural factors. An age of AI in Architecture also carries the potential to build on the porosity between research and practice, even more so than previous technological revolutions did. The synergies between architectural practice’s project-based mindset and AI’s research-based culture can be the bedrock of a new approach to Architecture, provided practitioners and researchers establish meaningful bridges between both worlds. It is, finally, an age that expects much more from technology than the mere promise of automation. The sole autonomous replication of architectural patterns by computers does not harness AI’s full potential. On the contrary, the dynamic relationship between designers and AI through a “grayboxing”¹ approach is a perspective far more likely to benefit Architecture in the long run.

1. A. Witt, “Grayboxing”, Log #43, pp. 69-77, 2018.

The relationship between both worlds is not yet a set reality. However, as Architecture engages with AI, it just so happens that the world around us is watching: many other creative fields, looking to embrace AI as a new methodology, still wrestle with its adoption and are today witnessing the vibrant discussions unfolding in our field. The discipline has here a unique opportunity to set a lasting precedent, and to inspire practitioners well beyond the realm of Architecture.

References & Resources

The Incredible Inventions of Intuitive AI, a conference by M. Conti, TedX, 2017

AI and Creativity: Using Generative Models To Make New Things, by Google Brain, 2017

AI & Architecture: Towards a New Approach, a conference by S. Chaillou, 2020

Digital Culture in Architecture, a conference by Antoine Picon, 2010

The Routledge Companion to Artificial Intelligence in Architecture, I. As, P. Basu, Routledge, 2021

Atlas of Digital Architecture, L. Hovestadt, U. Hirschberg and O. Fritz, Birkhaeuser, 2020

Architectural Intelligence, M. W. Steenson, MIT Press, 2017

Architecture, Design, Data, P. G. Bernstein, Birkhaeuser, 2018

References &
Contributors

Image Credits

Foreword
© Stanislas Chaillou, 2020

Artificial Intelligence, Another Field
Fig. 1: © S. Chaillou, 2021
Fig. 2: © AT&T, photographer: Jack St.
Fig. 3: © Historic American Engineering Record
Fig. 4: © B. G. Buchanan and E. H. Shortliffe
Fig. 5: © Image Courtesy of NVIDIA
Fig. 6: © OpenAI

The Advent of Architectural AI
Fig. 1: © S. Chaillou, 2020
Fig. 2: © Historic American Buildings Survey (Library of Congress)
Fig. 3: © Safdie Architects
Fig. 4: © Electronic edition of Sutherland’s Sketchpad dissertation, image adapted to format
Fig. 5: © C. M. Highsmith Archive, Library of Congress
Fig. 6: © Z. Hadid Architects
Fig. 7: © S. Chaillou, 2020
Fig. 8: © Cedric Price fonds, Canadian Centre for Architecture

AI’s Deployment in Architecture
Fig. 1-12: © S. Chaillou, 2021
Fig. 13: © SPAN M. del Campo & S. Manninger, 2019 & 2020
Fig. 14: © Image Courtesy of NVIDIA
Fig. 15-18: © S. Chaillou, 2021
Fig. 19: © Isola & al.
Fig. 20-21: © Kelly & al.
Fig. 22-23: © K. Steinfeld
Fig. 24: © Wang & al.
Fig. 25: © Image Courtesy of NVIDIA
Fig. 26: © Mueller & Danhaive, 2020
Fig. 27: © Danhaive, 2020
Fig. 28: © Spacemaker AI
Fig. 29: © T. Galanos
Fig. 30: © Spacemaker AI

The Outlooks of AI in Architecture

The Form
Fig. 1-4: © I. Koh

The Context
Fig. 1-4: © K. Steinfeld

The Performance
Fig. 1-4: © Mueller & Danhaive

The Practice
Fig. 1-4: © Foster + Partners, 2021

The Model
Fig. 1-2: © Bibliotheque Nationale de France
Fig. 3-4: © Andrew Witt, 2021

The Scale
Fig. 1-4: © Spacemaker AI

The Style
Fig. 1: © SPAN M. del Campo & S. Manninger, 2019
Fig. 2-4: © SPAN M. del Campo & S. Manninger, 2020

The Ecology
Fig. 1-4: © Chronis, 2021

The Language
Fig. 1: © G. Stiny
Fig. 2: © S. Chaillou, 2021, © Nauata et al, © S. Chaillou

Contributors’ Biographies

ARD Group
The Applied Research and Development team (ARD) at Foster & Partners is an integrated multi-disciplinary team of architects and engineers. The ARD’s expertise ranges from art, aerospace engineering, and computer science to landscape architecture, structural engineering and applied mathematics.

City Intelligence Lab
The City Intelligence Lab (CIL) is an interactive digital platform to explore novel forms and techniques for the urban development practice of the future. As an incubator for intelligent solutions, the lab fosters the co-creation of digital urban planning workflows and processes, applying augmented reality and interactive design interfaces to create simulations, generative design and artificial intelligence solutions.

Immanuel Koh
Immanuel Koh holds a joint appointment as an assistant professor in Architecture & Sustainable Design (ASD) and Design & Artificial Intelligence (DAI) at the Singapore University of Technology and Design (SUTD), where he now directs Artificial-Architecture. He obtained his PhD between the School of Computer Sciences and the Institute of Architecture at EPFL.

Matias del Campo
Dr. Matias del Campo is a registered architect, designer, and educator. He is an Associate Professor at Taubman College, University of Michigan, and director of the AR2IL at UoM. He conducts research on advanced design methods in architecture through the application of Artificial Intelligence techniques.

Andrew Witt
Andrew Witt is an associate professor in practice in Architecture at the Harvard GSD, teaching and researching on the relationship of geometry and machines to perception, design, construction, and culture. Witt is also co-founder of Certain Measures, a Boston/Berlin-based design and technology studio.

Renaud Danhaive & Caitlin Mueller
Renaud Danhaive and Caitlin Mueller are respectively post-doctoral associate and associate professor at MIT’s Digital Structures Lab (DS Lab). The DS Lab’s work focuses on the synthetic integration of creative and technical goals in the design and fabrication of buildings, bridges, and other large-scale structures.

Carl Christensen
Carl Christensen is co-founder and CTO at Spacemaker AI. The company, founded in 2016, provides an online platform for real-estate developers, architects and other stakeholders in the AEC industry to make early-stage data-driven decisions.

Kyle Steinfeld
Kyle Steinfeld is an associate professor of Architecture at U.C. Berkeley. His academic and scholarly work investigates the relationship between the creative practice of design and computational design methods. More generally, his creative work happens at the intersection of AI and Environmental Design.

Alexandra Carlson
Alexandra Carlson is a PhD candidate at the Robotics Institute, University of Michigan. Her current research focuses on robust computer vision for autonomous vehicles, specifically on realistic noise modeling in images.

Index

aesthetics 48
AI winter 20, 22, 137
Alexander, Christopher 193
AlexNet 26
algorithm 174
AlphaGo 26
Applied Research & Development Group (ARD) 136, 139, 140
ArchiCAD 52
Archigram 38
Archistar 58
Architecture Biennale 58, 116
Architecture, Engineering, Construction (AEC) 136, 137, 162, 164
Architecture Machine Group (AMG) 44, 56
Arsenal Pavilion 8
Artificial Intelligence (AI) 7, 17, 33, 56, 57, 60, 63, 64, 78, 107, 109, 111, 113, 117, 119, 129, 133, 135, 137, 148, 163, 171, 182, 189, 194, 199
Artificial Neural Network (ANN) 16, 69, 71
Ascher Barnstone, Deborah 177
AutoCAD 46
autonomous car 24
Bardeen, John 16
Bauhaus 36, 174
Bayesian networks 24, 69
Behrendt, Walter Curt 173
Behrens, Peter 173, 178
Bell Lab 16
Berners Lee, Tim 192
Bézier, Pierre 43
Bradford Shockley, William 16
Buckminster, Fuller 147
Building Information Modeling (BIM) 52, 120, 138, 193
CATIA 46
Christensen, Carl 163
City Intelligence Lab (CIL) 182, 184
classification 26, 174
cloud 25, 111, 116, 167, 185
ComfortGAN 104
Computational Fluid Dynamics (CFD) 102, 182, 184
Computer-Aided Design (CAD) 33, 42, 43, 46, 49, 56, 120
convolution 71
Convolutional Neural Network 71
Cornell Aeronautical Laboratory 16
CoveTool 58, 104
Cross, Nigel 121
CYC 22
DaCosta Kaufmann, Thomas 152
DALL-E 30
DARPA 22, 24
DARPA Grand Challenge 24
Dartmouth Summer Research Project 17, 64
Dassault Systemes 46
data 46, 65, 66, 68, 81, 114, 122, 124, 128, 129, 130, 136, 137, 178
database 25, 26, 72, 78
dataset 90, 116, 122, 138, 139, 140, 176, 178
Deep Blue 24
Deep Learning 13, 24, 25, 70, 111, 184
DeepMind 26, 178
Delve 58
Devol, George 18
Dymaxion House 36
ecology 171, 179, 180
efficiency 54, 100, 102, 104, 109
ELIZA 17, 18
EnergyPlus 187
Engelberger, Joseph 18
Evans, Robin 121
evolutionary algorithm 24, 132
expert system 20, 21, 24
feedback loop 17, 24, 70
File-Seeker 142
finite element analysis (FEA) 128
floor plan 56, 57, 78, 80, 86, 88, 104, 140
Foster & Partners 8, 136, 138, 140
FrankenGAN 92
Frege, Gottlob 190
Fuller, Buckminster 36
GAN Loci 94, 96, 122, 124
GauGAN 96
Gehry, Frank 46
Geisberg, Samuel 49
Generative Adversarial Network (GAN) 27, 71, 72, 86, 90, 98, 111, 113, 114, 122
Generator 57, 86
genetic algorithm 187
Geographic Information System (GIS) 85
Gerlee, Philip 149
Giedion, Sigfried 173
Gips, James 190
Giraffe 104
Glymph, Jim 46
Goodfellow, Ian 30, 72
GPT-3 30
grammar 189, 194
Graph Neural Network (GNN) 71
Graphics Processing Unit (GPU) 25, 116
Grasshopper 50, 51, 52, 58, 114, 185
Gropius, Walter 36, 40
Habitat 67 38
Hacking, Ian 147
Hadid, Zaha 50
Hanratty, Patrick 42, 44
hardware 16, 25, 42
Harvard 8, 146
Hassabis, Demis 178
Haukeland, Havard 163
Houser Brattain, Walter 16
Hydra 138
hyperparameter 67
IBM research 24
ImageNet 26
InFraReD 104, 184, 185, 186
interface 43, 44, 51, 52, 89, 96, 130, 131, 132
Internet 25, 192
interpolation 116, 160, 178
knowledge base 21
Koh, Immanuel 110
Kvale, Anders 163
Ladybug 182
language 30, 52, 147, 154, 156, 160, 171, 189, 190, 192
latent space 74, 75, 76, 98, 114
Le Corbusier 38
Lenat, Douglas 22
Lequeu, Jean-Jacques 150
Lighthill, James 20
Lincoln Laboratory 42
linguistics 30, 171, 189, 196
Lundh, Torbjörn 149
Machine Learning (ML) 24, 58, 65, 68, 71, 122, 124, 129, 131, 136, 137, 138, 140, 144, 145, 158, 182
Massachusetts Institute of Technology (MIT) 8, 42, 44, 56, 98, 101, 126
McCarthy, John 17, 22
McCulloch, Warren 16, 69
McDermott, John P. 22
McLaughlin, Robert W. 37
Media Lab 56
Minsky, Marvin 17, 20
modularity 36, 38, 40
Modulor 38
Mondrian, Piet 112
Monge, Gaspard 154
Moretti, Luigi 48
Muthesius, Hermann 173
MYCIN 21
Natural Language Processing (NLP) 17
Negroponte, Nicholas 44, 56, 57, 58, 107
neoplasticism 113
Neural Turtle Graphics (NTG) 84
neuroplasticity 113
Nvidia 30
Object-Oriented Programming (OOP) 192, 193
OpenAI 30
OpenFOAM 187
optimization 27, 100, 122, 128, 129, 132, 187
OWL (Web Ontology Language) 193
Papert, Seymour 20
parameter 48, 49, 54, 56, 65, 67, 69, 70, 71, 130, 190
Parametricism 48, 50, 190
pattern 18, 27, 40, 68, 82, 86, 100, 121, 122, 161, 177, 193, 200
Perceptron 16, 20, 156
performance 26, 27, 30, 71, 98, 102, 104, 109, 120, 126, 127, 128, 131, 133, 138, 174, 176, 186
Pitts, Walter 16, 69
Pix2Pix 90, 96, 122
platform 50, 104, 129, 163, 166, 167, 168, 185, 186
Plugin City 38
polysemy 196
Price, Cedric 57, 86, 107
procedure 17, 48, 50, 190, 192
Pro/ENGINEER 49
program 18, 21, 22, 42, 49, 50, 72, 80, 89, 122, 194
programming language 52, 190
PRONTO 42
PTC 49
R1 22
Radiance 187
Rapoport, Anatol 156
reinforced learning 68
Revit 48, 52, 58
Rhino 48
robotics 18, 68
Rondelet, Jean-Baptiste 150
Rosenblatt, Frank 13, 16, 156
rule 21, 36, 40, 46, 48, 49, 52, 56, 80, 150, 157, 158, 189, 196, 200
Russell, Bertrand 190
Safdie, Moshe 38, 40
Schumacher, Patrik 50, 190
Schwarz, Jacob T. 22
Selfridge, Oliver 17
semantic 52, 96, 116, 143, 189, 193
Semanticism 188, 189, 194, 196
shape grammar 190, 192, 194
ShapeNet 114
Simon, Herbert 18
Sketch2Pix 124
SketchPad 42, 43, 49, 52
software 42, 43, 44, 48, 51, 56, 119, 125, 128, 130
Solomonoff, Ray 17
Spacemaker 8, 58, 104, 162, 163, 166
Stadium N 48
Stanford University 21, 24
Stanley 24
Steinfeld, Kyle 118
Stiny, George 190
structural design 98, 101
style 50, 77, 84, 90, 93, 112, 149, 152, 171, 173, 174, 178, 197
StyleGAN 30, 122
supervised learning 68, 129
surrogate model 102, 129, 130
Sutherland, Ivan 42, 49
technology 7, 13, 30, 33, 73, 81, 82, 86, 109, 120, 135, 163, 189, 193, 197, 199
thermal comfort 102
training 13, 25, 65, 68, 69, 74, 77, 78, 80, 116, 165, 196
transistor 16
typology 89, 90, 92, 98, 116
UC Berkeley 124
Unimate 18
UNISURF 43
Unité d’Habitation 38
unsupervised learning 68
Urban 2 44
Urban 5 44, 56, 57
Urban Fiction 82
van Doesburg, Theo 112, 113
van Fraassen, Bas C. 149
Variational Autoencoder (VAE) 71, 73, 74, 98
Vectorworks 46
visual programming 50, 52
Volkswagen Electronics Research Lab 24
Walt Disney Concert Hall 46
web app 58, 89
Weizenbaum, Joseph 18
wind flow 102, 104
Winslow Ames House 37
Witt, Andrew 67, 146
XKool 58
Zaha Hadid Architects 50
