Reinforcement Learning For Architectural Design-Build
CHIEN-HUA HUANG
Institute of Architecture, University of Applied Arts Vienna
[email protected]
1. Introduction
Machine learning (ML) has been rapidly advancing and becoming accessible for
design practice. Plugin packages such as RunwayML for web-based platforms,
the ML-Agents toolkit for Unity, and Owl for Grasshopper allow designers to
utilize this computational power for computational design and modeling. Examples
of artificial intelligence (AI) and ML applications in the architectural field have
recently emerged. ArchiGAN, for instance, is a generative tool based on
generative adversarial networks (GANs) (Chaillou, 2020).
Figure 3. RealityCapture screenshot showing the photo samples and the digital model of one
plastic fragment built via photogrammetry.
Figure 4. Diagram illustrating the types of data analyzed and their implementation in the
following processes.
within a reasonable range. If the boundary is notably larger than the sum of all
the fragments' projection areas or volumes, the efficiency of RL diminishes
markedly. The boundary therefore resembles a site perimeter and regulates the
overall form, which helps control the design and avoids unnecessary learning time.
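As a rough illustration of this boundary check, the sketch below compares the boundary volume against the summed fragment volumes before training is launched; the function names, units, and ratio threshold are assumptions for illustration, not values taken from the project.

```python
# Minimal sketch of a pre-training boundary check (hypothetical names and threshold).
# If the boundary volume greatly exceeds the combined fragment volume, the RL search
# space becomes too sparse and training time grows, so the run is flagged beforehand.

def boundary_is_reasonable(fragment_volumes, boundary_volume, max_ratio=3.0):
    """Return True if the boundary is within `max_ratio` times the summed fragment volume."""
    total_fragments = sum(fragment_volumes)
    if total_fragments <= 0:
        raise ValueError("Fragment volumes must sum to a positive value.")
    return boundary_volume / total_fragments <= max_ratio


if __name__ == "__main__":
    # Example: volumes in cubic metres, e.g. measured from the photogrammetry models.
    fragments = [0.012, 0.009, 0.015, 0.011]
    print(boundary_is_reasonable(fragments, boundary_volume=0.10))  # True
    print(boundary_is_reasonable(fragments, boundary_volume=0.50))  # False -> shrink boundary
```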
Figure 5. Diagram illustrating the processes, control, observation, and reward setup of the
game for ML-Agents to learn assembly.
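As a rough illustration of how such a control, observation, and reward loop is driven from the Python side, the sketch below uses the low-level mlagents_envs API with a random placeholder policy; the executable name, episode handling, and action layout are assumptions for illustration, not the project's actual training configuration (which would typically use the mlagents-learn trainers).

```python
# Minimal sketch of driving an ML-Agents assembly "game" from Python.
# Assumes the low-level mlagents_envs API (ML-Agents Release 10 or later); the
# executable name and the random placeholder policy are illustrative only.
import numpy as np
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.base_env import ActionTuple

env = UnityEnvironment(file_name="AssemblyGame")  # hypothetical built Unity environment
env.reset()
behavior_name = list(env.behavior_specs)[0]       # agent behavior registered in Unity
spec = env.behavior_specs[behavior_name]

for episode in range(3):
    env.reset()
    episode_reward, done = 0.0, False
    while not done:
        decision_steps, terminal_steps = env.get_steps(behavior_name)
        # Rewards are computed inside the Unity environment and returned per step.
        episode_reward += decision_steps.reward.sum() + terminal_steps.reward.sum()
        if len(terminal_steps) > 0:               # assembly episode has ended
            done = True
            continue
        # Placeholder policy: random continuous actions (e.g. fragment placement moves);
        # observations in decision_steps.obs would normally feed a trained policy.
        actions = ActionTuple(continuous=np.random.uniform(
            -1.0, 1.0, size=(len(decision_steps), spec.action_spec.continuous_size)))
        env.set_actions(behavior_name, actions)
        env.step()
    print(f"Episode {episode}: cumulative reward {episode_reward:.2f}")

env.close()
```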
Figure 7. Diagram showing the maps and axonometric models of Unity observing the lighting
variety (top left), floor area (top middle), symmetry (top right), structural stability (bottom
left), thermal dynamic variety (bottom middle), and height (bottom right).
After the assembly simulation, the models of all the finished shells, their
six scores, and the trained neural-network (NN) model are saved. One shell can
then be selected to proceed, based on the scores and the users' choices. The
multiple reward evaluations and their weights give users options to curate their
priorities according to their programs' needs (Figure 8). However, the nature of
the evaluation may produce results that differ from the users' expectations. This
is caused by the different non-linear training trajectories required to obtain better
scores for the different evaluations. For instance, structural stability is much harder
to achieve than area coverage, as the former requires more data observations, detections, and
explorations. In the demonstrator project, the shell with the best structural stability
was selected to proceed to the construction phase.
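As a rough illustration of how such user-weighted selection could work, the sketch below aggregates the six evaluation scores with user-defined weights and picks the highest-scoring shell; the criterion names, normalization, and weights are assumptions for illustration rather than the project's actual scoring scheme.

```python
# Minimal sketch of weighted shell selection over the six evaluation scores
# (lighting variety, floor area, symmetry, structural stability, thermal variety, height).
# Criterion names and weights are illustrative, not the project's actual values.

CRITERIA = ["lighting", "floor_area", "symmetry", "stability", "thermal", "height"]

def weighted_score(scores, weights):
    """Combine per-criterion scores (assumed normalized to 0..1) with user-set weights."""
    return sum(scores[c] * weights.get(c, 0.0) for c in CRITERIA)

def select_shell(shells, weights):
    """Return the shell whose weighted score is highest."""
    return max(shells, key=lambda shell: weighted_score(shell["scores"], weights))


if __name__ == "__main__":
    shells = [
        {"id": "shell_A", "scores": {"lighting": 0.7, "floor_area": 0.9, "symmetry": 0.4,
                                     "stability": 0.5, "thermal": 0.6, "height": 0.8}},
        {"id": "shell_B", "scores": {"lighting": 0.5, "floor_area": 0.6, "symmetry": 0.6,
                                     "stability": 0.9, "thermal": 0.5, "height": 0.7}},
    ]
    # A user prioritizing structural stability, as in the demonstrator project.
    weights = {"stability": 0.5, "floor_area": 0.2, "lighting": 0.1,
               "symmetry": 0.1, "thermal": 0.05, "height": 0.05}
    print(select_shell(shells, weights)["id"])  # shell_B
```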
Figure 9. Rhinoceros screenshot of individual instructions for a sample fragment (left and
middle) and its position in the model (right).
Figure 10. Under construction: Projection and marking (left), drilling (middle), and connecting
(right).
To further expand the realm of the circular design strategy with materials as
inputs, future research can extend the application of the workflow in three major
directions: robotic fabrication to replace manual work, the introduction of
mixed-material compositions, and an online inventory to enable efficient
mass collaboration and fabrication.
4. Conclusion
When the difficulties of fitting geometry to design are dissolved by shifting the
design cycle and applying RL-based search, significant opportunities open up
in a new realm of practice and design, especially the possible circular strategy.
The nature of the workflow reinforces designers' responsibility to take into
account the complex material reality of the environment. “Reform Standard”
demonstrates how architectural design can implement ML in a design-build cycle
and prioritize the material inputs. Despite the current limitations of computing
power and the unexplainable rationale inherent to RL, the workflow can effectively
assemble complex geometries whose massive data are beyond a human designer's
capacity to process. This workflow also unfolds various potentials for future
research, making sustainable design more industrially applicable, intelligent,
and efficient. “Reform Standard” thus illustrates that RL has the potential to give
rise to a material-informed design method and to fully sustainable building practices.
Acknowledgements
This research was based on a master's thesis titled “Reform Standard” completed
in June 2020 at Studio Greg Lynn, Institute of Architecture at the University
of Applied Arts Vienna (die Angewandte), under the supervision of Prof. Greg
Lynn and with the assistance of Bence Pap, Kaiho Yu, Maja Ozvaldic, and Martin
Murero. Special thanks are extended to Lisa-Marie Androsch, Martin Lai, Wing
Yan Joyce Lee, Yi Jiang, Yu Zhao, and Zach Beale.
References
Aksoez, Z. and Preisinger, C. 2020, An Interactive Structural Optimization of Space Frame
Structures Using Machine Learning, in C. Gengnagel, O. Baverel, J. Burry, M. Ramsgaard
Thomsen and S. Weinzierl (eds.), Impact: Design With All Senses, Springer, Cham, 18-31.
Juliani, A., Berges, V.P., Vckay, E., Gao, Y., Henry, H., Mattar, M. and Lange, D.: 2018, “Unity:
A General Platform for Intelligent Agents”. Available from <https://www.researchgate.net/publication/327570403_Unity_A_General_Platform_for_Intelligent_Agents>.
Carpo, M.: 2017, The Second Digital Turn. Design beyond intelligence, MIT Press.
Chaillou, S. 2020, ArchiGAN: Artificial Intelligence x Architecture, in P. Yuan, M. Xie, N.
Leach, J. Yao and X. Wang (eds.), Architectural Intelligence, Selected Papers from the 1st
International Conference on Computational Design and Robotic Fabrication (CDRF 2019),
Springer, Singapore, 117-127.
Kober, J., Bagnell, J.A. and Peters, J.: 2013, Reinforcement Learning in Robotics: A Survey,
The International Journal of Robotics Research, 32, 1238-1274.
McDonough, W.: 2002, Cradle To Cradle : Remaking the Way We Make Things, North Point
Press, New York.
Mousavi, S., Schukat, M. and Howley, E.: 2018, Deep Reinforcement Learning: An Overview,
Proceedings of SAI Intelligent Systems Conference (IntelliSys) 2016, 426-440.
Parascho, S., Kohlhammer, T., Coros, S., Gramazio, F. and Kohler, M.: 2018, Computational
design of robotically assembled spatial structures, Advances in Architectural
Geometry 2018, Gothenburg.
Witt, A.: 2016, Cartogrammic Metamorphologies; or, Enter the Rowebot, Log Journal of
Architecture, 36, 115-124.