
Automation in Construction 146 (2023) 104661

Contents lists available at ScienceDirect

Automation in Construction
journal homepage: www.elsevier.com/locate/autcon

Dual generative adversarial networks for automated component layout design of steel frame-brace structures

Bochao Fu a,b,1, Yuqing Gao c,1, Wei Wang a,b,∗

a State Key Laboratory of Disaster Reduction in Civil Engineering, Tongji University, Shanghai, 200092, China
b Department of Structural Engineering, Tongji University, Shanghai, 200092, China
c Department of Civil and Environmental Engineering, University of California, Berkeley, CA 94720, USA

ARTICLE INFO

Keywords: Machine learning; GAN; Steel frame-brace structure; Conditional probability; Component layout design

ABSTRACT

With the development of artificial intelligence (AI), it has gained popularity to use AI to solve problems in civil engineering. However, research on AI is mainly focused on the field of structural health monitoring and less on the field of structural design. As a new direction in the AI domain, the generative adversarial network (GAN) method has developed rapidly and is able to synthesize high-quality images based on demand. Therefore, it opens a new window for AI-aided automatic structure design. In this paper, a novel GAN-based method, namely FrameGAN, is proposed to realize automated component layout design of steel frame-brace structures. By collecting and processing drawings designed by senior structural engineers, FrameGAN and two mainstream GAN models (pix2pix and pix2pixHD) are tested and compared, which demonstrates the superiority of the proposed FrameGAN. In addition, the design results of FrameGAN are compared and analyzed with those of senior structural engineers based on two unique evaluation metrics, i.e., expert grading and objective comparison. The results show that the design of FrameGAN is close to that of structural engineers, which indicates the availability of FrameGAN in the component layout design of steel frame-brace structures.

1. Introduction

As the first and most significant step in structural design, the layout design of structural components determines the safety and economic performance of the structure; hence it requires structural engineers to possess rich design experience and to coordinate with architectural designers, which consumes significant human effort and time. Simultaneously, with the development of the economy and the improvement of living standards, prefabricated buildings, with their characteristics of fast construction speed and low energy consumption, are favored by national governments and construction engineers [1–6]. As an important member of the prefabricated building family, the prefabricated steel structure, owing to the advantages of light weight, high strength, and good seismic and resilient performance, has been widely used in industrial and residential buildings [7,8]. However, due to the complexity and variety of components in prefabricated steel structure buildings, the layout of components needs to be confirmed repeatedly in the design stage, which is time-consuming.

Since the advent of machine learning (ML) and deep learning (DL) technologies and the development of high-performance hardware, researchers in many fields, including civil engineering, have carried out interdisciplinary research, trying to use computers instead of manpower for repetitive and tedious work. As early as the beginning of the 21st century, some researchers attempted to adopt neural networks in the design of concrete slabs [9]. With the development of ML/DL technologies, remarkable progress has been achieved in building structure selection [10], component design [11–14], component layout design [15] and topology optimization [16]. However, only relatively simple and fundamental designs have been realized so far, e.g., the design of RC beams, retaining walls, steel reinforcement, RC walls, etc., and more complex structural designs have not yet benefited from these AI-based technologies [17,18]. Furthermore, research on AI-aided steel frame-brace structure design is inadequate and worthy of exploration.

To pursue more complex structural designs, a more effective method is needed, and the generative adversarial network (GAN) can be an optimal choice. A GAN is composed of two networks, namely a generator and a discriminator [19]. The generator captures the data distribution and generates samples, while the discriminator distinguishes generated samples from real samples. The two networks compete with each other until they converge to a Nash equilibrium or a designated iteration.

∗ Corresponding author at: State Key Laboratory of Disaster Reduction in Civil Engineering, Tongji University, Shanghai, 200092, China.
E-mail address: [email protected] (W. Wang).
1 These authors contributed equally to this work and are considered co-first authors.

https://doi.org/10.1016/j.autcon.2022.104661
Received 23 June 2022; Received in revised form 19 October 2022; Accepted 3 November 2022
Available online 25 November 2022
0926-5805/© 2022 Elsevier B.V. All rights reserved.

Since GAN was proposed, it has gained popularity worldwide and derived a large number of improved GANs, which has proved its potential in the fields of image generation [20], image-to-image translation [21] and image style transformation [22].

In the civil/structural engineering area, GANs have been successfully introduced to the field of structural health monitoring (SHM), e.g., structural image data augmentation [23] and damage assessment [24]. In addition, recent studies indicated that GANs can realize the automated design of shear walls, beams and slabs in the plane, which proves their feasibility in structural design [25–27].

However, compared to the shear wall layout design, the application of the GAN to steel frame-brace structures still presents several challenges to be solved: (a) it is difficult to collect adequate valid structural drawings, (b) the existing AI-based methodologies of shear wall design may not be suitable for steel frame-brace structure component layout design, and (c) appropriate evaluation metrics need to be proposed to evaluate the drawings designed by the GAN.

Therefore, in this paper, a novel GAN-based method, namely FrameGAN, is proposed for steel frame-brace structures to realize the automated component layout design. The superiority of FrameGAN is proved by comparison with other existing algorithms, and its availability is further verified by comparison with structural engineers. The main contributions are listed as follows:

• A dual GAN model with a two-stage strategy is proposed for the layout design of steel frame-brace structures, where two GANs determine the layout of the columns and braces consecutively.
• A dataset of steel frame-brace structures is established by collecting and processing drawings from practical engineering.
• Two unique evaluation metrics, namely experts grading and objective comparison, are introduced to evaluate GAN performance in the layout design.

2. Generative adversarial network (GAN)

2.1. Basic of GAN

GAN is a framework for evaluating the quality of generative models in ML. The generative model is a kind of neural network that learns training data and generates a distribution close to the training data. The innovation of GAN lies in the utilization of a second neural network, i.e., the discriminator, which evaluates and constrains the generative model, i.e., the generator, prompting the generator to generate near-real data in continuous iterations. Ideally, the generator and discriminator will eventually converge to the Nash equilibrium point [28], where no matter how the discriminator updates, it cannot distinguish whether the input data are real or generated by the generator, and the prediction confidence (probability) is 0.5, which is the random guess for a binary classification problem.

The training process of the GAN is equivalent to solving a maximum likelihood estimation. Suppose the real data have a distribution p_data in a d-dimensional space, and x is a sample taken from it. The generator G can be expressed as a differentiable function with parameter θ_g, whose task is to map a noise vector z to the real data space and synthesize samples G(z; θ_g). The discriminator D is a differentiable function with parameter θ_d, and its output D(x; θ_d) is a scalar representing the probability that x comes from p_data. During the training, the discriminator D maximizes the probability of correct judgment by updating the parameter θ_d, while the generator G updates the parameter θ_g to minimize log(1 − D(G(z))), thereby fooling the discriminator D by making the output of synthesized samples close to that of real samples. The training process of GAN can be expressed as a function V(G, D):

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]    (1)

2.2. pix2pix & pix2pixHD

Image-to-image translation is one of the applications of GAN, and it is mainly based on the conditional generative adversarial network (cGAN) [29]. It can realize the mutual conversion between two kinds of images with different types or styles by establishing the mapping relationship between input and output. Pix2pix [21] and pix2pixHD [30] are the two most widely used algorithms in the field of image-to-image translation, and they can be applied to structural drawing design by establishing the mapping relationship between the architectural drawing and the structural drawing.

2.2.1. pix2pix

Pix2pix establishes a mapping between two images by training with paired images. Pix2pix can achieve satisfactory results in semantic segmentation, image style transformation, etc. However, it only works well at low resolution, e.g., under 256 × 256. For high-resolution images, it has the issues of missing details and blurred edges.

Pix2pix mainly consists of two parts: a U-net generator and a PatchGAN discriminator. The U-net generator realizes the transfer of low-level information across the network by connecting the 𝑖th layer with the (𝑛 − 𝑖)th layer. The PatchGAN discriminator uses an (𝑛 × 𝑛) matrix to judge whether the image is real or synthesized by the generator. Compared with the original GAN, PatchGAN can focus on more areas in the entire image, which increases the obtained information. In addition, pix2pix introduces an 𝐿1 loss to the original loss function Eq. (1), which is used to calculate the 𝐿1 distance between the synthesized image and the real image, forcing the image synthesized by the generator to be more similar to the real image.

2.2.2. pix2pixHD

On the basis of pix2pix, pix2pixHD improves the resolution of synthesized images, enables editing of the objects in images, and can generate various images with the same input.

The generator of pix2pixHD consists of two parts: the global generator network (𝐺1) and the local enhancement network (𝐺2). 𝐺1 is mainly used for the generation of low-resolution pictures; due to the lower resolution, it is easier to grasp the global information of images. 𝐺2 collects the global information obtained by 𝐺1 and generates high-resolution images through downsampling and upsampling. Meanwhile, pix2pixHD has a multi-scale discriminator, which discriminates on three different scales of images, including the original image and the 1/2 and 1/4 downsamplings of the original image, and averages the results. The discriminator operating at the coarsest scale can grasp the overall information of images, while the discriminator at the finest scale discriminates the details of images. In order to obtain diversified outputs, pix2pixHD adds an encoding network 𝐸 to the original network, which can generate different results given the same input. Additionally, pix2pixHD improves the original GAN loss by introducing a feature matching loss, which reduces the differences between the synthesized image and the real image at different scales by calculating the 𝐿1 distance between the discriminator outputs of the synthesized image and the real image at each scale.
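To make the adversarial objective in Eq. (1) concrete, the following is a minimal PyTorch sketch of one alternating generator/discriminator update. The toy fully connected networks, dimensions and optimizer settings are illustrative placeholders only, not the configuration used in this paper.

```python
# Minimal sketch of the alternating optimization in Eq. (1); toy networks for illustration only.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 128
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()  # realizes the log D(x) and log(1 - D(G(z))) terms

def train_step(x_real):
    n = x_real.size(0)
    z = torch.randn(n, latent_dim)

    # Discriminator update: maximize log D(x) + log(1 - D(G(z)))
    opt_d.zero_grad()
    d_loss = bce(D(x_real), torch.ones(n, 1)) + bce(D(G(z).detach()), torch.zeros(n, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: the common non-saturating variant of "minimize log(1 - D(G(z)))"
    opt_g.zero_grad()
    g_loss = bce(D(G(z)), torch.ones(n, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```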


3. FrameGAN

3.1. Methodology

As mentioned above, the real data can be regarded as having a complex distribution in a high-dimensional space, and it may be difficult to converge under the condition of limited datasets when using a GAN directly. In the leaf-bootstrapping GAN [23], structural images are clustered into several subcategories, a different GAN is trained for each subcategory, and finally the synthesized images are mixed and reassembled to represent the target structural image datasets. LB-GAN avoids letting a GAN learn a complex distribution directly, which provides a new idea for this paper.

Fig. 1. Schematic diagram of mixture distribution of different layouts in abstract dimensional space.

Statistically, the conditional probability of 𝑛 random variables is expressed as:

p(A_1, A_2, \ldots, A_n) = p(A_1)\,p(A_2 \mid A_1) \cdots p(A_n \mid A_1, A_2, \ldots, A_{n-1})    (2)

In other words, a probability distribution can be decomposed into multiple sub-distributions. Therefore, during GAN training, if the real data distribution can be decoupled into multiple low-dimensional and less complex sub-distributions, it can effectively reduce the learning difficulty of the GAN and make the final results closer to the real data distribution.

When structural engineers design the component layout of steel frame-brace structures, the layout of columns is usually designed first, and then the layout of braces is designed. Suppose the joint distribution is p(A_col, A_br); considering the above-mentioned two-stage process, it can be decoupled into two subsequent sub-distributions, i.e., the distribution of the column-only layout, p(A_col), and the distribution of the brace layout on condition that the layout of columns is completed, p(A_br | A_col), as shown in Eq. (3):

p(A_{col}, A_{br}) = p(A_{col})\,p(A_{br} \mid A_{col})    (3)

Taking the logarithm of Eq. (3), the distribution p(A_col, A_br) is divided into the sum of two sub-distributions, as shown in Eq. (4), which can be regarded as further decoupling the design procedure into two independent parts; the form of the logarithmic probability also shows the link to the GAN loss function.

\log p(A_{col}, A_{br}) = \log p(A_{col}) + \log p(A_{br} \mid A_{col})    (4)

The schematic diagram of the three layouts in abstract dimensional space is illustrated in Fig. 1. Compared to learning the original p(A_col, A_br) directly, it is much easier to learn its sub-distributions and then integrate them.

Therefore, to realize the complex steel frame-brace structure design process through a GAN-based method, this paper proposes a novel dual GAN model, namely FrameGAN (Fig. 2), which consists of two separately trained GANs corresponding to the two stages. It decouples the design process into two stages, which represent the prior column layout design and the subsequent brace layout design, respectively, as shown in Fig. 3. Based on the above theoretical distribution decomposition, it is expected that FrameGAN will converge to the real distribution more easily and obtain more authentic results.

3.2. Network configuration

FrameGAN consists of two GAN models in two stages, whose backbone structures are based on pix2pix [21] and pix2pixHD [30]. Both GANs have a generator and a discriminator with the same configuration, where 𝐺1 and 𝐷1 determine the layout of the columns based on the architectural drawing, and 𝐺2 and 𝐷2 determine the layout of the braces.

The generators 𝐺1 and 𝐺2 are both composed of three parts: downsampling layers, residual blocks and upsampling layers, as shown in Fig. 4 and Table 1. Downsampling layers facilitate the extraction and processing of high-dimensional features in the image by increasing channels and receptive fields, residual blocks make the whole network robust and reduce the occurrence of vanishing gradients by connecting the input and output, and upsampling layers gradually restore the processed data to the size of the input image. The discriminators 𝐷1 and 𝐷2 both consist of two identical PatchGAN discriminators (Table 2). Each discriminator takes an image of a different scale as input, hence the overall information can be retained while ensuring that the details are not distorted. The configuration of the discriminator is shown in Fig. 5.

3.3. Objective function and optimization

Suppose the training dataset is {(x_n, y_n)}, where x_n is the input image and y_n is the ideal output image, i.e., the real image paired with the input image. The objective function can be expressed as:

\min_G \max_D L_{GAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x}[\log(1 - D(x, G(x)))]    (5)

A multi-scale discriminator is adopted in this paper, which consists of two discriminators. Based on the objective function adopted in pix2pixHD [30], the total loss is the sum over these two discriminators; hence the objective function of the 𝑖th GAN in FrameGAN is expressed in Eq. (6), where G_i, D_{i,1} and D_{i,2} are the generator and the discriminators of the 𝑖th GAN, respectively. In this study, i ∈ {1, 2}, denoting the two stages of FrameGAN.

\min_{G_i} \max_{D_{i,1}, D_{i,2}} \sum_{j=1,2} L_{GAN}(G_i, D_{i,j})    (6)

In addition, in order to keep the details of high-resolution images, a feature matching loss L_{FM} is introduced, which calculates the L1 distance (‖⋅‖_1) between the synthesized image and the real image on each layer of the discriminator. Suppose that the discriminator has T layers and the number of elements in the 𝑘th layer is N_k; then L_{FM} is expressed as:

L_{FM}(G_i, D_{i,j}) = \mathbb{E}_{x,y} \sum_{k=1}^{T} \frac{1}{N_k} \left\| D_{i,j}^{(k)}(x, y) - D_{i,j}^{(k)}(x, G_i(x)) \right\|_1    (7)

Therefore, the final objective function is obtained:

\min_{G_i} \max_{D_{i,1}, D_{i,2}} \left( \sum_{j=1,2} L_{GAN}(G_i, D_{i,j}) + \lambda \sum_{j=1,2} L_{FM}(G_i, D_{i,j}) \right)    (8)

where λ is a hyper-parameter and is set to 10 based on a preliminary study.

To make the entire network less sensitive to the learning rate while ensuring high computational efficiency, the Adam solver [31] is used in gradient computation and parameter updating.

3.4. Training procedure

To verify the effectiveness of FrameGAN in structural layout design, three kinds of images are utilized in the training procedure of FrameGAN, namely (1) architectural drawing, (2) column-only structural drawing and (3) complete structural drawing, and the specific definitions are listed as follows:

1. Architectural drawing: comprises only three basic components, i.e., walls, doors and windows.
2. Column-only structural drawing: comprises columns in addition.
3. Complete structural drawing: comprises all key components.
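As an illustration of how the stage-wise objective in Eqs. (6)–(8) can be assembled, the sketch below sums a conditional GAN loss over the two discriminator scales and adds the 𝐿1 feature matching term with λ = 10. The function name, the assumption that each discriminator returns its per-layer feature maps, and the use of a binary cross-entropy form of L_GAN are simplifications chosen for illustration, not the authors' implementation.

```python
# Illustrative sketch of the stage-i loss in Eqs. (6)-(8); names and interfaces are assumptions.
import torch
import torch.nn.functional as F

def stage_loss(G_i, D_list, x, y, lam=10.0):
    """x: input drawing batch, y: paired real target batch, D_list: the two discriminators.
    Each discriminator is assumed to return a list of per-layer feature maps,
    the last entry being its patch prediction map."""
    y_fake = G_i(x)
    gan_g, gan_d, fm = 0.0, 0.0, 0.0
    for D in D_list:
        feats_real = D(torch.cat([x, y], dim=1))       # condition and image stacked on channels
        feats_fake = D(torch.cat([x, y_fake], dim=1))
        pred_real, pred_fake = feats_real[-1], feats_fake[-1]

        # L_GAN terms of Eqs. (5)-(6), summed over the two scales
        gan_d = gan_d + F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real)) \
                      + F.binary_cross_entropy_with_logits(pred_fake.detach(), torch.zeros_like(pred_fake))
        gan_g = gan_g + F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))

        # Feature matching loss of Eq. (7): per-layer L1 distance between real and synthesized features
        for f_r, f_f in zip(feats_real[:-1], feats_fake[:-1]):
            fm = fm + F.l1_loss(f_f, f_r.detach())

    loss_G = gan_g + lam * fm   # generator side of Eq. (8)
    loss_D = gan_d              # discriminator side of Eq. (8)
    return loss_G, loss_D
```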


Fig. 2. Overview of the FrameGAN.

Fig. 3. Design process.

Fig. 4. Generator of FrameGAN.

The training procedure is mainly divided into two stages: (1) train the first GAN with the dataset 𝐷𝑆1, constituted by all architectural drawings and the corresponding column-only structural drawings, to learn the column layout, and (2) train the second GAN with the dataset 𝐷𝑆2, constituted by the synthetic column-only structural drawings generated by the first GAN and the corresponding complete structural drawings, to learn the brace layout.

Due to possible computational limitations in practice, the training procedure of FrameGAN is performed on data batches. Suppose the batch size is 𝑁; each batch contains 𝑁 pairs of input images, i.e., {(𝑋𝑖, 𝑋𝑖+1) | 𝑖 ∈ {1, 2}}, where 𝑋𝑖 = {𝑥𝑖,1, …, 𝑥𝑖,𝑁} and 𝑋𝑖+1 = {𝑥(𝑖+1),1, …, 𝑥(𝑖+1),𝑁}. Each image 𝑥 has a shape of (𝐶, 𝑊, 𝐻), where 𝐶, 𝑊 and 𝐻 stand for the number of color channels, the width and the height of the input image. In this study, 𝑋1, 𝑋2, 𝑋3 and 𝑋2′ specifically represent architectural drawings, column-only structural drawings, complete structural drawings, and GAN-generated synthetic column-only drawings, respectively. The number of layers of 𝐷𝑖 is 𝑇𝑖, and the initial learning rate is 𝜂. The detailed training procedure of FrameGAN is as follows:


Table 1
Generator of FrameGAN (𝑁 denotes the batch size).
Layer Filter(size, stride, padding) Activation Shape Note
Input – – (N,3,1024,1024) Input the drawings
ReflectionPad2d – – (N,3,1030,1030) –
Conv2d (64 × 7 × 7, 1, 0) – (N,64,1024,1024) –
Batch Norm – ReLU (N,64,1024,1024) –
Conv2d (128 × 3 × 3, 2, 1) – (N,128,512,512) Downsampling
Batch Norm – ReLU (N,128,512,512) –
Conv2d (256 × 3 × 3, 2, 1) – (N,256,256,256) Downsampling
Batch Norm – ReLU (N,256,256,256) –
Conv2d (512 × 3 × 3, 2, 1) – (N,512,128,128) Downsampling
Batch Norm – ReLU (N,512,128,128) –
ResNet block × 9 (256 × 3 × 3, 1, 1) ReLU (N,512,128,128) Add the inputs and
+ (256 × 3 × 3, 1, 1) outputs in the dimension
of the color channel
ConvTranspose2d (256 × 3 × 3, 2, 1) – (N,256,256,256) Upsampling
Batch Norm – ReLU (N,256,256,256) –
ConvTranspose2d (128 × 3 × 3, 2, 1) – (N,128,512,512) Upsampling
Batch Norm – ReLU (N,128,512,512) –
ConvTranspose2d (64 × 3 × 3, 2, 1) – (N,64,1024,1024) Upsampling
Batch Norm – ReLU (N,64,1024,1024) –
ReflectionPad2d – – (N,64,1030,1030) –
Conv2d (3 × 7 × 7, 1, 0) Tanh (N,3,1024,1024) –
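Read as a network definition, Table 1 corresponds to a ResNet-type encoder–decoder. The following is a compact PyTorch sketch of that topology (reflection padding, three stride-2 downsampling convolutions, nine residual blocks, three transposed-convolution upsampling layers and a Tanh output). Channel widths follow the layer shapes listed in the table; the code is an illustrative reading of the table, not the released implementation.

```python
# Sketch of the Table 1 generator topology; an illustrative reading, not the authors' code.
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, 1, 1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, 1, 1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)  # add the block input and output (skip connection)

def build_generator():
    layers = [nn.ReflectionPad2d(3), nn.Conv2d(3, 64, 7), nn.BatchNorm2d(64), nn.ReLU(True)]
    # Downsampling: 64 -> 128 -> 256 -> 512 channels with stride-2 convolutions
    for cin, cout in [(64, 128), (128, 256), (256, 512)]:
        layers += [nn.Conv2d(cin, cout, 3, 2, 1), nn.BatchNorm2d(cout), nn.ReLU(True)]
    layers += [ResBlock(512) for _ in range(9)]           # 9 residual blocks
    # Upsampling back to the input resolution: 512 -> 256 -> 128 -> 64
    for cin, cout in [(512, 256), (256, 128), (128, 64)]:
        layers += [nn.ConvTranspose2d(cin, cout, 3, 2, 1, output_padding=1),
                   nn.BatchNorm2d(cout), nn.ReLU(True)]
    layers += [nn.ReflectionPad2d(3), nn.Conv2d(64, 3, 7), nn.Tanh()]
    return nn.Sequential(*layers)
```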

Table 2
PatchGAN discriminator (𝑁 denotes the batch size).
Layer Filter(size, stride, padding) Activation Shape Note
Input – – (N,6,1024,1024) Add image A(B) and image
B(C) in the dimension of the
color channel as the input
Conv2d (64 × 4 × 4, 2, 1) – (N,64,512,512) Downsampling
Batch Norm – LeakyReLU (N,64,512,512) –
Conv2d (128 × 4 × 4, 2, 1) – (N,128,256,256) Downsampling
Batch Norm – LeakyReLU (N,128,256,256) –
Conv2d (256 × 4 × 4, 2, 1) – (N,256,128,128) Downsampling
Batch Norm – LeakyReLU (N,256,128,128) –
Conv2d (512 × 4 × 4, 1, 1) – (N,512,127,127) –
Batch Norm – LeakyReLU (N,512,127,127) –
Conv2d (1 × 4 × 4, 1, 1) – (N,1,126,126) –
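Similarly, Table 2 describes a PatchGAN discriminator that takes the two drawings stacked on the color-channel axis and outputs a patch-wise real/fake prediction map. A minimal sketch under the same caveat (an illustrative reading of the table, not the released code):

```python
# Sketch of the Table 2 PatchGAN discriminator; illustrative reading of the table only.
import torch
import torch.nn as nn

def build_patchgan():
    def block(cin, cout, stride):
        return [nn.Conv2d(cin, cout, 4, stride, 1), nn.BatchNorm2d(cout), nn.LeakyReLU(0.2, True)]
    layers = block(6, 64, 2) + block(64, 128, 2) + block(128, 256, 2) + block(256, 512, 1)
    layers += [nn.Conv2d(512, 1, 4, 1, 1)]   # patch-wise real/fake logits
    return nn.Sequential(*layers)

# Usage: stack the condition drawing and the (real or synthesized) drawing on the channel axis.
# d = build_patchgan()
# logits = d(torch.cat([image_a, image_b], dim=1))   # (N, 1, 126, 126) for 1024 x 1024 inputs
```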

Fig. 5. Discriminator of FrameGAN.


Step 1: Start from 𝑖 = 1.
Step 2: Initialize the parameters of 𝐺𝑖 and 𝐷𝑖, i.e., 𝜃𝑔𝑖 and 𝜃𝑑𝑖, respectively.
Step 3: 𝑁 pairs of images, {(𝑋𝑖, 𝑋𝑖+1)}, are randomly sampled from 𝐷𝑆𝑖.
Step 4: Feed 𝑋𝑖 to 𝐺𝑖 and generate 𝑁 synthetic images, 𝑋′𝑖+1 = {𝑥′(𝑖+1),1, …, 𝑥′(𝑖+1),𝑁}.
Step 5: Concatenate 𝑋𝑖 and 𝑋𝑖+1 in the dimension of the color channel, 𝑋𝑟 = {𝑥𝑟,1, …, 𝑥𝑟,𝑁}, where 𝑥𝑟 has a shape of (2𝐶, 𝑊, 𝐻).
Step 6: Concatenate 𝑋𝑖 and 𝑋′𝑖+1 in the dimension of the color channel, 𝑋𝑓 = {𝑥𝑓,1, …, 𝑥𝑓,𝑁}, where 𝑥𝑓 has a shape of (2𝐶, 𝑊, 𝐻).
Step 7: Feed 𝑋𝑟 to 𝐷𝑖; for each 𝑥𝑟,𝑗 ∈ 𝑋𝑟, 𝑗 ∈ {1, …, 𝑁}, 𝐷𝑖 outputs a list containing the output of each layer of 𝐷𝑖, 𝐷𝑟 = [𝑑𝑟,1, …, 𝑑𝑟,𝑇𝑖].
Step 8: Feed 𝑋𝑓 to 𝐷𝑖; for each 𝑥𝑓,𝑗 ∈ 𝑋𝑓, 𝑗 ∈ {1, …, 𝑁}, 𝐷𝑖 outputs a list containing the output of each layer of 𝐷𝑖, 𝐷𝑓 = [𝑑𝑓,1, …, 𝑑𝑓,𝑇𝑖].
Step 9: Compute the loss of 𝐺𝑖 and 𝐷𝑖, expressed in Eq. (8).
Step 10: Optimize and update the network parameters 𝜃𝑔𝑖 and 𝜃𝑑𝑖, respectively:

\theta_{g_i} \leftarrow \theta_{g_i} - \eta \nabla_{\theta_{g_i}} L(g_i) \quad \text{and} \quad \theta_{d_i} \leftarrow \theta_{d_i} - \eta \nabla_{\theta_{d_i}} L(d_i)    (9)

Step 11: Update the learning rate, where 𝑚 is the current iteration number, 𝑚1 is the number of iterations using the initial learning rate, and 𝑚2 is the number of iterations over which the learning rate is updated:

\text{if } m > m_1, \quad \eta \leftarrow \eta - \frac{1}{m_2}\,\eta    (10)

Step 12: Repeat Steps (3)–(11) until the designated number of iterations is reached.
Step 13: Feed all 𝑋𝑖 to 𝐺𝑖 to generate 𝑋′𝑖+1, and let all 𝑋′𝑖+1 and 𝑋𝑖+2 constitute 𝐷𝑆𝑖+1.
Step 14: Update 𝑖 = 𝑖 + 1, and repeat Steps (2)–(12).

In many cases, it is difficult for a GAN to converge to the optimal solution. Therefore, after the training procedure is completed (reaching the designated iterations), the model parameters corresponding to the minimum generator loss and the maximum discriminator loss in the loss curves under different hyperparameters are selected as the final result. Meanwhile, the intelligent design of steel frame-brace structures is essentially a process of image generation, and the quality of the generated images can better evaluate the quality of the models during training. Therefore, in this study, several generators saved during training with low loss are selected, the images generated by them are inspected, and the one with the best visual quality is selected as the final model.

3.5. Evaluation metrics

For images synthesized by GAN, choosing appropriate evaluation metrics is significant. In the field of image-to-image translation, human evaluation is commonly used, which takes the human judgment of the generation quality as the evaluation metric, i.e., whether the images look realistic. However, it is subjective and may possess large errors, leading to low reliability. In addition, due to the physics principles and mechanics embodied in structural drawings, the evaluation should not be limited to the generated image quality; it also requires specific professional domain knowledge and should remain objective towards the results. Therefore, two unique evaluation metrics, namely expert grading and objective comparison, are introduced in this study. The former takes expert grading by professional designers or engineers in the form of questionnaires as the evaluation metric, and the latter compares material cost and mechanical properties between practical structural drawings and synthesized structural drawings to make the evaluation results more objective.

3.5.1. Expert grading

The questionnaire for grading is mainly divided into two parts, as shown in Fig. 6. The first question is scored for the integrity of components in structural drawings (scored from 5 down to 1 for options A to E; the higher the score, the better the integrity), and the second question is scored for the rationality of the overall layouts. In order to compare the differences between drawings designed by FrameGAN and by structural engineers, the questionnaire contains an equal number of practical drawings and synthesized drawings, and finally the difference between the average scores of these two groups is calculated.

3.5.2. Objective comparison

Due to the objective physical significance of the structural drawing, the material cost and mechanical properties are compared according to the content of the structural drawing.

Above all, an automatic column and brace position information extraction method is required to promote design effectiveness and efficiency. In the field of image processing, there are many ways to extract the location of a sub-image patch in an entire image, where the sub-image patch is one part of the entire image. One straightforward method is to slide an image patch, namely the template image, over the entire image, namely the original image, and then calculate the correlation between the template image and the corresponding part of the original image; the position of the template image in the original image can then be found according to the correlation. This methodology is therefore suitable for extracting the position information of columns and braces automatically, and it has been implemented in the open-sourced Python library aircv [32]. According to the preliminary study, using the gray-scale image to extract column position information, using the RGB image to extract brace position information, and setting the threshold to 0.9 can achieve satisfying results. Feeding the original images (the drawings designed by structural engineers and FrameGAN) and the template images (sub-image patches containing a column or a brace) into aircv, the coordinates of columns and braces are calculated automatically according to the proportion between the pixel image and the actual size (1:60 mm), as shown in Fig. 7. The design results of structural engineers and FrameGAN are then processed manually according to the content of the real structural drawings, to extract the information on the dimensions and material properties of the components in the drawings.

For the material cost, the total lengths of columns and braces are counted, and the difference between 𝑁 drawings designed by FrameGAN and by structural engineers is then calculated. Suppose n_eng and n_GAN are the numbers of columns in the drawings designed by the structural engineer and FrameGAN, L_eng,i and L_GAN,i are the distances between the two columns connected by the 𝑖th brace, and H_j is the height of the 𝑗th floor among 𝑀 floors. The difference in the material consumption between FrameGAN and structural engineers is then computed according to Eqs. (11) & (12). Eq. (11) calculates the number of columns multiplied by the floor height as the material cost of the columns, and Eq. (12) calculates the actual length of the braces by the Pythagorean theorem. Besides, both Eqs. (11) & (12) are divided by the material cost of the structural engineers to avoid large values.

d_{col} = \frac{\sum_{k=1}^{N} \sum_{j=1}^{M} \left( \left| n_{eng} - n_{GAN} \right| \cdot H_j \right)}{\sum_{k=1}^{N} \sum_{j=1}^{M} \left( n_{eng} \cdot H_j \right)}    (11)

d_{br} = \frac{\sum_{k=1}^{N} \sum_{j=1}^{M} \left| \sum_{i=1}^{n} \sqrt{L_{eng,i}^2 + H_j^2} - \sum_{i=1}^{m} \sqrt{L_{GAN,i}^2 + H_j^2} \right|}{\sum_{k=1}^{N} \sum_{j=1}^{M} \left( \sum_{i=1}^{n} \sqrt{L_{eng,i}^2 + H_j^2} \right)}    (12)

In addition, finite element method (FEM) models based on practical drawings and synthesized drawings are established respectively.
Fig. 6. Questionnaire.

Fig. 7. Position information extraction process.
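As a hedged illustration of the extraction step summarized in Fig. 7: aircv [32] is built on OpenCV-style template matching, so an equivalent sketch can be written directly with cv2.matchTemplate. The file names, the 0.9 threshold and the 1 pixel : 60 mm scale follow the description above; the function name, data layout and everything else are assumptions.

```python
# Sketch of template-matching position extraction (equivalent to the aircv-based step); illustrative only.
import cv2
import numpy as np

def find_components(drawing_path, template_path, threshold=0.9, mm_per_pixel=60.0, grayscale=True):
    flag = cv2.IMREAD_GRAYSCALE if grayscale else cv2.IMREAD_COLOR
    drawing = cv2.imread(drawing_path, flag)
    template = cv2.imread(template_path, flag)
    h, w = template.shape[:2]

    # Normalized cross-correlation between the sliding template and the drawing
    score = cv2.matchTemplate(drawing, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(score >= threshold)

    # Convert matched patch centers to physical coordinates (assumed 1 pixel = 60 mm);
    # in practice, near-duplicate hits around each component would be merged.
    return [((x + w / 2) * mm_per_pixel, (y + h / 2) * mm_per_pixel) for y, x in zip(ys, xs)]

# columns = find_components("frame_drawing.png", "column_patch.png", grayscale=True)
# braces  = find_components("frame_drawing.png", "brace_patch.png", grayscale=False)
```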

The dimensions and material properties of the components in the FEM model are kept the same for both the practical design and the GAN design. For high-rise buildings, the story drift ratio is generally adopted to judge the safety of structures. Therefore, the story drift ratios of each layer from the analysis results of the finite element models are selected to compare the difference in mechanical properties between the drawings designed by FrameGAN and by structural engineers.

4. Experiment preparation

In order to validate the performance of FrameGAN in component layout design of steel frame-brace structures, a dataset is established first, and then its performance is compared with pix2pix and pix2pixHD, which are two mainstream DL networks.

4.1. Dataset

To ensure the authenticity of the dataset, steel frame-brace drawings of 110 engineering projects are collected from design institutes, e.g., Tongji Design Institute; these were designed by experienced engineers and have been applied in practical engineering.

The collected drawings are first semanticized to facilitate computer recognition and processing, and the key components in the drawings, including walls, doors, windows, columns and braces, are marked and filled with different colors. As shown in Fig. 8, gray, blue, red, and yellow denote wall, window and door, column, and brace, respectively. Since this paper mainly studies the layout of the components, the dimensions of the components in the drawings are unified to the same thickness.

Although abundant drawings are collected, they are still far from enough for GAN training. Therefore, data augmentation techniques are applied. In order not to violate physics, only rotation and mirroring operations are conducted on the raw drawings. The resolution of the drawings is adjusted to 1024 × 1024 before mirroring and rotation to ensure that the drawings possess high resolution and that their shape remains square after processing. Finally, 650 samples are obtained; 585 samples are randomly selected as the training data, and the remaining 65 samples are used as the testing data.

Besides, based on the demands of practical engineering projects, the drawings we collected are low-rise and high-rise buildings from 3 to 15 floors, whose story height is 2.9 m or 3.0 m. They have the same seismic intensity (7-degree), and the characteristic period is 0.35 s or 0.40 s.
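A small sketch of the augmentation scheme described above (only mirroring and 90-degree rotations, applied after resizing to 1024 × 1024). The path is a placeholder, and in practice the same transform would be applied consistently to each architectural/structural drawing pair; this is an assumption-laden illustration, not the authors' preprocessing script.

```python
# Sketch of the physics-preserving rotation/mirroring augmentation; illustrative only.
from PIL import Image

def augment(drawing_path):
    img = Image.open(drawing_path).resize((1024, 1024))
    variants = []
    for k in range(4):                          # 0, 90, 180, 270 degree rotations
        rotated = img.rotate(90 * k, expand=False)
        variants.append(rotated)
        variants.append(rotated.transpose(Image.FLIP_LEFT_RIGHT))  # mirrored copy
    # Up to 8 geometric variants per raw drawing (duplicates possible for symmetric plans)
    return variants
```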


Fig. 8. Three types of drawing in the dataset. (For better visualization purposes herein, the size of the columns in this and the following figures is enlarged.)

As an ongoing project, the dataset is expected to be expanded to consider more design conditions, e.g., building height and seismic intensity.

4.2. Experiment settings

All models in this paper are built on Python 3.7 and PyTorch 1.5, and trained on an NVIDIA RTX 3080 GPU with 10 GB of GPU memory. Considering that the pixel content of the structural drawings involves fewer classes, e.g., column, wall, etc., and a less noisy background than the images typically used in image-to-image translation, 3 downsampling layers, 3 upsampling layers and 9 residual blocks are applied in the proposed FrameGAN to reduce the overall model complexity. Pix2pix uses the default U-net generator (8 downsampling layers) and PatchGAN discriminator, while pix2pixHD uses the default global generator (4 downsampling layers) and multi-scale discriminator (2 PatchGAN discriminators). Meanwhile, the encoding network 𝐸 in pix2pixHD is not applied because diversified output is not required in the experiment. To reduce the training time and GPU memory occupation of the models, Apex mixed precision is adopted to accelerate training [33], which improves the training speed by 10%–20% and reduces the GPU memory occupation by 40%.

In this paper, pix2pix and pix2pixHD are trained with architectural drawings and complete structural drawings. All three models are trained for 500 epochs, where the learning rate is set to 0.0002 for the first 400 epochs and linearly decreases to 0 during the last 100 epochs.

Finally, the experimental results are evaluated by the two proposed metrics, i.e., expert grading and objective comparison. With respect to expert grading, senior structural engineers in the field of steel frame-brace structures at Tongji Design Institute are invited to grade the drawings by questionnaires.

5. Experiment results

5.1. Visual comparison and analysis

The results of the three models on the test set are shown in Fig. 9. Comparing the three models with the practical structural drawings, the following conclusions are reached: (a) Due to the limitation that pix2pix can only synthesize images with a resolution of 256 × 256, its performance is poor; in particular, for symmetrical buildings, pix2pix cannot generate a symmetrical structural layout, which may be attributed to the inferior learning ability of the U-net generator compared with the ResNet-type generator. (b) Compared with pix2pix, the brace layouts generated by pix2pixHD are more uniform and reasonable. However, some columns are arranged too densely, and some braces are not arranged between two columns. (c) The layouts of braces generated by FrameGAN are also uniform and reasonable, and unlike the other two models, FrameGAN uses a separate GAN to learn the layout of the columns, hence the layout of columns is well-proportioned. (d) FrameGAN is able to synthesize a regular grid of columns, which is significant in structural design. (e) In terms of column design, both FrameGAN and structural engineers design the columns in the form of a regular grid, hence many columns in their designs are in the same position. However, the structural engineers' design is more flexible, e.g., columns are arranged densely near the doors and the wall between two apartments, while FrameGAN's design is rather rigid, e.g., columns are arranged uniformly and are more likely to be placed at wall corners.

Furthermore, as illustrated in Fig. 10, after an architectural drawing is input into the first GAN of FrameGAN, a column-only structural drawing is generated as the input to the second GAN. The second GAN further optimizes the column layout according to the braces, which makes the layout of braces more reasonable. This is similar to the practical structural design process in which, after the architectural engineers propose a preliminary plan for the column layout, the structural engineer modifies it according to the structural layout rules and determines the layout of the braces.

For comparison, a single GAN model is also trained and analyzed in this study. As shown in Fig. 11, the results of the single GAN model are similar to those of pix2pixHD: it can generate a reasonable brace layout but does not perform well in column design, e.g., some columns are missing at the ends of braces. Because both the single GAN model and pix2pixHD cannot generate a reasonable column layout, only the designs of structural engineers and FrameGAN are discussed in the subsequent experts grading and objective comparison.

5.2. Experts grading

Table 3
Experts grading.
Evaluation metrics                        Designers
                                          FrameGAN      Structural engineer
Arrangement on key nodes                  4.1           4.3
Reasonability of overall layout           3.7           4.0
Summation                                 7.8           8.3

After averaging the scores of all experts, the scores of arrangement on key nodes, reasonability, and total performance are listed in Table 3. It is observed that the average total score of FrameGAN is only 6.4% lower than that of the structural engineers, indicating that the design of FrameGAN is subjectively not much different from that of structural engineers and can meet the design needs. In addition, the difference in the arrangement on key nodes is 4.9%, while that of the reasonability of the overall layout is 8.1%, indicating that the design of FrameGAN is much closer to that of the structural engineer in the details. It is considered that the convolution layers in FrameGAN mainly perform convolution processing on a small part of the image, thus paying more attention to the details.

5.3. Objective comparison

The number of columns and the distances between the two columns connected by each brace in 10 structural drawings are counted to compare the difference in material cost between the drawings designed by FrameGAN and by structural engineers, as expressed in Eqs. (11) & (12).
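For reference, Eqs. (11) and (12) translate directly into the following sketch, which computes the relative column and brace material-cost differences from per-floor column counts and brace spans. The nested per-drawing, per-floor data layout is a hypothetical convention chosen for illustration.

```python
# Direct transcription of Eqs. (11) and (12); the data layout is an assumption for illustration.
import math

def column_cost_diff(n_eng, n_gan, heights):
    """n_eng[k][j], n_gan[k][j]: column counts in drawing k, floor j; heights[k][j]: floor height H_j."""
    num = sum(abs(n_eng[k][j] - n_gan[k][j]) * heights[k][j]
              for k in range(len(heights)) for j in range(len(heights[k])))
    den = sum(n_eng[k][j] * heights[k][j]
              for k in range(len(heights)) for j in range(len(heights[k])))
    return num / den                                                 # Eq. (11)

def brace_cost_diff(L_eng, L_gan, heights):
    """L_eng[k][j], L_gan[k][j]: lists of column spacings bridged by each brace on floor j of drawing k."""
    def brace_len(spans, H):
        return sum(math.sqrt(L ** 2 + H ** 2) for L in spans)        # Pythagorean brace length
    num = sum(abs(brace_len(L_eng[k][j], heights[k][j]) - brace_len(L_gan[k][j], heights[k][j]))
              for k in range(len(heights)) for j in range(len(heights[k])))
    den = sum(brace_len(L_eng[k][j], heights[k][j])
              for k in range(len(heights)) for j in range(len(heights[k])))
    return num / den                                                 # Eq. (12)
```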


Fig. 9. Practical structural drawings and results of the three models.

Table 4
Difference of material cost.
Components      Column      Brace
Difference      7.3%        16.2%

As shown in Table 4, the difference in the material cost of columns between FrameGAN and structural engineers is 7.3%, and that of braces is 16.2%; hence the material cost is relatively close.

In addition, three structural drawings with different numbers of floors (8 floors with 3.0 m story height, 10 floors with 3.0 m story height and 15 floors with 2.9 m story height) and different planar sizes (19.7 m × 13.8 m, 26.5 m × 14.2 m, and 34.8 m × 17.2 m) are first selected from the test set, and finite element models are then established according to the layout of components on the drawings. After calculation, the story drift ratios of each layer of the three structures are selected from the analysis results of the finite element models. As shown in Fig. 12, the maximum difference in the story drift ratio between structures designed by the structural engineer and by FrameGAN is 0.015%, and the maximum story drift ratio is less than the limit of 0.4% in the current Chinese seismic design code [34]. Therefore, the mechanical properties of the structures designed by the structural engineer and those designed by FrameGAN are close, and their safety can meet the requirements of daily use.

6. Conclusions and extensions

In this paper, a GAN-based method, namely FrameGAN, is proposed to realize the automated layout design of steel frame-brace structures. According to the two-stage characteristics of the steel frame-brace structure layout design process, this method adopts dual GAN models to decouple this process into two stages, i.e., designing the column layout and the brace layout sequentially.


Fig. 10. Design process of brace layout.

Fig. 11. Design results of pix2pixHD and single GAN.

To verify the performance of FrameGAN, abundant steel frame-brace drawings applied in practical engineering are collected from multiple design institutes and semanticized to facilitate computer recognition. For comparison, the performance of FrameGAN is compared with two mainstream GAN models (pix2pix and pix2pixHD) used in similar tasks, e.g., image-to-image translation. All models are trained with the same dataset under the same conditions. In addition, to evaluate the effectiveness of FrameGAN, two unique evaluation metrics, expert grading and objective comparison, are introduced in this paper. Senior structural engineers of the design institute are invited to grade the design results of FrameGAN in the form of questionnaires, and the drawings of FrameGAN and structural engineers are compared and analyzed in terms of material cost and overall structural mechanical properties. By these two evaluation metrics, the effectiveness of FrameGAN in the component layout design of steel frame-brace structures is demonstrated, which can assist architectural engineers in the layout design. The following conclusions are drawn from these experiments:

• By decoupling the design process into two stages, FrameGAN can effectively reduce the learning difficulty of the GAN, and it is easier for the GAN to converge to the real data distribution.
• The U-net generator used in pix2pix is not as effective as the ResNet-type generator used in pix2pixHD and FrameGAN, as it cannot generate high-resolution images. Therefore, the ResNet-type generator is more suitable for structural design.
• Compared to pix2pix and pix2pixHD, FrameGAN synthesizes a regular grid of columns, which conforms to the design habits of structural engineers.
• The score of FrameGAN is a mere 6.4% lower than that of structural engineers in experts grading, hence the difference is subjectively small.
• The differences in the material cost of columns and braces between FrameGAN and structural engineers are 7.3% and 16.2%, respectively. Meanwhile, their mechanical properties are also very close, and the maximum difference in the story drift ratios is a mere 0.015%.

In the field of steel frame-brace structures, automated design is still in its infancy. The following items are worthy of future study:

◦ This paper only carries out automated component layout design of steel frame-brace structures, and the design of the components themselves, e.g., dimensions and material properties, remains to be studied.
◦ In this study, only the layout of columns and braces is considered, and the layout of beams and slabs is not studied.
◦ This paper is only aimed at low-rise and high-rise steel frame-brace structures with square steel tube columns and H-shaped steel braces, and the automated design of other structures still needs further exploration.
◦ The dataset will be expanded with drawings under different design conditions. Furthermore, transfer learning techniques [35], e.g., fine-tuning, can be applied to FrameGAN on a sub-dataset with a few drawings under different design conditions to produce satisfactory results.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.


Fig. 12. Story drift ratios of three structures.

Acknowledgments

The authors gratefully acknowledge the financial support from the Shanghai 2022 Science and Technology Innovation Action Plan Social Development Science and Technology Research Project with Grant No. 22dz1201700, the National Key Research and Development Program of the 14th Five-Year Plan of China with Grant No. 2022YFC3801904 and the National Natural Science Foundation of China (NSFC) with Grant No. 51820105013, and appreciate Engineer Xin Zhao from Tongji Design Institute for providing steel frame-brace drawings and inviting experts to grade the drawings.

References

[1] A. Gibb, Offsite construction industry survey–2006, Build Offsite, London, UK, 2007, URL https://www.buildoffsite.com/content/uploads/2015/03/Offsite-survey-2006.pdf.
[2] Y. Chang, X. Li, E. Masanet, L. Zhang, Z. Huang, R. Ries, Unlocking the green opportunity for prefabricated buildings and construction in China, Resour. Conserv. Recy. 139 (2018) 259–261, http://dx.doi.org/10.1016/j.resconrec.2018.08.025.
[3] S. Yu, Y. Liu, D. Wang, A.S. Bahaj, Y. Wu, J. Liu, Review of thermal and environmental performance of prefabricated buildings: Implications to emission reductions in China, Renew. Sustain. Energy Rev. 137 (2021) 110472, http://dx.doi.org/10.1016/j.rser.2020.110472.
[4] Q. Du, Q. Pang, T. Bao, X. Guo, Y. Deng, Critical factors influencing carbon emissions of prefabricated building supply chains in China, J. Clean. Prod. 280 (2021) 124398, http://dx.doi.org/10.1016/j.jclepro.2020.124398.


[5] G. Tumminia, F. Guarino, S. Longo, M. Ferraro, M. Cellura, V. Antonucci, Life cycle energy performances and environmental impacts of a prefabricated building module, Renew. Sustain. Energy Rev. 92 (2018) 272–283, http://dx.doi.org/10.1016/j.rser.2018.04.059.
[6] A. Mangialardo, E. Micelli, Innovation of off-site constructions: Benefits for developers and the community in an Italian case study, in: G. Mondini, A. Oppio, S. Stanghellini, M. Bottero, F. Abastante (Eds.), Values and Functions for Future Cities, Springer International Publishing, 2020, pp. 217–228, http://dx.doi.org/10.1007/978-3-030-23786-8_12.
[7] Y. Wang, Y. Shi, H. Chen, Y. Zhang, S. Li, Contemporary lightweight steel structure and its application in China, J. Build. Struct. 23 (1) (2002) 2–8, http://dx.doi.org/10.3321/j.issn:1000-6869.2002.01.001.
[8] G. Shi, F. Hu, Y. Shi, Recent research advances of high strength steel structures and codification of design specification in China, Int. J. Steel Struct. 14 (4) (2014) 873–887, http://dx.doi.org/10.1007/s13296-014-1218-7.
[9] M. Rafiq, G. Bugmann, D. Easterbrook, Neural network design for engineering applications, Comput. Struct. 79 (17) (2001) 1541–1552, http://dx.doi.org/10.1016/S0045-7949(01)00039-6.
[10] T.M. Ballal, W.D. Sher, Artificial neural network for the selection of buildable structural systems, Eng. Constr. Archit. Manag. 10 (4) (2003) 263–271, http://dx.doi.org/10.1108/09699980310489979.
[11] A.H. Gandomi, A.R. Kashani, D.A. Roke, M. Mousavi, Optimization of retaining wall design using recent swarm intelligence techniques, Eng. Struct. 103 (2015) 72–84, http://dx.doi.org/10.1016/j.engstruct.2015.08.034.
[12] M. Mangal, J.C. Cheng, Automated optimization of steel reinforcement in RC building frames using building information modeling and hybrid genetic algorithm, Autom. Constr. 90 (2018) 39–57, http://dx.doi.org/10.1016/j.autcon.2018.01.013.
[13] J.H. Jeong, H. Jo, Deep reinforcement learning for automated design of reinforced concrete structures, Comput.-Aided Civ. Infrastruct. Eng. 36 (12) (2021) 1508–1529, http://dx.doi.org/10.1111/mice.12773.
[14] P.N. Pizarro, L.M. Massone, Structural design of reinforced concrete buildings based on deep neural networks, Eng. Struct. 241 (2021) 112377, http://dx.doi.org/10.1016/j.engstruct.2021.112377.
[15] S. Tafraout, N. Bourahla, Y. Bourahla, A. Mebarki, Automatic structural design of RC wall-slab buildings using a genetic algorithm with application in BIM environment, Autom. Constr. 106 (2019) 102901, http://dx.doi.org/10.1016/j.autcon.2019.102901.
[16] Y. Yu, T. Hur, J. Jung, I.G. Jang, Deep learning for determining a near-optimal topological design without any iteration, Struct. Multidiscip. Optim. 59 (3) (2019) 787–799, http://dx.doi.org/10.1007/s00158-018-2101-5.
[17] P.N. Pizarro, N. Hitschfeld, I. Sipiran, J.M. Saavedra, Automatic floor plan analysis and recognition, Autom. Constr. 140 (2022) 104348, http://dx.doi.org/10.1016/j.autcon.2022.104348.
[18] C. Málaga-Chuquitaype, Machine learning in structural design: an opinionated review, Front. Built Environ. 8 (2022), URL https://frontiersin.yncjkj.com/articles/10.3389/fbuil.2022.815717/pdf.
[19] I.J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial networks, 2014, arXiv preprint, arXiv:1406.2661.
[20] A. Radford, L. Metz, S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, 2015, arXiv preprint, arXiv:1511.06434.
[21] P. Isola, J.Y. Zhu, T. Zhou, A.A. Efros, Image-to-image translation with conditional adversarial networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1125–1134, URL https://openaccess.thecvf.com/content_cvpr_2017/papers/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.pdf.
[22] J.Y. Zhu, T. Park, P. Isola, A.A. Efros, Unpaired image-to-image translation using cycle-consistent adversarial networks, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223–2232, URL https://openaccess.thecvf.com/content_ICCV_2017/papers/Zhu_Unpaired_Image-To-Image_Translation_ICCV_2017_paper.pdf.
[23] Y. Gao, B. Kong, K.M. Mosalam, Deep leaf-bootstrapping generative adversarial network for structural image data augmentation, Comput.-Aided Civ. Infrastruct. Eng. 34 (9) (2019) 755–773, http://dx.doi.org/10.1111/mice.12458.
[24] Y. Gao, P. Zhai, K.M. Mosalam, Balanced semisupervised generative adversarial network for damage assessment from low-data imbalanced-class regime, Comput.-Aided Civ. Infrastruct. Eng. 36 (9) (2021) 1094–1113, http://dx.doi.org/10.1111/mice.12741.
[25] W. Liao, X. Lu, Y. Huang, Z. Zheng, Y. Lin, Automated structural design of shear wall residential buildings using generative adversarial networks, Autom. Constr. 132 (2021) 103931, http://dx.doi.org/10.1016/j.autcon.2021.103931.
[26] X. Lu, W. Liao, Y. Zhang, Y. Huang, Intelligent structural design of shear wall residence using physics-enhanced generative adversarial networks, Earthq. Eng. Struct. Dyn. 51 (7) (2022) 1657–1676, http://dx.doi.org/10.1002/eqe.3632.
[27] P. Zhao, W. Liao, H. Xue, X. Lu, Intelligent design method for beam and slab of shear wall structure based on deep learning, J. Build. Eng. 57 (2022) 104838, http://dx.doi.org/10.1016/j.jobe.2022.104838.
[28] I. Goodfellow, NIPS 2016 tutorial: Generative adversarial networks, 2016, arXiv preprint, arXiv:1701.00160.
[29] M. Mirza, S. Osindero, Conditional generative adversarial nets, 2014, arXiv preprint, arXiv:1411.1784.
[30] T.C. Wang, M.Y. Liu, J.Y. Zhu, A. Tao, J. Kautz, B. Catanzaro, High-resolution image synthesis and semantic manipulation with conditional GANs, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8798–8807, URL https://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_High-Resolution_Image_Synthesis_CVPR_2018_paper.pdf.
[31] D.P. Kingma, J. Ba, Adam: A method for stochastic optimization, 2014, arXiv preprint, arXiv:1412.6980.
[32] NetEase, aircv, 2017, GitHub repository, URL https://github.com/NetEaseGame/aircv.
[33] P. Micikevicius, S. Narang, J. Alben, G. Diamos, E. Elsen, D. Garcia, B. Ginsburg, M. Houston, O. Kuchaiev, G. Venkatesh, et al., Mixed precision training, 2017, arXiv preprint, arXiv:1710.03740.
[34] GB50011-2010, Code for seismic design of buildings, China Architecture & Building Press, Beijing, 2010 (in Chinese).
[35] Y. Gao, K.M. Mosalam, Deep transfer learning for image-based structural damage recognition, Comput.-Aided Civ. Infrastruct. Eng. 33 (9) (2018) 748–768, http://dx.doi.org/10.1111/mice.12363.
