ABSTRACT
The daylight factor is usually one of the first daylight
performance measures that simulation newcomers
calculate. Apart from the intrinsic limitations of the
daylight factor as a meaningful daylighting
performance metric, little work has been done in the
past as to how accurate one can actually expect
simulation novices to simulate the daylight factor
compared to an expert modeler. This paper presents
the comparison of daylight factor predictions from a
best practice model of an L-shaped perimeter
classroom to a total of 69 novice/student models. In
all cases the models were prepared in ECOTECT and
simulated in RADIANCE. The paper discusses
common mistakes that simulation beginners make
when carrying out a daylight simulation, how close
their simulation results were compared to the best
practice model, and how software developers and
educators could potentially guide users to avoid
making these mistakes. In addition, a comparison of
simulation results obtained with ECOTECT's built-in split flux method and with RADIANCE is carried out for the 69 models in order to quantify to what extent using a less reliable simulation engine compromises the accuracy of a simulation.
Keywords:
Daylight simulation, simulation errors
INTRODUCTION
Introducing effective daylight strategies has become
an essential goal for any sustainable building.
However, since it is difficult to evaluate daylight quality and quantity in non-standard spaces through simple rules of thumb, the use of daylight simulations has increased considerably as a necessary step to accurately evaluate daylight in buildings. This has
been a conclusion of two recent surveys: the first
survey focused on simulation experts and their
specific use of daylight simulations (Reinhart and
Fitz 2008), while the second survey addressed more broadly how green building design teams are currently implementing daylighting in their projects.
Both groups reported that they routinely use
simulations especially at the design development
stage (Galasiu and Reinhart 2008). The simulation
experts mostly modeled work plane illuminances and
daylight factor (DF) whereas the green building
Figure 1: Left: view of the room; right: floor plan of the room.
the use of these programs will yield reliable results if
the investigated buildings are of comparable
complexity to the ones investigated in the validation
studies. But given that previous validation studies were carried out by a handful of simulation experts, one might wonder how accurate the results of simulation novices can actually be expected to be.
Few studies have focused on the impact of the user on the accuracy of simulation results by analyzing the
output of multiple users modeling the same
simulation case. Bradley, Kummert and McDowell (2004) compared the differences in simulation results when three expert users applied ANSI/ASHRAE Standard 140-2001 to the TRNSYS simulation program (TRNSYS 2009). The users were categorized as a developer, a user/developer,
and an expert user. The study concluded that there is
a great leeway within a given software package to
make widely varying assumptions and yet still fall
well within the range of acceptably accurate results.
It further concluded that despite this modeling latitude, knowledgeable users can still be confident that their results will not vary dramatically from those of other expert users (Bradley et al. 2004).
This conclusion might be reasonable for expert users
who understand the underlying assumptions and
limitations of a simulation program. But, are these
conclusions also valid for novice users? How large is
the error margin introduced by typical simulation
newcomers? Can common mistakes be identified and
potentially be avoided in the future?
To answer these questions this paper compares
daylight factor simulation results for a best practice
model of an L-shaped perimeter classroom to a total
of 69 novice/student models. The objectives of this
work are to identify common mistakes that simulation beginners make and to develop guidance for software developers on how they could help users avoid making these mistakes.
In addition, a comparison of simulation results obtained with ECOTECT's built-in split flux method and with RADIANCE is carried out for the 69 models in order to quantify to what extent using a less reliable simulation engine compromises the accuracy of a simulation.
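For context, the split flux method estimates the daylight factor analytically as the sum of three components, whereas RADIANCE computes the inter-reflected contribution by backward ray tracing; the decomposition below is the standard textbook formulation rather than a description of ECOTECT's exact implementation:

    DF = SC + ERC + IRC

where SC is the sky component, ERC the externally reflected component, and IRC the internally reflected component. It is the simplified treatment of the internally reflected component that typically makes split flux results less reliable than ray-traced ones.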
METHODOLOGY
During the Fall 2005 and 2006 terms, the second author asked a total of 87 students at the McGill School of Architecture to model the daylight factor distribution in one of the school's L-shaped crit rooms (Figure 1). The task was assigned as part of
the deliverables for an introductory course in lighting
and daylighting for 3rd year Bachelor of Architecture
students. The room is located on the ground floor of
the Macdonald-Harrington Building in Montreal,
Canada (latitude 45.50°N, longitude 73.70°W).
The building was designed by Sir Andrew Taylor and
built between 1896 and 1897. The room is daylit through four windows and is characterized by over 800 mm thick walls and a 4350 mm high ceiling. The floor area is about 64 m² and the room dimensions in the widest sections are 11.95 m along the east-west axis and 8.17 m along the north-south axis. The optical characteristics of all walls and windows were measured using a reference white surface and a Hagner luminance meter¹. The resulting material properties are listed in Table 1.
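As a reminder of the quantities involved (standard definitions, not specific values from this study): the daylight factor relates indoor to unobstructed outdoor horizontal illuminance under a CIE overcast sky, and a diffuse reflectance measured with a luminance meter follows from the luminance ratio against a reference white of known reflectance, assuming near-Lambertian surfaces:

    \mathrm{DF} = \frac{E_{\mathrm{in}}}{E_{\mathrm{out}}} \times 100\,\%, \qquad
    \rho_{\mathrm{surface}} \approx \rho_{\mathrm{ref}} \, \frac{L_{\mathrm{surface}}}{L_{\mathrm{ref}}}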
Table 1. Building material optical properties (measured diffuse reflectances of the ceiling, walls, floor, and windows).

¹ http://www.hagnerlightmeters.com/products.htm

Table 2. RADIANCE simulation parameters
  ambient divisions (-ad):      1500
  ambient super-samples (-as):  100
  ambient accuracy (-aa):       0.05
  ambient resolution (-ar):     300
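Purely as an illustration of how the parameters in Table 2 enter a RADIANCE calculation (the file names below are hypothetical placeholders and the call is not the exact script generated by ECOTECT, which additionally sets the number of ambient bounces, -ab):

    import subprocess

    # Hypothetical file names, not the files used in the study.
    SCENE_OCTREE = "classroom.oct"           # created beforehand with oconv
    SENSOR_POINTS = "workplane_sensors.pts"  # one "x y z dx dy dz" line per sensor

    AMBIENT_PARAMS = [
        "-ad", "1500",  # ambient divisions (Table 2)
        "-as", "100",   # ambient super-samples
        "-aa", "0.05",  # ambient accuracy
        "-ar", "300",   # ambient resolution
    ]

    # rtrace -I returns RGB irradiance per sensor point; illuminance follows
    # as 179 * (0.265 R + 0.670 G + 0.065 B).
    with open(SENSOR_POINTS) as pts, open("irradiance.dat", "w") as out:
        subprocess.run(["rtrace", "-h", "-I", *AMBIENT_PARAMS, SCENE_OCTREE],
                       stdin=pts, stdout=out, check=True)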
Table 3. List of model inputs that were used to characterize the 69 student models. For each input the table lists its category (General, Geometry, or Simulation Settings), the question asked, the possible answers, and the error frequency among the Fall 2005 and Fall 2006 models. Possible answers include yes/no flags, the geometry import status (imported unsuccessfully / built within ECOTECT / imported successfully), and a detail-level setting (low / medium / high / very high / full).
              Average DF [%]   Area above 2% DF [%]
  ECOTECT     1.53             6.89
  RADIANCE    2.59             41.65
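Both metrics follow directly from a grid of sensor-point daylight factors; a minimal sketch, assuming an equally spaced sensor grid so that the share of points is a proxy for floor area (the example values are placeholders, not data from the study):

    import numpy as np

    def df_metrics(df_values):
        """Return the average DF and the percentage of sensor points
        (a proxy for floor area on a uniform grid) with DF above 2%."""
        df = np.asarray(df_values, dtype=float)
        return df.mean(), 100.0 * np.count_nonzero(df > 2.0) / df.size

    # Placeholder sensor-grid DF values in percent (illustrative only):
    example_grid = [0.8, 1.5, 2.3, 3.1, 4.0, 1.1, 2.6, 0.9]
    avg_df, area_above_2 = df_metrics(example_grid)
    print(f"Average DF: {avg_df:.2f}%, area above 2% DF: {area_above_2:.1f}%")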
[Figure 6: two bar charts, "Fall 2005 ECOTECT & RADIANCE average daylight factor simulation" and "Fall 2006 ECOTECT & RADIANCE average daylight factor simulation"; y-axis: percentage DF [%] from 0 to 12; x-axis: student model number; series: ECOTECT simulation and RADIANCE simulation; three 2006 values (26.0, 32.9, and 48.6) exceed the plotted range.]
Figure 6. Comparison of ECOTECT and RADIANCE average daylight factor results for the 2005 and 2006 models. The simulation results are compared to the best practice RADIANCE model, for which a 10% error band is plotted as well.
In both cases these high results stem from an unsuccessful import of the scene geometry from another CAD tool into ECOTECT: in model number 17 the ceiling plane was imported as construction lines only. Similarly, in model 25 most of the perimeter walls were imported as construction lines.
Figure 7 shows a frequency distribution of the RADIANCE results from Figure 6 binned into 1% slots for 2005 and 2006. Contrary to what one would expect, the two distributions do not approximate a normal distribution; instead, the results lie around 200% to 400% above the best practice model results. In fact, in 2005 only one model fell into the 2% or 3% bins, which can be interpreted as the acceptable result range. In 2006 the number of acceptable models grew to six, which still corresponds to only 16% of the submitted student models. As will be shown in the following, the difference in the models submitted in 2006 can largely be attributed to the simulation tips provided by the instructor in the second year.
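A sketch of the binning and relative-error calculation described above, assuming the RADIANCE average daylight factor of 2.59% quoted earlier corresponds to the best practice model (the student values shown are placeholders):

    import numpy as np

    BEST_PRACTICE_AVG_DF = 2.59  # best practice RADIANCE average DF in percent (assumed)

    def bin_average_dfs(avg_dfs, bin_width=1.0):
        """Frequency distribution of student average-DF results in 1% bins."""
        avg_dfs = np.asarray(avg_dfs, dtype=float)
        edges = np.arange(0.0, avg_dfs.max() + bin_width, bin_width)
        return np.histogram(avg_dfs, bins=edges)

    def relative_error(avg_dfs):
        """Relative deviation of each student model from the best practice result."""
        avg_dfs = np.asarray(avg_dfs, dtype=float)
        return (avg_dfs - BEST_PRACTICE_AVG_DF) / BEST_PRACTICE_AVG_DF

    # Placeholder values only:
    student_avgs = [7.5, 9.2, 2.4, 11.0]
    counts, edges = bin_average_dfs(student_avgs)
    errors = relative_error(student_avgs)  # e.g. +2.0 means 200% above best practice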
[Figure panels: "Relative error by model source" (series: model imported wrong, model imported OK, model created in ECOTECT) and "Relative error of ECOTECT models by year"; y-axis: frequency; x-axis: relative error from -1 to 4.]
DISCUSSION
The previous section presented the simulation results
for all 69 models and compared them to the best practice model results. The sobering results of the student models reveal that even simple, mature workflows have to be taught in greater detail for novices to obtain accurate simulation results. Part of the problem can be attributed to the limited 3D modeling capabilities of the ECOTECT GUI: modeling three-dimensional spaces using zero-thickness walls does not lead to acceptable results.
One may question whether the test space was unrepresentatively difficult for ECOTECT to model due to the uncommonly large wall thickness of 980 mm. The authors believe not: while modern buildings tend to have thinner walls, they come with advanced façade features (overhangs, shading devices, etc.) which also have to be geometrically modeled.
At the same time, import workflows from other
programs have to be further streamlined and properly
explained to software users; as many as 72% of the
users who tried importing geometry were
unsuccessful. Material properties, on the other hand,
had a relatively minor effect on the model quality in
this study mainly because the scene geometries were
modeled inadequately.
Users were surprisingly careless when modeling
space geometries, something unexpected from
architecture students. The study demonstrates that
students pay attention to simulation tips, which is a
real opportunity for instructors and a good sign for
the adoption of modeling guidelines.
There is currently a push toward moving away from
the daylight factor as a performance metric for
daylighting and using climate-based metrics instead
(Reinhart, Rogers and Mardaljevic 2006). While this
study used the daylight factor to determine the
quality of a simulation, one should assume that the results would have been largely the same if a climate-based metric such as daylight autonomy or useful daylight illuminance had been used instead: the nature of the simulation errors lay in basic model setup rather than in the choice of performance metric.
CONCLUSION
When comparing the simulation results reported by ECOTECT's built-in engine and RADIANCE for the best practice model, ECOTECT reported a dramatic 79% lower daylight factor and a reduction in the area above 2% DF from 41% to 0%. For the 69 student models, ECOTECT simulations reported on average a 36% lower result and a 72% lower MBE than the same simulations run in RADIANCE. Furthermore, individual ECOTECT models both grossly over- and under-predicted daylight factor levels relative to RADIANCE. This finding suggests that ECOTECT-based daylight factor predictions cannot be considered to be worst-case assumptions and that RADIANCE should always be used instead of the built-in ECOTECT daylighting engine.
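For reference, the mean bias error quoted above is understood here in its usual form, expressed relative to the RADIANCE results; this is the standard definition and not necessarily the exact expression used in the original analysis:

    \mathrm{MBE} = \frac{1}{N} \sum_{i=1}^{N}
    \frac{\mathrm{DF}^{\mathrm{ECOTECT}}_{i} - \mathrm{DF}^{\mathrm{RADIANCE}}_{i}}
         {\mathrm{DF}^{\mathrm{RADIANCE}}_{i}} \times 100\,\%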
Comparing the quality of the student models submitted in 2005 and 2006 suggests that when the instructor provides simulation tips, they are followed. This finding puts an additional emphasis
on the importance of high quality teaching material
to complement simulation workflows: Offering
simple simulation tips in the 2006 version of the class
considerably improved the accuracy of the simulation
results. Conversely, when no explicit modeling guidelines were provided in 2005, students tended to make dramatic errors, especially in relation to geometry input. When only a few parameters were addressed, users tended to overlook the impact of the other variables and continued to obtain inaccurate simulation results.
The authors conclude that at least a simple set of modeling guidelines is generally required to complement any simulation workflow, however simple the workflow may be, in order to ensure that simulation novices can follow it accurately.
ACKNOWLEDGEMENT
The authors would like to thank Lia Ruccolo for
helping with some of the initial student model
analysis.
REFERENCES
ASHRAE, ed. 2007. ASHRAE/IESNA Standard 90.1-2007 - Energy Standard for Buildings Except Low-Rise Residential Buildings. American Society of Heating, Refrigerating and Air-Conditioning Engineers.
Bradley, E., Kummert, M., McDowell, T., 2004. Experiences with and Interpretation of Standard Test Methods of Building Energy Analysis