Pavement Thesis
1 Introduction 1
2 Case study 4
2.1 AntiSkid Layer (ASK) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1.1 Type of pavement surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1.2 Flightflex® . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Quality control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.1 Type of sampling methodologies . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.2 Current sampling strategies . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3 Literature review 12
3.1 Categories of sampling techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2 Surface properties and manufacturing process signatures . . . . . . . . . . . . . . 13
3.3 Definition of a sampling strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.4 Literature conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
8 Validation and Test 57
8.1 Correlation between Elatextur and Sand Patch Method . . . . . . . . . . . . . . 57
8.1.1 Correlation between the techniques . . . . . . . . . . . . . . . . . . . . . . 57
8.1.2 Conclusion and Recommendation . . . . . . . . . . . . . . . . . . . . . . . 59
8.2 Field Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
8.3 Measurements on Taxiway . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
8.4 Measurements on the Runway . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
8.4.1 Measurements on untreated surface . . . . . . . . . . . . . . . . . . . . . . 63
8.4.2 Measurement on semi treated surface . . . . . . . . . . . . . . . . . . . . . 64
8.4.3 Homogeneous distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
8.4.4 Final Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
8.5 Conclusion of measurements process . . . . . . . . . . . . . . . . . . . . . . . . . 68
Appendix 76
List of Figures
6.11 Identification of good and low quality area. Axis labels represent the number of
surface squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.12 Probability distribution function and cumulative distribution function; loc= Mean,
scale= St.Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.13 Fitting process of low (top) and high (bottom) quality pavement; loc= Mean,
scale= St.Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.14 Location of ASK Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.15 Results of ASK measurements analysis. Axis labels represent the number of surface
squares; loc= Mean, scale= St.Deviation . . . . . . . . . . . . . . . . . . . . . . . 38
6.16 Assembly procedure of simulated surface . . . . . . . . . . . . . . . . . . . . . . . 41
6.17 Final Surface Simulated. Axis labels represent the number of surface squares . . 42
Preface
This thesis represents the last step of my academic career at TU Delft. I would like to thank
the University for giving me the opportunity to conclude my studies at a top educational level
and in this nice country. The experience collected in my first two years is the basis of this report. I
would also like to thank the members of the committee for guiding me during this research. In
particular I would like to thank Lambert for his assistance with the double degree paperwork. It
was a complicated procedure that would have been even more complex without his help.
As well, I would like to thank two companies: Heijmans and Schiphol. They gave me the chance
to work every day at Schiphol Airport and to assist with maintenance activities on the runways.
These are unique experiences that I will never forget. In particular, I would like to thank my daily
supervisors Arjan and John for the patience they had in these 12 months and for the considerable
amount of knowledge and experience they taught me. Although it was not possible to meet often,
I would also like to thank Maarten for the opportunity he gave me last year and for supporting me
whenever I needed him in the research process.
In these years far from home I have also learned the importance of family and friends. For
this reason there are several people I want to mention.
My mother, for the unconditional support, the patience and the difficult moments passed
together. I admire her strength and I am blessed to receive so much love from her. My father,
who gave me the opportunity to come here and start this unique experience.
Thea, for always being present, supportive and patient in these months. Without your support,
love and help, this year would have been much harder.
Carla, Francesco and Giorgio. The magnificent trio. It took me longer (as usual), but we have
finally all graduated. I will end my academic career here, but I know you will always make me feel
like a student (and most likely a stupid one). Thank you for accepting me even when I made
mistakes, for always supporting me and for making me a better person (and probably a better student).
Tommy, almost invisible but always present to help and to have a glass of wine together. You
have always been there in the most difficult moments with your rationality and calm.
Marsa, for always being on my side, even when it was not the case. Your unconditional
kindness is something I admire.
Balda, my oldest friend and probably the only reason to come to visit Brusnengo. Thank you
for tolerating my hypochondriac questions and concerns.
Matteo, for having the patience of being my flatmate for an entire year and for showing me
the real meaning of "Enjoyare".
The Greek friends I met here in Delft helped me with the courses, and we also spent nice
moments outside the university. In particular I want to thank Ioannis. Please remember:
"Solaku".
Last but not least, Juan Camillo Laguado Lancheros De Castillia. It took me more time to
learn your entire name than to get to know you. Your help and friendship have been really important.
Abstract
Schiphol Airport and Heijmans are working together on the renewal of the Schiphol runway
pavements. Currently the top layer is covered with a synthetic anti-skid material called ASK,
but the strong weather limitations for the installation of this material have opened the door to
alternatives. Heijmans has proposed an innovative asphalt mixture that is able to provide similar,
and in some cases better, surface performances compared with the ASK.
This asphalt mixture is called Flightflex® and is a stone matrix asphalt. Consequently it is
affected by the variability of the construction process. This thesis project focuses on the analysis
of the best quality control procedure for this asphalt. The pavement surface needs to meet
specific requirements and it is of interest to define a sampling methodology for the evaluation of
the Texture Depth (TD). In particular the research aims to define the minimum number of
samples that provides the highest reliability for the definition of the Mean Texture
Depth (MTD) of the surface.
To achieve this goal a theoretical approach is adopted. Starting from the collection of a
consistent number of samples, the properties of the surface are analysed. In this process it is of
interest to define the influence of the construction process on the surface quality. The information
obtained is used to simulate larger surfaces on which different sampling methodologies are tested.
In particular three different methodologies are analysed: the current methodology called CROW,
a Uniform methodology and a random methodology called the Hammersley methodology. By
testing these sampling methodologies on the simulated surfaces it is possible to evaluate the
relative error between the MTD of the simulated surfaces and the MTD of the samples taken. A
Monte Carlo type of approach helps to define precisely which methodology performs better. The
one with the lowest relative error and minimum required number of samples will be considered
the most efficient.
The simulation of the surfaces and the analysis of the sampling process highlight a correlation
between the manufacturing signature and some sampling methodologies. In case of a correlation,
the reliability of the methodology decreases. In particular, the CROW and the Uniform
methodologies present a form of correlation and thus have a lower reliability. The Hammersley
methodology aims to simulate a random selection of samples, and for this reason it does not
correlate with the surface patterns. In the last part of the research the three aforementioned
methodologies are applied to a 500 m long section of the runway Polderbaan at Schiphol Airport.
Although the Uniform methodology is less reliable, it provides a 1% relative error with only
70 samples. The Hammersley methodology instead needs 180 samples to reach the same relative
error, but with a higher reliability. The CROW is the least performing: it has a lower reliability
than the Uniform strategy and needs 170 samples to reach a 1% relative error.
The research helps to highlight the correlation between the manufacturing signature left by
the construction process and the sampling strategy adopted. It also highlights the fact that a
random distribution escapes this correlation and provides more reliable results.
To conclude, the companies are advised to use the Uniform methodology when limited time is
available for the quality control measurements. This comes with a lower reliability that has to
be accepted. But in case a high reliability is required and sufficient time is available, the
Hammersley strategy is considered more appropriate.
Chapter 1
Introduction
According to the 2017 Airport Traffic Report, Schiphol Airport is one of the busiest airports in
the world. Currently it is ranked 11th for passenger traffic volume with a growth rate of 7.7%,
the highest in Europe [19]. Freight movements also have an important impact, with 1,960,328
tons transported, the 20th highest value in the world [19].
As can be seen in figure 1.1, Schiphol Airport has a six-runway system, but currently only five
runways are actively used for regular operations [3]. With the exception of runway 36L-18R,
which was built in 2003, all the runways are quite old and will need maintenance operations
in the coming years.
In 2017 Schiphol Airport had a traffic volume of 68,515,425 passengers. Compared to the
78,047,278 passengers of Heathrow this is a lower value, but the London airport has a growth
factor of only 3.04% [20]. The Dutch airport is therefore expected to increase its passenger
volume faster and, keeping this trend, could reach a passenger volume similar to Heathrow's
within a few years. The lower growth rate of the English airport is due to the fact that it is
currently operating at 98% of its capacity [12]. For this reason its only possible growth is
achieved by increasing the share of aircraft with higher passenger capacity.

Figure 1.1: Runway configuration of Schiphol Airport

The growth percentage of Schiphol, as said previously, is 7.7%. This means that the capacity
of the runway system is expected to be maximised as much as possible, and for this reason
a night maintenance strategy is a valuable option to consider. Currently Heijmans is the
contractor in charge of asset maintenance activities and is highly involved in the planning of this
new maintenance strategy.
One of the main concerns for Heijmans is to meet the high quality standard required by
Schiphol. This has been done by developing a new asphalt mixture called Flightflex® (FFX).
Currently the texture properties are ensured by an anti-skid layer (ASK). Although the final
performances are good, its installation requires specific weather conditions, such as a maximum
relative humidity of 80% and a minimum temperature of 4 °C during the construction
process. To overcome these problems the engineers from Heijmans have proposed the Flightflex®
mixture, which would be more suitable for construction under adverse weather conditions and
should guarantee the pavement performances required by the client.
The quality of runway pavements is, in fact, one of the main concerns of the asset management
department at Schiphol Airport. For safety and regulatory reasons the European Aviation
Safety Agency (EASA) has imposed a minimum Mean Texture Depth (MTD) value for runway
pavements. This threshold guarantees proper water storage and reduces the risk of aquaplaning,
which could lead to a large number of fatalities in case of an accident. For this reason a quality
evaluation procedure for the runway pavement surface is required. This is valid for any
type of surface, and Schiphol has verified both ASK and FFX. Currently the sampling process
is executed with a laser machine that has a sampling dimension of 400 cm². Compared to the
dimension of a runway or road surface this sampling area is not large enough to enable the
measurement of the total surface. Specific points have to be selected and measured, but their
number and locations need to be defined with a clear methodology.
The regulation imposes a specific minimum MTD value, but it does not specify the location
of texture depth measurements or the required number of samples per area. The regulation
is limited to the following sentence [10]:
“The average surface texture depth of a new surface should be not less than 1.0 mm”
A similar regulation approach is also present in the road and highway pavement industry.
Schiphol and the contractor Heijmans need to define a precise quality control methodology to
include in the project contract. The design of such a methodology is the result of a scientific
analysis of the correlation between the manufacturing process of the asphalt, its physical
properties and the sampling techniques needed to measure the surface texture.
The goal of this thesis research is to bring an academic, and thus more theoretical, approach
to the definition of such a methodology. This should produce scientific evidence on the
relationships between the asphalt quality and the sampling technique adopted. This will provide
additional information for Schiphol Group and Heijmans to agree on which sampling methodology
to adopt.
The research will go through the definition of the most important sampling techniques and
the simulation of representative surfaces. These two elements will be combined and the simulated
surfaces will be used to try out the different sampling methodologies. An extensive statistical
analysis will then be executed to interpret the results and define how the sampling methodologies
behave in different situations. From this outcome a series of measurements will be executed
in the field in order to define a correspondence between the theoretical analysis and the field results.
In the next chapters the research topic will be described in detail before presenting the
literature review. Information regarding sampling methodologies and pavement surfaces will
be sought in papers obtained from Google Scholar, Scopus and the TU Delft library. Once the
background and current state of the art of these topics are clear, the analysis and simulation of the
surfaces will take place. It will be necessary to simulate in Python a series of asphalt surfaces
containing the surface characteristics left by the construction process and to define the algorithms
representing the sampling methodologies. The third part will be dedicated to the analysis and
interpretation of the results. The fourth will focus on defining which methodology provides, with
a sufficient reliability, the minimum number of samples for the calculation of the mean of the
entire surface. After this theoretical analysis there will be the possibility to try the different
methodologies in the field to ensure consistency and validation of the theoretical and simulation
results. The final part of the research will be dedicated to present the outcome of the analysis
and provide some practical recommendations to both Heijmans and Schiphol Group.
Chapter 2
Case study
As mentioned in the introduction, the EASA regulation does not propose sampling guidance for
the quality control of runway surfaces. This has therefore become a topic of discussion between
contractors and clients regarding the procedure to adopt to verify the pavement's quality. This
practical issue has called for a more academic study of the effect of sampling methodologies
on runway surfaces.
One of the main disadvantages of this material, besides the high costs, is the strict weather
conditions required for its installation. At a temperature lower than 10 °C or a humidity
higher than 80% the ASK layer is not applicable. Moreover, a total absence of rain is needed.
These requirements strictly limit the installation phases: during the winter period and night
shifts it is not possible to install this material. This imposes limitations on the maintenance
strategy of Schiphol. As an example, an overnight maintenance strategy could not be planned
with this kind of procedure, not even in the summer period.
Another important disadvantage is the degradation phase at the end of the material's lifetime.
After 4-5 years the ASK starts degrading and entire areas get consumed. These areas lose
the basalt mixture and uncover the asphalt underneath, reducing the overall performance of the
pavement. To conclude, it is important to highlight that the high costs of this product were one
of the main reasons for evaluating a more economical alternative.
As mentioned, this material is made of a synthetic glue and a homogeneous basalt mixture.
These elements define a surface that is highly homogeneous and requires only a limited number
of Texture Depth measurements. A different scenario arises with an asphalt mixture. The
variance of the aggregates, the quantity of bitumen and the construction process influence the
uniformity of the mixture and the final properties of the surface. This requires a specific quality
control methodology, able to determine the MTD of the surface with a limited number of samples
and an acceptable accuracy.
2.1.2 Flightflex®
Schiphol Airport is constantly increasing its capacity and is looking at maintenance strategies
that could help achieve this goal. For this reason the evaluation of an overnight maintenance
strategy is under analysis. Night maintenance operations could increase the capacity of the
airport, but it would be complicated to place the ASK during the night. To face this problem
Schiphol is looking for an alternative: Heijmans has proposed a new asphalt mixture called
Flightflex® that aims to provide all the surface performances required by Schiphol.
Advantages of Flightflex®
This mixture offers greater flexibility in the construction process, allowing construction at any
humidity level and even with light rain. Moreover, the minimum temperature required is 6 °C.
These factors would also allow construction during night shifts in the summer period.
1 ICAO = International Civil Aviation Organisation: an agency of the United Nations that defines codes and
regulations to ensure safety in international air transport.
Figure 2.2: Period of the year available for the installation of FFX and ASK
are expected to meet the Schiphol and EASA requirements and be comparable with the ASK
performances.
Flightflex® characteristics
Heijmans started the development of Flightflex® at the end of 2013. The development process
lasted from December 2013 to August 2014. There were two leading parameters for the mixture
design, namely the texture depth and the skid resistance. The required values for these
parameters are:
• Minimum skid resistance of µ=0.74 as the EASA requirement. This value is required to be
measured at 95 km/h [3].
Flightflex® is a stone matrix asphalt; this means the aggregates play an important role in the
performances of the final product. This category of mixtures is characterised by high bearing
capacity provided by the contact between the aggregates [23]. The downside of this mixture
design is that the final asphalt is more susceptible to ravelling damage. This negative aspect
can be reduced by the use of synthetic additives, which, however, also increase the cost of the
final product.
The aggregates also play an important role on the texture properties of the surface. From the
micro-texture point of view they have to be rough enough for the skid resistance of the surface.
From the macro-texture point of view the shape and dimension will influence the Texture Depth.
This last value will be the main aspect of analysis during this thesis project.
Due to the importance of the aggregates, Heijmans has tested three different types of aggregates
in order to evaluate the most suitable one for a runway pavement. Table 2.1 presents the types
of tests and their results.
The laboratory analysis led to three suitable stone types: EO slags, Grauwacke and BeStone.
Samples created from these aggregates have been tested for friction, texture depth, splitting
strength, tear resistance and stone losses.
The aggregate type considered the most appropriate to meet the Schiphol requirements was
the Grauwacke. It provided the best results in terms of texture depth (1.92 mm) and high
values in terms of skid resistance. Its FAP (Friction After Polishing) value is also among the
best, at µ = 75.4. In some categories (such as the splitting strength) the Grauwacke was not
the most performing aggregate type, but these categories were less important for meeting the
final requirements of Schiphol; note that even where its values were lower than those of the
other aggregates, the quality reached is in general very high. After these tests the Grauwacke
was selected for the realisation of Flightflex®.
To ensure the proper binding properties of these aggregates, a bitumen content of 8% [m/m]
is required. The negative effect of this high quantity of bitumen is a reduction of the texture
depth of the surface. To face this problem Heijmans has decided to implement a water-blasting
process after the compaction phase. A blast of water at high pressure (around 2000 bar) removes
part of the bitumen from the surface and consequently increases the final Texture Depth. A
comparison of the surface before and after the treatment is shown in figure 2.3.
Several tests have led Heijmans to define a water-jetting procedure to reach proper
This methodology is proposed in the ASTM standards in the TP763 clause [6]. From an
execution point of view this methodology presents a series of disadvantages: first of all, with
high wind or rain it is not possible to execute the test. Moreover, even when the conditions are
suitable, the test takes 3-5 minutes to execute, and its duration also depends on the experience
of the operator. Set against these negative factors is the good reliability of the results. With
this methodology the accuracy of the measurement is high, because the sand can fill all the
holes of the pavement structure: it is pushed into the structure and can also occupy the voids
present below some aggregates. Figure 2.6 shows this.
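The arithmetic behind the sand patch result is simple; as a sketch, the MTD follows from dividing the known sand volume by the area of the circular patch it spreads into (the usual volumetric relation, as in e.g. ASTM E965; the numeric values below are only illustrative):

```python
import math

def sand_patch_mtd(volume_mm3: float, diameter_mm: float) -> float:
    """Mean Texture Depth from a sand patch test: the known volume of
    sand divided by the area of the circular patch it spreads into."""
    patch_area = math.pi * diameter_mm ** 2 / 4.0
    return volume_mm3 / patch_area

# Illustrative values: 25,000 mm^3 of sand spread into a 150 mm circle
mtd = sand_patch_mtd(25_000, 150)  # about 1.41 mm
```

Averaging the patch diameter over several directions, as the test prescribes, reduces the effect of an irregular patch shape.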
Laser method
The second methodology available to measure the Texture Depth is the laser technique. The tool
used at Schiphol Airport is the ELAtextur machine that is shown in figure 2.8.
The Texture Depth is measured by a rotating laser that scans 2000 points per measurement
and calculates the texture depth of the area under the machine. In this case the sample
dimension remains fixed: the measured area is a circle with a diameter of 400 mm. This
machine measures the Texture Depth in accordance with the EN ISO 13473-1 and ASTM
E1845-09 standards.
The most important advantage of this methodology is its short execution time: a measurement
takes 12 seconds, including the data saving time, which is much faster than the sand patch
method. For this reason it is preferred when a large number of measurements is needed. The
limit of this technique arises from the nature of the laser itself. During the measurement process
the laser line remains straight and cannot bend under the aggregates; some spaces therefore
remain undetected and are not calculated by the machine (see figure ??). The measured Texture
Depth can, for this reason, be lower than the real one. The reliability of the laser method with
the ELAtextur machine needs to be analysed during this thesis to ensure that the methodology
can provide reliable results for the calculation of the Mean Texture Depth (MTD).
Figure 2.8: Laser limitation for detection during texture depth measurement
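The referenced standard, ISO 13473-1, converts a measured profile into a Mean Profile Depth (MPD) and then into an Estimated Texture Depth comparable with the sand patch MTD. A minimal sketch of that computation is shown below; the short profile array is illustrative, not actual ELAtextur output:

```python
import numpy as np

def mean_profile_depth(profile_mm: np.ndarray) -> float:
    """Mean Profile Depth of one evaluation segment (per ISO 13473-1):
    average the peak levels of the two half-segments, measured from the
    mean level of the profile."""
    centred = profile_mm - profile_mm.mean()
    half = len(centred) // 2
    return (centred[:half].max() + centred[half:].max()) / 2.0

def estimated_texture_depth(mpd_mm: float) -> float:
    """Transformation from laser MPD to a sand-patch-comparable value,
    as given in ISO 13473-1: ETD = 0.2 mm + 0.8 * MPD."""
    return 0.2 + 0.8 * mpd_mm

# Illustrative profile heights in mm (a real measurement has 2000 points)
profile = np.array([0.0, 2.0, 0.0, 0.0, 0.0, 4.0, 0.0, 0.0])
mpd = mean_profile_depth(profile)   # 2.25 mm for this toy profile
etd = estimated_texture_depth(mpd)
```

Because the peaks are taken relative to the mean level, voids hidden under aggregates simply never enter the profile, which is exactly the underestimation discussed above.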
Both methods for measuring the texture depth have a very limited sampling dimension, and
for this reason a consistent number of samples is needed to define the MTD of the surface. In
this case the surface analysed is a runway. In particular, the runway used to test the different
methodologies is the Polderbaan, which is 3500 m long and 60 m wide. The test area, however,
is 500 m long and 60 m wide. This area was renovated in April 2018 with Flightflex®. The
definition of the number of samples and their locations for the Flightflex® surfaces will be
based on the dimensions of this test area.
In this case study the area used to analyse the different sampling strategies is a rectangle
During this research this sampling strategy will be referred to as the CROW Method. This
sampling pattern is a deterministic method that is not influenced by the manufacturing process:
the number of samples and their locations are fixed and decided without taking into consideration
the different phases that characterise the construction process.
At this stage of the research the CROW method was adapted to the runway dimensions and
modified from the original version. During this research the characteristics of this and other
sampling strategies will be examined: their characteristics, relations with the manufacturing
process and reliability will be tested and compared in order to choose the most efficient one
for this construction process. Moreover, during the analysis of the sampling strategies and the
comparison phase it will be possible to derive some theoretical, more academic conclusions
on this topic.
From the TU Delft library and online databases such as Google Scholar and Scopus, different
articles have been found on sampling theory and surface texture.
The first category requires only geometrical information about the surfaces to be analysed and
some measurement tolerance, and no information regarding the manufacturing process. This
scarcity of information usually leads to a standardised procedure with a fixed number of
measurements. Normally a uniform spatial distribution of measurements is considered the most
appropriate to provide robustness to this kind of process. The negative aspect of this methodology
is that for large surfaces many samples are required in order to provide a proper level of analysis
reliability [8]. Some, like Lee et al., have defined a specific segmentation procedure to divide the
area to be analysed into a series of regions where the texture properties are homogeneous [15].
This approach is typical of image analysis and could encounter application difficulties with
inhomogeneous surfaces such as pavements.
The situation is different with adaptive strategies, where the sampling proceeds by adapting
to the data features. Initially a certain number of samples is taken; then, based on their analysis,
new samples are added in an attempt to meet a pre-defined criterion. Often the analysis proceeds
by searching for the points that present the highest deviation from the mean. Important with
this methodology is the definition of the analysis criterion: this can be a maximum number of
samples analysed or a property of the analysis (e.g. the geometric deviation does not vary
substantially from the mean). The main drawback is the number of samples needed to achieve
the established criterion: in the case of very inhomogeneous and varying surfaces it could take
an excessive number of samples to obtain a suitable set of results or, if the number of samples
allowed is limited, the description of the surface may not be representative [18].
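The adaptive loop just described can be sketched in a few lines. This is an illustrative toy, not a specific published algorithm: the stopping tolerance, the random start and the deviation-based selection rule are assumptions chosen to mirror the description above:

```python
import numpy as np

def adaptive_sample(surface: np.ndarray, n_start: int, n_max: int,
                    tol: float, seed: int = 0) -> list[int]:
    """Toy adaptive sampling: start from random points, then repeatedly add
    the unsampled point deviating most from the current sample mean, and
    stop when the mean stabilises or the sample budget is exhausted."""
    rng = np.random.default_rng(seed)
    values = surface.ravel()
    chosen = list(rng.choice(values.size, size=n_start, replace=False))
    while len(chosen) < n_max:
        current_mean = values[chosen].mean()
        remaining = np.setdiff1d(np.arange(values.size), chosen)
        # pick the candidate with the highest deviation from the mean
        nxt = int(remaining[np.argmax(np.abs(values[remaining] - current_mean))])
        chosen.append(nxt)
        if abs(values[chosen].mean() - current_mean) < tol:
            break  # criterion met: the mean no longer changes appreciably
    return chosen
```

On a very inhomogeneous surface the mean keeps shifting with every added extreme point, so the loop runs to `n_max`, which is exactly the drawback noted in the text.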
Affan Badar et al. have developed a search-based sampling algorithm that enables the operator
to reduce the number of samples needed for an accurate representation of the surface's
properties. From a defined number of starting points it is possible to move forward in a precise
direction in order to find the most significant points of the surface [2]. Although this methodology
reduces the number of samples needed, a highly irregular texture will still require a large number
of measurements.
The third and last category is based on specific information provided by the manufacturing
process. Knowing some properties or aspects of the surface and the production process, it is
possible to reduce the number of samples required and focus only on the elements that still
present a high variability. This can, for example, be integrated with the blind strategy by
reducing the area that needs to be analysed or by focusing on just one aspect of the surface.
The main disadvantage of this technique is that it is based on a specific manufacturing process
and surface type, and for this reason it cannot be generalised to other fields or materials [8, 18].
The process of raw data modelling has been extensively described by Colosimo et al. in a
second paper, where they defined the Extreme Point Selection method (EPS). They assert that
if there is a signature in the manufacturing process, then with this methodology it will be found
more or less in the same surface location. In contrast with the random or predefined selection of
sample locations, this methodology defines the exact locations based on the manufacturing
process; but if the process changes, the locations change as well, so under process uncertainty
this methodology appears less suitable [16].
Stefano Pertò in his doctoral thesis presented a series of models aimed at describing the
manufacturing signature [18]. The two main categories are the Ordinary Least Squares Model
and the Spatial Error Model. The first is basically a linear regression model; the second is a
model based on the property per location. More specifically, a method called the Lattice model
has been defined that could suit the asphalt manufacturing process: each sampled point represents
a certain area that is then modelled through a Spatial Auto-regressive Model. This model is
mathematically complex and lengthy; for this reason it will not be explained in this section but
will be used if necessary in the pavement analysis.
Where:
• b is a compensation factor
• u_p is the uncertainty of the measurements
This formula and all the different factors are calculated according to ISO/IEC Guide 98-3
[1] and ISO/IEC Guide 99:2007(E/F) [13].
The solution of this problem can be reached by defining a sampling strategy that minimises
the U function as much as possible. If the methodology is effective, the U value will be lower
and the sampling points will represent the areas with the highest deviations from the mean.
Colosimo et al. proposed to transform this strategy selection into an optimisation process
solved with genetic algorithms. In the same paper, a comparison of a uniform strategy, a
random strategy called Hammersley and the minimum-U strategy is presented in terms of the
U function. Figure 3.1 shows that the minimum-U strategy has proven to be the best.

Figure 3.1: (a) U value with different strategies; (b) bias for different strategies
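For reference, the Hammersley point set mentioned above can be generated with the standard radical-inverse construction; the scaling to the runway dimensions at the end is only illustrative:

```python
def radical_inverse(i: int, base: int = 2) -> float:
    """Van der Corput radical inverse: mirror the base-b digits of i
    around the radix point (e.g. 6 = 110 in base 2 -> 0.011 = 0.375)."""
    result, digit_weight = 0.0, 1.0 / base
    while i > 0:
        result += (i % base) * digit_weight
        i //= base
        digit_weight /= base
    return result

def hammersley_2d(n: int) -> list[tuple[float, float]]:
    """n-point Hammersley set on the unit square: the first coordinate
    is i/n, the second the base-2 radical inverse of i."""
    return [(i / n, radical_inverse(i)) for i in range(n)]

# Illustrative: scale unit-square points to a 500 m x 60 m test section
points = [(500 * x, 60 * y) for x, y in hammersley_2d(70)]
```

The set is deterministic yet has the low-discrepancy, "random-looking" spatial spread that keeps it from locking onto a periodic manufacturing pattern.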
In this chapter the research question that will guide the thesis and the research approach are presented.
As described in the introduction, the industry lacks a scientific basis for the definition of a
proper sampling methodology for the evaluation of runway textures. This research aims to
provide this support by defining the theoretical relations between the surface properties and the
sampling methodologies. Different intermediate steps, such as the definition of the simulations,
the virtual representation of the surfaces and the influence of the manufacturing process, will
be part of the research and help during the different operational phases.
This is the main question of this research, but some specific sub-questions will also guide the different steps needed. The main sub-questions are:
• How does the manufacturing process affect the properties of the surface?
• How do the different sampling methodologies behave with the simulated surfaces?
These sub-questions are relevant during the different phases of the process because they provide consistency and preserve the scientific approach of the research. The simulation of the sampling methodologies cannot take place if it is not clear how the surface is characterised by the manufacturing process. The same applies if there is no consistency in the representation of the surface during the simulation. Finally, to evaluate which sampling methodology is the most efficient, all the proposed ones need to be tried and compared on the same simulated surfaces.
This subject and research question reflect the intention to provide practical support to the industry by means of a scientific analysis. The academic approach is introduced to provide a conclusive and reliable analysis; interested companies can then rely on the outcome of this analysis and take decisions that reduce their risks on specific projects. The feasibility of this research will strongly depend on the quality of the data obtained during the field measurements.
1. Define in detail the distribution of the samples according to the CROW regulation. This is a blind technique that only partially takes the manufacturing process into account.
2. Define a methodology for the simulation of a random sampling technique. In this case it is necessary to simulate a sampling process that is not related to the manufacturing process or the characteristics of the asphalt. The literature review showed that Colosimo et al. identified the Hammersley distribution as the most appropriate way to simulate a random sampling process [8]. For this reason the Hammersley methodology will be adapted and used in this research as well.
3. Define the manufacturing properties of the asphalt construction process. In this case the
design phase is more complex because finding a manufacturing signature for this kind of
process appears to be a challenging operation. Two approaches will be applied and the best
one will be selected:
Once the manufacturing signature is identified it will be possible to implement the U function and apply genetic algorithms to minimise the number of samples as much as possible. The algorithm has not been defined yet and will be part of the analysis process.
Simulation process
Knowing the characteristics of the pavement and having developed the different sampling techniques, the simulation process can take place. This process will be executed with computer simulations set up and programmed in Python; the code will be developed from scratch and tailored to the data and findings of the previous steps. The program will take the samples obtained in the field and extract their statistical properties, such as the mean, variance and covariance. From this information it will be possible to simulate entire runway surfaces that contain those properties and are representative of the real ones. The algorithms will simulate them hundreds or thousands of times, testing each sampling technique on every surface and recording the MTD produced by each technique. Each time a surface is simulated its real MTD is known, which makes it possible to calculate the relative error with respect to the MTD of the samples taken. Each simulation cycle will present a comparison of the relative error of each sampling methodology.
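As an illustration, one such simulation cycle can be sketched in Python with the standard library only. The surface dimensions, the distribution parameters and the two example strategies below are assumptions of this sketch, not the implementation developed in the thesis.

```python
import random
from statistics import NormalDist, mean

def simulate_surface(rows, cols, mu=1.68, sigma=0.28, seed=None):
    """Fill a rows x cols matrix with TD values drawn from a fitted normal
    distribution via inverse-CDF sampling."""
    rng = random.Random(seed)
    nd = NormalDist(mu, sigma)
    return [[nd.inv_cdf(rng.uniform(1e-9, 1 - 1e-9)) for _ in range(cols)]
            for _ in range(rows)]

def relative_error(surface, locations):
    """Relative error between the true MTD and the MTD of the sampled cells."""
    flat = [v for row in surface for v in row]
    mtd_real = mean(flat)
    mtd_sample = mean(surface[r][c] for r, c in locations)
    return abs(mtd_real - mtd_sample) / mtd_real

# One simulation cycle comparing a uniform grid with purely random locations.
surface = simulate_surface(40, 12, seed=1)
uniform = [(r, c) for r in range(5, 40, 10) for c in range(2, 12, 4)]
rng = random.Random(2)
rand_pts = [(rng.randrange(40), rng.randrange(12)) for _ in range(len(uniform))]
errors = {"uniform": relative_error(surface, uniform),
          "random": relative_error(surface, rand_pts)}
```

Repeating this cycle many times and averaging the errors per methodology gives the comparison described above.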
Results analysis
A challenge will be to define how to evaluate the results of the simulation. Two main approaches are possible. The first is to choose the sampling methodology that provides the lowest relative error regardless of the number of samples. This would increase the reliability of the measurements, but could also lead to a number of samples too large to be measured in practice.
The second possibility is to direct the analysis towards minimising the number of samples, which would be preferred for field operations. In this case the algorithm looks for the sampling methodology that provides the minimum number of samples while imposing a threshold on the relative error.
In the first case the analysis will be more consistent and rigorous from an academic point of view, presenting a precise and clear relation between the manufacturing process of the surface and the sampling techniques used. It would not, however, answer the research question, which specifies that the minimum number of samples is required. The second approach could provide a series of conclusions and recommendations applicable in the field in the form of regulations. The drawback of this strategy is the definition of the relative-error threshold to impose on the analysis. It should be chosen so as to keep the analysis consistent with the real data and provide the company with an applicable methodology.
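The second, threshold-based selection can be sketched as a small helper; the methodology names and the (sample count, relative error) pairs are hypothetical placeholders for the simulation outputs.

```python
def min_samples_under_threshold(results, threshold):
    """For each methodology, return the smallest sample count whose relative
    error stays at or below the threshold (None if no run qualifies)."""
    best = {}
    for method, runs in results.items():
        qualifying = [n for n, err in runs if err <= threshold]
        best[method] = min(qualifying) if qualifying else None
    return best

# Hypothetical (n_samples, relative_error) pairs per methodology.
runs = {
    "CROW": [(102, 0.02), (60, 0.04), (30, 0.09)],
    "uniform": [(100, 0.03), (50, 0.05), (25, 0.12)],
}
print(min_samples_under_threshold(runs, 0.05))  # {'CROW': 60, 'uniform': 50}
```

The choice of the threshold itself remains the open point discussed above.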
In this chapter the models of the sampling methodologies and the surfaces will be defined. From
this information it will be possible to start the simulations and compare the results in order to
evaluate the best sampling techniques.
This sampling methodology is a deterministic planning of the locations where the texture depth measurements have to be collected. These locations are neither adapted to nor influenced by the manufacturing process. The only relation with the construction process is that the width of the longitudinal strips can be the same as the paver width. The reason for this particularity is that the CROW methodology was mostly intended to verify the quality of the paving and compaction process. In each longitudinal strip a series of three measurements in the transverse direction is defined: two at the extremes of the strip and one in the middle. This distribution is aimed at analysing the homogeneity of the texture depth across the width of the paver. During the paving process the asphalt may not be distributed homogeneously by the paver, so the final texture depth can differ slightly. This sampling process was meant to verify this aspect of the construction process.
It has been proposed to start from the CROW regulation and apply some modifications to adapt it to the runway dimensions. The Polderbaan is used as the main reference for the runway dimensions: this runway has a length of 3800 m and a width of 60 m [14]. The proposal was to divide the width into 3 m wide strips. In each strip three measurements are set with an interval of 150 m in the longitudinal direction; the measurement locations in adjacent strips are staggered by 50 m. A clear representation of the sampling strategy is shown in figure 5.1.
The total number of measurements with this strategy over a 500 m longitudinal length is 102. A 500 m unit length has been chosen because Schiphol and Heijmans have decided to
Figure 5.1: Sampling strategy proposed by Heijmans
renovate a Polderbaan stretch with the same dimension. This work was planned in order to test the performance of FFX on a larger area compared to the previous test stretches. During the simulation, however, this methodology needs more flexibility to allow the number of samples to be increased or decreased for the same runway length. This is done by creating an algorithm that varies the longitudinal distance between the measurements in the same longitudinal column. In this way the main properties of the methodology are maintained with a lower number of samples. The disadvantage of this algorithm is the constraint on the increments of the number of samples collected: it is not possible to add a few independent samples; instead, an entire transverse row is inserted, because the longitudinal distance between the rows changes and over 500 m additional rows of samples fit in.
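A CROW-like location generator along these lines can be sketched as follows. The function name, the edge offset and the staggering rule are assumptions of this sketch; only the general pattern (three measurements per transverse row, staggered strips, a longitudinal step that controls the sample count) follows the description above.

```python
def crow_locations(length_m, width_m, strip_w=3.0, long_step=150.0,
                   stagger=50.0, edge_offset=0.25):
    """Three measurements per transverse row of each strip (two near the
    edges, one in the middle); consecutive strips are staggered
    longitudinally; long_step controls the total number of samples."""
    points = []
    for s in range(int(width_m // strip_w)):
        x_left = s * strip_w + edge_offset
        x_mid = s * strip_w + strip_w / 2
        x_right = (s + 1) * strip_w - edge_offset
        y = (s * stagger) % long_step
        while y <= length_m:
            points += [(x_left, y), (x_mid, y), (x_right, y)]
            y += long_step
    return points

dense = crow_locations(500, 60, long_step=150)
sparse = crow_locations(500, 60, long_step=300)  # larger step, fewer rows
```

Enlarging `long_step` removes whole transverse rows at a time, which reproduces the increment constraint discussed above.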
Figure 5.2: Example of increase of samples with CROW method (own creation). Axis labels
represent the number of surface squares
As explained in section 2.2.2 the CROW procedure has been adapted to this case study. The interval between two subsequent measurements has been reduced from 500 m to 150 m. Moreover, the strip width has been reduced to 3.5 m instead of the 4 m of the paver. This was necessary to increase the number of samples in the test stretch and provide the required information on the pavement quality. With the original method the number of samples would have been limited and the information on the surface would not have been sufficient.
From this sampling dimension it is possible to divide the surface into rows and columns. Once this is done, an algorithm for increasing and decreasing the number of sampling points can be designed. The main constraint was to maintain the proportion between the locations of the sampling points and the number of rows and columns defined before. The complete algorithm can be found in the Appendix.
In figure 5.4 the increment of the number of sampling points is shown as an example. Please
note that this is only an example surface aimed to show how the location of the sampling points
changes when the number of samples is increased.
With this sampling methodology it is possible to have a better overview of the texture depth values and their locations, which enables a clearer understanding of the areas with lower or higher TD. The previous definition of the uniform sampling methodology shows that it has no relation with the manufacturing process, nor is it influenced by it. This methodology is therefore categorised as a blind technique, which has a practical advantage during the measurement phase: when an entire runway has to be measured, it is easier for the operator to define a grid of points to measure. The CROW methodology, in contrast, has specific locations that are not contained in a regular grid. This makes the process longer and increases the risk of wrong measurement locations.
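Such a uniform grid can be generated with a few lines; the cell-centre placement and the particular row/column counts are choices of this sketch.

```python
def uniform_grid(length_m, width_m, n_long, n_trans):
    """Regular grid of sampling locations (cell centres), independent of the
    manufacturing process."""
    dy, dx = length_m / n_long, width_m / n_trans
    return [(dx * (j + 0.5), dy * (i + 0.5))
            for i in range(n_long) for j in range(n_trans)]

grid = uniform_grid(500, 60, 17, 6)  # 102 points, comparable to the CROW total
```

Changing `n_long` and `n_trans` scales the sample count while preserving the regular pattern.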
Figure 5.5: Example of Hammersley location of 10 sampling points [22]. Axis labels represent the
number of surface squares
A random selection of points can, in theory, produce a group of points that is concentrated in one part of the surface, leaving another part completely unmeasured. What is needed is a sequence of locations that can be treated as a random selection while maintaining uniformity over the surface.
\[ Y_i = \frac{iW}{N} \tag{5.1} \]
\[ X_i = \sum_{j=0}^{k-1} b_{ij}\, 2^{-j-1} \tag{5.2} \]
where:
• Xi is the point coordinate in the transverse direction of the surface (of the runway if we
consider the case study)
As an example: if 15 samples need to be located on a surface, the resulting distribution is the first one shown in figure 5.5.
As can be seen in figure 5.5, this sampling strategy works for square sections. To adapt it to this case study it has been decided to create a repetitive series of squares that present the same Hammersley distribution. The number of samples is increased simply by increasing the number N in the previous equations. The complete code for this strategy can be found in the Appendix.
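The two equations above can be sketched directly in Python: equation 5.2 is the base-2 radical inverse of the sample index, equation 5.1 spaces the points evenly in the other direction. The function names and the square side length are assumptions of this sketch.

```python
def radical_inverse_base2(i):
    """phi(i) = sum_j b_ij * 2^(-j-1), with b_ij the binary digits of i
    (equation 5.2)."""
    x, f = 0.0, 0.5
    while i:
        x += f * (i & 1)   # add the current binary digit, weighted by 2^(-j-1)
        i >>= 1
        f *= 0.5
    return x

def hammersley(n, side=1.0):
    """n Hammersley points on a square of the given side length:
    X_i from the radical inverse (eq. 5.2), Y_i = i*side/n (eq. 5.1)."""
    return [(radical_inverse_base2(i) * side, i * side / n) for i in range(n)]

points = hammersley(10, side=25.0)  # e.g. 10 points on a 25 m square section
```

Repeating this square along the runway, as described above, extends the pattern to rectangular surfaces.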
An example of the increment and change of location of the number of sampling points is
presented in figure 5.6
Figure 5.6: Example of increase of samples with Hammersley method (own creation). Axis labels
represent the number of surface squares
The pattern and characteristics of the construction process thus become an important aspect for the determination of the surface properties. It is necessary to identify those characteristics and represent them on the simulated surfaces. This is possible by collecting a large number of samples on a limited surface, which yields a high density of points and an accurate representation of the real surface properties. The surface has to be divided into rows and columns with a width equal to the sample square, and the Texture Depth is then measured in all the cells.
To obtain a clear visualisation of the measurement results, a Python program has been written to export the data from the Elatextur machine and place the measurement values in a matrix. These data are then plotted over the surface, with a different colour representing the value of the Texture Depth. Figure 5.7 clearly shows how the points are plotted and how areas with different values can be distinguished.
Figure 5.7: Example of Surface simulation. Axis labels represent the number of surface squares
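The matrix-building step before plotting can be sketched as follows; the record format (row, column, TD) is an assumption of this sketch, since the actual Elatextur export format is not documented here.

```python
import math

def td_matrix(records, n_rows, n_cols):
    """Place (row, column, TD) records in a matrix; unmeasured cells stay
    NaN so they can be masked or flagged in the colour plot."""
    matrix = [[math.nan] * n_cols for _ in range(n_rows)]
    for r, c, td in records:
        matrix[r][c] = td
    return matrix

records = [(0, 0, 1.71), (0, 1, 1.43), (1, 0, 1.92)]
grid = td_matrix(records, n_rows=2, n_cols=2)
```

The resulting matrix can then be handed to any colour-map plotting routine.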
This example matrix is plotted with random values, but it is already possible to identify areas where values are lower or higher. In the next phase these surfaces are analysed to obtain statistical outputs. The scope of this part is to find the most important properties of the surface in order to generalise them and create entire simulated runways.
The representative surface will be created multiple times and each time all the sampling methodologies will be tested. The real value of the MTD will be known, because the matrix is created explicitly, and it can be compared with the MTD calculated from the collected samples. The technique with the lowest relative error for the minimum number of samples will be the most reliable from a theoretical point of view, although other considerations will also be made.
This method consists in fitting a curve whose parameters are computed from the data. For example, if a normal distribution has to be fitted to the data, the parameters used are the mean and the variance. Once the fitted curve is drawn, the fitting accuracy can be quantified by calculating, for each point of the distribution, its distance from the fitted curve. Figure 5.8 shows the points of the distribution and the fitted curves. Each point has a distance Y_i to the curve; squaring this distance for each point gives a final value that can be compared across all the fitted curves.
\[ S_{\text{normal distribution}} = \sum_{i=1}^{N} Y_i^2 \tag{5.3} \]
with Y_i the distance of point i from the fitted curve and N the number of points.
Each squared distance is summed with those of all the other points; the fitting curve that yields the minimum sum is the best fitting curve.
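The comparison in equation 5.3 can be sketched with the standard library, measuring each candidate curve against the histogram of the data. The synthetic sample, the bin count and the deliberately poor uniform candidate are assumptions of this sketch.

```python
from statistics import NormalDist, mean, stdev

def sum_squared_residuals(data, pdf, bins=10):
    """S = sum over bins of (histogram density - candidate pdf)^2, i.e.
    equation 5.3 with Y_i the vertical distance at each bin centre."""
    lo, hi = min(data), max(data)
    w = (hi - lo) / bins
    counts = [0] * bins
    for v in data:
        counts[min(int((v - lo) / w), bins - 1)] += 1
    residuals = []
    for k, c in enumerate(counts):
        x = lo + (k + 0.5) * w              # bin centre
        density = c / (len(data) * w)       # normalised histogram height
        residuals.append((density - pdf(x)) ** 2)
    return sum(residuals)

# Synthetic TD sample at evenly spaced quantiles of a normal distribution.
source = NormalDist(1.68, 0.28)
data = [source.inv_cdf((k + 0.5) / 400) for k in range(400)]
fit = NormalDist(mean(data), stdev(data))
s_normal = sum_squared_residuals(data, fit.pdf)
s_uniform = sum_squared_residuals(data, lambda x: 1.0 / (max(data) - min(data)))
```

The candidate with the smallest S is selected, which is the criterion described above.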
To find the best fitting curve, the program has been set up to evaluate the fitting process with all the curves available in the mathematical Python library. This produces the outcome shown in figure 5.9a. At first sight the figure appears confusing, but in the background the distribution of TD values can be seen, with all the candidate curves representing the distribution drawn on top of it. The program calculates, with the least-squares method, which curve best represents the distribution of points. The output is a single curve, plotted in the same TD value histogram.
The selected distribution in this case is very specific and not common in basic statistical analyses. This is because the program searches for the best possible curve. If the number of samples is limited and does not represent the majority of the surface, this fitting process can be misleading: a limited number of samples could show particular features that do not recur over the whole surface, and the final fitted curve could be the most representative for those samples but not for the total surface. This imprecise interpretation of the results is known as overfitting [7], i.e. the practice of creating a model based on a limited number of values and applying it to another group of data [7]. Such a model cannot provide reliable results in different applications. With a limited amount of data the solution is to fit more generic curves. The normal distribution fit is, from a mathematical point of view, not the most precise, but it provides reliable results for more general data. The distributions usually used in these cases are the normal or log-normal distribution.
Figure 5.9: (a) Fitting process of all the curves; (b) best curve selected
In this research an initial fitting of all the curves has been used to extrapolate the features of
the asphalt. But due to the limited number of data available it has been decided to fit only the
normal distribution in order to have a more conservative approach for the generalisation of those
properties during the simulation of the entire runway.
Figure 5.10: Example of three different Probability density curves for a normal distribution with
different mean value but the same standard deviation
\[ f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \tag{5.4} \]
With:
• σ = standard deviation
• µ = mean (which, for a normal distribution, coincides with the median)
• σ² = variance
Figure 5.11: CDF and PDF for the same normal distribution
During the fitting process the mean and standard deviation are determined. They will be used as parameters for the creation of the cumulative distribution function (CDF).
The cumulative distribution function will be very useful during the analysis: a uniform distribution of values between 0 and 1 generates a percentile that is inserted into the inverse of the CDF to determine TD values. These values are then used to create simulated surfaces with an MTD probability corresponding to the PDF obtained from the fitting process described above.
Figure 5.12 shows a more clear description of the steps and the corresponding graphs.
Figure 5.12: Summary of fitting process and determination of simulated texture values
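The inverse-CDF step summarised in figure 5.12 can be sketched as follows; the function name, the seed and the clipping of the uniform draws away from 0 and 1 are choices of this sketch.

```python
import random
from statistics import NormalDist, mean

def simulate_td_values(n, mu, sigma, seed=0):
    """Uniform numbers in (0,1) pushed through the inverse CDF of the fitted
    normal distribution yield simulated TD values."""
    rng = random.Random(seed)
    nd = NormalDist(mu, sigma)
    # Keep draws strictly inside (0,1): inv_cdf is undefined at the endpoints.
    return [nd.inv_cdf(rng.uniform(1e-9, 1 - 1e-9)) for _ in range(n)]

values = simulate_td_values(1000, mu=1.68, sigma=0.28)
```

By construction, the simulated values follow the fitted PDF, so their mean approaches the fitted µ as n grows.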
In the previous chapter a description of the sampling techniques, enriched with theoretical notions, was given to ease the understanding of the analysis presented in this chapter. Here the data collection is described, followed by the extrapolation of the manufacturing features. This information is used in the last part of the chapter for the construction of the entire surface.
The first opportunity to take measurements was during the night between 19-02-18 and 20-02-18, thanks to regular maintenance on the Zwanenburg runway. In figure 6.1 the locations of the runways and the specific areas built with Flightflex® are shown with red markings.
The authorisation to take measurements lasted from 22:30 to 4:30. In this time window three groups of measurements were taken.
• In the lower strip in figure 6.1b a square of 25 m2 is used to take 400 measurements.
• In the upper FFX strip in figure 6.1b the whole area is measured with 105 sampling points
using a uniform distribution.
• A short strip with 100 samples was also measured and analysed for the ASK.
The idea behind the first group of measurements was to obtain a realistic mean and variation of a FFX area. Measuring 400 points in a 25 m² area means that the mean and standard deviation are known almost exactly. To avoid overfitting, though, the mean and variance used later are those of a fitted normal distribution.
Figure 5.3 shows that the sampling unit is assumed to be a square with a side length of 250 mm. This is larger than the diameter of the ELATextur machine, but it helps to account for tolerances and imprecision during the measurements. Knowing that, one main assumption is made:
In all the sampling units, represented by a square of 250 mm side, the Texture Depth is assumed to be equal to the value provided by the ELATextur machine.
Figure 6.2: Square area for dense measurements with division in rows and columns
The measurement process took 2 hours and 26 minutes excluding the time needed to make
the grid and remove the markings.
The Texture Depth values were then inserted in the plotting program and the outcome
obtained is the one in figure 6.3. The plotting process really helps to have a quick overview of the
surface properties.
At first sight it is possible to see that the average value is quite high. The value scale starts from 1, which means that there are no lower values. The EASA regulation of a minimum MTD of 1 mm is therefore respected, because the mean of values that are all above 1 mm is itself above 1 mm. Whether the same can be said for the Schiphol requirement is checked next.
Figure 6.3: Square area for dense measurements and plot of the TD values. Axis labels represent
the number of surface squares
The average of these 400 values is 1.67 mm, so the Schiphol requirement is also met.
The surface is characterised by two distinct areas: one with yellow squares and high texture quality, and a second with lower TD values and darker colours. The main characteristic is that the lower quality strip extends over the whole transverse width of the square. Figure 6.3 shows that a darker strip is present on the asphalt; this strip is caused by rubber deposition. The locations of the dark strip and of the low quality area have been compared and it has been confirmed that they do not match. The conclusion is that the lower quality is determined by the construction process (asphalt mixture, paving, compacting or waterjetting). A confirmation comes from the second strip, where a low quality area can be found far from the rubber deposition strips.
Figure 6.4: Highlighted values below and above 1.3 mm. Axis labels represent the number of
surface squares
Figure 6.4 highlights the low quality area of the surface. The TD values of this area have been isolated and its MTD is equal to 1.52 mm, so the Schiphol requirement is also met here.
At this point it is known that:
• The surface on average meets the EASA and Schiphol MTD requirements.
• There can be a good quality surface and a lower quality surface area, but both fulfil the
Schiphol requirements.
First, all 400 values have been fitted with a normal distribution curve, as shown in figure 6.5. The mean µ is 1.68 mm and the standard deviation σ is 0.28 mm. The latter value describes how the values are distributed around the mean; in this case it is quite low, which means that the values are very close to the average.
Figure 6.5: Result of the fitting process; loc= Mean, scale= St.Deviation
A better understanding of these numbers is provided by figure 6.6. Here it becomes clear that only a limited part of the surface has TD values below 1.3 mm; the corresponding percentage is 9.11%. This means that, when these curve parameters are used for the simulation of surfaces, the probability of generating values below 1.3 mm is 9.11%.
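This probability can be checked directly from the fitted normal CDF. With the rounded parameters quoted above (µ = 1.68 mm, σ = 0.28 mm) the result is roughly 9%, close to the 9.11% reported; the small difference presumably comes from rounding of the fitted parameters.

```python
from statistics import NormalDist

fitted = NormalDist(mu=1.68, sigma=0.28)  # rounded fit parameters from above
p_below = fitted.cdf(1.3)                 # P(TD < 1.3 mm)
print(f"{p_below:.2%}")
```

The same one-liner can be reused for any other TD threshold of interest.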
A more accurate evaluation is obtained by not fitting all 400 sampling points at once, but by fitting normal curves to the two areas defined in the previous phase: the good quality area and the lower quality area. Figure 6.4 shows the two areas: the red square marks the low quality area, while the remaining area contains the high quality values.
The low quality area TD values have been fitted with a normal distribution and the parameters
obtained are:
• µ=1.52 mm
• σ=0.25 mm
Figure 6.7: Result of the fitting process; loc= Mean, scale= St.Deviation
The same applies for the high quality area where the values are:
• µ=1.80 mm
• σ=0.24 mm
Figure 6.8: Result of the fitting process; loc= Mean, scale= St.Deviation
The total surface was divided into 5 transverse strips and 11 longitudinal strips, for a total of 105 squares of 1 m². In this case it is assumed that the measurements of the ELATextur machine represent the uniform TD of the entire square. In the first case this assumption was reasonable because the sampling machine and the sampling unit have similar dimensions; here the assumption is weaker because of the large difference between the sampling unit and the actual area measured. The scope in this case, however, is not to detect the behaviour of the surface properties in detail, but to understand whether the properties of the surface are detectable over a large area. This is possible if a lower level of precision is accepted.
Figure 6.10: Uniform distribution of points. Axis labels represent the number of surface squares
Although the preparation and removal of the grid took more time due to the large area of analysis, the measurement session itself was shorter (around 1 hour). Figure 6.10 shows how the measurements were performed and what the outcome was.
Also in this case there is an area with better quality and another with a concentration of low TD values. The low quality area extends over the whole width of the strip, with a behaviour similar to the one observed in the square measured previously. Here too the two areas can be separated, as shown in figure 6.11.
This shows that the manufacturing process affects the surface with similar patterns. The paving of these two strips was executed with one paver, which means that the influence extends over the entire width of the paving machine. The waterjetting process also affects the quality of the surface, but its machine width is only 2 metres and its influence is less homogeneous. This suggests that the different areas are more likely determined by the paver than by the waterjetting process.
Fitting process
Also for this stretch the values are fitted with a normal distribution; the outcome is shown in figure 6.12. In this case the lowest value of the distribution is below 1 mm, but the average is still above the Schiphol requirement: the mean µ is 1.50 mm and the standard deviation σ is 0.28 mm.
Compared to the dense area measured previously, the overall quality is lower but the standard deviation is the same. This means that in general the surface has a lower MTD, but the values are distributed around the mean in a similar way; the distribution of the quality over the surface remains essentially the same.
Figure 6.12: Probability distribution function and cumulative distribution function; loc= Mean,
scale= St.Deviation
Figure 6.13: Fitting process of low (top) and high (bottom) quality pavement; loc= Mean,
scale= St.Deviation
In both cases the standard deviation remains around 0.25 mm, which is in accordance with the high density measurements. This shows how the Flightflex® manufacturing process provides a certain degree of homogeneity to the surface.
It can be observed that the ASK fulfils the requirements imposed by Schiphol and the EASA regulation, although its mean is lower than that of the FFX surface. The most important feature of the ASK layer, however, is its very low standard deviation, which is 50% lower than that of FFX. The homogeneity of this material is very high, and this is due to its manufacturing process: as explained in chapter 2, the production process of the ASK leaves less room for variance and uncertainty. The glue that is laid is a synthetic material with high homogeneity, and the basalt grit mixture used to provide texture and friction is the result of a thorough and strict selection process that guarantees a high homogeneity level of the aggregates.
Figure 6.15: Results of ASK measurements analysis. Axis labels represent the number of surface
squares; loc= Mean, scale= St.Deviation
Compared to the ASK measurements, the higher variance of the FFX measurements and the presence of high and low quality areas demonstrate the need for a tailored sampling methodology. The ASK does not present different quality areas and has a low variance, which suggests that the TD on the surface is uniform. With the FFX there is the risk of encountering a low or a high quality area, and this needs to be taken into account in the sampling process. In particular, measuring only one type of surface quality has to be avoided, because the resulting MTD would not be correct.
In the next chapter this information will be used to simulate the surfaces and make them as similar as possible to the real ones.
From the analysis of the measurements obtained, the following rules have been imposed for a
strip of FFX:
• The lower quality areas cover the entire width of a simulated strip. The width dimension is
the same as the paver width: 4 m.
• The real runway surface will be made by a sequence of strips with the same manufacturing
process.
• The distribution of the dense area (400 samples) needs to be perturbed in order to take into
account a lower quality mixture. It has been seen in the second group of measurements
that the minimum value was lower than 1 mm. The simulated surface can be represented
then with lower values to take into account defects in the production process.
• The variance of the high and low quality surface should be similar.
• The percentage of low quality areas should vary to take into account different manufacturing
process influences.
The measurements presented previously are based on strips with a width of 5 m, but the
surface needed is 60 m wide and 500 m long. To obtain a surface of such dimensions a specific
construction process needs to be created. The main concept is to add a sequence of strips and
create the final surface. This is in accordance with the manufacturing process because the paver
normally works with a width of 4 m and the strips are paved in sequence.
Here a remark is needed: there are two types of maintenance processes:
• Continuous maintenance process: in this case the strips paved are continuous and long,
reaching also a distance of 500 m. The scope is to reduce as much as possible the number
of transverse joints. In this process some areas of different quality are still present due to
stops, pauses, loading of new asphalt, and variation in the asphalt mixture due to waiting
time of the truck or production imperfections.
• Clustered maintenance process: this process is typical for night maintenance operations. It consists in dividing the surface into short transverse sections and paving each one completely before advancing to the next one. In this case the strips are shorter, with a length of 100-120 m, and for this reason more transverse joints are present in the final surface. The number of strips in the transverse direction remains constant, though.
During the construction process the paver lays the asphalt in parallel strips. To ease the
simulation process it has been assumed that the width of a strip is equal to the width of the
paver and that the length of a strip is 60 m. The latter dimension has been assumed taking into
account the uniform measurements. In that case a percentage of the longitudinal dimension of
the strip was characterised by low quality asphalt. The consequence of this assumption is that
to have a total strip with a length of 500 m, 10 strip units have to be added in longitudinal direction.
The procedure for the creation of the surface will be the following:
• Define a percentage of low quality surface for each strip. To increase the variability and
make the surface more realistic a normal distribution of the percentage will be used.
• From the dense matrix measurements the best-fit distributions of the low quality and high quality areas are taken.
• Take the inverse of the cumulative distributions and feed it values from a uniform distribution between 0 and 1. This defines a series of TD values.
• Insert the values in the matrix taking into account the different areas.
• Add the strips in transverse and longitudinal direction to create the final matrix.
The assumptions made in this case are aimed to define a patch unit that is multiplied in
longitudinal and transverse direction to form the total surface. Each strip will follow the rules
described previously. Figure 6.17 describes the assembly process.
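The assembly procedure can be sketched as follows. The fitted low and high quality parameters come from the dense measurements reported above; the spread of the low quality share per strip and the strip unit dimensions are assumptions of this sketch.

```python
import random
from statistics import NormalDist

LOW = NormalDist(1.52, 0.25)    # fitted low quality parameters (chapter 6)
HIGH = NormalDist(1.80, 0.24)   # fitted high quality parameters
FRAC = NormalDist(0.30, 0.10)   # assumed spread of the low quality share

def strip_unit(rows, cols, rng):
    """One paver strip unit: a random share of its rows is low quality over
    the full strip width, the rest high quality (inverse-CDF sampling)."""
    frac = min(max(FRAC.inv_cdf(rng.uniform(1e-9, 1 - 1e-9)), 0.0), 1.0)
    n_low = round(frac * rows)
    unit = []
    for r in range(rows):
        dist = LOW if r < n_low else HIGH
        unit.append([dist.inv_cdf(rng.uniform(1e-9, 1 - 1e-9))
                     for _ in range(cols)])
    return unit

def assemble_surface(n_long, n_trans, rows, cols, rng):
    """Concatenate strip units transversely and longitudinally into the
    final matrix, as in the assembly process of figure 6.17."""
    surface = []
    for _ in range(n_long):
        band = [[] for _ in range(rows)]
        for _ in range(n_trans):
            unit = strip_unit(rows, cols, rng)
            for r in range(rows):
                band[r].extend(unit[r])    # side-by-side (transverse)
        surface.extend(band)               # stacked (longitudinal)
    return surface

surface = assemble_surface(2, 3, rows=10, cols=4, rng=random.Random(42))
```

Because each strip unit draws its own low quality share, every strip follows the imposed distributions while remaining unique, as described above.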
The final matrix is thus influenced by the nature of each strip unit. Note that each strip is generated from the sequence of rules defined in the previous section; being generated through the inverse of the cumulative distribution, each strip follows the imposed distribution while remaining unique. This way of building the matrix is based on the manufacturing process and is aimed at simulating the real surface as closely as possible. Each construction process is different because unforeseen events may occur; this simulation process is meant to reproduce the main features of the construction process.
One of the main features of this simulation model is its flexibility. By changing some
parameters, such as the percentage of low quality area per strip, or the mean and variance of
the distribution, it is possible to obtain slightly different strips. This will be very useful during
the simulation of the sampling methodologies' performance. It will be possible to verify which
sampling methodology behaves better when the surface quality changes to an overall better or
worse quality.
Now that the surface simulation process is complete, it is possible to test the different sampling
methodologies and observe how they behave.
The runway surface is entirely simulated with Python. Inside the simulations all the TD
values are known, and consequently the exact MTD is also known. With this value it is possible
to calculate the relative error between the real MTD and the mean of the TD values measured
with the sampling methodologies:

ε = (MTD_r − MTD_s) / MTD_r

with:
• MTD_r = real Mean Texture Depth of the surface, i.e. the average of the Texture
Depth values of all the sample units composing the surface
• MTD_s = mean of the TD values obtained with the sampling methodology
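In code, this relative error is simply:

```python
def relative_error(mtd_real: float, mtd_sampled: float) -> float:
    """Relative error between the surface's real MTD and the sampled mean."""
    return (mtd_real - mtd_sampled) / mtd_real
```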
1. The first one is based on an iterative process. Having set a maximum threshold for the
relative error as a requirement, the process starts by defining a minimum number of samples
for each technique. This minimum number of samples is applied to the surface, then the
relative error is calculated. If this value is lower than the threshold, the simulation stops
and the minimum number of samples is defined. If the relative error is higher, the
number of samples is increased and the process restarts. The cycle stops when the
requirement is met.
2. The second one is similar but consists of defining in advance the numbers of samples for each
technique and verifying the relative error for each group of samples. Then the lowest number
of samples that guarantees the required maximum relative error is selected. This way of
proceeding imposes verifying different numbers of samples and provides a wider overview
of the behaviour of the surface, which was not possible in the previous simulation strategy.
As an example, if the requirement is met with the minimum number of samples, it is
impossible to evaluate how the sampling methodology behaves with a larger number of
samples.
7.1 First sampling methodology
This simulation strategy is based on an iterative algorithm that is meant to stop when the
minimum relative error is reached. In figure 7.1 the exact algorithm is represented.
Figure 7.1: Flowchart of simulation algorithm for the first methodology proposed
The starting point is the simulation of one entire surface. When the surface is ready it is
possible to calculate its real MTD that will be used for the Relative Error calculation. At this
point the minimum number of samples X is used on the simulated surface and the mean of the
TD values obtained is compared with the real MTD. If the requirement is met the simulation
stops, if not the number of samples is increased and the procedure restarts. The cycle is executed
for each sampling methodology.
When the simulation stops, the number of samples that meets the requirement is registered in
a matrix. At this point another surface matrix is simulated and the process is restarted. When
1000 values are stored in the matrix, the total simulation process stops. This means that the
surface is simulated 1000 times and the procedure above is executed from the beginning every
time. All these tasks are executed for the three sampling methodologies and the data are stored and
compared.
This way of proceeding is an adaptation of a Monte Carlo simulation. The Monte Carlo
method has wide application in different fields. In general it is based on the repetitive simulation
of a process or a series of calculations using as input (controlled) random values for the system
variables. The results of all the processes are then collected and outcomes based on statistical
analysis are provided.
This also applies for this case study because each time a unique surface is simulated and then
a mathematical process is applied. The results of all the simulations are collected in the final
matrix and for each sampling methodology the mean is calculated.
Table 7.1 shows the results of the simulation for each sampling methodology. As already
mentioned, for each simulation the minimum value is stored and after 1000 simulations the mean
is calculated. In this case the relative error threshold was fixed at 1%. This threshold is selected
as the maximum acceptable relative error between the samples' mean and the real MTD. In table 7.1
it can be seen which methodology provided the minimum relative error with the lowest
number of samples.
Table 7.1: Results of the first simulation methodology for each sampling technique
To define and explain this behaviour, more simulations are run. In this case it is decided to
simulate only one surface and regularly increase the number of samples until they reach 8000. For
each number of samples the relative error is calculated, then the number of samples is increased
and the new relative error is calculated, and so on.
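This sweep can be sketched as below; the `pick_samples` interface is again an assumed stand-in for the sampling methodologies:

```python
import statistics

def error_sweep(surface, pick_samples, max_samples=8000, step=100):
    """Sweep the number of samples on a single simulated surface and record
    the relative error at each step, to inspect when the mean stabilises."""
    flat = [td for row in surface for td in row]
    real_mtd = statistics.fmean(flat)
    sweep = []
    for n in range(step, max_samples + 1, step):
        sampled = pick_samples(surface, n)
        sweep.append((n, abs(real_mtd - statistics.fmean(sampled)) / real_mtd))
    return sweep
```

Plotting the recorded pairs reproduces curves such as those in figures 7.2 and 7.3.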
Figure 7.2: Verification of mean behaviour related to the variation of number of samples
Figure 7.2 shows that for fewer than 1000 samples the mean remains unstable with the uniform
sampling methodology. The Hammersley and CROW methodologies take longer to stabilise, but
after 2000 samples the asymptote is reached. For this reason the first simulation methodology
was providing different results: the minimum relative error requirement is met with a very low
number of samples, and the probability of going beyond 1000 samples is extremely low.
From a practical point of view, and to facilitate understanding, the evaluation of the
relative error as a function of the number of samples is also presented. Figure 7.3 shows that in terms of
relative error the curve behaviour is similar to that of the mean TD. In this case it is observed
that the relative error varies strongly until 1000 samples for the Uniform distribution, and
until 2000 for the Hammersley and CROW distributions.
Figure 7.3: Verification of Relative Error behaviour related to the variation of number of samples
This last analysis has highlighted the limits of the first simulation methodology, due to the high
variability of the surface texture. To reach a stable behaviour of the relative error and a high
level of robustness it is necessary to work with more than 1000-2000 samples. In practice such an
amount of samples cannot be measured during a quality control session. As described in chapter
2, the measurements are taken with the ELAtextur machine, which reduces the measurement time.
But the time available for the measurements is still limited, and measuring 1000 locations would
take more than 8 hours, a time that is not compatible with Schiphol's operational schedules.
With the second simulation methodology it will be analysed whether it is possible to achieve
high consistency and robustness with a lower number of samples.
With this methodology the numbers of samples are fixed and they are all used on the same
surface. There is no loss of information, but there is a downside: if none of the defined numbers
of samples meets the error threshold value, then more samples need to be planned and the
simulation runs from the beginning. Moreover, to ensure consistency of this process, all the
sampling methodologies are applied on the same simulated surface. This makes it possible to understand
not only how these methodologies behave on average but also how they relate to the same surface.
Also for this simulation methodology a specific algorithm is defined in Python. The flowchart
in figure 7.4 provides an overview of the steps that constitute the simulation process. The main
tasks are:
1. Define and list the numbers of samples for each sampling methodology. The numbers
of samples selected for this case study are given in table 7.3. The increment
rules for each sampling methodology are based on the rules defined in chapter 5.
3. For each sampling methodology, try all the numbers of samples set at point 1.
4. Calculate the relative error for each number of samples of each methodology.
8. Calculate the average of the relative error for each sampling number for each methodology.
Sampling methodology Numbers of samples
Hammersley 9 17 26 34 43 51 59 67 76 84 93 101 110 118 126 134 143 151 160 168 177 185 193 201
CROW 15 30 45 60 75 90 105 120 135 150 165 180 195
Uniform 10 18 28 40 54 70 88 108 130 154 180
Table 7.3: Numbers of samples tested for each sampling methodology
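The second strategy can be sketched as below. The sample counts are those of table 7.3, while the `pick_samples` interface is an assumed stand-in for the sampling methodologies:

```python
import statistics

# Numbers of samples per methodology, as listed in table 7.3
SAMPLE_COUNTS = {
    "Hammersley": [9, 17, 26, 34, 43, 51, 59, 67, 76, 84, 93, 101, 110, 118,
                   126, 134, 143, 151, 160, 168, 177, 185, 193, 201],
    "CROW": [15, 30, 45, 60, 75, 90, 105, 120, 135, 150, 165, 180, 195],
    "Uniform": [10, 18, 28, 40, 54, 70, 88, 108, 130, 154, 180],
}

def second_strategy(surface, pick_samples, sample_counts, threshold=0.01):
    """Evaluate every predefined number of samples on the same surface and
    return the relative error for each, plus the smallest count meeting the
    threshold (None if none does)."""
    flat = [td for row in surface for td in row]
    real_mtd = statistics.fmean(flat)
    errors = {}
    for n in sample_counts:
        sampled = pick_samples(surface, n)
        errors[n] = abs(real_mtd - statistics.fmean(sampled)) / real_mtd
    passing = [n for n in sorted(sample_counts) if errors[n] < threshold]
    return errors, (passing[0] if passing else None)
```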
The last point may seem confusing, so it has to be explained in detail. As described
in chapter 6, the surface is characterised by two normal distribution functions, one for the high
quality values and one for the low quality values. Figure 7.5 shows the two distribution curves.
Their means can be changed or influenced as shown before, but for this case study the values
obtained from chapter 6 are used. Moreover, the standard deviation was close to 0.25 mm in
almost all configurations and for this reason it has been kept fixed for all the simulations.
Using the input values from the measurements and changing some of the parameters, it is
possible to simulate the surface in different conditions. The main concept of this simulation
process is to be able to have different configurations of the surface and see how the different
sampling methodologies perform.
The goal is not to simulate a surface that meets the Schiphol or EASA requirements, but
to simulate a surface that presents the manufacturing process patterns, and to
evaluate which sampling methodology performs best in the identification of the
real MTD.
Having this concept in mind, it is legitimate to consider the simulation of surfaces that present a
lower quality than the real ones. If the sampling process proves to be effective, it will be able to
detect this lower quality.
It has already been mentioned that the means of both the high and low quality distributions have
been taken from the results of chapter 6 and will be kept fixed during the simulations. This
means that the two variables that are allowed to change are the percentage of low quality
area per strip and the intensity of the noise. These two parameters make it possible to simulate surfaces
with a lower quality than the real ones. The goal is to verify the efficiency of the methodology in
the case of very low quality surfaces.
• The noise factor has been set as a value taken from a uniform distribution of values between
0 and 0.3. This means that for each surface simulation the noise was slightly different. This
simulates a manufacturing process that contains imperfections.
• For the percentage of low quality area per strip it has been decided to create different
categories. Each category is a uniform distribution whose upper bound is one of 0, 0.1, 0.2,
0.3, 0.4 and 0.5. For each category 1000 surfaces have been simulated.
Table 7.4 shows an overview of the results of this simulation. The different surfaces are
represented by the two parameters mentioned above. The two values used to represent the
different surfaces are shown in the first column of each table. More specifically:
• The first is related to the percentage of the low quality area on each strip. The value
represents the upper bound of the uniform distribution (i.e. 0.2 represents a
uniform distribution between 0 and 0.2).
• The second represents the intensity of the noise. This value is also expressed as the upper bound
of a uniform distribution.
As an example, "0.3_0.2" represents a surface with a noise that comes from a uniform distribution
of values between 0 and 0.3 and a percentage of low quality area that can have a value between 0
and 0.2.
All the tables present in the first row a type of surface with 0 percent low quality area and 0
noise (type of surface 0.0_0.0). This means that this surface is considered a uniform surface.
A graphical representation of the results in table 7.4 is shown in the following sections.
Different plots are proposed based on the research scope. The first plot in figure 7.6 presents how
the three sampling methodologies perform with ideal surfaces (0.0_0.0).
Figure 7.6: Results for each sampling methodology on a surface with no low quality areas.
Figure 7.6 presents a more consistent behaviour with respect to the first simulation methodology.
In fact, the sampling methodologies all present a decrease of the relative error when the
number of samples increases. This is coherent with the U function proposed by the ISO regulations,
as can be observed in figure 3.1.
Figure 7.6 also shows that all three sampling methodologies appear to be effective in
reaching a relative error lower than 1% within the numbers of samples proposed in table 7.3.
This means that no other numbers of samples need to be added. It can be observed that with
an ideal surface the Uniform sampling methodology appears to be the most effective in reaching
the 1% relative error with the lowest number of samples: with 70 samples a relative error of
1.04% is reached, while with the Hammersley and CROW methodologies 160 samples are
needed. This is certainly an idealistic situation that will not occur in practice, but from a scientific
perspective it is important to evaluate the general behaviour of these sampling methodologies
with uniform surfaces.
Figure 7.7 shows that all the curves have the same behaviour. The trend is similar and
the distance between the curves is negligible. This means that the Hammersley methodology
is not affected by the manufacturing process. The justification is found in the fact that the
construction process creates areas of low quality texture with specific patterns, whereas the locations
of the Hammersley samples are random and do not follow a fixed structure. The probability
of having several sampling locations in the same low quality area is then very low. No relation
between the sampling technique and the construction process is then found.
It has to be mentioned that even when the noise of the low quality area is increased, the effect on
the sampling methodology is negligible. This is a second piece of evidence of the lack of correlation between
the sampling methodology and the manufacturing process. The downside of this technique is the
fact that 160 samples are required to reach a 0.96% relative error.
In this case the situation is slightly different because the behaviour of the curves does not
completely follow the blue curve that represents the surface with no low quality areas. A trendline
representing the other 5 lines would be very similar to the blue line, because the overall behaviour
of the curves is the same. But it is clear that all the disturbed surfaces present a distinct peak
when 55 samples are imposed.
This is due to the fact that in this particular sampling configuration a higher percentage of
samples are located on the low quality areas of the surface. The simulations create 1000 different
surfaces and each surface is different but maintains the same manufacturing signature. Every
time it is simulated the surface presents a series of strips with different locations of the low quality
area. But the number of strips, the percentage of low quality area and the noise intensity remains
the same. This preserves the manufacturing signature. The fact that this peak is present in
each simulation shows that there is a correlation between the sampling methodology and the
manufacturing process for this specific number of samples. Further proof of this correlation
comes from the fact that the peak is also present when the noise is increased.
This curve behaviour is in accordance with the findings of Colosimo et al., who obtained
similar peaks and a similar configuration of the curves in their research [8].
The results are slightly different with respect to the previous cases. But it is still possible
to say that the general trendline of the curve is maintained. Three main peaks are present, and
the reason is the same as for the Uniform methodology. The surface signature and the
sampling methodology interact, and the samples taken are mainly located on the low
quality areas of the surface. In this case a sequence of three sampling locations is drawn close
together. This sampling distribution was originally meant to check the paving quality at the
extremes of the paver and the compactors. For this reason, in this technical context, if one of the
three points is located on a low quality strip, most likely the two adjacent points will be included
in that area too. The probability of a correlation between the manufacturing pattern
and the sampling methodology is higher, which results in a larger relative error. The three peaks
grow when the noise is increased. This is proof that a higher number of samples is located
in the low quality areas: with higher noise the difference between the samples' mean and the
real mean of the surface is greater.
Please note that the scale of this graph is very fine: the vertical axis has a maximum value
of 4.5% relative error. If the scale had reached 100%, those peaks would not have
been detectable. From a scientific perspective an error of 3-4% is considerable and a subject of
investigation, in particular in the mechanical production industry. In the pavement construction
industry an error of 4% could still be considered acceptable. This is because for a runway strip
More stability and coherence are brought by the second simulation strategy. A series of
numbers of samples is selected and the behaviour of every number of samples on the list is
tested. It is proven that the results are coherent with the ISO regulation introduced in the
literature review. Among the three sampling techniques, Hammersley is the only one that is
not affected by the manufacturing process: due to its nature there is no correlation
between the sampling methodology and the patterns determined by the construction process. The
situation with the Uniform and the CROW methodology is different. They are both influenced
by the manufacturing process. The Uniform methodology is sensitive when 55 samples are used,
which is why a peak is present in the graph. The CROW methodology is more sensitive to the
manufacturing process. The reason is that groups of three samples are taken and there is a higher
probability of a high concentration of samples in the low quality areas. This is also shown
by the fact that the peaks grow when the noise is increased. The peaks present with the
CROW and the Uniform methodology testify to the lower reliability of these methodologies.
The presence of the peaks indicates a deviation from the general behaviour of the curves and
thus represents measurement values that have a higher relative error than expected. These
measurement techniques risk providing an MTD value lower than the real one.
In this chapter the field experience will be described and the results presented. This will lead
to a series of conclusions and comparisons with the simulations proposed previously. The field
experience was aimed at enriching and validating the theoretical notions and the results from the
simulations.
This part was divided into different phases:
• Analysis of the correlation between laser MTD measurement and Sand Patch Method
• Analysis of the effects of waterjetting on MTD
• Testing of sampling methodologies
The drawback of the sand patch method is the fact that it takes 3-4 minutes per measurement,
while an ELAtextur measurement takes 12 seconds. The analysis with the laser methodology is
different: it is executed with the ELAtextur machine and it is considered less precise because
the laser is not able to measure the hidden holes underneath the aggregates. The aim of this
paragraph is to evaluate how much the two methodologies differ and to determine whether the
ELAtextur machine is a reliable tool for quality control.
ρ_X,Y = cov(X, Y) / (σ_X σ_Y)

with:
• cov(X, Y) as the covariance between X and Y
• σ_X as the standard deviation of X
• σ_Y as the standard deviation of Y
In table 8.1 the values used to compute the correlation factor are shown.
Measurement Sand Patch [mm] ELAtextur [mm] Rel. error
1 1.52 1.58 0.04
2 1.55 1.33 0.14
3 1.61 1.24 0.23
4 1.6 1.35 0.16
5 1.76 1.5 0.15
6 1.6 1.28 0.20
7 1.77 1.88 0.06
8 1.55 1.49 0.04
9 1.52 1.33 0.13
10 1.69 1.58 0.07
11 1.66 1.27 0.23
12 1.73 1.5 0.13
13 1.73 1.36 0.21
14 1.73 1.72 0.01
15 1.98 1.88 0.05
16 1.52 1.51 0.01
17 1.69 1.81 0.07
18 1.5 1.34 0.11
19 1.63 1.46 0.10
20 1.57 1.29 0.18
21 1.69 1.71 0.01
22 1.63 1.75 0.07
23 1.44 1.28 0.11
24 1.66 1.42 0.14
25 1.63 1.3 0.20
26 1.66 1.62 0.02
27 1.55 1.29 0.17
28 1.63 1.71 0.05
29 1.6 1.46 0.09
30 1.44 1.38 0.04
31 1.83 1.41 0.23
32 1.94 1.5 0.23
33 1.49 1.52 0.02
Mean 1.64 1.49 0.11
Table 8.1: Results values from Sand Patch and ELAtextur measurements
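A minimal sketch of the correlation factor computation on such paired measurements (using sample covariance and sample standard deviations, which is an assumption about the thesis's exact estimator):

```python
import statistics

def pearson(x, y):
    """Correlation factor rho = cov(X, Y) / (sigma_X * sigma_Y)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (statistics.stdev(x) * statistics.stdev(y))
```

Applied to the 33 Sand Patch / ELAtextur pairs of table 8.1, this yields the correlation factor discussed below.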
To understand the exact distribution of those values, the plot in figure 8.1 shows the
scatter of the measurement values and the trend, which can be seen as a graphical
interpretation of the correlation factor. The correlation factor is 0.48, so the values are
partially correlated. The graph is characterised by the thick 45 degree line that represents perfect
correlation between the two variables. The closer the points are to this line, the higher the
correlation factor. The fact that the points are located above the blue line means, in
general, that the ELAtextur measurements are more conservative for values below 1.5 mm.
Above 1.5 mm the Sand Patch method appears to be more conservative, but the
number of measurements in this region is considerably low.
To validate this observation, the relative error between the measurements for the ELAtextur
data below 1.5 mm and above 1.5 mm is calculated for each pair of data, and finally the average is
provided. Looking at table 8.2 it can be observed that for values below 1.5 mm the conservative
error committed using the ELAtextur machine is around 15%, while for values above 1.5
mm it is 4%.
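The split evaluation can be sketched as follows, using the first rows of table 8.1 as illustration; the helper name is hypothetical:

```python
import statistics

def regional_relative_errors(pairs, threshold=1.5):
    """Mean relative error |sand - laser| / sand, split at a threshold on the
    ELAtextur (laser) reading; assumes both regions contain data."""
    below = [abs(s, ) if False else abs(s - e) / s for s, e in pairs if e < threshold]
    above = [abs(s - e) / s for s, e in pairs if e >= threshold]
    return statistics.fmean(below), statistics.fmean(above)

# pairs of (Sand Patch, ELAtextur) readings from the first rows of table 8.1
pairs = [(1.52, 1.58), (1.55, 1.33), (1.61, 1.24)]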
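The split evaluation can be sketched as follows, using the first rows of table 8.1 as illustration; the helper name is hypothetical:

```python
import statistics

def regional_relative_errors(pairs, threshold=1.5):
    """Mean relative error |sand - laser| / sand, split at a threshold on the
    ELAtextur (laser) reading; assumes both regions contain data."""
    below = [abs(s - e) / s for s, e in pairs if e < threshold]
    above = [abs(s - e) / s for s, e in pairs if e >= threshold]
    return statistics.fmean(below), statistics.fmean(above)

# (Sand Patch, ELAtextur) readings from the first rows of table 8.1
pairs = [(1.52, 1.58), (1.55, 1.33), (1.61, 1.24)]
```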
Due to the large surface available and the need to have a proper evaluation of the MTD, the
Please note that these results and analyses refer to Flightflex® surfaces.
During the construction process it has been possible to measure the asphalt surface in the
different phases of the process: after compaction and after the waterjetting process. The aim
was to analyse the performance of the asphalt from the MTD point of view in the different
phases, to see if and how the manufacturing process influences the surface quality. This helps
to verify whether the properties of the simulated surfaces are reliable. In fact, a full validation process
is only possible if all the locations of the runway are measured, which would mean measuring
480,000 locations. The time available for the measurements was limited to a few hours; to
obtain that amount of measurements, several weeks would be necessary. This is not a possibility. For
this reason, during the maintenance process the goal was to take as many measurements as possible.
Unfortunately, due to weather conditions and a few delays, it was not possible to have a full
time slot to execute the measurements. Most of the measurements were taken with the machinery
still working on the construction site. For this reason the number of measurements is limited and
The samples were obtained before waterjetting and also after the shoulders of the runway
were treated. Figure 8.4 shows that the treated shoulder presents a higher MTD, but this is still
not enough to reach the requested value of 1.3 mm set by Schiphol. The middle part, untreated,
remains the same, below 1 mm on average. To facilitate understanding, the two
edges of the investigated area have been labelled A and B: A for the one closer to
the runway and B for the other.
Some characteristics of the performance of the asphalt and the manufacturing process already
emerge. First, it can be seen that after the compaction process the central part has a lower quality
than the shoulders. The average value is 0.95 mm, lower than the EASA regulation requirement.
The same appears from the uniform sampling, where the mean MTD is 0.94 mm.
At the same time, the waterjetting process proved to increase this characteristic of the asphalt
by 20% from the original value. But compared to the measurements of March, the overall values
are lower. The reason for this is still under investigation and will help to understand how to
improve the performance of the asphalt.
• Mean = 1.19 mm
• St.Deviation = 0.06 mm
• Mean = 1.17 mm
• St.Deviation = 0.06 mm
This methodology took three points with a fixed distance of 1.75 m between two
measurements, which would allow the quality of the compaction process to be evaluated. But as can be
seen from figure 8.8, the waterjetting machine has a width of no more than 1.5 m. The waterjetting
process itself was not uniform, as figure 8.8 shows, and the quality changes from strip to
strip. This is the reason why a location with three points may have three totally different values.
Figure 8.8 shows that the left part has a proper bitumen removal while the other part has a
For an easier comprehension of the MTD distribution, figure 8.9 shows the distribution of
the red, yellow and green points on the surface. Some areas appear to have a lower MTD value,
but the general picture is a homogeneous distribution over the entire surface.
• Mean = 1.45 mm
• St.Deviation = 0.1 mm
• Mean = 1.48 mm
• St.Deviation = 0.01 mm
The focus was more on the analysis of the construction process and its influence on the
surface quality. It has been highlighted that the compaction process itself already influences
the pavement's properties. In some areas the MTD already satisfies the Schiphol requirements,
as seen in figure 8.6. The general mean of an untreated surface remains low and
increases when it is treated with highly pressurised water. If this process is executed correctly,
the proper amount of bitumen is removed from the surface, increasing the MTD approximately
by 25% (figure 8.12). Moreover, this procedure will also increase the skid resistance properties
of the surface, because the removal of the bitumen will uncover the micro-texture of the aggregates.
It has been observed that the waterjetting process needs attention and supervision to obtain
a homogeneous surface. In particular, the machine is characterised by two brushes that are
supposed to help remove the bitumen. The quality of the two parallel brushes is hardly ever the same,
thus the quantity of bitumen removed is not the same. This can create a series of strips with
very different texture depths. The consequence is that during the measurement process it is very
easy to only measure a series of locations in a better or worse strip. The Uniform sampling
methodology could, as an example, have an entire column of measurements on a high quality
or low quality strip. The final quality of the waterjetting process can be improved by controlling
that the speed of the waterjetting machine is kept constant and the brushes are regularly changed.
In case this pattern of high and low quality strips is present a second waterjetting process is
planned. The process becomes in this case however more delicate because an excess of pressure or
In general it can be said that both paving and waterjetting influence the MTD. The latter
process in particular has proven to increase the texture depth appreciably, by 25%. These values are
coherent with the measurements obtained on the taxiways, and an overview of the final results can
be found in figure 8.12.
In this chapter the final conclusions on the research question will be described. Starting from
the research question, the final outcomes of the thesis are presented. This part will lead to some
recommendations for Schiphol and Heijmans. Finally, some recommendations for future research
will be proposed.
9.1 Conclusions
The starting point for this thesis was the following research question:
In the definition of the research boundaries three main sampling techniques have been selected:
Modified CROW, Uniform and Hammersley. To investigate which technique could guarantee the
best MTD evaluation, surfaces have been simulated with Python scripts. This methodology
was based on an adapted Monte Carlo simulation, where the surfaces were repeatedly simulated
and the different techniques applied. A first simulation technique based on an iteration
process proved to be unreliable due to the high variability of the surface texture depth values: it
has in fact been proven that the relative error stabilised to acceptable values only after 1000-2000
samples were used. A better set of results was obtained with the second simulation
methodology, in which each number of samples per sampling methodology is applied to the same
kind of surface, repeated 1000 times.
In this case the results are consistent and coherent with the analysis proposed by Colosimo et
al. [8]. As for their analysis it has been possible to represent the decrease of the relative error by
increasing the number of samples. This is in accordance with the behaviour of the U function
presented in the ISO regulations [13]. Figure 9.1 shows the similarity of the curves proposed by
Colosimo and the ones obtained from the simulations.
The results highlighted the fact that the behaviour is regular for all the three methodologies
in the case of a homogeneous surface without low quality areas. Moreover the best performing
methodology in this ideal situation would be the uniform methodology because it is able to
provide a relative error lower than 1% with only 70 samples.
The situation is slightly different when low quality areas are present. The Hammersley
sampling methodology is not affected at all by the different types of surface, resulting in
a smooth behaviour in all circumstances.
Figure 9.1: (a) U value with different strategies; (b) Results from simulation
The situation is different with the CROW and the Uniform methodologies. The latter is only
slightly disturbed with 55 samples, but the former gives several peaks in the curves. This is due
to the fact that these two methodologies can in some cases interact with the manufacturing
process, and consequently the reliability of the measurements decreases.
The main conclusion that arises from the simulation analysis is that a random selection of
sampling points (the Hammersley methodology) appears to be the most reliable, because no
relation with the manufacturing process can be established for any number of samples.
The downside of this methodology is the fact that a high number of samples is required to reach
a relative error lower than 1%: in the order of 160 samples for a runway strip of 500 m
length and 60 m width. The Uniform methodology from this point of view appears to be the
most accurate with the lowest number of samples, because it can reach a low relative error with
only 70-80 samples, but there is the risk of a synchronisation with the pattern left by the
construction process and of underestimating the real MTD. From this point of view the CROW
methodology is considered the most inefficient, because it requires almost the same number of
samples as the Hammersley methodology to reach a low relative error and it is less reliable than
the Uniform strategy. For these reasons, from a mathematical point of view, the CROW strategy
is not considered suitable for this kind of measurement.
The number of samples required for each methodology refers to the 500 m long test area on the Polderbaan where the FFX has been placed. To determine the number of samples required for a different area, a proportion with the test area must be made. As an example: the entire Polderbaan has a length of 3500 m, i.e. 7 times the test area, so for the entire runway 1260 measurements are required with the Hammersley strategy and 490 with the Uniform one.
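The proportional calculation above can be sketched in a few lines of Python. This is an illustrative helper, not part of the thesis programs; the per-500 m sample counts (180 for Hammersley, 70 for Uniform) are the ones implied by the totals quoted here.

```python
import math

# Samples required per 500 m test strip (values implied by the totals above)
SAMPLES_PER_500M = {"hammersley": 180, "uniform": 70}


def samples_required(strategy: str, runway_length_m: float) -> int:
    """Scale the per-500 m sample count to an arbitrary runway length."""
    per_strip = SAMPLES_PER_500M[strategy]
    return math.ceil(runway_length_m / 500) * per_strip


# Entire Polderbaan: 3500 m = 7 test strips
print(samples_required("hammersley", 3500))  # 1260
print(samples_required("uniform", 3500))     # 490
```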
In future research it would be of interest to run the simulation proposed here using the entire runway, rather than the test stretch, as the reference area. This would give a more accurate result than a simple proportional calculation. In this research this has not been done because of software and hardware limits: simulating the entire runway requires a high computational power to be executed within a limited number of hours.
9.1 Conclusions
The field experience has proven that the combination of paving, compaction and mixture properties on one side, and the waterjetting process on the other, affects the properties of the surface. In particular, the mixture aggregates and the paving process create macro areas with different surface properties. The surface is then further influenced by the waterjetting process, which may leave specific patterns due to the machinery properties and the water pressure. These irregularities are mitigated by a second waterjetting pass, as proven by a reduction of the variance in the sample distribution. The second pass also increases the overall MTD by 25%, which confirms the need for this double treatment and its efficiency.
The field experience has also proven that unexpected events, such as problems with the production or with some machinery, can influence the texture quality of the surface. The main patterns of the construction process have been recorded and translated into parameters that drive the surface simulation, but this process has some limitations. Each construction process has certain recurrent main patterns and a series of variable influences on the surface. What arises from this analysis is that a sampling methodology with a specific pattern of its own may synchronise with the manufacturing process and affect the reliability of the results. This discrepancy is, however, limited to a 3-4% relative error, meaning those methodologies still achieve roughly 96% precision.
The main outcome of what has been described is that an SMA pavement surface presents patterns left by the manufacturing process. The most reliable method to determine the MTD of such surfaces is a random selection of samples, represented in this case by the Hammersley strategy. Other sampling strategies with a fixed, predefined structure are less reliable because they can accidentally synchronise with the patterns left by the manufacturing process.
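To illustrate how such a quasi-random point set is constructed, the sketch below generates a standard two-dimensional Hammersley set: one coordinate comes from the base-2 van der Corput radical inverse of the sample index, the other from the index itself. This is the generic textbook construction, not necessarily the exact implementation used for the runway maps; the function names and the mapping onto a length-by-width strip are illustrative.

```python
def van_der_corput(i: int, base: int = 2) -> float:
    """Radical inverse of i in the given base, a value in [0, 1)."""
    vdc, denom = 0.0, 1
    while i:
        denom *= base
        i, remainder = divmod(i, base)
        vdc += remainder / denom
    return vdc


def hammersley_points(n: int, length: float, width: float):
    """n quasi-random sampling points spread over a length x width strip."""
    return [(van_der_corput(i) * length, (i / n) * width) for i in range(n)]


# 160 sampling locations over a 500 m x 60 m runway strip
pts = hammersley_points(160, 500.0, 60.0)
```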
The answer to the research question recalled at the beginning of the chapter is to use 160 samples with a Hammersley distribution. According to the results this ensures a low relative error (<1%) and thus a high reliability of the MTD. The number of samples is higher than for the Uniform strategy, but so is the reliability: with the Uniform methodology there would be the risk of a relation between the sampling strategy and the manufacturing signature.
Please note that with this thesis Schiphol and Heijmans will be able to use all the programs created in Python. These furnish a tool that automatically plots all the samples both on a simulated surface and on a map with GPS coordinates. This enables them to see on the map where the samples have been taken and to get an overview of the pavement quality. It eases the identification of high- and low-quality areas and helps improve the maintenance process. Observing the distribution and the characteristics of the low-quality areas can help the contractor determine which part of the construction process most affects the quality of the pavement surface in terms of texture depth. This would open the door to mitigation measures and consequently increase the quality of the final product.
Firstly, it has to be said that the three methodologies need different measuring times. The Uniform methodology is the fastest to execute due to the simplicity of its sample grid. The CROW methodology is more complicated but still regular, in particular because three adjacent samples are taken at each location. The Hammersley methodology is the most complex because its algorithm simulates a random distribution, making it harder to define the map and execute the measurements correctly.
With these premises in mind, a company has to evaluate the trade-off between the time dedicated to quality control and the level of reliability requested. The most reliable option is to measure 180 locations (for each 500 m) with the Hammersley methodology. This ensures a relative error lower than 1% (thus a high accuracy), but it also requires more than 2 hours. Moreover, this methodology is more reliable because there is no risk of misleading measurement values.
If little time is available, it is also possible to considerably reduce the number of samples and implement the Uniform strategy. With 70 samples the relative error is already lower than 1%, but it has to be taken into account that the reliability is lower. In any case, the simulations have proven that the relative error does not exceed 5%. In practice this means that the obtained MTD value risks being lower than the real one, but with an error limited to 5%. The company should be aware of this possibility and may accept a lower reliability of the measurements.
To recapitulate: if the company wants the highest reliability of the measurement and, at the same time, a maximum relative error of 1%, it is suggested to adopt 180 measurements for each 500 m with the Hammersley methodology. If a lower reliability of the measurements is accepted, 70 samples for each 500 m of runway can be measured with the Uniform distribution; in this case the obtained MTD value should be considered up to 5% lower than the real value. The CROW methodology is not suggested because it still takes a considerable amount of time and number of samples to reach satisfactory results. More importantly, it is the methodology most sensitive to synchronising with the manufacturing signature.
• Apply the Hammersley strategy right after the completion of the construction process. This ensures the highest reliability in the definition of the surface quality in terms of texture depth.
• For regular quality monitoring, a Uniform distribution can be implemented. In this case the reliability is lower but the measurement is faster, which makes it interesting for evaluating the behaviour of the surface over time. Limited time is needed, and a plot of a regular structure such as the Uniform one facilitates the identification of damaged or lower-quality areas.
It is also suggested to use the plotting tool to visually verify that the sampling procedures have been executed correctly.
With this maintenance strategy, only a limited area is renewed each night because the available time is limited. Quality control in this scenario needs to be fast while still producing a sufficient level of reliability. The most appropriate sampling methodology in this case is the Uniform one: it ensures an acceptable reliability and strongly reduces the number of samples required.
import os
import re

import pandas as pd


def get_num(x):
    """Extract the numeric value contained in a line of text."""
    return float(''.join(ele for ele in x if ele.isdigit() or ele == '.'))


results = []
folder_path = r'C:\Users\wano2\Desktop\20180608'
for file in sorted(os.listdir(folder_path)):
    path = os.path.join(folder_path, file)
    # print(path)
    measure = [file]  # one row per file: name, MTD, ETD, latitude, longitude, time
    with open(path) as f:
        for line in f:
            if "<mpd>" in line:
                mpd = get_num(line) / 1000
                measure.append(mpd)
            if "<etd>" in line:
                etd = get_num(line) / 1000
                measure.append(etd)
            if "<latitude>" in line:
                latitude = str(get_num(line))
                for i in range(len(latitude)):
                    if latitude[i] == '.':
                        splitat_l = i - 2
                # Convert from degrees-minutes to decimal degrees
                risultato_l = float(latitude[:splitat_l]) + float(latitude[splitat_l:]) / 60
                measure.append(risultato_l)
            if "<longitude>" in line:
                longitude = str(get_num(line))
                for i in range(len(longitude)):
                    if longitude[i] == '.':
                        splitat_o = i - 2
                risultato_o = float(longitude[:splitat_o]) + float(longitude[splitat_o:]) / 60
                measure.append(risultato_o)
            if "<time>" in line:
                time = list(map(str, re.findall(r"\d+\:\d+", line)))
                measure.append(time)
    results.append(measure)

df = John_points  # DataFrame of the measurement points, built elsewhere
writer = pd.ExcelWriter('8-6-18.xlsx')
df.to_excel(writer, 'Measurements')
writer.save()
# PLOTTING VALUES ON GOOGLE MAPS IMAGES
import webbrowser

import gmplot

# Place the map, centred on the test area
gmap = gmplot.GoogleMapPlotter(52.33813, 4.71003083, 15)

top_attraction_lats_r, top_attraction_lons_r = [], []
top_attraction_lats_y, top_attraction_lons_y = [], []
top_attraction_lats_g, top_attraction_lons_g = [], []

# Classify each point by its MTD value: red < 1, yellow in [1, 1.3), green >= 1.3
for lat, lon, mtd in zip(df['latitude'], df['longitude'], df['MTD']):
    if mtd < 1:
        top_attraction_lats_r.append(lat)
        top_attraction_lons_r.append(lon)
    elif mtd < 1.3:
        top_attraction_lats_y.append(lat)
        top_attraction_lons_y.append(lon)
    else:
        top_attraction_lats_g.append(lat)
        top_attraction_lons_g.append(lon)

gmap.scatter(top_attraction_lats_r, top_attraction_lons_r, 'red',
             size=0.5, marker=False)
gmap.scatter(top_attraction_lats_y, top_attraction_lons_y, 'yellow',
             size=0.5, marker=False)
gmap.scatter(top_attraction_lats_g, top_attraction_lons_g, 'green',
             size=0.5, marker=False)

# Draw the map and open it in the browser
gmap.draw("provaplot.html")
webbrowser.open("provaplot.html")
# ## All the functions needed for the analysis
# ## Matrix generators
import os
import re
import warnings

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random as rd
import scipy.stats as st
import seaborn as sns
import statsmodels as sm
from mpl_toolkits.axes_grid1 import make_axes_locatable

import fuzioni as fz


# ## Normally distributed matrix
def matrice_norm(righe, colonne):
    """righe x colonne matrix of values drawn from N(1.2, 0.8)."""
    T = np.random.normal(1.2, 0.8, [righe, colonne])
    np.set_printoptions(precision=3)
    return T


# ## Matrix with disturbance: element-wise sum of two matrices
# (the listing's signature was incomplete; the body sums two matrices X and Y)
def matrice_dist(X, Y):
    T = []
    for i in range(len(X)):          # iterate through rows
        rig = []
        for j in range(len(X[0])):   # iterate through columns
            rig.append(X[i][j] + Y[i][j])
        T.append(rig)
    return T
# ## Composite matrix
# ## Matrix generator from a fitted distribution
# (the def line is missing from the listing; the name is reconstructed)
def matrice_da_distribuzione(distribution, param, raw, column):
    """raw x column matrix sampled from a fitted scipy distribution."""
    arg = param[:-2]
    loc = param[-2]
    scale = param[-1]
    T, values = [], []
    for i in range(raw):
        riga = []
        for j in range(column):
            # Inverse-transform sampling through the fitted ppf
            x = distribution.ppf(np.random.uniform(), loc=loc, scale=scale, *arg)
            riga.append(x)
        T.append(riga)
    for i in range(raw):
        for j in range(column):
            values.append(T[i][j])
    return T, values
# ## General matrix info
import numpy as np
import scipy as si


def info_gen(matrix):
    """Return the statistics of a matrix, in order: mean, standard deviation and variance."""
    media_g = np.mean(matrix)
    dev_std_g = np.std(matrix)
    varianza_g = np.var(matrix)
    return media_g, dev_std_g, varianza_g
# ## Sampling Techniques
import numpy as np


def crow_seq(matrice, scatter):
    '''lx and ly have to be expressed in m, not in other measurement units'''
    crow = []
    nx = len(matrice)
    ny = len(matrice[0])
    b = [0, 1, 2]
    k = 0
    # x positions of the CROW cross-sections
    # (definition missing from the listing, reconstructed from the commented-out line)
    crowseq = np.arange(0, nx, 1500)
    # Three staggered y sequences, one per cross-section
    fysequence = np.arange(8, ny, 48)
    sysequence = np.arange(8 + 16, ny, 48)
    tysequence = np.arange(8 + 32, ny, 48)
    sequences = [fysequence, sysequence, tysequence]
    a = b * len(crowseq)
    for i in crowseq:
        riga = []
        for j in sequences[a[k]]:
            # Three adjacent samples per location, 4 cells apart
            for dy in (-4, 0, 4):
                riga.append(matrice[i][j + dy])
        k += 1
        crow.append(riga)
    return crow
# ## Draw sampling map
def draw_crow(matrice):
    nx = len(matrice)
    ny = len(matrice[0])
    b = [0, 1, 2]
    k = 0
    dis = np.zeros((nx, ny))
    crowseq = np.arange(0, nx, 500)
    fysequence = np.arange(19, ny, 120)
    sysequence = np.arange(19 + 40, ny, 120)
    tysequence = np.arange(19 + 80, ny, 120)
    sequences = [fysequence, sysequence, tysequence]
    a = b * len(crowseq)
    for i in crowseq:
        # Mark the three adjacent samples, 5 cells apart, on the map
        for j in sequences[a[k]]:
            for dy in (-5, 0, 5):
                dis[i][j + dy] += 1
        k += 1
    return dis
# ## Uniform deterministic method
# (the def line is missing from the listing; the name is reconstructed)
def uniform_seq(matrice, srighe, scolonne):
    """srighe x scolonne samples taken on an evenly spaced grid."""
    un_sempl = []
    x = np.linspace(0, len(matrice) - 1, num=srighe, dtype=int)
    y = np.linspace(0, len(matrice[0]) - 1, num=scolonne, dtype=int)
    for i in range(srighe):
        riga = []
        for j in range(scolonne):
            riga.append(matrice[x[i]][y[j]])
        un_sempl.append(riga)
    return un_sempl
# ## Hammersley Method
def Hammerseley_seq(nsamples_xsquare, matrice):
    width = len(matrice[0])
    length = len(matrice)
    raw_i = np.arange(0, length, width)
    val = []
    for i in range(nsamples_xsquare):
        base = 2
        vdc, denom = 0, 1
        unit = width / nsamples_xsquare
        j = i
        # Base-2 van der Corput radical inverse of i
        while j:
            denom *= base
            j, remainder = divmod(j, base)
            vdc += remainder / denom
        yi = int(np.arange(0, width + 1, unit)[i])
        xi = int(vdc * width)
        # Repeat the pattern along the length of the strip
        for l in raw_i:
            f = xi + l
            if f < length:
                val.append(matrice[f][yi])
    return val
# percentage = 120/400
# ## Surface generator: strips with a randomly placed disturbed band
# (the def line is missing from the listing; the signature is reconstructed.
#  nM0 and nM1 are the fitted distributions for undisturbed and disturbed rows)
def genera_superficie(width, length, strip_w, strip_l, unit, percentage, nM0, nM1, loc):
    raws_strip = strip_l / unit
    raws_strip1 = raws_strip * percentage   # disturbed rows per strip
    raws_strip0 = raws_strip - raws_strip1  # undisturbed rows per strip
    column_strip = strip_w / unit
    number_w = width / strip_w
    number_l = length / strip_l
    number_strips = number_w * number_l
    Big_mama = []
    for a in np.arange(int(number_l)):
        Big_raw = []
        for i in np.arange(int(number_w)):
            Matrice = []
            # Random position of the disturbed band within the strip
            x = rd.randint(0, int(raws_strip0) - 1)
            negstrip = np.arange(x, x + raws_strip1)
            for j in np.arange(int(raws_strip)):
                if j in negstrip:
                    raw = (nM1.ppf(np.random.uniform(size=int(column_strip)))
                           - np.random.uniform(low=0, high=loc, size=int(column_strip)))
                else:
                    raw = nM0.ppf(np.random.uniform(size=int(column_strip)))
                Matrice.append(raw)
            Big_raw = Matrice if i == 0 else np.hstack((Big_raw, Matrice))
        Big_mama = Big_raw if a == 0 else np.vstack((Big_mama, Big_raw))
    return Big_mama
def Hammerseley_draw(nsamples_xsquare, matrice):
    width = len(matrice[0])
    length = len(matrice)
    unit = width / nsamples_xsquare  # (missing from the listing, as in Hammerseley_seq)
    raw_i = np.arange(0, length, width)
    valy, valx = [], []
    for i in range(nsamples_xsquare):
        base = 2
        vdc, denom = 0, 1
        j = i
        while j:
            denom *= base
            j, remainder = divmod(j, base)
            vdc += remainder / denom
        yi = int(np.arange(0, width + 1, unit)[i])
        xi = int(vdc * width)
        valy.append(yi)
        for l in raw_i:
            f = xi + l
            if f < length:
                matrice[f][yi] = 1  # mark the sample on the map
                valx.append(f)
    return matrice, valy, valx
# ## Read the data exported by the ELATextur device
import os
import re

import pandas as pd


def elatextur_data(pathf):
    def get_num(x):
        return float(''.join(ele for ele in x if ele.isdigit() or ele == '.'))

    results = []
    for file in sorted(os.listdir(pathf)):
        # print(path)
        measure = [file]
        with open(os.path.join(pathf, file)) as strada:
            for line in strada:
                if "<mpd>" in line:
                    mpd = get_num(line) / 1000
                    measure.append(mpd)
                if "<etd>" in line:
                    etd = get_num(line) / 1000
                    measure.append(etd)
                if "<latitude>" in line:
                    latitude = get_num(line)
                    measure.append(latitude)
                if "<longitude>" in line:
                    longitude = get_num(line)
                    measure.append(longitude)
                # if "<time>" in line:
                #     time = list(map(str, re.findall(r"\d+\:\d+", line)))
                #     measure.append(time)
        results.append(measure)
    df = pd.DataFrame(data=results,
                      columns=['file', 'MTD', 'ETD', 'latitude', 'longitude'])
    # df = pd.DataFrame(data=results, columns=['file', 'MTD', 'ETD'])
    return df
# ## Real representation of the surface
import matplotlib
import matplotlib.pyplot as plt


# (the def line is missing from the listing; the signature is reconstructed
#  from the call real_representation(20, 20, df['MTD']) in the analysis script)
def real_representation(righe, colonne, matrix_col):
    """Reshape a column of measurements into a righe x colonne grid."""
    counter = 0
    F = []
    for i in range(righe):
        riga = []
        for j in range(colonne):
            riga.append(matrix_col[counter])
            counter += 1
        F.append(riga)
    counter2 = 0
    D = []
    for i in range(righe):
        riga = []
        for j in range(colonne):
            counter2 += 1
            # (loop body truncated in the listing)
        D.append(riga)
    # Colorbar axes for the plot; ax2 is created in the plotting code
    # omitted from this listing (appended to the right, 5% of the axes width):
    # divider2 = make_axes_locatable(ax2)
    # cax2 = divider2.append_axes("right", size="5%", pad=0.25)
    return F, D
    return cdf, n  # tail of make_cdf (body truncated in the listing)


# Distributions to check
DISTRIBUTIONS = [
    st.alpha, st.anglit, st.arcsine, st.beta, st.betaprime, st.bradford,
    st.cauchy, st.chi, st.chi2, st.cosine, st.erlang, st.expon, st.exponnorm,
    st.exponweib, st.exponpow, st.f, st.fatiguelife, st.fisk, st.foldcauchy,
    st.frechet_r, st.frechet_l, st.genlogistic, st.genpareto, st.gennorm,
    st.genexpon, st.genextreme, st.gausshyper, st.gamma, st.gengamma,
    st.genhalflogistic, st.gilbrat, st.gompertz, st.gumbel_r, st.gumbel_l,
    st.halfcauchy, st.halflogistic, st.halfnorm, st.halfgennorm, st.hypsecant,
    st.invgamma, st.invgauss, st.invweibull, st.johnsonsb, st.johnsonsu,
    st.ksone, st.kstwobign, st.laplace, st.levy, st.levy_l, st.levy_stable,
    st.logistic, st.loggamma, st.loglaplace, st.lognorm, st.lomax, st.maxwell,
    st.mielke, st.nakagami, st.ncx2, st.ncf, st.nct, st.norm, st.pareto,
    st.pearson3, st.powerlaw, st.powerlognorm, st.powernorm, st.rdist,
    st.reciprocal, st.rayleigh, st.rice, st.recipinvgauss, st.semicircular,
    st.t, st.truncexpon, st.truncnorm, st.tukeylambda, st.uniform,
    st.vonmises, st.vonmises_line, st.wald, st.weibull_min, st.weibull_max,
    st.wrapcauchy
]
# Excluded: st.dweibull, st.foldnorm, st.burr, st.dgamma, st.triang


# (function header and SSE bookkeeping reconstructed; the listing omits several lines)
def best_fit_distribution(data, bins, ax=None):
    """Fit every candidate distribution and keep the one with the lowest SSE."""
    y, x = np.histogram(data, bins=bins, density=True)
    x = (x + np.roll(x, -1))[:-1] / 2.0
    # Best holders
    best_distribution = st.norm
    best_params = (0.0, 1.0)
    best_sse = np.inf
    for distribution in DISTRIBUTIONS:
        # Try to fit the distribution
        try:
            # Ignore warnings from data that can't be fit
            with warnings.catch_warnings():
                warnings.filterwarnings('ignore')
                # Fit dist to data
                params = distribution.fit(data)
                arg, loc, scale = params[:-2], params[-2], params[-1]
                pdf2 = distribution.pdf(x, loc=loc, scale=scale, *arg)
                sse = np.sum((y - pdf2) ** 2.0)
                # Kolmogorov-Smirnov Test (alternative criterion, kept disabled)
                # s, p = kstest(df['MTD'], n.cdf)
                # If an axis was passed in, add the fit to the plot
                try:
                    if ax:
                        pd.Series(pdf2, x).plot(ax=ax)
                except Exception:
                    pass
                if sse < best_sse:
                    best_distribution = distribution
                    best_params = params
                    best_sse = sse
        except Exception:
            pass
    # finale = KSframe.sort_values(by=['pvalue'])
    return best_distribution.name, best_params
    # Tail of make_pdf (body truncated in the listing): sensible start and
    # end points of the pdf over the fitted distribution's support
    start = dist.ppf(0.0001, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.01, loc=loc, scale=scale)
    end = dist.ppf(0.999, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.99, loc=loc, scale=scale)
    return pdf
def best_curv_fit(data, binss):
    matplotlib.rcParams['figure.figsize'] = (8, 6)
    matplotlib.style.use('ggplot')
    # Find and plot the best fit
    # data = df["MTD"]; binss = 25
    # Plot the histogram for comparison
    fig, ax = plt.subplots()
    ax = data.plot(kind='hist', bins=binss, normed=True, alpha=0.5,
                   color=plt.rcParams['axes.color_cycle'][1])
    # Save plot limits
    # dataYLim = ax.get_ylim()
    # Find the best fit distribution
    best_fit_name, best_fir_paramms = best_fit_distribution(data, binss, ax)
    best_dist = getattr(st, best_fit_name)
    # Update plots
    ax.set_ylim((0, 1.8))
    ax.set_xlim((0.7, 3))
    # ax.set_ylim(dataYLim)
    ax.set_title(u'MTD values\nAll Fitted Distributions')
    ax.set_xlabel(u'MTD')
    ax.set_ylabel('Frequency')
    # Make PDF and CDF of the best fit
    pdf = make_pdf(best_dist, best_fir_paramms)
    cdf, n = make_cdf(best_dist, best_fir_paramms)
    # Display the fitted parameters
    param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
    param_str = ', '.join(['{}={:0.2f}'.format(k, v) for k, v in zip(param_names, best_fir_paramms)])
    dist_str = '{}({})'.format(best_fit_name, param_str)
    ax2 = cdf.plot(lw=2, label='CDF', legend=True)
    stat, pvalue = st.kstest(data, n.cdf)
    print('The Kolmogorov-Smirnov Test produced a P-Value of', pvalue)
    return best_fir_paramms, best_dist, n
def best_curv_fit_norm(data, binss):
    matplotlib.rcParams['figure.figsize'] = (8, 6)
    matplotlib.style.use('ggplot')
    # Same as best_curv_fit, but the distribution is fixed to the normal
    # (plotting of the histogram is disabled)
    best_dist = st.norm
    best_fir_paramms = best_dist.fit(data)
    best_dist = getattr(st, best_dist.name)
    # Make PDF and CDF
    pdf = make_pdf(best_dist, best_fir_paramms)
    cdf, n = make_cdf(best_dist, best_fir_paramms)
    # Display the fitted parameters
    param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
    param_str = ', '.join(['{}={:0.2f}'.format(k, v) for k, v in zip(param_names, best_fir_paramms)])
    dist_str = '{}({})'.format(best_dist.name, param_str)
    ax2 = cdf.plot(lw=2, label='CDF', legend=True)
    stat, pvalue = st.kstest(data, n.cdf)
    print('The Kolmogorov-Smirnov Test produced a P-Value of', pvalue)
    return best_fir_paramms, best_dist, n
# Probability of failure with graphs
def prob_fail(best_dist, best_fir_paramms, target):
    z = np.linspace(0, 4, 100)
    arg = best_fir_paramms[:-2]
    loc = best_fir_paramms[-2]
    scale = best_fir_paramms[-1]
    # (cdf/pdf evaluation, figure creation and fill colour are reconstructed;
    #  the corresponding lines are missing from the listing)
    cdf_values = best_dist.cdf(z, loc=loc, scale=scale, *arg)
    pdf_values = best_dist.pdf(z, loc=loc, scale=scale, *arg)
    fig, axes = plt.subplots(1, 2)
    cdf_ax, pdf_ax = axes[:]
    cdf_ax.plot(z, cdf_values)
    pdf_ax.plot(z, pdf_values)
    # Fill the area at and to the left of the target MTD
    pdf_ax.fill_between(z, pdf_values, where=z <= target, color='red')
    pd = best_dist.pdf(target, loc=loc, scale=scale, *arg)  # probability density at this value
    cd = best_dist.cdf(target, loc=loc, scale=scale, *arg)  # probability of being below the target
    return cd * 100
# Distributions to check: restricted set for the final analysis
DISTRIBUTIONS = [st.lognorm, st.norm]
# Excluded: st.dweibull, st.foldnorm, st.burr, st.dgamma, st.triang

# Best holders
best_distribution = st.norm
best_params = (0.0, 1.0)
best_sse = np.inf
# ... (fitting loop as in best_fit_distribution; truncated in the listing)
#         else:
#             pass
#     return best_distribution.name, best_params
def curv_fit_Lognorm(data, binss):
    matplotlib.rcParams['figure.figsize'] = (8, 6)
    matplotlib.style.use('ggplot')
    # Find and plot the best fit
    # data = df["MTD"]
    # binss = 25
    # Plot for comparison:
    # fig, ax = plt.subplots()
    # ax = data.plot(kind='hist', bins=binss, normed=True, alpha=0.5,
    #                color=plt.rcParams['axes.color_cycle'][1])
    # dataYLim = ax.get_ylim()   # save plot limits
    # Find the best-fit distribution
    best_dist = st.lognorm
    best_fir_paramms = best_dist.fit(data)
    # Update plots
    # ax.set_ylim((0, 1.8))
    # ax.set_xlim((0.7, 3))
    # ax.set_ylim(dataYLim)
    # ax.set_title(u'MTD values\nAll Fitted Distributions')
    # ax.set_xlabel(u'MTD')
    # ax.set_ylabel('Frequency')
    # Make PDF
    pdf = make_pdf(best_dist, best_fir_paramms)
    # Make CDF
    cdf, n = make_cdf(best_dist, best_fir_paramms)
    # Display the fitted parameters
    param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
    param_str = ', '.join(['{}={:0.2f}'.format(k, v) for k, v in zip(param_names, best_fir_paramms)])
    dist_str = '{}({})'.format(best_dist.name, param_str)
    return best_fir_paramms, best_dist, n
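`st.lognorm.fit` with the location fixed at zero reduces to estimating the mean and standard deviation of the log-transformed data. The sketch below illustrates that idea with the standard library only; the function name, the synthetic sample, and the seed are illustrative, not from the thesis code.

```python
import math
import random
import statistics

def fit_lognormal(data):
    """Moment estimates of a lognormal with loc fixed at 0.

    In scipy's convention the shape is sigma (std dev of log-data) and
    the scale is exp(mu) (exponential of the mean of log-data).
    """
    logs = [math.log(x) for x in data]
    mu = statistics.mean(logs)        # mean of log-data
    sigma = statistics.pstdev(logs)   # std dev of log-data
    return sigma, math.exp(mu)        # (shape, scale)

# Illustrative check on synthetic lognormal draws:
random.seed(42)
sample = [math.exp(random.gauss(0.5, 0.25)) for _ in range(5000)]
shape, scale = fit_lognormal(sample)  # shape ~ 0.25, scale ~ exp(0.5)
```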
# ANALYSIS OF MEASUREMENTS
# ----------------------------------------------------------------------------
F, D = fz.real_representation(20, 20, df['MTD'])
pfail = fz.prob_fail(best_dist, best_fir_paramms, 1.3)
distribution = st.norm
params = distribution.fit(df['MTD'])
sequence = np.linspace(0, 4, 20)
count = 0
for i in range(len(F)):
    vlag = 0
    for j in range(len(F[0])):
        if F[i][j] < 1.3:
            vlag += 1
            count += 1
    if vlag >= 0.15 * len(F[0]):
        sequence[i] = 1
    else:
        sequence[i] = 0
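The loop above flags a row of the MTD grid as failed when at least 15% of its cells fall below the 1.3 mm target. The same rule can be expressed as a small self-contained helper; this is a minimal sketch (the function name and the toy grid are illustrative), not the thesis implementation.

```python
def flag_failed_rows(grid, target=1.3, fraction=0.15):
    """Return a 0/1 flag per row: 1 if at least `fraction` of the
    row's cells lie below `target`, else 0."""
    flags = []
    for row in grid:
        below = sum(1 for value in row if value < target)
        flags.append(1 if below >= fraction * len(row) else 0)
    return flags

# A 3x4 toy grid: only the middle row has >= 15% of its cells below 1.3.
grid = [
    [1.5, 1.6, 1.4, 1.7],
    [1.1, 1.6, 1.4, 1.7],
    [1.5, 1.6, 1.4, 1.7],
]
print(flag_failed_rows(grid))  # [0, 1, 0]
```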
# Split the grid rows into failed (M1) and passed (M0) groups
M1 = []
M0 = []
indx = 0
for i in sequence:
    riga1 = []
    riga0 = []
    if i == 0:
        for j in range(len(F[0])):
            riga0.append(F[indx][j])
        M0.append(riga0)
    else:
        for j in range(len(F[0])):
            riga1.append(F[indx][j])
        M1.append(riga1)
    indx += 1

# Flatten both groups into plain lists of MTD values
listM1 = []
listM0 = []
for i in range(len(M1)):
    for j in range(len(M1[0])):
        listM1.append(M1[i][j])
for i in range(len(M0)):
    for j in range(len(M0[0])):
        listM0.append(M0[i][j])
# Build a synthetic pavement surface, strip by strip
# percentage = 120/400
raws_strip = strip_l / unit
raws_strip1 = raws_strip * percentage   # rows drawn from the failed population
raws_strip0 = raws_strip - raws_strip1  # rows drawn from the passed population
column_strip = strip_w / unit
number_w = width / strip_w
number_l = lenght / strip_l
number_strips = number_w * number_l
Big_mama = []
for a in np.arange(int(number_l)):
    Big_raw = []
    for i in np.arange(int(number_w)):
        Matrice = []
        x = rd.randint(0, int(raws_strip0) - 1)
        for j in np.arange(int(raws_strip)):
            negstrip = np.arange(x, x + raws_strip1)
            if j in negstrip:
                # Inverse-transform sample from the failed-group distribution,
                # shifted down by a small normal perturbation
                raw = nM1.ppf(np.random.uniform(size=int(column_strip))) \
                      - np.random.normal(loc=loc, scale=0.07, size=int(column_strip))
            if j not in negstrip:
                raw = nM0.ppf(np.random.uniform(size=int(column_strip)))
            Matrice.append(raw)
        if i == 0:
            Big_raw = Matrice
        else:
            Big_raw = np.hstack((Big_raw, Matrice))
    if a == 0:
        Big_mama = Big_raw
    else:
        Big_mama = np.vstack((Big_mama, Big_raw))
return Big_mama
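The expression `nM1.ppf(np.random.uniform(size=...))` above is inverse-transform sampling: uniform draws pushed through a distribution's quantile function (ppf) become draws from that distribution. A minimal standard-library sketch of the same technique for an exponential distribution, whose ppf has a closed form; the function names, the rate `lam`, and the seed are illustrative, not from the thesis code.

```python
import math
import random

def exponential_ppf(u, lam=1.0):
    """Quantile (inverse CDF) of Exp(lam): solves F(x) = u for x."""
    return -math.log(1.0 - u) / lam

def sample_exponential(n, lam=1.0, seed=0):
    """Inverse-transform sampling: push uniform draws through the ppf."""
    rng = random.Random(seed)
    return [exponential_ppf(rng.random(), lam) for _ in range(n)]

draws = sample_exponential(10000, lam=2.0)
mean = sum(draws) / len(draws)  # should be close to 1 / lam = 0.5
```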