
Theoretical development of sampling analysis for the monitoring of pavement quality


by

Walter Junior Noca

to obtain the degree of Master of Science


at the Delft University of Technology,
to be defended publicly on Friday October 19, 2018 at 14:00

Student number: 4516990


Project duration: October 1, 2017 – October 1, 2018
Thesis committee: Prof.dr.ir S. Erkens, TU Delft, chair
Ir. J.M. Houben, TU Delft, supervisor
Dr. J. Verlaan, TU Delft, supervisor
Ir. J.H.M van der Palen, Schiphol Group, supervisor
M. van Santvoort, Heijmans, supervisor
Ir. J. Visée, Heijmans, supervisor

An electronic version of this thesis is available at http://repository.tudelft.nl/.


Contents

1 Introduction 1

2 Case study 4
2.1 AntiSkid Layer (ASK) 4
    2.1.1 Type of pavement surfaces 4
    2.1.2 Flightflex® 5
2.2 Quality control 8
    2.2.1 Type of sampling methodologies 8
    2.2.2 Current sampling strategies 10

3 Literature review 12
3.1 Categories of sampling techniques 12
3.2 Surface properties and manufacturing process signatures 13
3.3 Definition of a sampling strategy 13
3.4 Literature conclusions 15

4 Research question and research methodology 16
4.1 Goal and Research Question 16
4.2 Research approach 17

5 Sampling and simulation modelling 20
5.1 Sampling techniques 20
    5.1.1 CROW Sampling Technique 20
    5.1.2 Uniform Sampling Technique 22
    5.1.3 Hammersley Sampling Technique 23
5.2 Surface simulation 25
    5.2.1 Defining Surface Properties 26
    5.2.2 Normal Probability Distribution 28

6 Real data analysis and surface simulation 30
6.1 Data Collection from the field 30
    6.1.1 Dense measurements 31
    6.1.2 Uniform measurement 35
    6.1.3 ASK Measurements 37
    6.1.4 Conclusion of the measurement process 38
6.2 Surface simulation 39

7 Analysis methodology and results 43
7.1 First sampling methodology 44
    7.1.1 Robustness of the methodology 45
7.2 Second sampling methodology 47
    7.2.1 Results of the simulations 49
7.3 Overview of the results 55

8 Validation and Test 57
8.1 Correlation between ELAtextur and Sand Patch Method 57
    8.1.1 Correlation between the techniques 57
    8.1.2 Conclusion and Recommendation 59
8.2 Field Measurements 60
8.3 Measurements on Taxiway 61
8.4 Measurements on the Runway 63
    8.4.1 Measurements on untreated surface 63
    8.4.2 Measurement on semi treated surface 64
    8.4.3 Homogeneous distribution 66
    8.4.4 Final Measurements 67
8.5 Conclusion of measurements process 68

9 Conclusions and recommendations 70
9.1 Conclusions 70
9.2 Recommendations for the industry 73

Appendix 76
List of Figures

1.1 Runways configuration Schiphol Airport 1

2.1 ASK installation process 4
2.2 Period of the year available for the installation of FFX and ASK 6
2.3 Asphalt texture before and after waterjetting 7
2.4 ASK (on the left) and Flightflex® (on the right) 8
2.5 Sand Patch measurement 9
2.6 Sand Patch measurement holes detection 9
2.7 ELAtextur machine 10
2.8 Laser limitation for detection during texture depth measurement 10
2.9 Sampling strategy proposed by Heijmans 11

3.1 U comparison with other sampling methodologies 14

4.1 Research's process overview 19

5.1 Sampling strategy proposed by Heijmans 21
5.2 Example of increase of samples with CROW method (own creation). Axis labels represent the number of surface squares 21
5.3 ELAtextur machine and consequent sampling dimension 22
5.4 Example of increase of samples with Uniform method (own creation). Axis labels represent the number of surface squares 23
5.5 Example of Hammersley location of 10 sampling points [22]. Axis labels represent the number of surface squares 23
5.6 Example of increase of samples with Hammersley method (own creation). Axis labels represent the number of surface squares 24
5.7 Example of Surface simulation. Axis labels represent the number of surface squares 25
5.8 Example of distance between fitted curve and data points 27
5.9 Fitting process and selection of the best curve 27
5.10 Example of three different Probability density curves for a normal distribution with different mean value but the same standard deviation 28
5.11 CDF and PDF for the same normal distribution 28
5.12 Summary of fitting process and determination of simulated texture values 29

6.1 Location of Flightflex® strips 30
6.2 Square area for dense measurements with division in rows and columns 31
6.3 Square area for dense measurements and plot of the TD values. Axis labels represent the number of surface squares 32
6.4 Highlighted values below and above 1.3 mm. Axis labels represent the number of surface squares 32
6.5 Result of the fitting process; loc = Mean, scale = St. Deviation 33
6.6 Percentage of values below 1.3 mm 33
6.7 Result of the fitting process; loc = Mean, scale = St. Deviation 34
6.8 Result of the fitting process; loc = Mean, scale = St. Deviation 34
6.9 Location of the first strip 35
6.10 Uniform distribution of points. Axis labels represent the number of surface squares 35
6.11 Identification of good and low quality area. Axis labels represent the number of surface squares 36
6.12 Probability distribution function and cumulative distribution function; loc = Mean, scale = St. Deviation 36
6.13 Fitting process of low (top) and high (bottom) quality pavement; loc = Mean, scale = St. Deviation 37
6.14 Location of ASK Measurements 38
6.15 Results of ASK measurements analysis. Axis labels represent the number of surface squares; loc = Mean, scale = St. Deviation 38
6.16 Assembly procedure of simulated surface 41
6.17 Final Surface Simulated. Axis labels represent the number of surface squares 42

7.1 Flowchart of simulation algorithm for the first methodology proposed 44
7.2 Verification of mean behaviour related to the variation of number of samples 45
7.3 Verification of Relative Error behaviour related to the variation of number of samples 46
7.4 Flowchart of second simulation methodology 47
7.5 Representation of distribution curves and perturbing effect 49
7.6 Results for each sampling methodology on a surface with no low quality areas 51
7.7 Hammersley sampling technique for different surface properties 52
7.8 Uniform sampling technique for different surface properties 53
7.9 CROW sampling technique for different surface properties 54

8.1 Measurements distribution and correlation 59
8.2 Location of the maintenance works on the Polderbaan 60
8.3 Uniform distribution in the taxiway 61
8.4 Comparison of surfaces before and after waterjetting 61
8.5 Results of analysis taxiway surface 62
8.6 Measurement on untreated area 63
8.7 Measurement according to the monitoring plan 64
8.8 Waterjetting process and heterogeneity of TD surface 65
8.9 Distribution of MTD values on the surface 65
8.10 Application of uniform strategy 66
8.11 Last group of measurements 67
8.12 Effect of Waterjetting on MTD 68

9.1 Comparison Colosimo Results and Own results 71


List of Tables

2.1 Test of different aggregate mixture [21] 7

6.1 Overview Results of Measurements Phase 38

7.1 Paired t-test of most common TF families for Pearson Correlations 45
7.2 Simulation results for different cycles 45
7.3 Selection of the number of samples for each methodology 48
7.4 Results of sampling simulation 50
7.5 Presentation of number of samples corresponding to 1% or lower relative error 55

8.1 Results values from Sand Patch and ELAtextur measurements 58
8.2 Result of fitting process 59
Preface

This thesis represents the last step of my academic career at TU Delft. I would like to thank
the University for giving me the opportunity to conclude my studies at a top educational level
and in this nice country. The experience collected in my first two years is the basis of this report. I
would also like to thank the members of the committee for guiding me during this research. In
particular I would like to thank Lambert for his assistance with the double degree paperwork. It
was a complicated procedure that would have been even more complex without his help.

I would also like to thank two companies: Heijmans and Schiphol. They gave me the chance
to work every day at Schiphol Airport and to assist in maintenance activities on the runways.
These are unique experiences that I will never forget. In particular, I would like to thank my daily
supervisors Arjan and John for the patience they had in these 12 months and for the considerable
amount of knowledge and experience they passed on to me. Although it was not possible to meet often,
I would also like to thank Maarten for the opportunity he gave me last year and for supporting me
whenever I needed him in the research process.

In these years far from home I have also learned the importance of family and friends. For
this reason there are several people I want to mention.

My mother, for the unconditional support, the patience and the difficult moments we went through
together. I admire her strength and I am blessed to receive so much love from her. My father,
who gave me the opportunity to come here and start this unique experience.

Thea, for always being present, supportive and patient in these months. Without your support,
love and help, this year would have been much harder.

Carla, Francesco and Giorgio. The magnificent trio. It took me longer (as usual), but we have
finally all graduated. I will stop my academic career here, but I know you will always make me feel
like a student (and most likely a stupid one). Thank you for accepting me even when I made mistakes,
for always supporting me and for making me a better person (and probably a better student).

Tommy, almost invisible but always present to help and to have a glass of wine together. You
have always been there in the most difficult moments with your rationality and calm.
Marsa, for always being on my side, even when it was not the case. Your unconditional
kindness is something I admire.
Balda, my oldest friend and probably the only reason to come to visit Brusnengo. Thank you
for tolerating my hypochondriac questions and concerns.

Matteo, for having the patience of being my flatmate for an entire year and for showing me
the real meaning of "Enjoyare".
My Greek friends I met here in Delft. They have helped me with the courses and we also spent
nice moments outside the university. In particular I want to thank Ioannis. Please remember:
"Solaku".
Last but not least, Juan Camillo Laguado Lancheros De Castillia. It took me more time to
learn your entire name than you. Your help and friendship have been really important.

Abstract

Schiphol Airport and Heijmans are working together on the renewal of the Schiphol runway
pavements. Currently the top layer is covered with a synthetic antiskid material called ASK,
but the strict weather limitations on the installation of this material have opened the door to
alternatives. Heijmans has proposed an innovative asphalt mixture that is able to provide similar,
and in some cases better, surface performance compared with the ASK.
This asphalt mixture is called Flightflex® and is a stone matrix asphalt. Consequently it is
affected by the variability of the construction process. This thesis project focuses on the analysis
of the best quality control procedure for this asphalt. The pavement surface needs to meet
specific requirements and it is of interest to define a sampling methodology for the evaluation of
the Texture Depth (TD). In particular, the research aims to define the minimum number of
samples that provides the highest reliability for the definition of the Mean Texture
Depth (MTD) of the surface.

To achieve this goal a theoretical approach is adopted. Starting from the collection of a
substantial number of samples, the properties of the surface are analysed. In this process it is of
interest to define the influence of the construction process on the surface quality. The information
obtained is used to simulate larger surfaces on which different sampling methodologies are tested.
In particular, three different methodologies are analysed: the current methodology called CROW,
a Uniform methodology and a random methodology called the Hammersley methodology. By
testing these sampling methodologies on the simulated surfaces it is possible to evaluate the
relative error between the MTD of the simulated surfaces and the MTD of the samples taken. A
Monte Carlo type of approach helps to define precisely which methodology performs better. The
one with the lowest relative error and minimum required number of samples is considered
the most efficient.

The simulation of the surfaces and the analysis of the sampling process highlight a correlation
between the manufacturing signature and some sampling methodologies. In case of a correlation,
the reliability of the methodology decreases. In particular the CROW and the Uniform
methodology present a form of correlation and thus have a lower reliability. The Hammersley
methodology aims to simulate a random selection of samples and for this reason it does not
correlate with the surface patterns. In the last part of the research the three aforementioned
methodologies are applied on a 500 m long section of the runway Polderbaan at Schiphol Airport.

Although the Uniform methodology is less reliable, it provides a relative error of 1% with only
70 samples. The Hammersley methodology instead needs 180 samples to reach the same relative
error, but with a higher reliability. The CROW methodology is the least performing: it has a lower
reliability than the Uniform strategy and it needs 170 samples to reach a 1% relative error.
The research helps to highlight the correlation between the manufacturing signature left by
the construction process and the sampling strategy adopted. It also highlights the fact that a
random distribution escapes this correlation and provides more reliable results.

To conclude, the companies are advised to use the Uniform methodology in case little time is
available for the quality control measurements. This comes with a lower reliability that
has to be accepted. In case a high reliability is required and sufficient time is available, the
Hammersley strategy is considered more appropriate.

Chapter 1
Introduction

According to the 2017 Airport Traffic Report, Schiphol Airport is one of the busiest airports in
the world. Currently it is ranked 11th for passenger traffic volume, with a growth rate of 7.7%,
the highest in Europe [19]. Freight movements also have an important impact, with 1,960,328
tons transported, the 20th highest value in the world [19].

As can be seen in figure 1.1, Schiphol Airport has a six-runway system, but currently only five
runways are actively used for regular operations [3]. With the exception of runway 36L-18R, which
was built in 2003, all the other runways are quite old and will need maintenance operations in
the following years.
In 2017 Schiphol Airport had a traffic volume of 68,515,425 passengers. Compared to the
78,047,278 passengers of Heathrow this is a lower value, but the airport of London has a growth
factor of only 3.04% [20]. From a prospective point of view the Dutch airport is therefore expected
to increase its passenger volume faster. Keeping this trend, Schiphol could reach in a few years a
passenger volume similar to that of Heathrow. The lower growth rate of the English airport is due
to the fact that it is currently operating at 98% of its capacity [12]. For this reason the only possible
growth is achieved by increasing the share of aircraft with higher passenger capacity.

Figure 1.1: Runways configuration Schiphol Airport

The growth percentage of Schiphol, as said previously, is 7.7%. This means that the capacity
of the runway system is expected to be maximised as much as possible, and for this reason
a night maintenance strategy is a valuable option to be considered. Currently Heijmans is the
contractor in charge of asset maintenance activities and is highly involved in the planning of this
new maintenance strategy.

One of the main concerns for Heijmans is to meet the high quality standard required by
Schiphol. This has been done by developing a new asphalt mixture called Flightflex® (FFX).
Currently the texture properties are ensured by an anti-skid layer (ASK). Although the final
performance is good, its installation requires specific weather conditions, such as a maximum
relative humidity level of 80% and a minimum temperature of 4°C during the construction
process. To overcome these problems the engineers from Heijmans have proposed the Flightflex®
mixture, which would be more suitable for construction under adverse weather conditions and
should guarantee the pavement performance required by the client.

The quality of runway pavements is, in fact, one of the main concerns of the asset management
department at Schiphol Airport. For safety and regulatory reasons the European Aviation
Safety Agency (EASA) has imposed a minimum Mean Texture Depth (MTD) value for runway
pavements. This threshold guarantees proper water storage and reduces the risk of aquaplaning,
which could lead to a huge number of fatalities in case of an accident. For this reason a quality
evaluation procedure for the runway pavement surface is required. This is valid for any
type of surface, and Schiphol has verified both ASK and FFX. Currently the sampling process
is executed with a laser machine that has a sampling dimension of 400 cm². Compared to the
dimension of a runway or a road surface this sampling area is not large enough to enable the
measurement of the total surface. Specific points have to be selected and measured, but their number
and locations need to be defined with a clear methodology.

The regulation imposes a specific minimum MTD value, but it does not specify the location
of texture depth measurements and the required number of samples per area. The regulation
is limited to the following sentence [10]:

“The average surface texture depth of a new surface should be not less than 1.0
mm”

A similar regulatory approach is also present in the road and highway pavement industry.
Schiphol and the contractor Heijmans need to define a precise quality control methodology to
include in the project contract. The design of such a methodology is the result of a scientific analysis
of the correlation between the manufacturing process of the asphalt, its physical properties and
the sampling techniques needed to measure the surface texture.

The goal of this thesis research is to bring an academic, and thus more theoretical, approach
to the definition of such a methodology. This should produce a body of scientific evidence on
the relationships between the asphalt quality and the sampling technique adopted. This will
provide additional information for Schiphol Group and Heijmans to agree on which sampling
methodology to adopt.

The research will go through the definition of the most important sampling techniques and
the simulation of representative surfaces. These two elements will be combined and the simulated
surfaces will be used to try out the different sampling methodologies. An extensive statistical
analysis will then be executed to interpret the results and define how the sampling methodologies
behave in different situations. From this outcome a series of measurements will be executed
in the field in order to define a correspondence between the theoretical analysis and the field results.

In the next chapters the research topic will be described in detail before presenting the
literature review. Information regarding sampling methodologies and pavement surfaces will
be sought in papers obtained from Google Scholar, Scopus and the TU Delft library. Once the
background and current state of the art of these topics are clear, the analysis and simulation of the
surfaces will take place. It will be necessary to simulate in Python a series of asphalt surfaces
containing the surface characteristics left by the construction process and to define the algorithms
representing the sampling methodologies. The third part will be dedicated to the analysis and
interpretation of the results. The fourth will focus on defining which methodology provides, with
a sufficient reliability, the minimum number of samples for the calculation of the mean of the
entire surface. After this theoretical analysis there will be the possibility to try the different
methodologies in the field to ensure consistency and validation of the theoretical and simulation
results. The final part of the research will be dedicated to presenting the outcome of the analysis
and to providing some practical recommendations to both Heijmans and Schiphol Group.

Chapter 2
Case study

As mentioned in the introduction, the EASA regulation does not provide sampling guidance for
the quality control of runway surfaces. This has therefore become a topic of discussion between
contractors and clients regarding the procedure to adopt to verify the pavement's qualities. This
practical issue has called for a more academic study of the effect of sampling methodologies on
runway surfaces.

2.1 AntiSkid Layer (ASK)


In this chapter the surface properties, the sampling materials and the current sampling
methodology adopted will be described. The first part describes the types of pavement surfaces
that are currently present at Schiphol. The second part is dedicated to the methods available
for the Texture Depth measurement. The last section presents the sampling strategy that is
currently used by Schiphol. The proposition of alternative strategies and their effects will be
the core of the research described in the next chapters.

2.1.1 Type of pavement surfaces

There are two types of runway surfaces at Schiphol Airport: the ASK surfaces and the Flightflex®
surfaces. The first type has been in use since 1967 and is currently present on almost the totality
of the runway pavements. The second is an innovation proposed by Heijmans and has currently
been installed in only a few areas of the runways.

Figure 2.1: ASK installation process

Currently all the runways at Schiphol Airport are characterised by the presence of an antiskid
layer on top of their surfaces. It is called
Possehl Antiskid and is produced by a German company, Possehl Spezialbau [11]. This product
is a two-component epoxy resin coated with a basalt grit mixture. This makes the surface of ASK
similar to coarse sandpaper.
This material creates a thin high-friction surface that fulfils the requirements imposed by the
ICAO¹ regulation under the "design objective coefficient for new runway" [11]. The company claims
a series of benefits from the application of this material: reduction of aquaplaning, resistance
against aircraft fuel and pollutants, improved grip and stability of the aircraft in adverse wind
situations and high water storage capacity. The minimum thickness requirement is 4 mm but
Schiphol preferred to adopt a 5 mm layer. This provides an expected durability of 5 years.
As long as the quality of the ASK is high and degradation has not started, it provides high
surface performance and simultaneously protects the underlying asphalt layers. The
company affirms that this last property of the ASK is important in increasing the lifetime of the
asphalt by reducing the need for maintenance.

One of the main disadvantages of this material, besides the high costs, is the strict weather
conditions required for the installation. Under a temperature lower than 10°C and a humidity
higher than 80% the ASK layer is not applicable. Moreover, total absence of rain is needed.
These requirements strictly limit the installation phases: during the winter period and during
night shifts it is not possible to install this material. This forces some limitations on the
maintenance strategy of Schiphol. As an example, an overnight maintenance strategy could not
be planned with this kind of procedure, not even in the summer period.
Another important disadvantage is the degradation phase at the end of the material lifetime.
After 4-5 years the ASK starts degrading and entire areas get consumed. These areas lose
the basalt mixture and uncover the underlying asphalt, reducing the overall performance of the
pavement. To conclude, it is important to highlight that the high costs of this product were one
of the main reasons for evaluating a more economical alternative.

As mentioned, this material is made of a synthetic glue and a homogeneous basalt mixture.
These elements define a surface that is highly homogeneous and requires a limited number of
Texture Depth measurements. A different scenario is present with an asphalt mixture. The variance
of the aggregates, the quantity of bitumen and the construction process influence the uniformity of
the mixture and the final properties of the surface. This requires a specific methodology for quality
control, able to determine the MTD of the surface with a limited number of samples and an acceptable accuracy.

2.1.2 Flightflex®

Schiphol Airport is constantly increasing its capacity and is looking at maintenance strategies
that could help achieve this goal. For this reason the evaluation of an overnight maintenance
strategy is under analysis. Night maintenance operations could increase the capacity of the
airport, but it would be complicated to place the ASK during the night. To face this problem
Schiphol is looking for an alternative. Heijmans has proposed a new asphalt mixture called
Flightflex® that aims to provide all the surface performances required by Schiphol.

Advantages of Flightflex®

This mixture has a higher flexibility for the construction process, allowing construction at
any humidity condition and also with light rain. Moreover, the minimum temperature required is
6°C. These factors would also allow construction during night shifts in the summer period.

¹ ICAO = International Civil Aviation Organisation, an agency of the United Nations that defines codes and
regulations to ensure safety in international air transport.



In general the time window during the year in which maintenance can be executed is
larger for Flightflex® than for ASK, as figure 2.2 shows.

Figure 2.2: Period of the year available for the installation of FFX and ASK

The adoption of Flightflex® during night maintenance is also expected to help reduce the
Total Cost of Ownership (TCO) and optimise the use of the runways. Several advantages are also
present during the construction phase. The installation process is faster and easier, reducing the
risk of failure or delay during construction. The final performance of Flightflex® is expected to
meet the Schiphol and EASA requirements and to be comparable with the ASK performance.

Flightflex® characteristics

Heijmans started the development of Flightflex® at the end of 2013. The development process
lasted from December 2013 to August 2014. There were two leading parameters for the mixture
design, namely the texture depth and the skid resistance. The required values for these
parameters are:

• Minimum texture depth of 1.3 mm (the EASA requirement is 1.0 mm) [3]

• Minimum skid resistance of µ = 0.74, as per the EASA requirement. This value is required to be
measured at 95 km/h [3].

Flightflex® is a stone matrix asphalt, which means the aggregates play an important role in the
performance of the final product. This category of mixtures is characterised by a high bearing
capacity provided by the contact between the aggregates [23]. The downside of this mixture
design is that the final asphalt is more susceptible to ravelling damage. This negative aspect
can be reduced by the use of synthetic additives, which also increase the cost of the final product.
The aggregates also play an important role in the texture properties of the surface. From the
micro-texture point of view they have to be rough enough to provide the skid resistance of the surface.
From the macro-texture point of view their shape and dimension will influence the Texture Depth.
This last value will be the main aspect of analysis during this thesis project.

Due to the importance of the aggregates, Heijmans has tested three different types of aggregates
in order to evaluate the most suitable one for a runway pavement. In table 2.1 the types of tests and
the results are presented.
The laboratory analysis led to three suitable stone types: EO slags, Grauwacke and BeStone.
Samples created from these aggregates have been tested for friction, texture depth, splitting
strength, tear resistance and stone losses.
The aggregate type considered the most appropriate to meet the Schiphol requirements
was the Grauwacke. This provided the best results in terms of texture depth (1.92 mm) and high
values in terms of skid resistance. Its value of the FAP (Friction After Polishing) is also one
of the best, at 75.4 µ. In some categories (such as the splitting strength) the Grauwacke was not
the best performing aggregate type, but these categories were less important to meet the final
requirements of Schiphol. Please note that in general the quality reached is very high, although the
values were lower than for the other aggregates.


Test type EOS BeStone Grauwacke
FAP (Friction After Polishing)
C90 without removing bitumen 0.224 0.310 0.247
C90 removing bitumen by sanding 0.377 0.402 0.299
C90 removing bitumen by water jetting 0.553 0.506 0.583
SRT (skid resistance)[µ]
Before FAP test 65.4 66.6 65.4
after FAP test 75.8 76.0 75.4
Before RSAT test 71.6 72.2 80.1
After RSAT test 57.9 69.7 72.2
Texture depth [mm] 1.78 1.71 1.92
Splitting strength [MPa]
Average 3.55 2.55 2.55
Minimum 3.45 2.40 2.45
Tear resistance [N/mm]
Average 21.9 25.9 27.3
Minimum 21.3 25.0 25.4
RSAT [g]
Before aging 12.27 19.97 16.33
After aging 3.30 12.30 9.43
ITSR [%] 103 87 94
Hollow space [%] 7.5 7 7
Bitumen content [% by mass] 8 8 8

Table 2.1: Test of different aggregate mixture [21]

After these tests the Grauwacke has been selected for the realisation of Flightflex®.

To ensure the proper binding properties of these aggregates, an 8% [m/m] bitumen content
is required. The negative effect of this high quantity of bitumen is a reduction of the texture depth
of the surface. To face this problem Heijmans has decided to implement a water-blasting process
after the compaction phase. A blast of water at high pressure (around 2000 bar) removes part of
the bitumen from the surface and consequently increases the final Texture Depth. A comparison
of the surface before and after the treatment is shown in figure 2.3.

Figure 2.3: Asphalt texture before and after waterjetting

Several tests have led Heijmans to define a water-jetting procedure that reaches proper
results. It has been observed that if the temperature of the asphalt is too high during
this treatment, the bitumen does not detach from the aggregates but simply slides and
redistributes over the surface. The texture depth is then not increased; moreover, the micro-texture
is reduced due to this sliding behaviour of the bitumen. Better results can be achieved when a
minimum waiting period of 48 hours (between the finalisation of compaction and the start of
water-jetting) is adopted. During that waiting period the surface temperature of the asphalt must
be lower than 20°C. In these conditions the bitumen has a more solid consistency and it
detaches from the aggregates.

Figure 2.4: ASK (on the left) and Flightflex® (on the right)

2.2 Quality control


The two types of surfaces presented previously are both used at Schiphol Airport. As mentioned
in the previous section, the quality of the surface of any type of pavement has to be verified.
Currently this process is performed with two different methodologies: the Sand Patch Method
and laser measurements. Both methodologies will be analysed in order to present their
advantages and disadvantages. The section will conclude with a description of the sampling
strategy currently adopted.

2.2.1 Type of sampling methodologies


As mentioned, the methodologies currently used for the Texture Depth measurement are two: the
Sand Patch Method and the laser method.

Sand Patch Method


The first one is the Sand Patch Method. It is the most traditional methodology for texture
depth measurements. This methodology consists of distributing a specific volume (V) of a
standard sand on the surface. The sand is spread into a circle that is enlarged until the sand
is completely distributed into the surface texture. When the level of the sand is the same as that of the
surface, the diameter of the circle obtained is measured. Several measurements of the circle's
diameter are collected and an average is calculated (defining a value D). With D and V it is
possible to calculate the texture depth using the following equation:



T = 4V / (πD²)    (2.1)
with:

• T = texture depth [mm]

• V = volume of the cylinder containing the sand [mm³]

• D = average diameter of the sand patch [mm]
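Equation 2.1 translates directly into a few lines of code. The following is a minimal sketch in Python; the volume and diameters in the example are hypothetical values, not measured data:

    import math

    def sand_patch_texture_depth(volume_mm3, diameters_mm):
        # Texture depth T = 4V / (pi * D^2), with D the average measured diameter
        d_avg = sum(diameters_mm) / len(diameters_mm)
        return 4.0 * volume_mm3 / (math.pi * d_avg ** 2)

    # Hypothetical example: 25 000 mm^3 of sand spread to a patch of roughly 150 mm diameter
    print(round(sand_patch_texture_depth(25_000, [148.0, 152.0, 150.0]), 2))  # about 1.41 mm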

Figure 2.5: Sand Patch measurement

This methodology is proposed in the ASTM standards in the TP763 clause [6]. From an
execution point of view this methodology presents a series of disadvantages: first of all, with high
wind or rain it is not possible to execute the test. Moreover, even when the conditions are suitable,
the test takes 3-5 minutes to execute. The duration also depends on the experience of the
operator. Against these negative factors stands the good reliability of the results. With this
methodology the accuracy of the measurement is high due to the ability of the sand to fill all the
holes of the pavement structure. The sand is pushed into the structure and can also occupy the
voids present below some aggregates, as figure 2.6 shows.

Figure 2.6: Sand Patch measurement holes detection

Laser method
The second methodology available to measure the Texture Depth is the laser technique. The tool
used at Schiphol Airport is the ELAtextur machine, shown in figure 2.7.
The Texture Depth is measured by a rotating laser that scans 2000 points per measurement
and calculates the texture depth of the area under the machine. In this case the sample dimension
remains fixed and the exact area is represented by a circle with a diameter of 400 mm. This
machine measures the Texture Depth in accordance with the EN ISO 13473-1 and ASTM E1845-09
standards.



Figure 2.7: ELAtextur machine

The most important advantage of this methodology is the short duration of execution. It
takes 12 seconds, including the data saving time, to take a measurement. It is faster than the
sand patch method and is therefore preferred when a high number of measurements is needed.
The limitation of this technique arises from the nature of the laser measurement. During the
measurement process the laser line remains straight and cannot bend under the aggregates; some
spaces thus remain undetected and are not taken into account by the machine (see figure 2.8). The
measured texture depth can, for this reason, be lower than the real one. The reliability of the laser
method with the ELAtextur machine needs to be analysed during this thesis to ensure that the
methodology can provide reliable results for the calculation of the Mean Texture Depth (MTD).

Figure 2.8: Laser limitation for detection during texture depth measurement
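For reference, laser devices such as the ELAtextur derive a Mean Profile Depth (MPD) from the scanned profile, which is then converted into an estimated texture depth. The sketch below only illustrates the general idea; it is not the exact algorithm of the device or of EN ISO 13473-1, and the segment length, sampling density and conversion formula are assumptions:

    import numpy as np

    def mean_profile_depth(profile_mm, points_per_segment=200):
        # Simplified MPD: per segment, average the peaks of the two halves
        # and subtract the mean level of the segment; then average over segments.
        profile_mm = np.asarray(profile_mm)
        n_seg = len(profile_mm) // points_per_segment
        depths = []
        for i in range(n_seg):
            seg = profile_mm[i * points_per_segment:(i + 1) * points_per_segment]
            half = points_per_segment // 2
            peak_avg = 0.5 * (seg[:half].max() + seg[half:].max())
            depths.append(peak_avg - seg.mean())
        return float(np.mean(depths))

    # An estimated texture depth is commonly derived as ETD = 0.2 + 0.8 * MPD (values in mm)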

Both methods for measuring the texture depth have a very limited sampling dimension, and
for this reason a substantial number of samples is needed to define the MTD of the surface. In
this case the surface analysed is a runway. In particular, the runway used to test the different
methodologies is the Polderbaan, which is 3500 m long and 60 m wide. The test area, however,
is 500 m long and 60 m wide. This area was renovated in April 2018 with Flightflex®. The
definition of the number of samples and their locations for the Flightflex® surfaces will be based
on the dimensions of this test area.

2.2.2 Current sampling strategies


The two sampling methodologies described previously are used within detailed sampling strategies.
These define the number and location of the samples that have to be taken for a defined area.

In this case study the area used to analyse the different sampling strategies is a rectangle
of 500 m length and 60 m width, equal to the test area described previously. Currently there
is a proposed sampling strategy based on the CROW regulation. CROW is a non-profit
organisation that has been established to enable the Dutch government and private companies to
cooperate in the design, construction and maintenance of roads. This organisation has
proposed a sampling strategy with the Sand Patch Method to apply on roads. This methodology
starts from the assumption that the runway is divided into longitudinal strips with a width
equal to the paver width. In each strip three transverse measurements are planned and repeated
at a specific interval in the longitudinal direction. The measurements are not parallel over the strips
but are staggered by a fixed length. This concept becomes clearer when looking at figure 5.1. This
strategy produced a limited number of samples for the test area and was not able to provide
a representative analysis of the surface. For this reason the methodology has been adapted by
reducing the width of the strips and the longitudinal distance between the measurements.

Figure 2.9: Sampling strategy proposed by Heijmans
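The following is a hedged illustration of how a CROW-like grid of sample locations could be generated for the 500 m by 60 m test area; the strip width, longitudinal interval, number of transverse points per row and stagger length used here are assumptions for illustration, not the values prescribed by CROW or adopted by Heijmans:

    import numpy as np

    def crow_like_grid(length_m=500.0, width_m=60.0, strip_width_m=5.0,
                       interval_m=25.0, points_per_row=3, stagger_m=8.0):
        # Per longitudinal strip: rows of transverse points repeated at a fixed
        # longitudinal interval, with the rows staggered from strip to strip.
        points = []
        n_strips = int(width_m // strip_width_m)
        for s in range(n_strips):
            y_in_strip = np.linspace(s * strip_width_m, (s + 1) * strip_width_m,
                                     points_per_row + 2)[1:-1]
            offset = (s * stagger_m) % interval_m
            for x in np.arange(offset, length_m, interval_m):
                points.extend((float(x), float(y)) for y in y_in_strip)
        return points

    print(len(crow_like_grid()))  # number of sample locations on the test area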

During this research this sampling strategy will be referred to as the CROW Method. This
sampling pattern is a deterministic method that is not influenced by the manufacturing process.
The number of samples and their locations are fixed and decided without taking into consideration
the different phases that characterise the construction process.

At this stage of the research the CROW method has been adapted to the runway dimensions and
modified from the original version. During this research this and the other sampling strategies
will be examined: their characteristics, relations with the manufacturing process and reliability
will be tested and compared in order to choose the most efficient one for this construction process.
Moreover, during the analysis of the sampling strategies and the comparison phase it will be
possible to derive some theoretical and more academic conclusions on this topic.



Chapter 3
Literature review

From the TU Delft library and online databases such as Google Scholar and Scopus, different
articles have been found on sampling theory and surface texture.

3.1 Categories of sampling techniques


The literature is currently poor in research on the sampling techniques to adopt for asphalt
surfaces. Colosimo et al. define three main general sampling approaches [8]: blind, adaptive and
process-based strategies. Their studies are based on the definition of sampling techniques for the
quality control of industrial products.

The first category requires only geometrical information about the surfaces that need to be
analysed and some measurement tolerance, and no information regarding the manufacturing process.
The scarcity of information can lead to choosing this methodology, implementing a standardised
procedure and a fixed number of measurements. Normally a uniform spatial measurement is
considered the most appropriate to provide robustness to this kind of process. The negative
aspect of this methodology is that for large surfaces many samples are required in
order to provide a proper level of analysis reliability [8]. Some, like Lee et al., have defined a
specific segmentation procedure to divide the area that needs to be analysed and provide a series
of regions where the texture properties are homogeneous [15]. This approach is typical of image
analysis and could encounter application difficulties with inhomogeneous surfaces such as pavements.

The situation is different with the adaptive strategies, where the sampling proceeds by adapting
to the data features. Initially a certain number of samples is taken; based on their analysis, new
samples are added in an attempt to meet a pre-defined criterion. Often the analysis proceeds by
searching for those points that present the highest deviation from the mean. Important with this
methodology is the definition of the analysis criterion; this can be a maximum number of samples
analysed or a property of the analysis (e.g. the geometric deviation does not vary substantially
from the mean). The main drawback is the number of samples needed to satisfy the established
criterion: in the case of very inhomogeneous and varying surfaces it could take an excessive number
of samples to obtain a suitable set of results or, in case of a limited number of samples allowed,
result in a non-representative description of the surface [18].
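To make the adaptive idea more concrete, the sketch below shows one possible, heavily simplified implementation: starting from a random set of samples, the point that deviates most from the current sample mean is repeatedly added until the running mean stabilises or the sample budget is exhausted. It illustrates the principle only, not the algorithms of [18] or [2]; the candidate pool size, budget and tolerance are arbitrary:

    import numpy as np

    def adaptive_sampling(surface, n_initial=20, n_max=100, tol=0.01, seed=0):
        # surface: 2D array of texture depth values; returns sampled indices and their mean
        rng = np.random.default_rng(seed)
        flat = surface.ravel()
        idx = list(rng.choice(flat.size, size=n_initial, replace=False))
        prev_mean = flat[idx].mean()
        while len(idx) < n_max:
            candidates = rng.choice(flat.size, size=50, replace=False)   # random candidate pool
            best = candidates[np.argmax(np.abs(flat[candidates] - prev_mean))]
            idx.append(int(best))
            new_mean = flat[idx].mean()
            if abs(new_mean - prev_mean) / prev_mean < tol:              # stopping criterion
                break
            prev_mean = new_mean
        return idx, float(flat[idx].mean())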

Affan Badar et al. have developed a search-based sampling algorithm that enables
the operator to reduce the number of samples needed to obtain an accurate representation of the
surface's properties. From a defined number of starting points it is then possible to move forward
in a precise direction in order to find the most significant points of the surface [2]. Although
this methodology reduces the number of samples needed, if the texture is highly irregular a large
number of measurements will still be needed.

The third and last category is based on particular information provided by the manufacturing
process. Knowing some properties or aspects of the surface and the production process, it is
possible to reduce the number of samples required and focus only on the elements that
still present a high value of variability. This can, for example, be integrated with the blind
strategy by reducing the area that needs to be analysed or by focusing on one aspect of the surface.
The main disadvantage of this technique is that it is based on a specific manufacturing
process and surface type; for this reason it cannot be generalised to other fields or materials [8, 18].

3.2 Surface properties and manufacturing process signatures


In order to understand which sampling techniques perform better it is important to study the
surface properties and their relation with the manufacturing process.

The process of raw data modelling has been extensively described by Colosimo et al. in a
second paper, where they define the Extreme Point Selection method (EPS). They assert that if
there is a signature in the manufacturing process, then with this methodology it will be found
more or less in the same surface location. In contrast with the random or predefined selection of
sample locations, this methodology defines the exact locations based on the manufacturing process;
but if the process changes, the locations change as well, so under process uncertainty this methodology
appears less suitable [16].

Stefano Petrò in his doctoral thesis presented a series of models aimed at describing the
manufacturing signature [18]. The two main categories are the Ordinary Least Squares model and
the Spatial Error model. The first one is basically a linear regression model, the second a model based
on the property per location. More specifically, a method called the Lattice model has been defined
that could suit the asphalt manufacturing process. Each sampled point represents a certain area
that is then modelled through a Spatial Auto-regressive Model. This model is mathematically
complex and long; for this reason it will not be explained in this section but will be used if
necessary in the pavement analysis.

Another alternative is the application of a manufacturing signature model. There is very
limited literature regarding signature sampling modelling; in particular nothing has been found for
asphalt compaction and manufacturing. Some mathematical and statistical models are frequently
used: in particular Corrado et al. defined a signature model for the tolerance analysis of rigid
parts, which proved to be a reliable analysis [9]. The asphalt mixture is, however, influenced by
many variables and it would be difficult to define a mathematical model. Colosimo et al. also
present the option to determine this model experimentally [8]; in cases with high variability
this would be the preferred option, but to define the signature a high number of samples is needed,
so a simulation-based approach can also be used.

3.3 Definition of a sampling strategy


Once the manufacturing signature model is ready, or a sufficient amount of raw data is given, it is
possible to define the sampling strategy. Moroni et al. proposed a procedure called "Minimum U"
that is based on the ISO regulations. They defined an uncertainty variable (U) and elaborated a
procedure in order to minimise and stabilise its value as much as possible [17].

U = k · √(u_cal² + u_p² + u_w²) + |b|    (3.1)

Where:

• k is the expansion factor

• b is a compensation factor

• u_w is the variability of production and is suggested to be set to 0

• u_p is the uncertainty of the measurements

• u_cal is the calibration uncertainty

This formula and all the different factors are calculated according to the ISO/IEC Guide 98-3
[1] and the ISO/IEC Guide 99:2007(E/F) [13].
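For reference, equation 3.1 is straightforward to evaluate once the individual contributions are known. A minimal sketch follows; the numerical values and the expansion factor k = 2 are placeholders, not values taken from [17]:

    import math

    def expanded_uncertainty(u_cal, u_p, u_w, b, k=2.0):
        # U = k * sqrt(u_cal^2 + u_p^2 + u_w^2) + |b|  (equation 3.1)
        return k * math.sqrt(u_cal**2 + u_p**2 + u_w**2) + abs(b)

    # Placeholder values for the calibration, measurement and production uncertainties
    print(expanded_uncertainty(u_cal=0.02, u_p=0.05, u_w=0.0, b=0.01))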

The solution of this problem can be reached by defining a sampling strategy that minimises
the U function as much as possible. If the methodology is effective, the U value will be lower
and the sampling points will represent the areas with the highest deviations from
the mean. Colosimo et al. proposed to turn this strategy selection into an optimisation
problem and to use genetic algorithms. In the same paper a comparison of a uniform strategy, a
random strategy called Hammersley and the minimum U strategy has been presented in terms of
the U function. Figure 3.1 shows that the minimum U strategy has proven to be the best.

(a) U value with different strategies (b) Bias for different strategies

Figure 3.1: U comparison with other sampling methodologies
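Since the Hammersley set is used later in this thesis as the random-like sampling strategy, a minimal generator is sketched below: 2D Hammersley points on the unit square, scaled to a surface of given dimensions. The base-2 van der Corput construction shown here is the common choice, but the exact variant used in [8] is not reproduced here:

    def van_der_corput(i, base=2):
        # Radical inverse of the integer i in the given base, a value in [0, 1)
        value, denom = 0.0, 1.0
        while i > 0:
            i, digit = divmod(i, base)
            denom *= base
            value += digit / denom
        return value

    def hammersley_points(n, length=500.0, width=60.0):
        # n two-dimensional Hammersley points scaled to a length x width surface
        return [(length * i / n, width * van_der_corput(i)) for i in range(n)]

    print(hammersley_points(5))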



3.4 Literature conclusions
The papers of Colosimo and Petrò will be taken into consideration because they provide a wide
overview of the different methodologies. They also focus on the sampling methodologies and their
dependency on the manufacturing process, proposing mathematical tools to analyse it. Meanwhile,
being restricted to the laser machine, the image-based approach proposed by Lee et al. cannot be applied
to this case study and analysis. At the moment the quality of the surface is analysed only with
the two methodologies presented in chapter 2; the evaluation of the quality with pictures has not
been implemented yet. Also the mathematical approach proposed by Affan Badar et al. will be of
great importance due to the necessity to reduce the number of samples needed as much as possible.

Finally, it will be complicated to implement the exact U function as proposed by Moroni et
al., because this function was applied to the manufacturing process of mechanical parts. In that
field the production is automated and the level of precision is very high. It is thus possible to
define the parameters needed to apply the formula presented previously.
In the pavement industry the level of uncertainty is higher and the precision of the manufacturing
process is lower. The strict application of the U function becomes complicated, but the
idea of defining an analytical and graphical way of comparing the different methodologies will be
implemented. The desired final outcome is to have curves similar to those shown in figure
3.1 and to see which one decreases fastest.



Chapter 4
Research question and research methodology

In this chapter the research question that will guide the thesis and the research approach are presented.

4.1 Goal and Research Question


The construction process of asphalt pavements influences the final quality of the surface. A
non-homogeneous mixture, a wrong compaction procedure and other external factors can generate
areas with higher or lower texture depth and skid resistance. Since a perfect construction process
is unrealistic, an asphalt surface has a high chance of being influenced by the manufacturing
process.

As described in the introduction, the industry lacks a scientific basis for the definition of a
proper sampling methodology for runway texture evaluation. This research aims to provide this
support by defining the theoretical relations between the surface properties and the sampling
methodologies. Different intermediate steps, such as the definition of the simulations, the virtual
representation of the surfaces and the influence of the manufacturing process, will be part of the
research and help during the different operational phases.

To achieve the goal the main guiding research question is:

“Given a surface characterised by a predefined manufacturing process, which minimum number of samples and at which locations provide the lowest relative error between the real mean of the surface and the mean texture depth of the samples collected?”

This is the main question of this research, but some specific sub-questions will also guide the
different steps needed. The main sub-questions are:

• How does the manufacturing process affect the properties of the surface?

• Can the surface be consistently represented with a matrix of values?

• How do the different sampling methodologies behave with the simulated surfaces?

These sub-questions are relevant during the different phases of the process because they
provide consistency and preserve the scientific approach of the research. The simulation of
the sampling methodologies cannot take place if it is not clear how the surface is characterised
by the manufacturing process. The same applies if there is no consistency in the representation
of the surface during the simulation. To conclude, it is clear that to evaluate which sampling
methodology is the most efficient, all the proposed ones need to be tried and compared on the
same simulated surfaces.

This subject and research question reflect the intention to provide practical support to the industry by means of a scientific analysis. The academic approach is introduced to provide a conclusive and reliable analysis. Interested companies can then rely on the outcome
of this analysis and take decisions to reduce their risks on specific projects. The feasibility of this
research will strongly depend on the quality of data obtained during the field measurements.

4.2 Research approach


The main characteristic of this research process is the delicate combination of different research approaches. A considerable amount of data and information needs to be meticulously collected and carefully analysed during the research. Field tests are essential to analyse the MTD distribution on the surface; more precisely, they will provide information on the effects of the manufacturing process on the surface.

Data collection from the field


A considerable number of samples is needed to reach the aforementioned goal, and this sampling process requires at least 5 to 6 hours in the field. Although this research is carried out in cooperation with Heijmans and Schiphol Airport, obtaining the authorisation to stay on the runway for the time required is part of a complicated bureaucratic process. The only opportunities will be during regular night maintenance, which is planned once a month, and during the full closure of the runway for 20 days in March and April 2018.

Definition of the sampling techniques


Alongside this data collection, the sampling techniques for the simulation have to be analysed
and precisely defined in order to be correctly applied. This process is based on three main steps:

1. Define in detail the distribution of the samples according to the CROW regulation. This is a blind technique that only partially takes the manufacturing process into account.

2. Define a methodology for the simulation of a random sampling technique. In this case it will be necessary to simulate a sampling process that is not related to the manufacturing process or the characteristics of the asphalt. In the literature review it has been shown that Colosimo et al. identified the Hammersley distribution as the most appropriate to simulate a random sampling process[8]. For this reason the Hammersley methodology will be adapted and used in this research as well.

3. Define the manufacturing properties of the asphalt construction process. In this case the
design phase is more complex because finding a manufacturing signature for this kind of
process appears to be a challenging operation. Two approaches will be applied and the best
one will be selected:

• A stochastic manufacturing signature obtained by sampling a small area (1 m²) with a high number of measurements (more than 1000). In accordance with Colosimo et al. this process would be long and time consuming but should provide a proper representation of the signature model.
• The implementation of the Lattice model proposed by Pertò in his doctorate thesis[16].
This model has never been applied to asphalt surfaces and, although it appears
challenging, it could lead to a suitable representation of the manufacturing signature.

Once the manufacturing signature is identified it will be possible to implement the U function and apply genetic algorithms to minimise the number of samples as much as possible. The algorithm has not been defined yet and will be part of the analysis process.

Simulation process

Knowing the characteristics of the pavement and having developed different sampling techniques, the simulation process can take place. This process will be executed with computer simulations. The simulation will be set up and programmed in Python; the code will be developed from scratch and tailored to the data and findings provided in the previous steps. The program will take the samples obtained in the field and extract their statistical properties such as Mean, Variance and Covariance. From this information it will be possible to simulate entire runway surfaces that contain those properties and are representative of the real ones. The algorithms will simulate them hundreds or thousands of times, testing each sampling technique on every surface and recording the MTD of each technique used. Each time a surface is simulated its real MTD is known, which makes it possible to calculate the relative error with respect to the MTD of the samples taken. Each simulation cycle will produce a comparison of the relative error of each sampling methodology.

Results analysis

A challenge will be to define how to evaluate the results of the simulation. Two main approaches are possible. The first one is to choose the sampling methodology that provides the lowest relative error without considering the number of samples. This could increase the reliability of the measurements, but could also lead to a number of samples too large to be measured in practice.

The second possibility is to direct the analysis towards the minimisation of the number of samples, which would be preferred for field operations. In this case the algorithm will look for the sampling methodology that provides the minimum number of samples while imposing a threshold for the relative error.

In the first case the analysis will be more consistent and rigorous from the academic point of view, presenting a precise and clear relation between the manufacturing process of the surface and the sampling techniques used. However, it will not answer the research question, in which the minimum number of samples is explicitly required. The second approach could provide a series of conclusions and recommendations applicable in the field in the form of regulations. The drawback of this strategy is the definition of the relative error threshold value to impose on the analysis. This should be chosen such that it is consistent with the analysis and the real data, and provides the company with an applicable methodology.



Validation and testing
The final part of the research will be based on a practical sampling process in the field. The
measurement results will also be used to validate the surface simulation process. The three
methodologies defined in the research will be tested in order to evaluate the results and verify if
they correspond to those obtained with the computer simulations.
All these phases will be part of the research and are schematised in figure 4.1.

Figure 4.1: Research’s process overview



Chapter 5
Sampling and simulation modelling

In this chapter the models of the sampling methodologies and the surfaces will be defined. From
this information it will be possible to start the simulations and compare the results in order to
evaluate the best sampling techniques.

5.1 Sampling techniques


Here three sampling techniques will be defined and analysed for application to the simulated and real surfaces: the Uniform sampling technique, the Hammersley sampling technique and the CROW sampling technique.

5.1.1 CROW sampling Technique


The first sampling strategy analysed is the one proposed by the CROW regulation. As briefly described in chapter 2 the CROW is a non-profit organisation that proposes a series of regulations to ease the collaboration between highway agencies and private companies for the design, maintenance and construction of highways.

This sampling methodology is a deterministic planning of the locations where the texture depth measurements have to be collected. These locations are not adapted to nor influenced by the manufacturing process. The only relation with the construction process is that the width of the longitudinal strips can be the same as the paver's width. The reason behind this particularity is that the CROW methodology was mostly intended to verify the quality of the paving and compaction process. In each longitudinal strip a series of three measurements in the transverse direction is defined: two at the extremes of the strip and one in the middle. This distribution is aimed at analysing the homogeneity of the texture depth over the width of the paver. During the paving process it can happen that the asphalt is not homogeneously distributed by the paver, so the final texture depth can differ slightly across the width. This sampling process was intended to verify this aspect of the construction process.

It has been proposed to start from the CROW regulation and apply some modifications to adapt it to the runway dimensions. The Polderbaan is used as the main reference for the runway dimensions. This runway has a length of 3800 m and a width of 60 m [14]. The proposal was to divide the width into 3 m wide strips. In each strip 3 measurements are set with an interval of 150 m in the longitudinal direction. The measurement locations in adjacent strips are staggered by 50 m. A clear representation of the sampling strategy is shown in figure 5.1.

Figure 5.1: Sampling strategy proposed by Heijmans

The total number of measurements with this strategy for a 500 m longitudinal length is 102. It has been decided to take a 500 m length unit because Schiphol and Heijmans have decided to renovate a Polderbaan stretch with the same dimension. This work was planned in order to test the performance of FFX on a bigger area compared to the previous test stretches. During the simulation, however, this methodology needs more flexibility in order to enable an increase or decrease of the number of samples for the same runway length. This will be done by creating an algorithm that varies the longitudinal distance between the measurements in the same longitudinal column. In this way the main properties of the methodology are maintained with a lower number of samples. The disadvantage of this algorithm is the constraint it puts on the increment of the number of samples: it is not possible to add a few independent samples, only an entire transverse row can be inserted, because the longitudinal distance between the rows changes and within 500 m more rows of samples can fit.

Figure 5.2: Example of increase of samples with CROW method (own creation). Axis labels
represent the number of surface squares
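To make this plan concrete, the sketch below generates CROW-style sampling locations for a given runway length and width. It is only an illustrative reconstruction based on the description above: the strip width, longitudinal spacing, stagger between adjacent strips and edge offsets are assumptions and do not reproduce the exact geometry of figure 5.1 or the thesis code.

```python
import numpy as np

def crow_locations(length_m=500.0, width_m=60.0, strip_width=3.5,
                   spacing_m=150.0, stagger_m=50.0, edge_offset=0.25):
    """Illustrative CROW-style plan: three transverse points per paving
    strip (two near the edges, one in the middle), repeated at a fixed
    longitudinal spacing and staggered between adjacent strips."""
    points = []
    n_strips = int(width_m // strip_width)
    for s in range(n_strips):
        x_left = s * strip_width + edge_offset
        x_mid = s * strip_width + strip_width / 2.0
        x_right = (s + 1) * strip_width - edge_offset
        offset = (s * stagger_m) % spacing_m     # stagger between adjacent strips
        for y in np.arange(offset, length_m, spacing_m):
            points.extend([(x_left, y), (x_mid, y), (x_right, y)])
    return points

print(len(crow_locations()))  # number of sampling locations generated
```

Lowering `spacing_m` inserts additional transverse rows, which mirrors the behaviour of the increment algorithm described above.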

As explained in section 2.2.2 the CROW procedure has been adapted to this case study.
The interval between two subsequent measurements has been reduced to 150 m instead of 500
m. Moreover the strip width has been reduced to 3.5 m instead of the 4 m of the paver. This
was necessary to increase the number of samples in the test stretch and provide the necessary
information of the pavement quality. With the original method the number of samples would
have been limited and the information on the surface would not have been sufficient. In figure
5.2 it is possible to see how the increment of points works for the same surface. This increment
procedure will be useful to analyse how the relative error reacts to this variation. It is in fact
expected that the relative error decreases by increasing the number of samples as described by
Colosimo in figure 9.1.

5.1.2 Uniform Sampling technique


As proposed by Colosimo it is worthwhile to include the uniform sampling technique in this study because it is easy to vary the number of samples proportionally and the locations of the sampling points are clearly defined [8]. The uniform sampling distribution is developed by dividing the surface into a proportional number of rows and columns. In this case the width of the rows and columns is chosen equal to the dimension of the sampling area. In this case study the sampling area is determined by the dimensions of the ELATextur machine. The sampling circle has a circumference of 400 mm and a diameter of 127 mm. The machine itself has bigger dimensions and for this reason a diameter of 150 mm has been considered. In order to take into account human errors during the measurement phase a square of 250 mm has been considered, which determines a sampling area of 62,500 mm². Figure 5.3 clearly shows that the sampling circle is contained inside the sampling square and has a tolerance in all directions.

Figure 5.3: ELAtextur machine and consequent sampling dimension

From this sampling dimension it is possible to divide the surface into rows and columns. When this process is concluded, it is possible to design an algorithm for the increase and decrease of the number of sampling points. The main constraint is to maintain a proportion between the locations of the sampling points and the number of rows and columns defined before. The complete algorithm can be found in the Appendix.
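As an illustration of this grid-based approach, the following sketch generates an approximately uniform set of sampling squares for a surface stored as a matrix of 250 mm cells. The function name and the proportion rule are assumptions for illustration; the actual algorithm is the one in the Appendix.

```python
import numpy as np

def uniform_grid(n_rows, n_cols, n_samples):
    """Approximately n_samples grid points, spread proportionally to the
    number of rows and columns of the surface matrix."""
    ratio = n_rows / n_cols
    cols = max(1, int(round(np.sqrt(n_samples / ratio))))
    rows = max(1, int(round(n_samples / cols)))
    # Evenly spaced row and column indices over the whole surface.
    row_idx = np.linspace(0, n_rows - 1, rows).round().astype(int)
    col_idx = np.linspace(0, n_cols - 1, cols).round().astype(int)
    return [(r, c) for r in row_idx for c in col_idx]

# Example: a 500 m x 60 m surface divided in 0.25 m squares.
points = uniform_grid(n_rows=2000, n_cols=240, n_samples=100)
```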
In figure 5.4 the increment of the number of sampling points is shown as an example. Please
note that this is only an example surface aimed to show how the location of the sampling points
changes when the number of samples is increased.

With this sampling methodology it is possible to have a better overview of the texture depth
values and their locations. This will enable a clearer understanding of the locations with lower
or higher TD. The previous definition of the uniform sampling methodology shows that it has
no relation with the manufacturing process nor is influenced by it. This methodology is then
categorised as a blind technique that has a practical advantage during the measurement phase.
When an entire runway has to be measured it is easier for the operator to work with a grid of points, whereas the CROW methodology has specific locations that are not contained in a regular grid. This makes the process longer and increases the possibility of wrong measurement locations.

Figure 5.4: Example of increase of samples with Uniform method (own creation). Axis labels represent the number of surface squares

5.1.3 Hammersley Sampling Technique


The Hammersley method is also a blind technique because it is related neither to the construction process nor to the dimensions of the surface. It has been developed with the aim of simulating a random selection of points on a surface. In this case study it is applied because there is the need to evaluate the precision of the MTD obtained with a random selection of points. This will be compared with the other methodologies to see whether a total detachment from predefined rules and from the manufacturing process can increase the precision of the MTD.

Figure 5.5: Example of Hammersley location of 10 sampling points [22]. Axis labels represent the
number of surface squares

A random selection of points can, in theory, determine a group of points that are concentrated
in a specific part of the surface, leaving another part totally unmeasured. The necessity is to define a sequence of locations that behaves like a random selection while maintaining uniformity over the surface.



This methodology is based on a low-discrepancy sequence first elaborated by Van der Corput [22]. Hammersley extended this mathematical tool to N dimensions. In this case study only a 2-dimensional sequence is needed. The main functions that govern this sequence of points are [22]:

\[ Y_i = \frac{i\,W}{N} \qquad (5.1) \]

\[ X_i = \sum_{j=0}^{k-1} b_{ij}\, 2^{-j-1} \qquad (5.2) \]

where:

• $X_i$ is the point coordinate in the transverse direction of the surface (of the runway if we consider the case study)

• $Y_i$ is the coordinate in the longitudinal direction of the surface

• N is the total number of samples

• W is the total width of the runway

• $b_{ij}$ is the j-th digit of the binary representation of the index i

As an example: if 15 samples need to be located on a surface, the distribution will be the first one in figure 5.5.
As can be seen in figure 5.5 this sampling strategy works for square sections. In order to adapt it to this case study it has been decided to create a repetitive series of squares that present the same Hammersley distribution. The number of samples is increased simply by increasing the number N in the previous equations. The complete code for this strategy can be found in the Appendix.
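For reference, a minimal Python sketch of the construction is given below. It follows equations (5.1) and (5.2); the scaling of both coordinates to a square section is treated as an assumption (the thesis repeats the same square pattern along the runway), and this is not the Appendix code.

```python
def van_der_corput(i, base=2):
    """Radical inverse of index i: reverse the digits of i in the given
    base and place them after the decimal point."""
    value, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        value += digit / denom
    return value

def hammersley_2d(n_samples, side=60.0):
    """2-D Hammersley points on a square section of the given side."""
    points = []
    for i in range(n_samples):
        y = i / n_samples * side        # equation (5.1): evenly spaced
        x = van_der_corput(i) * side    # equation (5.2): low-discrepancy
        points.append((x, y))
    return points

pts = hammersley_2d(15)  # the 15-point example of figure 5.5
```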

An example of the increment of the number of sampling points and the resulting change of their locations is presented in figure 5.6.

Figure 5.6: Example of increase of samples with Hammersley method (own creation). Axis labels
represent the number of surface squares



5.2 Surface simulation
Once the three selected sampling methodologies are coded and defined it is possible to work on the simulation of the surface texture depth measurements. In order to apply the different sampling techniques a surface that has the same properties as the runway is needed. The most complex part of this process is to work on the properties of the real pavement, more specifically to understand how those are determined and influenced by the manufacturing process. As described in chapter 2 the properties of the asphalt mixture depend on the components of the mixture but also on the construction process. For instance, the execution of the waterjetting process can improve the properties of the surface, but a wrong execution can also decrease them.

The pattern and characteristics of the construction process thus become an important aspect for the determination of the surface properties. It is necessary to identify those characteristics and represent them on the simulated surfaces. This is possible by collecting a large number of samples on a limited surface area. This determines a high density of points and ensures a highly accurate representation of the real surface properties. The surface has to be divided into rows and columns with a width equal to the sample square. Then the Texture Depth is measured in all the cells.

To have a clear visualisation of the measurement results a program in Python has been coded to export the data from the ELATextur machine and place the value of each measurement in a matrix. These data are then plotted over the surface with a different colour representing the value of the Texture Depth. In figure 5.7 it can be clearly seen how the points are plotted and how areas with different values can be distinguished.

Figure 5.7: Example of Surface simulation. Axis labels represent the number of surface squares
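A minimal sketch of this plotting step is given below. The export format of the ELATextur data is not shown here, so the matrix is filled with illustrative random values; only the colour-coded plot of the TD matrix is reproduced.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative 20 x 20 matrix of texture depth values (mm),
# one value per 250 mm sampling square.
td_matrix = np.random.normal(loc=1.7, scale=0.3, size=(20, 20))

fig, ax = plt.subplots()
im = ax.imshow(td_matrix, cmap="viridis", origin="lower")
fig.colorbar(im, ax=ax, label="Texture Depth [mm]")
ax.set_xlabel("surface squares (transverse)")
ax.set_ylabel("surface squares (longitudinal)")
plt.show()
```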

This example matrix is plotted with random values, but it is possible to identify different areas where values are lower or higher. In the next phase an analysis of these surfaces is carried out to obtain statistical outputs. The aim of this part is to find the most important properties of the surface in order to generalise them and create entire simulated runways.



Not only the numerical values of the Texture Depth are important, but also the locations of those measurements. It can happen that a certain part of the manufacturing process determines a repetition of areas with different MTD. The identification of such zones is important because they represent a feature of the manufacturing process that has to be identified and reproduced in the simulated surfaces.

The representative surface will be created multiple times and each time all the sampling
methodologies will be tested. The real value of the MTD will be known because the matrix
will be created manually and it will be possible to compare it with the MTD calculated from
the collected samples. The technique with the lowest relative error for the minimum number of
samples will be the most reliable from a theoretical point of view but also other considerations
will be made.

5.2.1 Defining Surface Properties


The first information that has to be collected from the surface is the distribution of the TD values present in all the cells. This can be done by fitting different distributions to the TD values and searching for the one that approximates them best. This fitting process is based on the least squares method.

This method consists of proposing a curve drawn from the data parameters. As an example, if the normal distribution has to be fitted to the data, the parameters used will be the Mean and the Variance. Once the fitted curve is drawn it is possible to evaluate the fitting accuracy by calculating, for each point of the distribution, its distance from the fitted curve. In figure 5.8 the points of the distribution and the fitted curves can be seen. Each point has a distance $Y_i$ to the curve. By squaring this distance for each point it is possible to obtain a final value that can be compared with the values of all the other fitting curves.
\[ S_{\text{normal distribution}} = \sum_{i=1}^{N} Y_i^2 \qquad (5.3) \]

with:

• $Y_i$ = distance of single point i from the fitted curve


• N = number of points collected

The distance is squared and summed with the ones of all the other points. The fitting curve
that provides the minimum sum is the best fitting curve.

To determine the best curve fit, the program has been set up to evaluate the fitting process with all the curves available in the mathematical Python library. This produces the outcome shown in figure 5.9a. At first sight the figure appears confusing, but it is possible to see in the background the distribution of TD values and, above it, all the possible curves representing the distribution. The program calculates with the least squares method which curve best represents the distribution of points. The output is a single curve that is plotted in the same TD value histogram.
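A simplified sketch of this selection procedure is shown below. It restricts the candidates to a few scipy.stats distributions instead of the full library, fits each one and keeps the curve with the smallest sum of squared distances to the histogram, in the spirit of equation (5.3).

```python
import numpy as np
import scipy.stats as st

def best_fit(td_values, candidates=("norm", "lognorm", "gamma")):
    """Fit candidate distributions and keep the one with the smallest
    sum of squared errors against the TD histogram."""
    hist, edges = np.histogram(td_values, bins=20, density=True)
    centres = (edges[:-1] + edges[1:]) / 2.0
    best = (None, np.inf, None)
    for name in candidates:
        dist = getattr(st, name)
        params = dist.fit(td_values)          # estimate the parameters
        sse = np.sum((hist - dist.pdf(centres, *params)) ** 2)
        if sse < best[1]:
            best = (name, sse, params)
    return best[0], best[2]

name, params = best_fit(np.random.normal(1.68, 0.28, 400))
```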

Figure 5.8: Example of distance between fitted curve and data points

The distribution selected in this case is very specific and not common in basic statistical analyses. This is due to the fact that the program searches for the best possible curve. If the number of samples is limited and does not represent the majority of the surface, this fitting process can be misleading. A limited number of samples could be characterised by particular aspects that are not recurrent in the whole surface. The final fitting curve could be the most representative for these
samples, but not for the total surface. This imprecise interpretation of the results is also known as overfitting[7]. This term refers to the practice of creating a model based on a limited number of values and applying it to another group of data[7]: the model is then not able to provide reliable results in different applications. In case of a limited amount of data the solution is to fit those data with more generic curves. The fitting process with a normal distribution is, from a mathematical point of view, not the most precise, but it provides reliable results for more general data. The distributions usually used in these cases are the Normal or Log-normal distribution.

(a) Fitting process of all the curves (b) Best curve selected

Figure 5.9: Fitting process and selection of the best curve

In this research an initial fitting of all the curves has been used to extrapolate the features of
the asphalt. But due to the limited number of data available it has been decided to fit only the
normal distribution in order to have a more conservative approach for the generalisation of those
properties during the simulation of the entire runway.



5.2.2 Normal Probability distribution
The normal probability distribution is the most used in statistical analyses[5]. It has a probability density curve that is shaped as a symmetric bell; for this reason it is also called the bell curve (Figure 5.10).

Figure 5.10: Example of three different Probability density curves for a normal distribution with
different mean value but the same standard deviation

Its probability density curve is characterised by the following equation:

\[ f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \qquad (5.4) \]
With:

• σ = standard deviation
• µ = Mean (which for a normal distribution coincides with the median)
• σ² = Variance

(a) Probability distribution function (b) Cumulative distribution function

Figure 5.11: CDF and PDF for the same normal distribution

During the fitting process the Mean and Standard Deviation are determined. They will be used as parameters for the creation of the cumulative distribution function (CDF).



In figure 5.11 the Cumulative Distribution Function and the Probability Distribution Function of the same normal distribution can be seen. The Cumulative Distribution Function gives, for each value, the probability of obtaining a result lower than or equal to it. Its inverse can therefore be used the other way around: providing an input probability, defined as a number between 0 and 1, to the inverse CDF gives the corresponding value.

The cumulative distribution function will be very useful during the analysis: values drawn from a uniform distribution between 0 and 1 act as probabilities which, inserted in the inverse CDF, determine TD values. These values will then be used to create simulated surfaces with a TD distribution corresponding to the PDF obtained from the fitting process described above.
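A minimal sketch of this inverse-CDF step, with illustrative parameter values, is given below; the percent point function `norm.ppf` of scipy is the inverse of the cumulative distribution function.

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 1.5, 0.25                         # illustrative fitted parameters (mm)

u = np.random.uniform(0.0, 1.0, size=1000)    # probabilities between 0 and 1
td_values = norm.ppf(u, loc=mu, scale=sigma)  # corresponding TD values

# td_values now follows the fitted normal distribution and can be used
# to fill the cells of a simulated surface.
```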

Figure 5.12 gives a clearer description of the steps and the corresponding graphs.

Figure 5.12: Summary of fitting process and determination of simulated texture values



Chapter 6
Real data analysis and surface simulation

In the previous chapter a description of the sampling techniques, enriched with theoretical notions, has been proposed to ease the understanding of the analysis presented in this chapter. Here the data collection is described first, followed by the extrapolation of the manufacturing features. This information is used in the last part of the chapter for the construction of the entire surface.

6.1 Data Collection from the field


The Flightflex® has been placed on the Touch Down Zone of the 18C runway in 2014. It was therefore interesting to measure the Texture Depth in this area because it is the most damaged by the landing of aircraft.

(a) Runway Location (b) Flightflex® strips

Figure 6.1: Location of Flightflex® strips

The first opportunity to take measurements was during the night between 19-02-18 and 20-02-18, because of regular maintenance on the Zwanenburg runway. In figure 6.1 the locations of the runways and the specific areas built with Flightflex® are shown with red markings. The authorisation to take measurements was valid from 22:30 to 04:30. In this time window three groups of measurements were taken:

• In the lower strip in figure 6.1b a square of 25 m² is used to take 400 measurements.

• In the upper FFX strip in figure 6.1b the whole area is measured with 105 sampling points
using a uniform distribution.

• A short strip with 100 samples will also be analysed for the ASK.

The idea behind the first group of measurements was to get the real mean and variance of an FFX area. Measuring 400 points in a 25 m² area means that the exact Mean and Standard Deviation are calculated. To avoid overfitting though, the mean and variance will be based on a normal distribution fitting.

As shown in figure 5.3, the sampling unit is assumed to be a square with a side length of 250 mm. This is bigger than the measuring diameter of the ELATextur machine, but it helps to take into account tolerances for imprecision during the measurements. Knowing that, one main assumption is made:

In all the sampling units, represented by a square of 250 mm side, the Texture
Depth is assumed to be equal to the value provided by the ELATextur machine.

6.1.1 Dense measurements


The first part analysed was a square area with a 5 m side located in the first strip of figure 6.1b. The exact location was the red square shown in figure 6.2. The area was divided into 20 columns and 20 rows for a total of 400 samples.

Figure 6.2: Square area for dense measurements with division in rows and columns

The measurement process took 2 hours and 26 minutes excluding the time needed to make
the grid and remove the markings.

The Texture Depth values were then inserted in the plotting program and the outcome obtained is shown in figure 6.3. The plotting process really helps to get a quick overview of the surface properties.
At first sight it is possible to see that the average value is quite high. In fact the scale of values starts from 1 mm, which means that there are no lower values. The EASA regulation of a minimum value of 1 mm for the MTD is respected, because the mean of values that are all above 1 mm will be above 1 mm too. In order to see if the same can be said for the Schiphol requirement
of 1.3 mm MTD, the average of the values will be computed.

Figure 6.3: Square area for dense measurements and plot of the TD values. Axis labels represent
the number of surface squares

The average of these 400 values is 1.67 mm, so the Schiphol requirement is also met.
The surface is characterised by two distinct areas: one with yellow squares and a high texture quality, and a second with lower TD values and darker colours. The main characteristic is that the lower quality strip covers the whole transverse width of the square. Figure 6.3 shows that a darker strip is present on the asphalt. This strip is caused by rubber deposition. The locations of the dark strip and the low quality area have been compared and it has been confirmed that they do not match. The conclusion is that this lower quality is determined by the construction process (asphalt mixture, paving, compacting or waterjetting). A confirmation comes from the second strip, where a lower quality area can be found far from the rubber deposition strips.

Figure 6.4: Highlighted values below and above 1.3 mm. Axis labels represent the number of
surface squares

Figure 6.4 highlights the low quality area of the surface. The TD values of this area have been isolated and its MTD is equal to 1.52 mm, so the Schiphol requirement is also met here.
At this point it is known that:

• The surface on average meets the EASA and Schiphol MTD requirements.
• There can be a good quality surface and a lower quality surface area, but both fulfil the
Schiphol requirements.



• The low quality area is a rectangle that covers the whole width of the square and part of
the length.
As described in chapter 5 the next step is to fit the TD values with a normal distribution in
order to get a graphical overview of the numerical results and define the curve parameters.

First all 400 values have been fitted with a normal distribution curve, as can be seen in figure 6.5. The mean µ is 1.68 mm and the St.Deviation σ is 0.28 mm. The latter value describes how the values are distributed around the mean. In this case it is quite low, which means that the values are very close to the average.

(a) Probability distribution function (b) Cumulative distribution function

Figure 6.5: Result of the fitting process; loc= Mean, scale= St.Deviation

A better understanding of these numerical data is provided by figure 6.6. Here it becomes clear that a limited part of the surface has TD values below 1.3 mm; the corresponding percentage is 9.11%. This means that when these curve parameters are used for the simulation of surfaces, the probability of generating values lower than 1.3 mm is 9.11%.

Figure 6.6: Percentage of values below 1.3 mm

A more accurate evaluation of the information is obtained by not fitting all 400 sampling points at once, but by fitting normal curves to the two different areas defined in the previous phase: the good quality area and the lower quality area. Figure 6.4 shows the two areas: the red square defines the low quality area while the remaining area represents the high quality values.



Low quality area

The low quality area TD values have been fitted with a normal distribution and the parameters
obtained are:

• µ=1.52 mm
• σ=0.25 mm

The corresponding graphs are shown in figure 6.7.

(a) Probability distribution function (b) Cumulative distribution function

Figure 6.7: Result of the fitting process; loc= Mean, scale= St.Deviation

High quality area

The same applies for the high quality area where the values are:

• µ=1.80 mm
• σ=0.24 mm

(a) Probability distribution function (b) Cumulative distribution function

Figure 6.8: Result of the fitting process; loc= Mean, scale= St.Deviation



6.1.2 Uniform measurement
The second group of measurements took place on the first FFX strip. In this case the measurements
were not dense because the surface analysed was bigger (60 m x 5 m) and the number of samples
was considerably lower (105 samples).

Figure 6.9: Location of the first strip

The total surface was divided in 5 transverse strips and 11 longitudinal strips for a total of 105 squares of 1 m². In this case it is assumed that the measurement of the ELATextur machine represents the uniform TD of the entire square. In the first case this assumption was reasonable due to the similar dimensions of the sampling machine and the sampling unit. In this scenario the assumption is weaker due to the large difference between the sampling unit and the actual area measured. But the aim in this case is not to detect the behaviour of the surface properties in detail, but to understand whether the properties of the surface are detectable in a large area. This is possible by accepting a lower level of precision.

Figure 6.10: Uniform distribution of points. Axis labels represent the number of surface squares

Although the preparation and the removal of the markings took more time due to the larger area of analysis, the measurement session in this case was shorter (around 1 hour). Figure 6.10 shows how the measurements have been performed and what the outcome was.

Also in this case it is possible to find an area that had a better quality and another with a
concentration of low TD values. The low quality area is present over the whole width of the strip
with a behaviour that is similar to the one on the square measured previously. It is also possible
in this case to separate the two areas as shown in figure 6.11.



Figure 6.11: Identification of good and low quality area. Axis labels represent the number of
surface squares

This shows that the manufacturing process affects the surface with similar patterns. The paving of these two strips was executed with one paver, which means that the influence extends over the entire width of the paving machine. The waterjetting process also affects the quality of the surface, but in that case the width of the machine is only 2 meters and the influence is less homogeneous. This suggests that the different areas are more likely determined by the paver than by the waterjetting process.

Fitting process

Also for this stretch the values are fitted with a normal distribution. The outcome is shown in figure 6.12. It can be noticed that in this case the lowest value of the distribution is less than 1 mm, but the average is still above the Schiphol requirement. In fact the Mean µ is 1.50 mm and the St.Deviation σ is 0.28 mm.
Compared to the dense area measured previously, the overall quality is lower but the St.Deviation is the same. This means that the surface in general has a lower MTD, while the values are distributed around the mean in a similar way: the distribution of the quality over the surface remains essentially the same.

Figure 6.12: Probability distribution function and cumulative distribution function; loc= Mean,
scale= St.Deviation



Also in this case it is possible to fit the two different areas identified in figure 6.11. Figure 6.13 shows that in the low quality area the Mean is 1.36 mm, which still fulfils the Schiphol requirements. The same, consequently, can be said for the high quality area, where the surface has a MTD of 1.65 mm.

Figure 6.13: Fitting process of low (top) and high (bottom) quality pavement; loc= Mean,
scale= St.Deviation

In both cases the St.Deviation remains around 0.25 mm, which is in accordance with the high density measurements. This shows how the Flightflex® manufacturing process provides a certain degree of homogeneity to the surface.

6.1.3 ASK Measurements


In the last hours available that night it was possible to take 100 measurements on the ASK strip identified in figure 6.14. These 100 samples have a Mean µ of 1.54 mm and a St.Deviation σ of 0.12 mm. The results are plotted in figure 6.15.

Figure 6.14: Location of ASK Measurements

It can be observed that the ASK fulfils the requirements imposed by Schiphol and the EASA regulation, but the mean is lower than that of the FFX surface. The most important feature of the ASK layer, however, is its very low St.Deviation, which is about 50% lower than that of FFX. The homogeneity of this material is very high, and this is due to its manufacturing process. As explained in chapter 2 the production process of the ASK gives less room for variance and uncertainty. The glue is laid and, being a synthetic material, it has a high homogeneity. Moreover the basalt grit mixture used to provide texture and friction is the result of a thorough and strict selection process that guarantees a high homogeneity level of the aggregates.

Figure 6.15: Results of ASK measurements analysis. Axis labels represent the number of surface
squares; loc= Mean, scale= St.Deviation

6.1.4 Conclusion of the measurement process


Table 6.1 presents an overview of the results obtained from the measurements.

                      Mean [mm]   St.Deviation [mm]   Variance [mm²]   N. Samples
Dense Matrix          1.68        0.28                0.08             400
High quality group    1.80        0.24                0.06             220
Low quality area      1.52        0.25                0.06             180
Uniform sampling      1.50        0.28                0.08             105
High quality group    1.65        0.23                0.05             50
Low quality group     1.36        0.25                0.06             55

Table 6.1: Overview of the results of the measurement phase

The quality of the Flightflex® pavement surface is high and fulfils both Schiphol and EASA requirements. It can also be observed that in both strips there are low and high quality areas. The low quality areas have a lower TD than the average but maintain the same variance. This means that, although the quality of the surface decreases or increases, the homogeneity remains almost constant. This is the effect of the manufacturing process. If the homogeneity of the asphalt remains the same over the entire width of the strip, this suggests that the mixture was homogeneous but that the paving or compaction process could have influenced the final result.

Compared to the ASK measurements, the higher variance of the FFX measurements and the presence of high and low quality areas testify to the need for a tailored sampling methodology. The ASK does not present different quality areas and has a low variance, which suggests that the TD on the surface is uniform. With the FFX there is the risk of hitting only a low or only a high quality area, and this needs to be taken into account in the sampling process. In particular, measuring only one type of surface quality has to be avoided because the resulting MTD would not be correct.

In the next section this information will be used to simulate surfaces and make them as similar as possible to the real ones.

6.2 Surface simulation


In this section the simulation process of the surfaces is described. To simulate the process in Python a series of rules and boundary conditions has to be established. The two strips in figure 6.1 have the same width as the strips that are paved during the construction process. They can therefore be used as surface units that are repeated to create the final surface. For this reason some patterns of the surface have to be identified; they will be part of all the surface units defined in the next phases of this research.

From the analysis of the measurements obtained, the following rules have been imposed for a
strip of FFX:

• A surface strip is characterised by a percentage of low quality texture. This is represented by a normal distribution with Mean 1.36 mm and St.Deviation 0.25 mm. Those values are taken from the fitting process of the measured low quality areas, but they can be changed to increase or decrease the quality of this patch.

• The lower quality areas cover the entire width of a simulated strip. The width dimension is
the same as the paver width: 4 m.

• The real runway surface will be made by a sequence of strips with the same manufacturing
process.

• The distribution of the dense area (400 samples) needs to be perturbed in order to take into
account a lower quality mixture. It has been seen in the second group of measurements
that the minimum value was lower than 1 mm. The simulated surface can be represented
then with lower values to take into account defects in the production process.

• The variance of the high and low quality surface should be similar.

• The percentage of low quality areas should vary to take into account different manufacturing
process influences.



These main guidelines and rules are the basis for the construction of the artificial surface.

The measurements presented previously are based on strips with a width of 5 m, but the
surface needed is 60 m wide and 500 m long. To obtain a surface of such dimensions a specific
construction process needs to be created. The main concept is to add a sequence of strips and
create the final surface. This is in accordance with the manufacturing process because the paver
normally works with a width of 4 m and the strips are paved in sequence.
Here a remark is needed. There are two types of maintenance processes:

• Continuous maintenance process: in this case the strips paved are continuous and long, reaching lengths of up to 500 m. The aim is to reduce the number of transverse joints as much as possible. In this process some areas of different quality are still present due to stops, pauses, loading of new asphalt, and variations in the asphalt mixture caused by the waiting time of the truck or production imperfections.

• Clustered maintenance process: this process is typical for night maintenance operations. It consists of dividing the surface into short transverse sections and paving each one completely before advancing to the next one. In this case the strips are shorter, around 100-120 m, and for this reason more transverse joints are present in the final surface. The number of strips in the transverse direction remains constant though.

During the construction process the paver lays the asphalt in parallel strips. To ease the
simulation process it has been assumed that the width of a strip is equal to the width of the
paver and that the length of a strip is 60 m. The latter dimension has been assumed taking into
account the uniform measurements. In that case a percentage of the longitudinal dimension of
the strip was characterised by low quality asphalt. The consequence of this assumption is that
to have a total strip with a length of 500 m, 10 strip units have to be added in longitudinal direction.

The procedure for the creation of the surface will be the following:

• Define the dimension of a single strip.

• Define a percentage of low quality surface for each strip. To increase the variability and
make the surface more realistic a normal distribution of the percentage will be used.

• From the dense matrix measurements the best-fit distributions of the low quality and high quality areas are taken.

• Take the inverse of the cumulative distributions and give as input values drawn from a uniform distribution between 0 and 1. This defines a series of TD values.

• Insert the values in the matrix taking into account the different areas.

• Vary the position of the low quality area in each strip.

• Add the strips in transverse and longitudinal direction to create the final matrix.

The assumptions made in this case are aimed at defining a patch unit that is repeated in the longitudinal and transverse direction to form the total surface. Each strip will follow the rules described previously. Figure 6.16 shows the assembly process.
Figure 6.16: Assembly procedure of simulated surface

The final matrix is then influenced by the nature of each strip unit. It has to be noted that each strip is generated from the sequence of rules defined in the previous section. Because each strip is generated with the inverse of the cumulative distribution, it follows the imposed distribution but is also unique. This way of building the matrix is based on the manufacturing
process and is aimed at simulating the real surface as closely as possible. Each construction process is different because unexpected influencing events may occur; this simulation process is aimed at representing the main features of the construction process.
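A compact sketch of this assembly logic is given below. The dimensions, the average percentage of low quality rows and its spread are illustrative assumptions; only the principle (strip units generated with the inverse CDF and stacked in both directions) reflects the procedure described above.

```python
import numpy as np
from scipy.stats import norm

def simulate_strip(n_rows=240, n_cols=16, low_frac=0.3,
                   high=(1.65, 0.23), low=(1.36, 0.25)):
    """One strip unit: high quality values with a low quality block
    covering the full strip width."""
    strip = norm.ppf(np.random.uniform(size=(n_rows, n_cols)),
                     loc=high[0], scale=high[1])
    # Percentage of low quality rows, drawn with some variability.
    frac = np.clip(np.random.normal(low_frac, 0.05), 0.0, 1.0)
    n_low = int(frac * n_rows)
    start = np.random.randint(0, n_rows - n_low + 1)
    strip[start:start + n_low, :] = norm.ppf(
        np.random.uniform(size=(n_low, n_cols)), loc=low[0], scale=low[1])
    return strip

def simulate_surface(strips_long=8, strips_across=15):
    """Stack strip units longitudinally and place the columns side by side."""
    columns = [np.vstack([simulate_strip() for _ in range(strips_long)])
               for _ in range(strips_across)]
    return np.hstack(columns)

surface = simulate_surface()
print(surface.shape, surface.mean())
```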

One of the main features of this simulation model is its flexibility. By changing some parameters, such as the percentage of low quality area per strip or the means and variances of the distributions, it is possible to obtain slightly different strips. This will be very useful during the simulation of the sampling methodologies' performance. It will be possible to verify which sampling methodology behaves best when the overall surface quality becomes better or worse.



Figure 6.17: Final Surface Simulated. Axis labels represent the number of surface squares



Chapter 7
Analysis methodology and results

Now that the surface simulation process is complete it is possible to test the different sampling methodologies and observe how they behave.
The runway surface is entirely simulated in Python. Inside the simulations all the TD values are known and consequently the exact MTD is also known. With this value it is possible to calculate the relative error between the real MTD and the mean of the TD values measured with the sampling methodologies.

The relative error is defined as:

\[ \varepsilon = \frac{MTD_r - MTD_s}{MTD_r} \]

with:

• $MTD_r$ = real Mean Texture Depth of the surface. This is the average of the Texture Depth values of all the sampling units composing the surface.

• $MTD_s$ = Mean Texture Depth of the samples collected.

Two methodologies are proposed for the performance simulation:

1. The first one is based on an iterative process. Having set a maximum threshold for the
relative error as a requirement, the process starts by defining a minimum number of samples
for each technique. This minimum number of samples is applied to the surface, then the
relative error is calculated. If this value is lower than the threshold, the simulation stops
and the minimum number of samples is defined. In case the relative error is higher the
number of samples is increased and the process restarts. The cycle will stop when the
requirement is met.

2. The second one is similar but consists of defining in advance the numbers of samples for each technique and verifying the relative error for each group of samples. Then the lowest number of samples that guarantees the required minimum relative error is selected. This way of proceeding imposes the verification of different numbers of samples and provides a wider overview of the behaviour of the surface. In the previous simulation strategy this was not possible: as an example, if the minimum ε is met with the minimum number of samples, it is impossible to evaluate how the sampling methodology behaves with a larger number of samples.

7.1 First sampling methodology
This simulation strategy is based on an iterative algorithm that is meant to stop when the
minimum relative error is reached. In figure 7.1 the exact algorithm is represented.

Figure 7.1: Flowchart of simulation algorithm for the first methodology proposed

The starting point is the simulation of one entire surface. When the surface is ready it is possible to calculate its real MTD, which will be used for the relative error calculation. At this point the minimum number of samples X is used on the simulated surface and the mean of the TD values obtained is compared with the real MTD. If the requirement is met the simulation stops; if not, the number of samples is increased and the procedure restarts. The cycle is executed for each sampling methodology.
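The loop below sketches one cycle of this algorithm. The functions `simulate_surface` and `sample_locations` are placeholders for the surface simulator of chapter 6 and for one of the sampling methodologies of chapter 5; the increment step size is an assumption.

```python
import numpy as np

def minimum_samples(simulate_surface, sample_locations,
                    threshold=0.01, n_start=10, n_step=10, n_max=5000):
    """Increase the number of samples until the relative error between the
    real MTD and the sample mean drops below the threshold."""
    surface = simulate_surface()
    mtd_real = surface.mean()
    n = n_start
    while n <= n_max:
        rows, cols = sample_locations(surface.shape, n)   # index arrays
        mtd_samples = surface[rows, cols].mean()
        if abs(mtd_real - mtd_samples) / mtd_real <= threshold:
            return n                  # requirement met with n samples
        n += n_step
    return None                       # not met within n_max samples

# Monte Carlo repetition: one minimum number of samples per simulated surface
# results = [minimum_samples(simulate_surface, sample_locations) for _ in range(1000)]
```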

When the simulation stops the number of samples that meets the requirement is registered in
a matrix. At this point another surface matrix is simulated and the process is restarted. When
1000 values are stored in the matrix the total simulation process stops. This means that the
surface is simulated 1000 times and the procedure above is executed from the beginning every
time. All this tasks are executed for the three sampling methodologies and the data restored and
compared.

This way of proceeding is an adaptation of a Monte Carlo simulation. The Monte Carlo
method has wide application in different fields. In general it is based on the repetitive simulation
of a process or a series of calculations using as input (controlled) random values for the system
variables. The results of all the processes are then collected and outcomes based on statistical
analysis are provided.

This also applies for this case study because each time a unique surface is simulated and then
a mathematical process is applied. The results of all the simulations are collected in the final
matrix and for each sampling methodology the mean is calculated.
Table 7.1 shows the results of the simulation for each sampling methodology. As already
mentioned for each simulation the minimum value is stored and after 1000 simulations the mean
is calculated. In this case the relative error threshold was fixed at 1%. This threshold is selected
as the maximum acceptable relative error between the samples' mean and the real MTD. In table 7.1 it can be seen that the methodology that provided the minimum relative error with the lowest number of samples is the CROW methodology.

               Uniform            Hammersley         CROW
Simulation No  Min No. samples    Min No. samples    Min No. samples
1              73                 145                85
2              345                273                195
3              112                68                 15
...            ...                ...                ...
999            123                225                420
1000           88                 187                362
Mean           144                165                99

Table 7.1: Minimum number of samples meeting the 1% relative error threshold, per simulation and sampling methodology (first simulation methodology)

7.1.1 Robustness of the methodology


For the validation of the robustness and consistency of this methodology different cycles of 1000
simulations have been executed. Table 7.2 shows the results of three different cycles. It can be
observed that the three cycles have different outputs. This shows that the process lacks robustness
and needs more verification.

Simulations     Uniform   Hammersley   CROW
First cycle     144       165          99
Second cycle    102       158          76
Third cycle     132       189          102

Table 7.2: Simulation results for different cycles

To understand and explain such behaviour more simulations are run. In this case it has been decided to simulate only one surface and to regularly increase the number of samples up to 8000. For each number of samples the relative error is calculated, then the number of samples is increased and the new relative error is calculated, and so on.

Figure 7.2: Verification of mean behaviour related to the variation of number of samples



The aim of this procedure is to have an overview of the behaviour of the relative error for different numbers of samples. Similarly, for each number of samples it is possible to calculate the $MTD_s$ and verify how it behaves as the number of samples changes. The outcome is expected to be similar because it only represents a second point of view on the same mathematical problem.

Figure 7.2 shows that for fewer than 1000 samples the mean remains unstable with the uniform sampling methodology. The Hammersley and CROW methodologies take longer to stabilise, but after 2000 samples the asymptotic line is reached. For this reason the first simulation methodology provided different results: the minimum relative error requirement is met with a very low number of samples and the probability of going beyond 1000 samples is extremely low.

From a practical point of view, and to facilitate understanding, the evaluation of the relative error as a function of the number of samples is also presented. Figure 7.3 shows that in terms of relative error the behaviour of the curves is similar to that of the mean TD. In this case it is observed that the relative error varies strongly up to 1000 samples for the Uniform distribution, and up to 2000 for the Hammersley and CROW distributions.

Figure 7.3: Verification of Relative Error behaviour related to the variation of number of samples

This last analysis has highlighted the limits of the first simulation methodology, caused by the high variability of the surface texture. To reach a stable behaviour of the relative error and a high level of robustness it is necessary to work with more than 1000-2000 samples. In practice such an amount of samples cannot be measured during a quality control session. As described in chapter 2 the measurements are taken with the ELATextur machine, which reduces the measurement time. Still, the time available for the measurements is limited and measuring 1000 locations would take more than 8 hours, a duration that is not compatible with Schiphol's operational schedules.

With the second simulation methodology it will be analysed whether it is possible to achieve
a high consistency and robustness with a lower number of samples.



7.2 Second sampling methodology
For the second sampling methodology the procedure changes. The first step is to identify, for each sampling methodology, different numbers of samples. The goal of this simulation methodology is to use these limited numbers of samples and analyse their average relative error over 1000 different surfaces. This perspective is different from the previous simulation methodology: in the previous case the simulation stopped if the threshold requirement was reached with a limited number of samples, so the behaviour of the relative error for larger numbers of samples, for that single simulation, remained unknown.

Figure 7.4: Flowchart of second simulation methodology

With this methodology the numbers of samples are fixed and they are all used on the same surface. There is no loss of information, but there is a downside: if none of the defined numbers of samples meets the error threshold value, then more samples need to be planned and the simulation has to run again from the beginning. Moreover, to ensure consistency of this process, all the sampling methodologies are applied to the same simulated surface. This makes it possible to understand not only how these methodologies behave on average but also how they relate to the same surface.

Also for this simulation methodology a specific algorithm is defined in Python. The flowchart
in figure 7.4 provides an overview of the steps that constitute the simulation process. The main
tasks are:

1. Define and list the numbers of samples for each sampling methodology. For clarity, the numbers of samples selected for this case study are given in table 7.3. The increment rules for each sampling methodology are based on the rules defined in chapter 5.

2. Simulate one entire surface as it has been defined in chapter 6

3. For each sampling methodology try all the numbers of samples set at point 1

4. Calculate the relative error for each number of samples of each methodology



5. Store these values inside a matrix

6. Simulate another surface and restart the process

7. Stop when 1000 surfaces are simulated

8. Calculate the average of the relative error for each sampling number for each methodology

Sampling methodology   Numbers of samples
Hammersley             9 17 26 34 43 51 59 67 76 84 93 101 110 118 126 134 143 151 160 168 177 185 193 201
CROW                   15 30 45 60 75 90 105 120 135 150 165 180 195
Uniform                10 18 28 40 54 70 88 108 130 154 180

Table 7.3: Selection of the number of samples for each methodology
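The steps above can be condensed into the following sketch. The mapping from a methodology name to its list of sample counts and to a function returning sampling locations is an assumption for illustration; the actual implementation is part of the thesis code.

```python
import numpy as np

def second_methodology(simulate_surface, methods, n_surfaces=1000):
    """methods: {name: (list_of_sample_counts, location_function)}.
    Returns the average relative error per sample count and methodology."""
    errors = {name: np.zeros((n_surfaces, len(counts)))
              for name, (counts, _) in methods.items()}
    for s in range(n_surfaces):
        surface = simulate_surface()          # step 2: one new surface
        mtd_real = surface.mean()
        for name, (counts, locate) in methods.items():
            for j, n in enumerate(counts):    # steps 3-5
                rows, cols = locate(surface.shape, n)
                mtd_s = surface[rows, cols].mean()
                errors[name][s, j] = abs(mtd_real - mtd_s) / mtd_real
    # step 8: average over all simulated surfaces
    return {name: err.mean(axis=0) for name, err in errors.items()}
```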

Standard Deviation of the simulated surface

In section 6.2 the flexibility of the surface simulator was mentioned. This property proves to be useful during this simulation, because it is of interest to understand how the sampling methodologies behave when the configuration of the surface properties varies.

Four factors can be modified to influence the surface properties:

• The Mean of the distribution of the high quality area.

• The Mean of the distribution of the low quality area.

• The percentage of low quality area on a single strip.

• The noise factor applied to the low quality area distribution values.

The last factor may seem confusing, so it is explained in detail. As described in chapter 6 the
surface is characterised by two normal distribution functions, one for the high quality values and
one for the low quality values. Figure 7.5 shows the two distribution curves. Their means can be
changed or influenced as shown before, but for this case study the values obtained in chapter 6
are used. Moreover, the standard deviation was close to 0.25 mm in almost all configurations and
has therefore been kept fixed for all the simulations.

In mathematics an unexpected variation of a distribution is represented by a factor called
"noise" [4]. It can be obtained by adding to, or subtracting from, the original distribution a
secondary distribution that in general has µ = 0 and a specific standard deviation. In this case
the goal is to evaluate a decrease of the surface quality, so the worst scenario is simulated by
subtracting the noise from the low quality area distribution.
Figure 7.5 shows the effect of the noise. The blue curve is shifted to the left due to the
subtraction of the noise from the original distribution. The shift is a pure translation to the left,
because each value of the original probability distribution function is reduced by the noise. The
resulting low quality distribution curve is the green one.

Using the input values from the measurements and changing some of the parameters, it is
possible to simulate the surface under different conditions. The main concept of this simulation
process is to generate different configurations of the surface and to see how the different
sampling methodologies perform.
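As a small numerical illustration of the noise perturbation described above, assuming the low quality values follow N(1.36, 0.25) as in section 7.2.1 and a noise factor drawn from a uniform distribution between 0 and 0.3 per simulated surface; the seed and sample size are arbitrary:

import numpy as np

rng = np.random.default_rng(1)
low_quality = rng.normal(1.36, 0.25, 10000)   # original low quality distribution
noise_factor = rng.uniform(0.0, 0.3)          # one noise factor drawn per simulated surface
perturbed = low_quality - noise_factor        # pure translation of the curve to the left

print(round(low_quality.mean(), 3), round(perturbed.mean(), 3))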



Figure 7.5: Representation of distribution curves and perturbing effect

To avoid misunderstandings regarding this simulation it is important to remember the scope of
this procedure:

It is not to simulate a surface that meets the Schiphol or EASA requirements, but
to simulate a surface that presents the manufacturing process patterns and to
evaluate which sampling methodology performs better in the identification of the
real MTD.

With this concept in mind it is legitimate to consider the simulation of surfaces with a lower
quality than the real ones. If the sampling process proves to be effective, it will be able to
detect this lower quality.

It has already been mentioned that the means of both the high and low quality distributions have
been taken from the results of chapter 6 and are kept fixed during the simulations. This
means that the two variables that are allowed to change are the percentage of low quality
area per strip and the intensity of the noise. These two parameters make it possible to simulate
surfaces with a lower quality than the real ones. The goal is to verify the efficiency of the
methodologies in case of very low quality surfaces.

7.2.1 Results of the simulations


Having a clear understanding of how the simulation works, it is possible to describe which
parameters have been used for the simulations and the related results.
In all the simulations the following inputs have been used:

• µ= 1.65 mm, σ = 0.23 mm for the high quality surface values

• µ= 1.36 mm, σ= 0.25 mm for the low quality surface values



Overview of all the simulation methodologies
For the simulations the noise and the percentage of low quality area have been defined as follows:

• The noise factor is a value drawn from a uniform distribution between 0 and 0.3. This means
that for each surface simulation the noise was slightly different, which simulates a
manufacturing process that contains imperfections.
• For the percentage of low quality area per strip different categories have been created.
Each category is a uniform distribution with an upper bound of 0, 0.1, 0.2, 0.3, 0.4 or 0.5.
For each category 1000 surfaces have been simulated.

Table 7.4 shows an overview of the results of this simulation. The different surfaces are
represented by the two parameters mentioned above. The two values used to represent the
different surfaces are shown in the first column of each table. More specifically:

• The first is related to the percentage of the low quality area on each strip. The value
represents the upper bound of the uniform distribution (i.e. 0.2 represents the upper bound
of a uniform distribution between 0 and 0.2).
• The second represents the intensity of the noise. This value is also expressed as the upper
bound of a uniform distribution.

As an example, "0.3_0.2" represents a surface with a noise drawn from a uniform distribution
of values between 0 and 0.3 and a percentage of low quality area between 0 and 0.2.

Table 7.4: Results of sampling simulation

All the tables present in the first row a surface type with 0 percentage of low quality area and 0
noise (surface type 0.0_0.0). This surface is considered a uniform surface without any low quality
area. It would represent a surface with a perfect construction process, without any influence of
the manufacturing process: an ideal situation that both client and contractor want to achieve.

A graphical representation of the results in table 7.4 is shown in the following sections.
Different plots are proposed based on the research scope. The first plot in figure 7.6 presents how
the three sampling methodologies perform with ideal surfaces (0.0_0.0).

Figure 7.6: Results for each sampling methodology on a surface with no low quality areas.

Figure 7.6 presents a more consistent behaviour than the first simulation methodology: all
sampling methodologies show a decrease of the relative error when the number of samples
increases. This is coherent with the U function proposed by the ISO regulations, as can be
observed in figure 3.1.

Figure 7.6 also shows that all three sampling methodologies appear to be effective in
reaching a relative error lower than 1% within the numbers of samples proposed in table 7.3.
This means that no other numbers of samples need to be added. It can be observed that with
an ideal surface the Uniform sampling methodology appears to be the most effective in reaching
the 1% relative error with the lowest number of samples. With 70 samples a relative error of
1.04% is reached. Meanwhile, with the Hammersley and CROW methodology 160 samples are
needed. This is certainly an idealistic situation that will not occur in practice but from a scientific
perspective it is important to evaluate the general behaviour of these sampling methodologies
with uniform surfaces.



Simulation of Hammersley for different surface types
In this case the focus is on the Hammersley sampling methodology. The results for all the
different surface types are plotted in order to evaluate how this sampling methodology performs
on different surfaces.

Figure 7.7: Hammersley sampling technique for different surface properties

Figure 7.7 shows that all the curves have the same behaviour. The trend is similar and
the distance between the curves is negligible. This means that the Hammersley methodology
is not affected by the manufacturing process. The explanation is that the construction process
creates areas of low quality texture with specific patterns, while the locations of the Hammersley
samples are random and do not follow a fixed structure. The probability of having several
sampling locations in the same low quality area is therefore very low, and no relation between
the sampling technique and the construction process is found.

It should also be mentioned that when the noise of the low quality area is increased, the effect on
the sampling methodology remains negligible. This is a second indication of the lack of correlation
between the sampling methodology and the manufacturing process. The downside of this technique is
that 160 samples are required to reach a 0.96% relative error.
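For illustration, a 2D Hammersley point set can be generated from a regular spacing in one direction and the base-2 radical inverse (van der Corput sequence) in the other. The sketch below is a minimal, self-contained version scaled to a 500 m x 60 m strip; the exact mapping to measurement locations used in this thesis may differ in detail.

import numpy as np

def radical_inverse_base2(i):
    # Van der Corput sequence: mirror the binary digits of i around the radix point
    result, f = 0.0, 0.5
    while i > 0:
        result += f * (i & 1)
        i >>= 1
        f *= 0.5
    return result

def hammersley_points(n, length=500.0, width=60.0):
    # First coordinate: regular spacing along the strip; second: radical inverse in base 2
    return np.array([(length * i / n, width * radical_inverse_base2(i)) for i in range(n)])

locations = hammersley_points(160)   # e.g. 160 samples on a 500 m x 60 m strip
print(locations[:3])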



Simulation of Uniform for different surface types
The same plotting process has been applied to the results of the Uniform methodology.

Figure 7.8: Uniform sampling technique for different surface properties

In this case the situation is slightly different, because the behaviour of the curves does not
completely follow the blue curve that represents the surface without low quality areas. A trendline
representing the other 5 curves would be very similar to the blue line, because the overall behaviour
is the same. It is clear, however, that all the disturbed surfaces present a distinct peak
when 55 samples are used.

This is due to the fact that in this particular sampling configuration a higher percentage of
samples is located on the low quality areas of the surface. The simulations create 1000 different
surfaces; each surface is different but maintains the same manufacturing signature. Every
simulated surface presents a series of strips with different locations of the low quality
area, but the number of strips, the percentage of low quality area and the noise intensity remain
the same. This preserves the manufacturing signature. The fact that this peak is present in
each simulation indicates that, for this specific number of samples, there is a correlation between
the sampling methodology and the manufacturing process. Another proof of this correlation
is that the peak is also present when the noise is increased.

This curve behaviour is in accordance with the findings of Colosimo et al. In their research
they obtained similar peaks and a similar configuration of the curves [8].



Simulation of CROW for different surface types
The last sampling methodology to be plotted is the adapted CROW methodology presented in
chapter 2. In this case the sampling methodology interacts with the manufacturing process,
although the original scope of this method was to test the paving quality.

Figure 7.9: CROW sampling technique for different surface properties

The results are slightly different from the previous cases, but it is still possible to say that
the general trend of the curve is maintained. Three main peaks are present and the reason is the
same as for the Uniform methodology: the surface signature and the sampling methodology interact,
and the samples are mainly located on the low quality areas of the surface. In this case three
sampling locations are drawn close to each other. This sampling distribution was originally meant
to check the paving quality at the extremes of the paver and the compactors. For this reason, in
this technical context, if one of the three points is located on a low quality strip, the two
adjacent points will most likely be included in that area too. The probability of a correlation
between the manufacturing pattern and the sampling methodology is higher and results in a larger
relative error. The three peaks grow when the noise is increased, which proves that a higher
number of samples is located in the low quality areas. With higher noise the difference between
the sample means and the real mean of the surface is greater.

Please note that the scale of this graph is small: the vertical axis has a maximum value
of 4.5% relative error. If the scale had reached 100%, those peaks would not have been detectable.
From a scientific perspective an error of 3-4% is considerable and a subject of investigation, in
particular in the mechanical production industry. In the pavement construction industry an error
of 4% could still be considered acceptable. This is because for a runway strip of 500 m length and
60 m width there are 480,000 sample units, and at the first peak, with 90 samples, an estimate of
the MTD with an error of 3.5% is obtained. So with 0.019% of the total sample units it is possible
to estimate the MTD value of the surface with an accuracy of 96.5%.

7.3 Overview of the results


The three sampling techniques have been analysed with different simulation techniques. The first
simulation technique proved not to be consistent and robust, because during the simulation process
part of the information was lost: the iterations were forced to stop when the relative error
requirement was met. For this reason it was not possible to evaluate the behaviour of a methodology
with a large number of samples. It has been shown that the relative error only stabilised when
1000-2000 samples were used, which explains the inconsistency of the first simulation technique.
To understand the size of the error committed by taking a lower number of samples, the second
simulation methodology was adopted.

The second simulation strategy brings more stability and coherence. A series of sample sizes is
selected and the behaviour of every sample size on the list is tested. The results prove to be
coherent with the ISO regulation introduced in the literature review. Among the three sampling
techniques the Hammersley is the only one that is not affected by the manufacturing process: due
to its structure there is no correlation between the sampling methodology and the patterns left by
the construction process. The situation with the Uniform and the CROW methodology is different;
they are both influenced by the manufacturing process. The Uniform methodology is sensitive when
55 samples are used, which is why a peak is present in the graph. The CROW methodology is even
more sensitive to the manufacturing process. The reason is that groups of three samples are taken,
so there is a higher probability of a concentration of samples in the low quality areas. This is
also shown by the fact that the peaks grow when the noise is increased. The peaks present with the
CROW and the Uniform methodology indicate the lower reliability of these methodologies: they
represent a deviation from the general behaviour of the curves and thus measurement values with a
higher relative error than expected. These measurement techniques carry the risk of providing an
MTD value lower than the real one.

Sampling methodology   Number of samples   Relative error   Type of surface
Hammersley   160   0.96%   0.3_0.3
Uniform   70   0.96%   0.3_0.3
CROW   180   1%   0.3_0.3

Table 7.5: Number of samples corresponding to a relative error of 1% or lower



From the results it can be said that if the real MTD needs to be measured with high reliability,
the Hammersley methodology is the most appropriate one, because it is not affected by the
manufacturing process; compared to the other two strategies, however, it requires more samples to
obtain a relative error lower than 1%. Table 7.5 shows how quickly each methodology reaches the
required relative error. The Uniform method is the quickest but also less reliable. In case of an
ideal surface without lower quality areas (0.0_0.0) the Uniform methodology performs best and
requires the least number of samples. This indicates that the performance of the different
sampling techniques is similar for all surface conditions when only a low number of samples is used.



Chapter 8
Validation and Test

In this chapter the field experience is described and the results are presented. This leads
to a series of conclusions and comparisons with the simulations proposed previously. The field
experience was aimed at enriching and validating the theoretical notions and the results from the
simulations.
This part was divided into different phases:

• Analysis of the correlation between laser MTD measurement and Sand Patch Method
• Analysis of the effects of waterjetting on MTD
• Testing of sampling methodologies

8.1 Correlation between ELAtextur and Sand Patch Method


As described in section 2.2, two methods are used by Schiphol and Heijmans for the TD
measurements: the Sand Patch method and the laser method. The first is the older and more
precise methodology and is also able to measure the hidden holes that are present underneath the
aggregates. During this analysis the Sand Patch results are considered the most precise and it is
calculated how much the laser results differ from them.

The drawback of the Sand Patch method is that it takes 3-4 minutes per measurement, while an
ELAtextur measurement takes 12 seconds. The laser analysis is executed with the ELAtextur machine
and is considered less precise because the laser is not able to measure the hidden holes underneath
the aggregates. The aim of this section is to evaluate how much the two methodologies differ and to
determine whether the ELAtextur machine is a reliable tool for quality control.

8.1.1 Correlation between the techniques


To answer this question, 33 measurements have been taken with both techniques at exactly the
same locations. The results are presented in table 8.1. The data shows that there is an important
difference between the means of the two sets of values. The correlation between two sets of data
is expressed by the correlation factor, defined as

ρ_{X,Y} = cov(X, Y) / (σ_X · σ_Y)

with:

• cov(X, Y) the covariance between the variables X and Y

• σ_X the standard deviation of X

• σ_Y the standard deviation of Y

Table 8.1 shows the measured values and the relative error per pair.
Measurement   Sand patch (mm)   ELAtextur (mm)   Rel. error
1 1.52 1.58 0.04
2 1.55 1.33 0.14
3 1.61 1.24 0.23
4 1.6 1.35 0.16
5 1.76 1.5 0.15
6 1.6 1.28 0.20
7 1.77 1.88 0.06
8 1.55 1.49 0.04
9 1.52 1.33 0.13
10 1.69 1.58 0.07
11 1.66 1.27 0.23
12 1.73 1.5 0.13
13 1.73 1.36 0.21
14 1.73 1.72 0.01
15 1.98 1.88 0.05
16 1.52 1.51 0.01
17 1.69 1.81 0.07
18 1.5 1.34 0.11
19 1.63 1.46 0.10
20 1.57 1.29 0.18
21 1.69 1.71 0.01
22 1.63 1.75 0.07
23 1.44 1.28 0.11
24 1.66 1.42 0.14
25 1.63 1.3 0.20
26 1.66 1.62 0.02
27 1.55 1.29 0.17
28 1.63 1.71 0.05
29 1.6 1.46 0.09
30 1.44 1.38 0.04
31 1.83 1.41 0.23
32 1.94 1.5 0.23
33 1.49 1.52 0.02
Mean 1.64 1.49 0.11

Table 8.1: Results values from Sand Patch and ELAtextur measurements

To understand the distribution of these values, the plot in figure 8.1 shows the scatter of the
measurement values and the trendline, which can be seen as a graphical interpretation of the
correlation factor. The correlation factor is 0.48, so the values are partially correlated. The
graph is characterised by the thick 45 degree line that represents perfect correlation between the
two variables: the closer the points are to this line, the higher the correlation factor. The fact
that most points are located above the blue line means, in general, that the ELAtextur
measurements are more conservative for values below 1.5 mm. Above 1.5 mm the Sand Patch method
appears to be more conservative, but the number of measurements in this region is considerably low.

To validate this observation, the relative error between the measurements is calculated for each
pair of data, separately for ELAtextur values below 1.5 mm and above 1.5 mm, and the averages are
determined. Looking at table 8.2 it can be observed that for values below 1.5 mm the conservative
error committed using the ELAtextur machine is around 15%, while for values above 1.5 mm it is 4%.
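As a minimal sketch of the calculations above (correlation factor and split relative errors); the arrays below contain only the first five pairs from table 8.1, so the printed values will not reproduce the full-sample results such as the 0.48 correlation factor:

import numpy as np

# First five pairs of table 8.1 (Sand Patch, ELAtextur), in mm
sand_patch = np.array([1.52, 1.55, 1.61, 1.60, 1.76])
elatextur = np.array([1.58, 1.33, 1.24, 1.35, 1.50])

# Correlation factor rho = cov(X, Y) / (sigma_X * sigma_Y)
rho = np.cov(sand_patch, elatextur)[0, 1] / (sand_patch.std(ddof=1) * elatextur.std(ddof=1))

# Relative error per pair, averaged separately for ELAtextur values below/above 1.5 mm
rel_err = np.abs(sand_patch - elatextur) / sand_patch
below = rel_err[elatextur < 1.5].mean()
above = rel_err[elatextur >= 1.5].mean()
print(round(rho, 2), round(below, 2), round(above, 2))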



Figure 8.1: Measurements distribution and correlation

(a) ELAtextur values below 1.5 mm

Measurement   ELAtextur (mm)   Sand patch (mm)   Rel. error
1   1.33   1.55   0.14
2   1.24   1.61   0.23
3   1.35   1.6   0.16
4   1.5   1.76   0.15
5   1.28   1.6   0.20
6   1.49   1.55   0.04
7   1.33   1.52   0.13
8   1.27   1.66   0.23
9   1.5   1.73   0.13
10   1.36   1.73   0.21
11   1.34   1.5   0.11
12   1.46   1.63   0.10
13   1.29   1.57   0.18
14   1.28   1.44   0.11
15   1.42   1.66   0.14
16   1.3   1.63   0.20
17   1.29   1.55   0.17
18   1.46   1.6   0.09
19   1.38   1.44   0.04
20   1.41   1.83   0.23
21   1.5   1.94   0.23
Mean   1.37   1.62   0.15

(b) ELAtextur values above 1.5 mm

Measurement   ELAtextur (mm)   Sand patch (mm)   Rel. error
1   1.58   1.52   0.04
2   1.88   1.77   0.06
3   1.58   1.69   0.07
4   1.72   1.73   0.01
5   1.88   1.98   0.05
6   1.51   1.52   0.01
7   1.81   1.69   0.07
8   1.71   1.69   0.01
9   1.75   1.63   0.07
10   1.62   1.66   0.02
11   1.71   1.63   0.05
12   1.52   1.49   0.02
Mean   1.69   1.67   0.03

Table 8.2: Relative errors for ELAtextur values below and above 1.5 mm

8.1.2 Conclusion and Recommendation


Although from a statistical point of view the amount of analysed data is too low for a complete
analysis, it can be said that there is a partial correlation (almost 0.5) between the two
measurement methods, and that for ELAtextur values below 1.5 mm the average value is 15% lower
than the Sand Patch value. The same error for values above 1.5 mm is considerably lower. This
means that the ELAtextur results can be used, because the error is conservative: in the worst
scenario the laser results will be lower than the real ones. So if the ELAtextur results already
meet the Schiphol requirement, the real MTD will do so as well.

Due to the large surface involved and the need for a proper evaluation of the MTD, the ELAtextur
method is considered more convenient and faster. The previous analysis, however, highlights the
conservative nature of the results for MTD values below 1.5 mm. For this reason, given the high
requirement set by Schiphol, it could be considered to increase the machine values below 1.5 mm
by 10 to 15%. According to the results presented, this would increase the reliability of the
ELAtextur and preserve operational time. This is only necessary in case more precision is needed;
as mentioned, the error is conservative and no correction is needed if the results already meet
the requirements.

Please note that these results and analyses refer to Flightflex® surfaces.

8.2 Field Measurements


From the 27th to the 14th of April 2018, the Polderbaan (runway 18R-36L) underwent a
maintenance process in which a pavement section was renewed with Flightflex®. Figure 8.2
presents the exact location of the construction site.

Figure 8.2: Location of the maintenance works on the Polderbaan

During the construction process it was possible to measure the asphalt surface in the
different phases of the process: after compaction and after the waterjetting process. The scope
was to analyse the performance of the asphalt in terms of MTD in the different phases, to see if
and how the manufacturing process influences the surface quality. This helps to verify whether
the properties of the simulated surfaces are realistic. A full validation process is only possible
if all the locations of the runway are measured, which would mean measuring 480,000 locations. The
time available for the measurements was limited to a few hours, while obtaining that amount of
measurements would take several weeks; this is not an option. For this reason the goal during the
maintenance process was to take as many measurements as possible.

Unfortunately, due to weather conditions and a few delays, it was not possible to have a full
time slot to execute the measurements. Most of the measurements were taken with the machinery
still working on the construction site. For this reason the number of measurements is limited and
it was not always possible to try all the sampling strategies.

8.3 Measurements on Taxiway


The first measurements were taken on the taxiway because the maintenance process started at
this location. After the compaction of Flightflex® some measurements were taken with a uniform
distribution, as seen in figure 8.3. Green points represent TD values above 1.3 mm, yellow points
values between 1 mm and 1.3 mm, and red points values below 1 mm. First a group of measurements
was taken along a straight line cutting the pavement in the transverse direction.

Figure 8.3: Uniform distribution in the taxiway

The samples were obtained before waterjetting and also after the shoulders of the runway
were treated. Figure 8.4 shows that the treated shoulder presents a higher MTD, but this is still
not enough to reach the required value of 1.3 mm set by Schiphol. The untreated middle part
remains the same, below 1 mm on average. To facilitate understanding, the two edges of the
investigated area are labelled A and B: A for the edge closer to the runway and B for the other.

(a) Surface before waterjetting (b) Surface after waterjetting

Figure 8.4: Comparison of surfaces before and after waterjetting



The calculation presented in figure 8.5 shows an MTD increase due to waterjetting of 23% in
the A part and 18% in the B part.

Figure 8.5: Results of analysis taxiway surface

Some characteristics of the asphalt performance and of the manufacturing process already emerge.
First, it can be seen that after the compaction process the central part has a lower quality
than the shoulders. The average value is 0.95 mm, lower than the EASA requirement. The same
appears from the uniform sampling, where the mean MTD is 0.94 mm.
At the same time the waterjetting process proved to increase the MTD of the asphalt by roughly
20% with respect to the original value. Compared to the measurements of March, however, the
overall values are lower. The reason for this is still under investigation and will help to
understand how to improve the performance of the asphalt.
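For reference, the percentage increases quoted in this chapter are presumably computed as the relative change of the mean MTD before and after treatment:

MTD increase [%] = (MTD_after − MTD_before) / MTD_before × 100

so that, for example, a rise from 0.95 mm to 1.17 mm corresponds to an increase of about 23%.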



8.4 Measurements on the Runway
The second and larger set of measurements came from the runway, where four main sets of
measurements were executed. The goal was to obtain an overview of the spread of the MTD over a
larger surface area.

8.4.1 Measurements on untreated surface


The first series of measurements can be seen in figure 8.6. This method could be considered a
semi-random one. The three lines were taken in compacted strips in order to analyse the variation
of MTD due to the compaction process. It was not possible to choose the location of the strips
because the construction process was still ongoing and part of the runway was not paved yet.

The data analysis provides the following output:

• Mean = 1.19 mm

• St. Deviation = 0.06 mm

From these data it can be said that the performance of the runway's pavement is superior to that
of the taxiway. This is indicated by the green circles in the central part of the runway. Moreover,
it is possible to recognise areas of different quality. On the northern part of the runway a
concentration of yellow and red spots is observed, representing a lower quality surface. It can
also be seen that there are groups of three dots that make it easy to recognise the lower quality
areas. The centre is characterised by the presence of some high quality areas. This means that
during the compaction process, in some locations, the MTD already meets the Schiphol requirements,
but uniformity is not achieved.

Figure 8.6: Measurement on untreated area



8.4.2 Measurements on semi-treated surface
Due to time constraints and delays during the compaction process it was not possible to evaluate
the whole runway both before and after waterjetting. For this reason it was decided to take
samples according to the monitoring plan when the waterjetting process was not yet completed.

The sampling methodology applied in this case was the modified CROW proposed by Heijmans. In
figure 8.7 all the required points are plotted, but it is difficult to see each point separately
because they are very close together on the surface. For this reason figure 8.9 distinguishes
three different images, each with one class of values. Please note that the green values represent
values above 1.3 mm, the yellow ones values between 1 mm and 1.3 mm and the red ones values below
1 mm. The number of samples amounted to 112, with:

• Mean = 1.17 mm

• St. Deviation = 0.06 mm

Looking at the second image in figure 8.9 it is possible to notice that the waterjetting process
presents a specific pattern and does not remove the mastic uniformly. The waterjetting process was
more complex than the other tasks of the maintenance process. Several factors affect the quality
of the waterjetting, such as the speed of the truck and the pressure of the water. When the speed
of the waterjetting machine was too high the bitumen was not properly removed; the same result was
obtained when the pressure was not high enough. But a too high pressure is not recommended either,
because in that case too much bitumen is removed and the pavement's life expectancy is reduced.

Figure 8.7: Measurement according to the monitoring plan

It was also observed that one waterjetting treatment was not enough to reach a proper texture
depth. To solve this problem a second treatment was planned. With a double treatment the process
has proven to be efficient enough, as will be shown in the next sections.

This methodology takes three points with a fixed distance of 1.75 m between two measurements,
which should allow the quality of the compaction process to be evaluated. But as can be seen from
figure 8.8, the waterjetting machine has a width of no more than 1.5 m. The waterjetting process
itself was not uniform, as figure 8.8 shows, and the quality changes from strip to strip. This is
the reason why a location with three points may have three totally different values.
Figure 8.8 shows that the left part has a proper bitumen removal while the other part has a
lower visual texture depth. The manufacturing process, in particular the waterjetting, apparently
strongly affects the quality of the pavement in terms of TD.

Figure 8.8: Waterjetting process and heterogeneity of TD surface

For an easier comprehension of the MTD distribution, figure 8.9 shows the distribution of
the red, yellow and green points on the surface. Some areas appear to have a lower MTD value,
but the general picture is a homogeneous distribution over the entire surface.

Figure 8.9: Distribution of MTD values on the surface



8.4.3 Homogeneous distribution
Once the paving process was completed and the surface had been treated at least once, the uniform
sampling strategy was used. In total 39 measurements were planned, as can be seen in figure 8.10.

Compared to the previous plot, the surface presented a higher overall quality, because the
waterjetting process had already been executed at least once. Some areas, such as the two external
strips, had already been waterjetted twice.

Note the left strip, which presents a concentration of lower quality points in the upper part. In
this area the EASA requirement is met but not the one imposed by Schiphol. This representation of
the points comes from the GPS coordinates and allows a quick and easy understanding of the surface
quality at different locations.

The analysis of the sample values confirms the previous visual interpretation:

• Mean = 1.45 mm

• St. Deviation = 0.1 mm

The mean is considerably higher than in the previous case, i.e. an increase of MTD of 23%. This
value is coherent with the test previously executed on the taxiway.

Figure 8.10: Application of the uniform strategy



8.4.4 Final Measurements
When the works were completely finished, engineers from Heijmans had the chance to take some
final measurements. Exactly 33 locations were measured, in accordance with the correlation test
with the Sand Patch method; the data presented in section 8.1.1 come from this group of
measurements. The plot in figure 8.11 shows the position and intensity of these values. In this
case the numerical results are the following:

• Mean = 1.48 mm

• St. Deviation = 0.01 mm

Also in this case the mean is considerably higher than that of the untreated surface. In this
specific case the MTD has increased by 25% compared to the first group of measurements; this value
is also in accordance with the test on the taxiway. The second waterjetting process has slightly
increased the MTD but has strongly reduced the variance. Thus the surface is more uniform and the
overall quality has increased. It is still possible to find lower quality areas, as in the bottom
part, but in general there are no values below 1 mm, in accordance with the first group of dense
measurements from February 2018.

Figure 8.11: Last group of measurements



8.5 Conclusion of the measurement process
During this measuring process the difficulties caused by delays and defects in the waterjetting
process forced an adaptation of the measurement strategy. It was not possible to try all three
sampling methodologies exactly, but this is not considered a major issue, because without knowing
the exact value of all the surface locations it would not have been possible to determine the
exact performance of each methodology.

The focus was more on the analysis of the construction process and its influence on the
surface quality. It has been highlighted that the compaction process itself already influences
the pavement's properties. In some areas the MTD already satisfies the Schiphol requirements,
as seen in figure 8.6. The general mean of an untreated surface remains low and increases when it
is treated with high pressure water. If this process is executed correctly the proper amount of
bitumen is removed from the surface, increasing the MTD by approximately 25% (figure 8.12).
Moreover, this procedure also increases the skid resistance of the surface, because the removal of
the bitumen uncovers the micro-texture of the aggregates.

Figure 8.12: Effect of Waterjetting on MTD

It has been observed that the waterjetting process needs attention and supervision to obtain
a homogeneous surface. In particular, the machine is characterised by two brushes that are
supposed to help remove the bitumen. The quality of the two parallel brushes is hardly ever the
same, thus the quantity of bitumen removed is not the same. This can create a series of strips
with very different texture depths. The consequence is that during the measurement process it is
very easy to measure a series of locations only in a good or a better strip. The Uniform sampling
methodology could, for example, have an entire column of measurements on a high quality or low
quality strip. The final quality of the waterjetting process can be improved by ensuring that the
speed of the waterjetting machine is kept constant and the brushes are regularly changed.

In case this pattern of high and low quality strips is present, a second waterjetting process is
planned. The process becomes in this case more delicate, because an excess of pressure or passes
could remove more bitumen than necessary. In the worst circumstances the surface could suffer
damage that reduces the expected lifetime. In the last group of measurements this second treatment
has proven not to increase the MTD appreciably, but to strongly reduce the variance of the data.
The MTD difference between the homogeneous distribution of samples and the final measurements is
only 0.03 mm, while the standard deviation has decreased by a factor of ten, from 0.1 to 0.01.
This means that the final surface is ten times more homogeneous. The second waterjetting process
has thus proven to be effective in increasing the total quality of the surface.

In general it can be said that both paving and waterjetting influence the MTD. The latter process
in particular has proven to increase the texture depth considerably, by about 25%. These values
are coherent with the measurements obtained on the taxiway; an overview of the final results can
be found in figure 8.12.



Chapter 9
Conclusions and recommendations

In this chapter the final conclusions of the research are described. Starting from the research
question, the final outcomes of the thesis are presented. This leads to some recommendations for
Schiphol and Heijmans. Finally, some recommendations for future research are proposed.

9.1 Conclusions
The starting point for this thesis was the following research question:

“Given a surface characterised by a predefined manufacturing process, which


minimum number of samples and at which locations provide the lowest relative
error between the real mean of the surface and the mean texture depth of the
samples collected?”

In the definition of the research boundaries three main sampling techniques have been selected:
modified CROW, Uniform and Hammersley. To investigate which technique could guarantee the best MTD
evaluation, surfaces have been simulated with Python scripts. This methodology was based on an
adapted Monte Carlo simulation, where the surfaces were repeatedly simulated and the different
techniques applied. A first simulation technique, based on an iteration process, proved to be
unreliable due to the high variability of the surface texture depth values: it has been shown that
the relative error only stabilised to acceptable values after 1000-2000 samples were used. A
better set of results was obtained with the second simulation methodology, in which a fixed series
of sample sizes per sampling methodology is applied to 1000 different simulated surfaces.

In this case the results are consistent and coherent with the analysis proposed by Colosimo et
al. [8]. As in their analysis, it has been possible to show the decrease of the relative error with
an increasing number of samples, in accordance with the behaviour of the U function presented in
the ISO regulations [13]. Figure 9.1 shows the similarity between the curves proposed by Colosimo
and the ones obtained from the simulations.
The results highlighted that the behaviour is regular for all three methodologies in the case of a
homogeneous surface without low quality areas. Moreover, the best performing methodology in this
ideal situation is the Uniform methodology, because it is able to provide a relative error lower
than 1% with only 70 samples.

The situation is slightly different when low quality areas are present. The Hammersley sampling
methodology is not affected by the different types of surface, resulting in a smooth behaviour in
all circumstances. The situation is different for the CROW and the Uniform methodology: the latter
is only slightly disturbed at 55 samples, while CROW gives several peaks in the curves. This is
due to the fact that these two methodologies can in some cases relate to the manufacturing
process, and consequently the reliability of the measurements decreases.

(a) U value with different strategies (b) Results from Simulation

Figure 9.1: Comparison of Colosimo's results and own results

The main conclusion arising from the simulation analysis is that a random selection of sampling
points (the Hammersley methodology) appears to be the most reliable, because no relation with the
manufacturing process can be established for any number of samples. The downside of this
methodology is that a high number of samples is required to reach a relative error lower than 1%:
in the order of 160 samples for a runway strip of 500 m length and 60 m width. The Uniform
methodology, from this point of view, appears to be the most accurate with the lowest number of
samples, because it can reach a low relative error with only 70-80 samples; however, there is the
risk of a synchronisation with the pattern left by the construction process and an underestimation
of the real MTD. The CROW methodology is considered the least efficient, because it requires
almost the same number of samples as the Hammersley methodology to reach a low relative error and
it is less reliable than the Uniform strategy. For these reasons, from a mathematical point of
view, the CROW strategy is not considered suitable for this kind of measurements.

The numbers of samples required for each methodology refer to the test area of 500 m length on
the Polderbaan where the FFX has been placed. If the number of samples required for a different
area is of interest, a proportion with the test area has to be made. As an example, the entire
Polderbaan has a length of 3500 m, i.e. 7 times the test area; this means that for the entire
runway 1260 measurements are required for the Hammersley strategy and 490 for the Uniform one.
In future research it would be of interest to run the same simulation using not the test stretch
but the entire runway as reference area. This would give a more accurate result than a simple
proportional calculation. In this research this has not been done because of software and hardware
limits: the simulation of the entire runway requires high calculation power to be executed in a
limited number of hours.

The field experience has proven that the combination of paving, compaction and the mixture
properties on one side, and the waterjetting process on the other side, affects the properties of
the surface. In particular the mixture aggregates and the paving process create different macro
areas with different surface properties. The surface is then further influenced by the
waterjetting process, which may leave specific patterns due to the machinery properties and the
water pressure. These irregularities are mitigated by a second waterjetting process, which is
proven by a reduction of the variance in the sample distribution. It has also been shown that the
overall MTD increases by 25% after the second waterjetting process, which confirms the need for
this double treatment and its efficiency.

The field experience has also proven that unexpected events, such as problems with the production
or with some machinery, can influence the texture quality of the surface. The main patterns of the
construction process have been recorded and translated into parameters that regulate the surface
simulation, but this process has some limitations. Each construction process has some main
recurrent patterns and a series of variable influences on the surface. What arises from this
analysis is that a sampling methodology that presents specific patterns may interfere with the
manufacturing process and affect the reliability of the results. This discrepancy is however
limited to a 3-4% relative error, which means that those methodologies still reach roughly 96%
precision.

The main outcome of what has been described is that an SMA pavement surface presents patterns
left by the manufacturing process. The most reliable method to determine the MTD of such surfaces
is a random selection of samples, represented in this case by the Hammersley strategy. Other
sampling strategies with a fixed, predefined structure are less reliable, because they can
coincidentally relate to the patterns left by the manufacturing process.
The answer to the research question recalled at the beginning of the chapter is to use 160 samples
with a Hammersley distribution. According to the results this ensures a low relative error (<1%)
and thus a high reliability of the MTD. The number of samples is higher than for the Uniform
strategy, but the reliability is also higher; with the Uniform methodology there would be the risk
of a relation between the sampling strategy and the manufacturing signature.

Please note that with this thesis Schiphol and Heijmans obtain all the programs created in Python.
This provides them with a tool that automatically plots all the samples, both on a simulated
surface and on a map with GPS coordinates. This enables them to see on the map where the samples
have been taken and to get an overview of the pavement quality. It eases the identification of the
high and low quality areas and helps to improve the maintenance process. Observing the
distribution and the characteristics of the low quality areas can help the contractor to define
which part of the construction process affects the texture depth of the pavement surface the most.
This opens the door for mitigation measures and consequently increases the quality of the final
product.



9.2 Recommendations for the industry
The outcomes of the analyses, although mainly theoretical, lead to some recommendations for
the industry.

Firstly, it has to be said that the three methodologies require different measuring times. The
Uniform methodology is the fastest to execute due to the simplicity of the sample grid. The
CROW methodology is more complicated but still regular, in particular because three adjacent
samples are taken at each location. The most complex methodology is the Hammersley, because this
algorithm simulates a random distribution; it is therefore more complex to define the map and to
execute the measurements correctly.

With these premises in mind, a company has to evaluate the trade-off between the time dedicated
to quality control and the requested level of reliability. The most reliable option would be to
measure 180 locations (for each 500 m) with the Hammersley methodology. This ensures a relative
error lower than 1% (thus a high accuracy), but it also requires more than 2 hours. Moreover, this
methodology is more reliable because there is no risk of misleading measurement values.

In case of a lack of available time it is also possible to reduce the number of samples
considerably and implement the Uniform strategy. With 70 samples the relative error is already
lower than 1%, but it has to be taken into account that the reliability is lower. In any case the
simulations have shown relative errors no higher than 5%. In practice this means that the MTD
value obtained risks being lower than the real one, but with an error limited to 5%. The company
should be aware of this possibility and may accept a lower reliability of the measurements.

To recapitulate: if the company wants the highest reliability of the measurements and
simultaneously a maximum 1% relative error, it is suggested to take 180 measurements for each
500 m with the Hammersley methodology. If a lower reliability of the measurements is accepted, 70
samples for each 500 m of runway can be measured with the Uniform distribution; in this case the
MTD value obtained may be up to 5% lower than the real value.

The CROW methodology is not suggested, because it still takes a considerable amount of time and
samples to obtain satisfactory results. More importantly, it is the methodology that is most
likely to interact with the manufacturing signature.

Moreover a main strategy is proposed:

• Apply the Hammersley strategy right after the completion of the construction process. This
will ensure the highest reliability in the definition of the quality of the surface in terms of
texture depth.

• For regular monitoring of the quality a Uniform distribution can be implemented. In this
case the reliability is lower but the measurements are faster. This could be interesting for
evaluating the behaviour of the surface over time: limited time would be needed, and a plot with
a regular structure such as the Uniform one facilitates the identification of damaged or lower
quality areas.

It is also suggested to use the plotting tool to visually evaluate if the sampling procedures
have been executed correctly.



Overnight maintenance strategy
This research may also be of interest for the quality control during the overnight maintenance
strategy. This maintenance strategy refers to maintenance operations executed only during the
night. During the daytime the runway is operational and the quality of the pavement needs to
meet the EASA requirements.

With this maintenance strategy each night a limited area is renewed because the time available
is limited. The quality control in this scenario needs to be fast and produce a sufficient level of
reliability. The most appropriate sampling methodology in this case would be the Uniform. This
ensures an acceptable reliability and strongly reduces the number of samples required.



Bibliography

[1] Uncertainty of measurement Part 3: Guide to the expression of uncertainty in measurement


(GUM:1995) Supplement 1: Propagation of distributions using a Monte Carlo method
ISO/IEC GUIDE 98-3/Suppl.1:2008(E). 2008.

[2] M. Badar, S. Raman, P. Pulat, and R. L. Shehab. Experimental analysis of search-based


selection of sample points for straightness and flatness estimation. 127, 02 2005.

[3] M. Brouwer, E. van Calck, J. Knoester, T. Joustra, N. Schmidt, and S. Heblij. Beschikbaarheid
strategie. Schiphol Annual Report 2015, 2015.

[4] X. Cao. Effective perturbation distributions for small samples in simultaneous perturbation
stochastic approximation. In 2011 45th Annual Conference on Information Sciences and
Systems, pages 1–5, March 2011.

[5] G. Casella and R. L. Berger. Statistical inference, volume 2. Duxbury Pacific Grove, CA,
2002.

[6] W. Chamberlin and D. Amsler. Measuring surface texture by the sand-patch method.
Pavement Surface Characteristics and Materials, pages 3–15, 1982.

[7] G. Claeskens and N. L. Hjort. Model Selection and Model Averaging. Number 9780521852258
in Cambridge Books. Cambridge University Press, October 2008.

[8] B. M. Colosimo and N. Senin. Geometric Tolerances. Milan, first edition, 2011.

[9] A. Corrado and W. Polini. Manufacturing signature in variational and vector-loop models
for tolerance analysis of rigid parts. The International Journal of Advanced Manufacturing
Technology, 88(5):2153–2161, Feb 2017.

[10] EASA. Certification Specifications (CS) and Guidance Material (GM) for Aerodromes Design.
European Aviation Safety Agency.

[11] P. S. GmbH. Possehl Antiskid: the special high-friction surface for takeoff and landing
runways. Technical report.

[12] Heathrow. Flight performance-annual report 2014. 2014.

[13] ISO/IEC. ISO/IEC GUIDE 99:2007(E/R). 2007.

[14] J. Knoester and N. Schmidt. 20170707 Adviesrapport Lange termijn onderhoudstrategie


v0.7. Schiphol Report, v 0.7:40, 2017.

[15] D.-H. Lee, M.-G. Kim, and N.-G. Cho. Characterization of the sampling in optical measure-
ments of machined surface textures. Proceedings of the Institution of Mechanical Engineers,
Part B: Journal of Engineering Manufacture, 230(11):2047–2063, 2016.

[16] B. Maria Colosimo, E. Gutierrez Moya, and G. Moroni. Statistical Sampling Strategies for
Geometric Tolerance Inspection by CMM. 23(1):109–121, 2008.

[17] G. Moroni and M. Pacella. An approach based on process signature modeling for roundness
evaluation of manufactured items. 8, 06 2008.

[18] G. Moroni and S. Petrò. Optimal inspection strategy planning for geometric tolerance
verification. Precision Engineering, 38(1):71 – 81, 2014.

[19] J. Quayson, S. Grullón, T. Japi, P. DiFulco, and P. Fushan. Airport traffic report. Technical
report, 2017.

[20] Schiphol. Large-scale maintenance of runway 06-24 (kaagbaan) at schiphol, 2016.

[21] M. M. Willemsen. Eindrapport Flightflex. 2016.

[22] T. Woo, R. Liang, C. Hsieh, and N. Lee. Efficient sampling for surface measurements.
Journal of Manufacturing Systems, 14(5):345 – 354, 1995.

[23] L. Yufeng, H. Yucheng, S. Wenjuan, N. Harikrishnan, L. D. Stephen, and W. Linbing.


Effect of coarse aggregate morphology on the mechanical properties of stone matrix asphalt.
Construction and Building Materials, 152:48 – 56, 2017.
Appendix

# EXPORT DATA FROM THE ELAtextur MACHINE

from pandas import DataFrame
import pandas as pd
import os
import re
import numpy as np
import seaborn as sns
import warnings
import scipy.stats as st
import statsmodels as sm
import matplotlib as mpl
import matplotlib.pyplot as plt
import fuzioni as fz
import random as rd


def get_num(x):
    # Keep only the digits and the decimal point of a line and return the value as a float
    return float(''.join(ele for ele in x if ele.isdigit() or ele == '.'))


results = []

folder_path = r'C:\Users\wano2\Desktop\20180608'

for file in sorted(os.listdir(folder_path)):

    path = os.path.join(folder_path, file)
    # print(path)

    with open(path) as f:

        measure = []
        measure.append(file)

        for line in f:

            if "<mpd>" in line:
                mpd = get_num(line) / 1000
                measure.append(mpd)

            if "<etd>" in line:
                etd = get_num(line) / 1000
                measure.append(etd)

            if "<latitude>" in line:
                # Convert degrees and decimal minutes to decimal degrees
                latitude = str(get_num(line))
                for i in range(len(latitude)):
                    if latitude[i] == '.':
                        splitat_l = i - 2
                        risultato_l = float(latitude[:splitat_l]) + float(latitude[splitat_l:]) / 60
                        measure.append(risultato_l)

            if "<longitude>" in line:
                longitude = str(get_num(line))
                for i in range(len(longitude)):
                    if longitude[i] == '.':
                        splitat_o = i - 2
                        risultato_o = float(longitude[:splitat_o]) + float(longitude[splitat_o:]) / 60
                        measure.append(risultato_o)

            if "<time>" in line:
                time = list(map(str, re.findall("\d+\:\d+", line)))
                measure.append(time)

        results.append(measure)

# mySubString = myString[myString.find("!")+1:myString.find("@")]
# myfile = open(r'C:\Users\wano2\Desktop\20180205\00001279.ELA')
# mytxt = myfile.read()
# mytxt

John_points = pd.DataFrame(data=results,
                           columns=['file', 'time', 'MTD', 'ETD', 'latitude', 'longitude'])

df = John_points
writer = pd.ExcelWriter('8-6-18.xlsx')
df.to_excel(writer, 'Measurements')
writer.save()
# PLOTTING VALUES ON GOOGLE MAPS IMAGES

from gmplot import gmplot

# Place the map (centred near the Polderbaan)
gmap = gmplot.GoogleMapPlotter(52.33813, 4.71003083, 15)

top_attraction_lats_r = []
top_attraction_lons_r = []
top_attraction_lats_y = []
top_attraction_lons_y = []
top_attraction_lats_g = []
top_attraction_lons_g = []

# Split the latitudes into red (<1 mm), yellow (1-1.3 mm) and green (>=1.3 mm) groups
counter = 0
for i in df['latitude']:
    if df['MTD'][counter] < 1:
        top_attraction_lats_r.append(i)
        counter = counter + 1
    elif df['MTD'][counter] < 1.3 and df['MTD'][counter] >= 1:
        top_attraction_lats_y.append(i)
        counter = counter + 1
    elif df['MTD'][counter] >= 1.3:
        top_attraction_lats_g.append(i)
        counter = counter + 1

# Same split for the longitudes
counter = 0
for i in df['longitude']:
    if df['MTD'][counter] < 1:
        top_attraction_lons_r.append(i)
        counter = counter + 1
    elif df['MTD'][counter] < 1.3 and df['MTD'][counter] >= 1:
        top_attraction_lons_y.append(i)
        counter = counter + 1
    elif df['MTD'][counter] >= 1.3:
        top_attraction_lons_g.append(i)
        counter = counter + 1

gmap.scatter(top_attraction_lats_r, top_attraction_lons_r, 'red', size=0.5, marker=False)
gmap.scatter(top_attraction_lats_y, top_attraction_lons_y, 'yellow', size=0.5, marker=False)
gmap.scatter(top_attraction_lats_g, top_attraction_lons_g, 'green', size=0.5, marker=False)

# Draw and open the resulting map
gmap.draw("provaplot.html")
import webbrowser, os
webbrowser.open("provaplot.html")


#FUNCTIONS USED DURING THE DIFFERENT ANALYSES. OWN CREATION

# ## All the functions needed for the analysis

# ## Matrix generator

import os
import re
import random as rd
import warnings

import numpy as np
import pandas as pd
import scipy as si
import scipy.stats as st
import statsmodels as sm
import seaborn as sns
import matplotlib
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable

import fuzioni as fz


def matrice_unif(righe, colonne):

    # Create an array filled with random values drawn from a uniform distribution
    T = np.random.uniform(0.5, 2, [righe, colonne])
    np.set_printoptions(precision=3)
    return T

# ## Normally distributed matrix

def matrice_norm(righe, colonne):
    T = np.random.normal(1.2, 0.8, [righe, colonne])
    np.set_printoptions(precision=3)
    return T
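
# Illustrative usage sketch (added example; the grid sizes are assumptions, not values
# taken from the thesis): generate a small synthetic surface with each generator and
# inspect its spread before applying any sampling technique.
def esempio_generatori():
    T_unif = matrice_unif(50, 40)
    T_norm = matrice_norm(50, 40)
    print('uniform surface : mean %.3f, std %.3f' % (np.mean(T_unif), np.std(T_unif)))
    print('normal surface  : mean %.3f, std %.3f' % (np.mean(T_norm), np.std(T_norm)))
    return T_unif, T_norm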

# ## Matrix with disturbance

def matrice_dist(righe, colonne):

    X = np.random.normal(0, 0.32, [righe, colonne])
    Y = np.random.uniform(0.6, 2, [righe, colonne])

    T = []

    # iterate through rows
    for i in range(len(X)):
        rig = []
        # iterate through columns
        for j in range(len(X[0])):
            val = X[i][j] + Y[i][j]
            rig.append(val)
        T.append(rig)

    return T

# ## Composite matrix

#var1, var2 = input("enter two numbers: ").split(' ')
#print(var1, var2)

# ## Matrix generator from a fitted distribution

def simulated_pavement(distribution, param, raw, column):

    "Return the matrix representing the surface and the array with all the values for the statistics"
    T = []
    values = []

    arg = param[:-2]
    loc = param[-2]
    scale = param[-1]

    for i in range(raw):
        riga = []
        for j in range(column):
            x = distribution.ppf(np.random.uniform(), loc=loc, scale=scale, *arg)
            riga.append(x)
        T.append(riga)

    for i in range(raw):
        for j in range(column):
            x = T[i][j]
            values.append(x)

    return T, values

# ## General matrix info

#from statistics import mean
# creation of different types of vectors

def info_gen(matrix):
    """Return the statistical values of a matrix, in order: mean, standard deviation and variance"""
    media_g = np.mean(matrix)
    dev_std_g = np.std(matrix)
    varianza_g = np.var(matrix)
    return media_g, dev_std_g, varianza_g

# ## Sampling Techniques

def crow_seq(matrice, scatter):
    '''lx and ly have to be expressed in m, not other measure units'''

    crow = []
    nx = len(matrice)
    ny = len(matrice[0])
    b = [0, 1, 2]
    k = 0
    #lstrip = ly // 10
    #xsequence = np.arange(0, nx, 1500)

    crowseq = np.linspace(0, nx - 1, num=int(scatter), dtype=int)

    fysequence = np.arange(8, ny, 48)
    sysequence = np.arange(8 + 16, ny, 48)
    tysequence = np.arange(8 + 32, ny, 48)

    a = b * len(crowseq)

    for i in crowseq:
        riga = []
        if a[k] == 0:
            for j in fysequence:
                xi = i
                yi = j
                ye = j + 4
                yu = j - 4
                val1 = matrice[xi][yu]
                val2 = matrice[xi][yi]
                val3 = matrice[xi][ye]
                riga.append(val1)
                riga.append(val2)
                riga.append(val3)
        elif a[k] == 1:
            for j in sysequence:
                xi = i
                yi = j
                ye = j + 4
                yu = j - 4
                val1 = matrice[xi][yu]
                val2 = matrice[xi][yi]
                val3 = matrice[xi][ye]
                riga.append(val1)
                riga.append(val2)
                riga.append(val3)
        elif a[k] == 2:
            for j in tysequence:
                xi = i
                yi = j
                ye = j + 4
                yu = j - 4
                val1 = matrice[xi][yu]
                val2 = matrice[xi][yi]
                val3 = matrice[xi][ye]
                riga.append(val1)
                riga.append(val2)
                riga.append(val3)

        k += 1

        crow.append(riga)

    return crow
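
# Illustrative usage sketch (added example; the 2000 x 240 grid and scatter=10 are
# assumptions): sample a simulated surface with the staggered three-point pattern above
# and compare the sample statistics with those of the full surface.
def esempio_crow_seq():
    T = matrice_norm(2000, 240)
    campione = crow_seq(T, 10)
    print('full surface (mean, std, var):', info_gen(T))
    print('crow sample  (mean, std, var):', info_gen(campione))
    return campione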

# ## Draw sampling map

def draw_crow(matrice):

    nx = len(matrice)
    ny = len(matrice[0])
    b = [0, 1, 2]
    k = 0
    dis = np.zeros((nx, ny))

    #lstrip = ly // 10
    #xsequence = np.arange(0, nx, 1500)

    crowseq = np.arange(0, nx, 500)

    fysequence = np.arange(19, ny, 120)
    sysequence = np.arange(19 + 40, ny, 120)
    tysequence = np.arange(19 + 80, ny, 120)

    a = b * len(crowseq)

    for i in crowseq:
        if a[k] == 0:
            for j in fysequence:
                xi = i
                yi = j
                ye = j + 5
                yu = j - 5
                dis[xi][yu] += 1
                dis[xi][yi] += 1
                dis[xi][ye] += 1
        elif a[k] == 1:
            for j in sysequence:
                xi = i
                yi = j
                ye = j + 5
                yu = j - 5
                dis[xi][yu] += 1
                dis[xi][yi] += 1
                dis[xi][ye] += 1
        elif a[k] == 2:
            for j in tysequence:
                xi = i
                yi = j
                ye = j + 5
                yu = j - 5
                dis[xi][yu] += 1
                dis[xi][yi] += 1
                dis[xi][ye] += 1

        k += 1

    # collect all the surface values
    values = []
    for i in range(len(matrice)):
        for j in range(len(matrice[0])):
            x = matrice[i][j]
            values.append(x)

    return dis, values

# ## Uniform deterministic method

def unif_semplice(srighe, scolonne, matrice):

    # '''Return a matrix that represents the sample obtained from a quadratic grid on the
    # 2D surface. There is the possibility to increase the number of rows and columns'''
    un_sempl = []

    x = np.linspace(0, len(matrice) - 1, num=srighe, dtype=int)
    y = np.linspace(0, len(matrice[0]) - 1, num=scolonne, dtype=int)
    for i in range(srighe):
        riga = []
        for j in range(scolonne):
            val = matrice[x[i]][y[j]]
            riga.append(val)
        un_sempl.append(riga)

    values = []

    for i in range(len(matrice)):
        for j in range(len(matrice[0])):
            x = matrice[i][j]
            values.append(x)
    return un_sempl
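
# Illustrative usage sketch (added example; the 20 x 20 grid is an assumption): take a
# deterministic square grid of samples from a simulated surface, as shown above.
def esempio_unif_semplice():
    T = matrice_norm(2000, 240)
    campione = unif_semplice(20, 20, T)
    print('grid sample (mean, std, var):', info_gen(campione))
    return campione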

def draw_unif_semplice(srighe, scolonne, matrice):

    # '''Return a matrix that represents the sample obtained from a quadratic grid on the
    # 2D surface. There is the possibility to increase the number of rows and columns'''

    x = np.linspace(0, len(matrice) - 1, num=srighe, dtype=int)
    y = np.linspace(0, len(matrice[0]) - 1, num=scolonne, dtype=int)
    for i in range(srighe):
        for j in range(scolonne):
            matrice[x[i]][y[j]] = 1

    return matrice

# ## Hammersley Method

def Hammerseley_seq(nsamples_xsquare, matrice):
    width = len(matrice[0])
    lenght = len(matrice)
    raw_i = np.arange(0, lenght, width)
    val = []
    for i in range(nsamples_xsquare):

        base = 2
        vdc, denom = 0, 1

        unit = width / nsamples_xsquare

        j = i
        while j:
            denom *= base
            j, remainder = divmod(j, base)
            vdc += remainder / denom
        yi = int(np.arange(0, width + 1, unit)[i])
        xi = int(vdc * width)

        for l in raw_i:
            f = xi + l
            if f < lenght:
                valore = matrice[f][yi]
                val.append(valore)
            else:
                pass

    return val
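
# Illustrative usage sketch (added example; 16 samples per square is an assumption):
# draw a quasi-random Hammersley pattern of measurements from a simulated surface.
def esempio_hammersley():
    T = matrice_norm(2000, 240)
    campione = Hammerseley_seq(16, T)
    print('Hammersley sample (mean, std, var):', info_gen(campione))
    return campione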

def Big_mama_sim(percentage, nM1, nM0, loc):

    width = 6000    # cm
    lenght = 50000  # cm
    unit = 25       # cm

    strip_w = 400   # cm
    strip_l = 5000  # cm

    #percentage = 120/400
    raws_strip = strip_l / unit
    raws_strip1 = raws_strip * percentage
    raws_strip0 = raws_strip - raws_strip1

    column_strip = strip_w / unit

    number_w = width / strip_w
    number_l = lenght / strip_l
    number_strips = number_w * number_l

    Big_mama = []
    for a in np.arange(int(number_l)):

        Big_raw = []

        for i in np.arange(int(number_w)):

            Matrice = []

            x = rd.randint(0, int(raws_strip0) - 1)

            for j in np.arange(int(raws_strip)):

                negstrip = np.arange(x, x + raws_strip1)

                if j in negstrip:
                    raw = (nM1.ppf(np.random.uniform(size=int(column_strip)))
                           - np.random.uniform(low=0, high=loc, size=int(column_strip)))

                if j not in negstrip:
                    raw = nM0.ppf(np.random.uniform(size=int(column_strip)))

                Matrice.append(raw)

            if i == 0:
                Big_raw = Matrice
            else:
                Big_raw = np.hstack((Big_raw, Matrice))

        if a == 0:
            Big_mama = Big_raw
        else:
            Big_mama = np.vstack((Big_mama, Big_raw))

    return Big_mama
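
# Illustrative usage sketch (added example; the two frozen normal distributions and the
# 30% low-quality fraction are assumptions, not the distributions fitted in the thesis).
def esempio_big_mama():
    nM0_hyp = st.norm(loc=1.5, scale=0.2)   # assumed distribution for the regular strips
    nM1_hyp = st.norm(loc=1.5, scale=0.2)   # assumed distribution for the disturbed strips
    superficie = Big_mama_sim(0.3, nM1_hyp, nM0_hyp, 0.2)
    print('simulated runway shape:', np.shape(superficie))
    return superficie
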
def Hammerseley_draw(nsamples_xsquare, matrice):
    width = len(matrice[0])
    lenght = len(matrice)
    raw_i = np.arange(0, lenght, width)
    valy = []
    valx = []

    for i in range(nsamples_xsquare):

        colonna = []
        base = 2
        vdc, denom = 0, 1

        unit = int(width / nsamples_xsquare)

        j = i
        while j:
            denom *= base
            j, remainder = divmod(j, base)
            vdc += remainder / denom
        yi = int(np.arange(0, width + 1, unit)[i])
        xi = int(vdc * width)
        valy.append(yi)

        for l in raw_i:
            f = xi + l

            if f < lenght:
                matrice[f][yi] = 1
                valx.append(f)
            else:
                pass

    return matrice, valy, valx

# ## Get data from ELATextuur

def elatextur_data(pathf):

    def get_num(x):
        return float(''.join(ele for ele in x if ele.isdigit() or ele == '.'))

    results = []

    for file in sorted(os.listdir(pathf)):

        path = os.path.join(pathf, file)
        #print(path)

        with open(path) as strada:

            measure = []
            measure.append(file)

            for line in strada:

                if "<mpd>" in line:
                    mpd = get_num(line) / 1000
                    measure.append(mpd)

                if "<etd>" in line:
                    etd = get_num(line) / 1000
                    measure.append(etd)

                if "<latitude>" in line:
                    latitude = get_num(line)
                    measure.append(latitude)

                if "<longitude>" in line:
                    longitude = get_num(line)
                    measure.append(longitude)

                #if "<time>" in line:
                #    time = list(map(str, re.findall(r"\d+\:\d+", line)))
                #    measure.append(time)

        results.append(measure)

    df = pd.DataFrame(data=results, columns=['file', 'MTD', 'ETD', 'latitude', 'longitude'])
    #df = pd.DataFrame(data=results, columns=['file', 'MTD', 'ETD'])

    return df
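
# Illustrative usage sketch (added example; the folder path is hypothetical): read every
# ELATextuur export in a folder and summarise the measured texture values.
def esempio_elatextur(cartella=r'C:\data\elatextur_exports'):
    misure = elatextur_data(cartella)
    print(misure[['MTD', 'ETD']].describe())
    return misure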

# ## Real representation of the surface

def real_representation(colonne, righe, matrix_col):

    """This works only with columns generated from DataFrame structures; for others,
    generate another algorithm. Returns the first matrix as the exact representation of
    the surface and the second to highlight the values below 1.3"""
    counter = 0
    #colonne = 20
    #righe = 20

    F = []

    for i in range(righe):
        riga = []
        for j in range(colonne):
            riga.append(matrix_col[counter])
            counter += 1
        F.append(riga)

    counter2 = 0

    D = []

    for i in range(righe):
        riga = []
        for j in range(colonne):

            if matrix_col[counter2] < 1.3:
                riga.append(0)
            else:
                riga.append(matrix_col[counter2])

            counter2 += 1

        D.append(riga)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 6))

    fig.tight_layout()
    plt.subplots_adjust(left=None, bottom=None, right=None, top=None,
                        wspace=0.5, hspace=None)

    cmap = plt.cm.get_cmap('hot')

    img = ax1.imshow(F, interpolation='nearest',
                     cmap=cmap,
                     origin='lower')
    divider1 = make_axes_locatable(ax1)
    # Append axes to the right of ax1, with 5% of its width
    cax1 = divider1.append_axes("right", size="5%", pad=0.25)

    cb = plt.colorbar(img, cax=cax1)

    cmap2 = mpl.colors.ListedColormap(['red', 'gold'])

    bounds = [0, 1.3, 3]

    img = ax2.imshow(D, interpolation='nearest',
                     cmap=cmap2,
                     origin='lower')

    divider2 = make_axes_locatable(ax2)
    # Append axes to the right of ax2, with 5% of its width
    cax2 = divider2.append_axes("right", size="5%", pad=0.25)

    cb = plt.colorbar(img, cax=cax2, boundaries=bounds)

    #cb2 = plt.colorbar(img, cmap=cmap2, boundaries=bounds)

    cmap2 = mpl.colors.ListedColormap(['red', 'gold'])

    bounds = [0, 1.3, 3.3]

    #ax2.grid(color='b', linestyle='-', linewidth=1)

    img = ax2.imshow(D, interpolation='nearest',
                     cmap=cmap2,
                     origin='lower')
    divider2 = make_axes_locatable(ax2)
    cax2 = divider2.append_axes("right", size="5%", pad=0.2)

    cb = plt.colorbar(img, cax=cax2, boundaries=bounds)

    #plt.show()

    return F, D

def make_cdf(dist, params, size=1000):

    """Generate the distribution's Cumulative Distribution Function"""

    # Separate parts of parameters
    arg = params[:-2]
    loc = params[-2]
    scale = params[-1]

    # Get sane start and end points of distribution
    start = dist.ppf(0.0000001, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.01, loc=loc, scale=scale)
    end = dist.ppf(0.999, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.99, loc=loc, scale=scale)

    # Build the CDF, turn it into a pandas DataFrame and freeze the distribution
    x = np.linspace(start, end, size)
    y = dist.cdf(x, loc=loc, scale=scale, *arg)
    cdf = pd.DataFrame(y, x)
    n = dist(loc=loc, scale=scale, *arg)

    return cdf, n

# Create models from data

def best_fit_distribution(data, bins=200, ax=None):
    """Model data by finding the best fit distribution to the data"""
    # Get histogram of original data
    y, x = np.histogram(data, bins=bins, density=True)
    x = (x + np.roll(x, -1))[:-1] / 2.0
    z = np.linspace(0, 4, 100)

    KSframe = pd.DataFrame(columns=['distribution', 'arg', 'loc', 'scale', 'pvalue'])

    # Distributions to check
    DISTRIBUTIONS = [
        st.alpha, st.anglit, st.arcsine, st.beta, st.betaprime, st.bradford, st.cauchy, st.chi, st.chi2, st.cosine,
        st.erlang, st.expon, st.exponnorm, st.exponweib, st.exponpow, st.f, st.fatiguelife, st.fisk,
        st.foldcauchy, st.frechet_r, st.frechet_l, st.genlogistic, st.genpareto, st.gennorm, st.genexpon,
        st.genextreme, st.gausshyper, st.gamma, st.gengamma, st.genhalflogistic, st.gilbrat, st.gompertz, st.gumbel_r,
        st.gumbel_l, st.halfcauchy, st.halflogistic, st.halfnorm, st.halfgennorm, st.hypsecant, st.invgamma, st.invgauss,
        st.invweibull, st.johnsonsb, st.johnsonsu, st.ksone, st.kstwobign, st.laplace, st.levy, st.levy_l, st.levy_stable,
        st.logistic, st.loggamma, st.loglaplace, st.lognorm, st.lomax, st.maxwell, st.mielke, st.nakagami, st.ncx2, st.ncf,
        st.nct, st.norm, st.pareto, st.pearson3, st.powerlaw, st.powerlognorm, st.powernorm, st.rdist, st.reciprocal,
        st.rayleigh, st.rice, st.recipinvgauss, st.semicircular, st.t, st.truncexpon, st.truncnorm, st.tukeylambda,
        st.uniform, st.vonmises, st.vonmises_line, st.wald, st.weibull_min, st.weibull_max, st.wrapcauchy
    ]
    #, st.dweibull, st.foldnorm, st.burr, st.dgamma, st.triang
    # Best holders
    best_distribution = st.norm
    best_params = (0.0, 1.0)
    best_sse = np.inf

    # Estimate distribution parameters from data
    for distribution in DISTRIBUTIONS:

        # Try to fit the distribution
        try:
            # Ignore warnings from data that can't be fit
            with warnings.catch_warnings():
                warnings.filterwarnings('ignore')

                # fit dist to data
                params = distribution.fit(data)

                # Separate parts of parameters
                arg = params[:-2]
                loc = params[-2]
                scale = params[-1]

                # Calculate fitted PDF and error with fit in distribution
                pdf = distribution.pdf(x, loc=loc, scale=scale, *arg)
                sse = np.sum(np.power(y - pdf, 2.0))
                pdf2 = distribution.pdf(z, loc=loc, scale=scale, *arg)
                #n = distribution(loc=loc, scale=scale, *arg)

                # Kolmogorov-Smirnov Test
                #s, p = kstest(df['MTD'], n.cdf)

                # Create matrix of kstest results
                #df2 = pd.DataFrame([distribution.name, arg, loc, scale, p],
                #                   columns=['distribution', 'arg', 'loc', 'scale', 'pvalue'])
                #KSframe.append(df2, ignore_index=True)

                # if axis passed in, add to plot
                try:
                    if ax:
                        pd.Series(pdf2, z).plot(ax=ax)
                except Exception:
                    pass

                # identify if this distribution is better
                if best_sse > sse > 0:
                    best_distribution = distribution
                    best_params = params
                    best_sse = sse

        except Exception:
            pass

    #finale = KSframe.sort_values(by=['pvalue'])
    return best_distribution.name, best_params

def make_pdf(dist, params, size=1000):

    """Generate the distribution's Probability Density Function"""

    # Separate parts of parameters
    arg = params[:-2]
    loc = params[-2]
    scale = params[-1]

    # Get sane start and end points of distribution
    start = dist.ppf(0.0001, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.01, loc=loc, scale=scale)
    end = dist.ppf(0.999, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.99, loc=loc, scale=scale)

    # Build PDF and turn into a pandas DataFrame
    x = np.linspace(0, 3.2, size)
    y = dist.pdf(x, loc=loc, scale=scale, *arg)
    pdf = pd.DataFrame(y, x)

    return pdf

def best_curv_fit(data, binss):

    matplotlib.rcParams['figure.figsize'] = (8, 6)
    matplotlib.style.use('ggplot')

    # Find and plot best fit
    # Load data from statsmodels datasets
    #data = df["MTD"]

    #binss = 25

    # Plot for comparison
    fig, ax = plt.subplots()
    ax = data.plot(kind='hist', bins=binss, normed=True, alpha=0.5,
                   color=plt.rcParams['axes.color_cycle'][1])
    # Save plot limits
    #dataYLim = ax.get_ylim()

    # Find best fit distribution
    best_fit_name, best_fir_paramms = best_fit_distribution(data, binss, ax)  #, KSresults
    best_dist = getattr(st, best_fit_name)

    # Update plots
    ax.set_ylim((0, 1.8))
    ax.set_xlim((0.7, 3))
    #ax.set_ylim(dataYLim)
    ax.set_title(u'MTD values\n All Fitted Distributions')
    ax.set_xlabel(u'MTD')
    ax.set_ylabel('Frequency')

    # Make PDF
    pdf = make_pdf(best_dist, best_fir_paramms)
    # Make CDF
    cdf, n = make_cdf(best_dist, best_fir_paramms)

    # Display
    #fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 6))

    sfondo1 = pdf.plot(lw=2, label='PDF', legend=True)

    ax1 = data.plot(kind='hist', bins=binss, normed=True, alpha=0.5,
                    label='Data', legend=True, ax=sfondo1)

    param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
    param_str = ', '.join(['{}={:0.2f}'.format(k, v) for k, v in zip(param_names, best_fir_paramms)])
    dist_str = '{}({})'.format(best_fit_name, param_str)

    ax1.set_title(u'MTD with best fit distribution\n' + dist_str)
    ax1.set_xlabel(u'MTD')
    ax1.set_ylabel('Frequency')

    # Display
    ax2 = cdf.plot(lw=2, label='CDF', legend=True)

    param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
    param_str = ', '.join(['{}={:0.2f}'.format(k, v) for k, v in zip(param_names, best_fir_paramms)])
    dist_str = '{}({})'.format(best_fit_name, param_str)

    ax2.set_title(u'MTD with best fit distribution\n' + dist_str)
    ax2.set_xlabel(u'MTD')
    ax2.set_ylabel('Frequency')

    stat, pvalue = st.kstest(data, n.cdf)
    print('The Kolmogorov-Smirnov Test produced a P-Value of', pvalue)

    return best_fir_paramms, best_dist, n

def best_curv_fit_norm(data, binss):

    matplotlib.rcParams['figure.figsize'] = (8, 6)
    matplotlib.style.use('ggplot')

    # Find and plot best fit
    # Load data from statsmodels datasets
    #data = df["MTD"]

    #binss = 25

    # Plot for comparison
    #fig, ax = plt.subplots()
    #ax = data.plot(kind='hist', bins=binss, normed=True, alpha=0.5,
    #               color=plt.rcParams['axes.color_cycle'][1])
    # Save plot limits
    #dataYLim = ax.get_ylim()

    # Fit a normal distribution
    best_dist = st.norm
    best_fir_paramms = best_dist.fit(data)  #, KSresults
    best_dist = getattr(st, best_dist.name)

    # Update plots
    #ax.set_ylim((0, 1.8))
    #ax.set_xlim((0.7, 3))
    #ax.set_ylim(dataYLim)
    #ax.set_title(u'MTD values\n All Fitted Distributions')
    #ax.set_xlabel(u'MTD')
    #ax.set_ylabel('Frequency')

    # Make PDF
    pdf = make_pdf(best_dist, best_fir_paramms)

    # Make CDF
    cdf, n = make_cdf(best_dist, best_fir_paramms)

    # Display
    #fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 6))

    sfondo1 = pdf.plot(lw=2, label='PDF', legend=True)

    ax1 = data.plot(kind='hist', bins=binss, normed=True, alpha=0.5,
                    label='Data', legend=True, ax=sfondo1)

    param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
    param_str = ', '.join(['{}={:0.2f}'.format(k, v) for k, v in zip(param_names, best_fir_paramms)])
    dist_str = '{}({})'.format(best_dist.name, param_str)

    ax1.set_title(u'MTD with best fit distribution\n' + dist_str)
    ax1.set_xlabel(u'MTD')
    ax1.set_ylabel('Frequency')

    # Display
    ax2 = cdf.plot(lw=2, label='CDF', legend=True)

    param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
    param_str = ', '.join(['{}={:0.2f}'.format(k, v) for k, v in zip(param_names, best_fir_paramms)])
    dist_str = '{}({})'.format(best_dist.name, param_str)

    ax2.set_title(u'MTD with best fit distribution\n' + dist_str)
    ax2.set_xlabel(u'MTD')
    ax2.set_ylabel('Frequency')

    stat, pvalue = st.kstest(data, n.cdf)
    print('The Kolmogorov-Smirnov Test produced a P-Value of', pvalue)

    return best_fir_paramms, best_dist, n

#Probability of failure with graphs

def prob_fail(best_dist, best_fir_paramms, target):

    z = np.linspace(0, 4, 100)

    arg = best_fir_paramms[:-2]
    loc = best_fir_paramms[-2]
    scale = best_fir_paramms[-1]

    pdf_values = best_dist.pdf(z, loc=loc, scale=scale, *arg)
    cdf_values = best_dist.cdf(z, loc=loc, scale=scale, *arg)
    fill_color = (0, 0, 0, 0.6)  # Light gray in RGBA format.
    line_color = (0, 0, 0, 0.5)  # Medium gray in RGBA format.
    fig, axes = plt.subplots(2, 1, figsize=(8, 6))

    cdf_ax, pdf_ax = axes[:]
    cdf_ax.plot(z, cdf_values)
    pdf_ax.plot(z, pdf_values)

    # Fill area at and to the left of x.
    pdf_ax.fill_between(z, pdf_values,
                        where=z <= target,
                        color=fill_color)
    pd = best_dist.pdf(target, loc=loc, scale=scale, *arg)  # Probability density at this value.

    # Line showing position of x on x-axis of PDF plot.
    pdf_ax.plot([target, target],
                [0, pd], color=line_color)
    cd = best_dist.cdf(target, loc=loc, scale=scale, *arg)  # Cumulative distribution value for this x.

    # Lines showing x and CDF value on CDF plot.
    x_ax_min = cdf_ax.axis()[0]  # x position of y axis on plot.
    cdf_ax.plot([target, target, 0],
                [0, cd, cd], color=line_color)
    cdf_ax.set_title('x = {:.1f}, Probability of failure = {:.2f}%'.format(target, cd * 100))
    # Hide top and right axis lines and ticks to reduce clutter.
    for ax in (cdf_ax, pdf_ax):
        ax.spines['right'].set_visible(False)
        ax.spines['top'].set_visible(False)
        ax.yaxis.set_ticks_position('left')
        ax.xaxis.set_ticks_position('bottom')

    return cd * 100
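
# Illustrative usage sketch (added example; the normal parameters are assumptions):
# probability that the MTD of a surface described by a fitted normal distribution falls
# below the 1.3 target used in the analysis scripts.
def esempio_prob_fail():
    campione = np.random.normal(1.5, 0.15, 1000)
    parametri = st.norm.fit(campione)
    percentuale = prob_fail(st.norm, parametri, 1.3)
    print('probability of failure: %.2f %%' % percentuale)
    return percentuale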

def best_fit_distribution_2(data, bins=200, ax=None):

    """Model data by finding the best fit distribution to the data"""
    # Get histogram of original data
    y, x = np.histogram(data, bins=bins, density=True)
    x = (x + np.roll(x, -1))[:-1] / 2.0
    z = np.linspace(0, 4, 100)

    # Distributions to check
    DISTRIBUTIONS = [st.lognorm, st.norm]
    #, st.dweibull, st.foldnorm, st.burr, st.dgamma, st.triang
    # Best holders
    best_distribution = st.norm
    best_params = (0.0, 1.0)
    best_sse = np.inf

    # Estimate distribution parameters from data
    for distribution in DISTRIBUTIONS:
        # fit dist to data
        params = distribution.fit(data)

        # Separate parts of parameters
        arg = params[:-2]
        loc = params[-2]
        scale = params[-1]

        # Calculate fitted PDF and error with fit in distribution
        pdf = distribution.pdf(x, loc=loc, scale=scale, *arg)
        sse = np.sum(np.power(y - pdf, 2.0))

        # identify if this distribution is better
        if best_sse > sse > 0:
            best_distribution = distribution
            best_params = params
            best_sse = sse
        else:
            pass

    return best_distribution.name, best_params

def curv_fit_Lognorm(data, binss):

    matplotlib.rcParams['figure.figsize'] = (8, 6)
    matplotlib.style.use('ggplot')

    # Find and plot best fit
    # Load data from statsmodels datasets
    #data = df["MTD"]

    #binss = 25

    # Plot for comparison
    #fig, ax = plt.subplots()
    #ax = data.plot(kind='hist', bins=binss, normed=True, alpha=0.5,
    #               color=plt.rcParams['axes.color_cycle'][1])
    # Save plot limits
    #dataYLim = ax.get_ylim()

    # Fit a lognormal distribution
    best_dist = st.lognorm
    best_fir_paramms = best_dist.fit(data)  #, KSresults

    # Update plots
    #ax.set_ylim((0, 1.8))
    #ax.set_xlim((0.7, 3))
    #ax.set_ylim(dataYLim)
    #ax.set_title(u'MTD values\n All Fitted Distributions')
    #ax.set_xlabel(u'MTD')
    #ax.set_ylabel('Frequency')

    # Make PDF
    pdf = make_pdf(best_dist, best_fir_paramms)

    # Make CDF
    cdf, n = make_cdf(best_dist, best_fir_paramms)

    # Display
    #fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 6))

    sfondo1 = pdf.plot(lw=2, label='PDF', legend=True)

    ax1 = data.plot(kind='hist', bins=binss, alpha=0.5, label='Data',
                    legend=True, ax=sfondo1)

    param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
    param_str = ', '.join(['{}={:0.2f}'.format(k, v) for k, v in zip(param_names, best_fir_paramms)])
    dist_str = '{}({})'.format(best_dist.name, param_str)

    ax1.set_title(u'MTD with best fit distribution\n' + dist_str)
    ax1.set_xlabel(u'MTD')
    ax1.set_ylabel('Frequency')
    ax1.set_ylim(ax1.get_ylim())
    ax1.set_xlim(ax1.get_xlim())

    ax2 = cdf.plot(lw=2, label='CDF', legend=True)

    param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
    param_str = ', '.join(['{}={:0.2f}'.format(k, v) for k, v in zip(param_names, best_fir_paramms)])
    dist_str = '{}({})'.format(best_dist.name, param_str)

    ax2.set_title(u'MTD with best fit distribution\n' + dist_str)
    ax2.set_xlabel(u'MTD')
    ax2.set_ylabel('Frequency')
    stat, pvalue = st.kstest(data, n.cdf)
    print('The Kolmogorov-Smirnov Test produced a P-Value of', pvalue)

    return best_fir_paramms, best_dist, n


#ANALYSIS OF MEASUREMENTS
#----------------------------------------------------------------------------

import numpy as np
import pandas as pd
import random as rd
import scipy.stats as st

import fuzioni as fz

#GET DATA FROM ELATEXTUR

df = fz.elatextur_data(r'F:\Documents\Walter\analysis')

#PLOT DATA ON MATRIX

F, D = fz.real_representation(20, 20, df['MTD'])

#DEFINE THE BEST CURVE

best_fir_paramms, best_dist, n = fz.best_curv_fit(df['MTD'], 15)

#EVALUATION OF FAILURE PROBABILITY

pfail = fz.prob_fail(best_dist, best_fir_paramms, 1.3)

#FIT ONLY NORMAL DISTRIBUTION

distribution = st.norm
params = distribution.fit(df['MTD'])

best_fir_paramms_NORM, best_dist_NORM, n_NORM = fz.best_curv_fit_norm(df['MTD'], 15)

#IDENTIFY HIGH AND LOW QUALITY AREAS

sequence = np.linspace(0, 4, 20)

count = 0
for i in range(len(F)):
    vlag = 0

    for j in range(len(F[0])):
        if F[i][j] < 1.3:
            vlag += 1
            count += 1
        else:
            vlag = vlag
    if vlag >= 0.15 * len(F[0]):
        sequence[i] = 1
    else:
        sequence[i] = 0

M1 = []
M0 = []
indx = 0

for i in sequence:
    riga1 = []
    riga0 = []
    if i == 0:
        for j in range(len(F[0])):
            riga0.append(F[indx][j])
        M0.append(riga0)
    else:
        for j in range(len(F[0])):
            riga1.append(F[indx][j])
        M1.append(riga1)

    indx += 1

listM1 = []
listM0 = []

for i in range(len(M1)):
    for j in range(len(M1[0])):
        x = M1[i][j]
        listM1.append(x)

for i in range(len(M0)):
    for j in range(len(M0[0])):
        x = M0[i][j]
        listM0.append(x)

listM0 = pd.DataFrame(data=listM0, columns=['MTD'])
listM1 = pd.DataFrame(data=listM1, columns=['MTD'])

#GET PARAMETERS FROM THESE AREAS AND PLOT STATISTICAL ANALYSIS

best_fir_parammsM1_NORM, best_distM1_NORM, nM1_NORM = fz.best_curv_fit_norm(listM0['MTD'], 15)
#CREATE BIG SURFACE
#----------------------------------------------------------------------------

def Big_mama_sim(percentage, nM1, nM0, loc):

    width = 6000    # cm
    lenght = 50000  # cm
    unit = 25       # cm

    strip_w = 400   # cm
    strip_l = 5000  # cm

    #percentage = 120/400
    raws_strip = strip_l / unit
    raws_strip1 = raws_strip * percentage
    raws_strip0 = raws_strip - raws_strip1
    column_strip = strip_w / unit

    number_w = width / strip_w
    number_l = lenght / strip_l
    number_strips = number_w * number_l

    Big_mama = []
    for a in np.arange(int(number_l)):

        Big_raw = []

        for i in np.arange(int(number_w)):

            Matrice = []

            x = rd.randint(0, int(raws_strip0) - 1)

            for j in np.arange(int(raws_strip)):

                negstrip = np.arange(x, x + raws_strip1)

                if j in negstrip:
                    raw = (nM1.ppf(np.random.uniform(size=int(column_strip)))
                           - np.random.normal(loc=loc, scale=0.07, size=int(column_strip)))

                if j not in negstrip:
                    raw = nM0.ppf(np.random.uniform(size=int(column_strip)))

                Matrice.append(raw)

            if i == 0:
                Big_raw = Matrice
            else:
                Big_raw = np.hstack((Big_raw, Matrice))

        if a == 0:
            Big_mama = Big_raw
        else:
            Big_mama = np.vstack((Big_mama, Big_raw))

    return Big_mama


Big_mamacita = Big_mama_sim(np.random.uniform(0, 0.3), nM1_NORM, nM0_NORM,
                            np.random.uniform(0, 0.3))

Big_mamacita_val = []
for i in range(len(Big_mamacita)):
    for j in range(len(Big_mamacita[0])):
        x = Big_mamacita[i][j]
        Big_mamacita_val.append(x)

F, D = fz.real_representation(len(Big_mamacita[0]), len(Big_mamacita),
                              Big_mamacita_val)
