
Thammasat Int. J. Sc. Tech., Vol. 12, No. 2, April-June 2007

Optimal Sample Size Selection for Torusity
Estimation Using a PSO Based
Neural Network

Siwaporn Kunnapapdeelert
School of Manufacturing Systems and Mechanical Engineering, Sirindhorn International Institute of
Technology, Thammasat University, Rangsit Campus, Pathum Thani, 12121, Thailand

Chakguy Prakasvudhisarn
School of Technology, Shinawatra University, 15th Floor, Shinawatra Tower III,
Chatuchak, Bangkok, 10900 Thailand

Abstract
In a competitive manufacturing environment, the quality, cost, and time to market depend not
only on the design and manufacturing but also on the inspection process used. The use of
computerized measuring devices has greatly improved the efficacy of geometric tolerance inspection,
especially for form measurement. However, they still lack an efficient and effective sampling plan for
data collection of a complex form feature like a torus. Factors that could affect plans due to design,
manufacturing, and measurement, such as size, geometrical tolerance, manufacturing process, and
confidence level, are studied. Type of manufacturing process, feature size, precision band, and
sampling method are identified as impact factors for sampling strategy. A Hammersley sequence
based sampling method is extended to cover the toroidal shape. A neural network based on particle
swarm optimization (PSO) is then applied to determine sample size for torus feature inspection by
taking these impact factors into consideration. The PSO based neural network's algorithm and
architecture are described and its predictive ability on unseen test subsets is also presented. An
effective and efficient sampling strategy can be achieved by using sampling locations from the
Hammersley sampling method and sample size guided by the PSO based neural network.

Keywords: Hammersley sampling methods, torusity, minimum tolerance zone, sample size, neural
network, particle swarm optimization

1. Introduction
A basic requirement in manufacturing is to produce a product that meets the specification due to its functional and assembly requirements. Hence, inspection of discrete manufactured parts becomes very critical for conformance of both dimensional and geometrical tolerances. In coordinate metrology, inspection is affected by a variety of data collection and data fitting methods [1]. Many researchers have investigated data fitting issues in geometrical tolerances, especially for basic form tolerances, under the assumption that the collected data was a good representative of the inspected shape [2-5]. Data collection thus plays an equally important role as its data fitting counterpart. Theoretically, if the entire surface is measured, all information of that surface can be obtained and the actual error of that part can then be identified. This is very time consuming and hence not suitable for a competitive manufacturing environment. This excessive sample size must be reduced to an acceptable number while maintaining the same high level of accuracy. This undoubtedly can lead to a decrease in the time required for inspection. A sampling plan or strategy considering a minimum sample size and points' locations must be developed to obtain an effective and efficient inspection process. The commonly practiced methods are the uniform sampling and the simple random sampling (SR) methods. Sampling locations based on some mathematical sequences such as Hammersley (HM) and Halton-Zaremba (HZ) have also been studied


with some success [1, 6, 7]. However, the selection of sample size, which controls measurement precision, is normally conducted by trial and error, experience, and metrology handbooks. This results in a relaxed sample size which gives a trade-off between precision/accuracy of measurement and inspection time. Therefore, a suitable sample size which can represent enough information on the whole population has to be found with a high confidence level.

Interestingly, complex forms such as torusity have been largely ignored and are normally left to be dealt with by the use of the profile tolerance definition, except in a few recent cases [8, 9]. The corresponding equations for toruses are complex. This has partially led to a relative absence of research works dealing with the torusity tolerance in the literature. Since the equations required for torusity calculation have just been found, the study of its data collection issue has not been realized yet. In addition, a sufficient number of industrial parts, such as outer and inner races in bearings and toroidal continuous variable transmissions, possess toroidal features and must be effectively and efficiently inspected. Considering these many applications, sampling strategies for torusity estimation, especially sample size determination, should be studied more extensively. The need to develop effective and efficient guidelines for sample size used for torusity measurement is the subject of this paper.

The task in doing so is rather complicated because there are many factors involved, such as size, dimensional and geometrical tolerances, manufacturing processes, sampling locations, and accuracy and confidence levels. Thus, an analytical approach for their modeling is very difficult due to their unknown nonlinear nature. Feedforward neural networks have been considered a very powerful tool for function approximation and modeling. One of their advantages is the ability to learn from examples. Hence, they can be applied to model the relationship between sample size and its relevant factors. However, their classical training algorithm, the back-propagation (BP) algorithm, presents some disadvantages associated with overfitting, local optimum problems, and sensitivity to the initial values of weights. Particle swarm optimization (PSO), a relatively new population based search technique, demonstrates appealing properties such as simplicity, short computer code, few parameters, fast convergence, consistent results, robustness, and no requirement for gradient information [10]. It can be applied to train neural networks by optimizing their weights in place of the BP. This should relieve some drawbacks posed by the BP algorithm.

The purpose of this work is to propose, for the first time, an efficient and effective method, a PSO based neural network, for optimal sample size selection for torusity estimation. To do so, the following steps are investigated: (1) identification of relevant factors influencing sample size of a doughnut-shaped feature inspection; (2) collection of data of these factors and corresponding sample sizes; and (3) development of a PSO based neural network for optimal sample size determination.

2. Literature review
Form tolerance inspection plays a vital role in industrial production since it can guarantee the interchangeability of the parts. Therefore, probe-type coordinate measuring machines (CMMs) have been widely used to accurately measure and analyze parts. However, a main drawback of CMMs is that an entire inspected surface cannot practically be measured. CMMs are normally used to measure only a sample of discrete points on the part feature surface and these points are used as a representative of the entire surface. Some other instruments can scan the entire surface but with lower accuracy and precision. Hence, a question follows: how well do the discrete sample points represent the inspected surface?

Dimensional surface measurements have involved the use of deterministic sequences of numbers for determination of sample coordinates to maximize information collected. According to Woo and Liang [6], a two dimensional (2D) sampling strategy based on the Hammersley sequence shows a remarkable improvement of a nearly quadratic reduction in the number of samples when compared with the uniform sampling strategy, while maintaining the same level of accuracy. The HZ based strategy in 2D space was also suggested by Woo et al. [11] without a discernible difference in performance over the HM strategy. The only differences are that the total number of sample


points in the HZ sequence must be a power of two and the binary representations of the odd bits are inverted. Also, Liang et al. [12, 13] compared the 2D HZ sampling scheme to the uniform scheme and the SR theoretically and experimentally for roughness measurement, with similar results. Lee et al. [7] demonstrated a methodology for extending the HM sequence to geometries such as circles, cones, and spheres. Kim and Raman [14] investigated different sampling strategies and different sample sizes for flatness measurements. Their findings were similar to others with regards to accuracy determination. Summerhays et al. [15] proposed new sampling patterns to guide form measurements of internal cylindrical surfaces with some success.

Dowling et al. [16] presented a survey of statistical issues in geometric feature inspection. Fitting and evaluation approaches, sampling design issues, and sources of measurement error were discussed. The incorporation of the knowledge of manufacturing processes was also suggested to improve the accuracy of geometric form inspection. Prakasvudhisarn [1] suggested guidelines for cone and conical frustum inspection by using three sampling sequences, HM, HZ, and aligned systematic (AS), with various sample sizes. The sampled points were used to estimate the form error of the feature based on different fitting algorithms.

To help circumvent the adequacy of the data collection problems, Menq et al. [17] suggested a statistical sampling plan to determine a suitable sample size which can represent the entire population of the part surface with sufficient confidence and accuracy. Zhang et al. [18] proposed a feedforward back-propagation neural network approach to estimate sample sizes of holes' measurements from various manufacturing operations. Machining processes, hole diameters, and tolerance bands were considered as influencing factors. Similarly, Lin and Lin [19] developed an algorithm based on the grey theory to predict the number of measuring points on the next workpiece for flatness verification by using data from the last four workpieces. Raghunandan and Rao [20] also reported a method to reduce the sample size of flatness estimation by inspecting the first part in detail and using it as the reference for succeeding parts in a batch production.

Conventionally, statistical methods such as multiple regression and partial least squares can be used to determine relationships between inputs and outputs. They normally suffer from assumptions of data distribution. Plus, nonlinear relationships are rather difficult to handle. Without such limitations, neural networks can alternatively be used to model the complex phenomena between factors and outputs of interest in the manufacturing environment. Thus, a neural network would be applied to capture the correlation between sample size for torusity inspection and its relevant factors. However, the most widely used algorithm to minimize the sum of squared learning errors, a gradient based BP algorithm, struggles with overfitting, local optima, and sensitivity to the initial weights. The PSO has been introduced in the framework of an artificial social model and demonstrates appealing properties such as simplicity, short computer code, few parameters, fast convergence, consistent results, robustness, and no requirement for gradient information [10]. Compared to other evolutionary search methods in solving both continuous and discrete optimization problems, the PSO was the second best in terms of processing time while it performed the best in terms of success rate and quality of solutions [21]. In addition, much work has shown the potential of PSO in neural network training while alleviating shortcomings of the BP [22-25].

3. Study of sample size related factors
An Artificial Neural Network (ANN) can capture the relationship between input and output by adjusting weights on each link while learning from data. Therefore, selection of data pairs of input and output for training the network is an essential step to ensure sufficiency and integrity of the target function. Determination of suitable sample size is quite complicated since it is affected by various factors such as form fitting criteria, size of the part, type of error on the part's surface, sampling location (position of measured point), and precision band (confidence level on the measurement results). The first factor was not included in this study even though various form-fitting criteria can be used to estimate form tolerances. The most widely used method for form error estimation, the least squares method (LSQ), does not guarantee the minimum


tolerance zone defined by the ANSI/ASME standard [26]. In other words, it may overestimate the tolerance zone and hence reject some good parts. Therefore, the minimum zone approach, which is consistent with the ANSI standard, was used to evaluate the tolerance zone torusity in this work. The remaining factors are taken into consideration whether or not they really have an impact on sample size. Before considering these factors, some background is discussed in the next two subsections. Effects of these factors are then presented afterward.

3.1 Torus generation
To validate the developed model and also avoid measurement errors from CMMs such as probe orientation, probe angle adjustment, and probe compensation, perfect toruses illustrated in Figure 1 were simulated with details shown in Table 1.

Figure 1. Torus definition (top, front, and right views showing the major circle, major radius, minor circle, and minor radius).

Table 1. Specification of toruses generated.
Area (mm2)    Major radius (c)    Minor radius (a)
2369          10                  6
9475          12                  20
14804         15                  25

Three sizes of perfect toruses were generated by using the following formula:

(sqrt(x^2 + y^2) - c)^2 + z^2 = a^2    (1),

where c is the major radius of the torus (from the center of the hole to the center of the torus tube), a represents the minor radius of the torus (radius of the tube), and (x, y, z) are the coordinates of the torus' surface. To imitate the real surface of a manufactured torus-shaped feature, three selected types of error, namely, random pattern, sine coupled with random pattern, and step mixed with random pattern, were each added to the perfect toruses [27]. Three pieces of torus were then generated for each size.

Sine is the sinusoidal oscillation perpendicular to the torus surface as explained below:

A sin(f + p)    (2),

where A represents the amplitude of the periodic wave; f represents the frequency term, f = 2*pi*(i - 1)/(N - 1), where N is the total number of simulated points and the position of the point i varies from 1 to N; and p is the phase angle.

Step is a discontinuity of the radius at each cross section as calculated by:

A S(x)    (3),

where A is amplitude, and S(x) is the unit step function, S(x) = 0 for x < 0 and S(x) = 1 for x >= 0.

Random is the random error perpendicular to the torus surface as described below:

U(alpha, beta)    (4),

where U(alpha, beta) is a uniformly-distributed random value within the range [alpha, beta].

CMMs provide accuracy of about 0.0001 inch or 2.54 microns for the most basic form, flatness, measurement [28]. A torus, on the other hand, is a complex feature. Its measurement accuracy is hardly the same as that of flatness. Thus, two multiple constants of flatness accuracy were selected to give tolerance zone errors of 5 and 25 microns for the generated toruses. Two groups of nine toruses each were then generated with the controlled torusity of five and twenty five microns. Altogether, there were eighteen toruses included in the experiment.

3.2 Sampling methods
To decrease inspection time while maintaining a high level of accuracy, various sampling techniques such as SR, AS, and mathematical sequence-based (HM and HZ) sampling have been studied. Interestingly, the root mean squares errors of these mathematical sequences are lower than those of the commonly practiced procedures, SR and a type of AS, uniform sampling. Hence, the HM, AS, and SR sampling methods were taken into consideration. Even though the performance of the HM and HZ based methods is not much different, the HM based method was selected because the number
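The generation scheme of Eqs. (1)-(4) can be sketched as below. This is an illustrative reconstruction, not the authors' code: the parametrization by the angles theta and phi, the function names, the default amplitude, and the placement of the step at mid-sequence are assumptions; the error patterns are imitated by perturbing the minor radius, which displaces each point along the local surface normal.

```python
import numpy as np

def torus_points(c, a, thetas, phis):
    # Eq. (1) in parametric form: theta sweeps the major circle,
    # phi the minor circle; returns an (N, 3) array of surface points.
    x = (c + a * np.cos(phis)) * np.cos(thetas)
    y = (c + a * np.cos(phis)) * np.sin(thetas)
    z = a * np.sin(phis)
    return np.column_stack([x, y, z])

def noisy_torus(c, a, thetas, phis, kind, A=0.002, p=0.0, seed=0):
    # Imitate a manufactured surface by offsetting the minor radius,
    # i.e. moving each point along the surface normal of the perfect torus.
    rng = np.random.default_rng(seed)
    N = len(thetas)
    e = rng.uniform(-A, A, N)            # Eq. (4): U(alpha, beta) noise
    i = np.arange(N)
    if kind == "sine+random":
        f = 2 * np.pi * i / (N - 1)      # assumed frequency sweep over points
        e = e + A * np.sin(f + p)        # Eq. (2): A sin(f + p)
    elif kind == "step+random":
        e = e + A * (i >= N // 2)        # Eq. (3): A S(x), step at mid-sequence
    return torus_points(c, a + e, thetas, phis)
```

Substituting a perturbed point back into the left-hand side of Eq. (1) no longer yields a^2 exactly, which is how the controlled torusity values of 5 and 25 microns can be dialed in through the amplitude A.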


of sampling points for HZ must be a power of two.

3.2.1 Hammersley based method
The HM sequence technique was designed to place N points on the k-dimensional hypercube [11]. In two dimensions, the HM coordinates (x_i, y_i) can be determined as follows:

x_i = i / N    (5),

y_i = sum over j = 0 to k-1 of b_ij * 2^(-j-1)    (6),

where N is the total number of sample points,
i is in [0, ..., N-1],
k is the ceiling of log2 N,
b_i is the binary representation of the index i,
b_ij denotes the j-th bit in b_i, and
j = 0, ..., k-1.

3.2.2 Aligned systematic sampling method
The systematic sampling sequence is a form of probabilistic sampling which employs a grid of equally spaced locations. There are two types of systematic sampling: aligned and unaligned sampling. Aligned sampling is normally called systematic sampling. The sample is first determined by the choice of a pair of random numbers in order to select the coordinates of the upper left unit, and the subsequent points are taken according to the predetermined mathematical pattern.

Suppose that a population is arranged in the form of am rows and each row consists of bn units. The basic procedure for arranging the coordinates of aligned systematic sampling can be computed as follows:
1. Determine a pair of random numbers (p, q) where p is less than or equal to m, and q is less than or equal to n. These random numbers would decide the coordinates of the upper left unit by the p-th unit column and q-th unit row.
2. Locate the subsequent sampling points for the x-coordinate as p + im where i is in [0, ..., a-1]. Therefore, the row consists of p, p + m, p + 2m, ..., p + (a-1)m.
3. Locate the subsequent sampling points for the y-coordinate as q + jn where j is in [0, ..., b-1]. Therefore, the column consists of q, q + n, q + 2n, ..., q + (b-1)n.

3.2.3 Simple random sampling method
Simple random sampling is the sampling procedure in which each element in the population has an equal chance of being selected.

The above sampling strategies are normally described for a 2 dimensional (2D) rectangle, but the toroidal feature is a 3D problem. Extension from 2D space to 3D space is required. The concept of torus generation as shown in Figure 2 was applied to all three sampling methods. The generated coordinates via the three sampling methods were transformed from the first picture (upper left corner) to the last picture (lower right corner). Therefore, coordinates generated by each sampling method for toruses would result.

Figure 2. Torus surface generation [29].

Note that the sample size attempted for each transformed 3D sampling method was varied from 8 to 256 points to measure each group of nine toruses. Then, the torusity tolerance zone would be calculated from such points by [8]:

d_i = sqrt((sqrt(x_i'^2 + y_i'^2) - c)^2 + z_i'^2) - r_0    (7),

where d_i is the normal deviation from the measurement (x_i, y_i, z_i) to the ideal torus surface, with (x_i', y_i', z_i') the measured coordinates after translation by (x_0, y_0) and rotation by the orientation parameters (u, v); and x_0, y_0, u, v, c, and r_0 are searched parameters for establishing the ideal torus surface by using the following minimax criterion:

minimum zone torusity = 2 x min(max d)    (8).

Clearly, the torusity zone obtained depends on the measurements (x_i, y_i, z_i) and hence the sampling strategies used. Different strategies may give different torusity tolerance zones. Five levels of quantitative precision (precision band) were chosen as 0.3, 0.9, 1.5, 2.1, and 2.7 to reflect the various sampling strategies employed for each precision.
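The Hammersley construction of Eqs. (5)-(6) can be sketched in a few lines. The function name and the mapping onto the torus are illustrative assumptions, not the paper's implementation:

```python
import math

def hammersley_2d(N):
    # 2D Hammersley points on the unit square (Eqs. 5-6):
    # x_i = i/N; y_i is the radical inverse of i in base 2,
    # built from the k = ceil(log2 N) bits of i.
    k = math.ceil(math.log2(N))
    pts = []
    for i in range(N):
        y = 0.0
        for j in range(k):
            bit = (i >> j) & 1          # b_ij: j-th bit of i
            y += bit * 2.0 ** (-(j + 1))
        pts.append((i / N, y))
    return pts
```

To extend the sequence to the toroidal feature, each (x_i, y_i) pair can be scaled to the parameter angles theta = 2*pi*x_i and phi = 2*pi*y_i of the torus parametrization before evaluating the surface of Eq. (1).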

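The minimax fit of Eqs. (7)-(8) can be sketched as below. This is a simplified illustration, not the authors' algorithm: the torus axis is assumed to stay parallel to z (the orientation parameters u and v are dropped), and a general-purpose Nelder-Mead search stands in for whatever optimizer reference [8] actually uses.

```python
import numpy as np
from scipy.optimize import minimize

def deviations(params, pts):
    # Normal deviation d_i (cf. Eq. 7) of each point from a torus with
    # center (x0, y0, z0), major radius c_, and minor radius r0.
    # Axis assumed parallel to z; orientation parameters omitted.
    x0, y0, z0, c_, r0 = params
    rho = np.sqrt((pts[:, 0] - x0)**2 + (pts[:, 1] - y0)**2)
    return np.sqrt((rho - c_)**2 + (pts[:, 2] - z0)**2) - r0

def min_zone_torusity(pts, guess):
    # Eq. (8): minimum zone torusity = 2 * min over params of max |d_i|.
    res = minimize(lambda p: np.max(np.abs(deviations(p, pts))),
                   guess, method="Nelder-Mead")
    return 2.0 * res.fun
```

Because the objective is the worst-case deviation rather than a sum of squares, this fit tracks the minimum zone definition of the standard instead of the LSQ fit criticized earlier in the section.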

3.3 Effect of surface area
Intuitively, a larger surface area requires more measurement points than a smaller one to reach the same accuracy and/or precision. To verify such a statement, the impact of changing surface area on the sample size required for torusity inspection was studied by using three different sampling methods: HM, AS, and SR. Each surface had different surface error patterns: random, sine+random, and step+random; with a fixed torusity of 25 microns and precision band of 1.5 microns, as shown in Figures 3-5.

Clearly, Figures 3-5 illustrate that if the surface area (feature size) of the inspected workpiece increases, the number of points required also increases to obtain the same accuracy and precision results. Hence, the size or surface area of the doughnut-shaped feature must be selected as one of the relevant factors for sample size prediction.

Figure 3. Sample size versus surface area when measured by using the HM method with tolerance zone = 25 um and precision band = 1.5 um.

Figure 4. Sample size versus surface area when measured by using the AS method with tolerance zone = 25 um and precision band = 1.5 um.

3.4 Effect of surface error patterns
Obviously, the manufacturing process used to produce the part affects its entire surface [20]. Identification of the characteristics of a surface resulting from a manufacturing process can be very complicated, and it is very challenging to obtain proven models due to the many factors involved, such as characteristics of the process, vibration, tool wear, workpiece deformation, and temperature. In the absence of such models, some selected error patterns such as random (noise), sine (systematic), and step (systematic) were added to the surface of those generated perfect toruses to imitate the actual surface of manufactured parts as if they would be produced from different manufacturing processes.

Figures 3-5 show that with the same size of workpiece, torusity value, and precision band, the (sine+random) pattern required the highest number of points. The random pattern required more measurements than the (step+random) pattern for all sampling methods. Therefore, error patterns (or manufacturing processes) must be included as a relevant factor for sample size prediction.

3.5 Effect of sampling locations
As discussed above, the quality of information from measurement depends on the number of points collected and their locations. Three sampling methods, namely, HM, AS, and SR, were taken into consideration as depicted in Figures 6-8. They show that the HM sampling sequence required a smaller sample size than the other two sampling methods, and AS outperformed SR in almost every case. Obviously, different sampling methods extract different information from inspected surfaces. Hence, the sampling method should be selected as an input for sample size estimation as well.

3.6 Effect of precision band
Precision represents the degree of repeatability in a measurement whereas accuracy is the degree to which the measured value agrees with the true value. The true value is not possible to obtain due to the time, and hence, cost incurred. Intuitively, if the number of sample points is finite, the measurement result does not converge to a single value, but it does vary within a certain range, the so-called precision band. Obviously, a tighter precision band implies that all measurement results are closer to one another and would give a higher


confidence level of the outcomes obtained. Figures 6-8 show that the tighter the precision band the greater the sample size, and vice versa. Thus, the band of variation should be taken into account as a factor for sample size determination. Note that five experiments for determining the suitable number of points for each precision band were conducted to ensure that the sample sizes, an average value of these five experiments for each precision band, were reliable.

In summary, the various levels of relevant factors taken into account in this work are depicted in Table 2.

Figure 5. Sample size versus surface area when measured by using the SR method with tolerance zone = 25 um and precision band = 1.5 um.

Figure 6. Variations of measurement on random error surface with torusity of 25 um and surface area of 2369 mm2.

Figure 7. Variations of measurement on sine+random error surface with torusity of 25 um and surface area of 9475 mm2.

Figure 8. Variations of measurement on step+random error surface with torusity of 25 um and surface area of 14804 mm2.

Table 2. Summary of relevant factors for sample size prediction.
Parameters              Details of each parameter
Part dimension (mm2)    2369, 9475, 14804
Error pattern           Random, Sine + random, Step + random
Sampling sequence       Hammersley, Aligned systematic, Simple random
Precision band (um)     0.3, 0.9, 1.5, 2.1, 2.7

As clearly seen in this section, the qualitative correlation between sample size and relevant factors such as surface area, surface error pattern, sampling location, and precision band was clearly identified. The next step is to determine the quantitative correlation between them by using artificial neural networks.


4. Artificial neural networks
The implementation process of feedforward neural networks to model relationships between sample size and its relevant factors can be roughly divided into four main steps: (1) assembling the data, (2) creating the network, (3) training the network, and (4) simulating the network.

In step (1), measurement data were collected from simulated toruses as described in Section 3. Obviously, not all input factors were represented by numerical data. Some were categorical variables, such as sampling method and type of surface error. Therefore, these variables must be encoded to numerical values between -1 and 1. The three surface error types, namely, random, sine+random, and step+random, were encoded as 0.3, 0.6, and 0.9, respectively. Similarly, the sampling strategies were also encoded as 0.3, 0.6, and 0.9 for HM, AS, and SR, respectively. Normally, input parameters of the target function are composed of various magnitudes. The one with higher magnitude may dominate the one with lower magnitude. Therefore, preprocessing should be applied to the raw data before training. Thus, the raw data were normalized to [-1, 1] for every factor. Since a large data set of 135 points was collected, the holdout method was chosen as a validation technique for model selection and performance estimation of the constructed model. This data set was thus randomly divided into three subsets for training, validating, and testing. Training the network was performed by using about 70% of the original data (95 data points) whereas the remaining 40 data points were split equally for validating and testing. All of these 135 data are shown in Appendix A.

In step (2), a neural network was created with 4 inputs and 1 output. The four inputs were composed of surface area, error pattern, sampling sequence, and precision band, while the only output was the target, sample size. Trial and error was used to determine the network architecture, including the number of hidden nodes for each layer and the number of hidden layers, by choosing the highest accuracy combined from both training and validating subsets. In the one hidden layer architecture, the number of hidden nodes was varied from 4 to 11 to find the best combined accuracies between both subsets. Table 3 illustrates the average percentage of accuracy of different hidden layer nodes conducted for ten runs per structure. As a result, the structure of 8 hidden nodes was selected to alleviate the overfitting problem. Two hidden layers were also attempted and discarded due to high overfitting results and longer computational time.

Table 3. Average percentage of accuracy of different structures for PSONN.
Number of       % accuracy
hidden nodes    Training set    Validation set
4               88.58056238     86.06696142
5               90.15994952     87.42250469
6               90.99489026     87.58054576
7               91.02564456     87.65348044
8               91.15976050     87.69065186
9               91.29307204     87.65102572
10              91.33166735     87.55392991
11              91.51976906     87.49475430

The percentage of accuracy was calculated by:

% accuracy = (1 - (1/N) * sum over i = 1 to N of |actual_i - predicted_i| / actual_i) x 100    (9),

where N is the total number of data.

The same procedure was also conducted for the BPNN. The best generalization performance obtained for both BPNN and PSONN was reached by using 8 hidden neurons. Thus, the architecture of 4-8-1 (four input nodes, eight hidden nodes, and 1 output node) was selected for sample size prediction of torusity verification. Note that the standard hyperbolic tangent sigmoid function, or tansig function, was used in the hidden layer to limit its output to the small range (-1, 1) whereas the linear purelin function was used in the output layer to allow the network output to be a real number.

In step (3), training the network is an attempt to minimize the sum of squared error (difference between actual output and desired output) by adjusting the weights on each link. The bases of the feedforward back-propagation neural network and PSO are well documented in the literature and are not repeated here. The weaknesses of the BP, such as slow convergence during training, possible divergence for certain conditions, extensive computations (its performance decreases when the size of the problem increases), and traps in local minima, are alleviated by training the NN with a more
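The preprocessing of step (1) and the accuracy measure of Eq. (9) can be sketched as follows. The dictionary and function names are illustrative, and the min-max rescaling to [-1, 1] is an assumption about how the normalization was performed:

```python
import numpy as np

# Categorical encodings described in step (1)
ERROR_CODE = {"random": 0.3, "sine+random": 0.6, "step+random": 0.9}
SAMPLING_CODE = {"HM": 0.3, "AS": 0.6, "SR": 0.9}

def normalize(col):
    # Linearly rescale a raw input column to [-1, 1]
    # so that no factor dominates by sheer magnitude.
    lo, hi = np.min(col), np.max(col)
    return 2.0 * (col - lo) / (hi - lo) - 1.0

def accuracy_pct(actual, predicted):
    # Eq. (9): one minus the mean absolute relative error, in percent.
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ape = np.abs(actual - predicted) / actual
    return (1.0 - ape.mean()) * 100.0
```

For example, a model that predicts 90 and 110 for two actual sample sizes of 100 scores 90% accuracy under this measure.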


effective and efficient optimization technique like the PSO. Therefore, the PSO was proposed to train the neural network instead of the BP.

4.1 PSO based neural network
The following steps illustrate neural network training by using the PSO:
1. Initialize a population of particles with small random positions, presentx[i][d], and velocities, v[i][d], of the i-th particle in the d-th dimension of a problem space of D dimensions (the number of weights searched). The position of each particle corresponds to the weights in the neural network whereas the velocity represents the rate of position change. Also, initialize the NN's parameters.
2. Evaluate the desired optimization function, minimization of sum-squared error, in D dimensions for each particle. This is done for every training pair by computing the actual output via analyzing the network from input layer to output layer.
3. Compare the evaluation with the particle's previous best value, pbest[i]. If the current value is better than pbest[i], then pbest[i] = current value and the pbest position, pbestx[i][d], is set to the current position (or weight).
4. Compare the evaluation with the swarm's previous best value, pbest[gbest]. If the current value is better than pbest[gbest], then gbest = the particle's array index.
5. Update the velocity and position of each particle by using Equations (10) and (11), respectively:

v[i][d] = w x v[i][d] + C1 x rand() x (pbestx[i][d] - presentx[i][d]) + C2 x rand() x (pbestx[gbest][d] - presentx[i][d])    (10),

presentx[i][d] = presentx[i][d] + v[i][d]    (11),

where w is the inertia weight, C1 and C2 are acceleration constants, and rand() is a random number in the (0, 1) range. This makes the system less predictable and more flexible.
6. Loop to step 2 until a stopping criterion, either a sufficiently good evaluation function value or a maximum number of iterations, is met.

In step (4), both trained networks would then be simulated with all data sets to check their predictive abilities.

5. Results and analyses
The discussed PSONN was implemented in MATLAB 7 running on a Pentium IV 2.4 GHz with the Microsoft Windows XP operating system. The computation of PSO depends on a few parameters such as population size, inertia weight, maximum velocity, maximum and minimum positions in each dimension, and maximum number of iterations. A population size and maximum iterations of 20 and 600 were selected, respectively. The inertia weight gradually decreased from 0.9 to 0.4 so as to balance the global and local exploration. Particles' velocities in each dimension were clamped to a maximum velocity, v_max, to control the exploration ability of the particles. If v_max is too high, the PSO facilitates a global search, and particles might pass good solutions. However, if v_max is too small, the PSO facilitates a local search, and particles might not explore beyond locally good regions. v_max is a problem-oriented parameter and should be set at about 10-20% of the dynamic range of the variable in each dimension. In this experiment, the maximum velocity (v_max) was set at 12%. The maximum and minimum positions of each variable were chosen to be 0.5 and -0.5, respectively, so that they would give small, around zero, initial weights. A linearly decreasing inertia weight was implemented by starting at 0.9 and ending at 0.4. The BPNN was also created by using
neural network toolbox in MATLAB 7 to
This helps expand the search space in the
predict the sample size of torusity verification.
beginning so that the particlescan explore new
Parameterselection in BPNN dependson a few
areas, which implies a global search. This
statistically shrinks the search space through factors, learning rate (r1) and momentum (2) .
iterations,which resemblesa local search. The Both are used to control weight adjustment
accelerationconstantsCr and C2 representthe along the gradientdirection. The learningrate is
weighing of the stochasticterms that pull each used to adjust step size of the weight whereas
particle toward pbest and gbest positions. They the momentum factor is used to accelerate
are normally set to 2.0 to give it a mean of I for convergenceof the network. ry (learning rate)
the cognition (2"d term) and social parts (3'd and ,t (momentum factor) were selected as 0.5
term), so that the particles would thoroughly and 1, respectively. In addition, the maximum
search the settled regions [0]. rand0 is a numberofepochs was 12000(numberofepochs
uniformly random number generator within the in BPNN : maximum iterationsxsize of swarm

72
ThammasatInt. J. Sc. Tech.,Vol. 12, No.2, April-June2007

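The training procedure of Section 4.1 with the parameter settings above (a swarm of 20 particles, 600 iterations, C1 = C2 = 2.0, inertia weight decaying linearly from 0.9 to 0.4, positions initialized in [-0.5, 0.5], and velocities clamped) can be sketched as follows. This is only a minimal illustration, not the authors' MATLAB implementation; the tiny network and toy training pairs are hypothetical stand-ins for the actual CMM data sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network (4 inputs -> 8 hidden nodes -> 1 output),
# echoing the 8-hidden-node architecture selected in the paper.
n_in, n_hid, n_out = 4, 8, 1
dim = (n_in + 1) * n_hid + (n_hid + 1) * n_out   # D = number of weights searched

def sse(weights, X, y):
    """Sum-squared error of the network encoded by one particle position."""
    w1 = weights[:(n_in + 1) * n_hid].reshape(n_in + 1, n_hid)
    w2 = weights[(n_in + 1) * n_hid:].reshape(n_hid + 1, n_out)
    h = np.tanh(np.c_[X, np.ones(len(X))] @ w1)   # hidden layer (bias folded in)
    out = np.c_[h, np.ones(len(h))] @ w2          # linear output layer
    return float(np.sum((out.ravel() - y) ** 2))

# Toy training pairs standing in for the factor/sample-size data.
X = rng.random((30, n_in))
y = X.sum(axis=1) / n_in

# Parameter settings reported in the paper.
n_part, n_iter, c1, c2 = 20, 600, 2.0, 2.0
pos = rng.uniform(-0.5, 0.5, (n_part, dim))       # small, around-zero initial weights
vel = np.zeros((n_part, dim))
v_max = 0.12 * (0.5 - (-0.5))                     # 12% of the dynamic range

pbest = pos.copy()
pbest_val = np.array([sse(p, X, y) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
initial_best = float(pbest_val.min())

for it in range(n_iter):
    w = 0.9 - (0.9 - 0.4) * it / (n_iter - 1)     # linearly decreasing inertia
    r1 = rng.random((n_part, dim))
    r2 = rng.random((n_part, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)  # Eq. (10)
    vel = np.clip(vel, -v_max, v_max)             # velocity clamping
    pos = pos + vel                               # Eq. (11)
    vals = np.array([sse(p, X, y) for p in pos])
    better = vals < pbest_val                     # step 3: update personal bests
    pbest[better] = pos[better]
    pbest_val[better] = vals[better]
    gbest = pbest[pbest_val.argmin()].copy()      # step 4: update swarm best

print("initial best SSE:", round(initial_best, 4))
print("final best SSE:", round(float(pbest_val.min()), 4))
```

The final sum-squared error should fall well below the initial one on this toy problem; the same loop applies unchanged to any network size, since the particle dimension D is just the flattened weight count.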
All other things for training both networks were kept the same.

The PSONN can function very well for prediction of the sample size for torusity estimation with the training subsets, and just slightly more poorly for the validating and unseen test subsets, as illustrated in Table 4. Recall that the BP algorithm has some serious limitations associated with overfitting and local optimum problems. The outcome comparison shows that the PSONN can avoid the local optimum trap and reach near-optimal results better than the BPNN can. For both training subsets of torusity errors, the accuracies of the BPNN were much lower than those of the PSONN. To avoid overfitting for both the PSONN and BPNN, the proper architecture of 8 hidden layer nodes was selected based on the best combined results between the training and validating subsets. When applied to both unseen test subsets, their performance showed a slight drop from that on the training subsets, and the results from the PSONN were still much higher than those of the BPNN. This shows good generalization performance. It can be concluded that the performance of the trained PSONN is remarkable and consistent for both training samples and testing samples.

Classical BP updates weights and biases based on the gradient descent concept, so the solution obtained might easily get stuck in a local minimum without any mechanism to avoid it. Particles in the PSO explore new areas in the beginning and refine the search later on, while keeping personal best and group best values. Better solutions can be found based on this concept. In addition, to avoid local traps, some mechanisms such as the inertia weight and rand() are incorporated in the velocity adjustment. This should result in near-optimal performance. Coupled with the model selection technique, the overfitting issue can be avoided to some extent. Consequently, the results obtained confirm that improvement of the NN can be accomplished when it is trained by the PSO.

Table 4. Results of % accuracy comparison between the two training methods

Torusity       PSONN                      BPNN
error     Training Set   Test Set   Training Set   Test Set
5           91.1597      85.8066      86.2648      81.3906
IJ          83.9762      79.6038      78.2935      74.2561

Since the test subsets were not used for training, it can be concluded that the neural network can perform well in determining the required sample size for torusity inspection with a certain confidence level. Its performance can be enhanced by utilizing a better training algorithm (optimizer). Data distribution assumptions required in traditional statistical approaches can be discarded in this approach. In addition, more relevant factors, if any, could be included rather easily by retraining the network to obtain a more realistic and comprehensive model with the added factors. For inspection of other forms, this PSONN could also be applied, but new training with corresponding data must be conducted first to capture the new characteristics of that particular form and its relevant factors. This learning ability, simplicity, and effectiveness are important advantages of the neural network approach.

6. Conclusions and recommendations
In coordinate metrology, an effective and efficient sampling plan for data collection of a given feature is difficult to determine since it can be affected by many factors such as geometric tolerances, manufacturing processes, size, and the confidence level on the measurement results. Establishing the correlation between them is the key leading to such a sampling strategy. Experimental studies on torusity produced by different processes (different error types) were carried out to identify key factors that affect the sample size. Surface area, error pattern, sampling method, and precision band were found to be the relevant factors. An improved neural network, the PSONN, was proposed to model this relationship, with significant improvement over the original BPNN due to the appealing properties of the PSO such as fast convergence, consistent results, robustness, and local trap avoidance. The results from both the training subsets and unseen test subsets demonstrate that the PSONN has the potential for sample size selection for a given form feature measurement, especially when an explicit relationship model is hard to find or does not exist. The PSONN can also be easily expanded to cover more factors so that the correlation model would be more comprehensive and realistic. Moreover, the PSONN could be used to handle other form features by retraining the network with new corresponding data sets.
Therefore, an effective and efficient sampling strategy can be devised by selecting the low-discrepancy Hammersley sampling method and a sample size guided by the PSONN. This should give a good representation of the inspected shape for data collection in coordinate metrology.

Just like a black box, the PSONN still lacks clear interpretability in expressing and explaining the relationships between the sample size and its relevant factors. This issue should be investigated in the future. In addition, systematic parameter selection for the PSO would certainly enhance its ease of use and should be investigated as well.

7. Acknowledgment
The authors were partially supported by the Thailand Research Fund (TRF) grant MRG4980170.

8. References
[1] Prakasvudhisarn, C., and Raman, S., Framework for Cone Feature Measurement Using Coordinate Measuring Machines, J. Manu. Sci. Eng., Vol.126, pp. 169-177, 2004.
[2] Traband, M.T., Joshi, S., Wysk, R.A., and Cavalier, T.M., Evaluation of Straightness and Flatness Tolerances Using the Minimum Zone, Manu. Rev., Vol.2, No.3, pp. 189-195, 1989.
[3] Prakasvudhisarn, C., Trafalis, T.B., and Raman, S., Support Vector Regression for Determination of Minimum Zone, J. Manu. Sci. Eng., Vol.125, pp. 736-739, 2003.
[4] Shunmugam, M.S., On Assessment of Geometric Errors, Int. J. Prod. Res., Vol.24, No.2, pp. 413-425, 1986.
[5] Shunmugam, M.S., Comparison of Linear and Normal Deviations of Forms of Engineering Surfaces, Prec. Eng., Vol.9, No.2, pp. 96-102, 1987.
[6] Woo, T.C., and Liang, R., Dimensional Measurement of Surfaces and Their Sampling, Computer Aided Design, Vol.25, No.4, pp. 233-239, 1993.
[7] Lee, G., Mou, J., and Shen, Y., Sampling Strategy Design for Dimensional Measurement of Geometric Features Using Coordinate Measuring Machine, Int. J. Mach. Tools Manu., Vol.37, No.7, pp. 917-934, 1997.
[8] Prakasvudhisarn, C., and Kunnapapdeelert, S., Torusity Tolerance Verification Using Swarm Intelligence, Internal Report IE0601, School of Manufacturing Systems and Mechanical Engineering, SIIT, TU, 2006.
[9] Aguirre-Cruz, J.A., and Raman, S., Torus Form Inspection Using Coordinate Sampling, J. Manu. Sci. Eng., Vol.127, pp. 84-95, 2005.
[10] Kennedy, J., and Eberhart, R.C., Particle Swarm Optimization, Proc. IEEE Int. Conf. on Neural Networks, IEEE Service Center, Piscataway, NJ, Vol.4, pp. 1942-1948, 1995. http://www.engr.iupui.edu/~shi/Coferenceipsopap4.html
[11] Woo, T.C., Liang, R., Hsieh, C.C., and Lee, N.K., Efficient Sampling for Surface Measurements, J. Manu. Sys., Vol.14, No.5, pp. 345-354, 1995.
[12] Liang, R., Woo, T.C., and Hsieh, C.C., Accuracy and Time in Surface Measurement, Part 1: Mathematical Foundations, J. Manu. Sci. Eng., Vol.120, No.1, pp. 141-149, 1998a.
[13] Liang, R., Woo, T.C., and Hsieh, C.C., Accuracy and Time in Surface Measurement, Part 2: Optimal Sampling Sequence, J. Manu. Sci. Eng., Vol.120, No.1, pp. 150-155, 1998b.
[14] Kim, W.S., and Raman, S., On the Selection of Flatness Measurement Points in Coordinate Measuring Machine Inspection, Int. J. Mach. Tools Manu., Vol.40, No.3, pp. 427-443, 2000.
[15] Summerhays, K.D., Henke, R.P., Baldwin, J.M., Cassou, R.M., and Brown, C.W., Optimizing Discrete Point Sample Patterns and Measurement Data Analysis on Internal Cylindrical Surfaces with Systematic Form Deviations, Prec. Eng., Vol.26, pp. 105-121, 2002.
[16] Dowling, M.M., Griffin, P.M., Tsui, K.L., and Zhou, C., Statistical Issues in Geometric Feature Inspection Using Coordinate Measuring Machines, Technometrics, Vol.39, No.1, pp. 3-17, 1997.
[17] Menq, C.H., Yau, H.T., Lai, G.Y., and Miller, R.A., Statistical Evaluation of Form Tolerances Using Discrete Measurement Data, American Society of Mechanical Engineers, Production Engineering Division (Publication) PED, 47, pp. 135-149, 1990.

[18] Zhang, Y.F., Nee, A.Y.C., Fuh, J.Y.H., Neo, K.S., and Loy, H.K., A Neural Network Approach to Determining Optimal Inspection Sampling Size for CMM, Computer-Integrated Manufacturing Systems, Vol.9, No.3, pp. 161-169, 1996.
[19] Lin, Z.C., and Lin, W.S., Measurement Point Prediction of Flatness Geometric Tolerance by Using Grey Theory, Prec. Eng., Vol.25, No.3, pp. 171-184, 2001.
[20] Raghunandan, R., and Rao, P.V., Selection of an Optimum Sample Size for Flatness Error Estimation While Using Coordinate Measuring Machine, Int. J. Mach. Tools Manu., Vol.47, No.3-4, pp. 477-482, 2007.
[21] Elbeltagi, E., Hegazy, T., and Grierson, D., Comparison Among Five Evolutionary-based Optimization Algorithms, Advanced Engineering Informatics, Vol.19, pp. 43-53, 2005.
[22] Clow, B., and White, T., An Evolutionary Race: A Comparison of Genetic Algorithms and Particle Swarm Optimization Used for Training Neural Networks, Proc. of the Int. Conf. on Artificial Intelligence, IC-AI'04, 2, pp. 582-588, 2004.
[23] Zhang, C., Shao, H., and Li, Y., Particle Swarm Optimisation for Evolving Artificial Neural Network, Proc. IEEE Int. Conf. on Sys., Man, and Cyb., pp. 2487-2490, 2000.
[24] Mendes, R., Cortez, P., Rocha, M., and Neves, J., Particle Swarms for Feedforward Neural Network Training, Proc. Int. Conf. on Neural Networks (IJCNN 2002), pp. 1895-1899, 2002.
[25] Gudise, V.G., and Venayagamoorthy, G.K., Comparison of Particle Swarm Optimization and Backpropagation as Training Algorithms for Neural Networks, Proc. IEEE Swarm Intelligence Symposium 2003 (SIS 2003), Indianapolis, Indiana, USA, pp. 110-117, 2003.
[26] ASME Y14.5M-1994, Dimensioning and Tolerancing, The American Society of Mechanical Engineers, New York, 1995.
[27] Algeo, M.E., and Hopp, T.H., Form Error Models of the NIST Algorithm Testing System, NISTIR 4740, National Institute of Standards and Technology, Gaithersburg, MD, 1992.
[28] http://en.wikipedia.org/wiki/Lapping, June 2006.
[29] http://abyss.uoregon.edu/~js/glossary/torus.html, August 2006.
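The conclusions recommend the low-discrepancy Hammersley pattern for placing the measurement points. As a minimal sketch of the standard 2-D Hammersley construction (the first coordinate is i/n, the second the base-2 radical inverse of i); mapping the unit square onto the two torus angles, as would be needed for the torus measurements, is an additional step not shown here:

```python
def radical_inverse_base2(i: int) -> float:
    """Van der Corput radical inverse of i in base 2 (bits mirrored about the point)."""
    result, f = 0.0, 0.5
    while i > 0:
        result += f * (i & 1)
        i >>= 1
        f *= 0.5
    return result

def hammersley_2d(n: int):
    """First n points of the 2-D Hammersley set on the unit square [0,1)^2."""
    return [(i / n, radical_inverse_base2(i)) for i in range(n)]

pts = hammersley_2d(8)
print(pts[:4])  # [(0.0, 0.0), (0.125, 0.5), (0.25, 0.25), (0.375, 0.75)]
```

Scaling each (u, v) pair by 2π gives one candidate placement of the angular parameters on the torus; the sample size n itself would come from the trained PSONN.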
Appendix A

Surface area (mm²)   Sampling strategy    Error pattern   Precision band (µm)   Sample size
2369                 Hammersley           random          2.7                   58
2369                 Hammersley           random          2.1                   84
2369                 Hammersley           random          1.5                   156
2369                 Hammersley           random          0.9                   156
2369                 Hammersley           random          0.3                   184
2369                 Aligned Systematic   random          2.7                   50
2369                 Aligned Systematic   random          2.1                   40
2369                 Aligned Systematic   random          1.5                   40
2369                 Aligned Systematic   random          0.9                   44
2369                 Aligned Systematic   random          0.3                   84
2369                 Simple Random        random          2.7                   64
2369                 Simple Random        random          2.1                   96
2369                 Simple Random        random          1.5                   116
2369                 Simple Random        random          0.9                   116
2369                 Simple Random        random          0.3                   184
9475                 Hammersley           random          2.7                   62
9475                 Hammersley           random          2.1                   92
9475                 Hammersley           random          1.5                   116
9475                 Hammersley           random          0.9                   164
9475                 Hammersley           random          0.3                   200
9475                 Aligned Systematic   random          2.7                   52
9475                 Aligned Systematic   random          2.1                   140
9475                 Aligned Systematic   random          1.5                   144
9475                 Aligned Systematic   random          0.9                   164
9475                 Aligned Systematic   random          0.3                   200
9475                 Simple Random        random          2.7                   80
9475                 Simple Random        random          2.1                   98
9475                 Simple Random        random          1.5                   128
9475                 Simple Random        random          0.9                   132
9475                 Simple Random        random          0.3                   196
14804                Hammersley           random          2.7                   76
14804                Hammersley           random          2.1                   108
14804                Hammersley           random          1.5                   180
14804                Hammersley           random          0.9                   180
14804                Hammersley           random          0.3                   200
14804                Aligned Systematic   random          2.7                   32
14804                Aligned Systematic   random          2.1                   148
14804                Aligned Systematic   random          1.5                   196
14804                Aligned Systematic   random          0.9                   196
14804                Aligned Systematic   random          0.3                   200
14804                Simple Random        random          2.7                   92
14804                Simple Random        random          2.1                   132
14804                Simple Random        random          1.5                   148
14804                Simple Random        random          0.9                   148
14804                Simple Random        random          0.3                   200
2369                 Hammersley           sine+random     2.7                   76
2369                 Hammersley           sine+random     2.1                   96
2369                 Hammersley           sine+random     1.5                   120
2369                 Hammersley           sine+random     0.9                   120
2369                 Hammersley           sine+random     0.3                   132
2369                 Aligned Systematic   sine+random     2.7                   64
2369                 Aligned Systematic   sine+random     2.1                   84
2369                 Aligned Systematic   sine+random     1.5                   116
2369                 Aligned Systematic   sine+random     0.9                   116
2369                 Aligned Systematic   sine+random     0.3                   160
2369                 Simple Random        sine+random     2.7                   72
2369                 Simple Random        sine+random     2.1                   72
2369                 Simple Random        sine+random     1.5                   84
2369                 Simple Random        sine+random     0.9                   84
2369                 Simple Random        sine+random     0.3                   116
9475                 Hammersley           sine+random     2.7                   88
9475                 Hammersley           sine+random     2.1                   120
9475                 Hammersley           sine+random     1.5                   128
9475                 Hammersley           sine+random     0.9                   128
9475                 Hammersley           sine+random     0.3                   180
9475                 Aligned Systematic   sine+random     2.7                   104
9475                 Aligned Systematic   sine+random     2.1                   8
9475                 Aligned Systematic   sine+random     1.5                   8
9475                 Aligned Systematic   sine+random     0.9                   128
9475                 Aligned Systematic   sine+random     0.3                   170
9475                 Simple Random        sine+random     2.7                   84
9475                 Simple Random        sine+random     2.1                   84
9475                 Simple Random        sine+random     1.5                   120
9475                 Simple Random        sine+random     0.9                   120
9475                 Simple Random        sine+random     0.3                   120
14804                Hammersley           sine+random     2.7                   92
14804                Hammersley           sine+random     2.1                   124
14804                Hammersley           sine+random     1.5                   144
14804                Hammersley           sine+random     0.9                   144
14804                Hammersley           sine+random     0.3                   184
14804                Aligned Systematic   sine+random     2.7                   64
14804                Aligned Systematic   sine+random     2.1                   64
14804                Aligned Systematic   sine+random     1.5                   64
14804                Aligned Systematic   sine+random     0.9                   64
14804                Aligned Systematic   sine+random     0.3                   180
14804                Simple Random        sine+random     2.7                   104
14804                Simple Random        sine+random     2.1                   2
14804                Simple Random        sine+random     1.5                   2
14804                Simple Random        sine+random     0.9                   2
14804                Simple Random        sine+random     0.3                   2
2369                 Hammersley           step+random     2.7                   56
2369                 Hammersley           step+random     2.1                   56
2369                 Hammersley           step+random     1.5                   56
2369                 Hammersley           step+random     0.9                   56
2369                 Hammersley           step+random     0.3                   88
2369                 Aligned Systematic   step+random     2.7                   64
2369                 Aligned Systematic   step+random     2.1                   64
2369                 Aligned Systematic   step+random     1.5                   64
2369                 Aligned Systematic   step+random     0.9                   64
2369                 Aligned Systematic   step+random     0.3                   76
2369                 Simple Random        step+random     2.7                   60
2369                 Simple Random        step+random     2.1                   60
2369                 Simple Random        step+random     1.5                   60
2369                 Simple Random        step+random     0.9                   60
2369                 Simple Random        step+random     0.3                   84
9475                 Hammersley           step+random     2.7                   64
9475                 Hammersley           step+random     2.1                   64
9475                 Hammersley           step+random     1.5                   64
9475                 Hammersley           step+random     0.9                   64
9475                 Hammersley           step+random     0.3                   92
9475                 Aligned Systematic   step+random     2.7                   76
9475                 Aligned Systematic   step+random     2.1                   76
9475                 Aligned Systematic   step+random     1.5                   76
9475                 Aligned Systematic   step+random     0.9                   76
9475                 Aligned Systematic   step+random     0.3                   80
9475                 Simple Random        step+random     2.7                   76
9475                 Simple Random        step+random     2.1                   76
9475                 Simple Random        step+random     1.5                   76
9475                 Simple Random        step+random     0.9                   76
9475                 Simple Random        step+random     0.3                   92
14804                Hammersley           step+random     2.7                   72
14804                Hammersley           step+random     2.1                   72
14804                Hammersley           step+random     1.5                   72
14804                Hammersley           step+random     0.9                   72
14804                Hammersley           step+random     0.3                   108
14804                Aligned Systematic   step+random     2.7                   84
14804                Aligned Systematic   step+random     2.1                   84
14804                Aligned Systematic   step+random     1.5                   84
14804                Aligned Systematic   step+random     0.9                   84
14804                Aligned Systematic   step+random     0.3                   120
14804                Simple Random        step+random     2.7                   80
14804                Simple Random        step+random     2.1                   80
14804                Simple Random        step+random     1.5                   80
14804                Simple Random        step+random     0.9                   80
14804                Simple Random        step+random     0.3                   100
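The three error patterns tabulated above (random, sine+random, step+random) can be illustrated with the standard torus parametrization. The radii, deviation amplitude, and the particular sine and step shapes below are hypothetical choices for illustration only; the actual part dimensions and error magnitudes used in the experiments are not reproduced here.

```python
import math
import random

def torus_point(theta, phi, R=40.0, r=10.0, pattern="random",
                amp=0.005, rng=random.Random(0)):
    """Point on a torus (major radius R, minor radius r, both hypothetical),
    with a small radial form deviation added to r per the error pattern."""
    if pattern == "random":
        dev = rng.uniform(-amp, amp)
    elif pattern == "sine+random":
        dev = amp * math.sin(4 * theta) + rng.uniform(-amp, amp)  # assumed 4-lobe sine
    elif pattern == "step+random":
        dev = (amp if theta < math.pi else -amp) + rng.uniform(-amp, amp)
    else:
        raise ValueError(f"unknown error pattern: {pattern}")
    rr = r + dev
    x = (R + rr * math.cos(phi)) * math.cos(theta)
    y = (R + rr * math.cos(phi)) * math.sin(theta)
    z = rr * math.sin(phi)
    return x, y, z

print(torus_point(0.3, 1.1, pattern="sine+random"))
```

Every generated point lies within amp-sized bounds of the nominal torus surface, so sweeping theta and phi over a chosen sampling pattern yields a simulated data set of the kind the factor table describes.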