Sampling Errors and Control of Assay Data Quality in Exploration and Mining Geology
1. Introduction
The fundamental cause of the errors of samples of rocks and minerals collected by geologists
for evaluation of mining projects is heterogeneity of the sampled materials (Gy, 1982;
Francois-Bongarcon, 1993; Pitard, 1993). Constitution heterogeneity and distribution
heterogeneity (Pitard, 1993) are both important and cause geological sampling errors. The
more heterogeneous the sampled material, the more difficult it is to obtain a representative
sample and to infer the characteristics of the geological object from samples. The current chapter
reviews sampling theory, explaining the types of sampling errors and their likely causes, and also
describes the practical approaches used in the mining industry for estimating sampling
errors and monitoring them at an acceptably low level. It is based on numerous case studies
by the author (Abzalov & Both, 1997; Abzalov, 1999, 2007, 2008; Abzalov & Humphreys,
2002; Abzalov & Mazzoni, 2004; Abzalov & Pickers, 2005; Abzalov et al., 2007; Abzalov &
Bower, 2009) and also reviews of the recently published QAQC procedures used in the
mining industry (Taylor, 1987; Vallee et al., 1992; Leaver et al., 1997; Long, 1998; Sketchley,
1998).
The total error of the sample assays can be represented as the sum of three groups of errors (Eq. 1):

TOTAL ERROR = Err. 1st Group + Err. 2nd Group + Err. 3rd Group   (1)

Where:
Err. 1st Group – sampling errors related to the chosen sample extraction and preparation
procedure, referred to as the sampling protocol. An example is poor repeatability of assays when
sample sizes are disproportionately small in comparison with the degree of heterogeneity of the
material. The main error of this type is known as the Fundamental Sampling Error (Gy, 1982). It
is always present and cannot be fully eliminated, as it is related to intrinsic characteristics of
the sampled material, such as the mineralogy and texture of the mineralisation. The Fundamental
Sampling Error (FSE) can be minimised through optimisation of the sampling protocols,
which will be discussed in the next section. The first group also includes the Grouping-
Segregation error, which is a consequence of the distribution heterogeneity of the sampled
material (Pitard, 1993) and therefore also relates to the intrinsic characteristics of
the sampled material.
Err. 2nd Group – errors related to sampling practice, in other words errors
which depend on how rigorously the sampling protocol was developed, implemented and
followed. The group includes delimitation, extraction, preparation and weighing errors.
These errors are caused by incorrect extraction of the samples from a lot, suboptimal
preparation procedures, contamination and incorrect measurements. Human errors, such as
mixed sample numbers, can also be included in this group. These types of errors can be
minimised by upgrading the practices of sample extraction and preparation, which usually
requires an improvement of the quality control procedures and often an upgrade of
equipment.
Err. 3rd Group – analytical and instrumental errors occurring during the analytical operations
(Gy, 1982). The group includes assaying, moisture analysis, weighing of the aliquots, density
analysis, precision errors and biases caused by suboptimal performance of analytical
instruments. These errors are considered in the current study separately from the first two
groups because of the different factors causing them.
a. Theoretical background.
The theoretical approach for estimating the FSE was proposed by P.Gy (1982) and further
developed by F.Pitard (1993) and D.Francois-Bongarcon (1993, 1998, 2005). The theory states
that FSE, representing precision of the samples expressed as their relative variance, can be
estimated as follows (Eq. 2):
σ²_FSE = f g c l d_N³ (1/M_S − 1/M_L)   (2)

where:
σ²_FSE – Fundamental Sampling Error, representing the relative variance of the precision error;
d_N – nominal particle size of the comminuted material;
M_S – mass of the sample;
M_L – mass of the lot;
f – shape factor. This parameter represents the geometry of the particulate material. It is a
dimensionless factor equal to one when the particles are ideal cubes and approximately 0.5 when
they are ideal spheres. Most types of mineralisation have a shape factor varying in a
narrow range from 0.2 (gold or mica flakes) to 0.5 (isometric grains).
g – granulometric factor, also called the particle size distribution coefficient or size
range factor. This dimensionless factor takes into account the fact that the fragments do
not all have the same size (d). If all fragments had exactly the same size, the factor (g)
would be equal to 1. This is theoretically possible only in an ideal case when the studied
material is perfectly sorted. In practice this never happens; therefore the (g) factor is less than
one and can be as small as 0.1 when the particles show a wide size distribution. Default
values of the (g) factor are summarised in Table 1. In the mining industry the value of 0.25 is
usually used as the default, as it suits most types of mineralisation and corresponds to
the case when 95% of particles pass the nominal mesh size.
c – mineralogical factor, estimated as (Eq. 3):

c = ((1 − t_L) / t_L) × (ρ_M (1 − t_L) + ρ_G t_L)   (3)

where:
t_L – average grade of the lot, expressed as a decimal proportion of the mineral of interest;
ρ_M – density of the mineral of interest;
ρ_G – density of the gangue.
The formula (Eq. 3) can be simplified (Francois-Bongarcon, 1998) and represented by its
concise version (Eq. 4):

c = ((1 − t_L) / t_L) × (ρ_M × ρ_G / ρ)   (4)

In the equation (Eq. 4), (ρ) denotes the average specific gravity of the mineralisation at a given
grade (t_L); the other variables are the same as in Eq. 3.
For low-grade ores the mineralogical factor (c) can be further simplified and approximated as
the ratio of the density of the mineral of interest to the average grade of the studied material
(Eq. 5):

c = ρ_M / t_L   (5)
The mineralogical factor (c) relates the sampling variance given by the formula (Eq. 2) to the grade
of the mineralisation (lot) being sampled. D. Francois-Bongarcon and P. Gy (Francois-Bongarcon &
Gy, 2001) have noted that 'any use of the formula, or any sampling nomogram derived from it,
only makes sense when the grade level at which it is established is duly stated'.
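For readers who prefer to check the arithmetic, the short sketch below evaluates the mineralogical factor with Eq. 3 and its low-grade approximation (Eq. 5). It is a minimal illustration only: the gold and gangue densities and the 3 g/t grade are hypothetical values chosen for the example, not data from the text, and the grade must first be converted into a decimal mass proportion (t_L).

```python
def mineralogical_factor(t_L, rho_M, rho_G):
    """Mineralogical factor c (Eq. 3): t_L is the grade of the lot expressed
    as a decimal proportion of the mineral of interest, rho_M and rho_G are
    the densities (g/cm3) of the mineral of interest and of the gangue."""
    return ((1.0 - t_L) / t_L) * (rho_M * (1.0 - t_L) + rho_G * t_L)

def mineralogical_factor_simplified(t_L, rho_M):
    """Low-grade approximation (Eq. 5): c ~ rho_M / t_L."""
    return rho_M / t_L

# Illustrative (hypothetical) example: a gold ore running 3 g/t Au.
# 3 g/t of native gold corresponds to t_L ~ 3e-6 as a mass proportion.
t_L = 3.0e-6
rho_gold = 19.3    # density of gold, g/cm3
rho_gangue = 2.7   # density of quartz-dominated gangue, g/cm3

print(mineralogical_factor(t_L, rho_gold, rho_gangue))     # ~6.4e6 g/cm3
print(mineralogical_factor_simplified(t_L, rho_gold))      # ~6.4e6 g/cm3
```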
l – liberation factor, estimated as the ratio of the liberation size (d_L) to the nominal particle
size (d_N), raised to the power (A) (Eq. 6):

l = (d_L / d_N)^A   (6)

Substituting (Eq. 6) into (Eq. 2) gives (Eq. 7):

σ²_FSE = f g c (d_L / d_N)^A d_N³ (1/M_S − 1/M_L)   (7)
If the exponent (A) is expressed as (3 − α), then after reduction of (d_N³) the FSE formula
becomes (Eq. 8):

σ²_FSE = f g c d_L^(3−α) d_N^α (1/M_S − 1/M_L)   (8)

The factors that do not depend on the nominal particle size and the sample mass can be grouped
into a sampling constant (K) (Eq. 9):

K = f g c d_L^(3−α)   (9)

Substituting the sampling constant (K) into equality (Eq. 8) leads to the formula of FSE (Eq. 10), which
is most commonly used in practice:

σ²_FSE = K d_N^α (1/M_S − 1/M_L)   (10)
When the mass of the lot (M_L) is much larger than the mass of the sample (M_S), the term
1/M_L becomes negligible and Eq. 10 reduces to its concise version (Eq. 11):

σ²_FSE = K d_N^α / M_S   (11)

Equality (Eq. 10) and its concise version (Eq. 11) are practically the most convenient tools for
experimental definition of the FSE because the parameters (K) and (α) can be calibrated
experimentally; calibration techniques are discussed in the next section of the book. When
calibrated parameters are not available, D. Francois-Bongarcon (1993) has suggested default (K)
and (α) values for low-grade mineralisation, such as gold veins, which are K = 470 and α = 1.5.
However, great care should be taken, as actual values of the sampling constant (K) can
significantly differ from the default value (Sketchley, 1998).
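A minimal sketch of how Eq. 10 can be applied in practice is given below. It assumes the default constants quoted in the text for low-grade gold mineralisation (K = 470, α = 1.5) and the commonly used convention of particle size in centimetres and masses in grams; the sample and lot masses are hypothetical.

```python
def fse_relative_variance(K, alpha, d_N, M_S, M_L=float("inf")):
    """Fundamental Sampling Error as relative variance (Eq. 10);
    Eq. 11 is recovered when the lot mass M_L is much larger than
    the sample mass M_S."""
    return K * d_N**alpha * (1.0 / M_S - 1.0 / M_L)

# Default constants suggested for low-grade gold mineralisation (see text):
K, alpha = 470.0, 1.5

# Assumed unit convention: d_N in cm, masses in grams (illustrative protocol).
d_N = 0.2      # nominal particle size of 2 mm = 0.2 cm
M_S = 3000.0   # 3 kg sample
M_L = 25000.0  # 25 kg lot

var_rel = fse_relative_variance(K, alpha, d_N, M_S, M_L)
precision_pct = 100.0 * var_rel**0.5   # one relative standard deviation, in %
print(f"FSE = {precision_pct:.1f}% at 1 standard deviation")
```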
b. Experimental calibration of sampling constants.
Several techniques have been proposed (Gy, 1982; Pitard, 1993; Francois-Bongarcon, 1993,
2005; Bartlett & Viljoen, 2002; Minkkinen & Paakkunainen, 2005; De Castilho et al., 2005;
Minnitt et al., 2007) for experimental determination of the sampling constants. The most
common approaches are the technique developed by Francois-Bongarcon (2005), representing a
modified version of the 'sampling tree experiment' (Francois-Bongarcon, 1993), and the
'heterogeneity test' of Pitard (1993). The '30-Pieces Experiment' developed by D. Francois-
Bongarcon (1993) has many similarities to the above-mentioned 'heterogeneity test' (Pitard,
1993), representing a simplified version of it.
The 'Sampling Tree Experiment' was first proposed by D. Francois-Bongarcon in 1993 and then
modified in 2005 (Francois-Bongarcon, 2005). The modified version represents analysis of
series of duplicate samples (Fig. 2) cut from a lot at various comminution degrees
(Table 3), allowing the (K) and (α) parameters of the Fundamental Sampling Error to be
obtained experimentally.
Fig. 2. Flow sheet of the 'Modified Sampling Tree Experiment' (MSTE): a lot of 10–15 kg is
reduced through five successive binary splits into 32 samples of 0.3–0.5 kg each. The shown
binary sampling tree is applied to each of the four nominal size fractions (Table 2)
The theoretical background of this method is as follows. Firstly, the formula (Eq. 11) can be
logarithmically transformed into the equality (Eq. 12):

Ln(M_S σ²_FSE) = α Ln(d_N) + Ln(K)   (12)

Each subsampling series provides an estimate of the relative variance (σ²_FSE) at its nominal
particle size (d_N) and sample mass (M_S), giving several points which are plotted onto the
diagram Ln(M_S σ²_FSE) vs. Ln(d_N); a linear function is then inferred by a suitable best-fit
algorithm, its slope giving (α) and the exponent of its intercept giving (K).
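The calibration itself amounts to a straight-line fit in log-log space (Eq. 12). The sketch below assumes that the relative variance of the 32 assays of each subsampling series has already been computed; the particle sizes, masses and variances listed are purely illustrative.

```python
import math
import numpy as np

# One entry per subsampling series of the 'MSTE': nominal particle size d_N
# (assumed here to be in cm), average sample mass M_S (g), and the relative
# variance of the 32 assays in the series (variance divided by squared mean).
# All numbers are illustrative.
series = [
    # (d_N, M_S, relative_variance)
    (0.25, 3000.0, 0.004),
    (0.10, 1500.0, 0.003),
    (0.05,  800.0, 0.002),
    (0.01,  400.0, 0.001),
]

# Eq. 12:  Ln(M_S * sigma2_FSE) = alpha * Ln(d_N) + Ln(K)
x = np.array([math.log(d) for d, m, v in series])
y = np.array([math.log(m * v) for d, m, v in series])

alpha, lnK = np.polyfit(x, y, 1)   # slope = alpha, intercept = Ln(K)
K = math.exp(lnK)
print(f"alpha = {alpha:.2f}, K = {K:.1f}")
```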
The 'MSTE' method is based on collecting a representative sample of 40–60 kg which is then
dried, successively crushed and split following the flow sheet shown in Fig. 2. The
nominal particle sizes of the four groups of subsamples depend on the mineralogy and texture
of the mineralisation. Examples of particle sizes that have been used in 'MSTE' studies are
shown in Table 2, which can be used as a reference when an 'MSTE' is planned; however, best
practice is to determine experimentally the sample weight and the nominal particle size of
each sampling series.
Deposit type            | Sampling Series (First / Second / Third / Fourth) | Elements of Interest | Reference
Orogenic Gold           | 2.5 / 0.3 / 0.1 / 0.05                            | Au, As               | a, b
Ni-S: Komatiitic-type   | 3 / 1 / 0.5 / 0.1                                 | Ni, Cu, As           | b
Cu-Au-U: IOCG-type      | 2.5 / 0.5 / 0.1 / 0.05                            | Cu, U, Au, S         | b

Table 2. Examples of the nominal particle sizes of the four sampling series used in 'MSTE' studies at different deposit types
• A representative sample of 40–60 kg is collected and dried;
• The whole sample is crushed in a jaw crusher to a nominal size of 95% passing the mesh size chosen for Series 1 (Table 2);
• One-quarter of the sample (lot) is split out and forms the first subsampling series;
• The remaining material is crushed to a nominal size of 95% passing the mesh size chosen for Series 2 (Table 2);
• One-third of this secondary crushed material is split out and forms the second subsampling series;
• The remaining two fractions are recombined and crushed to a nominal size of 95% passing the mesh size chosen for Series 3 (Table 2);
• The crushed material is split by riffle splitter into two equal subsamples, one of which is split out and forms the third subsampling series;
• The remaining material is crushed to a nominal size of 95% passing the mesh size chosen for Series 4;
• Using a riffle splitter, each of these portions is now split into 32 samples (Figure 2). Each of the produced samples is weighed, pulverised and assayed. Minnitt et al. (2007) recommend using 2 samples for granulometric analysis; these samples are randomly selected from each series.
This approach produces 4 groups of 32 samples. Each group includes samples of the same
nominal particle size and of approximately equal weight.
[Figure: results of an 'MSTE' calibration plotted as Ln(St.Rel.Var. × Ms) versus Ln(dN); the
fitted straight line gives K (exponent of the intercept) = 195.3 and alfa (slope) = 1.1]
On the sampling nomogram, which is a log-log plot of the FSE against the sample mass, the FSE at
a fixed particle size (Eq. 11) plots as a straight line of slope -1. The actual position of the line
depends on the particle size (d_N) and also on the sampling constants (α) and (K); therefore only
one line can be constructed for
each subsampling stage at the given particle size (d_N). Every stage at which the particle
size is reduced (i.e. comminution) lowers the FSE and is therefore represented on the
nomogram by a vertical line, which is extended until it intersects a new straight line of slope -1
corresponding to the new particle size. The combination of the vertical and diagonal segments
corresponding to all stages of the sampling protocol graphically visualises the entire sample
preparation protocol. Each stage contributing to the overall precision error is clearly
represented on the nomogram, and this allows the diagram to be used as an effective tool for
assessment, control and improvement of the sampling procedures.
[Figure: sampling nomogram plotting the Fundamental Sampling Error (relative variance, with
the equivalent relative precision from 0.3% to 316%) against sample mass (1 g to 10,000 g) for
nominal particle sizes of 2 mm, 1 mm, 0.5 mm and 0.1 mm; P. Gy's safety line corresponds to a
precision of approximately 10%]
-1 mm causes a smaller error, less than 10%. The final stage, when a 50 g aliquot is collected
from a 200 g pulp pulverised to -0.1 mm, is also characterised by an FSE of less than 10%. Based
on this nomogram it is obvious that improvement of the sampling protocol should be
focused on optimisation of stage 2, which can easily be achieved by collecting a larger
subsample of approximately 500 g.
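The nomogram logic can also be checked numerically, as in the hedged sketch below: each stage of a protocol is assigned its particle size and retained mass, the stage FSE is computed from Eq. 11 with previously calibrated constants, and the result is compared with a 10% safety line. The constants K = 195.3 and α = 1.1 are the example calibration values shown above; the protocol stages themselves are invented for illustration and are not the protocol discussed in the text.

```python
# A minimal nomogram-style audit of a sampling protocol: for each stage the
# sample mass retained at a given nominal particle size fixes the stage FSE
# (Eq. 11), and the stage variances are summed to give the protocol total.
# K and ALPHA are assumed to have been calibrated beforehand (d_N in cm,
# masses in g); the stages below are purely illustrative.

K, ALPHA = 195.3, 1.1

protocol = [
    # (description,                   d_N_cm, M_S_grams)
    ("primary split after crushing",    0.2,   5000.0),
    ("second split",                    0.1,    500.0),
    ("aliquot from pulverised pulp",    0.01,   150.0),
]

total_var = 0.0
for name, d_N, M_S in protocol:
    var = K * d_N**ALPHA / M_S                 # Eq. 11
    total_var += var
    flag = "OK" if var**0.5 <= 0.10 else "exceeds 10% safety line"
    print(f"{name:32s} FSE = {100 * var**0.5:5.1f}%  ({flag})")

print(f"whole protocol: {100 * total_var**0.5:.1f}% at 1 standard deviation")
```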
Fig. 5. Examples of segregation of rock fragments: (A) segregation of heavy and
lighter fragments at the discharge from a conveyor belt, modified after Pitard (1993); (B)
uneven distribution of fragments in a blast hole cone caused by segregation of the smaller
and heavier fragments (e.g. liberated gold particles or sulphide grains) and their
accumulation at the bottom of the cone
The Grouping-Segregation error represents a complementary part of the Fundamental
Sampling Error (Eq. 13) and has therefore been assigned here to the first group of errors (Eq.
1) because of its direct relationship with the fundamental error. However, in practice this error
depends as much on the sampling protocol, being controlled by the value of the fundamental error
(σ²_FSE), as it is a consequence of the practical implementation of the sampling protocol. For
example, the Grouping-Segregation error can be generated by insufficient homogenisation of the
material prior to sampling.
The Grouping-Segregation error by definition is a function of two factors, grouping factor
( f G ) and segregation factor ( f S ), both being a consequence of the small scale distribution
heterogeneity of the material (Eq. 13).
σ²_GS = f_G f_S σ²_FSE   (13)
The grouping factor in practice characterises the size of the increments (i.e. fragments) making
up the sample, whereas the segregation factor characterises the number of increments. Both of these
factors are dimensionless and cannot be dissociated from each other (Pitard, 1993).
The Grouping-Segregation error cannot be estimated theoretically; however, in practice it can
be minimised if the factors causing this error are well understood. F. Pitard (1993) has pointed
out that the grouping factor can be minimised by taking as many and as small increments as
practically possible, assuming that all aspects of extraction and preparation of these
increments are carried out correctly. The practical implication of this finding is that when
selecting a riffle splitter, preference should be given to the one with more riffles. When
collecting samples by randomly chosen fragments, as a rule of thumb, the sample should be
made of at least 30 increments (Pitard, 1993). The same rationale applies to the choice of a
rotary splitter. To minimise the grouping-segregation error it is necessary to assure that the
sample is collected by at least 30 cuts from the lot.
The minimisation of errors caused by the segregation factor is more difficult than for errors
related to the grouping factor (Pitard, 1993). The only practical approach is to homogenise
the material prior to sampling, which is not always technically possible. It is important to
understand the segregation mechanisms in the proposed sampling operation and adjust the
procedures so that sample extraction will be less affected by inhomogeneous (segregated)
distribution of the sampled particles. For example, to minimise the errors caused by the
segregation factor, the blast hole cone in Figure 5B should be sampled by using a sampling device
which is radial in plan view and by taking several increments positioned at random
around the blast hole pile (Pitard, 2005).
Fig. 6. Examples of delimitation errors: (A) sampling broken ore at the draw point of an
underground stope represents an unresolvable delimitation error issue; (B) sampling
of crushed material (e.g. a blast hole cone) by a scoop with a rounded profile
Extraction error is the result of a sampling tool selectively taking fragments; these errors are
therefore also known as sample recovery errors (Pitard, 1993) because they are caused by
selective sampling systems. This type of error can frequently be observed in geological
exploration and mining geology applications (Fig. 7). One of the most common examples of
extraction error is the preferential caving of soft material, such as clay pods, when drilling
rocks of variable strength. Sampling of blast hole cones using an incorrectly designed auger
drill, which rejects large fragments, is another type of extraction error often occurring in
geological applications. In all these cases the extraction error can cause significant biases of
the analytical results. It is important to note that having the correct equipment does not
guarantee high quality results, because inappropriate use of equipment can also lead to
significant extraction error. The example shown in Figure 7B demonstrates an extraction
error caused by the incorrect use of a riffle splitter which is quickly fed from one side. Such an
approach leads to a disproportional distribution of the fragments and segregation of heavier
particles on one side.
Fig. 7. Examples of Extraction errors: (A) extraction error caused by selective plucking of
mineral grains (e.g. gold grains) from the drill core surface; (B) extraction error caused by
incorrectly used riffle splitter, modified after Pitard (1993)
Preparation errors are changes of the chemical or physical characteristics of the material
caused by its crushing, grinding, pulverising, homogenising, screening, filtering, drying,
packaging and transportation. These errors take place during processing of the samples and
include contamination of the samples, preferential losses and alteration of the sampled material.
For example, in blast holes some portion of the drill cuttings falls back into the hole, which
can be a source of sample biases due to preparation error.
[Figure 8 plot: Cu% assayed in the second laboratory versus Cu% assayed in the first laboratory,
with the 1:1 line shown and several outliers marked]
Fig. 8. Scatter-diagram of the Cu grades determined in the drill core samples and their
duplicates. Cu-Au bearing skarn, Russia. All data, except several outliers, are compactly
distributed along the 1:1 line indicating a good repeatability of the sample grades. Several
erratic results (outliers) have been caused by mixing of sample numbers in the lab
Weighing errors are errors introduced by weightometers and scales. The author has observed a
case where a laboratory had been equipped with state-of-the-art analytical equipment, including
robotic XRF instruments, whereas the sample preparation stage still used very old, outdated and
poorly calibrated scales, entirely eliminating the benefits of having high precision analytical
instruments.
The second group also includes different types of human errors, such as mixed sample
numbers, transcription errors and incorrect instrument readings. These errors, except in the
cases of deliberate fraud, are accidental by nature and can create extremely erratic
values, abnormally high or low in comparison with the true sample grades. When such
extreme values are present, these accidental types of human errors can easily be recognised
by the presence of outliers on scatter-diagrams where sample duplicates are plotted against
the original samples (Fig. 8).
is assessed using a statistical test (Eq. 14). If this condition (Eq. 14) is satisfied, the analytical
results are considered acceptable with regard to accuracy.

|m − μ| ≤ 2 √(σ_L² + S_W²/n)   (14)

The base formula (Eq. 14) can be simplified, as the usual empirical relationship is
σ_L ≈ 2 S_W (CANMET, 1998). Consequently, for a large (n > 10) number of replications the
test becomes (Eq. 15 and Eq. 16):

|m − μ| ≤ 2 σ_L   (15)

|m − μ| ≤ 4 S_W   (16)
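The same-laboratory accuracy test (Eqs. 14–16) is easy to automate, as in the hedged sketch below. Here m and S_W are the mean and standard deviation of the replicate assays of the standard in the batch, μ is its certified value and σ_L is interpreted as the between-laboratory standard deviation quoted with the certificate; this interpretation of σ_L, and the assay values used, are assumptions made for the illustration.

```python
import statistics

def accuracy_ok(replicates, mu, sigma_L):
    """Accuracy test of Eq. 14: the mean of the replicate assays of a
    certified standard (m) should not deviate from the certified value (mu)
    by more than twice the combined standard error."""
    n = len(replicates)
    m = statistics.mean(replicates)
    S_W = statistics.stdev(replicates)          # within-laboratory std. dev.
    tolerance = 2.0 * (sigma_L**2 + S_W**2 / n) ** 0.5
    return abs(m - mu), tolerance, abs(m - mu) <= tolerance

# Illustrative example: five replicate assays of a standard certified at 2.35% Cu.
assays = [2.31, 2.38, 2.36, 2.40, 2.33]
deviation, tol, ok = accuracy_ok(assays, mu=2.35, sigma_L=0.04)
print(f"|m - mu| = {deviation:.3f}, tolerance = {tol:.3f}, acceptable: {ok}")

# For large n (> 10) the test reduces to Eq. 15 (|m - mu| <= 2 * sigma_L) or,
# using sigma_L ~ 2 * S_W (CANMET, 1998), to Eq. 16 (|m - mu| <= 4 * S_W).
```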
When multiple analyses of a certified standard have been made in the same laboratory, the
results can be used for estimation of the analytical precision of that laboratory. Analytical
precision is acceptable if the results of the replicate analyses of the certified standards assayed
in this laboratory satisfy the statistical test (Eq. 17) (ISO Guide 33):

χ²_(n−1); 0.95 – critical value of the 0.95 quantile (α = 0.05) of the (χ²) distribution at (n − 1)
degrees of freedom, where (n) is the number of replicate analyses of the standard. In
practice, at least 3 repeat analyses of the certified standards should be available for this test
(CANMET, 1998).
b. Statistical tests for assessing performance of the standard samples: different laboratories
case
When exploration samples and the standards inserted into the sample batches have been analysed
in several (p) different laboratories, the overall accuracy of the analytical results can be
tested (Kane, 1992; ISO Guide, 1989; CANMET, 1998) using the following statistical
condition (Eq. 18):
|m − μ| ≤ 2 √[(S_LM² + S_W²/k) / p]   (18)

where:
m - arithmetic mean of the replicate analyses of this certified standard sample in the assay
batch;
S W - estimated within-laboratory standard deviation of the replicate analyses of the
standard samples;
S LM - estimated between-laboratory standard deviation of the replicate analyses of the
standard samples;
k = n/p – the ratio of the total number of replicate analyses (n) of the certified standards to the
number of laboratories (p) participating in the Round Robin test.
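A similarly hedged sketch of the round-robin accuracy test (Eq. 18) is given below; it assumes that the within-laboratory (S_W) and between-laboratory (S_LM) standard deviations have already been estimated from the replicate assays (for example by a one-way analysis of variance), and the numbers used are illustrative only.

```python
def round_robin_accuracy_ok(m, mu, S_W, S_LM, n, p):
    """Accuracy test of Eq. 18 for a standard assayed in p laboratories.
    m    - grand mean of all n replicate assays of the certified standard,
    mu   - certified value,
    S_W  - within-laboratory standard deviation,
    S_LM - between-laboratory standard deviation,
    k    - n / p, average number of replicates per laboratory."""
    k = n / p
    tolerance = 2.0 * ((S_LM**2 + S_W**2 / k) / p) ** 0.5
    return abs(m - mu) <= tolerance

# Illustrative example: 12 assays of a 3.10 g/t Au standard from 4 laboratories.
print(round_robin_accuracy_ok(m=3.06, mu=3.10, S_W=0.08, S_LM=0.05, n=12, p=4))
```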
The assays of the certified standards obtained from different laboratories can also be used for
estimation of the analytical precision (Eq. 19),

where:
k = n/p – the ratio of the total number of replicate analyses (n) of the certified standards to the
number of laboratories (p) participating in the Round Robin test;
χ²_p(k−1); 0.95 – critical value of the 0.95 quantile (α = 0.05) of the (χ²) distribution at p(k − 1)
degrees of freedom.
Inter-laboratory precision can be assessed indirectly using the inequality (Eq. 20). Precision is
considered satisfactory if this inequality holds:

(S_W² + k S_LM²) / (σ_C² + σ_L²) ≤ χ²_(p−1); 0.95 / (p − 1)   (20)
χ²_(p−1); 0.95 – critical value of the 0.95 quantile (α = 0.05) of the (χ²) distribution at (p − 1)
degrees of freedom.
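The inter-laboratory precision check of Eq. 20 can be coded in the same way. The sketch below follows the reconstruction of Eq. 20 given above and interprets σ_C and σ_L as the standard deviations quoted with the certified reference material; that interpretation is an assumption, as their definitions are not preserved in the text. The chi-square critical value is taken from scipy.

```python
from scipy.stats import chi2

def interlab_precision_ok(S_W, S_LM, sigma_C, sigma_L, n, p):
    """Indirect inter-laboratory precision test (Eq. 20): the observed
    variance term (S_W**2 + k * S_LM**2) is compared with the variance
    quoted for the certified reference material (sigma_C**2 + sigma_L**2),
    scaled by the chi-square critical value at (p - 1) degrees of freedom."""
    k = n / p
    lhs = (S_W**2 + k * S_LM**2) / (sigma_C**2 + sigma_L**2)
    rhs = chi2.ppf(0.95, df=p - 1) / (p - 1)
    return lhs <= rhs

# Illustrative example with 4 laboratories and 12 assays in total.
print(interlab_precision_ok(S_W=0.08, S_LM=0.05, sigma_C=0.05, sigma_L=0.07,
                            n=12, p=4))
```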
c. Statistical tests for assessing performance of the standard samples: single assay case
Repeat assays of certified standards may not be available in a single analytical batch, which
often contains only one standard sample. In such cases the above-mentioned statistical tests
are inapplicable and the decision on analytical accuracy has to be made using a single value
(X) of the certified standard included in the analysed batch. Decisions on the accuracy of the
method then need to be based on a different statistical test (Eq. 21) which uses the single assayed
value of the certified standard (Kane, 1992; CANMET, 1998):

|X − μ| ≤ 2 σ_C   (21)

where (X) is the single assayed value of the standard, (μ) is its certified value and (σ_C) is the
standard deviation associated with the certified value.
Systematic drifts (Fig. 9E) of the assayed standard values usually indicate a possible
instrumental drift. Alternatively, this can also be caused by degradation of the standard
samples. The author of the current paper is familiar with cases where the characteristics of the
standard samples degraded relative to their certified values because of inappropriate storage
conditions: the standards were kept in large jars and were not protected against vibration from
operating equipment.
[Figure 9 panels: schematic control charts of standard grade (%) versus order of processing of
the standards, each showing the certified mean and ±2 SD limits; panel annotations include
'outliers, usually transcription error' (B), 'uncalibrated analytical equipment – biased assays' (C),
'rapid change in dispersion of the standard sample assays' (D) and 'trend of the standard sample
assayed values' (E)]
Fig. 9. Schematic diagrams showing quality control pattern recognition method (Abzalov,
2008): (A) Accurate data, statistically valid distribution of the standard values; (B) Presence
of ‘outliers’ suggesting transcription errors; (C) Biased assays; (D) Rapid decrease in data
variability indicating for a possible data tampering; (E) Drift of the assayed standard values
Blank samples (i.e. material with a very low grade of the metal of interest) are usually
inserted after high grade mineralisation samples. The main purpose of using blanks is to
monitor the laboratory for possible contamination of samples, which is mainly caused by poor
housekeeping and insufficiently thorough cleaning of equipment. Blank assays are also
presented on a grade versus order of analysis diagram (Fig. 10); if equipment has not
been properly cleaned, the blank samples will be contaminated, which is reflected on the
diagram as increased values of the metal of interest. The case presented in Figure 10
shows that the laboratory procedures degraded during the course of the project: after
approximately blank number 150, the blank samples started to systematically indicate
contamination by base metals derived from the processed Ni-Cu sulphide samples.
Fig. 10. Cu grade of blank samples plotted versus order of their analysis, Ni-Cu project,
Australia
variance of the assayed values normalised to the means of the corresponding pairs of the
data.
Different statistical methods are available for estimating the precision error from paired
data (Garrett, 1969; Thompson & Howarth, 1978; Bumstead, 1984; Shaw, 1997; Francois-
Bongarcon, 1998; Sinclair & Bentzen, 1998; Sinclair & Blackwell, 2002). All these methods
have been reviewed by the author (Abzalov, 2008) and compared by applying them to the same
sets of duplicated pairs of samples collected at operating mines and mining projects. The study
(Abzalov, 2008) concurs with the suggestion of Stanley and Lawie (2007) that the use of the
average coefficient of variation (Eq. 22 and 23) as the universal measure of relative precision
error in geological applications is appropriate.
CV% = 100% × √[(1/N) Σ (σ_i²/m_i²)] = 100% × √[(2/N) Σ ((a_i − b_i)²/(a_i + b_i)²)]   (23)
1 For consistency with other measurements, the coefficient of variation (CV) (one standard deviation divided
by the mean) is expressed as a percentage (CV%).
2 When a single pair of data is available, the variance formula σ² = Σ(x_i − x̄)²/(N − 1) can be transformed as follows:
σ² = [(a_1 − (a_1 + b_1)/2)² + (b_1 − (a_1 + b_1)/2)²] / (2 − 1) = ((2a_1 − a_1 − b_1)/2)² + ((2b_1 − a_1 − b_1)/2)² = (a_1 − b_1)²/4 + (b_1 − a_1)²/4 = (a_1 − b_1)²/2.
Taking the square root of the variance we arrive at the final formula for the standard deviation of the paired
data: Standard Deviation = √σ² = |a_1 − b_1| / √2
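Eq. 23 translates directly into a few lines of code. The sketch below is a minimal illustration; the duplicate pairs are hypothetical Cu% assays used only to show the calculation.

```python
import math

def cv_percent(pairs):
    """Average coefficient of variation of duplicate pairs (Eq. 23):
    CV% = 100 * sqrt( (2/N) * sum( (a_i - b_i)^2 / (a_i + b_i)^2 ) )."""
    n = len(pairs)
    s = sum((a - b) ** 2 / (a + b) ** 2 for a, b in pairs)
    return 100.0 * math.sqrt(2.0 * s / n)

# Illustrative duplicate pairs (original assay, duplicate assay), e.g. Cu%:
duplicates = [(1.20, 1.32), (0.85, 0.79), (2.40, 2.55), (0.30, 0.27), (5.10, 4.70)]
print(f"CV% = {cv_percent(duplicates):.1f}%")
```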
Estimator | Formula                                                        | Reference      | Comments
AMPD      | AMPD = 100% × |a_i − b_i| / [(a_i + b_i)/2];  AMPD = √2 × CV%  | Bumstead, 1984 | This estimator is also known as MPD (Roden and Smith, 2001) or ARD (Stanley & Lawie, 2007)
HARD      | HARD = 100% × |a_i − b_i| / (a_i + b_i);  HARD = (√2/2) × CV%  | Shaw, 1997     | HARD is simply half of the corresponding AMPD

where (a_i) is the original sample and (b_i) is its duplicate
Table 3. Measures of Relative Error Based on Absolute Difference Between Duplicated Data
Pairs
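The estimators of Table 3 can be computed from the same pairs. The sketch below reuses the hypothetical duplicates from the CV% example above and shows that AMPD is twice HARD; their relation to CV% can then be compared with the √2 factors quoted in the table.

```python
def ampd_percent(pairs):
    """Average absolute mean paired difference (Table 3):
    AMPD = (100%/N) * sum( |a_i - b_i| / ((a_i + b_i)/2) )."""
    return 100.0 / len(pairs) * sum(abs(a - b) / ((a + b) / 2.0) for a, b in pairs)

def hard_percent(pairs):
    """Half absolute relative difference: HARD is half of AMPD (Table 3)."""
    return 0.5 * ampd_percent(pairs)

duplicates = [(1.20, 1.32), (0.85, 0.79), (2.40, 2.55), (0.30, 0.27), (5.10, 4.70)]
print(f"AMPD = {ampd_percent(duplicates):.1f}%, HARD = {hard_percent(duplicates):.1f}%")
```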
b. Reduced Major Axis (RMA)
This is a linear regression technique which takes into account errors in two variables,
original samples and their duplicates (Sinclair & Bentzen, 1998; Sinclair & Blackwell, 2002).
This technique minimises the product of the deviations in both the X- and Y- directions.
This, in effect, minimises the sum of the areas of the triangles formed by the observations
and fitted linear function (Fig. 11). Calculation of the RMA parameters is explained in detail
in (Abzalov, 2008) and only briefly summarised below.
The general form of the reduced major axis (RMA) is as follows (Eq. 24):
b_i = W_0 + W_1 a_i ± e   (24)

where (a_i and b_i) are matching pairs of the data: (a_i) denotes the primary samples, plotted
along the X axis, and (b_i) the duplicate data, plotted along the Y axis. (W_0) is the Y-axis
intercept of the RMA linear model, (W_1) is the slope of the model relative to the X-axis, and
(e) is the standard deviation of the data points around the RMA line.
The parameters (W_0, W_1 and e) are estimated from the set of matching pairs of data
(a_i and b_i), plotted along the X-axis and Y-axis, respectively. The slope of the RMA line (W_1)
is estimated as the ratio of the standard deviations of the (a_i and b_i) values (Eq. 25):
W_1 = St.Dev(b_i) / St.Dev(a_i)   (25)
The intercept of the RMA model with the Y axis is estimated from the means of the data and the
slope (Eq. 26): W_0 = Mean(b_i) − W_1 × Mean(a_i).
The RMA model allows quantifying the errors between matching data pairs (Eq. 27 – Eq. 30).
Dispersion of the data points about the RMA line (S_RMA) is estimated using Eq. 27.
Fig. 11. Scatter-diagram and the RMA model (thick line) fitted to paired Fe grades of
blast hole samples (Abzalov, 2008). The 1:1 line (fine) is shown for reference. The grey triangle
shows an area formed by projection of a data point onto the RMA line. The RMA technique
minimises the sum of the areas of all such triangles
The error of the Y-axis intercept (S_0) is estimated using Eq. 28, and the error of the slope
(S_SLOPE) is estimated using Eq. 29:

S_0 = St.Dev(b_i) × √{[(1 − r)/N] × [2 + (Mean(a_i)/St.Dev(a_i))² × (1 + r)]}   (28)

S_SLOPE = (St.Dev(b_i)/St.Dev(a_i)) × √[(1 − r²)/N]   (29)
where (r) is the correlation coefficient between the (a_i and b_i) values, (N) is the number of the
data pairs and (Var) are the variances of the (a_i and b_i) values.
The relative precision error (P_RMA(%)) can be estimated from the RMA model (Eq. 30):

P_RMA(%) = 100 × √(S²_RMA / 2) / [(Σ a_i + Σ b_i) / 2N]   (30)
3 For consistency with CV% and other estimators discussed in this section the ( PRMA (%) ) value is
estimated at 1 standard deviation and reported as per cents (i.e., multiplied by 100 in Eq. 30).
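A compact implementation of the RMA fit is sketched below. It follows Eqs. 24, 25, 28 and 29 as reconstructed above, uses the standard RMA property that the line passes through the centroid of the data to obtain the intercept (an assumption, since the intercept equation itself is not preserved in the text), and omits S_RMA (Eq. 27), whose formula is likewise missing. The paired values are hypothetical.

```python
import math

def rma_fit(a, b):
    """Reduced Major Axis fit of duplicates b against originals a.
    Returns intercept W0, slope W1, their standard errors and the
    correlation coefficient r."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a) / (n - 1))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b) / (n - 1))
    r = (sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
         / ((n - 1) * sd_a * sd_b))
    w1 = sd_b / sd_a                                              # Eq. 25
    w0 = mean_b - w1 * mean_a                    # line through the centroid (assumed)
    s_slope = (sd_b / sd_a) * math.sqrt((1 - r**2) / n)           # Eq. 29
    s0 = sd_b * math.sqrt((1 - r) / n * (2 + (mean_a / sd_a) ** 2 * (1 + r)))  # Eq. 28
    return w0, w1, s0, s_slope, r

# Hypothetical duplicate assays (original, duplicate):
a = [1.20, 0.85, 2.40, 0.30, 5.10, 3.30, 0.60]
b = [1.32, 0.79, 2.55, 0.27, 4.70, 3.50, 0.70]
w0, w1, s0, s_slope, r = rma_fit(a, b)
print(f"b = {w0:.3f} + {w1:.3f} * a  (r = {r:.3f})")
```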
Another useful tool for analysis of paired data is the Relative Difference Plot (RDP), which is
based on the relative differences (RD%) between the paired assays (Abzalov, 2008). The
calculated RD(%) values are arranged in increasing order of the average grades of the data
pairs, and the RD(%) values are then plotted against the sequential numbers of the pairs, whereas
the calculated average grades of the data pairs are shown on the secondary y-axis (Fig. 12). The
link between the average grades of the data pairs and their sequential numbers is established by
adding a 'Calibration curve' to the RDP diagram (Fig. 12). RD(%) values usually exhibit a
large range of variation; therefore, interpretation of this diagram can be facilitated by
applying a moving window technique to smooth the initially calculated RD(%) values of the data
pairs (Fig. 12).
[Figure 12 plot: RD(%) of the data pairs versus their sequential number, with mean RD% = −21.6]
Fig. 12. Relative Difference Plot (RDP) showing Cu(%) grades of duplicated drill core
samples assayed in different labs (Abzalov, 2008). Open symbols (diamond) connected by
fine tie-lines are RD(%) values calculated from matching pairs of data (i.e., original sample
and duplicate). Average RD(%) value (thick dashed line) and + 2SD values (fine dashed
lines) are shown for reference. The solid line is a smoothed line of the RD(%) values
calculated using a moving windows approach. The ‘Calibration Curve’ sets the relationship
between RD% values on the primary y-axis, the sequential number of the data pairs plotted
along the x-axis and the average grades of the data pairs plotted on the secondary y-axis.
For example point ‘A’ (sequential number of the data pairs is 100) is characterised by RD%
value equal to -16% and average grade of that pair of data is 1.1% Cu
The example in Figure 12 is based on data collected from a massive Cu-sulphide project in
Russia. Approximately 140 core duplicate samples have been analysed in an external
reputable laboratory as part of project due diligence. The results, when plotted on an RDP
diagram (Fig. 12), show that copper assays of low grade samples (Cu < 1.1%) are biased.
Assayed values have significantly underestimated the true copper grade of those samples.
Another feature revealed by this diagram is that low grade samples (Cu < 1.1%) exhibit
excessive precision error which is not shown by higher grade samples (Cu >1.1%). These
findings have triggered a special investigation of the laboratory procedures, with a
particular emphasis on differences between high-grade and low-grade samples.
The RDP diagram can also be used for testing the impact of different factors on the data
precision, for example sample size, core recovery, or depth of sample collection. In that case
the RD% values are arranged according to the factor under investigation. In Figure 13 the
RD% values are plotted against the sequential number of the samples arranged by their
size (i.e. length of the sampling interval). The diagram (Fig. 13) shows that small samples,
less than 0.5 m in length, are characterised by approximately twice as large a precision error
as samples of 1 m in length.
Fig. 13. Cu% grades of the duplicated drill hole samples plotted on the RDP diagram. RD%
values are arranged by length of the samples
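A hedged sketch of how an RDP series can be assembled is shown below: the pairs are ordered by their average grade, RD% is computed for each pair and a moving-window mean is used for smoothing. The sign convention of RD% (original minus duplicate, normalised to the pair mean) and the duplicate values themselves are assumptions made for the illustration.

```python
def rdp_series(pairs, window=5):
    """Build the series behind a Relative Difference Plot from (original,
    duplicate) pairs: mean grades, RD% values and a moving-window average."""
    ordered = sorted(pairs, key=lambda p: (p[0] + p[1]) / 2.0)   # by mean grade
    rd = [100.0 * (a - b) / ((a + b) / 2.0) for a, b in ordered]
    means = [(a + b) / 2.0 for a, b in ordered]                  # 'calibration curve'
    half = window // 2
    smoothed = [sum(rd[max(0, i - half): i + half + 1])
                / len(rd[max(0, i - half): i + half + 1]) for i in range(len(rd))]
    return means, rd, smoothed

# Hypothetical duplicate pairs (original, duplicate), e.g. Cu%:
pairs = [(1.20, 1.32), (0.85, 0.79), (2.40, 2.55), (0.30, 0.27), (5.10, 4.70),
         (0.55, 0.61), (3.30, 3.10), (1.75, 1.70), (0.95, 1.10), (2.10, 2.02)]
means, rd, smooth = rdp_series(pairs)
for i, (m, r, s) in enumerate(zip(means, rd, smooth), start=1):
    print(f"{i:2d}  mean grade {m:5.2f}  RD% {r:7.1f}  smoothed {s:7.1f}")
```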
4. Twin holes
Drilling of twinned holes (i.e., drilling of a new hole, or “twin”, next to an earlier drill hole)
is a traditional technique used in exploration and mining geology for verification of
mineralization grades. The importance of this technique is emphasized in the JORC Code,
which specifically queries if twinned holes have been used for ‘verification of sampling and
assaying’ (JORC, 2004, p.15). A well known example of a (negatively) successful twinned
holes program is the confirmatory drilling at the Busang project of Bre-X Minerals Ltd.
Fraud was revealed by seven twinned holes (Lawrence, 1997). Each of the seven new holes
was drilled only 1.5 m away from supposedly extensively gold mineralized holes, and all
failed to substantiate the originally stated gold values (Lawrence, 1997). Application of the
twinned holes technique is not limited to verification of high-grade intersections; it also
includes verification of historic data (Abzalov, 2009a). Such a need usually occurs where
results from earlier drill holes are causing concerns either by being noticeably different to
other drilling campaigns in the same area, or where poor quality of drilling has been
revealed from previously documented parameters, such as core recovery. In some cases,
historic data might be suspect (e.g., biased) because of suboptimal drilling technology or
inappropriate drilling parameters. For example, air-core drilling of a mineral sands deposit
might be biased, and the issue could be examined by twinning the historic air-core holes
with sonic drill holes.
Verification of earlier drilled holes by twinning them with new holes is a common technique
used by geological due diligence teams when reviewing third party projects (Gilfillan, 1998;
Abzalov, 2009a). Drilling twinned holes becomes particularly important where the reviewed
project is based on historical data that were collected without rigorously applied sampling
quality control procedures, or where the relevant quality assurance/quality control (QAQC)
documentation is lacking. Where bias of historic data has been proven, the data from
twinned holes can be used for quantification of this bias, and integrated into the resource
model using multivariate and/or non-stationary geostatistical methods for correcting the
resource estimates (Abzalov & Pickers, 2005; Abzalov, 2006).
The number and locations of twinned holes, drilling methods, and the tested variables are
selected to accomplish the objectives of the proposed study. For example, if the purpose of
drilling twinned holes is to verify high grade intersections, new holes should be drilled next
to previous holes that reportedly intersected high grade mineralization. Conversely, if the
objective is to test and statistically quantify a possible bias in historic data, the twinned holes
should be drilled in such a manner that assures that a wide range of grades is tested,
including both low- and high- grade mineralization. In this latter case, twinned holes
should be distributed throughout the entire deposit, and the number of twinned holes
should be sufficient for statistical analysis of the data.
It should be remembered that twinned holes should be located as close as possible to the
original holes to minimize the effects of short range variability in the studied variables. A
study undertaken by Abzalov et al. (2007) at the Yandi iron-ore open pit has shown that
CV% of Al2O3 grades estimated from matching pairs of blast holes and reverse circulation
(RC) holes increases from 23.8% to 35.7% when the distance between the twinned holes increases
from 1 m to 10 m. These results suggest that many unsuccessful twinned holes studies could
have failed because twinned holes have been drilled too far apart. Personal experience of the
author indicates that good practice is not to drill matching pairs of holes more than 5 m
apart.
When planning twinned drill holes it is necessary to remember that important decisions
may have to be made based on a limited number of verification holes. Therefore, it is
necessary to ensure that their quantity and quality are sufficient to make a conclusive
decision. It is necessary to ensure that verification holes are of the best practically achievable
quality. This often requires the use of a different drilling technique, commonly a more
expensive one than that used for the previous drilling. For example, air core holes in mineral
sands projects are preferably twinned by sonic drilling, auger holes in bauxite deposits by
diamond core holes, and RC holes in coal deposits by PQ diamond core holes. It is
important to realize that verification holes, even if they are drilled using the most up to date
technology, can still produce biased and/or excessively variable results. For example, the
quality of confirmatory diamond drilling can be marred by poor core recovery, or sonic
drilling can be adversely affected by self-injection of sands into core barrels caused by
lithostatic pressure.
The number of twinned holes depends on the aim of the twinning program and the range of
variation in the studied variables. Where studied variables are characterized by large short
range variability, the number of twinned holes should be sufficient for conclusive statistical
and geostatistical analysis. In the author’s experience this may often be achieved by drilling
a relatively small number of holes. However, special studies that require 20–30 twinned
holes are not uncommon (Abzalov, 2009a).
Prior to comparing assayed grades in the twinned holes it is necessary to compare
geological logs, because the geological features are usually more continuous than grades
(Sinclair and Blackwell, 2002). Twinned holes are initially compared by mineralized
intersections. Comparisons should not be limited to average grades of intersections, but
should also include thicknesses and positioning of the contacts. The latter is particularly
important when down hole contamination of the samples is suspected. More detailed
studies include comparison of twinned holes by matching intervals. Practically achievable
outcomes are often obtained through grouping samples by geological units (Abzalov,
2009a). This approach is particularly useful when twinned holes are drilled through
stratified mineralization or regularly intercalated low- and high-grade zones.
In particular cases, for example when twinned holes are drilled through mineralization
exhibiting gradational zoning, comparison can be made directly between samples where
they are of an equal length; alternatively comparisons may be made between equal length
composites (Abzalov, 2009a). Compositing of samples of small size into composites of a
larger size smoothes away the impact of outliers and converts samples into common length
data, which are necessary for geostatistical resource estimation. In twinned holes analysis,
when comparing variables with high statistical variation, grouping samples to larger
composites is commonly necessary to minimize the noise exhibited by individual samples
(Abzalov, 2009a). It is important to ensure that samples are composited to geological
boundaries.
5. Database
Modern mining operations use a vast amount of different sampling data, coming from drill
hole, trench, surface, grade control and mine workings samples. The total number of
collected samples can be in the hundreds of thousands, supported by hundreds of
QAQC samples, and they all need to be stored together in a relational database, which is used
for storing assayed sample results and for data grouping and analysis using different
selection criteria (Lewis, 2001; Abzalov, 2009b,c). Therefore, quality control of the
analytical data in the mining industry should include procedures for relational database
management.
Good relational databases should not be limited to storing data which are required for
geological modelling and resource estimation. They should also include metadata or “data
about the data”. A vast amount of auxiliary information is usually received from the drilling
contractors, field geologists and analytical laboratories, all of which needs to be systematically
stored in the relational database. S. Long (1998) has suggested a practical way to store all
additional information without making the database tables overly cumbersome. Most of the
important auxiliary information is recorded in the headers of the data sheets received
from the personnel and laboratories undertaking drilling, documentation, sample collection
and assaying. For example, almost all auxiliary information related to analytical techniques
is usually reported in the header of the assay certificate received from the laboratory. Entering
this information into a separate table within the same relational database enables storage of
all key facts related to the analytical laboratory, technique, personnel and dates of analyses.
[Figure 14 flow chart: cut core → sample (1/4 of core, 30–80 cm long) → transportation to the
laboratory → crushing and pulverising (−75 μm) → pulp → aliquot → original and repeat assays;
quality control samples are inserted at the corresponding stages: Duplicate-1 (field duplicate)
from the cut core, Duplicate-2 (lab duplicate) from the pulp, plus blanks and standards]
Fig. 14. Sample preparation flow chart combined with the samples quality control map,
West Musgrave Ni-sulphide project, Australia
Quality control procedures should be shown on the sample preparation flow-chart (Fig. 14).
Such combined diagrams, where the sampling protocol is joined with the sample quality
control map, are useful practical tools helping to implement and administer the QAQC
procedures, assuring that all sampling and preparation stages are properly controlled. It is
also important to note that establishing references between the quality control map and the
sample preparation flow chart helps to better understand the stages monitored by the different
quality control actions, facilitating interpretation of the QAQC results. When the sampling
protocol is changed, this should be documented and the associated QAQC procedures
updated.
Finally, all procedures should be documented, and the personnel responsible for their
implementation and control determined and instructed. It is necessary to assure that the
geological team working at the mine or development project regularly reviews the QAQC
results; otherwise, all good efforts can be wasted if sampling errors are not diagnosed in a
timely manner. The author, after reviewing many different mines, found that the most effective
and practically convenient periodicity for data quality review is when the QAQC results
are checked by the mine geologist with every analytical batch and summary QAQC reports
are prepared for approval by the chief geologist on a monthly basis. The monthly
reports should contain several diagnostic diagrams showing the performance of the reference
materials and duplicates. The reports should also present the calculated precision variances,
which shall be compared with their acceptable levels. The levels of acceptable precision
errors and of deviation of the standards from their certified values should be clearly
determined and documented as part of the QAQC procedures.
duplicates, which in this case are called "rig" duplicates, are collected from the sample
splitting devices built into the drill rigs. These can be rotary, cone or riffle splitters. In the case
of diamond core drilling the field duplicates are represented by another portion of the core. Field
duplicates of blast hole samples should be another sample, taken from the same blast
hole cone as the original sample and following exactly the same procedures.
Coarse reject duplicates consist of material representing the output from crushers. There can
often be more than one type of coarse reject when sample preparation requires several
stages of crushing and splitting. In this case, coarse reject duplicates should be collected for
each stage of crushing and/or grinding followed by sample reduction. Operations using
large pulverisers usually don't have coarse rejects, as the entire collected sample is pulverised to a
fine pulp. In this case, it is extremely important to collect and analyse field duplicates to
understand the overall repeatability of the assayed results.
Same-pulp duplicates should be analysed both in the same laboratory as the original samples
and externally, by sending a certain portion of them to an independent and reputable laboratory.
When collecting sample duplicates it is necessary to remember that they should be identical
to the samples which they are duplicating. Unfortunately this is not always possible, as the
chemical or physical characteristics of the remaining material can be altered after the sample has
been collected, or its amount may simply be insufficient to make a representative duplicate. For
example, if drill core is sampled by cutting half core, it is unwise to collect quarter core as a
duplicate, as this will produce a duplicate sample of half the weight of the
original sample. Such duplicates are likely to produce a larger precision error than that of
the original samples. Using the entire remaining half of the core as a duplicate is also suboptimal
practice, as it compromises the auditability of the data. The problem can be partially resolved by
taking the duplicate using the approach shown in Figure 15.
When all drill sample material is assayed, which is a common case at bauxite mines, the
only possibility for assessing the precision of the drill samples is to use twinned holes (Abzalov, 2009a).
[Figure 15 sketch: the core is cut twice; the routine sample and the field duplicate each represent
approximately half core, and the remaining material is returned to the core tray and retained for
future audits]
Fig. 15. Sketch explaining collection of duplicate drill core samples where the routine
samples represent half of the core cut by diamond saw
It is important to assure that the duplicate samples are representative of the given
deposit, covering the entire range of grade values, mineralisation types and geological units,
and that they provide good spatial coverage of the deposit. It is difficult to achieve the
representativeness of the duplicates when they are collected at random from the sample
batches. The disadvantage of randomly selected duplicates is that most of them will
represent the most abundant type of rocks, usually barren or low-grade mineralisation. The
ore grade and, in particular, high-grade intervals are often poorly represented in randomly
chosen duplicates. They can also be non-representative regarding mineralisation types, their
spatial distribution or grade classes. To meet these two conditions, a dual approach is
recommended: approximately half of the duplicates should be collected by a project
geologist with instructions to ensure that they properly represent all mineralisation styles
and grade ranges, and that they spatially cover the entire deposit. The other half of the
duplicates should be selected and placed at random in each assay batch.
The position of the sample duplicates in a given analytical batch should not be immediately
after their original samples and should not be systematically related to them. It is
important, however, that the original samples and their duplicates are included in the same
analytical batch, which allows them to be used for within-batch precision studies.
Numbering of the duplicate samples should disguise them in the analytical batch. The
duplicate samples should be taken through the same analytical procedure as the original
samples.
graphic presentation of the standard assay results (Fig. 9) and a review of their distribution
patterns. A pattern recognition method is applied for qualitative diagnostics of possible
accuracy issues. When the distribution of the assayed values of the reference standards does not
exhibit systematic drifts or other types of specific patterns indicating possible accuracy errors,
the statistical techniques are applied for quantitative estimation of accuracy (Eqs. 20 – 27).
The obtained statistical estimates should be compared with the certified means and standard
deviations. Only when the assayed values of the standards match their recommended
values can the analytical batch be considered sufficiently accurate.
All obtained data should be stored together in a relational database which is regularly
backed up, assuring transparency of the data handling procedures and their auditability. It
should be remembered that the database is the final storage for a large volume of data, and
therefore quality control of the analytical data in the mining industry is impossible without
accurate management of the data flow and of the database administration procedures.
Mineralisation Type / Deposit                  | Metal                    | Best Practice | Acceptable Practice | Sample type
Gold, very coarse grained and nuggety          | Au (g/t)                 | 20 (?)        | 40                  | Coarse rejects
Gold, coarse to medium grained                 | Au (g/t)                 | 20            | 30                  | Coarse rejects
                                               | Au (g/t)                 | 10            | 20                  | Pulp duplicate
Cu-Mo-Au porphyry                              | Cu (%)                   | 5             | 10                  | Coarse rejects
                                               | Mo (%)                   | 10            | 15                  | Coarse rejects
                                               | Au (g/t)                 | 10            | 15                  | Coarse rejects
                                               | Cu (%)                   | 3             | 10                  | Pulp duplicate
                                               | Mo (%)                   | 5             | 10                  | Pulp duplicate
                                               | Au (g/t)                 | 5             | 10                  | Pulp duplicate
Iron Ore, CID type                             | Fe (%)                   | 1             | 3                   | Field duplicate
                                               | Al2O3 (%)                | 10            | 15                  | Field duplicate
                                               | SiO2 (%)                 | 5             | 10                  | Field duplicate
                                               | LOI (%)                  | 3             | 5                   | Field duplicate
Cu-Au-Fe skarn and iron oxide associated Cu-Au | Cu (%)                   | 7.5           | 15                  | Coarse rejects
                                               | Au (g/t)                 | 15            | 25                  | Coarse rejects
                                               | Cu (%)                   | 5             | 10                  | Pulp duplicate
                                               | Au (g/t)                 | 7.5           | 15                  | Pulp duplicate
Ni-Cu-PGE sulphides                            | Ni (%)                   | 10            | 15                  | Coarse rejects
                                               | Cu (%)                   | 10            | 15                  | Coarse rejects
                                               | PGE (g/t)                | 15            | 30                  | Coarse rejects
                                               | Ni (%)                   | 5             | 10                  | Pulp duplicate
                                               | Cu (%)                   | 5             | 10                  | Pulp duplicate
                                               | PGE (g/t)                | 10            | 20                  | Pulp duplicate
Detrital ilmenite sands                        | Total Heavy Minerals (%) | 5             | 10                  | Field duplicate

Table 4. Best and acceptable levels of the precision errors (CV%) at the mining projects
(Abzalov, 2008)
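As a final practical step, measured CV% values are usually compared with the levels in Table 4. The small helper below illustrates such a check for one deposit style only; the thresholds are copied from Table 4 for Ni-Cu-PGE sulphides and the classification labels are illustrative wording, not quotations from the text.

```python
# Thresholds copied from Table 4 (Ni-Cu-PGE sulphides, Ni) for illustration.
THRESHOLDS = {
    # (deposit style, metal, duplicate type): (best CV%, acceptable CV%)
    ("Ni-Cu-PGE sulphides", "Ni", "pulp duplicate"): (5.0, 10.0),
    ("Ni-Cu-PGE sulphides", "Ni", "coarse rejects"): (10.0, 15.0),
}

def classify_precision(cv_percent, key):
    """Compare a measured CV% with the best/acceptable levels of Table 4."""
    best, acceptable = THRESHOLDS[key]
    if cv_percent <= best:
        return "best practice"
    if cv_percent <= acceptable:
        return "acceptable"
    return "outside acceptable limits - investigate sampling and assaying"

print(classify_precision(7.2, ("Ni-Cu-PGE sulphides", "Ni", "pulp duplicate")))
```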
7. Acknowledgements
Financial support for, and permission to publish, the paper by Rio Tinto Exploration is
gratefully acknowledged. The author would also like to thank A. Faragher and C. Welton for
their critical review of the paper and useful comments.
8. References
Abzalov, M.Z. (2009a). Use of twinned drill - holes in mineral resource estimation.
Exploration and Mining Geology, Vol.18, No.1-4, p.13-23, ISSN 0964-1823
Abzalov, M.Z. (2009b). Principals and Practice of Use of the Relational Databases in the
Mining Industry, Part 1. The AusIMM Bulletin, No.5, (October 2009), p.39-41, ISSN
1034-6775
Abzalov, M.Z. (2009c). Principals and Practice of Use of the Relational Databases in the
Mining Industry, Part 2. The AusIMM Bulletin, No.6, (December 2009), p.57-60, ISSN
1034-6775
Abzalov, M.Z. (2008). Quality control of assay data: a review of procedures for measuring
and monitoring precision and accuracy. Exploration and Mining Geology, Vol.17, No
3-4, p.131-144, ISSN 0964-1823
Abzalov, M.Z. (2007). Granitoid hosted Zarmitan gold deposit, Tian Shan belt, Uzbekistan.
Economic Geology, Vol.102, No.3, p.519-532, ISSN 0361-0128
Abzalov, M.Z. (2006). Localised Uniform Conditioning (LUC): a new approach for direct
modelling of small blocks. Mathematical Geology, Vol.38, No.4, p.393-411, ISSN
0882-8121
Abzalov, M.Z. (1999). Gold deposits of the Russian North East (the Northern Circum
Pacific): metallogenic overview, Proceedings of the PACRIM '99 symposium, pp.701-
714, ISBN 9781875776719, Bali, Indonesia, 10-13 October, 1999, AusIMM,
Melbourne
Abzalov, M.Z. & Bower, J. (2009). Optimisation of the drill grid at the Weipa bauxite deposit
using conditional simulation, 7th International Mining Geology Conference, pp.247 -
251, ISBN 978-1-921522-05-5, Perth, Australia, 17-19 August, 2009, AusIMM,
Melbourne
Abzalov, M.Z. & Both, R.A. (1997). The Pechenga Ni-Cu deposits, Russia: Data on PGE and
Au distribution and sulphur isotope compositions. Mineralogy and Petrology, Vol.61,
No. 1-4, p.119-143, ISSN 0930-0708
Abzalov, M.Z. & Humphreys, M. (2002). Resource estimation of structurally complex and
discontinuous mineralisation using non-linear geostatistics: case study of a
mesothermal gold deposit in northern Canada. Exploration and Mining Geology,
Vol.11, No.1-4, p.19-29, ISSN 0964-1823
Abzalov, M.Z. & Mazzoni, P. (2004). The use of conditional simulation to assess process risk
associated with grade variability at the Corridor Sands detrital ilmenite deposit,
Ore body modelling and strategic mine planning: uncertainty and risk management,
pp.93-101, ISBN 1-920806-22-9, Perth, Australia, 22-24 November, 2004, AusIMM,
Melbourne
Abzalov, M.Z. & Pickers, N. (2005). Integrating different generations of assays using
multivariate geostatistics: a case study. Transactions of Institute of Mining and
Metallurgy. (Section B: Applied Earth Science), Vol.114, No.1, p.B23-B32, ISSN 0371-
7453
Abzalov, M.Z.; Menzel, B.; Wlasenko, M. & Phillips, J. (2007). Grade control at Yandi iron
ore mine, Pilbara region, Western Australia – comparative study of the blastholes
and Reverse Circulation holes sampling, Proceedings of the iron ore conference, pp. 37
– 43, ISBN 978-1-920808-67-5, Perth, Australia, 20-22 August, 2007, AusIMM,
Melbourne
Bartlett, H.E. & Viljoen, R. (2002). Variance relationships between masses, grades, and
particle sizes for gold ores from Witwatersrand. South African Institute of Mining and
Metallurgy Journal, Vol.102, No. 8, p.491-500, ISSN 0038-223X
Bumstead, E.D. (1984). Some comments on the precision and accuracy of gold analysis in
exploration. Proceedings AusIMM, No. 289, p.71-78, ISSN 1034-6783
CANMET. (1998). Assessment of laboratory performance with certified reference materials,
CANMET Canadian Certified Reference Materials Project Bulletin, 5p
De Castilho, M.V.; Mazzoni, P.K.M. & Francois-Bongarcon, D. (2005). Calibration of
parameters for estimating sampling variance, Proceedings Second World Conference on
Sampling and Blending, pp.3-8, ISBN 1-92086-29-6, Sunshine Coast, Queensland,
Australia, 10-12 May, 2005, AusIMM, Melbourne
Francois-Bongarcon, D. (1993). The practise of the sampling theory of broken ore. CIM
Bulletin, Vol.86, No.970, p.75-81, ISSN 0317-0926
Francois-Bongarcon, D. (1998). Error variance information from paired data: applications to
sampling theory. Exploration and Mining Geology, Vol.7, No.1-2, p.161-165, ISSN
0964-1823
Francois-Bongarcon, D. (2005). Modelling of the liberation factor and its calibration,
Proceedings Second World Conference on Sampling and Blending, pp.11-13, ISBN 1-
92086-29-6, Sunshine Coast, Queensland, Australia, 10-12 May, 2005, AusIMM,
Melbourne
Francois-Bongarcon, D. & Gy, P. (2001). The most common error in applying "Gy's formula"
in the theory of mineral sampling, and the history of the liberation factor, In:
Mineral resource and ore reserve estimation – the AusIMM guide to good practice, A. C.
Edwards (Ed.), pp. 67-72, AusIMM, ISBN 1-875776-80-X, Melbourne
Garrett, R.G. (1969). The determination of sampling and analytical errors in exploration
geochemistry. Economic Geology, Vol.64, No.5, p.568-574, ISSN 0361-0128
Gilfillan, J.F. (1998). Testing the data—The role of technical due diligence, Ore reserves and
finance seminar, pp. 33–42, Sydney, Australia, 15 June, 1998, AusIMM, Melbourne
Gy, P. (1982). Sampling of particulate materials, theory and practice, 2nd edition, Developments in
Geomathematics 4, Elsevier, ISBN 0-444-42079-7, Amsterdam, 431p
ISO Guide 33. (1989) Uses of certified reference materials. Standards Council of Canada, 12p
JORC Code (2004). Australasian Code for Reporting of Exploration Results, Mineral Resources
and Ore Reserves, AusIMM, Melbourne, 20p
Kane, J.S. (1992). Reference samples for use in analytical geochemistry: their availability
preparation and appropriate use. Journal of Geochemical Exploration, Vol.44, No.1-3,
p.37-63, ISSN 0375-6742
Lawrence, M.J. (1997). Behind Busang: the Bre-X scandal: Could it happen in Australia?
Australian Journal of Mining, Vol.12, No.134, p.33–50, ISSN 0375-6742
Leaver, M.E.; Sketchley, D.A. & Bowman, W.S. (1997). The benefits of the use of CCRMP's
custom reference materials. Canadian Certified Reference Materials Project MSL No
637, 21st International precious metals conference, pp., ISBN 1-881825-18-3, San
Francisco, 15-18 June, 1997, International Precious Metals Institute, Florida
Lewis, R. W. (2001) Resource database: now and in the future, In: Mineral resource and ore
reserve estimation – the AusIMM guide to good practice, A. C. Edwards (Ed.), pp. 43-48,
AusIMM, ISBN 1-875776-80-X, Melbourne