Chapter 3 Radiation
The term counting statistics covers two things:
1- The statistical analysis needed to process the results of a nuclear counting measurement.
2- A framework for predicting the quantities expected from such measurements.

Effects of statistical fluctuations:
1- Repeated measurements will not be identical; they display some internal variation. This fluctuation has to be quantified and compared with the predictions of a statistical model.
2- For a single measurement, we need to determine the statistical uncertainty and thus estimate the accuracy of that measurement.
Characterization of data
First, assume we have N integer values $x_1, x_2, \ldots, x_N$. These represent N successive readings from a counter used in a radiation measurement.

Sum:

$$\Sigma = \sum_{i=1}^{N} x_i$$

Experimental mean:

$$\bar{x}_e = \frac{\Sigma}{N} = \frac{1}{N}\sum_{i=1}^{N} x_i$$
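As a quick sketch, the sum and experimental mean can be computed for any list of counter readings (the values below are made up for illustration, not the Table 3.1 data):

```python
# Hypothetical counter readings (illustrative only).
readings = [8, 5, 12, 9, 3, 14, 7, 10]

N = len(readings)
total = sum(readings)   # Sum = x_1 + x_2 + ... + x_N
mean = total / N        # experimental mean = Sum / N

print(total, mean)      # -> 68 8.5
```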
Fig.3.1 Distribution function for the data given in Table 3.1 (Knoll 2010, p.66)
Table 3.1 shows a dataset with 20 entries; the maximum value is 14 and the minimum value is 3. Fig. 3.1 shows the corresponding data distribution function F(x):
- The horizontal bars represent the 20 entries from Table 3.1.
- The experimental mean $\bar{x}_e = 8.8$ lies at the center of F(x).
- The amount of fluctuation in the data can be judged from the shape of the data distribution function.
The data distribution function can also be used to calculate the experimental mean:

$$\bar{x}_e = \sum_{x=0}^{\infty} x\,F(x)$$

Sample variance

The sample variance quantifies the amount of fluctuation in the data. To determine it, the first step is to find the residual of each data point, the amount by which it differs from the experimental mean:

$$d_i = x_i - \bar{x}_e$$

The residuals always sum to zero:

$$\sum_{i=1}^{N} d_i = 0$$
From Fig. 3.2 it can be concluded that:
- The data have roughly equal positive and negative residuals, as shown in part (b).
- The squares of the residuals are always positive, as can be seen clearly in part (c).
Fig. 3.2 (a) Plot of the data given in Table 3.1. Corresponding values of the residual $d_i$ are shown in parts (b) and (c). (Knoll 2010, p.68)
Deviation

The deviation $\epsilon_i$ is defined as the amount by which a data point differs from the true mean value $\bar{x}$:

$$\epsilon_i = x_i - \bar{x}$$

Sample variance

- Defined as a measure of the amount of internal scatter in the data.
- Can be calculated as the average value of each deviation after squaring:

$$\sigma^2 = \overline{\epsilon^2} = \frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2$$
This definition is difficult to apply, because the true mean $\bar{x}$ can only be obtained after a very large number of readings. The easiest approach is to use the experimental mean instead, so each term becomes a residual rather than a deviation. Using the experimental mean:

$$s^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x}_e)^2$$
Note: the factor $1/(N-1)$ matters for small N. If N is very large, the sample variance approaches the mean squared residual (or deviation), and the distinction disappears.
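A minimal sketch of the sample variance using the experimental mean and the $1/(N-1)$ factor (illustrative values, not the Table 3.1 data):

```python
# Hypothetical counter readings (illustrative only).
readings = [8, 5, 12, 9, 3, 14, 7, 10]

N = len(readings)
mean = sum(readings) / N                                # experimental mean
s2 = sum((x - mean) ** 2 for x in readings) / (N - 1)   # sample variance s^2

print(s2)
```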
Fig.3.3 Distribution functions for two sets of data with differing amounts of internal fluctuation. (Knoll 2010, p.68)
Note: even if we increase the number of data points from 20 to 40, the value of the sample variance will remain about the same, because it is an absolute measure of the amount of internal scatter in the data.
Another way to calculate the sample variance is to use the data distribution function:

$$s^2 = \sum_{x=0}^{\infty}(x - \bar{x}_e)^2\,F(x) \qquad \text{(a)}$$

which can be expanded to

$$s^2 = \overline{x^2} - (\bar{x}_e)^2 \qquad \text{(b)}$$

Since the form in equation (a) is inconvenient for computation, the expression in equation (b) is normally used instead.
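Both forms can be checked numerically by building the data distribution function F(x) from relative frequencies (again with made-up readings); note that this version of the variance carries a 1/N rather than 1/(N−1) normalization:

```python
from collections import Counter

# Hypothetical counter readings (illustrative only).
readings = [8, 5, 12, 9, 3, 14, 7, 10]
N = len(readings)

# Data distribution function: relative frequency of each value.
F = {x: c / N for x, c in Counter(readings).items()}

mean = sum(x * f for x, f in F.items())                   # mean from F(x)
s2_a = sum((x - mean) ** 2 * f for x, f in F.items())     # equation (a)
s2_b = sum(x * x * f for x, f in F.items()) - mean ** 2   # equation (b)

print(abs(s2_a - s2_b) < 1e-9)   # the two forms agree -> True
```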
We conclude that any dataset can be described by its data distribution function F(x), and that the most important parameters derived from this distribution are the experimental mean and the sample variance.
Statistical Models
Three specific statistical models:
- The Binomial Distribution
- The Poisson Distribution
- The Gaussian (Normal) Distribution
Binomial Distribution
General:
The most general of all statistical models, but rarely used directly in nuclear applications.
Expression:

The probability of counting exactly x successes in n trials, each with success probability p, is:

$$P(x) = \frac{n!}{(n-x)!\,x!}\,p^x\,(1-p)^{n-x}$$

Example: take a success probability of p = 4/6 = 2/3 ≈ 0.667 (the case tabulated in Table 3.2).
If we conduct a total of n = 10 trials, the probability that exactly x of them succeed can be read from Table 3.2 for x between 0 and 10. A plot of this binomial distribution is shown in Fig.3.4; it shows that the most probable number of successes is 7.
Fig. 3.4 A plot of the binomial distribution for p = 2/3 and n = 10 (Knoll 2010, p.72)
Table 3.2 Values of the binomial distribution for the parameters p = 4/6 = 2/3 and n = 10 (Knoll 2010, p.71)
The distribution is normalized:

$$\sum_{x=0}^{n} P(x) = 1$$
The average (mean) value of the distribution is:

$$\bar{x} = \sum_{x=0}^{n} x\,P(x)$$

Substituting the definition of the binomial distribution,

$$P(x) = \frac{n!}{(n-x)!\,x!}\,p^x\,(1-p)^{n-x}$$

gives

$$\bar{x} = pn$$

where p is the probability of success and n is the number of trials.
The predicted variance is:

$$\sigma^2 = \sum_{x=0}^{n}(x - \bar{x})^2\,P(x) = np(1-p)$$

where $\bar{x} = np$, so

$$\sigma^2 = \bar{x}\,(1 - p)$$
Example: suppose n = 10 and p = 0.60.
Predicted mean: $\bar{x} = pn = 6$
Predicted variance: $\sigma^2 = \bar{x}(1-p) = 6 \times 0.4 = 2.4$
Predicted standard deviation: $\sigma = \sqrt{2.4} = 1.549$
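The example can be checked with a short script; `binom_p` is a helper written here for illustration, not a library function:

```python
from math import comb, sqrt

def binom_p(x, n, p):
    """Binomial probability of exactly x successes in n trials."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.60

# Mean and variance computed directly from the distribution.
mean = sum(x * binom_p(x, n, p) for x in range(n + 1))             # = p*n = 6
var = sum((x - mean)**2 * binom_p(x, n, p) for x in range(n + 1))  # = n*p*(1-p) = 2.4
std = sqrt(var)                                                    # about 1.549

print(mean, var, std)
```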
Poisson Distribution
General :
Similar to the binomial distribution but mathematically simpler. When the predicted success probability is small and constant, the binomial distribution reduces to the Poisson distribution.
Uses:
For nuclear counting experiments in which the number of nuclei is large and the observation time is short compared with the half-life.
Expression:

$$P(x) = \frac{(pn)^x\,e^{-pn}}{x!}$$
We saw that the binomial distribution requires two parameters, n and p. In contrast, the Poisson distribution requires only one parameter, $\bar{x} = pn$.
The Poisson distribution is normalized:

$$\sum_{x=0}^{\infty} P(x) = 1$$

Its predicted mean is

$$\bar{x} = \sum_{x=0}^{\infty} x\,P(x) = pn$$

The predicted variance is

$$\sigma^2 = \sum_{x=0}^{\infty}(x - \bar{x})^2\,P(x) = pn = \bar{x}$$

and the predicted standard deviation is

$$\sigma = \sqrt{\bar{x}}$$
Example: when the probability p ≪ 1, we switch to the Poisson distribution. For instance, consider annual vaccinations with a sample group of 1000 people, so n = 1000 trials. The probability that any one person takes the vaccination today is p = 1/365 = 0.00274. Using the Poisson distribution:

$$\bar{x} = pn = 2.74, \qquad \sigma = \sqrt{2.74} = 1.66$$
Fig.3.5 The Poisson distribution for a mean value $\bar{x} = 2.74$ (Knoll 2010, p.75)
For example, the probability of observing exactly x = 3 counts is:

$$P(3) = \frac{(2.74)^3\,e^{-2.74}}{3!} = 0.221$$
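This value can be reproduced directly from the Poisson formula; `poisson_p` is a helper written here for illustration:

```python
from math import exp, factorial

def poisson_p(x, mean):
    """Poisson probability of observing exactly x counts."""
    return mean**x * exp(-mean) / factorial(x)

p3 = poisson_p(3, 2.74)
print(round(p3, 3))   # -> 0.221
```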
Gaussian (Normal) Distribution

Expression:

$$P(x) = \frac{1}{\sqrt{2\pi\bar{x}}}\,\exp\!\left(-\frac{(x-\bar{x})^2}{2\bar{x}}\right)$$

The distribution is normalized:

$$\sum_{x=0}^{\infty} P(x) = 1$$
The Gaussian applies when the mean value is large ($\bar{x}$ greater than about 25–30); in this limit the Poisson distribution tends toward the Gaussian. Both the Poisson and Gaussian models are good for estimating:
1- Statistical properties.
2- Uncertainty.
From Fig. 3.6(a) we can see that the distribution is symmetric about the mean value, so P(x) depends only on the absolute value of the deviation of any value x from the mean, $\epsilon = |x - \bar{x}|$.
Fig.3.6 (a) Discrete Gaussian distribution for a mean value $\bar{x} = 27.4$. (b) Plot of the corresponding continuous form of the Gaussian. (Knoll 2010, p.77)
Because P(x) varies slowly when $\bar{x}$ is large, the distribution can be written as a continuous function of the deviation $\epsilon = x - \bar{x}$:

$$G(\epsilon) = \frac{1}{\sqrt{2\pi\bar{x}}}\,\exp\!\left(-\frac{\epsilon^2}{2\bar{x}}\right)$$
Fig.3.7 Comparison of the discrete and continuous forms of the Gaussian distribution
For this distribution the variance is $\sigma^2 = \bar{x}$, so $\sigma = \sqrt{\bar{x}}$. A new variable t is introduced here, defined as the ratio $t \equiv \epsilon/\sigma$, so that the Gaussian can be written with respect to this ratio. Requiring $G(t)\,dt = G(\epsilon)\,d\epsilon$ gives

$$G(t) = G(\epsilon)\,\frac{d\epsilon}{dt} = \sigma\,G(\epsilon) = \frac{1}{\sqrt{2\pi}}\,e^{-t^2/2}$$
Application A: Check the Counting System to See whether Observed Fluctuations are consistent with expected statistical fluctuation.
Counting laboratories are subjected to quality-control checks on an ongoing basis, typically once a month. In this procedure, a set of 20–50 records from the detector is read using the standard procedure, and any abnormal amount of fluctuation can be spotted, which helps to find defects in the counting system.
From Fig.3.9 we can note:
- On the experimental side (left), we have N independent measurements of the same physical quantity.
- $\bar{x}_e$ is the value at the center of the distribution.
- $s^2$ (the sample variance) gives the amount of fluctuation in the data.
- Setting the model mean equal to the experimental mean, $\bar{x} = \bar{x}_e$, fully specifies the statistical model; P(x) represents the Poisson or Gaussian distribution.
- The data distribution function F(x) should then approximate P(x). When both are plotted, we can compare the shape and amplitude of the two distributions.
Fig.3.9 An illustration of Application A of counting statistics: the inspection of a set of data for consistency with a statistical model. (Knoll 2010, p.80)
To compare the two functions we first need one parameter, the mean value. Since this parameter is made equal for the two functions by construction, we must look at another parameter. This is done using the variance: we take the predicted variance and compare it with the sample variance. The two should agree if the internal fluctuation in the data is purely statistical.
Fig. 3.10 A direct comparison of the experimental data from Table 3.1 with the predictions of a statistical model (a Poisson distribution with mean value equal to 8.8). (Knoll 2010, p.81)
For the data of Table 3.1, Fig.3.10 shows the data distribution function. The experimental mean is 8.8, which is not very large, so we use the Poisson rather than the Gaussian distribution (the Poisson applies to discrete values of x). The sample variance is $s^2 = 7.36$, while the predicted variance is $\sigma^2 = \bar{x} = 8.8$ (because the model is Poisson). From these values we might conclude that the data show slightly less fluctuation than predicted. But since we have only limited data and the two values are close, we need a more quantitative test: the chi-squared test. The chi-squared statistic is closely related to the sample variance:
$$\chi^2 = \frac{1}{\bar{x}_e}\sum_{i=1}^{N}(x_i - \bar{x}_e)^2 = \frac{(N-1)\,s^2}{\bar{x}_e}$$
Table 3.3 Portion of a Chi squared Distribution table (Knoll 2010, p 82).
In the table above, p represents the probability that a random sample from a true Poisson distribution would have a larger value of $\chi^2$ than the specific value shown in Table 3.3. A very low probability (p < 0.02) indicates abnormally large fluctuations in the data, while a very high probability (p > 0.98) indicates abnormally small fluctuations.
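Using the summary values quoted above for the Table 3.1 dataset (N = 20, s² = 7.36, mean = 8.8), the chi-squared statistic works out as:

```python
# Summary values for the Table 3.1 dataset quoted in the text.
N, s2, mean = 20, 7.36, 8.8

chi2 = (N - 1) * s2 / mean   # chi^2 = (N-1) * s^2 / mean
print(round(chi2, 2))        # -> 15.89
```

This value would then be compared against a chi-squared table (such as Table 3.3) with ν = N − 1 = 19 degrees of freedom.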
Fig.3.11 A plot of the chi-squared distribution. For each curve, p gives the probability that a random sample of N numbers from a true Poisson distribution would have a larger value of $\chi^2$ than that of the ordinate. For data for which the experimental mean is used to calculate $\chi^2$, the number of degrees of freedom is ν = N − 1. (Knoll 2010, p.82)
From Fig.3.13: the left-hand side represents the experimental data, while the right-hand side represents the statistical model. Since we have only a single measurement x, we assume that $\bar{x} = x$. The expected sample variance of the statistical model is then

$$\sigma^2 = \bar{x} = x, \qquad \sigma = \sqrt{x}$$

provided the model is Poisson or Gaussian, because we have only one measurement.
Fig.3.13 A graphical display of error bars associated with experimental data.( Knoll 2010, p.85)
From the above, if we quote the uncertainty of a single measurement of 100 counts as $x \pm \sigma = 100 \pm 10$, the interval contains the true mean with a probability of 68%. To reach 99% we use $x \pm 2.58\sigma = 100 \pm 25.8$.
The fractional standard deviation of a simple counting measurement is $\sigma/x = 1/\sqrt{x}$. If we record 100 counts, the fractional standard deviation is 10%; we can reduce it to 1% by raising the counts to 10,000. All of the above holds for the general case; for nuclear counting we apply $\sigma = \sqrt{x}$. We cannot associate the standard deviation with the square root of any quantity that is not a directly measured number of counts. In particular, the association does not apply to:
1- Counting rates.
2- Sums or differences of counts.
3- Averages of independent counts.
4- Any derived quantity.
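A small sketch of the $1/\sqrt{x}$ scaling of the fractional standard deviation:

```python
from math import sqrt

def frac_std_dev(counts):
    """Fractional standard deviation sigma/x = 1/sqrt(x) for a raw count x."""
    return sqrt(counts) / counts

print(frac_std_dev(100))     # -> 0.1  (10%)
print(frac_std_dev(10_000))  # -> 0.01 (1%)
```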
Error Propagation
If we assume the measurements record the expected raw counts and follow a Gaussian distribution, only one parameter (the mean) is needed, since $\sigma = \sqrt{\bar{x}}$. Now suppose instead that the counts were recorded using a background-free detector system that registers only half the quanta emitted by the source, i.e. the detector efficiency is 50%. To obtain the correct number of quanta leaving the source, each count must be multiplied by 2, and if this is repeated many times we get a new distribution. The new distribution still has a Gaussian shape, but its standard deviation can no longer be obtained as the square root of the mean value; two parameters are now needed, the mean value and the standard deviation. The general continuous Gaussian distribution is:

$$G(x)\,dx = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(x-\bar{x})^2}{2\sigma^2}\right)dx$$

or, in terms of $t = (x - \bar{x})/\sigma$:

$$G(t)\,dt = \frac{1}{\sqrt{2\pi}}\,e^{-t^2/2}\,dt$$
Error propagation:

If a derived quantity u is calculated from independently measured counts x, y, z, ..., its uncertainty is given by:

$$\sigma_u^2 = \left(\frac{\partial u}{\partial x}\right)^2 \sigma_x^2 + \left(\frac{\partial u}{\partial y}\right)^2 \sigma_y^2 + \left(\frac{\partial u}{\partial z}\right)^2 \sigma_z^2 + \cdots$$

This equation is known as the error propagation formula. It holds when x, y, and z are independent, and it can be used throughout nuclear measurements.
The use of this equation will become clear through some simple cases.

Case # 1: Sums or differences of counts.

$$u = x + y \qquad \text{or} \qquad u = x - y$$

$$\frac{\partial u}{\partial x} = 1, \qquad \frac{\partial u}{\partial y} = \pm 1$$

so

$$\sigma_u^2 = \sigma_x^2 + \sigma_y^2, \qquad \sigma_u = \sqrt{\sigma_x^2 + \sigma_y^2}$$
This case applies when we have counts from a radioactive source and need to correct them by subtracting the background count: net counts = total counts − background counts, i.e. u = x − y.
Example:

If the total counts are x = 1000 and the background counts are y = 500, the net count is u = 500.

$$\sigma_x = \sqrt{1000} = 31.62, \qquad \sigma_y = \sqrt{500} = 22.36$$

$$\sigma_u = \sqrt{\sigma_x^2 + \sigma_y^2} = \sqrt{1000 + 500} = 38.73$$
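The background-subtraction example as a script:

```python
from math import sqrt

total, background = 1000, 500

net = total - background              # u = x - y = 500
sigma_net = sqrt(total + background)  # sigma_u = sqrt(sigma_x^2 + sigma_y^2) = sqrt(x + y)

print(net, round(sigma_net, 2))       # -> 500 38.73
```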
Case # 2: Multiplication or division by a constant.

$$u = Ax \;\Rightarrow\; \sigma_u = A\,\sigma_x$$

Similarly,

$$v = \frac{x}{B} \;\Rightarrow\; \sigma_v = \frac{\sigma_x}{B}$$

From this we conclude that multiplying or dividing a value by a constant does not change its relative error.

Example: if x counts are recorded over a time t that is known with negligible error, the counting rate is r = x/t. With x = 1220 counts and t = 5 s:

$$r = \frac{1220}{5} = 244\ \mathrm{s^{-1}}, \qquad \sigma_r = \frac{\sqrt{1220}}{5} \approx 7.0\ \mathrm{s^{-1}}$$
Case # 3: Multiplication of two counts, u = xy.

$$\frac{\partial u}{\partial x} = y, \qquad \frac{\partial u}{\partial y} = x$$

$$\sigma_u^2 = y^2\sigma_x^2 + x^2\sigma_y^2$$

Dividing both sides by $u^2 = x^2 y^2$:

$$\left(\frac{\sigma_u}{u}\right)^2 = \left(\frac{\sigma_x}{x}\right)^2 + \left(\frac{\sigma_y}{y}\right)^2$$
Case # 4: Division, u = x/y.

$$\frac{\partial u}{\partial x} = \frac{1}{y}, \qquad \frac{\partial u}{\partial y} = -\frac{x}{y^2}$$

$$\sigma_u^2 = \left(\frac{1}{y}\right)^2 \sigma_x^2 + \left(\frac{x}{y^2}\right)^2 \sigma_y^2$$

Dividing by $u^2 = x^2/y^2$ gives the same result as for multiplication:

$$\left(\frac{\sigma_u}{u}\right)^2 = \left(\frac{\sigma_x}{x}\right)^2 + \left(\frac{\sigma_y}{y}\right)^2$$
Example:

Calculate the ratio of two source activities from independent counts (neglecting the background).
- Counts from source 1: $N_1 = 16265$
- Counts from source 2: $N_2 = 8192$
- Activity ratio:

$$R = \frac{N_1}{N_2} = \frac{16265}{8192} = 1.985$$

$$\left(\frac{\sigma_R}{R}\right)^2 = \left(\frac{\sigma_{N_1}}{N_1}\right)^2 + \left(\frac{\sigma_{N_2}}{N_2}\right)^2 = \frac{1}{N_1} + \frac{1}{N_2} = 1.835\times 10^{-4}$$

$$\frac{\sigma_R}{R} = 0.0135, \qquad \sigma_R = 1.985 \times 0.0135 \approx 0.027$$
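The activity-ratio example, verified numerically:

```python
from math import sqrt

N1, N2 = 16265, 8192

R = N1 / N2                  # activity ratio
frac_var = 1 / N1 + 1 / N2   # (sigma_R / R)^2, since sigma_N^2 = N for raw counts
sigma_R = R * sqrt(frac_var)

print(round(R, 3), round(sqrt(frac_var), 4))   # -> 1.985 0.0135
```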
Case # 5: Sums of counts and the average of independent counts.

If $u = \Sigma = x_1 + x_2 + \cdots + x_N$, the error propagation formula gives

$$\sigma_\Sigma^2 = \sigma_{x_1}^2 + \sigma_{x_2}^2 + \cdots + \sigma_{x_N}^2$$

But each $\sigma_{x_i}^2 = x_i$, so

$$\sigma_\Sigma^2 = x_1 + x_2 + \cdots + x_N = \Sigma, \qquad \sigma_\Sigma = \sqrt{\Sigma}$$

The mean of the N counts is $\bar{x} = \Sigma/N$, and since any typical single count x would not differ greatly from the mean, $\sigma_x \approx \sqrt{\bar{x}}$. The expected standard deviation of the mean is

$$\sigma_{\bar{x}} = \frac{\sigma_\Sigma}{N} = \frac{\sqrt{\Sigma}}{N} = \sqrt{\frac{\bar{x}}{N}} = \frac{\sigma_x}{\sqrt{N}}$$

Based on N independent counts, the expected error of the mean is therefore smaller by a factor of $\sqrt{N}$ than that of any single measurement. Thus, if we aim to improve the statistical precision of a measurement by a factor of 2, we must invest 4 times the initial counting time.
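The $\sqrt{N}$ improvement can be illustrated with a quick sketch: averaging N counts of roughly equal mean shrinks the expected error of the mean by $\sqrt{N}$, so halving the error takes four times the data.

```python
from math import sqrt

mean_single = 100                 # typical single count (illustrative value)
sigma_single = sqrt(mean_single)  # expected error of one count: 10

for n in (1, 4, 16):
    sigma_mean = sigma_single / sqrt(n)  # expected error of the mean of n counts
    print(n, sigma_mean)                 # error halves each time n quadruples
```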