
Software Random Number Generation Based on Race Conditions ∗

Adrian Coleşa Radu Tudoran Sebastian Bănescu


Technical University of Cluj-Napoca

Abstract

The paper presents a new software strategy for generating true random numbers: several threads are created and left to compete, unsynchronized, for a shared variable, whose value is read-modified-updated by each thread repeatedly. The generated sequence of random numbers consists of the final values of the shared variable.

Our strategy is based on the functionality of the operating system's thread scheduler. Different values of the shared variable are obtained because the concurrent threads are preempted at different moments in their execution. We identified several software and hardware factors that make the scheduler generate context switches at unpredictable moments: the execution environment, cache misses, the instruction execution pipeline and the imprecision of the hardware clock used to generate timer interrupts.

We implemented the strategy on the x86 architecture running the Linux operating system. The random number sequences obtained pass over 90% of the NIST tests.

1. Introduction

In the last few years there has been an increasing interest in random number generators (RNGs), since they are exhaustively exploited in cryptography [3], [11]. The strong dependency between RNGs and applications that require secure data can be seen in the successful attack on the communication mechanism of the Netscape V2.0 browser [6].

There is a lot of ongoing research on improving and obtaining new methods to generate random numbers, based on both software [5] and hardware [4] strategies. Pseudo-random number generators (PRNGs) use a seed and an algorithm in order to produce an apparently random sequence of numbers; they have an internal state, which determines the next output [7]. True random number generators (TRNGs) produce a sequence of numbers based on some physical process in hardware (e.g. jitter, ring oscillators) or on some entropy sources in software (e.g. system clock, mouse movement) [9], and not on any previous outputs.

The hardware methods have proved to behave better than the software ones, but at much higher cost, because they use expensive specialized hardware components that must be attached to ordinary computers. The software methods in use for TRNGs are not so expensive, because they can be run on any available computer. However, when they rely on unpredictable user actions (e.g. mouse movements, key presses), their throughput is quite low (the user is extremely slow compared with the speed at which a processor executes applications), and when they are based on the activity of some hardware component (e.g. network-card activity), they depend on external factors which can be controlled by attackers.

We developed a software strategy to obtain a TRNG based only on the off-the-shelf software and hardware components that a common computer provides. We wrote an application that creates several threads which read-modify-update the same shared variable, whose final value is the random number we generate. It is well known that letting concurrent threads compete unsynchronized for a shared resource creates race conditions that bring the resource into an unpredictable state. Because race conditions rarely occur in practice, we tried to increase the frequency of their occurrence in our application, in order to improve the randomness of the numbers we generate. Even though the scheduler is deterministic, we identified a couple of software and hardware factors that influence its decisions and make our application output different, random results during different runs.

We tested our application on an x86 architecture with a Pentium 4 processor, running the Linux operating system. The random number sequences we obtained pass over 90% of the NIST [10] tests.

Section 3 describes the software strategy we used. In Section 4 we identify and explain the software and hardware factors that contribute to the generation of true random numbers using our strategy. Section 5 contains details about the application we wrote to test our strategy. Section 6 presents the tests we made and the results we obtained, and finally, we present the conclusions of our work and the ongoing research directions.

∗ This work was partially supported by the CNMP-funded CryptoRand project, nr. 11-020/2007.
2. Related Work

The vast majority of software generators are PRNGs. They are based on the system clock, on mathematical algorithms for which the current output is a function of the previous output, on network statistics, etc. Our generator is instead based on the combined effect of cache misses, hardware clock imprecision and execution delays within the processor's pipeline.

Compared with other software strategies, which generate random numbers by monitoring various unpredictable user actions [8] (e.g. mouse movements, key presses) or events occurring in the system (e.g. interrupts from the network card), our application takes such factors into account implicitly, because they influence the scheduler decisions on which our strategy is based.

Another difference is that our strategy needs no seed. This is a solid argument that our application generates truly random sequences of numbers. For example, the PRNG implemented in Java [14], which is fast and passes around 90% of the NIST tests, requires a seed for initialization. Our application does not need such an input to function properly and passes over 90% of the NIST tests.

3. The Strategy

The idea of our work is to create several threads in the same application that access a shared variable concurrently and without synchronization. We promote race conditions during the threads' execution, which can lead to unpredictable results in different runs of the same application.

The threads modify the shared variable in three steps: (1) read its value into a local working variable, (2) modify the read value in some way (e.g. increment or decrement it) and (3) write the final value from the local working variable back into the shared variable.

Race conditions occur because the three steps executed concurrently by different threads can interleave in different ways, resulting in different values of the shared variable in different runs of the application.

In our case, race conditions can basically lead to the two different execution scenarios described in Table 1 and Table 2. The normal execution corresponds to the case in which the context switch occurs after the current thread has executed all three steps of the variable's manipulation. The abnormal execution corresponds to the situation in which the context switch occurs before the update step is executed, letting the next concurrent thread read an inconsistent value of the shared variable.

Thread 1                     Thread 2
v1 = shared_var;             ... (suspended)
v1 = (v1 + 1) % 2;           ... (suspended)
shared_var = v1;             ... (suspended)
... (suspended)              v2 = shared_var;
... (suspended)              v2 = (v2 + 1) % 2;
... (suspended)              shared_var = v2;

Table 1. The "normal" execution scenario: the steps of the two threads do not interleave.

Thread 1                     Thread 2
v1 = shared_var;             ... (suspended)
v1 = (v1 + 1) % 2;           ... (suspended)
... (suspended)              v2 = shared_var;
... (suspended)              v2 = (v2 + 1) % 2;
shared_var = v1;             ... (suspended)
... (suspended)              shared_var = v2;

Table 2. The "abnormal" execution scenario: the steps of the two threads interleave.
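The paper itself contains no source listing; as a rough illustration of the three-step access and the two scenarios above, the following minimal sketch (our reconstruction, not the authors' code, assuming POSIX threads on Linux) lets two unsynchronized threads race on a shared variable. Identifiers such as shared_var and worker are illustrative only.

/* Minimal sketch of the Section 3 strategy: two unsynchronized threads race
 * on a shared variable using the read / modify / write steps of Tables 1-2.
 * The deliberate absence of locking is the point of the technique.         */
#include <pthread.h>
#include <stdio.h>

static volatile int shared_var = 0;   /* the racy shared variable           */

static void *worker(void *arg)
{
    (void)arg;
    int v = shared_var;               /* step 1: read                       */
    v = (v + 1) % 2;                  /* step 2: modify (increment mod 2)   */
    shared_var = v;                   /* step 3: write back                 */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Depending on how the scheduler interleaved the two threads, the final
     * value is 0 (normal scenario) or 1 (abnormal scenario).               */
    printf("%d\n", shared_var);
    return 0;
}

Compiled with -pthread, one run of such a program yields one bit whose value depends on where the context switches fell.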
4. Factors of Randomness

Suppose the initial value of the shared variable used in the method described in Section 3 is 0 and the modification operation performed on it is an increment followed by a modulo 2. The final value of the shared variable in the two scenarios described above is then 0 and 1, respectively.

Which of the two values is obtained depends on the moment at which the context switch between the two threads occurs, i.e. on whether the first or the second scenario takes place. With only these two scenarios possible, there is a theoretical 50% chance of getting 0 or 1 as the final value of the variable.

It is known that the thread scheduler is deterministic, i.e. it uses a deterministic algorithm, generating the same sequence of scheduled threads in the same execution configuration. Without loss of generality we can consider that only the two threads of our application are running in the system. In that case, the scheduler will always generate the same scheduling sequence, Thread1, Thread2, Thread1, Thread2 and so on, or the one in which Thread2 is scheduled first. The two threads have the same priority and are identical from the point of view of CPU and other resource usage, so they will be given identical time quanta alternately. Thus, knowing the duration of the time quantum assigned to the two threads, someone could predict the final value of the shared variable. It can be 0 or 1, depending on the moments at which the context switches occur, but it would always be the same value, because the context switches would always occur at the same moments in time, applied to the same thread scheduling sequence. It is only a matter of knowing the time quantum duration and the thread which is scheduled first. Figure 1 illustrates the deterministic scheduling of the two threads.
Figure 1. Deterministic scheduling of Thread 1 and Thread 2. The time quanta assigned to both threads are of the same length, i.e. τ milliseconds. So, it can be calculated at each moment which thread will be executed and what that thread executes at that moment.

Although the behavior of the scheduler is deterministic, we observed during multiple executions of our application that the final value of the shared variable is different and unpredictable. So, the question that must be answered is: "What makes it possible to get different random values of the shared variable in different runs under the same conditions, even if the system is deterministic?"

In order to answer this question we followed two directions: finding the software and the hardware factors of the randomness. We also took into consideration two different perspectives on any identified factor: the theoretical and the practical possibility of that factor contributing to the generation of random values of the shared variable.

Regarding the software factors, one that contributes to the random behavior of our application is its execution environment. The scheduler being deterministic, it behaves identically each time under the same conditions, generating the same sequence of scheduled threads. From this point of view, generating an execution environment identical to a previous one means executing the same applications, starting them all at the same corresponding moments and with the system in the same state as in the previous environment. This is theoretically possible, but practically very difficult, if not impossible, to achieve.

We tried to test our application in the same execution environment: (1) we ran it in Linux single-user mode with no other user processes running, except the ones the operating system automatically starts in order to function properly, and (2) we started it when the system was in an identical stabilized state, observed using a monitoring tool. Even so, the values of the shared variable we obtained were random. So, it is very possible that the software execution environment is not the only factor responsible for the randomness observed. Nevertheless, in a practical situation, on a system in use, the execution environment is not identical at different runs of our application, and not even during one execution (because the other running applications evolve dynamically), so it can contribute to the randomness of our application's behavior.

We identified three hardware factors that contribute to the random behavior of our application:

1. Cache misses force the CPU to bring the needed instruction or data from another memory level and consequently to consume some extra CPU clock cycles on behalf of the currently running thread [13]. That means that during the same time interval, i.e. the allocated time quantum, a different number of the thread's instructions can be executed in different situations. That results in a thread being preempted at different places during its execution. For our application, this means that either of the two execution scenarios described in Section 3 can be met. Whether this happens randomly during the execution of our application, or during different runs of it, with the effect of generating random values of the shared variable, depends on the way the cache misses occur, and this is very dependent on the execution environment. In a specially prepared test-bed system, like the one we used to test our application, where the environment was very stable, the cache misses may not occur at all, in the case where all the threads of the testing application fit in the cache (which we think is the case for our application), or they occur according to a regular periodic pattern, in the case where there is not enough room in the cache for all the running threads. The latter happens because the scheduling sequence is fixed and also follows a regular periodic pattern, as we already mentioned. On a real, practical system, however, the environment is not stable, so the cache-miss sequence will not be regular and deterministic either; thus we can take it into account as a source of randomness for our application.
2. Another hardware factor that can make our application behave differently in similar situations is related to the way the instruction pipeline flow is affected by the occurrence of the timer interrupt, which leads to a context switch. In such a situation, the CPU instruction execution pipeline contains instructions of the preempted thread. Due to the context switch, the following instructions introduced into the pipeline belong to the newly scheduled thread, the remaining instructions of the preempted thread being executed when it later resumes its execution. In any case, after the context switch the pipeline contains instructions of both threads. We do not insist here on the way the CPU protects one thread from another in such a situation, because it is not important for us now (see [13] for details). Let us suppose that the first instruction that will be fetched by the CPU when the preempted thread resumes its execution depends in some way on another, earlier instruction, which was in the pipeline at the moment of the context switch. In an uninterrupted execution, in which the context switch had not occurred, this instruction would have had to wait some time, i.e. some number of CPU clock cycles, for the instruction it depends on to be retired (finished). In the case of the context switch, however, the instruction is not fetched by the CPU. Nevertheless, at the same time the earlier instruction it depends on, being already in the CPU pipeline, continues its execution, but in the context of the next thread executed. That is, it "consumes" (in fact, shares) time from the quantum allocated to the next thread. When the preempted thread resumes its execution and its next instruction is fetched into the CPU, the instruction it depends on is already retired, so the new instruction can be launched immediately, saving CPU clock cycles from its quantum compared with the uninterrupted execution. Similar to the case of cache misses, this results in a thread executing a different number of instructions during different time quanta of identical length and consequently meeting either of the two execution scenarios.

3. The timer interrupt, which the scheduler uses to preempt the currently running thread and switch the CPU to another thread, is not generated perfectly periodically. This happens because the hardware clock, which determines the timer interrupt, is not perfect. It is known [1] that a hardware clock has a small bounded drift rate, i.e. the difference between the clock and real time can change every second by at most some known constant ρ ≪ 1. Consequently, different time quanta allocated by the scheduler to the running threads are not of the same length, even if, mathematically speaking, they have the same value. In case the clock used to generate timer interrupts is not the same as, or not synchronized with, the one used to generate the clock cycles of the CPU, the variation in the real-time length of time quanta results in different numbers of CPU clock cycles and, consequently, different numbers of instructions being executed in different time quanta of the same (theoretical) length. Relative to our application, that means that its execution can randomly generate either of the two scenarios described before.

The second and the third hardware factors mentioned above do not depend on the execution environment. They are tightly coupled, though the second one, the hazard in the instruction pipeline, seems to be a consequence of the third one, the variation of the clock quanta. So, they always contribute to the randomness of our application.

The hardware clock's drift rate can vary in the interval [−ρ, ρ], where ρ is in the range 10^−6 to 10^−4 µs/s, i.e. every second the clock deviates by 10^−6 to 10^−4 µs from real time. Thus a time quantum of theoretically τ milliseconds lasts in reality something in the interval [τ − ρτ, τ + ρτ]. Figure 2 illustrates this. So, the variation in time of the time quantum is 2ρτ. Considering the default value of a time quantum in Linux of 100 ms, the variation interval I of a time quantum is

I = 2ρτ = 2 × 10^−4 µs/s × 100 ms = 2 × 10^−4 µs/s × 100 × 10^−3 s = 2 × 10^−5 µs = 2 × 10^−2 ns = 0.02 ns.

Figure 2. Variation in real time of an allocated time quantum. The end time t_{k+1} of a quantum with the value τ = t_{k+1} − t_k is actually situated in the interval [t_k + τ − ρτ, t_k + τ + ρτ].

For a processor running at 2.6 GHz, which generates a clock tick every 0.385 ns, a CPU clock cycle can thus be missed or gained roughly every 20 consecutive time quanta, i.e. every 2 seconds.

Figure 3. Non-deterministic scheduling of Thread 1 and Thread 2 on a real system. All the time quanta assigned to the two threads last theoretically τ milliseconds. In practice, however, they contain a different number ν_k of CPU clock cycles. So, it cannot be calculated at a given moment which thread will be the executed one.

All the factors mentioned above contribute, on a real system, to the random behavior of our application, due to the fact that different time quanta allocated to its threads, even if equal as mathematical values, contain different numbers of CPU clock cycles. That means that the theoretical scheduling described in Figure 1 does not hold in practice. The scheduling situation we suppose happens in reality is illustrated in Figure 3. The lengths of the time quanta illustrated in that figure differ in terms of CPU clock cycles, a characteristic specific to all the factors we described above.
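To make the order of magnitude concrete, the short program below (our illustration, not part of the paper) reproduces the arithmetic above using the paper's figures: a drift rate of 10^−4 µs/s, a 100 ms quantum and a 2.6 GHz clock.

/* Illustrative recomputation of the quantum-variation estimate of Section 4. */
#include <stdio.h>

int main(void)
{
    double rho_us_per_s = 1e-4;      /* clock drift rate, in us per second    */
    double tau_s        = 100e-3;    /* time quantum, in seconds              */
    double f_hz         = 2.6e9;     /* CPU clock frequency                   */

    double I_us = 2.0 * rho_us_per_s * tau_s;   /* I = 2*rho*tau, in us       */
    double I_ns = I_us * 1e3;                   /* -> 0.02 ns                 */

    double cycle_ns = 1e9 / f_hz;               /* ~0.385 ns per clock cycle  */
    double quanta   = cycle_ns / I_ns;          /* quanta per one-cycle drift */

    printf("quantum variation I = %.3f ns\n", I_ns);
    printf("one CPU cycle gained/lost roughly every %.0f quanta (%.1f s)\n",
           quanta, quanta * tau_s);
    return 0;
}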
5. The Application

The application we wrote to generate random numbers is based on the strategy described in Section 3. It creates several threads, which access a shared variable without synchronization. Race conditions manifest themselves because context switches occur in the code area where the shared variable is accessed (the critical code) and lead to an unpredictable final value of the shared variable out of the two possible ones, i.e. 0 and 1.

Theoretically, a context switch can occur anywhere during a thread's execution, but in practice we observed that it rarely occurs inside a time quantum allocated to that thread (because of an interrupt, for example). That is why we tried to place the critical code in areas where context switches are expected to occur, in order to increase the probability that race conditions manifest themselves.

A context switch happens for a CPU-bound [12] thread, as the threads of our application are, when its allocated time quantum expires. So, we tried to place the critical code at the end of each time quantum allocated to a thread. Of course, when we wrote our application we had no way to find out or control the positions in the source code where the context switches would occur; in fact, had we been able to do that, there would have been no source of randomness. The idea was just to let things happen randomly in one way (normal, i.e. no race condition) or another (abnormal, i.e. a race condition occurs). Nevertheless, the critical code had to be near the end of the time quantum. We achieved that by exploiting the fact that our critical code executes in a much shorter time than a time quantum. Executing only one such step (formed by the read, modify and update operations on the shared variable) would always have fitted and executed within a single time quantum, so it would never have met a context switch. Observing this, all we had to do was to "fill" (in fact, to overrun) the entire time quantum with such small steps. Because the time quantum is not a multiple of our step size, with high probability the last step will cross the limit of the time quantum and will be interrupted by a context switch. The idea is illustrated in Figure 4.

Figure 4. The way we place the critical code susceptible to race conditions at the end of a time quantum, i.e. the area of the context switch between concurrent threads. The critical code is considered to be one step and the time quantum is filled with such steps. The last one intersects the end of the time quantum, which was exactly our intention.

We must explain one more thing in order to make clear why random values are generated. The critical code, which we called a step, consists of several operations, so the occurrence of a context switch during such a step can happen according to either of the two scenarios described in Section 3. Depending on which scenario takes place, the final value of the shared variable will be one or the other. As we saw in Section 4, there is a small random variation in the number of CPU clock cycles of the time quanta allocated to the concurrent threads. Because this variation is smaller than the number of CPU clock cycles needed to completely execute the critical code, the context switch can happen randomly at different points of our step, leading to one scenario or the other and, consequently, to different random values.
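The structure described above can be sketched as follows (again our reconstruction, not the authors' listing, assuming POSIX threads): each thread repeats the small step so many times that it overruns its time quanta, and several threads compete for the CPU. The names NUM_THREADS and NUM_STEPS are illustrative parameters; the value 30000 is the step count mentioned later in Section 6.

/* Sketch of the application of Section 5: the step is repeated NUM_STEPS
 * times per thread so that, with high probability, some step is cut by the
 * context switch at the end of a time quantum.                             */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 8
#define NUM_STEPS   30000

static volatile int shared_var = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < NUM_STEPS; i++) {
        int v = shared_var;        /* read                       */
        v = (v + 1) % 2;           /* modify                     */
        shared_var = v;            /* write back (racy)          */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);
    printf("%d\n", shared_var);    /* one output bit per run     */
    return 0;
}

One run yields one output bit; as described in the next section, runs are repeated many times to build sequences long enough for the NIST suite.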
6. Testing and Results

We developed our tests by extending the method described in the previous sections. In order to make a thread use more than one time quantum, a couple of modifications to the basic structure of a thread were required. As mentioned in Section 5, the three operations on the shared variable, which we refer to as a step or a basic instruction block executed by a thread, require fewer CPU clock cycles than a time quantum provides. Thus, the first modification was to repeat this basic block several times (steps). In this way, for a sufficiently large number of steps, we can be sure that the context-switch point where the entropy is present will be encountered. We also created more than two threads in order to further exploit the competition for the CPU. Each test was repeated many times in order to obtain sequences of random numbers large enough to be evaluated by the NIST benchmark programs. We tested our application using the following two approaches:

1. The first approach was to keep the modification operation simple and increase the number of threads competing for the shared variable (up to 64), using a fixed number of steps (e.g. 30000). The cost of this approach is the time required to get the final value of the variable. We observed that the sequences of numbers obtained this way passed over 90% of the NIST tests with no post-processing. Furthermore, the quality of the generated numbers was proportional to the number of threads and to the time required to obtain the output. We could conclude that in order to obtain good random numbers by this method we need a long generation time.

2. The second approach was to increase the complexity of the modification operation, in order to increase the number of CPU clock cycles required to execute it.
This was done by replacing it with more arithmetic operations and by introducing other shared variables. We applied a modulo 2 operation to the final result of the arithmetic operations in order to keep it in the {0, 1} set. We used 8 threads. The program throughput was improved, since there was no longer a need for such a large number of threads. The sequences also passed several NIST tests, demonstrating that this was a relatively good way of getting random numbers. The tests were carried out with different numbers of steps (see Table 3). They demonstrated that, similar to the previous approach, increasing the number of steps improves the quality of the output. It is obvious that a good compromise must be made between the cost (the time required for generating a bit) and the quality of the sequence of random numbers. A significant improvement of the program output was obtained by the simple XOR post-processing discussed in the next subsection.

Steps    No post-processing    XOR post-processing
 6500         10%                   45%
11000         19.88%                67.2%
13000         24.19%                80.1%
14000         38.17%                91.93%
20000         42%                   97.84%

Table 3. The pass rate for the NIST tests, for different numbers of steps.
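The paper does not give the exact arithmetic used in this second approach, so the fragment below is only an assumed example of what such a heavier step might look like. It is meant as a drop-in replacement for worker() in the earlier sketch (reusing NUM_STEPS from there): the single increment is replaced by a longer chain of racy operations over additional shared variables, with a final modulo 2 to keep the output bit in {0, 1}.

/* Assumed example of a more complex modification step (Section 6, second
 * approach); the specific operations are illustrative, not the authors'.   */
static volatile unsigned shared_var2 = 0;
static volatile unsigned aux_a = 3, aux_b = 7;     /* extra shared variables */

static void *complex_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < NUM_STEPS; i++) {
        unsigned v = shared_var2;                  /* read                   */
        unsigned a = aux_a, b = aux_b;             /* read the extra state   */
        a = a * 31u + v;                           /* longer, racy           */
        b = b + a - v;                             /* arithmetic chain       */
        aux_a = a;                                 /* write back (racy)      */
        aux_b = b;
        shared_var2 = (v + a + b) % 2u;            /* keep the bit in {0,1}  */
    }
    return NULL;
}

Because each step now burns more CPU cycles, fewer threads (the paper uses 8) are needed to reach the context-switch region, which is where the throughput gain comes from.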
6.1. Post-Processing

There are several post-processing methods used in this field, such as the XOR operation or the von Neumann method [2]. We tried both of them in order to decide which is the most appropriate for us. The von Neumann method not only caused a loss of 75% of the output, but also did not improve the quality very much. On the other hand, the XOR operation corrected the output very well, as can be seen by comparing the last two columns of Table 3; the improvement is measured using the NIST tests. We found that this post-processing method can improve the quality by almost 60%. Although a XOR between two bits generates only one bit, so the output is reduced by 50%, the quality is improved a lot. We can conclude that this is a good compromise between cost and quality.
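For concreteness, the two post-processing schemes compared above can be sketched as follows; neither function is the authors' implementation, just the standard form of each method applied to a raw bit array.

/* Sketch of the post-processing methods of Section 6.1.                    */
#include <stddef.h>

/* XOR post-processing: combine consecutive bit pairs; the output length is
 * halved and the bias is reduced.                                          */
size_t xor_postprocess(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t m = 0;
    for (size_t i = 0; i + 1 < n; i += 2)
        out[m++] = in[i] ^ in[i + 1];
    return m;                          /* ~50% of the input length          */
}

/* Von Neumann extractor: emit 0 for a "01" pair, 1 for a "10" pair, discard
 * "00" and "11"; on average only ~25% of the bits survive, which matches
 * the ~75% loss reported above.                                            */
size_t von_neumann_postprocess(const unsigned char *in, size_t n,
                               unsigned char *out)
{
    size_t m = 0;
    for (size_t i = 0; i + 1 < n; i += 2) {
        if (in[i] != in[i + 1])
            out[m++] = in[i];          /* "10" -> 1, "01" -> 0              */
    }
    return m;
}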
7. Conclusions

We developed a new method of implementing a software TRNG. It is based on the fact that a context switch can occur at unpredictable moments in the case of race conditions between concurrent threads. The factors we identified as contributing to the randomness of our method are cache misses, hazards in the processor's instruction pipeline and the imperfection of hardware clocks. Their influence is increased by the fact that the execution environment varies in time and cannot be reproduced.

The results are quite good, provided some simple post-processing is applied to a couple of output files. Taking into account that it is a cheap TRNG compared with its hardware counterparts and does not depend on external factors (e.g. network traffic), we consider our method a very promising way of generating random numbers.

Future work directions involve improving the throughput without affecting the quality of the RNG. A deeper analysis of the sources of randomness could be the key to further improving the quality of the generated sequences of numbers.

References

[1] C. Fetzer and F. Cristian. Building fault-tolerant hardware clocks from COTS components. In Proceedings of the Conference on Dependable Computing for Critical Applications (DCCA), pages 67-86, 1999.
[2] M. Dichtl. Bad and good ways of post-processing biased physical random numbers. August 2007.
[3] W. Diffie and M. Hellman. New directions in cryptography. November 1976.
[4] M. Drutarovsky and P. Galajda. A robust chaos-based true random number generator embedded in reconfigurable switched-capacitor hardware. Radioelektronika, April 2007.
[5] J. E. Gentle. Random Number Generation and Monte Carlo Methods. Springer, 2003.
[6] I. Goldberg and D. Wagner. Randomness and the Netscape browser. January 1996.
[7] P. Kohlbrenner, M. Lockhead, and K. Gaj. An embedded true random number generator for FPGAs. FPGA'04, February 2004.
[8] M. Mitchell, J. Oldham, and A. Samuel. Advanced Linux Programming, 1st Edition. New Riders, 2001.
[9] D. Schellekens, B. Preneel, and I. Verbauwhede. FPGA vendor agnostic true random number generator. In Proceedings of Field Programmable Logic and Applications, FPL 06, August 2006.
[10] J. Soto. Statistical testing of random number generators. National Institute of Standards and Technology, October 1999.
[11] T. Stojanovski and L. Kocarev. Chaos-based random number generators - Part I: Analysis [cryptography]. March 2001.
[12] A. Tanenbaum. Modern Operating Systems, 3rd Edition. Prentice Hall, 2007.
[13] A. Tanenbaum. Structured Computer Organization, 3rd Edition. Prentice Hall, 2007.
[14] A. Walsh. Java Bible. John Wiley & Sons, 1998.
