Michael J. Fischer, Michaela Iorga, and René Peralta
Department of Computer Science, Yale University, New Haven, U.S.A.
National Institute of Standards and Technology, ITL, Computer Security Division, Gaithersburg, U.S.A
Keywords: Randomness server, Electronic commerce, Electronic voting.
We argue that it is time to design, implement, and deploy a trusted public randomness server on the Internet.
NIST plans to deploy a prototype during 2011. We discuss some of the engineering choices that have been
made as well as some of the issues currently under discussion.
The theoretical community has developed many clever cryptographic security protocols over the years for access, authentication, privacy, and authorization in networking and e-commerce applications. However, except for the simplest and most basic protocols, few have been widely deployed. A major reason concerns efficiency. Many of the more sophisticated security protocols, such as zero-knowledge proof systems, are highly interactive and require too many communication rounds to be feasible in most situations. Other privacy-preserving protocols eliminate the need for many rounds of communication but assume the availability of a trusted source of randomness, an assumption that is not generally valid at present. We argue that it is time to design, implement, and deploy a trusted public randomness server on the Internet. The “NIST Beacon” project aims at doing just that, starting with a prototype in 2011.
Trust is a complex concept, involving technical as well as social components. At the technical level, three fundamental properties will be provided: unpredictability, autonomy, and consistency. Unpredictability means that users cannot predict bits before they are made available by the source. Autonomy means that the source is resistant to attempts to alter the distribution of the random bits. Consistency means that a set of users can access the source in such a way that they are confident that they all receive the same random string. We describe some applications of trusted public randomness servers in Sections 2 and 3.
Note that the requirements for some applications of random numbers are quite different from these. For example, password generation requires confidentiality of the random string and the inability of an adversary to recover the string, even long after the fact. This stands in sharp contrast to our requirement of consistency for the NIST Beacon. (Rabin appears to have been the first to propose this type of service, calling it a “beacon” (Rabin, 1983).)
Experience teaches us that it is extremely hard to secure any single point in the Internet. However, it would be hard to simultaneously compromise the integrity of several independent servers. Therefore, we see our role as proposing a format that can be emulated by others so that the end result is several independent servers available to users. These servers can then be incorporated into a suitable multi-party protocol to provide a collective randomness server that achieves unpredictability, autonomy, and consistency, and that is also resistant to the failure or corruption of a limited number of servers. The basic idea is to take the XOR of the shares produced by an agreed-upon set of servers. However, other issues must be addressed as well, including how to control membership in the approved server pool and how to prevent a corrupted server from learning the other servers’ random shares before choosing its own share.
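The XOR combination described above can be sketched in a few lines. The following is a minimal illustration in Python with hypothetical hex-encoded server shares; the real multi-party protocol would also need commitments so that no server can choose its share after seeing the others’:

```python
from functools import reduce

def combine_shares(shares: list[bytes]) -> bytes:
    """XOR together equal-length random shares from independent servers.

    The result is uniformly random as long as at least one share is,
    provided each server commits to its share before seeing the others'.
    """
    if not shares:
        raise ValueError("need at least one share")
    if any(len(s) != len(shares[0]) for s in shares):
        raise ValueError("shares must have equal length")
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

# Three hypothetical 8-byte shares published by independent servers:
s1 = bytes.fromhex("1f2e3d4c5b6a7988")
s2 = bytes.fromhex("00ff00ff00ff00ff")
s3 = bytes.fromhex("a5a5a5a5a5a5a5a5")
out = combine_shares([s1, s2, s3])
```

Because XOR is commutative and associative, the combined output does not depend on the order in which the servers’ shares are collected.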
The most basic service a trusted public randomness service provides is to post a random bit string S of a fixed length L, time-stamped by its time of generation and digitally signed by the server. The difference between a random string obtained from the randomness service and one generated locally by flipping coins is that the former is widely trusted to have resulted from a prescribed random process, was not “cooked” or otherwise falsified, and was not known to anyone before the indicated time.

[Fischer, M. J., Iorga, M., and Peralta, R. DOI: 10.5220/0003612604340438. In Proceedings of the International Conference on Security and Cryptography (SECRYPT 2011), pages 434–438. ISBN: 978-989-8425-71-3. © 2011 SCITEPRESS (Science and Technology Publications, Lda.)]
2.1 A Simple Authentication Protocol
The following situation occurs frequently in modern cryptographic applications:

User P claims to have authorization to pass through a security checkpoint.

The sentinel V says to P, “Passage is restricted to those who are able to invert the function f. Here is a challenge number y. Please tell me what f^{-1}(y) is.”

P calculates x = f^{-1}(y) and sends it back to V. Then V checks that f(x) = y and, if so, allows P to pass.
The terms “sentinel”, “security checkpoint”, and “user” are abstractions. The user can be a program, a person holding a smart card, a client accessing a server, etc. The security checkpoint can be a physical place, an operating system security mechanism, an ATM, etc. The sentinel can be an operating system or one of its subsystems, an algorithm running on a network component, a human guard, etc.
This protocol uses the notion of a trap-door one-way function. Such a function f has the property that, given an arbitrary input x, f(x) can be easily computed. However, only a user who knows the secret trap-door information can invert f in reasonable time.
Desired Properties. User P is assumed not to be trusted. The purpose of this protocol is to prevent a corrupted P from getting past the sentinel without actually knowing the secret trap-door information. In many potential applications, the sentinel V also cannot be trusted. A corrupted V might try to obtain the secret trap-door information in order to enable her to bypass security mechanisms not only at her station, but at any other station using the same one-way function. Even with a trusted sentinel, if the sentinel knows the secret trap-door information, then the sentinel’s data must be protected from the outside world. Depending on the application, this may be hard, expensive, or even impossible to do. Thus, one would like to implement this protocol in such a way that the sentinel does not need to know the trap-door information. Furthermore, we would like the trap-door to remain secret after the execution of the protocol so that the mechanism can be safely reused.
The beautiful insight that perhaps it is not necessary for the sentinel to know the trap-door is attributed to von Neumann. Clearly, only P needs to know the trap-door secret in order to carry out this protocol, for only P inverts f. The sentinel V only needs the ability to generate suitable challenge numbers and to compute f in order to verify the challenge response. But before we can be assured of the safety of this protocol, we must ensure that V doesn’t inadvertently learn the trap-door secret during the protocol’s execution. Moreover, this should remain true even if V is corrupt and is maliciously trying to compromise P’s secret.
Use of Randomness. A question that comes to mind when considering the security of this protocol is, “How does the sentinel V choose the number y?” Clearly, if y were a fixed value, then anybody who witnessed an execution of the protocol could later pass through the checkpoint, as he would know f^{-1}(y). Therefore, the sentinel must change y in each iteration of the protocol. But a simple change of y (such as adding 1 to the previously-used challenge) might allow special properties of f to be used to compute f^{-1}(y) from previously known values of f^{-1}. Ideally, from the point of view of preventing a corrupt P from passing the checkpoint, the number y should be chosen uniformly at random from a large set of “hard to invert” numbers. Thus, we must provide the sentinel with a good random (or pseudo-random) number generator. Fortunately, this is not hard to do. But neither is it trivial. For example, linear congruential generators turn out to be predictable by polynomial-time algorithms (Boyar, 1989).
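To illustrate why linear congruential generators are unsuitable, here is a sketch of the simplest attack: when the modulus is known, an observer can recover the multiplier and increment from just three consecutive outputs. The constants below are glibc-style parameters chosen purely for illustration; Boyar’s result handles the harder setting where the parameters are unknown.

```python
def recover_lcg(x0: int, x1: int, x2: int, m: int) -> tuple[int, int]:
    """Recover (a, c) of x_{n+1} = (a*x_n + c) mod m from three outputs.

    Assumes (x1 - x0) is invertible modulo m.
    """
    # From x2 - x1 = a*(x1 - x0) mod m:
    a = (x2 - x1) * pow(x1 - x0, -1, m) % m
    c = (x1 - a * x0) % m
    return a, c

# Demo: generate a few outputs with known parameters, then "attack" them.
a, c, m = 1103515245, 12345, 2**31   # illustrative glibc-style constants
xs = [42]
for _ in range(3):
    xs.append((a * xs[-1] + c) % m)
ra, rc = recover_lcg(xs[0], xs[1], xs[2], m)
```

Having recovered (a, c), the observer can predict every future output, which is exactly what a corrupt P needs to anticipate the sentinel’s challenges.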
Another, less obvious, question that arises is, “How does the user P know that the sentinel chooses y at random, and why does he care?” In general, the user has no way of knowing, but it is important that the protocol be designed in such a way that the secret trap-door information remains secret regardless of how a corrupt V chooses y.
An Insecure Implementation. Consider the following implementation of a trap-door function:

N is a product of two large primes p and q, each congruent to 3 modulo 4. Such a number is called a “Blum integer”. The trap-door is the prime p. It is known by authorized users but not by sentinels. N is publicly known.

f(x) = x^2 mod N.

To pass the checkpoint, users must be able to demonstrate the ability to compute square roots modulo N. Since not all numbers have modular square roots, we must first solve the problem of how a sentinel should produce a random challenge. For Blum integers it is easy to find a number α modulo N such that for all y in Z_N^*, exactly one of the four numbers in the set C_y = {±y mod N, ±αy mod N} has a modular square root. The challenge issued by the sentinel to the party who wants to prove its knowledge of the factors of N is simply a number y chosen uniformly at random from Z_N^*. The response should be a modular square root x of the element in C_y that is guaranteed to have a modular square root. To verify that x is a valid response to the challenge, the sentinel only needs to compute x^2 mod N and test for membership in the set C_y.

(Footnote to the attribution above: von Neumann, of course, did not pose the problem in these terms, as the notion of a trap-door one-way function is quite recent.)
It can be formally shown that users reveal no information about the trap-door by responding to a challenge generated according to this protocol. On the other hand, factorization of a composite N is (probabilistic) polynomial-time reducible to the problem of computing square roots modulo N. Thus, the usual assumption of the intractability of factoring implies that a user who does not know the factorization of N will not be able to compute the necessary square root.
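As an illustration of the mechanics (not a secure implementation; the primes are toy-sized), the challenge-response scheme just described can be sketched as follows:

```python
def legendre(a: int, p: int) -> int:
    """Euler's criterion: 1 if a is a quadratic residue mod prime p, p-1 if not."""
    return pow(a % p, (p - 1) // 2, p)

def sqrt_mod_blum(z: int, p: int, q: int) -> int:
    """Square root of a residue z modulo N = p*q, for p, q ≡ 3 (mod 4), via CRT."""
    rp = pow(z % p, (p + 1) // 4, p)   # root modulo p
    rq = pow(z % q, (q + 1) // 4, q)   # root modulo q
    n = p * q
    return (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % n

def respond(y: int, alpha: int, p: int, q: int) -> int:
    """Prover: find the unique element of C_y with a square root, return its root."""
    n = p * q
    for z in (y % n, (-y) % n, (alpha * y) % n, (-alpha * y) % n):
        if legendre(z, p) == 1 and legendre(z, q) == 1:  # residue mod both primes
            return sqrt_mod_blum(z, p, q)
    raise ValueError("no candidate is a quadratic residue")

def verify(x: int, y: int, alpha: int, n: int) -> bool:
    """Sentinel: accept iff x^2 mod N lies in C_y (no trap-door needed)."""
    return pow(x, 2, n) in {y % n, (-y) % n, (alpha * y) % n, (-alpha * y) % n}

p, q = 7, 11          # toy Blum primes (both ≡ 3 mod 4); real ones have hundreds of digits
n, alpha = p * q, 2   # 2 has Jacobi symbol -1 modulo 77, as required
x = respond(3, alpha, p, q)
```

Note that `verify` uses only public information (N and α), matching the point that the sentinel needs no trap-door knowledge.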
We have just described an authentication mechanism that works provided all parties follow the protocol. The problem, in many applications, is that neither party can assume the other is acting honestly. For example, the sentinel may be after the trap-door secret and may not follow the protocol in generating y. Such a sentinel can discover the factorization of N with high probability, rendering this protocol absolutely insecure. Here’s how. The dishonest sentinel generates y by choosing u at random from Z_N^* and then choosing y at random from the set C_{u^2}. It is easily shown that the numbers y chosen in this way are uniformly distributed over Z_N^*; hence, it is undetectable by P that V is not following the protocol. The value x that P returns satisfies x^2 = u^2 mod N. With probability 0.5, x ≠ ±u mod N, in which case gcd(x + u, N) is a proper factor of N. The gcd is an easy computation; therefore the trap-door will not remain secret for long from this cheating sentinel.
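The attack can be demonstrated concretely. The sketch below uses a simplified variant in which the cheating sentinel issues the challenge y = u^2 directly for a known u; the honest prover answers with the root its algorithm produces, and with probability 1/2 that root is neither u nor −u, at which point a single gcd reveals a factor (toy primes again, for illustration only):

```python
import math
import random

def sqrt_mod_blum(z: int, p: int, q: int) -> int:
    """Honest prover's square-root computation modulo N = p*q (p, q ≡ 3 mod 4)."""
    rp = pow(z % p, (p + 1) // 4, p)
    rq = pow(z % q, (q + 1) // 4, q)
    n = p * q
    return (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % n

def cheating_sentinel(p: int, q: int, seed: int = 0) -> int:
    """Recover a factor of N = p*q by issuing challenges u^2 for known u."""
    n = p * q
    rng = random.Random(seed)  # fixed seed so the demo is reproducible
    while True:
        u = rng.randrange(2, n)
        if math.gcd(u, n) != 1:
            continue  # u already shares a factor; skip to keep the demo faithful
        x = sqrt_mod_blum(u * u % n, p, q)  # prover's response to challenge y = u^2
        if x != u and x != n - u:           # holds with probability 1/2 over random u
            # x^2 ≡ u^2 (mod n) but x ≢ ±u, so gcd(x + u, n) is a proper factor.
            return math.gcd(x + u, n)

factor = cheating_sentinel(7, 11)
```

The loop terminates quickly in expectation (each coprime u succeeds with probability exactly 1/2), which is why the paper calls the unmodified protocol absolutely insecure against a dishonest sentinel.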
To summarize, the failure of this protocol came about because P could not detect that V picked the number y in a special way that gave her additional information about y. However, the protocol does work correctly if both parties can trust that the challenge y is an exogenously generated random number. This is precisely what a public trusted randomness server provides. If we simply change the protocol so that, in the second step, P and V obtain y from the trusted randomness server, then the protocol indeed becomes secure.

(Z_N^* is the group of invertible elements modulo N. The reader acquainted with number theory will recognize that α can be any number with Jacobi symbol −1 modulo N; such a number is easily found in probabilistic polynomial time without having to factor N.)
The above example, aside from whatever merits it may have as an authentication scheme, was constructed to introduce the main practical issue addressed by this paper:

Access to a common trusted source of randomness can make simple protocols secure that otherwise would not be.
A trusted randomness beacon has many different uses.
Here are some examples.
3.1 Cryptographic Primitives
Providing network security and reliability requires the use of cryptographic primitives. Examples of such primitives are encryption, decryption, and digital signatures. Over the last three decades, cryptographers have identified a number of other primitives as being powerful tools for developing secure network applications. Some early examples of these are bit-commitment (see (Boyar et al., 1990; Boyar et al., 1993; Brassard et al., 1988; Brassard and Crépeau, 1987; Goldwasser and Micali, 1984)), oblivious transfer (see (Halpern and Rabin, 1983; Fischer et al., 1985; Berger et al., 1985; Fischer et al., 1996)), digital coin-flipping (Blum, 1982), cryptographically secure pseudo-random number generators (Blum and Micali, 1984), and zero-knowledge (ZK) proofs (Brassard and Crépeau, 1987; Goldreich et al., 1991). The latter primitive implies the ability to prove to a third party that a Boolean function f(x) is satisfiable without revealing a satisfying assignment. Furthermore, some instantiations of this primitive allow proving knowledge of a satisfying assignment x of f without revealing x.
ZK proofs are interactive: the prover engages the other party (the “verifier”) in a conversation. After the conversation is over, the verifier is convinced that f(x) is satisfiable but has not obtained any information besides this fact. Unfortunately, ZK proofs are usually impractical: they require too much interaction and involve too much communication and computation.

(In practice, we would probably use a one-way version of the authentication protocol of Section 2 in which P presented both y and x = f^{-1}(y) to V, and V checked that f(x) = y and that y was a recent and previously-unused value from the beacon. Time-stamps, signatures, and possibly other features can be used to guard against replay attacks.)
There are a number of variants of ZK proofs in which the interaction is minimized, both in total number of bits communicated and in number of rounds. Among these, the most practical protocols assume, in one way or another, access to a common random string.
3.2 Voting
Voting technology is currently in a state of flux. There are various ways in which new technologies are being used. Ensuring security and promoting trust in these new applications is a difficult challenge. A common source of randomness will be useful in at least two ways: i) in random auditing of machines and ballots (see, for example, (Norden et al., 2007)); ii) in facilitating so-called end-to-end voting systems (see, for example, (Adida et al., 2009)).
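For the auditing use case, a beacon value can serve as a public seed so that anyone can check that the sample of machines or precincts was drawn fairly. The sketch below is illustrative only: the derivation procedure and precinct names are hypothetical, and a real audit would fix a vetted procedure in advance.

```python
import hashlib
import random

def select_audit_sample(precincts: list[str], beacon_value_hex: str, k: int) -> list[str]:
    """Derive a deterministic, publicly verifiable audit sample from a beacon value.

    Anyone can re-run the selection with the published beacon value and
    confirm exactly which precincts were chosen.
    """
    seed = int.from_bytes(hashlib.sha256(bytes.fromhex(beacon_value_hex)).digest(), "big")
    rng = random.Random(seed)          # seeded entirely by the public beacon output
    return sorted(rng.sample(list(precincts), k))

precincts = [f"precinct-{i:03d}" for i in range(100)]
sample = select_audit_sample(precincts, "0f" * 32, 5)  # "0f"*32 stands in for a beacon value
```

Because the beacon value is unpredictable before publication, election officials cannot bias the sample; because it is consistent, every observer reproduces the same sample.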
An online source of randomness is not a new idea. Implementations date to the 1980s: George Davida, at the University of Wisconsin, deployed a system that provided on-demand random strings using white noise from radio waves as the source of entropy. A currently functioning source of randomness can be found at http://www.random.org/. There are many adequate technologies for entropy extraction. There are also published guidelines for randomness generation by standards organizations (see, for example, http://csrc.nist.gov/groups/ST/toolkit/random number.html).
This position paper simply argues that it is time to design, standardize, and deploy a service tailored to electronic commerce applications. There are a number of design and implementation issues that need to be addressed. Some of them are the following:

- source of entropy;
- rate: how many bits per second;
- user interface;
- full-entropy strings or cryptographically secure pseudo-random strings;
- authentication method;
- time-stamping method;
- archival properties (e.g., can old strings be authenticated?);
- trust model: what, exactly, can the consumer assume?;
- securing the source from cyber attacks;
- using multiple sources to provide tolerance against failed or corrupted sources.
At this moment we are thinking of broadcasting full-entropy bit-strings. We plan to post them in blocks of 256 bits per second. We intend to sign and time-stamp the bit-strings. We also plan to link the sequence of blocks with a secure hash so that it will not be possible, even for the source itself, to retroactively change a block without detection. As for the source of entropy, we are talking to NIST physicists. We see no reason not to use the most sophisticated entropy source we can afford.
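The hash-linking we have in mind can be sketched as follows. This is an illustrative record format, not the planned NIST format, and the per-record digital signature is omitted for brevity:

```python
import hashlib
import json

def make_record(value_hex: str, prev_hash_hex: str, timestamp: int) -> dict:
    """Build one beacon record whose hash covers the value, the timestamp,
    and the previous record's hash, so retroactive edits break the chain."""
    body = {"timestamp": timestamp, "value": value_hex, "prev": prev_hash_hex}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify_chain(records: list[dict]) -> bool:
    """Check that each record links to its predecessor and hashes correctly."""
    prev = "00" * 32  # agreed-upon genesis value
    for rec in records:
        body = {"timestamp": rec["timestamp"], "value": rec["value"], "prev": rec["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

r1 = make_record("aa" * 32, "00" * 32, 0)
r2 = make_record("bb" * 32, r1["hash"], 1)
```

Changing any past value invalidates that record’s hash and, through the `prev` links, every later record, so even the source itself cannot rewrite history undetected.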
References

Adida, B., Pereira, O., Marneffe, O. D., and Quisquater, J. (2009). Electing a university president using open-audit voting: Analysis of real-world use of Helios. In Electronic Voting Technology Workshop/Workshop on Trustworthy Elections (EVT/WOTE).
Berger, R., Peralta, R., and Tedrick, T. (1985). A prov-
ably secure oblivious transfer protocol. In Advances
in Cryptology - Proceedings of EUROCRYPT 84, vol-
ume 209 of Lecture Notes in Computer Science, pages
379–386. Springer-Verlag.
Blum, M. (1982). Coin flipping by telephone. In IEEE
COMPCON, pages 133–137.
Blum, M. and Micali, S. (1984). How to generate crypto-
graphically strong sequences of pseudo-random bits.
SIAM Journal on Computing, 13:850–864.
Boyar, J. (1989). Inferring sequences produced by pseudo-
random number generators. J. ACM, 36(1):129–141.
Boyar, J., Krentel, M., and Kurtz, S. (1990). A discrete
logarithm implementation of zero-knowledge blobs.
Journal of Cryptology, 2(2):63–76.
Boyar, J., Lund, C., and Peralta, R. (1993). On the commu-
nication complexity of zero-knowledge proofs. Jour-
nal of Cryptology, 6(2):65–85.
Brassard, G., Chaum, D., and Crépeau, C. (1988). Minimum disclosure proofs of knowledge. Journal of Computer and System Sciences, 37:156–189.
Brassard, G. and Crépeau, C. (1987). Zero-knowledge simulation of boolean circuits. In Advances in Cryptology - Proceedings of CRYPTO 86, volume 263 of Lecture Notes in Computer Science, pages 223–233. Springer-Verlag.
Fischer, M. J., Micali, S., and Rackoff, C. (1996). A secure
protocol for the oblivious transfer (extended abstract).
J. Cryptology, 9(3):191–195. This work was origi-
nally presented at EuroCrypt 84.
Fischer, M. J., Micali, S., Rackoff, C., and Wittenberg,
K. D. (1985). An oblivious transfer protocol equiv-
alent to factoring. Presented at the NSF Workshop on
the Mathematical Theory of Security, MIT Endicott
House, Dedham, Massachusetts, 1985.
Goldreich, O., Micali, S., and Wigderson, A. (1991). Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proof systems. JACM.
Goldwasser, S. and Micali, S. (1984). Probabilistic encryption. Journal of Computer and System Sciences.
Halpern, J. and Rabin, M. (1983). A logic to reason about likelihood. In Proceedings of the 15th Annual ACM Symposium on the Theory of Computing, pages 310–.
Norden, L., Burstein, A., Hall, J., and Chen, M. (2007).
Post-election audits: restoring trust in elections. Tech-
nical report, Brennan Center for Justice at New York
University School of Law.
Rabin, M. (1983). Transaction protection by beacons. J.
Comput. Syst. Sci., 27(2):256–267.