Foundations of Cryptography
II Basic Applications
Oded Goldreich
Weizmann Institute of Science
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521119917
A catalogue record for this publication is available from the British Library
To Dana
Preface
Cryptography is concerned with the construction of schemes that withstand any abuse.
Such schemes are constructed so as to maintain a desired functionality, even under
malicious attempts aimed at making them deviate from their prescribed functionality.
The design of cryptographic schemes is a very difficult task. One cannot rely on
intuitions regarding the typical state of the environment in which the system operates.
For sure, the adversary attacking the system will try to manipulate the environment into
untypical states. Nor can one be content with countermeasures designed to withstand
specific attacks because the adversary (which acts after the design of the system is
completed) will try to attack the schemes in ways that are typically different from the
ones envisioned by the designer. The validity of the foregoing assertions seems self-
evident; still, some people hope that in practice, ignoring these tautologies will not result
in actual damage. Experience shows that these hopes rarely come true; cryptographic
schemes based on make-believe are broken, typically sooner than later.
In view of these assertions, we believe that it makes little sense to make assumptions
regarding the specific strategy that the adversary may use. The only assumptions that
can be justified refer to the computational abilities of the adversary. Furthermore,
it is our opinion that the design of cryptographic systems has to be based on firm
foundations, whereas ad hoc approaches and heuristics are a very dangerous way to
go. A heuristic may make sense when the designer has a very good idea about the
environment in which a scheme is to operate, yet a cryptographic scheme has to operate
in a maliciously selected environment that typically transcends the designer’s view.
This work is aimed at presenting firm foundations for cryptography. The foundations
of cryptography are the paradigms, approaches, and techniques used to conceptualize,
define, and provide solutions to natural “security concerns.” We will present some of
these paradigms, approaches, and techniques, as well as some of the fundamental results
obtained using them. Our emphasis is on the clarification of fundamental concepts and
on demonstrating the feasibility of solving several central cryptographic problems.
Solving a cryptographic problem (or addressing a security concern) is a two-stage
process consisting of a definitional stage and a constructive stage. First, in the defini-
tional stage, the functionality underlying the natural concern is to be identified, and an
adequate cryptographic problem has to be defined. Trying to list all undesired situations
is infeasible and prone to error. Instead, one should define the functionality in terms of
operation in an imaginary ideal model, and require a candidate solution to emulate this
operation in the real, clearly defined model (which specifies the adversary’s abilities).
Once the definitional stage is completed, one proceeds to construct a system that satis-
fies the definition. Such a construction may use some simpler tools, and its security is
proven relying on the features of these tools. In practice, of course, such a scheme may
also need to satisfy some specific efficiency requirements.
This work focuses on several archetypical cryptographic problems (e.g., encryption
and signature schemes) and on several central tools (e.g., computational difficulty,
pseudorandomness, and zero-knowledge proofs). For each of these problems (resp.,
tools), we start by presenting the natural concern underlying it (resp., its intuitive
objective), then define the problem (resp., tool), and finally demonstrate that the problem
may be solved (resp., the tool can be constructed). In the last step, our focus is on demon-
strating the feasibility of solving the problem, not on providing a practical solution. As
a secondary concern, we typically discuss the level of practicality (or impracticality)
of the given (or known) solution.
Computational Difficulty
The specific constructs mentioned earlier (as well as most constructs in this area) can
exist only if some sort of computational hardness exists. Specifically, all these problems
and tools require (either explicitly or implicitly) the ability to generate instances of hard
problems. Such ability is captured in the definition of one-way functions (see further
discussion in Section 2.1). Thus, one-way functions are the very minimum needed for
doing most sorts of cryptography. As we shall see, one-way functions actually suffice for
doing much of cryptography (and the rest can be done by augmentations and extensions
of the assumption that one-way functions exist).
Our current state of understanding of efficient computation does not allow us to prove
that one-way functions exist. In particular, the existence of one-way functions implies
that NP is not contained in BPP ⊇ P (not even “on the average”), which would
resolve the most famous open problem of computer science. Thus, we have no choice
(at this stage of history) but to assume that one-way functions exist. As justification for
this assumption, we may only offer the combined beliefs of hundreds (or thousands) of
researchers. Furthermore, these beliefs concern a simply stated assumption, and their
validity follows from several widely believed conjectures that are central to various
fields (e.g., the conjecture that factoring integers is hard is central to computational
number theory).
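For concreteness, the following Python sketch illustrates what "generating instances of hard problems" means for the factoring conjecture just mentioned: producing an instance is easy, whereas recovering the hidden factors is believed to be infeasible. (The function name and the use of the sympy library are ours, for illustration only.)

    from sympy import randprime  # library routine for sampling a random prime

    def generate_factoring_instance(n: int) -> int:
        """Easy direction: sample two random n-bit primes and multiply them.
        The product is the hard instance; recovering (p, q) from it is
        believed to be infeasible for large n (the factoring conjecture)."""
        p = randprime(2 ** (n - 1), 2 ** n)
        q = randprime(2 ** (n - 1), 2 ** n)
        return p * q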
Since we need assumptions anyhow, why not just assume what we want (i.e., the
existence of a solution to some natural cryptographic problem)? Well, first we need
to know what we want: As stated earlier, we must first clarify what exactly we want;
that is, we must go through the typically complex definitional stage. But once this stage
is completed, can we just assume that the definition derived can be met? Not really.
Once a definition is derived, how can we know that it can be met at all? The way to
demonstrate that a definition is viable (and so the intuitive security concern can be
satisfied at all) is to construct a solution based on a better-understood assumption (i.e.,
one that is more common and widely believed). For example, looking at the definition
of zero-knowledge proofs, it is not a priori clear that such proofs exist at all (in a
non-trivial sense). The non-triviality of the notion was first demonstrated by presenting
a zero-knowledge proof system for statements regarding Quadratic Residuosity that
are believed to be hard to verify (without extra information). Furthermore, contrary to
prior beliefs, it was later shown that the existence of one-way functions implies that
any NP-statement can be proven in zero-knowledge. Thus, facts that were not at all
known to hold (and were even believed to be false) were shown to hold by reduction to
widely believed assumptions (without which most of modern cryptography collapses
anyhow). To summarize, not all assumptions are equal, and so reducing a complex,
new, and doubtful assumption to a widely believed simple (or even merely simpler)
assumption is of great value. Furthermore, reducing the solution of a new task to the
assumed security of a well-known primitive typically means providing a construction
that, using the known primitive, solves the new task. This means that we not only know
(or assume) that the new task is solvable but also have a solution based on a primitive
that, being well known, typically has several candidate implementations.
Organization of the Work. This work is organized in two parts (see Figure 0.1): Basic
Tools and Basic Applications. The first volume (i.e., [108]) contains an introductory
chapter as well as the first part (Basic Tools), which consists of chapters on computa-
tional difficulty (one-way functions), pseudorandomness, and zero-knowledge proofs.
These basic tools are used for the Basic Applications of the second part (i.e., the current volume).
Organization of the Current Volume. The current (second) volume consists of three
chapters that treat encryption schemes, digital signatures and message authentication,
and general cryptographic protocols, respectively. Also included is an appendix that pro-
vides corrections and additions to Volume 1. Figure 0.2 depicts the high-level structure
of the current volume. Inasmuch as this volume is a continuation of the first (i.e., [108]),
one numbering system is used for both volumes (and so the first chapter of the cur-
rent volume is referred to as Chapter 5). This allows a simple referencing of sections,
definitions, and theorems that appear in the first volume (e.g., Section 1.3 presents
the computational model used throughout the entire work). The only exception to this
rule is the use of different bibliographies (and consequently a different numbering of
bibliographic entries) in the two volumes.
Historical notes, suggestions for further reading, some open problems, and some
exercises are provided at the end of each chapter. The exercises are mostly designed to
help and test the basic understanding of the main text, not to test or inspire creativity.
The open problems are fairly well known; still, we recommend a check on their current
status (e.g., in our updated notices web site).
Web Site for Notices Regarding This Work. We intend to maintain a web site listing
corrections of various types. The location of the site is
http://www.wisdom.weizmann.ac.il/∼oded/foc-book.html
More advanced material is typically presented at a faster pace and with fewer details.
Thus, we hope that the attempt to satisfy a wide range of readers will not harm any of
them.
Teaching. The material presented in this work, on the one hand, is way beyond what
one may want to cover in a course and, on the other hand, falls very short of what one
may want to know about Cryptography in general. To assist these conflicting needs, we
make a distinction between basic and advanced material and provide suggestions for
further reading (in the last section of each chapter). In particular, sections, subsections,
and subsubsections marked by an asterisk (*) are intended for advanced reading.
This work is intended to provide all material required for a course on Foundations
of Cryptography. For a one-semester course, the teacher will definitely need to skip all
advanced material (marked by an asterisk) and perhaps even some basic material; see
the suggestions in Figure 0.3. Depending on the class, this should allow coverage of the
basic material at a reasonable level (i.e., all material marked as “main” and some of the
“optional”). This work can also serve as a textbook for a two-semester course. In such
a course, one should be able to cover the entire basic material suggested in Figure 0.3,
and even some of the advanced material.
Practice. The aim of this work is to provide sound theoretical foundations for cryp-
tography. As argued earlier, such foundations are necessary for any sound practice of
cryptography. Indeed, practice requires more than theoretical foundations, whereas the
current work makes no attempt to provide anything beyond the latter. However, given a
sound foundation, one can learn and evaluate various practical suggestions that appear
elsewhere (e.g., in [149]). On the other hand, lack of sound foundations results in an
inability to critically evaluate practical suggestions, which in turn leads to unsound
decisions. Nothing could be more harmful to the design of schemes that need to with-
stand adversarial attacks than misconceptions about such attacks.
Writing the first volume was fun. In comparison to the current volume, the definitions,
constructions, and proofs in the first volume were relatively simple and easy to write.
Furthermore, in most cases, the presentation could safely follow existing texts. Conse-
quently, the writing effort was confined to reorganizing the material, revising existing
texts, and augmenting them with additional explanations and motivations.
Things were quite different with respect to the current volume. Even the simplest
notions defined in the current volume are more complex than most notions treated in the
first volume (e.g., contrast secure encryption with one-way functions or secure protocols
with zero-knowledge proofs). Consequently, the definitions are more complex, and
many of the constructions and proofs are more complex. Furthermore, in most cases,
the presentation could not follow existing texts. Indeed, most effort had to be (and was)
devoted to the actual design of constructions and proofs, which were only inspired by
existing texts.
The mere fact that writing this volume required so much effort may imply that this
volume will be very valuable: Even experts may be happy to be spared the hardship of
trying to understand this material based on the original research manuscripts.
Acknowledgments
. . . very little do we have and inclose which we can call our own in the
deep sense of the word. We all have to accept and learn, either from our
predecessors or from our contemporaries. Even the greatest genius would
not have achieved much if he had wished to extract everything from inside
himself. But there are many good people, who do not understand this,
and spend half their lives wondering in darkness with their dreams of
originality. I have known artists who were proud of not having followed
any teacher and of owing everything only to their own genius. Such fools!
Goethe, Conversations with Eckermann, 17.2.1832
First of all, I would like to thank three remarkable people who had a tremendous
influence on my professional development: Shimon Even introduced me to theoretical
computer science and closely guided my first steps. Silvio Micali and Shafi Goldwasser
led my way in the evolving foundations of cryptography and shared with me their
constant efforts for further developing these foundations.
I have collaborated with many researchers, yet I feel that my collaboration with
Benny Chor and Avi Wigderson had the most important impact on my professional
development and career. I would like to thank them both for their indispensable contri-
bution to our joint research and for the excitement and pleasure I had when collaborating
with them.
Leonid Levin deserves special thanks as well. I had many interesting discussions
with Leonid over the years, and sometimes it took me too long to realize how helpful
these discussions were.
Special thanks also to four of my former students, from whom I have learned a lot
(especially regarding the contents of this volume): to Boaz Barak for discovering the
unexpected power of non-black-box simulations, to Ran Canetti for developing defini-
tions and composition theorems for secure multi-party protocols, to Hugo Krawczyk
for educating me about message authentication codes, and to Yehuda Lindell for signif-
icant simplification of the construction of a posteriori CCA (which enables a feasible
presentation).
Next, I’d like to thank a few colleagues and friends with whom I had significant
interaction regarding Cryptography and related topics. These include Noga Alon,
Hagit Attiya, Mihir Bellare, Ivan Damgård, Uri Feige, Shai Halevi, Johan Håstad,
Amir Herzberg, Russell Impagliazzo, Jonathan Katz, Joe Kilian, Eyal Kushilevitz,
Yoad Lustig, Mike Luby, Daniele Micciancio, Moni Naor, Noam Nisan, Andrew
Odlyzko, Yair Oren, Rafail Ostrovsky, Erez Petrank, Birgit Pfitzmann, Omer Reingold,
Ron Rivest, Alon Rosen, Amit Sahai, Claus Schnorr, Adi Shamir, Victor Shoup,
Madhu Sudan, Luca Trevisan, Salil Vadhan, Ronen Vainish, Yacob Yacobi, and David
Zuckerman.
Even assuming I did not forget people with whom I had significant interaction on
topics touching upon this book, the list of people I’m indebted to is far more extensive.
It certainly includes the authors of many papers mentioned in the reference list. It also
includes the authors of many Cryptography-related papers that I forgot to mention, and
the authors of many papers regarding the Theory of Computation at large (a theory
taken for granted in the current book).
Finally, I would like to thank Boaz Barak, Alex Healy, Vlad Kolesnikov, Yehuda
Lindell, and Minh-Huyen Nguyen for reading parts of this manuscript and pointing out
various difficulties and errors.
CHAPTER FIVE
Encryption Schemes
Organization. Our main treatment (i.e., Sections 5.1–5.3) refers to security under
“passive” (eavesdropping) attacks. In contrast, in Section 5.4, we discuss notions of se-
curity under active attacks, culminating in robustness against chosen ciphertext attacks.
Additional issues are discussed in Section 5.5.
Teaching Tip. We suggest focusing on the basic definitional treatment (i.e., Sections 5.1 and 5.2.1–5.2.4) and on the feasibility of satisfying these definitions (as demonstrated by the simplest constructions provided in Sections 5.3.3 and 5.3.4.1). The overview of security under active attacks (i.e., Section 5.4.1) is also recommended.
We assume that the reader is familiar with the material in previous chapters (and
specifically with Sections 2.2, 2.4, 2.5, 3.2–3.4, and 3.6). This familiarity is important
not only because we use some of the notions and results presented in these sections but
also because we use similar proof techniques (and do so while assuming that this is not
the reader’s first encounter with these techniques).
1 In fact, in many cases, the legitimate interest may be served best by publicizing the scheme itself, because this
allows an (independent) expert evaluation of the security of the scheme to be obtained.
5.1 The Basic Setting
[Figure: a private-key encryption scheme. The plaintext X is encrypted by E under the key K into a ciphertext, which is decrypted by D under the same key K back to X.]
decryption-key, yields the original plaintext. We stress that knowledge of the decryption-
key is essential for the latter transformation.
[Figure: a public-key encryption scheme. Within the sender's protected region, the plaintext X is encrypted by E under the encryption-key e; within the receiver's protected region, the ciphertext is decrypted by D under the decryption-key d. The adversary observes the ciphertext and the encryption-key e, which lie outside both protected regions.]
absolute security. However, its usage of the key is inefficient; or, put in other words,
it requires keys of length comparable to the total length (or information contents) of
the data being communicated. By contrast, the rest of this chapter will focus on en-
cryption schemes in which n-bit long keys allow for the secure communication of
data having an a priori unbounded (albeit polynomial in n) length. In particular, n-bit
long keys allow for significantly more than n bits of information to be communicated
securely.
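The classical scheme with this property is the one-time pad (referred to again in the Comments following Definition 5.1.1 below). A minimal Python sketch (function names ours) makes its key-length requirement explicit: the key must be at least as long as all the data encrypted under it.

    import secrets

    def otp_keygen(total_length: int) -> bytes:
        # The key must be as long as the total data to be communicated.
        return secrets.token_bytes(total_length)

    def otp_encrypt(key: bytes, plaintext: bytes) -> bytes:
        assert len(plaintext) <= len(key), "pad exhausted: key shorter than the data"
        return bytes(k ^ m for k, m in zip(key, plaintext))

    def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
        # Decryption is the same bit-by-bit XOR, so D(K, E(K, x)) == x.
        return otp_encrypt(key, ciphertext)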
where the probability is taken over the internal coin tosses of algorithms E and D. The integer n serves as the security parameter of the scheme. Each (e, d) in the range of G(1^n) constitutes a pair of corresponding encryption/decryption keys. The string E(e, α) is the encryption of the plaintext α ∈ {0,1}* using the encryption-key e, whereas D(d, β) is the decryption of the ciphertext β using the decryption-key d.
We stress that Definition 5.1.1 says nothing about security, and so trivial (insecure) algorithms may satisfy it (e.g., E(e, α) def= α and D(d, β) def= β). Furthermore, Definition 5.1.1 does not distinguish private-key encryption schemes from public-key ones. The difference between the two types is introduced in the security definitions: In a public-key scheme the "breaking algorithm" gets the encryption-key (i.e., e) as an additional input (and thus e ≠ d follows), while in private-key schemes e is not given to the "breaking algorithm" (and thus, one may assume, without loss of generality, that e = d).
We stress that this definition requires the scheme to operate for every plaintext, and specifically for plaintexts of length exceeding the length of the encryption-key. (This rules out the information-theoretically secure "one-time pad" scheme mentioned earlier.)
Notation. In the rest of this text, we write E_e(α) instead of E(e, α) and D_d(β) instead of D(d, β). Sometimes, when there is little risk of confusion, we drop these subscripts. Also, we let G_1(1^n) (resp., G_2(1^n)) denote the first (resp., second) element in the pair G(1^n). That is, G(1^n) = (G_1(1^n), G_2(1^n)). Without loss of generality, we may assume that |G_1(1^n)| and |G_2(1^n)| are polynomially related to n, and that each of these integers can be efficiently computed from the other. (In fact, we may even assume that |G_1(1^n)| = |G_2(1^n)| = n; see Exercise 6.)
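In programming terms, Definition 5.1.1 and the notation just introduced fix only an interface and a correctness requirement. The following Python sketch (class and variable names ours) spells out that interface, instantiated with the trivial, insecure choice E(e, α) = α and D(d, β) = β discussed above.

    import os
    from typing import Tuple

    class TrivialScheme:
        """A syntactically valid triple (G, E, D): correctness holds, yet the
        'encryption' reveals the plaintext, illustrating that Definition 5.1.1
        says nothing about security."""

        def G(self, n: int) -> Tuple[bytes, bytes]:
            key = os.urandom(n)      # key generation on input 1^n; here e = d
            return key, key          # (G_1(1^n), G_2(1^n))

        def E(self, e: bytes, plaintext: bytes) -> bytes:
            return plaintext         # ignores the encryption-key

        def D(self, d: bytes, ciphertext: bytes) -> bytes:
            return ciphertext

    # Correctness (Condition 2): for every plaintext a, D_d(E_e(a)) == a.
    scheme = TrivialScheme()
    e, d = scheme.G(16)
    assert scheme.D(d, scheme.E(e, b"any plaintext")) == b"any plaintext"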
Comments. Definition 5.1.1 may be relaxed in several ways without significantly harming its usefulness. For example, we may relax Condition (2) and allow a negligible decryption error (e.g., Pr[D_d(E_e(α)) ≠ α] < 2^{-n}). Alternatively, one may postulate that Condition (2) holds for all but a negligible measure of the key-pairs generated by G(1^n). At least one of these relaxations is essential for some suggestions of (public-key) encryption schemes.
Another relaxation consists of restricting the domain of possible plaintexts (and ciphertexts). For example, one may restrict Condition (2) to α's of length ℓ(n), where ℓ : N → N is some fixed function. Given a scheme of the latter type (with plaintext length ℓ), we may construct a scheme as in Definition 5.1.1 by breaking plaintexts into blocks of length ℓ(n) and applying the restricted scheme separately to each block. (Note
that security of the resulting scheme requires that the security of the length-restricted
scheme be preserved under multiple encryptions with the same key.) For more details
see Sections 5.2.4 and 5.3.2.
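A Python sketch of the block-by-block construction just described; restricted_E and restricted_D stand for some given length-restricted scheme handling blocks of exactly ell bytes (both names, and the simplifying assumption that the plaintext length is a multiple of ell, are ours).

    from typing import Callable, List

    def blockwise_encrypt(e: bytes, plaintext: bytes, ell: int,
                          restricted_E: Callable[[bytes, bytes], bytes]) -> List[bytes]:
        # Break the plaintext into blocks of length ell and apply the
        # length-restricted scheme separately to each block, under the same key.
        # (Arbitrary lengths would also require an unambiguous padding rule.)
        assert len(plaintext) % ell == 0
        return [restricted_E(e, plaintext[i:i + ell])
                for i in range(0, len(plaintext), ell)]

    def blockwise_decrypt(d: bytes, blocks: List[bytes],
                          restricted_D: Callable[[bytes, bytes], bytes]) -> bytes:
        return b"".join(restricted_D(d, c) for c in blocks)

As stressed above, the security of the composed scheme hinges on the length-restricted scheme remaining secure under multiple encryptions with the same key.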
5.2 Definitions of Security
In this section we present two fundamental definitions of security and prove their equiv-
alence. The first definition, called semantic security, is the most natural one. Semantic
security is a computational-complexity analogue of Shannon’s definition of perfect pri-
vacy (which requires that the ciphertext yield no information regarding the plaintext).
Loosely speaking, an encryption scheme is semantically secure if it is infeasible to
learn anything about the plaintext from the ciphertext (i.e., impossibility is replaced
by infeasibility). The second definition has a more technical flavor. It interprets se-
curity as the infeasibility of distinguishing between encryptions of a given pair of
messages. This definition is useful in demonstrating the security of a proposed encryp-
tion scheme and for the analysis of cryptographic protocols that utilize an encryption
scheme.
We stress that the definitions presented in Section 5.2.1 go far beyond saying that it
is infeasible to recover the plaintext from the ciphertext. The latter statement is indeed a
minimal requirement for a secure encryption scheme, but we claim that it is far too weak
a requirement. For example, one should certainly not use an encryption scheme that
leaks the first part of the plaintext (even if it is infeasible to recover the entire plaintext
from the ciphertext). In general, an encryption scheme is typically used in applications
where even obtaining partial information on the plaintext may endanger the security
of the application. The question of which partial information endangers the security
of a specific application is typically hard (if not impossible) to answer. Furthermore,
we wish to design application-independent encryption schemes, and when doing so
it is the case that each piece of partial information may endanger some application.
Thus, we require that it be infeasible to obtain any information about the plaintext
from the ciphertext. Moreover, in most applications the plaintext may not be uniformly
distributed, and some a priori information regarding it may be available to the adversary.
We thus require that the secrecy of all partial information be preserved also in such a
case. That is, given any a priori information on the plaintext, it is infeasible to obtain
any (new) information about the plaintext from the ciphertext (beyond what is feasible
to obtain from the a priori information on the plaintext). The definition of semantic
security postulates all of this.
Loosely speaking, semantic security means that nothing can be gained by looking
at a ciphertext. Following the simulation paradigm, this means that whatever can be
efficiently learned from the ciphertext can also be efficiently learned from scratch (or
from nothing).
that is exponential in the security parameter (see Exercise 3). Likewise, we restrict the
functions f and h to be polynomially-bounded, that is, | f (z)|, |h(z)| ≤ poly(|z|).
The difference between private-key and public-key encryption schemes is manifested
in the definition of security. In the latter case, the adversary (which is trying to obtain
information on the plaintext) is given the encryption-key, whereas in the former case
it is not. Thus, the difference between these schemes amounts to a difference in the
adversary model (considered in the definition of security). We start by presenting the
definition for private-key encryption schemes.
(The probability in these terms is taken over X_n as well as over the internal coin tosses of either algorithms G, E, and A or algorithm A′.)
We stress that all the occurrences of X_n in each of the probabilistic expressions refer to the same random variable (see the general convention stated in Section 1.2.1 in Volume 1). The security parameter 1^n is given to both algorithms (as well as to the functions h and f) for technical reasons.2 The function h provides both algorithms with partial information regarding the plaintext X_n. Furthermore, h also makes the definition implicitly non-uniform; see further discussion in Section 5.2.1.2. In addition, both algorithms get the length of X_n. These algorithms then try to guess the value f(1^n, X_n); namely, they try to infer information about the plaintext X_n. Loosely speaking, in a semantically secure encryption scheme the ciphertext does not help in this inference task. That is, the success probability of any efficient algorithm (i.e., algorithm A) that is given the ciphertext can be matched, up to a negligible fraction, by the success probability of an efficient algorithm (i.e., algorithm A′) that is not given the ciphertext at all.
Definition 5.2.1 refers to private-key encryption schemes. To derive a definition of
security for public-key encryption schemes, the encryption-key (i.e., G 1 (1n )) should
be given to the adversary as an additional input.
2 The auxiliary input 1^n is used for several purposes. First, it allows smooth transition to fully non-uniform formulations (e.g., Definition 5.2.3) in which the (polynomial-size) adversary depends on n. Thus, it is good to provide A (and thus also A′) with 1^n. Once this is done, it is natural to allow also h and f to depend on n. In fact, allowing h and f to explicitly depend on n facilitates the proof of Proposition 5.2.7. In light of the fact that 1^n is given to both algorithms, we may replace the input part 1^{|X_n|} by |X_n|, because the former may be recovered from the latter in poly(n)-time.
Recall that (by our conventions) both occurrences of G_1(1^n), in the first probabilistic expression, refer to the same random variable. We comment that it is pointless to give the random encryption-key (i.e., G_1(1^n)) to algorithm A′ (because the task as well as the main inputs of A′ are unrelated to the encryption-key, and anyhow A′ could generate a random encryption-key by itself).
given to the algorithms) the algorithms are asked to guess the value of f (at a plaintext implicit in the ciphertext given only to A). However, as we shall see in the sequel (see also Exercise 13), the actual technical content of semantic security is that the probability ensembles {(1^n, E(X_n), 1^{|X_n|}, h(1^n, X_n))}_{n∈N} and {(1^n, E(1^{|X_n|}), 1^{|X_n|}, h(1^n, X_n))}_{n∈N} are computationally indistinguishable (and so whatever A can compute can also be computed by A′). Note that the latter statement does not refer to the function f, which explains why we need not make any restriction regarding f.
    | Pr[C_n(E_{G_1(1^n)}(x)) = 1] − Pr[C_n(E_{G_1(1^n)}(y)) = 1] |  <  1/p(n)
The probability in these terms is taken over the internal coin tosses of algorithms G
and E.
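This bounded quantity can also be read operationally: encrypt x and y under freshly generated keys many times and compare how often the circuit outputs 1. The following Python sketch (all names ours, with a callable standing in for the circuit) estimates that gap empirically.

    from typing import Callable, Tuple

    def estimate_gap(G: Callable[[int], Tuple[bytes, bytes]],
                     E: Callable[[bytes, bytes], bytes],
                     C: Callable[[bytes], int],
                     x: bytes, y: bytes, n: int, trials: int = 10_000) -> float:
        """Empirical estimate of | Pr[C(E_e(x)) = 1] - Pr[C(E_e(y)) = 1] |,
        where e <- G_1(1^n) and fresh coins are used in every encryption."""
        ones_x = ones_y = 0
        for _ in range(trials):
            e, _d = G(n)
            ones_x += C(E(e, x))     # C outputs 0 or 1
            e, _d = G(n)
            ones_y += C(E(e, y))
        return abs(ones_x - ones_y) / trials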
Note that the potential plaintexts to be distinguished can be incorporated into the circuit
Cn . Thus, the circuit models both the adversary’s strategy and its a priori information:
See Exercise 11.
Again, the security definition for public-key encryption schemes is derived by adding
the encryption-key (i.e., G 1 (1n )) as an additional input to the potential distinguisher.
    | Pr[C_n(G_1(1^n), E_{G_1(1^n)}(x)) = 1] − Pr[C_n(G_1(1^n), E_{G_1(1^n)}(y)) = 1] |  <  1/p(n)
Let (G, E, D) be an encryption scheme. We formulate a proposition for each of the two
directions of this theorem. Each proposition is in fact stronger than the corresponding
direction stated in Theorem 5.2.5. The more useful direction is stated first: It asserts
that the technical interpretation of security, in terms of ciphertext-indistinguishability,
implies the natural notion of semantic security. Thus, the following proposition yields
a methodology for designing semantically secure encryption schemes: Design and
prove your scheme to be ciphertext-indistinguishable, and conclude (by applying the
proposition) that it is semantically secure. The opposite direction (of Theorem 5.2.5)
establishes the “completeness” of the latter methodology, and more generally asserts
that requiring an encryption scheme to be ciphertext-indistinguishable does not rule
out schemes that are semantically secure.
Observe that the four itemized conditions limit the scope of the four universal quantifiers
in Definition 5.2.1, whereas the last sentence removes a restriction on the existential
quantifier (i.e., removes the complexity bound on A′) and reverses the order of quanti-
fiers allowing the existential quantifier to depend on all universal quantifiers (rather than
only on the last one). Thus, each of these modifications makes the resulting definition
potentially weaker. Still, combining Propositions 5.2.7 and 5.2.6, it follows that a weak
version of Definition 5.2.1 implies (an even stronger version than) the one stated in
Definition 5.2.1.
A guesses f (1n , X n ) essentially as well when given a dummy encryption as when given
the encryption of X n ). Details follow.
The construction of A′: Let A be an algorithm that tries to infer partial information (i.e., the value f(1^n, X_n)) from the encryption of the plaintext X_n (when also given 1^n, 1^{|X_n|} and a priori information h(1^n, X_n)). Intuitively, on input E(α) and (1^n, 1^{|α|}, h(1^n, α)), algorithm A tries to guess f(1^n, α). We construct a new algorithm, A′, that performs essentially as well without getting the input E(α). The new algorithm consists of invoking A on input E_{G_1(1^n)}(1^{|α|}) and (1^n, 1^{|α|}, h(1^n, α)), and outputting whatever A does. That is, on input (1^n, 1^{|α|}, h(1^n, α)), algorithm A′ proceeds as follows:

1. A′ invokes the key-generator G (on input 1^n), and obtains an encryption-key e ← G_1(1^n).
2. A′ invokes the encryption algorithm with key e and ("dummy") plaintext 1^{|α|}, obtaining a ciphertext β ← E_e(1^{|α|}).
3. A′ invokes A on input (1^n, β, 1^{|α|}, h(1^n, α)), and outputs whatever A does.

Observe that A′ is described in terms of an oracle machine that makes a single oracle call to (any given) A, in addition to invoking the fixed algorithms G and E. Furthermore, the construction of A′ depends neither on the functions h and f nor on the distribution of plaintexts to be encrypted (represented by the probability ensembles {X_n}_{n∈N}). Thus, A′ is probabilistic polynomial-time whenever A is probabilistic polynomial-time (and regardless of the complexity of h, f, and {X_n}_{n∈N}).
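A direct transcription of this construction into Python (with callables standing in for the algorithms; all names are ours):

    from typing import Callable

    def make_A_prime(A: Callable, G: Callable, E: Callable) -> Callable:
        """Build the algorithm A' of the construction above from any adversary A:
        A' never sees the real ciphertext; it feeds A an encryption of the dummy
        plaintext 1^{|alpha|} under a freshly generated key."""
        def A_prime(n: int, alpha_len: int, h_value):
            e, _d = G(n)                       # step 1: e <- G_1(1^n)
            dummy = b"\x01" * alpha_len        # the dummy plaintext 1^{|alpha|}
            beta = E(e, dummy)                 # step 2: beta <- E_e(1^{|alpha|})
            return A(n, beta, alpha_len, h_value)  # step 3: output whatever A does
        return A_prime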
Indistinguishability of encryptions will be used to prove that A′ performs essentially as well as A. Specifically, the proof will use a reducibility argument.
Claim 5.2.6.1: Let A′ be as in the preceding construction. Then, for every {X_n}_{n∈N}, f, h, and p as in Definition 5.2.1, and all sufficiently large n's

    Pr[A(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)]
        < Pr[A′(1^n, 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)] + 1/p(n)
Proof: To simplify the notations, let us incorporate 1^{|α|} into h_n(α) def= h(1^n, α) and let f_n(α) def= f(1^n, α). Also, we omit 1^n from the inputs given to A, shorthanding A(1^n, c, v) by A(c, v). Using the definition of A′, we rewrite the claim as asserting

    Pr[A(E_{G_1(1^n)}(X_n), h_n(X_n)) = f_n(X_n)]
        < Pr[A(E_{G_1(1^n)}(1^{|X_n|}), h_n(X_n)) = f_n(X_n)] + 1/p(n)          (5.1)
Intuitively, Eq. (5.1) follows from the indistinguishability of encryptions. Otherwise,
by fixing a violating value of X n and hardwiring the corresponding values of h n (X n )
and f n (X n ), we get a small circuit that distinguishes an encryption of this value of X n
from an encryption of 1|X n | . Details follow.
Assume toward the contradiction that for some polynomial p and infinitely many n's Eq. (5.1) is violated. Then, for each such n, we have E[Δ_n(X_n)] > 1/p(n), where

    Δ_n(x) def= Pr[A(E_{G_1(1^n)}(x), h_n(x)) = f_n(x)] − Pr[A(E_{G_1(1^n)}(1^{|x|}), h_n(x)) = f_n(x)]

We use an averaging argument to single out a string x_n in the support of X_n such that Δ_n(x_n) ≥ E[Δ_n(X_n)]: That is, let x_n ∈ {0,1}^{poly(n)} be a string for which the value of Δ_n(·) is maximum, and so Δ_n(x_n) > 1/p(n). Using this x_n, we introduce a circuit C_n, which incorporates the fixed values f_n(x_n) and h_n(x_n), and distinguishes the encryption of x_n from the encryption of 1^{|x_n|}. The circuit C_n operates as follows. On input β = E(α), the circuit C_n invokes A(β, h_n(x_n)) and outputs 1 if and only if A outputs the value f_n(x_n). Otherwise, C_n outputs 0.

This circuit is indeed of polynomial size because it merely incorporates strings of polynomial length (i.e., f_n(x_n) and h_n(x_n)) and emulates a polynomial-time computation (i.e., that of A). (Note that the circuit family {C_n} is indeed non-uniform since its definition is based on a non-uniform selection of x_n's as well as on a hardwiring of (possibly uncomputable) corresponding strings h_n(x_n) and f_n(x_n).) Clearly,

    Pr[C_n(E_{G_1(1^n)}(α)) = 1] = Pr[A(E_{G_1(1^n)}(α), h_n(x_n)) = f_n(x_n)]          (5.2)

Combining Eq. (5.2) with the definition of Δ_n(x_n), we get

    Pr[C_n(E_{G_1(1^n)}(x_n)) = 1] − Pr[C_n(E_{G_1(1^n)}(1^{|x_n|})) = 1] = Δ_n(x_n) > 1/p(n)

This contradicts our hypothesis that E has indistinguishable encryptions, and the claim follows.

We have just shown that A′ performs essentially as well as A, and so Proposition 5.2.6 follows.
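For intuition, the non-uniform distinguisher used in this argument can be pictured as a closure that hardwires h_n(x_n) and f_n(x_n); the Python rendering below (names ours) only illustrates this hardwiring; in the proof, C_n is a fixed polynomial-size circuit rather than a uniform program.

    from typing import Callable

    def make_Cn(A: Callable, h_of_xn, f_of_xn) -> Callable[[bytes], int]:
        """Hardwire the strings h_n(x_n) and f_n(x_n): on input a ciphertext beta,
        output 1 iff A(beta, h_n(x_n)) returns the hardwired value f_n(x_n)."""
        def C_n(beta: bytes) -> int:
            return 1 if A(beta, h_of_xn) == f_of_xn else 0
        return C_n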
Comments. The fact that we deal with a non-uniform model of computation allows
the preceding proof to proceed regardless of the complexity of f and h. All that
our definition of C n requires is the hardwiring of the values of f and h on a single
string, and this can be done regardless of the complexity of f and h (provided that
| f n (xn )|, |h n (x n )| ≤ poly(n)).
When proving the public-key analogue of Proposition 5.2.6, algorithm A′ is defined exactly as in the present proof, but its analysis is slightly different: The distinguishing circuit, considered in the analysis of the performance of A′, obtains the encryption-key as part of its input and passes it to algorithm A (upon invoking the latter).
Proof: The desired algorithm A merely uses C_n = h(1^n, X_n) to distinguish E(x_n) from E(y_n), and thus given E(X_n) it produces a guess for the value of f(1^n, X_n). Specifically, on input β = E(α) (where α is in the support of X_n) and (1^n, 1^{|α|}, h(1^n, α)), algorithm A
Proof: Just observe that the output of A′, on its constant input values 1^n, 1^{|X_n|} and h(1^n, X_n), is stochastically independent of the random variable f(1^n, X_n), which in turn is uniformly distributed in {0,1}. Eq. (5.5) follows (and equality holds in case A′ always outputs a value in {0,1}).
Combining Claim 5.2.7.1 and Fact 5.2.7.2, we reach a contradiction to the hypothesis
that the scheme is semantically secure (even in the restricted sense mentioned in the
furthermore-clause of the proposition). Thus, the proposition follows.
3 We comment that the value "1" output by C_n is an indication that α is more likely to be x_n, whereas the output of A is a guess of f(α). This point may be better stressed by redefining f such that f(1^n, x_n) def= x_n and f(1^n, x) def= y_n if x ≠ x_n, and having A output x_n if C_n outputs 1 and output y_n otherwise.
5.2.4.1. Definitions
For a sequence of strings x̄ = (x^{(1)}, ..., x^{(t)}), we let Ē_e(x̄) denote the sequence of the t results that are obtained by applying the randomized process E_e to the t strings x^{(1)}, ..., x^{(t)}, respectively. That is, Ē_e(x̄) = (E_e(x^{(1)}), ..., E_e(x^{(t)})). We stress that in each of these t invocations, the randomized process E_e utilizes independently chosen
random coins. For the sake of simplicity, we consider the encryption of (polynomi-
ally) many plaintexts of the same (polynomial) length (rather than the encryption of
plaintexts of various lengths as discussed in Exercise 20). The number of plaintexts
as well as their total length (in unary) are given to all algorithms either implicitly or
explicitly.4
For private-key: An encryption scheme, (G, E, D), is semantically secure for multiple messages in the private-key model if for every probabilistic polynomial-time algorithm A, there exists a probabilistic polynomial-time algorithm A′ such that for every probability ensemble {X̄_n = (X_n^{(1)}, ..., X_n^{(t(n))})}_{n∈N}, with |X_n^{(1)}| = ··· = |X_n^{(t(n))}| ≤ poly(n) and t(n) ≤ poly(n), every pair of polynomially bounded functions f, h : {0,1}* → {0,1}*, every positive polynomial p and all sufficiently large n

    Pr[A(1^n, Ē_{G_1(1^n)}(X̄_n), 1^{|X̄_n|}, h(1^n, X̄_n)) = f(1^n, X̄_n)]
        < Pr[A′(1^n, t(n), 1^{|X̄_n|}, h(1^n, X̄_n)) = f(1^n, X̄_n)] + 1/p(n)

For public-key: An encryption scheme, (G, E, D), is semantically secure for multiple messages in the public-key model if for A, A′, t, {X̄_n}_{n∈N}, f, h, p, and n, as in the
4 For example, A can infer the number of plaintexts from the number of ciphertexts, whereas A′ is given this number explicitly. Given the number of the plaintexts as well as their total length, both algorithms can infer the length of each plaintext.
(The probability in these terms is taken over X n as well as over the internal coin tosses
of the relevant algorithms.)
We stress that the elements of X n are not necessarily independent; they may depend on
one another. Note that this definition also covers the case where the adversary obtains
some of the plaintexts themselves. In this case it is still infeasible for him/her to obtain
information about the missing plaintexts (see Exercise 22).
The equivalence of Definitions 5.2.8 and 5.2.9 can be established analogously to the
proof of Theorem 5.2.5.
Thus, proving that single-message security implies multiple-message security for one
definition of security yields the same for the other. We may thus concentrate on the
ciphertext-indistinguishability definitions.
C_n(e, Ē_e(h̄^{(i)})) = C_n(H_n^{(i)}). Thus, by Eq. (5.6), we have

    Pr[D_n(G_1(1^n), E_{G_1(1^n)}(y_{i+1})) = 1] − Pr[D_n(G_1(1^n), E_{G_1(1^n)}(x_{i+1})) = 1] > 1/(t(n)·p(n))
Discussion. The fact that we are in the public-key model is essential to this proof. It
allows the circuit Dn to form encryptions relative to the same encryption-key used in
the ciphertext given to it. In fact, as previously stated (and proven next), the analogous
result does not hold in the private-key model.
Proposition 5.2.12: Suppose that there exist pseudorandom generators (robust against
polynomial-size circuits). Then, there exists a private-key encryption scheme that sat-
isfies Definition 5.2.3 but does not satisfy Definition 5.2.9.
Proof: We start with the construction of the desired private-key encryption scheme. The encryption/decryption key for security parameter n is a uniformly distributed n-bit long string, denoted s. To encrypt a plaintext, x, the encryption algorithm uses the key s as a seed for a (variable-output) pseudorandom generator, denoted g, that stretches seeds of length n into sequences of length |x|. The ciphertext is obtained by a bit-by-bit exclusive-or of x and g(s). Decryption is done in an analogous manner.
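A Python sketch of this construction; SHAKE-256 is used purely as a stand-in for the assumed (variable-output) pseudorandom generator g, and the byte-level granularity is an implementation convenience of ours.

    import hashlib
    import os

    def keygen(n: int) -> bytes:
        return os.urandom(n)                 # uniformly distributed n-byte seed s

    def g(seed: bytes, out_len: int) -> bytes:
        # Variable-output "pseudorandom generator"; SHAKE-256 is only a stand-in.
        return hashlib.shake_256(seed).digest(out_len)

    def encrypt(s: bytes, x: bytes) -> bytes:
        # Ciphertext = bit-by-bit XOR of the plaintext with g(s), as described above.
        return bytes(a ^ b for a, b in zip(x, g(s, len(x))))

    decrypt = encrypt                        # decryption is the same XOR with g(s)

As the remainder of the proof shows, encrypting a single message this way is secure, whereas encrypting two messages under the same seed is not.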
We first show that this encryption scheme satisfies Definition 5.2.3. Intuitively, this follows from the hypothesis that g is a pseudorandom generator and the fact that x ⊕ U_{|x|} is uniformly distributed over {0,1}^{|x|}. Specifically, suppose toward the contradiction that for some polynomial-size circuit family {C_n}, a polynomial p, and infinitely many n's

    | Pr[C_n(x ⊕ g(U_n)) = 1] − Pr[C_n(y ⊕ g(U_n)) = 1] |  >  1/p(n)

where U_n is uniformly distributed over {0,1}^n and |x| = |y| = m = poly(n). On the other hand,

    Pr[C_n(x ⊕ U_m) = 1] = Pr[C_n(y ⊕ U_m) = 1]
Discussion. The single-message security of the scheme used in the proof of Propo-
sition 5.2.12 was proven by considering an ideal version of the scheme in which the
pseudorandom sequence is replaced by a truly random sequence. The latter scheme
is secure in an information-theoretic sense, and the security of the actual scheme fol-
lowed by the indistinguishability of the two sequences. As we show in Section 5.3.1, this
construction can be modified to yield a private-key “stream-cipher” that is secure for
multiple message encryptions. All that is needed in order to obtain multiple-message
security is to make sure that (as opposed to this construction) the same portion of the
pseudorandom sequence is never used twice.
5 On input the ciphertexts β_1 and β_2, knowing that the first plaintext is x_1, we first retrieve the pseudorandom sequence (i.e., it is just r def= β_1 ⊕ x_1), and next retrieve the second plaintext (i.e., by computing β_2 ⊕ r).
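The modification pointed to above, namely never using the same portion of the pseudorandom sequence twice, can be sketched by keeping a counter of how many pad bytes have been consumed; as before, SHAKE-256 merely stands in for the pseudorandom generator, and the stateful-class design is ours. Sender and receiver must keep their copies of the state synchronized.

    import hashlib

    class StreamCipher:
        """Private-key 'stream cipher': every call consumes a fresh, previously
        unused segment of the pseudorandom sequence derived from the seed."""

        def __init__(self, seed: bytes):
            self.seed = seed
            self.offset = 0          # pad bytes consumed so far

        def _next_pad(self, length: int) -> bytes:
            stream = hashlib.shake_256(self.seed).digest(self.offset + length)
            segment = stream[self.offset:]
            self.offset += length
            return segment

        def process(self, data: bytes) -> bytes:
            # XOR with the next unused pad segment; the receiver, holding the same
            # seed and offset, applies the identical operation to decrypt.
            return bytes(a ^ b for a, b in zip(data, self._next_pad(len(data))))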
where Ē_e(x̄) def= (E_e(x^{(1)}), ..., E_e(x^{(t(n))})), for x̄ = (x^{(1)}, ..., x^{(t(n))}) ∈ {0,1}^{t(n)·ℓ(n)}, is as in Definition 5.2.8.
The random variable Z̄_n represented a priori information about the plaintexts for which encryptions should be distinguished. A special case of interest is when Z̄_n = X̄_n Ȳ_n. Uniformity is captured in the requirement that D′ be a probabilistic polynomial-time algorithm (rather than a family of polynomial-size circuits) and that the ensemble {T̄_n = X̄_n Ȳ_n Z̄_n}_{n∈N} be polynomial-time constructible. Recall that in the non-uniform case (i.e., Definition 5.2.9), the random variable Z̄_n can be incorporated in the distinguishing circuit C_n (and thus be eliminated).6 Thus, Definition 5.2.14 is seemingly weaker than the corresponding non-uniform definition (i.e., Definition 5.2.9).
An analogous result holds for the private-key model. The important direction of the
theorem holds also for the single-message version (this is quite obvious from the
following proof). In the other direction, we seem to use the multiple-message version (of semantic security) in an essential way. An alternative treatment is provided in Exercise 23.
Proof Sketch: Again, we start with the more important direction (i.e., "indistinguishability" implies semantic security). Specifically, assuming that (G, E, D) has indistinguishable encryptions in the uniform sense, even merely in the special case where Z̄_n = X̄_n Ȳ_n, we show that it is semantically secure in the uniform sense. Our construction of algorithm A′ is analogous to the construction used in the non-uniform treatment. Specifically, on input (1^n, t(n), 1^{|ᾱ|}, h(1^n, ᾱ)), algorithm A′ generates a random encryption of a dummy sequence of plaintexts (i.e., 1^{|ᾱ|}), feeds it to A, and outputs whatever
A does.7 That is,

    A′(1^n, t(n), 1^{|ᾱ|}, h(1^n, ᾱ)) def= A(1^n, G_1(1^n), Ē_{G_1(1^n)}(1^{|ᾱ|}), 1^{|ᾱ|}, h(1^n, ᾱ))
As in the non-uniform case, the analysis of algorithm A′ reduces to the following claim.
Claim 5.2.15.1: For every two polynomials t and ℓ, every polynomial-time constructible ensemble {X̄_n}_{n∈N}, with X̄_n = (X_n^{(1)}, ..., X_n^{(t(n))}) and |X_n^{(i)}| = ℓ(n), every polynomial-time computable h, every positive polynomial p, and all sufficiently large n's

    Pr[A(1^n, G_1(1^n), Ē_{G_1(1^n)}(X̄_n), 1^{|X̄_n|}, h(1^n, X̄_n)) = f(1^n, X̄_n)]
        < Pr[A(1^n, G_1(1^n), Ē_{G_1(1^n)}(1^{|X̄_n|}), 1^{|X̄_n|}, h(1^n, X̄_n)) = f(1^n, X̄_n)] + 1/p(n)
Proof Sketch: Analogously to the non-uniform case, assuming toward the contradiction that the claim does not hold yields an algorithm that distinguishes encryptions of X̄_n from encryptions of Ȳ_n = 1^{|X̄_n|}, when getting auxiliary information Z̄_n = X̄_n Ȳ_n = X̄_n 1^{|X̄_n|}. Thus, we derive a contradiction to Definition 5.2.14 (even under the special case postulated in the theorem).

We note that the auxiliary information that is given to the distinguishing algorithm replaces the hardwiring of auxiliary information that was used in the non-uniform case (and is not possible in the uniform-complexity model). Specifically, rather than using a hardwired value of h (at some non-uniformly fixed sequence), the distinguishing algorithm will use the auxiliary information Z̄_n = X̄_n 1^{|X̄_n|} in order to compute h_n(X̄_n) def= (1^n, 1^{|X̄_n|}, h(1^n, X̄_n)), which it will pass to A. Indeed, we rely on the hypothesis that h is efficiently computable.

The actual proof is quite simple in case the function f is also polynomial-time computable (which is not the case in general). In this special case, on input (1^n, e, z̄, Ē_e(ᾱ)), where z̄ = (x̄, 1^{|x̄|}) and ᾱ ∈ {x̄, 1^{|x̄|}} for x̄ ← X̄_n, the distinguishing algorithm computes u = h(1^n, x̄) and v = f(1^n, x̄), invokes A, and outputs 1 if and only if A(1^n, e, Ē_e(ᾱ), 1^{|x̄|}, u) = v.
7 More accurately, algorithm A′ proceeds as follows. Using t(n), the algorithm breaks 1^{|ᾱ|} into a sequence of t(n) equal-length (unary) strings, using 1^n it generates a random encryption-key, and using this key it generates the corresponding sequence of encryptions.
The proof becomes more involved in the case where f is not polynomial-time computable.8 Again, the solution is in realizing that indistinguishability of encryption postulates a similar output profile (of A) in both cases, where the two cases correspond to whether A is given an encryption of x̄ or an encryption of 1^{|x̄|} (for x̄ ← X̄_n). In particular, no value can occur as the output of A in one case with non-negligibly higher probability than in the other case. To clarify the point, for every fixed x̄, we define Δ_{n,v}(x̄) to be the difference between Pr[A(G_1(1^n), Ē_{G_1(1^n)}(x̄), h_n(x̄)) = v] and Pr[A(G_1(1^n), Ē_{G_1(1^n)}(1^{|x̄|}), h_n(x̄)) = v], where h_n(x̄) def= (1^n, 1^{|x̄|}, h(1^n, x̄)) and the probability space is over the internal coin tosses of algorithms G, E, and A. Taking the expectation over X̄_n, the contradiction hypothesis means that E[Δ_{n, f(1^n, X̄_n)}(X̄_n)] > 1/p(n), and so with probability at least 1/2p(n) over the choice of x̄ ← X̄_n we have Δ_{n, f(1^n, x̄)}(x̄) > 1/2p(n). The problem is that, given x̄ (and 1^n), we cannot even approximate Δ_{n, f(1^n, x̄)}(x̄), because we do not have the value f(1^n, x̄) (and we cannot compute it). Instead, we let Δ_n(x̄) def= max_{v∈{0,1}^{poly(n)}} {Δ_{n,v}(x̄)}, and observe that E[Δ_n(X̄_n)] ≥ E[Δ_{n, f(1^n, X̄_n)}(X̄_n)] > 1/p(n). Furthermore, given (1^n, x̄), we can (efficiently) approximate Δ_n(x̄) as well as find a value v such that Δ_{n,v}(x̄) > Δ_n(x̄) − (1/2p(n)), with probability at least 1 − 2^{-n}.
8 Unlike in the non-uniform treatment, here we cannot hardwire values (such as the values of h and f on good sequences) into the algorithm D′, because D′ is required to be uniform.
our selection, let us denote by v_n a string s that maximizes Δ_{n,s}(x̄) (i.e., Δ_{n,v_n}(x̄) = Δ_n(x̄)). Then, with probability at least 1 − 2^{-n}, the string ṽ satisfies

    Δ_{n,ṽ}(x̄)  ≥  Δ̃_{n,ṽ}(x̄) − (1/4p(n))
               ≥  Δ̃_{n,v_n}(x̄) − (1/4p(n))
               ≥  Δ_{n,v_n}(x̄) − (1/4p(n)) − (1/4p(n))

where Δ̃_{n,·}(x̄) denotes the corresponding approximation, the first and last inequalities are due to the quality of our approximations, and the second inequality is due to the fact that ṽ maximizes Δ̃_{n,·}(x̄). Thus, Δ_{n,ṽ}(x̄) ≥ Δ_n(x̄) − (1/2p(n)).
Thus, on input (1^n, e, z̄, Ē_e(ᾱ)), where z̄ = (x̄, 1^{|x̄|}), the distinguisher, denoted D′, operates in two stages.

1. In the first stage, D′ ignores the ciphertext Ē_e(ᾱ). Using z̄, algorithm D′ recovers x̄, and computes u = h_n(x̄) def= (1^n, 1^{|x̄|}, h(1^n, x̄)). Using x̄ and u, algorithm D′ estimates Δ_n(x̄), and finds a value v as noted. That is, with probability at least 1 − 2^{-n}, it holds that Δ_{n,v}(x̄) > Δ_n(x̄) − (1/2p(n)).
2. In the second stage (using u and v, as determined in the first stage), algorithm D′ invokes A, and outputs 1 if and only if A(e, Ē_e(ᾱ), u) = v.
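In code, the two stages look roughly as follows; the sketch treats sequences of plaintexts as opaque objects, uses sampling to realize the estimation step described above, and all names (including the auxiliary find_best_v) are ours.

    from collections import Counter
    from typing import Callable

    def find_best_v(A: Callable, G: Callable, E: Callable,
                    x, dummy, u, n: int, trials: int = 10_000):
        """Estimate Delta_{n,v} = Pr[A(e, E_e(x), u) = v] - Pr[A(e, E_e(dummy), u) = v]
        for the values v that A outputs, and return a v maximizing the estimate."""
        freq_x, freq_dummy = Counter(), Counter()
        for _ in range(trials):
            e, _d = G(n)
            freq_x[A(e, E(e, x), u)] += 1
            e, _d = G(n)
            freq_dummy[A(e, E(e, dummy), u)] += 1
        return max(freq_x, key=lambda v: freq_x[v] - freq_dummy[v])

    def D_prime(n: int, e, z, ciphertext,
                A: Callable, G: Callable, E: Callable, h: Callable):
        x, _padding = z                            # z = (x, 1^{|x|})
        u = h(n, x)                                # u = h_n(x)
        dummy = b"\x01" * len(x)                   # stand-in for the dummy plaintext(s)
        v = find_best_v(A, G, E, x, dummy, u, n)   # first stage (ciphertext ignored)
        return 1 if A(e, ciphertext, u) == v else 0    # second stage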
Let V_n(x̄) be the value found in the first stage of algorithm D′ (i.e., obliviously of the ciphertext Ē_e(ᾱ)). The reader can easily verify that

    Pr[D′(1^n, G_1(1^n), Z̄_n, Ē_{G_1(1^n)}(X̄_n)) = 1] − Pr[D′(1^n, G_1(1^n), Z̄_n, Ē_{G_1(1^n)}(1^{|X̄_n|})) = 1]
        = E[Δ_{n, V_n(X̄_n)}(X̄_n)]
        ≥ (1 − 2^{-n}) · E[Δ_n(X̄_n) − (1/2p(n))] − 2^{-n}
        > E[Δ_n(X̄_n)] − 2/(3p(n))
        > 1/(3p(n))

where the first inequality is due to the quality of the first stage (and the 2^{-n} factors account for the probability that the value found in that stage is bad). Thus, we have derived a probabilistic polynomial-time algorithm (i.e., D′) that distinguishes encryptions of X̄_n from encryptions of Ȳ_n = 1^{|X̄_n|}, when getting auxiliary information Z̄_n = X̄_n 1^{|X̄_n|}. By hypothesis, {X̄_n} is polynomial-time constructible, and it follows that so is {X̄_n Ȳ_n Z̄_n}. Thus, we derive a contradiction to Definition 5.2.14 (even under the special case postulated in the theorem), and the claim follows.
Having established the important direction, we now turn to the opposite one. That is,
we assume that (G, E, D) is (uniformly) semantically secure and prove that it has (uni-
formly) indistinguishable encryptions. Again, the proof is by contradiction. However,
the proof is more complex than in the non-uniform case, because here “distinguishable
encryptions” means distinguishing between two plaintext-distributions (rather than be-
tween two fixed sequences of plaintexts), when also given a possibly related auxiliary
input Z̄_n. Thus, it seems that we need to incorporate Z̄_n into the input given to the (semantic-security) adversary, and the only way to do so seems to be by letting Z̄_n be part of the a priori information given to that adversary (i.e., letting h(plaintext) = Z̄_n). Indeed, this will be part of the construction presented next.
Suppose, without loss of generality, that there exists a probabilistic polynomial-time algorithm D′, a polynomial-time constructible ensemble T̄ def= {T̄_n = X̄_n Ȳ_n Z̄_n}_{n∈N} (as in Definition 5.2.14), a positive polynomial p, and infinitely many n's such that

    Pr[D′(Z̄_n, G_1(1^n), Ē_{G_1(1^n)}(X̄_n)) = 1] > Pr[D′(Z̄_n, G_1(1^n), Ē_{G_1(1^n)}(Ȳ_n)) = 1] + 1/p(n)

Let t(n) and ℓ(n) be such that X̄_n (resp., Ȳ_n) consists of t(n) strings, each of length ℓ(n). Suppose, without loss of generality, that |Z̄_n| = m(n)·ℓ(n), and parse Z̄_n into Z̄′_n = (Z_n^{(1)}, ..., Z_n^{(m(n))}) ∈ ({0,1}^{ℓ(n)})^{m(n)} such that Z̄_n = Z_n^{(1)} ··· Z_n^{(m(n))}. We define an auxiliary polynomial-time constructible ensemble Q̄ def= {Q̄_n}_{n∈N} such that

    Q̄_n = 0^{ℓ(n)} Z̄′_n X̄_n Ȳ_n   with probability 1/2
    Q̄_n = 1^{ℓ(n)} Z̄′_n Ȳ_n X̄_n   with probability 1/2                      (5.8)

That is, Q̄_n is a sequence of 1 + m(n) + 2t(n) strings, each of length ℓ(n), that contains Z̄′_n X̄_n Ȳ_n in addition to a bit (encoded in the ℓ(n)-bit long prefix) indicating whether or not the order of X̄_n and Ȳ_n is switched. We define the function f to be equal to this "switch"-indicator bit, and the function h to provide all information in Q̄_n except this switch bit. That is, we define f and h as follows:

• We define f(1^n, q̄) def= f_n(q̄), where f_n returns the first bit of its input; that is, f_n(σ^{ℓ(n)} z̄ ᾱ β̄) = σ, for (z̄, ᾱ, β̄) ∈ ({0,1}^{ℓ(n)})^{m(n)+2t(n)}.
• We define h(1^n, q̄) def= h_n(q̄), where h_n reorders the suffix of its input according to the first bit; that is, h_n(0^{ℓ(n)} z̄ ᾱ β̄) = z̄ ᾱ β̄ and h_n(1^{ℓ(n)} z̄ ᾱ β̄) = z̄ β̄ ᾱ. Thus, h(1^n, Q̄_n) = Z̄′_n X̄_n Ȳ_n, where Z̄′_n X̄_n Ȳ_n is determined by T̄_n = X̄_n Ȳ_n Z̄_n (and is independent of the switch-case chosen in Eq. (5.8)).

We stress that both h and f are polynomial-time computable.
We stress that both h and f are polynomial-time computable.
We will show that the distinguishing algorithm D (which distinguishes E(X n ) from
E(Y n ), when also given Z n ≡ Z n ) can be transformed into a polynomial-time algo-
rithm A that guesses the value of f (1n , Q n ), from the encryption of Q n (and the value
of h(1n , Q n )), and does so with probability non-negligibly greater than 1/2. This vio-
lates semantic security, since no algorithm (regardless of its running time) can guess
f (1n , Q n ) with probability greater than 1/2 when only given h(1n , Q n ) and 1|Q n | (be-
cause, conditioned on the value of h(1n , Q n ) (and 1|Q n | ), the value of f (1n , Q n ) is
uniformly distributed over {0, 1}).
On input (e, E e (α), 1|α| , h(1n , α)), where α = σ (n) z u v ∈ ({0, 1}l(n) )1+m(n)+2t(n)
equals either (0(n) , z, x, y) or (1(n) , z, y, x), algorithm A proceeds in two stages:
1. In the first stage, algorithm A ignores the ciphertext E e (α). It first extracts
x, y and z ≡ z out of h(1n , α) = z x y, and approximates n (z, x, y), which is
defined to equal

    Pr[D′(z̄, G_1(1^n), Ē_{G_1(1^n)}(x̄)) = 1] − Pr[D′(z̄, G_1(1^n), Ē_{G_1(1^n)}(ȳ)) = 1]
Specifically, using O(n·p(n)^2) samples, algorithm A obtains an approximation, denoted Δ̃_n(z̄, x̄, ȳ), such that |Δ̃_n(z̄, x̄, ȳ) − Δ_n(z̄, x̄, ȳ)| < 1/3p(n) with probability at least 1 − 2^{-n}.

Algorithm A sets ξ = 1 if Δ̃_n(z̄, x̄, ȳ) > 1/3p(n), sets ξ = −1 if Δ̃_n(z̄, x̄, ȳ) < −1/3p(n), and sets ξ = 0 otherwise (i.e., |Δ̃_n(z̄, x̄, ȳ)| ≤ 1/3p(n)). Intuitively, ξ indicates the sign of Δ_n(z̄, x̄, ȳ), provided that the absolute value of the latter is large enough, and is set to zero otherwise. In other words, with overwhelmingly high probability, ξ indicates whether the value of Pr[D′(z̄, ·, Ē_·(x̄)) = 1] is significantly greater, smaller, or about the same as Pr[D′(z̄, ·, Ē_·(ȳ)) = 1].

In case ξ = 0, algorithm A halts with an arbitrary reasonable guess (say a randomly selected bit). (We stress that all this is done obliviously of the ciphertext Ē_e(ᾱ), which is only used next.)

2. In the second stage, algorithm A extracts the last block of ciphertexts (i.e., Ē_e(v̄)) out of Ē_e(ᾱ) = Ē_e(σ^{ℓ(n)} z̄′ ū v̄), and invokes D′ on input (z̄, e, Ē_e(v̄)), where z̄ is as extracted in the first stage. Using the value of ξ as determined in the first stage, algorithm A decides (i.e., determines its output bit) as follows:

• In case ξ = 1, algorithm A outputs 1 if and only if the output of D′ is 1.
• In case ξ = −1, algorithm A outputs 0 if and only if the output of D′ is 1.

That is, ξ = 1 (resp., ξ = −1) indicates that D′ is more (resp., less) likely to output 1 when given the encryption of x̄ than when given the encryption of ȳ.
Claim 5.2.15.2: Let p, Q̄_n, h, f, and A be as in Eq. (5.8) and the text that follows it.

    Pr[A(G_1(1^n), Ē_{G_1(1^n)}(Q̄_n), h(1^n, Q̄_n)) = f(1^n, Q̄_n)]  >  1/2 + 1/(7·p(n))
Proof Sketch: We focus on the case in which the approximation of Δ_n(z, x, y) computed by (the first stage of) A is within 1/(3 p(n)) of the correct value. Thus, in case ξ ≠ 0, the sign of ξ agrees with the sign of Δ_n(z, x, y). It follows that for every possible (z, x, y) such that ξ = 1 (it holds that Δ_n(z, x, y) > 0 and) the following holds:

Pr[A(G_1(1^n), E_{G_1(1^n)}(Q_n), h(1^n, Q_n)) = f(1^n, Q_n) | (Z_n, X_n, Y_n) = (z, x, y)]
  = 1/2 · Pr[A(G_1(1^n), E_{G_1(1^n)}(0^{ℓ(n)}, z, x, y), h_n(0^{ℓ(n)}, z, x, y)) = 0]
  + 1/2 · Pr[A(G_1(1^n), E_{G_1(1^n)}(1^{ℓ(n)}, z, y, x), h_n(1^{ℓ(n)}, z, y, x)) = 1]
  = 1/2 · Pr[D(z, G_1(1^n), E_{G_1(1^n)}(y)) = 0] + 1/2 · Pr[D(z, G_1(1^n), E_{G_1(1^n)}(x)) = 1]
  = 1/2 · (1 + Δ_n(z, x, y))
Similarly, for every possible (z, x, y) such that ξ = −1 (it holds that Δ_n(z, x, y) < 0 and) the following holds:

Pr[A(G_1(1^n), E_{G_1(1^n)}(Q_n), h(1^n, Q_n)) = f(1^n, Q_n) | (Z_n, X_n, Y_n) = (z, x, y)]
  = 1/2 · (1 − Δ_n(z, x, y))
Thus, in both cases where ξ ≠ 0, algorithm A succeeds with probability

(1 + ξ · Δ_n(z, x, y))/2 = (1 + |Δ_n(z, x, y)|)/2

and in case ξ = 0 it succeeds with probability 1/2, which is (artificially) lower-bounded by (1 + |Δ_n(z, x, y)| − (2/3 p(n)))/2 (because |Δ_n(z, x, y)| ≤ 2/3 p(n) for ξ = 0).9
Thus, ignoring the negligible probability that the approximation deviates from the correct value by more than 1/(3 p(n)), the overall success probability of algorithm A is at least (1 + E[|Δ_n(Z_n, X_n, Y_n)|] − (2/3 p(n)))/2, which exceeds 1/2 + 1/(7 · p(n)) because the distinguishing hypothesis gives E[|Δ_n(Z_n, X_n, Y_n)|] > 1/p(n). The claim follows.
Discussion. The proof of the first (i.e., important) direction holds also in the single-
message setting. In general, for any function t, in order to prove that semantic security
holds with respect to t-long sequences of ciphertexts, we just use the hypothesis that t-
long message-sequences have indistinguishable encryptions. In contrast, the proof of the
second (i.e., opposite) direction makes an essential use of the multiple-message setting.
In particular, in order to prove that t-long message-sequences have indistinguishable
encryptions, we use the hypothesis that semantic security holds with respect to (1 +
m + 2t)-long sequences of ciphertexts, where m depends on the length of the auxiliary
input in the claim of ciphertext-indistinguishability. Thus, even if we only want to
establish ciphertext-indistinguishability in the single-message setting, we do so by
using semantic security in the multiple-message setting. Furthermore, we use the fact
that given a sequence of ciphertexts, we can extract a certain subsequence of ciphertexts.
9 This analysis looks somewhat odd but is nevertheless valid. Our aim is to get a "uniform" expression for the success probability of A in all cases (i.e., for all values of ξ). In case |ξ| = 1, we have the lower bound (1 + |Δ_n(z, x, y)|)/2, which is certainly lower-bounded by (1 + |Δ_n(z, x, y)| − (2/3 p(n)))/2, whereas in case ξ = 0 we artificially lower-bound 1/2 by the same expression. Once we have such a "uniform" expression, we may take expectation over it (without breaking it to cases).
nition 5.2.14, induces an ensemble of triplets, {T_n = X_n Y_n Z_n}_{n∈N}, for the case t ≡ 1. Specifically, we shall use X_n = X̄_n^{(i)}, Y_n = Ȳ_n^{(i)}, and Z_n = (X̄_n, Ȳ_n, Z̄_n, i), where i
5.3. Constructions of Secure Encryption Schemes
objects), rather than handed down from heaven (where it might have been selected
non-uniformly or using non-recursive procedures).
In addition, we review some popular suggestions for private-key and public-key en-
cryption schemes.
5.3.1.* Stream-Ciphers
It is common practice to use “pseudorandom generators” as a basis for private-key
stream-ciphers (see definition in Section 5.3.1.1). Specifically, the pseudorandom gen-
erator is used to produce a stream of bits that are XORed with the corresponding
plaintext bits to yield corresponding ciphertext bits. That is, the generated pseudoran-
dom sequence (which is determined by the a priori shared key) is used as a “one-time
pad” instead of a truly random sequence, with the advantage that the generated se-
quence may be much longer than the key (whereas this is not possible for a truly
random sequence). This common practice is indeed sound, provided one actually uses
pseudorandom generators (as defined in Section 3.3 of Volume 1), rather than programs
that are called “pseudorandom generators” but actually produce sequences that are easy
to predict (such as the linear congruential generator or some modifications of it that
output a constant fraction of the bits of each resulting number).
As we shall see, by using any pseudorandom generator one may obtain a secure
private-key stream-cipher that allows for the encryption of a stream of plaintext bits.
We note that such a stream-cipher does not conform to our formulation of an encryption
scheme (i.e., as in Definition 5.1.1), because in order to encrypt several messages one is
required to maintain a counter (to prevent reusing parts of the pseudorandom “one-time
pad”). In other words, we obtain a secure encryption scheme with a variable state that
is modified after the encryption of each message. We stress that constructions of secure
10 We note that this does not hold with respect to private-key schemes in the single-message setting (or for the
augmented model of state-based ciphers discussed in Section 5.3.1). In such a case, the private-key can be
augmented to include a seed for a pseudorandom generator, the output of which can be used to eliminate
randomness from the encryption algorithm. (Question: Why does the argument fail in the public-key setting
and in the multi-message private-key setting?)
11 The (private-key) stream-ciphers discussed in Section 5.3.1 are an exception, but (as further explained in Sec-
tion 5.3.1) these schemes do not adhere to our (basic) formulation of encryption schemes (as in Definition 5.1.1).
and stateless encryption schemes (i.e., conforming with Definition 5.1.1) are known
and are presented in Sections 5.3.3 and 5.3.4. The traditional interest in stream-ciphers
is due to efficiency considerations. We discuss this issue at the end of Section 5.3.3.
But before doing so, let us formalize the discussion.
5.3.1.1. Definitions
We start by extending the simple mechanism of encryption schemes (as presented
in Definition 5.1.1). The key-generation algorithm remains unchanged, but both the
encryption and decryption algorithm take an additional input and emit an additional
output, corresponding to their state before and after the operation. The length of the state
is not allowed to grow by too much during each application of the encryption algorithm
(see Item 3 in Definition 5.3.1), or else the efficiency of the entire “repeated encryption”
process cannot be guaranteed. For the sake of simplicity, we incorporate the key in the
state of the corresponding algorithm. Thus, the initial state of each of the algorithms is
set to equal its corresponding key. Furthermore, one may think of the intermediate states
as updated values of the corresponding key. For clarity, the reader may consider the
special case in which the state contains the initial key, the number of times the scheme
was invoked (or the total number of bits in such invocations), and auxiliary information
that allows a speedup of the computation of the next ciphertext (or plaintext).
For simplicity, we assume that the decryption algorithm (i.e., D) is deterministic
(otherwise formulating the reconstruction condition would be more complex). Intu-
itively, the main part of the reconstruction condition (i.e., Item 2 in Definition 5.3.1)
is that the (proper) iterative encryption–decryption process recovers the original plain-
texts. The additional requirement in Item 2 is that the state of the decryption algorithm
is updated correctly so long as it is fed with strings of length equal to the length of
the valid ciphertexts. The reason for this additional requirement is discussed following
Definition 5.3.1. We comment that in traditional stream-ciphers, the plaintexts (and ciphertexts) are individual bits or blocks of a fixed number of bits (i.e., |α^{(i)}| = |β^{(i)}| = ℓ for all i's).
12 Alternatively, we may decompose the decryption (resp., encryption) algorithm into two algorithms, where the
first takes care of the actual decryption (resp., encryption) and the second takes care of updating the state. For
details see Exercise 24.
3. There exists a polynomial p such that for every pair (e(0) , d (0) ) in the range of
G(1n ), and every sequence of α (i) ’s and e(i) ’s as in Item 2, it holds that |e(i) | ≤
|e(i−1) | + |α (i) | · p(n). Similarly for the d (i) ’s.
A theoretical reason: It implies that without loss of generality (albeit with possible
loss in efficiency), the decryption algorithm may be stateless. Furthermore, without
loss of generality (again, with possible loss in efficiency), the state of the encryption
algorithm may consist of the initial encryption-key and the lengths of the plaintexts
encrypted so far.
A practical reason: It allows for recovery from the loss of some of the ciphertexts. That
is, assuming that all ciphertexts have the same (known) length (which is typically
the case in the relevant applications), if the receiver knows (or is given) the total
number of ciphertexts sent so far, then it can recover the current plaintext from the
current ciphertext, even if some of the previous ciphertexts were lost. See the special
provision in Construction 5.3.3.
We comment that in Construction 5.3.3, it holds that |e^{(i)}| ≤ |e^{(0)}| + log_2 Σ_{j=1}^{i} |α^{(j)}|, which is much stronger than the requirement in Item 3 (of Definition 5.3.1).
We stress that Definition 5.3.1 refers to the encryption of multiple messages (and
meaningfully extends Definition 5.1.1 only when considering the encryption of multiple
messages). However, Definition 5.3.1 by itself does not explain why one should encrypt
the ith message using the updated encryption-key e(i−1) , rather than reusing the initial
encryption-key e(0) in all encryptions (where decryption is done by reusing the initial
decryption-key d (0) ). Indeed, the reason for updating these keys is provided by the
following security definition that refers to the encryption of multiple messages, and
holds only in case the encryption-keys in use are properly updated (in the multiple-
message encryption process). Here we present only the semantic security definition for
private-key schemes.
Note that Definition 5.3.2 (only) differs from Definition 5.2.8 in the preamble defin-
ing the random variable E e (x), which mandates that the encryption-key e(i−1) is used
in the ith encryption. Furthermore, Definition 5.3.2 guarantees nothing regarding
an encryption process in which the plaintext sequence x (1) , ..., x (t) is encrypted by
E(e, x (1) ), E(e, x (2) ), ..., E(e, x (t) ) (i.e., the initial encryption-key e itself is used in all
encryptions, as in Definition 5.2.8).
Key-generation and initial state: On input 1n , uniformly select s ∈ {0, 1}n , and output
the key-pair (s, s). The initial state of each algorithm is set to (s, 0, s).
(We maintain the initial key s and a step-counter in order to allow recovery from
loss of ciphertexts.)
Encrypting the next plaintext bit x with state (s, t, s'): Let s'' σ = g(s'), where |s''| = |s'| and σ ∈ {0, 1}. Output the ciphertext bit x ⊕ σ, and set the new state to (s, t + 1, s'').
Decrypting the ciphertext bit y with state (s, t, s'): Let s'' σ = g(s'), where |s''| = |s'| and σ ∈ {0, 1}. Output the plaintext bit y ⊕ σ, and set the new state to (s, t + 1, s'').
Special recovery procedure: When notified that some ciphertext bits may have been lost and that the current ciphertext bit has index t', the decryption procedure first recovers the correct current state, denoted s_{t'}, to be used in decryption instead of s'. This can be done by computing s_i σ_i = g(s_{i−1}), for i = 1, ..., t', where s_0 ≝ s.13
Note that both the encryption and decryption algorithms are deterministic, and that the state after encryption of t bits has length 2n + log_2 t < 3n (for t < 2^n).
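For concreteness, here is a minimal Python sketch of the above stateful scheme, with SHA-256 serving only as a stand-in for the next-step function g (a real instantiation requires a next-step function of an on-line pseudorandom generator, as recalled next); the constants and helper names are ours.

```python
import hashlib, os

N_BYTES = 16  # toy state length ("n" bits = 128)

def g(state: bytes):
    """Toy next-step function mapping an n-bit state to an n-bit state plus one output bit.
    SHA-256 is used only as a stand-in; the construction assumes a next-step function
    of an on-line pseudorandom generator."""
    digest = hashlib.sha256(state).digest()
    return digest[:N_BYTES], digest[N_BYTES] & 1

def keygen():
    s = os.urandom(N_BYTES)
    return (s, 0, s)          # state = (initial key s, step counter t, current state s')

def encrypt_bit(state, x):
    s, t, cur = state
    new_cur, sigma = g(cur)
    return x ^ sigma, (s, t + 1, new_cur)

def decrypt_bit(state, y):
    # Decryption regenerates the same pad bit, so it is the same operation.
    return encrypt_bit(state, y)

def recover_state(s, t_prime):
    """Special recovery: recompute the state after t' steps from the initial key s."""
    cur = s
    for _ in range(t_prime):
        cur, _ = g(cur)
    return (s, t_prime, cur)
```

Both parties start from the same state (s, 0, s) and advance it in lockstep, one plaintext bit per invocation; recover_state implements the special recovery procedure.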
Recall that g (as in Construction 5.3.3) is called a next-step function of an on-line pseudorandom generator if for every polynomial p the ensemble {G_n^p}_{n∈N} is pseudorandom (with respect to polynomial-size circuits), where G_n^p is defined by the following random process:
Uniformly select s_0 ∈ {0, 1}^n;
For i = 1 to p(n), let s_i σ_i ← g(s_{i−1}), where σ_i ∈ {0, 1} (and s_i ∈ {0, 1}^n);
Output σ_1 σ_2 ··· σ_{p(n)}.
Also recall that if g is itself a pseudorandom generator, then it constitutes a next-step
function of an on-line pseudorandom generator (see Exercise 21 of Chapter 3). We
have:
Proof Idea: Consider an ideal version of Construction 5.3.3 in which a truly random
sequence is used instead of the output produced by the on-line pseudorandom gener-
ator defined by g. The ideal version coincides with the traditional one-time pad, and
thus is perfectly secure. The security of the actual Construction 5.3.3 follows by the
pseudorandomness of the on-line generator.
13 More generally, if the decryption procedure holds the state at time t < t', then it needs only compute s_{t+1}, ..., s_{t'}.
14 In using the term block-cipher, we abuse standard terminology by which a block-cipher must, in addition to op-
erating on plaintext of specific length, produce ciphertexts of a length that equals the length of the corresponding
plaintexts. We comment that the latter cannot be semantically secure; see Exercise 25.
5.3.2.1. Definitions
We start by considering the syntax (cf. Definition 5.1.1).
Pr[D_d(E_e(α)) = α] = 1

Typically, we use either ℓ(n) = Θ(n) or ℓ(n) = 1. Analogously to Definition 5.1.1, this
definition does not distinguish private-key encryption schemes from public-key ones.
The difference between the two types is captured in the security definitions, which are
essentially as before, with the modification that we only consider plaintexts of length ℓ(n). For example, the analogue of Definition 5.2.8 (for private-key schemes) reads:
where Ē_e(x^{(1)}, ..., x^{(t)}) = (E_e(x^{(1)}), ..., E_e(x^{(t)})), as in Definition 5.2.8.
Note that, in case ℓ is polynomial-time computable, we can omit the auxiliary input 1^{|X̄_n|} = 1^{t(n)·ℓ(n)}, because it can be reconstructed from the security parameter n and the value t(n).
is easily resolved by padding the last block (while indicating the end of the actual
plaintext).15
To decrypt the ciphertext (m, β1 , ..., βt ) (with decryption-key d), we let αi = Dd (βi )
for i = 1, ..., t, and let the plaintext be the m-bit long prefix of the concatenated string
α1 · · · αt .
This construction yields ciphertexts that reveal the exact length of the plaintext. Recall
that this is not prohibited by the definitions of security, and that we cannot hope to totally
hide the plaintext length. However, we can easily construct encryption schemes that hide
some information about the length of the plaintext; see examples in Exercise 5. Also,
note that the above construction applies even to the special case where ℓ is identically 1.
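A minimal Python sketch of this block-to-general transformation, assuming an arbitrary block-cipher pair is available via the hypothetical callables enc_block and dec_block (with keys e and d), may look as follows; a byte-oriented block length replaces the bit-oriented ℓ(n) of the text.

```python
def encrypt(enc_block, e, plaintext: bytes, block_len: int):
    """Split the plaintext into block_len-byte blocks, pad the last block with zero
    bytes, and record the true length m in the clear (cf. the padding footnote)."""
    m = len(plaintext)
    blocks = [plaintext[i:i + block_len] for i in range(0, m, block_len)] or [b""]
    blocks[-1] = blocks[-1].ljust(block_len, b"\x00")
    return m, [enc_block(e, b) for b in blocks]

def decrypt(dec_block, d, ciphertext):
    m, encrypted_blocks = ciphertext
    plaintext = b"".join(dec_block(d, b) for b in encrypted_blocks)
    return plaintext[:m]      # the m-byte prefix of the concatenated decrypted blocks
```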
Proof Sketch: The proof is by a reducibility argument. Assuming toward the contradiction that the encryption scheme (G', E', D') is not secure, we conclude that neither is (G, E, D), contradicting our hypothesis. Specifically, we rely on the fact that in both schemes, security means security in the multiple-message setting. Note that in case the security of (G', E', D') is violated via t(n) messages of length L(n), the security of (G, E, D) is violated via t(n) · ⌈L(n)/ℓ(n)⌉ messages of length ℓ(n). Also, the argument may utilize any of the two notions of security (i.e., semantic security or ciphertext-indistinguishability).
15 We choose to use a very simple indication of the end of the actual plaintext (i.e., to include its length in the ciphertext). In fact, it suffices to include the length of the plaintext modulo ℓ(n). Another natural alternative is to use a padding of the form 10^{(ℓ(n)−|α|−1) mod ℓ(n)}, while observing that no padding is ever required in case ℓ(n) = 1.
16 Recall that throughout this section security means security in the multiple-message setting.
Clearly, for every k (in the range of I (1n )) and x ∈ {0, 1}n ,
|Pr[M^φ(z) = 1] − Pr[M^{f_{I(1^n)}}(z) = 1]| < 1/q(n)
The proof of Proposition 5.3.10 follows. Combining Propositions 5.3.8 and 5.3.10 (with
a non-uniform version of Corollary 3.6.7), we obtain:
Theorem 5.3.11: If there exist (non-uniformly strong) one-way functions, then there
exist secure private-key encryption schemes.
Proof of Proposition 5.3.10: The proof consists of two steps (suggested as a general
methodology in Section 3.6):
1. Prove that an idealized version of the scheme, in which one uses a uniformly selected
function φ : {0, 1}n → {0, 1}n , rather than the pseudorandom function f s , is secure
(in the sense of ciphertext-indistinguishability).
2. Conclude that the real scheme (as presented in Construction 5.3.9) is secure (because
otherwise one could distinguish a pseudorandom function from a truly random one).
Specifically, in the ideal version, the messages x^{(1)}, ..., x^{(t)} are encrypted by (r^{(1)}, φ(r^{(1)}) ⊕ x^{(1)}), ..., (r^{(t)}, φ(r^{(t)}) ⊕ x^{(t)}), where the r^{(j)}'s are independently and uniformly selected, and φ is a random function. Thus, with probability greater than 1 − t^2 · 2^{−n}, the r^{(j)}'s are all distinct, and so the values φ(r^{(j)}) ⊕ x^{(j)} are independently and uniformly distributed, regardless of the x^{(j)}'s. It follows that the ideal version is ciphertext-indistinguishable; that is, for any x^{(1)}, ..., x^{(t)} and y^{(1)}, ..., y^{(t)}, the statistical difference between the distributions ((U_n^{(1)}, φ(U_n^{(1)}) ⊕ x^{(1)}), ..., (U_n^{(t)}, φ(U_n^{(t)}) ⊕ x^{(t)})) and ((U_n^{(1)}, φ(U_n^{(1)}) ⊕ y^{(1)}), ..., (U_n^{(t)}, φ(U_n^{(t)}) ⊕ y^{(t)})) is at most t^2 · 2^{−n}.
Now, if the actual scheme is not ciphertext-indistinguishable, then for some sequence
of r ( j) ’s and v ( j) ’s, a polynomial-size circuit can distinguish the φ(r ( j) ) ⊕ v ( j) ’s from
the f s (r ( j) ) ⊕ v ( j) ’s, where φ is random and f s is pseudorandom.17 But this contra-
dicts the hypothesis that polynomial-size circuits cannot distinguish between the two
cases.
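A toy Python rendering of the scheme analyzed above, with HMAC-SHA256 standing in for the pseudorandom function f_s (this stand-in is our choice, made only for illustration), is:

```python
import hmac, hashlib, os

N_BYTES = 32          # block length in bytes (the n-bit blocks of the text)

def f(key: bytes, r: bytes) -> bytes:
    # HMAC-SHA256 as a stand-in for the pseudorandom function f_s (illustration only).
    return hmac.new(key, r, hashlib.sha256).digest()

def keygen() -> bytes:
    return os.urandom(N_BYTES)

def encrypt(key: bytes, x: bytes):
    assert len(x) == N_BYTES
    r = os.urandom(N_BYTES)                            # fresh randomness per message
    return r, bytes(a ^ b for a, b in zip(f(key, r), x))

def decrypt(key: bytes, ciphertext) -> bytes:
    r, masked = ciphertext
    return bytes(a ^ b for a, b in zip(f(key, r), masked))
```

Each encryption draws fresh randomness r, so two encryptions of the same plaintext look unrelated, in line with the discussion below.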
Discussion. Note that we could have gotten rid of the randomization if we had al-
lowed the encryption algorithm to be history dependent (as discussed in Section 5.3.1).
Specifically, in such a case, we could have used a counter in the role of r . Further-
more, if the encryption scheme is used for fifo communication between the parties and
both can maintain the counter-value, then there is no need for the sender to send the
counter-value. However, in the latter case, Construction 5.3.3 is preferable (because the
adequate pseudorandom generator may be more efficient than a pseudorandom function
as used in Construction 5.3.9). We note that in case the encryption scheme is not used
for fifo communication and one may need to decrypt messages with arbitrary varying
counter-values, it is typically better to use Construction 5.3.9. Furthermore, in many
cases it may be preferable to select a value (i.e., r) at random, rather than rely on a
counter that must be stored in a reliable manner between applications (of the encryption
algorithm).
The ciphertexts produced by Construction 5.3.9 are longer than the corresponding
plaintexts. This is unavoidable in the case of secure (history-independent) encryption
schemes (see Exercise 25). In particular, the common practice of using pseudorandom
17 The v ( j) ’s either equal the x ( j) ’s or the y ( j) ’s, whereas the r ( j) ’s are random (or are fixed by an averaging
argument). The conclusion follows by considering the actual encryptions of the x ( j) ’s and the y ( j) ’s versus
their ideal encryptions. Since the actual encryptions are distinguishable whereas the ideals are not, the actual
encryption of either the x ( j) ’s or the y ( j) ’s must be distinguishable from the corresponding ideal version.
permutations as block-ciphers18 is not secure (e.g., one can distinguish two encryptions
of the same message from encryptions of two different messages).
Recall that by combining Constructions 5.3.7 and 5.3.9 (and referring to Proposi-
tions 5.3.8 and 5.3.10), we obtain a (full-fledged) private-key encryption scheme. A
more efficient scheme is obtained by a direct combination of the ideas underlying both
constructions:
Decrypting ciphertext (r, m, y_1, ..., y_t) (using the key k): For i = 1, ..., t, compute α_i = V(k, (r + i mod 2^n)) ⊕ y_i, and output the m-bit long prefix of α_1 ··· α_t. That is, D_k(r, m, y_1, ..., y_t) is the m-bit long prefix of
18 That is, letting E k (x) = pk (x), where pk is the permutation associated with the string k.
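For illustration, the following sketch is consistent with the decryption rule above (the encryption side is inferred from it); HMAC-SHA256 stands in for the pseudorandom function written V(k, ·) there, and the block and counter sizes are ours.

```python
import hmac, hashlib, os

BLOCK = 16                        # block size in bytes; counters live in [0, 2**128)
MOD = 2 ** (8 * BLOCK)

def f(key: bytes, counter: int) -> bytes:
    # Stand-in pseudorandom function applied to a counter value (illustration only).
    return hmac.new(key, counter.to_bytes(BLOCK, "big"), hashlib.sha256).digest()[:BLOCK]

def encrypt(key: bytes, plaintext: bytes):
    m = len(plaintext)
    blocks = [plaintext[i:i + BLOCK].ljust(BLOCK, b"\x00")
              for i in range(0, m, BLOCK)] or [b"\x00" * BLOCK]
    r = int.from_bytes(os.urandom(BLOCK), "big")       # random starting counter
    ys = [bytes(a ^ b for a, b in zip(f(key, (r + i + 1) % MOD), blk))
          for i, blk in enumerate(blocks)]
    return r, m, ys

def decrypt(key: bytes, ciphertext):
    r, m, ys = ciphertext
    alphas = [bytes(a ^ b for a, b in zip(f(key, (r + i + 1) % MOD), y))
              for i, y in enumerate(ys)]
    return b"".join(alphas)[:m]                        # the m-byte prefix
```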
Clearly, for every possible (α, τ) output of G and for every σ ∈ {0, 1}, it holds that

D_τ(E_α(σ)) = D_τ(F(α, S(α)), σ ⊕ b(S(α)))
            = (σ ⊕ b(S(α))) ⊕ b(B(τ, F(α, S(α))))
            = σ ⊕ b(S(α)) ⊕ b(p_α^{−1}(p_α(S(α))))
            = σ ⊕ b(S(α)) ⊕ b(S(α)) = σ
The security of this public-key encryption scheme follows from the (non-uniform) one-
way feature of the collection { pα } (or rather from the hypothesis that b is a corresponding
hard-core predicate).
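To make the single-bit scheme concrete, the following toy Python sketch instantiates the trapdoor permutation p_α with RSA over an absurdly small modulus and the hard-core predicate b with the least-significant bit; all parameters are ours and are insecure, serving only to exhibit the structure E_α(σ) = (p_α(s), σ ⊕ b(s)).

```python
import random

# Toy RSA parameters (far too small for real use; for illustration only).
P, Q = 10007, 10009
N = P * Q
E = 65537
D = pow(E, -1, (P - 1) * (Q - 1))   # the trapdoor (Python 3.8+ modular inverse)

def b(s: int) -> int:
    return s & 1                    # hard-core predicate: the least-significant bit of s

def encrypt_bit(sigma: int):
    """E_alpha(sigma) = (p_alpha(s), sigma XOR b(s)) with p_alpha(s) = s**E mod N."""
    s = random.randrange(1, N)
    return pow(s, E, N), sigma ^ b(s)

def decrypt_bit(ciphertext):
    y, c = ciphertext
    return c ^ b(pow(y, D, N))      # recover s = p_alpha^{-1}(y) using the trapdoor
```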
Proof: Recall that by the equivalence theorems (i.e., Theorems 5.2.5 and 5.2.11), it
suffices to show single-message ciphertext-indistinguishability. Furthermore, by the
fact that here there are only two plaintexts (i.e., 0 and 1), it suffices to show that one
cannot distinguish the encryptions of these two plaintexts. That is, all we need to prove
is that, given the encryption-key α, it is infeasible to distinguish E α (0) = ( pα (r ), b(r ))
from E α (1) = ( pα (r ), 1 ⊕ b(r )), where r ← S(α). But this is easily implied by the
hypothesis that b is a hard-core of the collection { pα }. Details follow.
Recall that by saying that b is a hard-core of {p_α}, we mean that for every polynomial-size circuit family {C_n}, every polynomial p, and all sufficiently large n's

Pr[C_n(I_1(1^n), p_{I_1(1^n)}(S(I_1(1^n)))) = b(S(I_1(1^n)))] < 1/2 + 1/p(n)     (5.9)
However, analogously to the second proof of Theorem 3.4.1, it can be shown that this implies that for every polynomial-size circuit family {C'_n}, every polynomial p', and all sufficiently large n's

|Pr[C'_n(α, p_α(r), b(r)) = 1] − Pr[C'_n(α, p_α(r), 1 ⊕ b(r)) = 1]| < 1/p'(n)
where α ← I1 (1n ) and r ← S(α). Thus, (α, E α (0)) is computationally indistinguishable
from (α, E α (1)), and the proposition follows.
Using Propositions 5.3.8 and 5.3.14, and recalling that Theorem 2.5.2 applies also to
collections of one-way permutations and to the non-uniform setting, we obtain:
The bandwidth of this scheme is much better than in Construction 5.3.13: A plaintext of
length n is encrypted via a ciphertext of length 2n + n = 3n. Furthermore, Randomized
RSA is almost as efficient as “plain RSA” (or the RSA function itself).
19 The conjectured security of the common practice relies on a seemingly stronger assumption; that is, the assumption is that for every x ∈ {0, ..., 2^n − 1}, given (N, e) as generated in Construction 5.3.16, it is infeasible to distinguish r^e mod N from (x + s·2^n)^e mod N, where r (resp., s) is uniformly distributed in {0, ..., N − 1} (resp., in {0, ..., ⌊N/2^n⌋ − 1}).
where the last equality is due to r^{ed} ≡ r (mod N). The security of Randomized RSA (as a public-key encryption scheme) follows from the large hard-core conjecture for RSA, analogously to the proof of Proposition 5.3.14.
Proposition 5.3.17: Suppose that the large hard-core conjecture for RSA does hold. Then Construction 5.3.16 constitutes a secure public-key block-cipher (with block-length ℓ(n) = n).
Proof Sketch: Recall that by the equivalence theorems (i.e., Theorems 5.2.5 and 5.2.11), it suffices to show single-message ciphertext-indistinguishability. Considering any two strings x and y, we need to show that ((N, e), r^e mod N, x ⊕ lsb(r)) and ((N, e), r^e mod N, y ⊕ lsb(r)) are indistinguishable, where N, e and r are selected at random as in the construction. It suffices to show that for every fixed x, the distributions ((N, e), r^e mod N, x ⊕ lsb(r)) and ((N, e), r^e mod N, x ⊕ s) are indistinguishable, where s ∈ {0, 1}^n is uniformly distributed, independently of anything else. The latter claim follows from the hypothesis that the n least-significant bits are a hard-core function for RSA with moduli of length 2n.
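A toy Python sketch of Randomized RSA follows, with deliberately tiny parameters and an 8-bit block in place of the n-bit block of Construction 5.3.16; it is meant only to fix the data flow (r^e mod N alongside x ⊕ lsb(r)).

```python
import random

P, Q = 10007, 10009                 # toy primes (insecure; illustration only)
N = P * Q
E = 65537
D = pow(E, -1, (P - 1) * (Q - 1))
BITS = 8                            # plaintext block length, playing the role of n

def lsb_bits(r: int) -> int:
    return r % (1 << BITS)          # the BITS least-significant bits of r

def encrypt(x: int):
    """Ciphertext: (r**E mod N, x XOR lsb_n(r)) for uniformly chosen r."""
    r = random.randrange(0, N)
    return pow(r, E, N), x ^ lsb_bits(r)

def decrypt(ciphertext):
    y, masked = ciphertext
    return masked ^ lsb_bits(pow(y, D, N))   # recover r via the decryption exponent
```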
Discussion. We wish to stress that encrypting messages by merely applying the RSA
function to them (without randomization) yields an insecure encryption scheme. Un-
fortunately, this procedure (previously referred to as “plain RSA”) is quite common in
practice. The fact that plain RSA is definitely insecure is a special case of the fact that
any public-key encryption scheme that employs a deterministic encryption algorithm
is insecure. We warn that the fact that in such deterministic encryption schemes one
can distinguish encryptions of two specific messages (e.g., the all-zero message and
the all-one message) is not “merely of theoretical concern”; it may seriously endanger
some applications! In contrast, Randomized RSA (as defined in Construction 5.3.16)
may be secure, provided a quite reasonable conjecture (i.e., the large hard-core con-
jecture for RSA) holds. We comment that the more common practice of applying the
RSA function to a randomly padded version of the plaintext is secure if and only if a
seemingly stronger (and yet reasonable) assumption holds; see footnote 19. Thus, the
latter practice is far superior to using the RSA function directly (i.e., without random-
ization): The randomized version is likely to be secure, whereas the non-randomized
(or plain) version is definitely insecure.
We note that Construction 5.3.16 (or, alternatively, Construction 5.3.13) generalizes
to any collection of trapdoor permutations having a corresponding large hard-core
function. Suppose that { pα } is such a collection, and h (or rather {h α }) is a corresponding
hard-core function (resp., a corresponding collection of hard-core functions), such
The security of this public-key encryption scheme follows from the (non-uniform)
one-way feature of the collection { pα }, but here we restrict the sampling algorithm
S to produce almost uniform distribution over the domain (so that this distribution is
preserved under successive applications of pα ).
Details: We need to prove that for every polynomial ℓ and every sequence of pairs (σ_n, σ'_n) ∈ {0, 1}^{ℓ(n)} × {0, 1}^{ℓ(n)}, the distributions D_n ≝ (α, p_α^{ℓ(n)}(S(α)), σ_n ⊕ G_α^{(ℓ(n))}(S(α))) and D'_n ≝ (α, p_α^{ℓ(n)}(S(α)), σ'_n ⊕ G_α^{(ℓ(n))}(S(α))) are indistinguishable, where α ← I_1(1^n). We prove this in two steps:

1. We first prove that for every sequence of σ_n's, the distributions D_n ≝ (α, p_α^{ℓ(n)}(S(α)), σ_n ⊕ G_α^{(ℓ(n))}(S(α))) and R_n ≝ (α, p_α^{ℓ(n)}(S(α)), σ_n ⊕ U_{ℓ(n)}) are indistinguishable, where U_{ℓ(n)} denotes a random variable uniformly distributed over {0, 1}^{ℓ(n)} and α ← I_1(1^n).
Suppose first that S(α) is uniform over the domain of p_α. Then the indistinguishability of {D_n}_{n∈N} and {R_n}_{n∈N} follows directly from Proposition 3.4.6 (as adapted to circuits): The adapted form refers to the indistinguishability of (α, p_α^{ℓ(n)}(S(α)), G_α^{(ℓ(n))}(S(α))) and (α, p_α^{ℓ(n)}(S(α)), U_{ℓ(n)}), and yields the desired claim by noting that σ_n can be incorporated in the prospective distinguisher.
The extension (to the case that S(α) has negligible statistical difference to the uniform distribution over the domain of p_α) is straightforward.

2. Applying the previous item (once with σ_n and once with σ'_n), we conclude that {D_n}_{n∈N} and {R_n}_{n∈N} are indistinguishable, and similarly that {D'_n}_{n∈N} and {R'_n}_{n∈N}, where R'_n ≝ (α, p_α^{ℓ(n)}(S(α)), σ'_n ⊕ U_{ℓ(n)}), are indistinguishable. Furthermore, {R_n}_{n∈N} and {R'_n}_{n∈N} are identically distributed. Thus, {D_n}_{n∈N} and {D'_n}_{n∈N} are indistinguishable.
An instantiation. Assuming that factoring Blum Integers (i.e., products of two primes
each congruent to 3 (mod 4)) is hard, one may use the modular squaring function
(which induces a permutation over the quadratic residues modulo the product of these
integers) in the role of the trapdoor permutation used in Construction 5.3.18. This yields
a secure public-key encryption scheme with efficiency comparable to that of plain RSA
(see further discussion later in this section).
Again, one can easily verify that this construction constitutes an encryption scheme: The main fact to verify is that the value of s_1 as reconstructed in the decryption stage equals the value used in the encryption stage. This follows by combining the Chinese Remainder Theorem with the fact that for every quadratic residue s mod N, it holds that s ≡ (s^{2^{ℓ(n)}} mod N)^{d_P} (mod P) and s ≡ (s^{2^{ℓ(n)}} mod N)^{d_Q} (mod Q).
Details: Recall that for a prime P ≡ 3 (mod 4), and every quadratic residue r, we have r^{(P+1)/2} ≡ r (mod P). Thus, for every quadratic residue s (modulo N) and every ℓ, we have

(s^{2^ℓ} mod N)^{d_P} ≡ s^{2^ℓ · ((P+1)/4)^ℓ} (mod P)
                      ≡ s^{((P+1)/2)^ℓ} (mod P)
                      ≡ s (mod P)

Similarly, (s^{2^ℓ} mod N)^{d_Q} ≡ s (mod Q). Finally, observing that c_P and c_Q are as
Corollary 5.3.21: Suppose that factoring is infeasible in the sense that for every polynomial-size circuit {C_n}, every positive polynomial p, and all sufficiently large n's

Pr[C_n(P_n · Q_n) = P_n] < 1/p(n)

where P_n and Q_n are uniformly distributed n-bit long primes. Then Construction 5.3.20 constitutes a secure public-key encryption scheme.
Thus, the conjectured infeasibility of factoring (which is a necessary condition for secu-
rity of RSA) yields a secure public-key encryption scheme with efficiency comparable
to that of (plain or Randomized) RSA. In contrast, recall that plain RSA itself is not
secure (as it employs a deterministic encryption algorithm), whereas Randomized RSA
(i.e., Construction 5.3.16) is not known to be secure under a standard assumption such
as intractability of factoring (or even of inverting the RSA function).21
5.4.* Beyond Eavesdropping Security

Our treatment so far has referred only to a "passive" attack in which the adversary
merely eavesdrops on the line over which ciphertexts are being sent. Stronger types
of attacks, culminating in the so-called Chosen Ciphertext Attack, may be possible in
various applications. Specifically, in some settings it is feasible for the adversary to
make the sender encrypt a message of the adversary’s choice, and in some settings the
adversary may even make the receiver decrypt a ciphertext of the adversary’s choice.
This gives rise to chosen plaintext attacks and to chosen ciphertext attacks, respectively,
which are not covered by the security definitions considered in previous sections. Thus,
our main goal in this section is to provide a treatment of such types of “active” attacks.
In addition, we also discuss the related notion of non-malleable encryption schemes
(see Section 5.4.5).
5.4.1. Overview
We start with an overview of the type of attacks and results considered in the current
(rather long) section.
21 Recall that Randomized RSA is secure provided that the n/2 least-significant bits constitute a hard-core function
for n-bit RSA moduli. This is a reasonable conjecture, but it seems stronger than the conjecture that RSA is hard
to invert: Assuming that RSA is hard to invert, we only know that the O(log n) least-significant bits constitute
a hard-core function for n-bit moduli.
Chosen Plaintext Attacks. Here the attacker may obtain encryptions of plaintexts of
its choice (under the key being attacked). Indeed, such an attack does not add power in
the case of public-key schemes.
Chosen Ciphertext Attacks. Here the attacker may obtain decryptions of ciphertexts
of its choice (under the key being attacked). That is, the attacker is given oracle access
to the decryption function corresponding to the decryption-key in use. We distinguish
two types of such attacks.
1. In an a priori chosen ciphertext attack, the attacker is given access to the decryption
oracle only prior to being presented with the ciphertext that it should attack (i.e., the
ciphertext for which it has to learn partial information). That is, the attack consists
of two stages: In the first stage, the attacker is given the above oracle access, and in
the second stage, the oracle is removed and the attacker is given a “test ciphertext”
(i.e., a test of successful learning).
2. In an a posteriori chosen ciphertext attack, after being given the test ciphertext, the
decryption oracle is not removed, but rather the adversary’s access to this oracle is
restricted in the natural way (i.e., the adversary is allowed to query the oracle on any
ciphertext except for the test ciphertext).
In both cases, the adversary may make queries that do not correspond to a legitimate
ciphertext, and the answer will be accordingly (i.e., a special “failure” symbol). Fur-
thermore, in both cases the adversary may effect the selection of the test ciphertext (by
specifying a distribution from which the corresponding plaintext is to be drawn).
Formal definitions of all these types of attacks are given in the following subsections
(i.e., in Sections 5.4.2, 5.4.3, and 5.4.4, respectively). In addition, in Section 5.4.5,
we consider the related notion of malleability, that is, attacks aimed at generating
encryptions of plaintexts related to the secret plaintext, rather than gaining information
about the latter.
5.4.1.2. Constructions
As in the basic case (i.e., Section 5.3), actively secure private-key encryption schemes
can be constructed based on the existence of one-way functions, whereas actively
secure public-key encryption schemes are based on the existence of (enhanced) trapdoor
permutations. In both cases, withstanding a posteriori chosen ciphertext attacks is harder
than withstanding a priori chosen ciphertext attacks. We will present the following
results.
For Private-Key Schemes. In Section 5.4.4.3, we show that the private-key encryption
scheme based on pseudorandom functions (i.e., Construction 5.3.9) is secure also under
a priori chosen ciphertext attacks, but is not secure under an a posteriori chosen
ciphertext attack. We also show how to transform any passively secure private-key
encryption scheme into a scheme secure under (a posteriori) chosen ciphertext attacks
by using a message-authentication scheme on top of the basic encryption. Thus, the latter
construction relies on message-authentication schemes as defined in Section 6.1. We
22 Indeed, an alternative presentation may start with the strongest notion of security (i.e., corresponding to a-
posteriori chosen ciphertext attacks), and obtain the weaker notions by imposing various restrictions (on the
attacks).
ciphertext attacks, we will use some results from Chapter 6. In the case of private-key
encryption schemes (treated in Section 5.4.4.3), we will use a message-authentication
scheme, but do so in a self-contained way. In the case of public-key encryption schemes
(treated in Section 5.4.4.4), we will use signature schemes (having an extra property)
in order to construct a certain non-interactive zero-knowledge proof, which we use for
the construction of the encryption scheme. At that point we will refer to a specific result
proved in Chapter 6.
5.4.2.1. Definitions
Recall that we seek a definition that guarantees that partial information about the plain-
text remains secret even if the plaintext does depend on the encryption-key in use. That
is, we seek a strengthening of semantic security (as defined in Definition 5.2.2) in which
one allows the plaintext distribution ensemble (denoted {X n }n∈N in Definition 5.2.2)
to depend on the encryption-key in use (i.e., for encryption-key e, we consider the
distribution X e over {0, 1}poly(|e|) ). Furthermore, we also allow the partial information
functions (denoted f and h in Definition 5.2.2) to depend on the encryption-key in use
(i.e., for encryption-key e, we consider the functions f e and h e ). In the actual definition
23 Indeed, it is natural (and even methodologically imperative) that a high-level application that uses encryption as
a tool be oblivious of the keys used by that tool. However, this refers only to a proper operation of the application,
and deviation may be caused (in some settings) by an improper behavior (i.e., an adversary).
Definition 5.4.1 (semantic security under key-dependent passive attacks): The se-
quence {( f e , h e , X e )}e∈{0,1}∗ is admissible for the current definition if
1. The functions f_e : {0, 1}* → {0, 1}* are polynomially bounded; that is, there exists a polynomial ℓ such that |f_e(x)| ≤ ℓ(|x| + |e|).
2. There exists a non-uniform family of polynomial-size (h-evaluation) circuits {Hn }n∈N
such that for every e in the range of G 1 (1n ) and every x in the support of X e , it holds
that Hn (e, x) = h e (x).
3. There exists a non-uniform family of (probabilistic) polynomial-size (sampling) cir-
cuits {Sn }n∈N such that for every e in the range of G 1 (1n ) and for some m = poly(|e|),
the random variables Sn (e, Um ) and X e are identically distributed.25
24 Recall that without loss of generality, we may assume that the keys generated by G(1n ) have length n. Thus,
there is no point in providing the algorithms with 1n as an auxiliary input (as done in Definition 5.2.2).
25 As usual, S_n(e, r) denotes the output of the circuit S_n on input e and coins r. We stress that for every e, the length of X_e is fixed.
As in the basic case (i.e., Section 5.2), the two definitions are equivalent.
26 Here we use the convention by which A' gets e along with h_e(x) (and 1^{|x|}). This is important because A' must feed a matching pair (e, h_e(x)) to A.
Assuming (to the contrary of the above claim) that Eq. (5.11) does not hold, we obtain a sequence of admissible pairs {(x_e, y_e)}_{e∈{0,1}*} for Definition 5.4.2 such that their encryptions can be distinguished (in contradiction to our hypothesis). Specifically, we set x_e ≝ S_n(e) and y_e ≝ 1^{|x_e|}, and let C_n(e, α) ≝ A(e, α, 1^{|x_e|}, H_n(e, x_e)). Thus, we obtain a (poly(n)-size) circuit C_n such that for some positive polynomial p and infinitely many n's

|Pr[C_n(e, E_e(x_e)) = f_e(x_e)] − Pr[C_n(e, E_e(y_e)) = f_e(x_e)]| > 1/p(n)
Details: We refer to the proof of Claim 5.2.15.1 (contained in the proof of Theorem 5.2.15). Recall that the idea was to proceed in two stages. First, using only e (which also yields x_e and y_e), we find an arbitrary value v such that |Pr[C_n(e, E_e(x_e)) = v] − Pr[C_n(e, E_e(y_e)) = v]| is large. In the second stage, we use this value v in order to distinguish the case in which we are given an encryption of x_e from the case in which we are given an encryption of y_e. (We comment that if (e, x) → f_e(x) were computable by a poly(n)-size circuit, then converting C_n into a distinguisher C'_n would have been much easier; we further comment that, as a corollary to the current proof, one can conclude that the restricted form is equivalent to the general one.)
This concludes the proof that indistinguishability of encryptions (as per Definition 5.4.2)
implies semantic security (as per Definition 5.4.1), and we now turn to the opposite
direction.
Suppose that (G, E, D) does not have indistinguishable encryptions, and consider an
admissible sequence {(xe , ye )}e∈{0,1}∗ that witnesses this failure. Following the proof of
Proposition 5.2.7, we define a probability ensemble {X e }e∈{0,1}∗ and function ensembles
{h e }e∈{0,1}∗ and { f e }e∈{0,1}∗ in an analogous manner:
Using the admissibility of the sequence {(x e , ye )}e (for Definition 5.4.2), it follows that
{( f e , h e , X e )}e is admissible for Definition 5.4.1. Using the same algorithm A as in the
proof of Proposition 5.2.7 (i.e., A(e, β, Cn ) = Cn (e, β), where β is a ciphertext and
Cn = h e (X e )), and using the same analysis, we derive a contradiction to the hypothesis
that (G, E, D) satisfies Definition 5.4.1.
5.4.2.2. Constructions
All the results presented in Section 5.3.4 extend to security under key-dependent passive
attacks. That is, for each of the constructions presented in Section 5.3.4, the same
assumption used to prove security under key-oblivious passive attacks actually suffices
for proving security under key-dependent passive attacks. Before demonstrating this
fact, we comment that (in general) security under key-oblivious passive attacks does
not necessarily imply security under key-dependent passive attacks; see Exercise 32.
Initial observations. We start by observing that Construction 5.3.7 (i.e., the transfor-
mation of block-ciphers to general encryption schemes) maintains its security in our
context. That is:
Proof Idea: As in the proof of Proposition 5.3.8, we merely observe that multiple-
message security of (G , E , D ) is equivalent to multiple-message security of
(G, E, D).
Since (G, E, D) is secure under key-oblivious passive attacks, it follows that (for every (x, y) ∈ {0, 1}^m × {0, 1}^m, where m ≤ poly(n)) the circuit C_n^{x,y} cannot distinguish the case α = E_e(x) from the case α = E_e(y). Thus, for some negligible function μ: ℕ → [0,1] and every pair (x, y) ∈ {0, 1}^m × {0, 1}^m, the following holds:

μ(n) > |Pr_e[C_n^{x,y}(e, E_e(x)) = 1] − Pr_e[C_n^{x,y}(e, E_e(y)) = 1]|
     = |Pr_e[C_n(e, E_e(x_e)) = 1 ∧ (x_e, y_e) = (x, y)] − Pr_e[C_n(e, E_e(y_e)) = 1 ∧ (x_e, y_e) = (x, y)]|

where e ← G_1(1^n), and equality holds because in case (x_e, y_e) ≠ (x, y), the output of C_n^{x,y}(e, α) is independent of α (and so in this case C_n^{x,y}(e, E_e(x)) = C_n^{x,y}(e, E_e(y))).
Since this holds for any pair (x, y) ∈ {0, 1}^m × {0, 1}^m, and since |x_e| = |y_e| = ℓ(n), it
follows that

|Pr_e[C_n(e, E_e(x_e)) = 1] − Pr_e[C_n(e, E_e(y_e)) = 1]|
  ≤ Σ_{x,y: |x|=|y|=ℓ(n)} |Pr_e[C_n(e, E_e(x_e)) = 1 ∧ (x_e, y_e) = (x, y)] − Pr_e[C_n(e, E_e(y_e)) = 1 ∧ (x_e, y_e) = (x, y)]|
A Feasibility Result. Combining Theorem 5.3.15 with Propositions 5.4.4 and 5.4.5,
we obtain a feasibility result:
More Efficient Schemes. In order to obtain more efficient schemes, we directly analyze
the efficient constructions presented in Section 5.3.4. For example, extending the proof
of Proposition 5.3.19, we obtain:
and (α, pα (S(α)), U ) are indistinguishable. The latter claim follows as in the proof of
Proposition 5.3.19 (i.e., by a minor extension to Proposition 3.4.6). The proposition
follows.
27 Recall that here α serves as an encryption-key and C n (α) is a key-dependent plaintext. Typically, Cn (α) would
be the first or second element in the plaintext pair (xα , yα ) = Pn (α).
attacks, we start with mild active attacks in which the adversary may obtain (from some
legitimate user) ciphertexts corresponding to plaintexts of the adversary’s choice. Such
attacks will be called chosen plaintext attacks, and they characterize the adversary’s
abilities in some applications. For example, in some settings, the adversary may (directly
or indirectly) control the encrypting module (but not the decrypting module).
Intuitively, a chosen plaintext attack poses additional threat in the case of private-
key encryption schemes (see Exercise 33), but not in the case of public-key encryption
schemes. In fact, we will show that in the case of public-key encryption schemes, a
chosen plaintext attack can be emulated by a passive key-dependent attack.
5.4.3.1. Definitions
We start by rigorously formulating the framework of chosen plaintext attacks. Intu-
itively, such attacks proceed in four stages corresponding to the generation of a key (by
a legitimate party), the adversary’s requests (answered by the legitimate party) to encrypt
plaintexts under this key, the generation of a challenge ciphertext (under this key and
according to a template specified by the adversary), and additional requests to encrypt
plaintexts (under the same key). That is, a chosen plaintext attack proceeds as follows:
first part, denoted A1 , represents the adversary’s behavior during Step 2. It is given
a security parameter (and possibly an encryption-key), and its output is a pair (τ, σ ),
where τ is the template generated in the beginning of Step 3 and σ is state information
passed to the second part of the adversary. The second part of the adversary, denoted
A2 , represents the adversary’s behavior during Step 4. It is given the state σ (of the first
part), as well as the actual challenge (generated in Step 3), and produces the actual output
of the adversary.
In accordance with the use of non-uniform formulations, we let each of the two
oracle machines have a (non-uniform) auxiliary input. In fact, it suffices to provide
only the first machine with such a (non-uniform) auxiliary input, because it can pass
auxiliary input to the second machine in the state information σ . (Similarly, in the case
of public-key schemes, it suffices to provide only the first machine with the encryption-
key.) We comment that we provide these machines with probabilistic oracles; that is, in
response to a plaintext query x, the oracle E e returns a random ciphertext E e (x) (i.e.,
the result of a probabilistic process applied to e and x). Thus, in the case of public-key
schemes, the four-step attack process can be written as follows:
(e, d) ← G(1^n)
(τ, σ) ← A_1^{E_e}(e, z)
c ≝ an actual challenge generated according to the template τ
output ← A_2^{E_e}(σ, c)
where z denotes (non-uniform) auxiliary input given to the adversary. In the case of
private-key schemes, the adversary (i.e., A1 ) is given 1n instead of e.
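The four-step process can be phrased as a small experiment runner. The following Python sketch fixes the control flow of the public-key case; G, E, A1, A2, and make_challenge are hypothetical callables supplied by the caller.

```python
def cpa_experiment(G, E, A1, A2, n, z, make_challenge):
    """Four-step chosen plaintext attack in the public-key model (sketch).
    G and E are the key-generation and (probabilistic) encryption algorithms;
    A1 and A2 are the two parts of the adversary; make_challenge turns the
    template produced by A1 into an actual challenge."""
    e, d = G(n)                                        # Step 1: key generation
    enc_oracle = lambda x: E(e, x)                     # oracle access to E_e
    template, state = A1(enc_oracle, e, z)             # Step 2: queries, then a template
    challenge = make_challenge(enc_oracle, template)   # Step 3: actual challenge
    return A2(enc_oracle, state, challenge)            # Step 4: more queries, then output
```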
Note that as in almost all other definitions of semantic security (with the exception of
Definition 5.4.1), algorithm A1 does not get a (random) encryption-key as input (but may
rather generate one by itself).28 Since the challenge template is not fixed (or determined
by e) but, rather, is chosen by A and A themselves, it is very important to require
that in both cases, the challenge template be distributed identically (or approximately
so): There is no point in relating the success probability of A and A , unless these
probabilities refer to same distribution of problems (i.e., challenge templates).29 (The
issue arises also in Definition 5.4.1 where it was resolved by forcing A to refer to the
challenge template determined by the public-key e.)30
Definition 5.4.8 implies Definition 5.4.1, but this may not be evident from the def-
initions themselves (most importantly, because here f m is computationally bounded
whereas in Definition 5.4.1 the function is computationally unbounded). Still, the va-
lidity of the claim follows easily from the equivalence of the two definitions to the
28 In fact, A'_1 is likely to start by generating e ← G_1(1^n), because it has to generate a challenge template that is distributed as the one produced by A_1 on input e ← G_1(1^n).
29 Failure to make this requirement would have resulted in a fundamentally bad definition (by which every encryption scheme is secure). For example, algorithm A'_1 could have set h_m to equal the function f_m selected by A_1 (in a corresponding attack). Doing so, the success of A to guess the value of f_m(x) from the (insecure) encryption of x and a (possibly) useless value h_m(x) (e.g., for a constant function h_m) would have been met by the success of A' to "guess" the value of f_m(x) from f_m(x) itself (without being given the encryption of x). An alternative approach, which follows the formulation of Definition 5.4.1, is presented in Exercise 34.
30 Indeed, an alternative solution could have been the one adopted here and in the sequel; that is, in an alternative to Definition 5.4.1, one may allow A' to select the challenge template by itself, provided that the selection yields a distribution similar to the one faced by A (as induced by the public-key e). For details, see Exercise 30.
corresponding notions of indistinguishability of encryptions (and the fact that the im-
plication is evident for the latter formulations).
Clearly, Definition 5.4.9 implies Definition 5.4.2 as a special case. Furthermore, for
public-key schemes, the two definitions are equivalent (see Proposition 5.4.10), whereas
for private-key schemes, Definition 5.4.9 is strictly stronger (see Exercise 33).
Proposition 5.4.10: Let (G, E, D) be a public-key encryption scheme that has indis-
tinguishable encryptions under key-dependent passive attacks. Then (G, E, D) has
indistinguishable encryptions under chosen plaintext attack.
Proof Sketch: The key observation is that in the public-key model, a chosen plaintext
attack can be emulated by a passive key-dependent attack. Specifically, the (passive)
attacker can emulate access to an encryption oracle by itself (by using the encryption-
key given to it). Thus, we obtain an attacker as in Definition 5.4.9, with the important
exception that it never makes oracle calls (but rather emulates E e by itself). In other
words, we have an attacker as in Definition 5.4.2, with the minor exception that it is
a probabilistic polynomial-time machine with auxiliary input z (rather than being a
polynomial-size circuit) and that it distinguishes a pair of plaintext distributions rather
than a pair of (fixed) plaintexts (which depend on the encryption-key). However, fixing
the best-possible coins for this attacker (and incorporating them as well as z in an
adequate circuit), we obtain an attacker exactly as in Definition 5.4.2 such that its
distinguishing gap is at least as large as the one of the (initial) chosen plaintext attacker.
(For details, see Exercise 30.)
Since A'_1 merely emulates the generation of a key-pair and the actions of A_1 with respect to such a pair, the equal distribution condition (i.e., Item 2 in Definition 5.4.8) holds. Using the (corresponding) indistinguishability of encryption hypothesis, we show that (even in the presence of an encryption oracle E_e) the distributions (σ, (E_e(x), h(x))) and (σ, (E_e(1^{|x|}), h(x))) are indistinguishable, where (e, d) ← G(1^n), ((S, h, f), σ) ← A_1^{E_e}(y, z) (with y = e or y = 1^n depending on the model), and x ← S(U_{poly(n)}).
Details: Suppose that given ((S, h, f), σ) generated by A_1^{E_e}(y, z) and oracle access to E_e, where e ← G_1(1^n), one can distinguish (σ, (E_e(x), h(x))) and
and outputs ((x (1) , x (2) ), (σ, h(x (1) ))). That is, (x (1) , x (2) ) is the challenge template,
and it is answered with E e (x (i) ), where i is either 1 or 2. The second part of the
new distinguisher gets as input a challenge ciphertext α ← E e (x (i) ) and the state
generated by the first part (σ, h(x (1) )), and invokes the distinguisher of the contra-
diction hypothesis with input (σ, (α, h(x (1) ))), while answering its oracle queries by
forwarding these queries to its own E e oracle. Thus, the new distinguisher violates
the condition in Definition 5.4.9, in contradiction to the hypothesis that (G, E, D)
has indistinguishable encryptions.
It follows that indistinguishability of encryptions (as per Definition 5.4.9) implies se-
mantic security (as per Definition 5.4.8). (Here, this implication is easier to prove than
in previous cases because the function f is computable via a circuit that is generated
as part of the challenge template [and, without loss of generality, is part of σ ].)
We now turn to the opposite direction. Suppose that (G, E, D) does not have in-
distinguishable encryptions, and consider the pairs (x (1) , x (2) ) produced as a challenge
template by the distinguishing adversary. Following the ideas of the proof of Proposi-
tion 5.2.7, we let the semantic-security adversary generate a corresponding challenge
template (S, h, f ) such that
• The circuit S samples uniformly in {x^{(1)}, x^{(2)}}.
• The function f satisfies f(x^{(1)}) = 1 and f(x^{(2)}) = 0.
• The function h is defined arbitrarily subject to h(x^{(1)}) = h(x^{(2)}).
Note that here we do not need to use h for passing non-uniform information (e.g., a
description of the distinguisher). Instead, non-uniform information (i.e., the auxiliary
input z to the distinguisher) is passed explicitly by other means (i.e., as the auxiliary
input to the semantic-security adversary).
We stress that when the semantic-security adversary invokes the distinguishing adver-
sary, the former uses its own oracle to answer the queries made by the latter. (Likewise,
the former passes its auxiliary input z to the latter.) The reader may easily verify that
the semantic-security adversary has a noticeable advantage in guessing f (S(Upoly(n) ))
(by using the distinguishing gap between E e (x (1) ) and E e (x (2) )), whereas no algorithm
that only gets h(S(Upoly(n) )) can have any advantage in such a guess. We derive a con-
tradiction to the hypothesis that (G, E, D) satisfies Definition 5.4.8, and the theorem
follows.
these ciphertexts using knowledge of the encryption-key, which is only possible in the
public-key setting).
5.4.3.2. Constructions
In view of Proposition 5.4.10 (and Theorem 5.4.11), we focus on private-key encryption
schemes (because a public-key encryption scheme is secure under chosen plaintext
attacks if and only if it is secure under passive key-dependent attacks). All the results
presented in Section 5.3.3 extend to security under chosen plaintext attacks. Specifically,
we prove that Constructions 5.3.9 and 5.3.12 remain secure also under a chosen plaintext
attack.
Proof Sketch: We focus on Construction 5.3.9 and follow the technique underlying the
proof of Proposition 5.3.10. That is, we consider an idealized version of the scheme, in
which one uses a uniformly selected function φ : {0, 1}n → {0, 1}n , rather than the pseu-
dorandom function f s . Essentially, all that the adversary obtains by encryption queries
in the ideal version is pairs (r, φ(r )), where the r ’s are uniformly and independently
distributed in {0, 1}n . As to the challenge itself, the plaintext is “masked” by the value
of φ at another uniformly and independently distributed element in {0, 1}n . Thus, unless
the latter element happens to equal one of the r ’s used by the encryption oracle (which
happens with negligible probability), the challenge plaintext is perfectly masked. Thus,
the ideal version is secure under a chosen plaintext attack, and the same holds for the
real scheme (since otherwise one derives a contradiction to the hypothesis that F is
pseudorandom).
Summary. Private-key and public-key encryption schemes that are secure under cho-
sen plaintext attacks exist if and only if corresponding schemes that are secure under
passive (key-dependent) attacks exist.31
31 Hint: When establishing the claim for the private-key case, use Exercise 2.
Both types of attacks address security threats in realistic applications: In some set-
tings, the adversary may experiment with the decryption module, before the actual
ciphertext in which it is interested is sent. Such a setting corresponds to an a priori
chosen ciphertext attack. In other settings, one may invoke the decryption module on
inputs of one’s choice at any time, but all these invocations are recorded, and real
damage is caused only by knowledge gained with respect to a ciphertext for which a
decryption request was not recorded. In such a setting, protection against a posteriori
chosen ciphertext attacks is adequate. Furthermore, in both cases, decryption requests
can also be made with respect to strings that are not valid ciphertexts, in which case
the decryption module returns a special error symbol.
Typically, in settings in which a mild or strong form of a chosen ciphertext attack is
possible, a chosen plaintext attack is possible, too. Thus, we actually consider combined
attacks in which the adversary may ask for encryption and decryption of strings of its
choice. Indeed (analogously to Proposition 5.4.10), in the case of public-key schemes
(but not in the case of private-key schemes), the combined attack is equivalent to a
“pure” chosen ciphertext attack.
Organization. We start by providing security definitions for the two types of attacks
discussed here. In Section 5.4.4.2, we further extend the definitional treatment of se-
curity (and derive a seemingly stronger notion that is in fact equivalent to the notions
in Section 5.4.4.1). In Section 5.4.4.3 (resp., Section 5.4.4.4) we discuss the construc-
tion of private-key (resp., public-key) encryption schemes that are secure under chosen
ciphertext attacks.
is not a valid ciphertext (with respect to E e ) is answered with a special error symbol.
After making several such requests, the adversary moves to the next stage.
3. Challenge generation: Based on the information obtained so far, the adversary spec-
ifies a challenge template and is given an actual challenge. This is done as in the
corresponding step in the framework of chosen plaintext attacks.
4. Additional encryption and decryption requests: Based on the information obtained
so far, the adversary may request the encryptions of additional plaintexts of its
choice. In addition, in the case of an a posteriori chosen ciphertext attack (but
not in the case of an a priori chosen ciphertext attack), the adversary may make
additional decryption requests with the only (natural) restriction that it not be allowed
to ask for a decryption of the challenge ciphertext. All requests are handled as in
Step 2. After making several such requests, the adversary produces an output and
halts.
In the actual definition, as in the case of chosen plaintext attacks, the adversary’s
strategy will be decoupled into two parts corresponding to its actions before and after
the generation of the actual challenge. Each part will be represented by a (proba-
bilistic polynomial-time) two-oracle machine, where the first oracle is an “encryp-
tion oracle” and the second is a “decryption oracle” (both with respect to the cor-
responding key generated in Step 1). As in the case of chosen plaintext attacks, the
two parts are denoted A1 and A2 , and A1 passes state information (denoted σ ) to
A2 . Again, in accordance with the use of non-uniform formulations, we provide A1
with a (non-uniform) auxiliary input. Thus, in the case of (a posteriori chosen cipher-
text attacks on) public-key schemes, the four-step attack process can be written as
follows:
  (e, d) ← G(1^n)
  (τ, σ) ← A_1^{E_e, D_d}(e, z)
  c def= an actual challenge generated according to the template τ
  output ← A_2^{E_e, D_d}(σ, c)
where A2 is not allowed to make a query regarding the ciphertext in c, and z denotes
the (non-uniform) auxiliary input given to the adversary. In the case of private-key
schemes, the adversary (i.e., A1 ) is given 1n instead of e. In the case of a priori chosen
ciphertext attacks, A2 is not allowed to query Dd (or, equivalently, A2 is only given
oracle access to the oracle Ee ).
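To make the four-step process concrete, here is a minimal Python harness for the indistinguishability variant of an a posteriori CCA (cf. Definition 5.4.14 below); the scheme (G, E, D), the adversary halves A1 and A2, and the calling conventions are illustrative assumptions rather than anything fixed by the text.

    # Hypothetical harness for the a-posteriori-CCA experiment (public-key case).
    def cca_experiment(G, E, D, A1, A2, n, z, i):
        e, d = G(n)                                  # Step 1: key generation
        enc = lambda x: E(e, x)                      # encryption oracle E_e
        dec = lambda c: D(d, c)                      # decryption oracle D_d
        (x1, x2), sigma = A1(enc, dec, e, z)         # Steps 2-3: challenge template
        assert len(x1) == len(x2)
        challenge = E(e, (x1, x2)[i - 1])            # encryption of x^(i)
        def restricted_dec(c):                       # Step 4: D_d minus the challenge
            if c == challenge:
                raise ValueError("decryption of the challenge ciphertext is disallowed")
            return D(d, c)
        return A2(enc, restricted_dec, sigma, challenge)

In the a priori variant, A2 would simply not be given restricted_dec at all, and in the private-key case A1 would be given 1^n in place of e.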
outputting a challenge template, and the second is in charge of solving the challenge,
where state information is passed from the first part to the second part. Furthermore,
it is again important to require that the challenge template produced by the corre-
sponding algorithm be distributed exactly as the challenge template produced by the
adversary.
For public-key schemes: A public-key encryption scheme, (G, E, D), is said to be semantically secure under a priori chosen ciphertext attacks if for every pair of probabilistic polynomial-time oracle machines, A_1 and A_2, there exists a pair of probabilistic polynomial-time algorithms, A'_1 and A'_2, such that the following two conditions hold:
1. For every positive polynomial p, and all sufficiently large n and z ∈ {0, 1}poly(n)
it holds that
\[
\Pr\!\left[\begin{array}{l}
v = f(x) \ \mbox{where} \\
\quad (e,d) \leftarrow G(1^n) \\
\quad ((S,h,f),\sigma) \leftarrow A_1^{E_e,D_d}(e,z) \\
\quad c \leftarrow (E_e(x), h(x)), \ \mbox{where } x \leftarrow S(U_{\mathrm{poly}(n)}) \\
\quad v \leftarrow A_2^{E_e}(\sigma,c)
\end{array}\right]
< \Pr\!\left[\begin{array}{l}
v = f(x) \ \mbox{where} \\
\quad ((S,h,f),\sigma) \leftarrow A'_1(1^n,z) \\
\quad x \leftarrow S(U_{\mathrm{poly}(n)}) \\
\quad v \leftarrow A'_2(\sigma, 1^{|x|}, h(x))
\end{array}\right] + \frac{1}{p(n)}
\]
2. For every n and z, the first elements (i.e., the (S, h, f) part) in the random variables A'_1(1^n, z) and A_1^{E_{G_1(1^n)}, D_{G_2(1^n)}}(G_1(1^n), z) are identically distributed.
For public-key schemes: A public-key encryption scheme, (G, E, D), is said to have
indistinguishable encryptions under a priori chosen ciphertext attacks if for
every pair of probabilistic polynomial-time oracle machines, A1 and A2 , for every
positive polynomial p, and for all sufficiently large n and z ∈ {0, 1}poly(n) it holds
that
\[
\left| p^{(1)}_{n,z} - p^{(2)}_{n,z} \right| < \frac{1}{p(n)}
\]
where
\[
p^{(i)}_{n,z} \stackrel{\rm def}{=} \Pr\!\left[\begin{array}{l}
v = 1 \ \mbox{where} \\
\quad (e,d) \leftarrow G(1^n) \\
\quad ((x^{(1)},x^{(2)}),\sigma) \leftarrow A_1^{E_e,D_d}(e,z) \\
\quad c \leftarrow E_e(x^{(i)}) \\
\quad v \leftarrow A_2^{E_e}(\sigma,c)
\end{array}\right]
\]
where |x^{(1)}| = |x^{(2)}|.
Clearly, the a posteriori version of Definition 5.4.14 implies its a priori version, which
in turn implies Definition 5.4.9 as a special case. Again, these implications are strict
(see again Exercises 36 and 35, respectively).
Proof Sketch: We adapt the proof of Theorem 5.4.11 to the current setting. The adap-
tation is straightforward, and we focus on the case of a posteriori CCA security (while
commenting on the case of a priori CCA security).
Again, since A1 merely emulates the generation of a key-pair and the actions of A1 with
respect to such a pair, the equal distribution condition (i.e., Item 2 in Definition 5.4.13)
holds. Using the (corresponding) indistinguishability of encryption hypothesis, we show
that (even in the presence of the encryption oracle E_e and a restricted decryption oracle D_d) the distributions (σ, (E_e(x), h(x))) and (σ, (E_e(1^{|x|}), h(x))) are indistinguishable, where (e, d) ← G(1^n), ((S, h, f), σ) ← A_1^{E_e, D_d}(y, z) (with y = e or y = 1^n, depending on the model), and x ← S(U_{poly(n)}). The main thing to notice is that the oracle queries
made by a possible distinguisher of these distributions can be handled by a distinguisher
of encryptions (as in Definition 5.4.14), by passing these queries to its own oracles.
It follows that indistinguishability of encryptions (as per Definition 5.4.14) implies
semantic security (as per Definition 5.4.13).
We now turn to the opposite direction. Here, the construction of a challenge template
(as per Definition 5.4.13) is exactly as the corresponding construction in the proof of
Theorem 5.4.11. Again, the thing to notice is that the oracle queries made by a possible
distinguisher of encryptions (as in Definition 5.4.14) can be handled by the semantic-
security adversary, by passing these queries to its own oracles. We derive a contra-
diction to the hypothesis that (G, E, D) satisfies Definition 5.4.13, and the theorem
follows.
chosen ciphertext attacks and is discussed next (i.e., in Subsection 5.4.4.2). Actually,
we will focus on this generalization when applied to a posteriori chosen ciphertext
attacks, although a similar generalization can be applied to a priori chosen ciphertext
attacks (and in fact also to chosen plaintext attacks).
32 Note that in this section we generalize the notion of an a posteriori chosen ciphertext attack. When generalizing
the notion of an a priori chosen ciphertext attack, we disallow decryption queries after the first challenge template
is produced.
33 Independently distributed plaintexts can be obtained by sampling circuits that refer to disjoint parts of the
random string r . On the other hand, we can obtain a pair of plaintexts such that the second plaintext is a function
of the first one by letting the second sampling circuit equal the composition of the first sampling circuit with
the said function. That is, making queries of the form (S, ·) and (C ◦ S, ·), where C is a deterministic circuit, we obtain answers that refer to the plaintexts x def= S(r) and C(x).
34 In general, the description of functions in terms of circuits that are not of minimal size is redundant, and opens
We now turn to describe the benign adversary (which does not see the ciphertexts).
Such an adversary is given oracle access to a corresponding oracle, Tr , that behaves
as follows. On query a challenge template of the form (S, h), the oracle returns h(x),
where x = S(r ). (Again, r is not known to the adversary.) Like the real adversary, the
benign adversary also terminates by outputting a function f and a value v, hoping that
f (x 1 , ..., x t ) = v, where x i = S i (r ) and (S i , h i ) is the i-th challenge query made by
the adversary.
Security amounts to asserting that the effect of any efficient multiple-challenge CCA
can be simulated by an efficient benign adversary that does not see the ciphertexts. As
in Definition 5.4.13, the simulation has to satisfy two conditions: First, the probability
that f (x 1 , ..., x t ) = v in the CCA must be met by the probability that a corresponding
event holds in the benign model (where the adversary does not see ciphertexts). Second,
the challenge queries, as well as the function f , should be distributed similarly in the
two models. Actually, the second condition should be modified in order to account
for the case that the real CCA adversary makes a decryption query that refers to a
ciphertext that is contained in the answer given to a previous challenge query, denoted
(S, h). Note that such a decryption query (i.e., E e (S(r ))) reveals S(r ) to the attacker,
and that this has nothing to do with the security of the encryption scheme. Thus, it is
only fair to also allow the benign adversary (which sees no ciphertexts) to make the
corresponding query, which is equivalent to the challenge query (S, id), where id is
the identity function. (Indeed, the answer will be id(S(r )) = S(r ).)
In order to obtain the actual definition, we need to define the trace of the execution
of these two types of adversaries. For a multiple-challenge CCA adversary, denoted
A, the trace is defined as the sequence of challenge queries made during the attack,
augmented by fictitious challenge queries such that the (fictitious challenge) query
(S, id) is included if and only if the adversary made a decryption query c such that (c, ·)
is the answer given to a previous challenge query of the form (S, ·). (This convention
is justified by the fact that the answer (E e (S(r )), id(S(r ))) to the fictitious challenge
query (S, id) is efficiently computable from the answer S(r ) to the decryption query
c = E e (S(r )).)35 In fact, for simplicity, we will assume in the following definition that A
(or rather a minor modification of A) actually makes these fictitious challenge queries.
For the benign adversary, denoted B, the trace is defined as the sequence of challenge
queries made during the attack.
For public-key schemes: A public-key encryption scheme, (G, E, D), is said to be se-
cure under multiple-challenge-chosen ciphertext attacks if for every probabilis-
tic polynomial-time oracle machine A there exists a probabilistic polynomial-time
oracle machine B such that the following two conditions hold:
1. For every positive polynomial p, and all sufficiently large n and z ∈ {0, 1}poly(n)
35 Indeed, the value (E e (S(r )), id(S(r ))) is obtained from S(r ) by making an encryption query S(r ).
it holds that
\[
\Pr\!\left[\begin{array}{l}
v = f(x^1,\ldots,x^t) \ \mbox{where} \\
\quad (e,d) \leftarrow G(1^n) \ \mbox{and} \ r \leftarrow U_{\mathrm{poly}(n)} \\
\quad (f,v) \leftarrow A^{E_e,D_d,T_{e,r}}(e,z) \\
\quad x^i \leftarrow S^i(r), \ \mbox{for } i=1,\ldots,t
\end{array}\right]
< \Pr\!\left[\begin{array}{l}
v = f(x^1,\ldots,x^t) \ \mbox{where} \\
\quad r \leftarrow U_{\mathrm{poly}(n)} \\
\quad (f,v) \leftarrow B^{T_r}(1^n,z) \\
\quad x^i \leftarrow S^i(r), \ \mbox{for } i=1,\ldots,t
\end{array}\right] + \frac{1}{p(n)}
\]
where t is the number of challenge queries made by A (resp., B), and S^i is the first part of the i-th challenge query made by A (resp., B) to T_{e,r} (resp., to T_r).
2. The following two probability ensembles, indexed by n ∈ N and z ∈ {0, 1}poly(n),
are computationally indistinguishable:
(a) The trace of A^{E_{G_1(1^n)}, D_{G_2(1^n)}, T_{G_1(1^n), U_{poly(n)}}}(G_1(1^n), z) augmented by the first element
To get more comfortable with Definition 5.4.16, consider the special case in which
the real CCA adversary does not make decryption queries to ciphertexts obtained as
part of answers to challenge queries. (In the proof of Theorem 5.4.17, such adver-
saries will be called canonical and will be shown to be as powerful as the general
ones.) The trace of such adversaries equals the sequence of actual challenge queries
made during the attack (without any fictitious challenge queries), which simplifies the
meaning of Condition 2. Furthermore, the special case in which such an adversary
makes a single challenge query is very similar to Definition 5.4.13, with the exception
that here Condition 2 allows computational indistinguishability (rather than requiring
identical distributions). Still, this very restricted case (of Definition 5.4.16) does imply
security under a posteriori CCA (see Exercise 37). More importantly, the following
holds:
Proof Sketch: As a bridge between the multiple-challenge CCA and the corresponding
benign adversary that does not see the ciphertext, we consider canonical adversaries that
can perfectly simulate any multiple-challenge CCA without making decryption queries
to ciphertexts obtained as part of answers to challenge queries. Instead, these canonical
adversaries make corresponding queries of the form (S, id), where id is the identity
function and (S, ·) is the challenge-query that was answered with the said ciphertext.
Specifically, suppose that a multiple-challenge CCA has made the challenge query
(S, h), which was answered by (c, v) where c = Ee (x), v = h(x) and x = S(r ), and at a
later stage makes the decryption query c, which is to be answered by Dd (c) = x. Then,
the corresponding canonical adversary makes the challenge query (S, h) as the original
adversary, receiving the same pair (c, v), but later (instead of making the decryption
query c) the canonical adversary makes the challenge query (S, id), which is answered
by id(S(r )) = x = Dd (c). Note that the trace of the corresponding canonical adversary
is identical to the trace of the original CCA adversary (and the same holds with respect
to their outputs).
Thus, given an a posteriori CCA–secure encryption scheme, it suffices to establish
Definition 5.4.16 when the quantification is restricted to canonical adversaries A. In-
deed, as in previous cases, we construct a benign adversary B in the natural manner:
On input (1n , z), machine B generates (e, d) ← G(1n ), and invokes A on input (y, z),
where y = e if we are in the public-key case and y = 1n otherwise. Next, B emulates
all oracles expected by A, while using its own oracle Tr . Specifically, the oracles E e and
Dd are perfectly emulated by using the corresponding keys (known to B), and the oracle
Te,r is (imperfectly) emulated using the oracle Tr ; that is, the query (S, h) is forwarded
to Tr , and the answer h(S(r )) is augmented with E e (1m ), where m is the number of
output bits in S. Note that the latter emulation (i.e., the answer (E e (1|S(r )| ), h(S(r )))) is
imperfect since the answer of Te,r would have been (E e (S(r )), h(S(r ))), yet (as we shall
show) A cannot tell the difference.
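Schematically, B's (imperfect) emulation of T_{e,r} can be pictured as follows; E, e, T_r, and the output_len attribute of sampling circuits are illustrative assumptions, not notation from the text.

    # B answers a challenge query (S, h) by forwarding it to its own oracle T_r
    # and pairing the answer h(S(r)) with an encryption of 1^m instead of S(r).
    def emulated_challenge_oracle(E, e, T_r):
        def answer(S, h):
            hv = T_r(S, h)                  # = h(S(r)); r itself is unknown to B
            m = S.output_len                # number of output bits of S (assumed attribute)
            return E(e, b"\x01" * m), hv    # E_e(1^m) replaces E_e(S(r))
        return answer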
In order to show that B satisfies both conditions of Definition 5.4.16 (with respect
to this A), we will show that the following two ensembles are computationally indis-
tinguishable:
1. The global view in a real attack of A on (G, E, D). That is, we consider the output
of the following experiment:
2. The global view in an attack emulated by B. That is, we consider the output of an experiment as in Item 1, except that A^{E_e, D_d, T_{e,r}}(y, z) is replaced by A^{E_e, D_d, T'_{e,r}}(y, z), where on query (S, h) the oracle T'_{e,r} replies with (E_e(1^{|S(r)|}), h(S(r))) rather than with (E_e(S(r)), h(S(r))).
can determine whether or not f (S 1 (r ), ..., S t (r )) = v holds (for ( f, v) and S 1 , ..., S t that
appear in the ensemble’s output). Also note that these ensembles may be computationally
indistinguishable only in the case where A is canonical (which we have assumed to be
the case).36
The computational indistinguishability of these two ensembles is proven using a
hybrid argument, which in turn relies on the hypothesis that (G, E, D) has indistin-
guishable encryptions under a posteriori CCAs. Specifically, we introduce t + 1 mental
experiments that are hybrids of the two ensembles (which we wish to relate). Each of
these mental experiments is given oracle access to E e and Dd , where (e, d) ← G(1n ) is
selected from the outside. The i-th hybrid experiment uses these two oracles (in addition
to y, which equals e in the public-key case and 1n otherwise) in order to emulate an
execution of A^{E_e, D_d, T^i_{e,r}}(y, z), where r is selected by the experiment itself and T^i_{e,r} is a hybrid of T_{e,r} and T'_{e,r}. Specifically, T^i_{e,r} is a history-dependent process that answers like T_{e,r} on the first i queries and like T'_{e,r} on the rest. Thus, for i = 0, ..., t, we define
the i-th hybrid experiment as a process that, given y (which equals e or 1n ) and oracle
access to E e and Dd , where (e, d) ← G(1n ), behaves as follows:
We stress that since A is canonical, none of the D_d-queries equals a ciphertext obtained as part of the answer of a T^i_{e,r}-query.
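The hybrid oracle can be sketched as follows (an illustrative Python fragment; E, e, r, and the representation of sampling circuits are assumed):

    # The i-th hybrid answers the first i challenge queries like the real T_{e,r}
    # (encrypting S(r)) and all later queries like the emulated T'_{e,r}
    # (encrypting an all-ones string of the same length as S(r)).
    def hybrid_oracle(i, E, e, r):
        count = 0
        def answer(S, h):
            nonlocal count
            count += 1
            x = S(r)
            ct = E(e, x) if count <= i else E(e, b"\x01" * len(x))
            return ct, h(x)
        return answer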
Clearly, the distribution of the 0-hybrid is identical to the distribution of the global
view in an attack emulated by B, whereas the distribution of the t-hybrid is identical to
the distribution of the global view in a real attack by A. On the other hand, distinguishing
the i-hybrid from the (i + 1)-hybrid yields a successful a posteriori CCA (in the sense of
distinguishing encryptions). That is, assuming that one can distinguish the i-hybrid from
the (i + 1)-hybrid, we construct an a posteriori CCA adversary (as per Definition 5.4.14)
36 Non-canonical adversaries can easily distinguish the two types of views by distinguishing the oracle T_{e,r} from the oracle T'_{e,r}. For example, suppose we make a challenge query with a sampling-circuit S that generates some distribution over {0,1}^m \ {1^m}, next make a decryption query on the ciphertext obtained in the challenge query, and output the answer. Then, in case we query the oracle T_{e,r}, we output D_d(E_e(S(r))) ≠ 1^m; whereas in case we query the oracle T'_{e,r}, we output D_d(E_e(1^m)) = 1^m. Recall, however, that at this point of the proof, we are guaranteed that A is canonical (and indeed A might have been derived by perfectly emulating some non-canonical A'). An alternative way of handling non-canonical adversaries is to let B handle the disallowed (decryption) queries by making the corresponding challenge query, and returning its answer rather than the decryption value. (Note that B, which emulates T'_{e,r}, can detect which queries are disallowed.)
as follows. For (e, d) ← G(1n ), given y = e if we are in the public-key case and y = 1n
otherwise, the attacker (having oracle access to E e and Dd ) behaves as follows:
c ∈ {E_e(S^{i+1}(r)), E_e(1^{|S^{i+1}(r)|})}), and replying with (c, h^{i+1}(S^{i+1}(r))). Note that if c = E_e(S^{i+1}(r)), then we emulate T^{i+1}_{e,r}, whereas if c = E_e(1^{|S^{i+1}(r)|}) then we emulate T^i_{e,r}.
3. Again, (f, v) denotes the output of A^{E_e, D_d, T^j_{e,r}}(y, z), and ((S^1, h^1), ..., (S^t, h^t)) denotes its trace. The attacker feeds ((S^1, h^1), ..., (S^t, h^t)), (f, v), r to the hybrid distinguisher (which we have assumed to exist toward the contradiction), and outputs whatever the latter does.
given the ciphertext c = E_e(S^{i+1}(r)) from the case that it is given the ciphertext c = E_e(1^{|S^{i+1}(r)|}), without querying D_d on the challenge ciphertext c. The last assertion follows by the hypothesis that A is canonical, and so none of the D_d-queries that A makes equals the ciphertext c obtained as (part of) the answer to the (i+1)-st T^j_{e,r}-query. Thus, distinguishing the (i+1)-st and i-th hybrids implies distinguishing encryptions under an a posteriori CCA, which contradicts our hypothesis regarding (G, E, D). The theorem follows.
Further Generalization. Recall that we have allowed arbitrary challenge queries of the
form (S, h) that were answered by (E e (S(r )), h(S(r ))). Instead, we may allow queries
of the form (S, h) that are answered by (E e (S(r )), h(r )); that is, h is applied to r itself
rather than to S(r ). Actually, given the “independence” of h from S, one could have
replaced the challenge queries by two types of queries: partial-information (on r )
queries that correspond to the h’s (and are answered by h(r )), and encrypted partial-
information queries that correspond to the S’s (and are answered by E e (S(r ))). As
shown in Exercise 38, all these forms are in fact equivalent.
Security under a-priori CCA. All the results presented in Section 5.3.3 extend to
security under a priori chosen ciphertext attacks. Specifically, we prove that Construc-
tions 5.3.9 and 5.3.12 remain secure also under an a priori CCA.
Proof Sketch: As in the proof of Proposition 5.4.12, we focus on Construction 5.3.9, and consider
an idealized version of the scheme in which one uses a uniformly selected function
φ : {0, 1}n → {0, 1}n (rather than the pseudorandom function f s ). Again, all that the ad-
versary obtains by encryption queries in the ideal version is pairs (r, φ(r )), where the
r ’s are uniformly and independently distributed in {0, 1}n . Similarly, decryption queries
provide the adversary with pairs (r, φ(r )), but here the r ’s are selected by the adversary.
Still in an a priori CCA, all decryption queries are made before the challenge is pre-
sented, and so these r ’s are selected (by the adversary) independent of the challenge.
Turning to the challenge itself, we observe that the plaintext is “masked” by the value
of φ at another uniformly and independently distributed element in {0, 1}n , denoted
rC . We stress that rC is independent of all r ’s selected in decryption queries (because
these occur before rC is selected), as well as being independent of all r ’s selected by the
encryption oracle (regardless of whether these queries are made prior or subsequent
to the challenge). Now, unless rC happens to equal one of the r ’s that appear in the
pairs (r, φ(r )) obtained by the adversary (which happens with negligible probability),
the challenge plaintext is perfectly masked. Thus, the ideal version is secure under an a
priori CCA. The same holds for the real scheme, because pseudorandom functions are
indistinguishable from truly random ones (even by machines that adaptively query the
function at arguments of their choice).
Security under a-posteriori CCA. Unfortunately, Constructions 5.3.9 and 5.3.12 are
not secure under a posteriori chosen ciphertext attacks: Given a challenge ciphertext (r, x ⊕ f_s(r)), the adversary may obtain f_s(r) by making the query (r, y'), for any y' ≠ x ⊕ f_s(r). This query is allowed and is answered with x' such that y' = x' ⊕ f_s(r). Thus, the adversary may recover the challenge plaintext x from the challenge ciphertext (r, y), where y def= x ⊕ f_s(r), by computing y ⊕ (y' ⊕ x'). Thus, we should look for new
private-key encryption schemes if we want to obtain one that is secure under a posteriori
CCA. Actually, we show how to transform any private-key encryption scheme that is
secure under chosen plaintext attack (CPA) into one that is secure under a posteriori
CCA.
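For concreteness, the attack just described can be phrased as a short routine; the names below are illustrative, and decrypt_oracle is assumed to answer any query other than the challenge itself.

    # Recovering the challenge plaintext of (r, y) = (r, x XOR f_s(r)) with one
    # allowed decryption query: (r, y') with y' != y reveals f_s(r).
    def xor(a, b):
        return bytes(u ^ v for u, v in zip(a, b))

    def recover_plaintext(challenge, decrypt_oracle):
        r, y = challenge
        y_prime = xor(y, b"\x01" * len(y))        # any y' different from y will do
        x_prime = decrypt_oracle((r, y_prime))    # x' = y' XOR f_s(r); query is allowed
        pad = xor(y_prime, x_prime)               # recovers f_s(r)
        return xor(y, pad)                        # x = y XOR f_s(r)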
The idea underlying our transformation (of CPA-secure schemes into CCA-secure
ones) is to eliminate the adversary’s gain from chosen ciphertext attacks by making it
infeasible to produce a legitimate ciphertext (other than the ones given explicitly to the
adversary). Thus, an a posteriori CCA adversary can be emulated by a chosen plaintext
attack (CPA) adversary, while almost preserving the success probability.
The question is indeed how to make it infeasible for the (a posteriori CCA) adversary
to produce a legitimate ciphertext (other than the ones explicitly given to it). One answer
is to use “Message Authentication Codes” (MACs) as defined in Section 6.1.37 That is,
we augment each ciphertext with a corresponding authentication tag (which is “hard
to forge”), and consider an augmented ciphertext to be valid only if it consists of a
valid (string,tag)-pair. For the sake of self-containment (and concreteness), we will use
a specific implementation of such MACs via pseudorandom functions. Incorporating
this MAC in Construction 5.3.9, we obtain the following:
Key-generation: G'(1^n) = ((k, k'), (k, k')), where k and k' are generated by two independent invocations of I(1^n).
Encrypting plaintext x ∈ {0,1}^n (using the key (k, k')):
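The encryption and decryption steps of Construction 5.4.19 are not reproduced above; the following Python sketch fills them in under the natural reading of the surrounding discussion (encrypt with Construction 5.3.9 and tag the resulting pair with a PRF keyed by k'), with HMAC-SHA256 again standing in for the pseudorandom functions. It is an illustrative reconstruction, not a verbatim rendering of the construction.

    # Hypothetical rendering of Construction 5.4.19: encrypt-then-MAC, both via PRFs.
    import hmac, hashlib, secrets

    def keygen(nbytes=32):
        return secrets.token_bytes(nbytes), secrets.token_bytes(nbytes)   # (k, k')

    def xor(a, b):
        return bytes(u ^ v for u, v in zip(a, b))

    def prf(key, data):
        return hmac.new(key, data, hashlib.sha256).digest()

    def encrypt(k, kprime, x):
        assert len(x) <= 32                       # keep the one-block illustration simple
        r = secrets.token_bytes(len(x))
        y = xor(x, prf(k, r)[: len(x)])           # basic ciphertext (r, y), as in 5.3.9
        t = prf(kprime, r + y)                    # authentication tag on (r, y)
        return (r, y), t

    def decrypt(k, kprime, ct):
        (r, y), t = ct
        if not hmac.compare_digest(t, prf(kprime, r + y)):
            return None                           # special error symbol: invalid ciphertext
        return xor(y, prf(k, r)[: len(y)])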
Proof Sketch: Following the motivation preceding the construction, we emulate any a
posteriori CCA adversary by a CPA adversary. Specifically, we need to show how to
answer decryption queries made by the CCA adversary. Let us denote such a generic
query by ((r, y), t), and consider the following three cases:
1. If ((r, y), t) equals the answer given to some (previous) encryption query x, then we
answer the current query with x.
Clearly, the answer we give is always correct.
2. If ((r, y), t) equals the challenge ciphertext, then this query is not allowed.
3. Otherwise, we answer that ((r, y), t) is not a valid ciphertext.
We need to show that our answer is indeed correct. Recall that in this case, ((r, y), t)
neither appeared before as an answer to an encryption query nor equals the chal-
lenge ciphertext. Since for every (r, y) there is a unique t' such that ((r, y), t') is a valid ciphertext, the case hypothesis implies that one of the following sub-cases
37 In fact, we need to use secure Message Authentication Codes that have unique valid tags (or at least are
super-secure), as discussed in Section 6.5.1 (resp., Section 6.5.2).
must occur:
Case 1: Some ((r, y), t'), with t' ≠ t, has appeared before either as an answer to an encryption query or as the challenge ciphertext. In this case, ((r, y), t) is definitely not a valid ciphertext, because ((r, y), t') is the unique valid ciphertext of the form ((r, y), ·).
Case 2: No triple of the form ((r, y), ·) has appeared before (either as an answer to an encryption query or as the challenge ciphertext). In this sub-case, the ciphertext is valid if and only if t = f_{k'}(r, y). That is, in order to produce such a valid ciphertext, the adversary must guess the value of f_{k'} at (r, y), when only seeing the value of f_{k'} at other arguments. By the pseudorandomness of the function f_{k'}, the adversary may succeed in such a guess only with negligible probability, and hence our answer is wrong only with negligible probability.
Finally, note that the CPA-security of Construction 5.3.9 (see Proposition 5.4.12) implies
the CPA-security of Construction 5.4.19. The proposition follows.
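The three-case analysis in the proof amounts to the following bookkeeping on the CPA adversary's side; the dictionary `seen` and the variable `challenge` are illustrative state maintained by the emulation, not notation from the text.

    # Emulated decryption oracle used when a CPA adversary runs the CCA adversary.
    def make_emulated_decryptor(seen, challenge):
        # `seen` maps each ciphertext returned by an encryption query to its plaintext.
        def dec(query):
            if query in seen:              # Case 1: answer with the recorded plaintext
                return seen[query]
            if query == challenge:         # Case 2: the query is not allowed
                raise ValueError("decryption of the challenge is not allowed")
            return None                    # Case 3: declared invalid (error symbol)
        return dec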
The same construction and analysis can be applied to Construction 5.3.12. Combin-
ing Proposition 5.4.20 with Corollary 3.6.7, we get:
Theorem 5.4.21: If there exist (non-uniformly hard) one-way functions, then there exist
private-key encryption schemes that are secure under a posteriori chosen ciphertext
attacks.
the constructions in order to handle a posteriori CCA. Specifically, we will show how
to transform any public-key encryption scheme that is secure in the passive (key-
dependent) sense into one that is secure under a posteriori CCA. As in the case of private-
key schemes, the idea underlying the transformation is to eliminate the adversary’s gain
from chosen ciphertext attacks.
Recall that in the case of private-key schemes, the adversary’s gain from a CCA was
eliminated by making it infeasible (for the adversary) to produce legitimate ciphertexts
(other than those explicitly given to it). However, in the context of public-key schemes,
the adversary can easily generate legitimate ciphertexts (by applying the keyed encryp-
tion algorithm to any plaintext of its choice). Thus, in the current context, the adversary’s
gain from a CCA is eliminated by making it infeasible (for the adversary) to produce
legitimate ciphertexts without “knowing” the corresponding plaintext. This, in turn,
will be achieved by augmenting the plaintext with a non-interactive zero-knowledge
“proof of knowledge” of the corresponding plaintext.
- Adaptive Soundness: For every Ξ : {0,1}^m → ({0,1}^{poly(m)} \ L) and every Π : {0,1}^m → {0,1}^{poly(m)}, the probability that V accepts the input Ξ(U_m) (based on the proof Π(U_m) and the reference string U_m) is negligible; that is, Pr[V(Ξ(U_m), U_m, Π(U_m)) = 1] is negligible.
- Adaptive Zero-Knowledge: There exist two probabilistic polynomial-time algorithms, S_1 and S_2, such that for every pair of functions Ξ : {0,1}^m → ({0,1}^{poly(m)} ∩ L) and W : {0,1}^m → {0,1}^{poly(m)} such that Ξ and W are both implementable by polynomial-size circuits and (Ξ(r), W(r)) ∈ R_L (for every r ∈ {0,1}^m), the ensembles {(U_m, Ξ(U_m), P(Ξ(U_m), W(U_m), U_m))}_{m∈N} and {S^Ξ(1^m)}_{m∈N} are computationally indistinguishable (by non-uniform families of polynomial-size circuits), where S^Ξ(1^m) denotes the output of the following randomized process:
  1. (r, s) ← S_1(1^m);
  2. x ← Ξ(r);
  3. π ← S_2(x, s);
  4. Output (r, x, π).
  Indeed, S^Ξ is a two-stage simulator that first produces (obliviously of the actual input) an alleged reference string r (along with the auxiliary information s),^38 and then, given an actual input (which may depend on r), simulates the actual proof.
Note that it is important that in the zero-knowledge condition, the function Ξ is required to be implementable by polynomial-size circuits (because otherwise only languages in BPP can have such proof systems; see Exercise 39). In the rest of this subsection, whenever we refer to an adaptive NIZK, we mean this definition. Actually, we may relax the adaptive soundness condition so that it only applies to functions Ξ and Π that are implementable by polynomial-size circuits. That is, computational soundness will actually suffice for the rest of this subsection.
38 The auxiliary information s may explicitly contain r . Alternatively, s may just equal the coins used by S1 . In the
constructions that follow, we do not follow either of these conventions, but rather let s equal the very information
about r that S2 needs.
scheme, denoted (G, E, D), and an adaptive NIZK, denoted (P, V ), for a related
NP-set.
That is, (e1 , e2 , y1 , y2 ) ∈ L R if y1 and y2 are encryptions of the same plaintext, produced
using the encryption-keys e1 and e2 , respectively.
Key-generation: G'(1^n) def= ((e_1, e_2, r), (d_1, d_2, r)), where (e_1, d_1) and (e_2, d_2) are selected at random by invoking G(1^n) twice, and r is uniformly distributed in {0,1}^n.
Encrypting plaintext x ∈ {0,1}^* (using the key ē = (e_1, e_2, r)):
  E'_ē(x) def= (y_1, y_2, π), where s_1, s_2 are uniformly selected poly(n)-long bit strings, y_1 = E_{e_1}(x, s_1), y_2 = E_{e_2}(x, s_2), and π ← P((e_1, e_2, y_1, y_2), (x, s_1, s_2), r).
Decrypting ciphertext (y_1, y_2, π) (using the key d̄ = (d_1, d_2, r)):
  If V((e_1, e_2, y_1, y_2), r, π) = 1, then return D_{d_1}(y_1); otherwise return an error symbol indicating that the ciphertext is not valid.
Indeed, our choice to decrypt according to y1 (in case π is a valid proof) is immaterial,
and we could as well decrypt according to y2 . Another alternative could be to decrypt
according to both y1 and y2 and return a result only if both outcomes are identical (and
π is a valid proof). We stress that, here as well as in the following analysis, we rely
on the hypothesis that decryption is error-free, which implies that Dd (E e (x)) = x for
every (e, d) in the range of G. Thus, Dd1 (y1 ) = Dd2 (y2 ), for any (e1 , e2 , y1 , y2 ) ∈ L R ,
where the (ei , di )’s are in the range of G.
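Schematically, Construction 5.4.23 can be rendered as follows, with the NIZK prover and verifier left abstract; nizk_prove and nizk_verify are placeholders rather than an actual API, and the coin lengths are illustrative.

    # Illustrative rendering of Construction 5.4.23 (double encryption plus a
    # NIZK proof of plaintext consistency).
    import secrets

    def keygen(G, n):
        (e1, d1), (e2, d2) = G(n), G(n)
        r = secrets.token_bytes(n)                 # reference string (an n-bit string in the text)
        return (e1, e2, r), (d1, d2, r)

    def encrypt(E, nizk_prove, pubkey, x):
        e1, e2, r = pubkey
        s1, s2 = secrets.token_bytes(32), secrets.token_bytes(32)   # encryption coins
        y1, y2 = E(e1, x, s1), E(e2, x, s2)        # two encryptions of the same x
        pi = nizk_prove((e1, e2, y1, y2), (x, s1, s2), r)            # consistency proof
        return y1, y2, pi

    def decrypt(D, nizk_verify, pubkey, seckey, ct):
        e1, e2, r = pubkey
        d1, d2, _ = seckey
        y1, y2, pi = ct
        if not nizk_verify((e1, e2, y1, y2), r, pi):
            return None                            # error symbol: invalid ciphertext
        return D(d1, y1)                           # decrypting y2 would do equally well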
Clearly, Construction 5.4.23 constitutes a public-key encryption scheme; that is,
Dd (E e (x)) = x, provided that the NIZK proof generated during the encryption stage
was accepted during the decryption stage. Indeed, if the NIZK system enjoys perfect
completeness (which is typically the case), then the decryption error is zero. By the zero-
knowledge property, the passive security of the original encryption scheme (G, E, D)
is preserved by Construction 5.4.23. Intuitively, creating a valid ciphertext seems to
imply “knowledge” of the corresponding plaintext, but this appealing claim should be
examined with more care (and in fact is not always valid). Furthermore, as stated pre-
viously, our actual proof will not refer to the notion of “knowledge.” Instead, the actual
proof will proceed by showing how a chosen ciphertext attack on Construction 5.4.23
can be transformed into a (key-dependent) passive attack on (G, E, D). In fact, we will
need to augment the notion of (adaptive) NIZK in order to present such a transfor-
mation. We will do so in two steps. The first augmentation will be used to deal with
a priori CCA, and further augmentation will be used to deal with a posteriori CCA.
Note that the computational limitation on Ξ is essential to the viability of the definition (see Exercise 40). It is tempting to conjecture that every adaptive NIZK (or rather its simulator) satisfies weak simulation-soundness; however, this is not true (for further discussion see Exercise 41). Nevertheless, adaptive NIZK (for NP) with a simulator
39 Indeed, prove that the distribution produced by the simulator must be far away from uniform. See related
Exercises 39 and 40.
satisfying weak simulation-soundness can be constructed given any adaptive NIZK (for
N P).
It is easy to see that Construction 5.4.25 preserves the adaptive NIZK features of
(P, V, S1 , S2 ). Furthermore, as will be shown, Construction 5.4.25 is weak simulation-
sound.
that s = (c_1, ..., c_n, s_1, ..., s_n), where the c_i's are selected uniformly in {0,1}, whereas Π(r) has the form (b_1, ..., b_n, π_1, ..., π_n). Let us denote the latter sequence of b_i's by B(r); that is, Π(r) = (B(r), Π'(r)). We distinguish two cases according to whether or not B(r) = c def= (c_1, ..., c_n):
Eq. (5.13), which corresponds to the first case, must be negligible because the corre-
sponding probability that refers to a uniformly selected reference string (as appearing
in the real proof) is negligible, and the indistinguishability of a simulated reference
string from a uniformly distributed one was established previously.
Details: For a uniformly distributed reference string r , we have Pr[B(r ) = c] = 2−n
by information-theoretic considerations (i.e., r is statistically independent of c). On
the other hand, for a simulated reference string r and a corresponding c, the quantity
q def= Pr[B(r) = c] is lower-bounded by Eq. (5.13). The quality of the simulator's
output (established in the first paragraph of the proof ) implies that the simulated
reference string is computationally indistinguishable from a uniformly distributed
reference string, which in turn implies that q − 2−n is negligible. It follows that
Eq. (5.13) is negligible.
Eq. (5.14) must be negligible because in this case, at least one of the alleged proofs (to
a false assertion) is with respect to a uniformly distributed reference string.
Details: By the case hypothesis (i.e., B(r) ≠ c), there exists an i such that the i-th bit of B(r) is different from c_i (i.e., b_i ≠ c_i). Thus, the i-th alleged proof (i.e., π_i) is with respect to a uniformly distributed reference string, that is, with respect to r_i^{b_i} = r_i^{1−c_i}, where r_i^{1−c_i} is selected uniformly in {0,1}^n. By the (adaptive) soundness of (P, V),
this proof for a false assertion can be valid only with negligible probability, which
in turn implies that Eq. (5.14) is negligible.
Having established that both Eq. (5.13) and Eq. (5.14) are negligible, the proposition
follows.
Theorem 5.4.27: Suppose that the adaptive NIZK (P, V ) used in Construction 5.4.23
has the weak simulation-soundness property and that the public-key encryption scheme
(G, E, D) is passively secure in the key-dependent sense. Further suppose that the
probability that G(1n ) produces a pair (e, d) such that Pr[Dd (E e (x)) = x] < 1, for
some x ∈ {0, 1}poly(n) , is negligible. Then Construction 5.4.23 constitutes a public-key
encryption scheme that is secure under a priori CCA.
Combining the above with Theorem 4.10.16 and Proposition 5.4.26, it follows that
public-key encryption schemes that are secure under a priori CCA exist, provided that
enhanced40 trapdoor permutations exist.
Proof Sketch: Assuming toward the contradiction that the scheme (G', E', D') is not
secure under a priori CCA, we show that the scheme (G, E, D) is not secure under
a (key-dependent) passive attack. Specifically, we refer to the definitions of security
in the sense of indistinguishability of encryptions (as in Definitions 5.4.14 and 5.4.2,
respectively). To streamline the proof, we reformulate Definition 5.4.2, incorporating
both circuits (i.e., the one selecting message pairs and the one trying to distinguish their
encryptions) into one circuit and allow this circuit to be probabilistic. (Certainly, this
model of a key-dependent passive attack is equivalent to the one in Definition 5.4.2.)
Let (A_1, A_2) be an a priori CCA adversary attacking the scheme (G', E', D') (as per
Definition 5.4.14), and (S1 , S2 ) be the two-stage simulator for (P, V ). We construct a
(key-dependent) passive adversary A (attacking (G, E, D)) that, given an encryption-
key e (in the range of G 1 (1n )), behaves as follows:
and π ← S2 (s, (e1 , e, y1 , y)). Finally, A invokes A2 on input (σ, (y1 , y, π)), and out-
puts whatever the latter does. Recall that here (in the case of a priori CCA), A2 is an
ordinary machine (rather than an oracle machine).
(Note that A emulates A2 on an input that is computationally indistinguishable from
the input given to A2 in a real attack. In particular, A typically invokes A2 with an
illegal ciphertext, whereas in a real attack, A2 is always given a legal ciphertext.)
the hybrid process invokes A2 with a legal ciphertext. (The question of how the hybrid
process “knows” or gets this y1 is out of place; we merely define a mental experiment.)
Let p_A^{(j)} = p_A^{(j)}(n) (resp., p_H^{(j)} = p_H^{(j)}(n)) denote the probability that A (resp., the hybrid process H) outputs 1 when x = x^{(j)}, where the probability is taken over the choices of (e, d) ← G(1^n) and the internal coin tosses of A (resp., H).
Claim 5.4.27.1: For both j's, the absolute difference between p_A^{(j)}(n) and p_H^{(j)}(n) is a negligible function in n.
Proof: Define an auxiliary hybrid process that behaves as the hybrid process, except that when emulating D_d, the auxiliary process answers according to D_{d_2} (rather than according to D_{d_1}). (Again, this is a mental experiment.) Let p_{HH}^{(j)} denote the probability that this auxiliary process outputs 1 when x = x^{(j)}. Similarly, define another mental experiment that behaves as A, except that when emulating D_d, this process answers according to D_{d_2} (rather than according to D_{d_1}), and let p_{AA}^{(j)} denote the probability that the latter process outputs 1 when x = x^{(j)}. We stress that in Step 3, the latter mental experiment behaves exactly like A; the only aspect in which this mental experiment differs from A is in its decryption operations at Step 2. The various processes are tabulated next.
41 Here, we rely on the hypothesis that except with negligible probability over the key-generation process, the
decryption is error-less (i.e., always yields the original plaintext).
The reason is that the experiments A A and H H differ only in the in-
put (σ, (y1 , y, π)) that they feed to A2 ; whereas A A forms y1 ← E e1 (0|x| )
(and π ← S2 (s, (e1 , e, y1 , y))), the process H H forms y1 ← E e1 (x) (and π ←
S2 (s, (e1 , e, y1 , y))). However, A2 cannot distinguish the two cases because this
would have violated the security of E e1 .
That is, to establish Fact 3, we construct a passive attack, denoted B, that behaves similarly to A except that it switches its reference to the two basic keys (i.e., the first two components of the encryption-key ē) and acts very differently in Step 3 (e.g., B produces a different challenge template). Specifically, given an attacked encryption-key e, adversary B generates (e_2, d_2) ← G(1^n), sets ē = (e, e_2, ·), and emulates A_1^{D_{d̄}}(ē) using the decryption-key d_2 to answer queries. For a fixed j, when obtaining (from A_1) the challenge template ((x^{(1)}, x^{(2)}), σ), adversary B produces the challenge template ((0^{|x^{(j)}|}, x^{(j)}), σ), and invokes A_2 on input (σ, (y, y_2, π)), where
Let us denote by p_{cca}^{(j)}(n) the probability that the CCA adversary (A_1, A_2) outputs 1 when given a ciphertext corresponding to the j-th plaintext in its challenge template (see Definition 5.4.14). Recall that by the contradiction hypothesis, |p_{cca}^{(1)}(n) − p_{cca}^{(2)}(n)| is not negligible.
Claim 5.4.27.2: For both j's, the absolute difference between p_{cca}^{(j)}(n) and p_H^{(j)}(n) is a negligible function in n.
Proof: The only difference between the output in a real attack of ( A1 , A2 ) and the output
of the hybrid process is that in the hybrid process, a “simulated reference string” and
a “simulated proof” are used instead of a uniformly distributed reference string and a
real NIZK proof. However, this difference is indistinguishable.42
Combining Claims 5.4.27.1 and 5.4.27.2, we obtain that for some negligible function
µ it holds that
\[
\left| p_A^{(1)}(n) - p_A^{(2)}(n) \right| \;>\; \left| p_H^{(1)}(n) - p_H^{(2)}(n) \right| - \mu(n) \;>\; \left| p_{\rm cca}^{(1)}(n) - p_{\rm cca}^{(2)}(n) \right| - 2\mu(n)
\]
We conclude that (the passive attack) A violates the passive security of (G, E, D). This
contradicts the hypothesis (regarding (G, E, D)), and so the theorem follows.
42 We stress that the current claim relies only on the fact that the simulated reference-string and proof are indis-
tinguishable from the corresponding real objects.
Theorem 5.4.29: Suppose that the adaptive NIZK (P, V ) used in Construction 5.4.23
has the 1-proof simulation-soundness property and that the encryption scheme
(G, E, D) is as in Theorem 5.4.27. Then Construction 5.4.23 constitutes a public-key
encryption scheme that is secure under a posteriori CCA.
Proof Sketch: The proof follows the structure of the proof of Theorem 5.4.27. Specif-
ically, given an a posteriori CCA adversary (A_1, A_2) (attacking (G', E', D')), we first
construct a passive adversary A (attacking (G, E, D)). The construction is as in the proof
of Theorem 5.4.27, with the exception that in Step 3 we need to emulate the decryption
oracle (for A2 ). This emulation is performed exactly as the one performed in Step 2
(for A1 ). Next, we analyze this passive adversary as in the proof of Theorem 5.4.27,
while referring to an A2 that may make decryption queries.43 The analysis of the hand-
ling of these (additional) queries relies on the 1-proof simulation-soundness property.
In particular, when proving a claim analogous to Claim 5.4.27.1, we have to establish
two facts (corresponding to Facts 1 and 2) that refer to the difference in the process’s
output when decrypting according to Dd1 and Dd2 , respectively. Both facts follow from
the fact (established next) that, except with negligible probability, neither A1 nor A2 can
produce a query (q_1, q_2, q_3) such that q_3 is a valid proof that q_1 and q_2 are consistent and yet D_{d_1}(q_1) ≠ D_{d_2}(q_2). (We stress that in the current context we refer also to
A2 , which may try to produce such a query based on the challenge ciphertext given
to it.)
Fact 5.4.29.1: The probability that A_1 produces a query (q_1, q_2, q_3) such that q_3 is a valid proof (with respect to the reference string r) that (supposedly) there exist x, s_1, s_2 such that q_i = E_{e_i}(x, s_i) (for i = 1, 2), and yet D_{d_1}(q_1) ≠ D_{d_2}(q_2), is negligible. The same holds for A_2, so long as the query is different from the challenge ciphertext given to it. This holds regardless of whether the challenge ciphertext (given to A_2) is produced as in A (i.e., y_1 = E_{e_1}(0^m)) or as in the hybrid process H (i.e., y_1 = E_{e_1}(x)).
Proof: Recall that one of our hypotheses is that the encryption (G, E, D) is error-free
(except for a negligible measure of the key-pairs). Thus, the current fact refers to a
situation that either A1 or A2 produces a valid proof for a false statement. The first part
(i.e., referring to A1 ) follows from the weak simulation-soundness of the NIZK, which
in turn follows from its 1-proof simulation-soundness property. We focus on the second
part, which refers to A2 .
Let (y_1, y_2, π) denote the challenge ciphertext given to A_2; that is, y_2 = y is the challenge ciphertext given to A(e) (or to H(e)), which augments it with y_1 and π ← S_2(s, (e_1, e_2, y_1, y_2)). Recall that (r, s) ← S_1(1^n) and that e_2 = e. Suppose that A_2 produces a query (q_1, q_2, q_3) as in the claim; that is, (q_1, q_2, q_3) ≠ (y_1, y_2, π), the encryptions q_1 and q_2 are not consistent (with respect to e_1 and e_2, respectively), and yet V((e_1, e_2, q_1, q_2), r, q_3) = 1. Specifically, it holds that x^2 def= (e_1, e_2, q_1, q_2) ∉ L_R, where L_R is as in Construction 5.4.23 (see Eq. (5.12)), and yet V(x^2, r, q_3) = 1 (i.e., π^2 def= q_3 is a valid proof of the false statement regarding x^2). Since (y_1, y_2, π) is produced by letting π ← S_2(s, (e_1, e_2, y_1, y_2)), it follows that π^1 def= π is a simulated proof (with respect to the reference string r) for the alleged membership of x^1 def= (e_1, e_2, y_1, y_2) in L_R, where (r, s) ← S_1(1^n). Furthermore, given such a proof (along with the reference string r), A_2 produces a query (q_1, q_2, q_3) that yields a pair (x^2, π^2), where π^2 = q_3, such that x^2 ∉ L_R and yet V(x^2, r, π^2) = 1 and (x^2, π^2) ≠ (x^1, π^1). Thus, using A_1 and A_2 (along with (G, E, D)), we obtain circuits Ξ^1, Ξ^2, Π^2 that violate the hypothesis that (S_1, S_2) is 1-proof simulation-sound.
Details: On input a (simulated) reference string r, the circuit Ξ^1 selects (e_1, d_1) and (e_2, d_2) in the range of G(1^n), and emulates the execution of A_1^{D_{d̄}}(ē), where ē = (e_1, e_2, r) and d̄ = (d_1, d_2, r). (Indeed, we fix the best possible choice of
43 Indeed, in the proof of Theorem 5.4.27, where ( A1 , A2 ) is an a priori CCA, A2 makes no such queries.
(e_1, d_1) and (e_2, d_2), rather than selecting both at random, and emulate the oracle D_{d̄} using d̄, which is known to the circuit.) When A_1 outputs a challenge template, Ξ^1 emulates the selection of the challenge x, sets y_1 ← E_{e_1}(0^{|x|}) (or y_1 ← E_{e_1}(x) when we argue about the hybrid process H), y_2 ← E_{e_2}(x), and outputs x^1 def= (e_1, e_2, y_1, y_2). (Again, we may fix the best choice of x^1, y_1, and y_2, rather than generating them at random.) The challenge ciphertext is formed by augmenting y_1, y_2 with π^1 ← S_2(s, x^1), where s is the auxiliary information generated by S_1(1^n) (i.e., (r, s) ← S_1(1^n)). Next, we describe the circuits Ξ^2 and Π^2, which obtain x^1 = (e_1, e_2, y_1, y_2) (as produced by Ξ^1) along with a simulated proof π^1 = S_2(s, x^1). On input a reference string r and x^1, π^1 (as just discussed), these circuits emulate A_2^{D_{d̄}}(σ, (y_1, y_2, π^1)), where σ is the state information generated by A_1. For some i (fixed as the best choice), we consider the i-th decryption query made during the emulation (i.e., we emulate the answers to previous queries by emulating D_{d̄}). Denoting this (i.e., i-th) query by (q_1, q_2, q_3), the circuit Ξ^2 outputs x^2 def= (e_1, e_2, q_1, q_2) and Π^2 outputs π^2 def= q_3. Since (q_1, q_2, q_3) ≠ (y_1, y_2, π^1), it follows that (x^2, π^2) = ((e_1, e_2, q_1, q_2), π^2) ≠ ((e_1, e_2, y_1, y_2), π^1) = (x^1, π^1). The event stated in the claim refers to the case that x^2 ∉ L_R and yet π^2 is accepted as a proof (with respect to the reference string r). But this event and the current process are exactly as in the definition of 1-proof simulation-soundness. We stress that the argument applies to the process defined by the actual attack, as well as to the process defined by the hybrid H. In the first case x^1 ∉ L_R, whereas in the second case x^1 ∈ L_R, but 1-proof simulation-soundness applies to both cases.
Fact 5.4.29.1 implies (an adequate extension of) the first two facts in the proof of a
claim analogous to Claim 5.4.27.1. The third fact in that proof, as well as the proof of
the analogue of Claim 5.4.27.2, do not refer to the soundness of the NIZK-proofs, and
are established here exactly as in the proof of Theorem 5.4.27. The current theorem
follows.
Proof Sketch: Let L ∈ N P. We construct a suitable NIZK for L using the following
three ingredients:
that operates with a reference string of length n and can be applied to prove (adap-
tively chosen) statements of length poly(n), where the adaptivity refers both to the
soundness and witness-indistinguishability requirements.
As shown in Section 4.10.3.2,44 the existence of enhanced trapdoor permutations
implies that every language in N P has an adaptive NIZK that operates with a
reference string of length n and can be applied to prove statements of length poly(n).
Indeed, in analogy to discussions in Section 4.6, any NIZK is a NIWI.
2. A super-secure one-time signature scheme, denoted (G OT , S OT , V OT ). Specifically,
one-time security (see Section 6.4.1) means that we consider only attacks in which
the adversary may obtain a signature to a single document of its choice (rather
than signatures to polynomially many documents of its choice). On the other hand,
super-security (see Section 6.5.2) means that the adversary should fail to produce a
valid document-signature that is different from the query-answer pair that appeared
in the attack. (We stress that unlike in ordinary security, the adversary is deemed
successful even if it produces a different signature to the same document for which
it has obtained a signature during the attack.)
By Theorem 6.5.2, super-secure one-time signature schemes can be constructed
on the basis of any one-way function. (If we were willing to assume the existence
of collision-free hashing functions, then we could have used instead the easier-to-
establish Theorem 6.5.1.)
3. A perfectly-binding commitment scheme, denoted C, as defined in Section 4.4.1,
with the following two additional properties: The first additional property is that
the commitment strings are pseudorandom; that is, the ensembles {C(x)}x∈{0,1}∗ and
{U|C(x)| }x∈{0,1}∗ are computationally indistinguishable. The second property is that
the support of C(Un ) is a negligible portion of {0, 1}|C(Un )| .
Using any collection of one-way permutations (e.g., the one in the hypothesis),
we may obtain the desired commitment scheme. Specifically, Construction 4.4.2
constitutes a commitment scheme that satisfies the pseudorandomness property (but
not the “negligible portion” property). To obtain the additional “negligible portion”
property, we merely let C(x) equal a pair of two independent commitments to x
(and it follows that the support of C(Un ) is at most a 2n · (2−n )2 = 2−n fraction of
{0, 1}|C(Un )| ).45 We denote by C(x, r ) the commitment to value x produced using
coins r; that is, C(x) = C(x, r), where r is uniformly chosen in {0,1}^{ℓ(|x|)}, for some polynomial ℓ.
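The pairing trick can be written in one line; C below is any perfectly binding commitment with pseudorandom commitment strings, and the coin length is an illustrative parameter.

    # Two independent commitments to the same value: the pair is still binding and
    # pseudorandom, and its support is a negligible fraction of equal-length strings.
    import secrets

    def paired_commit(C, x, coin_len):
        r1, r2 = secrets.token_bytes(coin_len), secrets.token_bytes(coin_len)
        return (C(x, r1), C(x, r2)), (r1, r2)      # commitment and decommitment coins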
1. Generates a key-pair for the one-time signature scheme; that is, (s, v) ← G OT (1n ).
2. Computes a pre-proof p ← P^{WI}((x, r_1, v), w, r_2), where (P^{WI}, V^{WI}) is a proof system (using r_2 as reference string) for the following NP-language L':
Note that P indeed feeds P^{WI} with an adequate NP-witness (i.e., ((x, r_1, v), w) ∈ R' since (x, w) ∈ R). The first part of the reference string of P is part of the statement fed to P^{WI}, whereas the second part of P's reference string serves as a reference string for P^{WI}. The behavior of V (with respect to V^{WI}) will be analogous.
3. The prover computes a signature σ to (x, p) relative to the signing-key s (generated
in Step 1). That is, P computes σ ← SsOT (x, p).
Verifier V : On common input x and an alleged proof (v, p, σ ) (and reference string
r = (r1 , r2 )), the verifier accepts if and only if the following two conditions hold:
1. σ is a valid signature, with respect to the verification-key v, of the pair (x, p).
That is, VvOT ((x, p), σ ) = 1.
2. p is a valid proof, with respect to the reference string r2 , of the statement
(x, r1 , v) ∈ L . That is, V WI ((x, r1 , v), r2 , p) = 1.
Simulator’s first stage S1 : On input 1m+n (from which S1 determines n and m), the
first stage produces a reference string and auxiliary information as follows:
1. Like the real prover, S1 (1m+n ) starts by generating a key-pair for the one-time
signature scheme; that is, (s, v) ← G OT (1n ).
2. Unlike in the real setting, S_1(1^{m+n}) selects s_1 uniformly in {0,1}^{ℓ(|v|)}, and sets r_1 =
C(v, s1 ). (Note that in the real setting, r1 is uniformly distributed independently
of v, and thus in the real setting, r1 is unlikely to be in the support of C, let alone
in that of C(v).)
3. Like in the real setting, S1 (1m+n ) selects r2 uniformly in {0, 1}n .
S_1(1^{m+n}) outputs the pair (r̄, s̄), where r̄ = (r_1, r_2) is a simulated reference string and s̄ = (v, s, s_1, r_2) is auxiliary information to be passed to S_2.
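The first simulator stage can be sketched as follows; C, ot_keygen, and the length parameters are abstract placeholders, and the only deviation from the real reference-string distribution is that r1 is now a commitment to the one-time verification-key v.

    # Illustrative sketch of S1: output a simulated reference string and the
    # auxiliary information passed on to S2.
    import secrets

    def simulator_stage1(n, C, ot_keygen, commit_coin_len):
        s, v = ot_keygen()                        # (s, v) <- G_OT(1^n), as the real prover does
        s1 = secrets.token_bytes(commit_coin_len)
        r1 = C(v, s1)                             # unlike the real setting, r1 commits to v
        r2 = secrets.token_bytes(n)               # r2 uniform, as in the real setting
        return (r1, r2), (v, s, s1, r2)           # (simulated reference string, auxiliary info)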
Simulator's second stage S_2: On input a statement x and auxiliary input s̄ = (v, s, s_1, r_2) (as generated by S_1), S_2 proceeds as follows:
Proof: Consider a hybrid distribution H^Ξ(1^{m+n}), in which everything except the pre-proof is produced as by S^Ξ(1^{m+n}), and the pre-proof is computed as by the real prover. That is, (r̄, s̄) ← S_1(1^{m+n}) (where r̄ = (r_1, r_2) and s̄ = (v, s, s_1, r_2)) is produced as by S^Ξ, but then for (x, w) = (Ξ(r̄), W(r̄)), the pre-proof is computed using the witness w; that is, p ← P^{WI}((x, r_1, v), w, r_2), rather than p ← P^{WI}((x, r_1, v), s_1, r_2). The final proof π = (v, p, σ) is obtained (as in both cases) by letting σ ← S^{OT}_s(x, p). We now relate the hybrid ensemble to each of the two ensembles referred to in the claim.
Proof: Recall that r̄ = (r_1, r_2) and s̄ = (v, s, s_1, r_2), where (s, v) ← G^{OT}(1^n) and r_1 = C(v, s_1) for a uniformly chosen s_1 ∈ {0,1}^{ℓ(|v|)} (and r_2 is selected uniformly in {0,1}^n). Also recall that π^1 = (v^1, p^1, σ^1), where v^1 = v, p^1 ← P^{WI}((x^1, C(v, s_1), v), s_1, r_2) and σ^1 ← S^{OT}_s(x^1, p^1). Let us denote (v^2, p^2, σ^2) def= π^2. We need to upper-bound the following:
\[
\Pr\!\left[ (x^2 \notin L) \wedge ((x^2,\pi^2) \neq (x^1,\pi^1)) \wedge (V(x^2, \bar{r}, \pi^2) = 1) \right]
= \Pr\!\left[\begin{array}{l}
(x^2 \notin L) \wedge ((x^2,\pi^2) \neq (x^1,\pi^1)) \\
\wedge\; (V^{OT}_{v^2}((x^2,p^2),\sigma^2) = 1) \\
\wedge\; (V^{WI}((x^2,r_1,v^2),r_2,p^2) = 1)
\end{array}\right] \tag{5.17}
\]
where the equality is due to the definition of V . We consider two cases (in which the
event in Eq. (5.17) may hold):
v^2 = v^1: In this case, either (x^2, p^2) ≠ (x^1, p^1) or σ^2 ≠ σ^1 must hold (because otherwise (x^2, π^2) = (x^2, (v^2, p^2, σ^2)) = (x^1, (v^1, p^1, σ^1)) = (x^1, π^1) follows). But this means that (Ξ^2, Π^2), given a single valid signature σ^1 (to the document (x^1, p^1)) with respect to a randomly generated verification-key v = v^1 = v^2, is able to produce a valid document-signature pair ((x^2, p^2), σ^2) (with respect to the same verification-key) such that ((x^2, p^2), σ^2) ≠ ((x^1, p^1), σ^1), in contradiction to the super-security of the one-time signature scheme.
Details: It suffices to upper-bound
Pr[(v^2 = v^1) ∧ ((x^2, π^2) ≠ (x^1, π^1)) ∧ (V_{v^2}^{OT}((x^2, p^2), σ^2) = 1)]   (5.18)
As explained in the previous paragraph, the first two conditions in Eq. (5.18) imply that ((x^2, p^2), σ^2) ≠ ((x^1, p^1), σ^1). Using (S1, S2) and (Ξ_1, Ξ_2, Π_2), we derive an attacker, A, that violates the super-security of the (one-time) signature scheme. The attacker just emulates the process described in the claim's hypothesis, except that it obtains v as input (rather than generating the pair (s, v)
by invoking G^{OT}) and uses oracle access to S_s^{OT} (rather than s itself) in order to produce the signature σ^1. Specifically, on input v, the attacker A first selects s1 ∈ {0, 1}^{ℓ(|v|)} and r2 ∈ {0, 1}^n uniformly, sets r1 = C(v, s1) and r = (r1, r2), and obtains x^1 ← Ξ_1(r). Next, A computes p^1 ← P^{WI}((x^1, r1, v), s1, r2) and queries S_s^{OT} on (x^1, p^1), obtaining the answer σ^1 ← S_s^{OT}(x^1, p^1) and setting π^1 = (v, p^1, σ^1). (Indeed, π^1 so produced is distributed exactly as S2(s̄, x^1), where s̄ = (v, s, s1, r2), although A does not know s; the argument relies on the fact that S2(s̄, x^1) can be implemented without knowledge of s and while making a single query to the signing oracle S_s^{OT}.) Finally, A sets (x^2, π^2) ← (Ξ_2(r, π^1), Π_2(r, π^1)), and outputs ((x^2, p^2), σ^2), where π^2 = (v^2, p^2, σ^2).
Note that A queries its signing oracle only once. (Recall that A queries SsOT
on (x 1 , p1 ), obtains the answer σ 1 , and produces the output pair ((x 2 , p2 ), σ 2 ).)
On the other hand, the probability that A produces a valid document-signature pair (with respect to the verification-key v) that is different from the (single) query-answer pair it makes equals the probability of the event in Eq. (5.18). Thus, the super-security of the one-time signature scheme implies that the probability in Eq. (5.18) is negligible.
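The single-query forger just described can be outlined as follows; the signing oracle, the commitment, the NIWI prover, and the cheating prover's stages (here called xi1, xi2, pi2) are all passed in as assumed callables, so this is only a summary of the reduction.

```python
import secrets

def forger_A(v, sign_oracle, commit, niwi_prove, xi1, xi2, pi2, n, rand_len):
    """Single-query forger against the one-time signature scheme (outline).
    sign_oracle, commit, niwi_prove and the cheating prover's stages
    xi1, xi2, pi2 are hypothetical stand-ins for the components of the reduction."""
    s1 = secrets.token_bytes(rand_len)       # commitment randomness
    r2 = secrets.token_bytes(n)              # stands in for a uniform n-bit string
    r1 = commit(v, s1)                       # r1 = C(v, s1)
    r = (r1, r2)
    x1 = xi1(r)                              # cheating prover selects the first statement
    p1 = niwi_prove((x1, r1, v), s1, r2)     # pre-proof using the "fake" witness s1
    sigma1 = sign_oracle((x1, p1))           # the single signing query
    pi_1 = (v, p1, sigma1)                   # the simulated proof handed to the cheater
    x2 = xi2(r, pi_1)                        # second statement ...
    v2, p2, sigma2 = pi2(r, pi_1)            # ... and alleged proof for it
    return ((x2, p2), sigma2)                # candidate forgery (relevant when v2 == v)
```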
v^2 ≠ v^1: Since r1 = C(v^1, s1), it follows (by the perfect binding property of C) that r1 is not in the support of C(v^2) (i.e., for every w′, r1 ≠ C(v^2, w′)). Thus, if x^2 ∉ L, then (x^2, r1, v^2) ∉ L′. Now, by the adaptive soundness of (P^{WI}, V^{WI}) and the fact that r2 was selected uniformly in {0, 1}^n, it follows that, except with negligible probability, p^2 is not a valid proof (with respect to the reference string r2) of the false statement "(x^2, r1, v^2) ∈ L′."
Details: It suffices to upper-bound
Pr[(v^2 ≠ v^1) ∧ (x^2 ∉ L) ∧ (V^{WI}((x^2, r1, v^2), r2, p^2) = 1)]   (5.19)
As explained in the previous paragraph, the first two conditions in Eq. (5.19) imply (x^2, r1, v^2) ∉ L′. The key observation is that r2 (generated by S1) is uniformly distributed in {0, 1}^n, and thus the adaptive soundness of the NIWI system
applies. We conclude that Eq. (5.19) is upper-bounded by the (negligible) sound-
ness error of the NIWI system, and the claim follows also in this case.
(See Section C.1 in Appendix C for a discussion of the notion of enhanced trapdoor
permutations.)
We shall show that with the exception of passive attacks on private-key schemes,
non-malleability always implies security against attempts to obtain information on the
plaintext. We shall also show that security and non-malleability are equivalent under
a posteriori chosen ciphertext attack. Thus, the results of the previous sections imply
that non-malleable (under a posteriori chosen ciphertext attack) encryption schemes
can be constructed based on the same assumptions used to construct passively secure
encryption schemes.
5.4.5.1. Definitions
For the sake of brevity, we present only a couple of definitions. Specifically, focusing
on the public-key model, we consider only the simplest and strongest types of attacks;
that is, we first consider (key-oblivious) passive attacks, and then we turn to chosen
ciphertext attacks. The definitions refer to an adversary that is given a ciphertext and
tries to generate a (different) ciphertext to a plaintext related to the original one. That
is, given E e (x), the adversary tries to output E e (y) such that (x, y) ∈ R with respect to
46 Note that considering a randomized process applied to the plaintext does not make the definition stronger.
We stress that the definition effectively prevents the adversary A from just outputting
the ciphertext given to it (because in this case, its output is treated as if it were E e (0|x| )).
This provision is important because otherwise no encryption scheme could have satis-
fied the definition (see Exercise 42). A more subtle issue, which was hand-waved in the
definition, is how to handle the case in which A produces an illegal ciphertext (i.e., is
y defined in such a case to be a standard string [e.g., 1|d| ] or a special error symbol).49
The rest of our text holds under both conventions. Note that A can certainly produce
plaintexts, but its information regarding X n is restricted to h(X n ) (and 1|X n | ). Thus, if
when given h(X n ) and 1|X n | it is infeasible to generate y such that (X n , y) ∈ R, then A
as in Definition 5.4.32 may produce such a y only with negligible probability. Conse-
quently, Definition 5.4.32 implies that in this case, given E e (X n ) (and e, h(X n ), 1|X n | ),
it is infeasible to produce E e (y) such that (X n , y) ∈ R.
47 The computational restriction on R is essential here; see Exercise 16, which refers to a related definition of
semantic security.
48 Potentially, this can only make the definition stronger, because the ability to produce plaintexts implies the
ability to produce corresponding ciphertexts (with respect to a given or a randomly chosen encryption-key).
49 It is interesting to note that in the case of passive attacks, the two possible conventions seem to yield non-
equivalent definitions. The issue is whether the adversary can correlate the generation of an illegal ciphertext
to the encrypted plaintext handed to it. The question of whether this issue is important or not seems to depend
on the type of application. (In contrast, in the case of a posteriori CCA, the two conventions yield equivalent
definitions, because without loss of generality, the attacker may check whether the ciphertext produced by it is
legal.)
Definition 5.4.32 cannot be satisfied by encryption schemes in which one can modify
bits in the ciphertext without changing the corresponding plaintext (i.e., consider the
identity relation). We stress that such encryption schemes may be semantically secure
under passive attacks (e.g., given a semantically secure encryption scheme (G, E, D),
consider E′_e(x) = E_e(x)σ, for randomly chosen σ ∈ {0, 1}). However, such encryption
schemes may not be (semantically) secure under a posteriori CCA.
Turning to the definition of non-malleability under chosen ciphertext attacks, we
adopt the definitional framework of Section 5.4.4.1. Specifically, analogous to Defini-
tion 5.4.13, the challenge template produced by A1 (and A′1) is a triplet of circuits representing a distribution S (represented by a sampling circuit), a function h (represented by an evaluation circuit), and a relation R (represented by a membership recognition circuit). The goal of A2 (and A′2) will be to produce a ciphertext of a plaintext that is
R-related to the challenge plaintext S(Upoly(n) ).
1. For every positive polynomial p and all sufficiently large n and z ∈ {0, 1}poly(n) :
Pr[(x, y) ∈ R, where
     (e, d) ← G(1^n);
     ((S, h, R), σ) ← A1^{E_e, D_d}(e, z);
     (c, v) ← (E_e(x), h(x)), where x ← S(U_{poly(n)});
     c′ ← A2^{E_e}(σ, c, v);
     y ← D_d(c′) if c′ ≠ c, and y ← 0^{|x|} otherwise]
< Pr[(x, y) ∈ R, where
     ((S, h, R), σ) ← A′1(1^n, z);
     x ← S(U_{poly(n)});
     y ← A′2(σ, 1^{|x|}, h(x))]
  + 1/p(n)
2. For every n and z, the first element (i.e., the (S, h, R) part) in the random variables A′1(1^n, z) and A1^{E_{G_1(1^n)}, D_{G_2(1^n)}}(G_1(1^n), z) is identically distributed.
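The real-attack experiment in Condition 1 can be phrased procedurally as follows; the scheme (G, E, D), the adversary stages A1 and A2, and the coin-sampling routine are opaque callables, so this is only an illustration of the order of events, not a definitive formalization of the definition.

```python
def nm_real_experiment(n, z, G, E, D, A1, A2, sample_coins):
    """One run of the real attack in Condition 1; returns True iff the
    adversary's output plaintext y is R-related to the challenge plaintext x.
    All arguments are hypothetical stand-ins for the objects in the definition."""
    e, d = G(n)
    enc_oracle = lambda m: E(e, m)
    dec_oracle = lambda c: D(d, c)
    (S, h, R), sigma = A1(e, z, enc_oracle, dec_oracle)  # challenge template and state
    x = S(sample_coins(n))                               # challenge plaintext
    c, v = E(e, x), h(x)
    c_prime = A2(sigma, c, v, enc_oracle)                # adversary outputs a ciphertext
    y = D(d, c_prime) if c_prime != c else b"\x00" * len(x)   # convention for copying
    return R(x, y)
```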
50 We warn that even in the case of public-key schemes, (single-message) non-malleability (under some type of
attacks) does not necessarily imply the corresponding notion of multiple-message non-malleability.
Proof Sketch: For clarity, the reader may consider the case of passive attacks, but the
same argument holds also for a posteriori chosen ciphertext attacks. Furthermore, the
argument only relies on the hypothesis that (G, E, D) is “non-malleable with respect
to a single (simple) relation.”51
Suppose (toward the contradiction) that (G, E, D) is not semantically secure (under
the relevant type of attack). Using the equivalence to indistinguishability of encryptions,
it follows that under such attacks, one can distinguish encryptions of x_n from encryptions of y_n. Consider the relation R = {(x, x̄) : x ∈ {0, 1}^*}, where x̄ is the complement of
x, and the uniform distribution Z n on {x n , yn }. We construct an algorithm that, given
a ciphertext (as well as an encryption-key e), runs the said distinguisher and produces
E e (x̄n ) in case the distinguisher “votes” for x n (and produces E e ( ȳn ) otherwise). Indeed,
given E e (Z n ), our algorithm outputs E e ( Z̄ n ) (and thus “hits” R) with probability that is
non-negligibly higher than 1/2. This performance cannot be met by any algorithm that
is not given E e (Z n ). Thus, we derive a contradiction to the hypothesis that (G, E, D)
is non-malleable.
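In code form, the algorithm just described is a thin wrapper around the assumed distinguisher; the callables distinguish and E are hypothetical stand-ins for the distinguisher and the encryption algorithm.

```python
def malleability_adversary(e, ciphertext, x_n, y_n, E, distinguish):
    """Given E_e(Z_n), output an encryption of the complement of Z_n;
    `distinguish` is the assumed distinguisher between encryptions of x_n and y_n."""
    def complement(s: bytes) -> bytes:     # bitwise complement of the plaintext
        return bytes(b ^ 0xFF for b in s)
    if distinguish(e, ciphertext):         # distinguisher "votes" for x_n
        return E(e, complement(x_n))
    return E(e, complement(y_n))
```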
We stress that this argument relies only on the fact that in the public-key model, we
can produce the encryption of any string, since we are explicitly given the encryption-
key. In fact, it suffices to have access to an encryption oracle, and thus the argument
extends also to active attacks in the private-key model (in which the attacker is allowed
encryption queries). On the other hand, under most types of attacks considered here,
non-malleability is strictly stronger than semantic security. Still, in the special case of
a posteriori chosen ciphertext attacks, the two notions are equivalent. Specifically, we
prove that in the case of a posteriori CCA, semantic security implies non-malleability.
Proof Sketch: Suppose toward the contradiction that (G, E, D) is not non-malleable
under a posteriori chosen ciphertext attacks, and let A = (A1 , A2 ) be an adversary
demonstrating this. We construct a semantic-security (a posteriori CCA) adversary
51 In order to avoid certain objections, we refrain from using the simpler relation R = {(x, x) : x ∈ {0, 1}∗ }.
B = (B1 , B2 ) that emulates A (while using its own oracles) and produces its own
output by querying its own decryption oracle on the ciphertext output by A, which is
assumed (without loss of generality) to be different from the challenge ciphertext given
to A. The key point is that B can make this extra query because it is an a posteriori
CCA adversary, and thus the difference between outputting a ciphertext and outputting
the corresponding plaintext disappears. Intuitively, B violates semantic security (with
respect to relations and a posteriori CCA, as can be defined analogously to Exercise 16).
Details follow.
Given an encryption-key e, algorithm B1 invokes A1 (e), while answering A1 ’s queries
by querying its own oracles, and obtains the challenge template (S, h, R) (and state σ ),
which it outputs as its own challenge template. Algorithm B2 is given a ciphertext
c (along with the adequate auxiliary information) and invokes A2 on the very same
input, while answering A2 ’s queries by querying its own oracles. When A2 halts with
output c′ ≠ c, algorithm B2 forwards c′ to its decryption oracle and outputs the answer.
Thus, for every relation R, the plaintext output by B “hits” the relation R with the
same probability that the decryption of A’s output “hits” R. We have to show that this
hitting probability cannot be met by a corresponding benign algorithm that does not
get the ciphertext; but this follows from the hypothesis regarding A (and the fact that
in both cases, the corresponding benign algorithm [i.e., A′ or B′] outputs a plaintext
[rather than a ciphertext]). Finally, we have to establish, analogously to Exercise 16,
that semantic security with respect to relations holds (in our current context of chosen
ciphertext attacks) if and only if semantic security (with respect to functions) holds.
The latter claim follows as in Exercise 16 by relying on the fact that in the current
context, the relevant relations have polynomial-size circuits. (A similar argument holds
for private-key schemes.)
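A compact (and simplified) rendering of the wrapper B = (B1, B2); the stages A1 and A2 and the oracles are supplied by the caller, and the state handling is abbreviated.

```python
def B1(e, z, enc_oracle, dec_oracle, A1):
    """B1 forwards A1's challenge template (and state) as its own."""
    (S, h, R), state = A1(e, z, enc_oracle, dec_oracle)
    return (S, h, R), state

def B2(state, c, v, enc_oracle, dec_oracle, A2):
    """B2 runs A2 and then queries its own (a posteriori CCA) decryption oracle
    on A2's output ciphertext, assumed to differ from the challenge c."""
    c_prime = A2(state, c, v, enc_oracle, dec_oracle)
    return dec_oracle(c_prime)             # output a plaintext rather than a ciphertext
```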
Theorem 5.4.37: If there exist (non-uniformly hard) one-way functions, then there
exist private-key encryption schemes that are non-malleable under a posteriori chosen
ciphertext attacks.
5.5. Miscellaneous
52 One reason not to use the public-key encryption scheme itself for the actual (encrypted) communication is that
private-key encryption schemes tend to be much faster.
which can be used to securely communicate an a priori bounded number of bits. Fur-
thermore, multiple messages may be handled provided that their total length is a priori
bounded and that we use a state (as in Construction 5.3.3). We stress that this state-based
private-key perfectly secure encryption scheme uses a key of length equal to the total
length of plaintexts to be encrypted. Indeed, the key must be at least that long (to allow
perfect security), and a state is essential for allowing several plaintexts to be securely
encrypted.
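For concreteness, here is a minimal sketch of such a state-based one-time pad: the key is as long as the total length of all plaintexts, and the shared state records how much of it has already been consumed.

```python
import secrets

class StatefulOneTimePad:
    """Perfectly secure private-key encryption for plaintexts whose total
    length is a priori bounded; the (shared) state is the offset into the key."""

    def __init__(self, total_length_bytes):
        self.key = secrets.token_bytes(total_length_bytes)
        self.offset = 0                      # state: how much of the key has been used

    def encrypt(self, plaintext: bytes) -> bytes:
        pad = self.key[self.offset:self.offset + len(plaintext)]
        if len(pad) < len(plaintext):
            raise ValueError("key exhausted: total plaintext length exceeded")
        self.offset += len(plaintext)
        return bytes(p ^ k for p, k in zip(plaintext, pad))

    # Decryption is the same operation, applied by the receiver to its own
    # synchronized copy of the key and state.
    decrypt = encrypt
```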
Partial Information Models. Note that in the case of private-key encryption schemes,
the limitations of perfect-security hold only if the adversary has full information of the
communication over the channel. On the other hand, perfectly secure private channels
can be implemented on top of channels to which the adversary has limited access.
We mention three types of channels of the latter type, which have received a lot of
attention.
- The bounded-storage model, where the adversary can freely tap the communication channel(s) but is restricted in the amount of data it can store (cf. [148, 48, 187]).53
- The noisy channel model (which generalizes the wiretap channel of [189]), where both the communication between the legitimate parties and the tapping channel of the adversary are subjected to noise (cf. [148, 69] and the references therein).
- Quantum channels, where an adversary is (supposedly) prevented from obtaining full information by the (currently believed) laws of quantum mechanics (cf. [45] and the references therein).
Following are the author’s subjective opinions regarding these models (as a possible
basis for actual secure communication). The bounded-storage model is very appealing,
because it clearly states its reasonable assumptions regarding the abilities of the ad-
versary. In contrast, making absolute assumptions about the noise level at any point in
time seems (overly) optimistic, and thus not adequate in the context of cryptography.
Basing cryptography on quantum mechanics sounds like a very appealing idea, but at-
tempts to implement this idea have often stumbled over unjustified hidden assumptions
(which are to be expected, given the confusing nature of quantum mechanics and the
discrepancy between its scientific culture and cryptography).
53 Typically, this model postulates the existence of an auxiliary (uni-directional) public channel on which a trusted
party (called a beacon) transmits a huge amount of random bits.
54 Typically, these schemes are not (semantically) secure. Furthermore, these proposals fail to suggest a weaker
55 The linear congruential generator is easy to predict [43]. The same holds for some modifications of it that
output a constant fraction of the bits of each resulting number [94]. We warn that sequences having large
linear-complexity (LFSR-complexity) are not necessarily hard to predict.
Adleman [176] and by Merkle and Hellman [154]. The abstract notion, as well as
the concrete candidate implementations (especially the RSA scheme of [176]), have
been the driving force behind the theoretical study of encryption schemes. However, the
aforementioned pioneering works did not provide a definition of security. Such satisfac-
tory definitions were provided (only a few years later) by Goldwasser and Micali [123].
The two definitions presented in Section 5.2 originate in [123], where it was shown that
ciphertext-indistinguishability implies semantic security. The converse direction is due
to [156].
Regarding the seminal paper of Goldwasser and Micali [123], a few additional com-
ments are in place. Arguably, this paper is the basis of the entire rigorous approach
to cryptography (presented in the current work): It introduced general notions such
as computational indistinguishability, definitional approaches such as the simulation
paradigm, and techniques such as the hybrid argument. Its title (“Probabilistic Encryp-
tion”) is due to the authors’ realization that public-key encryption schemes in which
the encryption algorithm is deterministic cannot be secure in the sense defined in their
paper. Indeed, this led the authors to (explicitly) introduce and justify the paradigm
of “randomizing the plaintext” as part of the encryption process. Technically speak-
ing, the paper only presents security definitions for public-key encryption schemes,
and furthermore, some of these definitions are syntactically different from the ones
we have presented here (yet all these definitions are equivalent). Finally, the term
“ciphertext-indistinguishability” used here replaces the (generic) term “polynomial-
security” used in [123]. Many of our modifications (to the definitions in [123]) are
due to Goldreich [104], which is also the main source of our uniform-complexity
treatment.56
The first construction of a secure public-key encryption scheme based on a sim-
ple complexity assumption was given by Goldwasser and Micali [123].57 Specifically,
they constructed a public-key encryption scheme assuming that deciding Quadratic
Residuosity modulo composite numbers is intractable. The condition was weakened
by Yao [190], who showed that any trapdoor permutation will do. The efficient public-
key encryption scheme of Construction 5.3.20 is due to Blum and Goldwasser [41].
The security is based on the fact that the least-significant bit of the modular squaring
function is a hard-core predicate, provided that factoring is intractable, a result mostly
due to [1].
For decades, it has been common practice to use “pseudorandom generators” in the
design of stream-ciphers. As pointed out by Blum and Micali [42], this practice is sound
provided that one uses pseudorandom generators (as defined in Chapter 3 of this work).
The construction of private-key encryption schemes based on pseudorandom functions
is due to [111].
We comment that it is indeed peculiar that the rigorous study of (the security of)
private-key encryption schemes has lagged behind the corresponding study of public-
key encryption schemes. This historical fact may be explained by the very thing that
56 Section 5.2.5.5 was added during the copyediting stage, following discussions with Johan Håstad.
57 Recall that plain RSA is not secure, whereas Randomized RSA is based on the Large Hard-Core Conjecture for
RSA (which is less appealing than the standard conjecture referring to the intractability of inverting RSA).
makes it peculiar; that is, private-key encryption schemes are less complex than public-
key ones, and hence, the problematics of their security (when applied to popular can-
didates) is less obvious. In particular, the need for a rigorous study of (the security of)
public-key encryption schemes arose from observations regarding some of their con-
crete applications (e.g., doubts raised by Lipton concerning the security of the “mental
poker” protocol of [184], which used “plain RSA” as an encryption scheme). In con-
trast, the need for a rigorous study of (the security of) private-key encryption schemes
arose later and by analogy to the public-key case.
Definitional Issues. The original definitional treatment of Goldwasser and Micali [123]
actually refers to key-dependent passive attacks (rather than to key-oblivious passive
attacks). Chosen ciphertext attacks (of the a priori and a posteriori type) were first
considered in [164] (and [174], respectively). However, these papers focused on the
formulation in terms of indistinguishability of encryptions, and formulations in terms
of semantic security have not appeared before. Section 5.4.4.2 is based on [116]. The
study of the non-malleability of the encryption schemes was initiated by Dolev, Dwork,
and Naor [77].
5.5.7. Exercises
Exercise 1: Secure encryption schemes imply secure communication protocols: A
secure communication protocol is a two-party protocol that allows the parties
to communicate in secrecy (i.e., as in Definition 5.2.1). We stress that the sender
58 We comment that the “reasonably-efficient” scheme of [68] is based on a strong assumption regarding a specific
computational problem related to the Diffie-Hellman Key Exchange. Specifically, it is assumed that for a prime P
and primitive element g, given (P, g, (g x mod P), (g y mod P), (g z mod P)), it is infeasible to decide whether
z ≡ x y (mod P − 1).
enters such a protocol with input that equals the message to be delivered, and the
receiver enters with no input (or with input that equals the security parameter).
1. Show that any secure public-key encryption scheme yields a (two-message) secure
communication protocol.
2. Define secure communication protocol with initial set-up, and show that any se-
cure private-key encryption scheme yields such a (one-message) protocol. (Here,
the communicating parties obtain an [equal] auxiliary input that is generated at
random according to some pre-determined process.)
Advanced: Show that a secure communication protocol (even with initial set-up but
with a priori unbounded messages) implies the existence of one-way functions.
Guideline (advanced part): See guideline for Exercise 2.
Exercise 2: Secure encryption schemes imply one-way function [132]: Show that the
existence of a secure private-key encryption scheme (i.e., as in Definition 5.2.1)
implies the existence of one-way functions.
Guideline: Recall that, by Exercise 11 of Chapter 3 in Volume 1, it suffices to prove
that the former implies the existence of a pair of polynomial-time constructible
probability ensembles that are statistically far apart and still are computationally in-
distinguishable. To prove the existence of such ensembles, consider the encryption
of (n + 1)-bit plaintexts relative to a random n-bit long key, denoted K_n. Specifically, let the first ensemble be {(U_{n+1}, E(U_{n+1}))}_{n∈N}, where E(x) = E_{K_n}(x), and the second ensemble be {(U^{(1)}_{n+1}, E(U^{(2)}_{n+1}))}_{n∈N}, where U^{(1)}_{n+1} and U^{(2)}_{n+1} are independently distributed. It is easy to show that these ensembles are computationally
indistinguishable and are both polynomial-time constructible. The more interesting
part is to show that these ensembles are statistically far apart. Note that the cor-
rect decryption condition implies that (K n , E K n (Un+1 )) contains n + 1 − o(1) bits
of information about Un+1 . On the other hand, if these ensembles are statistically
close, then E K n (Un+1 ) contains o(1) bits of information about Un+1 . Contradiction
follows, because K n may contain at most n bits of information.
Exercise 4: Encryption schemes must leak information about the length of the plain-
text: Suppose that the definition of semantic security is modified so that the
algorithms are not given the length of the plaintext. Prove that in such a case there
exists no semantically secure encryption scheme.
Guideline: First show that for some polynomial p, |E(1n )| < p(n) (always holds),
whereas for some x ∈ {0, 1} p(n) it must hold that Pr[|E(x)| < p(n)] < 1/2.
Exercise 5: Hiding partial information about the length of the plaintext: Using an ar-
bitrary secure encryption scheme, construct a correspondingly secure encryption
scheme that hides the exact length of the plaintext. In particular, construct an en-
cryption scheme that reveals only the following function h of the length of the
plaintext:
(Hint: Just use an adequate padding convention, making sure that it always allows
correct decryption.)
where C′_n ← T(C_n) and the probability is also taken over the internal coin tosses of T.
Pr[U(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)]
  < Pr[U(1^n, 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)] + 1/p(n)
Still, a gap remains between Definition 5.2.1 and this definition: The latter refers only to one possible deterministic algorithm U, whereas Definition 5.2.1 refers
to all probabilistic polynomial-time algorithms. To close the gap, we first observe
that (by Propositions 5.2.7 and 5.2.6), Definition 5.2.1 is equivalent to a form
in which one only quantifies over deterministic polynomial-time algorithms A.
We conclude by observing that one can code any algorithm A (and polynomial
time-bound) referred to by Definition 5.2.1 in the auxiliary input (i.e., h(1n , X n ))
given to U .
Guideline: Use the furthermore-clause of Proposition 5.2.7 to show that the new
definition implies indistinguishability of encryptions, and conclude by applying
Proposition 5.2.6 and invoking Exercise 9.
Exercise 11: An alternative formulation of Definition 5.2.3: Prove that Definition 5.2.3
remains unchanged when supplying the circuit with auxiliary input. That is, an
encryption scheme satisfies the modified Definition 5.2.3 if and only if
59 Equivalently, one may require that for any polynomial-size circuit family {C_n}_{n∈N} there exists a polynomial-size circuit family {C′_n}_{n∈N} satisfying the relevant inequality.
Exercise 12: Equivalence of the security definitions in the public-key model: Prove
that a public-key encryption scheme is semantically secure if and only if it has
indistinguishable encryptions.
Exercise 13: The technical contents of semantic security: The following explains the
lack of computational requirements regarding the function f , in Definition 5.2.1.
Prove that an encryption scheme, (G, E, D), is (semantically) secure (in the private-
key model) if and only if the following holds:
There exists a probabilistic polynomial-time algorithm A such that for ev-
ery {X n }n∈N and h as in Definition 5.2.1, the following two ensembles are
computationally indistinguishable:
Guideline: We care mainly about the fact that the latter formulation implies se-
mantic security. The other direction can be proven analogously to the proof of
Proposition 5.2.7.
Guideline (Part 1): Prove that this special case (i.e., obtained by the restriction on
h) is equivalent to the general one. This follows by combining Propositions 5.2.7
and 5.2.6. Alternatively, this follows by considering all possible probability ensem-
bles {X n }n∈N obtained from {X n }n∈N by conditioning that h(1n , X n ) = an (for every
possible sequence of an ’s).
Guideline (Part 2): The claim regarding h follows from Part 1. To establish the
claim regarding X n , observe that (by Propositions 5.2.7 and 5.2.6) we may consider
the case in which X n ranges over two strings.
Exercise 15: A variant on Exercises 13 and 14.1: Prove that an encryption scheme,
(G, E, D), is (semantically) secure (in the private-key model) if and only if the
following holds:
Guideline: Again, we care mainly about the fact that this variant implies seman-
tic security. The easiest proof of this direction is by applying Propositions 5.2.7
and 5.2.6. A more interesting proof is obtained by using Exercise 13: Indeed, the
current formulation is a special case of the formulation in Exercise 13, and so
we need to prove that it implies the general case. The latter is proven by observ-
ing that otherwise – using an averaging argument – we derive a contradiction in
one of the residual probability spaces defined by conditioning on h(1n , X n ) (i.e.,
(X n |h(1n , X n ) = v) for some v).
Exercise 16: Semantic security with respect to relations: The formulation of seman-
tic security in Definition 5.2.1 refers to computing a function (i.e., f ) of the
plaintext. Here we present a (related) definition that refers to finding strings that
are in a certain relation to the plaintext. Note that, unlike in Definition 5.2.1,
here we consider only efficiently recognizable relations. Specifically, we require the
following:
1. Prove that this definition is in fact equivalent to the standard definition of semantic
security.
2. Show that if the computational restriction on the relation R is removed, then no
encryption scheme can satisfy the resulting definition.
Exercise 17: Semantic security with a randomized h: The following syntactic strength-
ening of semantic security is important in some applications. Its essence is in consid-
ering information related to the plaintext, in the form of a related random variable,
rather than partial information about the plaintext (in the form of a function of
it). Prove that an encryption scheme, (G, E, D), is (semantically) secure (in the
private-key model) if and only if the following holds:
For every probabilistic polynomial-time algorithm A there exists a probabilistic polynomial-time algorithm A′ such that for every {(X_n, Z_n)}_{n∈N}, with
|(X n , Z n )| = poly(n), where Z n may depend arbitrarily on X n , and f , p, and
n as in Definition 5.2.1
Pr A(1n , E G 1 (1n ) (X n ), 1|X n | , Z n ) = f (1n , X n )
1
< Pr A (1n , 1|X n | , Z n ) = f (1n , X n ) +
p(n)
That is, the auxiliary input h(1n , X n ) of Definition 5.2.1 is replaced by the random
variable Z n . Formulate and prove an analogous claim for the public-key model.
Guideline: Definition 5.2.1 is clearly a special case of the latter formulation. On
the other hand, the proof of Proposition 5.2.6 extends easily to this (seemingly
stronger) formulation of semantic security.
Exercise 18: Semantic Security with respect to Oracles (suggested by Boaz Barak):
Consider an extended definition of semantic security in which, in addition to the
regular inputs, the algorithms have oracle access to a function H_{1^n,x} : {0, 1}^* → {0, 1}^* (instead of being given the value h(1^n, x)). The H_{1^n,x}'s have to be restricted to
have polynomial (in n + |x|) size circuits. That is, an encryption scheme, (G, E, D),
is extended-semantically secure (in the private-key model) if the following holds:
For every probabilistic polynomial-time algorithm A there exists a proba-
bilistic polynomial-time algorithm B such that for every ensemble {X n }n∈N ,
with |X n | = poly(n), every polynomially bounded function f , every family of
polynomial-sized circuits {H_{1^n,x}}_{n∈N, x∈{0,1}^*}, every positive polynomial p, and
all sufficiently large n
Pr[A^{H_{1^n,X_n}}(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}) = f(1^n, X_n)]
  < Pr[B^{H_{1^n,X_n}}(1^n, 1^{|X_n|}) = f(1^n, X_n)] + 1/p(n)
Guideline (for Part 1): The proof is almost identical to the proof of Proposi-
tion 5.2.6: The algorithm B forms an encryption of 1|X n | , and invokes A on it. In-
distinguishability of encryptions is used in order to establish that B H1n , X n (1n , 1|X n | )
performs essentially as well as A H1n , X n (1n , 1|X n | , E(X n )). Otherwise, we obtain a
distinguisher of E(xn ) from E(1|xn | ), for some infinite sequence of xn ’s. In particu-
lar, the oracle H1n , xn (being implementable by a small circuit) can be incorporated
into a distinguisher.
Guideline (for Part 2): In such a case, H1n , x may be defined such that, when queried
about a ciphertext, it reveals the decryption-key in use.60 Such an oracle allows A
(which is given a ciphertext) to recover the corresponding plaintext, but does not
help A (which is only given 1n , 1|X n | ) to find any information about the value of
Xn.
Exercise 19: Another equivalent definition of security: The following exercise is inter-
esting mainly for historical reasons. In the definition of semantic security appearing
in [123], the term max_{u,v}{Pr[f(1^n, X_n) = v | h(1^n, X_n) = u]} appears instead of the term Pr[A′(1^n, 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)]. That is, it is required that the follow-
ing holds:
For every probabilistic polynomial-time algorithm A, every ensemble {X n }n∈N ,
with |X n | = poly(n), every pair of polynomially bounded functions f, h :
{0, 1}∗ → {0, 1}∗ , every positive polynomial p, and all sufficiently large n
Pr[A(1^n, E_{G_1(1^n)}(X_n), 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)]
  < max_{u,v} Pr[f(1^n, X_n) = v | h(1^n, X_n) = u] + 1/p(n)
Prove that this formulation is in fact equivalent to Definition 5.2.1.
Guideline: First, note that this definition is implied by Definition 5.2.1 (because max_{u,v}{Pr[f(1^n, X_n) = v | h(1^n, X_n) = u]} ≥ Pr[A′(1^n, 1^{|X_n|}, h(1^n, X_n)) = f(1^n, X_n)], for every algorithm A′). Next note that in the special case in which X_n satisfies Pr[f(1^n, X_n) = 0 | h(1^n, X_n) = u] = Pr[f(1^n, X_n) = 1 | h(1^n, X_n) = u] = 1/2, for all u's, the previous terms are equal (because A′ can easily achieve success probability 1/2 by simply always outputting 1). Finally, combining Propositions 5.2.7
and 5.2.6, infer that it suffices to consider only the latter special case.
60 This refers to the private-key case, whereas in the public-key case, H1n , x may be defined such that, when queried
about an encryption-key, it reveals the decryption-key in use.
Exercise 22: Known plaintext attacks: Loosely speaking, in a known plaintext attack
on a private-key (resp., public-key) encryption scheme, the adversary is given some
plaintext/ciphertext pairs in addition to some extra ciphertexts (without correspond-
ing plaintexts). Semantic security in this setting means that whatever can be effi-
ciently computed about the missing plaintexts can also be efficiently computed given
only the length of these plaintexts.
1. Provide formal definitions of security under known plaintext attacks, treating both
the private-key and public-key models and referring to both the single-message
and multiple-message settings.
2. Prove that any secure public-key encryption scheme is also secure in the presence
of known plaintext attacks.
3. Prove that any private-key encryption scheme that is secure in the multiple-
message setting is also secure in the presence of known plaintext attacks.
Guideline (for Part 3): Consider a function h in the multiple-message setting that
reveals some of the plaintexts.
and d^{(j)} = D(d^{(j−1)}, 1^{|E(e^{(j−1)}, 1^{|α_j|})|}) for j = 1, ..., i − 1. Prove the equivalence of the two formulations.
Exercise 25: On the standard notion of block-cipher: A standard block-cipher is a
triple, (G, E, D), of probabilistic polynomial-time algorithms that satisfies Defini-
tion 5.3.5 as well as |E_e(α)| = ℓ(n) for every pair (e, d) in the range of G(1^n) and every α ∈ {0, 1}^{ℓ(n)}.
1. Prove that a standard block-cipher cannot be semantically secure (in the multiple-
message private-key model). Furthermore, show that any semantically secure
encryption scheme must employ ciphertexts that are longer than the corresponding
plaintexts.
Guideline (for Part 1): Consider the encryption of a pair of two identical messages
versus the encryption of a pair of two different messages, and use the fact that
E_e must be a permutation of {0, 1}^{ℓ(n)}. Extend the argument to any encryption scheme in which plaintexts of length ℓ(n) are encrypted by ciphertexts of length ℓ(n) + O(log n), observing that in this case most plaintexts have only poly(n)-many ciphertexts under E_e.
Exercise 28: On the importance of restricting the ensembles {h e }e∈{0,1}∗ and {X e }e∈{0,1}∗
in Definition 5.4.1:
Exercise 29: An alternative formulation of Definition 5.4.1: Show that the following
formulation of the definition of admissible ensembles {h e }e and {X e }e is equivalent
to the one in Definition 5.4.1:
- There is a non-uniform family of polynomial-size circuits {T_n} that transform encryption-keys (i.e., e in G_1(1^n)) into circuits that compute the corresponding functions (i.e., h_e). That is, on input e ← G_1(1^n), the circuit T_n outputs a circuit C_e such that C_e(x) = h_e(x) holds for all strings of adequate length (i.e., ≤ poly(|e|)).
Note that this formulation is in greater agreement with the motivating discussion pre-
ceding Definition 5.4.1. The formulation in Definition 5.4.1 was preferred because
of its relative simplicity.
Exercise 30: Alternative formulations of Definitions 5.4.1 and 5.4.2: Following the
framework of Section 5.4.3, present alternative definitions of security for key-
dependent passive attacks (by replacing the oracle machines A1 and A2 in Def-
initions 5.4.8 and 5.4.9 with ordinary machines). Show that these definitions are
equivalent to Definitions 5.4.1 and 5.4.2.
Guideline: For example, show how to derive circuits Pn and C n (as in Defini-
tion 5.4.2) from the machines A1 , A2 and the auxiliary input z (of Definition 5.4.9).
Exercise 32: Key-oblivious versus key-dependent passive attacks: Assuming the ex-
istence of secure public-key encryption schemes, show that there exists one that
satisfies the basic definition (i.e., as in Definition 5.2.2) but is insecure under
key-dependent passive attacks (i.e., as in Definition 5.4.1).
Guideline: Given a scheme (G, E, D), define (G, E′, D′) such that E′_e(x) = (1, E_e(x)) if x ≠ e and E′_e(x) = (0, x) otherwise (i.e., for x = e). Using Exercise 7 (which establishes that each encryption-key is generated with negligible probability), show that (G, E′, D′) satisfies Definition 5.2.2. Alternatively, use G′(1^n) = ((r, G_1(1^n)), G_2(1^n)), where r is uniformly distributed in {0, 1}^n, which immediately implies that each encryption-key is generated with negligible probability.
Exercise 33: Passive attacks versus Chosen Plaintext Attacks: Assuming the existence
of secure private-key encryption schemes, show that there exists one that is secure
in the standard (multi-message) sense (i.e., as in Definition 5.2.8) but is insecure
under a chosen plaintext attack (i.e., as in Definition 5.4.8).
Guideline: Given a scheme (G, E, D), define (G′, E′, D′) such that
1. G′(1^n) = ((k, r), (k, r)), where (k, k) ← G(1^n) and r is selected uniformly in {0, 1}^n.
2. E′_{(k,r)}(x) = (1, r, E_k(x)) if x ≠ r and E′_{(k,r)}(x) = (0, k, x) otherwise (i.e., for x = r).
Show that (G′, E′, D′) is secure in the standard sense, and present a (simple but very "harmful") chosen plaintext attack on it.
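A sketch of this contrived scheme and of the attack, assuming some secure private-key scheme is available through the callables gen and enc (both hypothetical); the point is only that a single chosen-plaintext query on r exposes the key k.

```python
import secrets

def G_prime(n, gen):
    """Key generation of the contrived scheme: the key is (k, r)."""
    k = gen(n)                        # key of the underlying secure scheme
    r = secrets.token_bytes(n)        # stands in for a uniformly chosen n-bit string
    return (k, r)

def E_prime(key, x, enc):
    """Encryption: every ciphertext reveals r, and encrypting r itself leaks k."""
    k, r = key
    if x != r:
        return (1, r, enc(k, x))
    return (0, k, x)

def cpa_attack(enc_oracle):
    """Chosen-plaintext attack: learn r from any ciphertext, then query r."""
    _, r, _ = enc_oracle(b"anything")  # first query reveals r
    tag, k, _ = enc_oracle(r)          # querying r reveals the key k
    assert tag == 0
    return k
```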
Exercise 34: Alternative formulations of semantic security for CPA and CCA: Consider
an alternative form of Definition 5.4.8 (resp., Definition 5.4.13) in which A′1(1^n, z) is replaced by A′1^{E_e}(e, z) (resp., A′1^{E_e, D_d}(e, z)), where (e, d) ← G(1^n) and Condition 2 is omitted. Show that the current form is equivalent to the one presented in the main text.
Guideline: The alternative forms presented here restrict the choice of A′1 (to a canonical one), and thus the corresponding definitions are at least as strong as the ones in the main text. However, since Theorem 5.4.11 (resp., Theorem 5.4.15) is established using the canonical A′1, it follows that the current definitions are actually equivalent to the ones in the main text. We comment that we consider the formulation in the main text to be more natural, albeit more cumbersome.
Exercise 35: Chosen Plaintext Attacks versus Chosen Ciphertext Attacks: Assuming
the existence of private-key (resp., public-key) encryption schemes that are secure
under a chosen plaintext attack, show that there exists one that is secure in the
former sense but is not secure under a chosen ciphertext attack (not even in the a
priori sense).
Guideline: Given a scheme (G, E, D), define (G′, E′, D′) such that G′ = G and
1. E′_e(x) = (1, E_e(x)) with probability 1 − 2^{−|e|} and E′_e(x) = (0, x) otherwise.
2. D′_d(1, y) = D_d(y) and D′_d(0, y) = (d, y).
Recall that decryption is allowed to fail with negligible probability, and note that the construction is adequate for both public-key and private-key schemes. Alternatively, to obtain error-free decryption, define E′_e(x) = (1, E_e(x)), D′_d(1, y) = D_d(y), and
D′_d(0, y) = (d, y). In the case of private-key schemes, we may define E′_k(k) = (0, 1^{|k|}) and E′_k(x) = (1, E_k(x)) for x ≠ k.
Exercise 36: Chosen Ciphertext Attacks: a priori versus a posteriori: Assuming the existence of private-key (resp., public-key) encryption schemes that are secure under an a priori chosen ciphertext attack, show that there exists one that is secure in the former sense but is not secure under an a posteriori chosen ciphertext attack.
Guideline: Given a scheme (G, E, D), define (G′, E′, D′) such that G′ = G and
1. E′_e(x) = (b, E_e(x)), where b is uniformly selected in {0, 1}.
2. D′_d(b, y) = D_d(y).
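The corresponding sketch for this guideline (with enc and dec standing in for the underlying scheme): decryption ignores the leading bit, so an a posteriori CCA attacker may flip it and query the resulting (different) ciphertext.

```python
import secrets

def E_prime(e, x, enc):
    """Encryption prepends a uniformly chosen bit that decryption ignores."""
    b = secrets.randbelow(2)
    return (b, enc(e, x))

def D_prime(d, ciphertext, dec):
    b, y = ciphertext                  # the leading bit is simply ignored
    return dec(d, y)

def a_posteriori_cca_attack(challenge, dec_oracle):
    """Flip the leading bit of the challenge; the result is a different
    ciphertext of the same plaintext, so the decryption oracle returns it."""
    b, y = challenge
    return dec_oracle((1 - b, y))
```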
Guideline: Show how the modified model of Part 1 can emulate the original model
(that’s easy), and how the original model can emulate the modified model of Part 1
(e.g., replace the query (S, h) by the pair of queries (S, 0) and (id, h)). Next relate
the models in Parts 1 and 2.
Exercise 39: On the computational restriction on the choice of input in the definition
of adaptive NIZK: Show that if Definition 5.4.22 is strengthened by waiving the
computational bounds on Ξ, then only trivial NIZKs (i.e., NIZKs for languages in BPP) can satisfy it.
61 Furthermore, we may even restrict this challenge query to be of the form (S, 0), where 0 is the all-zero function
(which yields no information).
Exercise 40: Weak simulation-soundness can hold only with respect to computationally
bounded cheating provers. Show that if Definition 5.4.24 is strengthened by waiving
the computational bounds on the cheating prover (Ξ, Π), then only trivial NIZKs (i.e., for languages in BPP) can satisfy it.
Guideline: Show that otherwise the two-stage simulation procedure, S = (S1 , S2 ),
can be used to distinguish inputs in the language L from inputs outside the language,
because in the first case it produces a valid proof whereas in the second case it cannot
do so. The latter fact is proved by showing that if S2 (which also gets an auxiliary
input s produced by S1 along with the reference string) produces a valid proof for
some x ∈ L, then a computationally unbounded prover may do the same by first
generating s according to the conditional distribution induced by the reference string
(and then invoking S2 ).
Exercise 41: Does weak simulation-soundness hold for all adaptive NIZKs?
1. Detect the flaw in the following argument toward an affirmative answer: If weak
simulation-soundness does not hold, then we can distinguish a uniformly selected
reference string (for which soundness holds) from a reference string generated
by S1 (for which soundness does not hold).
2. Assuming the existence of one-way permutations (and adaptive NIZKs), show
an adaptive NIZK with a suitable simulator such that weak simulation-soundness
does not hold.
3. (Suggested by Boaz Barak and Yehuda Lindell): Consider languages containing
pairs (α, x) such that one can generate α’s along with suitable trapdoors t(α)’s that
allow for determining whether or not inputs of the form (α, ·) are in the language.
For such languages, define a weaker notion of simulation-soundness that refers
to the setting in which a random α is generated and then one attempts to produce
valid proofs for a no-instance of the form (α, ·) with respect to a reference-string
generated by S1 . (The weaker notion asserts that in this setting it is infeasible to
produce a valid proof for such a no-instance.) Provide a clear definition, prove
that it is satisfied by any adaptive NIZK for the corresponding language, and show
that this definition suffices for proving Theorem 5.4.27.
Guideline (Part 1): The existence of an efficient C = (Ξ, Π) that violates weak simulation-soundness only means that for a reference string generated by S1, the cheating prover Π generates a valid proof for a no-instance selected by Ξ. When C is given a uniformly selected reference string, it either may fail to produce a valid proof or
may produce a valid proof for a yes-instance. However, we cannot necessarily
distinguish no-instances from yes-instances (see, for example, Part 2). This gap is
eliminated in Part 3.
CHAPTER SIX

Digital Signatures and Message Authentication
Message authentication and (digital) signatures were the first tasks that joined en-
cryption to form modern cryptography. Both message authentication and digital sig-
natures are concerned with the “authenticity” of data, and the difference between
them is analogous to the difference between private-key and public-key encryption
schemes.
In this chapter, we define message authentication and digital signatures, and the se-
curity notions associated with them. We show how to construct message-authentication
schemes using pseudorandom functions, and how to construct signature schemes using
one-way permutations. We stress that the latter construction employs arbitrary one-way
permutations, which do not necessarily have a trapdoor.
Teaching Tip. In contrast to the case of encryption schemes (cf. Chapter 5), the def-
initional treatment of signatures (and message authentication) is quite simple. The
treatment of length-restricted schemes (see Section 6.2) plays an important role in
the construction of standard schemes, and thus we strongly recommend highlighting
this treatment. We suggest focusing on the presentation of the simplest construction of
message-authentication schemes (provided in Section 6.3.1) and on the (not-so-simple)
construction of signature schemes that is provided in Sections 6.4.1 and 6.4.2. As in
Chapter 5, we assume that the reader is familiar with the material in Chapters 2 and 3 of
Volume 1 (and specifically with Sections 2.2, 2.4, and 3.6). This familiarity is important
not only because we use some of the notions and results presented in these sections but
also because we use similar proof techniques (and do so while assuming that this is not
the reader’s first encounter with these techniques).
Both signature schemes and message-authentication schemes are methods for “validat-
ing” data, that is, verifying that the data was approved by a certain party (or set of parties).
The difference between signature schemes and message-authentication schemes is that
“signatures” should be “universally verifiable,” whereas “authentication tags” are only
required to be verifiable by parties that are also able to generate them. It is customary
to discuss each of these two types of schemes separately, and we start by providing a
brief overview of such a nature. We then turn to our actual treatment, which applies to
both types of schemes in a unified manner.
- Each user can efficiently produce his/her own signature on documents of his/her choice;
- every user can efficiently verify whether a given string is a signature of another (specific) user on a specific document; but
- it is infeasible to produce signatures of other users to documents that they did not sign.
We note that the formulation of unforgeable digital signatures also provides a clear
statement of the essential ingredients of handwritten signatures. The ingredients are
each person’s ability to sign for him/herself, a universally agreed-upon verification
procedure, and the belief (or assertion) that it is infeasible (or at least hard) to
forge signatures in a manner that passes the verification procedure. It is not clear
to what extent handwritten signatures do meet these requirements. In contrast, our
treatment of digital-signature schemes provides precise statements concerning the
extent to which digital signatures meet these requirements. Furthermore, unforge-
able digital signature schemes can be constructed based on the existence of one-way
functions.
Message authentication is a task related to the setting considered for encryp-
tion schemes; that is, communication over an insecure channel. This time, we con-
sider an active adversary that is monitoring the channel and may alter the mes-
sages sent on it. The parties communicating through this insecure channel wish
to authenticate the messages they send so that their counterpart can tell an orig-
inal message (sent by the sender) from a modified one (i.e., modified by the
adversary). Loosely speaking, a scheme for message authentication should satisfy the
following:
- Each of the communicating parties can efficiently produce an authentication tag to any message of his/her choice;
- each of the communicating parties can efficiently verify whether a given string is an authentication tag of a given message; but
- it is infeasible for an external adversary (i.e., a party other than the communicating parties) to produce authentication tags to messages not sent by the communicating parties.
Note that in contrast to the specification of signature schemes, we do not require uni-
versal verification: Only the designated receiver is required to be able to verify the
authentication tags. Furthermore, we do not require that the receiver be unable to pro-
duce authentication tags by itself (i.e., we only require that external parties not be able
to do so). Thus, message-authentication schemes cannot convince a third party that the
sender has indeed sent the information (rather than the receiver having generated it by
itself ). In contrast, signatures can be used to convince third parties. In fact, a signature
to a document is typically sent to a second party so that in the future, this party may
(by merely presenting the signed document) convince third parties that the document
was indeed generated (or sent or approved) by the signer.
the possession of the signing-key constitutes the sender’s advantage over the adversary
(see analogous discussion in Chapter 5). That is, without the signing-key, it is infeasible
to generate valid signatures (with respect to the corresponding verification-key). Fur-
thermore, even after receiving signatures to messages of its choice, an adversary (lacking
the signing-key) cannot generate a valid signature to any other message.
As previously stated, the ability to produce valid signatures is linked to the knowl-
edge of the signing-key. Loosely speaking, “security” (or “unforgeability”) means the
infeasibility of producing valid signatures without knowledge of the signing-key, where
validity means passing verification with respect to the corresponding verification-key.
The difference between message-authentication and signature schemes amounts to the
question of whether “security” also holds when the verification-key is publicly known:
In the case of message-authentication schemes, the verification-key is assumed to be
kept secret (and so these schemes are of the “private-key” type), whereas in the case
of signature schemes, the verification-key may be made public (and so these schemes
are of the “public-key” type). Thus, the difference between message-authentication
and signature schemes is captured by the security definition, and effects the possible
applications of these schemes.
From the point of view of their functionality, the difference between message-
authentication and signature schemes arises from the difference in the settings for which
they are intended, which amounts to a difference in the identity of the receiver and in
the level of trust that the sender has in the receiver. Typically, message-authentication
schemes are employed in cases where the receiver is predetermined (at the time of
message transmission) and is fully trusted by the sender, whereas signature schemes
allow verification of the authenticity of the data by anybody (which is certainly not
trusted by the sender). In other words, signature schemes allow for universal verifica-
tion, whereas message-authentication schemes may only allow predetermined parties
to verify the authenticity of the data. Thus, in signature schemes the verification-key
must be known to anybody, and in particular is known to the adversary. In contrast, in
message-authentication schemes, the verification-key is only given to a set of predeter-
mined receivers that are all trusted not to abuse this knowledge; that is, in such schemes
it is postulated that the verification-key is not (a priori) known to the adversary. (See
Figure 6.1.)
lacks a good name (since both authentication and signatures are already taken by one
of the two versions). Still, seeking a uniform terminology, we shall sometimes refer to
message-authentication schemes (also known as Message Authentication Codes [mac])
as private-key signature schemes. Analogously, we shall sometimes refer to signature schemes as public-key signature schemes.
where the probability is taken over the internal coin tosses of algorithms S and V .
The integer n serves as the security parameter of the scheme. Each (s, v) in the range
of G(1n ) constitutes a pair of corresponding signing/verification keys.
schemes, v is not given to the “adversary” (and thus one may assume, without loss of
generality, that v = s).
Notation. In the rest of this work, we shall write Ss (α) instead of S(s, α) and Vv (α, β)
instead of V (v, α, β). Also, we let G 1 (1n ) (resp., G 2 (1n )) denote the first (resp., second)
element in the pair G(1n ). That is, G(1n ) = (G 1 (1n ), G 2 (1n )). Without loss of generality,
we may assume that |G 1 (1n )| and |G 2 (1n )| are polynomially related to n, and that each
of these integers can be efficiently computed from the other.
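In this notation, the completeness requirement of Definition 6.1.1 (honestly produced signatures are accepted) can be exercised as follows; the triple (G, S, V) is any candidate scheme supplied by the caller, so nothing here is specific to a concrete construction.

```python
def completeness_check(G, S, V, n, document: bytes) -> bool:
    """One run of the completeness experiment for a candidate scheme (G, S, V):
    generate keys, sign the document, and verify the resulting signature."""
    s, v = G(n)               # (signing-key, verification-key) <- G(1^n)
    beta = S(s, document)     # beta <- S_s(document)
    return V(v, document, beta) == 1
```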
1 Indeed, robustness follows by "amplification" (i.e., error-reduction) of the verification algorithm. For example, given V as here, one may consider V′ that applies V to the tested pair for a linear number of times, accepting if and only if V has accepted in all tries.
2 Indeed, in general, the distinction between “generating something new” and “extracting something implicit”
cannot be placed on firm grounds. However, our reference to this distinction is merely at the motivational
level. Furthermore, this distinction can be formalized in the context that we care about, which is the context
of comparing encryption and signature schemes (or, rather, the adversaries attacking these schemes). In the
case of encryption schemes, we consider adversaries that try to extract information about the plaintext from
the ciphertext. That is, the desired object is a function of the given input. In contrast, in the case of signature
schemes, we consider adversaries that try to generate a valid signature with respect to a certain verification-key.
That is, the desired object is not a function of the given input.
Furthermore, the typical applications of signature schemes are to settings in which the
adversary may obtain from the legitimate signer valid signatures to some documents
of the adversary’s choice. For this reason, the basic definition of security of signature
schemes refers to such “chosen message attacks” (to be discussed and defined next).
(Indeed, the situation here is different from the case of encryption schemes, where the
basic definition refers to a "passive" adversary that only wire-taps the communication line and thus only sees the messages sent, in encrypted form, over this line.)
We shall consider a very strong definition of security (against “chosen message
attacks”). That is, we consider very powerful attacks on the signature scheme, as well
as a very liberal notion of breaking it. Specifically, during the course of the attack,
the attacker is allowed to obtain signatures to any document of its choice. One may
argue that in many applications, such a general attack is not possible (because, in
these applications, documents to be signed must have a specific format). Yet our view
is that it is impossible to define a general (i.e., application-independent) notion of
admissible documents, and thus a general/robust definition of an attack seems to have
to be formulated as suggested here. (Note that at worst, our approach is overly cautious.)
Likewise, the attacker is said to be successful if it can produce a valid signature to any
document for which it has not asked for a signature during its attack. Again, this defines
the ability to form signatures to possibly “nonsensical” documents as a breaking of the
scheme. Yet, again, we see no way to have a general (i.e., application-independent)
notion of “meaningful” documents (so that only forging signatures to them will be
considered a breaking of the scheme). This discussion leads to the following (slightly
informal) formulation:
• A chosen message attack is a process that can obtain signatures to strings of its
choice, relative to some fixed signing-key that is generated by G. We distinguish two
cases:
The private-key case: Here the attacker is given 1n as input, and the signatures are
produced relative to s, where (s, v) ← G(1n ).
The public-key case: Here the attacker is given v as input, and the signatures are
produced relative to s, where (s, v) ← G(1n ).
• Such an attack is said to succeed (in existential forgery) if it outputs a valid signature
to a string for which it has not requested a signature during the attack. That is, the
attack is successful if it outputs a pair (α, β) such that Vv (α, β) = 1 (where v is as in
the previous item) and α is different from all strings for which a signature has been
required during the attack.
• A signature scheme is secure (or unforgeable) if every feasible chosen message
attack succeeds with at most negligible probability.
which are spelled out in Definition 6.1.2. We refer the reader to the clarifying discussion
that follows Definition 6.1.2; in fact, some readers may prefer to read that discussion
first.
The private-key case: A private-key signature scheme is secure if for every proba-
bilistic polynomial-time oracle machine M, every positive polynomial p, and all
sufficiently large n, it holds that
Pr[ V_v(α, β) = 1 and α ∉ Q_M^{S_s}(1^n), where (s, v) ← G(1^n) and (α, β) ← M^{S_s}(1^n) ] < 1/p(n)
where the probability is taken over the coin tosses of algorithms G, S, and V, as well
as over the coin tosses of machine M.
The public-key case: A public-key signature scheme is secure if for every probabilistic
polynomial-time oracle machine M, every positive polynomial p, and all sufficiently
large n, it holds that
Pr[ V_v(α, β) = 1 and α ∉ Q_M^{S_s}(v), where (s, v) ← G(1^n) and (α, β) ← M^{S_s}(v) ] < 1/p(n)
where the probability is taken over the coin tosses of algorithms G, S, and V , as
well as over the coin tosses of machine M.
The definition refers to the following experiment. First a pair of keys, (s, v), is generated
by invoking G(1n ), and is fixed for the rest of the discussion. Next, an attacker is invoked
on input 1n or v, depending on whether we are in the private-key or public-key case.
In both cases, the attacker is given oracle access to Ss , where the latter may be a
probabilistic oracle rather than a standard deterministic one (e.g., if queried twice for
the same value, then the probabilistic signing-oracle may answer in different ways).
Finally, the attacker outputs a pair of strings (α, β). The attacker is deemed successful
if and only if the following two conditions hold:
1. The string α is different from all queries (i.e., requests for signatures) made by the
attacker; that is, the first string in the output pair (α, β) = M^{S_s}(x) is different from
any string in Q_M^{S_s}(x), where x = 1^n or x = v, depending on whether we are in the
private-key or public-key case.
We stress that both M^{S_s}(x) and Q_M^{S_s}(x) are random variables that are defined based
on the same random execution of M (on input x and oracle access to S_s).
2. The pair (α, β) corresponds to a valid document-signature pair relative to the verifi-
cation key v. In case V is deterministic (which is typically the case) this means that
Vv (α, β) = 1. The same applies also in case V is probabilistic, and when viewing
Vv (α, β) = 1 as a random variable. (Alternatively, in the latter case, a condition such
as Pr[Vv (α, β) = 1] ≥ 1/2 may replace the condition Vv (α, β) = 1.)
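To make the experiment concrete, the following minimal Python sketch runs a chosen message attack and checks the two success conditions; gen, sign, verify, and attacker are hypothetical callables standing in for G, S, V, and the oracle machine M:

```python
def forgery_experiment(gen, sign, verify, attacker, n, public_key_model=True):
    """Run a chosen message attack and report whether it yields an existential forgery."""
    s, v = gen(n)                                  # (s, v) <- G(1^n), fixed for the experiment
    queries = []                                   # records Q_M^{S_s}

    def signing_oracle(document):                  # oracle access to S_s (may be probabilistic)
        queries.append(document)
        return sign(s, document)

    attacker_input = v if public_key_model else n  # v in the public-key case, 1^n otherwise
    alpha, beta = attacker(attacker_input, signing_oracle)
    # Success: alpha was never queried, and (alpha, beta) is valid relative to v.
    return alpha not in queries and verify(v, alpha, beta)
```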
6.1.5.* Variants
Clearly, any signature scheme that is secure in the public-key model is also secure in
the private-key model. The converse is not true: Consider, for example, the private-
key scheme presented in Construction 6.3.1 (as well as any other “natural” message-
authentication scheme). Following are a few other comments regarding the definitions.
6.2 LENGTH-RESTRICTED SIGNATURE SCHEME
6.2.1. Definition
The essence of the length-restriction is that security is guaranteed only with respect to
documents of the predetermined length. Note that the question of what is the result of
invoking the signature algorithm on a document of improper length is immaterial. What
is important is that an attacker (of a length-restricted scheme) is deemed successful only
if it produces a signature to a (different) document of proper length. Still, for the sake of
concreteness (and simplicity of subsequent treatment), we define the basic mechanism
only for documents of proper length.
Such a scheme is called secure (in the private-key or public-key model) if the (corre-
sponding) requirements of Definition 6.1.2 hold when restricted to attackers that only
make queries of length ℓ(n) and output a pair (α, β) with |α| = ℓ(n).
We stress that the essential modification is presented in the security condition. The latter
considers an adversary to be successful only in case it forges a signature to a (different)
document α of the proper length (i.e., |α| = ℓ(n)).
Results of this flavor can be established in two different ways, corresponding to two
methods of converting an ℓ-restricted signature scheme into a full-fledged one. Both
methods are applicable both to private-key and public-key signature schemes. The
first method (presented in Section 6.2.2.1) consists of parsing the original document
into blocks (with adequate "linkage" between blocks), and applying the ℓ-restricted
scheme to each block. The second method (presented in Section 6.2.2.2) consists of
hashing the document into an ℓ(n)-bit long value (via an adequate hashing scheme),
and applying the restricted scheme to the resulting value. Thus, the second method
requires an additional assumption (i.e., the existence of “collision-free” hashing), and
so Theorem 6.2.2 (as stated) is actually proved using the first method. The second
method is presented because it offers other benefits; in particular, it yields signatures
of fixed length (i.e., the signature-length only depends on the key-length) and uses a
single invocation of the restricted scheme. The latter feature will play an important role
in subsequent sections (e.g., in Sections 6.3.1.2 and 6.4.1.3).
3 Recall that such triviality does not hold in the context of encryption schemes, not even in the private-key case.
See Section 5.3.2.
Loosely speaking, the method consists of parsing the original document into blocks
(with adequate “linkage” between blocks), and applying the length-restricted scheme
to each (augmented) block.
Let ℓ and (G, S, V) be as in Theorem 6.2.2. We construct a general signature scheme,
(G′, S′, V′), with G′ = G, by viewing documents as sequences of strings, each of length
ℓ′(n) = ℓ(n)/O(1). That is, we associate α = α_1 ··· α_t with the sequence (α_1, ..., α_t),
where each α_i has length ℓ′(n). (At this point, the reader may think of ℓ′(n) = ℓ(n),
but actually we will use ℓ′(n) = ℓ(n)/4 in order to make room for some auxiliary
information.)
To motivate the actual construction, we first consider the following simpler schemes,
all aimed at producing secure signatures for arbitrary (documents viewed as) sequences
of ℓ′(n)-bit long strings. The simplest scheme consists of just signing each of the strings
in the sequence. That is, the signature to the sequence (α_1, ..., α_t) is a sequence of
β_i's, each being a signature (with respect to the length-restricted scheme) to the cor-
responding α_i. This will not do, because an adversary, given the (single) signature
(β_1, β_2) to the sequence (α_1, α_2) with α_1 ≠ α_2, can present (β_2, β_1) as a valid signature
to (α_2, α_1) ≠ (α_1, α_2). So how about foiling this forgery by preventing a reordering
of the "atomic" signatures (i.e., the β_i's); that is, how about signing the sequence
(α_1, ..., α_t) by applying the restricted scheme to each pair (i, α_i), rather than to α_i it-
self? This will not do either, because an adversary, given a signature to the sequence
(α_1, α_2, α_3), can easily present a signature to the sequence (α_1, α_2). So we also need
to include in each ℓ(n)-bit string the total number of α_i's in the sequence. But even
this is not enough, because given signatures to the sequences (α_1, α_2) and (α′_1, α′_2),
with α_1 ≠ α′_1 and α_2 ≠ α′_2, an adversary can easily generate a signature to (α_1, α′_2).
Thus, we have to prevent the forming of new sequences of "basic signatures" by com-
bining elements from different signature sequences. This can be done by associating
(say, at random) an identifier with each sequence and incorporating this identifier in
each ℓ(n)-bit string to which the basic (restricted) signature scheme is applied. This
discussion yields the signature scheme presented next, where a signature to a message
(α_1, ..., α_t) consists of a sequence of (basic) signatures to statements of the (effec-
tive) form "the string α_i is the i-th block, out of t blocks, in a message associated with
identifier r."
Signing with S′: On input a signing-key s (in the range of G_1(1^n)) and a document
α ∈ {0, 1}*, algorithm S′ first parses α into a sequence of blocks (α_1, ..., α_t), such
that α is uniquely reconstructed from the α_i's and each α_i is an ℓ′(n)-bit long string.4
4 The parsing rule should apply to strings of arbitrary length, regardless of whether or not this length is a multiple
of ℓ′(n). For example, we may parse α as (α_1, ..., α_t) such that α_1 ··· α_t = α · 10^j and j ∈ {0, 1, ..., ℓ′(n) − 1}.
(Note that under this parsing rule, if |α| is a multiple of ℓ′(n), then |α_1 ··· α_t| = |α| + ℓ′(n).)
Next, S′ uniformly selects r ∈ {0, 1}^{ℓ′(n)}. For i = 1, ..., t, algorithm S′ computes
β_i ← S_s(r, t, i, α_i)
where i and t are represented as ℓ′(n)-bit long strings. That is, β_i is essentially
a signature to the statement "α_i is the i-th block, out of t blocks, in a sequence
associated with identifier r." Finally, S′ outputs as signature the sequence
(r, t, β_1, ..., β_t)
Verification with V′: On input a verification-key v (in the range of G_2(1^n)), a docu-
ment α ∈ {0, 1}*, and a sequence (r, t, β_1, ..., β_t), algorithm V′ first parses α into
α_1, ..., α_t, using the same parsing rule as used by S′. Algorithm V′ accepts if and
only if the following two conditions hold:
1. t equals the number of blocks obtained by parsing α.
2. For i = 1, ..., t, it holds that V_v((r, t, i, α_i), β_i) = 1, where i and t are represented
as ℓ′(n)-bit long strings.
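The following minimal Python sketch mirrors this construction; Ss and Vv are hypothetical callables for the ℓ-restricted signing and verification algorithms, blen plays the role of ℓ′(n) = ℓ(n)/4, and bit-strings are represented as Python strings of '0'/'1':

```python
import secrets

def random_bits(k):
    return "".join(secrets.choice("01") for _ in range(k))

def parse_blocks(alpha, blen):
    # Pad with 1 0^j (as in footnote 4) so that alpha is uniquely recoverable, then cut into blen-bit blocks.
    padded = alpha + "1" + "0" * ((-(len(alpha) + 1)) % blen)
    return [padded[i:i + blen] for i in range(0, len(padded), blen)]

def sign_general(Ss, alpha, blen):
    # S': sign each block together with a fresh random identifier r, the block count t, and the index i.
    blocks = parse_blocks(alpha, blen)
    t, r = len(blocks), random_bits(blen)
    enc = lambda k: format(k, f"0{blen}b")        # represent i and t as blen-bit strings
    betas = [Ss(r + enc(t) + enc(i + 1) + b) for i, b in enumerate(blocks)]
    return (r, t, betas)

def verify_general(Vv, alpha, sig, blen):
    # V': re-parse alpha with the same rule and check every basic signature on (r, t, i, alpha_i).
    r, t, betas = sig
    blocks = parse_blocks(alpha, blen)
    enc = lambda k: format(k, f"0{blen}b")
    return (t == len(blocks) == len(betas) and
            all(Vv(r + enc(t) + enc(i + 1) + b, bi)
                for i, (b, bi) in enumerate(zip(blocks, betas))))
```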
Clearly, the triple (G′, S′, V′) satisfies Definition 6.1.1. We need to show that it also
inherits the security of (G, S, V). That is:
6.2 LENGTH-RESTRICTED SIGNATURE SCHEME
This is a crucial point, because we use the fact that events that occur in a real attack
of A′ on (G′, S′, V′) occur with the same probability in the emulation of (G′, S′, V′)
by A.
Assume that with (non-negligible) probability ε′(n), the (probabilistic polynomial-
time) algorithm A′ succeeds in existentially forging relative to the complex scheme
(G′, S′, V′). We consider the following cases regarding the forging event:
1. The identifier supplied in the forged signature is different from all the random identi-
fiers supplied (by A) as part of the signatures given to A′. In this case, each ℓ-restricted
signature supplied as part of the forged (complex) signature yields existential forgery
relative to the ℓ-restricted scheme.
Formally, let α^(1), ..., α^(m) be the sequence of queries made by A′, and let
(r^(1), t^(1), β̄^(1)), ..., (r^(m), t^(m), β̄^(m)) be the corresponding (complex) signatures sup-
plied to A′ by A (using S_s to form the β̄^(i)'s). It follows that each β̄^(i) consists of a
sequence of S_s-signatures to ℓ(n)-bit strings starting with r^(i) ∈ {0, 1}^{ℓ(n)/4}, and that
the oracle S_s was invoked (by A) only on strings of this form. Let (α, (r, t, β_1, ..., β_t))
be the output of A′, where α is parsed as (α_1, ..., α_t), and suppose that applying V′_v
to the output of A′ yields 1 (i.e., the output is a valid document-signature pair for the
complex scheme). The case hypothesis states that r ≠ r^(i), for all i's. It follows that
each of the β_j's is an S_s-signature to a string starting with r ∈ {0, 1}^{ℓ(n)/4}, and thus
different from all queries made to the oracle S_s. Thus, each pair ((r, t, i, α_i), β_i)
is a valid document-signature pair (because V′_v(α, (r, t, β_1, ..., β_t)) = 1 implies
V_v((r, t, i, α_i), β_i) = 1), with a document different from all queries made to S_s.
This yields a successful forgery with respect to the ℓ-restricted scheme.
2. The identifier supplied in the forged signature equals the random identifier supplied
(by A) as part of exactly one of the signatures given to A′. In this case, existen-
tial forgery relative to the ℓ-restricted scheme is obtained by considering the rela-
tion between the output of A′ and the single supplied signature having the same
identifier.
As in the previous case, let α^(1), ..., α^(m) be the sequence of queries made by A′, and
let (r^(1), t^(1), β̄^(1)), ..., (r^(m), t^(m), β̄^(m)) be the corresponding (complex) signatures
supplied to A′ by A. Let (α, (r, t, β_1, ..., β_t)) be the output of A′, where α is parsed as
(α_1, ..., α_t), and suppose that α ≠ α^(i) for all i's and that V′_v(α, (r, t, β_1, ..., β_t)) = 1.
The hypothesis of the current case is that there exists a unique i so that r = r^(i).
We consider two subcases regarding the relation between t and t^(i):
• t ≠ t^(i): In this subcase, each ℓ-restricted signature supplied as part of the forged
(complex) signature yields existential forgery relative to the ℓ-restricted scheme.
The argument is analogous to the one employed in the previous case. Specifically,
here each of the β_j's is an S_s-signature to a string starting with (r, t), and thus dif-
ferent from all queries made to the oracle S_s (because these queries either start with
r^(i′) ≠ r or start with (r^(i), t^(i)) ≠ (r, t)). Thus, each pair ((r, t, j, α_j), β_j) is a valid
document-signature pair with a document different from all queries made to S_s.
• t = t^(i): In this subcase, we use the hypothesis α ≠ α^(i), which (combined with
t = t^(i)) implies that there exists a j such that α_j ≠ α_j^(i), where α_j^(i) is the j-th
block in the parsing of α^(i). For this j, the string β_j (supplied as part of the forged
complex-signature) yields existential forgery relative to the ℓ-restricted scheme.
Specifically, we have V_v((r, t, j, α_j), β_j) = 1, whereas (r, t, j, α_j) is different
from each query (r^(i′), t^(i′), j′, α_{j′}^(i′)) made (by A) to S_s.
Justification for (r, t, j, α_j) ≠ (r^(i′), t^(i′), j′, α_{j′}^(i′)): In case i′ ≠ i, it must hold
that r^(i′) ≠ r (by the Case 2 hypothesis regarding the uniqueness of i s.t.
r^(i) = r). Otherwise (i.e., in case i′ = i), either j′ ≠ j or α_{j′}^(i′) = α_j^(i) ≠ α_j
(by the choice of j).
5 This observation only saves us a polynomial factor in the forging probability. That is, if A did not know which
part of the forged complex-signature to use for its own forgery, it could have just selected one at random (and
be correct with probability 1/poly(n) because there are only poly(n)-many possibilities).
1. (admissible indexing – technical):6 For some polynomial p, all sufficiently large n’s,
and every s in the range of I (1n ), it holds that n ≤ p(|s|). Furthermore, n can be
computed in polynomial-time from s.
2. (efficient evaluation): There exists a polynomial-time algorithm that, given s and x,
returns h s (x).
3. (hard-to-form collisions): We say that the pair (x, x′) forms a collision under the
function h if h(x) = h(x′) but x ≠ x′. We require that every probabilistic polynomial-
time algorithm, given I(1^n) as input, outputs a collision under h_{I(1^n)} with negligible
probability. That is, for every probabilistic polynomial-time algorithm A, every pos-
itive polynomial p, and all sufficiently large n's,
Pr[ A(I(1^n)) is a collision under h_{I(1^n)} ] < 1/p(n)
where the probability is taken over the internal coin tosses of algorithms I and A.
Note that the range specifier ℓ must be super-logarithmic (or else one may easily find a
collision by selecting 2^{ℓ(n)} + 1 different pre-images and computing their image under
the function). In Section 6.2.3, we show how to construct collision-free hashing func-
tions using claw-free collections. But first, we show how to use the former in order to
convert a length-restricted signature scheme into a full-fledged one.
6 This condition is made merely in order to avoid annoying technicalities. In particular, this condition allows the
collision-forming adversary to run for poly(n)-time (because by this condition n = poly(|s|)), as well as allows
for determining n from s. Note that |s| = poly(n) holds by definition of I .
Construction 6.2.6 (hash and sign): Let ℓ and (G, S, V) be as in Theorem 6.2.2, and
let {h_r : {0, 1}* → {0, 1}^{ℓ(|r|)}}_{r∈{0,1}*} be as in Definition 6.2.5. We construct a general
signature scheme, (G′, S′, V′), as follows:
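In outline, a document is signed by applying the ℓ-restricted scheme to its hash-value. A minimal Python sketch, where G, I, Ss, Vv, and h are hypothetical callables standing in for the key generator, the hashing-index selector, and the ℓ-restricted signing, verification, and hashing algorithms:

```python
def keygen_hs(G, I, n):
    # G': generate a key pair of the l-restricted scheme together with an index of a hash function.
    s, v = G(n)
    r = I(n)
    return (r, s), (r, v)

def sign_hs(Ss, h, r, alpha):
    # S': sign the hash-value h_r(alpha) with the l-restricted signing algorithm.
    return Ss(h(r, alpha))

def verify_hs(Vv, h, r, alpha, beta):
    # V': accept iff beta is a valid l-restricted signature on h_r(alpha).
    return Vv(h(r, alpha), beta)
```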
6.2 LENGTH-RESTRICTED SIGNATURE SCHEME
relative to the complex scheme (G′, S′, V′), algorithm A tries to use this pair in order
to form a document-signature pair relative to the ℓ-restricted scheme, (G, S, V). That
is, if A′ outputs the document-signature pair (α, β), then A will output the document-
signature pair (h_r(α), β).
As in the proof of Proposition 6.2.4, we stress that the distribution of keys and oracle
answers that A provides A′ is exactly as in a real attack of A′ on (G′, S′, V′). This is
a crucial point, because we use the fact that events that occur in a real attack of A′ on
(G′, S′, V′) occur with the same probability in the emulation of (G′, S′, V′) by A.
Assume that with (non-negligible) probability ε′(n), the (probabilistic polynomial-
time) algorithm A′ succeeds in existentially forging relative to the complex scheme
(G′, S′, V′). We consider the following two cases regarding the forging event, letting
(α^(i), β^(i)) denote the i-th query and answer pair made by A′, and (α, β) denote the
forged document-signature pair that A′ outputs (in case of success):
Case 1: h_r(α) ≠ h_r(α^(i)) for all i's. (That is, the hash-value used in the forged signature
is different from all hash-values used in the queries to S_s.) In this case, the pair
(h_r(α), β) constitutes a success in existential forgery relative to the ℓ-restricted
scheme.
Case 2: h_r(α) = h_r(α^(i)) for some i. (That is, the hash-value used in the forged sig-
nature equals the hash-value used in the i-th query to S_s, although α ≠ α^(i).) In this
case, the pair (α, α^(i)) forms a collision under h_r (and we do not obtain success in
existential forgery relative to the ℓ-restricted scheme).
Thus, if Case 1 occurs with probability at least ε′(n)/2, then A succeeds in its attack
on (G, S, V) with probability at least ε′(n)/2, which contradicts the security of the
ℓ-restricted scheme (G, S, V). On the other hand, if Case 2 occurs with probability
at least ε′(n)/2, then we derive a contradiction to the collision-freeness of the hashing
collection {h_r : {0, 1}* → {0, 1}^{ℓ(|r|)}}_{r∈{0,1}*}. Details (regarding the second case) follow.
We construct an algorithm, denoted B, that given r ← I(1^n), attempts to form col-
lisions under h_r as follows. On input r, algorithm B generates (s, v) ← G(1^n) and
emulates the attack of A on this instance of the ℓ-restricted scheme, with the exception
that B does not invoke algorithm I to obtain an index of a hash function but rather uses
the index r (given to it as input). Recall that A, in turn, emulates an attack of A′ on
the signing-oracle S_{r,s}, and that A answers the query q′ made by A′ by forwarding the
query q = h_r(q′) to S_s. Thus, B actually emulates the attack of A′ (on the signing-oracle
S_{r,s}) and does so in a straightforward manner; that is, to answer query q′ made by A′,
algorithm B first obtains q = h_r(q′) (using its knowledge of r) and then answers with
S_s(q) (using its knowledge of s). Finally, when A′ outputs a forged document-signature
pair, algorithm B checks whether Case 2 occurs (i.e., whether h_r(α) = h_r(α^(i)) holds
for some i), in which case it obtains (and outputs) a collision under h_r. (Note that in
the public-key case, B invokes A′ on input (r, v), whereas in the private-key case, B
invokes A′ on input 1^n. Thus, in the private-key case, B actually does not use r but
rather only uses oracle access to h_r.)
We stress that from the point of view of the emulated adversary A, the execu-
tion is distributed exactly as in its attack on (G, S, V ). Thus, since we assumed that
the second case occurs with probability at least ε′(n)/2 in a real attack, it follows
that B succeeds in forming a collision under h_{I(1^n)} with probability at least ε′(n)/2.
This contradicts the collision-freeness of the hashing functions, and the proposition
follows.
Comment. For the private-key case, the proof of Proposition 6.2.7 actually established
a stronger claim than stated. Specifically, the proof holds even for a weaker definition of
collision-free hashing in which the adversary is not given a description of the hashing
function, but can rather obtain its value at any pre-image of its choice. This observation
is further pursued in Section 6.3.1.3.
Index selection algorithm: On input 1^n, we first invoke I to obtain s ← I(1^n), and next
use the domain sampler to obtain a string r that is uniformly distributed in D_s.
We output the index (s, r) ∈ {0, 1}^n × {0, 1}^{ℓ(n)}, which corresponds to the hashing
function
h_{(s,r)}(x) ≝ f_s^{y_1}(f_s^{y_2}(··· f_s^{y_t}(r) ···))
where y_1 ··· y_t is a prefix-free encoding of x.
Actually, as will become evident from the proof of Proposition 6.2.9, as far as
Construction 6.2.8 is concerned, the definition of claw-free permutations can be re-
laxed: We do not need an algorithm that, given an index s, generates a uniformly
distributed element in Ds ; any efficient algorithm that generates elements in Ds will do
(regardless of the distribution induced on Ds , and in particular, even if the algorithm
always outputs the same element in Ds ).
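A minimal Python sketch of the evaluation rule, assuming f0 and f1 are callables for the claw-free pair f_s^0, f_s^1 and bit-strings are Python strings; the particular prefix-free code used here (doubling each bit and appending "01") is just one illustrative choice:

```python
def prefix_free_code(x):
    # One simple prefix-free encoding: double every bit and terminate with "01".
    return "".join(b + b for b in x) + "01"

def hash_clawfree(f0, f1, r, x):
    """h_{(s,r)}(x): apply the permutations selected by the coding of x, innermost first, to r."""
    y = prefix_free_code(x)
    value = r
    for bit in reversed(y):          # realizes f^{y_1}( f^{y_2}( ... f^{y_t}(r) ... ) )
        value = f0(value) if bit == "0" else f1(value)
    return value
```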
Proposition 6.2.9: Suppose that the collection of permutation pairs {(f_s^0, f_s^1)}_s, to-
gether with the index-selecting algorithm I, constitutes a claw-free collection. Then,
the function ensemble {h_{(s,r)} : {0, 1}* → {0, 1}^{|r|}}_{(s,r)∈{0,1}*×{0,1}*} as defined in Con-
struction 6.2.8 constitutes a collision-free hashing with a range-specifying function ℓ′
satisfying ℓ′(n + ℓ(n)) = ℓ(n).
Proof: Intuitively, forming collisions under h_{(s,r)} means finding two different sequences
of functions from {f_s^0, f_s^1} that (when applied to r) yield the same image (e.g.,
f_s^1(f_s^0(f_s^0(r))) = f_s^1(f_s^1(r))). Since these two sequences cannot be a prefix of one
another, it must be that somewhere along the process (of applying these f_s^σ's), the
application of two different functions yields the same image (i.e., a claw).
The proof is by a reducibility argument. Given an algorithm A′ that on input (s, r)
forms a collision under h_{(s,r)}, we construct an algorithm A that on input s forms a
claw for index s. On input s (supposedly generated by I(1^n)), algorithm A selects r
(uniformly) in D_s, and invokes algorithm A′ on input (s, r). Suppose that A′ outputs a
pair (x, x′) so that h_{(s,r)}(x) = h_{(s,r)}(x′) but x ≠ x′. Without loss of generality,8 assume
that the coding of x equals y_1 ··· y_{i−1} 0 z_{i+1} ··· z_t, and that the coding of x′ equals
y_1 ··· y_{i−1} 1 z′_{i+1} ··· z′_{t′}. By the definition of h_{(s,r)}, it follows that
f_s^{y_1} ··· f_s^{y_{i−1}} f_s^0 f_s^{z_{i+1}} ··· f_s^{z_t}(r) = f_s^{y_1} ··· f_s^{y_{i−1}} f_s^1 f_s^{z′_{i+1}} ··· f_s^{z′_{t′}}(r)   (6.1)
Since each f_s^σ is a permutation (and hence 1-1), Eq. (6.1) implies that
f_s^0 f_s^{z_{i+1}} ··· f_s^{z_t}(r) = f_s^1 f_s^{z′_{i+1}} ··· f_s^{z′_{t′}}(r)   (6.2)
That is, algorithm A obtains a claw (w, w′) such that f_s^0(w) = f_s^1(w′), where
w = f_s^{z_{i+1}} ··· f_s^{z_t}(r) and w′ = f_s^{z′_{i+1}} ··· f_s^{z′_{t′}}(r). Thus, algorithm A forms claws for index I(1^n)
with probability that is lower-bounded by the probability that A′ forms a collision
under h_{I′(1^n)}, where I′ is the index-selection algorithm as defined in Construction 6.2.8.
Using the hypothesis that the collection of pairs (together with I) is claw-free, the
proposition follows.
8 Let C(x) (resp., C(x′)) denote the prefix-free coding of x (resp., x′). Then C(x) is not a prefix of C(x′), and
C(x′) is not a prefix of C(x). It follows that C(x) = uv and C(x′) = uv′, where v and v′ differ in their leftmost
bit. Without loss of generality, we may assume that the leftmost bit of v is 0, and the leftmost bit of v′ is 1.
Figure 6.2: Collision-free hashing via block-chaining (for t = 7).
For the sake of uniformity, in case |x| ≤ ℓ(|s|), we let t = 2 and x_1 x_2 = x 0^{2ℓ(|s|)−|x|}.
On the other hand, we may assume that |x| < 2^{ℓ(|s|)}, and so |x| can be represented
by an ℓ(|s|)-bit long string.9
2. Let y_1 ≝ x_1. For i = 2, ..., t, compute y_i = h_s(y_{i−1} x_i).
3. Set h′_s(x) to equal (y_t, |x|).
An interesting property of Construction 6.2.11 is that it allows for computing the hash-
value of an input string while processing the input in an on-line fashion; that is, the
implementation of the hashing process may process the input x in a block-by-block
manner, while storing only the current block and a small amount of state information
(i.e., the current yi and the number of blocks encountered so far). This property is
important in applications in which one wishes to hash a long stream of input bits.
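A minimal Python sketch of this chaining process, assuming hs is a callable realizing the restricted 2ℓ(|s|)-to-ℓ(|s|) hash, blen plays the role of ℓ(|s|), and bit-strings are Python strings:

```python
def chain_hash(hs, blen, x):
    """h'_s(x): block-chaining in the style of Construction 6.2.11."""
    if len(x) <= blen:                          # uniformity: t = 2 and x1 x2 = x 0^{2l-|x|}
        padded = x + "0" * (2 * blen - len(x))
    else:
        t = -(-len(x) // blen)                  # ceil(|x| / l) blocks
        padded = x + "0" * (t * blen - len(x))  # pad the last block with 0's
    blocks = [padded[i:i + blen] for i in range(0, len(padded), blen)]
    y = blocks[0]                               # y_1 = x_1
    for xi in blocks[1:]:
        y = hs(y + xi)                          # y_i = hs(y_{i-1} x_i), processed on-line
    return (y, len(x))                          # the hash-value also records |x|
```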
Proposition 6.2.12: Let {h_s : {0, 1}^{2ℓ(|s|)} → {0, 1}^{ℓ(|s|)}}_{s∈{0,1}*} and {h′_s : {0, 1}* →
{0, 1}^{2ℓ(|s|)}}_{s∈{0,1}*} be as in Construction 6.2.11, and suppose that the former is a col-
lection of 2ℓ-restricted collision-free hashing functions. Then the latter constitutes a
(full-fledged) collection of collision-free hashing functions.
Proof: Recall that forming a collision under h′_s means finding x ≠ x′ such that h′_s(x) =
h′_s(x′). By the definition of h′_s, this means that (y_t, |x|) = h′_s(x) = h′_s(x′) = (y′_{t′}, |x′|),
where t, t′ and y_t, y′_{t′} are determined by the computations of h′_s(x) and h′_s(x′). In
particular, it follows that |x| = |x′| and so t = t′ (where, except when |x| ≤ ℓ(|s|), it
holds that t = ⌈|x|/ℓ(|s|)⌉ = ⌈|x′|/ℓ(|s|)⌉ = t′). Recall that y_t = y′_t and consider two cases:
Case 1: If (y_{t−1}, x_t) ≠ (y′_{t−1}, x′_t), then we obtain a collision under h_s (since
h_s(y_{t−1} x_t) = y_t = y′_t = h_s(y′_{t−1} x′_t)), and derive a contradiction to its collision-free
hypothesis.
Case 2: Otherwise (y_{t−1}, x_t) = (y′_{t−1}, x′_t), and we consider the two corresponding cases
with respect to the relation of (y_{t−2}, x_{t−1}) to (y′_{t−2}, x′_{t−1}); that is, we further consider
whether or not (y_{t−2}, x_{t−1}) equals (y′_{t−2}, x′_{t−1}).
Eventually, since x ≠ x′, we get to a situation in which y_i = y′_i and (y_{i−1}, x_i) ≠
(y′_{i−1}, x′_i), which is handled as in the first case.
We now provide a formal implementation of this intuitive argument. Suppose toward the
contradiction that there exists a probabilistic polynomial-time algorithm A that on input
s forms a collision under h′_s (with certain probability). Then, we construct an algorithm
A′ that will, with similar probability, succeed in forming a suitable (i.e., length-restricted)
collision under h_s. Algorithm A′(s) operates as follows:
1. A′(s) invokes A(s) and obtains (x, x′) ← A(s).
If either h′_s(x) ≠ h′_s(x′) or x = x′, then A failed, and A′ halts without output. In the
sequel, we assume that h′_s(x) = h′_s(x′) and x ≠ x′.
9 The adversary trying to form collisions with respect to h′_s runs in poly(|s|)-time. Using ℓ(|s|) = ω(log |s|), it
follows that such an adversary cannot output a string of length 2^{ℓ(|s|)}. (The same also holds, of course, for
legitimate usage of the hashing function.)
2. A′(s) computes t, x_1, ..., x_t and y_1, ..., y_t (resp., t′, x′_1, ..., x′_{t′} and y′_1, ..., y′_{t′}) as
in Construction 6.2.11. Next, A′(s) finds an i ∈ {2, ..., t} such that y_i = y′_i and
(y_{i−1}, x_i) ≠ (y′_{i−1}, x′_i), and outputs the pair (y_{i−1} x_i, y′_{i−1} x′_i). (We will show next that
such an i indeed exists.)
Note that (since h′_s(x) = h′_s(x′)) it holds that t = t′ and y_t = y′_t. On the other hand,
(x_1, ..., x_t) ≠ (x′_1, ..., x′_t). As argued in the motivating discussion, it follows that there
exists an i ∈ {2, ..., t} such that y_i = y′_i and (y_{i−1}, x_i) ≠ (y′_{i−1}, x′_i).
Figure 6.3: Collision-free hashing via tree-chaining (for t = 8).
collection of functions. Consider the collection {h′_s : {0, 1}* → {0, 1}^{2ℓ(|s|)}}_{s∈{0,1}*},
where h′_s(x) is defined by the following process, called tree-hashing:
1. Break x into t ≝ 2^{⌈log₂(|x|/ℓ(|s|))⌉} consecutive blocks, while possibly adding dummy
0-blocks and padding the last block with 0's, such that each block has length ℓ(|s|).
Denote these ℓ(|s|)-bit long blocks by x_1, ..., x_t. That is, x_1 ··· x_t = x 0^{t·ℓ(|s|)−|x|}.
Let d = log₂ t, and note that d is a positive integer.
Again, for the sake of uniformity, in case |x| ≤ ℓ(|s|), we let t = 2 and x_1 x_2 =
x 0^{2ℓ(|s|)−|x|}. On the other hand, again, we assume that |x| < 2^{ℓ(|s|)}, and so |x| can be
represented by an ℓ(|s|)-bit long string.
2. For i = 1, ..., t, let y_{d,i} ≝ x_i.
3. For j = d − 1, ..., 1, 0 and i = 1, ..., 2^j, compute y_{j,i} = h_s(y_{j+1,2i−1} y_{j+1,2i}).
4. Set h′_s(x) to equal (y_{0,1}, |x|).
That is, hashing is performed by placing the ℓ(|s|)-bit long blocks of x at the leaves of
a binary tree of depth d, and computing the values of internal nodes by applying h_s to
the values associated with the two children (of the node). The final hash-value consists
of the value associated with the root (i.e., the only level-0 node) and the length of x.
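A minimal Python sketch of this tree-hashing process, under the same assumptions as the earlier chaining sketch (hs is the restricted hash, blen plays the role of ℓ(|s|), bit-strings are Python strings):

```python
def tree_hash(hs, blen, x):
    """h'_s(x): tree-chaining in the style of Construction 6.2.13."""
    blocks_needed = max(1, -(-len(x) // blen))            # ceil(|x| / l), at least 1
    t = max(2, 1 << (blocks_needed - 1).bit_length())     # round the number of leaves up to a power of 2
    padded = x + "0" * (t * blen - len(x))                 # dummy 0-blocks and 0-padding of the last block
    level = [padded[i:i + blen] for i in range(0, len(padded), blen)]   # leaves: y_{d,i} = x_i
    while len(level) > 1:                                  # y_{j,i} = hs(y_{j+1,2i-1} y_{j+1,2i})
        level = [hs(level[2 * i] + level[2 * i + 1]) for i in range(len(level) // 2)]
    return (level[0], len(x))                              # final hash-value: (y_{0,1}, |x|)
```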
Proposition 6.2.14: Let {h_s : {0, 1}^{2ℓ(|s|)} → {0, 1}^{ℓ(|s|)}}_{s∈{0,1}*} and {h′_s : {0, 1}* →
{0, 1}^{2ℓ(|s|)}}_{s∈{0,1}*} be as in Construction 6.2.13, and suppose that the former is a col-
lection of 2ℓ-restricted collision-free hashing functions. Then the latter constitutes a
(full-fledged) collection of collision-free hashing functions.
Proof Sketch: Recall that forming a collision under h′_s means finding x ≠ x′ such that
h′_s(x) = h′_s(x′). By the definition of h′_s, this means that (y_{0,1}, |x|) = h′_s(x) = h′_s(x′) =
(y′_{0,1}, |x′|), where (t, d) and y_{0,1} (resp., (t′, d′) and y′_{0,1}) are determined by the computation
of h′_s(x) (resp., h′_s(x′)). In particular, it follows that |x| = |x′| and so d = d′ (because 2^d = t = t′ =
2^{d′}). Recall that y_{0,1} = y′_{0,1}, and let us state this fact by saying that for j = 0 and for
every i ∈ {1, ..., 2^j}, it holds that y_{j,i} = y′_{j,i}. Starting with j = 0, we consider two cases:
Case 1: If for some i ∈ {1, ..., 2^{j+1}} it holds that y_{j+1,i} ≠ y′_{j+1,i}, then we obtain a
collision under h_s, and derive a contradiction to its collision-free hypothesis. Specif-
ically, the collision is obtained because z ≝ y_{j+1,2⌈i/2⌉−1} y_{j+1,2⌈i/2⌉} is different from
z′ ≝ y′_{j+1,2⌈i/2⌉−1} y′_{j+1,2⌈i/2⌉}, whereas h_s(z) = y_{j,⌈i/2⌉} = y′_{j,⌈i/2⌉} = h_s(z′).
Case 2: Otherwise, for every i ∈ {1, ..., 2^{j+1}}, it holds that y_{j+1,i} = y′_{j+1,i}. In this case,
we consider the next level.
Eventually, since x ≠ x′, we get to a situation in which for some j ∈ {1, ..., d − 1}
and some i ∈ {1, ..., 2^{j+1}}, it holds that z ≝ y_{j+1,2⌈i/2⌉−1} y_{j+1,2⌈i/2⌉} is different
from z′ ≝ y′_{j+1,2⌈i/2⌉−1} y′_{j+1,2⌈i/2⌉}, whereas h_s(z) = y_{j,⌈i/2⌉} = y′_{j,⌈i/2⌉} = h_s(z′). This
situation is handled as in the first case.
The actual argument proceeds as in the proof of Proposition 6.2.12.
A Local Verification Property. Construction 6.2.13 has the extra property of support-
ing efficient verification of bits in x with respect to the hash-value. That is, suppose
that for a randomly selected h_s, one party holds x and the other party holds h′_s(x).
Then, for every i, the first party may provide a short (efficiently verifiable) certifi-
cate that x_i is indeed the i-th block of x. The certificate consists of the sequence
of pairs (y_{d,2⌈i/2⌉−1}, y_{d,2⌈i/2⌉}), ..., (y_{1,2⌈i/2^d⌉−1}, y_{1,2⌈i/2^d⌉}), where d and the y_{j,k}'s are
computed as in Construction 6.2.13 (and (y_{0,1}, |x|) = h′_s(x)). The certificate is ver-
ified by checking whether or not y_{j−1,⌈i/2^{d−j+1}⌉} = h_s(y_{j,2⌈i/2^{d−j+1}⌉−1} y_{j,2⌈i/2^{d−j+1}⌉}), for
every j ∈ {1, ..., d}. Note that if the first party can present two different values for
the i-th block of x along with corresponding certificates, then it can also form col-
lisions under h_s. Construction 6.2.13 and its local-verification property were already
used in this work (i.e., in the construction of highly-efficient argument systems, pre-
sented in Section 4.8.4 of Volume 1). Jumping ahead, we note the similarity between
the local-verification property of Construction 6.2.13 and the authentication-tree of
Section 6.4.2.2.
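A Python sketch of the certificate check, under the simplifying assumption that the certificate is given as the list of sibling pairs along the path from the i-th leaf up to level 1 (hs and the bit-string conventions are as in the previous sketches):

```python
def verify_block(hs, root_value, i, xi, cert):
    """Check a certificate that xi is the i-th leaf (1-indexed) of the tree whose root value is root_value.

    cert lists, from the leaf level upward, the pair of sibling values whose parent lies on the
    path from the i-th leaf to the root; root_value is the first component of h'_s(x).
    """
    index, current = i, xi
    for left, right in cert:
        # The value computed so far must sit in the correct slot of this level's pair.
        if (left, right)[(index - 1) % 2] != current:
            return False
        current = hs(left + right)        # compute the parent's value
        index = (index + 1) // 2          # the parent's index, i.e., ceil(index / 2)
    return current == root_value
```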
Key-generation with G: On input 1n , we uniformly select s ∈ {0, 1}n , and output the
key-pair (s, s). (Indeed, the verification-key equals the signing-key.)
Signing with S: On input a signing-key s ∈ {0, 1}^n and an ℓ(n)-bit string α, we compute
and output f_s(α) as a signature of α.
Verification with V: On input a verification-key s ∈ {0, 1}^n, an ℓ(n)-bit string α, and
an alleged signature β, we accept if and only if β = f_s(α).
Indeed, signing amounts to applying f s to the given document string, and verification
amounts to comparing a given value to the result of applying f s to the document. Analo-
gous constructions can be presented by using the generalized notions of pseudorandom
functions defined in Definitions 3.6.9 and 3.6.12 (see further comments in the follow-
ing subsections). In particular, using a pseudorandom function ensemble of the form
{ f s : {0, 1}∗ → {0, 1}|s| }s∈{0,1}∗ , we obtain a general message-authentication scheme
(rather than a length-restricted one). In the following proof, we only demonstrate the
security of the ℓ-restricted message-authentication scheme of Construction 6.3.1. (The
security of the general message-authentication scheme can be established analogously;
see Exercise 8.)
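A minimal Python sketch of the mechanics of this construction, using HMAC-SHA256 merely as a stand-in for the abstract pseudorandom function f_s (an illustrative instantiation, not the construction's assumption):

```python
import hashlib
import hmac
import secrets

def keygen_mac(n_bytes=16):
    # G: the signing-key and the verification-key are the same uniformly chosen seed s.
    s = secrets.token_bytes(n_bytes)
    return s, s

def mac_sign(s, alpha):
    # S: the signature is f_s(alpha); HMAC-SHA256 plays the role of the pseudorandom function.
    return hmac.new(s, alpha, hashlib.sha256).digest()

def mac_verify(s, alpha, beta):
    # V: accept iff beta equals f_s(alpha).
    return hmac.compare_digest(mac_sign(s, alpha), beta)
```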
Proposition 6.3.2: Suppose that {f_s : {0, 1}^{ℓ(|s|)} → {0, 1}^{ℓ(|s|)}}_{s∈{0,1}*} is a pseudoran-
dom function, and that ℓ is a super-logarithmically growing function. Then Construc-
tion 6.3.1 constitutes a secure ℓ-restricted message-authentication scheme.
Proof: The proof follows the general methodology suggested in Section 3.6.3. Specif-
ically, we consider the security of an ideal scheme in which the pseudorandom function
is replaced by a truly random function (mapping ℓ(n)-bit long strings to ℓ(n)-bit long
strings). Clearly, an adversary that obtains the values of this random function at ar-
guments of its choice cannot predict its value at a new point with probability greater
than 2^{−ℓ(n)}. Thus, an adversary attacking the ideal scheme may succeed in existen-
tial forgery with at most negligible probability. The same must hold for any efficient
adversary that attacks the actual scheme, because otherwise such an adversary yields
1. Machine A attacks the actual scheme: On input 1^n, machine A is given oracle access
to (the signing process) f_s : {0, 1}^{ℓ(n)} → {0, 1}^{ℓ(n)}, where s is uniformly selected in
{0, 1}^n. After making some queries of its choice, A outputs a pair (α, β), where
α is different from all its queries. Machine A is deemed successful if and only if
β = f_s(α).
2. Machine A attacks the ideal scheme: On input 1^n, machine A is given oracle access
to a function φ : {0, 1}^{ℓ(n)} → {0, 1}^{ℓ(n)}, uniformly selected among all such possible
functions. After making some queries of its choice, A outputs a pair (α, β), where
α is different from all its queries. Again, A is deemed successful if and only if
β = φ(α).
Clearly, A's success probability in this experiment is at most 2^{−ℓ(n)}, which is a
negligible function (since ℓ is super-logarithmic).
Assuming that A's success probability in the actual attack is non-negligible, we derive
a contradiction to the pseudorandomness of the function ensemble {f_s}. Specifically,
we consider a distinguisher D that, on input 1^n and oracle access to a function f :
{0, 1}^{ℓ(n)} → {0, 1}^{ℓ(n)}, behaves as follows: First, D emulates the actions of A, while
answering A's queries using its oracle f. When A outputs a pair (α, β), the distinguisher
makes one additional oracle query to f and outputs 1 if and only if f(α) = β.
Note that when f is selected uniformly among all possible {0, 1}^{ℓ(n)} → {0, 1}^{ℓ(n)}
functions, D emulates an attack of A on the ideal scheme, and thus outputs 1 with
negligible probability (as explained in the beginning of the proof). On the other hand,
if f is uniformly selected in {f_s}_{s∈{0,1}^n}, then D emulates an attack of A on the actual
scheme, and thus (due to the contradiction hypothesis) outputs 1 with non-negligible
probability. We reach a contradiction to the pseudorandomness of {f_s}_{s∈{0,1}^n}. The
proposition follows.
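A sketch of the distinguisher D in Python, where f is the tested oracle and attacker is a hypothetical callable playing A (taking the security parameter and a signing oracle, and returning A's output pair):

```python
def distinguisher_D(f, attacker, n):
    """D: emulate A, answering its signing queries with the oracle f; output 1 iff A's output is valid under f."""
    alpha, beta = attacker(n, f)           # all of A's queries are answered by querying f
    return 1 if f(alpha) == beta else 0    # one additional query to f decides the output
```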
In contrast to the feasibility result stated in Theorem 6.3.3, we now present alterna-
tive ways of using pseudorandom functions to obtain secure message-authentication
schemes (MACs). These alternatives yield more efficient schemes, where efficiency is
measured in terms of the length of the signatures and the time it takes to produce and
verify them.
Combining Propositions 6.2.7 and 6.3.2, it follows that Construction 6.3.4 constitutes
a secure message-authentication scheme (MAC), provided that the ingredients are
as postulated. In particular, this means that Construction 6.3.4 yields a secure MAC,
provided that collision-free hashing functions exist (and are used in Construction 6.3.4).
While this result uses a seemingly stronger assumption than the existence of one-way
functions (used to establish Theorem 6.3.3), it yields more efficient MACs, both
in terms of signature length (as discussed previously) and authentication time (to be
discussed next).
Construction 6.3.4 yields faster signing and verification algorithms than the construc-
tion resulting from combining Constructions 6.2.3 and 6.3.1, provided that hashing a
long string is less time-consuming than applying a pseudorandom function to it (or to all
its blocks). The latter assumption is consistent with the current state of the art regarding
the implementation of both primitives. Further speed improvements are discussed in
Section 6.3.1.3.
10 This intuition may not hold when comparing a construction of ordinary hashing that is rigorously ana-
lyzed with an ad hoc suggestion of a collision-free hashing. But it certainly holds when comparing the
former to the constructions of collision-free hashing that are based on a well-established intractability
assumption.
In fact, for the current application, we can replace the third condition by the following
weaker condition, parameterized by a function cp : N → [0, 1] (s.t. cp(n) ≥ 2^{−m(n)}): For
every x ≠ y ∈ {0, 1}^{ℓ(n)},
Pr_h[h(x) = h(y)] ≤ cp(n)   (6.3)
Indeed, the pairwise independence condition implies that Eq. (6.3) is satisfied with
cp(n) = 2^{−m(n)}. Note that Eq. (6.3) asserts that the collision probability of S_{m(n)}^{ℓ(n)} is
at most cp(n), where the collision probability refers to the probability that h(x) =
h(y) when h is uniformly selected in S_{m(n)}^{ℓ(n)} and x ≠ y ∈ {0, 1}^{ℓ(n)} are arbitrary fixed
strings.
Hashing ensembles with n ≤ ℓ(n) + m(n) and cp(n) = 2^{−m(n)} can be constructed
(for a variety of functions ℓ, m : N → N, e.g., ℓ(n) = 2n/3 and m(n) = n/3); see Ex-
ercise 22. Using such ensembles, we first present a construction of length-restricted
message-authentication schemes (and later show how to generalize the construction to
obtain full-fledged message-authentication schemes).
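For concreteness, here is a Python sketch of one standard pairwise-independent family, h(x) = Ax ⊕ b over GF(2); note that, for simplicity, its key is an m-by-ℓ matrix plus an m-bit vector, i.e., longer than the n-bit keys discussed above (shorter keys can be obtained, e.g., via Toeplitz matrices):

```python
import secrets

def sample_affine_hash(l_bits, m_bits):
    """Sample h(x) = A x XOR b over GF(2): pairwise independent, collision probability 2^{-m}."""
    A = [secrets.randbits(l_bits) for _ in range(m_bits)]   # each row stored as an l-bit integer
    b = secrets.randbits(m_bits)

    def h(x):                                               # x is an l-bit integer
        out = 0
        for i, row in enumerate(A):
            bit = bin(row & x).count("1") % 2               # inner product of the row and x over GF(2)
            out |= bit << i
        return out ^ b

    return h
```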
Proposition 6.3.6: Suppose that {f_s : {0, 1}^{m(|s|)} → {0, 1}^{m(|s|)}}_{s∈{0,1}*} is a pseudoran-
dom function, and that the collision probability of the collection {h_r : {0, 1}^{ℓ(|r|)} →
Proof Sketch: As in the proof of Proposition 6.3.2, we first consider the security of
an ideal scheme in which the pseudorandom function is replaced by a truly random
function (mapping m(n)-bit long strings to m(n)-bit long strings). Consider any (proba-
bilistic polynomial-time) adversary attacking the ideal scheme. Such an adversary may
obtain the signatures to polynomially many ℓ(n)-bit long strings of its choice. How-
ever, except with negligible probability, these strings are hashed to different m(n)-bit
long strings, which in turn are mapped by the random function to totally independent
and uniformly distributed m(n)-bit long strings. Furthermore, except with negligible
probability, the ℓ(n)-bit long string α contained in the adversary's (alleged message-
signature) output pair is hashed to an m(n)-bit long string that is different from all the
previous hash-values, and so the single valid signature corresponding to α is a uniformly
distributed m(n)-bit long string that is independent of all previously seen signatures.
On the distribution of signatures in the ideal scheme: Suppose that the hashing
collection {h_r : {0, 1}^{ℓ(|r|)} → {0, 1}^{m(|r|)}}_{r∈{0,1}^n} has collision probability cp(n), and
φ : {0, 1}^{m(n)} → {0, 1}^{m(n)} is a random function. Then, we claim that an adversary
that obtains signatures to t(n) − 1 strings of its choice succeeds in forging a signature
to a new string with probability at most t(n)^2 · cp(n) + 2^{−m(n)}, regardless of its
computational powers. The claim is proved by showing that, except with probability
at most t(n)^2 · cp(n), the t(n) strings selected by the adversary are mapped by
h_r to distinct values. The latter claim is proved by induction on the number of
selected strings, denoted i, where the base case (i.e., i = 1) holds vacuously. Let
s_1, ..., s_i denote the strings selected so far, and suppose that with probability at
least 1 − i^2 · cp(n), the i hash-values h_r(s_j)'s are distinct. The adversary only sees
the corresponding φ(h_r(s_j))'s, which are uniformly and independently distributed
(in a way independent of the values of the h_r(s_j)'s). Thus, loosely speaking, the
adversary's selection of the next string, denoted s_{i+1}, is independent of the values
of the h_r(s_j)'s, and so a collision of h_r(s_{i+1}) with one of the previous h_r(s_j)'s occurs
with probability at most i · cp(n). The induction step follows (since 1 − i^2 · cp(n) −
i · cp(n) > 1 − (i + 1)^2 · cp(n)).
It follows that any adversary attacking the ideal scheme may succeed in existential
forgery with at most negligible probability (provided it makes at most polynomially
many queries). The same must hold for any efficient adversary that attacks the actual
scheme, since otherwise such an adversary yields a violation of the pseudorandomness
of { f s : {0, 1}m(|s|) → {0, 1}m(|s|) }s∈{0,1}∗ . The exact implementation of this argument
follows the details given in the proof of Proposition 6.3.2.
Proof Sketch: The proof is identical to the proof of Proposition 6.3.6, except that here
the (polynomial-time) adversary attacking the scheme may query for the signatures
of strings of various lengths. Still, all these queries (as well as the final output) are
of polynomial length and thus shorter than ℓ(n). Thus, the (ℓ, cp)-collision property
implies that, except with negligible probability, all these queries (as well as the relevant
part of the output) are hashed to different values.
On Constructing Adequate Hashing Ensembles. For some ε > 0 and f(n) = 2^{n^ε},
generalized hashing ensembles with a (f, 1/f)-collision property can be constructed
in several ways. One such way is by applying a tree-hashing scheme as in Construc-
tion 6.2.13; see Exercise 23. For further details about constructions of generalized
11 Note that it is essential to restrict the collision condition to strings of bounded length. In contrast, for every finite
family of functions H , there exist two different strings that are mapped to the same image by each function in
H . For details, see Exercise 21.
hashing ensembles, see Section 6.6.5. Combining any of these constructions with
Proposition 6.3.7, we get:
Theorem 6.3.8: Assuming the existence of one-way functions, there exist message-
authentication schemes with fixed-length signatures; that is, signatures whose length
depends on the length of the signing-key but not on the length of the document.
12 We comment that this specific hiding method is not 1-1, and furthermore, it is not clear whether it can also be
efficiently inverted when given the “secret key” (i.e., the seed of the pseudorandom function). In contrast, the
alternative hiding method described next is 1-1 and can be efficiently inverted when given the secret key.
13 The hashing function should belong to an AXU family, as defined in Section 6.3.2.2.
then show how to construct them based on the hash-and-hide (or rather tag-and-hide)
paradigm.
V(v^(i−1), α, β) = (v^(i), ·). That is, v^(i) is actually determined by v^(i−1) and
(|α^(i)|, |β^(i)|).14
3. There exists a polynomial p such that for every pair (s^(0), v^(0)) in the range of
G(1^n), and every sequence of α^(i)'s and s^(i)'s as in Condition 2, it holds that |s^(i)| ≤
|s^(i−1)| + |α^(i)| · p(n). Similarly for the v^(i)'s.
14 Alternatively, we may decompose the verification (resp., signing) algorithm into two algorithms, where the first
takes care of the actual verification (resp., signing) and the second takes care of updating the state. For details,
see Exercise 18.
That is, as in Definition 6.1.1, the signing-verification process operates properly pro-
vided that the corresponding algorithms get the corresponding keys (states). Note that
in Definition 6.3.9, the keys are modified by the signing-verification process, and so
correct verification requires holding the correctly updated verification-key. We stress
that the furthermore-clause in Condition 2 guarantees that the verification-key is cor-
rectly updated as long as the verification process is fed with strings of the correct lengths
(but not necessarily with the correct document-signature pairs). This extra requirement
implies that, given the initial verification-key and the current document-signature pair,
as well as the lengths of all previous pairs (which may be actually incorporated in
the current signature), one may correctly decide whether or not the current document-
signature pair is valid. As in the case of state-based ciphers (cf. Section 5.3.1), this fact
is interesting for two reasons:
A theoretical reason: It implies that without loss of generality (alas, with possible loss
in efficiency), the verification algorithm may be stateless. Furthermore, without loss
of generality (alas, with possible loss in efficiency), the state of the signing algorithm
may consist of the initial signing-key and the lengths of the messages signed so far.
(We assume here that the length of the signature is determined by the length of the
message and the length of the signing-key.)
A practical reason: It allows for recovery from the loss of some of the message-
signature pairs. That is, assuming that all messages have the same length (which
is typically the case in MAC applications), if the receiver knows (or is given) the
total number of messages sent so far, then it can verify the authenticity of the current
message-signature pair, even if some of the previous message-signature pairs were
lost.
We stress that Definition 6.3.9 refers to the signing of multiple messages (and
is meaningless when considering the signing of a single message). However, Defi-
nition 6.3.9 (by itself) does not explain why one should sign the i-th message us-
ing the updated signing-key s (i−1) , rather than by reusing the initial signing-key s (0)
(where all corresponding verifications are done by reusing the initial verification-key
v (0) ). Indeed, the reason for updating these keys is provided by the following secu-
rity definition that refers to the signing of multiple messages, and holds only in case
the signing-keys in use are properly updated (in the multiple-message authentication
process).
during the attack, and V(v^(i−1), α, β) = (·, 1) holds for some intermediate state
(verification-key) v^(i−1) (as in Definition 6.3.9).15
• A state-based MAC is secure if every probabilistic polynomial-time chosen message
attack as in the first item succeeds with at most negligible probability.
Note that Definition 6.3.10 (only) differs from Definition 6.1.2 in the way that the
signatures β (i) ’s are produced (i.e., using the updated signing-key s (i−1) , rather than the
initial signing-key s (0) ). Furthermore, Definition 6.3.10 guarantees nothing regarding
a signing process in which the signature to the i-th message is obtained by invoking
S(s (0) , ·) (as in Definition 6.1.2).
Construction 6.3.11 (a state-based MAC): Let g : {0, 1}∗ → {0, 1}∗ such that |g(s)| =
|s| + 1, for every s ∈ {0, 1}∗ . Let {h r : {0, 1}∗ → {0, 1}m(|r |) }r ∈{0,1}∗ be a family of func-
tions having an efficient evaluation algorithm.
15 In fact, one may strengthen the definition by using a weaker notion of success in which it is only required that
α ≠ α^(i) (rather than requiring that α ∉ {α^(j)}_j). That is, the attack is successful if, for some i, it outputs a
pair (α, β) such that α ≠ α^(i) and V(v^(i−1), α, β) = (·, 1), where the α^(j)'s and v^(j)'s are as in Definition 6.3.9.
The stronger definition provides "replay protection" (i.e., even if the adversary obtains a valid signature that
authenticates α as the j-th message, it cannot produce a valid signature that authenticates α as the i-th message,
unless α was actually authenticated as the i-th message).
Key-generation and initial state: Uniformly select s, r ∈ {0, 1}n , and output the key-
pair ((s, r ), (s, r )). The initial state of each algorithm is set to (s, r, 0, s).
(We maintain the initial key (s, r ) and a step-counter in order to allow recovery from
loss of message-signature pairs.)
Signing message α with state (s, r, t, s′): Let s_0 ≝ s′. For i = 1, ..., m(n), compute
s_i σ_i = g(s_{i−1}), where |s_i| = n and σ_i ∈ {0, 1}. Output the signature h_r(α) ⊕
σ_1 ··· σ_{m(n)}, and set the new state to (s, r, t + m(n), s_{m(n)}).
Verification of the pair (α, β) with respect to the state (s, r, t, s′): Compute σ_1 ··· σ_{m(n)}
and s_{m(n)} as in the signing process; that is, for i = 1, ..., m(n), compute s_i σ_i =
g(s_{i−1}), where s_0 ≝ s′. Set the new state to (s, r, t + m(n), s_{m(n)}), and accept if and
only if β = h_r(α) ⊕ σ_1 ··· σ_{m(n)}.
Special recovery procedure: When notified that some message-signature pairs may
have been lost and that the current message-signature pair has index t′, one first
recovers the correct current state, which as in the ordinary verification (of the pre-
vious paragraph) will be denoted s_0. This is done by setting s_{−t′} ≝ s and computing
s_{i−t′} σ_{i−t′} = g(s_{i−t′−1}), for i = 1, ..., t′. Indeed, recovery of s_0 is required only if
t′ ≠ t.16
Note that both the signing and verification algorithms are deterministic, and that the
state after authentication of t messages has length 3n + log₂(t · m(n)) < 4n, provided
that t < 2^n/m(n).
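A minimal Python sketch of the signing and verification steps, assuming g (the next-step function, mapping an n-bit seed to an (n+1)-bit string), hr (the AXU hash, returning an m-bit string), and m are given, with bit-strings represented as Python strings:

```python
def sign_state_mac(g, hr, m, state, alpha):
    """Sign alpha by XORing its hash-value with the next m pseudorandom bits; return (tag, new state)."""
    s, r, t, s_cur = state                   # state (s, r, t, s') as in the construction
    pad_bits = []
    for _ in range(m):                       # s_i sigma_i = g(s_{i-1})
        out = g(s_cur)
        s_cur, sigma = out[:-1], out[-1]
        pad_bits.append(sigma)
    tag = "".join("1" if a != b else "0"     # h_r(alpha) XOR sigma_1 ... sigma_m
                  for a, b in zip(hr(alpha), pad_bits))
    return tag, (s, r, t + m, s_cur)

def verify_state_mac(g, hr, m, state, alpha, beta):
    """Verification recomputes the same pad from its own copy of the state and compares."""
    tag, new_state = sign_state_mac(g, hr, m, state, alpha)
    return (tag == beta), new_state
```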
We now turn to the analysis of the security of Construction 6.3.11. The hashing
property of the collection of h r ’s should be slightly stronger than the one used in
Section 6.3.1.3. Specifically, rather than a bound on the collision probability (i.e., the
probability that h r (x) = h r (y) for any relevant fixed x, y and a random r ), we need
a bound on the probability that h r (x) ⊕ h r (y) equals any fixed string (again, for any
relevant fixed x, y and a random r ). This property is commonly referred to by the name
Almost-Xor-Universal (AXU). That is, {h_r : {0, 1}* → {0, 1}^{m(|r|)}}_{r∈{0,1}*} is called an
(ℓ, ε)-AXU family if for every n ∈ N, every x ≠ y such that |x|, |y| ≤ ℓ(n), and every
z, it holds that
Pr_r[h_r(x) ⊕ h_r(y) = z] ≤ ε(n)
where the probability is taken over r uniformly distributed in {0, 1}^n.
16 More generally, if the verification procedure holds the state at time t < t′, then it need only compute
s_{t+1−t′}, ..., s_0.
17 In fact, as shown in the proof, it suffices to assume that g is a next-step function of an on-line pseudorandom
generator.
Hence, by the AXU property, the probability that the adversary succeeds is at most
ε(n).
The security of the real scheme follows (or else one could have distinguished
the sequence produced by iterating the next-step function g from a truly random
sequence).
Construction 6.3.11 Versus the Constructions of Section 6.3.1.3: Recall that all
these schemes are based on the hash-and-hide paradigm. The difference between the
schemes is that in Section 6.3.1.3, a pseudorandom function is applied to the hash-value
(i.e., the signature to α is f s (h r (α))), whereas in Construction 6.3.11, the hash-value
is XORed with a pseudorandom value (i.e., we may view the signature as consisting
of (c, h r (α) ⊕ f s (c)), where c is a counter value and f s (c) is the c-th block produced
by iterating the next-step function g starting with the initial seed s). We note two ad-
vantages of the state-based MAC over the MACs presented in Section 6.3.1.3: First,
applying an on-line pseudorandom generator is likely to be more efficient than ap-
plying a pseudorandom function. Second, a counter allows for securely authenticating
more messages than can be securely authenticated by applying a pseudorandom func-
tion to the hashed value. Specifically, the use of an m-bit long counter allows for
securely authenticating 2^m messages, whereas using an m-bit long hash-value suffers
from the "birthday effect" (i.e., collisions are likely to occur when √(2^m) messages are
authenticated). Indeed, these advantages are relevant only in applications in which us-
ing state-based MACs is possible, and are most advantageous in applications where
verification is performed in the same order as signing (e.g., in fifo communication).
In the latter case, Construction 6.3.11 offers another advantage: “replay protection” (as
discussed in footnote 15).
use them instead of collision-free hashing (in the aforementioned constructions and,
in particular, within a modified hash-and-sign paradigm). Indeed, the gain in using
universal one-way hashing (rather than collision-free hashing) is that the former can be
constructed based on any one-way function (whereas this is not known for collision-free
hashing). Thus, we obtain:
Theorem 6.4.1: Secure signature schemes exist if and only if one-way functions exist.
The difficult direction is to show that the existence of one-way functions implies the
existence of signature schemes. For the opposite direction, see Exercise 7.
6.4.1.1. Definitions
Loosely speaking, one-time signature schemes are signature schemes for which the
security requirement is restricted to attacks in which the adversary asks for at most
one string to be signed. That is, the mechanics of one-time signature schemes is as
of ordinary signature schemes (see Definition 6.1.1), but the security requirement is
relaxed as follows:
• A chosen one-message attack is a process that can obtain a signature to at most
one string of its choice. That is, the attacker is given v as input, and obtains a signature
relative to s, where (s, v) ← G(1n ) for an adequate n.
(Note that in this section, we focus on public-key signature schemes and thus present
only the definition for this case.)
r Such an attack is said to succeed (in existential forgery) if it outputs a valid signature
to a string for which it has not requested a signature during the attack.
(Indeed, the notion of success is exactly as in Definition 6.1.2.)
r A one-time signature scheme is secure (or unforgeable) if every feasible chosen
one-message attack succeeds with at most negligible probability.
Moving to the formal definition, we again model a chosen message attack as a proba-
bilistic oracle machine; however, since here we care only about one-message attacks, we
consider only oracle machines that make at most one query. Let M be such a machine.
As before, we denote by Q_M^O(x) the set of queries made by M on input x and access
to oracle O, and let M^O(x) denote the output of the corresponding computation. Note
that here |Q_M^O(x)| ≤ 1 (i.e., M may make either no queries or a single query).
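The following Python harness is one way to render this experiment concretely; the scheme and adversary are supplied as ordinary functions, and the toy scheme used in the demonstration is deliberately insecure (its signatures ignore the message), serving only to exercise the harness.

import hashlib, secrets

def one_message_experiment(gen, sign, verify, adversary, n):
    """Chosen one-message attack: the adversary sees v, may query the signer at most once,
    and succeeds if it outputs a valid signature on a string it never queried."""
    s, v = gen(n)
    queries = []
    def oracle(alpha):
        assert len(queries) < 1, "one-time oracle: at most one query allowed"
        queries.append(alpha)
        return sign(s, alpha)
    alpha, beta = adversary(v, oracle)
    return alpha not in queries and verify(v, alpha, beta)

# A deliberately broken toy "scheme" (v = s, signatures ignore the message), used only to exercise the harness.
def toy_gen(n):
    s = secrets.token_bytes(n)
    return s, s
def toy_sign(s, alpha):
    return hashlib.sha256(s).digest()
def toy_verify(v, alpha, beta):
    return beta == hashlib.sha256(v).digest()

def replay_adversary(v, oracle):
    beta = oracle(b"0")   # obtain a signature on one string ...
    return b"1", beta     # ... and reuse it on a different string

assert one_message_experiment(toy_gen, toy_sign, toy_verify, replay_adversary, 16)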
Such a scheme is called secure (in the one-time model) if the requirement of Defini-
tion 6.4.2 holds when restricted to attackers that only make queries of length ℓ(n) and
output a pair (α, β) with |α| = ℓ(n). That is, we consider only attackers that make at
most one query, with the requirements that this query be of length ℓ(n) and that the
output (α, β) satisfies |α| = ℓ(n).
Note that even the existence of secure 1-restricted one-time signature schemes implies
the existence of one-way functions (see Exercise 13).
Note that Construction 6.4.4 does not constitute a (general) ℓ-restricted signature
scheme: An attacker that obtains signatures to two strings (e.g., to the strings 0^ℓ(n)
and 1^ℓ(n)) can present a valid signature to any ℓ(n)-bit long string (and thus totally
break the system). However, here we consider only attackers that may ask for at most
one string (of their choice) to be signed. As a corollary to Proposition 6.4.5, we obtain:
Corollary 6.4.6: If there exist one-way functions, then for every polynomially bounded
and polynomial-time computable ℓ : N → N, there exist secure ℓ-restricted one-time
signature schemes.
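For concreteness, here is a minimal Python sketch in the spirit of Construction 6.4.4, with SHA-256 standing in for the one-way function f; the parameter names are ours and the code is only an illustration of the mechanics (key generation, signing an ℓ-bit message, and verification).

import hashlib, secrets

H = lambda x: hashlib.sha256(x).digest()   # stand-in for the one-way function f

def keygen(ell, n=32):
    """Signing-key: 2*ell random n-byte strings s[i][j]; verification-key: their images v[i][j] = f(s[i][j])."""
    s = [[secrets.token_bytes(n) for _ in range(2)] for _ in range(ell)]
    v = [[H(s[i][j]) for j in range(2)] for i in range(ell)]
    return s, v

def sign(s, bits):
    """Reveal s[i][bits[i]] for every bit position i of the ell-bit message."""
    return [s[i][b] for i, b in enumerate(bits)]

def verify(v, bits, sig):
    return all(H(sig[i]) == v[i][b] for i, b in enumerate(bits))

ell = 8
sk, pk = keygen(ell)
msg = [1, 0, 1, 1, 0, 0, 1, 0]                 # an ell-bit message
sigma = sign(sk, msg)
assert verify(pk, msg, sigma)
assert not verify(pk, [0] + msg[1:], sigma)    # flipping the first bit invalidates the signature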
Proof of Proposition 6.4.5: Intuitively, forging a signature (after seeing at most one sig-
nature to a different message) requires inverting f on some random image (correspond-
ing to a bit location on which the two ℓ(n)-bit long messages differ). The actual proof
is by a reducibility argument. Given an adversary A attacking the scheme (G, S, V)
while making at most one query, we construct an algorithm A' for inverting f.
As a warm-up, let us first deal with the case in which A makes no queries at all. In
this case, on input y (supposedly in the range of f), algorithm A' proceeds as follows.
First, A' selects uniformly and independently a position p in {1, ..., ℓ(n)}, a bit b, and
a sequence of (2ℓ(n) many) n-bit long strings s_1^0, s_1^1, ..., s_{ℓ(n)}^0, s_{ℓ(n)}^1. (Actually, s_p^b is not
used and need not be selected.) For every i ∈ {1, ..., ℓ(n)} \ {p} and every j ∈ {0, 1},
algorithm A' computes v_i^j = f(s_i^j). Algorithm A' also computes v_p^{1−b} = f(s_p^{1−b}), and
sets v_p^b = y and v = ((v_1^0, v_1^1), ..., (v_{ℓ(n)}^0, v_{ℓ(n)}^1)). Note that if y = f(x), for a uniformly
distributed x ∈ {0, 1}^n, then for each possible choice of p and b, the sequence v is
distributed identically to the public-key generated by G(1^n). Next, A' invokes A on
input v, hoping that A will forge a signature, denoted β = τ_1 · · · τ_{ℓ(n)}, to a message
α = σ_1 · · · σ_{ℓ(n)} such that σ_p = b. If this event occurs, A' obtains a pre-image of y under
f, because the validity of the signature implies that f(τ_p) = v_p^{σ_p} = v_p^b = y. Observe
that, conditioned on the value of v and the internal coin tosses of A, the value b is
uniformly distributed in {0, 1}. Thus, A' inverts f with probability ε(n)/2, where ε(n)
denotes the probability that A succeeds in forgery.
We turn back to the actual case, in which A may make a single query. Without loss
of generality, we assume that A always makes a single query; see Exercise 11. In this
case, on input y (supposedly in the range of f), algorithm A' selects p, b, and the s_i^j's,
and forms the v_i^j's and v exactly as in the previous warm-up discussion. Recall that if
y = f(x), for a uniformly distributed x ∈ {0, 1}^n, then for each possible choice of p
and b, the sequence v is distributed identically to the public-key generated by G(1^n).
Also note that for each v_i^j other than v_p^b = y, algorithm A' holds a random pre-image
(of v_i^j) under f. Next, A' invokes A on input v, and tries to answer its query, denoted
α = σ_1 · · · σ_{ℓ(n)}. If σ_p = b, then A' cannot answer this query (because it lacks a pre-image
of y = v_p^b under f), and it halts; otherwise (i.e., σ_p = 1 − b), A' answers the query with
s_1^{σ_1} · · · s_{ℓ(n)}^{σ_{ℓ(n)}}, using the pre-images it holds.^18
Note that, conditioned on the value of v, on the internal coin tosses of A, and on the
second case occurring, p is uniformly distributed in {1, ..., ℓ(n)}. When the second case
occurs, A obtains a signature to α, and this signature is distributed exactly as in a
real attack. We stress that since A asks at most one query, no additional query will be
asked by A. Also note that, in this case (i.e., σ_p = 1 − b), algorithm A outputs a forged
message-signature pair, denoted (α', β'), with probability exactly as in a real attack.
We now turn to the analysis of A', and consider first the emulated attack of A. Recall
that α = σ_1 · · · σ_{ℓ(n)} denotes the (single) query^19 made by A, and let α' = σ'_1 · · · σ'_{ℓ(n)}
and β' = s'_1 · · · s'_{ℓ(n)}, where (α', β') is the forged message-signature pair output by A.
By our hypothesis (that this is a forgery-success event), it follows that α' ≠ α and that
f(s'_i) = v_i^{σ'_i} for all i's. Now, considering the emulation of A by A', recall that (under
all these conditions) p is uniformly distributed in {1, ..., ℓ(n)}. Hence, with probability
|{i : σ'_i ≠ σ_i}| / ℓ(n) ≥ 1/ℓ(n), it holds that σ'_p ≠ σ_p, and in that case A' obtains a pre-image of y
under f (since s'_p satisfies f(s'_p) = v_p^{σ'_p}, which in turn equals v_p^{1−σ_p} = v_p^b = y).
18 This follows from an even stronger statement, by which conditioned on the value of v, on the internal coin tosses
of A, and on the value of p, the current case happens with probability 1/2. The stronger statement holds because,
under all these conditions, b is uniformly distributed in {0, 1} (and so σ_p = b happens with probability exactly 1/2).
19 Recall that, without loss of generality, we may assume that A always makes a single query; see Exercise 11.
Proof: The proof is identical to the proof of Proposition 6.2.7; we merely notice that if
the adversary A', attacking (G', S', V'), makes at most one query, then the same holds
for the adversary A that we construct (in that proof) to attack (G, S, V). In general,
the adversary A constructed in the proof of Proposition 6.2.7 makes a single query per
each query of the adversary A'.
Combining Proposition 6.4.7, Corollary 6.4.6, and the fact that collision-free hashing
collections imply one-way functions (see Exercise 14), we obtain:
Corollary 6.4.8: If there exist collision-free hashing collections, then there exist secure
one-time signature schemes. Furthermore, the length of the resulting signatures depends
only on the length of the signing-key.
Comments. We stress that when using Construction 6.2.6, signing each document
under the (general) scheme (G', S', V') requires signing only a single string under the
ℓ-restricted scheme (G, S, V). This is in contrast to Construction 6.2.3, in which signing
a document under the (general) scheme (G', S', V') requires signing many strings under
the ℓ-restricted scheme (G, S, V), where the number of such strings depends (linearly)
on the length of the original document.
Construction 6.2.6 calls for the use of collision-free hashing. The latter can be con-
structed using any claw-free permutation collection (see Proposition 6.2.9); however,
it is not known whether collision-free hashing can be constructed based on any one-way
function. Wishing to construct signature schemes based on any one-way function, we
later avoid (in Section 6.4.3) the use of collision-free hashing. Instead, we use “universal
one-way hashing functions” (to be defined), and present a variant of Construction 6.2.6
that uses these functions, rather than collision-free ones.
Theorem 6.4.9: If there exist secure one-time signature schemes, then secure (general)
signature schemes exist as well.
Actually, we can use length-restricted one-time signature schemes, provided that the
length of the strings being signed is at least twice the length of the verification-key.
Unfortunately, Construction 6.4.4 does not satisfy this condition. Nevertheless, Corol-
lary 6.4.8 does provide one-time signature schemes. Thus, combining Theorem 6.4.9
and Corollary 6.4.8, we obtain:
Corollary 6.4.10: If there exist collision-free hashing collections, then there exist se-
cure signature schemes.
Note that Corollary 6.4.10 asserts the existence of secure (public-key) signature
schemes, based on an assumption that does not mention trapdoors. We stress this point
because of the contrast to the situation with respect to public-key encryption schemes,
where a trapdoor property seems necessary for the construction of secure schemes.
the gain in terms of security is that a full-fledged chosen message attack cannot be
launched on (G, S, V ). All that an attacker may obtain (via a chosen message attack
on the new scheme) is signatures, relative to the original signing-key s, to randomly
chosen strings (taken from the distribution G_2(1^n)), as well as additional signatures,
each relative to a random and independently chosen signing-key.
We refrain from analyzing the features of the signature scheme presented in this
example. Instead, as a warm-up to the actual construction used in the next section (in
order to establish Theorem 6.4.9), we present and analyze a similar construction (which
is, in some sense, a hybrid of the two constructions). The reader may skip this warm-up,
and proceed directly to Section 6.4.2.2.
Signing with S'': On input a signing-key s (in the range of G''_1(1^n)) and a document
α ∈ {0, 1}*, first invoke G' to obtain (s', v') ← G'(1^n). Next, invoke S to obtain
β_1 ← S_s(v'), and S' to obtain β_2 ← S'_{s'}(α). The final output is (β_1, v', β_2).
Verification with V'': On input a verifying-key v, a document α ∈ {0, 1}*, and an al-
leged signature β = (β_1, v', β_2), we output 1 if and only if both V_v(v', β_1) = 1 and
V'_{v'}(α, β_2) = 1.
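The composition can be phrased generically, as in the following Python sketch; the callables master_sign, ot_gen, and so on are placeholders for any ordinary scheme and any one-time scheme (e.g., the sketch following Corollary 6.4.6), and the names are ours.

def refreshed_sign(master_sign, master_sk, ot_gen, ot_sign, alpha):
    """Sign alpha under the refreshing paradigm: generate a fresh one-time key pair, certify its
    verification-key with the long-lived (master) key, and sign the document with the fresh key."""
    ot_sk, ot_vk = ot_gen()
    beta1 = master_sign(master_sk, ot_vk)   # beta1: certificate on the fresh verification-key
    beta2 = ot_sign(ot_sk, alpha)           # beta2: one-time signature on the document itself
    return (beta1, ot_vk, beta2)

def refreshed_verify(master_verify, master_vk, ot_verify, alpha, sig):
    beta1, ot_vk, beta2 = sig
    return master_verify(master_vk, ot_vk, beta1) and ot_verify(ot_vk, alpha, beta2)

Verification accepts only if the master key certifies the fresh verification-key and the fresh key, in turn, validates the document.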
Construction 6.4.11 differs from the previous example only in that a one-time signature
scheme is used to generate the "second signature" (rather than using the same ordinary
signature scheme). The use of a one-time signature scheme is natural here, because it
is unlikely that the same one-time signing-key s' will be selected in two invocations of S''.
Proposition 6.4.12: Suppose that (G, S, V) is a secure signature scheme, and that
(G', S', V') is a secure one-time signature scheme. Then (G'', S'', V''), as defined in
Construction 6.4.11, is a secure signature scheme.
We comment that the proposition holds even if (G, S, V) is secure only against attackers
that select queries according to the distribution G'_2(1^n). Furthermore, (G, S, V) need
only be ℓ-restricted, for some suitable function ℓ : N → N.
1. In case, for some i, the one-time verification-key v' contained in the forged signature
equals the one-time verification-key v'^(i) contained in the answer to the i-th query,
we derive a violation of the security of the one-time scheme (G', S', V').
6.4.2.2. Authentication-Trees
The refreshing paradigm by itself (i.e., as employed in Construction 6.4.11) does not
seem to suffice for establishing Theorem 6.4.9. Recall that our aim is to construct
a general signature scheme based on a one-time signature scheme. The refreshing
paradigm suggests using a fresh instance of a one-time signature scheme in order to
sign the actual document; however, whenever we do so (as in Construction 6.4.11), we
must authenticate this fresh instance relative to the single verification-key that is public.
A straightforward implementation of this scheme (as presented in Construction 6.4.11)
calls for many signatures to be signed relative to the single verification-key that is
public, and so a one-time signature scheme cannot be used (for this purpose). Instead,
a more sophisticated method of authentication is called for.
20 Furthermore, all queries to S_s are distributed according to G'_2(1^n), justifying the comment made just before the
proof sketch.
[Figure 6.4 (schematic): a node labeled x holds the key-pair (s_x, v_x) together with auth_x, which authenticates the verification-keys of its two children, labeled x0 and x1; these hold (s_x0, v_x0) + auth_x0 and (s_x1, v_x1) + auth_x1, respectively.]
Let us try to sketch the basic idea underlying the new authentication method. The
idea is to use the public verification-key (of a one-time signature scheme) in order
to authenticate several (e.g., two) fresh instances (of the one-time signature scheme),
use each of these instances to authenticate several fresh instances, and so on. We
obtain a tree of fresh instances of the one-time signature, where each internal node
authenticates its children. We can now use the leaves of this tree in order to sign
actual documents, where each leaf is used at most once. Thus, a signature to an actual
document consists of (1) a one-time signature to this document authenticated with
respect to the verification-key associated with some leaf, and (2) a sequence of one-
time verification-keys associated with the nodes along the path from the root to this leaf,
where each such verification-key is authenticated with respect to the verification-key
of its parent (see Figures 6.4 and 6.5). We stress that each instance of the one-time
signature scheme is used to sign at most one string (i.e., several verification-keys if the
instance resides in an internal node, and an actual document if the instance resides in a
leaf).
This description may leave the reader wondering how one actually signs (and verifies
signatures) using the process outlined here. We start with a description that does not fit
our definition of a signature scheme, because it requires the signer to keep a record of
its actions during all previous invocations of the signing process.21 We refer to such a
scheme as memory dependent, and define this notion first.
[Figure 6.5 (schematic): an authentication path from the root to the leaves labeled 010 and 011. The root λ holds (s_λ, v_λ) + auth_λ and authenticates its children 0 and 1 (holding (s_0, v_0) + auth_0 and (s_1, v_1) + auth_1); node 0 authenticates 00 and 01; and node 01 authenticates 010 and 011, each node holding its own key-pair and auth value.]
Mechanics: Item 1 of Definition 6.1.1 stays as it is, and the initial state (of the signing
algorithm) is defined to equal the output of the key-generator. Item 2 is modified
such that the signing algorithm is given a state, denoted γ , as auxiliary input and
returns a modified state, denoted δ, as auxiliary output. It is required that for every
pair (s, v) in the range of G(1n ), and for every α, γ ∈ {0, 1}∗ , if (β, δ) ← Ss (α, γ ),
then Vv (α, β) = 1 and |δ| ≤ |γ | + |α| · poly(n).
(That is, the verification algorithm accepts the signature β and the state does not
grow by too much.)
Security: The notion of a chosen message attack is modified so that the oracle Ss now
maintains a state that it updates in the natural manner; that is, when in state γ and
faced with query α, the oracle sets (β, δ) ← Ss (α, γ ), returns β, and updates its
state to δ. The notions of success and security are defined as in Definition 6.1.2,
except that they now refer to the modified notion of an attack.
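A memory-dependent signer can be wrapped as a stateful oracle in the obvious way; the following Python sketch (with illustrative names) only records the interface S_s(α, γ) → (β, δ) and the state update performed between queries.

class StatefulSigningOracle:
    """Wraps a memory-dependent signing algorithm S_s(alpha, state) -> (beta, new_state),
    updating the state between queries as in the modified attack model."""
    def __init__(self, sign_with_state, initial_state):
        self.sign = sign_with_state
        self.state = initial_state          # the initial state equals the key-generator's output
    def query(self, alpha):
        beta, self.state = self.sign(alpha, self.state)
        return beta

# Toy demonstration: a "signer" whose state is just a counter of the strings signed so far (not a real scheme).
def toy_sign(alpha, state):
    return (b"sig-%d" % state, state + 1)

oracle = StatefulSigningOracle(toy_sign, initial_state=0)
assert oracle.query(b"first") == b"sig-0" and oracle.query(b"second") == b"sig-1"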
by x is labeled by x0 (resp., x1). Here we refer to the current state of the signing process
as to a record.
Initiating the scheme: To initiate the scheme, on security parameter n, we invoke G(1n )
and let (s, v) ← G(1n ). We record (s, v) as the key-pair associated with the root,
and output v as the (public) verification-key.
In the rest of the description, we denote by (sx , vx ) the key-pair associated with the
node labeled x; thus, (sλ , vλ ) = (s, v).
Signing with S' using the current record: Recall that the current record contains the
signing-key s = s_λ, which is used to produce auth_λ (defined in the sequel).
To sign a new document, denoted α, we first allocate an unused leaf. Let σ1 · · · σn be
the label of this leaf. For example, we may keep a counter of the number of documents
signed, and determine σ1 · · · σn according to the counter value (e.g., if the counter
value is c, then we use the c-th string in lexicographic order).22
Next, for every i = 1, ..., n and every τ ∈ {0, 1}, we try to retrieve from our record
the key-pair associated with the node labeled σ1 · · · σi−1 τ . In case such a pair is not
found, we generate it by invoking G(1n ) and store it (i.e., add it to our record) for
future use; that is, we let (sσ1 ···σi−1 τ , vσ1 ···σi−1 τ ) ← G(1n ).
Next, for every i = 1, ..., n, we try to retrieve from our record a signature to the
string vσ1 ···σi−1 0 vσ1 ···σi−1 1 relative to the signing-key sσ1 ···σi−1 . In case such a signature
is not found, we generate it by invoking Ssσ1 ···σi−1 , and store it for future use; that
is, we obtain Ssσ1 ···σi−1 (vσ1 ···σi−1 0 vσ1 ···σi−1 1 ). (The ability to retrieve this signature from
memory, for repeated use, is the most important place in which we rely on the memory
dependence of our signature scheme.)23 We let
auth_{σ1···σi−1} =def (v_{σ1···σi−1 0}, v_{σ1···σi−1 1}, S_{s_{σ1···σi−1}}(v_{σ1···σi−1 0} v_{σ1···σi−1 1}))
(Intuitively, via authσ1 ···σi−1 , the node labeled σ1 · · · σi−1 authenticates the
verification-keys associated with its children.)
Finally, we sign α by invoking S_{s_{σ1···σn}}, and output the signature given in Eq. (6.5) below.
22 Alternatively, as done in Construction 6.4.16, we may select the leaf at random (while ignoring the negligible
probability that the selected leaf is not unused).
23 This allows the signing process S'_s to use each (one-time) signing-key s_x for producing a single S_{s_x}-signature.
In contrast, the use of a counter for determining a new leaf can be easily avoided, by selecting a leaf at random.
where the σi ’s are bits and all other symbols represent strings.
(Jumping ahead, we mention that vi,τ is supposed to equal vσ1 ···σi τ ; that is,
the verification-key associated by the signing process with the node labeled
σ1 · · · σi τ . In particular, vi−1,σi is supposed to equal vσ1 ···σi .)
2. Vv (v0,0 v0,1 , β0 ) = 1.
(That is, the public-key (i.e., v) authenticates the two strings v0,0 and v0,1 claimed
to correspond to the instances of the one-time signature scheme associated with
the nodes labeled 0 and 1, respectively.)
3. For i = 1, ..., n − 1, it holds that Vvi−1,σi (vi,0 vi,1 , βi ) = 1.
(That is, the verification-key vi−1,σi , which is already believed to be authentic
and supposedly corresponds to the instance of the one-time signature scheme
associated with the node labeled σ1 · · · σi , authenticates the two strings vi,0 and
vi,1 that are supposed to correspond to the instances of the one-time signature
scheme associated with the nodes labeled σ1 · · · σi 0 and σ1 · · · σi 1, respectively.)
4. Vvn−1,σn (α, βn ) = 1.
(That is, the verification-key vn−1,σn , which is already believed to be authentic,
authenticates the actual document α.)
Regarding the verification algorithm, note that Conditions 2 and 3 establish that v_{i,σ_{i+1}} is
authentic (i.e., equals v_{σ1···σi σ_{i+1}}). That is, v = v_λ authenticates v_{σ1}, which authenticates
v_{σ1σ2}, and so on up to v_{σ1···σn}. The fact that the v_{i,σ̄_{i+1}}'s are also proven to be authentic
(i.e., equal to the v_{σ1···σi σ̄_{i+1}}'s, where σ̄ = 1 − σ) is not really useful (when signing a
message using the leaf associated with σ1 · · · σn). This excess is merely an artifact of
the need to use s_{σ1···σi} only once during the entire operation of the memory-dependent
signature scheme: In the currently constructed S'_s-signature, we may not care about the
authenticity of some v_{σ1···σi σ̄_{i+1}}, but we may care about it in some other S'_s-signatures.
For example, if we use the leaf labeled 0^n to sign the first document and the leaf
labeled 0^{n−1}1 to sign the second, then in the first S'_s-signature we care only about the
authenticity of v_{0^n}, whereas in the second S'_s-signature we care about the authenticity
of v_{0^{n−1}1}.
(σ1 · · · σn, auth_λ, auth_{σ1}, ..., auth_{σ1···σn−1}, S_{s_{σ1···σn}}(α)) (6.5)
(See Figure 6.4.) In this case, we say that this S'_s-signature uses the leaf labeled
σ1 · · · σn. For every i = 1, ..., n, we call the sequence (auth_λ, auth_{σ1}, ..., auth_{σ1···σi−1})
an authentication path for v_{σ1···σi}; see Figure 6.5. (Note that this sequence is also an
authentication path for v_{σ1···σi−1 σ̄_i}, where σ̄ = 1 − σ.) Thus, a valid S'_s-signature to a
document α consists of an n-bit string σ1 · · · σn, authentication paths for each v_{σ1···σi}
(i = 1, ..., n), and a signature to α with respect to the one-time scheme (G, S, V) using
the signing-key s_{σ1···σn}.
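The following Python sketch walks through this signing and verification process on a small tree. It is only illustrative: for brevity, the one-time instances are realized by a toy Lamport-style scheme applied to the SHA-256 hash of the signed string (a shortcut that is not part of the formal treatment), and the record is kept in two dictionaries.

import hashlib, secrets

H = lambda x: hashlib.sha256(x).digest()

# Toy one-time scheme: Lamport-style signing of the 256-bit hash of the string (a shortcut, see above).
def ot_gen():
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    vk = b"".join(H(sk[i][j]) for i in range(256) for j in range(2))
    return sk, vk

def _bits(msg):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def ot_sign(sk, msg):
    return [sk[i][b] for i, b in enumerate(_bits(msg))]

def ot_verify(vk, msg, sig):
    keys = [vk[64 * i:64 * i + 64] for i in range(256)]        # keys[i] = f(sk[i][0]) || f(sk[i][1])
    return all(H(sig[i]) == keys[i][32 * b:32 * b + 32] for i, b in enumerate(_bits(msg)))

class TreeSigner:
    """Memory-dependent signer in the spirit of Construction 6.4.14 (records are two dictionaries)."""
    def __init__(self, depth):
        self.depth = depth
        self.keys = {"": ot_gen()}     # node label -> one-time key pair; "" plays the role of the root
        self.auth = {}                 # node label -> (v_child0, v_child1, one-time signature on v0 v1)
        self.counter = 0
    @property
    def public_key(self):
        return self.keys[""][1]
    def _node(self, label):
        if label not in self.keys:     # generate the node's one-time instance only when first needed
            self.keys[label] = ot_gen()
        return self.keys[label]
    def sign(self, msg):
        leaf = format(self.counter, "0%db" % self.depth)       # allocate leaves in counter order
        self.counter += 1
        path = []
        for i in range(self.depth):
            prefix = leaf[:i]
            v0, v1 = self._node(prefix + "0")[1], self._node(prefix + "1")[1]
            if prefix not in self.auth:                        # each internal key signs exactly one string
                self.auth[prefix] = (v0, v1, ot_sign(self.keys[prefix][0], v0 + v1))
            path.append(self.auth[prefix])
        return (leaf, path, ot_sign(self.keys[leaf][0], msg))

def tree_verify(root_vk, msg, signature, depth):
    leaf, path, leaf_sig = signature
    vk = root_vk
    for i in range(depth):
        v0, v1, auth_sig = path[i]
        if not ot_verify(vk, v0 + v1, auth_sig):               # the parent authenticates its children's keys
            return False
        vk = v0 if leaf[i] == "0" else v1
    return ot_verify(vk, msg, leaf_sig)                        # the leaf's key signs the document itself

signer = TreeSigner(depth=3)
sig = signer.sign(b"hello")
assert tree_verify(signer.public_key, b"hello", sig, depth=3)
assert not tree_verify(signer.public_key, b"tampered", sig, depth=3)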
Intuitively, forging an S'_s-signature requires either using only verification-keys sup-
plied by the signer (i.e., supplied by S'_s as part of an answer to a query) or producing
an authentication path for a verification-key that is different from all verification-keys
supplied by the signer. In both cases, we reach a contradiction to the security of the one-
time signature scheme (G, S, V). Specifically, in the first case, the forged S'_s-signature
contains a one-time signature that is valid with respect to the one-time verification-key
associated by the signing process with a leaf labeled σ1 · · · σn, because by the case's
hypothesis, the forged signature utilizes only verification-keys supplied by the signer.
This yields forgery with respect to the instance of the one-time signature scheme as-
sociated with the leaf labeled σ1 · · · σn (because the document that is S'_s-signed by the
forger must be different from all S'_s-signed documents, and thus the forged document
is different from all strings to which a one-time signature associated with a leaf was
applied).^24 We now turn to the second case (i.e., forgery with respect to (G', S', V') is
obtained by producing an authentication path for a verification-key that is different from
all verification-keys supplied by the signer). As in the first case, we denote by σ1 · · · σn
the label of the leaf used for the (forged) signature. Let i ∈ {0, ..., n − 1} be the largest
integer such that the signature produced by the forger refers to the verification-key
v_{σ1···σi} (as supplied by the signer), rather than to a different value (claimed by the forger
to be the verification-key associated with the node labeled σ1 · · · σi). (Note that i = 0
corresponds to the forger not even using v_{σ1}, whereas i < n by the case hypothesis.)
For this i, the triple auth_{σ1···σi} = (v_{i,0}, v_{i,1}, β_i) that is contained in the S'_s-signature pro-
duced by the forger contains a one-time signature (i.e., β_i) that is valid with respect to
the one-time verification-key associated by the signing process with the node labeled
σ1 · · · σi (where v_λ is always used by the signing process). Furthermore, by the maximality
of i, the latter signature is to a string (i.e., v_{i,0} v_{i,1}) that is different from the string
to which the S'_s-signer has applied S_{s_{σ1···σi}} (i.e., v_{i,σ_{i+1}} ≠ v_{σ1···σ_{i+1}}). This yields forgery
with respect to the instance of the one-time signature scheme associated with the node
labeled σ1 · · · σi.
The actual proof is by a reducibility argument. Given an adversary A' attacking the
complex scheme (G', S', V'), we construct an adversary A that attacks the one-time
signature scheme (G, S, V). In particular, the adversary A will use its (one-time) oracle
access to S_s in order to emulate the memory-dependent signing oracle for A'. We stress
that the adversary A may make at most one query to its S_s-oracle. Following is a detailed
description of the adversary A. Since we care only about probabilistic polynomial-time
adversaries, we may assume that A' makes at most t = poly(n) many queries, where n
is the security parameter.
24 Note that what matters is merely that the document S'_s-signed by the forger is different from the (single) document
to which S_{s_{σ1···σn}} was applied by the S'_s-signer, in case S_{s_{σ1···σn}} was ever applied by the S'_s-signer.
adds it to its record. In this case, A does not know sσ1 ···σi−1 τ , which is defined to
equal s, yet A can obtain a single signature relative to s by making a (single)
query to its own oracle (i.e., the oracle Ss ).
From this point on, the one-time instance associated with the node labeled
σ1 · · · σi−1 τ will be called the j-th instance.
ii. Otherwise (i.e., the current instance is not the j-th one to be encountered), A
acts as the signing process: It invokes G(1n ), obtains (sσ1 ···σi−1 τ , vσ1 ···σi−1 τ ) ←
G(1n ), and adds it to the record. (Note that in this case, A knows sσ1 ···σi−1 τ
and can generate by itself signatures relative to it.)
The one-time instance just generated is given the next serial number. That
is, the one-time instance associated with the node labeled σ1 · · · σi−1 τ will
be called the k-th instance if the current record (i.e., after the generation of
the one-time key-pair associated with the node labeled σ1 · · · σi−1 τ ) contains
exactly k instances.
(c) For every i = 1, ..., n, machine A tries to retrieve from its record a (one-time)
signature to the string vσ1···σi−1 0 vσ1···σi−1 1 , relative to the signing-key sσ1···σi−1 .
If such a signature does not exist in the record then A distinguishes two
cases:
i. If the one-time signature instance associated with the node labeled
σ1 · · · σi−1 is the j-th such instance, then A obtains the one-time signa-
ture Ssσ1···σi−1 (vσ1···σi−1 0 vσ1···σi−1 1 ) by querying Ss , and adds this signature to
the record.
Note that by the previous steps (i.e., Step 3(b)i as well as Step 2), s is identified
with sσ1···σi−1 , and that the instance associated with a node labeled σ1 · · · σi−1 is
only used to produce a single signature; that is, to the string vσ1···σi−1 0 vσ1···σi−1 1 .
Thus, in this case, A queries Ss at most once.
We stress that this makes crucial use of the fact that, for every τ , the
verification-key associated with the node labeled σ1 · · · σi−1 τ is identical in all
executions of the current step. This fact guarantees that A only needs a single
signature relative to the instance associated with a node labeled σ1 · · · σi−1 ,
and thus queries Ss at most once (and retrieves this signature from memory
if it ever needs this signature again).
ii. Otherwise (i.e., the one-time signature instance associated with the node
labeled σ1 · · · σi−1 is not the j-th such instance), A acts as the signing process:
It invokes Ssσ1···σi−1 , obtains the one-time signature Ssσ1···σi−1 (vσ1···σi−1 0 vσ1···σi−1 1 ),
and adds it to the record. (Note that in this case, A knows sσ1···σi−1 and can
generate by itself signatures relative to it.)
Thus, in both cases, A obtains authσ1···σi−1 = (vσ1···σi−1 0 , vσ1···σi−1 1 , βi−1 ), where
βi−1 = Ssσ1···σi−1 (vσ1···σi−1 0 vσ1···σi−1 1 ).
leaf σ1 · · · σn .) This is done analogously to the previous step (i.e., Step 3c).
Specifically:
i. If the one-time signature instance associated with the (leaf) node labeled
σ1 · · · σn is the j-th instance, then A obtains the one-time signature Ssσ1···σn (α)
by querying Ss .
Note that in this case, s is identified with sσ1···σn , and that an instance associated
with a leaf is only used to produce a single signature. Thus, also in this case
(which is disjoint of Case 3(c)i), A queries Ss at most once.
ii. Otherwise (i.e., the one-time signature instance associated with the node
labeled σ1 · · · σn is not the j-th instance), A acts as the signing process: It
invokes Ssσ1···σn and obtains the one-time signature Ssσ1···σn (α). (Again, in this
case A knows sσ1···σn and can generate by itself signatures relative to it.)
Thus, in both cases, A obtains βn = Ssσ1···σn (α).
and that the various components satisfy all conditions stated in the verification
procedure. (In particular, the sequence (v_{0,0}, v_{0,1}, β_0), ..., (v_{n−1,0}, v_{n−1,1}, β_{n−1}) is
the authentication path (for v_{n−1,σn}) output by A'.) Recall that strings of the form
v_{k,τ} denote the verification-keys included in the output of A', whereas strings of the
form v_x denote the verification-keys (as used in the answers given to A' by A and)
as recorded by A.
Let i be maximal such that the sequence of key-pairs (v_{0,0}, v_{0,1}), ..., (v_{i−1,0}, v_{i−1,1})
appears in some authentication path supplied to A' (by A).^25 Note that
i ∈ {0, ..., n}, where i = 0 means that (v_{0,0}, v_{0,1}) differs from (v_0, v_1), and
i = n means that the sequence ((v_{0,0}, v_{0,1}), ..., (v_{n−1,0}, v_{n−1,1})) equals the
sequence ((v_0, v_1), ..., (v_{σ1···σn−1 0}, v_{σ1···σn−1 1})). In general, the sequence ((v_{0,0},
v_{0,1}), ..., (v_{i−1,0}, v_{i−1,1})) equals the sequence ((v_0, v_1), ..., (v_{σ1···σi−1 0}, v_{σ1···σi−1 1})). In
particular, for i ≥ 1, it holds that v_{i−1,σ_i} = v_{σ1···σ_i}, whereas for i = 0 we shall only
25 That is, i is such that for some β'_0, ..., β'_{i−1} (which may but need not equal β_0, ..., β_{i−1}), the sequence
(v_{0,0}, v_{0,1}, β'_0), ..., (v_{i−1,0}, v_{i−1,1}, β'_{i−1}) is a prefix of some authentication path (for some v_{σ1···σi σ'_{i+1}···σ'_n}) sup-
plied to A' by A. We stress that here we only care about whether or not some v_{k,τ}'s equal the corresponding
verification-keys supplied by A, and ignore the question of whether (in case of equality) the verification-keys
were authenticated using the very same (one-time) signature. We mention that things will be different in the
analogous part of the proof of Theorem 6.5.2 (which refers to super-security).
(a) In case i = n, the output of A' contains the (one-time) signature β_n that satisfies
V_{v_{σ1···σn}}(α', β_n) = 1. Furthermore, α' is different from the (possibly) only docu-
ment to which S_{s_{σ1···σn}} was applied during the emulation of the S'-signer by A,
since by our hypothesis the document α' did not appear as a query of A'. (Re-
call that, by the construction of A, instances of the one-time signature scheme
associated with leaves are only applied to the queries of A'.)
(b) In case i < n, the output of A' contains the (one-time) signature β_i that satisfies
V_{v_{σ1···σi}}(v_{i,0} v_{i,1}, β_i) = 1. Furthermore, v_{i,0} v_{i,1} is different from v_{σ1···σi 0} v_{σ1···σi 1},
which is the (possibly) only string to which S_{s_{σ1···σi}} was applied during the emu-
lation of the S'-signer by A, where the last assertion is due to the maximality of
i (and the construction of A).
Thus, in both cases, A obtains from A' a valid (one-time) signature relative to the
(one-time) instance associated with the node labeled σ1 · · · σi. Furthermore, in both
cases, this (one-time) signature is to a string that did not appear in the record of A.
The question is whether the instance associated with the node labeled σ1 · · · σi is
the j-th instance, for which A set v = v_{σ1···σi}. In case the answer is yes, A obtains
forgery with respect to the (one-time) verification-key v (which it attacks).
In view of this discussion, A acts as follows. It determines i as in the beginning of the
current step (i.e., Step 4), and checks whether v = v_{σ1···σi} (or, almost equivalently,
whether the j-th instance is the one associated with the node labeled σ1 · · · σi). In
case i = n, machine A outputs the string-signature pair (α', β_n); otherwise (i.e.,
i < n) it outputs the string-signature pair (v_{i,0} v_{i,1}, β_i).
26 We shall make comments regarding the minor changes required in order to use ordinary pseudorandom functions.
The first comment is that we shall consider an encoding of strings of length up to n+2 by strings of length
n+3 (e.g., for i ≤ n+2, the string x ∈ {0,1}^i is encoded by x10^{n+2−i}).
27 In case we use ordinary pseudorandom functions, rather than generalized ones, we select r uniformly in {0,1}^{n+3}
such that f_r : {0,1}^{n+3} → {0,1}^{n+3}. Actually, we shall be using the function f̄_r : {0,1}^{n+3} → {0,1}^n derived
from the original f_r by dropping the last 3 bits of the function value.
Signing algorithm S': On input a signing-key (r, s) (in the range of G'_1(1^n)) and a
document α, the algorithm proceeds as follows:
1. It selects uniformly σ1 · · · σn ∈ {0,1}^n.
(Algorithm S' will use the leaf labeled σ1 · · · σn ∈ {0,1}^n to sign the current doc-
ument. Indeed, with exponentially vanishing probability, the same leaf may be
used to sign two different documents, and this will lead to forgery [but only with
negligible probability].)
(Alternatively, to obtain a deterministic signing algorithm, one may set σ1 · · ·
σn ← f_r(select-leaf, α), where select-leaf is a special character.)^28
2. Next, for every i = 1, ..., n and every τ ∈ {0, 1}, the algorithm invokes G and sets
(s_{σ1···σi−1 τ}, v_{σ1···σi−1 τ}) ← G(1^n, f_r(key-gen, σ1 · · · σi−1 τ))
where key-gen is a special character.^29
3. For every i = 1, ..., n, the algorithm invokes S_{s_{σ1···σi−1}} and sets
auth_{σ1···σi−1} =def (v_{σ1···σi−1 0}, v_{σ1···σi−1 1}, S_{s_{σ1···σi−1}}(v_{σ1···σi−1 0} v_{σ1···σi−1 1}, f_r(sign, σ1 · · · σi−1)))
28 In case we use ordinary pseudorandom functions, rather than generalized ones, this alternative can be (directly)
implemented only if it is guaranteed that |α| ≤ n. In such a case, we apply f̄_r to the (n+3)-bit encoding of 00α.
29 In case we use ordinary pseudorandom functions, rather than generalized ones, the argument to f_r is the
(n+3)-bit encoding of 10σ1 · · · σi−1 τ.
30 In case we use ordinary pseudorandom functions, rather than generalized ones, the argument to f_r is the
(n+3)-bit encoding of 11σ1 · · · σi−1.
31 In case we use ordinary pseudorandom functions, rather than generalized ones, the argument to f_r is the
(n+3)-bit encoding of 11σ1 · · · σn.
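The essential point of Step 2 is that a node's key pair is regenerated, rather than stored, because its randomness is fixed by f_r(key-gen, label). The following Python fragment illustrates only this determinism, using HMAC-SHA256 as a stand-in for the (generalized) pseudorandom function; the actual key generator G is left abstract.

import hashlib, hmac, secrets

def f_r(r: bytes, tag: bytes, data: bytes) -> bytes:
    """Stand-in for the generalized pseudorandom function f_r, applied to the pair (tag, data)."""
    return hmac.new(r, tag + b"|" + data, hashlib.sha256).digest()

def node_coins(r: bytes, label: str) -> bytes:
    """Fixed randomness for the node `label`: f_r(key-gen, label). Feeding these coins to the one-time
    key generator G(1^n, .) regenerates the very same key pair on every invocation, so nothing is stored."""
    return f_r(r, b"key-gen", label.encode())

r = secrets.token_bytes(32)
assert node_coins(r, "0110") == node_coins(r, "0110")   # the same node always gets the same coins
assert node_coins(r, "0110") != node_coins(r, "0111")   # distinct nodes get (pseudo)independent coins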
uniformly select a leaf) and generate the path from the root to a given leaf. We consider
a few possibilities:
When actually applying these constructions, one should observe that in variants
of Construction 6.4.14, the size of the tree determines the number of documents that
can be signed, whereas in variants of Construction 6.4.16, the tree size has an even
more drastic effect on the number of documents that can be signed.32 In some cases, a
hybrid of Constructions 6.4.14 and 6.4.16 may be preferable: We refer to a memory-
dependent scheme in which leaves are assigned as in Construction 6.4.14 (i.e., according
to a counter), but the rest of the operation is done as in Construction 6.4.16 (i.e., the
one-time instances are regenerated on the fly, rather than being recorded and retrieved
32 In particular, the number of documents that can be signed should definitely be smaller than the square root of
the size of the tree (or else two documents are likely to be assigned the same leaf ). Furthermore, we cannot use
a small tree (e.g., of size 1,000) even if we know that the total number of documents that will ever be signed is
small (e.g., 10), because in this case, the probability that two documents are assigned the same leaf is too big
(e.g., 1/20).
6.4.3.1. Definition
A collection of universal one-way hash functions is defined analogously to a collection of
collision-free hash functions. The only difference is that the hardness (to form collisions)
requirement is relaxed. Recall that in the case of (a collection of) collision-free hash
functions, it was required that, given the function’s description, it is hard to form an
arbitrary collision under the function. In the case of (a collection of) universal one-way
hash functions, we only require that, given the function’s description h and a pre-image
x_0, it is hard to find an x ≠ x_0 so that h(x) = h(x_0). We refer to this requirement as the
hardness to form designated collisions.
Our formulation of the hardness to form designated collisions is actually seem-
ingly stronger. Rather than being supplied with a (random) pre-image x0 , the collision-
forming algorithm is allowed to select x 0 by itself, but must do so before being presented
with the function’s description. That is, the attack of the collision-forming algorithm
proceeds in three stages: First, the algorithm selects a pre-image x_0; next, it is given a
description of a randomly selected function h; and finally, it is required to output x ≠ x_0
such that h(x) = h(x_0). We stress that the third stage in the attack is also given the
random coins used for producing the initial pre-image (at the first stage). This yields
the following definition, where the first stage is captured by a deterministic polynomial-
time algorithm A0 (which maps a sequence of coin tosses, denoted Uq(n) , to a pre-image
of the function), and the third stage is captured by algorithm A (which is given the very
same coins Uq(n) as well as the function’s description).
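This three-stage process can be written as a small Python experiment; the function names are ours, and the demonstration instantiates it with a uselessly weak candidate (dropping the last byte) merely to show how the stages interact.

import secrets

def designated_collision_experiment(index_gen, hash_eval, A0, A, n, q):
    """Three-stage attack: A0 commits to x0 from coins r, a random function index s is then drawn,
    and A(s, r) must output some x != x0 with h_s(x) = h_s(x0)."""
    r = secrets.token_bytes(q(n))        # the coins U_q(n), handed to both stages of the adversary
    x0 = A0(r)                           # stage 1: commit to a pre-image
    s = index_gen(n)                     # stage 2: a function h_s is selected
    x = A(s, r)                          # stage 3: try to collide with x0, given s and the coins r
    return x != x0 and hash_eval(s, x) == hash_eval(s, x0)

# Demonstration with a useless candidate (dropping the last byte), for which designated collisions are trivial.
assert designated_collision_experiment(
    index_gen=lambda n: secrets.token_bytes(n),
    hash_eval=lambda s, x: x[:-1],
    A0=lambda r: r[:8],
    A=lambda s, r: r[:7] + bytes([r[7] ^ 1]),   # flip the byte that the hash drops
    n=16, q=lambda n: 8)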
We stress that the hardness to form designated collisions condition refers to the
following three-stage process: First, using a uniformly distributed r ∈ {0, 1}q(n) , the
(initial) adversary generates a pre-image x 0 = A0 (r ); next, a function h is selected (by
invoking I (1n )); and, finally, the (residual) adversary A is given h (as well as r used
33 This condition is made merely to avoid annoying technicalities. Note that |s| = poly(n) holds by definition of I .
at the first stage) and tries to find a pre-image x ≠ x_0 such that h(x) = h(x_0). Indeed,
Eq. (6.7) refers to the probability that x =def A(h, r) ≠ x_0 and yet h(x) = h(x_0).
Note that the range specifier (i.e., ℓ) must be super-logarithmic (or else, given s and
x_0 ← U_n, one is too likely to find an x ≠ x_0 such that h_s(x) = h_s(x_0) by uniformly
selecting x in {0,1}^n). Also note that any UOWHF collection yields a collection of
one-way functions (see Exercise 19). Finally, note that any collision-free hashing is
universally one-way hashing, but the converse is false (see Exercise 20). Furthermore,
it is not known whether collision-free hashing can be constructed based on any one-way
functions (in contrast to Theorem 6.4.29, to follow).
6.4.3.2. Constructions
We construct UOWHF collections in several steps, starting with a related but restricted
notion, and relaxing the restriction gradually (until we reach the unrestricted notion
of UOWHF collections). The aforementioned restriction refers to the length of the
arguments to the function. Most importantly, the hardness (to form designated colli-
sions) requirement will refer only to an argument of this length. That is, we refer to the
following technical definition:
34 Here we chose to make a more stringent condition, requiring that |s| = n, rather than n ≤ poly(|s|). In fact, one
can easily enforce this more stringent condition by modifying I into I' so that I'(1^{l(n)}) = I(1^n), for a suitable
function l : N → N satisfying l(n) ≤ poly(n) and n ≤ poly(l(n)).
Construction 6.4.20 (a (d, d−1)-UOWHF): Let f : {0,1}* → {0,1}* be a 1-1 and
length-preserving function, and let S_n^{n−1} be a family of hashing functions such that
log_2 |S_n^{n−1}| = p(n), for some polynomial p. (Specifically, suppose that log_2 |S_n^{n−1}| ∈
{3n−2, 2n}, as in Exercises 22.2 and 23 of Chapter 3.) Then, for every s ∈ S_n^{n−1} ≡
{0,1}^{p(n)} and every x ∈ {0,1}^n, we define h'_s(x) =def h_s(f(x)).
Tedious details: In case |s| ∉ {p(n) : n ∈ N}, we define h'_s =def h'_{s'}, where s' is the
longest prefix of s satisfying |s'| ∈ {p(n) : n ∈ N}. We refer to an index selection
algorithm that, on input 1^m, uniformly selects s ∈ {0,1}^m.
That is, h'_s : {0,1}^{d(|s|)} → {0,1}^{d(|s|)−1}, where d(m) is the largest integer n satisfying
p(n) ≤ m. Note that d is monotonically non-decreasing, and that for 1-1 p's, the cor-
responding d is onto (i.e., d(p(n)) = n for every n).
The following analysis uses, in an essential way, an additional property of the afore-
mentioned families of hashing functions; specifically, we assume that given two pre-
image–image pairs, it is easy to uniformly generate a hashing function (in the family)
that is consistent with these two mapping conditions. Furthermore, to facilitate the
analysis, we use a specific family of hashing functions, presented in Exercise 23 of
Chapter 3: Functions in S_n^{n−1} are described by a pair of elements of the finite field
GF(2^n), so that the pair (a, b) describes the function h_{a,b} that maps x ∈ GF(2^n) to the
(n−1)-bit prefix of the n-bit representation of ax + b, where the arithmetic is of
the field GF(2^n). This specific family satisfies all the additional properties required in
the next proposition (see Exercise 24).
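As a concrete toy instance, the following Python fragment realizes this family over GF(2^8) (using the irreducible polynomial x^8 + x^4 + x^3 + x + 1) and composes it with a 1-1 length-preserving map, as in Construction 6.4.20; the map chosen here is certainly not one-way and serves only to let one check the 2-to-1 property numerically.

def gf_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^8) with the irreducible polynomial x^8 + x^4 + x^3 + x + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def f(x: int) -> int:
    """Toy 1-1, length-preserving map on 8-bit strings (multiplication by a fixed non-zero field element).
    It is certainly not one-way; it only stands in for the one-way permutation of Construction 6.4.20."""
    return gf_mul(0x53, x)

def h(a: int, b: int, y: int) -> int:
    """The hashing family: the 7-bit prefix of the 8-bit representation of a*y + b over GF(2^8)."""
    return (gf_mul(a, y) ^ b) >> 1

def h_prime(a: int, b: int, x: int) -> int:
    """Construction 6.4.20: first apply f, then hash one bit away."""
    return h(a, b, f(x))

# For a != 0 the map y -> a*y + b is a bijection, so h_{a,b} (and hence h'_{a,b}) is exactly 2-to-1:
a, b = 0x37, 0xC4
images = [h_prime(a, b, x) for x in range(256)]
assert all(images.count(v) == 2 for v in set(images))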
which maps p(n)-bit strings to n-bit strings. Then, we construct an algorithm A that
inverts f. On input y = f(x), where n = |y| = |x|, algorithm A proceeds as follows:
(1) Select r_0 uniformly in {0,1}^{p(n)}, and compute x_0 = A'_0(r_0) and y_0 = f(x_0).
(2) Select s uniformly in {s ∈ S_n^{n−1} : h_s(y_0) = h_s(y)}.
(Recall that y is the input to A, and y_0 is generated by A at Step (1).)
(3) Invoke A' on input (s, r_0), and output whatever A' does.
By Condition C2, Step (2) can be implemented in probabilistic polynomial-time.
Turning to the analysis of algorithm A, we consider the behavior of A on input
y = f(x) for a uniformly distributed x ∈ {0,1}^n, which implies that y is uniformly
distributed over {0,1}^n. We first observe that for every fixed r_0 selected in Step (1), if y
is uniformly distributed in {0,1}^n, then s as determined in Step (2) is almost uniformly
distributed in S_n^{n−1}.
On the distribution of s as selected in Step (2): Fixing r_0 ∈ {0,1}^{q(n)} means that
y_0 = f(A'_0(r_0)) ∈ {0,1}^n is fixed. Using the pairwise independence property of
S_n^{n−1}, it follows that for each y ∈ {0,1}^n \ {y_0}, the cardinality of S_y =def {s ∈ S_n^{n−1} :
h_s(y_0) = h_s(y)} equals |S_n^{n−1}|/2^{n−1}. Furthermore, in case h_s is 2-to-1, the string
s resides in exactly two S_y's (one being S_{y_0}). Recalling that all but a negligible
fraction of the h_s's are 2-to-1 (i.e., Condition C1), it follows that each such function
is selected with probability 2 · 2^{−n} · (|S_n^{n−1}|/2^{n−1})^{−1} = |S_n^{n−1}|^{−1}. Other functions
(i.e., non-2-to-1 functions) are selected with negligible probability.
By the construction of A (which ignores y in Step (1)), the probability that f(x_0) = y is
negligible (but we could have taken advantage of this case, too, by augmenting Step (1)
such that if y_0 = y, then A halts with output x_0). Note that in case f(x_0) ≠ y and
h_s is 2-to-1, if A' returns x' such that x' ≠ x_0 and h'_s(x') = h'_s(x_0), then it holds that
f(x') = y.
Justifying the last claim: Let v =def h_s(y) and suppose that h_s is 2-to-1. Then, by
Step (2) and f(x_0) ≠ y, it holds that x = f^{−1}(y) and x_0 are the two pre-images of
v = h'_s(x) = h'_s(x_0) under h'_s, where h'_s = h_s ∘ f is 2-to-1 because f is 1-to-1 and
h_s is 2-to-1. Since x' ≠ x_0 is also a pre-image of v under h'_s, it follows that x' = x.
We conclude that if A' forms designated collisions with probability ε'(n), then A inverts
f with probability ε'(n) − μ(n), where μ is a negligible function (accounting for the
negligible probability that h_s is not 2-to-1). (Indeed, we rely on the fact that s as selected
in Step (2) is distributed almost uniformly, and furthermore that each 2-to-1 function
appears with exactly the right probability.) The proposition follows.
Step II: Constructing (d', d'/2)-UOWHFs. We now take the second step on our
way, and use any (d, d−1)-UOWHF in order to construct a (d', d'/2)-UOWHF. That
is, we construct length-restricted UOWHFs that shrink their input by a factor of 2.
The construction is obtained by composing a sequence of different functions taken
from different (d, d−1)-UOWHFs. That is, each function in the sequence shrinks the
input by one bit, and the composition of ⌈d'/2⌉ functions shrinks the initial d'-bit long
That is, letting x_0 =def x, and x_i ← h_{s_i}(x_{i−1}) for i = 1, ..., ⌈d(n)/2⌉, we set h'_s(x_0) =def
x_{⌈d(n)/2⌉}. (Note that d(|s_i|) = d(n) + 1 − i and |x_i| = d(n) + 1 − i indeed hold.)
Tedious details: We refer to an index selection algorithm that, on input 1^m, deter-
mines the largest integer n such that m ≥ m' =def Σ_{i=1}^{⌈d(n)/2⌉} d^{−1}(d(n)+1−i), uni-
formly selects s_1, ..., s_{⌈d(n)/2⌉} such that s_i ∈ {0,1}^{d^{−1}(d(n)+1−i)} and s_0 ∈ {0,1}^{m−m'},
and lets h'_{s_0,s_1,...,s_{⌈d(n)/2⌉}} =def h'_{s_1,...,s_{⌈d(n)/2⌉}}.
That is, for m = |s|, we have h'_s : {0,1}^{d(n)} → {0,1}^{⌈d(n)/2⌉}, where n is the largest integer
such that m ≥ Σ_{i=1}^{⌈d(n)/2⌉} d^{−1}(d(n)+1−i). Thus, d'(m) = d(n), where n is the length
of the index in the (d, d−1)-UOWHF; that is, we have h'_s : {0,1}^{d'(|s|)} → {0,1}^{⌈d'(|s|)/2⌉},
with d'(|s|) = d(n). Note that for d(n) = Ω(n) (as in Construction 6.4.20), it holds that
d'(O(n^2)) ≥ d(n) and d'(m) = Ω(√m) follows. More generally, if for some polynomial
p it holds that p(d(n)) ≥ n ≥ d(n) (for all n's), then for some polynomial p' it holds
that p'(d'(m)) ≥ m ≥ d'(m) (for all m's), because d'(d(n) · n) ≥ d(n). We call such a
function sufficiently growing; that is, d : N → N is sufficiently growing if there exists
a polynomial p so that for every n it holds that p(d(n)) ≥ n. (E.g., for every fixed
ε, ε' > 0, the function d(n) = ε'n^ε is sufficiently growing.)
Proof Sketch: Intuitively, a designated collision under h'_{s_1,...,s_{⌈d/2⌉}} yields a desig-
nated collision under one of the h_{s_i}'s. That is, let x_0 =def x and x_i ← h_{s_i}(x_{i−1}) for
i = 1, ..., ⌈d(n)/2⌉. Then, if given x and s = (s_1, ..., s_{⌈d/2⌉}) one can find an x' ≠ x such
that h'_s(x) = h'_s(x'), then there exists an i so that x_{i−1} ≠ x'_{i−1} and x_i = h_{s_i}(x_{i−1}) =
h_{s_i}(x'_{i−1}) = x'_i, where the x'_j's are defined analogously to the x_j's. Thus, we obtain
a designated collision under h_{s_i}. We stress that because h'_s does not shrink its in-
put too much, the length of s_i is polynomially related to the length of s (and thus,
forming collisions with respect to h_{s_i} by using the collision-finder for h'_s yields a
contradiction).
The actual proof uses the hypothesis that it is hard to form designated collisions
when one is also given the coins used in the generation of the pre-image (and not
merely the pre-image itself). In particular, we construct an algorithm that forms des-
ignated collisions under one of the h si ’s, when given not only xi−1 but also x0 (which
actually yields xi−1 ). The following details are quite tedious and merely provide an
implementation of this idea.
As stated, the proof is by a reducibility argument. We are given a probabilistic
polynomial-time algorithm A' that forms designated collisions under {h'_s}, with respect
to pre-images produced by a deterministic polynomial-time algorithm A'_0 that maps
p'(n)-bit strings to n-bit strings. We construct algorithms A_0 and A such that A forms
designated collisions under {h_s} with respect to pre-images produced by algorithm A_0,
which maps p(n)-bit strings to n-bit strings, for a suitable polynomial p. (Specifically,
p : N → N is 1-1 and p(n) ≥ p'(d^{−1}(2d(n))) + n + n · d^{−1}(2d(n)), where the factor
of 2 appearing in the expression is due to the shrinking factor of h'_s.)
We start with the description of A_0, that is, the algorithm that generates pre-images
of {h_s}. Intuitively, A_0 selects a random j, uses A'_0 to obtain a pre-image x_0 of {h'_s},
generates random s_0, ..., s_{j−1}, and outputs a pre-image x_{j−1} of {h_{s_j}}, computed by
x_i = h_{s_i}(x_{i−1}) for i = 1, ..., j−1. (Algorithm A will be given x_{j−1} (or rather the coins
used to generate x_{j−1}) and a random h_{s_j}, and will try to form a collision with x_{j−1} under
h_{s_j}.)
(1) Write r = r_1 r_2 r_3 such that |r_1| = n, |r_2| = n · q(n), and |r_3| = p'(q(n)).
Using r_1, determine m in {n+1, ..., n · q(n)} and j ∈ {1, ..., q(n)} such that
both m and j are almost uniformly distributed in the corresponding sets.
(2) Compute the largest integer n' such that m ≥ Σ_{i=1}^{⌈d(n')/2⌉} d^{−1}(d(n')+1−i).
(3) If d^{−1}(d(n')+1−j) ≠ n, then output the d(n)-bit long suffix of r_3.
(Comment: the output in this case is immaterial to our proof.)
(4) Otherwise (i.e., n = d^{−1}(d(n')+1−j), which is the case we care about), do:
(4.1) Let s_0 s_1 · · · s_{j−1} be a prefix of r_2 such that
|s_0| = m − Σ_{i=1}^{⌈d(n')/2⌉} d^{−1}(d(n')+1−i),
and |s_i| = d^{−1}(d(n')+1−i), for i = 1, ..., j−1.
(4.2) Let x_0 ← A'_0(r'), where r' is the p'(d^{−1}(d(n')))-bit long suffix of r_3.
(Comment: x_0 ∈ {0,1}^{d(n')}.)
(4.3) For i = 1, ..., j−1, compute x_i ← h_{s_i}(x_{i−1}).
Output x_{j−1} ∈ {0,1}^{d(n)}.
(Note that d(n) = d(n') − (j−1).)
As stated previously, we only care about the case in which Step (4) is applied.
This case occurs with noticeable probability, and the description of the following
algorithm A refers to it.
As stated previously, we only care about the case in which Step (4) is applied.
This case occurs with noticeable probability, and the description of the following
algorithm A refers to it.
Algorithm A will be given x_{j−1} as produced by A_0 (along with, or actually only, the
coins used in its generation), as well as a random h_{s_j}, and will try to form a collision with
x_{j−1} under h_{s_j}. On input s ∈ {0,1}^n (viewed as s_j) and the coins given to A_0, algorithm
A operates as follows. First, A selects j and s_0, s_1, ..., s_{j−1} exactly as A_0 does (which is
the reason that A needs the coins used by A_0). Next, A tries to obtain a collision under
h'_{s'} by invoking A'(r', s'), where r' is the sequence of coins that A_0 handed to A'_0 and
s' = (s_0, s_1, ..., s_{j−1}, s, s_{j+1}, ..., s_{⌈d(n')/2⌉}), where s_{j+1}, ..., s_{⌈d(n')/2⌉} are uniformly selected
by A. Finally, A outputs h_{s_{j−1}}(···(h_{s_1}(A'(r', s')))···).
Detailed description of A: On input s ∈ {0,1}^n and r ∈ {0,1}^{p(n)}, algorithm A
proceeds as follows.
function’s description). Thus, the resulting construct yields a (d , d /2)-UOWHF for any
polynomially bounded function d (e.g., d (n) = n 2 ), whereas in Construction 6.4.22,
the function d is fixed and satisfies d (n) n. The construction itself amounts to
parsing the input into blocks and applying the same function (taken from a (d, d/2)-
UOWHF) to each block.
where x = x_1 · · · x_t, 0 ≤ |x_t| < d(n), and |x_i| = d(n) for i = 1, ..., t−1. The index-
selection algorithm of {h'_s} is identical to the one of {h_s}.
i-th blocks of x and x' differ, and yet both blocks are mapped by h_s to the same image).
Thus, if algorithm A' succeeds (in forming designated collisions with respect to {h'_s})
with probability ε'(n), then algorithm A succeeds (in forming designated collisions
with respect to {h_s}) with probability at least ε'(n)/t(n), where t(n) is a bound on the
running time of A' (which also upper-bounds the length of the output of A', and so
1/t(n) is a lower bound on the probability that the colliding strings differ at a certain
uniformly selected block). The proposition follows.
Step IV: Obtaining Full-Fledged UOWHFs. The last step on our way consists of
using any quasi-UOWHFs as constructed (in Step III) to obtain full-fledged UOWHFs.
That is, we use quasi-UOWHFs that are applicable to any input length but shrink each
input to half its length (rather than to a fixed length that only depends on the function
description). The resulting construct is a UOWHF (as defined in Definition 6.4.18).
The construction is obtained by composing a sequence of different functions (each
taken from the same quasi-UOWHF); that is, the following construction is analogous
to Construction 6.4.22.
Construction 6.4.26 (a UOWHF): Let {h_s : {0,1}* → {0,1}*}_{s∈{0,1}*} be such that
|h_s(x)| = |x|/2 for every x ∈ {0,1}^{2i·|s|}, where i ∈ N. Then, for every s_1, ..., s_n ∈ {0,1}^n,
every t ∈ N, and every x ∈ {0,1}^{2^t·n}, we define
h'_{s_1,...,s_n}(x) =def (t, h_{s_t}(···h_{s_2}(h_{s_1}(x))···))
That is, we let x_0 =def x, and x_i ← h_{s_i}(x_{i−1}), for i = 1, ..., t.
Tedious details: Strings of lengths that are not of the form 2^t · n are padded into
strings of such form in a standard manner. We refer to an index-selection algorithm
that, on input 1^m, determines n = ⌊√m⌋, uniformly selects s_1, ..., s_n ∈ {0,1}^n and
s_0 ∈ {0,1}^{m−n^2}, and lets h'_{s_0,s_1,...,s_n} =def h'_{s_1,...,s_n}.
Observe that h'_{s_0,s_1,...,s_n}(x) = h'_{s_0,s_1,...,s_n}(x') implies that both equal the pair (t, h_{s_t}(···h_{s_2}
(h_{s_1}(x))···)), where t = ⌈log_2(|x|/n)⌉ = ⌈log_2(|x'|/n)⌉. Note that h'_{s_0,s_1,...,s_n} :
{0,1}* → {0,1}^{n+log_2 n}, and that m = |s_0, s_1, ..., s_n| < (n+1)^2.
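The following Python sketch mirrors the construction on byte strings: a keyed length-halving function (here a SHA-256-based stand-in, not a quasi-UOWHF) is iterated on the padded input, and the output carries the number t of halving steps, matching the observation above.

import hashlib, math

def halve(s: bytes, x: bytes) -> bytes:
    """Toy keyed length-halving function (a SHA-256-based stand-in for a quasi-UOWHF h_s)."""
    return b"".join(hashlib.sha256(s + x[i:i + 2]).digest()[:1] for i in range(0, len(x), 2))

def uowhf(seeds, x: bytes, n: int):
    """In the spirit of Construction 6.4.26: pad x to length 2^t * n, apply h_{s_1}, ..., h_{s_t} in turn
    (each application halving the length), and output the pair (t, final n-byte value)."""
    t = max(0, math.ceil(math.log2((len(x) + 1) / n)))
    assert t <= len(seeds), "not enough seeds for an input of this length"
    padded = x + b"\x80" + b"\x00" * (2 ** t * n - len(x) - 1)   # pad to the next admissible length
    for i in range(t):
        padded = halve(seeds[i], padded)
    return t, padded

seeds = [bytes([i + 1]) * 16 for i in range(10)]   # toy seeds s_1, ..., s_10
t, digest = uowhf(seeds, b"an input that is longer than n bytes", n=8)
assert t == 3 and len(digest) == 8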
The proof of Proposition 6.4.27 is omitted because it is almost identical to the proof of
Proposition 6.4.23.
35 Actually, there is a minor gap between Constructions 6.4.24 and 6.4.26. In the former we constructed functions
that hash every x into a value of length ⌈(|x|+1)/d(n)⌉ · d(n)/2, whereas in the latter we used functions that
hash every x ∈ {0,1}^{2i·n} into a value of length i · n.
Theorem 6.4.28: If one-way permutations exist, then universal one-way hash functions
exist.
Note that the only barrier toward constructing UOWHFs based on arbitrary one-way
functions is Proposition 6.4.21, which refers to one-way permutations. Thus, if we
wish to construct UOWHFs based on any one-way function, then we need to present
an alternative construction of a (d, d−1)-UOWHF (i.e., an alternative to Construc-
tion 6.4.20, which fails in case f is 2-to-1).^36 Such a construction is actually known,
and so the following result is known to hold (but its proof is too complex to fit in this
work):
Theorem 6.4.29: Universal one-way hash functions exist if and only if one-way func-
tions exist.
We stress that the difficult direction is the one referred to earlier (i.e., from one-
way functions to UOWHF collections). For the much easier (converse) direction, see
Exercise 19.
36 For example, if f(σ, x') = (0, f'(x')), for σ ∈ {0,1}, then forming designated collisions under Construc-
tion 6.4.20 is easy: Given (0, x'), one outputs (1, x'), and indeed a collision is formed (already under f).
Proof Sketch: The proof follows the underlying principles of the proof of Proposi-
tion 6.2.7. That is, forgery with respect to (G', S', V') yields either forgery with respect
to (G, S, V) or a collision under the hash function, where in the latter case a desig-
nated collision is formed (in contradiction to the hypothesis regarding the UOWHF).
For the furthermore-part, the observation underlying the proof of Proposition 6.4.7
still holds (i.e., the number of queries made by the forger constructed for (G, S, V)
equals the number of queries made by the forger assumed (toward the contradiction)
for (G', S', V')). Details follow.
Given an adversary A' attacking the complex scheme (G', S', V'), we construct an
adversary A that attacks the ℓ-restricted scheme (G, S, V). The adversary A uses I (the
indexing algorithm of the UOWHF collection) and its oracle S_s in order to emulate the
oracle S'_s for A'. This is done in a straightforward manner; that is, algorithm A emulates
S'_s by using the oracle S_s (exactly as S'_s actually does). Specifically, to answer a query
q, algorithm A generates a_1 ← I(1^n), forwards (a_1, h_{a_1}(q)) to its own oracle (i.e., S_s),
and answers with (a_1, a_2), where a_2 = S_s(a_1, h_{a_1}(q)). (We stress that A issues a single
S_s-query per each S'_s-query made by A'.) When A' outputs a document-signature pair
relative to the complex scheme (G', S', V'), algorithm A tries to use this pair in order
to form a document-signature pair relative to the ℓ-restricted scheme (G, S, V). That
is, if A' outputs the document-signature pair (α, β), where β = (β_1, β_2), then A will
output the document-signature pair (α_2, β_2), where α_2 =def (β_1, h_{β_1}(α)).
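Abstracting away the proof, the scheme being analyzed signs a document by first drawing a fresh hash-function index and then signing the pair (index, hashed document) with the length-restricted scheme. The following Python sketch records just this flow; index_gen, hash_eval, and the restricted signing and verification procedures are placeholders.

def uowhf_hash_and_sign(index_gen, hash_eval, sign_restricted, s, alpha, n):
    """Sign alpha by drawing a fresh hash-function index and signing the pair (index, hashed document)
    with the length-restricted scheme; the signature is the pair (index, restricted signature)."""
    a1 = index_gen(n)                                    # fresh UOWHF index, chosen by the signer
    a2 = sign_restricted(s, (a1, hash_eval(a1, alpha)))
    return (a1, a2)

def uowhf_hash_and_verify(hash_eval, verify_restricted, v, alpha, sig):
    a1, a2 = sig
    return verify_restricted(v, (a1, hash_eval(a1, alpha)), a2)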
Assume that with (non-negligible) probability ε'(n), the (probabilistic polynomial-time) algorithm A' succeeds in existentially forging relative to the complex scheme (G', S', V'). Let (α^(i), β^(i)) denote the i-th query-and-answer pair made by A', and let (α, β) be the forged document-signature pair that A' outputs (in case of success), where β^(i) = (β_1^(i), β_2^(i)) and β = (β_1, β_2). We consider the following two cases regarding the forging event:
Case 1: (β_1, h_{β_1}(α)) ≠ (β_1^(i), h_{β_1^(i)}(α^(i))) for all i's. (That is, the S_s-signed value in the forged signature (i.e., the value (β_1, h_{β_1}(α))) is different from all queries made to S_s.) In this case, the document-signature pair ((β_1, h_{β_1}(α)), β_2) constitutes a success in existential forgery relative to the ℓ-restricted scheme (G, S, V).

Case 2: (β_1, h_{β_1}(α)) = (β_1^(i), h_{β_1^(i)}(α^(i))) for some i. (That is, the S_s-signed value used in the forged signature equals the i-th query made to S_s, although α ≠ α^(i).) Thus, β_1 = β_1^(i) and h_{β_1}(α) = h_{β_1^(i)}(α^(i)), although α ≠ α^(i). In this case, the pair (α, α^(i)) forms a designated collision under h_{β_1^(i)} (and we do not obtain success in existential forgery relative to the ℓ-restricted scheme). We stress that A' selects α^(i) before it is given the description of the function h_{β_1^(i)}, and thus its ability to later produce α ≠ α^(i) such that h_{β_1}(α) = h_{β_1^(i)}(α^(i)) yields a violation of the UOWHF property.
Thus, if Case 1 occurs with probability at least ε'(n)/2, then A succeeds in its attack on (G, S, V) with probability at least ε'(n)/2, which contradicts the security of the ℓ-restricted scheme (G, S, V). On the other hand, if Case 2 occurs with probability at least ε'(n)/2, then we derive a contradiction to the difficulty of forming designated collisions with respect to {h_r}. Details regarding Case 2 follow.
We start with a sketch of the construction of an algorithm that attempts to form designated collisions under a randomly selected hash function. Loosely speaking, we construct an algorithm B' that tries to form designated collisions by emulating the attack of A' on a random instance of (G', S', V') that B' selects by itself. Thus, B' can easily answer any signing-query referred to it by A', but in one of these queries (the index of which is selected at random by B'), algorithm B' will use a hash function given to it from the outside (rather than generating such a function at random by itself). In case A' forges a signature while using this specific function-value pair (as in Case 2), algorithm B' obtains and outputs a designated collision.
We now turn to the actual construction of algorithm B' (which attempts to form designated collisions under a randomly selected hash function). Recall that such an algorithm operates in three stages (see discussion in Section 6.4.3.1): First the algorithm selects a pre-image x_0, next it is given a description of a function h, and finally it is
required to output x ≠ x_0 such that h(x) = h(x_0). We stress that the third stage in the attack is also given the random coins used for producing the pre-image x_0 (at the first stage). Now, on input 1^n, algorithm B' proceeds in three stages:
Stage 1: Algorithm B' selects uniformly i ∈ {1, ..., t(n)}, where t(n) bounds the running time of A'(G'_1(1^n)) (and thus the number of queries it makes). Next, B' selects (s, v) ← G(1^n) and emulates the attack of A'(v) on S'_s, while answering the queries made to S'_s as follows. All queries except the i-th one are emulated in the straightforward manner (i.e., by executing the program of S'_s as stated). That is, for j ≠ i, the j-th query, denoted α^(j), is answered by producing β_1^(j) ← I(1^n), computing β_2^(j) ← S_s(β_1^(j), h_{β_1^(j)}(α^(j))) (using the knowledge of s), and answering with the pair (β_1^(j), β_2^(j)). The i-th query of A', denoted α^(i), will be used as the designated pre-image. Once α^(i) is issued (by A'), algorithm B' completes its first stage (without answering this query), and the rest of the emulation of A' will be conducted by the third stage of B'.

Stage 2: At this point (i.e., after B' has selected the designated pre-image α^(i)), B' obtains a description of a random hashing function h_r (thus completing its second operation stage). That is, this stage consists of B' being given r ← I(1^n).

Stage 3: Next, algorithm B' answers the i-th query (i.e., α^(i)) by applying S_s to the pair (r, h_r(α^(i))). Subsequent queries are emulated in the straightforward manner (as in Stage 1). When A' halts, B' checks whether A' has output a valid document-signature pair (α, β) as in Case 2 (i.e., β_1 = β_1^(j) and h_{β_1}(α) = h_{β_1^(j)}(α^(j)) for some j), and whether the collision formed is indeed on the i-th query (i.e., j = i, which means that h_r(α) = h_r(α^(i))). When this happens, B' outputs α (which is different from α^(i)), and in doing so it has succeeded in forming a designated collision (with α^(i) under h_r).
Now, if Case 2 occurs with probability at least ε'(n)/2 (and A' makes at most t(n) queries), then B' has succeeded in forming a designated collision with probability at least (1/t(n)) · (ε'(n)/2), because the actions of A' are independent of the random value of i. This contradicts the hypothesis that {h_r} is a UOWHF.
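The strategy of B' outlined in the three stages above can be rendered schematically as follows. All of `keygen`, `restricted_sign`, `sample_index`, `uowhf`, and the interface to the forger `A_prime` are hypothetical stand-ins; `A_prime(v, oracle)` is assumed to run the attack by calling `oracle(query)` and to return its candidate forgery. (For simplicity the challenge index is sampled locally, whereas in the actual attack it is given to B' from the outside.)

import random

def find_designated_collision(keygen, restricted_sign, sample_index, uowhf,
                              A_prime, n: int, t: int):
    i = random.randrange(1, t + 1)      # guess which query will yield the collision
    s, v = keygen(n)                    # B' plays the signer of the complex scheme itself
    state = {'count': 0, 'x0': None, 'r': None}

    def oracle(query):
        state['count'] += 1
        if state['count'] == i:
            # Stage 1 ends here: this query is the designated pre-image.
            # Stage 2: obtain the challenge index r (sampled locally only for illustration).
            state['x0'] = query
            state['r'] = sample_index()
            index = state['r']
        else:
            index = sample_index()
        # Stage 3 (and the straightforward emulation of all other queries).
        return (index, restricted_sign(s, (index, uowhf(index, query))))

    alpha, (beta1, beta2) = A_prime(v, oracle)
    x0, r = state['x0'], state['r']
    if x0 is not None and beta1 == r and alpha != x0 and uowhf(r, alpha) == uowhf(r, x0):
        return (x0, alpha)              # a designated collision under h_r
    return None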
As mentioned earlier, the furthermore-part of the proposition follows by observing that if the forging algorithm A' makes at most one query, then the same holds for the algorithm A constructed in the beginning of the proof. Thus, if (G', S', V') can be broken via a single-message attack, then either (G, S, V) can be broken via a single-message attack or one can form designated collisions (with respect to {h_r}). In both cases, we reach a contradiction.
Theorem 6.4.32: If there exist universal one-way hash functions, then secure one-time
signature schemes exist, too.
Corollary 6.4.33: If one-way permutations exist, then there exist secure signature schemes.
Like Corollary 6.4.10, Corollary 6.4.33 asserts the existence of secure (public-key) sig-
nature schemes, based on an assumption that does not mention trapdoors. Furthermore,
the assumption made in Corollary 6.4.33 seems weaker than the one made in Corol-
lary 6.4.10. We can further weaken the assumption by using Theorem 6.4.29 (which
was stated without a proof), rather than Theorem 6.4.28. Specifically, combining The-
orems 6.4.29, 6.4.32, and 6.4.9, we establish Theorem 6.4.1. That is, secure signature
schemes exist if and only if one-way functions exist. Furthermore, as in the case of MACs
(see Theorem 6.3.8), the resulting signature schemes have signatures of fixed length.
6.5* Some Additional Properties

We briefly discuss several properties of interest that some signature schemes enjoy.
We first discuss properties that seem unrelated to the original purpose of signature
schemes but are useful toward utilizing a signature scheme as a building block toward
constructing other primitives (e.g., see Section 5.4.4.4). These (related) properties are
having unique valid signatures and being super-secure, where the latter term indi-
cates the infeasibility of finding a different signature even to a document for which a
signature was obtained during the attack. We next turn to properties that offer some
advantages in the originally intended applications of signature schemes. Specifically,
we consider properties that allow for speeding-up the response-time in some settings
(see Sections 6.5.3 and 6.5.4), and a property supporting legitimate revoking of forged
signatures (see Section 6.5.5).
Note that this property is related, but not equivalent, to the question of whether or
not the signing algorithm is deterministic (which is considered in Exercise 1). Indeed,
if the signing algorithm is deterministic, then for every key pair (s, v) and document α,
the result of applying Ss to α is unique (and indeed Vv (α, Ss (α)) = 1). Still, this does
not mean that there is no other β (which is never produced by applying Ss to α) such
that Vv (α, β) = 1. On the other hand, the unique signature property may hold even in
case the signing algorithm is randomized, but (as mentioned earlier) this randomization
can be eliminated anyhow.
Can Secure Signature Schemes Have Unique Signatures? The answer is definitely
affirmative, and in fact we have seen several such schemes in the previous sections.
Specifically, all private-key signature schemes presented in Section 6.3 have unique sig-
natures. Furthermore, every secure private-key signature scheme can be transformed
into one having unique signatures (e.g., by combining deterministic signing as in
Exercise 1 with canonical verification as in Exercise 2). Turning to public-key signature
schemes, we observe that if the one-way function f used in Construction 6.4.4 is 1-1,
then the resulting secure length-restricted one-time (public-key) signature scheme has
unique signatures (because each f -image has a unique pre-image). In addition, Con-
struction 6.2.6 (i.e., the basic hash-and-sign paradigm) preserves the unique signature
property. Let us summarize all these observations:
1. Assuming the existence of one-way functions, there exist secure message authenti-
cation schemes having the unique signature property.
2. Assuming the existence of 1-1 one-way functions, there exist secure length-restricted
one-time (public-key) signature schemes having the unique signature property.
3. Assuming the existence of 1-1 one-way functions and collision-free hashing collec-
tions, there exist secure one-time (public-key) signature schemes having the unique
signature property.
In addition, it is known that secure (full-fledged) signature schemes having the unique
signature property can be constructed based on a mild variant of the standard RSA
assumption (see reference in Section 6.6.5). Still, this leaves open the question of
whether or not secure signature schemes having the unique signature property exist if
and only if secure signature schemes exist.
In other words, super-secure signature schemes exist if and only if secure signature
schemes exist. We comment that the signature scheme constructed in the following
proof does not have the unique signature property.
Proof: Starting from (Part 2 of) Theorem 6.5.1, we can use any 1-1 one-way func-
tion to obtain super-secure length-restricted one-time signature schemes. However,
wishing to use arbitrary one-way functions, we will first show that universal one-way
hashing functions can be used (instead of 1-1 one-way functions) in order to obtain
super-secure length-restricted one-time signature schemes. Next, we will show that
super-security is preserved by two transformations presented in Section 6.4: specifi-
cally, the transformation of length-restricted one-time signature schemes into one-time
signature schemes (i.e., Construction 6.4.30), and the transformation of the latter to
(full-fledged) signature schemes (i.e., Construction 6.4.16). Applying these transfor-
mations (to the first scheme), we obtain the desired super-secure signature scheme.
Recall that Construction 6.4.30 also uses universal one-way hashing functions, but the
latter can be constructed using any one-way function (cf. Theorem 6.4.29).37
Claim 6.5.2.1: If there exist universal one-way hashing functions, then for every polynomially-bounded ℓ : N → N, there exist super-secure ℓ-restricted one-time signature schemes.
Proof Sketch: We modify Construction 6.4.4 by using universal one-way hashing functions (UOWHFs) instead of one-way functions. Specifically, for each pre-image placed in the signing-key, we select at random and independently a UOWHF, and place its description both in the signing- and verification-keys. That is, on input 1^n, we uniformly select s_1^0, s_1^1, ..., s_{ℓ(n)}^0, s_{ℓ(n)}^1 ∈ {0,1}^n and UOWHFs h_1^0, h_1^1, ..., h_{ℓ(n)}^0, h_{ℓ(n)}^1, and compute v_i^j = h_i^j(s_i^j), for i = 1, ..., ℓ(n) and j = 0, 1. We let s = ((s_1^0, s_1^1), ..., (s_{ℓ(n)}^0, s_{ℓ(n)}^1)),
37 We comment that a simpler proof suffices in case we are willing to use a one-way permutation (rather than
an arbitrary one-way function). In this case, we can start from (Part 2 of ) Theorem 6.5.1 (rather than prove
Claim 6.5.2.1), and use Theorem 6.4.28 (rather than Theorem 6.4.29, which has a more complicated proof ).
h = ((h_1^0, h_1^1), ..., (h_{ℓ(n)}^0, h_{ℓ(n)}^1)), and v = ((v_1^0, v_1^1), ..., (v_{ℓ(n)}^0, v_{ℓ(n)}^1)), and output the key-pair (s, v) = ((h, s), (h, v)) (or, actually, we may set (s, v) = (s, (h, v))). Signing and verification are modified accordingly; that is, the sequence (β_1, ..., β_ℓ) is accepted as a valid signature of the string σ_1 ⋯ σ_ℓ (with respect to the verification-key v) if and only if h_i^{σ_i}(β_i) = v_i^{σ_i} for every i. In order to show that the resulting scheme is super-
secure under a chosen one-message attack, we adapt the proof of Proposition 6.4.5.
Specifically, fixing such an attacker A, we consider the event in which A violated the
super-security of the scheme. There are two cases to consider:
1. The valid signature formed by A is to the same document for which A has obtained a
different signature (via its single query). In this case, for at least one of the UOWHFs
contained in the verification-key, we obtain a pre-image (of the image also contained
in the verification-key) that is different from the one contained in the signing-key.
Adapting the construction presented in the proof of Proposition 6.4.5, we derive (in
this case) an ability to form designated collisions (in contradiction to the UOWHF
property). We stress that the pre-images contained in the signing-key are selected
independently of the description of the UOWHFs (because both are selected inde-
pendently by the key-generation process). In fact, we obtain a designated collision
for a uniformly selected pre-image.
2. The valid signature formed by A is to a document that is different from the one
for which A has obtained a signature (via its single query). In this case, the proof
of Proposition 6.4.5 yields the ability to invert a randomly selected UOWHF (on
a randomly selected image), which contradicts the UOWHF property (as shown in
Exercise 19).
Thus, in both cases we derive a contradiction, and the claim follows.
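A minimal Python sketch of the modified one-time scheme used in this proof (one independently chosen UOWHF per secret block) may help fix ideas; `sample_uowhf_index` and `uowhf` are hypothetical stand-ins for the indexing algorithm I and the functions h_r, and ell denotes the message length in bits.

import os

def keygen(sample_uowhf_index, uowhf, n_bytes: int, ell: int):
    # s[i][j] are random secret blocks; h[i][j] are independently chosen UOWHF indices;
    # v[i][j] = h_i^j(s_i^j) are the corresponding images placed in the verification-key.
    s = [[os.urandom(n_bytes) for _ in range(2)] for _ in range(ell)]
    h = [[sample_uowhf_index() for _ in range(2)] for _ in range(ell)]
    v = [[uowhf(h[i][j], s[i][j]) for j in range(2)] for i in range(ell)]
    return s, (h, v)

def sign(s, message_bits):
    # Reveal the block selected by each message bit.
    return [s[i][b] for i, b in enumerate(message_bits)]

def verify(uowhf, hv, message_bits, signature) -> bool:
    h, v = hv
    return all(uowhf(h[i][b], signature[i]) == v[i][b]
               for i, b in enumerate(message_bits))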
Claim 6.5.2.2: When applying the revised hash-and-sign construction (i.e., Construc-
tion 6.4.30) to a super-secure length-restricted signature scheme, the result is a super-
secure signature scheme. In case the length-restricted scheme is only super-secure un-
der a chosen one-message attack, the same holds for the resulting (length-unrestricted)
scheme.
Proof Sketch: We follow the proof of Proposition 6.4.31, and use the same construc-
tion of a forger for the length-restricted scheme (based on the forger for the complex
scheme). Furthermore, we consider the two forgery cases analyzed in the proof of
Proposition 6.4.31:38
Case 1: (β_1, h_{β_1}(α)) ≠ (β_1^(i), h_{β_1^(i)}(α^(i))) for all i's. In this case, the analysis is exactly as in the original proof. Note that it does not matter whether or not α = α^(i), since in both subcases we obtain a valid signature for a new string with respect to the
38 Recall that (α, β) denotes the document-signature pair output by the original forger (i.e., for the complex scheme), whereas (α^(i), β^(i)) denotes the i-th query-answer pair (to that scheme). The document-signature pair that we output (as a candidate forgery with respect to a length-restricted scheme) is (α_2, β_2), where α_2 ≝ (β_1, h_{β_1}(α)) and β = (β_1, β_2). Recall that a generic valid document-signature pair for the complex scheme has the form (α', β'), where β' = (β'_1, β'_2) satisfies V_v((β'_1, h_{β'_1}(α')), β'_2) = 1.
39 Recall that forging a signature for the general scheme requires either using an authentication path supplied
by the (general) signing-oracle or producing an authentication path different from all paths supplied by the
(general) signing-oracle. These are the cases considered here. In contrast, in the proof of Proposition 6.4.15
we considered only the “text part” of these paths, ignoring the question of whether or not the authenticating
(one-time) signatures (provided as part of these paths) are equal.
associated with the sibling of this leaf was used for signing an actual document.)
In this subcase, as in the proof of Proposition 6.4.15, we obtain (ordinary) forgery
with respect to the instance of (G, S, V ) associated with the leaf (without making
any query to that instance of the one-time scheme).
(b) Otherwise (i.e., the instance associated with this leaf was used for signing an
actual document), the forged document-signature pair differs from the query-
answer pair that used the same leaf. The difference is either in the actual doc-
ument or in the part of the complex-signature that corresponds to the one-time
signature produced at the leaf (because, by the case hypothesis, the authenti-
cation paths are identical). In both subcases this yields violation of the super-
security of the instance of (G, S, V ) associated with that leaf. Specifically, in the
first sub-subcase, we obtain a one-time signature to a different document (i.e.,
violation of ordinary security), whereas in the second sub-subcase, we obtain
a different one-time signature to the same document (i.e., only a violation of
super-security). We stress that in both subcases, the violating signature is ob-
tained after making a single query to the instance of (G, S, V ) associated with
that leaf.
2. We now turn to the second case (i.e., forgery with respect to (G', S', V') is obtained by producing an authentication path different from all paths supplied by the signing-oracle). In this case, we obtain violation of the (one-time) super-security of the
scheme (G, S, V ) associated with one of the internal nodes (specifically the first node
on which the relevant paths differ). The argument is similar (but not identical) to the
one given in the proof of Proposition 6.4.15. Specifically, we consider the maximal
prefix of the authentication path provided by the forger that equals a corresponding
prefix of an authentication path provided by the signing-oracle (as part of its answer).
The extension of this path in the complex-signature provided by the forger either
uses a different pair of (one-time) verification-keys or uses a different (one-time)
signature to the same pair. In the first subcase, we obtain a one-time signature to
a different document (i.e., violation of ordinary security), whereas in the second
subcase, we obtain a different one-time signature to the same document (i.e., only a
violation of super-security). We stress that in both subcases, the violating signature
is obtained after making a single query to the instance of (G, S, V ) associated with
that internal node.
Thus, in both cases we reach a contradiction to the super-security of the one-time
signature scheme, which establishes our claim that the general signature scheme must
be super-secure.
Combining the three claims (and recalling that universal one-way hashing functions
can be constructed using any one-way function [cf. Theorem 6.4.29]), the theorem
follows.
in two steps, where the first step is independent of the actual message to be signed. That is, the computation of S_s(α) can be decoupled into two steps, performed by randomized algorithms that are denoted S^off and S^on, respectively, such that S_s(α) ← S^on(α, S^off(s)). Thus, one may prepare (or precompute) S^off(s) before the document is known (i.e., "off-line"), and produce the actual signature (on-line) once the document α is presented (by invoking algorithm S^on on input (α, S^off(s))). This yields an improvement in the on-line response-time to signing requests, provided that S^on is significantly faster than S itself. This improvement is worthwhile in many natural settings in which on-line response-time is more important than off-line processing time.
We stress that S^off must be randomized (because, otherwise, S^off(s) could simply be incorporated in the signing-key). Indeed, one may view algorithm S^off as an augmentation of the key-generation algorithm that produces random extensions of the signing-key on the fly (i.e., after the verification-key has already been determined). We stress that algorithm S^off is invoked once per document to be signed, but this invocation can take place at any time (even before the document to be signed has been determined). (In contrast, it may be insecure to reuse the result obtained from S^off for two different signatures.)
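A minimal sketch of this decoupling, with `precompute` and `finish` as hypothetical stand-ins for S^off and S^on, is given below; the only points it illustrates are that the expensive, message-independent work can be done ahead of time and that each precomputed token must be used for at most one signature.

class OfflineOnlineSigner:
    def __init__(self, s, precompute, finish):
        self.s = s
        self.precompute = precompute   # randomized S^off
        self.finish = finish           # fast S^on
        self.pool = []                 # precomputed, single-use tokens

    def refill(self, k: int):
        # Off-line phase: run S^off(s) k times, e.g., during idle periods.
        self.pool.extend(self.precompute(self.s) for _ in range(k))

    def sign(self, alpha: bytes):
        # On-line phase: consume exactly one token per document.
        if not self.pool:
            self.pool.append(self.precompute(self.s))
        token = self.pool.pop()
        return self.finish(alpha, token)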
40 For example, when using the one-time signature scheme suggested in Proposition 6.4.7, producing one-
time signatures amounts to applying a collision-free hashing function and outputting corresponding parts of
the signing-key. This is all that needs to be performed in the on-line step of Construction 6.4.16. In contrast, the
off-line step (of Construction 6.4.16) calls for n applications of a pseudorandom function, n applications of
the key-generation algorithm of the one-time signature scheme, and n applications of the signing algorithm of
the one-time signature scheme.
modified document α' in time proportional to the number of edit operations (rather than proportional to |α'|). Indeed, here time is measured in a direct-access model of compu-
tation. Of course, the time saved on the “signing side” should not come at the expense of
a significant increase in verification time. In particular, verification time should depend
only on the length of the final document (and not on the number of edit operations).41
An incremental signing process is beneficial in settings where one needs to sign
many textually related documents (e.g., in simple contracts, much of the text is almost
identical and the few edit changes refer to the party’s specific details, as well as to
specific clauses that may be modified from their standard form in order to meet the
party’s specific needs). In some cases, the privacy of the edit sequence may be of
concern; that is, one may require that the final signature be distributed in a way that
only depends on the final document (rather than depending also on documents that
“contributed” signatures to the process of generating the final signature).
Furthermore, in both parts, the resulting schemes protect the privacy of the edit
sequence.
41 This rules out the naive (unsatisfactory) solution of providing a signature of the original document along with a
signature of the sequence of edit operations. More sophisticated variants of this naive solution (e.g., refreshing
the signature whenever enough edits have occurred) are not ruled out here, but typically they will not satisfy
the privacy requirement discussed in the sequel.
One important observation is that a 2–3 tree supports the said operations while incurring
only a logarithmic (in its size) cost; that is, by modifying only the links of logarithmically
many nodes in the tree. Thus, only the tags of these nodes and their ancestors in the tree
need to be modified in order to form the correspondingly modified signature. (Privacy
of the edit sequence is obtained by randomizing the standard modification procedure for
2–3 trees.) By analogy to Construction 6.2.13 (and Proposition 6.2.14), the incremental
signature scheme is secure.
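The following toy sketch illustrates why an edit forces recomputation of only logarithmically many tags. It uses a plain binary hash tree (rather than the 2-3 trees discussed above) and SHA-256 as a stand-in tag function, and it assumes the number of leaves is a power of two; after a leaf is replaced, only the tags on its path to the root change.

import hashlib

def tag(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def build(leaves):
    # leaves: a list whose length is a power of two (an assumption of this toy sketch)
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([tag(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels                    # levels[-1][0] is the root tag

def replace_leaf(levels, index: int, new_leaf: bytes) -> bytes:
    levels[0][index] = new_leaf
    for depth in range(1, len(levels)):      # only log(#leaves) tags are recomputed
        index //= 2
        left, right = levels[depth - 1][2 * index], levels[depth - 1][2 * index + 1]
        levels[depth][index] = tag(left, right)
    return levels[-1][0]                     # the new root tag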
1. Proper operation: In case the user is honest, the signatures produced by it will pass
the verification procedure (with respect to the corresponding verification-key).
2. Infeasibility of forgery: In case the user is honest, forgery is infeasible in the standard
sense. That is, every feasible chosen message attack may succeed (in generating a
valid signature to a new message) only with negligible probability.
3. Revocation of forged signatures: In case the user is honest and forgery is committed, the user can prove that indeed forgery has been committed. That is, for every chosen message attack (even a computationally unbounded one)43 that produces a valid signature to a new message, except with negligible probability, the user can efficiently convince anyone (who knows the verification-key) that this valid signature was forged (i.e., produced by somebody else).
4. Infeasibility of revoking unforged signatures: It is infeasible for a user to create
a valid signature and later convince someone that this signature was forged (i.e.,
produced by somebody else). Indeed, it is possible (but not feasible) for a user to
cheat here.
Furthermore, Property 3 (i.e., revocation of forged signatures) holds also in case the
administrating entity participates in the forgery and even if it behaves improperly at the
key-generation stage. (In contrast, the other items hold only if the administrating entity
behaves properly during the key-generation stage.)
To summarize, fail-stop signature schemes allow proving that forgery has occurred,
and so offer an information-theoretic security guarantee to the potential signers (yet the
42 Allowing memory-dependent signing is essential to the existence of secure fail-stop signature schemes; see
Exercise 25.
43 It seems reasonable to restrict even computationally unbounded adversaries to polynomially many signing
requests.
6.6. Miscellaneous
44 We refer to the natural convention by which a proof of forgery frees the signer of any obligations implied by the
document. In this case, when accepting a valid signature, the recipient is only guaranteed that it is infeasible for
the signer to revoke the signature.
On the Random Oracle Methodology. The Random Oracle Methodology [92, 28]
consists of two steps: First, one designs an ideal system in which all parties (including
the adversary) have oracle access to a truly random function, and proves this ideal
system to be secure (in which case, one says that the system is secure in the ran-
dom oracle model). Next, one replaces the random oracle with a “good cryptographic
hashing function,” providing all parties (including the adversary) with the succinct de-
scription of this function, and hopes that the resulting (actual) scheme is secure.46 We
warn that this hope has no sound justification. Furthermore, there exist encryption and
45 Needless to say, we did not even consider presenting schemes that are not known to satisfy some robust notion
of security.
46 Recall that, in contrast, the methodology of Section 3.6.3 (which is applied often in the current chapter) refers
to a situation in which the adversary does not have direct oracle access to the random function, and does not
obtain the description of the pseudorandom function used in the latter implementation.
signature schemes that are secure in the Random Oracle Model, but replacing the ran-
dom function (used in them) by any function ensemble yields a totally insecure scheme
(cf., [54]).
47 The flaw in this folklore is rooted in implicit (unjustified) assumptions regarding the notion of a “constructive
proof of security” (based on factoring). In particular, it was implicitly assumed that the signature scheme uses
a verification-key that equals a composite number, and that the proof of security reduces the factoring of such a
composite N to forging with respect to the verification-key N . In such a case, the folklore suggested that the re-
duction yields an oracle machine for factoring the verification-key, where the oracle is the corresponding signing-
oracle (associated with N ), and that the factorization of the verification-key allows for efficiently producing
signatures to any message. However, none of these assumptions is justified. In contrast, the verification-key in the
scheme of [125] consists of a pair (N , x), and its security is proven by reducing the factoring of N to forging with
respect to the verification-key (N , r ), where r is randomly selected by the reduction. Furthermore, on input N , the
(factoring) reduction produces a verification-key (N , r ) that typically does not equal the verification-key (N , x)
being attacked, and so being given access to a corresponding signing-oracle does not allow the factoring of N .
was introduced (and first instantiated) in [85]. The notion of incremental crypto-
graphic schemes (and, in particular, incremental signature schemes) was introduced
and instantiated in [18, 19]. In particular, the incremental MAC of [19] (i.e., Part 1 of
Theorem 6.5.3) builds on the message-authentication scheme of [22], and the incre-
mental signature scheme that protects the privacy of the edit sequence is due to [158]
(building upon [19]). Fail-stop signatures were defined and constructed in [167].
6.6.7. Exercises
Exercise 1: Deterministic signing and verification algorithms:
Guideline (for Part 1): Augment the signing-key with a description of a pseudo-
random function, and apply this function to the string to be signed in order to extract
the randomness used by the original signing algorithm.
Guideline (for Part 2): Analogous to Part 1. (Highlight your use of the private-key
hypothesis.) Alternatively, see Exercise 2.
Guideline (for Part 4): First transform the signature scheme into one in which all valid signatures are of a length that is bounded by a polynomial in the security parameter (and the length of the messages). Let ℓ(n) denote the length of the documents and m(n) denote the length of the corresponding signatures. Next, amplify the verification algorithm such that its error probability is smaller than 2^{−(ℓ(n)+m(n)+n)}. Finally, incorporate the coin tosses of the verification algorithm in the verification-key, making the former deterministic.
Exercise 2: Canonical verification in the private-key version: Show that, without loss
of generality, the verification algorithm of a private-key signature scheme may con-
sist of comparing the alleged signature to one produced by the verification algo-
rithm itself; that is, the verification algorithm uses a verification-key that equals the
signing-key and produces signatures exactly as the signing algorithm.
Why does this claim fail with respect to public-key schemes?
Guideline: Use Part 1 of Exercise 1, and conclude that on a fixed input, the signing
algorithm always produces the same output. Use the fact that (by Exercise 7.3) the
existence of message-authentication schemes implies the existence of pseudoran-
dom functions, which are used in Part 1 of Exercise 1.
1. Show that in case the private-key signature scheme has unique valid signatures,
it is secure against augmented attacks if and only if it is secure against ordinary
attacks (as in Definition 6.1.2).
2. Assuming the existence of secure private-key signature schemes (as in
Definition 6.1.2), present such a secure scheme that is insecure under augmented
attacks.
Guideline (Part 1): Analyze the emulation outlined in the proof of Proposi-
tion 6.1.3. Specifically, ignoring the redundant verification-queries (for which the
answer is determined by previous answers), consider the probability that the em-
ulation has gambled correctly on all the verification-queries up to (and including)
the first such query that should be answered affirmatively.
Guideline (Part 2): Given any secure MAC, (G, S, V), assume without loss of generality that in the key-pairs output by G, the verification-key equals the signing-key. Consider the scheme (G', S', V') (with G' = G), where S'_s(α) = (S_s(α), 0), V'_v(α, (β, 0)) = V_v(α, β), and V'_v(α, (β, i, σ)) = 1 if both V_v(α, β) = 1 and the i-th bit of v is σ. Prove that (G', S', V') is secure under ordinary attacks, and present an augmented attack that totally breaks it (i.e., obtains the signing-key s = v).
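A sketch of that counterexample, with `mac` and `vrfy` as hypothetical stand-ins for the underlying S and V, and with the verification-key v taken to be a byte string equal to the signing-key:

def sign_prime(mac, s, alpha):
    # S'_s(alpha) = (S_s(alpha), 0)
    return (mac(s, alpha), 0)

def verify_prime(vrfy, v: bytes, alpha, sig) -> bool:
    if len(sig) == 2:                       # ordinary signatures: (beta, 0)
        beta, flag = sig
        return flag == 0 and vrfy(v, alpha, beta)
    beta, i, sigma = sig                    # crafted verification-queries: (beta, i, sigma)
    bit_i = (v[i // 8] >> (7 - (i % 8))) & 1   # leaks the i-th bit of the key
    return vrfy(v, alpha, beta) and bit_i == sigma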
Exercise 4: The signature may reveal the document: Both for private-key and public-
key signature schemes, show that if such secure schemes exist, then there exist
secure signature schemes in which any valid signature to a message allows for
efficient recovery of the entire message.
Exercise 5: On the triviality of some length-restricted signature schemes:

Guideline (Part 1): On input 1^n, the key-generator uniformly selects s ∈ {0,1}^{2^{ℓ(n)}·n}, and outputs the key pair (s, s). View s = s_1 ⋯ s_{2^{ℓ(n)}}, where each s_i is an n-bit long string, and consider any fixed ordering of the 2^{ℓ(n)} strings of length ℓ(n). The signature to α ∈ {0,1}^{ℓ(n)} is defined as s_i, where i is the index of α in the latter ordering.
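A sketch of this trivial scheme, assuming ℓ(n)-bit messages given as bit lists (so the key has 2^{ℓ(n)} blocks and is exponentially long unless ℓ is logarithmic):

import os

def keygen(n_bytes: int, ell: int):
    # One random block per possible ell-bit message; signing-key = verification-key.
    s = [os.urandom(n_bytes) for _ in range(2 ** ell)]
    return s, s

def index_of(alpha_bits) -> int:
    # The position of alpha in the fixed (lexicographic) ordering.
    return int(''.join(map(str, alpha_bits)) or '0', 2)

def sign(s, alpha_bits):
    return s[index_of(alpha_bits)]

def verify(v, alpha_bits, sigma) -> bool:
    return v[index_of(alpha_bits)] == sigma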
Exercise 6: Failure of Construction 6.2.3 in case ℓ(n) = O(log n): Show that if Construction 6.2.3 is used with a logarithmically bounded ℓ, then the resulting scheme is insecure.

Guideline: Note that by asking for polynomially many signatures, the adversary may obtain two S_s-signatures that use the same (random) identifier. Specifically, consider making the queries αα, for all possible α ∈ {0,1}^{ℓ(n)}, and note that if αα and α'α' are S_s-signed using the same identifier, then we can derive a valid S_s-signature to αα'.
Exercise 7: Secure MACs imply one-way functions: Prove that the existence of se-
cure message-authentication schemes implies the existence of one-way functions.
Specifically, let (G, S, V ) be as in the hypothesis.
1. To simplify the following two items, show that, without loss of generality, G(1n )
uses n coins and outputs a signing-key of length n.
2. Assume first that S is a deterministic signing algorithm. Prove that f(r, α_1, ..., α_m) ≝ (S_s(α_1), ..., S_s(α_m), α_1, ..., α_m) is a one-way function, where s = G_1(r) is the signing-key generated with coins r, all α_i's are of length n = |r|,
and m = (n).
3. Extend the proof to handle randomized signing algorithms, thus establishing the
main result.
Guideline (Parts 2 and 3): Note that with high probability (over the choice of the α_i's), the m signatures (i.e., the S_s(α_i)'s) determine a set R such that for every r' ∈ R, it holds that S_{G_1(r')}(α) = S_s(α) for most α ∈ {0,1}^n. (Note that G_1(r') does not necessarily equal s.) Show that this implies that the ability to invert f yields the
ability to forge (under a chosen message attack). (Hint: Use m random signing-
queries to produce a random image of f , and use the obtained pre-image under
f , which contains an adequate signing-key, to forge a signature to a new random
message.) The extension to randomized signing is obtained by augmenting the pre-
image of the one-way function with the coins used by the m invocations of the
signing algorithm.
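The mapping from Part 2 can be rendered schematically as follows, with `keygen_from_coins` and `mac` as hypothetical stand-ins for G_1 and the deterministic S:

def f(keygen_from_coins, mac, r: bytes, alphas):
    # The candidate one-way function: derive the signing-key from the coins r and
    # output the signatures on alpha_1, ..., alpha_m together with the alphas themselves.
    s = keygen_from_coins(r)
    return tuple(mac(s, a) for a in alphas) + tuple(alphas)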
Guideline: Use the fact that the MAC may reveal the first part of its argument,
whereas the hashing function may yield an output value in which the second part
is fixed. Furthermore, it may be easy to infer the hashing function from sufficiently
many input–output pairs, and it may be easy to find a random pre-image of a
given hash function on a given image. Present constructions that satisfy all these
conditions, and show how combining them yields the desired result.
Exercise 10: Easily obtaining pseudorandom functions from certain MACs (advanced exercise, based on [162]): Let (G, S, V) be a secure message-authentication scheme, and suppose that S is deterministic. Furthermore, suppose that |G_1(1^n)| = n and that for every s, x ∈ {0,1}^n it holds that |S_s(x)| = ℓ(n) ≝ |S_s(1^n)|. Consider the Boolean function ensemble {f_{s_1,s_2} : {0,1}^{|s_1|} → {0,1}}_{s_1,s_2}, where s_1 is selected according to G_1(1^n) and s_2 ∈ {0,1}^{ℓ(n)} is uniformly distributed, such that f_{s_1,s_2}(α) is defined to equal the inner product mod 2 of S_{s_1}(α) and s_2. Prove that this function ensemble is pseudorandom (as defined in Definition 3.6.9 for the case d(n + ℓ(n)) = n and r(n) = 1).
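A small sketch of the ensemble in question, with `mac` a hypothetical stand-in for the deterministic signing algorithm S (its output is assumed to have the same length as s2):

def inner_product_bit(a: bytes, b: bytes) -> int:
    # Inner product mod 2 of two equal-length bit strings.
    x = int.from_bytes(a, 'big') & int.from_bytes(b, 'big')
    return bin(x).count('1') % 2

def f(mac, s1: bytes, s2: bytes, alpha: bytes) -> int:
    tag = mac(s1, alpha)          # S_{s1}(alpha)
    return inner_product_bit(tag, s2)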
Guideline: Consider hybrid experiments such that in the i-th hybrid the first i
queries are answered by a truly random Boolean function and the rest of the queries
are answered by a uniformly distributed f s1 ,s2 . (Note that it seems important to use
this non-standard order of random versus pseudorandom answers.) Show that distin-
guishability of the i-th and i + 1st hybrids implies that a probabilistic polynomial-
time oracle machine can have a non-negligible advantage in the following game. In
the game, the machine is first asked to select α; next f s1 ,s2 is uniformly selected,
and the machine is given s2 as well as oracle access to Ss1 (but is not allowed the
query α) and is asked to guess f s1 ,s2 (α) (or, equivalently, to distinguish f s1 ,s2 (α)
from a truly random bit).48 At this point, one may apply the proof of Theorem 2.5.2,
48 Note that the particular order (of random versus pseudorandom answers in the hybrids) allows this oracle
machine to generate the (corresponding) hybrid while playing this game properly. That is, the player answers
and deduce that the said oracle machine can be modified to construct Ss1 (α) with
non-negligible probability (when given oracle access to Ss1 but not being allowed
the query α), in contradiction to the security of the MAC.
Exercise 11: Prove that, without loss of generality, one can always assume that a chosen message attack makes at least one query. (This holds for general signature schemes as well as for length-restricted and/or one-time ones.)

Guideline: Given an adversary A' that outputs a message-signature pair (α', β') without making any query, modify it such that it makes an arbitrary query α ∈ {0,1}^{|α'|} \ {α'} just before producing that output.
Guideline: For Part 1, combine the ideas underlying Exercise 5.1 and Construc-
tion 6.4.4. For Part 2, use the ideas underlying Construction 6.3.11 and the proof of
Proposition 6.3.12. For Part 3, given a MAC as in the claim, consider the functions h_s(x) ≝ S_s(x), where s is selected as in the key-generation algorithm.
Exercise 13: Secure one-time (public-key) signatures imply one-way functions: In con-
trast to Exercise 12, prove that the existence of secure one-time signature schemes
implies the existence of one-way functions. Furthermore, prove that this holds even
the first i queries at random, sets α to equal the (i + 1)-st query, uses the tested bit value as the corresponding answer, and uses s_2 and the oracle S_{s_1} to answer the subsequent queries. It is also important that the game be defined such that s_2 is given only after the machine has selected α; see [162].
for 1-restricted signature schemes that are secure (only) under attacks that make no
signing-queries.
Exercise 14: Prove that the existence of collision-free hashing collections implies the
existence of one-way functions.
Exercise 15: Modify Construction 6.2.8 so as to allow the computation of the hash-
value of an input string while processing the input in an on-line fashion; that is,
the implementation of the hashing process should process the input x in a bit-by-bit
manner, while storing only the current bit and a small amount of state information
(i.e., the number of bits encountered so far and an element of Ds ).
Guideline: All that is needed is to redefine h_{(s,r)}(x) ≝ f_s^{y_t}(f_s^{y_{t−1}}(⋯ f_s^{y_1}(r)⋯)), where y_1 ⋯ y_t is a suffix-free encoding of x; that is, for any x ≠ x', the coding of x is not a suffix of the coding of x'.
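A sketch of the on-line evaluation, with toy stand-ins: `f0` and `f1` play the roles of f_s^0 and f_s^1, and the suffix-free code used here (emit the fixed bits 1, 0 and then every input bit twice) is one simple choice made for illustration, not necessarily the one intended by the exercise.

def suffix_free_bits(x_bits):
    # Toy suffix-free code: emit the fixed pattern 1, 0 and then every bit twice.
    yield 1
    yield 0
    for b in x_bits:
        yield b
        yield b

def online_hash(f0, f1, r, x_bits):
    state = r                                  # an element of D_s
    for y in suffix_free_bits(x_bits):
        state = f1(state) if y else f0(state)  # apply f_s^y to the current state
    return state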
Exercise 16: Secure MACs that hide the message: In contrast to Exercise 4, show that
if secure message-authentication schemes exist, then there exist such schemes in
which it is infeasible (for a party not knowing the key) to extract from the sig-
nature any partial information about the message (except for the message length).
(Indeed, privacy of the message is formulated as the definition of semantic security
of encryption schemes; see Chapter 5.)
Exercise 17: In continuation of Exercise 16, show that if there exist collision-free
hashing functions, then there exist message-authentication schemes in which it is
infeasible (for a party not knowing the key) to extract from the signature any partial
information about the message including the message length. How come we can
hide the message length in this context, whereas we cannot do this in the context of
encryption schemes?
Exercise 18: Alternative formulation of state-based MACs (by Boaz Barak): For S = (S', S'') and V = (V', V''), consider the following reformulation of Item 2 of Definition 6.3.9: For every pair (s^(0), v^(0)) in the range of G(1^n), every sequence of messages α^(i)'s, and every i, it holds that V'(v^(i−1), α^(i), S'(s^(i−1), α^(i))) = 1, where s^(j) = S''(s^(j−1), 1^{|α^(j)|}) and v^(j) = V''(v^(j−1), 1^{|α^(j)|}, 1^{|S'(s^(j−1), 1^{|α^(j)|})|}) for j = 1, ..., i − 1.
Exercise 19: Prove that the existence of collections of UOWHF implies the existence
of one-way functions. Furthermore, show that uniformly chosen functions in any
collection of UOWHFs are hard to invert (in the sense of Definition 2.4.3).
Exercise 20: Assuming the existence of one-way functions, show that there exists a
collection of universal one-way hashing functions that is not collision-free.
Exercise 21: Show that for every finite family of functions H, there exist x ≠ y such that h(x) = h(y) for every h ∈ H. Furthermore, show that for H ⊆ {h : {0,1}^* → {0,1}^m}, this holds even for |x|, |y| ≤ m · |H|.
Guideline: Note that we have eliminated the shifting vector b used in Exercise 22.2
of Chapter 3, but this does not affect the relevant analysis.
Guideline (Part 1): Let {h_r : {0,1}^{2m(|r|)} → {0,1}^{m(|r|)}}_r be a hashing ensemble with collision probability cp. Recall that such ensembles with m(n) = n/3 and cp(n) = 2^{−m(n)} can be constructed (see Exercise 22). Then, consider the function ensemble {h_{r_1,...,r_{m(n)}} : {0,1}^* → {0,1}^{2m(n)}}_{n∈N}, where all r_i's are of length n, such that h_{r_1,...,r_{m(n)}}(x) is defined as follows:

1. As in Construction 6.2.13, break x into t ≝ 2^{⌈log_2(|x|/m(n))⌉} consecutive blocks, denoted x_1, ..., x_t, and let d = log_2 t.

2. Let y_{d,i} = x_i for i = 1, ..., t. For j = d − 1, ..., 1, 0 and i = 1, ..., 2^j, let y_{j,i} = h_{r_j}(y_{j+1,2i−1} y_{j+1,2i}). The hash value equals (y_{0,1}, |x|).

The above functions have description length N ≝ m(n) · n and map strings of length at most 2^{m(n)} to strings of length 2m(n). It is easy to bound the collision probability (for strings of equal length) by the probability of a collision occurring in each of the levels of the tree. In fact, for x_1 ⋯ x_t ≠ x'_1 ⋯ x'_t such that x_i ≠ x'_i, it suffices to bound the sum of the probabilities that y_{j,⌈i/2^{d−j}⌉} = y'_{j,⌈i/2^{d−j}⌉} holds (given that y_{j+1,⌈i/2^{d−(j+1)}⌉} ≠ y'_{j+1,⌈i/2^{d−(j+1)}⌉}) for j = d − 1, ..., 1, 0. Thus, this generalized hashing ensemble has an (ℓ, ε)-collision property, where ℓ(N) = 2^{m(n)} and ε(N) = m(n) · cp(n). We stress that the collision probability of the tree-hashing scheme grows linearly with the depth of the tree (rather than linearly with its size). Recalling that we may use m(n) = n/3 and cp(n) = 2^{−m(n)}, we obtain (using N = n^2/3 = 3m(n)^2) ℓ(N) = 2^{(N/3)^{1/2}} > 2^{(N/4)^{1/2}} and ε(N) < N^{1/2}/ℓ(N) < 2^{−(N/4)^{1/2}} (as desired).
Guideline (Part 2): Given a hashing family as in the hypothesis, modify it into {h_{r,s} : {0,1}^{2m} → {0,1}^m}_{r,s}, where s ∈ {0,1}^m, such that h_{r,s}(0^{2m}) = s, h_{r,s}(sv) = 0^m for all v ∈ {0,1}^m, and h_{r,s}(w) = h_r(w) for every other w ∈ {0,1}^{2m}. Note that the new family maintains the collision probability of the original one up to an additive term of O(2^{−m}). On the other hand, for every w ∈ {0,1}^{2m}, it holds
that TreeHash_{r,s}(0^{2m} w) = h_{r,s}(h_{r,s}(0^{2m}) h_{r,s}(w)) = h_{r,s}(s v) = 0^m, where v = h_{r,s}(w).
Guideline (Part 3): For h_{r,s} as in Part 2 and every v ∈ {0,1}^m, it holds that ChainHash_{r,s}(0^{2m} v) = h_{r,s}(h_{r,s}(0^{2m}) v) = h_{r,s}(sv) = 0^m.
Guideline: Suppose toward the contradiction that there exists a secure memoryless fail-stop signature scheme. For every signing-key s ∈ {0,1}^n, consider the randomized process P_s in which one first selects uniformly x ∈ {0,1}^n, produces a (random) signature y ← S_s(x), and outputs the pair (x, y). Show that, given polynomially many samples of P_s, one can find (in exponential time) a string s' ∈ {0,1}^n such that with probability at least 0.99, the statistical distance between P_s and P_{s'} is at most 0.01. Thus, a computationally unbounded adversary making polynomially many signing queries can find a signing-key that typically produces the same signatures as the true signer. It follows that either these signatures cannot be revoked or that the user may also revoke its own signatures.
CHAPTER SEVEN
General Cryptographic Protocols
Teaching Tip. The contents of the current chapter are quite complex. We suggest
covering in class only the overview section (i.e., Section 7.1), and consider the rest of
this chapter to be advanced material. Furthermore, we assume that the reader is familiar
with the material in all the previous chapters. This familiarity is important, not only
because we use some of the notions and results presented in these chapters but also
because we use similar proof techniques (and do so while assuming that this is not the
reader’s first encounter with these techniques).
Organization. In addition to the overview section (i.e., Section 7.1), the current chapter
consists of two main parts:
The first part (i.e., Sections 7.2–7.4) consists of a detailed treatment of general secure
two-party protocols. Our ultimate goal in this part is to design two-party protocols
that withstand any feasible adversarial behavior. We proceed in two steps. First, we
consider a benign type of adversary, called semi-honest, and construct protocols that
are secure with respect to such an adversary (cf. Section 7.3). Next, we show how to
force parties to behave in a semi-honest manner (cf. Section 7.4). That is, we show
how to transform any protocol, secure in the semi-honest model, into a protocol
that is secure against any feasible adversarial behavior. But before presenting these
constructions, we present the relevant definitions (cf. Section 7.2).
The second part (i.e., Sections 7.5 and 7.6) deals with general secure multi-party pro-
tocols. Specifically, in Section 7.5 we extend the treatment presented in the first part
to multi-party protocols, whereas in Section 7.6 we consider the “private channel
model” and present alternative constructions for it.
Although it is possible to skip some of the earlier sections of this chapter before reading
a later section, we recommend not doing so. In particular, we recommend reading the
overview section (i.e., Section 7.1) before reading any later section.
7.1. Overview
The results mentioned previously and surveyed later describe a variety of models in
which such an “emulation” is possible. The models vary by the underlying assumptions
regarding the communication channels, the numerous parameters relating to the extent
of adversarial behavior, and the desired level of emulation of the trusted party (i.e.,
level of “security”). We stress that unless stated differently, the two-party case is an
important special case of the treatment of the multi-party setting (i.e., we consider any
m ≥ 2).
the correctness of the honest parties’ local outputs (i.e., their consistency with the
functionality).
The approach outlined here can be applied in a variety of models, and is used to
define the goals of security in these models.1 We first discuss some of the parameters
used in defining various models, and next demonstrate the application of this approach
to a couple of important cases (cf. Sections 7.1.1.2 and 7.1.1.3).
1 A few technical comments are in place. Firstly, we assume that the inputs of all parties are of the same length.
We comment that as long as the lengths of the inputs are polynomially related, this convention can be enforced
by padding. On the other hand, some length restriction is essential for the security results, because (in general)
it is impossible to hide all information regarding the length of the inputs to a protocol. Secondly, we assume that
the desired functionality is computable in probabilistic polynomial-time, because we wish the secure protocol to
run in probabilistic polynomial-time (and a protocol cannot be more efficient than the corresponding centralized
algorithm). Clearly, the results can be extended to functionalities that are computable within any given (time-
constructible) time bound, using adequate padding.
parties); that is, the channels are postulated to be reliable (in the sense that they guar-
antee the authenticity of the data sent over them). Furthermore, one may postulate
the existence of a broadcast channel. Again, these assumptions can be justified in
some settings and emulated in others.
Most work in the area assumes that communication is synchronous and that point-
to-point channels exist between every pair of processors. However, one may also
consider asynchronous communication and arbitrary networks of point-to-point
channels.
• Computational limitations: Typically, we consider computationally bounded adver-
saries (e.g., probabilistic polynomial-time adversaries). However, the private-channel
model also allows for (meaningful) consideration of computationally unbounded ad-
versaries.
We stress that, also in the latter case, security should be defined by requiring
that for every real adversary, whatever the adversary can compute after partici-
pating in the execution of the actual protocol be computable within comparable
time (e.g., in polynomially related time) by an imaginary adversary participating
in an imaginary execution of the trivial ideal protocol (for computing the desired
functionality with the help of a trusted party). Thus, results in the computationally
unbounded–adversary model trivially imply results for computationally bounded
adversaries.
• Restricted adversarial behavior: The most general type of an adversary considered
in the literature is one that may corrupt parties to the protocol while the execution
goes on, and decide which parties to corrupt based on partial information it has
gathered so far. A somewhat more restricted model, which seems adequate in many
settings, postulates that the set of dishonest parties is fixed (arbitrarily) before the
execution starts (but this set is, of course, not known to the honest parties). The latter
model is called non-adaptive as opposed to the adaptive adversary mentioned
first.
An orthogonal parameter of restriction refers to whether a dishonest party takes
active steps to disrupt the execution of the protocol (i.e., sends messages that dif-
fer from those specified by the protocol), or merely gathers information (which it
may later share with the other dishonest parties). The latter adversary has been
given a variety of names, such as semi-honest, passive, and honest-but-curious.
This restricted model may be justified in certain settings, and certainly provides a
useful methodological locus (cf. Section 7.1.3). In the following, we refer to the
adversary of the unrestricted model as active; another commonly used name is
malicious.
• Restricted notions of security: One example is the willingness to tolerate “unfair”
protocols in which the execution can be suspended (at any time) by a dishonest
party, provided that it is detected doing so. We stress that in case the execution is
suspended, the dishonest party does not obtain more information than it could have
obtained if the execution were not suspended. What may happen is that some honest
parties will not obtain their desired outputs (although other parties did obtain their
corresponding outputs), but will rather detect that the execution was suspended. We
will say that this restricted notion of security allows abort (or allows premature
suspension of the execution).
• Upper bounds on the number of dishonest parties: In some models, secure multi-party
computation is possible only if a strict majority of the parties are honest.2 Some-
times even a special majority (e.g., 2/3) is required. General “resilient adversary-
structures” have been considered, too (i.e., security is guaranteed in the case that the
set of dishonest parties equals one of the sets specified in a predetermined family
of sets).
• Mobile adversary: In most works, once a party is said to be dishonest it remains
so throughout the execution. More generally, one may consider transient adversarial
behavior (e.g., an adversary seizes control of some site and later withdraws from
it). This model, which will not be further discussed in this work, allows for the
construction of protocols that remain secure, even in case the adversary may seize
control of all sites during the execution (but never control concurrently, say, more than
10 percent of the sites). We comment that schemes secure in this model were later
termed “proactive.”
In the rest of this chapter we will consider a few specific settings of these parameters.
Specifically, we will focus on non-adaptive, active, and computationally bounded ad-
versaries, and will not assume the existence of private channels. In Section 7.1.1.2 we
consider this setting while restricting the dishonest parties to a strict minority, whereas
in Section 7.1.1.3 we consider a restricted notion of security for two-party protocols
that allows “unfair suspension” of execution (or “allows abort”).
2 Indeed, requiring an honest majority in the two-party case yields a meaningless model.
Thus, security means that the effect of each minority group in a real execution of a secure
protocol is “essentially restricted” to replacing its own local inputs (independently of
the local inputs of the majority parties) before the protocol starts, and replacing its
own local outputs (depending only on its local inputs and outputs) after the protocol
terminates. (We stress that in the real execution, the minority parties do obtain additional
pieces of information; yet in a secure protocol they gain nothing from these additional
pieces of information.)
The fact that Definition 7.1.1 refers to a model without private channels is reflected in the set of possible ensembles {real_{Π,A}(x)}_x that is determined by the (sketchy) definition of the real-model adversary (which is allowed to tap all the communication channels). When defining security in the private-channel model, the real-model adversary is not allowed to tap channels between honest parties, which in turn restricts the set of possible ensembles {real_{Π,A}(x)}_x. Thus, the difference between the two models
is only reflected in the definition of the real-model adversary. On the other hand, when
we wish to define security with respect to passive adversaries, both the scope of the
real-model adversaries and the scope of the ideal-model adversaries change. In the real-
model execution, all parties follow the protocol, but the adversary may alter the output
of the dishonest parties arbitrarily, depending on all their intermediate internal states
(during the execution). In the corresponding ideal-model, the adversary is not allowed
to modify the inputs of dishonest parties (in Step 1), but is allowed to modify their
outputs (in Step 3).
We comment that a definition analogous to Definition 7.1.1 can also be presented
in case the dishonest parties are not in the minority. In fact, such a definition seems
more natural, but the problem is that such a definition cannot be satisfied. That is,
most natural functionalities do not have a protocol for computing them securely in case
at least half of the parties are dishonest and employ an adequate (active) adversarial
strategy. This follows from an impossibility result regarding two-party computation,
which essentially asserts that there is no way to prevent a party from prematurely
suspending the execution. On the other hand, secure multi-party computation with
dishonest majority is possible if (and only if) premature suspension of the execution is
not considered a breach of security.
each of the two parties may “shut down” the trusted (third) party at any point in time.
In particular, this may happen after the trusted party has supplied the outcome of the
computation to one party but before it has supplied it to the second. That is, an execution
in the ideal model proceeds as follows:
1. Each party sends its input to the trusted party, where the dishonest party may replace
its input or send no input at all (which may be viewed as aborting).
2. Upon receiving inputs from both parties, the trusted party determines the corre-
sponding outputs and sends the first output to the first party.
3. In case the first party is dishonest, it may instruct the trusted party to halt; otherwise
it always instructs the trusted party to proceed. If instructed to proceed, the trusted
party sends the second output to the second party.
4. Upon receiving the output message from the trusted party, the honest party outputs
it locally, whereas the dishonest party may determine its output based on all it knows
(i.e., its initial input and its received output).
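To make the flow of these four steps concrete, here is a minimal Python sketch that simulates the ideal-model execution with one-sided abort. The trusted party is just a local function call, and the functionality and the strategy interface are illustrative assumptions, not part of the formal definition.

```python
# A minimal sketch of the ideal-model execution with one-sided abort.
# The functionality f, the party strategies, and all names here are
# illustrative; they are not part of the book's formal definitions.

ABORT = object()  # stands for "send no input at all"

def ideal_execution(f, x, y, party1, party2):
    """Simulate the four-step ideal-model execution described above.

    party1/party2 expose:
      substitute_input(u)     -> possibly modified input (or ABORT)
      allow_second_output(v)  -> True/False (only Party 1 is asked)
      produce_output(u, v)    -> local output, given input and received value
    """
    # Step 1: each party sends its (possibly substituted) input.
    x_sent = party1.substitute_input(x)
    y_sent = party2.substitute_input(y)
    if x_sent is ABORT or y_sent is ABORT:
        return (None, None)  # the execution is aborted, no outputs

    # Step 2: the trusted party computes both outputs and hands the
    # first output to the first party.
    v1, v2 = f(x_sent, y_sent)

    # Step 3: the first party may instruct the trusted party to halt.
    if not party1.allow_second_output(v1):
        return (party1.produce_output(x, v1), None)

    # Step 4: the second party receives its output; each party produces
    # a local output based on what it knows.
    return (party1.produce_output(x, v1), party2.produce_output(y, v2))

class Honest:
    def substitute_input(self, u): return u
    def allow_second_output(self, v): return True
    def produce_output(self, u, v): return v

if __name__ == "__main__":
    # Example functionality: both parties learn whether their inputs are equal.
    f_eq = lambda x, y: (x == y, x == y)
    print(ideal_execution(f_eq, "abc", "abc", Honest(), Honest()))
```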
Theorem 7.1.2 (the main feasibility results – a sketch): Assuming the existence of
enhanced trapdoor permutations (as in Definition C.1.1 in Appendix C), general secure
multi-party computation is possible in the following three models:
1. Passive adversary, for any number of dishonest parties.
2. Active adversary that may control only a strict minority of the parties.
3. Active adversary, controlling any number of bad parties, provided that suspension
of execution is not considered a violation of security.
In all these cases, the adversary is computationally bounded and non-adaptive. On the
other hand, the adversary may tap the communication lines between honest parties (i.e.,
we do not assume the existence of private channels). The results for active adversaries
assume a broadcast channel.
Recall that a broadcast channel can be implemented (while tolerating any num-
ber of bad parties) using a signature scheme and assuming a public-key infrastruc-
ture (i.e., each party knows the verification-key corresponding to each of the other
parties).4
Most of the current chapter will be devoted to proving Theorem 7.1.2. In Sections 7.3
and 7.4 we prove Theorem 7.1.2 for the special case of two parties: In that case, Part 2
is not relevant, Part 1 is proved in Section 7.3, and Part 3 is proved in Section 7.4. The
general case (i.e., of multi-party computation) is treated in Section 7.5.
4 Note that the implementation of a broadcast channel can be cast as a cryptographic protocol problem (i.e., for
the functionality (v, λ, ..., λ) → (v, v, ..., v), where v ∈ {0, 1}∗ and λ denotes the empty string). Thus, it is not
surprising that the results regarding active adversaries assume the existence of either such a channel or a setting
in which such a channel can be implemented (e.g., either that less than a third of the parties are faulty or that a
public-key infrastructure exists). (This reasoning fails if the definition of secure protocols is relaxed such that
it does not imply agreement; see [122].)
r Assuming the intractability of inverting RSA (or of the DLP), general secure
multi-party computation is possible in a model allowing an adaptive and active
computationally bounded adversary that may control only less than one third of the
parties. We stress that this result does not assume the existence of private channels.
Results for asynchronous communication and arbitrary networks of point-to-point chan-
nels are also known. For further details, see Section 7.7.5.
protocol that is secure in one of the two models of active adversaries (i.e., either
in a model allowing the adversary to control only a minority of the parties or in a
model in which premature suspension of the execution is not considered a violation of
security).
Recall that in the model of passive adversaries, all parties follow the prescribed
protocol, but at termination, the adversary may alter the output of the dishonest parties
depending on all their intermediate internal states (during the execution). In the fol-
lowing, we refer to protocols that are secure in the model of passive (resp., general or
active) adversaries by the term passively secure (resp., actively secure).
1. Prior to the emulation of the original protocol, each party commits to its input (using
a commitment scheme). In addition, using a zero-knowledge proof-of-knowledge
(cf. Section 4.7 of Volume 1), each party also proves that it knows its own in-
put, that is, that it can properly decommit to the commitment it sent. (These
zero-knowledge proofs-of-knowledge are conducted sequentially to prevent dis-
honest parties from setting their inputs in a way that depends on inputs of honest
parties.)
2. Next, all parties jointly generate a sequence of random bits for each party such that
only this party knows the outcome of the random sequence generated for it, but
everybody gets a commitment to this outcome. These sequences will be used as the
random-inputs (i.e., sequence of coin tosses) for the original protocol. Each bit in
the random-sequence generated for Party X is determined as the exclusive-or of the
of the secrets associated with the wire exiting this gate. Furthermore, there is a
random correspondence between each pair of secrets and the Boolean values (of the
corresponding wire). That is, wire w is assigned a pair of secrets, denoted (s_w', s_w''),
and there is a random 1-1 mapping, denoted ν_w, between this pair and the pair of
Boolean values (i.e., {ν_w(s_w'), ν_w(s_w'')} = {0, 1}).
Each gadget is constructed such that knowledge of a secret that corresponds to
each wire entering the corresponding gate (in the circuit) yields a secret corre-
sponding to the wire that exits this gate. Furthermore, the reconstruction of se-
crets using each gadget respects the functionality of the corresponding gate. For
example, if one knows the secret that corresponds to the 1-value of one entry-wire
and the secret that corresponds to the 0-value of the other entry-wire, and the gate
is an or-gate, then one obtains the secret that corresponds to the 1-value of the
exit-wire.
Specifically, each gadget consists of four templates that are presented in a random
order, where each template corresponds to one of the four possible values of the
two entry-wires. A template may be merely a double encryption of the secret that
corresponds to the appropriate output value, where the double encryption uses as
keys the two secrets that correspond to the input values. That is, suppose a gate
computing f : {0, 1}^2 → {0, 1} has input wires w1 and w2, and output wire w3.
Then, each of the four templates of this gate has the form E_{s_{w1}}(E_{s_{w2}}(s_{w3})), where
f(ν_{w1}(s_{w1}), ν_{w2}(s_{w2})) = ν_{w3}(s_{w3}). (A toy sketch of such templates appears below, following the description of Oblivious Transfer.)
r Sending the “scrambled” circuit: The first party sends the scrambled circuit to the
second party. In addition, the first party sends to the second party the secrets that
correspond to its own (i.e., the first party’s) input bits (but not the values of these bits).
The first party also reveals the correspondence between the pair of secrets associated
with each output (i.e., circuit-output wire) and the Boolean values.5 We stress that
the random correspondence between the pair of secrets associated with each other
wire and the Boolean values is kept secret (by the first party).
r Oblivious Transfer of adequate secrets: Next, the first party uses a (1-out-of-2) Obliv-
ious Transfer protocol in order to hand the second party the secrets corresponding
to the second party’s input bits (without the first party learning anything about these
bits).
Loosely speaking, a 1-out-of-k Oblivious Transfer is a protocol enabling one party to
obtain one of k secrets held by another party, without the second party learning which
secret was obtained by the first party. That is, we refer to the two-party functionality
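For concreteness, the following toy Python sketch builds the four double-encryption templates of a single scrambled gate and evaluates it given one secret per entry-wire. The "encryption" (XOR with a hash-derived pad plus a redundancy block) and all parameter choices are simplifications made for illustration only; they are not the construction analyzed in this chapter.

```python
# A toy sketch of a "scrambled" gate: four double-encrypted templates, one per
# possible pair of entry-wire values, presented in random order.  The cipher
# and all parameter choices below are simplifications for illustration only.
import hashlib, os, random

SEC = 16  # secret length in bytes

def enc(key: bytes, msg: bytes) -> bytes:
    # Pad-based encryption; the leading zero block makes decryption with a
    # wrong key detectable (with overwhelming probability).
    pad = hashlib.sha256(key).digest()
    plain = b"\x00" * 8 + msg
    return bytes(a ^ b for a, b in zip(plain, pad[:len(plain)]))

def dec(key: bytes, ct: bytes):
    pad = hashlib.sha256(key).digest()
    plain = bytes(a ^ b for a, b in zip(ct, pad[:len(ct)]))
    return plain[8:] if plain[:8] == b"\x00" * 8 else None

def scramble_gate(gate, secrets_w1, secrets_w2, secrets_w3):
    """secrets_wi maps the Boolean value of wire i to its secret
    (i.e., the inverse of the random 1-1 mapping nu_wi)."""
    templates = [enc(secrets_w1[a], enc(secrets_w2[b], secrets_w3[gate(a, b)]))
                 for a in (0, 1) for b in (0, 1)]
    random.shuffle(templates)          # hide which template is which
    return templates

def evaluate_gate(templates, s1, s2):
    # Knowing one secret per entry-wire yields exactly one exit-wire secret.
    for t in templates:
        inner = dec(s1, t)
        if inner is not None:
            out = dec(s2, inner)
            if out is not None:
                return out
    raise ValueError("no template matched")

if __name__ == "__main__":
    wires = [{v: os.urandom(SEC) for v in (0, 1)} for _ in range(3)]
    or_gate = lambda a, b: a | b
    g = scramble_gate(or_gate, *wires)
    # The evaluator holds the secrets of the 1-value of wire 1 and the 0-value
    # of wire 2; it recovers the secret of the 1-value of the exit-wire,
    # without learning the correspondence to Boolean values.
    assert evaluate_gate(g, wires[0][1], wires[1][0]) == wires[2][1]
    print("recovered the correct exit-wire secret")
```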
5 This can be done by providing, for each output wire, a succinct 2-partition (of all strings) that separates the two
secrets associated with this wire.
6 For simplicity, we may assume the private-channel model, in which case a value sent to an honest party cannot
be read by the adversary.
Thus, the m-ary functionality of Eq. (7.2) and Eq. (7.3) can be computed as follows
(where all arithmetic operations are mod 2):
1. Each Party i locally computes z_{i,i} def= x_i y_i.
2. Next, each pair of parties (i.e., Parties i and j) securely compute random shares of
x_i y_j + x_j y_i. That is, Parties i and j (holding (x_i, y_i) and (x_j, y_j), respectively) need
to securely compute the randomized two-party functionality ((x_i, y_i), (x_j, y_j)) →
(z_{i,j}, z_{j,i}), where the z's are random subject to z_{i,j} + z_{j,i} = x_i y_j + y_i x_j. The latter
(simple) two-party computation can be securely implemented using (a 1-out-of-4)
Oblivious Transfer. Specifically, Party i uniformly selects z_{i,j} ∈ {0, 1}, and defines
its four secrets as follows:
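The list of the four secrets falls on a page that is not reproduced here. As a hedged reconstruction of the standard completion of this step (not a quotation of the text): Party i offers, for each of the four possible values of (x_j, y_j), the secret z_{i,j} + x_i·y_j + x_j·y_i, and Party j selects the one matching its actual inputs via the 1-out-of-4 Oblivious Transfer. The sketch below, with the OT modeled as a plain lookup, checks the resulting share arithmetic.

```python
# Sketch of Step 2: Parties i and j compute random shares of x_i*y_j + x_j*y_i
# using 1-out-of-4 Oblivious Transfer.  The OT itself is modeled as a trusted
# lookup; only the arithmetic of the shares is illustrated here.
import random

def party_i_secrets(x_i, y_i, z_ij):
    # One secret per possible input pair (x_j, y_j) of the other party.
    return {(a, b): (z_ij + x_i * b + a * y_i) % 2 for a in (0, 1) for b in (0, 1)}

def cross_term_shares(x_i, y_i, x_j, y_j):
    z_ij = random.randint(0, 1)                  # Party i's random share
    secrets = party_i_secrets(x_i, y_i, z_ij)
    z_ji = secrets[(x_j, y_j)]                   # obtained via 1-out-of-4 OT
    return z_ij, z_ji

if __name__ == "__main__":
    for x_i in (0, 1):
        for y_i in (0, 1):
            for x_j in (0, 1):
                for y_j in (0, 1):
                    z_ij, z_ji = cross_term_shares(x_i, y_i, x_j, y_j)
                    assert (z_ij + z_ji) % 2 == (x_i * y_j + x_j * y_i) % 2
    print("shares always sum to x_i*y_j + x_j*y_i (mod 2)")
```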
(secret-sharing) polynomials representing the two inputs (using the fact that for poly-
nomials p and q, and any field element e [and in particular e = 0, 1, ..., m], it holds
that p(e) + q(e) = ( p + q)(e)). The emulation of multiplication is more involved and
requires interaction (because the product of polynomials yields a polynomial of higher
degree, and thus the polynomial representing the output cannot be the product of the
polynomials representing the two inputs). Indeed, the aim of the interaction is to turn
the shares of the product polynomial into shares of a degree d polynomial that has the
same free-term as the product polynomial (which is of degree 2d). This can be done
using the fact that the coefficients of a polynomial are a linear combination of its values
at sufficiently many arguments (and the other way around), and the fact that one can
privately compute any linear combination (of secret values). For further details, see
Section 7.6.
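The following Python sketch illustrates the algebra referred to here: two secrets are shared as the free terms of random degree-d polynomials over a prime field, each Party e holds the values at the point e, and adding shares point-wise yields shares of the sum, so the emulation of addition needs no interaction. The field, the degree, and the number of parties are arbitrary illustrative choices.

```python
# Sketch of polynomial ("Shamir-style") secret sharing over GF(P): the secret
# is the free term, Party e (e = 1, ..., m) holds the value at point e, and
# shares of p and q add up to shares of p + q.  Parameters are illustrative.
import random

P = 2_147_483_647  # a prime; arithmetic is in the field GF(P)

def share(secret, d, m):
    """Return the m shares p(1), ..., p(m) of a random degree-d polynomial
    whose free term equals the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(d)]
    def p(e):
        return sum(c * pow(e, k, P) for k, c in enumerate(coeffs)) % P
    return [p(e) for e in range(1, m + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the free term p(0) from the shares p(1), ..., p(t)."""
    points = list(enumerate(shares, start=1))
    total = 0
    for e, v in points:
        num, den = 1, 1
        for e2, _ in points:
            if e2 != e:
                num = num * (-e2) % P
                den = den * (e - e2) % P
        total = (total + v * num * pow(den, P - 2, P)) % P
    return total

if __name__ == "__main__":
    d, m = 2, 7              # degree d, with m > d parties
    a, b = 1234, 98765
    sa, sb = share(a, d, m), share(b, d, m)
    # Addition is emulated locally: each party adds its own two shares.
    sc = [(u + v) % P for u, v in zip(sa, sb)]
    assert reconstruct(sc[: d + 1]) == (a + b) % P
    print("local addition of shares yields shares of the sum")
```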
In this section we define security for two models of adversaries for two-party proto-
cols. In both models, the adversary is non-adaptive and computationally bounded (i.e.,
restricted to probabilistic polynomial-time with [non-uniform] auxiliary inputs). In the
first model, presented in Section 7.2.2, we consider a restricted adversary called semi-
honest, whereas the general case of malicious adversary is considered in Section 7.2.3.
In addition to being of independent interest, the semi-honest model will play a major
role in the constructions of protocols for the malicious model (see Sections 7.3 and 7.4).
such that no party can “influence the outcome” by itself). This task can be cast
by requiring that for every input pair (x, y), the output pair f (x, y) is uniformly
distributed over {(0, 0), (1, 1)}.
r Asymmetric functionalities: The general case of asymmetric functionalities is captured
by functionalities of the form f'(x, y) def= (f(x, y), λ), where f : {0, 1}∗ × {0, 1}∗ → {0, 1}∗
is a randomized process and λ denotes the empty string. A special case of interest is
when one party wishes to obtain some predetermined partial information regarding
the secret input of the other party, where the latter secret is verifiable with respect to
the input of the first party. This task is captured by a functionality f' such that
f'(x, y) def= (R(y), λ) if V(x, y) = 1 and f'(x, y) def= (⊥, λ) otherwise, where R represents
the partial information to be revealed and V represents the verification procedure.7
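As a concrete (and entirely illustrative) instantiation of the last functionality, take x to be a hash of the second party's secret y, let V check that y matches x, and let R reveal only the parity of y. The sketch below specifies the ideal mapping itself, not a protocol for computing it; the choice of hash function and of R is an assumption made for the example.

```python
# An illustrative instance of the functionality f'(x, y) = (R(y), λ) if
# V(x, y) = 1 and (⊥, λ) otherwise.  Here x is a SHA-256 digest of y, V checks
# the digest, and R reveals the parity of y's bits; these are example choices.
import hashlib

BOTTOM = "⊥"   # the special error symbol
LAMBDA = ""    # the empty string λ

def V(x: bytes, y: bytes) -> bool:
    return hashlib.sha256(y).digest() == x

def R(y: bytes) -> int:
    return sum(bin(byte).count("1") for byte in y) % 2  # parity of y

def f_prime(x: bytes, y: bytes):
    # The ideal mapping: Party 1 learns R(y) only if y is consistent with x.
    return (R(y), LAMBDA) if V(x, y) else (BOTTOM, LAMBDA)

if __name__ == "__main__":
    y = b"the second party's secret"
    x = hashlib.sha256(y).digest()
    print(f_prime(x, y))                  # (parity of y, '')
    print(f_prime(x, b"a wrong secret"))  # ('⊥', '')
```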
We stress that whenever we consider a protocol for securely computing f , it is implicitly
assumed that the protocol correctly computes f when both parties follow the prescribed
program. That is, the joint output distribution of the protocol, played by honest parties,
on input pair (x, y), equals the distribution of f (x, y).
Notation. We let λ denote the empty string and ⊥ denote a special error symbol.
That is, whereas λ ∈ {0, 1}∗ (and |λ| = 0), we postulate that ⊥ ∉ {0, 1}∗ (and is thus
distinguishable from any string in {0, 1}∗).
1. The protocol problem has to be solved only for inputs of the same length (i.e.,
|x| = |y|).
2. The functionality is computable in time polynomial in the length of the inputs.
3. Security is measured in terms of the length of the inputs.
As discussed next, these conventions (or assumptions) can be greatly relaxed, yet each
represents an essential issue that must be addressed.
We start with the first convention (or assumption). Observe that making no restriction
on the relationship among the lengths of the two inputs disallows the existence of
secure protocols for computing any “non-degenerate” functionality. The reason is that
the program of each party (in a protocol for computing the desired functionality) must
either depend only on the length of the party’s input or obtain information on the
counterpart’s input length. In case information of the latter type is not implied by the
output value, a secure protocol “cannot afford” to give it away.8 By using adequate
7 One may also consider the “non-verifiable” case (i.e., V ≡ 1), but in this case, nothing can prevent the second
party from acting as if its input is different from its “actual” secret input.
8 The situation is analogous to the definition of secure encryption, where it is required that the message length
be polynomially related to the key length. Actually, things become even worse in the current setting because of
the possible malicious behavior of parties.
padding, any “natural” functionality can be cast as one satisfying the equal-length
convention.9
We now turn to the second convention. Certainly, the total running time of a secure
two-party protocol for computing the functionality cannot be smaller than the time re-
quired to compute the functionality (in the ordinary sense). Arguing as in the case of
input lengths, one can see that we need an a priori bound on the complexity of the func-
tionality. A more general approach would be to let such a bound be given explicitly to
both parties as an auxiliary input. In such a case, the protocol can be required to run for
a time that is bounded by a fixed polynomial in this auxiliary parameter (i.e., the time-
complexity bound of f ). Assuming that a good upper bound of the complexity of f is
time-constructible, and using standard padding techniques, we can reduce this general
case to the special case discussed previously: That is, given a general functionality, g,
and a time-bound t : N → N, we introduce the functionality

   f((x, 1^i), (y, 1^j)) def=  g(x, y)   if i = j = t(|x|) = t(|y|)
                               (⊥, ⊥)    otherwise
where ⊥ is a special error symbol. Now, the problem of securely computing g reduces
to the problem of securely computing f , which in turn is polynomial-time computable.
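A minimal sketch of this padding-based wrapper, with an arbitrary example choice of g and of the time-bound t:

```python
# Sketch of the wrapper functionality: f((x, 1^i), (y, 1^j)) = g(x, y) when
# i = j = t(|x|) = t(|y|), and (⊥, ⊥) otherwise.  g and t are placeholders.
BOTTOM = "⊥"

def wrap(g, t):
    def f(input1, input2):
        (x, pad1), (y, pad2) = input1, input2
        i, j = len(pad1), len(pad2)
        ok = (set(pad1) <= {"1"} and set(pad2) <= {"1"}
              and i == j == t(len(x)) == t(len(y)))
        return g(x, y) if ok else (BOTTOM, BOTTOM)
    return f

if __name__ == "__main__":
    g = lambda x, y: (x + y, x + y)        # an arbitrary example functionality
    t = lambda n: 3 * n                    # an assumed time-bound for g
    f = wrap(g, t)
    x, y = "abc", "xyz"
    print(f((x, "1" * t(len(x))), (y, "1" * t(len(y)))))   # computes g
    print(f((x, "1" * 5), (y, "1" * t(len(y)))))           # wrong padding -> (⊥, ⊥)
```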
Finally, we turn to the third convention. Indeed, a more general convention would
be to have an explicit security parameter that determines the security of the protocol.
This general alternative is essential for allowing “secure” computation of finite func-
tionalities (i.e., functionalities defined on finite input domains). We may accommodate
the general convention using the special case, postulated previously, as follows. Sup-
pose that we want to compute the functionality f , on input pair (x, y) with security
(polynomial in) the parameter s. Then we introduce the functionality

   f'((x, 1^s), (y, 1^s)) def= f(x, y)

and consider secure protocols for computing f'. Indeed, this reduction corresponds to
the realistic setting where the parties first agree on the desired level of security, and
only then proceed to compute the function (using this level of security).
Partial functionalities. The first convention postulates that we are actually not con-
sidering mappings from the set of all pairs of bit strings, but rather mappings from a
certain (general) set of pairs of strings (i.e., ∪_{n∈N} {0, 1}^n × {0, 1}^n). Taking this conven-
tion one step further, one may consider functionalities that are defined only over a set
R ⊆ ∪_{n∈N} {0, 1}^n × {0, 1}^n. Clearly, securely computing such a functionality f can be
reduced to computing any of its extensions to ∪_{n∈N} {0, 1}^n × {0, 1}^n (e.g., computing
f' such that f'(x, y) def= f(x, y) for (x, y) ∈ R and f'(x, y) def= (⊥, ⊥) otherwise). With
one exception (to be discussed explicitly), our exposition only refers to functionalities
that are defined over the set of all pairs of strings of equal length.
9 In the sequel, we sometimes take the liberty of presenting functionalities in a form that violates the equal-length
convention (e.g., in the case of Oblivious Transfer). Indeed, these formulations can be easily modified to fit the
equal-length convention.
For every polynomial-size circuit family, {C_n}_{n∈N}, every positive polynomial
p(·), every sufficiently large n, and every w ∈ S ∩ {0, 1}^n,

   |Pr[C_n(X_w) = 1] − Pr[C_n(Y_w) = 1]| < 1/p(n)        (7.5)

Note that an infinite sequence of w's may be incorporated in the family; hence, the
definition is not strengthened by providing the circuit C_n with w as an additional input.11
Recall that computational indistinguishability is a relaxation of statistical indistin-
guishability, where here the ensembles X def= {X_w}_{w∈S} and Y def= {Y_w}_{w∈S} are statisti-
cally indistinguishable, denoted X ≡s Y, if for every positive polynomial p(·), every
sufficiently large n, and every w ∈ S ∩ {0, 1}^n,

   Σ_{α∈{0,1}∗} |Pr[X_w = α] − Pr[Y_w = α]| < 1/p(n)        (7.6)
In case the differences are all equal to zero, we say that the ensembles are identically
distributed (and denote this by X ≡ Y ).
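For finite distributions, the quantity bounded in Eq. (7.6) can be computed directly. The following small function does exactly that and evaluates it on two explicitly given distributions; it is only an illustration of the definition.

```python
# The sum appearing in Eq. (7.6), computed for two explicitly given finite
# distributions (dictionaries mapping strings to probabilities).
def eq_7_6_sum(X: dict, Y: dict) -> float:
    support = set(X) | set(Y)
    return sum(abs(X.get(a, 0.0) - Y.get(a, 0.0)) for a in support)

if __name__ == "__main__":
    X = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}   # uniform on {0,1}^2
    Y = {"00": 0.20, "01": 0.30, "10": 0.25, "11": 0.25}   # a slight perturbation
    print(eq_7_6_sum(X, X))   # 0.0: identically distributed
    print(eq_7_6_sum(X, Y))   # 0.1: the distributions differ noticeably
```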
10 Consequently, the value of f n (x, y) may depend only on poly(n)-long prefixes of x and y.
11 Indeed, here we capitalize on the non-uniformity of the class of potential distinguishers. In case one considers the
class of uniform (probabilistic polynomial-time) distinguishers, it is necessary to provide these distinguishers
with the distribution’s index (i.e., w); see (Part 2 of) Definition 3.2.2.
Definition 7.2.1 (privacy with respect to semi-honest behavior): Let f : {0, 1}∗ ×
{0, 1}∗ → {0, 1}∗ × {0, 1}∗ be a functionality, and f1(x, y) (resp., f2(x, y)) denote
the first (resp., second) element of f(x, y). Let Π be a two-party protocol for
computing f.12 The view of the first (resp., second) party during an execution of Π
on (x, y), denoted view_1^Π(x, y) (resp., view_2^Π(x, y)), is (x, r, m_1, ..., m_t) (resp.,
(y, r, m_1, ..., m_t)), where r represents the outcome of the first (resp., second) party's
internal coin tosses, and m_i represents the i-th message it has received. The output
of the first (resp., second) party after an execution of Π on (x, y), denoted output_1^Π(x, y)
(resp., output_2^Π(x, y)), is implicit in the party's own view of the execution, and
output^Π(x, y) = (output_1^Π(x, y), output_2^Π(x, y)).
r (deterministic case) For a deterministic functionality f, we say that Π privately
computes f if there exist probabilistic polynomial-time algorithms, denoted S1 and
S2, such that

   {S1(x, f1(x, y))}_{x,y∈{0,1}∗}  ≡c  {view_1^Π(x, y)}_{x,y∈{0,1}∗}        (7.7)
   {S2(y, f2(x, y))}_{x,y∈{0,1}∗}  ≡c  {view_2^Π(x, y)}_{x,y∈{0,1}∗}        (7.8)

where |x| = |y|. (Recall that ≡c denotes computational indistinguishability by (non-
uniform) families of polynomial-size circuits.)
r (general case) We say that Π privately computes f if there exist probabilistic
polynomial-time algorithms, denoted S1 and S2, such that

   {(S1(x, f1(x, y)), f(x, y))}_{x,y}  ≡c  {(view_1^Π(x, y), output^Π(x, y))}_{x,y}        (7.9)
   {(S2(y, f2(x, y)), f(x, y))}_{x,y}  ≡c  {(view_2^Π(x, y), output^Π(x, y))}_{x,y}        (7.10)

12 By saying that Π computes (rather than privately computes) f, we mean that the output distribution of the
protocol (when played by honest or semi-honest parties) on input pair (x, y) is distributed identically to f(x, y).
13 Recall that the input pairs (x, y) serve as indices to the distributions in the two ensembles under consideration,
and as such they are always given (or incorporated) in the potential distinguisher; see Section 7.2.1.2.
give it the output). Note that a simulator S1(1^n, r) that uniformly selects s ∈ {0, 1}^n
and outputs (s, F(s)) satisfies Eq. (7.7) (but does not satisfy Eq. (7.9)).
We comment that the current issue is less acute than the first one (i.e., the one raised
in Item 1). Indeed, consider the following alternative to both Eq. (7.7) and Eq. (7.9):
   {(S1(x, f1(x, y)), f2(x, y))}_{x,y}  ≡c  {(view_1^Π(x, y), output_2^Π(x, y))}_{x,y}        (7.11)
Note that Eq. (7.11) addresses the problem raised in Item 1, but not the problem raised
in the current item. But is the current problem a real one? Note that the only difference
between Eq. (7.9) and Eq. (7.11) is that the former forces the simulated view to fit the
output given to the simulator, whereas this is not guaranteed in Eq. (7.11). Indeed,
in Eq. (7.11) the view simulated for Party 1 may not fit the output given to the
simulator, but the simulated view does fit the output given to the honest Party 2. Is
the former fact of real importance or is it the case that all that matters is the relation
of the simulated view to the honest party’s view? We are not sure, but (following a
general principle) when in doubt, we prefer to be more careful and adopt the more
stringent definition. Furthermore, the stronger definition simplifies the proof of the
Composition Theorem for the semi-honest model (i.e., Theorem 7.3.3).
What about Auxiliary Inputs? Auxiliary inputs are implicit in Definition 7.2.1. They
are represented by the fact that the definition asks for computational indistinguisha-
bility by non-uniform families of polynomial-size circuits (rather than computational
indistinguishability by probabilistic polynomial-time algorithms). In other words, in-
distinguishability also holds with respect to probabilistic polynomial-time machines
that obtain (non-uniform) auxiliary inputs.
of each party sending its input to the trusted party (via a secure private channel), and
the third party computing the corresponding output pair and sending each output to the
corresponding party. The only adversarial behavior allowed here is for one of the parties
to determine its own output based on its input and the output it has received (from the
trusted party).14 This adversarial behavior represents the attempt to learn something
from the party’s view of a proper execution (which, in the ideal model, consists only of
its local input and output). The other (i.e., honest) party merely outputs the output that
it has received (from the trusted party).
Next, we turn to the real model. Here, there is a real two-party protocol and the
adversarial behavior is restricted to be semi-honest. That is, the protocol is executed
properly, but one party may produce its output based on (an arbitrary polynomial-
time computation applied to) its view of the execution (as defined earlier). We
stress that the only adversarial behavior allowed here is for one of the parties to
determine its own output based on its entire view of the proper execution of the
protocol.
Finally, we define security in the semi-honest model. A secure protocol for the real
(semi-honest) model is such that for every feasible semi-honest behavior of one of the
parties, we can simulate the joint outcome (of their real computation) by an execution in
the ideal model (where also one party is semi-honest and the other is honest). Actually,
we need to augment the definition to account for a priori information available to semi-
honest parties before the protocol starts. This is done by supplying these parties with
auxiliary inputs.
Note that in both (ideal and real) models, the (semi-honest) adversarial behavior
takes place only after the proper execution of the corresponding protocol. Thus, in the
ideal model, this behavior is captured by a computation applied to the local input–output
pair, whereas in the real model, this behavior is captured by a computation applied to
the party’s local view (of the execution).
Definition 7.2.2 (security in the semi-honest model): Let f : {0, 1}∗ × {0, 1}∗ →
{0, 1}∗ × {0, 1}∗ be a functionality, where f1(x, y) (resp., f2(x, y)) denotes the first
(resp., second) element of f(x, y), and let Π be a two-party protocol for computing f.
r Let B = (B1, B2) be a pair of probabilistic polynomial-time algorithms representing
parties' strategies for the ideal model. Such a pair is admissible (in the ideal model)
if for at least one Bi we have Bi(u, v, z) = v, where u denotes the party's local input,
v its local output, and z its auxiliary input. The joint execution of f under B in the
ideal model on input pair (x, y) and auxiliary input z, denoted ideal_{f,B(z)}(x, y), is
defined as (f(x, y), B1(x, f1(x, y), z), B2(y, f2(x, y), z)).
(That is, if Bi is honest, then it just outputs the value fi(x, y) obtained from the
trusted party, which is implicit in this definition. Thus, our peculiar choice to feed
both parties with the same auxiliary input is immaterial, because the honest party
ignores its auxiliary input.)
14 We stress that unlike in the malicious model, discussed in Section 7.2.3, here the dishonest (or rather semi-honest)
party is not allowed to modify its input (but must hand its actual input to the trusted party).
Observe that the definition of the joint execution in the real model prohibits both
parties (honest and semi-honest) from deviating from the strategies specified by Π.
The difference between honest and semi-honest parties is merely in their actions on the
corresponding local views of the execution: An honest party outputs only the output part
of the view (as specified by Π), whereas a semi-honest party may output an arbitrary
(feasibly computable) function of the view. Note that including the output f(x, y) (resp.,
output^Π(x, y)) in ideal_{f,B(z)}(x, y) (resp., in real_{Π,A(z)}(x, y)) is meaningful only in
the case of a randomized functionality f, and is done in order to match the formulation
in Definition 7.2.1. We stress that the issue is the inclusion of the output of the dishonest
party (see Item 2 in the discussion that follows Definition 7.2.1).
We comment that, as will become clear in the proof of Proposition 7.2.3, omitting
the auxiliary input does not weaken Definition 7.2.2. Intuitively, since the adversary is
passive, the only effect of the auxiliary input is that it appears as part of the adversary's
view. However, since Eq. (7.12) refers to the non-uniform formulation of computational
indistinguishability, augmenting the ensembles by auxiliary inputs has no effect.
Proof Sketch: We first show that Definition 7.2.2 implies Definition 7.2.1. Suppose
that Π securely computes f in the semi-honest model (i.e., satisfies Definition 7.2.2).
Without loss of generality, we show how to simulate the first party’s view. Toward this
end, we define the following admissible pair A = (A1 , A2 ) for the real model: A1 is
merely the identity transformation (i.e., it outputs the view given to it), whereas A2
(which represents an honest strategy for Party 2) produces an output as determined
by the view given to it. We stress that we consider an adversary A1 that does not
get an auxiliary input (or alternatively ignores it). Furthermore, the adversary merely
outputs the view given to it (and leaves the possible processing of this view to the
potential distinguisher). Let B = (B1 , B2 ) be the ideal-model adversary guaranteed
by Definition 7.2.2. We claim that (using) B1 (in the role of S1 ) satisfies Eq. (7.9),
rather than only Eq. (7.7). Loosely speaking, the claim holds because Definition 7.2.2
guarantees that the relation between the view of Party 1 and the outputs of both parties
in a real execution is preserved in the ideal model. Specifically, since A1 is a passive
adversary (and Π computes f), the output of Party 1 in a real execution equals the
value that is determined in the view (of Party 1), which in turn fits the functionality.
Now, Definition 7.2.2 implies that the same relation between the (simulated) view of
Party 1 and the outputs must hold in the ideal model. It follows that using B1 in the role of
S1 guarantees that the simulated view fits the output given to the simulator (as well as
the output not given to it).
We now show that Definition 7.2.1 implies Definition 7.2.2. Suppose that Π privately
computes f, and let S1 and S2 be as guaranteed in Definition 7.2.1. Let A = (A1, A2) be
an admissible pair for the real-model adversaries. Without loss of generality, we assume
that A2 merely maps the view (of the second party) to the corresponding output (i.e.,
f2(x, y)); that is, Party 2 is honest (and Party 1 is semi-honest). Then, we define an ideal-
model pair B = (B1, B2) such that B1(x, v, z) def= A1(S1(x, v), z) and B2(y, v, z) def= v.
(Note that B is indeed admissible with respect to the ideal model.) The following holds
(for any infinite sequence of (x, y, z)’s):
Conclusion. This proof demonstrates that the alternative formulation of Definition 7.2.2
is merely a cumbersome form of the simpler Definition 7.2.1. We stress that the rea-
son we have presented the cumbersome form is the fact that it follows the general
framework of definitions of security that is used for the malicious adversarial behav-
ior. In the rest of this chapter, whenever we deal with the semi-honest model (for
two-party computation), we will use Definition 7.2.1. Furthermore, since much of the
text focuses on deterministic functionalities, we will be able to use the simpler case of
Definition 7.2.1.
which case the complications introduced by aborting do not arise. The interested reader
may proceed directly to Section 7.2.3.2, which is mostly self-contained.)
The Ideal Model. We first translate the previous discussion into a definition of an ideal
model. That is, we will allow in the ideal model whatever cannot possibly be prevented
in any real execution. An alternative way of looking at things is that we assume that
the two parties have at their disposal a trusted third party, but even such a party cannot
prevent certain malicious behavior. Specifically, we allow a malicious party in the ideal
model to refuse to participate in the protocol or to substitute its local input. (Clearly,
neither can be prevented by a trusted third party.) In addition, we postulate that the
first party has the option of “stopping” the trusted party just after obtaining its part of
the output, and before the trusted party sends the other output part to the second party.
Such an option is not given to the second party.15 Thus, an execution in the ideal model
proceeds as follows (where all actions of both the honest and the malicious parties must
be feasible to implement):
15 This asymmetry is due to the non-concurrent nature of communication in the model. Since we postulate that
the trusted party sends the answer first to the first party, the first party (but not the second) has the option of
stopping the trusted party after obtaining its part of the output. The second party can only stop the trusted party
before obtaining its output, but this is the same as refusing to participate. See further discussion at the end of
the current subsection.
16 We comment that restricting the ideal-model adversary (to replacing u by a u' of the same length) only strengthens
the definition of security. This restriction is essential to our formulation, because (by our convention) the
functionality f is defined only for pairs of strings of equal length.
special abort symbol) in case it does not get an input from one of the parties.17 Thus, the
ideal model (computation) is captured by the following definition, where the algorithms
B1 and B2 represent all possible actions in the model.18 In particular, B1 (x, z, r ) (resp.,
B2 (y, z, r )) represents the input handed to the trusted party by Party 1 (resp., Party 2)
having local input x (resp., y) and auxiliary input z and using random-tape r . Indeed, if
Party 1 (resp., Party 2) is honest, then B1 (x, z, r ) = x (resp., B2 (y, z, r ) = y). Likewise,
B1 (x, z, r, v) = ⊥ represents a decision of Party 1 to stop the trusted party, on input
x (auxiliary input z and random-tape r ), after receiving the (output) value v from the
trusted party. In this case, B1 (x, z, r, v, ⊥) represents the party’s local output. Otherwise
(i.e., B1 (x, z, r, v) ≠ ⊥), we let B1 (x, z, r, v) itself represent the party's local output.
The local output of Party 2 is always represented by B2 (y, z, r, v), where y is the party’s
local input (z is the auxiliary input, r is the random-tape) and v is the value received from
the trusted party. Indeed, if Party 1 (resp., Party 2) is honest, then B1 (x, z, r, v) = v
(resp., B2 (y, z, r, v) = v).
Definition 7.2.4 (malicious adversaries, the ideal model): Let f : {0, 1}∗ × {0, 1}∗ →
{0, 1}∗ × {0, 1}∗ be a functionality, where f 1 (x, y) (resp., f 2 (x, y)) denotes the first
(resp., second) element of f (x, y). Let B = (B1 , B2 ) be a pair of probabilistic
polynomial-time algorithms representing strategies in the ideal model. Such a pair
is admissible (in the ideal malicious model) if for at least one i ∈ {1, 2}, called hon-
est, we have Bi (u, z, r ) = u and Bi (u, z, r, v) = v, for every possible value of u, z, r ,
and v. Furthermore, |Bi (u, z, r )| = |u| must hold for both i’s. The joint execution of
f under B in the ideal model (on input pair (x, y) and auxiliary input z), denoted
ideal_{f,B(z)}(x, y), is defined by uniformly selecting a random-tape r for the adversary,
and letting ideal_{f,B(z)}(x, y) def= ϒ(x, y, z, r), where ϒ(x, y, z, r) is defined as follows:
r In case Party 1 is honest, ϒ(x, y, z, r) equals

   (f1(x, y'), B2(y, z, r, f2(x, y'))), where y' def= B2(y, z, r).        (7.13)
r In case Party 2 is honest, ϒ(x, y, z, r ) equals
17 Both options (i.e., default value or a special abort symbol) are useful, and the choice depends on the protocol
designer. In case a special abort symbol is used, the functionality should be modified accordingly, such that if
one of the inputs equals the special abort symbol, then the output is a special abort symbol.
18 As in Definition 7.2.2, we make the peculiar choice of feeding both B_i's with the same auxiliary input z (and the
same random-tape r). However, again, the honest strategy ignores this auxiliary input, which is only used by the
malicious strategy. Note that unlike in previous definitions, we make the random-tape (of the adversary) explicit
in the notation, the reason being that the same strategy is used to describe two different actions of the adversary
(rather than a single action, as in Definition 7.2.2). Since these actions may be probabilistically related, it is
important that they be determined based on the same random-tape.
Eq. (7.14) and Eq. (7.15) refer to the case in which Party 2 is honest (and Party 1 may
be malicious). Specifically, Eq. (7.14) represents the sub-case where Party 1 invokes
the trusted party with a possibly substituted input, denoted B1 (x, z, r ), and aborts while
stopping the trusted party right after obtaining the output, f 1 (B1 (x, z, r ), y). In this
sub-case, Party 2 obtains no output (from the trusted party). Eq. (7.15) represents the
sub-case where Party 1 invokes the trusted party with a possibly substituted input, and
allows the trusted party to answer Party 2. In this sub-case, Party 2 obtains and outputs
f 2 (B1 (x, z, r ), y). In both sub-cases, the trusted party computes f (B1 (x, z, r ), y), and
Party 1 outputs a string that depends on both x, z, r and f 1 (B1 (x, z, r ), y). Likewise,
Eq. (7.13) represents possible malicious behavior of Party 2; however, in accordance
with the previous discussion, the trusted party first supplies output to Party 1, and so
Party 2 does not have a “real” aborting option (analogous to Eq. (7.14)).
Execution in the Real Model. We next consider the real model in which a real (two-
party) protocol is executed (and there exist no trusted third parties). In this case, a
malicious party may follow an arbitrary feasible strategy, that is, any strategy imple-
mentable by a probabilistic polynomial-time algorithm (which gets an auxiliary input).
In particular, the malicious party may abort the execution at any point in time, and when
this happens prematurely, the other party is left with no output. In analogy to the ideal
case, we use algorithms to define strategies in a protocol, where these strategies (or
algorithms implementing them) map partial execution histories to the next message.
Definition 7.2.5 (malicious adversaries, the real model): Let f be as in Definition 7.2.4,
and Π be a two-party protocol for computing f. Let A = (A1, A2) be a pair of prob-
abilistic polynomial-time algorithms representing strategies in the real model. Such a
pair is admissible (with respect to Π) (for the real malicious model) if at least one Ai
coincides with the strategy specified by Π. (In particular, this Ai ignores the auxiliary
input.) The joint execution of Π under A in the real model (on input pair (x, y) and
auxiliary input z), denoted real_{Π,A(z)}(x, y), is defined as the output pair resulting from
the interaction between A1(x, z) and A2(y, z). (Recall that the honest Ai ignores the
auxiliary input z, and so our peculiar choice of providing both Ai's with the same z is
immaterial.)
In some places (in Section 7.4), we will assume that the algorithms representing the
real-model adversaries (i.e., the algorithm Ai that does not follow Π) are deterministic.
This is justified by observing that one may just (consider and) fix the “best” possible
choice of coins for a randomized adversary and incorporate this choice in the auxiliary
input of a deterministic adversary (cf. Section 1.3.3 of Volume 1).
Security as Emulation of Real Execution in the Ideal Model. Having defined the
ideal and real models, we obtain the corresponding definition of security. Loosely
speaking, the definition asserts that a secure two-party protocol (in the real model)
emulates the ideal model (in which a trusted party exists). This is formulated by saying
that admissible adversaries in the ideal model are able to simulate (in the ideal model)
the execution of a secure real-model protocol under any admissible adversaries.
One important property that Definition 7.2.6 implies is privacy with respect to ma-
licious adversaries. That is, all that an adversary can learn by participating in the
protocol, while using an arbitrary (feasible) strategy, can be essentially inferred from
the corresponding output alone. Another property that is implied by Definition 7.2.6 is
correctness, which means that the output of the honest party must be consistent with
an input pair in which the element corresponding to the honest party equals the party’s
actual input. Furthermore, the element corresponding to the adversary must be chosen
obliviously of the honest party’s input. We stress that both properties are easily implied
by Definition 7.2.6, but the latter is not implied by combining the two properties. For
further discussion, see Exercise 3.
We wish to highlight another property that is implied by Definition 7.2.6: Loosely
speaking, this definition implies that at the end of the (real) execution of a secure pro-
tocol, each party “knows” the value of the corresponding input for which the output
is obtained.19 That is, when a malicious Party 1 obtains the output v, it knows an x
(which does not necessarily equal to its initial local input x) such that v = f 1 (x , y) for
some y (i.e., the local input of the honest Party 2). This “knowledge” is implied by the
equivalence to the ideal model, in which the party explicitly hands the (possibly modi-
fied) input to the trusted party. For example, say Party 1 uses the malicious strategy A1 .
Then the output values (in real, A (x, y)) correspond to the input pair (B1 (x), y),
where B1 is the ideal-model adversary derived from the real-model adversarial
strategy A1 .
We comment that although Definition 7.2.6 does not talk about transforming ad-
missible A’s to admissible B’s, we will often use such phrases. Furthermore, although
the definition does not even guarantee that such a transformation is effective (i.e.,
computable), the transformations used in this work are all polynomial-time com-
putable. Moreover, these transformations consist of generic programs for Bi that use
19 One concrete case where this property plays a central role is in the input-commitment functionality (of Sec-
tion 7.4.3.6). Specifically, if a secure implementation of this functionality is first used in order to let Party 1
commit to its input, and next, Party 2 uses it in order to commit to its own input, then this property implies that
Party 2 cannot just copy the “commitment” made by Party 1 (unless Party 2 knows the input of Party 1).
Further Discussion. As explained earlier, it is unavoidable that one party can abort the
real execution after it (fully) learns its output but before the other party (fully) learns
its own output. However, the convention by which this ability is designated to Party 1
(rather than to Party 2) is quite arbitrary. More general conventions (and corresponding
definitions of security) may be more appealing, but the current one seems simplest
and suffices for the rest of our exposition.20 An unrelated issue is that unlike in
the treatment of the semi-honest model (cf. Definitions 7.2.1 and 7.2.2), we did not
explicitly include the output f(x, y) (resp., output^Π(x, y)) in ideal_{f,B(z)}(x, y) (resp.,
in real_{Π,A(z)}(x, y)). Note that such an augmentation would not make much sense in the
current (malicious) context. Furthermore, recall that this issue is meaningful only in the
case of a randomized functionality f , and that its concrete motivation was to simplify
the proof of the composition theorem for the semi-honest model (which is irrelevant
here). Finally, referring to a third unrelated issue, we comment that the definitional
treatment can be extended to partial functionalities.
Remark 7.2.7 (security for partial functionalities): For functionalities that are defined
only for input pairs in some set R ⊂ {0, 1}∗ × {0, 1}∗ (see Section 7.2.1.1), security is
defined as in Definition 7.2.6 with the following two exceptions:
1. When defining the ideal model, the adversary is allowed to modify its input arbitrarily
as long as the modified input pair is in R.
2. The ensembles considered are indexed by triplets (x, y, z) that satisfy (x, y) ∈ R as
well as |x| = |y| and |z| = poly(|x|).
20 One alternative convention is to associate with each protocol a binary value indicating which of the two parties is
allowed to meaningfully abort. This convention yields a more general (or less restrictive) definition of security,
where Definition 7.2.6 is obtained as a special case (in which this value is always required to equal 1). Yet the
protocols presented in this work are shown to be secure under the more restrictive definition.
21 Actually, the treatment of the case in which only the second party obtains an output (i.e., f(x, y) = (λ, f2(x, y)))
is slightly different. However, also in this case, the event in which the first party aborts after obtaining its (empty)
output can be discarded. In this case, this event (of obtaining an a priori fixed output) is essentially equivalent
to the party aborting before obtaining output, which in turn can be viewed as replacing its input by a special
symbol.
with what happens after the first party obtains its output (because the second party has
no output), and thus the complications arising from the issue of aborting the execution
can be eliminated. Consequently, computation in the ideal model takes the following
form:
Sending inputs to the trusted party: An honest party always sends u to the trusted party.
A malicious party may, depending on u (as well as on an auxiliary input and its coin
tosses), either abort or send some u' ∈ {0, 1}^{|u|} to the trusted party. However, without
loss of generality, aborting at this stage may be treated as supplying the trusted party
with a special symbol.
The answer of the trusted party: Upon obtaining an input pair, (x, y), the trusted party
(for computing f ) replies to the first party with f 1 (x, y). Without loss of generality,
the trusted party only answers the first party, because the second party has no output
(or, alternatively, should always output λ).
Outputs: An honest party always outputs the message it has obtained from the trusted
party. A malicious party may output an arbitrary (polynomial-time computable)
function of its initial input (auxiliary input and its coin tosses) and the message it
has obtained from the trusted party.
Thus, the ideal model (computation) is captured by the following definition, where
the algorithms B1 and B2 represent all possible actions in the model. In particular,
B1 (x, z, r ) (resp., B2 (y, z, r )) represents the input handed to the trusted party by Party 1
(resp., Party 2) having local input x (resp., y), auxiliary input z, and random-tape r .
Indeed, if Party 1 (resp., Party 2) is honest, then B1 (x, z, r ) = x (resp., B2 (y, z, r ) = y).
Likewise, B1 (x, z, r, v) represents the output of Party 1, when having local input x
(auxiliary input z and random-tape r ) and receiving the value v from the trusted party,
whereas the output of Party 2 is represented by B2 (y, z, r, λ). Indeed, if Party 1 (resp.,
Party 2) is honest, then B1 (x, z, r, v) = v (resp., B2 (y, z, r, λ) = λ).
Definition 7.2.8 (the ideal model): Let f : {0, 1}∗ × {0, 1}∗ → {0, 1}∗ × {λ} be a
single-output functionality such that f (x, y) = ( f 1 (x, y), λ). Let B = (B1 , B2 ) be a
pair of probabilistic polynomial-time algorithms representing strategies in the ideal
model. Such a pair is admissible (in the ideal malicious model) if for at least one
i ∈ {1, 2}, called honest, we have Bi (u, z, r ) = u and Bi (u, z, r, v) = v for all pos-
sible u, z, r , and v. Furthermore, |Bi (u, z, r )| = |u| must hold for both i’s. The joint
execution of f under B in the ideal model (on input pair (x, y) and auxiliary input
z), denoted ideal_{f,B(z)}(x, y), is defined by uniformly selecting a random-tape r for the
adversary, and letting ideal_{f,B(z)}(x, y) def= ϒ(x, y, z, r), where

   ϒ(x, y, z, r) def= (B1(x, z, r, f1(B1(x, z, r), B2(y, z, r))), B2(y, z, r, λ))        (7.16)
That is, ideal_{f,B(z)}(x, y) def= (B1(x, z, r, v), B2(y, z, r, λ)), where v ← f1(B1(x, z, r),
B2(y, z, r)) and r is uniformly distributed among the set of strings of adequate length.22
We next consider the real model in which a real (two-party) protocol is executed (and
there exist no trusted third parties). In this case, a malicious party may follow an arbitrary
feasible strategy, that is, any strategy implementable by a probabilistic polynomial-time
algorithm. The definition is identical to Definition 7.2.5, and is reproduced here (for
the reader’s convenience).
Definition 7.2.9 (the real model): Let f be as in Definition 7.2.8, and Π be a two-party
protocol for computing f. Let A = (A1, A2) be a pair of probabilistic polynomial-time
algorithms representing strategies in the real model. Such a pair is admissible (with
respect to Π) (for the real malicious model) if at least one Ai coincides with the strategy
specified by Π. The joint execution of Π under A in the real model (on input pair
(x, y) and auxiliary input z), denoted real_{Π,A(z)}(x, y), is defined as the output pair
resulting from the interaction between A1 (x, z) and A2 (y, z). (Note that the honest Ai
ignores the auxiliary input z.)
Having defined the ideal and real models, we obtain the corresponding definition of
security. Loosely speaking, the definition asserts that a secure two-party protocol (in
the real model) emulates the ideal model (in which a trusted party exists). This is
formulated by saying that admissible adversaries in the ideal model are able to simulate
(in the ideal model) the execution of a secure real-model protocol under any admissible
adversaries. The definition is analogous to Definition 7.2.6.
Definition 7.2.10 (security): Let f and Π be as in Definition 7.2.9. Protocol Π is said
to securely compute f (in the malicious model) if for every probabilistic polynomial-
time pair of algorithms A = (A1, A2) that is admissible for the real model (of Defini-
tion 7.2.9), there exists a probabilistic polynomial-time pair of algorithms B = (B1, B2)
that is admissible for the ideal model (of Definition 7.2.8) such that

   {ideal_{f,B(z)}(x, y)}_{x,y,z}  ≡c  {real_{Π,A(z)}(x, y)}_{x,y,z}

where x, y, z ∈ {0, 1}∗ such that |x| = |y| and |z| = poly(|x|).
22 Recall that if Bi is honest, then it passes its input to the trusted party and outputs its response. Thus, our peculiar
choice to feed both parties with the same auxiliary input and same random-tape is immaterial, because the
honest party ignores both.
Proposition 7.2.11: Suppose that there exist one-way functions and that any single-
output functionality can be securely computed as per Definition 7.2.10. Then any func-
tionality can be securely computed as per Definition 7.2.6.
Proof Sketch: Suppose that the parties wish to securely compute the (two-output)
functionality (x, y) → ( f 1 (x, y), f 2 (x, y)). The first idea that comes to mind is to
first let the parties (securely) compute the first output (i.e., by securely computing
(x, y) → ( f 1 (x, y), λ)) and next let them (securely) compute the second output (i.e.,
by securely computing (x, y) → (λ, f 2 (x, y))). This solution is insecure, because a
malicious party may enter different inputs in the two invocations (not to mention that
the approach will fail for randomized functionalities even if both parties are honest).
Instead, we are going to let the first party obtain its output as well as an (authenticated
and) encrypted version of the second party’s output, which it will send to the second
party (which will be able to decrypt and verify the value). That is, we will use private-
key encryption and authentication schemes, which exist under the first hypothesis, as
follows. First, the second party generates an encryption/decryption-key, denoted e, and
a signing/verification-key, denoted s. Next, the two parties securely compute the ran-
domized functionality (x, (y, e, s)) → ((f1(x, y), c, t), λ), where c is the ciphertext
obtained by encrypting the plaintext v = f 2 (x, y) under the encryption-key e, and t is
an authentication-tag of c under the signing-key s. Finally, the first party sends (c, t) to
the second party, which verifies that c is properly signed and (if so) recovers f 2 (x, y)
from it.
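A sketch of this reduction in Python, with HMAC-SHA256 as the authentication scheme and a one-time pad as the private-key encryption (illustrative choices; the argument only assumes some private-key encryption and authentication scheme), and with the single-output sub-protocol modeled as a direct function call:

```python
# Sketch of the reduction from a two-output functionality (f1, f2) to a
# single-output one: the first party receives (f1(x,y), c, t), where c encrypts
# f2(x,y) under the second party's key and t authenticates c; it then forwards
# (c, t) to the second party.  Primitives and names are illustrative only.
import hmac, hashlib, os

def single_output_functionality(f1, f2, x, y_e_s):
    # The single-output functionality assumed to be securely computable:
    # (x, (y, e, s)) -> ((f1(x,y), c, t), λ).  Modeled here as a plain function.
    y, e, s = y_e_s
    v = f2(x, y)
    c = bytes(a ^ b for a, b in zip(v, e))            # one-time-pad encryption
    t = hmac.new(s, c, hashlib.sha256).digest()       # authentication tag
    return (f1(x, y), c, t)

def two_output_protocol(f1, f2, x, y, out_len):
    # Party 2 generates its encryption and signing keys.
    e, s = os.urandom(out_len), os.urandom(32)
    # Both parties run the single-output sub-protocol; Party 1 gets the result.
    v1, c, t = single_output_functionality(f1, f2, x, (y, e, s))
    # Party 1 forwards (c, t); Party 2 verifies the tag and decrypts.
    if not hmac.compare_digest(hmac.new(s, c, hashlib.sha256).digest(), t):
        raise ValueError("invalid tag: the ciphertext was tampered with")
    v2 = bytes(a ^ b for a, b in zip(c, e))
    return v1, v2

if __name__ == "__main__":
    f1 = lambda x, y: hashlib.sha256(x + y).digest()[:8]   # example outputs
    f2 = lambda x, y: bytes(a & b for a, b in zip(x, y))
    x, y = os.urandom(8), os.urandom(8)
    v1, v2 = two_output_protocol(f1, f2, x, y, out_len=8)
    assert v1 == f1(x, y) and v2 == f2(x, y)
    print("both parties obtained their intended outputs")
```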
Recall that our ultimate goal is to design (two-party) protocols that withstand any
feasible adversarial behavior. We proceed in two steps. In this section, we show how
to construct protocols for privately computing any functionality, that is, protocols that
are secure with respect to the semi-honest model. In Section 7.4, we will show how to
compile these protocols into ones that are secure also in the malicious model.
Throughout the current section, we assume that the desired (two-party) functionality
(along with the desired input length) is represented by a Boolean circuit. We show how to
transform this circuit into a two-party protocol for evaluating the circuit on a given pair
of local inputs. The transformation follows the outline provided in Section 7.1.3.3.23
The circuit-evaluation protocol, to be presented in Section 7.3.4, scans the circuit
from the input wires to the output wires, processing a single gate in each basic step.
When entering each basic step, the parties hold shares of the values of the input wires of
the gate, and when the step is completed, they hold shares of the output wire of the gate.
The shares held by each party yield no information about the corresponding values, but
combining the two shares of any value allows for reconstructing the value. Each basic
step is performed without yielding any additional information; that is, the generation
of shares for all wires (and in particular for the circuit’s output wires) is performed in
a private manner. Put in other words, we will show that privately evaluating the circuit
“reduces” to privately evaluating single gates on values shared by both parties.
Our presentation is modular, where the modularity is supported by an appropriate
notion of a reduction. Thus, we first define such a notion, and show that indeed it is
suitable to our goals; that is, combining a reduction of (the private computation of) g
to (the private computation of) f and a protocol for privately computing f yields a
protocol for privately computing g. Applying this notion of a reduction, we reduce the
private computation of general functionalities to the private computation of determin-
istic functionalities, and thus focus on the latter.
We next consider, without loss of generality, the evaluation of Boolean circuits with
and and xor gates of fan-in 2.24 Actually, we find it more convenient to consider the
corresponding arithmetic circuits over GF(2), where multiplication corresponds to and
and addition to xor. A value v is shared by the two parties in the natural manner (i.e.,
the sum of the shares equals v mod 2). We show how to propagate shares of values
through any given gate (operation). Propagation through an addition gate is trivial, and
we concentrate on propagation through a multiplication gate. The generic case is that
the first party holds (a1 , b1 ) and the second party holds (a2 , b2 ), where a1 + a2 is the
value of one input wire and b1 + b2 is the value of the other input wire. What we want
is to provide each party with a random share of the value of the output wire, that is, a
share of the value (a1 + a2 ) · (b1 + b2 ). In other words, we are interested in privately
computing the following randomized functionality
((a1 , b1 ), (a2 , b2 )) → (c1 , c2 ) (7.17)
where c1 + c2 = (a1 + a2 ) · (b1 + b2 ). (7.18)
That is, (c1 , c2 ) ought to be uniformly distributed among the pairs satisfying c1 + c2 =
(a1 + a2 ) · (b1 + b2 ). As shown in Section 7.3.3, this functionality can be privately
computed by reduction to a variant of Oblivious Transfer (OT). This variant is defined
in Section 7.3.2, where it is shown that this variant can be privately implemented
assuming the existence of (enhanced) trapdoor one-way permutations. We stress that
the specific functionalities mentioned here are relatively simple (e.g., they have a finite
domain). Thus, Section 7.3.4 reduces the private computation of arbitrary (complex)
functionalities to the construction of protocols for privately computing a specific simple
functionality (e.g., the one of Eq. (7.17) and Eq. (7.18)).
The actual presentation proceeds bottom-up. We first define reductions between (two-
party) protocol problems (in the semi-honest model). Next, we define and implement
OT, and show how to use OT for privately computing a single multiplication gate. Finally,
we show how to use the latter protocol to derive a protocol for privately evaluating the
entire circuit.
Teaching Tip. Some readers may prefer to see a concrete protocol (and its privacy
analysis) before coping with the abstract notion of a privacy reduction (and a corre-
sponding composition theorem). We advise such readers to read Section 7.3.2 before
reading Section 7.3.1.
24 Indeed, negation can be emulated by xoring the given bit with the constant true.
We stress that the syntax of Definition 7.3.1 allows (only) sequential oracle calls (but
not parallel ones). We call the reader's attention to the second item in Definition 7.3.2,
which requires that the oracle-aided protocol privately compute the functionality, rather
than merely compute it.
25 The identity of the requesting party may be determined by the two parties (according to interaction prior to the
request). In particular, as in all protocols used in this work, the identity of the requesting party may be fixed a
priori.
26 This requirement guarantees that the security of the oracle calls be related to the security of the high-level
protocol.
Theorem 7.3.3 (Composition Theorem for the semi-honest model): Suppose that g is
privately reducible to f and that there exists a protocol for privately computing f .
Then there exists a protocol for privately computing g.
It is left to show that Si indeed generates a distribution that (augmented by the value
of g) is indistinguishable from the view of Party i (augmented by the output of both
parties) in actual executions of Π. Toward this end, we introduce a hybrid distribution,
denoted Hi. This hybrid distribution represents the view of Party i (and the output of
27 Here we use the hypothesis (made in the first item of Definition 7.3.2) that the length of each query is polynomially
related to the length of the initial input.
Proof: Clearly, this protocol, denoted Π, computes g. To show that Π privately computes
g, we need to present a simulator for each party's view. The simulator for Party i, denoted
Si, is the obvious one. On input (xi, vi), where xi is the local input to Party i and vi
is its local output, the simulator uniformly selects ri ∈ {0, 1}^m, and outputs (xi, ri, vi),
where m = poly(|xi|). The main observation underlying the analysis of this simulator
is that for every fixed x1, x2 and r ∈ {0, 1}^m, we have v = g(r, (x1, x2)) if and only if
v = f((x1, r1), (x2, r2)), for every pair (r1, r2) satisfying r1 ⊕ r2 = r. Now, let ζi be a
random variable representing the random choice of Party i in Step 1, and let ζ′i denote the
corresponding choice made by the simulator Si. Then, referring to the general form of
Definition 7.2.1 (as we should, since g is a randomized functionality), we show that for
every fixed x1, x2, ri, and v = (v1, v2), it holds that
Pr[view_i^Π(x1, x2) = (xi, ri, vi) ∧ output^Π(x1, x2) = (v1, v2)]
   = Pr[(ζi = ri) ∧ (f((x1, ζ1), (x2, ζ2)) = v)]
   = Pr[ζi = ri] · |{r_{3−i} : f((x1, r1), (x2, r2)) = v}| / 2^m
   = 2^{−m} · |{r : g(r, (x1, x2)) = v}| / 2^m
   = Pr[ζ′i = ri] · Pr[g(x1, x2) = v]
   = Pr[(ζ′i = ri) ∧ (g(x1, x2) = v)]
   = Pr[Si(xi, gi(x1, x2)) = (xi, ri, vi) ∧ g(x1, x2) = (v1, v2)]
where the equalities are justified as follows: the 1st by the definition of Π, the 2nd by
the independence of the ζi's, the 3rd by the definition of ζi and f, the 4th by the definition
of ζ′i and g, the 5th by the independence of ζ′i and g, and the 6th by the definition of Si. Thus,
the simulated view (and output) is distributed identically to the view (and output) in a
real execution. The claim (which only requires these ensembles to be computationally
indistinguishable) follows.
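As an aside, the reduction analyzed above is simple enough to be spelled out in a few lines. The following Python sketch is an illustration only (the oracle call is emulated locally, and g is assumed to take its m-bit coins as an explicit first argument, as in the text):

```python
import secrets

def randomized_via_deterministic(g, x1, x2, m):
    """Sketch of the oracle-aided protocol analyzed above: to privately compute a
    randomized functionality g, each party contributes a uniform m-bit string,
    and the parties query an oracle for the deterministic functionality
    f((x1, r1), (x2, r2)) = g(r1 XOR r2, (x1, x2))."""
    r1 = secrets.randbits(m)          # Step 1: Party 1's coins
    r2 = secrets.randbits(m)          # Step 1: Party 2's coins
    v1, v2 = g(r1 ^ r2, (x1, x2))     # the (deterministic) oracle call to f
    return v1, v2                     # Party i outputs vi
```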
Inputs: The sender has input (σ1, σ2, ..., σk) ∈ {0, 1}^k, the receiver has input i ∈
{1, 2, ..., k}, and both parties have the auxiliary security parameter 1^n.
Step S1: The sender uniformly selects an index-trapdoor pair, (α, t), by running the
generation algorithm, G, on input 1^n. That is, it uniformly selects a random-tape, r,
for G and sets (α, t) = G(1^n, r). It sends the index α to the receiver.
Step R1: The receiver uniformly and independently selects x1, ..., xk ∈ Dα, sets
yi = fα(xi) and yj = xj for every j ≠ i, and sends (y1, y2, ..., yk) to the sender.
We show next that the protocol indeed privately computes OT_1^k. Intuitively, the sender
gets no information from the execution because, for any possible value of i, the sender
sees the same distribution; specifically, a sequence of k uniformly and independently
distributed elements of Dα. (Indeed, the key observation is that applying fα to a uni-
formly distributed element of Dα yields a uniformly distributed element of Dα.) In-
tuitively, the receiver gains no computational knowledge from the execution since, for
j ≠ i, the only data it has regarding σj is the triplet (α, rj, σj ⊕ b(f_α^{−1}(xj))), where
xj = D(α, rj), from which it is infeasible to predict σj better than by a random guess.
Specifically, we rely on the “enhanced one-way” hypothesis by which, given α and r j ,
it is infeasible to find f α−1 (x j ) (or guess b( f α−1 (x j )) better than at random). A formal
argument is indeed due and given next.
We comment that the intractability assumption used in Proposition 7.3.6 will propagate
to all subsequent results in the current and next sections (i.e., Sections 7.3 and 7.4).
In fact, the implementation of OT_1^k seems to be the bottleneck of the intractability
assumptions used in these sections.
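Before the formal argument, it may help to see the whole message flow in one place. The following Python sketch uses a toy, insecure stand-in for the trapdoor-permutation collection (every name in it is a placeholder), and fills in the sender's masking step and the receiver's unmasking step according to the intuition above (cj = σj ⊕ b(f_α^{−1}(yj)), and the receiver outputs ci ⊕ b(xi)); it is an illustration only, not the construction itself.

```python
import secrets

class ToyTrapdoorPermutation:
    """Hypothetical, INSECURE stand-in for an enhanced trapdoor permutation over
    n-bit strings (XOR with a secret mask); it only serves to make the message
    flow of Construction 7.3.5 concrete."""
    def __init__(self, n):
        self.n = n
        self.trapdoor = secrets.randbits(n)     # t, kept by the sender
    def sample(self):                           # domain sampling (D in the text)
        return secrets.randbits(self.n)
    def forward(self, x):                       # f_alpha(x)
        return x ^ self.trapdoor
    def invert(self, y):                        # f_alpha^{-1}(y), uses the trapdoor
        return y ^ self.trapdoor
    def hardcore(self, x):                      # hard-core predicate b(x)
        return x & 1

def ot_1_of_k(sigmas, i, n=32):
    """Sketch of the flow of Construction 7.3.5: the sender holds the bits
    sigmas[0..k-1], the receiver holds a (1-based) index i and learns sigmas[i-1]."""
    k = len(sigmas)
    tdp = ToyTrapdoorPermutation(n)             # Step S1: (alpha, t) generated; alpha sent
    xs = [tdp.sample() for _ in range(k)]       # Step R1: sample x_1..x_k
    ys = [tdp.forward(xs[j]) if j == i - 1 else xs[j] for j in range(k)]
    # Step S2 (as in the privacy discussion): mask each bit with the hard-core
    # bit of the preimage of y_j.
    cs = [sigmas[j] ^ tdp.hardcore(tdp.invert(ys[j])) for j in range(k)]
    # Step R2: the receiver unmasks position i using x_i, a preimage it knows.
    return cs[i - 1] ^ tdp.hardcore(xs[i - 1])
```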
Proof Sketch: Note that since we are dealing with a deterministic functionality, we
may use the special (simpler) form of Definition 7.2.1 (which only refers to each
party’s view). Thus, we will present a simulator for the view of each party. Recall that
these simulators are given the local input (which also includes the security parameter)
and the local output of the corresponding party. Keeping the information flow of
Construction 7.3.5 in mind is useful toward the construction of these simulators.
We start by presenting a simulator for the sender’s view. On input (((σ1 , ..., σk ), 1n ), λ),
this simulator randomly selects α (as in Step S1) and generates uniformly and inde-
pendently y1 , ..., yk ∈ Dα . That is, let r denote the sequence of coins used to generate
α, and assume without loss of generality that the inverting-with-trapdoor algorithm
is deterministic (which is typically the case anyhow). Then the simulator outputs
(((σ1 , ..., σk ), 1n ), r, (y1 , ..., yk )), where the first element represents the party’s input,
the second its random choices, and the third the (single) message that the party
has received. Clearly, this output distribution is identical to the view of the sender
in the real execution. (This holds because f α is a permutation, and thus applying
it to a uniformly distributed element of Dα yields a uniformly distributed element
of Dα .)
We now turn to the receiver. On input ((i, 1n ), σi ), the simulator (of the receiver’s
view) proceeds as follows:
1. Emulating Step S1, the simulator uniformly selects an index-trapdoor pair, (α, t), by
running the generation algorithm on input 1n .
2. As in Step R1, it uniformly and independently selects r1, ..., rk for the domain sampler
D, and sets xj = D(α, rj) for j = 1, ..., k. Next, it sets yi = fα(xi) and yj = xj, for
each j ≠ i.
3. It sets ci = σi ⊕ b(xi), and uniformly selects cj ∈ {0, 1}, for each j ≠ i.
4. Finally, it outputs ((i, 1n ), (r1 , ..., rk ), (α, (c1 , ..., ck ))), where the first element repre-
sents the party’s input, the second its random choices, and the third element represents
the two messages that the party has received.
Note that, except for the sequence of c j ’s, this output is distributed identically to the
corresponding prefix of the receiver’s view in the real execution. Furthermore, the said
equality holds even if we include the bit ci (which equals σi ⊕ b( f α−1 (yi )) = σi ⊕ b(xi )
in the real execution as well as in the simulation). Thus, the two distributions dif-
fer only in the values of the other cj's: For j ≠ i, in the simulation cj is uniform
and independent of anything else, whereas in the real execution cj equals σj ⊕ b(f_α^{−1}(yj)).
1. Extensions of 1-out-of-k Oblivious Transfer to k secrets that are bit strings rather
than single bits.
2. Oblivious Transfer of a single secret (denoted σ ) that is to be delivered with prob-
ability 1/2. That is, the randomized functionality that maps (σ, λ) to (λ, σ ) with
probability 1/2 and to (λ, λ) otherwise.
Privacy reductions among these variants can be easily constructed (see Exercise 6).
Construction 7.3.7 (privately reducing the functionality of Eq. (7.17)–(7.18) to OT_1^4):
Using its input (a2, b2), Party 2 sets the receiver's input (in the OT_1^4) to equal
1 + 2a2 + b2 ∈ {1, 2, 3, 4}.
Thus, the receiver's output will be the (1 + 2a2 + b2)-th element in Eq. (7.21), which
in turn equals f_{a2,b2}(a1, b1, c1).
Proof Sketch: Simulators for the oracle-aided protocol of Construction 7.3.7 are easily
constructed. Specifically, the simulator of the view of Party 1 has input ((a1 , b1 ), c1 )
(i.e., the input and output of Party 1), which is identical to the view of Party 1 in
the corresponding execution (where here c1 serves as coins to Party 1). Thus, the
simulation is trivial (i.e., by the identity transformation). The same also holds for
the simulator of the view of Party 2: It gets input ((a2 , b2 ), c1 + (a1 + a2 ) · (b1 + b2 ))
(i.e., the input and output of Party 2), which is identical to the view of Party 2 in the
corresponding execution (where here c1 + (a1 + a2 ) · (b1 + b2 ) serves as the oracle
response to Party 2). Thus, again, the simulation is trivial. We conclude that the view of
each party can be perfectly simulated (rather than just be simulated in a computationally
indistinguishable manner). The same holds when we also account for the parties’ outputs
(as required in the general form of Definition 7.2.1), and the proposition follows.28
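To make the reduction concrete, here is a minimal Python sketch in which the OT_1^4 oracle is emulated by an ideal black box; the ordering of the sender's 4-tuple and the receiver's index 1 + 2a2 + b2 follow Construction 7.3.7.

```python
import secrets

def ideal_ot_1_of_4(sender_tuple, receiver_index):
    """Ideal 1-out-of-4 OT oracle (receiver_index is 1-based), used as a black box."""
    return sender_tuple[receiver_index - 1]

def mult_gate_via_ot(a1, b1, a2, b2):
    """Sketch of Construction 7.3.7: Party 1 (sender) holds shares (a1, b1),
    Party 2 (receiver) holds shares (a2, b2); the output shares satisfy
    c1 XOR c2 = (a1 XOR a2) AND (b1 XOR b2)."""
    # Party 1 picks its output share c1 at random and prepares the four possible
    # values of c1 + (a1 + a2)(b1 + b2), one per choice of (a2, b2).
    c1 = secrets.randbits(1)
    table = tuple(c1 ^ ((a1 ^ a2_) & (b1 ^ b2_))
                  for (a2_, b2_) in [(0, 0), (0, 1), (1, 0), (1, 1)])
    # Party 2 asks for the entry indexed by 1 + 2*a2 + b2, as in the construction.
    c2 = ideal_ot_1_of_4(table, 1 + 2 * a2 + b2)
    return c1, c2
```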
On the Generic Nature of Construction 7.3.7. The idea underlying Step 2 of Con-
struction 7.3.7 can be applied in order to reduce the computation of any determinis-
tic functionality of the form (x, y) → (λ, f_y(x)) to 1-out-of-2^{|y|} Oblivious Transfer.
Indeed, this reduction is applicable only when y is short (i.e., the number of pos-
sible y's is at most polynomial in the security parameter). Specifically, consider the
functions f_y : {0, 1}^k → {0, 1}, for y ∈ {0, 1}^ℓ (where in Construction 7.3.7 we have ℓ = 2 and
k = 3). Then, privately computing (x, y) → (λ, f_y(x)) is reduced to 1-out-of-2^ℓ Obliv-
ious Transfer by letting the first party play the sender with input set to the 2^ℓ-tuple
(f_{0^ℓ}(x), ..., f_{1^ℓ}(x)) and the second party play the receiver with input set to the index of
y among the ℓ-bit long strings.
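A minimal Python sketch of this generic reduction (with the 1-out-of-2^ℓ OT again emulated by an ideal black box, y given as an ℓ-bit string, and f(y', x) standing for f_{y'}(x)) might look as follows:

```python
def f_y_via_ot(x, y, f):
    """Sketch of the reduction just described: privately computing
    (x, y) -> (lambda, f_y(x)) by a single 1-out-of-2^ell Oblivious Transfer."""
    ell = len(y)
    # Party 1 (sender): the 2^ell-tuple (f_{00..0}(x), ..., f_{11..1}(x)).
    table = [f(format(j, f"0{ell}b"), x) for j in range(2 ** ell)]
    # Party 2 (receiver): the (1-based) index of y among the ell-bit strings.
    index = 1 + int(y, 2)
    # The single 1-out-of-2^ell OT call, emulated here by an ideal black box.
    return table[index - 1]
```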
We will consider an enumeration of all wires in the circuit. The input-wires of the
circuit, n per party, will be numbered 1, 2, ..., 2n so that, for j = 1, ..., n, the j-th
input of Party i corresponds to the ((i − 1) · n + j)-th wire. The wires will be numbered
so that the output-wires of each gate have a larger numbering than its input wires.
The output-wires of the circuit are clearly the last ones. For the sake of simplicity we
assume that each party obtains n output bits, and that the output bits of the second party
correspond to the last n wires of the circuit.
Inputs: Party i holds the bit string xi1 · · · xin ∈ {0, 1}n , for i = 1, 2.
Step 1 – Sharing the Inputs: Each party (splits and) shares each of its input bits with
the other party. That is, for every i = 1, 2 and j = 1, ..., n, Party i uniformly selects
a bit r_i^j and sends it to the other party as the other party's share of input
wire (i − 1) · n + j. Party i sets its own share of the ((i − 1) · n + j)-th input wire
to x_i^j + r_i^j.
Step 2 – Circuit Emulation: Proceeding by the order of wires, the parties use their
shares of the two input-wires to a gate in order to privately compute shares for the
output-wire(s) of the gate. Suppose that the parties hold shares to the two input-wires
of a gate; that is, Party 1 holds the shares a1 , b1 and Party 2 holds the shares a2 , b2 ,
where a1 , a2 are the shares of the first wire and b1 , b2 are the shares of the second
wire. We consider two cases.31
Emulation of an addition gate: Party 1 just sets its share of the output-wire of the
gate to be a1 + b1 , and Party 2 sets its share of the output-wire to be a2 + b2 .
Emulation of a multiplication gate: Shares of the output-wire of the gate are obtained
by invoking the oracle for the functionality of Eq. (7.17) – (7.18), where Party 1
supplies the input (query part) (a1 , b1 ), and Party 2 supplies (a2 , b2 ). When the
oracle responds, each party sets its share of the output-wire of the gate to equal
its part of the oracle answer. Recall that, by Eq. (7.18), the two parts of the oracle
answer sum up to (a1 + a2) · (b1 + b2).
Step 3 – Recovering the Output Bits: Once the shares of the circuit-output wires are
computed, each party sends its share of each such wire to the party with which the
wire is associated. That is, the shares of the last n wires are sent by Party 1 to Party 2,
30 Alternatively, we may let the circuit be part of the input to both parties, which essentially means that the protocol
is computing the “universal circuit-evaluation” function.
31 In the text, we implicitly assume that each gate has a single output wire, but this assumption is immaterial and
the treatment extends easily to the case that the gates have several output wires. In the case of a multiplication
gate, both the natural possibilities (which follow) are fine. The first (more natural) possibility is to invoke the
oracle once per each multiplication gate and have each party use the same share for all output wires. The second
possibility is to invoke the oracle once per each output-wire (of a multiplication gate).
whereas the shares of the preceding n wires are sent by Party 2 to Party 1. Each
party recovers the corresponding output bits by adding up the two shares, that is,
the share it had obtained in Step 2 and the share it has obtained in the current step.
Outputs: Each party locally outputs the bits recovered in Step 3.
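The whole of Construction 7.3.9 fits in a short program once the multiplication-gate functionality is treated as an ideal oracle. The following Python sketch assumes, purely for illustration, that the circuit is given as a list of gates in increasing wire order, each gate written as (op, left_wire, right_wire) with op either "add" or "mult", and that each party obtains n output bits as in the text:

```python
import secrets

def ideal_mult_gate(a1, b1, a2, b2):
    """Ideal oracle for the functionality of Eq. (7.17)-(7.18)."""
    c1 = secrets.randbits(1)
    return c1, c1 ^ ((a1 ^ a2) & (b1 ^ b2))

def evaluate_circuit(x1, x2, gates):
    """Sketch of Construction 7.3.9: wires 1..n carry Party 1's input bits and
    wires n+1..2n carry Party 2's; each gate (op, l, r) defines the next wire."""
    n = len(x1)
    share1, share2 = {}, {}                     # each party's share of every wire
    # Step 1 - sharing the inputs: the owner of each input bit sends a random bit
    # to the other party and keeps the bit XORed with that random bit.
    for j, bit in enumerate(x1, start=1):
        r = secrets.randbits(1); share2[j] = r; share1[j] = bit ^ r
    for j, bit in enumerate(x2, start=n + 1):
        r = secrets.randbits(1); share1[j] = r; share2[j] = bit ^ r
    # Step 2 - circuit emulation, proceeding by the order of wires.
    w = 2 * n
    for op, l, r_ in gates:
        w += 1
        if op == "add":                          # addition gate: purely local
            share1[w] = share1[l] ^ share1[r_]
            share2[w] = share2[l] ^ share2[r_]
        else:                                    # multiplication gate: oracle call
            share1[w], share2[w] = ideal_mult_gate(share1[l], share1[r_],
                                                   share2[l], share2[r_])
    # Step 3 - recovering the outputs: the last n wires belong to Party 2 and the
    # n wires preceding them to Party 1; each output is the sum of both shares.
    y1 = [share1[i] ^ share2[i] for i in range(w - 2 * n + 1, w - n + 1)]
    y2 = [share1[i] ^ share2[i] for i in range(w - n + 1, w + 1)]
    return y1, y2
```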
For starters, let us verify that the output is indeed correct. This can be shown by induction
on the wires of the circuit. The induction claim is that the shares of each wire sum up
to the correct value of the wire. The base case of the induction consists of the input-wires
of the circuit. Specifically, the ((i − 1) · n + j)-th wire has value x_i^j, and its shares are r_i^j and
r_i^j + x_i^j (indeed, summing up to x_i^j). For the induction step we consider the emulation
of a gate. Suppose that the values of the input-wires (to the gate) are a and b, and that
their shares a1 , a2 and b1 , b2 indeed satisfy a1 + a2 = a and b1 + b2 = b. In the case
of an addition gate, the shares of the output-wire were set to be a1 + b1 and a2 + b2 ,
indeed satisfying
(a1 + b1 ) + (a2 + b2 ) = (a1 + a2 ) + (b1 + b2 ) = a + b
In the case of a multiplication gate, the shares of the output-wire were set to be c1 and
c2 such that c1 + c2 = (a1 + a2 ) · (b1 + b2 ). Thus, c1 + c2 = a · b as required.
Privacy of the Reduction. We now turn to show that Construction 7.3.9 indeed
privately reduces the computation of a circuit to the multiplication-gate emulation.
That is,
Proof Sketch: Note that since we are dealing with a deterministic functionality, we
may use the special (simpler) form of Definition 7.2.1 and only refer to simulating
the view of each party. Recall that these simulators should produce the view of the
party in an oracle-aided execution (i.e., an execution of Construction 7.3.9, which is an
oracle-aided protocol). Without loss of generality, we present a simulator for the view
of Party 1. This simulator gets the party's input x_1^1, ..., x_1^n, as well as its output, denoted
y^1, ..., y^n. It operates as follows:
1. The simulator uniformly selects r_1^1, ..., r_1^n and r_2^1, ..., r_2^n, as in Step 1. (The r_1^j's will
be used as the coins of Party 1, which are part of the view of the execution, whereas
the r_2^j's will be used as the message Party 1 receives in Step 1.) For each j ≤ n, the
simulator sets x_1^j + r_1^j as the party's share of the value of the j-th wire. Similarly,
for j ≤ n, the party's share of the (n + j)-th wire is set to r_2^j.
This completes the computation of the party’s shares of all the 2n circuit-input wires.
2. The party’s shares for all other wires are computed, iteratively gate by gate, as
follows:
• The party's share of the output-wire of an addition gate is set to be the sum of the
party’s shares of the input-wires of the gate.
Conclusion. Combining Propositions 7.3.4, 7.3.10, and 7.3.8 with the transitivity of
privacy reductions (see Exercise 5), we obtain:
Combining Theorem 7.3.11 and Proposition 7.3.6 with the Composition Theorem (The-
orem 7.3.3), we obtain:32
32 Alternatively, one may avoid relying on the transitivity of privacy reductions by successively applying the Com-
position Theorem to derive private protocols first for the multiplication functionality, then for any deterministic
functionality, and finally for any functionality. That is, in the first application we use Propositions 7.3.8 and 7.3.6,
in the second we use Proposition 7.3.10 and the protocol resulting from the first application, and in the last
application we use Proposition 7.3.4 and the protocol resulting from the second application.
Theorem 7.3.12: Suppose that there exist collections of enhanced trapdoor per-
mutations. Then any functionality can be privately computed (in the semi-honest
model).
For the sake of future usage (in Section 7.4), we point out a property of the protocols
underlying the proof of Theorem 7.3.12.
Stage 1: The parties privately compute the functionality (x, y) → ((r1 , r2 ), (s1 , s2 )),
where the ri ’s and si ’s are uniformly distributed among all possibilities that satisfy
(r1 ⊕ s1 , r2 ⊕ s2 ) = f (x, y).
Stage 2: Party 2 sends s1 to Party 1, which responds with r2 . Each party computes its
own output; that is, Party i outputs ri ⊕ si .
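For illustration, a canonical protocol can be pictured as the following Python sketch, in which Stage 1 is emulated by an ideal call (computing f and splitting each party's out_bits-bit output into random shares) and Stage 2 consists of the two share-exchange messages; the function f and the parameter out_bits are assumptions of this sketch.

```python
import secrets

def canonical_protocol(f, x, y, out_bits):
    """Sketch of a canonical protocol (Definition 7.3.13).  Stage 1, emulated here
    by an ideal call, hands Party 1 the pair (r1, r2) and Party 2 the pair (s1, s2),
    uniformly distributed subject to (r1 XOR s1, r2 XOR s2) = f(x, y)."""
    v1, v2 = f(x, y)                   # the desired outputs, out_bits bits each
    s1 = secrets.randbits(out_bits)    # Party 2's share of Party 1's output
    s2 = secrets.randbits(out_bits)    # Party 2's share of Party 2's output
    r1, r2 = v1 ^ s1, v2 ^ s2          # Party 1's shares
    # Stage 2: Party 2 sends s1; Party 1 responds with r2; each party recovers
    # its own output as the XOR of its two shares.
    out1 = r1 ^ s1
    out2 = r2 ^ s2
    return out1, out2
```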
Indeed, the protocols underlying the proof of Theorem 7.3.12 are canonical. Hence,
Theorem 7.3.14: Suppose that there exist collections of enhanced trapdoor permuta-
tions. Then any functionality can be privately computed by a canonical protocol.
We present two alternative proofs of Theorem 7.3.14: The first proof depends on the
structure of the protocols used in establishing Theorem 7.3.11, whereas the second
proof is generic and uses an additional reduction.
First Proof of Theorem 7.3.14: Recall that the oracle-aided protocol claimed in The-
orem 7.3.11 is obtained by composing the reduction in Proposition 7.3.4 with Con-
structions 7.3.9 and 7.3.7. The high-level structure of the resulting protocol is induced
by the circuit-evaluation protocol (of Construction 7.3.9), which is clearly canonical
(with Step 3 fitting Stage 2 in Definition 7.3.13). Indeed, it is important that in Step 3
exactly two messages are sent and that Party 1 sends the last message. The fact that the
said oracle-aided protocol is canonical is also preserved when replacing the OT_1^4 oracle
by an adequate sub-protocol.
Second Proof of Theorem 7.3.14: Using Theorem 7.3.12, we can first derive a protocol
for privately computing the functionality of Stage 1 (in Definition 7.3.13). Augment-
ing this protocol by the trivial Stage 2, we derive a canonical protocol for privately
computing the original functionality (i.e., f itself).
Our aim is to use Theorem 7.3.12 (or rather Theorem 7.3.14) in order to establish the
main result of this chapter; that is,
Theorem 7.4.1 (main result for the two-party case): Suppose that there exist collections
of enhanced trapdoor permutations. Then any two-party functionality can be securely
computed (in the malicious model).
Theorem 7.4.1 will be established by compiling any protocol for the semi-honest model
into an “equivalent” protocol for the malicious model. The current section is devoted
to the construction of the said compiler, which was already outlined in Section 7.1.3.1.
Loosely speaking, the compiler works by replacing the original instructions by macros
that force each party to either effectively behave in a semi-honest manner (hence, the
title of the current section) or be detected as cheating (in which case, the protocol aborts).
Teaching Tip. Some readers may prefer to see a concrete protocol (and its security
analysis) before getting to the general protocol compiler (and coping with the abstrac-
tions used in its exposition). We advise such readers to read Section 7.4.3.1 before
reading Sections 7.4.1 and 7.4.2.
1. A malicious party may enter the actual execution of the protocol with an input differ-
ent from the one it is given (i.e., “substitute its input”). As discussed in Section 7.2.3,
this is unavoidable. What we need to guarantee is that this substitution is done obliv-
iously of the input of the other party, that is, that the substitution only depends on
the original input.
Jumping ahead, we mention that the input-commitment phase of the compiled proto-
col is aimed at achieving this goal. The tools used here are commitment schemes (see
Section 4.4.1) and strong zero-knowledge proofs-of-knowledge (see Section 4.7.6).
Sequential executions of these proofs-of-knowledge guarantee the effective indepen-
dence of the committed values.
2. A malicious party may enter the actual execution of the protocol with a random-
tape that is not uniformly distributed. What we need to do is force the party
Coin-Generation Phase: The parties generate random-tapes for the emulation of the
original protocol. Each party obtains the value of the random-tape to be held by it,
whereas the other party obtains a commitment to this value. The party holding the
value also obtains the corresponding decommitment information. All this is obtained
by using a secure implementation of the (augmented) coin-tossing functionality (to
be defined in Section 7.4.3.5). It follows that each party obtains a random-tape that
is essentially random and independent of anything else.
to the sender. The functionality guarantees that either the corresponding (next-step)
message is delivered or the designated receiver detects cheating.
In order to allow a modular presentation of the compiled protocols, we start by defining
an adequate notion of reducibility (where here the oracle-aided protocol needs to be
secure in the malicious model rather than in the semi-honest one). We next turn to
constructing secure protocols for several basic functionalities, and use the latter to
construct secure protocols for the three main functionalities mentioned here. Finally,
we present and analyze the actual compiler.
effect of any admissible real-model strategies as in the previous item can be simulated
by admissible strategies for the ideal model, where the ideal model for computing g
is exactly as in Definition 7.2.4.
More specifically, the oracle-aided protocol (using oracle f ) is said to securely
compute g (in the malicious model) if for every probabilistic polynomial-time
pair A = (A1 , A2 ) that is admissible for the real model of the oracle-aided com-
putation, there exists a probabilistic polynomial-time pair B = (B1 , B2 ) that is
admissible for the ideal model (of Definition 7.2.4) such that
{ideal_{g,B(z)}(x, y)}_{x,y,z} ≡^c {real^f_{Π,A(z)}(x, y)}_{x,y,z}
where x, y, z ∈ {0, 1}* such that |x| = |y| and |z| = poly(|x|).
• An oracle-aided protocol is said to securely reduce g to f if it securely computes
g when using the oracle-functionality f. In such a case, we say that g is securely
reducible to f.
We are now ready to state a composition theorem for the malicious model.
Theorem 7.4.3 (Composition Theorem for the malicious model): Suppose that g is
securely reducible to f and that there exists a protocol for securely computing f . Then
there exists a protocol for securely computing g.
Recall that the syntax of oracle-aided protocols disallows concurrent oracle calls, and
thus Theorem 7.4.3 is actually a sequential composition theorem. As in the semi-
honest case, the Composition Theorem can be generalized to yield transitivity of secure
reductions; that is, if g is securely reducible to f and f is securely reducible to e, then
g is securely reducible to e (see Exercise 13).
As hinted in Section 7.3.1, the proof of Theorem 7.4.3 is significantly more complex
than the proof of Theorem 7.3.3. This does not refer to the construction of the resulting
protocol, but rather to establishing its security.
Proof Sketch: Analogously to the proof of Theorem 7.3.3, we are given an oracle-aided
protocol, denoted Π^{g|f}, that securely reduces g to f, and an ordinary protocol Π^f that
securely computes f. Again, we construct a protocol Π for computing g in the natural
manner; that is, starting with Π^{g|f}, we replace each invocation of the oracle (i.e., of f)
by an execution of the protocol Π^f.
Clearly, Π computes g, and we need to show that Π securely computes g. Specifically,
we should present a transformation of real-model adversaries for Π into ideal-model
adversaries for g. We have at our disposal two transformations of real-model adver-
saries (for Π^{g|f} and for Π^f) into corresponding ideal-model adversaries (for g and f,
respectively). So the first thing we should do is derive, from the real-model adversaries
of Π, real-model adversaries for Π^{g|f} and for Π^f.
We assume, without loss of generality, that all real-model adversaries output their
view of the execution. (Recall that any other output can be efficiently computed from
the view, and that any adversary can be easily modified to output its view.)
• Party i (i.e., A″_i) is not necessarily the party that plays the i-th party in Π^f (i.e.,
Party 1 is not necessarily the party in Π^{g|f} that requests this particular oracle call
to f). Furthermore, the identity of the party (in Π^f) played by A″_i is not fixed,
but is rather determined by the history of the execution of Π (which is given to
A″_i as auxiliary input). In contrast, our definitions refer to adversaries that play a
predetermined party. This technical discrepancy can be overcome by considering
two versions of A″_i, denoted A″_{i,1} and A″_{i,2}, such that A″_{i,j} is used (instead of A″_i) in
case Party i is the party that plays the j-th party in Π^f. Indeed, A″_{i,j} is always used
to play the j-th party in Π^f.
• A minor problem is that A_i may have its own auxiliary input, in which case the
resulting A″_i will have two auxiliary inputs (i.e., the first identical to the one of
A_i, and the second representing a partial execution transcript of Π). Clearly, these
two auxiliary inputs can be combined into a single auxiliary input. (This fact holds
generically, but more so in this specific setting, in which it is anyhow natural to
incorporate the inputs to an adversary in its view of the execution transcript.)
• The last problem is that it is not clear what "initial input" should be given to the
adversary A″_i toward its current execution of Π^f (i.e., the input that is supposed to be
used for computing f). However, this problem (which is more confusing than real)
has little impact on our argument, because what matters are the actual actions of A″_i
during the current execution of Π^f, and these are determined based on its (actual)
auxiliary input (which represents the history of the execution of Π). Still, the initial
inputs for the executions of Π^f have to be defined so that they can be passed to
the ideal-model adversary that we derive from A″_i. We may almost set these initial
inputs arbitrarily, except that (by our conventions regarding functionalities) we must
33 The simpler alternative of deriving a different pair of (real-model) strategies for each invocation of Π^f would
have sufficed for handling oracle-aided protocols that make a constant number of oracle calls. The point is
that the corresponding ideal-model strategies (with respect to f) need to be combined into a single real-model
strategy for Π^{g|f}.
set them to strings of correct length (i.e., equal to the length of the other party's
f-input). Here we use the hypothesis that this length can be determined from the
length of the input to Π^{g|f} itself.34
Thus, we have obtained an (admissible) ideal-adversary pair B″ = (B″_1, B″_2) correspond-
ing to f such that

{ideal_{f,B″(z)}(x, y)}_{x,y,z} ≡^c {real_{Π^f,A″(z)}(x, y)}_{x,y,z} (7.22)
We comment that when applying Eq. (7.22), we set the input of the honest party to
equal the value on which the sub-protocol (or functionality) was invoked, and set the
auxiliary input to equal the current execution transcript of the high-level protocol (as
seen by the adversary). (As explained earlier, the setting of the primary input to the
dishonest party is immaterial, because the latter determines its actions according to its
auxiliary input.)
Our next step is to derive from A = (A1, A2) a pair of strategies A′ = (A′_1, A′_2) that
represents the behavior of A during the Π^{g|f}-part of Π. Again, the honest A_i induces
a corresponding A′_i that just behaves according to Π^{g|f}. Turning to the dishonest A_i,
we derive A′_i by replacing the (real) actions of A″_i that take place in A_i by simulated
actions of the ideal-model B″_i. That is, the adversary A′_i runs machine A_i locally, while
interacting with the actual other party of Π^{g|f}, obtaining the messages that A_i would
have sent in a real execution of Π, and feeding A_i with the messages that it expects to
receive (i.e., messages that A_i would have received in a real execution of Π). The
handling of A_i's messages depends on whether they belong to the Π^{g|f}-part or to one
of the invocations of Π^f. The key point is the handling of the latter messages.
Handling Messages of the Π^{g|f}-part: These messages are forwarded to/from the other party
without change. That is, A′_i uses A_i in order to determine the next message to be
sent, and does so by feeding A_i with the history of the execution so far (which
contains the Π^{g|f}-part messages that A_i has received before, as well as the Π^f-
parts that it has generated so far by itself). In particular, if A_i aborts, then so
does A′_i.
Handling Messages of Π^f: Upon entering a new invocation of Π^f, the adversary A′_i
sets h_i to record the history of the execution of Π so far. Now, rather than executing
Π^f using A″_i(h_i) (as A_i would have done), the adversary A′_i invokes B″_i(h_i), where
B″_i is the ideal-model adversary for f (derived from A″_i, which in turn was derived
from A_i). Recall that B″_i sends no messages and makes a single oracle-query (which it
views as sending a message to its imaginary trusted party). The real-model adversary
A′_i (for the oracle-aided protocol Π^{g|f}) forwards this query to its own oracle (i.e.,
34 We comment that when using the alternative conventions discussed at the end of Section 7.2.1.1, we may waive
the requirement that the query length be determined by the input length. Instead, we postulate that all oracle
calls made by the oracle-aided program use the same security parameter as the one with which the program is
invoked. On the other hand, under the current conventions, when trying to extend the composition theorem to
partial functionalities (or when removing the “length determination” hypothesis), we run into trouble because
we need to determine some f -input that fits the unknown f -input of the other party. (This problem can be
resolved by introducing an adequate interface to oracle calls.)
f), and feeds B″_i with the oracle answer. At some point B″_i terminates, and A′_i uses
its output to update the simulated history of the execution of Π. In particular, oracle-
stopping events caused by B″_i(h_i) (in case Party i requested this specific oracle call)
and ⊥-answers of the oracle (in the other case) are handled in the straightforward
manner.
On stopping the oracle and ⊥-answers: Suppose first that Party i has re-
quested this specific oracle call. In this case, after receiving the oracle answer
(which it views as the answer of its trusted party), the ideal-model adversary B″_i
may stop its trusted party. If this happens, then machine A′_i instructs its own
oracle (i.e., f) not to respond to the other party. Next, suppose that Party i is
the party responding to this specific oracle call (rather than requesting it). In this
case, it may happen that the oracle is stopped by the other party (i.e., the oracle
is not allowed to answer Party i). When notified of this event (i.e., receiving a
⊥-answer from its oracle), machine A′_i feeds ⊥ as the answer to B″_i.
35 Here we use the hypothesis that the query lengths are polynomially related to the length of the input. The issue is
that in Eq. (7.22), computational indistinguishability is with respect to the length of the queries (to f ), whereas
we need computational indistinguishability with respect to the length of the initial inputs. We also highlight the
key role of the auxiliary inputs to A and B in this argument (cf. the analysis of the sequential composition of
zero-knowledge [i.e., proof of Lemma 4.3.11]).
Proof Sketch: Suppose, without loss of generality, that Party 1 is malicious, and denote
by (x1, r1) the query it makes to f. Denoting by xi the initial input of Party i (in Π), it
follows that the oracle answer is f((x1, r1), (x2, r2)), where r2 is uniformly distributed
(because Party 2 is honest). Recalling that f((x1, r1), (x2, r2)) = g(r1 ⊕ r2, (x1, x2)), it
follows that the oracle answer is distributed identically to g(x1, x2). Furthermore, by
the definition of Π, all that Party 1 gets is f_1((x1, r1), (x2, U_{|r1|})) ≡ g_1(x1, x2). This is
easily simulated by a corresponding ideal-model adversary that sets x1 according to the
real-model adversary, and sends x1 to the trusted third party (which answers according
to g).
[Figure: order of presentation of the functionalities constructed in this section (1: coin-tossing, 2: restricted authenticated computation, 3: image transmission, 4: authenticated computation, 5: augmented coin-tossing, 6: input commitment) and the tools used in their constructions (commitment schemes, zero-knowledge proofs, and zero-knowledge proofs-of-knowledge).]
Basic Tools and Conventions Regarding Them. Let us recall some facts and notations
regarding three tools that we will use:
• Commitment schemes (as defined in Definition 4.4.1). For the sake of simplicity, we
will use a non-interactive commitment scheme (as in Construction 4.4.2). We assume,
for simplicity, that on security parameter n, the commitment scheme utilizes exactly
n random bits. We denote by Cr (b) the commitment to the bit b using (security
parameter n and) randomness r ∈ {0, 1}n , and by C(b) the value of Cr (b) for a
uniformly distributed r ∈ {0, 1}n (where n is understood from the context).
• Zero-knowledge proofs of NP-assertions. We rely on the fact (cf. Theorem 4.4.11)
that there exist such proof systems in which the prover strategy can be implemented
in probabilistic polynomial-time, when given an NP-witness as auxiliary input. We
stress that by this we mean (zero-knowledge) proof systems with negligible sound-
ness error. Furthermore, we rely on the fact that these proof systems have perfect
completeness (i.e., the verifier accepts a valid statement with probability 1).
On the Adversaries Being Considered. For the sake of simplicity, in all the proofs of
security presented in this section, we only refer to malicious (real-model) adversaries
with no auxiliary input. Furthermore, we will assume that these malicious (real-model)
adversaries are deterministic. As discussed in Section 7.2.3.1 (see text following Def-
inition 7.2.5), the treatment of randomized adversaries (with auxiliary inputs) can be
reduced to the treatment of deterministic adversaries with auxiliary inputs, and so the
issue here is actually the fact that we ignore auxiliary inputs. However, in all cases,
the extension of our treatment to malicious adversaries with auxiliary input is straight-
forward. Specifically, in all cases, we construct ideal-model adversaries by using the
real-model adversaries as subroutines. This black-box usage easily supports the ex-
tension to adversaries with auxiliary inputs, because all that is needed is to pass the
auxiliary-input (given to the ideal-model adversary) to the real-model adversary (which
is invoked as a subroutine).
Comments Regarding the Following Exposition. All protocols are presented by spec-
ifying the behavior of honest parties, while keeping in mind that dishonest parties may
deviate from the specified behavior. Thus, we may instruct one party to send a specific
message that satisfies some property and next instruct the other party to check that the
message received indeed satisfies this property. When transforming real-model adver-
saries to ideal-model adversaries, we sometimes allow the latter to halt before invoking
the trusted party. As discussed in Section 7.2.3.1 (see text preceding Definition 7.2.4),
this can be viewed as invoking the trusted party with a special abort symbol, where in
this case, the latter responds to all parties with a special abort symbol.
7.4.3.1. Coin-Tossing
We start our assembly of functionalities that are useful for the compiler by presenting
and implementing a very natural functionality, which is of independent interest. Specif-
ically, we refer to the coin-tossing functionality (1n , 1n ) → (b, b), where b is uniformly
distributed in {0, 1}. This functionality allows a pair of distrustful parties to agree on a
common random value.36
36 Actually, in order to conform with the convention that the functionality has to be defined for any input pair, we
may consider the formulation (x, y) → (b, b).
Proof Sketch: We need to transform any admissible pair, (A1 , A2 ), for the real model
into a corresponding pair, (B1 , B2 ), for the ideal model. We treat separately each of the
37 These two conventions prevent the parties from aborting the execution before Step C3.
two cases corresponding to the identity of the honest party. Recall that we may assume,
for simplicity, that the adversary is deterministic (see discussion toward the end of the
preamble of Section 7.4.3). Also, for simplicity, we omit the input 1n in some places.
The following schematic depiction of the information flow in Construction 7.4.7 may
be useful toward the following analysis:
Party 1                                        Party 2
C1:  selects (σ, s), sets c ← Cs(σ)
         ------------ c ------------>
                                               C2:  selects σ′ ∈ {0, 1}
         <----------- σ′ ------------
C3:  sets b ← σ ⊕ σ′
         ---------- (σ, s) ---------->
     outputs b                                 outputs b = σ ⊕ σ′ or ⊥
                                               (depending on whether c = Cs(σ))
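A minimal Python sketch of this flow (with both parties playing honestly, and with a hash-based stand-in for the commitment scheme that is used here only for illustration) may also be helpful:

```python
import hashlib, secrets

def commit(sigma, s):
    """Hypothetical stand-in for the commitment C_s(sigma); a hash-based
    commitment is used purely for illustration."""
    return hashlib.sha256(bytes([sigma]) + s).hexdigest()

def coin_toss(n=16):
    """Sketch of the flow of Construction 7.4.7 with both parties honest."""
    # Step C1: Party 1 commits to a random bit sigma.
    sigma, s = secrets.randbits(1), secrets.token_bytes(n)
    c = commit(sigma, s)                  # sent to Party 2
    # Step C2: Party 2 replies with a random bit sigma'.
    sigma_prime = secrets.randbits(1)
    # Step C3: Party 1 reveals (sigma, s); both parties set b = sigma XOR sigma'.
    if commit(sigma, s) != c:             # Party 2's check of the decommitment
        return None                       # Party 2 outputs the failure symbol
    return sigma ^ sigma_prime
```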
We start with the case where the first party is honest. In this case, B1 is determined
(by the protocol), and we transform the real-model adversary A2 into an ideal-model
adversary B2 . Machine B2 will run machine A2 locally, obtaining the single message that
A2 would have sent in a real execution of the protocol (i.e., σ′ ∈ {0, 1}) and feeding A2
with the messages that it expects to receive. Recall that A2 expects to see the messages
Cs (σ ) and (σ, s) (and that B2 gets input 1n ).
1. B2 sends 1n to the trusted party and obtains an answer (bit), denoted b, which is
uniformly distributed. (Recall that b is also handed to Party 1.)
2. B2 tries to generate an execution view (of A2 ) ending with output b. This is done by
repeating the following steps at most n times:
(a) B2 uniformly selects σ ∈ {0, 1} and s ∈ {0, 1}^n, and feeds A2 with c = Cs(σ).
Recall that A2 always responds with a bit, denoted σ′, which may depend on c
(i.e., σ′ ← A2(c)).
(b) If σ ⊕ σ′ = b, then B2 feeds A2 with the execution view (c, (σ, s)), and outputs
whatever A2 does. Otherwise, it continues to the next iteration.
In case all n iterations were completed unsuccessfully (i.e., without output), B2
outputs a special failure symbol.
We need to show that for the coin-tossing functionality, denoted f, and for Construc-
tion 7.4.7, denoted Π, it holds that

{ideal_{f,B}(1^n, 1^n)}_{n∈N} ≡^c {real_{Π,A}(1^n, 1^n)}_{n∈N}
In fact, we will show that the two ensembles are statistically indistinguishable. We start
by showing that the probability that B2 outputs failure is exponentially small. This
is shown by proving that for every b ∈ {0, 1}, each iteration of Step 2 succeeds with
probability approximately 1/2. Such an iteration succeeds if and only if σ ⊕ σ′ = b,
that is, if A2 (Cs (σ )) = b ⊕ σ , where (σ, s) ∈ {0, 1} × {0, 1}n is uniformly chosen.
We have
Pr_{σ,s}[A2(Cs(σ)) = b ⊕ σ]
   = (1/2) · Pr[A2(C(0)) = b] + (1/2) · Pr[A2(C(1)) = b ⊕ 1]
   = 1/2 + (1/2) · (Pr[A2(C(0)) = b] − Pr[A2(C(1)) = b])
Using the hypothesis that C is a commitment scheme, the second term is a negligible
function in n, and so our claim regarding the probability that B2 outputs failure
follows. Letting µ denote an appropriate negligible function, we state the following for
future reference:
Pr_{σ,s}[A2(Cs(σ)) = b ⊕ σ] = 1/2 ± µ(n) (7.25)
Next, we show that conditioned on B2 not outputting failure, the distribution
ideal_{f,B}(1^n, 1^n) is statistically indistinguishable from the distribution real_{Π,A}(1^n, 1^n).
Both distributions have the form (b′, A2(Cs(σ), (σ, s))), with b′ = σ ⊕ A2(Cs(σ)), and
thus both are determined by the (σ, s)-pairs. In real_{Π,A}(1^n, 1^n), all (σ, s)-pairs are
equally likely (i.e., each appears with probability 2^{−(n+1)}); whereas (as proven next) in
ideal_{f,B}(1^n, 1^n), each pair (σ, s) appears with probability
(1/2) · (1 / |S_{σ⊕A2(Cs(σ))}|) (7.26)
where S_b is defined as {(x, y) ∈ {0, 1} × {0, 1}^n : x ⊕ A2(C_y(x)) = b}, the set of pairs that pass
the condition in Step 2b (with respect to the value b obtained in Step 1). To justify
Eq. (7.26), observe that the pair (σ, s) appears as output if and only if it is selected
in Step 2a and the trusted party answers with σ ⊕ A2 (Cs (σ )), where the latter event
occurs with probability 1/2. Furthermore, the successful pairs, selected in Step 2a and
passing the condition in Step 2b, are uniformly distributed in Sσ ⊕A2 (Cs (σ )) , which justifies
Eq. (7.26). We next show that |S_b| ≈ 2^n, for every b ∈ {0, 1}. By Eq. (7.25), for every
fixed b ∈ {0, 1} and uniformly distributed (σ, s) ∈ {0, 1} × {0, 1}^n, the event (σ, s) ∈ S_b
(i.e., σ ⊕ A2(Cs(σ)) = b) occurs with probability that is negligibly close to 1/2, and
so |S_b| = (1 ± µ(n)) · (1/2) · 2^{n+1}, where µ is a negligible function. Thus, for every pair
(σ, s), the value |S_{σ⊕A2(Cs(σ))}| ∈ {|S_0|, |S_1|} resides in the interval (1 ± µ(n)) · 2^n. It
follows that the value of Eq. (7.26) is (1 ± µ(n)) · 2^{−(n+1)}, and so real_{Π,A}(1^n, 1^n) and
ideal_{f,B}(1^n, 1^n) are statistically indistinguishable.
We now turn to the case where the second party is honest. In this case, B2 is de-
termined, and we transform A1 into B1 (for the ideal model). On input 1n , machine
B1 runs machine A1 locally, obtaining the messages that A1 would have sent in a real
execution of the protocol and feeding A1 with the single message (i.e., σ′ ∈ {0, 1}) that
it expects to receive.
1. B1 invokes A1 (on input 1n ). Recall that by our conventions, A1 always sends a mes-
sage in Step C1. Let us denote this message (which is supposedly a commitment
using C) by c. Recall that c may be in the range of C(σ ) for at most one
σ ∈ {0, 1}.
2. Machine B1 tries to obtain the answers of A1 (in Step C3) to both possible messages
that could be sent in Step C2:
(a) B1 feeds A1 with the (Step C2) message 0 and records the answer, which is either
abort or (σ0 , s0 ). The case in which c = Cs0 (σ0 ) is treated as if A1 has aborted.
(b) Rewinding A1 to the beginning of Step C2, machine B1 feeds A1 with the message
1 and records the answer, which is either abort or (σ1 , s1 ). (Again, the case in
which c = Cs1 (σ1 ) is treated as abort.)
If A1 aborts in both cases, then machine B1 aborts with output A1(1^n, σ′), for a
uniformly chosen σ′ ∈ {0, 1} (and does so without invoking the trusted party, which
means that the honest Party 2 receives ⊥ from the latter).38 (In the following, we refer
to this case as Case 0.) Otherwise, B1 proceeds as follows, distinguishing two cases:
Case 1: A1 answers properly (in the previous experiment) for a single 0-1 value,
denoted σ′. In this case, we define σ = σ_{σ′}.
Case 2: A1 answers properly for both values. In this case, the values σ0 and σ1
(defined in Step 2) must be identical, because C_{s0}(σ0) = c = C_{s1}(σ1), whereas the
ranges of C(0) and C(1) are disjoint. In this case, we define σ = σ0 (= σ1).
3. Machine B1 sends 1n to the trusted party, which responds with a uniformly selected
value b ∈ {0, 1}. Recall that the trusted party has not responded to Party 2 yet, and
that B1 still has the option of stopping the trusted party before it responds to Party 2.
5. Finally, B1 feeds A1 with the execution view, (1^n, σ′), and outputs whatever A1 does.
We now show that ideal_{f,B}(1^n, 1^n) and real_{Π,A}(1^n, 1^n) are actually identically dis-
tributed. Consider first the case where A1 (and so B1) never aborts (i.e., Case 2). In this
case, we have

real_{Π,A}(1^n, 1^n) = (A1(1^n, σ′), σ ⊕ σ′)   and   ideal_{f,B}(1^n, 1^n) = (A1(1^n, b ⊕ σ), b)

where σ′ and b are uniformly distributed in {0, 1}, and σ is determined by c = A1(1^n)
(i.e., σ = C^{−1}(c)). Observe that σ′ is distributed uniformly and independently of σ, and so
38 We comment that whenever B1 is determined to abort, it need not invoke the trusted party at all, because it (i.e.,
B1 ) can simulate the trusted party’s answer by itself. The only reason to invoke the trusted party is to provide
Party 2 with an answer that is related to the output of B1 .
We assume, for simplicity, that h is length preserving. Otherwise, the definition may
be modified to consider the functionality ((α, 1^{|h(α)|}), (h(α), 1^{|α|})) → (λ, f(α)). To
39 Recall that, in this case, σ and σ are determined by the Step C1 message.
facilitate the implementation, we assume that the function h is one-to-one, as is the case
in typical applications. This allows us to use (ordinary) zero-knowledge proofs, rather
than strong (zero-knowledge) proofs-of-knowledge. The issue is further discussed in
Section 7.4.3.3.
The functionality of Eq. (7.27) is implemented by having Party 1 send f (α) to
Party 2, and then prove in zero-knowledge the correctness of the value sent (with
respect to the common input h(α)). Note that this statement is of the NP type and
that Party 1 has the corresponding NP-witness. Actually, the following protocol is the
archetypical application of zero-knowledge proof systems.
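Schematically, the protocol just described can be sketched as follows in Python, where zk_prove and zk_verify are hypothetical placeholders for a zero-knowledge proof system for the statement of Eq. (7.28), with α as the NP-witness; the real system is interactive and is compressed here into two calls purely for readability.

```python
def authenticated_computation(alpha, h, f, zk_prove, zk_verify):
    """Sketch of the protocol for Eq. (7.27): Party 1 sends v = f(alpha) and proves,
    in zero-knowledge, that v is consistent with the common input u = h(alpha)."""
    u = h(alpha)                       # known to Party 2 (its own input)
    v = f(alpha)                       # Step C1: sent by Party 1 to Party 2
    proof = zk_prove((u, v), alpha)    # Step C2: Party 1 proves consistency
    # Party 2 outputs v if convinced, and a special abort symbol otherwise.
    return v if zk_verify((u, v), proof) else None
```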
Proposition 7.4.11: Suppose that the function h is one-to-one and that (P, V ) is
a zero-knowledge interactive proof (with negligible soundness error) for L. Then,
Construction 7.4.10 securely computes (in the malicious model) the h-authenticated
f -computation functionality of Eq. (7.27).
40 In particular, Party 1 sets (u, v) = (h(α), f (α)), whereas Party 2 sets u according to its own input and v according
to the message received in Step C1.
We stress that Proposition 7.4.11 refers to the security of a protocol for computing a
partial functionality, as discussed in Remark 7.2.7. In the case of Eq. (7.27), this means
that the ideal-model adversary is not allowed to “modify its input” (i.e., it must pass its
initial input to the trusted party), because its initial input is the unique value that fits
the other party’s input.
Proof Sketch: Again, we need to transform any admissible pair, (A1 , A2 ), for the real
model into a corresponding pair, (B1 , B2 ), for the ideal model. We treat separately each
of the two cases, corresponding to the identity of the honest party.
We start with the case where the first party is honest. In this case, B1 is determined,
and we transform (the real-model adversary) A2 into (an ideal-model adversary) B2 ,
which uses A2 as a subroutine. Recall that B2 gets input u = h(α), where α is the input
of the honest Party 1.
1. B2 sends u to the trusted party and obtains the value v, which equals f (α) for α
handed by (the honest) Party 1 to the trusted party. Thus, indeed, B2 does not modify
its input and (u, v) ∈ L. (Recall that Party 1 always obtains λ from the trusted party.)
2. B2 invokes the simulator guaranteed for the zero-knowledge proof system (P, V ), on
input (u, v), using (the residual) A2 as a possible malicious verifier.41 Note that we are
simulating the actions of the prescribed prover P, which in the real protocol is played
by the honest Party 1. Denote the obtained simulation transcript by S = S(u, v),
where (indeed) A2 is implicit in the notation.
3. Finally, B2 feeds A2 with the alleged execution view (v, S), and outputs whatever A2
does.
We need to show that for the functionality, denoted F, of Eq. (7.27) and for Construc-
tion 7.4.10, denoted Π, it holds that

{ideal_{F,B}(α, h(α))}_{α∈{0,1}*} ≡^c {real_{Π,A}(α, h(α))}_{α∈{0,1}*} (7.29)
Let R(α) denote the verifier’s view of the real interaction with P on common input
(h(α), f (α)) and prover’s auxiliary input α, where the verifier is played by A2 . Then,
real_{Π,A}(α, h(α)) = (λ, A2(h(α), f(α), R(α)))
ideal_{F,B}(α, h(α)) = (λ, A2(h(α), f(α), S(h(α), f(α))))
However, by the standard formulation of zero-knowledge, it follows that {R(α)}α∈{0,1}∗
and {S(h(α), f (α))}α∈{0,1}∗ are computationally indistinguishable (also when given α
as auxiliary input), and so Eq. (7.29) follows.
We now turn to the case where the second party is honest. In this case, B2 is deter-
mined, and we transform (real-model) A1 into (ideal-model) B1 , which uses A1 as a
subroutine. Recall that B1 gets input α ∈ {0, 1}n .
1. B1 invokes A1 on input α. As (implicit) in the protocol, any action of A1 in Step C1
(including abort) is interpreted as sending a string. Let us denote by v the message
sent by A1 (i.e., v ← A1 (α)).
41 The case in which A2 executes Step C2 with respect to a different common input is just a special case of a
malicious behavior.
Actually, we will show that these two ensembles are statistically indistinguishable,
where the statistical difference is due to the case where the real adversary A1 succeeds
in convincing the verifier (played by the honest Party 2) that (u, v) satisfies Eq. (7.28),
and yet this claim is false. By the soundness of the proof system, this event happens only
with negligible probability. On the other hand, in case (u, v) satisfies Eq. (7.28), we show
that ideal_{F,B}(α, h(α)) and real_{Π,A}(α, h(α)) are identically distributed. Details follow.
One key observation is that the emulation of the proof system (with prover strategy
A1 (α)) performed in Step 2 by B1 is distributed identically to the real execution of the
proof system that takes place in Step C2 of Π.
Fixing any α, recall that v = A1(α) need not equal f(α), and that u = h(α) uniquely
determines α (because h is 1-1). We denote by p the probability that A1 (α) (playing a
possibly cheating prover) convinces the verifier (played in Step C2 by Party 2) that (u, v)
satisfies Eq. (7.28). (Since A1 is deterministic, v = A1 (α) is fixed and the probability
is only taken over the moves of Party 2.) We consider two cases corresponding to the
42 In particular, if A1 aborts the execution of Step C2, then the honest verifier will not be convinced.
43 Alternatively, machine B1 may invoke the trusted party but prevent it from answering Party 2. The difference
is immaterial, because Party 1 gets nothing from the trusted party. What matters is that (in either case) Party 2
will get an abort symbol (i.e., ⊥).
44 We comment that even if h were not 1-1 but a strong proof-of-knowledge (rather than an ordinary proof system)
was used in Step C2, then one could have inferred that Party 1 knows an α′ such that h(α′) = u and v = f(α′),
whereas α′ does not necessarily equal α. Sending α′ to the trusted party in the next (emulation) step, we would
have been fine, as it would have (also) meant that the trusted party's response to Party 2 is v.
relation between p and the soundness error-bound function µ associated with the proof
system (P, V ).45
1. Suppose p > µ(n). In this case, by the soundness condition, it must be the
case that A1 (α) = v = f (α), because in this case (u, v) satisfies Eq. (7.28) and
so v = f (h −1 (u)) = f (h −1 (h(α))) = f (α). Thus, in both the real and the ideal
model, with probability p, the joint execution view is non-aborting and equals
(A1 (α, T ), A1 (α)) = (A1 (α, T ), f (α)), where T represents the prover’s view of the
execution of Step C2 (on common input (h(α), f (α)), where the prover is played by
A1 (α), and the verifier is honest). On the other hand, in both models, with proba-
bility 1 − p, the joint execution is aborting and equals (A1 (α, T ), ⊥), where T is as
before (except that here it is a rejecting execution transcript). Thus, in this case, the
distributions in Eq. (7.30) are identical.
We call the reader’s attention to the reliance of our analysis on the fact that the
emulation of the proof system (with prover A1 (α)) that is performed in Step 2 by B1
is distributed identically to the real execution of the proof system that takes place in
Step C2 of Π.
2. Suppose that p ≤ µ(n). Again, in both models, aborting executions are identical
and occur with probability 1 − p. However, in this case, we have no handle on
the non-aborting executions in the real model (because it is no longer guaranteed
that A1 (α) = f (h −1 (u)) holds in the real non-aborting execution, whereas in the
ideal model it still holds that in non-aborting executions, Party 2 outputs f (α) =
f (h −1 (u))). But we do not care, because (in this case) these non-aborting executions
occur with negligible probability (i.e., p ≤ µ(n)). Thus, in this case, the distribution
ensembles in Eq. (7.30) are statistically indistinguishable.
The proposition follows.
We comment that this treatment can be extended to the case that h is a randomized
process, rather than a function (as long as the image of h uniquely determines its pre-
image). Details are omitted in view of the fact that a much more general treatment will
be provided in Section 7.4.3.4.
45 We stress that an explicit error-bound can be associated with all standard zero-knowledge proof systems, and
that here we use a system for which µ is negligible. Furthermore, we may use a proof system with error-bound
µ(n) = 2^{−n}.
46 Actually, in order to conform with the convention that the functionality has to be defined for any input pair, we
may consider the formulation (α, β) → (λ, f (α)).
that is known to the second party and uniquely determines α). In other words, the value
output by Party 2 is only required to be an image of f (corresponding to a pre-image of
a given length). Thus, at first glance, one may think that securely computing Eq. (7.31)
should be easier than securely computing Eq. (7.27), especially in case f is onto (in
which case any string is an f -image). This impression is wrong, because securely com-
puting Eq. (7.31) means emulating an ideal model in which Party 1 knows the string
it sends to the trusted party. That is, in a secure protocol for Eq. (7.31), whenever
Party 2 outputs some image (of f ), Party 1 must know a corresponding pre-image
(under f ).47 Still, proving knowledge of a pre-image (and doing so in zero-knowledge)
is what a zero-knowledge proof-of-knowledge is all about. Actually, in order to avoid
expected probabilistic polynomial-time adversaries, we use zero-knowledge strong-
proof-of-knowledge (as defined and constructed in Section 4.7.6). We will show that
Construction 7.4.10 can be easily adapted in order to yield a secure implementation
of Eq. (7.31). Specifically, all that is needed is to use (in Step C2) a zero-knowledge
strong-proof-of-knowledge (rather than an ordinary zero-knowledge proof), and set h
to be a constant function.
The analysis of this protocol, denoted Π, follows the ideas underlying the proof of
Proposition 7.4.11. The only significant modification is in the construction of ideal-
model adversaries for Party 1.
47 We comment that the same also holds with respect to Eq. (7.27). But there, the knowledge of a pre-image (of
the output v under f ) is guaranteed by the fact that security implies that the pre-image of v under f must be
consistent with h(α), whereas the only such pre-image is α itself, which in turn is the initial input of Party 1
and thus known to it.
Let us first justify why the treatment of the case in which Party 1 is honest is
exactly as in the proof of Proposition 7.4.11. In this case, we can use exactly the
same transformation of the real-model adversary A2 into an ideal-model adversary B2 ,
because what this transformation does is essentially invoke the simulator associated
with (the residual verifier) A2 on input the string v = f (α) that it obtains from the
trusted party. Furthermore, the adequateness of this transformation is established by
only referring to the adequateness of the (zero-knowledge) simulator, which holds also
here.
We now turn to the case where the second party is honest. In this case, B2 is deter-
mined, and we transform (real-model) A1 into (ideal-model) B1 , which uses A1 as a
subroutine. Recall that B1 gets input α ∈ {0, 1}n :
Actually, we will show that these two ensembles are statistically indistinguishable, where
the statistical difference is due to the case where the real-model adversary A1 succeeds
in convincing the knowledge-verifier (played by the honest A2 ) that it knows a pre-
image of v under f , and yet the knowledge-extractor failed to find such a preimage. By
definition of strong knowledge-verifiers, such an event may occur only with negligible
probability. Loosely speaking, ignoring the rare case in which extraction fails although
the knowledge-verifier (played by A2 ) is convinced, it can be shown that the distributions
ideal f, B ((σ, r ), 1n ) and real, A ((σ, r ), 1n ) are identical. Details follow.
Fixing any α, recall that v def= A1(α) need not be an image of f (let alone that it may
not equal f (α)). We denote by p the probability that A1 (α), playing a possibly cheating
prover, convinces the knowledge-verifier (played in Step C2 by Party 2) that it knows a
pre-image of v under f . We consider two cases corresponding to the relation between
p and the error-bound function µ referred to in Definition 4.7.13:
1. Suppose that p > µ(n). In this case, by Definition 4.7.13, with probability at least 1 − µ(n), machine B1 has successfully extracted a pre-image α' (of v = A1(α) under f ). In the real model, with probability p, the joint execution ends up non-aborting. By the aforementioned extraction property, in the ideal model, a joint execution is non-aborting with probability p ± µ(n) (actually, the probability is at least p − µ(n) and at most p). Thus, in both models, with probability p ± µ(n), a joint execution is non-aborting and equals (A1(α, T ), A1(α)) = (A1(α, T ), f (α')), where T represents the prover's view of an execution of Step C2 (on common input f (α') = v = A1(α), where the role of the prover is played by the residual strategy A1(α) and the verifier is honest). On the other hand, in both models, with probability 1 − p ± µ(n), the joint execution is aborting and equals (A1(α, T ), ⊥), where T is as before (except that here it is a rejecting execution transcript). Thus, the statistical difference between the two models is due only to the difference in the probability of producing an aborting execution in the two models, which in turn is negligible.
We call the reader's attention to the reliance of our analysis on the fact that the emulation of the proof system (with prover A1(α)) performed in Step 2 by B1 is distributed identically to the real execution of the proof system that takes place in Step C2 of the protocol.
2. Suppose that p ≤ µ(n). Again, in the real model the non-aborting probability is p,
which in this case is negligible. Thus, we ignore these executions and focus on the
aborting executions, which occur with probability at least 1 − p ≥ 1 − µ(n) in both
models. Recalling that aborting executions are identically distributed in both models,
we conclude that the statistical difference between the two models is at most µ(n).
Thus, in both cases, the distribution ensembles in Eq. (7.32) are statistically indistin-
guishable. The proposition follows.
Definition 7.4.13 (authenticated computation, revisited): Let f : {0, 1}* × {0, 1}* → {0, 1}* and h : {0, 1}* → {0, 1}* be polynomial-time computable. The h-authenticated f-computation functionality is redefined by

(α, β) → (λ, f (α)) if β = h(α), and (α, β) → (λ, (h(α), f (α))) otherwise.   (7.33)
48 In contrast, even privately computing the more natural functionality (α, β) → (λ , v), where v = f (α) if β =
h(α) and v = λ otherwise, is significantly harder than (securely or privately) implementing Eq. (7.33); see
Exercise 12. The difference is that Eq. (7.33) allows for revealing h(α) to Party 2 (specifically in case h(α) = β),
whereas the more natural functionality does not allow this.
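To fix the interface in mind, the following minimal Python sketch (ours, not the book's) renders the trusted-party behaviour of Eq. (7.33); the callables f and h are placeholders supplied by the caller.

```python
# Illustrative sketch (not from the text): the trusted party of Eq. (7.33).
# `f` and `h` stand for the two polynomial-time computable functions; here they
# are arbitrary placeholders supplied by the caller.

def authenticated_computation(alpha, beta, f, h):
    """h-authenticated f-computation: Party 1 contributes alpha, Party 2 contributes beta.
    Party 1 always receives the empty output lambda (here None); Party 2 receives
    f(alpha) when beta = h(alpha), and the pair (h(alpha), f(alpha)) otherwise,
    the latter case being treated as abort by the protocols built on top."""
    if beta == h(alpha):
        return None, f(alpha)
    return None, (h(alpha), f(alpha))

# Example use (toy choices of f and h):
# authenticated_computation(b"secret", b"s", lambda a: a.upper(), lambda a: a[:1])
```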
do so during Step C1. Since Step C1 consists of an oracle invocation, aborting during
Step C1 means instructing the oracle not to answer Party 2.
Proof Sketch: We need to transform any admissible pair, (A 1 , A2 ), for the real oracle-
aided model into a corresponding pair, (B1 , B2 ), for the ideal model. We start by assum-
ing that the first party is honest and by transforming the real-model adversary A 2 (for
the oracle-aided execution) into a corresponding ideal-model adversary B2 . On input
β, the latter proceeds as follows:
1. Machine B2 sends β to the trusted party and obtains the answer, which equals v def= f (α) if β = h(α) and (u, v) def= (h(α), f (α)) otherwise, where α is the (unknown to B2) input of Party 1.49 In the first case, B2 sets u def= β, and so in both cases (u, v) = (h(α), f (α)).
2. Machine B2 emulates the protocol, by feeding A2 with β and the pair (u, v),
which A2 expects to get in Step C1, and outputting whatever the latter outputs (in
Step C2).
Note that both the ideal execution under (B1 , B2 ) and the real execution (in the oracle-
aided model) under ( A1 , A2 ) yield the output pair (λ , A2 (β, (h(α), f (α)))). Thus, the
ideal and real ensembles are identical.
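To illustrate how little B2 has to do, here is a hedged Python sketch of this transformation (ours, not the book's); A2 is treated as a black box mapping its Step C1 view to an output, and trusted_answer denotes the value obtained from the trusted party on input β.

```python
# Illustrative sketch (not from the text): the ideal-model adversary B_2 for the
# case in which Party 1 is honest.  `A2` is the real-model adversary, modelled
# as a callable from its view (beta and the Step C1 oracle answer) to an output.

def ideal_B2(beta, trusted_answer, A2):
    # The trusted party returns f(alpha) when beta = h(alpha), and the pair
    # (h(alpha), f(alpha)) otherwise; in the first case B2 sets u = beta.
    if isinstance(trusted_answer, tuple):
        u, v = trusted_answer
    else:
        u, v = beta, trusted_answer
    # Feed A2 with beta and the pair (u, v) it expects in Step C1, and output
    # whatever A2 outputs.
    return A2(beta, (u, v))
```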
We now turn to the case where the second party is honest and transform the real-
model adversary A1 into a corresponding ideal-model adversary B1 . On input α,
the latter proceeds as follows:
49 Recall that, in either case, the trusted party will send Party 1 the answer λ. Also note that the emulation will
remain valid regardless of which |β|-bit long string B2 sends to the trusted party (because, for any such choice,
B2 will [explicitly] receive f (α), as well as [explicitly or implicitly] receive h(α)).
Construction 7.4.16 (an oracle-aided protocol for Eq. (7.34)): For r_1, ..., r_ℓ ∈ {0, 1}^n and σ_1, ..., σ_ℓ ∈ {0, 1}, we let C_{r_1,...,r_ℓ}(σ_1, ..., σ_ℓ) = (C_{r_1}(σ_1), ..., C_{r_ℓ}(σ_ℓ)).

Inputs: Both parties get security parameter 1^n, and set ℓ def= ℓ(n).

Step C1: Party 1 uniformly selects σ_1, ..., σ_ℓ ∈ {0, 1} and s_1, ..., s_ℓ ∈ {0, 1}^n, and lets r' def= σ_1 ⋯ σ_ℓ and s def= s_1 ⋯ s_ℓ.
Step C2: Party 1 uses the image-transmission functionality to send c def= C_s(r') to Party 2. Actually, since the image-transmission functionality is a special case of the general authenticated-computation functionality, we use the latter. That is, Party 1 enters Eq. (7.33) with input (r', s), Party 2 enters with input 1^{ℓ+ℓ·n}, and Party 2 is supposed to obtain f^{(C2)}(r', s) def= C_s(r').
Recall that, by definition, a party cannot abort the execution of an oracle call that
was not initiated (requested) by it, and so Party 2 cannot abort Steps C2–C4. For
simplicity, we assume that Party 1 does not abort Steps C2 and C3, but it may abort
Step C4.
Step C3: The parties invoke the basic coin-tossing functionality ℓ times to generate a common random string r'' ∈ {0, 1}^ℓ. That is, in the i-th invocation of the functionality of Definition 7.4.6, the parties obtain the i-th bit of r''.
Step C4: Party 1 sets r def= r' ⊕ r'', and uses the authenticated-computation functionality to send g(r) to Party 2. Specifically, Party 1 enters Eq. (7.33) with input (r', s, r''), Party 2 enters with input (c, r''), where (c, r'') is supposed to equal h^{(C4)}(r', s, r'') def= (C_s(r'), r''), and Party 2 is supposed to obtain f^{(C4)}(r', s, r'') def= g(r' ⊕ r''). In case Party 1 aborts or Party 2 obtains an answer of a different format, which happens if the inputs to the functionality do not match, Party 2 halts with output ⊥ (indicating that Party 1 misbehaved).

We comment that r = r' ⊕ r'' is uniquely determined by c and r''.
Outputs: Party 1 outputs r , and Party 2 outputs the value determined in Step C4, which
is either g(r ) or ⊥.
We stress that, in all oracle calls, Party 1 is the party initiating (requesting) the call. We
comment that more efficient alternatives to Construction 7.4.16 do exist; it is just that
we find Construction 7.4.16 easiest to analyze.
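For readers who prefer pseudocode, the following Python sketch (ours) traces the honest-party message flow of Construction 7.4.16; `commit` is a toy hash-based stand-in for the commitment scheme C, and the two oracle functionalities are collapsed into direct function calls, so no security claim is attached to it.

```python
# Illustrative trace (not from the text) of Construction 7.4.16 with honest
# parties; `commit` is a toy stand-in for C, and the oracle functionalities are
# emulated by plain function calls.
import hashlib
import secrets

def commit(bit, randomness):
    # Toy stand-in for C_s(sigma); NOT the book's commitment scheme.
    return hashlib.sha256(randomness + bytes([bit])).hexdigest()

def generalized_coin_toss(n, ell, g):
    # Step C1: Party 1 picks r' = sigma_1 ... sigma_ell and s = s_1 ... s_ell.
    r_prime = [secrets.randbelow(2) for _ in range(ell)]
    s = [secrets.token_bytes(max(n // 8, 1)) for _ in range(ell)]
    # Step C2 (image transmission via the authenticated-computation oracle):
    # Party 2 obtains c = C_s(r') = (C_{s_1}(sigma_1), ..., C_{s_ell}(sigma_ell)).
    c = [commit(b, si) for b, si in zip(r_prime, s)]
    # Step C3: ell invocations of the basic coin-tossing functionality give both
    # parties a common random string r''.
    r_dprime = [secrets.randbelow(2) for _ in range(ell)]
    # Step C4: Party 1 sets r = r' xor r'' and sends g(r) through the
    # authenticated-computation oracle; Party 2's input (c, r'') must match.
    r = [a ^ b for a, b in zip(r_prime, r_dprime)]
    assert ([commit(b, si) for b, si in zip(r_prime, s)], r_dprime) == (c, r_dprime)
    return r, g(r)    # output of Party 1, output of Party 2

# Example: generalized_coin_toss(16, 8, lambda bits: sum(bits) % 2)
```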
Proposition 7.4.17: Let F be the set of functionalities defined in Definition 7.4.6 and
Eq. (7.33), respectively. Then Construction 7.4.16 constitutes a security reduction from
the generalized coin-tossing functionality of Eq. (7.34) to F.
Proof Sketch: We start by assuming that the first party is honest and by transforming
the real-model adversary A2 (for the oracle-aided execution) into a corresponding ideal-
model adversary B2 . On input 1n , the latter proceeds as follows:
1. Machine B2 emulates the local actions of the honest Party 1 in Step C1 of the protocol, by uniformly selecting r' ∈ {0, 1}^ℓ and s ∈ {0, 1}^{ℓ·n}.
2. Machine B2 emulates Step C2 of the protocol, by feeding A2 with c def= C_s(r'). (Recall that by our convention, A2 never aborts.)
3. Machine B2 emulates Step C3 of the protocol, by uniformly selecting r'' ∈ {0, 1}^ℓ and feeding A2 with it.
4. Machine B2 invokes the trusted party with input 1^n and obtains the answer g(r), for a uniformly distributed r ∈ {0, 1}^ℓ that is handed to Party 1.50 Next, machine B2 obtains the input (or query) of A2 to the functionality of Step C4. If this input (i.e., A2(λ, C_s(r'), r''), where λ represents the Step 1 emulation of Step C1) does not equal the pair of values (C_s(r'), r'') fed to A2 in Steps 2–3, then B2 halts with output A2(λ, c, r'', ((c, r''), g(r))). Otherwise, B2 halts with output A2(λ, c, r'', g(r)).
Note that in both cases, the output of B2 corresponds to the output of A2 when fed with the corresponding emulation of Steps C1–C4. In particular, B2 emulates Step C4 by feeding A2 either with g(r) or with (h^{(C4)}(r', s, r''), g(r)), where the decision depends on whether or not A2(λ, C_s(r'), r'') = (C_s(r'), r''). (Recall that (C_s(r'), r'') = h^{(C4)}(r', s, r'').) Indeed, B2 is cheating (in the emulation of Step C4), because A2 expects to get either f^{(C4)}(r', s, r'') = g(r' ⊕ r'') or (h^{(C4)}(r', s, r''), g(r' ⊕ r'')), but (as we shall see) this cheating is undetectable.
Let us first assume that the input entered by A2 to the functionality of Step C4
does fit its view of Steps C2 and C3, an event that occurs with equal probability
50 Indeed, this part of the current step could also take place at an earlier stage.
in both models (because the emulation of Steps C2–C3 is perfect). In this case, the ideal-model execution under (B1, B2) yields the pair (r, A2(λ, C(r'), r'', g(r))), where r, r', r'' are uniformly and independently distributed. On the other hand, the real-model execution (in the oracle-aided model) under (A1, A2) yields the pair (r' ⊕ r'', A2(λ, C(r'), r'', g(r' ⊕ r''))), where r' and r'' are as before, which (for r = r' ⊕ r'') is distributed identically to (r, A2(λ, C(r ⊕ r''), r'', g(r))). However, due to the hiding property of C, the two ensembles are computationally indistinguishable. In case the input entered by A2 to the functionality of Step C4 does not fit its view of Steps C2 and C3, the ideal-model execution under (B1, B2) yields the pair (r, A2(λ, C(r'), r'', ((C(r'), r''), g(r)))), whereas the real-model execution under (A1, A2) yields the pair (r' ⊕ r'', A2(λ, C(r'), r'', ((C(r'), r''), g(r' ⊕ r'')))), which is distributed identically to (r, A2(λ, C(r ⊕ r''), r'', ((C(r ⊕ r''), r''), g(r)))). Again, the two ensembles are computationally indistinguishable.
We now turn to the case where the second party is honest and transform the real-
model adversary A1 into a corresponding ideal-model adversary B1 . On input 1n , the
latter proceeds as follows:
Note that the output of Party 1 in both the real model (under the Ai's) and the ideal model (under the Bi's) equals A1(1^n, λ, r'', λ), where r'' is uniformly distributed (in both models). The issue is the correlation of this output to the output of Party 2, which is relevant only if Party 2 does have an output. Recall that Party 2 obtains an output (in both models) only if the corresponding Party 1 does not abort (or stop the trusted party). Furthermore, in both models, an output is obtained if and only if (C_q(q'), q'') = (C_s(r'), r'') holds, where (r', s) def= A1(1^n) and (q', q, q'') def= A1(1^n, λ, r''). In particular, (C_q(q'), q'') = (C_s(r'), r'') implies that the inputs entered in Step C4 match.
51 In particular, if (contrary to our simplifying assumption) A1 aborts before Step C4, then the sequence (q', q, q'') equals ⊥ and does not fit (C_s(r'), r'').
An Important Special Case. An important special case of Eq. (7.34) is when g(r, s) =
C s (r ), where |s| = n · |r |. This special case will be called the augmented coin-tossing
functionality.
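For concreteness, this special case (the functionality (1^n, 1^n) → ((r, s), C_s(r)), cf. the coin-generation phase of Construction 7.4.23 below) can be rendered by the following sketch of ours, in which a toy hash-based commitment stands in for C.

```python
# Illustrative sketch (not from the text): the augmented coin-tossing
# functionality (1^n, 1^n) -> ((r, s), C_s(r)), with a toy stand-in for C.
import hashlib
import secrets

def augmented_coin_toss(n, ell):
    r = [secrets.randbelow(2) for _ in range(ell)]          # r uniform in {0,1}^ell
    s = secrets.token_bytes(max(n * ell // 8, 1))           # s uniform in {0,1}^{n*ell}
    commitment = hashlib.sha256(s + bytes(r)).hexdigest()   # stand-in for C_s(r)
    return (r, s), commitment    # Party 1's output, Party 2's output
```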
Certainly, the naive protocol of just letting Party 1 send Party 2 a commitment to x does
not constitute a secure implementation of Eq. (7.36): This naive suggestion does not
guarantee that the output is in the range of the commitment scheme, let alone that it is a
random commitment for which Party 1 knows a corresponding decommitment. Thus,
the naive protocol must be augmented by mechanisms that address all these concerns.
We show that Eq. (7.36) can be securely reduced to the set of functionalities presented
in previous subsections.
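The target functionality itself is simple to state: Eq. (7.36) maps (x, 1^n) to (r, C_r(x)), where r is uniformly distributed in {0, 1}^{n²} (cf. the input-commitment phase of Construction 7.4.23 below). The following sketch (ours, with a toy hash-based stand-in for the commitment scheme C) renders this trusted-party behaviour.

```python
# Illustrative sketch (not from the text): the input-commitment functionality
# of Eq. (7.36), (x, 1^n) -> (r, C_r(x)), with a toy stand-in for C.
import hashlib
import secrets

def input_commitment(x: bytes, n: int):
    r = secrets.token_bytes(max(n * n // 8, 1))       # r uniform in {0,1}^{n^2}
    commitment = hashlib.sha256(r + x).hexdigest()    # stand-in for C_r(x)
    return r, commitment    # Party 1 learns r, Party 2 learns C_r(x)
```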
Inputs: Party 1 has input x ∈ {0, 1}n , whereas Party 2 gets input 1n .
Step C1: Party 1 selects uniformly r' ∈ {0, 1}^{n²}.

Step C2: Party 1 uses the image-transmission functionality to send c' def= C_{r'}(x) to Party 2. Again, we actually use the authenticated-computation functionality, where Party 1 enters Eq. (7.33) with input (x, r'), Party 2 enters with input 1^{n+n²}, and Party 2 is supposed to obtain f^{(C2)}(x, r') def= C_{r'}(x). Thus, Steps C1–C2 yield an initial commitment to the input.
As in Construction 7.4.16, we recall that Party 2 cannot abort Steps C2–C4, and
assume that Party 1 does not abort Steps C2 and C3.
Step C3: Generating coins for the final commitment. The parties use the augmented coin-tossing functionality to obtain the outputs (r, r'') and c'' def= C_{r''}(r), respectively, where r ∈ {0, 1}^{n²} and r'' ∈ {0, 1}^{n³} are uniformly and independently distributed. That is, Party 1 gets (r, r''), while Party 2 gets c''.

Step C4: Sending the final commitment. Party 1 uses the authenticated-computation functionality to send C_r(x) to Party 2, where (x, r) is uniquely determined by (c', c''). Specifically, Party 1 enters Eq. (7.33) with input (x, r, r', r''), Party 2 enters with input (c', c''), where (c', c'') is supposed to equal h^{(C4)}(x, r, r', r'') def= (C_{r'}(x), C_{r''}(r)), and Party 2 is supposed to obtain f^{(C4)}(x, r, r', r'') def= C_r(x).
In case Party 1 aborts or Party 2 obtains an answer of a different format, which
happens if the inputs to the functionality do not match, Party 2 halts with output ⊥
(indicating that Party 1 misbehaved).
Outputs: Party 1 outputs r, and Party 2 outputs the value determined in Step C4, which is either C_r(x) or ⊥.
Again, more efficient alternatives to Construction 7.4.20 do exist, but we prefer to
analyze the one here.
Proof Sketch: We start by assuming that the first party is honest and by transforming
the real-model adversary A2 (for the oracle-aided execution) into a corresponding ideal-
model adversary B2 . On input 1n , the latter proceeds as follows:
1. Machine B2 emulates (the actions of the honest Party 1 in) Step C1 of the protocol, by uniformly selecting r' ∈ {0, 1}^{n²}.
2. Machine B2 emulates Step C2 of the protocol, by feeding A2 with c' def= C_{r'}(0^n). (Clearly, B2 is cheating, because A2 is supposed to be fed with C(x), where x is the (unknown to B2) input of Party 1. However, A2 cannot detect this cheating.)
3. Machine B2 emulates Step C3 of the protocol, by uniformly selecting s ∈ {0, 1}^{n²} and r'' ∈ {0, 1}^{n³}, and feeding A2 with c'' def= C_{r''}(s).
4. Machine B2 invokes the trusted party with input 1^n and obtains the answer C_r(x), for a uniformly distributed r ∈ {0, 1}^{n²} that is handed to Party 1.52 Next, machine B2 obtains the input (or query) of A2 to the functionality of Step C4. If this input (i.e., A2(λ, c', c'')) does not equal the pair of values (c', c'') = (C_{r'}(0^n), C_{r''}(s)) fed to A2 in Steps 2–3, then B2 halts with output A2(λ, c', c'', ((c', c''), C_r(x))). Otherwise, B2 halts with output A2(λ, c', c'', C_r(x)).
Note that in both cases, the output of B2 corresponds to the output of A2 when fed with the corresponding emulation of Steps C1–C4. In particular, B2 emulates Step C4 by feeding A2 either with C_r(x) or with ((C(0^n), C(s)), C_r(x)), where the decision depends on whether or not A2(λ, C_{r'}(0^n), C_{r''}(s)) = (C_{r'}(0^n), C_{r''}(s)). (Recall that (C_{r'}(0^n), C_{r''}(s)) = h^{(C4)}(0^n, s, r', r'').) Indeed, on top of cheating in the emulation of Step C2, machine B2 cheats in the emulation of Step C4, firstly because the decision is supposed to depend on whether or not A2(λ, C_{r'}(x), C_{r''}(r)) = (C_{r'}(x), C_{r''}(r)), where (C_{r'}(x), C_{r''}(r)) = h^{(C4)}(x, r, r', r''), and secondly because A2 expects to get either C_r(x) = f^{(C4)}(x, r, r', r'') or ((C(x), C(r)), C_r(x)) ≡ (h^{(C4)}(x, r, r', r''), f^{(C4)}(x, r, r', r'')). However, as we shall see, this cheating is undetectable.
Let us first assume that the input entered by A2 to the functionality of Step C4 does fit its view of Steps C2 and C3. In this case, the ideal-model execution under (B1, B2) yields the pair (r, A2(λ, C(0^n), C(s), C_r(x))), where r and s are uniformly and independently distributed. On the other hand, the corresponding real-model execution (in the oracle-aided model) under (A1, A2) yields the pair (r, A2(λ, C(x), C(r), C_r(x))), where r is as before. However, due to the hiding property of C, the two ensembles are computationally indistinguishable.53 In case the input entered by A2 to the functionality of Step C4 does not fit its view of Steps C2 and C3, the ideal-model execution under (B1, B2) yields the pair (r, A2(λ, C(0^n), C(s), ((C(0^n), C(s)), C_r(x)))), whereas the corresponding real-model execution under (A1, A2) yields the pair (r, A2(λ, C(x), C(r), ((C(x), C(r)), C_r(x)))). Again, the two ensembles are computationally indistinguishable. Since the two cases occur with almost the same probability in both models (because the decision depends on A2(λ, c', c''), where (c', c'') is either (C(0^n), C(s)) or (C(x), C(r))), the outputs in the two models are indistinguishable.
52 Indeed, this part of the current step could also take place at an earlier stage.
53 In fact, the said ensembles are computationally indistinguishable even when r and s are fixed, rather than being
random. That is, the ensembles {(C(0|x| ), C(s), C r (x))}x,r,s and {(C(x), C(r ), C r (x))}x,r,s are computationally
indistinguishable, where (as usual) the distribution’s index (x, r, s) is also given to the potential distinguisher.
This follows from the computational indistinguishability of {(C(0|x| ), C(s))}x,r,s and {(C(x), C(r ))} x,r,s , which
in turn follows from the hiding property of C.
We now turn to the case where the second party is honest and transform the real-
model adversary A1 into a corresponding ideal-model adversary B1 . On input x, the
latter proceeds as follows:
4. Machine B1 starts its emulation of Step C4, by checking whether or not the query that A1 wishes to make (i.e., A1(x, λ, (r, r''))) fits the tuple (x', r, r', r'') in the sense that it yields the same value (C_{r'}(x'), C_{r''}(r)). That is, let (q1, q2, s1, s2) def= A1(x, λ, (r, r'')). If (C_{s1}(q1), C_{s2}(q2)) = (C_{r'}(x'), C_{r''}(r)), then B1 instructs the trusted party to answer Party 2; otherwise B1 instructs the trusted party to stop (without answering Party 2). Finally, B1 outputs whatever A1 does (i.e., A1(x, λ, (r, r''), λ), where the four inputs of A1 correspond to its view in each of the four steps).
Note that the output of Party 1 in both the real model (under the Ai's) and the ideal model (under the Bi's) equals A1(x, λ, (r, r''), λ), where r ∈ {0, 1}^{n²} and r'' ∈ {0, 1}^{n³} are uniformly and independently distributed (in both models). The issue is the correlation of this output to the output of Party 2, which is relevant only if Party 2 does have an output. Recall that Party 2 obtains an output (in both models) only if the corresponding Party 1 does not abort (or stop the trusted party). Furthermore, in both models, an output is obtained if and only if (C_{s1}(q1), C_{s2}(q2)) = (C_{r'}(x'), C_{r''}(r)), where (x', r') = A1(x) and (q1, q2, s1, s2) = A1(x, λ, (r, r'')). In particular, (C_{s1}(q1), C_{s2}(q2)) = (C_{r'}(x'), C_{r''}(r)) implies that (q1, q2) = (x', r) and that the inputs entered in Step C4 do match (i.e., h^{(C4)}(q1, q2, s1, s2) = (C_{r'}(x'), C_{r''}(r))), which means that in the real model, the output of Party 2 is f^{(C4)}(q1, q2, s1, s2) = f^{(C4)}(x', r, s1, s2) = C_r(x') (exactly as in the ideal model). We conclude that the ideal model perfectly emulates the real model, and the proposition follows.
7.4.3.7. Summary
Combining Proposition 7.4.8 (resp., Proposition 7.4.12) with suitable results regarding
the underlying primitives, we conclude that coin-tossing (resp., image transmission
as in Eq. (7.31)) can be securely implemented based on any 1-1 one-way function.
Combining Proposition 7.4.15 (resp., Proposition 7.4.19) [resp., Proposition 7.4.21]
with the previous results, by using the Composition Theorem (i.e., Theorem 7.4.3 or
Remark 7.4.5), we obtain secure implementations of the authenticated-computation
functionality (resp., augmented coin-tossing) [resp., input-commitment functionality].
The 1-1 restriction can be waived by using a slightly more cumbersome construction that
utilizes the commitment scheme of Construction 4.4.4 (instead of the simple scheme
of Construction 4.4.2). We thus state the following for future reference:
Inputs: Party 1 gets input x ∈ {0, 1}n and Party 2 gets input y ∈ {0, 1}n .
Input-Commitment Phase: Each of the two parties commits to its input by using the input-commitment functionality of Eq. (7.36). Recall that Eq. (7.36) maps the input pair (u, 1^n) to the output pair (s, C_s(u)), where s is uniformly distributed in {0, 1}^{n²}. Thus, each of the parties obtains decommitment information that will allow it to perform its role in the protocol-emulation phase.

Specifically, we are talking about two invocations of Eq. (7.36). In the first invocation, Party 1, wishing to commit to x, plays the role of the first party in Eq. (7.36), and obtains a uniformly distributed ρ^1 ∈ {0, 1}^{n²}, whereas Party 2 (which plays the role of the second party in Eq. (7.36)) obtains γ^1 def= C_{ρ^1}(x). Likewise, in the second invocation, Party 2, wishing to commit to y, plays the role of the first party in Eq. (7.36), and obtains a uniformly distributed ρ^2 ∈ {0, 1}^{n²}, whereas Party 1 (which plays the role of the second party in Eq. (7.36)) obtains γ^2 def= C_{ρ^2}(y).
Coin-Generation Phase: Each of the parties generates a random-tape for the emulation of Π, by invoking the augmented coin-tossing functionality of Eq. (7.35). Recall that this functionality maps the input pair (1^n, 1^n) to the output pair ((r, s), C_s(r)), where (r, s) is uniformly distributed in {0, 1}^{ℓ(n)} × {0, 1}^{n·ℓ(n)}. Thus, each party
obtains the random-tape to be held by it, whereas the other party obtains a commit-
ment to this value. The party holding the random-tape also obtains the randomization
used in the corresponding commitment, which it will use in performing its role in the
protocol-emulation phase.
Specifically, we are talking about two invocations of Eq. (7.35). In the first (resp., second) invocation, Party 1 (resp., Party 2) plays the role of the first party in Eq. (7.35), and obtains a uniformly distributed (r^1, ω^1) ∈ {0, 1}^{ℓ(n)} × {0, 1}^{n·ℓ(n)} (resp., (r^2, ω^2) ∈ {0, 1}^{ℓ(n)} × {0, 1}^{n·ℓ(n)}), whereas Party 2 (resp., Party 1), which plays the other role, obtains δ^1 def= C_{ω^1}(r^1) (resp., δ^2 def= C_{ω^2}(r^2)).
Protocol-Emulation Phase: The parties use the authenticated-computation functionality of Eq. (7.33) in order to emulate each step of protocol Π. Recall that, for predetermined functions h and f, this functionality maps the input pair (α, β) to the output pair (λ, f (α)) if β = h(α) and to (λ, (h(α), f (α))) otherwise, where the second case is treated as abort.

The party that is supposed to send a message plays the role of the first (i.e., initiating) party in Eq. (7.33), and the party that is supposed to receive the message plays the role of the second party. Suppose that the current message in Π is to be sent by Party j, and let u def= x if j = 1 and u def= y otherwise. Then the functions h, f and the inputs α, β, for the functionality of Eq. (7.33), are set as follows:
• The string α is set to equal (α1, α2, α3), where α1 = (u, ρ^j) is the query and answer of Party j in the oracle call that it initiated in the input-commitment phase, α2 = (r^j, ω^j) is the answer that Party j obtained in the oracle call that it initiated in the coin-generation phase, and α3 is the sequence of messages that Party j obtained so far in the emulation of Π. The string β equals (γ^j, δ^j, α3), where γ^j and δ^j are the answers that the other party obtained in the same oracle calls in the first two phases (and α3 is as before).
In particular, u is the input to which Party j committed in the input-commitment phase, and r^j is the random-tape generated for it in the coin-generation phase. Together with α3, they determine the message that is to be sent by Party j in Π. The auxiliary strings ρ^j and ω^j will be used to authenticate u and r^j, as reflected in the following definition of h.
• The function h is defined such that h((v1, s1), (v2, s2), v3) equals (C_{s1}(v1), C_{s2}(v2), v3). Indeed, it holds that h(α1, α2, α3) = (C_{ρ^j}(u), C_{ω^j}(r^j), α3) = β.
• The function f equals the computation that determines the message to be sent in Π. Note that this message is computable in polynomial-time from the party's input (denoted u and being part of α1), its random-tape (denoted r^j and being part of α2), and the messages it has received so far (i.e., α3). Indeed, it holds that f (α1, α2, α3) is the message that Party j should send in Π.
Recall that the party that plays the receiver in the current oracle call obtains either
f (α) or (h(α), f (α)). It treats the second case as if the other party has aborted,
which is also possible per se.
Aborting: In case any of the functionalities invoked in any of these phases terminates in
an abort state, the party (or parties) obtaining this indication aborts the execution,
and sets its output to ⊥. Otherwise, outputs are as follows.
Outputs: At the end of the emulation phase, each party holds the corresponding output of the party in protocol Π. The party just locally outputs this value.
Clearly, in case both parties are honest, the input–output relation of Π' is identical to that of Π. (We will show that essentially the same also holds in general.) We note that the transformation of Π to Π' can be implemented in polynomial-time. Finally, replacing the oracle calls by the sub-protocols provided in Proposition 7.4.22 yields a standard protocol for the malicious model.
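The overall shape of the compiled protocol can be summarized by the following Python skeleton (ours); the three oracle functionalities and the next-message computation of Π are abstract parameters, and a strictly alternating two-party message schedule is assumed for simplicity.

```python
# Illustrative skeleton (not from the text) of the compiled protocol: the
# oracles of Eq. (7.36), Eq. (7.35) and Eq. (7.33) are abstract callables, and
# `rounds` messages of the original protocol are assumed to alternate strictly
# between the parties, starting with Party 1.

def compiled_protocol(x, y, n, input_commit, aug_coin_toss, auth_send, rounds):
    # Input-commitment phase: two invocations of Eq. (7.36).
    rho1, gamma1 = input_commit(x, n)        # Party 1 commits to x
    rho2, gamma2 = input_commit(y, n)        # Party 2 commits to y
    # Coin-generation phase: two invocations of Eq. (7.35).
    (r1, omega1), delta1 = aug_coin_toss(n)  # Party 1's random-tape for the emulation
    (r2, omega2), delta2 = aug_coin_toss(n)  # Party 2's random-tape for the emulation
    # Protocol-emulation phase: every message travels through Eq. (7.33).
    transcript = []
    for step in range(rounds):
        if step % 2 == 0:   # Party 1 speaks
            alpha = ((x, rho1), (r1, omega1), tuple(transcript))
            beta = (gamma1, delta1, tuple(transcript))
        else:               # Party 2 speaks
            alpha = ((y, rho2), (r2, omega2), tuple(transcript))
            beta = (gamma2, delta2, tuple(transcript))
        transcript.append(auth_send(alpha, beta))   # abstract call to Eq. (7.33)
    return transcript
```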
Entering the execution: Depending on its initial input, denoted u, the party may abort before taking any step in the execution of Π. Otherwise, again depending on u, it enters the execution with any input u' ∈ {0, 1}^{|u|} of its choice. From this point on, u' is fixed.

Proper selection of a random-tape: The party selects the random-tape to be used in Π uniformly among all strings of the length specified by Π. That is, the selection of the random-tape is exactly as specified by Π.

Proper message transmission or abort: In each step of Π, depending on its view of the execution so far, the party may either abort or send a message as instructed by Π. We stress that the message is computed as Π instructs based on input u', the selected random-tape, and all messages received so far.
54 Indeed, Theorem 7.4.1 will follow as a special case of the general analysis of the compiler (as provided later).
See further discussion following the statement of Proposition 7.4.25.
Output: At the end of the interaction, the party produces an output depending on its
entire view of the interaction. We stress that the view consists of the initial input u,
the selected random-tape, and all messages received so far.
A pair of probabilistic polynomial-time strategies, C = (C1, C2), is admissible with respect to Π in the augmented semi-honest model if one strategy implements Π and the other implements an augmented semi-honest behavior with respect to Π.
The augmented semi-honest model extends the ordinary semi-honest model in allowing
adversaries to modify their initial input and to abort the execution at an arbitrary time.
The augmented semi-honest model is arguably more appealing than the semi-honest
model because in many settings, input modification and aborting can also be performed
at a high level, without modifying the prescribed program. In contrast, implementing
an effective malicious adversary may require some insight into the original protocol,
and it typically requires substitution of the program’s code.
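The allowed deviations can also be read off the following interface sketch (the class and method names are ours, purely for illustration).

```python
# Illustrative interface (not from the text): the only deviations available to
# an augmented semi-honest party, on top of following the protocol.
import secrets

class AugmentedSemiHonestParty:
    def enter_execution(self, u):
        """May abort now, or enter the protocol with any substituted input u' of
        length |u|; the honest choice u' = u is used in this sketch, and the
        chosen input is then fixed for the rest of the execution."""
        self.input = u
        return self.input

    def select_random_tape(self, tape_length):
        """The random-tape must be chosen uniformly, exactly as the protocol prescribes."""
        self.tape = [secrets.randbelow(2) for _ in range(tape_length)]
        return self.tape

    def next_step(self, prescribed_message, view):
        """Either abort (return None) or send exactly the message prescribed by the
        protocol on the chosen input, the selected tape, and the view so far."""
        return prescribed_message(self.input, self.tape, view)
```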
Intuitively, the compiler transforms any protocol Π into an (oracle-aided) protocol Π', such that executions of Π' in the malicious model correspond to executions of Π in the augmented semi-honest model. That is:
Proposition 7.4.25 (general analysis of the two-party compiler): Let Π' be the (oracle-aided) protocol produced by Construction 7.4.23 when given the protocol Π, and let G denote the set of the three oracle functionalities that are used by protocol Π'. Then, for every pair of probabilistic polynomial-time strategies A = (A1, A2) that are admissible (with respect to Π') for the (real) malicious model (of Definition 7.4.2),55 there exists a pair of probabilistic polynomial-time strategies B = (B1, B2) that are admissible with respect to Π for the augmented semi-honest model (of Definition 7.4.24), such that

{real_{Π,B(z)}(x, y)}_{x,y,z} ≡^c {real^G_{Π',A(z)}(x, y)}_{x,y,z}

where x, y, z ∈ {0, 1}* such that |x| = |y| and |z| = poly(|x|).
55 Recall that the definition of real-model adversaries for an oracle-aided protocol (i.e., Definition 7.4.2) extends the definition of real-model adversaries for ordinary protocols (i.e., Definition 7.2.5).
Proof Sketch: Given a pair of strategies, (A1, A2), which is admissible with respect to Π' for the real malicious model, we present a corresponding pair, (B1, B2), that is admissible with respect to Π for the augmented semi-honest model. In the current proof, the treatment of the two cases for the identity of the honest party is symmetric. Hence, we use a generic symbol for the said identity. (Alternatively, without loss of generality, one may assume that Party 1 is honest.)
We denote by hon the identity of the honest party and by mal the identity of the malicious party (i.e., {hon, mal} = {1, 2}). Thus, Bhon is determined by Π, and we transform (the malicious adversary) Amal into (an augmented semi-honest adversary) Bmal, which uses Amal as a subroutine. In particular, machine Bmal will emulate all the oracles that are used in Π' (which is an oracle-aided protocol compiled out of the ordinary protocol Π). On input u ∈ {0, 1}^n, machine Bmal behaves as follows:
Entering the execution: Machine Bmal invokes Amal on input u, and decides whether to enter the protocol, and if so, with what input. Toward this end, machine Bmal emulates the input-commitment phase of Π', using Amal (as subroutine). Machine Bmal obtains from Amal the oracle-query that it makes to the input-commitment functionality (initiated by it), and uses this query to determine the replaced input u' (to be used in the rest of the execution). It also provides Amal with the oracle answers that Amal expects to get. Details follow.
Recall that the input-commitment phase consists of two invocations of the input-commitment functionality, one by Partyhon and the other by Partymal. In each invocation, one party supplies an input and the other party gets a commitment to it (while the first party gets the corresponding commitment coins).
In case Bmal did not abort, it enters protocol Π with input u'. Note that this entire step is implemented in polynomial-time, and the resulting u' depends only on u (the initial input of Bmal).
Selection of random-tape: Bmal selects its random-tape uniformly in {0, 1}^{ℓ(n)} (as specified by Π), and emulates the execution of the coin-generation phase of Π' ending with this outcome, so as to place Amal in the appropriate state toward the protocol-emulation phase. To achieve the latter goal, machine Bmal supplies Amal with the oracle answers that it expects to see. Again, we distinguish between the two oracle calls (to the augmented coin-tossing functionality) made during the coin-generation phase of Π':
• In the invocation of the augmented coin-tossing functionality in which Partyhon obtains the outcome of the coin-toss, machine Bmal generates a dummy commitment (supposedly to the random-tape of Partyhon) and feeds it to Amal, which expects to get a commitment (as answer from the oracle). Specifically, Bmal uniformly selects ω^hon ∈ {0, 1}^{n·ℓ(n)}, and computes the commitment δ^hon def= C_{ω^hon}(0^{ℓ(n)}), where 0^{ℓ(n)} is an arbitrary (dummy) value (which replaces the unknown random-tape of Partyhon). Machine Bmal feeds Amal with δ^hon (as if δ^hon were the oracle answer).
• In the invocation of the augmented coin-tossing functionality in which Partymal obtains the outcome of the coin-toss, machine Bmal first selects uniformly r^mal ∈ {0, 1}^{ℓ(n)} and ω^mal ∈ {0, 1}^{n·ℓ(n)}, and feeds Amal with the pair (r^mal, ω^mal). Machine Bmal will use r^mal as its random-tape in its (augmented semi-honest) execution of Π. If Amal aborts this oracle call, then Bmal aborts.
In case Bmal did not abort, it will use r^mal as its random-tape in the subsequent steps of protocol Π. Note that this entire step is implemented in polynomial-time, and that r^mal is selected uniformly in {0, 1}^{ℓ(n)} independent of anything else.
Subsequent steps – message transmission: Machine Bmal now enters the actual execution of Π. It proceeds in this real execution along with emulating the corresponding oracle answers of the authenticated-computation functionality. In a message-transmission step by Partyhon (in Π'), machine Bmal obtains from Partyhon (in the real execution of Π) a message, and emulates the answer given to Partymal by the authenticated-computation functionality. In a message-transmission step by Partymal in Π', machine Bmal computes the message to be sent to Partyhon (in Π) as instructed by Π, based on the input u' determined earlier, the random-tape r^mal selected earlier, and the messages obtained so far from Partyhon (in Π). It then checks if Amal makes the correct oracle-query, in which case it sends Partyhon the message just computed, and otherwise it aborts. Details follow:
• In a message-transmission step by Partyhon (in Π'), machine Bmal first obtains from Partyhon (in the real execution of Π) a message, denoted msg. Next, machine Bmal obtains from Amal the query that Amal makes to the authenticated-computation functionality. Let us denote this query by β = (q1, q2, q3). If (q1, q2) = (γ^hon, δ^hon) and q3 equals the sequence of messages sent so far (by Bmal to Partyhon), then Bmal feeds Amal with the received message msg. Otherwise, Bmal feeds Amal with ((γ^hon, δ^hon, α3), msg), where α3 is the sequence of messages sent so far (by Bmal to Partyhon). (The latter case means that Amal is cheating, but Partyhon does not detect this fact (because it obtains no answer from the authenticated-computation functionality).)
• In a message-transmission step by Partymal (in Π'), machine Bmal first computes the message, denoted msg, that it should send (according to Π) on input u' (as determined earlier), random-tape r^mal (as recorded earlier), and the messages received so far (from Partyhon in the execution of Π). Next, machine Bmal obtains from Amal the query that Amal makes to the authenticated-computation functionality. Let us denote this query by ((u'', ρ''), (r'', ω''), α3). If C_{ρ''}(u'') = C_{ρ^mal}(u'), C_{ω''}(r'') = C_{ω^mal}(r^mal), and α3 equals the sequence of messages received so far (from Partyhon), then Bmal sends the message msg to Partyhon. Otherwise, Bmal aborts Π. (The latter case means that Amal is cheating in Π', and Partyhon detects this fact and treats it as if Partymal has aborted in Π'.)
Output: Machine Bmal just outputs whatever machine Amal outputs, given the execution history (in Π') emulated earlier.
Clearly, machine Bmal (as described) implements an augmented semi-honest behavior with respect to Π. It is left to show that

{real^G_{Π',A(z)}(x, y)}_{x,y,z} ≡^c {real_{Π,B(z)}(x, y)}_{x,y,z}   (7.37)
There is only one difference between the two ensembles referred to in Eq. (7.37): In the first distribution (i.e., real^G_{Π',A(z)}(x, y)), the commitments obtained by Amal in the input-commitment and coin-generation phases are to the true input and true random-tape of Partyhon. On the other hand, in the second distribution (i.e., real_{Π,B(z)}(x, y)), the emulated machine Amal is given commitments to dummy values (and the actions of Bmal are determined accordingly). We stress that, other than this differ-
tions of Bmal are determined accordingly). We stress that, other than this differ-
ence, Bmal perfectly emulates Amal . However, the difference is “undetectable” (i.e.,
computationally indistinguishable) due to the hiding property of the commitment
scheme.
Composing the oracle-aided protocols produced by the compiler with secure imple-
mentations of these oracles (as provided by Proposition 7.4.22), and using the Compo-
sition Theorem and Proposition 7.4.25, we obtain:
where x, y, z ∈ {0, 1}∗ such that |x| = |y| and |z| = poly(|x|).
Proposition 7.4.27 (on canonical protocols): Let Π be a canonical protocol that privately computes the functionality f. Then, for every probabilistic polynomial-time pair B = (B1, B2) that is admissible for the (real) augmented semi-honest model (of Definition 7.4.24), there exists a probabilistic polynomial-time pair C = (C1, C2) that is admissible for the ideal malicious model (of Definition 7.2.4) such that

{real_{Π,B(z)}(x, y)}_{x,y,z} ≡^c {ideal_{f,C(z)}(x, y)}_{x,y,z}

where x, y, z ∈ {0, 1}* such that |x| = |y| and |z| = poly(|x|).
Proof Sketch: Recall that canonical protocols (cf. Definition 7.3.13) proceed in two
stages, where the first stage yields no information at all (to any semi-honest party) and
the second phase consists of the exchange of a single pair of messages (i.e., each party
sends a single message). We use the fact that canonical protocols admit a two-stage
simulation procedure (for the view of a semi-honest party). Such two-stage simulators
act as follows:
Input to simulator: A pair (u, v), where u is the initial input of the semi-honest party
and v the corresponding local output.
Simulation Stage 1: Based (only) on u, the simulator generates a transcript corresponding to the view of the semi-honest party in the first stage of the canonical protocol Π.
Recall that this is a truncated execution of Π, where the execution is truncated just before the very last message is received by the semi-honest party. We stress that this truncated view, denoted T, is produced without using v.
Simulation Stage 2: Based on T and v, the simulator produces a string corresponding
to the last message received by the semi-honest party. The simulator then outputs the
concatenation of T and this (last) message.
The reader may easily verify that any canonical protocol has two-stage simulators.
Loosely speaking, a simulator as required in Stage 1 is implicit in the definition of
a canonical protocol (cf. Definition 7.3.13), and the simulation of Stage 2 is trivial
(because Stage 1 in a canonical protocol ends with the parties holding shares of the
desired outputs, and Stage 2 consists of each party sending the share required by the
other party).
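In code, the two-stage structure amounts to the following interface (ours, for illustration); stage1 and stage2 are placeholders for the two simulation procedures.

```python
# Illustrative interface (not from the text): a two-stage simulator for the
# view of a semi-honest party in a canonical protocol.

class TwoStageSimulator:
    def __init__(self, stage1, stage2):
        self.stage1 = stage1   # placeholder: simulates the Stage 1 view from u alone
        self.stage2 = stage2   # placeholder: derives the last incoming message from (T, v)

    def simulate_view(self, u, v):
        T = self.stage1(u)          # truncated view, produced without using v
        last = self.stage2(T, v)    # the single last incoming message
        return T + [last]           # full simulated view of the semi-honest party
```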
Next, for any protocol Π having two-stage simulators, given a pair (B1, B2) that is admissible with respect to Π for the augmented semi-honest model, we construct a pair (C1, C2) that is admissible for the ideal malicious model. We distinguish two
cases, corresponding to the identity of the honest party. The difference between these
cases amounts to the possibility of (meaningfully) aborting the execution after receiving
the last message (and just before sending the last message). This possibility exists for
a dishonest Party 1 but not for a dishonest Party 2 (see Figure 7.3).
We start with the case where Party 1 is honest (and Party 2 is dishonest). In this
case, C1 is determined (by Π), and we need to transform the augmented semi-honest real adversary B2 into a malicious ideal-model adversary C2. The latter operates as follows, using the two-stage simulator, denoted S2, provided for the view of Party 2 in semi-honest executions of Π (which privately computes f ). Recall that C2 gets input y ∈ {0, 1}^n.
1. Machine C2 first determines the input y' to be sent to the trusted party, where y' is determined according to the behavior of B2 during the entire emulation of the (canonical) protocol Π. In addition, C2 emulates the messages sent and received by
Free ebooks ==> www.Ebook777.com
GENERAL CRYPTOGRAPHIC PROTOCOLS
Party 1 Party 2
Stage 1
(r1,r2) (s1,s2)
Stage 2
s1
meaningful
abort
r2
r1 + s1 r2+ s2
Figure 7.3: Schematic depiction of a canonical protocol.
B2 during the first phase of Π, and also determines the last message of B2 (i.e., its single Stage 2 message). This is done as follows:
(a) First, C2 computes the substituted input with which (the augmented semi-honest adversary) B2 enters Π. That is, y' ← B2(y). In case B2 aborts, machine C2 sets y' = ⊥ (so as to conform with the [simplifying] convention that the ideal-model adversary always sends input to the trusted party).
(b) Next, C2 invokes the first stage of the simulator S2 in order to obtain the view of the execution of the first stage of Π as seen by a semi-honest party having input y'. Denote this view by T, and note that T includes y'. Machine C2 extracts from T the random-tape, denoted r, of Party 2. This random-tape will be fixed for the use of B2.
(c) Using T, machine C2 emulates the execution of B2 on input y and random-tape r, up to the point where Party 2 is to receive the last message (in Π). We stress that this point is just after Party 2 has sent its last message. Thus, the last message of Party 2 (in Π) is determined at this step. To perform the emulation, C2 feeds B2 with input y and random-tape r, and iteratively feeds B2 with the sequence of (incoming) messages as appearing in the corresponding locations in T. We stress that although T is only the transcript of Stage 1 in Π, it determines all messages of Party 2 in Π (including its single Stage 2 message).
Note that the augmented semi-honest strategy B2 may abort in such an execution, but in case it does not abort, the messages it sends fit the transcript T. Consequently, the view of (the augmented semi-honest adversary) B2 in an execution of the first stage of Π is emulated by a prefix of T (which in turn represents the simulated view of a semi-honest party on input y').
In case B2 has aborted the execution (even just before sending the last message, which belongs to Stage 2), machine C2 resets y' to ⊥.
2. Machine C2 invokes the trusted party with input y' and obtains a response, denoted v. (Since the trusted party answers Party 1 first, Party 2 does not have the option of stopping the trusted party before it answers Party 1. But this option is not needed, because Party 2 cannot meaningfully abort after receiving the last message in it. That is, if B2 has not aborted so far, then it cannot (meaningfully) abort now, because it has already sent (or rather determined) its last message.)
7.5* EXTENSION TO THE MULTI-PARTY CASE
In this section, we extend the treatment of general secure protocols from the two-
party case to the multi-party case. Again, our ultimate goal is to design protocols that
withstand any feasible adversarial behavior, and again we proceed in two steps. We first
consider a benign type of adversary, called semi-honest, and construct protocols that
are secure with respect to such an adversary. The definition of this type of adversary
is very much the same as in the two-party case. Next, we turn to the case of general
adversary behavior, but here (unlike in the two-party case) we consider two different
models. The first model of malicious behavior mimics the treatment of adversaries in
the two-party case; it allows the adversary to control even a majority of the parties, but
it does not view the (unavoidable) early abort phenomena as a violation of security.
In the second model of malicious behavior, we assume that the adversary can control
only a strict minority of the parties. In this model, which would have been vacuous in
the two-party case, the early abort phenomena can be effectively prevented. We show
how to transform protocols secure in the semi-honest model into protocols secure in
each of the two malicious-behavior models. As in the two-party case, this is done by
forcing parties (in each of the latter models) to behave in an effectively semi-honest
manner.
The constructions are obtained by suitable modifications of the constructions used
in the two-party case. In fact, the construction of multi-party protocols for the semi-
honest model is a minor modification of the construction used in the two-party case. The
same holds for the compilation of protocols for the semi-honest model into protocols
for the first malicious model. When compiling protocols for the semi-honest model
into protocols for the second malicious model, we will use a new primitive, called
Verifiable Secret Sharing (VSS), in order to “effectively prevent” minority parties from
aborting the protocol prematurely. Actually, we shall compile protocols secure in the
first malicious model into protocols secure in the second malicious model.
Our treatment touches upon a variety of issues that were ignored (or are inapplicable)
in the two-party case. These issues include the communication model (i.e., the type of
communication channels), the consideration of an external adversary, and the way the
latter selects dishonest parties (or corrupts parties). In particular, in some models (i.e.,
postulating private channels and a majority of honest participants), it is possible to obtain
secure protocols without relying on any intractability assumptions: See Section 7.6.
Teaching Tip. We strongly recommend reading Sections 7.2–7.4 before reading the
current section. In many places in the current section, motivating discussions and
technical details are omitted, while relying on the fact that an analogous elaboration has
appeared in the treatment of the two-party case (i.e., in Sections 7.2–7.4).
7.5.1. Definitions
A multi-party protocol problem is cast by specifying a random process that maps se-
quences of inputs (one input per each party) to corresponding sequences of outputs.
Let m denote the number of parties. It will be convenient to think of m as being fixed,
yet one can certainly think of it as an additional parameter. An m-ary functionality, de-
noted f : ({0, 1}∗ )m → ({0, 1}∗ )m , is thus a random process mapping sequences of the
form x = (x1 , ..., xm ) into sequences of random variables, f (x) = ( f 1 (x), ..., f m (x)).
The semantics is that, for every i, the i-th party initially holds an input x_i, and wishes to obtain the i-th element in f (x_1, ..., x_m), denoted f_i(x_1, ..., x_m). For example, consider
deterministic functionalities for computing the maximum, average, or any other statis-
tics of the individual values held by the parties (and see more examples in Exercises 14
and 15). The discussions and simplifying conventions made in Section 7.2.1 apply in
the current context, too. Most importantly, we assume throughout this section that all
parties hold inputs of equal length; that is, |x i | = |x j |.
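For instance, the m-ary functionality in which every party learns the maximum of all inputs can be written as follows (our toy example, with integer inputs for readability).

```python
# Illustrative example (not from the text): the deterministic m-ary functionality
# mapping (x_1, ..., x_m) to (f_1(x), ..., f_m(x)) with f_i(x) = max(x) for all i.

def max_functionality(inputs):
    value = max(inputs)
    return tuple(value for _ in inputs)

# max_functionality((3, 7, 5))  ->  (7, 7, 7)
```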
The Number of Parties Controlled by the Adversary. In the two-party case, we have
focused on the case in which the adversary is identified with one of the participants
in the execution. Clearly, the case in which the adversary controls both participants is
of no interest, but the case in which the adversary controls none of the participants
may be of interest in case the adversary can wire-tap the communication line (as will
be discussed). In the multi-party case, we will consider adversaries that control any
number of participants.56 (Of course, when defining security following the “ideal-vs.-
real” paradigm, we should insist that the corresponding ideal adversary controls the
same set of participants.)
56 Indeed, the case in which the adversary controls all parties is of no interest, and is often ignored.
over the “standard model” (i.e., providing secret communication) is well understood,
and can be (easily) decoupled from the main treatment. Specifically, protocols secure in
the private-channel model can be compiled to withstand wire-tapping adversaries (by
using encryption schemes). Similarly, we assume that messages sent between honest
parties arrive intact, whereas one may want to consider adversaries that may inject mes-
sages on the communication line between honest parties. Again, this can be counteracted
by the use of well-understood paradigms, in this case, the use of signature schemes.
where output(x) denotes the output sequence of all parties during the execution represented in view_I(x).
57 As in Section 7.2, by saying that Π computes (rather than privately computes) f, we mean that the output distribution of the protocol (when played by honest or semi-honest parties) on the input sequence (x_1, ..., x_m) is distributed identically to f (x_1, ..., x_m).
Eq. (7.40) asserts that the view of the parties in I can be efficiently simulated based
solely on their inputs and outputs. Note that view I (x) includes only the local views of
parties in I , and does not include the messages sent between pairs of honest parties.
Thus, Definition 7.5.1 refers to the case in which the semi-honest parties do not (or
cannot) wire-tap the channels between honest parties (and, hence, is labeled “without
wire-tapping”), which is equivalent to assuming the existence of “private channels.”
To deal with the case of wire-tapping, one just needs to augment view I (x) with the
transcript of the messages sent between all the pairs of honest parties. In this case, it is
more natural to consider an external adversary that obtains the views of all parties in
I , as well as all messages sent over all channels.
Definition 7.5.1 can be easily adapted to deal with a varying parameter m, by taking
advantage of the current order of quantifiers (i.e., “there exists an algorithm S such that
for every I ”).58 We also note that the simulator can certainly handle the trivial cases in
which either I = [m] or I = ∅. (The case I = [m] is always trivial, whereas the case
I = ∅ is trivial only because here we consider the case of no wire-tapping.)
As in the two-party case, Definition 7.5.1 is equivalent to a definition that can
be derived by following the real-vs.-ideal paradigm (analogously to the treatment in
Section 7.2.2.2).
1. A model in which the number of parties that deviate from the protocol is arbitrary.
The treatment of this case extends the treatment given in the two-party case. In
particular, in this model, one cannot prevent malicious parties from aborting the
protocol prematurely, and the definition of security has to account for this fact (if it
is to have a chance of being met).
2. A model in which the number of parties that deviate from the protocol is strictly less
than half the total number of parties. The definitional treatment of this case is sim-
pler than the treatment given in the two-party case. In particular, one may – in
some sense – (effectively) prevent malicious parties from aborting the protocol
prematurely.59 Consequently, the definition of security is “freed” from the need to
account for early stopping, and thus is simpler.
We further assume (toward achieving a higher level of security) that malicious parties
may communicate (without being detected by the honest parties), and may thus coor-
dinate their malicious actions. Actually, it will be instructive to think of all malicious
parties as being controlled by one (external) adversary. Our presentation follows the
58 Note that for a fixed m, it may make as much sense to reverse the order of quantifiers (i.e., require that “for
every I there exists an algorithm S I ”).
59 As we shall see, the assumption that malicious parties are in a minority opens the door to effectively preventing them from aborting the protocol prematurely. This will be achieved by letting the majority parties have (together!) enough information so as to be able to emulate the minority parties in case the latter abort.
ideal-vs.-real emulation paradigm introduced and used in the previous sections. The
difference between the two malicious models is reflected in a difference in the corre-
sponding ideal models, which captures the different types of benign behaviors that a
secure protocol is aimed at achieving. Another difference is in the number of malicious
parties considered in each model.
The first malicious model. Following the discussion in Section 7.2.3, we conclude
that three things cannot be avoided in the first malicious model:
1. Malicious parties may refuse to participate in the protocol (when the protocol is first
invoked). Actually, as explained in Section 7.2.3, this behavior may be viewed as a
special case of input substitution (as discussed in the next item).
2. Malicious parties may substitute their local inputs (and enter the protocol with inputs
other than the ones provided to them from the outside).
3. Malicious parties may abort the protocol prematurely (e.g., before sending their last
message).
Accordingly, the ideal model is derived by a straightforward generalization of Defini-
tion 7.2.4. In light of this similarity, we allow ourselves to be quite terse. To simplify
the exposition, we assume that for every I , first the trusted party supplies the adversary
with the I -part of the output (i.e., the value of f I ), and only next is it possibly allowed
(at the adversary’s discretion) to answer the other parties. Actually, as in the two-party
case, the adversary has the ability to prevent the trusted party from answering all parties
only in the case where it controls Party 1.60
Definition 7.5.2 (the ideal model – first malicious model): Let f : ({0, 1}*)^m → ({0, 1}*)^m be an m-ary functionality. For I = {i_1, ..., i_t} ⊆ [m] def= {1, ..., m}, let Ī def= [m] \ I and (x_1, ..., x_m)_I = (x_{i_1}, ..., x_{i_t}). A pair (I, B), where I ⊆ [m] and B is a probabilistic polynomial-time algorithm, represents an adversary in the ideal model. The joint execution of f under (I, B) in the ideal model (on input x = (x_1, ..., x_m) and auxiliary input z), denoted ideal^(1)_{f,I,B(z)}(x), is defined by uniformly selecting a random-tape r for the adversary, and letting ideal^(1)_{f,I,B(z)}(x) def= Υ(x, I, z, r), where Υ(x, I, z, r) is defined as follows:
• In case Party 1 is honest (i.e., 1 ∈ Ī), Υ(x, I, z, r) is given by Eq. (7.41), where x′ = (x′_1, ..., x′_m) such that x′_i def= B(x_I, I, z, r)_i for i ∈ I and x′_i def= x_i otherwise.
• In case Party 1 is not honest (i.e., 1 ∈ I), Υ(x, I, z, r) equals either Eq. (7.42) or Eq. (7.43) (at the adversary’s discretion), where, in both cases, x′ = (x′_1, ..., x′_m) such that x′_i def= B(x_I, I, z, r)_i for i ∈ I and x′_i def= x_i otherwise.
60 As in the two-party case, this convention is rather arbitrary; see the discussion at the end of Section 7.2.3.1.
In all cases, the trusted party is invoked with possibly substituted inputs, denoted x′ = (x′_1, ..., x′_m), where x′_i ≠ x_i only if i ∈ I. Eq. (7.42) represents the case where the trusted party is stopped right after supplying the adversary with the I-part of the output (i.e., f_I(x′)). This case is allowed only when 1 ∈ I, and so Party 1 can always be “blamed” when this happens.61 Equations (7.41) and (7.43) represent the cases where the trusted party is invoked with possibly substituted inputs, but is allowed to answer all parties. We stress that either all honest parties get their output or all are notified that the trusted party was stopped by the adversary. As usual, the definition of security is obtained by requiring that for every feasible adversary in the real model, there exists a corresponding adversary in the ideal model that achieves the same effect. Specifically, in the real model, the adversary may tap all communication lines and determine (adaptively) all the outgoing messages of all dishonest parties.
Definition 7.5.3 (Security in the first malicious model): Let f be as in Definition 7.5.2, and let Π be an m-party protocol for computing f.
• The joint execution of Π under (I, A) in the real model (on input a sequence x = (x_1, ..., x_m) and auxiliary input z), denoted real_{Π,I,A(z)}(x), is defined as the output sequence resulting from the interaction between the m parties, where the messages of parties in I are computed according to A(x_I, I, z) and the messages of parties in Ī def= [m] \ I are computed according to Π.62 Specifically, the messages of malicious parties (i.e., parties in I) are determined by the adversary A based on the initial inputs of the parties in I, the auxiliary input z, and all messages sent so far by all parties (including messages received by the honest parties [i.e., parties in Ī]).
• Protocol Π is said to securely compute f (in the first malicious model) if for every probabilistic polynomial-time algorithm A (representing a real-model adversary strategy), there exists a probabilistic polynomial-time algorithm B (representing an ideal-model adversary strategy), such that for every I ⊆ [m]

    {ideal^(1)_{f,I,B(z)}(x)}_{x,z} ≡^c {real_{Π,I,A(z)}(x)}_{x,z}

When the context is clear, we sometimes refer to Π as an implementation of f.
We stress that the ideal-model adversary (i.e., B) controls exactly the same set of parties (i.e., I) as the real-model adversary (i.e., A). Definition 7.5.3 (as well as the following Definition 7.5.4) refers to an adversary that may wire-tap all communication channels. This is reflected in the definition of real_{Π,I,A(z)}(x), which allows A to determine its actions based on all messages communicated so far. (Thus, for m = 2, Definition 7.5.3 is stronger than Definition 7.2.6, because [unlike the latter] the former also refers to the
61 In fact, in the protocols presented in this work, early abort is always due to malicious behavior of Party 1. By Definition 7.5.3, this translates to malicious behavior of Party 1 in the ideal model.
62 To fit the format used in Definition 7.5.2, the outputs of the parties (in real_{Π,I,A(z)}(x)) are arranged such that the outputs of the honest parties appear on the left-hand side.
case I = ∅, which is non-trivial because it refers to an adversary that may wire-tap the communication channel.) In order to derive a definition for the private-channel model, one should modify the definition of real_{Π,I,A(z)}(x), such that A’s actions may depend only on the messages received by parties in I.
The Second Malicious Model. In the second model, where malicious parties are in a strict minority, the early-abort phenomenon can be effectively prevented. Thus, in this case, there is no need to “tolerate” early abort, and consequently our definition of security requires “proper termination” of executions. This is reflected in the definition of the ideal model, which actually becomes simpler.63
Discussion. The two alternative malicious models give rise to two appealing and yet
fundamentally incomparable notions of security. Put in other words, there is a trade-
off between the willingness to put up with early-abort (i.e., not consider it a breach
of security) and requiring the protocol to be robust also against malicious coalitions
controlling a majority of all parties. The question of which notion of security is prefer-
able depends on the application (or on the setting). In some settings, one may prefer
to be protected from malicious majorities, while giving up the guarantee that parties
cannot abort the protocol prematurely (while being detected doing so). On the other
hand, in settings in which a strict majority of the parties can be trusted to follow the
protocol, one may obtain the benefit of effectively preventing parties from aborting the protocol prematurely. We stress that all definitions are easily adapted to deal with a varying
parameter m.
63 In this case, the definition extends the one presented in Section 7.2.3.2.
Thus, all that we need to do on top of Section 7.3 is to provide a private m-party
computation of this functionality. This is done by privately reducing, for arbitrary m,
the computation of Eq. (7.44) – (7.45) to the computation of the same functionality for
the case m = 2, which in turn coincides with Eq. (7.17) – (7.18). But first we need to
define an appropriate notion of a reduction. Indeed, the new notion of a reduction is
merely a generalization of the notion presented in Section 7.3.1.
Definition 7.5.5 (m-party protocols with k-ary oracle access): As in the two-party
case, an oracle-aided protocol is an ordinary protocol augmented by a pair of oracle-
tapes per each party, and oracle-call steps defined as follows. Each of the m parties
may send a special oracle-request message to all other parties. The oracle-request
message contains a sequence of k distinct parties, called the request sequence, that
are to supply queries in the current oracle call. In response, each party specified in the
request sequence writes a string, called its query, on its own write-only oracle-tape, and
responds to the requesting party with an oracle-call message. At this point, the oracle
is invoked and the result is that a string, not necessarily the same, is written by the
oracle on the read-only oracle-tape of each of the k specified parties. This k-sequence
of strings is called the oracle answer.
One may assume, without loss of generality, that the party who invokes the oracle is
the one who plays the role of the first party in the reduction (i.e., the first element in
the request sequence is always the identity of the party that requests the current oracle
call).
Theorem 7.5.7 (Composition Theorem for the multi-party semi-honest model): Sup-
pose that the m-ary functionality g is privately reducible to the k-ary functionality f ,
and that there exists a k-party protocol for privately computing f . Then there exists an
m-party protocol for privately computing g.
As in the two-party case, the Composition Theorem can be generalized to yield tran-
sitivity of privacy reductions; that is, if g is privately reducible to f and f is privately
reducible to e, then g is privately reducible to e.
Proof Sketch: The construction supporting the theorem is identical to the one used in the proof of Theorem 7.3.3: Let Π^{g|f} be an oracle-aided protocol that privately reduces g to f, and let Π^f be a protocol that privately computes f. Then, a protocol Π for
computing g is derived by starting with Π^{g|f}, and replacing each invocation of the oracle by an execution of Π^f. Clearly, Π computes g. We need to show that it privately computes g (as per Definition 7.5.1).
We consider an arbitrary (non-trivial) set I ⊆ [m] of semi-honest parties in the execution of Π, where the trivial cases (i.e., I = ∅ and I = [m]) are treated (differently) in a straightforward manner. Note that for k < m (unlike the situation in the two-party case), the set I may induce different sets of semi-honest parties in the different executions of Π^f (replacing different invocations of the oracle). Still, our “uniform” definition of simulation (i.e., uniform over all possible sets of semi-honest parties) keeps us away from trouble. Specifically, let S^{g|f} and S^f be the simulators guaranteed for Π^{g|f} and Π^f, respectively. We construct a simulation S, for Π, in the natural manner. On input (I, x_I, g_I(x)), we first run S^{g|f}(I, x_I, g_I(x)), and obtain the view of the semi-honest coalition I ≠ ∅ in Π^{g|f}. This view includes the sequence of all oracle-call requests made during the execution, which in turn consists of the sequence of parties that supply query-parts in each such call. The view also contains the query-parts supplied by the parties in I, as well as the corresponding answer-parts. For each such oracle call, we denote by J the subset of I that supplied query-parts in this call and invoke S^f, providing it with the subset J, as well as with the corresponding J-parts of the queries and answers. Thus, we fill up the view of I in the current execution of Π^f. (Recall that S^f can also handle the trivial cases in which either |J| = k or |J| = 0.)
It is left to show that S indeed generates a distribution indistinguishable from the view of semi-honest parties in actual executions of Π. As in the proof of Theorem 7.3.3, this is done by introducing a hybrid distribution, denoted H. This hybrid distribution represents the view of the parties in I (and the output of all parties) in an execution of Π^{g|f} that is augmented by corresponding invocations of S^f. In other words, H represents the execution of Π, with the exception that the invocations of Π^f are replaced by simulated transcripts. Using the guarantees regarding S^f (resp., S^{g|f}), we show that the distributions corresponding to H and Π (resp., H and S) are computationally indistinguishable. The theorem follows.
7.5.2.2. Privately Computing ∑_i c_i = (∑_i a_i) · (∑_i b_i)
We now turn to the m-ary functionality defined in Eq. (7.44) – (7.45). Recall that the
arithmetic is that of GF(2), and so −1 = +1, and so forth. The key observation is that
    (∑_{i=1}^{m} a_i) · (∑_{i=1}^{m} b_i)
      = ∑_{i=1}^{m} a_i·b_i + ∑_{1≤i<j≤m} (a_i·b_j + a_j·b_i)                                  (7.46)
      = (1 − (m − 1)) · ∑_{i=1}^{m} a_i·b_i + ∑_{1≤i<j≤m} (a_i + a_j) · (b_i + b_j)
      = m · ∑_{i=1}^{m} a_i·b_i + ∑_{1≤i<j≤m} (a_i + a_j) · (b_i + b_j)                        (7.47)
where the last equality relies on the specifics of GF(2). Now, looking at Eq. (7.47),
we observe that each party, i, can compute (by itself) the term m · ai bi , whereas each
2-subset, {i, j}, can privately compute shares to the term (ai + a j ) · (bi + b j ) by in-
voking the two-party functionality of Eq. (7.17) – (7.18). This leads to the following
construction:
Construction 7.5.8 (privately reducing the m-party computation of Eq. (7.44) – (7.45)
to the two-party computation of Eq. (7.17) – (7.18)):
Proposition 7.5.9: Construction 7.5.8 privately reduces the computation of the m-ary
functionality given by Eq. (7.44)–(7.45) to the computation of the 2-ary functionality
given by Eq. (7.17)–(7.18).
Proof Sketch: We construct a simulator, denoted S, for the view of the parties in the oracle-aided protocol, denoted Π, of Construction 7.5.8. Given a set of semi-honest parties, I = {i_1, ..., i_t} (with t < m), and a sequence of inputs (a_{i_1}, b_{i_1}), ..., (a_{i_t}, b_{i_t}) and outputs c_{i_1}, ..., c_{i_t}, the simulator proceeds as follows:
1. For each pair, (i, j) ∈ I × I where i < j, the simulator uniformly selects c_i^{i,j} ∈ {0, 1} and sets c_j^{i,j} = c_i^{i,j} + (a_i + a_j) · (b_i + b_j).
2. Let Ī def= [m] \ I, and let ℓ be the largest element in Ī. (Such an ℓ ∈ [m] exists since |I| < m.)
   (a) For each i ∈ I and each j ∈ Ī \ {ℓ}, the simulator uniformly selects c_i^{i,j} ∈ {0, 1}.
   (b) For each i ∈ I, the simulator sets c_i^{i,ℓ} = c_i + m·a_i·b_i + ∑_{j∉{i,ℓ}} c_i^{i,j}, where the latter c_i^{i,j}'s are as generated in Steps 1 and 2a.
3. The simulator outputs all c_i^{i,j}'s generated here. That is, it outputs the sequence of c_i^{i,j}'s corresponding to all i ∈ I and j ∈ [m] \ {i}.
We claim that the output of the simulator is distributed identically to the view of the
parties in I during the execution of the oracle-aided protocol. Furthermore, we claim
that for every such I, every x = ((a_1, b_1), ..., (a_m, b_m)), and every possible outcome (c_1, ..., c_m) of the functionality f of Eq. (7.44)–(7.45), it holds that the conditional distribution of S(I, x_I, f_I(x)) is distributed identically to the conditional distribution of view_I^Π(x).
To prove this claim, we first note that f(x) is uniformly distributed over the m-bit-long sequences that sum up to c def= (∑_i a_i) · (∑_i b_i). The same holds also for the outputs of the parties in protocol Π, because the sequence of the outputs of Parties 1, ..., m − 1 is uniformly distributed over {0, 1}^{m−1} (due to the contribution of c_i^{i,m} to the output of Party i), whereas the sum of all m outputs equals c. Turning to the conditional distributions (i.e., conditioning on f(x) = (c_1, ..., c_m) = output^Π(x)), we show that the sequence of c_i^{i,j}'s (for i ∈ I) is distributed identically in both distributions (i.e., in the execution view and in the simulation). Specifically, in both cases, the sequence (c_i^{i,j})_{i∈I, j∈[m]\{i}} is uniformly distributed among the sequences satisfying c_i^{i,j} + c_j^{i,j} = (a_i + a_j) · (b_i + b_j) (for each i ∈ I and j ≠ i) and ∑_{j≠i} c_i^{i,j} = c_i + m·a_i·b_i (for each i ∈ I).
Details: Consider the distribution of the sub-sequence (c_i^{i,j})_{i∈I, j∈[m]\{i,ℓ}}, where ℓ ∈ Ī is as in the preceding. In both cases, the conditioning (on f(x) = (c_1, ..., c_m) = output^Π(x)) does not affect this distribution, because the c_i^{i,ℓ}'s are missing. Thus, in both cases, this sub-sequence is uniformly distributed among the sequences satisfying c_i^{i,j} + c_j^{i,j} = (a_i + a_j) · (b_i + b_j) (for each i ≠ j ∈ I). Furthermore, in both cases, the c_i^{i,ℓ}'s are set such that ∑_{j≠i} c_i^{i,j} = c_i + m·a_i·b_i holds.
Inputs: Party i holds the bit string x_i = x_i^1 · · · x_i^n ∈ {0, 1}^n, for i = 1, ..., m.
Step 1 – Sharing the Inputs: Each party splits and shares each of its input bits with all other parties. That is, for every i = 1, ..., m and j = 1, ..., n, and every k ≠ i, Party i uniformly selects a bit r_k^{(i−1)·n+j} and sends it to Party k as the party’s share of input-wire (i − 1) · n + j. Party i sets its own share of the ((i − 1) · n + j)-th input wire to x_i^j + ∑_{k≠i} r_k^{(i−1)·n+j}.
Step 2 – Circuit Emulation: Proceeding by the order of wires, the parties use their
shares of the two input wires to a gate in order to privately compute shares for the
Output-wire of the gate. Suppose that the parties hold shares to the two input-wires
of some gate; that is, for i = 1, ..., m, Party i holds the shares ai , bi , where a1 , ..., am
are the shares of the first wire and b1 , ..., bm are the shares of the second wire. We
consider two cases:
Emulation of an addition gate: Each party, i, just sets its share of the output-wire of
the gate to be ai + bi .
Emulation of a multiplication gate: Shares of the output-wire of the gate are ob-
tained by invoking the oracle for the functionality of Eq. (7.44) – (7.45), where
Party i supplies the input (query-part) (ai , bi ). When the oracle responds, each
party sets its share of the output-wire of the gate to equal its part of the oracle
answer.
Step 3 – Recovering the Output Bits: Once the shares of the circuit-output wires are
computed, each party sends its share of each such wire to the party with which the
wire is associated. That is, for i = 1, ..., m and j = 1, ..., n, each party sends its share
of wire N − (m + 1 − i) · n + j to Party i. Each party recovers the corresponding
output bits by adding up the corresponding m shares; that is, it adds the share it had
obtained in Step 2 to the m − 1 shares it has obtained in the current step.
Outputs: Each party locally outputs the bits recovered in Step 3.
As in the two-party case, one can easily verify that the output of the protocol is indeed
correct. Specifically, by using induction on the wires of the circuits, one can show that
the shares of each wire sum up to the correct value of the wire. Indeed, for m = 2, Con-
struction 7.5.10 coincides with Construction 7.3.9. The privacy of Construction 7.5.10
is also shown by extending the analysis of the two-party case; that is, analogously to
Proposition 7.3.10, one can show that Construction 7.5.10 privately reduces the com-
putation of a circuit to the multiplication-gate emulation.
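As an illustration of Construction 7.5.10, the following is a hedged Python sketch of the circuit-emulation pattern; the circuit encoding, the trusted-party stand-in for the multiplication-gate functionality of Eq. (7.44)–(7.45), and all identifiers are ours, and a real execution would replace the stand-in by a private protocol for that functionality.

    import random
    from functools import reduce

    def xor_all(bits):
        return reduce(lambda x, y: x ^ y, bits, 0)

    def mult_gate_oracle(a_shares, b_shares):
        # Trusted-party stand-in for Eq. (7.44)-(7.45): returns random shares
        # that XOR to the product of the two shared bits.
        m = len(a_shares)
        prod = xor_all(a_shares) & xor_all(b_shares)
        c = [random.randrange(2) for _ in range(m - 1)]
        c.append(prod ^ xor_all(c))
        return c

    def emulate_circuit(inputs, gates, m):
        # inputs: the input-wire values; gates: a list of ('add' | 'mul', left, right)
        # triples given in topological order.  Every wire is held as m additive shares.
        wires = []
        # Step 1 -- sharing the inputs: each input bit is split into m random shares
        # that XOR to the bit.
        for bit in inputs:
            shares = [random.randrange(2) for _ in range(m - 1)]
            shares.append(bit ^ xor_all(shares))
            wires.append(shares)
        # Step 2 -- circuit emulation, gate by gate.
        for op, u, v in gates:
            a, b = wires[u], wires[v]
            if op == 'add':                          # addition gate: local XOR of shares
                wires.append([a[i] ^ b[i] for i in range(m)])
            else:                                    # multiplication gate: oracle call
                wires.append(mult_gate_oracle(a, b))
        # Step 3 -- recovering an output bit: add up all m shares of the last wire.
        return xor_all(wires[-1])

    # Example: the circuit computing (x0 AND x1) XOR x2, emulated by m = 3 parties.
    gates = [('mul', 0, 1), ('add', 3, 2)]
    assert emulate_circuit([1, 1, 0], gates, m=3) == 1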
Proof Sketch: Just follow the proof of Proposition 7.3.10, treating the parties in I
analogously to the way that Party 1 is treated there. In treating the output wires of
parties in I (i.e., Step 3 in the simulation), note that the shares of parties in I and the
known output value uniquely determine the shares received in Step 3 of the protocol
only if |I | = m − 1 (as was the case in the proof of Proposition 7.3.10). Otherwise (i.e.,
for |I | < m − 1), the shares sent (in Step 3 of the protocol) by parties in I¯ should be
selected uniformly among all sequences that (together with the shares of parties in I )
add up to the given output value.
    f((x_1, r_1), ..., (x_m, r_m)) def= g(⊕_{i=1}^{m} r_i, (x_1, ..., x_m))    (7.48)
where g(r, x) denotes the value of g(x) when using coin-tosses r ∈ {0, 1}poly(|x|)
(i.e., g(x) is the randomized process consisting of uniformly selecting r ∈
{0, 1}poly(|x|) , and deterministically computing g(r, x)). Combining this fact with Propo-
sitions 7.5.11, 7.5.9, and 7.3.8 (and using the transitivity of privacy reductions),
we obtain:
Combining Theorem 7.5.12 and Proposition 7.3.6 with the Composition Theorem (The-
orem 7.5.7), we conclude that if enhanced trapdoor permutations exist, then any m-ary
functionality is privately computable. As in the two-party case, we wish to highlight a
useful property of the protocols underlying the latter fact. Indeed, we refer to a notion
of canonical m-party computation that extends Definition 7.3.13.
Stage 1: The parties privately compute the functionality x → ((r_1^1, ..., r_m^1), ..., (r_1^m, ..., r_m^m)), where the r_j^i's are uniformly distributed among all possibilities that satisfy (⊕_{i=1}^{m} r_1^i, ..., ⊕_{i=1}^{m} r_m^i) = f(x).
Stage 2: For i = 2, ..., m and j ∈ [m] \ {i}, Party i sends r_j^i to Party j. Next, Party 1 sends r_j^1 to Party j, for j = 2, ..., m. Finally, each party computes its own output; that is, for j = 1, ..., m, Party j outputs ⊕_{i=1}^{m} r_j^i.
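The two-stage structure can be pictured by the following minimal Python sketch (a simplification of ours, with one output bit per party and a trusted-party stand-in for Stage 1); note that Party 1 sends its shares last, in line with footnote 64.

    import random
    from functools import reduce

    def xor_all(vals):
        return reduce(lambda x, y: x ^ y, vals, 0)

    def stage1(f_of_x, m):
        # Trusted-party stand-in for Stage 1: an m-by-m matrix of shares r[i][j]
        # such that, for every j, the XOR over i of r[i][j] equals f(x)_j.
        r = [[random.randrange(2) for _ in range(m)] for _ in range(m)]
        for j in range(m):
            r[0][j] = f_of_x[j] ^ xor_all(r[i][j] for i in range(1, m))
        return r

    def stage2(r, m):
        # Stage 2: parties 2..m send their shares first; Party 1 (index 0) speaks last.
        # Party j then XORs the m shares of its own output wire.
        outputs = [0] * m
        for i in list(range(1, m)) + [0]:
            for j in range(m):
                outputs[j] ^= r[i][j]
        return outputs

    f_of_x = [1, 0, 1]                      # one output bit per party, for simplicity
    assert stage2(stage1(f_of_x, m=3), m=3) == f_of_x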
Indeed, the protocols underlying the proof of Theorem 7.5.12 are essentially
canonical.64 Hence,
Theorem 7.5.14: Suppose that there exist collections of enhanced trapdoor permutations. Then any functionality can be privately computed by a canonical protocol.
We comment that the said protocols happen to maintain their security even if the adver-
sary can wire-tap all communication lines. This follows from the fact that privacy with
respect to wire-tapping adversaries happens to hold for all privacy reductions presented
in the current section, as well as for the protocols presented in Section 7.3.
Theorem 7.5.15 (main result for the multi-party case): Suppose that there exist collections of enhanced trapdoor permutations. Then any m-ary functionality can be securely computed in each of the two malicious models, provided that a public-key infrastructure exists in the network.65
The theorem will be established in two steps. First, we compile any protocol for the
semi-honest model into an “equivalent” protocol for the first malicious model. This
compiler is very similar to the one used in the two-party case. Next, we compile any
protocol for the first malicious model into an “equivalent” protocol for the second
malicious model. The heart of the second compiler is a primitive, which is alien to
the two-party case, called Verifiable Secret Sharing (VSS). For simplicity, we again
think of the number of parties m as being fixed. The reader may again verify that the
dependence of our constructions on m is at most polynomial.
To simplify the exposition of the multi-party compilers, we describe them as pro-
ducing protocols for a communication model consisting of a single broadcast channel
(and no point-to-point links). In this model, in each communication round, only one
(predetermined) party may send a message, and this message arrives to all parties. We
stress that only this predetermined party may send a message in the said round (i.e.,
the message is “authenticated” in the sense that each other party can verify that, in-
deed, the message was sent by the designated sender). Such a broadcast channel can
be implemented via an (authenticated) Byzantine Agreement protocol, thus providing
an emulation of the broadcast model on the standard point-to-point model (in which a
broadcast channel does not exist).
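For concreteness, this communication model can be pictured by the following toy Python abstraction (ours, not the book's): a single public log to which, in each round, only the predetermined sender may append, and which every party (and any wire-tapper) reads in full.

    class BroadcastChannel:
        # Toy model of the single broadcast channel: in each round exactly one
        # (predetermined) party posts one message, and every party sees the whole
        # log together with the (authenticated) identity of each sender.
        def __init__(self):
            self.log = []                              # list of (round, sender, message)

        def post(self, round_no, sender, message):
            assert all(r != round_no for (r, _, _) in self.log), "one sender per round"
            self.log.append((round_no, sender, message))

        def view(self):
            return list(self.log)                      # identical for all parties

    channel = BroadcastChannel()
    channel.post(1, sender=3, message="hello")
    assert channel.view() == [(1, 3, "hello")]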
64 This assertion depends on the exact implementation of Step 3 of Construction 7.5.10, and holds provided that
Party 1 is the last party to send its shares to all other parties.
65 That is, we assume that each party has generated a pair of keys for a signature scheme and has publicized its
verification-key (so that it is known to all other parties). This set-up assumption can be avoided if the network
is augmented with a broadcast channel.
Recall that our goal is to transform protocols that are secure in the semi-honest
point-to-point model into protocols that are secure in the two malicious broadcast
models. Starting with (semi-honestly secure) protocols that operate in the point-to-
point communication model, we first derive equivalent protocols for the broadcast-
channel model, and only next we apply the two compilers, where each compiler takes
and produces protocols in the broadcast-channel model (which are secure with respect
to a corresponding type of adversaries). Thus, the full sequence of transformations
establishing Theorem 7.5.15 (based on Theorem 7.5.14) is as follows:
• We first use the pre-compiler (of Section 7.5.3.1) to transform a protocol Π_0 that privately computes a functionality f in the (private-channel) point-to-point model into a protocol Π′_0 that privately computes f in the broadcast model (where no private point-to-point channels exist).
  Note that, since we refer to semi-honest behavior, we do not gain by having a broadcast channel, and we may only lose by the elimination of the private point-to-point channels (because this allows the adversary to obtain all messages sent). However, the protocols presented in Section 7.5.2 happen to be secure in the semi-honest broadcast model,66 and so this pre-compiler is actually not needed (provided we start with these specific protocols, rather than with arbitrary semi-honestly secure protocols).
• Using the first compiler (of Section 7.5.4), we transform Π′_0 (which is secure in the semi-honest model) into a protocol Π_1 that is secure in the first malicious model.
  We stress that both Π′_0 and Π_1 operate and are evaluated for security in a communication model consisting of a single broadcast channel. The same holds also for Π_2, mentioned next.
• Using the second compiler (of Section 7.5.5), we transform Π_1 (which is secure in the first malicious model) into a protocol Π_2 that is secure in the second malicious model.
• Finally, we use the post-compiler (of Section 7.5.3.2) to transform each of the protocols Π_1 and Π_2, which are secure in the first and second malicious models when communication is via a broadcast channel, into corresponding protocols, Π′_1 and Π′_2, for the standard point-to-point model. That is, Π′_1 (resp., Π′_2) securely computes f in the first (resp., second) malicious model in which communication is via standard point-to-point channels.
  We stress that security holds even if the adversary is allowed to wire-tap the (point-to-point) communication lines between honest parties.
We start by discussing the security definitions for the broadcast communication model
and by presenting the aforementioned pre-compiler and the post-compiler. Once this is
66 As noted at the very end of Section 7.5.2, these protocols also happen to be secure against semi-honest adversaries
that do wire-tap all communication channels. These protocols can be trivially converted to work in the broadcast
model by letting the honest parties ignore broadcast messages that are not intended for them. Indeed, in the
resulting protocol, the adversary receives all messages (including those intended for other parties), but it could
also obtain these messages in the original protocol by wire-tapping all point-to-point channels.
done, we turn to the real core of this section: the two compilers (which are applied to
protocols that operate in the broadcast model).
Definitions. Indeed, security in the broadcast model was not defined so far. However,
the three relevant definitions for the broadcast communication model are easily de-
rived from the corresponding definitions given in Section 7.5.1, where a point-to-point
communication model was used. Specifically, in defining security in the semi-honest
model, one merely includes the entire transcript of the communication over the (single)
broadcast channel in each party’s view. Similarly, when defining security in the two
malicious models, one merely notes that the “real execution model” (i.e., real_{Π,I,A}) changes (since the protocol is now executed over a different communication medium), whereas the “ideal model” (i.e., ideal^(1)_{f,I,B} or ideal^(2)_{f,I,B}) remains intact.
The analysis of the latter simulator combines the guarantee given for the “point-to-
point simulator” and the guarantee that the encryption scheme is secure. That is, the
ability to distinguish the output of the broadcast-model simulator from the execution
view (in the broadcast model) yields either (1) the ability to distinguish the output of
the “point-to-point” simulator from the execution view (in the point-to-point model) or
(2) the ability to distinguish encryptions under the public-key encryption scheme being
used. In both cases we reach contradiction to our hypothesis.
67 Such a signature scheme can be constructed given any one-way function. In particular, one may use Construc-
tion 6.4.30. Maintaining short signatures is important in this application, because we are going to iteratively
sign messages consisting of (the concatenation of an original message and) prior signatures.
Phase i = 2, ..., m: Each honest party (other than Party 1) inspects the messages it has received at Phase i − 1, and forwards signed versions of the (·, i − 1)-authentic messages that it has received. Specifically, for every v ∈ {0, 1}, if Party j has received a (v, i − 1)-authentic message (v, s_{p_1}, ..., s_{p_{i−1}}) such that all p_k's are different from j, then it appends its signature to the message and sends the resulting (v, i)-authentic message to all parties.
We stress that for each value of v, Party j sends at most one (v, i)-authentic message to all parties. Actually, it may refrain from sending (v, i)-authentic messages if it has already sent (v, i′)-authentic messages for some i′ < i.
Termination: Each honest party (other than Party 1) evaluates the situation as follows:
1. If, for some i 0 , i 1 ∈ [m] (which are not necessarily different), it has received both
a (0, i 0 )-authentic message and a (1, i 1 )-authentic message, then it decides that
Party 1 is malicious and outputs an error symbol, say ⊥.
2. If, for a single v ∈ {0, 1} and some i, it has received a (v, i)-authentic message,
then it outputs the value v.
3. If it has never received a (v, i)-authentic message, for any v ∈ {0, 1} and i, then
it decides that Party 1 is malicious and outputs an error symbol, say ⊥.
We comment that in the Distributed Computing literature, an alternative presentation
is preferred in which if a party detects cheating by Party 1 (i.e., in Cases 1 and 3),
then the party outputs a default value, say 0, rather than the error symbol ⊥.
The protocol can be easily adapted to handle non-binary input values. For the sake of
efficiency, one may instruct honest parties to forward at most two authentic messages
that refer to different values (because this suffices to establish that Party 1 is malicious).
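The following Python sketch illustrates an honest execution of Construction 7.5.17 for a binary value. It is only a toy: the HMAC-based “signatures” (with keys derived from party identities) merely stand in for an unforgeable public-key signature scheme and the assumed public-key infrastructure, malicious behavior is not simulated, and all identifiers are ours.

    import hmac, hashlib

    def _key(i):                 # toy signing key of party i (stand-in for a real PKI)
        return hashlib.sha256(b"party-%d" % i).digest()

    def sign(i, msg):
        return hmac.new(_key(i), msg, hashlib.sha256).hexdigest()

    def verify(i, msg, sig):
        return hmac.compare_digest(sig, sign(i, msg))

    def is_authentic(v, chain, receiver):
        # (v, i)-authentic for the receiver: i signatures by distinct parties, the first
        # one by Party 1, none by the receiver itself; each signature covers the value v
        # concatenated with all preceding signatures.
        signers = [p for (p, _) in chain]
        if not chain or signers[0] != 1:
            return False
        if len(set(signers)) != len(signers) or receiver in signers:
            return False
        msg = str(v).encode()
        for p, s in chain:
            if not verify(p, msg, s):
                return False
            msg += s.encode()
        return True

    def authenticated_BA(m, v):
        # Honest execution in which Party 1 broadcasts the bit v; returns each party's output.
        msg0 = str(v).encode()
        received = {j: [(v, [(1, sign(1, msg0))])] for j in range(2, m + 1)}   # Phase 1
        forwarded = {j: set() for j in range(2, m + 1)}
        for i in range(2, m + 1):                                              # Phases 2..m
            outbox = []
            for j in range(2, m + 1):
                for (val, chain) in received[j]:
                    if len(chain) == i - 1 and val not in forwarded[j] \
                            and is_authentic(val, chain, j):
                        msg = str(val).encode() + "".join(s for (_, s) in chain).encode()
                        outbox.append((val, chain + [(j, sign(j, msg))]))
                        forwarded[j].add(val)
            for j in range(2, m + 1):          # the broadcast channel: everyone sees everything
                received[j].extend(outbox)
        outputs = {}                           # Termination rule
        for j in range(2, m + 1):
            vals = {val for (val, chain) in received[j] if is_authentic(val, chain, j)}
            outputs[j] = vals.pop() if len(vals) == 1 else None    # None plays the role of ⊥
        return outputs

    assert all(out == 1 for out in authenticated_BA(5, 1).values())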
Proposition 7.5.18: Assuming that the signature scheme in use is unforgeable, Con-
struction 7.5.17 satisfies the following two conditions:
1. It is infeasible to make any two honest parties output different values.
2. If Party 1 is honest, then it is infeasible to make any honest party output a value
different from the input of Party 1.
The claim holds regardless of the number of dishonest parties and even if dishonest
parties abort the execution.
www.Ebook777.com
7.5* EXTENSION TO THE MULTI-PARTY CASE
but do so without limiting the number of dishonest parties. That is, for any number of
dishonest parties, the protocol effectively prevents dishonest parties from aborting (be-
cause abort is treated as sending some illegal message). In particular, the case in which
Party 1 does not even enter the execution is treated as the case in which it sent illegal
messages.
Proof Sketch: Fixing any j and v, suppose that in Phase i − 1, Party j receives a
(v, i − 1)-authentic message, and assume that i is the smallest integer for which this
happens. For this event to happen, it must be that i ≤ m, because the message must
contain i − 1 signatures from different parties (other than Party j itself).68 In such
a case, if Party j is honest, then it will send an authentic (v, i)-message in Phase i
(i ≤ m), and so all parties will receive an authentic (v, i)-message in Phase i. Thus, for
every v, if an honest party sees a (v, ·)-authentic message, then so do all other honest
parties, and Part 1 follows. Part 2 follows by observing that if Party 1 is honest and has
input v, then all honest parties see a (v, 1)-authentic message. Furthermore, none can see a (v′, i)-authentic message, for v′ ≠ v and any i.
Proposition 7.5.19 (post-compiler): Suppose that one-way functions exist. Then any
m-ary functionality that is securely computable in the first (resp., second) malicious
broadcast model is also securely computable in the first (resp., second) malicious point-
to-point model, provided that a public-key infrastructure exists in the network.
Proof Sketch: The idea is to replace any broadcast message sent in the original protocol
by an execution of the Authenticated Byzantine Agreement (AuthBA) protocol. This
idea needs to be carefully implemented because it is not clear that the security of
AuthBA is preserved under multiple executions, and thus applying Proposition 7.5.18
per se will not do. The problem is that the adversary may use authenticated messages
sent in one execution of the protocol in order to fool some parties in a different execution.
This attack can be avoided in the current context by using identifiers (which can be
assigned consistently by the higher-level protocol) for each of the executions of the
AuthBA protocol. That is, authentic messages will be required to bear the distinct
identifier of the corresponding AuthBA execution (and all signatures will be applied to
that identifier as well). Thus, authentic messages of one AuthBA execution will not be
authentic in any other AuthBA execution. It follows that the proof of Proposition 7.5.18
can be extended to our context, where sequential executions of AuthBA (with externally
assigned distinct identifiers) take place.
The proof of security transforms any real-model adversary for the point-to-point
protocol to a real-model adversary for the broadcast-channel protocol. In particular,
the latter determines the messages posted on the broadcast channel exactly as an hon-
est party determines the values delivered by the various executions of AuthBA. In
the transformation, we assume that each instance of the AuthBA sub-protocol satis-
fies the conditions stated in Proposition 7.5.18 (i.e., it delivers the same value to all
68 Note that the said message cannot contain a signature of Party j due to the minimality of i: If the (v, i − 1)-authentic message had contained a signature of Party j, then for some i′ < i, Party j would have received a (v, i′ − 1)-authentic message in Phase i′ − 1.
honest parties, and this value equals the one entered by the honest sender). In case the
assumption does not hold, we derive a forger for the underlying signature scheme.
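A minimal sketch of the identifier convention used in this proof, with the same toy HMAC stand-in for signatures as in the earlier sketch (all names are ours): every signature produced within an AuthBA execution also covers that execution's identifier, so a signature chain assembled in one execution is never authentic in another.

    import hmac, hashlib

    def _key(i):                               # toy stand-in, as in the previous sketch
        return hashlib.sha256(b"party-%d" % i).digest()

    def sign_for_execution(party, exec_id, payload):
        # The identifier exec_id is assigned consistently by the higher-level protocol
        # and is covered by every signature produced inside that AuthBA execution.
        tagged = b"%d|" % exec_id + payload
        return hmac.new(_key(party), tagged, hashlib.sha256).hexdigest()

    def verify_for_execution(party, exec_id, payload, sig):
        return hmac.compare_digest(sig, sign_for_execution(party, exec_id, payload))

    sig = sign_for_execution(2, exec_id=7, payload=b"0")
    assert verify_for_execution(2, 7, b"0", sig) and not verify_for_execution(2, 8, b"0", sig)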
oracle-aided protocol (i.e., Definition 7.5.5), but require such a protocol to be secure
in the first malicious model (rather than be secure in the semi-honest model). As in the
two-party case, we require that the length of each oracle-query can be determined from
the length of the initial input to the oracle-aided protocol.
    {ideal^(1)_{g,I,B(z)}(x)}_{x,z} ≡^c {real^f_{Π,I,A(z)}(x)}_{x,z}
We are now ready to state a composition theorem for the first multi-party malicious
model.
Theorem 7.5.21 (Composition Theorem for the first multi-party malicious model):
Suppose that the m-ary functionality g is securely reducible to the k-ary functionality
f and that there exists a k-party protocol for securely computing f . Then there exists
an m-party protocol for securely computing g.
Recall that the syntax of oracle-aided protocols disallows concurrent oracle calls, and
thus Theorem 7.5.21 is actually a sequential composition theorem. As in the two-
party case, the Composition Theorem can be generalized to yield transitivity of secure
reductions and to account for reductions that use several oracles rather than one.
Proof Sketch: Analogously to the proof of previous composition theorems, we are given an oracle-aided protocol, denoted Π^{g|f}, that securely reduces g to f, and an ordinary protocol Π^f that securely computes f. Again, we construct a protocol Π for computing g in the natural manner; that is, starting with Π^{g|f}, we replace each invocation of the oracle (i.e., of f) by an execution of the protocol Π^f. Clearly, Π computes g, and we need to show that Π securely computes g. This is proven by merely generalizing the proof of Theorem 7.4.3 (i.e., the two-party case). The only point that is worthwhile stressing is that the real-model adversary for Π^f, derived from the real-model adversary for Π, is constructed obliviously of the set of parties I that the adversary controls.69 As in the proof of Theorem 7.5.7, we determine the set of parties for every such invocation of Π^f, and rely on the fact that security holds with respect to adversaries controlling any subset of the k parties participating in an execution of Π^f. In particular, the security of an invocation of Π^f by parties P = {p_1, ..., p_k} holds also in case I ∩ P = ∅, where it means that a real-model adversary (which controls no party in P) learns nothing by merely tapping the broadcast channel.70
69 Unlike in the two-party case, here we cannot afford to consider a designated adversary for each subset of parties.
70 Security holds also in the other extreme case, where I ∩ P = P, but it is not meaningful in that case.
Proof Sketch: The first idea that comes to mind is to let each party generate a pair of
keys for a public-key encryption scheme and broadcast the encryption-key, and then let
Party 1 broadcast the encryption of its input under each of these encryption-keys. The
problem with this protocol is that it is no longer guaranteed that all parties receive the
same value. One solution is to let Party 1 provide zero-knowledge proofs (to each of
the parties) that the posted ciphertexts are consistent (i.e., encrypt the same value), but
the implementation of this solution is not straightforward (cf. Construction 7.5.24). An
alternative solution, adopted here, is to use the encryption scheme in order to emulate a
set of private (point-to-point) channels, as in Section 7.5.3.1, and run an authenticated
Byzantine Agreement on this network. Since we have an ordinary broadcast channel at
our disposal, we do not need to assume an initial set-up that corresponds to a public-key
infrastructure, but can rather generate it on the fly. The resulting protocol is as follows:
1. Each party generates a pair of keys for a signature scheme and posts the verification-
key on the broadcast channel. This establishes the public-key infrastructure as relied
upon in Construction 7.5.17.
2. Each party generates a pair of keys for a public-key encryption scheme and posts
the encryption-key on the broadcast channel. This effectively establishes a network
of private (point-to-point) channels to be used in Step 3.
3. The parties invoke the authenticated Byzantine Agreement protocol of Construc-
tion 7.5.17 in order to let Party 1 broadcast its input to all other parties. All messages
of this protocol are sent in encrypted form, where each message is encrypted using
the encryption-key posted in Step 2 by the designated receiver.
Combining the ideas underlying the proofs of Propositions 7.5.16 and 7.5.18 (and con-
sidering two cases corresponding to whether I is empty or not), the current proposition
follows.
where v_i def= f(α) if β_i = h(α) and v_i def= (h(α), f(α)) otherwise, for each i.71
71 Indeed, an alternative multi-party generalization may require that all vi ’s equal f (α) if β2 = · · · = βm = h(α)
and equal (h(α), f (α)) otherwise. However, this alternative generalization seems harder to implement, whereas
Eq. (7.50) suffices for our application.
Note that the obvious reduction of Eq. (7.50) to the two-party case (i.e., to Eq. (7.33))
does not work (see Exercise 16). As in the two-party case, we will securely reduce
Eq. (7.50) to an adequate multi-party generalization of the image-transmission func-
tionality and provide a secure implementation of the latter. We start by implementing the
adequate multi-party generalization of the image-transmission functionality, defined as
follows:
Indeed, Eq. (7.51) is essentially a special case of Eq. (7.50). We stress that in a secure
implementation of Eq. (7.51), either all parties obtain the same f -image or they all
obtain an indication that Party 1 has misbehaved. Thus, the honest parties must be in
agreement regarding whether or not Party 1 has misbehaved, which makes the gener-
alization of the two-party protocol less obvious than it may seem. In particular, the fact
that we use a proof system of perfect completeness plays a central role in the analysis of
the multi-party protocol. The same holds with respect to the fact that all messages are
sent using (secret) broadcast (and so the honest parties agree on their value). Together,
these two facts imply that any party can determine whether some other party has “jus-
tifiably rejected” some claim, and this ability enables the parties to reach agreement
regarding whether or not Party 1 has misbehaved.
Construction 7.5.24 (image-transmission protocol, multi-party version): Let R def= {(v, w) : v = f(w)}. For simplicity, we assume that f is length-regular; that is, |f(x)| = |f(y)| for every |x| = |y|.
Inputs: Party 1 gets input α ∈ {0, 1}∗ , and each other party gets input 1n , where n = |α|.
Step C1: Party 1 secretly broadcasts v def= f(α). That is, Party 1 invokes Eq. (7.49) with input v, whereas each other party enters the input 1^{|f(1^n)|} and receives the output v.
actually an oracle-aided protocol, using the secret broadcast oracle. Consequently, if the
real-model adversary controls none of the parties, then it learns nothing (as opposed
to what might have happened if we were to use an ordinary broadcast in Steps C1
or C2).
Proposition 7.5.25: Suppose that the proof system, (P, V ), used in Step C2 is indeed a
zero-knowledge strong-proof-of-knowledge for the relation R. Then Construction 7.5.24
securely reduces Eq. (7.51) to Eq. (7.49).
Proof Sketch: The proof extends the two-party case treated in Proposition 7.4.12. Here,
we transform any real-model adversary A into a corresponding ideal-model adversary
B, where both get the set I as auxiliary input. The case I = ∅ is handled by relying
on the secret broadcast functionality (which implies that in this case, the real-model
adversary, which refers to an oracle-aided protocol in which all messages are sent using
Eq. (7.49), gets nothing). Otherwise, the operation of B depends on whether or not
1 ∈ I , which corresponds to the cases handled in the two-party case.
As in the two-party case, when transforming real-model adversaries to ideal-model
adversaries, we sometimes allow the latter to halt before invoking the trusted party.
This can be viewed as invoking the trusted party with a special abort symbol, where in
this case, the latter responds to all parties with a special abort symbol.
We start with the case where the first party is honest, which means here that 1 ∉ I. In this case, the input to B consists essentially of 1^n, where n = |α|, and it operates as follows (assuming I ≠ ∅):
occurs in the real-model execution (when Party 1 is honest). This follows from the
perfect completeness of (P, V ), as discussed earlier.
We now turn to the case where the first party is dishonest (i.e., 1 ∈ I ). In this
case, the input to B includes α, and it operates as follows (ignoring the easy case
I = [m]):
1. B invokes A on input α, and obtains the Step C1 message, denoted v, that A instructs
Party 1 to send (i.e., v = A(α)). As (implicit) in the protocol, any action of A in
Step C1 (including abort) is interpreted as sending a string.
2. B tries to obtain a pre-image of v under f . Toward this end, B uses the (strong)
knowledge-extractor associated with (P, V ). Specifically, providing the strong
knowledge-extractor with oracle access to (the residual prover) A(α), machine B
tries to extract (from A) a string w such that f (w) = v. This is done for each of the
| I¯| executions of the proof system in which the verifier is played by an honest party,
while updating the history of A accordingly.72 In case the extractor succeeds (in one of these |Ī| attempts), machine B sets α′ def= w. Otherwise, B sets α′ def= ⊥.
3. B now emulates an execution of Step C2. Specifically, for each i ∈ Ī, machine B lets the adequate residual A play the prover, and emulates by itself the (honest) verifier interacting with A (i.e., B behaves as an honest Party i). (The emulation of the proofs given to parties in I is performed in the straightforward manner.) Next, B decides whether or not to invoke the trusted party and let it respond to the honest parties. This decision is based on all the m − 1 emulated proofs.
• In case any of the m − 1 emulated verifiers rejects justifiably, machine B aborts (without invoking the trusted party), and outputs whatever A does (when fed with these emulated proof transcripts).
• Otherwise (i.e., no verifier rejects justifiably), we consider two sub-cases:
  (a) If α′ ≠ ⊥, then B sends α′ (on behalf of Party 1) to the trusted party and allows it to respond to the honest parties. (The response will be f(α′), which by Step 2 must equal v.)
  (b) Otherwise (i.e., α′ = ⊥, indicating that extraction has failed), B fails. (Note that this means that in Step 3 the verifier was convinced, while in Step 2 the extraction attempt has failed.)
4. Finally, B feeds A with the execution view, which contains the prover’s view of the
emulation of Step C2 (produced in Step 3), and outputs whatever A does.
As in the two-party case (see proof of Proposition 7.4.12), the real-model execution
differs from the ideal-model execution only in case the real-model adversary A succeeds
in convincing the knowledge-verifier (which is properly emulated for any i ∈ I¯) that it
knows a pre-image of v under f , and yet the knowledge-extractor failed to find such a
pre-image. By definition of strong knowledge-verifiers, such an event may occur only
with negligible probability.
72 If necessary (i.e., if Ī ≠ {2, ..., |Ī| + 1}), we also emulate the interleaved proofs that are given to parties in I. This is performed in the straightforward manner (i.e., by letting A emulate both parties in the interaction).
Inputs: Party 1 gets input α ∈ {0, 1}*, and Party i ≠ 1 gets input β_i ∈ {0, 1}^{|α|}.
Step C1: Party 1 uses the (multi-party) image-transmission functionality to send the pair (u, v) def= (h(α), f(α)) to the other parties. That is, the parties invoke the functionality of Eq. (7.51), where Party 1 enters the input α and Party i is to obtain g(α) def= (h(α), f(α)).
Step C2: Assuming that Step C1 was not aborted by Party 1 and that Party i receives the pair (u, v) in Step C1, Party i outputs v if u = β_i and (u, v) otherwise.
Outputs: If not aborted (with output ⊥), Party i ≠ 1 sets its local output as directed in Step C2. (Party 1 has no output.)
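A hedged Python sketch of this construction's flow, with the image-transmission functionality of Eq. (7.51) replaced by a trusted-party stand-in and with arbitrary placeholder choices for h and f (all names are ours):

    import hashlib

    def h(alpha):                       # placeholder instance of h
        return hashlib.sha256(b"h" + alpha).hexdigest()

    def f(alpha):                       # placeholder instance of f
        return hashlib.sha256(b"f" + alpha).hexdigest()

    def image_transmission(alpha):
        # Trusted-party stand-in for Eq. (7.51): every party receives (h(alpha), f(alpha)).
        return (h(alpha), f(alpha))

    def authenticated_computation(alpha, betas):
        # Party 1 holds alpha; Party i holds betas[i].  Step C1 delivers (u, v) to all
        # parties; in Step C2 each party compares u with its own beta_i.
        u, v = image_transmission(alpha)
        return {i: (v if beta == u else (u, v)) for i, beta in betas.items()}

    alpha = b"some input"
    outputs = authenticated_computation(alpha, {2: h(alpha), 3: "some other claim"})
    assert outputs[2] == f(alpha) and outputs[3] == (h(alpha), f(alpha))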
Extending the proof of Proposition 7.4.15 (to apply to Construction 7.5.26), and using
Propositions 7.5.25 and 7.5.22, we obtain:
Proof Sketch: We focus on the analysis of Construction 7.5.26, which extends the proof of Proposition 7.4.15. As in the proof of Proposition 7.5.25, when extending the proof of the two-party setting, the two cases (in the proof) correspond to whether or not Party 1 is honest (resp., 1 ∉ I or 1 ∈ I). Again, we discard the case I = ∅, where here the justification is that the oracle-aided protocol does not use the broadcast channel at all (and so no information is available to the real-model adversary in this case). The case 1 ∉ I ≠ ∅ is handled exactly as the case that Party 1 is honest in the proof of Proposition 7.4.15 (i.e., B sends the β_i's it holds to the trusted party, obtains h(α) (either explicitly or implicitly) and f(α), where α is the input of Party 1, and uses (h(α), f(α)) to emulate the real execution). In case 1 ∈ I, we need to extend the two-party treatment a little, because we also have to emulate the oracle answer given (in Step C1) to dishonest parties (different from Party 1, which gets no answer). However, this answer is determined by the query α′ made in Step C1 by Party 1, and indeed, we merely need to feed A with the corresponding oracle answer (h(α′), f(α′)). The rest of the treatment is exactly as in the two-party case. The proposition follows.
can be discarded (as done in the proof of Proposition 7.5.27).73 In fact, Construc-
tion 7.5.24 is also a pure oracle-aided protocol (by virtue of its use of the secret broadcast
functionality).
where r is uniformly distributed in {0, 1}^{ℓ(n)}. We securely reduce Eq. (7.52) to the multi-party authenticated-computation functionality. We note that the following construction is different from the one used in the two-party case:
Construction 7.5.28 (an oracle-aided protocol for Eq. (7.52)): Let C be a commitment scheme and C̄_{r_1,...,r_ℓ}(σ_1, ..., σ_ℓ) = (C_{r_1}(σ_1), ..., C_{r_ℓ}(σ_ℓ)) be as in Construction 7.4.16.
Inputs: Each party gets input 1^n and sets ℓ def= ℓ(n).
Step C1: For i = 1, ..., m, Party i uniformly selects r_i ∈ {0, 1}^ℓ and s_i ∈ {0, 1}^{ℓ·n}.
Step C2: For i = 1, ..., m, Party i uses the image-transmission functionality to send c_i def= C_{s_i}(r_i) to all parties. Actually, Party i enters Eq. (7.50) with input (r_i, s_i); each other party enters with input 1^{ℓ+ℓ·n}, which is supposed to equal h^{(C2)}(r_i, s_i) def= 1^{|r_i|+|s_i|}, and is supposed to obtain f^{(C2)}(r_i, s_i) def= C_{s_i}(r_i). Abusing notation, let us denote by c_i the answer received by each party, where c_i may equal ⊥ in case Party i has aborted the i-th oracle call. Thus, in Steps C1–C2, each party commits to a random string. Without loss of generality, we assume that no party aborts these steps (i.e., we treat abort as if it were some legitimate default action).
Step C3: For i = 2, ..., m (but not for i = 1), Party i uses the authenticated-computation functionality to send r_i to all parties. That is, Party i enters Eq. (7.50) with input (r_i, s_i); each other party enters with input c_i, where c_i is supposed to equal h^{(C3)}(r_i, s_i) def= C_{s_i}(r_i), and is supposed to obtain f^{(C3)}(r_i, s_i) def= r_i. If Party i aborts the oracle call (that it has invoked) or some Party j obtains an answer of a different format, which happens in case the inputs of these two parties do not match, then Party j halts with output ⊥. Otherwise, Party j obtains f^{(C3)}(r_i, s_i) = r_i and sets r_i^j def= r_i. (For simplicity, let r_j^j def= r_j.)
Thus, in this step, each party (except Party 1) reveals the ℓ-bit-long string to which it has committed in Step C2. The correctness of the revealed value is guaranteed by the definition of the authenticated-computation functionality, which is used here (so that a party can reveal only the string to which it has committed in Step C2).
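Since the remaining step of the construction (in which Party 1 reveals its own contribution and the value g(r) is delivered to all parties) is described only in the proof below, the following Python sketch shows the overall commit-and-reveal flow in a heavily simplified form: a hash-based commitment stands in for the scheme C, plain openings replace the oracle calls, and honest behavior is assumed throughout. All names and parameters are ours.

    import hashlib, secrets

    ELL = 16                  # the length ℓ(n), in bytes, for this toy run

    def commit(r, s):
        # Hash-based stand-in for the commitment scheme C (only heuristic hiding/binding).
        return hashlib.sha256(s + r).hexdigest()

    def coin_toss(m, g):
        # Steps C1-C2: every party picks (r_i, s_i) and publishes a commitment c_i.
        r = {i: secrets.token_bytes(ELL) for i in range(1, m + 1)}
        s = {i: secrets.token_bytes(ELL) for i in range(1, m + 1)}
        c = {i: commit(r[i], s[i]) for i in range(1, m + 1)}
        # Step C3: every party except Party 1 opens its commitment, and the opening is
        # checked against c_i (the role played by the authenticated-computation oracle).
        for i in range(2, m + 1):
            assert commit(r[i], s[i]) == c[i], "Party %d opened inconsistently" % i
        # Remaining step (cf. the proof of Proposition 7.5.29): Party 1 combines its own
        # contribution with the revealed ones, and g(r) is delivered to all parties.
        combined = r[1]
        for i in range(2, m + 1):
            combined = bytes(x ^ y for x, y in zip(combined, r[i]))
        return combined, g(combined)

    r, g_of_r = coin_toss(4, g=lambda x: hashlib.sha256(x).hexdigest())
    assert len(r) == ELL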
73 Recall that in Section 7.4 we did not consider such external adversaries, and thus the notion of pure oracle-aided
protocols was neither discussed nor used.
Proposition 7.5.29: Construction 7.5.28 securely reduces Eq. (7.52) to Eq. (7.50).
Proof Sketch:74 We transform any real-model adversary A (for the oracle-aided exe-
cution) into a corresponding ideal-model adversary B. The operation of B depends on
whether or not Party 1 is honest (i.e., 1 ∈ I¯), and we ignore the trivial cases of I = ∅
and I = [m]. In case 1 ∈ I¯ (i.e., Party 1 is honest), machine B proceeds as follows:
1. Machine B emulates the local actions of the honest parties in Step C1. In particular,
it uniformly selects (ri , si ) for each i ∈ I¯ (including i = 1).
2. For every i ∈ I¯, machine B emulates the i-th sub-step of Step C2 by feeding A with
the corresponding ci = C si (ri ) (as if it were the answer of the i-th oracle call). For
every i ∈ I , machine B obtains the input (ri , si ) that A enters (on behalf of Party i)
to the i-th oracle call of Step C2, and feeds A with adequate emulations of the oracle
answers (to other parties in I ).
3. For every i ∈ Ī \ {1}, machine B emulates the i-th sub-step of Step C3 by feeding A with a sequence in {r_i, (c_i, r_i)}^{|I|} that corresponds to whether or not each Party j ∈ I has entered the input c_i (defined in Step 2). For every i ∈ I, machine B obtains the input (r′_i, s′_i) that A enters (on behalf of Party i) to the i-th oracle call of Step C3, records whether or not C_{s′_i}(r′_i) = C_{s_i}(r_i), and feeds A with adequate emulations of the oracle answers.
74 As in the proof of Proposition 7.5.25, we sometimes present ideal-model adversaries that halt before invoking
the trusted party. This can be viewed as invoking the trusted party with a special abort symbol.
For every i ∈ Ī, machine B sets r_i^1 = r_i. For every i ∈ I, machine B sets r_i^1 = r′_i if C_{s′_i}(r′_i) = C_{s_i}(r_i) and aborts otherwise (while outputting whatever A outputs [when Party 1 halts in Step C3]). Note that for every i, this setting of r_i^1 agrees with the setting of r_i^1 in the protocol. In particular, B aborts if and only if (the honest) Party 1 would have halted in the corresponding (emulated) execution of Step C3.75
4. In case B did not abort, it invokes the trusted party with input 1^n and obtains the answer g(r), where r is the uniformly distributed ℓ-bit string handed to Party 1. Next, machine B emulates Step C4 by feeding each dishonest party with either g(r) or ((c_1, r̄^1), g(r)), where r̄^1 def= ⊕_{i=2}^{m} r_i^1. The choice is determined by whether or not (in Step C4) this party has entered the input (c_1, r̄^1). (Note that we cheat in the emulation of the oracle answer in Step C4; specifically, we use g(r) rather than g(r_1 ⊕ r̄^1).) Finally, machine B outputs whatever A does.
We stress that in this case (i.e., 1 ∉ I), machine B may possibly abort only before invoking the trusted party (which satisfies the security definition). Observe that the only difference between the ideal-model execution under B and the real-model execution under A is that in the ideal-model execution, an independently and uniformly distributed r ∈ {0, 1}^ℓ is used (in the emulation of Step C4), whereas in the real-model execution, r (as used in Step C4) is set to ⊕_{i=1}^{m} r_i^1 = r_1 ⊕ r̄^1. That is, in the ideal model, r_1 is independent of r and r̄^1, whereas in the real model, r_1 = r ⊕ r̄^1, where g(r) and r_i^1 (for every i) are known to the adversary (and r̄^1 appears in the joint-view). Thus, in addition to its possible effect on r (in the real model), the only (other) effect that r_1 has on the joint-view is that the latter contains c_1 = C(r_1). In other words, (the joint-views in) the real model and the ideal model differ only in whether c_1 is a commitment to r ⊕ r̄^1 or to a uniformly and independently distributed string, where r and r̄^1 are explicit in the joint-view. By the hiding property of C, this difference is undetectable.
We now turn to the case that 1 ∈ I (i.e., Party 1 is dishonest). The treatment of this case differs in two main aspects. First, unlike in the previous case, here the real-model adversary (which controls Party 1) obtains all r_i's, and so we must guarantee that in the ideal-model execution, the trusted party's answer (to Party 1) equals ⊕_{i=1}^{m} r_i. Second, unlike in the previous case, here the real-model adversary may effectively abort Step C4 (i.e., abort after obtaining the outcome), but this is easy to handle using the ideal-model adversary's ability to instruct the trusted party not to respond to the honest parties. Returning to the first issue, we present a different way of emulating the real-model execution.76 Specifically, we will cheat in our emulation of the honest parties and use (in Steps 1–2) commitments to the value 0^ℓ, rather than commitments to the corresponding r_i's, which will be determined only at the end of Step 2. Details follow:
1. Machine B starts by invoking the trusted party and obtains a uniformly distributed r ∈ {0, 1}^ℓ. At this time, B does not decide whether or not to allow the trusted party to answer the honest parties.
75 Indeed, in Step C3, Party 1 halts if and only if for some i, the input that Party 1 enters to the i-th sub-step (which in turn equals the value c_i = C_{s_i}(r_i) that Party 1 has obtained in the i-th sub-step of Step C2) does not fit the input (r′_i, s′_i) that is entered by Party i (i.e., iff c_i ≠ C_{s′_i}(r′_i)).
76 We comment that the alternative emulation strategy can also be used in case Party 1 is honest.
In addition, B emulates the local actions of the honest parties in Step C1 by uniformly selecting only the s_i's, for each i ∈ Ī.
2. For every i ∈ Ī, machine B emulates the i-th sub-step of Step C2 by feeding A with c_i = C_{s_i}(0^ℓ). For every i ∈ I, machine B obtains the input (r_i, s_i) that A enters (on behalf of Party i) to the i-th oracle call of Step C2. Finally, B uniformly selects all other r_i's (i.e., for i's in Ī) such that ⊕_{i=1}^{m} r_i = r holds; for example, letting j denote the largest element of Ī, B selects r_i ∈ {0, 1}^ℓ uniformly for each i ∈ Ī \ {j}, and sets r_j = r ⊕ (⊕_{i∈[m]\{j}} r_i).
3. For every i ∈ Ī, machine B emulates the i-th sub-step of Step C3 by feeding A with a sequence in {r_i, (c_i, r_i)}^{|I|} that corresponds to whether or not each Party j ∈ I has entered the input c_i. Note that the fact that c_i is unlikely to be a commitment to r_i is irrelevant here. The rest of this step (i.e., the determination of the r_i^1's) is as in the case that Party 1 is honest. In particular, we let B halt if some Party i ∈ I behaves improperly (i.e., invokes the corresponding oracle with input that does not fit c_i as recorded in the emulation of Step C2).
The next step is performed only in case B did not abort. In this case, r_i^1 = r_i holds for every i = 2, ..., m, and r = r_1 ⊕ (⊕_{i=2}^{m} r_i^1) follows.
4. Next, machine B emulates Step C4 and determines whether or not A instructs Party 1 to abort its oracle call (in Step C4). The decision is based on whether or not the oracle query (q_1, q_2, q_3) of Party 1 (in Step C4) matches the oracle query (r_1, s_1) it made in Step C2 and the value of ⊕_{i=2}^{m} r_i^1 as determined in Step 3 (i.e., whether or not C_{q_2}(q_1) = C_{s_1}(r_1) and q_3 = ⊕_{i=2}^{m} r_i^1). If Party 1 aborts, then B prevents the trusted party from answering the honest parties, and otherwise B allows the trusted party to answer. (Indeed, in case the trusted party answers Party i ≠ 1, the answer is g(r).) In addition, B emulates the answers of the Step C4 oracle call to the dishonest parties (as in the case that Party 1 is honest). Finally, machine B outputs whatever A does.
Observe that the only difference between the ideal-model execution under B and the real-model execution under A is that in the former, commitments to 0^{ℓ(n)} (rather than to the r_i's, for i ∈ Ī) are delivered in Step C2. However, by the hiding property of C, this difference is undetectable.
An Important Special Case. An important special case of Eq. (7.52) is the case that g(r, s) = C_s(r), where |s| = n · |r|. This special case will be called the augmented (m-party) coin-tossing functionality. That is, for some fixed commitment scheme, C, and a positive polynomial, ℓ, we consider the m-ary functionality

(1^n, ..., 1^n) → ((r, s), C_s(r), ..., C_s(r)),

where (r, s) is uniformly distributed in {0,1}^{ℓ(n)} × {0,1}^{ℓ(n)·n}. Combining Propositions 7.5.27 and 7.5.29, we get:
Proof Sketch: Starting from Construction 7.4.20, we replace each oracle call to a two-party functionality by a call to the corresponding multi-party functionality. That is, in Step C2 Party 1 uses the image-transmission (or rather the authenticated-computation) functionality to send c def= C_r(x) to all other parties, in Step C3 an augmented coin-tossing is used to provide Party 1 with a random pair (r', r'') whereas each other party gets c' def= C_{r''}(r'), and in Step C4 Party 1 uses the authenticated-computation functionality to send C_{r'}(x) to all other parties. Each of the other parties acts exactly as Party 2 acts in Construction 7.4.20.
The security of the resulting multi-party oracle-aided protocol is established as in the two-party case (treated in Proposition 7.4.21). As in the previous analysis of multi-party protocols that generalize two-party ones, the two cases here are according to whether or not Party 1 is honest (resp., 1 ∉ I or 1 ∈ I). Finally, composing the oracle-aided protocol with secure implementations of the adequate multi-party functionalities (as provided by Propositions 7.5.27 and 7.5.30), the proposition follows.
compiler produces the following oracle-aided m-party protocol, denoted Π', for the first malicious model:
Aborting: In case any of the functionalities invoked in any of the above phases termi-
nates in an abort state, the parties obtaining this indication abort the execution and
set their output to ⊥. Otherwise, outputs are as follows.
Outputs: At the end of the emulation phase, each party holds the corresponding output of the party in protocol Π. The party just locally outputs this value.
We note that both the compiler and the protocols produced by it are efficient, and that
their dependence on m is polynomially bounded.
Theorem 7.5.33 (Restating half of Theorem 7.5.15): Suppose that there exist collections of enhanced trapdoor permutations. Then any m-ary functionality can be securely computed in the first malicious model (using only point-to-point communication lines), provided that a public-key infrastructure exists in the network. Furthermore, security holds even if the adversary can read all communication among honest parties.
Proof Sketch: We start by noting that the definition of the augmented semi-honest
model (i.e., Definition 7.4.24) applies without any change to the multi-party context
(also in case the communication is via a single broadcast channel). Recall that the
augmented semi-honest model allows parties to enter the protocol with modified inputs
(rather than the original ones) and abort the execution at any point in time. We stress
that in the multi-party augmented semi-honest model, an adversary controls all non-
honest parties and coordinates their input modifications and abort decisions. As in the
two-party case, other than these non-proper actions, the non-honest parties follow the
protocol (as in the semi-honest model).
The first significant part of the proof is showing that the compiler of Construction 7.5.32 transforms any protocol Π into a protocol Π' such that executions of Π' in the first malicious real model can be emulated by executions of Π in the augmented semi-honest model (over a single broadcast channel). This part is analogous to Proposition 7.4.25, and its proof is analogous to the proof presented in the two-party case. That is, we transform any real-model adversary (A, I) for Π' into a corresponding augmented semi-honest adversary, (B, I), for Π. The construction of B out of A is analogous to the construction of B_mal out of A_mal (carried out in the proof of Proposition 7.4.25): Specifically, B modifies inputs according to the queries that A makes in the input-committing phase, uniformly selects random-tapes (in accordance with the coin-generation phase), and aborts in case the emulated machine does so. Thus, B, which is an augmented semi-honest adversary, emulates the malicious adversary A.
The second significant part of the proof is showing that canonical protocols (as
provided by Theorem 7.5.14) have the property that their execution in the augmented
semi-honest model can be emulated in the (first) malicious ideal model of Definition 7.5.2. This part is analogous to Proposition 7.4.27, and its proof is analogous to the proof presented in the two-party case.
Thus, given any m-ary functionality f, we first (use Theorem 7.5.14 to) obtain a canonical protocol Π that privately computes f. (Actually, we use the version of Π that operates over a single broadcast channel, as provided by the pre-compiler [i.e., Proposition 7.5.16].) Combining the two parts, we conclude that when feeding Π to the compiler of Construction 7.5.32, the result is an oracle-aided protocol Π' such that executions of Π' in the (first) malicious real model can be emulated in the ideal model of Definition 7.5.2. Thus, Π' securely computes f in the first malicious model.
We are almost done, but there are two relatively minor issues to address. First, Π' is an oracle-aided protocol rather than an ordinary one. However, an ordinary protocol that securely computes f can be derived by using secure implementations of the oracles used by Π' (as provided by Propositions 7.5.27, 7.5.30, and 7.5.31). Second, Π' operates in the broadcast-channel communication model, whereas we claimed a protocol in the point-to-point communication model. This gap is bridged by using the post-compiler (i.e., Proposition 7.5.19).
The sharing phase: Each party shares its input and random-tape with all the parties
such that any strict majority of parties can retrieve their value, whereas no minority
group can obtain any knowledge of these values. This is done by using Verifiable
Secret Sharing (VSS).
77 In our application, we feed the current compiler with a protocol generated by the first compiler. Still, the random-
tape and protocol actions mentioned here refer to the secure protocol compiled by the first compiler, not to the
semi-honest protocol from which it was derived.
Intuitively, the malicious parties (which are in a strict minority) are effectively pre-
vented from aborting the protocol by the following conventions:
• If a party aborts the execution prior to completion of the sharing phase, then the honest parties (which are in the majority) will set its input and random-tape to some default value and will carry out the execution (“on its behalf”).
• If a party aborts the execution after the completion of the sharing phase, then the honest (majority) parties will reconstruct its input and random-tape and will carry out the execution (“on its behalf”). The ability of the majority parties to reconstruct the party's input and random-tape relies on the properties of VSS.
The fact that communication is conducted over a broadcast channel, as well as the abovementioned conventions, guarantees that the (honest) majority parties will always be in consensus as to which parties have aborted (and what messages were sent).
Protocol-emulation phase: The parties emulate the execution of the original protocol
with respect to the input and random-tapes shared in the first phase. This will be done
using a secure (in the first malicious model) implementation of the authenticated-
computation functionality of Eq. (7.50).
We start by defining and implementing the only new tool needed; that is, Verifiable
Secret Sharing.
78 At this point, we place no computational requirements on G m,t and Rm,t . Typically, when m is treated as a
parameter, these algorithms will operate in time that is polynomial in m.
Then we require that for any such I , the random variables g I (0) and g I (1) are
identically distributed.
Indeed, an m-out-of-m secret-sharing scheme is implicit in the construction presented
in Section 7.5.2: To share a bit σ , one just generates m random bits that sum up to
σ (mod 2). Efficient t-out-of-m secret-sharing schemes do exist for any value of t ≤ m.
The most popular one, which uses low-degree polynomials over finite fields, is presented
next.
Construction 7.5.35 is analyzed in Exercise 17. Getting back to our subject matter, we
derive the basic definition of verifiable secret sharing.
Definition 7.5.36 (Verifiable Secret Sharing, basic version): A verifiable secret shar-
ing (VSS) scheme with parameters (m, t) is an m-party protocol that implements
(i.e., securely computes in the first malicious model) the share-generation func-
tionality of some t-out-of-m secret-sharing scheme. That is, let G m,t be a share-
generation algorithm of some t-out-of-m secret-sharing scheme. Then the correspond-
ing share-generation functionality that the VSS securely computes (in the first malicious
model) is
((σ, 1^n), 1^n, ..., 1^n) → G_{m,t}(σ)   (7.55)
79 By Bertrand's Postulate, there exists a prime p with m < p ≤ 2m. Thus, p can be found by merely (brute-force) factoring all integers between m + 1 and 2m.
n-bit long strings; that is, G_{m,t}(σ_1, ..., σ_n) def= (s_1, ..., s_m), where s_i = s_{i,1} ⋯ s_{i,n} and (s_{1,j}, ..., s_{m,j}) ← G_{m,t}(σ_j) for every i = 1, ..., m and j = 1, ..., n. Suppose that G_{m,t}(α) ∈ ({0,1}^{ℓ(|α|)})^m, and let C be a commitment scheme and C̄ be as in Construction 7.5.28. Consider the corresponding (augmented) share-generation functionality

(α, 1^{|α|}, ..., 1^{|α|}) → ((s, ρ), (s_2, ρ_2, c), ..., (s_m, ρ_m, c))   (7.56)
where s def= (s_1, ..., s_m) ← G_{m,t}(α),   (7.57)
ρ def= (ρ_1, ..., ρ_m) ∈ {0,1}^{m·n·ℓ(|α|)} is uniformly distributed,   (7.58)
and c def= (C̄_{ρ_1}(s_1), ..., C̄_{ρ_m}(s_m)).   (7.59)
Then any m-party protocol that securely computes Eq. (7.56) – (7.59) in the first ma-
licious model is called a verifiable secret sharing (VSS) scheme with parameters
(m, t).
Observe that each party may demonstrate (to each other party) the validity of its
“primary” share (i.e., the si ) with respect to the globally held c, by revealing the
corresponding ρ_i. We shall be particularly interested in VSS schemes with parameters (m, ⌈m/2⌉); that is, t = ⌈m/2⌉. The reason for this focus is that we assume throughout this section that the malicious parties are in strict minority. Thus, by the secrecy requirement, setting t ≥ ⌈m/2⌉ guarantees that the (less than m/2) dishonest parties are not able to obtain any information about the secret from their shares. On the other hand, by the recovery requirement, setting t ≤ ⌈m/2⌉ guarantees that the (more than m/2) honest parties are able to efficiently recover the secret from their shares. Thus, in the sequel, whenever we mention VSS without specifying the parameters, we mean the VSS with parameters (m, ⌈m/2⌉), where m is understood from the context.
Clearly, by Theorem 7.5.33, verifiable secret sharing schemes exist, provided that
enhanced trapdoor permutations exist. Actually, to establish the existence of VSS, we
merely need to apply the first compiler to the straightforward protocol that privately
computes Eq. (7.56) – (7.59); see Exercise 10. For the sake of subsequent reference, we
state the latter result.
Proposition 7.5.38: Suppose that trapdoor permutations exist. Then for every t ≤ m,
there exists a verifiable secret-sharing scheme with parameters (m, t).
Note that the assumption used in Proposition 7.5.38 is (merely) the one needed for the
operation of the first compiler, which amounts to the assumption needed for imple-
menting the functionalities used in Construction 7.5.32.
Construction 7.5.39 (The second multi-party compiler): Let t def= ⌈m/2⌉. Given an m-party protocol, Π, for the first malicious model, the compiler produces the following m-party protocol, denoted Π', for the second malicious model.
Inputs: Party i gets input x^i ∈ {0,1}^n.
Random-Tape: Party i uniformly selects a random-tape, denoted r^i ∈ {0,1}^{c(n)}, for the emulation of Π.
The Sharing Phase: Each party shares its input and random-tape with all the parties, using a Verifiable Secret Sharing scheme. That is, for i = 1, ..., m, Party i invokes the VSS scheme playing the first party with input x^i r^i, while the other parties play the roles of the other parties in Eq. (7.56)–(7.59) with input 1^{n+c(n)}.
Regarding the i-th VSS invocation,81 we denote the output that Party i obtains by (s^i, ρ^i), and the outputs that each other Party j obtains by (s^i_j, ρ^i_j, c^i), where s^i = (s^i_1, ..., s^i_m) ← G_{m,t}(x^i r^i), ρ^i = (ρ^i_1, ..., ρ^i_m) is uniformly distributed, c^i = (c^i_1, ..., c^i_m), and c^i_k = C̄_{ρ^i_k}(s^i_k). Note that either all honest parties get the correct outcome or they all detect that Party i is cheating and set their outcome to ⊥.
Handling Abort: If Party i aborts the i-th VSS invocation, which means that all honest
parties received the outcome ⊥, then the honest parties set its input and random-tape
to some default value; that is, they set their record of the input and random-tape of
Party i (which are otherwise unknown to them) to some default value. Note that by
definition, the VSS scheme is secure in the first malicious model, and thus all honest
parties agree on whether or not the VSS initiator (i.e., Party i) has aborted.82
80 For this reason, we cannot utilize a composition theorem for the second malicious model. We comment that such
a composition theorem would anyhow be more restricted than Theorem 7.5.21. One issue is that the second mali-
cious model depends on a bound on the fraction of dishonest parties. Thus, if the m-party oracle-aided protocol in-
vokes a k-ary functionality with k < m, then the bound (on the fraction of dishonest parties) may be violated in the
sub-protocol that replaces the latter. For this reason, when dealing with the second malicious model, one should
confine the treatment to m-party oracle-aided protocols that use m-ary (rather than k-ary) functionalities.
81 Indeed, this notation is slightly inconsistent with the one used in Definition 7.5.37. Here, Party i plays the first party in the VSS, and being consistent with Definition 7.5.37 would require calling its share s^i_1 rather than s^i_i. Consequently, the share of Party j in this invocation would have been denoted s^i_{π_i(j)}, where π_i(j) is the role that Party j plays in this invocation. However, such notation would have made our exposition more cumbersome.
82 This is reflected in the corresponding ideal-model adversary that either makes all honest parties detect abort
(i.e., output ⊥) or allows all of them to obtain (and output) the corresponding entries in a valid m-sequence.
We stress that in case Party i aborts the i-th VSS invocation, its (default) input and random-tape become known to all parties. Since the entire execution takes place over a broadcast channel, each party can determine by itself what messages Party i should send in a corresponding execution of Π. Thus, there is actually no need to send actual messages on behalf of Party i.
Protocol-Emulation Phase: The parties emulate the execution of protocol Π with respect to the input and random-tapes shared in the first phase. This will be done by using a secure (in the first malicious model) implementation of the authenticated-computation functionality of Eq. (7.50).
That is, Party i, which is supposed to send a message in Π, plays the role of the first party in Eq. (7.50), and the other parties play the other roles. The inputs α, β_2, ..., β_m and the functions h, f, for the functionality of Eq. (7.50), are set as follows:
• The string α = (α_1, α_2) is set such that α_1 = (x^i r^i, s^i, ρ^i) and α_2 equals the concatenation of all previous messages sent in the emulation of previous steps of Π. Recall that (x^i r^i, (s^i, ρ^i)) is the input–output pair of Party i in the i-th invocation of the VSS.
• The string β_j equals β def= (c^i, α_2), where α_2 is as in the previous item. Recall that c^i is part of the output that each other party got in the i-th invocation of the VSS.
• The function h is defined such that h((z, (s_1, ..., s_m), (r_1, ..., r_m)), γ) = ((C̄_{r_1}(s_1), ..., C̄_{r_m}(s_m)), γ). Indeed, h(α_1, α_2) = β.
• The function f is set to be the computation that determines the message to be sent in Π. Note that this message is computable in polynomial-time from the party's input (denoted x^i), its random-tape (denoted r^i), and the previous messages posted so far (i.e., α_2).
As a result of the execution of the authenticated-computation sub-protocol, each party either gets an indication that Party i aborted or determines the message that Party i should have sent in a corresponding execution of Π. By the definition of security in the first malicious model, all honest parties agree on whether or not Party i aborted, and in case it did not abort, they also agree on the message it sent.
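The following sketch spells out the check just described for a single emulation step: the sender supplies α = (α_1, α_2), the receivers hold β = (c^i, α_2), and they obtain f(α_1, α_2) only if h(α_1, α_2) = β. The commitment stand-in and the toy next-message function f are assumptions made for the sake of a runnable example.

```python
import hashlib

def commit(value, randomness):
    # Stand-in for C̄_r(s); NOT the book's commitment scheme.
    return hashlib.sha256((randomness + "|" + value).encode()).hexdigest()

def h(alpha1, alpha2):
    """h((z, (s_1..s_m), (r_1..r_m)), gamma) = ((C̄_{r_1}(s_1),...,C̄_{r_m}(s_m)), gamma)."""
    z, shares, rands = alpha1
    return tuple(commit(s, r) for s, r in zip(shares, rands)), alpha2

def authenticated_computation(alpha1, alpha2, beta, f):
    """One oracle call of Eq. (7.50) in the emulation phase: the receivers get
    f(alpha1, alpha2) if h(alpha1, alpha2) equals their common input beta,
    and an abort indication otherwise."""
    if h(alpha1, alpha2) != beta:
        return None            # all honest parties agree that the sender aborted
    return f(alpha1, alpha2)

if __name__ == "__main__":
    # Toy data: Party i's input/random-tape, its shares, and their randomness.
    xi_ri = "0101|1100"
    shares, rands = ["s1", "s2", "s3"], ["r1", "r2", "r3"]
    transcript = "messages-so-far"
    c_i = tuple(commit(s, r) for s, r in zip(shares, rands))
    beta = (c_i, transcript)
    # f computes the next message of the high-level protocol from x^i r^i and the transcript.
    f = lambda a1, a2: hashlib.sha256((a1[0] + a2).encode()).hexdigest()[:8]
    print(authenticated_computation((xi_ri, shares, rands), transcript, beta, f))
```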
Handling Abort: If a party aborts when playing the role of the first party in an invocation of Eq. (7.50) during the emulation phase, then the majority parties recover its (actual) input and random-tape, and carry out the execution on its behalf. Specifically, if Party j detects that Party i has aborted, then it broadcasts the pair (s^i_j, ρ^i_j) that it has obtained in the sharing phase, and each party uses the correctly decommitted shares (i.e., the s^i_j's) to reconstruct x^i r^i.
We note that the completion of the sharing phase (and the definition of VSS) guarantee
that the majority parties hold shares that yield the input and random-tape of any party.
Furthermore, the correct shares are verifiable by each of the other parties, and so
reconstruction of the initial secret is efficiently implementable whenever a majority
of parties wishes to do so.
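A minimal sketch of this reconstruction, assuming the polynomial-based sharing and a hash-based commitment stand-in (and representing the shared value as a single field element): each party broadcasts its opened share, openings that do not match the public commitments are discarded, and any t = ⌈m/2⌉ of the remaining shares are interpolated at 0.

```python
import hashlib

def commit(share, rho):
    # Stand-in commitment; NOT the book's scheme.
    return hashlib.sha256(f"{rho}|{share}".encode()).hexdigest()

def lagrange_at_zero(points, p):
    # Interpolate the unique low-degree polynomial through `points` and return its free term.
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def reconstruct_after_abort(openings, commitments, t, p):
    """openings: {j: (share_j, rho_j)} broadcast by the parties;
    commitments: the public vector c^i; t: the recovery threshold (= ceil(m/2)).
    Returns the reconstructed secret, using only correctly decommitted shares."""
    valid = [(j, s) for j, (s, rho) in openings.items()
             if commit(s, rho) == commitments[j]]
    assert len(valid) >= t, "an honest majority always supplies enough valid shares"
    return lagrange_at_zero(valid[:t], p)

if __name__ == "__main__":
    import random
    p, m, t, secret = 13, 7, 4, 9
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    rhos = {j: f"rho{j}" for j in range(1, m + 1)}
    commitments = {j: commit(poly(j), rhos[j]) for j in range(1, m + 1)}
    openings = {j: (poly(j), rhos[j]) for j in range(1, m + 1) if j != 3}  # Party 3 stays silent
    assert reconstruct_after_abort(openings, commitments, t, p) == secret
```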
Outputs: At the end of the emulation phase, each party holds the corresponding output of the party in protocol Π. The party just locally outputs this value.
Note that the VSS scheme is implicitly used as a commitment scheme for the value of x^i r^i; that is, c^i = (c^i_1, ..., c^i_m) serves as a commitment to the sequence of shares (s^i_1, ..., s^i_m), which in turn determine x^i r^i. Actually, the main steps in the emulation phase refer only to this aspect of the VSS, whereas only the abort-handling procedure refers to the additional aspects (e.g., the fact that Party j holds the value of the share s^i_j that is determined by the commitment c^i_j, as well as the corresponding decommitment information).
Comment. Applying the two (multi-party protocol) compilers one after the other is
indeed wasteful. For example, we enforce proper emulation (via the authenticated-
computation functionality) twice: first with respect to the semi-honest protocol, and
next with respect to the protocol resulting from the first compiler. Indeed, more ef-
ficient protocols for the second malicious model could be derived by omitting the
authenticated-computation protocols generated by the first compiler (and having the
second compiler refer to the actions of the semi-honest protocol). Similarly, one can
omit the input-commit phase in the first compiler. In general, feeding the second com-
piler with protocols that are secure in the first malicious model is an overkill; see further
discussion subsequent to Proposition 7.5.42.
Theorem 7.5.40 (Restating the second half of Theorem 7.5.15): Suppose that there exist collections of enhanced trapdoor permutations. Then any m-ary functionality can be securely computed in the second malicious model (using only point-to-point communication lines), provided that a public-key infrastructure exists in the network. Furthermore, security holds even if the adversary can read all communication among honest parties.
As will be shown here, given a protocol as guaranteed by Theorem 7.5.33, the second
compiler produces a protocol that securely computes (in the second malicious model)
the same functionality. Thus, for any functionality f , the compiler transforms proto-
cols for securely computing f in the first malicious model into protocols for securely
computing f in the second malicious model. This suffices to establish Theorem 7.5.40,
yet it does not say what the compiler does when given an arbitrary protocol (i.e., one
not provided by Theorem 7.5.33). In order to analyze the action of the second compiler,
in general, we introduce the following model that is a hybrid of the semi-honest and
the two malicious models. We call this new model the second-augmented semi-honest
model. Unlike the (first) augmented semi-honest model (used in the analysis of the first
compiler [see proof of Theorem 7.5.33]), the new model allows a dishonest party to
select its random-tape arbitrarily, but does not allow it to abort.
Entering the execution: Depending on their initial inputs and in coordination with each other, the parties in I may enter the execution of Π with any input of their choice.
Selection of random-tape: Depending on their inputs and in coordination with each other, the parties in I may arbitrarily select their random-tapes for the execution of Π.
Here and in the previous step, the parties in I may employ randomized procedures, but the randomization in their procedures is not to be confused with the random-tapes for Π selected in the current step.
Proper message transmission: In each step of Π, depending on its view so far, the designated (by Π) party sends a message as instructed by Π. We stress that the message is computed as Π instructs based on the party's (possibly modified) input, its (possibly non-uniformly selected) random-tape, and the messages received so far, where the input and random-tape are as set in the previous two steps.
Output: At the end of the interaction, the parties in I produce outputs depending on
their entire view of the interaction. We stress that the view contains their initial inputs
and all messages sent over all channels.83
Intuitively, the compiler transforms any protocol Π into a protocol Π' so that executions of Π' in the second malicious model correspond to executions of Π in the second augmented semi-honest model. That is:
83 This model is applicable both when the communication is via a single broadcast channel and when the commu-
nication is via point-to-point channels that can be wire-tapped by the adversary.
model (of Definition 7.5.4). Thus, Theorem 7.5.40 will follow. We start by establishing
Proposition 7.5.42:
Proof Sketch: Given a real-model adversary A (for Π'), we present a corresponding adversary B that is admissible with respect to Π for the second augmented semi-honest model. We stress two points. First, whereas A may abort some parties, the adversary B may not do so (as per Definition 7.5.41). Second, we may assume that the number of parties controlled by A (and thus by B) is less than m/2 (because nothing is required otherwise).
Machine B will use A as well as the ideal-model adversaries derived (as per Definition 7.5.3) from the behavior of A in the various sub-protocols invoked by Π'. We stress that these ideal-model adversaries are of the first malicious model. Furthermore, machine B will also emulate the behavior of the trusted party in these ideal-model emulations (without communicating with any trusted party; there is no trusted party in the augmented semi-honest model). Thus, the following description contains an implicit special-purpose composition theorem (in which sub-protocols that are secure in the first malicious model are used to implement the oracles of an oracle-aided protocol that is secure in the second malicious model):
Entering the execution and selecting a random-tape: B invokes A (on the very input supplied to it) and decides with what input and random-tape to enter the execution of Π. Toward this end, machine B emulates the execution of the sharing phase of Π', using A (as a subroutine). Machine B supplies A with the messages it expects to see, thus emulating the honest parties in Π', and obtains the messages sent by the parties in I (i.e., those controlled by A). We stress that this activity is internal to B and involves no real interaction (of B in Π).
Specifically, B emulates the executions of the VSS protocol in an attempt to obtain the values that the parties in I share with all parties. The emulation of each such VSS-execution is done by using the ideal-model adversary derived from (the residual real-model malicious adversary) A. We stress that in accordance with the definition of VSS (i.e., security in the first malicious model), the ideal-model adversary derived from (the residual) A is in the first malicious model and may abort some parties. Note that (by Definitions 7.5.3 and 7.5.2) this may happen only if the initiator of the VSS is dishonest. In case the execution initiated by some party aborts, its input and random-tape are set to the default value (as in the corresponding abort-handling procedure of Π'). Details follow:
• In an execution of VSS initiated by an honest party (i.e., in which an honest party plays the role of the first party in VSS), machine B obtains the corresponding augmented shares (available to I).84 Machine B will use an arbitrary value, say 0^{n+c(n)}, as the first party's input for the current emulation of the VSS (because the real value is unknown to B). In emulating the VSS, machine B will use the ideal-model adversary, denoted A', that emulates the behavior of A in this VSS (in Π'), when given the history so far. We stress that since the initiating
party of the VSS is honest, this ideal-model adversary (i.e., A') cannot abort any party.
Invoking the ideal-model adversary A', and emulating both the honest (ideal-model) parties and the trusted party, machine B obtains the outputs of all parties (i.e., and in particular, the output of the initiating party). That is, machine B emulates the sharing of the value 0^{n+c(n)} by the initiating party and emulates the response of the trusted oracle (i.e., by setting s ← G_{m,t}(0^{n+c(n)}), uniformly selecting ρ of adequate length, and computing the outputs as in Eq. (7.56)–(7.59)).
• In an execution of VSS initiated by a party in I (i.e., a dishonest party plays the role of the first party in VSS), machine B obtains the corresponding input and random-tape of the initiator, as well as the randomization used in the commitment to it. As before, machine B uses the derived ideal-model adversary, denoted A', to emulate the execution of the VSS. Recall that A' emulates the behavior of A in the corresponding execution of the VSS.
Suppose that we are currently emulating the instance of VSS initiated by Party i, where i ∈ I. Then B invokes A' on input x^i r^i (i.e., the initial input and random-tape of Party i), and emulating both the honest (ideal-model) parties and the trusted party, machine B obtains the outputs of all parties (including the “VSS-randomization” (i.e., (s^i, ρ^i)) handed to Party i, which is in I). A key point is that machine B has obtained, while emulating the trusted party, the input handed by A' to the trusted party. This value is recorded as the modified input and random-tape of Party i.
In case the emulated machine did not abort the initiator (i.e., Party i), machine B records the previous value, as well as the randomization used by B (as trusted party) in the execution of the VSS. Otherwise (i.e., A' aborts Party i in the invocation of VSS initiated by it), the input and random-tape of Party i are set to the default value (as in Π'). In either case, B concatenates the emulation of the VSS to the history of the execution of A.
Thus, inputs and random-tapes are determined for all parties in I, depending only on their initial inputs. (All this is done before entering the actual execution of Π.) Furthermore, the view of machine A in the sharing phase of Π' has been emulated, and the VSS-randomizations (i.e., the pairs (s^i, ρ^i)) used in the sharing of all values have been recorded by B. (Actually, it suffices to record the VSS-randomizations handed to dishonest parties and the commitments made on behalf of honest ones; these will be used in the emulation of the message-transmission steps of Π', where the VSS-randomization will be used only in case the corresponding party aborts.)
Subsequent steps – message transmission: Machine B now enters the actual execution of Π (with inputs and random-tapes for the I-parties as determined earlier). It proceeds in this real execution of Π, along with emulating the corresponding executions of the authenticated-computation functionality of Eq. (7.50) (which are invoked in Π').
In a message-transmission step by an honest party in Π, machine B obtains a message from this honest party (in the real execution of Π) and emulates an execution
85 Recall that β = (c^i, α_2), where c^i is the commitment produced by the VSS that was invoked by Party i, which is assumed to be the sender in the current message-transmission step, and α_2 equals the sequence of messages sent so far in the emulated execution of Π.
7.6.* Perfect Security in the Private Channel Model
7.6.1. Definitions
We consider both the semi-honest and the malicious models, where in both cases
we refer to explicit bounds on the number of dishonest parties. Furthermore, in both
cases, we consider a communication network consisting of point-to-point channels
that cannot be wire-tapped by the adversary. Finally, in both models, we require the
relevant probability ensembles to be statistically indistinguishable, rather than (only)
computationally indistinguishable.
Security in the Semi-Honest Model. The following definition is derived from Defi-
nition 7.5.1 by restricting the number of dishonest parties and strengthening the indis-
tinguishability requirement.
We stress that Eq. (7.62) requires statistical indistinguishability, whereas the ana-
logue requirement in Definition 7.5.1 is of computational indistinguishability. As in
Definition 7.5.1, the view of parties in I does not include messages sent among parties in Ī def= [m] \ I.
In case the ensembles in Eq. (7.63) are identically distributed, we say that the emulation
is perfect.
We stress that Eq. (7.63) requires statistical indistinguishability, whereas the analogue
requirement in Definition 7.5.4 is of computational indistinguishability. More impor-
tantly, we make no computational restrictions regarding the real-model adversary, and
require the corresponding ideal-model adversary to be of comparable complexity. The
latter requirement is very important: It prevents obviously bad protocols (see Exer-
cise 18), and it guarantees that Definition 7.6.2 is actually a strengthening of Defini-
tion 7.5.4 (see Exercise 19).
Construction 7.6.3 (t-private m-party protocol for propagating shares through a mul-
tiplication gate): Recall that t < m/2, and so 2t ≤ m − 1.
Input: Party i enters with input (ai , bi ), where ai = a(i) and bi = b(i) for degree t
polynomials a(·) and b(·).
The protocol itself proceeds as follows:
1. For every i, Party i (locally) computes ci ← ai · bi .
Indeed, these c_i's are the values of the polynomial c(z) def= a(z) · b(z) at the corresponding i's, and c(0) = u · v. However, c may have degree 2t (rather than at most t).
2. For every i, Party i shares ci with all other parties. That is, Party i selects uniformly
a polynomial qi of degree t such that qi (0) = ci , and sends qi ( j) to Party j, for
every j.
86 Here and in the following, when we say a degree d polynomial, we actually mean a polynomial of degree at
most d.
Fact 7.6.4: Let the di ’s be defined as in Construction 7.6.3, and t < m/2. Then there
exists a degree t polynomial, d, such that d(0) = a(0) · b(0) and d(i) = di for i =
1, ..., m.
Proof: Consider the formal polynomial q(z) def= Σ_{i=1}^{m} γ_i q_i(z), where the q_i's are the polynomials selected at Step 2. Since each q_i has degree t, this holds also for q. For every j = 1, ..., m, by Step 3, we have d_j = Σ_{i=1}^{m} γ_i q_i(j) = q(j), where the second equality is due to the definition of q. Finally, note that

q(0) = Σ_{i=1}^{m} γ_i q_i(0) = Σ_{i=1}^{m} γ_i c_i = Σ_{i=1}^{m} γ_i · a(i) · b(i) = a(0) · b(0),

where the second equality is by Step 2, the third equality is by Step 1, and the last equality is by the Extrapolation Theorem (applied to the 2t ≤ m − 1 degree polynomial a(z) · b(z)).
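The following Python sketch puts Steps 1–3 of Construction 7.6.3 and Fact 7.6.4 together for an honest execution: each party multiplies its two shares locally, re-shares the product with a fresh degree-t polynomial, and takes the fixed linear combination Σ_i γ_i q_i(j), where the γ_i are the extrapolation coefficients that recover the free term of any polynomial of degree at most m − 1 from its values at 1, ..., m. The field and parameters are illustrative.

```python
import random

P = 101                      # a prime larger than the number of parties

def rand_poly(free_term, deg):
    # A uniformly chosen polynomial of degree (at most) deg with the given free term.
    return [free_term] + [random.randrange(P) for _ in range(deg)]

def evaluate(poly, x):
    return sum(c * pow(x, k, P) for k, c in enumerate(poly)) % P

def extrapolation_coefficients(m):
    """gamma_1,...,gamma_m with q(0) = sum_i gamma_i * q(i) for every polynomial q
    of degree at most m-1 (Lagrange coefficients at the point 0)."""
    gammas = []
    for i in range(1, m + 1):
        num, den = 1, 1
        for j in range(1, m + 1):
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        gammas.append(num * pow(den, -1, P) % P)
    return gammas

def multiplication_gate(a_shares, b_shares, t):
    """Given shares a_i = a(i), b_i = b(i) of degree-t polynomials, return shares
    d_1,...,d_m of a degree-t polynomial d with d(0) = a(0)*b(0) (Fact 7.6.4)."""
    m = len(a_shares)
    gammas = extrapolation_coefficients(m)
    # Step 1: each party locally computes c_i = a_i * b_i (a share of a degree-2t polynomial).
    c = [(a_shares[i] * b_shares[i]) % P for i in range(m)]
    # Step 2: each party re-shares c_i with a fresh degree-t polynomial q_i.
    q = [rand_poly(c[i], t) for i in range(m)]
    # Step 3: Party j sets d_j = sum_i gamma_i * q_i(j).
    return [sum(gammas[i] * evaluate(q[i], j) for i in range(m)) % P
            for j in range(1, m + 1)]

if __name__ == "__main__":
    m, t, u, v = 7, 3, 5, 8                 # t < m/2
    a, b = rand_poly(u, t), rand_poly(v, t)
    d = multiplication_gate([evaluate(a, i) for i in range(1, m + 1)],
                            [evaluate(b, i) for i in range(1, m + 1)], t)
    # The new shares indeed determine u*v: extrapolate d(0) from all m of them.
    gammas = extrapolation_coefficients(m)
    assert sum(g * s for g, s in zip(gammas, d)) % P == (u * v) % P
```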
Conclusion. Using Fact 7.6.4, for t < m/2, one can show (see Exercise 23)
that Construction 7.6.3 constitutes a t-private computation of the (partial) m-ary
functionality
Theorem 7.6.5: For t < m/2, any m-ary functionality is t-privately computable. Fur-
thermore, the emulation is perfect.
In contrast, very few m-ary functionalities are t-privately computable for t ≥ m/2. In particular, the only m-ary Boolean-valued functions that are m/2-privately computable are linear combinations of Boolean-valued functions of the individual inputs (i.e., f(x_1, ..., x_m) = Σ_{i=1}^{m} c_i f^{(i)}(x_i) mod 2).
Theorem 7.6.6: For t < m/3, any m-ary functionality is t-securely computable. Fur-
thermore, the emulation is perfect.
We briefly sketch the ideas that underlie the proof of Theorem 7.6.6. Let us first assume
that t < m/4, and note that Steps 2–3 of Construction 7.6.3 constitute a t-private
computation of the (partial) m-ary functionality
Specifically, this task is to find the free term of the unique degree 2t polynomial (i.e.,
c) that fits at least m − t of the inputs (i.e., the correct c(i)’s), and we can perform
this task in a t-secure manner. (The desired polynomial is indeed unique, because
otherwise we get two different degree 2t polynomials that agree on m − 2t ≥ 2t + 1
of the inputs.) Finally, observe that the parties can t-securely generate shares of a
random degree t polynomial with free term equal to zero. Combining the two linear
computations, one obtains the desired t-secure implementation of Eq. (7.66), provided
that t < m/4.
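To make the recovery task concrete, here is a brute-force Python sketch of finding the free term of the unique degree-2t polynomial that agrees with at least m − t of the reported values (for t < m/4). The actual protocol performs this correction by an efficient (linear) computation; the exhaustive search below is only meant to exhibit what is being computed.

```python
from itertools import combinations

P = 97                                   # a prime larger than m

def interpolate(points):
    """Return the coefficient list of the unique polynomial of degree < len(points)
    passing through the given (x, y) pairs, by Lagrange interpolation over GF(P)."""
    n = len(points)
    coeffs = [0] * n
    for xi, yi in points:
        basis, denom = [1], 1            # Lagrange basis polynomial for xi
        for xj, _ in points:
            if xj != xi:
                # multiply basis by (x - xj)
                basis = [((basis[k - 1] if k > 0 else 0)
                          - xj * (basis[k] if k < len(basis) else 0)) % P
                         for k in range(len(basis) + 1)]
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, -1, P) % P
        for k in range(len(basis)):
            coeffs[k] = (coeffs[k] + scale * basis[k]) % P
    return coeffs

def evaluate(coeffs, x):
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

def robust_free_term(values, t):
    """values[i-1] is the value reported by Party i; at most t of them are wrong.
    Return c(0) for the unique degree-2t polynomial c fitting at least m - t points."""
    m = len(values)
    points = list(enumerate(values, start=1))
    for subset in combinations(points, 2 * t + 1):
        coeffs = interpolate(list(subset))
        if sum(evaluate(coeffs, x) == y for x, y in points) >= m - t:
            return coeffs[0]
    raise ValueError("no degree-2t polynomial fits m - t of the values")

if __name__ == "__main__":
    m, t = 9, 2
    c = [5, 3, 0, 7, 1]                                  # a degree-2t polynomial, c(0) = 5
    reported = [evaluate(c, i) for i in range(1, m + 1)]
    reported[3] = (reported[3] + 1) % P                  # up to t parties report junk
    reported[7] = 0
    assert robust_free_term(reported, t) == 5
```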
In order to handle the case m/4 ≤ t < m/3, we have to work directly with Eq. (7.65),
rather than with Eq. (7.66); that is, we use the fact that the parties actually hold the
shares of two degree t polynomials, rather than only the product of these shares (which
corresponds to shares of a degree 2t polynomial).
7.7. Miscellaneous
a functionality f , which without loss of generality satisfies | f 1 (x, y)| = | f 2 (x, y)|,
we may proceed in two stages:
1. The parties securely compute shares of the desired outputs of f. Specifically, the parties securely compute the functionality

(x, y) → ((v_1 ⊕ s_1, s_2, r_1, c), (s_1, v_2 ⊕ s_2, r_2, c)),

where (v_1, v_2) ← f(x, y), the s_i's are uniformly distributed in {0,1}^{|v_i|}, and c ← C_{r_1 ⊕ r_2}(v_1, v_2), for uniformly distributed r_1, r_2 ∈ {0,1}^{|v_1 v_2|}. Note that at this stage, each individual party obtains no knowledge of the desired outputs, but together they hold (verifiable) secrets (i.e., the v_i ⊕ s_i's and s_i's) that yield both outputs.
2. The parties gradually exchange the secrets that they hold. That is, Party 1 re-
veals pieces of s2 in exchange for pieces of s1 (revealed by Party 2), where one
piece of s2 is revealed per one piece of s1 . The pieces are revealed by using
a secure computation of an adequate functionality. Suppose that Party i is supposed to obtain the piece π_i(s_i), where π_i may be a (predetermined) Boolean function or a randomized process. Then the parties securely compute the functionality that maps ((a_1, a_2, ρ_1, γ_1), (b_1, b_2, ρ_2, γ_2)) to (π_1(b_1), π_2(a_2)) if γ_1 = γ_2 = C_{ρ_1 ⊕ ρ_2}(a_1 ⊕ b_1, a_2 ⊕ b_2) and to (λ, λ) otherwise. Indeed, each party enters this secure computation with the input it has received in the first stage; that is, Party 1 (resp., Party 2) enters with input (v_1 ⊕ s_1, s_2, r_1, c) (resp., (s_1, v_2 ⊕ s_2, r_2, c)).
The entire approach (and, in particular, the gradual exchange of secrets) depends
on a satisfactory definition of a piece of a secret. Such a definition should satisfy
two properties: (1) Given sufficiently many pieces of a secret, one should be able to
recover the secret, whereas (2) getting yet another piece of the secret contributes little
to the knowledge of the secret. We admit that we do not know of a definition (of a
piece of a secret) that is “uncontroversially satisfactory”; still, some suggestions (for
what these pieces of information may be) seem quite appealing. For example, consider
the randomized process π that maps the n-bit long secret σ_1 ⋯ σ_n to the n-bit long string τ_1 ⋯ τ_n, such that τ_i = σ_i with probability 1/2 + ε and τ_i = 1 − σ_i otherwise, for every i, independently.87 Then each piece carries O(nε²) bits of information, whereas after seeing t such pieces of the secret, one can guess it with success probability at least 1 − n · exp(−tε²), which for t = O(n/ε²) means practically obtaining the secret. However, if Party 1 knows that s_1 ∈ {0^n, 1^n}, whereas Party 2 only knows that s_2 ∈ {0,1}^n, then π(s_1) seems more meaningful to Party 1 than π(s_2) is to Party 2. Is it really so or is the proposed exchange actually fair? Note that things are even more complex (than they seem), because the uncertainty of the parties is actually not information-theoretic but rather computational.
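The following sketch implements the particular randomized process π just described, together with the bit-wise majority vote that recovers the secret once sufficiently many pieces have been seen; the parameters are illustrative.

```python
import math
import random

def noisy_piece(secret_bits, eps):
    """pi(secret): each output bit equals the corresponding secret bit with
    probability 1/2 + eps, and is flipped otherwise (independently per bit)."""
    return [b if random.random() < 0.5 + eps else 1 - b for b in secret_bits]

def guess_from_pieces(pieces):
    # Majority vote per bit position over all pieces received so far.
    n = len(pieces[0])
    return [1 if sum(p[i] for p in pieces) * 2 > len(pieces) else 0 for i in range(n)]

if __name__ == "__main__":
    random.seed(0)
    n, eps = 64, 0.1
    secret = [random.randrange(2) for _ in range(n)]
    # After t = O(n / eps^2) pieces, the guess is correct except with probability
    # roughly n * exp(-t * eps^2) (cf. the bound stated in the text).
    t = math.ceil(n / eps ** 2)
    pieces = [noisy_piece(secret, eps) for _ in range(t)]
    assert guess_from_pieces(pieces) == secret
```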
87 An alternative randomized process π maps the n-bit string s to the random pair (r, b), such that r is uniformly distributed in {0,1}^n and b ∈ {0,1} equals the inner product (mod 2) of s and r with probability 1/2 + ε (and the complementary value otherwise). In this case, each piece carries O(ε²) bits of information about s, whereas after seeing O(n/ε²) such pieces, one practically obtains s.
Definition 7.7.1 (security in the malicious adaptive model, a sketch): Let f and Π be as in Section 7.5.1, and t be a bound on the number of parties that the adversary is allowed to control (e.g., t < m/2).
88 The issue of adaptivity also arises, but in a more subtle way, in the case of two-party protocols.
89 The non-adaptive model can be viewed as a special case in which the adversary selects the parties that it controls
up-front, before learning any information regarding the current execution. But in general (in the adaptive model),
only the choice of the first controlled party is oblivious of the execution.
90 As in Definition 7.5.4 (and unlike in Definition 7.5.2), the trusted party always answers all parties; that is, the
adversary has no option of preventing the trusted party from answering the honest parties. Recall that here the
trusted party is invoked (by the adversary) at the time the adversary decides that it controls enough parties.
data that was explicitly erased by an instruction of the protocol). Our definitional choice
is motivated by the fear that the past values of the party’s local variables (i.e., the party’s
view as per Definition 7.2.1) may be available somewhere on its computing system; see
analogous discussion in Section 7.2.2 (regarding the semi-honest model).
• Parties are given inputs for the current iteration; that is, in the j-th iteration Party i is given input x_i^{(j)}. In addition, there is a global state: The global state at the beginning of the j-th iteration is denoted s^{(j)}, where the initial global state is empty (i.e., s^{(1)} = λ).
• Depending on the current inputs and the global state, the parties are supposed to compute outputs for the current iteration, as well as update the global state. That is,
91 As usual, the number of iterations (and the length of the inputs) must be polynomial in the security parameter.
Furthermore, the length of the global state (at any time) must also be polynomial in the security parameter.
the outputs in iteration j are determined by the x_i^{(j)}'s, for all i's, and s^{(j)}. The new global state, s^{(j+1)}, is determined similarly (i.e., also based on the x_i^{(j)}'s and s^{(j)}).
As it is an abstraction, one may think of the global state as being held by a trusted
party. In other words, reactive systems are captured by reactive functionalities in
which the trusted party maintains a state and interacts with the actual parties in iter-
ations. Indeed, in each iteration, the trusted party obtains an input from each party,
responds (as directed by the reactive functionality) with corresponding outputs, de-
pending also on its state, and updates its state. Note that the latter formulation fits
a definition of an ideal model (for computing the reactive functionality), whereas a
(real-model) reactive protocol must emulate this augmented notion of a trusted party.
Thus, the reactive protocol should emulate the iterative computation of outputs while
maintaining the state of the imaginary trusted party. Indeed, it is natural to have the
real-model parties use a secret-sharing scheme in order to maintain the latter state
(such that the state remains unknown to individual parties and even to a bounded num-
ber of dishonest parties). In fact, we need to use a verifiable secret-sharing scheme (see
Section 7.5.5.1), because dishonest parties should be prevented from (illegally) modi-
fying the (system’s) state (except from the predetermined effect of the choice of their
own inputs).
This discussion suggests that the secure implementation of reactive functionalities
can be reduced to the secure implementation of ordinary (i.e., non-reactive) function-
alities. For example, we refer to security in the second malicious model, as defined
in Definition 7.5.4 (for ordinary functionalities). That is, we postulate that a major-
ity of the parties are honest and require that the dishonest parties cannot (effectively)
abort the execution. In such a case, we use a verifiable secret-sharing scheme in which
only a majority of the pieces yield the secret. Once a verifiable secret-sharing scheme
is fixed and the (system’s) state is shared using it, the computation of each iteration
of the reactive system can be cast as an ordinary functionality. The latter maps se-
quences of the form ((x1 , s1 ), ..., (xm , sm )), where xi denotes the current input of Party i
and si denotes its share of the current state, to the sequence ((y1 , r1 ), ..., (ym , rm )),
where yi denotes the next output of Party i and ri denotes its share of the updated
state.
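This reduction can be made concrete as follows. The sketch below casts one iteration as an ordinary functionality that reconstructs the state from the parties' shares, computes the iteration's outputs and the updated state, and hands back fresh shares of the latter. For brevity it uses a simple m-out-of-m additive sharing and a made-up running-sum functionality, whereas the text calls for verifiable secret sharing with a majority threshold.

```python
import random

def share(value, m, modulus):
    # m-out-of-m additive sharing of an integer state (a stand-in for the VSS the text requires).
    pieces = [random.randrange(modulus) for _ in range(m - 1)]
    pieces.append((value - sum(pieces)) % modulus)
    return pieces

def reconstruct(pieces, modulus):
    return sum(pieces) % modulus

def one_iteration(inputs_and_state_shares, modulus=2**32):
    """The ordinary functionality ((x_1,s_1),...,(x_m,s_m)) -> ((y_1,r_1),...,(y_m,r_m)):
    reconstruct the global state, compute this iteration's outputs and the updated
    state, and hand back fresh shares of the latter.  As an example, the state is a
    running sum of all inputs seen so far, and every party learns the current sum."""
    xs = [x for x, _ in inputs_and_state_shares]
    state = reconstruct([s for _, s in inputs_and_state_shares], modulus)
    new_state = (state + sum(xs)) % modulus
    outputs = [new_state] * len(xs)                       # y_i: the current running sum
    new_shares = share(new_state, len(xs), modulus)       # r_i: share of the updated state
    return list(zip(outputs, new_shares))

if __name__ == "__main__":
    m = 4
    state_shares = share(0, m, 2**32)                     # the initial global state is empty
    for round_inputs in ([1, 2, 3, 4], [10, 0, 0, 0]):
        results = one_iteration(list(zip(round_inputs, state_shares)))
        state_shares = [r for _, r in results]
    assert results[0][0] == 20
```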
We conclude that the results regarding secure computation of ordinary (i.e., non-
reactive) computations can be extended to reactive systems (thus obtaining secure
implementations of the latter).
7.7.2.1. Definitions
One may say that a protocol is concurrently secure if whatever the adversary may ob-
tain by invoking and controlling parties in real concurrent executions of the protocol is
also obtainable by a corresponding adversary that controls corresponding parties mak-
ing concurrent functionality calls to a trusted party (in a corresponding ideal model).
More generally, one may consider concurrent executions of many sessions of several
protocols, and say that a set of protocols is concurrently secure if whatever the ad-
versary may obtain by invoking and controlling such real concurrent executions is also
obtainable by a corresponding adversary that invokes and controls concurrent calls to
a trusted party (in a corresponding ideal model). Consequently, a protocol is said to be
secure with respect to concurrent compositions if adding this protocol to any set of
concurrently secure protocols yields a set of concurrently secure protocols.
A much more appealing approach has been recently suggested by Canetti [51].
Loosely speaking, he suggests considering a protocol to be secure (hereafter referred
to as environmentally secure)92 only if it remains secure when executed within any
(feasible) environment. The notion of an environment is a generalization of the notion
of an auxiliary-input; in a sense, the environment is an auxiliary oracle (or rather a
state-dependent oracle) that the adversary may access. In particular, the environment
may represent other executions of various protocols that are taking place concurrently
(to the execution that we consider). We stress that the environment is not supposed to
assist the proper execution of the protocol (and, in fact, honest parties merely obtain
their inputs from it and return their outputs to it). In contrast, potentially, the envi-
ronment may assist the adversary in attacking the execution. Following the simulation
paradigm, we say that a protocol is environmentally secure if any feasible real-model
adversary attacking the protocol, with the assistance of any feasible environment, can
be emulated by a corresponding ideal-model adversary that uses the same environment,
while making similar queries to the environment. In the following formulation, the en-
vironment is implemented by a (non-uniform) family of polynomial-size circuits, and
is also responsible for providing the parties with inputs and for trying to distinguish the
real-model execution from the ideal-model execution.
92 The term used in [51] is Universally Composable, but we believe that a reasonable sense of “universal compos-
ability” is only a corollary of the suggested definition.
where ideal_{f, I, B(1^n), E_n} (resp., real_{Π, I, A(1^n), E_n}) denotes the output of E_n after interacting with the ideal-model (resp., real-model) execution under (I, B) (resp., (I, A)).
As hinted earlier, the environment may account for other executions of various protocols
that are taking place concurrently with the main execution being considered. Defini-
tion 7.7.3 implies that such environments cannot distinguish the real execution from an
ideal one. This means that anything that the real-model adversary gains from the exe-
cution of the protocol and any environment (representing other concurrent executions)
can also be obtained by an adversary operating in the ideal model and having access to
the same environment. Thus, each single execution of an environmentally secure pro-
tocol can be replaced by an ideal oracle call to the corresponding functionality, without
affecting the other concurrent executions. Furthermore, one can simultaneously replace
all these concurrent executions by ideal oracle calls and use a hybrid argument to show
that the behavior is maintained. (One needs to use the fact that a single replacement does
not affect the other concurrent executions, even in case some of the other executions
are in the real model and the rest are in the ideal model.) It follows that environmentally
secure protocols are secure with respect to concurrent compositions [51]. We wonder
whether the reverse direction holds.
7.7.2.2. Constructions
The main positive result currently known is that environmentally secure protocols
for any functionality can be constructed for settings in which more than two-thirds
of the active parties are honest (cf. [51]). This holds unconditionally for the private-
channel model and under standard assumptions (e.g., allowing the construction of
public-key encryption schemes) for the standard model (i.e., without private channel).
93 Thus, the definition should actually specify an additional parameter bounding the number of parties that may
be controlled by the adversary.
The immediate consequence of this result is that general environmentally secure multi-
party computation is possible, provided that more than two-thirds of the parties are
honest.
In contrast, general environmentally secure two-party computation is not possible
(in the standard sense).94 Still, one can salvage general environmentally secure two-
party computation in the following reasonable model: Consider a network that contains
servers that are willing to participate (as “helpers,” possibly for a payment) in compu-
tations initiated by a set of (two or more) users. Now, suppose that two users wishing to
conduct a secure computation can agree on a set of servers such that each user believes
that more than two-thirds of the servers (in this set) are honest. Then, with the active
participation of this set of servers, the two users can compute any functionality in an
environmentally secure manner.
Another reasonable model where general environmentally secure two-party com-
putation is possible is the shared random-string model [59]. In this model, all parties
have access to a universal random string (of length related to the security parameter).
We stress that the entity trusted to post this universal random string is not required to
take part in any execution of any protocol, and that all executions of all protocols may
use the same universal random string.
94 Of course, some specific two-party computations do have environmentally secure protocols. See [51] for several
important examples (e.g., key exchange).
chapters. In contrast to the impression given in other parts of this work, it is now obvious
that we cannot get all that we may want. Instead, we should study the alternatives and
go for the one that best suits our real needs.
Indeed, as stated in the preface, the fact that we can define a cryptographic goal
does not mean that we can satisfy it as defined. In case we cannot satisfy the initial
definition, we should search for acceptable relaxations that can be satisfied. These
relaxations should be defined in a clear manner so that it would be obvious what
they achieve and what they fail to achieve. Doing so will allow a sound choice of
the relaxation to be used in a specific application. That is, the choice will have to be a
circumstantial rather than a generic one. This seems to be a good point at which to end the
current work.
and [118, Sec. 5]. We comment that the original sources (i.e., [117, 118]) are very terse,
and that full details were only provided in [107]. Our treatment differs from [107] in
using a higher level of modularity, which is supported by composition theorems for the
malicious models.
As stated earlier, a satisfactory definitional treatment of secure multi-party compu-
tation was provided after the presentation of the constructions of [117, 118, 191]. The
basic approach was developed by Micali and Rogaway [157] and Beaver [10, 11],95 and
reached maturity in Canetti’s work [50], which provides a relatively simple, flexible,
and comprehensive treatment of the (basic) definitions of secure multi-party com-
putation. In particular, the composition theorems that we use are essentially taken
from [50].
A variety of cryptographic tools is used in establishing the main results of this chapter.
Firstly, we mention the prominent role of Oblivious Transfer in the protocols developed
for the semi-honest model.96 An Oblivious Transfer protocol was first suggested by
Rabin [172], but our actual definition and implementation follow the ideas of Even,
Goldreich, and Lempel [84] (as further developed in the proceedings version of [117]).
Several ingredients play a major role in the compilation of protocols secure in the semi-
honest model into generally secure protocols (for the malicious models). These include
commitment schemes, zero-knowledge proofs-of-knowledge, verifiable secret sharing
(introduced by Chor, Goldwasser, Micali, and Awerbuch [63]), and secure coin-flipping
(introduced by Blum [37]).
The Private Channel Model. In contrast to the bulk of this chapter (as well as the
bulk of the entire work), the private channel model (treated in Section 7.6) allows
the presentation of results that do not rely on intractability assumptions. These re-
sults (e.g., Theorem 7.6.6) were obtained by Ben-Or, Goldwasser, and Wigderson [34]
and Chaum, Crépeau, and Damgård [62]. These works were done after the results of
Yao [191] and Goldreich, Micali, and Wigderson [117, 118] were known, with the ex-
plicit motivation of obtaining results that do not rely on intractability assumptions. Our
presentation is based on [34] (cf. [97]). The essential role of the bound on the number of
dishonest parties (even in the semi-honest model) was studied in [64] and subsequent
works.
95 The approach of Goldwasser and Levin [121] is more general: It avoids the definition of security (with respect
to a given functionality) and defines instead a notion of protocol robustness. Loosely speaking, a protocol is
robust if whatever an arbitrary malicious adversary can obtain by attacking it can also be obtained by a very
benign adversarial behavior.
96 Subsequent results by Kilian [137] further demonstrate the importance of Oblivious Transfer in this context.
The aforementioned results were originally obtained using protocols that use a
polynomial number of rounds. In some cases, subsequent works obtained secure
constant-round protocols (e.g., in the case of multi-party computations with honest
majority [27], and in the case of two-party computations allowing abort [143]).
We have mentioned (e.g., in Section 7.7.1.1) the impossibility of obtaining fairness
in secure computations without an honest majority. These statements are backed by the
impossibility of implementing a fair two-party coin-toss, as proven in [65].
We have briefly discussed the notion of adaptive adversaries. A more detailed dis-
cussion of the definitions is provided in [50], which builds on [49]. For a proof of
Theorem 7.7.2, the reader is referred to [49, 53]. For a study of adaptive versus non-
adaptive security, the reader is referred to [52].
Our treatment of multi-party protocols assumes a synchronous network with point-
to-point channels between every pair of parties. Results for asynchronous communi-
cation and arbitrary networks of point-to-point channels were presented in [33, 49]
and [78], respectively.
General secure multi-party computation in a model of transient adversarial behavior
was considered in [166]. In this model, the adversary may seize control of each party
during the protocol’s execution, but can never control more than (say) 10 percent of the
parties at any point in time. We comment that schemes secure in this model were later
termed “proactive” (cf., [57]).
Whenever we have restricted the adversary’s control of parties, we have done so by
bounding the cardinality of the set of controlled parties. It is quite natural to consider
arbitrary restrictions on the set of controlled parties (i.e., that this set belongs to a
family of sets against which security is guaranteed). The interested reader is referred
to [131].
For further discussion of Byzantine Agreement, see any standard textbook on Dis-
tributed Computing (e.g., [3, 147]). We mention that whereas plain m-party Byzantine
Agreement can tolerate at most ⌊(m − 1)/3⌋ malicious parties, Authenticated Byzan-
tine Agreement can tolerate any number of malicious parties (see Construction 7.5.17,
which follows [80]). The problems arising when composing Authenticated Byzantine
Agreement are investigated in [144].
suffice).97 For further discussion of enhanced trapdoor permutations, see Section C.1
in Appendix C.
7.7.7. Exercises
Exercise 1: Oblivious sampling: Suppose that both parties hold a function (or circuit)
that defines a distribution in the natural way and wish to obtain a sample from
this distribution without letting any party learn the corresponding pre-image. Cast
this problem as one of securely computing a corresponding functionality, treating
differently the case in which the function (or circuit) is fixed and the case in which it
is given as input to both parties. Consider also the case in which only the first party
is to obtain the output.
Exercise 2: Oblivious signing: In continuation of Exercise 1, consider the case in
which the distribution to be sampled is determined by the inputs of both parties. For
example, consider the task of oblivious signing in which one party wishes to obtain
the signature of the second party to some document without revealing the document
to the signer (i.e., the document is the input of the first party, whereas the signing-key
is the input of the second party).
Exercise 3: Privacy and Correctness: Referring to the discussion that follows Defini-
tion 7.2.6, consider the following definitions of (partial) privacy and correctness (with
respect to malicious adversaries). Partial privacy is defined as a restriction of Defi-
nition 7.2.6 to the adversary’s component of the random variables real_{Π,A(z)}(x, y)
and ideal_{f,B(z)}(x, y), whereas partial correctness coincides with a restriction of
Definition 7.2.6 to the honest party’s component of these random variables.
1. Show that both properties are implied by Definition 7.2.6, but that even their
combination does not imply Definition 7.2.6.
2. Why were both properties qualified by the term “partial”?
Guideline (Item 2): This is related to the need to use the general formulation
of Definition 7.2.1 for randomized functionalities; see the discussion that follows
Definition 7.2.1.
Exercise 4: On the importance of the length convention: Show that if the equal-length
convention is omitted from definitions like Definition 7.2.1 and 7.2.6, then they
cannot be satisfied for many natural functionalities. That is, consider these definitions
when the ensembles are indexed by the set of all pairs of strings, rather than by the
set of pairs of equal-length strings.
97 Partial progress toward this goal is reported in Haitner’s work “Implementing Oblivious Transfer using collections
of dense trapdoor permutations” (proceedings of the first Theory of Cryptography Conference, 2004).
Guideline: (Here, privacy and security refer to the notions obtained when
omitting the equal-length convention.) Show that the functionality (x, y) →
(f(x, y), f(x, y)), where f(x, y) is defined to equal 1 if |x| = |y| and 0 otherwise, cannot
be privately computed. Show that (x, y) → (|y|, |x|) can be privately computed,
but that the simple protocol in which Party 1 sends |x| to Party 2 (and Party 2 sends
|y| to Party 1) fails to securely compute it. Challenge: Try to show that the latter
functionality cannot be securely computed.
Assuming that k and ℓ are polynomially bounded and efficiently computable, present
privacy reductions between all these variants. Specifically, show a privacy reduction
of the extended 1-out-of-k Oblivious Transfer to the original 1-out-of-2 Oblivious
Transfer of bits, and between 1-out-of-2 Oblivious Transfer of ℓ(n)-bit long secrets and
Oblivious Transfer of a single ℓ(n)-bit long secret.
Guideline: Note that you are asked only to present oracle-aided protocols that are
secure in the semi-honest model. The only non-obvious reduction is from 1-out-
of-2 Oblivious Transfer to single-secret Oblivious Transfer (OT), presented next.
The first party randomly selects r1, r2 ∈ {0, 1}^ℓ(n), and the parties invoke OT twice,
where the first party inputs r1 the first time and r2 the second time. If the second
party wishes to obtain the i-th secret, for i ∈ {1, 2}, then it says OK if and only if
it has obtained ri but not r3−i . Otherwise, the parties repeat the experiment. Once
the second party says OK, the first party sends it the pair (σ1 ⊕ r1 , σ2 ⊕ r2 ), where
the σ j ’s are the actual secrets.
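For concreteness, the following Python sketch (ours, not part of the original text) renders this reduction; the single-secret Oblivious Transfer oracle is emulated by an unbiased coin, and the constant ELL is an arbitrary stand-in for ℓ(n).

```python
import secrets
from typing import Optional

ELL = 16  # stand-in for the secret length ell(n), here measured in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def single_secret_ot(secret: bytes) -> Optional[bytes]:
    # Stand-in for the single-secret Oblivious Transfer oracle: the second party obtains
    # the secret with probability 1/2, and the first party does not learn whether it did.
    return secret if secrets.randbits(1) else None

def ot_1_out_of_2(sigma1: bytes, sigma2: bytes, i: int) -> bytes:
    # Party 1 holds sigma1 and sigma2; Party 2 wishes to obtain sigma_i, for i in {1, 2}.
    while True:
        r1, r2 = secrets.token_bytes(ELL), secrets.token_bytes(ELL)  # Party 1's random pads
        got1, got2 = single_secret_ot(r1), single_secret_ot(r2)      # two oracle invocations
        wanted, other = (got1, got2) if i == 1 else (got2, got1)
        if wanted is None or other is not None:
            continue  # Party 2 does not say OK; repeat (each attempt succeeds with prob. 1/4)
        # Party 2 said OK: Party 1 sends both masked secrets; Party 2 can unmask only sigma_i,
        # since it holds r_i but has no information on r_{3-i}.
        c1, c2 = xor(sigma1, r1), xor(sigma2, r2)
        return xor(c1 if i == 1 else c2, wanted)
```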
communication protocol. Recall that the latter implies the existence of one-way
functions.
Guideline: To transmit a bit σ , the sender invokes the 1-out-of-2 Oblivious Transfer
with input (σ, 0), while the receiver sets its input to 1 and gets σ (i.e., the sender’s
first bit in the OT). Observe that “privacy with respect to the sender” implies that
(the sender and thus also) the adversary cannot distinguish the case where the
receiver enters 1 from the case where it enters 2. Likewise, “privacy with respect
to the receiver” implies that, in the (fictitious) case where the receiver enters 2, the
adversary (like the receiver) cannot tell whether the sender enters (0, 0) or (1, 0).
Thus, also in the (real) case where the receiver enters 1, the adversary cannot tell
whether the sender enters (0, 0) or (1, 0).
Exercise 9: Alternative analysis of Construction 7.3.7: The said construction can be de-
coupled into two reductions. First, the functionality of Eq. (7.17) – (7.18) is reduced to
the deterministic functionality ((a1, b1, c1), (a2, b2)) → (λ, f_{a2,b2}(a1, b1, c1)), where
f_{a,b}(x, y, z) is defined to equal z + (x + a) · (y + b), and next the latter is reduced to
OT^4_1 (i.e., 1-out-of-4 Oblivious Transfer). Present
each of these reductions and prove that each is a privacy reduction.
Guideline: When analyzing the second reduction, use the fact that it is used to com-
pute a deterministic functionality and that thus, the simpler form of Definition 7.2.1
can be used.
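For concreteness, here is a hypothetical Python sketch of the second reduction over GF(2): Party 1 tabulates f_{a,b}(a1, b1, c1) for the four possible inputs (a, b) of Party 2, and Party 2 selects the relevant entry via the OT^4_1 oracle (emulated here by a plain table lookup).

```python
def f(a, b, x, y, z):
    # f_{a,b}(x, y, z) = z + (x + a) * (y + b), with all arithmetic over GF(2)
    return z ^ ((x ^ a) & (y ^ b))

def ot_4_1(table, index):
    # Stand-in for the OT^4_1 oracle: Party 2 learns table[index] and nothing else,
    # while Party 1 learns nothing about index.
    return table[index]

def second_reduction(a1, b1, c1, a2, b2):
    # Party 1 holds (a1, b1, c1); Party 2 holds (a2, b2) and obtains f_{a2,b2}(a1, b1, c1).
    table = [f(a, b, a1, b1, c1) for a in (0, 1) for b in (0, 1)]  # entry 2a+b covers input (a, b)
    return ot_4_1(table, 2 * a2 + b2)
```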
Exercise 10: Some functionalities that are trivial to privately compute: Show that each
of the following types of functionalities has a trivial protocol for privately computing
it (i.e., using a single message):
1. Each deterministic functionality that only depends on the input of one party (i.e.,
(x, 1^{|x|}) → (f_1(x), f_2(x)) for arbitrary functions f_1 and f_2).
2. Each randomized functionality of the form (x, 1^{|x|}) → (g(x), f(x, g(x))), where
g is any randomized process and f is a function.
Generalize these functionality types and their treatment to the multi-party case.
Exercise 11: In continuation of Exercise 10, show that all six functionalities introduced
in Section 7.4.3 are trivial to compute in a private manner.
Guideline: Note that the restricted authenticated-computation functionality of
Eq. (7.27) and the image-transmission functionality of Eq. (7.31) fit Item 1,
whereas the basic and augmented coin-tossing functionalities, as well as the input-
commitment functionality, fit Item 2. What about Eq. (7.33)?
Guideline (Part 2): Privately reduce the single-secret (bit) version of Oblivious
Transfer to the special case of natural auth-comp in which h(α) (resp., f (α)) equals
the first (resp., second) bit of α. On input a secret bit σ , Party 1 sets its oracle-query
to 1σ and Party 2 sets its query to a uniformly selected bit (and so if the latter equals
h(1σ ) = 1, then Party 2 gets f (1σ ) = σ , and otherwise it gets λ).
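A toy Python rendering of this reduction (our illustration; the oracle below is merely a stand-in for the special case of the authenticated-computation functionality described in the guideline):

```python
import random

def auth_comp(alpha, beta):
    # Stand-in oracle: alpha is a 2-bit string held by Party 1, beta a bit held by Party 2;
    # h(alpha) is the first bit of alpha and f(alpha) its second bit.  Party 2 learns f(alpha)
    # if beta = h(alpha), and lambda (represented here by None) otherwise.
    h_alpha, f_alpha = alpha
    return f_alpha if beta == h_alpha else None

def single_secret_bit_ot(sigma):
    # Party 1's secret bit sigma reaches Party 2 with probability exactly 1/2.
    alpha = (1, sigma)              # Party 1 sets its oracle-query to the 2-bit string 1*sigma
    beta = random.randrange(2)      # Party 2 sets its query to a uniformly selected bit
    return auth_comp(alpha, beta)   # equals sigma iff beta = h(1*sigma) = 1
```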
Exercise 14: Voting, Elections, and Lottery: Write a specification for some social pro-
cedure (e.g., voting, elections, or lottery), and cast it as a multi-party functional-
ity. Note that allowing appeals and various forms of interaction requires a reactive
functionality (see Section 7.7.1.3), which in turn can be reduced to a standard (non-
reactive) functionality.
Exercise 15: Threshold Cryptography: Loosely speaking, Threshold Cryptography is
concerned with allowing a set of parties to share the ability to perform certain
(cryptographic) operations (cf. [74, 96]). For example, suppose that we wish m
parties to hold shares of a signing-key (with respect to some signature scheme), such
that every t of these parties (but not fewer) can generate signatures to documents of
their choice. Cast this example as a multi-party functionality. (The same holds for
other versions of Threshold Cryptography.)
Exercise 16: Failure of a simple protocol for multi-party authenticated computation:
Consider the m-party oracle-aided protocol for computing Eq. (7.50) in which, for
i = 2, ..., m, Parties 1 and i invoke Eq. (7.33), with Party 1 entering the input α and
Party i entering the input βi . Show that this oracle-aided protocol does not constitute
a secure implementation of Eq. (7.50).
Exercise 17: Analysis of Shamir’s Secret-Sharing Scheme: Prove that Construc-
tion 7.5.35 satisfies the conditions of Definition 7.5.34.
Guideline: For every sequence (u_1, v_1), ..., (u_ℓ, v_ℓ), where the u_i's are distinct,
consider the set of degree d ≥ ℓ − 1 polynomials q that satisfy q(u_i) = v_i for
Exercise 19: Perfect security implies ordinary security: Show that Definition 7.6.2
implies Definition 7.5.4.
Exercise 20: Private computation of linear functions: For any fixed m-by-m matrix
M, over a finite field, show that the m-ary functionality x → x M can be m-privately
computed (as per Definition 7.6.1).
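One standard way of doing so (our sketch; the exercise does not prescribe a method) is via additive secret sharing over the field: each party splits its input into m random shares that sum to it, every party applies the public matrix M to the share vector it holds, and the k-th coordinate of the result is reassembled by Party k.

```python
import random

P = 2**61 - 1   # a prime; stands in for the finite field GF(P) (assumption, not from the text)

def share(x, m):
    # Split a field element into m additive shares summing to x modulo P.
    s = [random.randrange(P) for _ in range(m - 1)]
    return s + [(x - sum(s)) % P]

def private_linear_map(inputs, M):
    # Each party i holds inputs[i]; the parties jointly compute x*M, with Party k learning
    # only coordinate k of the result.  Written as a single procedure: the "messages" are
    # exactly the values that would travel over the private channels.
    m = len(inputs)
    # Round 1: Party i sends the j-th additive share of x_i to Party j.
    held = [[0] * m for _ in range(m)]            # held[j][i] = share of x_i held by Party j
    for i, x in enumerate(inputs):
        for j, sh in enumerate(share(x, m)):
            held[j][i] = sh
    # Local step: Party j applies the public matrix M to its share vector.
    partial = [[sum(held[j][i] * M[i][k] for i in range(m)) % P for k in range(m)]
               for j in range(m)]
    # Round 2: Party j sends coordinate k of its partial result to Party k, who adds them up.
    return [sum(partial[j][k] for j in range(m)) % P for k in range(m)]
```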
Guideline (Item 2): Show that the computation of the free term of the polynomial
c can be captured by an adequate M1 , whereas the generation of the values of a
random degree t polynomial with free-term equal to zero can be captured by an
adequate M2 .
Exercise 23: Analysis of Construction 7.6.3: For t < m/2, show that Construc-
tion 7.6.3 constitutes a protocol that t-privately computes Eq. (7.65).
Guideline: Consider, without loss of generality, I = {1, ..., t}. The simulator is
given an input sequence ((a1 , b1 ), ..., (at , bt )) and an output sequence (r1 , ..., rt ),
and needs to emulate the messages that the parties in I obtain at Step 2. This can
be done by randomly selecting degree t polynomials q̃_j's that are consistent with
these sequences, and letting the messages that Party i obtains equal q̃_1(i), ..., q̃_m(i).
Specifically, for i = 1, ..., t, the polynomial q̃_i is selected like q_i (i.e., uniformly
among the degree t polynomials having free-term a_i·b_i); for i = t + 1, ..., m − 1, the
polynomial q̃_i is selected uniformly among all degree t polynomials, and q̃_m is selected
such that Σ_{j=1}^m γ_j·q̃_j(i) = r_i holds for all i ∈ [t].
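For orientation, the following hypothetical Python sketch spells out the multiplication step that Construction 7.6.3 refers to, over a prime field and assuming m ≥ 2t + 1: each party re-shares the product of its two shares with a fresh degree-t polynomial, and the new shares are the γ_j-weighted combinations of the received values, where the γ_j's interpolate any polynomial of degree at most 2t at zero.

```python
import random

P = 2**61 - 1   # prime modulus; GF(P) stands in for the field used by the construction

def shamir_share(secret, t, m):
    # A random degree-t polynomial q with free term q(0) = secret; Party i receives q(i).
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P for i in range(1, m + 1)]

def gammas_at_zero(m):
    # Coefficients gamma_j with sum_j gamma_j * C(j) = C(0) for every polynomial C of degree < m.
    out = []
    for j in range(1, m + 1):
        num, den = 1, 1
        for k in range(1, m + 1):
            if k != j:
                num = num * (-k) % P
                den = den * (j - k) % P
        out.append(num * pow(den, P - 2, P) % P)
    return out

def multiplication_step(a_shares, b_shares, t):
    # Input: Party i holds (a_shares[i], b_shares[i]), the values at point i of degree-t
    # polynomials with free terms a and b.  Output: degree-t shares of a*b (needs m >= 2t + 1).
    m = len(a_shares)
    gammas = gammas_at_zero(m)
    subshares = [shamir_share(a_shares[j] * b_shares[j] % P, t, m) for j in range(m)]
    return [sum(gammas[j] * subshares[j][i] for j in range(m)) % P for i in range(m)]
```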
APPENDIX C
Corrections and Additions to Volume 1
In this appendix, we list a few corrections and additions to the previous chapters of this
work (which appeared in [108]).
We comment that the RSA collection (presented in Section 2.4.3.1 and further discussed
in Section 2.4.4.2) is, in fact, an enhanced collection of trapdoor permutations,1 pro-
vided that RSA is hard to invert in the same sense as assumed in Section 2.4.3.1.
In contrast, the Rabin Collection (as defined in Section 2.4.3) does not satisfy
Definition C.1.1 (because the coins of the sampling algorithm give away a modular
square root of the domain element). Still, the Rabin Collection can be easily modified
to yield an enhanced collection of trapdoor permutations, provided that factor-
ing is hard (in the same sense as assumed in Section 2.4.3). Actually, we present
1 Here and in the following, we assume that sampling Z*_N, for a composite N, is trivial. However, sampling
Z*_N (or even Z_N) by using a sequence of unbiased coins is not that trivial. The straightforward sampler may
take ℓ = 2⌈log₂ N⌉ random bits, view them as an integer i ∈ {0, 1, ..., 2^ℓ − 1}, and output i mod N. This
yields an almost uniform sample in Z_N. Also note that given an element e ∈ Z_N, one can uniformly sample an
i ∈ {0, 1, ..., 2^ℓ − 1} such that i ≡ e (mod N). Thus, the actual sampler does not cause trouble with respect
to the enhanced hardness requirement.
1. Modifying the functions. Rather than squaring modulo the composite N , we consider
the function of raising to the power of 4 modulo N . It can be shown that the resulting
permutations over the quadratic residues modulo N satisfy Definition C.1.1, provided
that factoring is hard. Specifically, given N and a random r ∈ Z N , the ability to
extract the 4th root of r² mod N (modulo N) yields the ability to factor N, where
the algorithm is similar to the one used in order to establish the intractability of
extracting square roots.
2. Changing the domains. Rather than considering the permutation induced (by the
modular squaring function) on the set Q N of the quadratic residues modulo N , we
consider the permutations induced on the set M N , where M N contains all integers in
{1, ..., N /2} that have Jacobi symbol modulo N that equals 1. Note that as in the case
of Q N , each quadratic residue has a unique square root in M N (because exactly two
square roots have a Jacobi symbol that equals 1 and their sum equals N ).2 However,
unlike Q N , membership in M N can be determined in polynomial-time (when given
N without its factorization). Thus, sampling M N can be done in a straightforward
way, which satisfies Definition C.1.1.
Actually, squaring modulo N is a 1-1 mapping of M N to Q N . In order to obtain
a permutation over M N , we modify the function a little, such that if the result of
modular squaring is bigger than N /2, then we use its additive inverse (i.e., rather
than outputting y > N /2, we output N − y).
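To illustrate Item 2, here is a hypothetical Python sketch of the resulting permutation over M_N, for N a product of two primes both congruent to 3 modulo 4 (as in the Rabin collection); note that the membership test uses N only, not its factorization.

```python
from math import gcd

def jacobi(a, n):
    # Jacobi symbol (a/n) for an odd positive modulus n.
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def in_M(x, N):
    # M_N = { x in {1, ..., N/2} : gcd(x, N) = 1 and the Jacobi symbol of x mod N equals 1 }.
    return 1 <= x <= N // 2 and gcd(x, N) == 1 and jacobi(x, N) == 1

def modified_rabin(x, N):
    # Square modulo N and fold the result back into {1, ..., N/2}; for N as above this
    # maps M_N onto itself bijectively.
    assert in_M(x, N)
    y = (x * x) % N
    return y if y <= N // 2 else N - y
```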
We comment that the special case of Definition 2.4.5 in which the domain of f_α equals
{0, 1}^{|α|} is a special case of Definition C.1.1 (because, without loss of generality, the
sampling algorithm may satisfy S(α, r) = r). Clearly, the RSA and the Rabin collec-
tions can be slightly modified to fit the former special case.
The focus of Section 3.6 was on a special case of pseudorandom functions, hereafter
referred to as the fixed-length variant. For some function ℓ : N → N (e.g., ℓ(n) = n),
these functions map ℓ(n)-bit long strings to ℓ(n)-bit long strings, where n denotes the
length of the function’s seed. More general definitions were presented in Section 3.6.4.
In particular, functions mapping strings of arbitrary length to ℓ(n)-bit long strings were
considered. Here, we refer to the latter as the variable-length variant.
A natural question regarding these variants is how to directly (or efficiently) trans-
form functions of the fixed-length variant into functions of the variable-length variant.3
Exercises 30 and 31 in Chapter 3 implicitly suggest such a transformation, and so does
Proposition 6.3.7. Because of the interest in this natural question, we next state the
actual result explicitly.
Proof Idea: The proofs of Propositions 6.3.6 and 6.3.7 actually establish
Proposition C.2.1.
3 An indirect construction may use the fixed-length variant in order to obtain a one-way function, and then
construct the variable-length variant using this one-way function. Needless to say, this indirect construction is
very wasteful.
4 Recall that the (t, 1/t)-collision property means that for every n ∈ N and every x ≠ y such that |x|, |y| ≤ t(n),
the probability that h_r(x) = h_r(y) is at most 1/t(n), where the probability is taken over all possible choices of
r ∈ {0, 1}^n with uniform probability distribution.
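The transformation implicit in Propositions 6.3.6 and 6.3.7 (hash the input to a fixed length and then apply the fixed-length function) can be sketched as follows; this is our illustration, with SHA-256 and HMAC serving only as stand-ins for a member h_r of the hashing family and for the fixed-length pseudorandom function F_s, respectively.

```python
import hashlib
import hmac
import os

def keygen():
    r = os.urandom(32)   # index of the hashing-family member h_r (stand-in)
    s = os.urandom(32)   # seed of the fixed-length pseudorandom function F_s (stand-in)
    return r, s

def variable_length_prf(r, s, x: bytes) -> bytes:
    # F'_{r,s}(x) = F_s(h_r(x)): the arbitrarily long input x is first hashed to a fixed
    # length, and the fixed-length function is applied to the hashed value.
    hashed = hashlib.sha256(r + x).digest()
    return hmac.new(s, hashed, hashlib.sha256).digest()
```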
5 We comment that the notion of strong witness indistinguishability was introduced by the author at a late stage
of writing [108].
C.3 ON STRONG WITNESS INDISTINGUISHABILITY
Notation. To facilitate the rest of the discussion, we let WI stand for “(regular) witness
indistinguishability” and strong-WI stand for “strong witness indistinguishability.”
6 Theorem 4.6.8 does not mention the public-coin condition, but the construction that is supposed to support it
is of the public-coin type. Note that constant-round zero-knowledge protocols are presented in Section 4.9, but
these are in relaxed models and are not of the public-coin type.
that (P, V) is strong witness indistinguishable. Then, for every two probability ensembles
{(X^1_n, Y^1_n, Z^1_n)}_{n∈N} and {(X^2_n, Y^2_n, Z^2_n)}_{n∈N} such that X^j_n = (X^j_{n,1}, ..., X^j_{n,Q(n)}),
Y^j_n = (Y^j_{n,1}, ..., Y^j_{n,Q(n)}), and Z^j_n = (Z^j_{n,1}, ..., Z^j_{n,Q(n)}), where (X^j_{n,i}, Y^j_{n,i}, Z^j_{n,i}) is
independent of (X^j_{n,k}, Y^j_{n,k}, Z^j_{n,k})_{k≠i}, for j ∈ {1, 2}, the following holds:

If {(X^1_n, Z^1_n)}_{n∈N} and {(X^2_n, Z^2_n)}_{n∈N} are computationally indistinguishable,
then so are {⟨P_Q(Y^1_n), V*_Q(Z^1_n)⟩(X^1_n)}_{n∈N} and {⟨P_Q(Y^2_n), V*_Q(Z^2_n)⟩(X^2_n)}_{n∈N},
for every probabilistic polynomial-time machine V*_Q.

We stress that the components of Y^j_n (resp., Z^j_n) may depend on the corresponding
components of X^j_n, but they are independent of the other components of Y^j_n (resp.,
Z^j_n), as well as of the other components of X^j_n. Note that statistical independence of
this form holds vacuously in Lemma 4.6.6, which refers to fixed sequences of strings.
Lemma C.3.1 is proved by extending the proof of Lemma 4.6.6. Specifically, we consider
hybrids as in the original proof, and construct a verifier V* that interacts with P on the
i-th session (or copy), while emulating all the other sessions (resp., copies). Toward this
emulation, we provide V* with the corresponding Q(n) − 1 components of both Y^j_n's
(as well as of both X^j_n's and Z^j_n's). Fixing the best possible choice for these Q(n) − 1
components, we derive a verifier that interacts with P and contradicts the hypothesis
that (P, V) is strong witness indistinguishable. The key point is that revealing (or fixing)
the other Q(n) − 1 components of both Y^j_n's does not allow for distinguishing the i-th
component of X^1_n and Z^1_n from the i-th component of X^2_n and Z^2_n.
be proven that the T^j_n's are computationally indistinguishable (by considering what
happens if at Step 1 the prover sends a commitment to 1).
3. Using a strong witness indistinguishable proof (which is indeed the missing compo-
nent or the sub-protocol to which the current protocol is reduced), the prover proves
that the string sent at Step 1 is a commitment to 0.
Note that it suffices to show that the verifier cannot distinguish the two possible
transcript distributions of the current step, where both possible distributions refer
to executions with the same common input (i.e., the commitment) and the same
prover’s auxiliary input (i.e., the decommitment information). In contrast, these
two distributions (of executions) refer to two different distributions of the verifier’s
auxiliary input (i.e., either T^1_n or T^2_n), which are indistinguishable.
The foregoing reduction demonstrates that the notion of strong witness indistinguisha-
bility actually refers to issues that are fundamentally different from witness indistin-
guishability. Specifically, the issue is whether or not the interaction with the prover helps
to distinguish between two possible distributions of some auxiliary information (which
are indistinguishable without such an interaction). Furthermore, this issue arises also
in case the prover’s auxiliary inputs (i.e., the “witnesses”) are identically distributed.
C.3.3. Consequences
In view of the fact that we do not have constant-round public-coin strong witness
indistinguishable proofs with negligible error for N P, we suggest replacing the use of
such proofs with some cumbersome patches. A typical example is the construction of
non-oblivious commitment schemes (i.e., Theorem 4.9.4).
Other Applications. Fortunately, Theorem 4.9.4 is the only place where strong witness
indistinguishable proofs are used in this work. We believe that in many other applications
of strong witness indistinguishable proofs, an analogous modification can be carried
out (in order to salvage the application). A typical example appears in [7]. Indeed, the
current situation is very unfortunate, and we hope that it will be redeemed in the future.
Specifically, we propose the following open problem:
In retrospect, it appears that Section 4.10 is too laconic. As is usually the case, laconic
style gives rise to inaccuracies and gaps, which we wish to address here. (See also
Section C.6.)
the family (i.e., an α not in I ). In such a case, the function is not necessarily 1-1, and,
consequently, the soundness property may be violated. This concern can be addressed
by using a (simple) non-interactive (zero-knowledge) proof for establishing that the
function is “typically 1-1” (or, equivalently, is “almost onto the designated range”).
The proof proceeds by presenting pre-images (under the function) of random elements
specified in the reference string. Note that for any fixed polynomial p, we can only
prove that the function is 1-1 on at least a 1 − (1/ p(n)) fraction of the designated range
(i.e., {0, 1}^n), yet this suffices for moderate soundness of the entire proof system (which
in turn can be amplified by repetitions). For further details, consult [32].
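The following hypothetical fragment conveys the flavor of that check (it is not the construction of [32] itself): the verifier insists on seeing a preimage of every random target listed in the reference string, so a function that misses an ε fraction of its designated range passes with probability at most (1 − ε)^t when t targets are used.

```python
def certify_almost_onto(f, provide_preimage, targets):
    # Verifier's test: every target specified by the reference string must be hit by f.
    # provide_preimage is the prover's (trapdoor-assisted) preimage finder; an honest prover
    # holding the trapdoor can answer all queries when f is indeed a permutation.
    for y in targets:
        x = provide_preimage(y)
        if x is None or f(x) != y:
            return False
    return True
```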
Although the known candidate trapdoor permutations can be modified to fit this form,
we wish to further generalize the result such that any enhanced trapdoor permutation
(as in Definition C.1.1) can be used. This can be done by letting the reference string
consist of the coin sequences used by the domain-sampling algorithm (rather than of
elements of the function’s domain). By virtue of the enhanced hardness condition (i.e.,
Eq. (C.3)), the security of the hard-core is preserved, and so is the zero-knowledge
property.
As stated at the end of Section C.1, in contrast to what was claimed in Remark 4.10.6,
we do not know how to extend the construction to arbitrary (rather than enhanced)
trapdoor permutations. This leads to the following open problem.
We stress that the same reference string (i.e., Upoly(n) ) is used in all invocations of the
prover P. Thus, Proposition C.4.1 does not refer to multiple samples of computationally
indistinguishable ensembles (nor even to independent samples from a sequence of
computationally indistinguishable pairs of ensembles, as would have been the case if
the various invocations were to use independently distributed reference strings). Still,
Proposition C.4.1 can be established by using the hybrid technique. The key observation
is that, given a single proof with respect to some reference string along with the reference
string (as well as the relevant sequence s), one can efficiently generate all the other proofs
(with respect to the same reference string). Indeed, the internal coins used by P in each
of these proofs are independent.
7 Recall that the distinguisher is also given the index of the distribution, which in this case is the triple t.
8 Indeed, here is where we use the fact that the corrected definition (see Definition 5.4.22) refers only to input-
selection and witness-selection strategies that can be implemented by polynomial-size circuits.
that the simulator generates f -images by selecting random pre-images and applying
f to each of them, and so it knows the pre-images and can reveal them later. Next,
the simulator determines the input graph and the corresponding Hamiltonian cycle (by
using the abovementioned polynomial-size circuit) and acts as the real prover. Finally,
it feeds the original distinguisher with the corresponding output. Observe that in case
the given sequence of f (x)’s satisfies b(x) = 0 (resp., b(x) = 1) for each f (x), the
“b-value distinguisher” produces outputs distributed exactly as in the simulation (resp.,
the real proof ).
A recent result by Barak [5] calls for reevaluation of the significance of all nega-
tive results regarding black-box zero-knowledge9 (as defined in Definition 4.5.10).
In particular, relying on standard intractability assumptions, Barak presents round-
efficient public-coin zero-knowledge arguments for N P (using non-black-box simu-
lators), whereas only BPP can have such black-box zero-knowledge arguments (see
comment following Theorem 4.5.11). It is interesting to note that Barak’s simulator
works in strict (rather than expected) probabilistic polynomial-time, addressing an open
problem mentioned in Section 4.12.3. Barak’s result is further described in Section C.5.2.
In Section C.5.1, we review some recent progress in the study of the preservation of
zero-knowledge under concurrent composition. We seize the opportunity to provide a
wider perspective on the question of the preservation of zero-knowledge under various
forms of protocol composition operations.
We mention that the two problems discussed in this section (i.e., the “preservation of
security under various forms of protocol composition” and the “use of the adversary’s
program within the proof of security”) arise also with respect to the security of other
cryptographic primitives. Thus, the study of zero-knowledge protocols serves as a good
benchmark for the study of various problems regarding cryptographic protocols.
9 Specifically, one should reject the interpretation, offered in Section 4.5 (see Sections 4.5.0, 4.5.4.0, and 4.5.4.2),
by which negative results regarding black-box zero-knowledge indicate the inherent limitations of zero-
knowledge.
honest parties in each execution are independent of the messages they received in other
executions. The adversary, however, may coordinate the actions it takes in the various
executions, and in particular, its actions in one execution may also depend on messages
it received in other executions.
Let us motivate the asymmetry between the postulate that honest parties act inde-
pendently in different executions and the absence of such an assumption with respect
to the adversary’s actions. Typically, coordinating actions in different executions is dif-
ficult but not impossible. Thus, it is desirable to use stand-alone protocols that preserve
security under “composition” (as defined earlier), rather than to use protocols that in-
clude inter-execution coordination actions. Note that at the very least, inter-execution
coordination requires users to keep track of all executions that they perform. Actually,
trying to coordinate honest executions is even more problematic than it seems, because
one may need to coordinate executions of different honest parties (e.g., all employees of
a big corporation or an agency under attack), which in many cases is highly unrealistic.
On the other hand, the adversary attacking the system may be willing to go to the extra
trouble of coordinating its attack in the various executions of the protocol.
For T ∈ {sequential, parallel, concurrent}, we say that a protocol is T -zero-
knowledge if it is zero-knowledge under a composition of type T . The definitions of
T -zero-knowledge are derived from the standard definition by considering appropriate
adversaries (i.e., adversarial verifiers), that is, adversaries that can initiate a polynomial
number of interactions with the prover, where these interactions are scheduled according
to the type T .10 The corresponding simulator (which, as usual, interacts with nobody) is
required to produce an output that is computationally indistinguishable from the output
of such a type T adversary.
10 Without loss of generality, we may assume that the adversary never violates the scheduling condition; it may
instead send an illegal message at the latest possible adequate time. Furthermore, without loss of generality, we
may assume that all the adversary’s messages are delivered at the latest possible adequate time.
11 The preliminary version of Goldwasser, Micali, and Rackoff ’s work [124] uses the “basic” definition (i.e.,
Definition 4.3.2), whereas the final version of that work as well as most subsequent works use the augmented
definition (i.e., Definition 4.3.10). In some works, the “basic” definition is used for simplicity, but typically one
actually needs and means the augmented definition.
12 The presentation in Section 4.5.4.1 is in terms of two protocols, each being zero-knowledge, such that executing
them in parallel is not zero-knowledge. These two protocols can be easily combined into one protocol (e.g., by
letting the second party determine, in its first message, which of the two protocols to execute).
13 In the case of parallel zero-knowledge proofs, there is no need to specify the soundness error because it can
always be reduced via parallel composition. As mentioned later, this is not the case with respect to arguments.
14 By non-trivial proof systems we mean ones for languages outside BPP, whereas by significantly less than loga-
rithmic we mean any function f : N → N satisfying f (n) = o(log n/ log log n). In contrast, by almost logarithmic
we mean any function f satisfying f (n) = ω(log n).
Back to Parallel Composition. Given our opinion about the timing model, it is not
surprising that we consider the problem of parallel composition almost as important as
the problem of concurrent composition in the timing model. Firstly, it is quite reasonable
to assume that the parties’ local clocks have approximately the same rate, and that
drifting is corrected by occasional clock synchronization. Thus, it is reasonable to
assume that the parties have an approximately good estimate of some global time.
Furthermore, the global time may be partitioned into phases, each consisting of a
constant number of rounds, so that each party wishing to execute the protocol just
delays its invocation to the beginning of the next phase. Thus, concurrent execution
15 The rate should be computed with respect to reasonable intervals of time; for example, for Δ as defined next,
one may assume that a time period of Δ units is measured as Δ′ units of time on the local clock, where
Δ/ρ ≤ Δ′ ≤ ρ · Δ.
In contrast to these conjectures (and to the reasoning underlying them), Barak showed
how to construct non-black-box simulators and obtained several results that were known
to be unachievable via black-box simulators [5]. In particular, under standard intractabil-
ity assumptions (see also [7]), he presented constant-round public-coin zero-knowledge
arguments with negligible soundness error for any language in N P. (Moreover, the
simulator runs in strict polynomial-time, which is impossible for black-box simula-
tors of non-trivial constant-round protocols [9].) Furthermore, these protocols pre-
serve zero-knowledge under a fixed16 polynomial number of concurrent executions,
in contrast to the result of [58] (regarding black-box simulators) that also holds in
that restricted case. Thus, Barak’s result calls for the reevaluation of many common
beliefs. Most concretely, it says that results regarding black-box simulators do not
reflect inherent limitations of zero-knowledge (but rather an inherent limitation of a
natural way of demonstrating the zero-knowledge property). Most abstractly, it says
that there are meaningful ways of using a program other than merely invoking it as a
black-box.
Does this mean that a method was found to “reverse-engineer” programs or to
“understand” them? We believe that the answer is negative. Barak [5] is using the
adversary’s program in a significant way (i.e., more significant than just invoking it),
without “understanding” it. So, how does he use the program?
The key idea underlying Barak’s protocol [5] is to have the prover prove that either the
original NP-assertion is valid or that he (i.e., the prover) “knows the verifier’s residual
strategy” (in the sense that it can predict the next verifier message). Indeed, in a real
interaction (with the honest verifier), it is infeasible for the prover to predict the next
verifier message, and so computational soundness of the protocol follows. However,
a simulator that is given the code of the verifier’s strategy (and not merely oracle
access to that code) can produce a valid proof of the disjunction by properly executing
the sub-protocol using its knowledge of an NP-witness for the second disjunct. The
simulation is computationally indistinguishable from the real execution, provided that one
cannot distinguish an execution of the sub-protocol in which one NP-witness (i.e., an
NP-witness for the original assertion) is used from an execution in which the second NP-
witness (i.e., an NP-witness for the auxiliary assertion) is used. That is, the sub-protocol
should be a witness indistinguishable argument system (see Sections 4.6 and 4.8). We
warn the reader that the actual implementation of this idea requires overcoming several
technical difficulties (cf. [5, 7]).
16 The protocol depends on the polynomial that bounds the number of executions, and thus is not known to be
concurrent zero-knowledge (because the latter requires fixing the protocol and then considering any polynomial
number of concurrent executions).
proving the failure of the “Random Oracle Methodology” [54] and the impossibility of
software obfuscation [8]). In contrast, in [5] (and [6]), the code of the adversary is being
used within a sophisticated proof of security. What we wish to highlight here is that
non-black-box usage of programs is also relevant to proving (rather than to disproving)
the security of systems.
17 In the general case, the generation protocol may generate an instance x′ in L′, but it is infeasible for
the prover to obtain a corresponding witness (i.e., a w′ such that (x′, w′) ∈ R′). In the second step,
the sub-protocol in use ought to be a proof-of-knowledge, and computational soundness of the main
protocol will follow (because otherwise, the prover, using a knowledge-extractor, can obtain a witness
for x′ ∈ L′).
is invoked with the corresponding NP-witness as auxiliary input (i.e., with (w, λ),
where w is the NP-witness for x given to the main prover).
The computational soundness of this protocol follows by Property (a) of the gen-
eration protocol (i.e., with high probability x′ ∉ L′, and so x ∈ L follows by the
soundness of the protocol used in Step 2). To demonstrate the zero-knowledge prop-
erty, we first generate a simulated transcript of Step 1 (with outcome x′ ∈ L′), along
with an adequate NP-witness (i.e., w′ such that (x′, w′) ∈ R′), and then emulate
Step 2 by feeding the sub-prover strategy with the NP-witness (λ, w′). Combining
Property (b) of the generation protocol and the witness indistinguishability prop-
erty of the protocol used in Step 2, the simulation is indistinguishable from the real
execution.
Regarding Construction 4.10.7 and the Proof of Proposition 4.10.9. The current
description of the setting of the mapping of the input graph G to the Hamiltonian matrix
H (via the two mappings π1 and π2 ) is confusing and even inaccurate. Instead, one may
identify the rows (resp., columns) of H with [n] and use a single permutation π over
[n] (which supposedly maps the vertices of G to those of H ).18 Alternatively, one may
compose this permutation π with the two (1-1) mappings φ_i’s (where φ_i : [n] → [n³]
is as in the original text), and obtain related π_i’s (i.e., π_i(v) = φ_i(π(v))), which should
be used as in the original text. We stress that the real prover determines π to be an
isomorphism between the Hamiltonian cycle of G and the Hamiltonian cycle of H ,
whereas the simulator selects π at random.
Some Missing Credits. The sequential composition lemma for zero-knowledge pro-
tocols (i.e., Lemma 4.3.11) is due to [119]. The notions of strong witness indistin-
guishability (Definition 4.6.2) and strong proofs-of-knowledge (Section 4.7.6), and
the Hidden Bit Model (Section 4.10.2) have first appeared in early versions of this
work.
18 The identification is via the two mappings φ1 and φ2 mentioned in the original text. We stress that these mappings
only depend on the matrix M that contains H .
(Leibniz admits that counter-examples to this principle are conceivable but will not
occur in real life because God is much too benevolent.)
A: Please.
B: Please.
A: I insist.
B: So do I.
A: OK then, thank you.
B: You are most welcome.
A protocol for two Italians to pass through a door.
Source: Silvio Micali, 1985.
Bibliography
[14] M. Bellare, R. Canetti, and H. Krawczyk. Keying Hash Functions for Message Authen-
tication. In Crypto96, Springer Lecture Notes in Computer Science (Vol. 1109), 1996,
pages 1–15.
[15] M. Bellare, R. Canetti, and H. Krawczyk. Modular Approach to the Design and Analysis
of Authentication and Key Exchange Protocols. In 30th ACM Symposium on the Theory
of Computing, 1998, pages 419–428.
[16] M. Bellare, A. Desai, D. Pointcheval, and P. Rogaway. Relations among Notions of Security
for Public-Key Encryption Schemes. In Crypto98, Springer Lecture Notes in Computer
Science (Vol. 1462), 1998, pages 26–45.
[17] M. Bellare and O. Goldreich. On Defining Proofs of Knowledge. In Crypto92, Springer-
Verlag Lecture Notes in Computer Science (Vol. 740), 1992, pages 390–420.
[18] M. Bellare, O. Goldreich, and S. Goldwasser. Incremental Cryptography: The Case of
Hashing and Signing. In Crypto94, Springer-Verlag Lecture Notes in Computer Science
(Vol. 839), 1994, pages 216–233.
[19] M. Bellare, O. Goldreich, and S. Goldwasser. Incremental Cryptography and Applica-
tion to Virus Protection. In 27th ACM Symposium on the Theory of Computing, 1995,
pages 45–56.
[20] M. Bellare, O. Goldreich, and H. Krawczyk. Stateless Evaluation of Pseudorandom Func-
tions: Security Beyond the Birthday Barrier. In Crypto99, Springer Lecture Notes in
Computer Science (Vol. 1666), 1999, pages 270–287.
[21] M. Bellare and S. Goldwasser. New Paradigms for Digital Signatures and Message Authen-
tication Based on Non-Interactive Zero-Knowledge Proofs. In Crypto89, Springer-Verlag
Lecture Notes in Computer Science (Vol. 435), 1990, pages 194–211.
[22] M. Bellare, R. Guerin, and P. Rogaway. XOR MACs: New Methods for Message Authenti-
cation Using Finite Pseudorandom Functions. In Crypto95, Springer-Verlag Lecture Notes
in Computer Science (Vol. 963), 1995, pages 15–28.
[23] M. Bellare, S. Halevi, A. Sahai, and S. Vadhan. Trapdoor Functions and Public-Key
Cryptosystems. In Crypto98, Springer Lecture Notes in Computer Science (Vol. 1462),
1998, pages 283–298.
[24] M. Bellare, R. Impagliazzo, and M. Naor. Does Parallel Repetition Lower the Error in
Computationally Sound Protocols? In 38th IEEE Symposium on Foundations of Computer
Science, 1997, pages 374–383.
[25] M. Bellare, J. Kilian, and P. Rogaway. The Security of Cipher Block Chaining. In Crypto94,
Springer-Verlag Lecture Notes in Computer Science (Vol. 839), 1994, pages 341–358.
[26] M. Bellare and S. Micali. How to Sign Given Any Trapdoor Function. Journal of the ACM,
Vol. 39, 1992, pages 214–233.
[27] D. Beaver, S. Micali, and P. Rogaway. The Round Complexity of Secure Protocols. In
22nd ACM Symposium on the Theory of Computing, 1990, pages 503–513.
[28] M. Bellare and P. Rogaway. Random Oracles Are Practical: A Paradigm for Designing
Efficient Protocols. In 1st Conf. on Computer and Communications Security, ACM, 1993,
pages 62–73.
[29] M. Bellare and P. Rogaway. Entity Authentication and Key Distribution. In Crypto93,
Springer-Verlag Lecture Notes in Computer Science (Vol. 773), 1994, pages 232–249.
[30] M. Bellare and P. Rogaway. Provably Secure Session Key Distribution: The Three Party
Case. In 27th ACM Symposium on the Theory of Computing, 1995, pages 57–66.
[31] M. Bellare and P. Rogaway. The Exact Security of Digital Signatures: How to Sign with
RSA and Rabin. In EuroCrypt96, Springer Lecture Notes in Computer Science (Vol. 1070),
1996, pages 399–416.
[32] M. Bellare and M. Yung. Certifying Permutations: Noninteractive Zero-Knowledge Based
on Any Trapdoor Permutation. Journal of Cryptology, Vol. 9, 1996, pages 149–166.
[51] R. Canetti. Universally Composable Security: A New Paradigm for Cryptographic Proto-
cols. In 42nd IEEE Symposium on Foundations of Computer Science, 2001, pages 136–
145. Full version (with different title) is available from Cryptology ePrint Archive, Report
2000/067.
[52] R. Canetti, I. Damgård, S. Dziembowski, Y. Ishai, and T. Malkin. On Adaptive Versus
Non-Adaptive Security of Multiparty Protocols. Journal of Cryptology, forthcoming.
[53] R. Canetti, U. Feige, O. Goldreich, and M. Naor. Adaptively Secure Multiparty Compu-
tation. In 28th ACM Symposium on the Theory of Computing, 1996, pages 639–648.
[54] R. Canetti, O. Goldreich, and S. Halevi. The Random Oracle Methodology, Revisited. In
30th ACM Symposium on the Theory of Computing, 1998, pages 209–218.
[55] R. Canetti, O. Goldreich, S. Goldwasser, and S. Micali. Resettable Zero-Knowledge. In
32nd ACM Symposium on the Theory of Computing, 2000, pages 235–244.
[56] R. Canetti, S. Halevi, and A. Herzberg. How to Maintain Authenticated Communication
in the Presence of Break-Ins. Journal of Cryptology, Vol. 13, No. 1, 2000, pages 61–106.
[57] R. Canetti and A. Herzberg. Maintaining Security in the Presence of Transient Faults. In
Crypto94, Springer-Verlag Lecture Notes in Computer Science (Vol. 839), 1994, pages
425–439.
[58] R. Canetti, J. Kilian, E. Petrank, and A. Rosen. Black-Box Concurrent Zero-Knowledge
Requires Ω̃(log n) Rounds. In 33rd ACM Symposium on the Theory of Computing, 2001,
pages 570–579.
[59] R. Canetti, Y. Lindell, R. Ostrovsky, and A. Sahai. Universally Composable Two-Party and
Multi-Party Secure Computation. In 34th ACM Symposium on the Theory of Computing,
2002, pages 494–503.
[60] L. Carter and M. Wegman. Universal Hash Functions. Journal of Computer and System
Science, Vol. 18, 1979, pages 143–154.
[61] D. Chaum. Blind Signatures for Untraceable Payments. In Crypto82. New York: Plenum
Press, 1983, pages 199–203.
[62] D. Chaum, C. Crépeau, and I. Damgård. Multi-party Unconditionally Secure Protocols.
In 20th ACM Symposium on the Theory of Computing, 1988, pages 11–19.
[63] B. Chor, S. Goldwasser, S. Micali, and B. Awerbuch. Verifiable Secret Sharing and Achiev-
ing Simultaneity in the Presence of Faults. In 26th IEEE Symposium on Foundations of
Computer Science, 1985, pages 383–395.
[64] B. Chor and E. Kushilevitz. A Zero-One Law for Boolean Privacy. SIAM J. on Disc. Math.,
Vol. 4, 1991, pages 36–47.
[65] R. Cleve. Limits on the Security of Coin Flips When Half the Processors Are Faulty. In
18th ACM Symposium on the Theory of Computing, 1986, pages 364–369.
[66] J. D. Cohen and M. J. Fischer. A Robust and Verifiable Cryptographically Secure Election
Scheme. In 26th IEEE Symposium on Foundations of Computer Science, 1985, pages
372–382.
[67] R. Cramer and I. Damgård. New Generation of Secure and Practical RSA-Based Signa-
tures. In Crypto96, Springer Lecture Notes in Computer Science (Vol. 1109), 1996, pages
173–185.
[68] R. Cramer and V. Shoup. A Practical Public-Key Cryptosystem Provably Secure Against
Adaptive Chosen Ciphertext Attacks. In Crypto98, Springer-Verlag Lecture Notes in Com-
puter Science (Vol. 1462), 1998, pages 13–25.
[69] C. Crépeau. Efficient Cryptographic Protocols Based on Noisy Channels. In EuroCrypt97,
Springer, Lecture Notes in Computer Science (Vol. 1233), 1997, pages 306–317.
[70] I. Damgård. Collision Free Hash Functions and Public Key Signature Schemes. In
EuroCrypt87, Springer-Verlag Lecture Notes in Computer Science (Vol. 304), 1988,
pages 203–216.
[71] I. Damgård. A Design Principle for Hash Functions. In Crypto89, Springer-Verlag Lecture
Notes in Computer Science (Vol. 435), 1990, pages 416–427.
[72] I. Damgård. Concurrent Zero-Knowledge Is Easy in Practice. Theory of Cryptography
Library, 99-14, June 1999. http://philby.ucsd.edu/cryptolib. See also “Efficient
Concurrent Zero-Knowledge in the Auxiliary String Model” (in Eurocrypt’00, 2000).
[73] A. De Santis, G. Di Crescenzo, R. Ostrovsky, G. Persiano, and A. Sahai. Robust Non-
interactive Zero-Knowledge. In Crypto01, Springer Lecture Notes in Computer Science
(Vol. 2139), 2001, pages 566–598.
[74] Y. Desmedt and Y. Frankel. Threshold Cryptosystems. In Crypto89, Springer-Verlag Lec-
ture Notes in Computer Science (Vol. 435), 1990, pages 307–315.
[75] W. Diffie and M. E. Hellman. New Directions in Cryptography. IEEE Trans. on Info.
Theory, IT-22, Nov. 1976, pages 644–654.
[76] H. Dobbertin. The Status of MD5 after a Recent Attack. In CryptoBytes, RSA Lab., Vol. 2,
No. 2, 1996, pages 1–6.
[77] D. Dolev, C. Dwork, and M. Naor. Non-Malleable Cryptography. In 23rd ACM Symposium
on the Theory of Computing, 1991, pages 542–552. Full version available from authors.
[78] D. Dolev, C. Dwork, O. Waarts, and M. Yung. Perfectly Secure Message Transmission.
Journal of the ACM, Vol. 40 (1), 1993, pages 17–47.
[79] D. Dolev and A. C. Yao. On the Security of Public-Key Protocols. IEEE Trans. on Inform.
Theory, Vol. 30, No. 2, 1983, pages 198–208.
[80] D. Dolev and H. R. Strong. Authenticated Algorithms for Byzantine Agreement. SIAM
Journal on Computing, Vol. 12, 1983, pages 656–666.
[81] C. Dwork and M. Naor. An Efficient Existentially Unforgeable Signature Scheme and Its
Application. Journal of Cryptology, Vol. 11 (3), 1998, pages 187–208.
[82] C. Dwork, M. Naor, and A. Sahai. Concurrent Zero-Knowledge. In 30th ACM Symposium
on the Theory of Computing, 1998, pages 409–418.
[83] S. Even and O. Goldreich. On the Security of Multi-party Ping-Pong Protocols. In 24th
IEEE Symposium on Foundations of Computer Science, 1983, pages 34–39.
[84] S. Even, O. Goldreich, and A. Lempel. A Randomized Protocol for Signing Contracts.
CACM, Vol. 28, No. 6, 1985, pages 637–647.
[85] S. Even, O. Goldreich, and S. Micali. On-line/Off-line Digital Signatures. Journal of
Cryptology, Vol. 9, 1996, pages 35–67.
[86] S. Even, A.L. Selman, and Y. Yacobi. The Complexity of Promise Problems with
Applications to Public-Key Cryptography. Information and Control, Vol. 61, 1984,
pages 159–173.
[87] S. Even and Y. Yacobi. Cryptography and NP-Completeness. In Proceedings of 7th ICALP,
Springer-Verlag Lecture Notes in Computer Science (Vol. 85), 1980, pages 195–207. See
[86].
[88] U. Feige, A. Fiat, and A. Shamir. Zero-Knowledge Proofs of Identity. Journal of Cryptol-
ogy, Vol. 1, 1988, pages 77–94.
[89] U. Feige, D. Lapidot, and A. Shamir. Multiple Non-Interactive Zero-Knowledge Proofs
under General Assumptions. SIAM Journal on Computing, Vol. 29 (1), 1999, pages 1–28.
[90] U. Feige and A. Shamir. Zero-Knowledge Proofs of Knowledge in Two Rounds.
In Crypto89, Springer-Verlag Lecture Notes in Computer Science (Vol. 435), 1990,
pages 526–544.
[91] U. Feige and A. Shamir. Witness Indistinguishability and Witness Hiding Protocols. In
22nd ACM Symposium on the Theory of Computing, 1990, pages 416–426.
[92] A. Fiat and A. Shamir. How to Prove Yourself: Practical Solution to Identification and
Signature Problems. In Crypto86, Springer-Verlag Lecture Notes in Computer Science
(Vol. 263), 1987, pages 186–189.
[154] R. C. Merkle and M. E. Hellman. Hiding Information and Signatures in Trapdoor Knap-
sacks. IEEE Trans. Inform. Theory, Vol. 24, 1978, pages 525–530.
[155] S. Micali, M. O. Rabin, and S. Vadhan. Verifiable Random Functions. In 40th IEEE
Symposium on Foundations of Computer Science, 1999, pages 120–130.
[156] S. Micali, C. Rackoff, and B. Sloan. The Notion of Security for Probabilistic Cryptosys-
tems. SIAM Journal on Computing, Vol. 17, 1988, pages 412–426.
[157] S. Micali and P. Rogaway. Secure Computation. In Crypto91, Springer-Verlag Lecture
Notes in Computer Science (Vol. 576), 1992, pages 392–404.
[158] D. Micciancio. Oblivious Data Structures: Applications to Cryptography. In 29th ACM
Symposium on the Theory of Computing, 1997, pages 456–464.
[159] National Bureau of Standards. Data Encryption Standard (DES). Federal Information
Processing Standards, Publ. 46, 1977.
[160] National Institute for Standards and Technology. Digital Signature Standard (DSS).
Federal Register, Vol. 56, No. 169, Aug. 1991.
[161] M. Naor. Bit Commitment Using Pseudorandom Generators. Journal of Cryptology,
Vol. 4, 1991, pages 151–158.
[162] M. Naor and O. Reingold. From Unpredictability to Indistinguishability: A Simple Con-
struction of Pseudorandom Functions from MACs. In Crypto98, Springer-Verlag Lecture
Notes in Computer Science (Vol. 1464), 1998, pages 267–282.
[163] M. Naor and M. Yung. Universal One-Way Hash Functions and their Crypto-
graphic Application. 21st ACM Symposium on the Theory of Computing, 1989,
pages 33–43.
[164] M. Naor and M. Yung. Public-Key Cryptosystems Provably Secure Against Chosen Ci-
phertext Attacks. In 22nd ACM Symposium on the Theory of Computing, 1990, pages
427–437.
[165] R. Ostrovsky, R. Venkatesan, and M. Yung. Secure Commitment Against Powerful Ad-
versary: A Security Primitive Based on Average Intractability. In Proceedings of the 9th
Symposium on Theoretical Aspects of Computer Science (STACS92), 1992, pages 439–
448.
[166] R. Ostrovsky and M. Yung. How to Withstand Mobile Virus Attacks. In 10th ACM Sym-
posium on Principles of Distributed Computing, 1991, pages 51–59.
[167] T. P. Pedersen and B. Pfitzmann. Fail-Stop Signatures. SIAM Journal on Computing,
Vol. 26, No. 2, 1997, pages 291–330. Based on several earlier works (see first footnote in
the paper).
[168] B. Pfitzmann. Digital Signature Schemes (General Framework and Fail-Stop Signatures).
Springer-Verlag Lecture Notes in Computer Science (Vol. 1100), 1996.
[169] M. Prabhakaran, A. Rosen, and A. Sahai. Concurrent Zero-Knowledge Proofs in Logarith-
mic Number of Rounds. In 43rd IEEE Symposium on Foundations of Computer Science,
2002, pages 366–375.
[170] M. O. Rabin. Digitalized Signatures. In Foundations of Secure Computation, R. A.
DeMillo et al., eds. New York: Academic Press, 1977, pages 155–168.
[171] M. O. Rabin. Digitalized Signatures and Public Key Functions as Intractable as Factoring.
TR-212, LCS, MIT, 1979.
[172] M. O. Rabin. How to Exchange Secrets by Oblivious Transfer. Tech. Memo TR-81, Aiken
Computation Laboratory, Harvard University, 1981.
[173] T. Rabin and M. Ben-Or. Verifiable Secret Sharing and Multi-party Protocols with Honest
Majority. In 21st ACM Symposium on the Theory of Computing, 1989, pages 73–85.
[174] C. Rackoff and D. R. Simon. Non-Interactive Zero-Knowledge Proof of Knowledge and
Chosen Ciphertext Attack. In Crypto91, Springer Verlag Lecture Notes in Computer
Science (Vol. 576), 1991, pages 433–444.
Index
Chinese Remainder Theorem, 421
Claw-free pairs. See One-way permutations
Collision-free hashing. See Hashing
Collision-resistant hashing. See Hashing
Commitment schemes
  non-oblivious, 771
  perfectly binding, 465–469
Computational indistinguishability, 382, 395–402, 446, 447–449, 457, 465, 467–468, 479, 618, 770
  by circuits, 382–393, 412, 417, 419, 431, 454, 618
Cryptographic protocols, 599–764
  active adversary, 603
  adaptive adversary, 603, 748–751
  Authenticated Computation, 664–668, 671–674, 717–722
  Coin-tossing, 659–664, 674–677, 722–725
  Communication models, 602–603
  Computational limitations, 603
  Concurrent executions, 752–755
  Definitional approach, 601–607
  Definitions, 615–634, 694–700, 742–743, 749, 752–754
  Environmentally-secure, 753–755
  Fairness, 604, 747–748
  functionality, 599
  General circuit evaluation, 645–648, 705–707
  honest-but-curious adversary, 603
  Image Transmission, 668–671, 672, 718–721
  Input-Commitment, 677–680, 725–726
  Multi-party, 599, 600, 604–606, 607–609, 610–611, 613–615, 693–747, 755
  non-adaptive adversary, 603
  number of dishonest parties, 604
  Oblivious Transfer, 612, 614, 635, 640–645
  Oracle-aided, 636, 639, 644, 646, 652, 672, 674, 678, 681, 701, 704, 715, 718, 721, 722, 726, 729, 737
  Overview, 599–615
  passive adversary, 603
  Privacy reductions, 635–640, 643, 644, 647, 648, 701–703, 704
  Private Channels, 741–747
  Pure oracle-aided, 721–722
  Reactive, 609, 751–752
  Secret Broadcast, 716–717, 718, 722
  Security reductions, 652–657, 673, 675, 677, 678, 714–716, 719, 721, 723
  Setup assumptions, 602, 608, 755
  The malicious model, 600, 603, 608, 610–611, 626, 634, 650–693, 697–700, 708–741, 746–747
  The semi-honest model, 600, 603, 608, 610–615, 619, 626, 634–650, 696, 697, 700–708, 743–746
  Two-party, 599, 600, 606–607, 608, 611–613, 615–693, 755
  Universally Composable, 753
  Verifiable Secret Sharing. See Secret Sharing
Discrete Logarithm Problem. See DLP function
DLP function, 584
Encryption schemes, 373–496
  active attacks, 422–425, 431–474
  asymmetric, 376
  Basic Setting, 374–377
  Block-Ciphers, 408–418, 420
  chosen ciphertext attacks, 423, 438–469, 472–474
  chosen plaintext attacks, 423, 431–438
  Definitions, 378–403
  indistinguishability of encryptions, 378, 382–383, 403, 412, 415, 417, 419, 424, 432, 459, 461, 479
  multiple messages, 378, 389–393, 394–402, 429, 437–438, 443–449, 489
  non-malleability, 422, 470–474
  passive attacks, 422, 425–431
  perfect privacy, 378, 476–477
  perfect security, 476–477
  Private-Key, 375–376, 377, 380, 381, 404–408, 410–413
  Probabilistic Encryption, 404, 410–422
  Public-Key, 376, 377, 380, 381, 413–422
  Randomized RSA, 416–417, 478
  Semantic Security, 378, 379–382, 478
  Stream-Ciphers, 404–408
  symmetric, 375
  The Blum-Goldwasser, 420–422, 478
  the mechanism, 376–377
  uniform-complexity treatment, 393–403
Factoring integers, 421, 584
Hard-core predicates. See One-way permutations
Hash and Sign. See Techniques