
Einführung in die Kryptographie (Introduction to Cryptography)

Probability, Entropy and Notions of Secrecy

Short Detour
Introduction to Probability

Example: Deck of Cards

• Assume a deck of 52 cards with random variables S, C, and V denoting suit, color, and value

• Examples:
  – p(S = Clubs) = 1/4
  – p(C = Red) = 1/2
  – p(V = Ace) = 1/13

Note: the suits are Clubs and Spades (black) and Hearts and Diamonds (red)

Joint Probability

• Let X and Y be discrete random variables. The joint probability
  p(X = x, Y = y) is the probability that X = x and Y = y

  Examples:
  – p(C = Red, S = Clubs) = 0          p(C = Red, V = Ace) = 1/26
  – p(V = Ace, S = Clubs) = 1/52       p(C = Red, S = Hearts) = 1/4

• Random variables X and Y are independent iff
  p(X = x, Y = y) = p(X = x) · p(Y = y)

  Examples: C and V are independent, and S and V are independent
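These probabilities can be checked by brute force. Below is a minimal Python sketch (not part of the original slides; the names deck, color, and p are ours) that enumerates all 52 cards and verifies the joint probabilities and the independence of C and V:

    from fractions import Fraction
    from itertools import product

    suits = ["Clubs", "Spades", "Hearts", "Diamonds"]
    values = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
    color = {"Clubs": "Black", "Spades": "Black", "Hearts": "Red", "Diamonds": "Red"}
    deck = list(product(suits, values))              # 52 equally likely cards

    def p(event):
        """Probability of an event under the uniform distribution on the deck."""
        return Fraction(sum(1 for s, v in deck if event(s, v)), len(deck))

    print(p(lambda s, v: color[s] == "Red" and s == "Clubs"))    # 0
    print(p(lambda s, v: color[s] == "Red" and v == "A"))        # 1/26
    print(p(lambda s, v: v == "A" and s == "Clubs"))             # 1/52

    # C and V are independent: p(C = Red, V = Ace) = p(C = Red) * p(V = Ace)
    assert p(lambda s, v: color[s] == "Red" and v == "A") == \
           p(lambda s, v: color[s] == "Red") * p(lambda s, v: v == "A")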

Conditional Probability

• Let X and Y be discrete random variables. The conditional probability
  p(X = x | Y = y) is the probability that X = x given that Y = y, i.e.

  p(X = x | Y = y) = p(X = x, Y = y) / p(Y = y)

• Bayes' Theorem

  p(X = x | Y = y) = p(Y = y | X = x) · p(X = x) / p(Y = y)

Proof:
  p(X = x | Y = y) · p(Y = y) = p(X = x, Y = y) = p(Y = y | X = x) · p(X = x)

Conditional Probability (Bayes Theorem)

Examples:

• p(S = Spades | C = Red) = p(S = Spades, C = Red) / p(C = Red) = 0 / 0.5 = 0

• p(V = Ace | C = Black) = p(V = Ace, C = Black) / p(C = Black) = (1/26) / 0.5 = 1/13

• p(V = Ace | S = Spades) = p(V = Ace, S = Spades) / p(S = Spades) = (1/52) / 0.25 = 1/13

Summary: Probability

Let S be a finite / countable set of values

• p(X = s) denotes the probability (between 0 and 1) that X = s
  p(X = s) = 1 means X is always s; p(X = s) = 0 means X is never s

• Joint probability
  p(X = x, Y = y) is the probability that X = x and Y = y
  X and Y are independent iff p(X = x, Y = y) = p(X = x) · p(Y = y)

• Conditional probability
  the probability that X = x given Y = y: p(X = x | Y = y) = p(X = x, Y = y) / p(Y = y)

• Bayes' Theorem
  p(X = x | Y = y) = p(Y = y | X = x) · p(X = x) / p(Y = y)

Example: Toy Cipher

P = {a, b, c, d},  C = {1, 2, 3, 4},  K = {k1, k2, k3}

Probability of plaintexts
  m          a      b      c      d
  p(P = m)   1/4    3/10   3/20   3/10

Probability of keys
  k          k1     k2     k3
  p(K = k)   1/4    1/2    1/4

Encryption enc_k(x)
  enc   a   b   c   d
  k1    3   4   2   1
  k2    3   1   4   2
  k3    4   3   1   2
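As a running example, here is a minimal Python sketch of this toy cipher (not from the original slides; the names pP, pK, enc, and dec are ours), using exact fractions so the later probability calculations come out exactly:

    from fractions import Fraction as F

    pP = {"a": F(1, 4), "b": F(3, 10), "c": F(3, 20), "d": F(3, 10)}   # p(P = m)
    pK = {"k1": F(1, 4), "k2": F(1, 2), "k3": F(1, 4)}                 # p(K = k)
    enc = {  # enc[k][m] = c, copied from the encryption table above
        "k1": {"a": 3, "b": 4, "c": 2, "d": 1},
        "k2": {"a": 3, "b": 1, "c": 4, "d": 2},
        "k3": {"a": 4, "b": 3, "c": 1, "d": 2},
    }
    dec = {k: {c: m for m, c in t.items()} for k, t in enc.items()}    # invert per key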

Ciphertexts

• enc_k(m): encryption of m with key k

• dec_k(c): decryption of c with key k

• Ciphertexts under key k:
  Ct(k) = { enc_k(m) | m ∈ P }

• Probability that a specific ciphertext c occurs:
  p(C = c) = Σ_{k ∈ K, c ∈ Ct(k)} p(K = k) · p(P = dec_k(c))

Distribution of Ciphertexts
Probability that a specific ciphertext c occurs:

  p(C = c) = Σ_{k ∈ K, c ∈ Ct(k)} p(K = k) · p(P = dec_k(c))

Example: Toy Cipher (given the distributions for P, K and C)

  p(C = 1) = p(K = k1) · p(P = d) + p(K = k2) · p(P = b) + p(K = k3) · p(P = c)
           = 1/4 · 3/10 + 1/2 · 3/10 + 1/4 · 3/20 = 6/80 + 12/80 + 3/80 = 21/80 = 0.2625
  p(C = 2) = p(K = k1) · p(P = c) + p(K = k2) · p(P = d) + p(K = k3) · p(P = d) = 0.2625
  p(C = 3) = p(K = k1) · p(P = a) + p(K = k2) · p(P = a) + p(K = k3) · p(P = b) = 0.2625
  p(C = 4) = p(K = k1) · p(P = b) + p(K = k2) · p(P = c) + p(K = k3) · p(P = a) = 0.2125

(the three summands are the shares contributed by using k1, k2, and k3)

  c           1      2      3      4
  p(C = c)   21/80  21/80  21/80  17/80
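Continuing the Python sketch from above, the ciphertext distribution falls directly out of the formula (p_C is our name):

    def p_C(c):
        """p(C = c): sum p(K = k) * p(P = dec_k(c)) over keys that can produce c."""
        return sum(pK[k] * pP[dec[k][c]] for k in pK if c in dec[k])

    print([p_C(c) for c in (1, 2, 3, 4)])    # 21/80, 21/80, 21/80, 17/80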
Probabilities of Ciphertexts for Fixed Plaintext

Probability of a specific ciphertext c given a plaintext m:

  p(C = c | P = m) = Σ_{k ∈ K, enc_k(m) = c} p(K = k)

Example: Toy Cipher (given the distributions for P, K and C)

  e.g. p(C = 3 | P = a) = p(K = k1) + p(K = k2) = 1/4 + 1/2 = 3/4

  p(C = 1 | P = a) = 0       p(C = 3 | P = a) = 0.75
  p(C = 2 | P = a) = 0       p(C = 4 | P = a) = 0.25

  p(C = 1 | P = b) = 0.5     p(C = 3 | P = b) = 0.25
  p(C = 2 | P = b) = 0       p(C = 4 | P = b) = 0.25

  p(C = 1 | P = c) = 0.25    p(C = 3 | P = c) = 0
  p(C = 2 | P = c) = 0.25    p(C = 4 | P = c) = 0.5

  p(C = 1 | P = d) = 0.25    p(C = 3 | P = d) = 0
  p(C = 2 | P = d) = 0.75    p(C = 4 | P = d) = 0

  m                   a     b     c     d
  p(C = 1 | P = m)    0     1/2   1/4   1/4
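Continuing the sketch, this conditional distribution sums the key probabilities over exactly those keys that encrypt m to c (p_C_given_P is our name):

    def p_C_given_P(c, m):
        """p(C = c | P = m): total probability of the keys with enc_k(m) = c."""
        return sum(pK[k] for k in pK if enc[k][m] == c)

    print([p_C_given_P(1, m) for m in "abcd"])    # 0, 1/2, 1/4, 1/4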
Probabilities of Plaintext for a Fixed Ciphertext

Probability of a specific plaintext m given a ciphertext c:

  p(P = m | C = c) = p(P = m) · p(C = c | P = m) / p(C = c)

Example: Toy Cipher (given the distributions for P, K and C)

  e.g. p(P = b | C = 1) = 3/10 · 1/2 · 80/21 = 4/7 ≈ 0.571

  p(P = a | C = 1) = 0       p(P = b | C = 1) ≈ 0.571
  p(P = c | C = 1) ≈ 0.143   p(P = d | C = 1) ≈ 0.286

  p(P = a | C = 2) = 0       p(P = b | C = 2) = 0
  p(P = c | C = 2) ≈ 0.143   p(P = d | C = 2) ≈ 0.857

  p(P = a | C = 3) ≈ 0.714   p(P = b | C = 3) ≈ 0.286
  p(P = c | C = 3) = 0       p(P = d | C = 3) = 0

  p(P = a | C = 4) ≈ 0.294   p(P = b | C = 4) ≈ 0.353
  p(P = c | C = 4) ≈ 0.353   p(P = d | C = 4) = 0

  m                   a     b     c     d
  p(P = m | C = 1)    0     4/7   1/7   2/7
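Bayes' theorem turns the two distributions computed so far into the attacker's posterior over plaintexts; continuing the sketch (p_P_given_C is our name):

    def p_P_given_C(m, c):
        """p(P = m | C = c) via Bayes' theorem."""
        return pP[m] * p_C_given_P(c, m) / p_C(c)

    print([p_P_given_C(m, 1) for m in "abcd"])    # 0, 4/7, 1/7, 2/7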
Observations about Ciphertexts

• If we see the ciphertext 1, then we know the message is not equal to a. We can also guess that it is more likely to be b than c or d.

• If we see the ciphertext 2, then we know the message is not equal to a or b, and it is quite likely that the message is equal to d.

• If we see the ciphertext 3, then we know the message is not equal to c or d, and there is a good chance that it is equal to a.

• If we see the ciphertext 4, then we know the message is not equal to d, but we cannot really guess with confidence whether the message is a, b or c.

A lot of information is revealed about the plaintext just by looking at the probabilities!

Back to the main question:
What is secrecy?

Lessons (to be) Learned so far

• The ciphertexts of the toy example disclose information about the possible plaintexts

• An attacker may collect such information over time and may eventually be able to decode some ciphertexts

• What requirements must a cipher meet so that an attacker cannot draw any conclusions about the plaintext or key?

• In other words: what is "perfect" secrecy?

Perfect Secrecy

A cryptosystem has perfect secrecy iff the probability of a particular plaintext is independent of the observed ciphertext.

Definition:
A cryptosystem has perfect secrecy iff
  ∀m ∈ P, c ∈ C. p(P = m | C = c) = p(P = m)

Lemma:
A cryptosystem has perfect secrecy iff
  p(C = c | P = m) = p(C = c)

Proof (one direction, using Bayes' theorem and perfect secrecy):
  p(C = c | P = m) = p(C = c) · p(P = m | C = c) / p(P = m)
                   = p(C = c) · p(P = m) / p(P = m) = p(C = c)
(the converse direction follows symmetrically by Bayes' theorem)
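Perfect secrecy is easy to test exhaustively on small systems; continuing the sketch, the toy cipher fails the definition:

    # Perfect secrecy requires p(P = m | C = c) == p(P = m) for all pairs.
    print(all(p_P_given_C(m, c) == pP[m]
              for m in "abcd" for c in (1, 2, 3, 4)))    # False: not perfectly secret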

Perfect Secrecy (II)

Perfect secrecy requires at least as many keys as ciphertexts:

Lemma:
If a cryptosystem has perfect secrecy, then #K ≥ #C ≥ #P

Proof:
• #C ≥ #P since encryption under a fixed key is injective
• Assume ∀c ∈ C. p(C = c) > 0. Then perfect secrecy gives
  ∀m ∈ P, c ∈ C. p(C = c | P = m) = p(C = c) > 0, hence
  ∀m ∈ P, c ∈ C. ∃k ∈ K. enc_k(m) = c
• Fix one plaintext m: every ciphertext c needs some key with enc_k(m) = c, and distinct ciphertexts need distinct keys, which implies #K ≥ #C

Perfect Secrecy (III)

Theorem (Shannon)

Let (P, C, K, enc_k, dec_k) be a cryptosystem with #K = #C = #P.
Then the cryptosystem provides perfect secrecy iff

(1) every key is used with equal probability 1/#K, and
(2) ∀m ∈ P, c ∈ C. ∃! k ∈ K. enc_k(m) = c,
    i.e., for every pair (m, c) there is exactly one key k which encrypts m to c.

Proof: next slides …

Shannon's Theorem (Proof ⇒)

Perfect secrecy implies conditions (1) and (2):

(2) By the previous lemma we know ∀m ∈ P, c ∈ C. ∃k ∈ K. enc_k(m) = c.
    Since #K = #C = #{enc_k(m) | k ∈ K}, enc_{k1}(m) = enc_{k2}(m) implies k1 = k2.
    Thus ∀m ∈ P, c ∈ C. ∃! k ∈ K. enc_k(m) = c.

(1) Let P = {m_1, …, m_n}, c ∈ C, and K = {k_1, …, k_n} with enc_{k_i}(m_i) = c.

    ∀i. p(P = m_i) = p(P = m_i | C = c) = p(C = c | P = m_i) · p(P = m_i) / p(C = c)
                   = p(K = k_i) · p(P = m_i) / p(C = c)

    Hence ∀i. p(C = c) = p(K = k_i), i.e., the key probability is independent of i.

    Thus the keys are used with equal probability: ∀i. p(K = k_i) = 1/n

Shannon's Theorem (Proof ⇐)

Conditions (1) and (2) imply perfect secrecy:

Since each key has equal probability, we know:

  p(C = c) = Σ_k p(K = k) · p(P = dec_k(c)) = (1/#K) · Σ_k p(P = dec_k(c))

Since there is a unique key k with enc_k(m) = c for every pair (m, c), the map k ↦ dec_k(c) is a bijection between K and P, so:

  Σ_k p(P = dec_k(c)) = Σ_m p(P = m) = 1

Hence p(C = c) = 1/#K.

Further, c = enc_k(m) implies p(C = c | P = m) = p(K = k) = 1/#K.

Hence:
  p(P = m | C = c) = p(C = c | P = m) · p(P = m) / p(C = c)
                   = (1/#K) · p(P = m) / (1/#K) = p(P = m)
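Shannon's conditions can also be checked concretely. The sketch below (our construction, not from the slides) builds a shift cipher over Z_4 with uniform keys, which satisfies #K = #C = #P and the unique-key condition, and confirms perfect secrecy for the same plaintext distribution as the toy cipher:

    msgs = [0, 1, 2, 3]
    pP4 = dict(zip(msgs, [F(1, 4), F(3, 10), F(3, 20), F(3, 10)]))
    pK4 = {k: F(1, 4) for k in msgs}           # condition (1): uniform keys

    def pC4(c):                                # p(C = c); enc_k(m) = (m + k) mod 4
        return sum(pK4[k] * pP4[(c - k) % 4] for k in msgs)

    def post4(m, c):                           # p(P = m | C = c) via Bayes
        return pP4[m] * sum(pK4[k] for k in msgs if (m + k) % 4 == c) / pC4(c)

    print(all(post4(m, c) == pP4[m] for m in msgs for c in msgs))    # True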

One-Time Pad (the perfectly secure system)

Definition
Encryption: c = enc_k(m) = m ⊕ k with len(k) = len(m)
Decryption: dec_k(c) = c ⊕ k = (m ⊕ k) ⊕ k = m ⊕ 0 = m

Example:
Let k = 0111 0010 and m = 1001 1011, then c = 1110 1001

The one-time pad is impractical:
the key distribution problem is as hard as the original problem!
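A minimal byte-level sketch of the one-time pad (the function names are ours); note that the key must be uniformly random, as long as the message, and never reused:

    import secrets

    def otp_encrypt(m: bytes, k: bytes) -> bytes:
        assert len(k) == len(m)            # the pad must be as long as the message
        return bytes(a ^ b for a, b in zip(m, k))

    otp_decrypt = otp_encrypt              # XOR is its own inverse

    m = b"attack at dawn"
    k = secrets.token_bytes(len(m))        # fresh uniform key, used only once
    c = otp_encrypt(m, k)
    assert otp_decrypt(c, k) == m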

Computational Security

• Unconditional (perfect) secrecy
  – The cipher cannot be broken regardless of the effort spent on breaking it
  – The ciphertext does not provide any additional information about the plaintext: p(P = m | C = c) = p(P = m)
  – But the one-time pad is the only (and impractical) example

• Computational secrecy
  – The cost of breaking the cipher is greater than the value of the encrypted message
  – The time to break the cipher is longer than the useful lifetime of the encrypted message
  – The probability that an attacker can distinguish a ciphertext from an arbitrary bitstring with reasonable effort (i.e., in polynomial space/time) is very low

Three Shades of Security

• Perfect security (Shannon)
  – information-theoretically secure against attackers with unbounded computing power

• Semantic security
  – secure against attackers with polynomially bounded computing power, i.e., such an attacker cannot gain any advantage in guessing the plaintext from knowing the ciphertext

• Indistinguishability security
  – an attacker with polynomially bounded computing power cannot tell which of two alternative plaintexts a ciphertext belongs to

Next week: security games

Next Detour
Entropy

Entropy (in Computer Science)
Entropy is a measure of uncertainty:
it measures how much information is contained in an event.

We measure the amount of information of an answer of probability p by

  log2(1/p) = −log2(p)

i.e., the number of bits necessary to represent a selection from 1/p equally likely events.

Definition:
Let X be a discrete random variable with distribution p(X = x_i).
Then the entropy H(X) is defined as:

  H(X) = − Σ_{i=1}^{n} p_i · log2(p_i)   with p_i = p(X = x_i)

Notice: if p_i = 0 then p_i · log2(p_i) = 0 by convention

(the slide shows a plot of f(x) = x · log2(x))
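The definition translates directly into code; here is a minimal sketch (H is our name) that also reproduces the plaintext entropy of the toy cipher used later:

    from math import log2

    def H(dist):
        """Shannon entropy in bits; 0 * log2(0) is taken as 0 by convention."""
        return -sum(p * log2(p) for p in dist if p > 0)

    print(H([0.5, 0.5]))                   # 1.0: one fair yes/no question
    print(H([1.0]))                        # 0.0: the answer is always the same
    print(H([1/4, 3/10, 3/20, 3/10]))      # ~1.9527: H(P) of the toy cipher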

Joint and Conditional Entropy

Definitions
Joint entropy of observing pairs (x, y) (or in general tuples (x, y, z, …)):

  H(X, Y) = − Σ_X Σ_Y p(X = x, Y = y) · log2(p(X = x, Y = y))

Case-based conditional entropy:

  H(X | Y = y) = − Σ_X p(X = x | Y = y) · log2(p(X = x | Y = y))

Conditional entropy ("equivocation", the uncertainty about X given Y):

  H(X | Y) = Σ_Y p(Y = y) · H(X | Y = y)
           = − Σ_Y Σ_X p(Y = y) · p(X = x | Y = y) · log2(p(X = x | Y = y))

Some Properties of Entropy

• H(X) ≥ 0             the minimal entropy is 0

• H(X) = 0             the answer is always the same

• 0 < H(X) ≤ log2(n)   if there are n different possible answers

• H(X) = log2(n)       iff all n answers have the same probability

• H(X) = 1             for a yes/no question with equally likely answers

• H(X, Y) = H(Y) + H(X | Y)

• H(X | Y) ≤ H(X), with H(X | Y) = H(X) iff X and Y are independent

(the chain rule is checked on the toy cipher below)
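Continuing the sketch, the chain rule H(X, Y) = H(Y) + H(X | Y) can be verified numerically for X = P and Y = C in the toy cipher:

    joint = [pP[m] * p_C_given_P(c, m) for m in "abcd" for c in (1, 2, 3, 4)]
    HC = H([p_C(c) for c in (1, 2, 3, 4)])
    H_P_given_C = sum(p_C(c) * H([p_P_given_C(m, c) for m in "abcd"])
                      for c in (1, 2, 3, 4))
    print(abs(H(joint) - (HC + H_P_given_C)) < 1e-9)    # True: chain rule holds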

Entropy in Cryptographic Ciphers

H(P | K, C) = 0:
If you know the ciphertext and the key, then you know the plaintext.
This must hold, since otherwise decryption would not work correctly.

H(C | P, K) = 0:
If you know the plaintext and the key, then you know the ciphertext.
This holds for all ciphers we have seen so far and for all the block ciphers we shall see in later chapters.

However, modern encryption schemes used correctly do not have this last property, as many ciphertexts can correspond to the same plaintext.

Entropies of Keys and Ciphertexts

Definition
The key equivocation („Mehrdeutigkeit") H(K | C) is the amount of uncertainty left about the key after one ciphertext is revealed.

Theorem
Given the plaintext entropy H(P),
the key entropy H(K) = − Σ_K p(K = k_i) · log2(p(K = k_i)), and
the ciphertext entropy H(C) = − Σ_C p(C = c_i) · log2(p(C = c_i)),
then H(K | C) = H(P) + H(K) − H(C)

Proof:
Since H(C | P, K) = 0, H(P | K, C) = 0, and P, K are independent:
  H(P) + H(K) = H(P, K) = H(P, K) + H(C | P, K) = H(K, P, C)
              = H(K, C) + H(P | K, C) = H(K, C) = H(K | C) + H(C)
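Continuing the sketch, the theorem gives the key equivocation of the toy cipher from the three entropies (the values appear on the next slide):

    HP = H(pP.values())                      # ~1.9527
    HK = H(pK.values())                      # 1.5
    HC = H([p_C(c) for c in (1, 2, 3, 4)])   # ~1.9944
    print(HP + HK - HC)                      # ~1.4583 = H(K | C)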

Entropies in our Toy Cipher

Given P = {a, b, c, d}, K = {k1, k2, k3}, C = {1, 2, 3, 4} with the distributions above,

the entropies are: H(P) ≈ 1.9527, H(K) = 1.5, H(C) ≈ 1.9944,

which yields the key equivocation

  H(K | C) = H(P) + H(K) − H(C) ≈ 1.9527 + 1.5 − 1.9944 ≈ 1.4583

That means roughly 1.46 of the key's 1.5 bits of uncertainty remain after observing a single ciphertext: only about H(K) − H(K | C) ≈ 0.04 bits of information about the key are leaked.
