1. Identify the need for randomness in cryptographic primitives, and describe the
deterministic and non-deterministic generator approaches in detail.
The Need for Randomness:
Most cryptographic primitives take structured input and turn it into something that has no structure.
Many cryptographic primitives require sources of randomness in order to function securely.
Randomness is about ideas such as ‘unpredictability’ and ‘uncertainty’.
Non-deterministic Generators:
A non-deterministic generator is based on the randomness produced by physical phenomena and
therefore provides a source of ‘true randomness’. The source is very hard to control and replicate.
Hardware-based: Rely on the randomness of physical phenomena and require specialist
hardware. Examples include:
o Measurement of the time intervals involved in radioactive decay.
o Semiconductor thermal (Johnson) noise.
o Instability measurements of free running oscillators.
Software-based: Rely on physical phenomena detectable by the hardware in a computing
device. Examples include:
o Capture of keystroke timing.
o Outputs from a system clock.
o Capturing times between interrupts (like mouse clicks).
Disadvantages of Non-deterministic Generators:
1. They tend to be expensive to implement.
2. It is essentially impossible to produce two identical strings of true randomness in two
different places.
Deterministic Generators:
A deterministic generator is an algorithm that outputs a pseudorandom bit string that has no
apparent structure. Its output is not truly random; if anyone knows the secret input (the seed), they
can completely predict the output.
The Seed: This is the secret information input into the generator, essentially a cryptographic
key. It must be protected and changed frequently.
The Generator: This is the cryptographic algorithm that produces the pseudorandom output
from the seed.
Advantages of Deterministic Generators:
1. They are cheap to implement and fast to run.
2. Two identical pseudorandom outputs can be produced in two different locations if they use
the same generator and the same seed.
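The second advantage above can be sketched in Python. This is an illustrative toy built from iterated SHA-256, not a vetted DRBG; the seed values and output length are made up for the example:

```python
import hashlib

def prng_bytes(seed: bytes, n_bytes: int) -> bytes:
    """Toy deterministic generator: SHA-256 in counter mode over a seed.
    Real systems use standardised DRBGs; this only shows the principle."""
    out = b""
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]

# Same generator + same seed in two different places -> identical output
assert prng_bytes(b"shared secret seed", 32) == prng_bytes(b"shared secret seed", 32)
# Different seeds -> unrelated pseudorandom output
assert prng_bytes(b"seed A", 32) != prng_bytes(b"seed B", 32)
```

This also makes the main weakness visible: anyone who learns the seed can regenerate the entire output, which is why the seed must be protected like a cryptographic key.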
2. Demonstrate how a dynamic password scheme can be applied to enhance authentication security
in a real-world login system.
A dynamic password scheme, also known as a one-time password scheme, enhances security by
limiting the exposure of the password and using it to generate dynamic data that changes on each
authentication attempt.
Implementation in a Login System:
1. Components:
o User: Has a smart token (e.g., a key fob or mobile app) that implements a password
function and a PIN to activate the token.
o Authentication Server: Shares a secret key K with the user's token and uses the
same password function (algorithm A).
2. Process:
o Step 1: The user initiates a login. The server randomly generates a challenge and
sends it to the user.
o Step 2: The user authenticates themselves to their smart token by entering their PIN.
o Step 3: If the PIN is correct, the token is activated. The user enters the server's
challenge into the token via its keypad.
o Step 4: The token uses the password function (e.g., an encryption algorithm A with
key K) to compute a response to the challenge. The token displays this response.
o Step 5: The user reads the response from the token and sends it back to the server.
o Step 6: The server checks that the challenge is still valid. It then inputs the same
challenge into the same password function (algorithm A and key K) to compute the
expected response.
o Step 7: The server compares the response it computed with the response sent by
the user. If they match, the user is authenticated.
Security Enhancements:
Two-Factor Authentication: Requires something the user has (the token) and something
they know (the PIN).
Dynamic Responses: Each authentication attempt uses a different challenge, so the response
is unique and cannot be replayed.
Local PIN Use: The PIN is only entered into the user's own token, not transmitted over a
network, reducing the risk of it being stolen.
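The challenge-response steps above can be sketched in Python. This is a minimal sketch, assuming HMAC-SHA256 as the password function (algorithm A) and an invented shared key; real tokens use vendor-specific algorithms:

```python
import hmac, hashlib, secrets

K = b"secret key shared by token and server"   # hypothetical shared key

def password_function(key: bytes, challenge: bytes) -> str:
    """Algorithm A: keyed one-way function, truncated for display."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()[:8]

# Step 1: server generates a fresh random challenge
challenge = secrets.token_bytes(16)

# Steps 2-5: after PIN unlock, the token computes and displays the response
response = password_function(K, challenge)

# Steps 6-7: server computes the expected response and compares
expected = password_function(K, challenge)
assert hmac.compare_digest(response, expected)   # user authenticated
```

Because each run uses a fresh `challenge`, an attacker who records one `response` cannot replay it against a later challenge.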
3. Make use of the Diffie-Hellman protocol to identify which of the typical AKE protocol
security goals it meets.
The Diffie-Hellman protocol establishes a shared secret Z_AB = g^(ab) mod p but does not meet all
the goals of a typical Authentication and Key Establishment (AKE) protocol.
Analysis against AKE Goals:
Mutual Entity Authentication: NOT PROVIDED. There is nothing in the basic Diffie-Hellman
protocol that gives either party assurance of who they are communicating with. The
values g^a and g^b are effectively random and cannot be linked to Alice or Bob.
Mutual Data Origin Authentication: NOT PROVIDED. For the same reason as above, there is
no assurance that the messages g^a and g^b originated from the claimed entities.
Mutual Key Establishment: PROVIDED. Alice and Bob do establish a common symmetric
key Z_AB at the end of the protocol.
Key Confidentiality: PROVIDED. The shared value Z_AB is not computable by anyone other
than Alice or Bob, due to the difficulty of the Discrete Logarithm Problem.
Key Freshness: PROVIDED. Assuming that Alice and Bob choose fresh private keys a and b,
then Z_AB will also be fresh.
Mutual Key Confirmation: NOT PROVIDED. Neither party obtains any explicit evidence that
the other has successfully computed the same shared value Z_AB.
Unbiased Key Control: PARTIALLY PROVIDED. Both parties contribute to the key, but if one
party (e.g., Bob) receives g^a before generating b, they can influence the final key by trying
different values of b.
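The key establishment itself can be sketched in Python using modular exponentiation. The prime below is a toy choice for illustration only; real deployments use standardised groups of 2048 bits or more:

```python
import secrets

p = 2**127 - 1   # a Mersenne prime, used here purely as a toy modulus
g = 3            # illustrative generator choice

a = secrets.randbelow(p - 2) + 1   # Alice's private value
b = secrets.randbelow(p - 2) + 1   # Bob's private value

A = pow(g, a, p)   # g^a mod p, sent to Bob
B = pow(g, b, p)   # g^b mod p, sent to Alice

Z_alice = pow(B, a, p)   # (g^b)^a mod p
Z_bob   = pow(A, b, p)   # (g^a)^b mod p
assert Z_alice == Z_bob  # shared secret Z_AB = g^(ab) mod p
```

Note that nothing in this exchange ties `A` or `B` to an identity, which is exactly why mutual entity authentication is not provided.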
4. Identify i) the zero-knowledge mechanism, with diagram, and ii) the problems with passwords.
i) Zero-Knowledge Mechanism:
A zero-knowledge mechanism allows a prover to prove knowledge of a secret to a verifier without
revealing any information about the secret. The verifier cannot later impersonate the prover, even
after observing many successful authentication attempts.
Analogy (Cave Diagram):
The popular analogy involves a circular cave with two entrances (A and B) and a magic door in the
middle that only opens with a secret phrase.
1. The Verifier waits outside while the Prover (the guide) goes in and randomly chooses path A
or B.
2. The Verifier shouts into the cave, asking the Prover to come out from a specific entrance
(e.g., A).
3. If the Prover already chose A, they can simply walk back. If they chose B, they must use the
secret phrase to open the door and exit via A.
4. If the Prover exits from the correct entrance, the test is passed. This is repeated n times.
5. The probability that the Prover is cheating and just getting lucky is (1/2)^n, which becomes
negligible after multiple rounds. The Verifier learns nothing about the secret phrase.
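The (1/2)^n cheating probability can be checked with a small simulation. This is a sketch of the cave analogy only, not of a real zero-knowledge proof system:

```python
import random

def run_rounds(knows_secret: bool, n: int) -> bool:
    """Simulate n rounds of the cave protocol. A cheating prover only
    passes a round if their randomly chosen path matches the challenge."""
    for _ in range(n):
        path = random.choice("AB")        # prover picks an entrance
        challenge = random.choice("AB")   # verifier shouts a demand
        if path != challenge and not knows_secret:
            return False                  # stuck behind the magic door
    return True

# An honest prover always passes all rounds
assert run_rounds(True, 20)

# A cheater survives 20 rounds with probability (1/2)^20, about one in a million
cheat_wins = sum(run_rounds(False, 20) for _ in range(10_000))
assert cheat_wins < 100   # expected value is far below 1
```

The verifier only ever observes which entrance the prover exits from, so nothing about the secret phrase itself is revealed.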
ii) Problems with Passwords:
Length: Since passwords are designed to be memorised by humans, there is a natural limit to
their length, which restricts the password space and makes exhaustive search easier.
Complexity: Humans find randomly generated passwords hard to remember, so they often
choose from a highly restricted password space (e.g., dictionary words), making dictionary
attacks possible.
Repeatability: For the lifetime of a password, each time it is used it is exactly the same. If an
attacker steals it, they can reuse it for a significant period.
Vulnerability: Passwords are relatively easy to steal through:
o Shoulder surfing at the point of entry.
o Social engineering or phishing attacks.
o Observation of network traffic or compromise of a password database.
5. Identify the operation of i) the one-way function in the UNIX password protection system,
and ii) the nonce-based freshness mechanism.
i) One-Way Function for UNIX Password Protection:
The system used in early UNIX operating systems to protect the password database (/etc/passwd)
worked as follows:
1. A salt, a 12-bit number, is randomly generated using the system clock. This salt uniquely
modifies the DES encryption algorithm to create DES+.
2. The user's 8-character ASCII password (56 bits) is used as the key for DES+.
3. A plaintext consisting of 64 zero bits is encrypted using DES+ with the key derived from the
password.
4. The result of this encryption is then encrypted again, and this process is repeated 25 times.
5. The final output is the password image, which is stored in /etc/passwd along with the salt.
When a user logs in, the system looks up the salt, regenerates DES+, and repeats the 25
encryptions. If the resulting password image matches the stored one, the password is
accepted.
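The shape of this scheme (salt plus a repeated one-way transformation, verified by recomputation) can be sketched in Python. SHA-256 stands in for the salt-modified DES+ of the original design, and the salt size and password below are illustrative:

```python
import hashlib, secrets

def password_image(password: str, salt: bytes, rounds: int = 25) -> bytes:
    """Sketch of the UNIX scheme's structure: a zero block pushed through
    a salted one-way function 25 times. Not the actual DES+ construction."""
    block = bytes(8)   # the all-zero plaintext block
    for _ in range(rounds):
        block = hashlib.sha256(salt + password.encode() + block).digest()
    return block

salt = secrets.token_bytes(2)              # stands in for the 12-bit salt
stored = password_image("hunter2x", salt)  # stored alongside the salt

# Login: look up the salt, recompute, and compare images
assert password_image("hunter2x", salt) == stored
assert password_image("wrongpwd", salt) != stored
```

The salt ensures that two users with the same password produce different stored images, defeating precomputed dictionary tables.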
ii) Nonce-Based Freshness Mechanism:
The only requirement is the ability to generate nonces (numbers used only once), which are
randomly generated numbers for one-off use.
The general principle is that one entity (e.g., Alice) generates a nonce at some stage in a
communication session.
If Alice receives a subsequent message that contains this nonce, she has assurance that the
new message is fresh. 'Fresh' means the received message must have been created after the
nonce was generated.
This mechanism requires a minimum of two message exchanges to provide freshness.
6. Identify the AKE protocol goals.
The typical goals of an Authentication and Key Establishment (AKE) protocol are:
1. Mutual Entity Authentication: Alice and Bob are able to verify each other’s identity.
2. Mutual Data Origin Authentication: Alice and Bob are able to be sure that information being
exchanged originates with the other party.
3. Mutual Key Establishment: Alice and Bob establish a common symmetric key.
4. Key Confidentiality: The established key should at no time be accessible to any party other
than Alice and Bob.
5. Key Freshness: Alice and Bob should be happy that the established key is not one that has
been used before.
6. Mutual Key Confirmation: Alice and Bob should have some evidence that they both end up
with the same key.
7. Unbiased Key Control: Neither party can unduly influence the generation of the established
key.
7. Identify the General categories used for identification information.
The general categories for providing identification information for entity authentication are:
1. Something The Claimant Has: Identity information based on a physical object held by the
user. Examples include:
o Dumb tokens: e.g., a plastic card with a magnetic stripe.
o Smart cards: A plastic card with a chip, providing memory and processing power.
o Smart tokens: Have their own user interface to enter data like a challenge.
2. Something The Claimant Is: Based on physical characteristics of the human body
(biometrics).
o Static: Measure unchanging features (e.g., fingerprints, iris patterns).
o Dynamic: Measure features that change each time (e.g., voice, handwriting).
3. Something The Claimant Knows: Based on information known to the claimant. Examples
include PINs, passwords, and passphrases.
8. Identify the different stages of protocol design in brief.
There are three main stages to the process of designing a cryptographic protocol:
1. Defining the objectives: This is the problem statement, which identifies what the problem is
that the protocol is intended to solve, including performance-related objectives.
2. Determining the protocol goals: This stage translates the objectives into a set of clear
cryptographic requirements. The goals are typically statements of the form "at the end of the
protocol, entity X will be assured of security service Y."
3. Specifying the protocol: This takes the protocol goals as input and involves determining the
cryptographic primitives, message flow, and actions that achieve these goals.
9. Demonstrate the first and second candidate protocols in detail.
Protocol 1 (Symmetric Key with MAC):
Objective: Bob wants to check that Alice is "alive".
Assumptions:
1. Bob has access to a secure source of randomness.
2. Alice and Bob share a symmetric key K.
3. They agree on a strong MAC algorithm.
Protocol Flow:
1. Bob generates a nonce r_B and sends to Alice: r_B || "It's Bob, are you OK?"
2. Alice forms the reply text: r_B || Bob || "Yes, I'm OK".
3. Alice computes MAC_K(reply text), and sends to Bob: reply text || MAC.
Analysis:
o Data origin authentication is provided by the MAC, which only Alice and Bob can
compute.
o Freshness is provided by the nonce r_B.
o Linking reply to request is provided by r_B and the identifier Bob.
Protocol 2 (Digital Signature):
Objective: Same as Protocol 1.
Assumptions:
1. Bob has access to a secure source of randomness.
2. Alice has a signature key and Bob has her verification key.
3. They agree on a strong digital signature scheme.
Protocol Flow: Identical to Protocol 1, but Alice digitally signs the reply text Sig_A(r_B || Bob
|| "Yes, I'm OK") instead of using a MAC.
Analysis: The analysis is the same as Protocol 1. Data origin authentication is provided by the
digital signature, which only Alice can compute.
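Protocol 1 can be sketched in Python, assuming HMAC-SHA256 as the agreed MAC algorithm and an invented shared key K:

```python
import hmac, hashlib, secrets

K = b"symmetric key shared by Alice and Bob"   # hypothetical key

# 1. Bob generates a nonce and sends the request
r_B = secrets.token_hex(8)
request = r_B + " || It's Bob, are you OK?"

# 2-3. Alice forms the reply text and computes the MAC over it
reply_text = r_B + " || Bob || Yes, I'm OK"
tag = hmac.new(K, reply_text.encode(), hashlib.sha256).hexdigest()

# Bob recomputes the MAC over the reply text he expects
expected = hmac.new(K, reply_text.encode(), hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)   # origin, freshness, linkage
```

Including `r_B` in `reply_text` is what provides freshness and links the reply to this specific request; the MAC over the whole string provides data origin authentication.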
10. Demonstrate the sixth and seventh candidate protocols in detail.
Protocol 6 (Clock-Based with Alice's Timestamp):
Objective: Bob wants to check that Alice is "alive".
Assumptions:
1. Alice and Bob share a symmetric key K.
2. Alice can generate timestamps that Bob can verify (they have synchronised clocks).
Protocol Flow:
1. Bob sends to Alice: "It's Bob, are you OK?"
2. Alice generates a timestamp T_A, forms the reply text: T_A || Bob || "Yes, I'm OK".
3. Alice computes MAC_K(reply text) and sends to Bob: reply text || MAC.
Analysis:
o Data origin authentication is provided by the MAC.
o Freshness is provided by the timestamp T_A.
o Linking reply to request is NOT PROVIDED, as the request contains no unique
identifier.
Protocol 7 (Session ID with Hidden Timestamp):
Objective & Assumptions: Same as Protocol 6.
Protocol Flow:
1. Bob sends to Alice: "It's Bob, are you OK?" || ID_S (a session identifier).
2. Alice generates a timestamp T_A.
3. Alice sends two things to Bob:
The clear reply text: ID_S || Bob || "Yes, I'm OK".
The MAC, computed on a different string: MAC_K(T_A || Bob || "Yes, I'm
OK").
Analysis:
o This protocol is FLAWED. Bob cannot verify the MAC because he does not know the
value of T_A that Alice used. He would have to guess all possible timestamps within
a time window, which is inefficient. The inclusion of ID_S in the clear does not fix this
fundamental flaw.
11] Make use of appropriate freshness mechanisms in a given cryptographic
scenario to ensure message freshness and prevent replay attacks.
In cryptography, message freshness ensures that a received message is recent and not a replay of an
old one. Replay attacks occur when an attacker captures a valid message and re-sends it later to trick
the receiver into performing the same action again (e.g., reusing login tokens or payment
commands). To prevent such attacks, freshness mechanisms are used to verify that each message is
unique and recent.
Below are the appropriate freshness mechanisms commonly used in cryptographic scenarios:
1. Timestamps
A timestamp adds the current date and time to each message before encryption or signing.
The receiver checks whether the timestamp is within an acceptable time window (for
example, ±5 seconds).
If the message’s timestamp is too old, it is rejected.
This method assumes that both sender and receiver clocks are synchronized.
Example:
A client sends an authentication request:
Message = {UserID, Timestamp, EncryptedData}
If an attacker replays this message after a few seconds or minutes, the timestamp will be outdated,
and the server will reject it.
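The timestamp check can be sketched as follows; the ±5-second window is the illustrative choice from the notes, and real systems tune it to expected clock skew:

```python
import time

WINDOW = 5.0   # acceptable clock difference in seconds (illustrative)

def is_fresh(msg_timestamp: float, now: float) -> bool:
    """Accept only messages whose timestamp lies within the window."""
    return abs(now - msg_timestamp) <= WINDOW

now = time.time()
assert is_fresh(now - 2, now)        # recent message: accepted
assert not is_fresh(now - 60, now)   # replayed a minute later: rejected
```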
2. Nonce (Number Used Once)
A nonce is a random or unique number generated for each session or transaction.
The receiver ensures that the same nonce is not used twice.
Even if an attacker captures the message, reusing the same nonce will make the replay
invalid.
Example:
1. Server → Client: sends a random nonce N1.
2. Client → Server: replies with {Encrypted(Message + N1)}.
The server checks whether N1 matches its original value. If the attacker replays the old
message, the nonce won’t match.
3. Sequence Numbers
A sequence number is attached to every message exchanged between two parties in an ongoing
session.
Each new message increases the sequence number by one.
The receiver maintains the latest sequence number and rejects any message with an older or
duplicate number.
Example:
If messages are numbered 1, 2, 3..., and the receiver gets a message with sequence number 2 again,
it knows it’s a replay.
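The receiver-side bookkeeping for sequence numbers can be sketched as a small class (a minimal sketch; real protocols also handle windowed reordering and counter wrap-around):

```python
class SequenceChecker:
    """Receiver state: reject any old or duplicate sequence number."""

    def __init__(self):
        self.last_seen = 0

    def accept(self, seq: int) -> bool:
        if seq <= self.last_seen:
            return False          # duplicate or out-of-date: a replay
        self.last_seen = seq
        return True

rx = SequenceChecker()
assert rx.accept(1) and rx.accept(2) and rx.accept(3)
assert not rx.accept(2)   # replayed message 2 is rejected
```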
4. Session Keys
A session key is a temporary encryption key used only for a single communication session.
Even if an attacker records old encrypted messages, they can’t replay them in a new session
because the session key will be different.
Session keys are often established using key exchange protocols like Diffie–Hellman.
5. Challenge–Response Protocols
This interactive mechanism prevents replay by using nonces dynamically.
The receiver sends a random challenge (nonce).
The sender encrypts or signs the challenge and returns it.
The receiver verifies the response to ensure it corresponds to the current challenge.
Example:
1. Server → Client: Challenge = N1
2. Client → Server: Response = Encrypt(N1, SecretKey)
If an attacker replays an old response, it will not match the current challenge.
12] Demonstrate the analysis of a simple cryptographic protocol by applying
appropriate steps to identify potential security weaknesses.
1. Write the protocol
Describe each message, sender, receiver, and crypto used.
Keep notation clear and list any implicit assumptions.
This gives a precise basis for analysis.
2. State security goals
List authentication, confidentiality, integrity, freshness, and secrecy goals.
Be explicit about what the protocol must guarantee.
Goals guide what attacks to look for.
3. List assumptions
State trusted keys, clock sync, randomness quality, and trusted parties.
Clarify what the attacker cannot do.
Assumptions constrain the threat model.
4. Model the attacker
Choose Dolev–Yao or a stronger real-world model.
Specify capabilities: intercept, replay, inject, modify.
This defines possible adversary actions.
5. Trace honest runs
Show a normal, step-by-step successful execution.
Record generated nonces, keys, and session data.
This is the baseline behavior to compare against.
6. Enumerate interleavings
Consider concurrent sessions and message reorderings.
Try mixing messages from different runs to find confusion.
Many replay/MITM attacks come from interleavings.
7. Search for simple attacks
Try replay, reflection, MITM, type-confusion, and key-compromise.
Construct concrete traces showing how goals fail.
If an attack exists, document the exact message flow.
8. Identify root causes
Pinpoint missing bindings (identity, session ID) or weak freshness.
Explain why the attack succeeds in protocol terms.
This directs minimal, effective fixes.
9. Propose fixes
Suggest adding identities, session IDs, nonces, or authenticated encryption.
Prefer minimal changes that restore the violated goals.
Re-check that fixes don’t introduce new issues.
10. Verify formally
Model the protocol in a tool (ProVerif/Tamarin) or re-run manual checks.
Confirm authentication and secrecy properties after fixes.
Formal verification increases confidence in correctness.
13] Identify the 3-level key hierarchy and list the two advantages of
deploying keys in a hierarchy.
A 3-level key hierarchy is a structured system used for efficient and secure key management in
cryptographic systems. It organizes keys into three levels, where each level has a specific purpose
and relationship with the others.
Levels of Key Hierarchy:
1. Master Key (Root Key)
o The highest-level key that is used to encrypt or generate lower-level keys.
o It is stored securely and rarely used directly for data encryption.
o Example: Key Encryption Key (KEK) used to protect other keys.
2. Session Key (Intermediate Key)
o Generated and distributed for temporary use in a specific session or transaction.
o It is encrypted using the master key.
o Ensures that compromise of one session key doesn’t affect others.
3. Data Encryption Key (Working Key)
o The lowest-level key used directly to encrypt and decrypt data.
o It changes frequently for each operation or dataset.
o Provides confidentiality for actual user or file data.
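The "each level protects the one below it" structure can be sketched in Python. The XOR-keystream wrap below is a toy stand-in so the example stays self-contained; real systems use AES key wrap or authenticated encryption:

```python
import hashlib, secrets

def wrap(key: bytes, data: bytes) -> bytes:
    """Toy wrap/unwrap: XOR with a SHA-256-derived keystream.
    XOR is its own inverse, so the same function unwraps."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

master_key  = secrets.token_bytes(32)   # level 1: master key (KEK)
session_key = secrets.token_bytes(32)   # level 2: session key
data_key    = secrets.token_bytes(32)   # level 3: data encryption key

# Each level is stored only in wrapped form under the level above
wrapped_session = wrap(master_key, session_key)
wrapped_data    = wrap(session_key, data_key)

# Unwrapping walks back down the hierarchy
assert wrap(master_key, wrapped_session) == session_key
assert wrap(session_key, wrapped_data) == data_key
```

Compromise of `wrapped_data` alone reveals nothing without the session key, which in turn is protected by the master key: this is the containment property the hierarchy exists to provide.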
Advantages of Key Hierarchy:
1. Improved Security
o If a lower-level key (like a data key) is compromised, the higher-level keys remain
safe.
o This limits the damage to only specific data, not the entire system.
2. Simplified Key Management
o Keys can be replaced or rotated easily at different levels without re-encrypting all
data.
o The master key can securely manage many subordinate keys, making administration
efficient.
14] Identify the following in brief with respect to governing key management.
i) Key management policies, practices and procedures
ii) Key generation ceremony.
i) Key Management Policies, Practices, and Procedures
Key Management Policies define the overall rules and objectives for how
cryptographic keys are created, distributed, used, stored, and destroyed.
Practices refer to the actual methods and standards followed to implement these
policies securely (e.g., using HSMs or secure key storage).
Procedures are the detailed step-by-step operations that ensure proper key lifecycle
management — including key generation, backup, rotation, recovery, and disposal.
In short: Policies provide direction, practices ensure compliance, and procedures define the
exact actions for secure key handling.
ii) Key Generation Ceremony
A Key Generation Ceremony is a formal, controlled event where cryptographic keys—
especially master or root keys—are created in a secure environment.
It involves multiple authorized personnel, strict documentation, and security controls
to ensure transparency and prevent unauthorized key access.
Typically, it is conducted under camera surveillance with witnesses, and all steps are
recorded for audit and compliance purposes.
15] Demonstrate how the finite lifetime of cryptographic keys is applied in
maintaining the security of cryptographic systems.
Finite Lifetime of Cryptographic Keys
In cryptographic systems, keys are not used forever — they have a finite lifetime, meaning each key
is valid only for a limited period or number of uses. This practice helps maintain strong security and
limits damage if a key is compromised.
1. Concept
A finite key lifetime means that a key is automatically expired, revoked, or replaced after a set
duration or number of operations.
For example, a session key might last only for one communication session, while a master key may
last for months or years under strict protection.
This prevents long-term use of the same key, which could increase the risk of exposure or
cryptanalysis.
2. Application in Security
a) Session Keys:
Temporary keys generated for a single session and discarded afterward.
Example: In SSL/TLS, each session uses a new symmetric key that expires when the
connection ends.
b) Key Rotation:
Periodically replacing old keys with new ones in systems like databases or encryption
services.
This ensures that even if an old key is compromised, new communications remain secure.
c) Expiry and Revocation:
Keys are assigned validity periods (e.g., certificates valid for one year).
Once expired, they are revoked and replaced to prevent reuse or unauthorized access.
3. Benefits of Finite Key Lifetime
1. Limits Damage from Compromise – If a key is exposed, only data encrypted during its active
period is affected.
2. Enhances Cryptographic Strength – Regular key changes prevent attackers from collecting
enough data for successful analysis.
3. Supports Compliance and Auditing – Many standards (like ISO 27001, PCI DSS) require
periodic key rotation.
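The expiry mechanism can be sketched as a key object with a validity period (the one-hour lifetime is an illustrative value, not a recommendation):

```python
import secrets, time

class ManagedKey:
    """A key whose material is only usable within a finite lifetime."""

    def __init__(self, lifetime_seconds: float):
        self.material = secrets.token_bytes(32)
        self.expires_at = time.time() + lifetime_seconds

    def is_valid(self, now: float) -> bool:
        return now < self.expires_at

key = ManagedKey(lifetime_seconds=3600)   # valid for one hour
now = time.time()
assert key.is_valid(now)
assert not key.is_valid(now + 7200)       # two hours later: expired, rotate
```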
16] Demonstrate how the key life cycle is applied in managing cryptographic
keys securely within an information system.
Key Life Cycle in Cryptographic Key Management
The key life cycle defines the complete process of how cryptographic keys are created, used,
managed, and eventually destroyed in a secure manner.
Applying a proper key life cycle ensures that keys remain protected at all stages and
minimizes the risk of unauthorized access or misuse.
1. Key Generation
Keys are securely generated using strong random number generators or Hardware
Security Modules (HSMs).
Proper algorithms and key lengths are chosen to meet security standards (e.g., AES-
256, RSA-2048).
The process must be auditable and controlled to prevent unauthorized key creation.
2. Key Distribution
Once generated, keys must be securely delivered to authorized entities or systems.
Methods include encrypted key exchange protocols or physical secure transfer.
This step ensures that only intended recipients obtain the key.
3. Key Storage
Keys are stored securely, typically within HSMs, encrypted databases, or secure key
vaults.
Access is restricted based on roles and permissions.
Proper backups are created for recovery but kept under strong protection.
4. Key Usage
Keys are used only for their intended cryptographic purpose (encryption, signing,
etc.).
Access is logged and monitored to detect misuse.
Policies ensure that keys are never hardcoded or exposed in plaintext form.
5. Key Rotation (Renewal)
Keys are replaced periodically or after a defined lifetime.
Rotation limits the damage if a key is compromised and maintains system integrity.
New keys are distributed securely, and old ones are retired after transition.
6. Key Revocation
If a key is suspected to be compromised, it must be immediately revoked.
Revocation prevents further use by updating key management databases or
revocation lists.
This step maintains system trustworthiness.
7. Key Archival
Expired keys may be archived in encrypted form for future data decryption or
compliance audits.
Only authorized personnel can access archived keys.
Ensures old data can still be decrypted when legally required.
8. Key Destruction
When keys are no longer needed, they must be securely destroyed (e.g., overwriting,
zeroization).
This ensures that the key cannot be recovered or misused in the future.
The destruction process is documented for audit purposes.
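The stages above can be sketched as a simple state machine that only permits the transitions the life cycle allows. This is a simplified model; real key management systems track more states and per-state metadata:

```python
# Allowed transitions in a simplified key life cycle
TRANSITIONS = {
    "generated":   {"distributed"},
    "distributed": {"stored"},
    "stored":      {"active"},
    "active":      {"rotated", "revoked"},
    "rotated":     {"archived"},
    "revoked":     {"archived"},
    "archived":    {"destroyed"},
    "destroyed":   set(),
}

def advance(state: str, new_state: str) -> str:
    """Move a key to a new state, rejecting illegal jumps
    (e.g. destroying a key that was never revoked or rotated)."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A compromised key's path through the life cycle
s = "generated"
for step in ["distributed", "stored", "active", "revoked", "archived", "destroyed"]:
    s = advance(s, step)
assert s == "destroyed"
```

Encoding the life cycle this way makes the audit requirement concrete: every state change is an explicit, checkable event rather than an ad hoc operation.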