
Unit - III – WEB SECURE API

3.1. API Security


 Application programming interface (API) security refers to the practice of preventing
or mitigating attacks on APIs.
 APIs work as the backend framework for mobile and web applications.
 It is critical to protect the sensitive data they transfer.
 An API is an interface that defines how different software interacts.
 It controls the types of requests that occur between programs, how these requests are
made, and the kinds of data formats that are used.
 APIs are used in Internet of Things (IoT) applications and on websites.
 They often gather and process data or allow the user to input information that gets
processed within the environment housing the API.
For example, there is an API that powers Google Maps.
 A web designer can embed Google Maps into a page they are building.
 When the user uses Google Maps, they are not using code the web designer wrote
piece by piece, but they are simply using a prewritten API provided by Google.
 API security covers the APIs you own, as well as the ones you use indirectly.

Here are the main characteristics of traditional web security:


 A castle and moat approach – the traditional network has a clear perimeter that
controls access points. This perimeter allows or denies access to requestors and then
assumes those that have entered are benign.
 Mostly static protocols – incoming requests adhere to mostly static protocols,
allowing administrators to configure a web application firewall (WAF) to enforce
these protocols.
 Clients use a web browser – the WAF can verify clients’ browser environments, and
if the verification fails, it assumes the client is a bot that uses a headless browser or an
emulator.
 Examining requests to detect attacks – a traditional network can employ a WAF to
block attempts at cross-site scripting (XSS). If it notices a large traffic volume from a
single IP, the WAF can determine it is an attempted Distributed Denial of Service
(DDoS) attack.
Here are key characteristics of API security that distinguish it from traditional
security:
 A castle with many openings and no moat – in the past, traditional networks needed
to protect only common ports like 80 (HTTP) and 443 (HTTPS). Today’s web
applications have numerous API endpoints that use different protocols. As APIs
typically expand over time, even one API can make security a difficult endeavor.
 Incoming request formats that change frequently – APIs evolve rapidly in a
DevOps environment, and most WAFs cannot accommodate this level of elasticity.
Every time an API changes, traditional security tools need manual tuning and
reconfiguring, an error-prone process that consumes resources and time.
 Clients often do not use a web browser – most service or microservice APIs are
accessed by native and mobile applications or other services and software
components. Since these clients do not use a browser, web security tools cannot use
browser verification. Solutions that rely on browser verification to detect malicious
bots usually find it difficult to exclude automated traffic from API endpoints.
 Examining incoming requests does not guarantee detecting attacks – many API
abuse attacks exploit requests that look legitimate.
 REST API Security
Representational state transfer (REST) API security is one of the most common forms of
API security. With REST API security, a Hypertext Transfer Protocol (HTTP) Uniform
Resource Identifier (URI) controls which data the API accesses as it operates. REST API
security can therefore prevent attacks involving malicious data an attacker tries to introduce
through an API.
 Secure REST API
REST API supports secure sockets layer (SSL), transport layer security (TLS), and
Hypertext Transfer Protocol Secure (HTTPS) protocols, which provide security by encrypting
data during the transfer process. You can also secure REST APIs with tokens used to make
sure communications are valid before allowing them to go through.
On the API level, security works by examining the data moving into the API
environment. On the application level, API security blocks attempts to make the application
malfunction or to allow other users to get inside and steal sensitive information.
 REST API vs SOAP API
Simple Object Access Protocol (SOAP) is a messaging protocol based on Extensible
Markup Language (XML). It is used in the transfer of information between computers. It uses
XML signatures and Security Assertion Markup Language (SAML) tokens to authenticate and
authorize messages that get transferred. In this way, it provides API keys that prevent
attackers from gaining access.
The signatures and tokens have to match approved formats for the message to be
allowed to pass through. REST is different from SOAP API security, particularly in that it
does not require the routing and parsing of data. Instead, REST uses HTTP requests and does
not require that data be repackaged during the transfer process.
Users may prefer to use SOAP over REST because SOAP services can be easier to
design, and it is easier to operate SOAP across proxies and firewalls without modifying it
first.
 API Security Standards
It is crucial to protect data, particularly given the rise of data-dependent projects. The
best way to secure APIs is to follow the API security best practices below.
 Vulnerabilities
API security begins with understanding the risks within your system. To identify weak
points in the API lifecycle, you can look for specific vulnerabilities. For example, you can
check for signature-based attacks like Structured Query Language (SQL) injections, use
tighter rules for JavaScript Object Notation (JSON) paths and schemas, or use rate limits to
provide protection for API backends.
 Tokens
Security tokens work by requiring the authentication of a token on either side of a
communication before the communication is allowed to proceed. Tokens can be used to
control access to network resources because any program or user that tries to interact with the
network resource without the proper token will be rejected.
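As a minimal illustration of this idea, the sketch below issues and checks opaque tokens bound to a user with an HMAC tag. The secret key and token format here are hypothetical, not any particular product's scheme:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"server-side-secret"  # hypothetical key; load from secure storage in practice

def issue_token(user_id: str) -> str:
    """Issue a token: a random value plus an HMAC tag binding it to the user."""
    nonce = secrets.token_hex(16)
    tag = hmac.new(SECRET_KEY, f"{user_id}:{nonce}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{nonce}:{tag}"

def verify_token(token: str) -> bool:
    """Reject any request whose token does not carry a valid tag."""
    try:
        user_id, nonce, tag = token.split(":")
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET_KEY, f"{user_id}:{nonce}".encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when checking the tag
    return hmac.compare_digest(tag, expected)

token = issue_token("alice")
print(verify_token(token))            # True: a valid token is accepted
print(verify_token("alice:bad:tag"))  # False: a forged token is rejected
```

Because the tag is keyed, a client cannot mint or alter tokens without the server-side secret; the server rejects anything it did not issue.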
 Encryption
Encryption works by disguising data at one end of the communication and only
allowing it to be deciphered at the other end if the proper decryption key is used. Otherwise,
the encrypted data is a nonsensical jumble of characters, numbers, and letters. Encryption
supports API security by making data unreadable to unauthorized users whose devices cannot
decipher the data.
 OAuth and OpenID Connect
Open authorization (OAuth) dictates how the client-side application obtains access
tokens. OpenID Connect (OIDC) is an authentication layer that sits on OAuth, and it enables
clients to check the identity of the end-user. Both of these work to strengthen authentication
and authorization by limiting the transfer of information to only include those with either the
appropriate, verifiable token or with the proper identification credentials.
 Throttling and quotas
Throttling and quotas protect bandwidth because they limit access to a system.
Certain attacks, like DDoS assaults, seek to overwhelm a system. Throttling limits the speed
at which data is transferred, which can thwart an attack that depends on a continual, quick
bombardment of data. Quotas limit the amount of data that can be transferred, which can
prevent attacks that leverage large quantities of data in an attempt to overwhelm a system’s
processing resources.
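A fixed-window counter is one simple way to implement the throttling and quota behavior described above. The sketch below is illustrative only; real deployments typically enforce limits at a gateway or with a dedicated library:

```python
import time

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window` seconds for each client."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.counters = {}  # client_id -> (window_start, request_count)

    def allow(self, client_id, now=None) -> bool:
        now = time.monotonic() if now is None else now
        start, count = self.counters.get(client_id, (now, 0))
        if now - start >= self.window:
            start, count = now, 0   # window expired: reset the counter
        if count >= self.limit:
            return False            # quota exhausted: reject (e.g., HTTP 429)
        self.counters[client_id] = (start, count + 1)
        return True

limiter = FixedWindowLimiter(limit=3, window=1.0)
results = [limiter.allow("10.0.0.1", now=0.0) for _ in range(4)]
print(results)  # [True, True, True, False]
```

Rejecting the fourth request within the window caps the rate at which any one client can consume server resources, which is exactly the property that blunts flooding attacks.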
 API gateway
An API gateway sits between the client and the collection of services specific to the
backend. It serves the purpose of a reverse proxy, and as traffic passes through it, it is
authenticated according to predetermined standards.
 Zero-trust approach
The zero-trust security model presumes that all traffic, regardless of whether it
originates from within a network or from the outside, cannot be trusted. Hence, before traffic
can be allowed to travel into or through the network, the user’s rights need to be
authenticated. A zero-trust approach can provide security for data and applications by
preventing unauthorized users from accessing a system—and this includes repeat users an
imposter may impersonate using a previously authenticated device. In a zero-trust model,
both the user and the device are untrusted.

3.2. Session Cookies


Session cookies are small pieces of data stored on the client-side (typically a browser) that
help in identifying and authenticating users during their interactions with a web application.
Due to HTTP being stateless, without session cookies or tokens, we would have to send
credentials with each request to prove authentication.
Session cookies are cookies that last for a session. A session starts when you launch a website
or web app and ends when you leave the website or close your browser window. Session
cookies contain information that is stored in a temporary memory location which is deleted
after the session ends. Unlike other cookies, session cookies are never stored on your device.
Therefore, they are also known as transient cookies, non-persistent cookies, or temporary
cookies.
The session cookie is a server-specific cookie that cannot be passed to any machine
other than the one that generated the cookie.
The server creates a “session ID,” a randomly generated value that identifies the
session and is stored in the session cookie.
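A minimal sketch of this flow, assuming a Python server with an in-memory session store (the cookie name `SESSIONID` and the store are illustrative): the server generates an unguessable session ID, records it server-side, and sends it in a cookie with no expiry, so the browser treats it as a session cookie and discards it when closed.

```python
import secrets
from http.cookies import SimpleCookie

# Hypothetical server-side session store: session ID -> user data
sessions = {}

def start_session(username: str) -> str:
    """Create a session server-side and return a Set-Cookie header for the client."""
    session_id = secrets.token_urlsafe(32)      # unguessable random session ID
    sessions[session_id] = {"user": username}
    cookie = SimpleCookie()
    cookie["SESSIONID"] = session_id
    cookie["SESSIONID"]["httponly"] = True      # not readable from JavaScript (limits XSS theft)
    cookie["SESSIONID"]["secure"] = True        # only sent over HTTPS
    cookie["SESSIONID"]["samesite"] = "Strict"  # not sent on cross-site requests (limits CSRF)
    # No Expires/Max-Age attribute: the browser treats it as a session cookie
    return cookie.output()

header = start_session("alice")
print(header)
```

On each later request, the server looks the received session ID up in `sessions` to recognize the user, which is how the site “remembers” them between pages.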
Purpose of Session Cookies:
A website itself cannot track a user’s movement on its webpage and treats each new page
request as a new request from a new user. Session cookies allow websites to remember
users within a website when they move between web pages. These cookies tell the server
what pages to show the user so the user doesn’t have to remember where they left off or start
navigating the site all over again. Therefore, without session cookies, websites have no
memory. Session cookies are vital for user experience on online shops and websites when the
functionalities depend on users’ activities.

Session Cookies Examples:


The most common example of a session cookie in action is the shopping cart on eCommerce
websites. When you visit an online shop and add items to your shopping cart, the session
cookie remembers your selection so your shopping cart will have the items you selected when
you are ready to checkout. Without session cookies, the checkout page will not remember
your selection and your shopping cart will be empty.
Session cookies also help users to browse and add items to the shopping cart without logging
in on an eCommerce site. Only when users checkout, do they have to add their name, address,
and payment information.

Difference Between Cookies and Sessions:


Cookies and sessions are used by websites to store users’ data across different pages of the
site. The key difference between sessions and cookies is that sessions are saved on the server
side while cookies are saved on the client side.

Cookies:
 Small text files used to store user information on the user’s computer (client side).
 Expire after a specified lifetime or duration.
 Can store only a limited amount of data, about 4 KB per cookie in a browser.
 Store information in a plain text file.

Sessions:
 Store user information on the server side.
 End when the user closes the browser or logs out.
 Can store more data (up to about 128 MB per session).
 Store data in an encrypted format.

3.3. Token-Based Authentication


Digital transformation brings security concerns for users who need to protect their identity
from prying eyes. According to Norton, around 800,000 accounts are hacked every year on
average. There is growing demand for high-security systems and cybersecurity regulations
around authentication.
Traditional methods rely on single-factor authentication, with a username and password
granting access to web resources. Users tend to choose easy passwords or reuse the same
password on multiple platforms for convenience, and attackers are always watching web
activity for credentials they can exploit later.
To address this rising security burden, two-factor authentication (2FA) came into the picture
and introduced token-based authentication. This approach reduces reliance on password
systems and adds a second layer of security. Let’s jump straight to the mechanism. But first
of all, let’s meet the main driver of the process: the token.
Authentication Token
Token-based authentication for web APIs refers to the method of authenticating
individuals or processes for cloud-based services. The authentication service checks the
user's identity and issues a token after the user's application sends a request to it. The
user can now access the application.
A token is a computer-generated code that acts as a digitally encoded signature of a user.
Tokens are used to authenticate the identity of a user accessing a website or application
network.
A token is classified into two types: A Physical token and a Web token. Let’s understand them
and how they play an important role in security.
 Physical token: A physical token uses a tangible device to store a user’s information.
Here, the secret key is a physical device that can be used to prove the user’s identity.
Physical tokens come in two forms: hard tokens and soft tokens. Hard tokens use
smart cards and USB devices to grant access to restricted networks, like those
corporate offices use to admit employees. Soft tokens use a mobile device or
computer to send an encrypted code (such as an OTP) via an authorized app or SMS.
 Web token: The authentication via web token is a fully digital process. Here, the
server and the client interface interact upon the user’s request. The client sends the
user credentials to the server and the server verifies them, generates the digital
signature, and sends it back to the client. Web tokens are popularly known as JSON
Web Token (JWT), a standard for creating digitally signed tokens.
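Since a JWT is just three base64url-encoded parts joined by dots (header, payload, signature), the signing and verification flow can be sketched with the standard library alone. The key and claims below are illustrative, not any real service's values:

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

SECRET = b"shared-signing-key"  # hypothetical signing key; keep it secret server-side

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def create_jwt(payload: dict) -> str:
    """Build a token signed with HMAC-SHA256 (the JWT 'HS256' algorithm)."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    signature = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

def verify_jwt(token: str) -> Optional[dict]:
    """Return the payload if the signature checks out, otherwise None."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # not three dot-separated parts
    expected = hmac.new(SECRET, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig_b64):
        return None  # signature mismatch: token was forged or tampered with
    padding = "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64 + padding))

token = create_jwt({"sub": "alice"})
print(verify_jwt(token))        # {'sub': 'alice'}
print(verify_jwt(token + "A"))  # None: altered token is rejected
```

In practice you would use a maintained JWT library and also check registered claims such as `exp` (expiry), but the signature check above is the core of why a client cannot forge or modify a token.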
“Token” is a popular word in today’s digital climate. Other token-associated terms, such as
DeFi tokens, governance tokens, non-fungible tokens, and security tokens, refer to assets
based on decentralized cryptography. Authentication tokens are likewise based on
encryption, which makes them difficult to forge.
Token-based Authentication
Token-based authentication is a two-step authentication strategy that enhances the security
mechanism users go through to access a network. Once users register their credentials, they
receive a unique encrypted token that is valid for a specified session time. During this
session, users can directly access the website or application without further login prompts.
It improves the user experience by saving time, and improves security by adding a layer on
top of the password system.
A token is stateless: it does not save information about the user in the database. The system
is based on cryptography, and once the session is complete the token is destroyed. This gives
it an advantage over plain password access, which hackers can exploit.
The most familiar example of a token is the OTP (one-time password), which is used to
verify the identity of the right user before granting network entry and is typically valid for
30-60 seconds. During the session time, the token is stored in the organization’s database
and vanishes when the session expires.
Let’s understand some important drivers of token-based authentication-
 User: A person who intends to access the network carrying his/her username &
password.
 Client-server: A client is a front-end login interface where the user first interacts to
enroll for the restricted resource.
 Authorization server: A backend unit handling the task of verifying the credentials,
generating tokens, and sending them to the user.
 Resource server: It is the entry point where the user enters the access token. If
verified, the network greets users with a welcome note.
How does Token-based Authentication work?
Token-based authentication has become a widely adopted security mechanism that internet
service providers use to offer a quick experience to users without compromising the security
of their data. Let’s understand how this mechanism works in 4 easy-to-grasp steps.

1. Request: The user requests access to the service with login credentials on the application
or website interface. The credentials may involve a username and password, a smartcard, or
biometrics.
2. Verification: The login information is sent from the client to the authentication server to
verify that a valid user is trying to enter the restricted resource. If the credentials pass
verification, the server generates a secret digital key for the user and sends it over HTTP in
the form of a code. The token is sent in the JWT open standard format, which includes:
 Header: It specifies the type of token and the signing algorithm.
 Payload: It contains claims about the user and other data.
 Signature: It verifies the authenticity of the user and of the messages transmitted.
3. Token validation: The user receives the token code and presents it to the resource server
to gain access to the network. An access token of this kind may be valid for only 30-60
seconds, and if it lapses the user can request a refresh token from the authentication server.
There is a limit on the number of attempts a user can make to get access, which helps
prevent brute-force attacks based on trial and error.
4. Storage: Once the resource server has validated the token and granted access to the user,
the token is stored in a database for the session time you define. The session time differs for
every website or app; bank applications, for example, have the shortest session times, often
only a few minutes.
These steps explain how token-based authentication works and identify the main drivers of
the whole security process.

3.4. Securing Natter API: Addressing Threats with Security Controls


Applying security controls to the Natter API: encryption prevents information disclosure,
rate-limiting protects availability, authentication ensures that users are who they say they
are, and audit logging records who did what, to support accountability.
 Rate-limiting is used to prevent users overwhelming your API with requests, limiting
denial of service threats.
 Encryption ensures that data is kept confidential when sent to or from the API and
when stored on disk, preventing information disclosure. Modern encryption also
prevents data being tampered with.
 Authentication makes sure that users are who they say they are, preventing spoofing.
This is essential for accountability, but also a foundation for other security controls.
 Audit logging is the basis for accountability, to prevent repudiation threats. Finally,
you’ll apply access control to preserve confidentiality and integrity, preventing
information disclosure, tampering, and elevation of privilege attacks.

 Authenticating users with HTTP Basic authentication


 Authorizing requests with access control lists
 Ensuring accountability through audit logging
 Mitigating denial of service attacks with rate-limiting
Addressing threats with security controls
You’ll protect the Natter API against common threats by applying some basic security
mechanisms (also known as security controls). Figure 3.1 shows the new mechanisms that
you’ll develop, and you can relate each of them to a STRIDE threat (chapter 1) that it
prevents, as summarized in the list above.
Rate-limiting for availability
Threats against availability, such as denial of service (DoS) attacks, can be very
difficult to prevent entirely. Such attacks are often carried out using hijacked computing
resources, allowing an attacker to generate large amounts of traffic with little cost to
themselves. Defending against a DoS attack, on the other hand, can require significant
resources, costing time and money. But there are several basic steps you can take to reduce
the opportunity for DoS attacks.

DEFINITION A Denial of Service (DoS) attack aims to prevent legitimate users from
accessing your API. This can include physical attacks, such as unplugging network cables,
but more often involves generating large amounts of traffic to overwhelm your servers. A
distributed DoS (DDoS) attack uses many machines across the internet to generate traffic,
making it harder to block than a single bad client.
Many DoS attacks are carried out using unauthenticated requests. One simple way to limit
these kinds of attacks is to never let unauthenticated requests consume resources on your
servers. Authentication is covered in section 3.3 and should be applied immediately after
rate-limiting, before any other processing.
However, authentication itself can be expensive so this doesn’t eliminate DoS threats on its
own.
Network-level DoS attacks can be easy to spot because the traffic is unrelated to
legitimate requests to your API. Application-layer DoS attacks attempt to overwhelm an API
by sending valid requests, but at much higher rates than a normal client. A basic defense
against application-layer DoS attacks is to apply rate-limiting to all requests, ensuring that
you never attempt to process more requests than your server can handle. It is better to reject
some requests in this case, than to crash trying to process everything. Genuine clients can
retry their requests later when the system has returned to normal.
DEFINITION Application-layer DoS attacks (also known as layer-7 or L7 DoS)
send syntactically valid requests to your API but try to overwhelm it by sending a very large
volume of requests.
Rate-limiting should be the very first security decision made when a request reaches your
API. Because the goal of rate-limiting is ensuring that your API has enough resources to
process the requests it accepts, you need to ensure that requests exceeding your API’s
capacity are rejected quickly and very early in processing. Other security controls, such as
authentication, can use significant resources, so rate-limiting must be applied before those
processes, as shown in the figure.

Rate-limiting with Guava
Often rate-limiting is applied at a reverse proxy, API gateway, or load balancer before the
request reaches the API, so that it can be applied to all
requests arriving at a cluster of servers. By handling this at a proxy server, you also avoid
excess load being generated on your application servers. In this example you’ll apply simple
rate-limiting in the API server itself using Google’s Guava library. Even if you enforce rate-
limiting at a proxy server, it is good security practice to also enforce rate limits in each server
so that if the proxy server misbehaves or is misconfigured, it is still difficult to bring down
the individual servers. This is an instance of the general security principle known as defense
in depth, which aims to ensure that no failure of a single mechanism is enough to
compromise your API.
DEFINITION The principle of defense in depth states that multiple layers of security
defenses should be used so that a failure in any one layer is not enough to breach the security
of the whole system.
As you’ll now discover, there are libraries available to make basic rate-limiting very easy to
add to your API, while more complex requirements can be met with off-the-shelf
proxy/gateway products. Open the pom.xml file in your editor and add the following
dependency to the dependencies section.
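For reference, a Guava dependency declaration in the pom.xml dependencies section typically looks like the following. The version shown is illustrative; check for a current Guava release before using it:

```xml
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <!-- illustrative version; use a current Guava release -->
    <version>33.0.0-jre</version>
</dependency>
```

With the dependency available, Guava's `RateLimiter.create(...)` and `tryAcquire()` methods give you a simple way to reject requests that exceed the rate you configure.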

3.5. Encryption
 Encryption is used to protect data from being stolen, changed, or compromised and
works by scrambling data into a secret code that can only be unlocked with a unique
digital key.
 Encryption is the process by which a readable message is converted to an unreadable
form to prevent unauthorized parties from reading it.
 Decryption is the process of converting an encrypted message back to its original
(readable) format. The original message is called the plaintext message.
Encryption works by encoding “plaintext” into “ciphertext,” typically through the use of
cryptographic mathematical models known as algorithms. To decode the data back to
plaintext requires the use of a decryption key, a string of numbers or a password also created
by an algorithm. Secure encryption methods have such a large number of cryptographic keys
that an unauthorized person can neither guess which one is correct, nor use a computer to
easily calculate the correct string of characters by trying every potential combination.
One early example of a simple encryption is the “Caesar cipher,” named for Roman emperor
Julius Caesar because he used it in his private correspondence. The method is a type of
substitution cipher, where one letter is replaced by another letter some fixed number of
positions down the alphabet. To decrypt the coded text, the recipient would need to know the
key to the cipher, such as shifting four places down the alphabet and over to the left (a “left
shift four”). Thus, every “E” becomes an “A”, and so on.
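The cipher is easy to sketch in a few lines of Python; a negative `shift` gives the “left shift four” described above:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions (negative = left shift), wrapping around."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return "".join(result)

ciphertext = caesar("ATTACK AT DAWN", -4)  # encrypt with a left shift of four
print(ciphertext)                          # WPPWYG WP ZWSJ
plaintext = caesar(ciphertext, 4)          # decrypt by shifting back the other way
print(plaintext)                           # ATTACK AT DAWN
```

Because there are only 25 possible shifts, an attacker can break this by simple trial, which is exactly why modern ciphers use enormous key spaces instead.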
Modern cryptography is much more sophisticated, using strings of hundreds (even thousands,
in some cases) of computer-generated characters as decryption keys.
Types of encryption
The two most common types of encryption algorithms are symmetric and asymmetric.
Symmetric encryption, also known as a shared key or private key algorithm, uses the same
key for encryption and decryption. Symmetric key ciphers are considered less expensive to
produce and do not take as much computing power to encrypt and decrypt, meaning there is
less delay in decoding the data.
The drawback is that if an unauthorized person gets their hands on the key, they will be able
to decrypt any messages and data sent between the parties. As such, the transfer of the shared
key needs to be encrypted with a different cryptographic key, leading to a cycle of
dependency.
Asymmetric encryption, also known as public-key cryptography, uses two separate keys to
encrypt and decrypt data. One is a public key shared among all parties for encryption.
Anyone with the public key can then send an encrypted message, but only the holders of the
second, private key can decrypt the message.
Common encryption algorithms
The most common methods of symmetric encryption include:
Data Encryption Standard (DES): An encryption standard developed in the early 1970s,
DES was adopted by the US government in 1977. The DES key size was only 56 bits,
making it obsolete in today’s technology ecosystem. That being said, it was influential in the
development of modern cryptography, as cryptographers worked to improve upon its theories
and build more advanced encryption systems.
Triple DES (3DES): The next evolution of DES took the cipher block of DES and applied it
three times to each data block it encrypted by encrypting it, decrypting it, and then encrypting
it again. The method increased the key size, making it much harder to decrypt with a brute
force attack. However, 3DES is still considered insecure and has been deprecated by the US
National Institute of Standards and Technology (NIST) for all software applications.
Advanced Encryption Standard (AES): The most used encryption method today, AES was
adopted by the US government in 2001. It was designed on a principle called a “substitution–
permutation network” that is a block cipher of 128 bits and can have keys at 128, 192, or 256
bits in length.
Twofish: Used in both hardware and software, Twofish is considered the fastest symmetric
encryption method. While Twofish is free to use, it is neither patented nor open source.
Nevertheless, it’s used in popular encryption applications like PGP (Pretty Good Privacy). It
can have key sizes up to 256 bits.
The most common methods of asymmetric encryption include:
RSA: Stands for Rivest-Shamir-Adleman, the trio of researchers from MIT who first
described the method in 1977. RSA is one of the original forms of asymmetric encryption.
The public key is created by the factoring of two prime numbers, plus an auxiliary value.
Anyone can use the RSA public key to encrypt data, but only a person who knows the prime
numbers can decrypt the data. RSA keys can be very large (2,048 or 4,096 bits are typical
sizes) and are thus considered expensive and slow. RSA keys are often used to encrypt
the shared keys of symmetric encryption.
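The underlying key math can be illustrated with deliberately tiny primes (this classic textbook example is far too small to be secure; real RSA primes are hundreds of digits long):

```python
# Toy RSA purely to illustrate the public/private key relationship.
p, q = 61, 53
n = p * q                 # modulus, part of both keys: 3233
phi = (p - 1) * (q - 1)   # Euler's totient: 3120
e = 17                    # public exponent (coprime with phi)
d = pow(e, -1, phi)       # private exponent: modular inverse of e mod phi (2753)

message = 65                       # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(ciphertext, decrypted)       # 2790 65
```

Anyone who knows `(e, n)` can encrypt, but recovering `d` requires factoring `n` into `p` and `q`, which is what makes large RSA keys hard to break.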
Elliptic Curve Cryptography (ECC): An advanced form of asymmetric encryption based
on elliptic curves over finite fields. The method provides the robust security of massive
encryption keys, but with a smaller and more efficient footprint. For instance, a “256-bit
elliptic curve public key should provide comparable security to a 3,072-bit RSA public key.”
Often used for digital signatures and to encrypt shared keys in symmetric encryption.
Importance of data encryption
People encounter encryption every day, whether they know it or not. Encryption is used for
securing devices such as smartphones and personal computers, for protecting financial
transactions such as making a bank deposit and buying an item from an online retailer, and
for making sure messages such as email and texts are private.
If you’ve ever noticed that a website’s address starts with “https://” (the “s” means “secure”)
it means that the website is using transport encryption. Virtual private networks (VPNs) use
encryption to keep data coming and going from a device private from prying eyes.
Data encryption is important because it helps protect people’s privacy, and secures data from
attackers and other cybersecurity threats. Encryption is often mandatory from a regulatory
perspective for organizations such as in healthcare, education, finance and banking, and
retail.
Encryption performs four important functions:
 Confidentiality: keeps the contents of the data secret
 Integrity: validates that the content of the message or data has not been altered
since it was sent
 Authentication: verifies the origin of the message or data
 Nonrepudiation: prevents the sender of the data or message from denying they were
the origin
Advantages of encryption
Protects data across devices
Data is constantly on the move, be it messages between friends or financial transactions.
Encryption paired with other security functions like authentication can help keep data safe
when it moves between devices or servers.
Ensures data integrity
In addition to keeping unauthorized people from seeing the plaintext of data, encryption
safeguards the data so that malicious actors cannot use it to commit fraud or extortion, or
change important documents.
Protects digital transformations
With more organizations and individuals using cloud storage, encryption plays a key role in
protecting that data while it is in-transit to the cloud, once it is at rest on the server, and while
it’s being processed by workloads. Google offers different levels of encryption, as well as key
management services.
Disadvantages of encryption
Ransomware
While encryption is generally used to protect data, malicious actors can sometimes use it to
hold data hostage. If an organization is breached and its data accessed, the actors can encrypt
it and hold it ransom until the organization pays to have it released.
Key management
Encryption is much less effective if the cryptographic keys that encrypt and decrypt the data
are not secure. Malicious actors often concentrate their attacks on obtaining an organization’s
encryption keys. In addition to malicious actors, losing encryption keys (such as during a
natural disaster that compromises servers) can lock organizations out of important data. This
is why a secure key management system is often used by organizations to manage and secure
their keys.
Quantum computing
Quantum computing poses an existential threat to modern encryption techniques. When it is
ready, quantum computing will be able to process massive amounts of data in a fraction of
the time of normal computers. As such, quantum computing has the potential to break
existing encryption. In the future, all organizations will have to adapt encryption techniques
by using quantum encryption techniques. Currently, quantum computing is relatively limited
and not yet ready to break modern encryption standards. However, NIST has announced its
support of four new "quantum-resistant" algorithms designed to withstand quantum computer
attacks.

3.6. Audit logging


Audit logging is the process of documenting activity within the software systems used across
your organization. Audit logs record the occurrence of an event, the time at which it occurred,
the responsible user or service, and the impacted entity.
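A minimal sketch of such a record as a structured JSON log line (the field names are illustrative, not a standard):

```python
import json
from datetime import datetime, timezone

def audit_event(actor: str, action: str, entity: str, outcome: str) -> str:
    """Build one audit record capturing who did what, to what, when,
    and with what result, serialized as a single JSON log line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,      # responsible user or service
        "action": action,    # e.g. "user.delete"
        "entity": entity,    # impacted entity
        "outcome": outcome,  # "success" or "denied"
    }
    return json.dumps(record)

line = audit_event("alice", "user.delete", "user:1042", "success")
```

Emitting one self-contained JSON object per event keeps the trail easy to parse for compliance checks and forensic queries.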
The purpose of a web application audit is to review an application's codebase to determine
whether the code is doing something it shouldn't. Audits may also evaluate whether code can
be manipulated to do something inappropriate and whether the apps may be communicating
sensitive data in the clear.

When you use a technology service or product, audit logs are generated in response to every
user action and system response. These logs capture critical information that can be used to:
 Authenticate users.
 Identify and validate requests.
 Route requests to the appropriate service nodes.
 Perform relevant technology operations and processing.
While both audit logs and system logs record events and actions, they serve distinct purposes:
Audit Logs capture who did what, where, and when. They are primarily used for compliance,
security, and computer forensic investigations. Audit logs track user actions and system
changes to ensure accountability and traceability. They provide a chronological record of
activities, crucial for audits and compliance checks.
System Logs primarily record system events and operational activities, such as errors,
performance data, and service statuses. System logs are mainly used for debugging,
monitoring system health, and optimizing performance. They offer insights into the
operational state and efficiency of the system.

Why audit logging is important


Though the micro-actions behind audit logs are essential, the broader purpose of audit
logging is even more significant. The main objectives of collecting audit logs are two-fold:
To identify errors and improve system accuracy.
To understand the intent behind activities, which can be used later for accountability or
compliance.
At every step, the systems generate and record a trail of log and metrics data or metadata.
This documentation can be utilized for various use cases, including security, monitoring,
performance analysis, and cyber forensics.
Use cases for audit logs: how to connect the dots
Audit logging can have four key domain applications:
Security
Compliance
Accountability
Cyber forensics
Use case: Security
In terms of cybersecurity, audit logs help to identify anomalous behavior and network traffic
patterns. InfoSec teams can integrate the audit logging mechanism into their monitoring and
observability solutions to extract insights on potential security incidents.

Authentication and detection of unauthorized network changes can be achieved by testing
network change actions against predefined security policies (looking at the delta). These
policies define how network and IT resources may be accessed in terms of entity, location,
roles, and attributes, as well as action frequency and location.
Use case: Compliance with regulations
If your organization must comply with external regulations, it may be required to keep
specific audit logs and establish monitoring capabilities that test the systems for
compliance by analyzing audit logs in real time. For instance:
ISO 27001 imposes requirements for audit logging and monitoring.

SOC 2 imposes requirements for incident detection, configuration management, and event log
collection.
(See how Splunk supports organizational compliance.)
Use case: Accountability & authentication
As with standard audit procedures, audit logging is frequently used for accountability and
verification of factual information. Common applications include:
Organizational policy enforcement
Accounting and finance
HR policies
In this context, audit logging is an important part of analyzing how users act and how
accurately the systems record information. For example, audit logs can quickly surface
insights into the use of financial resources across all departments, making it
straightforward to:
Authorize and track spending.
Understand which users are responsible for the most spending.
Compare spending against budget allocations.
Use case: Cyber forensics
Cyber forensics is another key application domain of audit logging practices that requires the
reconstruction of events and insights into a technology process. Often, this might stand up as
legal evidence in a court of law.
Typically, businesses aren’t conducting cyber forensics for all their activities. Instead, we
usually require cyber forensics in two situations:
An external requirement for investigation in the form of a court subpoena
An internal request by business executives and technical teams, perhaps around a major cyber
incident or significant, unplanned downtime in a website or system
Audit logs outline the action sequences that connect a user to an action. Investigators can
analyze audit logs to gain deeper insights into various scenarios and outcomes represented by
the audit logs. This requires a thorough analysis of raw logging data before it is converted
into insightful knowledge.
Audit logging best practices
Considering the vast volume of network, hardware, and application logs generated at scale, IT
teams can be easily overwhelmed by the audit trail data. To gain the right insights with your
audit log metrics data, you can adopt the following best practices:
Store data of all structures at scale
Establish a data platform that can integrate and store data of all structural formats at scale. Data
platform technologies such as a data lake commonly capture real-time log data streams with a
schema-on-read consumption model.
Third-party analytics and monitoring tools integrate to make sense of this information in real-
time while processing only the most relevant portions of audit logs data based on the tooling
specifications for data structure.
Use statistical models, not predefined thresholds
Use statistical models to generalize system behavior instead of using predefined and fixed
thresholds to capture data. Since the network behavior evolves continuously, models based on
machine learning can continuously learn and adapt.
These models are helpful for accurate analysis of audit logs, where thresholds for anomalous
behavior can be a moving target.
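As a toy illustration of the idea (the traffic numbers are made up), a z-score against recent history adapts as behavior evolves, where a fixed threshold would not:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], value: float, z_cutoff: float = 3.0) -> bool:
    """Flag a value whose z-score against recent history exceeds the cutoff,
    rather than comparing it against a fixed, hand-picked threshold."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_cutoff

requests_per_minute = [120, 131, 118, 125, 129, 122, 127, 124]
normal = is_anomalous(requests_per_minute, 126)  # within usual variation
spike = is_anomalous(requests_per_minute, 900)   # far outside the learned range
```

Production systems would use richer models (and machine learning, as noted above), but the principle is the same: the baseline is learned from the data rather than configured by hand.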
Secure data with an eye to the CIA triad
Store audit logging data in secure environments with high standards of confidentiality,
integrity, and availability — known as the CIA triad. Modified audit logs and misconfigured
networking systems can generate misleading information and lead your log analysis to
incorrect conclusions.
Infinite data storage is not sustainable
It is important to understand that data stores that integrate large volumes of real-time log data
streams can grow exponentially. When designing the data platform for audit log analysis,
evaluate the cost, security, and performance of your data platform against your security and
compliance requirements.
Additionally, implementing quotas and limits on logging uses is crucial to managing storage
efficiently. Setting quotas ensures that logging does not consume excessive resources and
helps maintain system performance. Define limits based on the importance and relevance of
the logs, ensuring that only critical data is retained long-term.

Securing Service-to-Service APIs


Introduction
With the rising threat of cyberattacks, securing APIs has become business-critical,
especially as many security reports indicate that web APIs are quite vulnerable. Thankfully,
by following a few best practices, API providers can ward off many potential
vulnerabilities. Below, we cover the top API security best practices to keep in mind when
designing and creating APIs.
1. Always Use a Gateway
Our first recommendation is to always put your API behind a gateway. API gateways
centralize traffic features and apply them to every request that hits your API. These features
may be security-related, like rate limiting, blocking malicious clients, and proper logging. Or,
they may be more practical and business-related, like path and headers rewriting, gathering
business metrics, and so on.
Not having these controls could easily result in a serious security threat. Without a gateway,
API providers would have to reinforce each endpoint with these features one-by-one. An API
gateway eases the process of adding or fixing these features. Thankfully, there are plenty of
API gateway products available on the market.
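To make one such gateway feature concrete, here is a minimal token-bucket rate limiter sketch (the rate and capacity values are arbitrary; in practice the gateway product implements this for you):

```python
import time

class TokenBucket:
    """Per-client token bucket: each request spends one token, and tokens
    refill at `rate` per second up to `capacity` (allowing short bursts)."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=10)      # hypothetical limits
results = [bucket.allow() for _ in range(12)]  # a burst of 12 rapid requests
```

The first ten requests in the burst pass; the rest are rejected until tokens refill, which blunts brute-force and scraping traffic before it reaches the API.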
2. Always Use a Central OAuth Server
Next, do not let your APIs or gateways issue access or refresh tokens. A centralized OAuth
server should always issue such tokens. Issuing tokens requires many complex processes:
authenticating the client, authenticating the user, authorizing the client, signing the tokens,
and other operations. All these functions require access to different data, such as client
information or the preferred authentication mechanism. Furthermore, if many entities issue
and sign tokens, it becomes increasingly challenging to manage all the credentials used for
signing. Only one entity can safely handle these processes — an OAuth server.
3. Only Use JSON Web Tokens Internally
When APIs are concerned, using JSON Web Tokens (JWTs) as access and refresh tokens is a
good practice. Services that receive JWTs can leverage claim information to make informed
business decisions: Is the caller allowed to access this resource? What data can the caller
retrieve?
However, when tokens are exposed outside your infrastructure and especially when exposed
to third-party clients, you should use opaque tokens instead of JWTs. Information in a JWT is
easy to decode and thus available to everyone. If JWT data is public, privacy becomes a
concern. You must ensure that no sensitive data ends up in the JWT's claims. What is more, if
you share JWTs with third-party clients, chances are that they will start depending on the data
in the JWT. It might become a liability, even if the data is not sensitive. Once integrators start
depending on the contents of a JWT, changing the token's claims could result in a breaking
change, requiring costly implementation upgrades in all third-party clients.
If you want to use opaque tokens externally but also benefit from JWTs in your internal
communication, you can use one of two approaches: the phantom token approach or the split
token approach. Both involve an API gateway in the process of translating an opaque token
into a JWT.
4. Use Scopes for Coarse-Grained Access Control
OAuth scopes limit the capabilities of an access token. If stolen client credentials have
limited scopes, an attacker will have much less power. Therefore, you should always issue
tokens with limited capabilities. Verification of token scopes can be done at the API gateway
to limit the malicious traffic reaching your API. You should use scopes during coarse-grained
access control. This control could include checking whether a request with a given access
token can query a given resource or verifying the client can use a given Content-Type.
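A hedged sketch of such a coarse-grained scope check (the space-delimited `scope` claim follows common OAuth 2.0 convention, but your server's token format may differ):

```python
def token_scopes(claims: dict) -> set[str]:
    """OAuth convention: the `scope` claim is a space-delimited string."""
    return set(claims.get("scope", "").split())

def gateway_allows(claims: dict, required_scope: str) -> bool:
    """Coarse-grained gateway check: drop requests whose access token
    lacks the scope required by the target resource."""
    return required_scope in token_scopes(claims)

claims = {"sub": "client-123", "scope": "orders:read profile:read"}
read_ok = gateway_allows(claims, "orders:read")
write_ok = gateway_allows(claims, "orders:write")
```

Here a read request passes, while a write request is rejected at the gateway before it ever reaches the API.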
5. Use Claims for Fine-Grained Access Control at the API Level
You should always implement fine-grained access control at the API level. This access
control complements any control done at the API gateway level, and should be architected so
that even if a malicious request slips through the gateway, the API will still reject it. This
practice safeguards against situations in which attackers bypass the gateway.
A fine-grained access control focuses on securing an API from a business perspective. The
API should verify whether the request can reach the given endpoint. It should also check
whether the caller has rights to the data and what information can be returned based on the
caller's identity (both for the client and user). The 2019 OWASP Top 10 API Security
Vulnerabilities lists broken object level authorization (BOLA) as the top API vulnerability, so
it's worth remembering this one.
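As a sketch of what object-level authorization might look like (the claim and resource fields are hypothetical), the API verifies ownership of the specific object, not just the scope:

```python
def can_access_order(claims: dict, order: dict) -> bool:
    """Object-level check: even with a valid token and the right scope,
    a caller may only read orders they own (guards against BOLA)."""
    if "orders:read" not in claims.get("scope", "").split():
        return False
    return order["owner"] == claims["sub"]

caller = {"sub": "user-7", "scope": "orders:read"}
own_order = {"id": 1, "owner": "user-7"}
foreign_order = {"id": 2, "owner": "user-9"}
allowed = can_access_order(caller, own_order)
denied = can_access_order(caller, foreign_order)
```

A request iterating over order IDs with a valid token still fails on objects it does not own, which is exactly the attack pattern BOLA describes.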
6. Trust No One
Zero-trust is not just a buzzword — your API should limit trust to incoming traffic. Period.
One of the steps toward building zero-trust is using HTTPS for all API traffic. If possible, use
HTTPS internally so that traffic between services cannot be sniffed.
Your services should always verify incoming JWTs, even if they are transformed from an
opaque token by the gateway. This again helps to mitigate situations where a request manages
to bypass your gateway, preventing a malicious actor from operating inside your company or
infrastructure.
Zero-trust also means that your services should deny access by default. Then use claims-
based access control to allow access to requests that fulfill concrete access control policies.
7. Create or Reuse Libraries for JWT Validation
Proper JWT validation is crucial for the security of your APIs. Yet, if every team implements
their own JWT validation solution, you risk increasing overall system vulnerability. Mistakes
are more common, and it's difficult to fix bugs.
Instead, create a company-wide solution for JWT validation, preferably based on libraries
available on the market and tailored to your API's needs. Standardizing a company-wide JWT
validation process will help guarantee the same level of security across all your endpoints.
When issues arise, teams can resolve them more quickly. For security-sensitive tasks like
JWT validation, quick threat resolution is incredibly important.
8. Do Not Mix Authentication Methods
Do not mix authentication methods for the same resources. Authentication methods can have
different security levels. For example, consider Basic authentication versus multi-factor
authentication. If you have a resource secured with a higher level of trust, like a JWT with
limited scopes, but allow access with a lower level of trust, this can lead to API abuse. In
some cases, this could be a significant security risk.
9. Protect All APIs
Do not leave any of your APIs unprotected. Even internal APIs should have protections
implemented. This way, you're sure that the API is protected from any threat from inside your
organization.
APIs are commonly created for internal use only and made available to the public later on. In
such scenarios, proper API security tends to be overlooked. When published externally, the
API becomes vulnerable to attacks.
Remember that security by obscurity is not recommended. Just because you create a
complicated name for an endpoint or use an obscure Content-Type does not mean the API
will be secure. It's only a matter of time before someone finds the endpoint and abuses it.
10. Issue JWTs for Internal Clients Inside Your Network
If you have internal clients operating only inside your network, you can have your OAuth
server issue JWTs for such clients instead of opaque tokens. This will avoid unnecessary
token translations. However, you should only apply this strategy if the JWTs do not leave
your network. If you have external clients, or if the tokens are used externally, you should
hide them behind an opaque token, as noted before.
11. Use JSON Web Key Sets for Key Distribution
To verify a JWT's integrity, an API must access a public key (if the JWT is asymmetrically
signed, as recommended). You can accomplish this in a couple of ways: you can hardcode the
key's value or query some endpoint at your service startup and cache the result.
The recommended method is to obtain a key from a JWKS endpoint exposed by the OAuth
server. The API should cache the downloaded key to limit unnecessary traffic but should
query the JWKS endpoint again whenever it finds a signing key it doesn't know.
This allows for a simple key rotation, which the OAuth server can handle on-demand without
impeding the API services. Using key sets instead of keys also allows a seamless key rotation
for the clients. The OAuth server can begin issuing new tokens signed with a new key but
existing tokens will remain valid as long as the old public key is part of the key set.
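A sketch of that caching-and-refresh behavior (the fetcher is faked here with placeholder key data; a real one would perform an HTTPS GET against the OAuth server's JWKS endpoint):

```python
class JwksCache:
    """Cache signing keys from a JWKS endpoint; on an unknown key ID (kid),
    re-fetch the key set once so on-demand rotation works without restarts."""

    def __init__(self, fetch_jwks):
        # fetch_jwks: callable returning the parsed JWKS document,
        # e.g. an HTTPS GET of the server's /.well-known/jwks.json
        self.fetch_jwks = fetch_jwks
        self.keys = {}

    def get_key(self, kid: str) -> dict:
        if kid not in self.keys:
            # Unknown kid: the OAuth server may have rotated keys, so refresh.
            self.keys = {k["kid"]: k for k in self.fetch_jwks()["keys"]}
        if kid not in self.keys:
            raise KeyError(f"unknown signing key: {kid}")
        return self.keys[kid]

# Simulated IdP response; a real fetcher would perform an HTTP request.
def fake_fetch() -> dict:
    return {"keys": [{"kid": "2024-05", "kty": "RSA", "e": "AQAB", "n": "modulus-elided"}]}

cache = JwksCache(fake_fetch)
signing_key = cache.get_key("2024-05")
```

Because the cache only refreshes when it sees an unfamiliar `kid`, routine requests stay cheap while rotated keys are picked up automatically.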
12. Always Audit
Maintaining high standards for your APIs, both from a security and design point of view, is
not a trivial task. Therefore, consider splitting responsibility between different groups of
people and having other teams audit your APIs.
There are different approaches to setting up governance over your API. You could have a
dedicated team of API experts review the design and security aspects, or create a guild of API
experts picked from different groups to offer guidance. However you organize governance,
ensure you always have additional eyes checking your APIs.
13. Manage Claims Centrally
As defined by the JWT specification, a claim is a piece of information asserted about a
subject. It's good practice to have these claims asserted by a centralized OAuth server — this
makes it easier to control which claims appear in your tokens. This is important for privacy
and security reasons.
Whether calling internal or external services, all APIs should only use claims asserted by the
centralized server and should not add additional information nor issue tokens. Managing
claims centrally allows you to control the information flowing between the APIs to ensure
they do not leak excess data.
14. Abuse Doesn't Have to Be a Breach
Just because your API security isn't breached doesn't mean that everything is fine. You should
gather metrics and log usage of your API to catch any unwanted behavior. Watch out for
requests iterating over your IDs, requests with unexpected headers or data, customers creating
many clients to circumvent rate limits, and other suspicious cues. Losing data due to API
abuse can be just as harmful to your business as a hacker breaking through the security.
15. Keep Your Tokens Secure
Although not concerning APIs directly, an important part of a secure API is how securely
access tokens are handled by clients. If access tokens can easily be stolen, they can then be
used to steal data from an API. Mobile and backend clients can store those tokens fairly
securely, but that is not the case with browser-based applications. Developers of single-page
applications often wonder how to securely keep tokens in the browser, which should be treated
as a hostile environment. The OAuth for Browser-Based Apps specification currently
recommends keeping the tokens out of the browser altogether.
Micro-Service
A microservice is a very small, independent process that communicates and returns messages
through mechanisms like Thrift, HTTPS, and REST APIs. Basically, a microservices
architecture is a combination of many small processes that together form an application. In
a microservices architecture, each process may be represented by multiple containers. Each
individual service is designed for a specific function, and all the services together build
an application.
How To Secure Micro-services
Now let's discuss the actual point: securing a microservices architecture. Nowadays, many
applications are built on external services, and with this greater demand there is a need
for quality software development and architecture design. System administrators, database
administrators, cloud solution providers, and API gateways are the basic services used by an
application. Microservice security mainly focuses on designing secure communication between
all the services implemented by the application.
(1) Password Complexity :
Password complexity is a very important security feature.
The mechanism implemented by the developer must enforce the creation of a strong password
during account signup. All password characters must be checked to reject weak passwords
consisting only of letters or only of numbers.
(2) Authentication Mechanism :
Sometimes authentication is not treated as a high priority when implementing security
features for microservices. It is important to lock users' accounts after a number of failed
login attempts, and rate limiting must be implemented on login to prevent brute-force
attacks. If the application uses any external service, all APIs must be implemented with an
authentication token to prevent tampering with API endpoint communication. Use multi-factor
authentication in microservices to avoid username enumeration during login and password
reset.
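A simplified lockout sketch (a real implementation would add lockout expiry, exponential backoff, and persistent storage rather than an in-memory dict):

```python
class LoginGuard:
    """Lock an account after `max_failures` consecutive bad logins,
    a simple mitigation against brute-force attacks."""

    def __init__(self, max_failures: int = 5):
        self.max_failures = max_failures
        self.failures = {}  # username -> consecutive failure count

    def is_locked(self, username: str) -> bool:
        return self.failures.get(username, 0) >= self.max_failures

    def record_failure(self, username: str) -> None:
        self.failures[username] = self.failures.get(username, 0) + 1

    def record_success(self, username: str) -> None:
        # A successful login resets the counter.
        self.failures.pop(username, None)

guard = LoginGuard(max_failures=3)
for _ in range(3):
    guard.record_failure("alice")
```

After three consecutive failures the account is locked, while other users remain unaffected.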
(3) Authentication Between Two Services :
A man-in-the-middle attack may occur during service-to-service communication. Always use
HTTPS instead of HTTP: HTTPS ensures data encryption between the two services and also
provides additional protection against external entities intercepting the traffic between
client and server.
It is difficult to manage SSL certificates on servers in multi-machine scenarios, and it is
very complex to issue certificates for every device. A secure alternative is HMAC over
HTTPS: a hash-based message authentication code is used to sign each request.
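A sketch of HMAC request signing between two services (the shared secret and the choice of signed fields are illustrative):

```python
import hashlib
import hmac

SERVICE_SECRET = b"hypothetical-service-to-service-key"

def sign_request(method: str, path: str, body: bytes) -> str:
    """Sign the parts of the request the receiving service must trust."""
    message = b"\n".join([method.encode(), path.encode(), body])
    return hmac.new(SERVICE_SECRET, message, hashlib.sha256).hexdigest()

def verify_request(method: str, path: str, body: bytes, signature: str) -> bool:
    """The receiver recomputes the signature and compares in constant time."""
    return hmac.compare_digest(sign_request(method, path, body), signature)

signature = sign_request("POST", "/v1/orders", b'{"item": 42}')
```

The caller would send the signature in a header alongside the request; any modification to the method, path, or body in transit makes verification fail.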
(4) Securing Data at Rest :
It is very important to secure data that is not currently in use. If the environment and
network are secure, we may assume attackers cannot reach stored data, but this is not the
case: there are many examples of data breaches in otherwise protected systems caused only by
weak protection mechanisms around stored data. All endpoints where data is stored must be
non-public. Also, take care of API keys during development: all API keys must be kept
secret, since leakage of a private API key can expose sensitive data publicly. Don't expose
any sensitive data or endpoints in the source code.
(5) Penetration Testing :
It is always good practice to consider security throughout the software development life
cycle itself. In practice this is not always done, so it is important to perform penetration
testing on the application after the final release. OWASP publishes important attack
vectors; always try these attacks during penetration testing of the application. Some of the
important attack vectors are mentioned below.
SQL Injection.
Cross-Site Scripting (XSS).
Sensitive Information Disclosure.
Broken Authentication and Authorization.
Broken Access Control.
Service Mesh
A service mesh is a software layer that handles all communication between services in
applications. This layer is composed of containerized microservices. As applications scale
and the number of microservices increases, it becomes challenging to monitor the
performance of the services.
Introduction
One common argument that people bring against the microservices architecture is that it
increases the security risk of the application by expanding the risk surface. This is true in the
sense that when we have more microservices exposing functionality to external consumers,
we have to protect each service from external consumers. In a monolithic application, we
could have most of these functionalities as internal programs which are not exposed to
external consumers.
But that does not need to stop us from implementing microservices. Let us discuss different
approaches to implement security for microservices.
Securing microservices
In a typical microservices-based application, we can identify 2 levels of security that need to
be implemented as depicted in the below figure.

Figure: Securing microservices from external and internal consumers


As per the above diagram, we need to implement security to
 control access from external consumers (north-south traffic)
 control access from other services (east-west traffic)
The preceding figure uses a messaging platform to implement inter-service communication.
Hence the security of east-west traffic is implemented between the microservice and the
messaging platform. In a scenario where you don’t have such a messaging platform, you need
to implement security at the microservices layer. In this article, we assume that your
architecture contains a messaging platform for inter-service communication.
Securing north-south traffic (external consumers)
Most of the enterprises focus more on securing the microservices from external consumers
since that is the more vulnerable path that can be exploited by the bad guys (hackers). We can
implement security for north-south traffic using different approaches. The below-mentioned 3
approaches are common in the enterprise world.
1. Implement security at each microservice level
2. Implement security using a sidecar
3. Implement security using a shared gateway
Let’s take a look at what each of these approaches means.
Implement security at each microservice level
Given the polyglot nature of microservices development, different teams can come up with
their own programming languages and frameworks to implement security. In such a scenario,
we can follow this approach where individual microservices implement security for each
service. It is important to adhere to a common security standard such as OAuth 2.0 when
implementing security since that would make the life of the clients a lot easier. This approach
is depicted in the below figure.
Figure: Implement microservices security at each service level
As per the preceding figure, microservices A, B, and C are implemented using different
programming languages and each microservice has implemented security based on the
OAuth2 standard. The clients will communicate with these services using a common
approach (JWT token or opaque token) and a common Identity Provider (IDP) is used to
validate the security credentials presented by the clients. If the token type is a self-contained
JWT token, microservices will validate the token by itself without contacting the IDP. This
approach works fine and lets the teams decide on the best technology to implement security.
But the downside of this approach is that each team is spending time on implementing the
same functionality.
Implement security using a sidecar
This approach is a slight improvement from the previous approach. Here we use an external
framework such as Istio or Open Policy Agent (OPA) to implement the security for each
microservice and this security component (agent) runs alongside the microservice as a
sidecar. This approach is depicted in the below figure.
Figure: Implement microservices security using a sidecar
With this approach, microservices can be implemented in a polyglot manner and the security
functionality is handled through an external component (sidecar) which can be configured
independently from the microservice itself. This allows the user to change security
configurations without changing the source code of the microservice. It also keeps the overall
architecture as independent and microservices friendly as possible. The security validation
can be done within the sidecar itself or using a separate service (IDP).
Implement security using a shared gateway
Another approach to implement security for microservices is to use a shared component to
implement security for individual microservices. This shared component can be an API
gateway or a security gateway that will sit in front of the microservices layer. Every call to
the microservice will go through this component and be validated for the credentials before
reaching out to the microservice. This approach is depicted in the below figure.

Figure: Implement microservices security using a shared gateway


As depicted in the preceding figure, a shared, monolithic API gateway will be used to
implement the security for microservices. In the API gateway, a proxy service is created to
represent each microservice and to implement security on its behalf. The microservices
themselves need not be touched when security configurations change.
drawback of this approach is that it introduces a monolithic component to the overall
architecture. In this approach, the API gateway will communicate with the IDP to validate the
credentials of the client based on the token or authentication approach used to implement
security.
There are more secure approaches with this architecture in which both the API gateway as
well as individual microservices are secured with OAuth 2.0 based security. In such a
scenario, the API gateway can validate the client request and generate the required tokens
according to the backend authentication or pass the original token to the backend directly if it
is valid for backend authentication.
Securing east-west traffic
The next step of securing microservices is to secure the communication between
microservices (inter-service communication). As mentioned at the beginning, we are using a
messaging platform to decouple the inter-service communication for the betterment of the
overall architecture. Let’s take a look at how we can implement security for inter-service
communication within a microservices architecture.
When compared to external consumer traffic (north-south), the security of inter-service
communication can be considered differently. Since both the consumer and the provider
reside in an internal secured network, sometimes it is fine to apply only the required level of
security rather than applying hard security for this. But it depends on the security
requirements of the enterprise platform and the overall security standards. We can implement
security for east-west traffic with the following approaches.
1. Transport layer security
2. Transport layer security with message layer security using authentication
3. Transport layer security with message layer security using authentication and
authorization
These 3 options provide an increasingly secure approach to implementing security for
microservices.
Transport layer security
This is the most basic of the three approaches, with TLS enabled for communication between
the microservice and the messaging platform. If both the messaging platform and the
microservice reside in a secured LAN, we can use this approach. The below figure depicts
this concept.
Figure: Implement inter-service security using TLS
As per the preceding figure, the microservices will communicate with the messaging platform
over TLS, so the communication is encrypted in transit. No third party will be able to look
at the data, which is the maximum level of security this approach provides. The services
will communicate with each other through the
messaging platform (e.g. Kafka or NATS).
Transport layer security with authentication
If the previous approach is not secure enough for your enterprise, you can add
authentication to the communication between the microservices and the messaging platform.
This ensures that only microservices with valid credentials can communicate with the
messaging platform and, through it, with the other microservices. This approach is depicted in
the figure below.
Figure: Implement inter-service security using TLS and MLS
As depicted in the preceding figure, each microservice is provisioned with credentials that the
messaging platform uses to authenticate its connections, providing an additional layer of security.
Depending on the messaging platform you choose, you can implement authentication using a
mechanism such as OAuth 2.0, basic authentication, or token-based
authentication. One important thing to note here is that you don’t necessarily need to
use the same security mechanism for east-west traffic that you used for north-south traffic, since
those are two separate communication links.
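As one possible sketch, a service connecting to a Kafka-style broker could carry its credentials in a SASL-over-TLS configuration. The broker and credential values below are hypothetical placeholders; the key names mirror common Kafka client settings, but check your platform's client documentation for the exact parameters.

```python
def sasl_config(service_id, secret):
    """Build a client config that combines TLS with service authentication.

    `service_id` and `secret` are the credentials issued to this
    microservice for connecting to the messaging platform.
    """
    return {
        "bootstrap_servers": "broker.internal:9093",  # illustrative address
        "security_protocol": "SASL_SSL",              # TLS plus authentication
        "sasl_mechanism": "PLAIN",
        "sasl_plain_username": service_id,
        "sasl_plain_password": secret,
    }
```

A producer or consumer would then be constructed with this configuration, so the broker rejects any service that cannot present valid credentials.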
Transport layer security with authentication and authorization
The most secure approach to inter-service communication is to control
access using an authorization scheme that defines which microservices can access which
other microservices and channels. The below figure depicts this idea.
Figure: Implement inter-service security with TLS and MLS with authz
With this approach, each microservice is equipped with credentials that carry both the
identity of the service and its entitlements to access data from other services. A messaging
platform such as NATS uses subjects, together with wildcard subscriptions and subject-level
permissions, to control who can access what. This is the highest level of security we can implement for inter-
service communication.
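To make the idea concrete, the sketch below implements NATS-style subject matching (`*` matches one token, `>` matches the remainder) and checks a service's publish permissions against it. The permission table and service names are invented for illustration.

```python
def subject_matches(pattern, subject):
    """NATS-style match: '*' matches one token, '>' matches the rest."""
    p_tokens, s_tokens = pattern.split("."), subject.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":
            return True                    # '>' swallows all remaining tokens
        if i >= len(s_tokens):
            return False                   # pattern longer than subject
        if p != "*" and p != s_tokens[i]:
            return False                   # literal token mismatch
    return len(p_tokens) == len(s_tokens)

def can_publish(permissions, service, subject):
    """Allow only if some granted pattern covers the requested subject."""
    return any(subject_matches(p, subject)
               for p in permissions.get(service, []))
```

With entitlements like `{"orders-svc": ["orders.*", "audit.>"]}`, the orders service can publish to `orders.created` but not to a billing channel, which is exactly the authorization layer the figure describes.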
Securing Incoming Requests:
1. Use HTTPS
 Ensure all incoming requests are served over HTTPS to encrypt the data in transit.
HTTPS prevents man-in-the-middle attacks and ensures the integrity of data sent
between the client and server.
2. Input Validation
 Sanitize inputs: Ensure that any user input, query parameters, or data submitted via
forms is sanitized to prevent attacks like SQL injection or XSS.
 Use whitelists: Limit accepted inputs to known good values using whitelisting rather
than blacklisting.
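A minimal whitelist-based validator might look like the sketch below; the allowed field names and the username pattern are assumptions chosen for illustration, and anything not explicitly allowed is rejected.

```python
import re

ALLOWED_SORT_FIELDS = {"name", "created_at", "price"}   # whitelist, not blacklist
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")       # known-good shape only

def validate_query(sort_field, username):
    """Accept only values matching the whitelist; reject everything else."""
    return (sort_field in ALLOWED_SORT_FIELDS
            and USERNAME_RE.fullmatch(username) is not None)
```

Because the check lists what is allowed rather than what is forbidden, payloads like `name; DROP TABLE users` or `<script>` fail automatically without needing a blacklist of attack strings.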
3. Authentication and Authorization
 Token-based authentication: Implement token-based systems like JWT (JSON Web
Token) to validate incoming requests.
 Session management: Secure sessions with strong session tokens, ensuring that
session data is not predictable or accessible by unauthorized users.
 Role-based access control: Ensure that users are authenticated and have the correct
permissions to access certain resources or perform specific actions.
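To show what token-based validation involves, here is a stdlib-only sketch of HS256 JWT signing and verification; production code would normally use a maintained library (e.g. PyJWT) rather than hand-rolling this, and the secret handling here is simplified.

```python
import base64, hashlib, hmac, json, time

def _b64(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload, secret):
    """Produce a compact HS256 JWT: header.payload.signature."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64(sig)}"

def verify_jwt(token, secret):
    """Return the claims if signature and expiry check out, else None."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64(expected), sig):
        return None                                   # signature mismatch: reject
    claims = json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
    if claims.get("exp", 0) < time.time():
        return None                                   # expired token: reject
    return claims
```

The server validates every incoming request's token this way, so authentication never depends on client-side state, and role claims inside the payload can drive role-based access checks.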
4. Rate Limiting and Throttling
 Apply rate limiting to prevent brute-force attacks or denial of service (DoS) attacks.
By limiting the number of requests a client can make in a given time, you reduce the
risk of abuse.
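One way to enforce this is a sliding-window limiter kept in memory per client; the sketch below is illustrative, and a real deployment behind multiple servers would back it with shared storage such as Redis.

```python
import time
from collections import defaultdict

class RateLimiter:
    """Sliding-window limiter: at most `limit` requests per `window` seconds."""

    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.hits = defaultdict(list)          # client_id -> request timestamps

    def allow(self, client_id):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        recent = [t for t in self.hits[client_id] if now - t < self.window]
        self.hits[client_id] = recent
        if len(recent) >= self.limit:
            return False                       # over the limit: reject request
        recent.append(now)
        return True
```

The handler calls `allow(client_ip)` before doing any work and responds with HTTP 429 when it returns False, which blunts both brute-force login attempts and simple DoS floods.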
5. Cross-Origin Resource Sharing (CORS)
 Configure CORS properly to ensure only trusted domains are allowed to interact with
your API. This reduces the risk of cross-origin attacks by controlling which domains
are permitted to send requests.
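A strict CORS policy can be reduced to an explicit allow-list check like the sketch below; the trusted origin is a made-up example, and frameworks usually provide this via middleware rather than hand-written code.

```python
ALLOWED_ORIGINS = {"https://app.example.com"}   # hypothetical trusted domain

def cors_headers(origin):
    """Echo the Origin back only when it is on the trusted list."""
    if origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": origin, "Vary": "Origin"}
    return {}   # untrusted origin: no CORS headers, so the browser blocks the read
```

Returning no `Access-Control-Allow-Origin` header at all (rather than `*`) is what keeps arbitrary sites from reading API responses in the user's browser.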
6. CSRF Protection
 Implement anti-CSRF (Cross-Site Request Forgery) tokens to ensure that actions
performed by authenticated users are genuinely authorized. This helps prevent
unauthorized commands from being executed on behalf of trusted users.
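A common pattern is to derive the anti-CSRF token from the session with an HMAC, as in this sketch; the per-deployment secret here is generated at import time purely for illustration.

```python
import hashlib, hmac, secrets

SECRET = secrets.token_bytes(32)   # per-deployment signing key (illustrative)

def issue_csrf_token(session_id):
    # Bind the token to the session so it cannot be reused across users
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf_token(session_id, token):
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, token)   # constant-time comparison
```

The server embeds the token in each rendered form and rejects any state-changing request whose submitted token does not match the one derived from the requester's session.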
7. Use Security Headers
 Implement security headers such as:
o Content-Security-Policy: Prevents XSS by restricting what resources can be
loaded.
o X-Content-Type-Options: Prevents the browser from interpreting files as a
different MIME type.
o X-Frame-Options: Protects against clickjacking by controlling whether your
content can be displayed in an iframe.
o Strict-Transport-Security (HSTS): Forces browsers to use HTTPS and
prevents them from using insecure connections.
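All four headers above can be attached in one place, for example in a small response hook like this sketch; the specific policy values shown are common defaults, not requirements.

```python
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",     # restrict script/resource sources
    "X-Content-Type-Options": "nosniff",                 # no MIME-type guessing
    "X-Frame-Options": "DENY",                           # no framing: anti-clickjacking
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}

def apply_security_headers(response_headers):
    """Merge the standard security headers into an outgoing response."""
    response_headers.update(SECURITY_HEADERS)
    return response_headers
```

Wiring this into the framework's after-request hook guarantees every response carries the headers, instead of relying on each handler to remember them.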
8. Logging and Monitoring
 Log incoming requests and monitor for unusual patterns (e.g., high rates of requests
from a single IP or malicious payloads).
 Use monitoring tools that can alert you of potential intrusions or malicious activities
in real-time.
9. Use Web Application Firewall (WAF)
 A WAF helps protect against common attacks like SQL injection, XSS, and more. It
inspects incoming traffic and can block or flag requests that contain malicious
payloads.
10. Input Size Limits
 Set size limits on incoming data (e.g., request body, headers, etc.) to prevent attackers
from sending overly large payloads, which could lead to buffer overflow attacks or
denial of service.
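A request-size gate can be checked before the body is ever parsed, as in the sketch below; the byte limits are arbitrary examples to be tuned per endpoint.

```python
MAX_BODY_BYTES = 1_000_000     # example body limit; tune per endpoint
MAX_HEADER_BYTES = 8_192       # example combined header limit

def request_size_ok(headers, body):
    """Reject requests whose headers or body exceed the configured limits."""
    header_bytes = sum(len(k) + len(v) for k, v in headers.items())
    return header_bytes <= MAX_HEADER_BYTES and len(body) <= MAX_BODY_BYTES
```

Checking the declared `Content-Length` first, and then enforcing the limit while streaming the body, stops an attacker from exhausting memory with an oversized payload.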
11. Security Patches
 Regularly update server software, frameworks, and libraries to ensure that
vulnerabilities in third-party software are addressed.
1. How can I prevent SQL injection in incoming web requests?
2. What strategies can I use to prevent Cross-Site Scripting (XSS) attacks in
incoming requests?
3. How can I secure file uploads in web applications to prevent malicious file
requests?
4. How do I defend against Cross-Site Request Forgery (CSRF) in incoming requests?
5. What is the role of JWT (JSON Web Tokens) in securing API requests?
6. How do I ensure rate limiting and prevent DoS attacks on my web service?
7. What should be my strategy for handling sensitive information in incoming
requests (e.g., passwords, credit card details)?
8. How do I prevent replay attacks in incoming API requests?
1. How can I prevent SQL injection in incoming web requests?
 SQL injection occurs when attackers insert or manipulate SQL queries via web
inputs. Preventing SQL injection involves using prepared statements and
parameterized queries, input validation, and ORM (Object-Relational Mapping)
tools.
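The parameterized-query idea can be shown with the standard library's `sqlite3` module; the table and data below are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

def find_user(name):
    # The '?' placeholder makes the driver treat `name` strictly as data,
    # so input like "alice' OR '1'='1" can never change the SQL structure.
    cur = conn.execute("SELECT email FROM users WHERE name = ?", (name,))
    return cur.fetchone()
```

The same placeholder style (`?`, `%s`, or named parameters, depending on the driver) is what ORMs use under the hood, which is why they provide similar protection by default.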
2. What strategies can I use to prevent Cross-Site Scripting (XSS) attacks in incoming
requests?
 XSS attacks inject malicious scripts into web pages. To mitigate these risks, input
sanitization and output encoding are critical, along with the use of Content
Security Policy (CSP) headers to limit the sources from which scripts can be
loaded.
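Output encoding is the core defense, and in Python the standard library's `html.escape` is enough to sketch it; the surrounding `<p>` template is just an example.

```python
import html

def render_comment(user_input):
    # Encode on output: any embedded markup is displayed as text, not executed
    return f"<p>{html.escape(user_input)}</p>"
```

A payload such as `<script>alert(1)</script>` is rendered as harmless text, and a CSP header like `Content-Security-Policy: default-src 'self'` adds a second layer if an encoding step is ever missed.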
3. How can I secure file uploads in web applications to prevent malicious file requests?
 When allowing file uploads, set strict file type restrictions, validate file extensions
and MIME types, limit file sizes, and store uploads in secure directories (outside
the web root). You can also scan files for malware before processing them.
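The extension and size checks can be combined into one gate, as in this sketch; the allowed extensions and limit are example choices, and a real system would also verify MIME type and scan the content.

```python
import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}   # example whitelist
MAX_UPLOAD_BYTES = 5 * 1024 * 1024              # example 5 MB cap

def upload_allowed(filename, size):
    """Accept only whitelisted extensions within the size limit."""
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS and 0 < size <= MAX_UPLOAD_BYTES
```

Accepted files should then be stored under a server-generated name outside the web root, so even a file that slips through the checks can never be executed by requesting its URL.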
4. How do I defend against Cross-Site Request Forgery (CSRF) in incoming requests?
 To prevent CSRF, implement anti-CSRF tokens that ensure each request comes
from a trusted user. Additionally, ensure CORS headers are properly configured,
and rely on SameSite cookies for authentication tokens.
5. What is the role of JWT (JSON Web Tokens) in securing API requests?
 JWTs are widely used to securely authenticate and authorize API requests. JWTs
are signed tokens that can be validated server-side to ensure that the request is
coming from a legitimate user. Security concerns include token expiration, secure
storage of JWTs, and ensuring the use of strong encryption algorithms for
signing tokens.
6. How do I ensure rate limiting and prevent DoS attacks on my web service?
 Rate limiting can be enforced at the API level to ensure that clients don't
overwhelm the server. Tools like Redis or rate-limiting middleware can track
request counts and block or throttle clients who exceed safe thresholds.
7. What should be my strategy for handling sensitive information in incoming requests
(e.g., passwords, credit card details)?
 Ensure sensitive data like passwords and credit card information are transmitted
securely using HTTPS and are stored encrypted. Implement strong hashing
algorithms (e.g., bcrypt) for password storage and use tokenization for handling
payment data.
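For password storage specifically, the standard library's PBKDF2 makes a reasonable sketch when bcrypt is unavailable; the iteration count here is a plausible modern value, not a mandated one.

```python
import hashlib, hmac, os

ITERATIONS = 600_000   # example work factor; raise as hardware improves

def hash_password(password, salt=None):
    """Return (salt, digest); a fresh random salt per password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison
```

Only the salt and digest are persisted, never the password itself, and the slow, salted hash makes offline cracking of a leaked database far more expensive.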
8. How do I prevent replay attacks in incoming API requests?
 Replay attacks occur when attackers intercept and replay legitimate requests. To
defend against these, you can include timestamps in your requests and reject old
or duplicate requests. Additionally, nonce (number used once) values and
expiring tokens can be implemented to prevent this issue.
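The timestamp-plus-nonce idea can be sketched in a few lines; the in-memory nonce set and the five-minute skew window are illustrative, and a clustered service would keep seen nonces in shared storage with expiry.

```python
import time

SEEN_NONCES = set()   # in production: shared store with TTL eviction
MAX_SKEW = 300        # seconds a request timestamp may differ from now

def accept_request(nonce, timestamp):
    """Reject requests that are stale or whose nonce was already used."""
    if abs(time.time() - timestamp) > MAX_SKEW:
        return False                 # too old (or from the future): reject
    if nonce in SEEN_NONCES:
        return False                 # nonce already processed: a replay
    SEEN_NONCES.add(nonce)
    return True
```

Because every legitimate client sends a fresh nonce, a captured request replayed later fails both checks: its nonce is already recorded, and eventually its timestamp falls outside the window.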