CompTIA Security+ (SY0-601 & SY0-701) Exam Study Guide
● integrity - this ensures that data has not been tampered with or altered in any way, with the use of hashing, checksums, etc. (a short example follows this list).
Black hat hackers and cyber criminals aim for the DAD triad (disclosure, alteration, deniability).
● alteration - this means data has been compromised or tampered with. This can be attained by malware, viruses and attacks like SQL injection.
● deniability - this means data is not made available to those who need it, with the use of attacks like DoS and DDoS as well as ransomware.
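As a quick illustration of the hashing mentioned above (a minimal Python sketch; the file name is hypothetical):
import hashlib

# compute a SHA-256 digest of a file; any change to the file's contents changes the digest
with open("update.iso", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print(digest)   # compare against the value published by the vendor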
Managers may have responsibility for a domain such as building control, ICT or even accounting.
Unlike the NIST framework, ISO 27001 must be purchased. ISO 27001 is part of an overall 27000 series of information security standards, also known as 27K.
ISO 31K - this is an overall framework for enterprise risk management (ERM). ERM considers risks and opportunities beyond cybersecurity by including financial, customer service and legal liability factors.
SOC 3 - a less detailed report certifying compliance with SOC 2. SOC 3 reports can be freely distributed.
The Center for Internet Security (CIS) is a non-profit organization that publishes the well-known CIS Critical Security Controls.
Operating system (OS) best practice configuration lists the settings and controls that should be applied for a computing platform to work in defined roles such as workstation, server, network switch/router, etc.
Most vendors will provide guides, templates and tools for configuring and validating the deployment of network appliances and operating systems; these configurations vary not only by vendor but by device and version as well.
Application Servers
Most application architectures use a client/server model, which means part of the application is a client software program installed and run on separate hardware from the server application code.
Attacks can therefore be directed at the client, the server or the network channel between them.
Due diligence is a legal term meaning that responsible persons have not been negligent in discharging their duties.
Compliance issues are complicated by the fact that laws derive from different sources. E.g. the GDPR does not apply to American data subjects, but it does apply to American companies that collect or process the personal data of people in EU countries.
In the US there are federal laws such as the Gramm-Leach-Bliley Act (GLBA) for financial services and the Health Insurance Portability and Accountability Act (HIPAA).
Threats can exist without risks, but a risk needs an associated threat to exist.
The path or tool used by a malicious threat actor can be referred to as the attack vector.
Risks are often measured based on the probability that an event might occur as well as the impact of the event on the business.
Risks are event focused (the database server goes down) while threats focus on intentions (a hacker wants to take down the database server).
The attack vector is the path that a threat actor uses to gain access to a secure
system and can include
● Direct Access
● Removable Media
● Email
● Remote & Wireless
● Supply Chain
● Web & Social Media
● Cloud
threat data - computer data that can correlate events observed on a customer's own networks and logs with known TTPs and threat actor indicators.
Threat data can be packaged as feeds that integrate with a security information and event management (SIEM) platform.
These feeds are usually described as cyber threat intelligence (CTI) data.
Threat intelligence platforms and feeds are supplied as one of four different commercial models.
Strictly speaking, an indicator of compromise (IoC) is evidence of an attack that was successful. The term indicator of attack (IoA) is sometimes also used for evidence of an intrusion attempt in progress.
Automated Indicator Sharing (AIS) - a service offered by the DHS for companies to participate in threat intelligence sharing. AIS is based on the STIX and TAXII standards and protocols.
Threat map - a threat map is an animated graphic showing the source, target and type of attacks detected by a CTI platform.
Predictive analysis - this refers to when a system can anticipate an attack and possibly identify the threat actor before the attack is fully realized.
Section 3 - Performing Security Assessments
An IP scanner performs host discovery and identifies how the hosts are connected together in an internetwork.
The Nmap Security Scanner is one of the most popular open-source IP scanners. It can use diverse methods for host discovery and is available for Windows, Linux and macOS.
● TCP SYN (-sS) - fast technique (half-open) where the scanning host requests a connection without acknowledging it. The target's response to the scan's SYN packet identifies the port state.
● UDP scans (-sU) - scan UDP ports; the scanner needs to wait for a response to determine the port state.
● Port range (-p) - by default, Nmap scans 1,000 commonly used ports; the -p argument can be used to specify a port range (see the example commands below).
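The commands below are illustrative only (the target subnet, host and ports are assumptions, not taken from these notes):
nmap -sS 10.1.0.0/24
nmap -sU -p 53,161 10.1.0.1
nmap -p 1-1024 10.1.0.1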
Nmap can also be used for fingerprinting, which is the process of discovering detailed information about the services running on a particular host:
● Protocols
● Application name & version
● OS type and version
● Device type
Netstat - shows the state of TCP/UDP ports on the local machine. It can be used on both Windows and Linux.
You may also be able to identify suspect remote connections to services on the local host or from the host to remote IP addresses.
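Typical invocations (illustrative, not from the original notes):
netstat -ano        (Windows - all connections, numeric addresses, with owning process ID)
netstat -tulpn      (Linux - listening TCP/UDP ports with owning process)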
Nslookup/dig - query name records for a given domain using a particular DNS resolver under Windows (nslookup) or Linux (dig).
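For example (the queried domain and record type are illustrative):
nslookup -type=mx comptia.org
dig comptia.org MX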
The basic syntax of the command is tcpdump -i eth0, where eth0 is the interface to listen on.
The utility will then display captured packets until halted manually.
The following command filters frames to those with the source IP 10.1.0.100 and destination port 53 or 80:
tcpdump -i eth0 "src host 10.1.0.100 and (dst port 53 or dst port 80)"
Exploitation Frameworks
A remote access trojan (RAT) is malware that gives an adversary the means of remotely accessing the network.
The best known exploit framework is Metasploit. It's open source but also has Pro and Express commercial editions of the framework.
● fireELF
● RouterSploit
● Browser Exploitation Framework (BeEF)
● Zed Attack Proxy (ZAP)
● Pacu
Netcat
This is a tool used for testing connectivity, available on both Windows and Linux, that can also be used for port scanning and fingerprinting.
The following command attempts to connect to the HTTP port on a server and return any banner by sending the "HEAD" HTTP keyword.
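The exact command was not preserved in these notes; a typical form (the target address is an assumption) would be:
printf "HEAD / HTTP/1.0\r\n\r\n" | nc -v 10.1.0.1 80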
Data exfiltration is the methods and tools by which an attacker transfers data without authorization from the victim's system to an external network or media.
There are two main data risks when using third parties:
Data storage
baiting - dropping infected USB drives in the parking lot to tempt employees into plugging them in.
● Viruses & worms - spread without any authorization from the user by being concealed within the executable code of another process.
● Trojan - malware concealed within an installer package for software that appears to be legitimate.
● Potentially unwanted programs/applications (PUPs/PUAs) - software installed alongside a package selected by the user. Unlike a Trojan, their presence isn't necessarily malicious. They are sometimes referred to as grayware.
Other classifications are based on the payload delivered by the malware. The payload is the action performed by the malware:
● Spyware
● Rootkit
● Remote access trojan (RAT)
● Ransomware
4.3 - Computer Viruses
This is a type of malware designed to replicate and spread from computer to computer, usually by "infecting" executable applications or program code.
The term multipartite is used for viruses that use multiple vectors, and polymorphic for viruses that can dynamically change or obfuscate their code to evade detection. Viruses must infect a host file or media. An infected file can be distributed through any normal means - on a disk, on a network, as a download from a website or as an email attachment.
Computer Worms - this is a memory resident malware that can run without user
intervention and replicate over network resources. viruses need the user to perform
an action but worms can execute by exploiting a vulnerability in a process and
replicate themselves.
Worms can rapidly consume network bandwidth as the worm replicates and they
may be able to crash an operating system or server application. worms can also
carry a payload that may perform some other malicious action.
Fileless malware - as security controls got more advanced so did malware and this
new sophisticated modern type of malware is often referred to as fileless.
● Fileless malware does not write its code to disk. The malware uses memory-resident techniques to run its own process within a host process or dynamic link library (DLL). The malware may change registry values to achieve persistence.
● Fileless malware uses lightweight shellcode to achieve a backdoor mechanism on the host. The shellcode is easy to recompile in an obfuscated form to evade detection by scanners. It is then able to download additional packages or payloads to achieve the actor's objectives.
● Fileless malware may use "live off the land" techniques rather than compiled executables to evade detection. This means that the malware code uses legitimate system scripting tools like PowerShell to execute payload actions.
tracking cookies - can be used to record pages visited, the user’s ip address and
various other metadata.
backdoors & rats - a backdoor provides remote user admin control over a host and
bypasses any authentication method. A remote access trojan is a backdoor
malware that mimics the functionality of legitimate remote control programs but is
designed specifically to operate covertly. a group of bots under the same control of
the same malware are referred to as a botnet and can be manipulated by the
herder program.
ransomware - this type of malware tries to extort money from the victim by
encrypting the victim’s files and demanding payment. ransomware uses payment
methods such as wire transfer or cryptocurrency.
logic bombs - logic bombs are not always malware code. a typical example is a
disgruntled admin who leaves a scripted trap that runs in the event his or her
account is disabled or deleted. anti-malware software is unlikely to detect this kind
of script and this type of trap is also referred to as a mine.
● Antivirus Notifications
● Sandbox Execution
● Resource consumption - can be detected using Task Manager or the top Linux utility.
● File System
Along with observing how a process interacts with the file system, network activity
is one of the most reliable ways to identify malware.
● Hashing Algorithms
● Symmetric Encryption Cipher
● Asymmetric Encryption Cipher
Hashing Algorithms
5.2 - Encryption
An encryption algorithm is a type of cryptographic process that encodes data so
that it can be recovered or decrypted.
The use of a key with the encryption cipher ensures that decryption can only be performed by authorized persons.
In contrast to substitution ciphers, the units in a transposition cipher stay the same
in plaintext and ciphertext but their order is changed according to some
mechanism.
hlool
elwrd
Here the plaintext "helloworld" has been written downwards in columns of two letters, and the rows are then concatenated to form the ciphertext.
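A minimal Python sketch of this columnar transposition (the function name is my own):
def transpose(plaintext, rows=2):
    # write the plaintext downwards into columns of `rows` letters,
    # then read the grid off row by row
    grid = ["" for _ in range(rows)]
    for i, ch in enumerate(plaintext):
        grid[i % rows] += ch
    return "".join(grid)

print(transpose("helloworld"))  # hloolelwrd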
block cipher - the plaintext is divided into equal-size blocks (usually 128-bit). if
there is not enough data in the plaintext, it is padded to the correct size. e.g, a
1200-bit plaintext would be padded with an extra 80 bits to fit into 10 x 128-bit
blocks.
Mostly used for authentication and non-repudiation and for key agreement and exchange.
Ron Rivest, Adi Shamir and Leonard Adleman published the RSA cipher in 1977.
5.3 - Cryptographic Modes Of Operation & Cipher Suites
A mode of operation is a means of using a cipher within a product to achieve a security goal such as confidentiality or integrity.
Public key cryptography can authenticate a sender, while hashing can prove integrity.
Symmetric encryption can encrypt and decrypt large amounts of data, but it's difficult to distribute the secret key securely.
Asymmetric (PKC) encryption can distribute the key easily but cannot be used for large amounts of data.
Digital certificates - public keys are used and are freely available, but how can anyone trust the identity of the person or server issuing a public key?
A third party known as a certificate authority (CA) can validate the owner of the public key by issuing the subject with a certificate.
The process of issuing and verifying certificates is called public key infrastructure (PKI).
● Signature algorithm - used to assert the identity of the server's public key and facilitate authentication.
A developer can make tampering more difficult through obfuscation which is the
art of making a message difficult to understand. Cryptography is a very effective
way of obfuscating code but it also means the computer might not be able to
understand and execute the code.
A brute force attack will run through a combination of letters, numbers and symbols, while a dictionary attack creates hashes of common words and phrases.
Both attacks can be slowed down by adding a salt value when creating the hash.
Key stretching - this takes a key that's generated from a user password plus a random salt value and repeatedly converts it to a longer and more random key. This means the attacker will have to do extra processing for each possible key value, thus making the attack even slower.
This can be performed by using a particular software library to hash and save passwords when they are created. The Password-Based Key Derivation Function 2 (PBKDF2) is widely used for this purpose.
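A minimal sketch of key stretching with PBKDF2 using Python's hashlib (the password and iteration count are illustrative assumptions):
import hashlib, os

password = b"Pa$$w0rd"           # illustrative user password
salt = os.urandom(16)            # random salt, stored alongside the resulting hash
# many iterations of HMAC-SHA256 make every guess an attacker tries more expensive
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
print(key.hex())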
When you want others to send you confidential messages, you give them your
public key to encrypt the message and then you decrypt the message with your
private key.
When you want to authenticate yourself to others, you create a signature and sign it using your private key to encrypt it. You give others your public key to decrypt the signature.
The CA reviews the certificate and checks that the information is valid. If the request is accepted, the CA signs the certificate and sends it to the subject.
When certificates were first introduced, the common name (CN) attribute was used to identify the FQDN by which the server is accessed.
A wildcard domain such as *.comptia.org means that the certificate issued to the parent domain will be accepted as valid for all subdomains.
EKU field - can have the following values:
● Server Authentication
● Client Authentication
● Code Signing
● Email Protection
● Machine/Computer Certificates
● Email/User Certificates
● Code Signing Certificates
● Root Certificate
● Self-Signed Certificates
● Key Generation
● Certificate Generation
● Storage
● Revocation
● Expiration And Renewal
If the key used to decrypt data is lost or damaged, the encrypted data cannot be recovered unless a backup of the key exists. However, making too many backups can make it more difficult to keep the key secure.
Escrow means that something is held independently, which in terms of key management means a third party is trusted to store the key securely.
Certificates are issued with a limited duration set by the CA policy for the certificate type, e.g. a root certificate might have a 10-year expiry date while a web server certificate might be issued for 1 year only.
● Unspecified
● Key Compromise
● CA Compromise
● Superseded
● Cessation Of Operation
● Spoofing
● Identity Theft
● Keylogging
● Escalation Of Privilege
● Information Leakage
● Identity Manager
● Fraud Analytics
● Multi Factor Authentication
● Single Sign On
● Behavior Analytics
● Role Based Approach
● Something you have - an ownership factor means that the account holder possesses something that no one else does, such as a smart card, hardware token or smartphone.
Multi-factor authentication - this combines the use of more than one authentication factor and can be either 2-factor or 3-factor authentication.
Multifactor authentication requires a combination of different technologies; for example, requiring a PIN along with a date of birth isn't multifactor.
Authentication Attributes
● something you can do - behavioral characteristics such as the way you walk or hold your smartphone can be used to identify you with a considerable degree of certainty.
● someone you know - this uses a web of trust model where new users are
vouched for by existing users.
online attacks - the threat actor interacts directly with the authentication service
using either a database of known passwords or a list of passwords that have been
cracked online. This attack can be prevented with the use of strong passwords and
restricting the number of login attempts within a specified period of time.
password spraying - a horizontal brute force attack where the attacker uses a
common password (123456) and tries it with multiple usernames.
offline attacks - an offline attack means the attacker has gotten access to a
database of password hashes e.g %systemroot%\system32\config\sam or
%systemroot%\ntds\ntds.dit (the active directory credential store)
brute force attack - attempts every possible combination in the output space in order to match a captured hash and guess the plaintext that generated it. The more characters used in the plaintext password, the more difficult it is to crack.
rainbow table attack - a refined dictionary attack where the attacker uses a precomputed lookup table of all possible passwords and their matching hashes.
password crackers - there are some Windows tools, including Cain and L0phtCrack, but the majority of password crackers, like hashcat, run primarily on Linux.
● false rejection rate (FRR) - where a legitimate user is not recognized. also
referred to as a type 1 error or false non-match rate (FNMR).
● crossover error rate (CER) - the point at which FRR and FAR meet. the lower
the CER, the more efficient and reliable the technology.
fingerprint & facial recognition - fingerprint recognition is the most widely used
as it's inexpensive and non-intrusive. facial recognition records multiple factors
about the size and shape of the face
Facial Recognition
● Retinal scan - an infrared light is shone into the eye to identify the pattern of blood vessels. It is very accurate and secure but also quite expensive.
● Iris scan - matches patterns on the surface of the eye using near-infrared imaging and is less intrusive than a retinal scan.
Continuous authentication verifies that the user who logged on is still operating the device.
Section 8 - Identity and Management Controls
● Recruitment
● Operation
● Termination/Separation
A background check determines that a person is who they say they are and are not
concealing criminal activity. Onboarding is the process of welcoming a new
employee to the organization.
Onboarding Processes
Each account can be assigned permissions over files and other network resources.
These permissions might be assigned directly to the account or inherited through
membership of a security group or role. On a Windows Active Directory network, access policies can be configured via group policy objects (GPOs).
Account Password Policy Settings
● Complexity rules should not be enforced and the only restriction should be to block common passwords.
● Aging policies should not be enforced. Users should be able to select if and when a password should be changed.
● Password hints should not be used.
Account & usage audits - accounting and auditing processes are used to detect whether an account has been compromised or is being misused. Usage auditing means configuring the security log to record key indicators and then reviewing the logs for suspicious activity.
Discretionary access control (DAC) - it is very flexible but also the easiest to compromise, as it's vulnerable to insider threats and abuse of compromised accounts.
DAC is based on the primacy of the resource owner; the owner has full control over the resource and can decide who to grant rights to.
Role-based access control adds an extra degree of centralized control to the DAC model: users are not granted rights explicitly (assigned directly) but rather implicitly (through being assigned a role).
File system permissions (Linux) - in Linux, there are three basic permissions: read (r), write (w) and execute (x). These permissions can be applied in the context of the owner user (u), a group account (g) and all other users/world (o).
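The permission string referred to below was not preserved in these notes; a matching example (as shown by ls -l for a directory) would be:
drwxr-xr-x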
The string above shows that for the directory (d), the owner has read, write and execute permissions, while the group context and others have read and execute permissions.
The chmod command is used to modify permissions and can be used either in symbolic or absolute mode.
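The symbolic-mode command discussed next wasn't preserved here; it would take a form such as (the directory name is a hypothetical example):
chmod g+w,o-x reports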
The effect of this command is to append write permission to the group context and remove execute permission from the other context.
In absolute mode, permissions are assigned using octal notation, where r=4, w=2 and x=1.
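A sketch of absolute mode (again with an assumed directory name), where each octal digit is the sum of the permission values for owner, group and other:
chmod 755 reports    (7 = rwx for the owner, 5 = r-x for the group and for others)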
mandatory access control (MAC) - this is based on the idea of security clearance levels (labels) instead of ACLs. In a hierarchical model, subjects are only permitted to access objects at their own clearance level or below.
This system can monitor the number of events or alerts associated with a user
account or track resources to ensure they are consistent in terms of timing of
requests.
rule-based access control - this is a term that can refer to any sort of access
control model where access control policies are determined by system-enforced
rules rather than system users.
As such RBAC, ABAC and MAC are all examples of rule-based (or non-discretionary)
access control.
The types of attributes, what information they contain and the way object types are
defined through attributes is described by the directory schema.
E.g. the distinguished name of a web server operated by Widget in the UK might be:
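The actual value was not preserved in these notes; an illustrative distinguished name of that kind (the attribute values are assumptions) would be:
CN=web1, OU=IT, O=Widget, C=UK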
This is the notion that a network needs to be accessible to more than just a
well-defined group of employees. In business, a company might need to make
parts of its network open to partners, suppliers and customers.
oauth and openid connect - many public clouds use application programming
interfaces (apis) based on representational state transfer (rest) rather than soap.
authentication and authorization for a restful api is often implemented using the
open authorization (oauth) protocol. oauth is designed to facilitate sharing of
information within a user profile between sites
● Phishing Campaigns
● Capture The Flag - Usually Used In Ethical Hacker Training Programs And
Gamified Competitions.
● Computer-Based Training And Gamification
Switches work at layer 2 of the OSI model and make forwarding decisions based on the hardware or media access control (MAC) address of attached nodes.
They can establish network segments that either map directly to the underlying cabling or to logical segments created in the switch configuration as virtual LANs (VLANs).
wireless access points - provide a bridge between a cabled network and wireless
clients or stations. They also work at layer 2 of the osi model.
firewalls - they apply an access control list (acl) to filter traffic passing in or out of a
network segment. they can work at layer 3 of the osi model or higher.
domain name system (DNS) servers - host name records and perform name
resolution to allow applications and users to address hosts and services using fully
qualified domain names (FQDNS) rather than IP addresses.
Segregation means that the hosts in one segment are restricted in the way they communicate with hosts in other segments.
The main building block of a topology is a zone, which is an area of the network where the security configuration is the same for all hosts within it.
Zones can be segregated with VLANs, while the traffic between them can be controlled using a security device, typically a firewall.
Network Zones
Such hosts are placed in a DMZ (perimeter or edge network). In a DMZ, external clients are allowed to access data on private systems such as web servers without compromising the security of the internal network as a whole.
East-west traffic - traffic that goes to and from a data center is referred to as north-south. This traffic represents clients outside the data center making requests.
However, in data centers that support cloud services, most traffic is actually between servers within that data center, and this traffic is referred to as east-west traffic.
Zero trust - this is based on the idea that perimeter security is unlikely to be robust enough. As such, in a zero trust model, continuous authentication and conditional access are used to mitigate threats.
Zero trust also uses a technique called microsegmentation. This is a security process that is capable of applying policies to a single node as though it was in a zone of its own.
A man-in-the-middle (MITM) attack can also be performed at this layer due to the lack of security.
ARP poisoning attack - uses a packet crafter such as Ettercap to broadcast unsolicited ARP reply packets.
Because ARP has no security mechanism, the receiving devices trust this communication and update their MAC:IP address cache table with the spoofed address.
MAC flooding attacks - where arp poisoning is directed at hosts, mac flooding is
used to attack a switch.
The idea here is to exhaust the memory used to store the switch's mac address
table which is used by the switch to determine which port to use to forward unicast
traffic to its correct destination.
overwhelming the table can cause the switch to stop trying to apply mac-based
forwarding and simply flood unicast traffic out of all ports.
Router/Switch Security
● physical port security - access to physical switch ports and hardware should
be restricted to authorized staff by using a secure server room or lockable
hardware cabinets.
● mac limiting/filtering - configuring mac filtering on a switch means defining
which mac addresses are allowed to connect to a particular port by creating
a list of valid mac addresses. mac limiting involves specifying a limit to the
number of permitted addresses that can connect to a port.
● dhcp snooping - dynamic host configuration protocol is one that allows a
server to assign an ip address to a client when it connects to a network. dhcp
snooping inspects this traffic arriving on access ports to ensure that a host is
not trying to spoof its mac address. with dhcp snooping, only dhcp messages
from ports configured as trusted are allowed.
● network access control - nac products can extend the scope of
authentication to allow admins to devise policies or profiles describing a
minimum security configuration that devices must meet to be granted
network access. This is called a health policy.
● route security - a successful attack against route security enables the
attacker to redirect traffic from its intended destination. routes between
networks and subnets can be configured manually, but most routers
automatically discover routes by communicating with each other.
Normally, a device that needs to send a packet to an IP address but does not know the receiving device's MAC address will broadcast an ARP request packet, and the device with the matching IP responds with an ARP reply.
Internet Protocol (IP)
This provides the addressing mechanism for logical networks and subnets.
172.16.1.101/16
The /16 prefix indicates that the first half of the address (172.16.0.0) is the network ID, while the remainder uniquely identifies a host on that network. Networks also use 128-bit IPv6 addressing.
2001:db8::abc:0:def0:1234
The first 64 bits contain network information, while the last 64 bits form the host's interface ID.
A route to a network can be configured statically, but most networks use routing protocols to transmit new and updated routes between routers.
The passphrase length is typically between 8 and 63 ASCII characters; the passphrase is then converted to a 256-bit value using an HMAC-based key derivation function.
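A sketch of how that conversion works in WPA2-Personal, which derives the pre-shared key with PBKDF2 using the SSID as the salt (the passphrase and SSID below are assumptions):
import hashlib

psk = hashlib.pbkdf2_hmac("sha1", b"MyPassphrase", b"MyHomeSSID", 4096, 32)
print(psk.hex())   # 256-bit (32-byte) pairwise master key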
WPA3 personal authentication - WPA3 also uses a passphrase like WPA2, but it changes the method by which this secret is used to agree on session keys. This scheme is called password authenticated key exchange (PAKE).
Wi-Fi Protected Setup (WPS) - this is a feature of both WPA and WPA2 that allows enrollment in a wireless network based on an 8-digit PIN.
It is vulnerable to brute force attacks and is set to be replaced by the Easy Connect method in WPA3, which uses quick response (QR) codes for each device.
open authentication and captive portals - open authentication means that the
client is not required to authenticate however it can be combined with a secondary
authentication mechanism via a browser.
When the client launches the browser, the client is redirected to a captive portal or
splash page where they will be able to authenticate to the hotspot provider's
network.
Once authenticated, the aaa server transmits a master key (mk) to the station and
then both of them will derive the same pairwise master key (pmk) from the mk.
extensible authentication protocol (eap) - this defines a framework for
negotiating authentication mechanisms rather than the details of the mechanisms
themselves.
eap implementations can include smart cards, one-time passwords and biometric
identifiers.
eap with flexible authentication via secure tunneling (eap-fast) - is also similar
to peap but instead of a server side certificate, it uses a protected access credential
(pac) which is generated for each user from the authentication server's master key.
radius federation - most implementations of eap use a radius server to validate the
authentication credentials for each user.
radius federation means that multiple organizations allow access to one another's
users by joining their radius servers into a radius hierarchy or mesh.
rogue access points & evil twins - a rogue access point is one that has been
installed on the network without authorization.
A rogue wap masquerading as a legitimate one is called an evil twin. an evil twin
might have a similar ssid as the real one or the attacker might use some dos
technique to overcome the legitimate wap.
A rogue hardware WAP can be identified through physical inspections. there are
also various wi-fi analyzers that can detect rogue waps including inssider and
kismet
One type of disassociation attack injects management frames that spoof the mac
address of a single victim causing it to be disconnected from the network.
SYN flood attack - a DoS attack where the attacker sends numerous SYN requests to a targeted server, hoping to consume enough resources to prevent the transfer of legitimate traffic.
Those servers direct their SYN/ACK responses to the victim server, which rapidly consumes the victim's available bandwidth.
The Network Time Protocol (NTP), which helps servers on a network to keep the correct time, can also be targeted with this type of attack.
In some cases, a stateful firewall can detect a ddos attack and automatically block
the source but in many other cases, the source addresses will be spoofed making it
difficult to locate the real source. Another option is to use sinkhole routing so that
the traffic flooding a particular ip address is routed to a different network where it
can be analyzed.
Load balancing - a load balancer distributes client requests across available server nodes in a farm or pool and also helps with fault tolerance. There are two main types of load balancers:
● Layer 7 - more advanced load balancers that make forwarding decisions based on application-level data, such as requests for data types like audio or video.
Clustering - this allows multiple redundant processing nodes that share data with one another to accept connections, thus providing redundancy. If one of the nodes in the cluster stops working, connections can fail over to a working node.
Active/passive (A/P) clustering - in A/P, one node is active while the other is passive. Here the performance is not affected during failover, but the operating costs can be higher because of the unused capacity.
10.1 - Firewalls
Packet filtering firewalls - these are the earliest type of firewalls and are
configured by specifying a group of rules called an access control list (acl).
Each rule defines a specific type of data packet and the appropriate action to take
when a packet matches the rule. an action can either be to deny or to accept the
packet.
This firewall can inspect the headers of IP packets, meaning that the rules can be based on the information found in those headers. In certain cases, the firewall can control only inbound or both inbound and outbound traffic, and this is often referred to as ingress and egress traffic or filtering.
A basic packet filtering firewall is stateless meaning that it does not preserve any
information about network sessions. The least processing effort is required for this
but it can be vulnerable to attacks that are spread over a sequence of packets.
Stateful inspection firewalls - this type of firewall can track information about the
session established between two hosts and the session data is stored in a state
table.
When a packet arrives, the firewall checks it to confirm that it belongs to an existing
connection and if it does then the firewall would allow the traffic to pass
unmonitored to conserve processing effort.
Transport layer (osi layer 4) - here, the firewall examines the tcp three-way
handshake to distinguish new from established connections.
any deviations from this sequence can be dropped as malicious flooding or session
hijacking attempts.
application layer (osi layer 7) - this type of firewall can inspect the contents of
packets at the application layer and one key feature is to verify the application
protocol matches the port e.g http web traffic will use port 80.
iptables - iptables is a command on Linux that allows admins to edit the rules enforced by the Linux kernel firewall.
iptables works with chains, which apply to the different types of traffic, such as the INPUT chain for traffic destined for the local host. Each chain has a default policy set to drop or allow traffic that does not match a rule.
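The rule listing the next paragraph refers to wasn't preserved in these notes; a sketch that matches the description (the chain choice and rule order are assumptions) might look like:
iptables -A INPUT -s 10.1.0.192 -j DROP
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -p udp --dport 53 -s 10.1.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -s 0.0.0.0/0 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -s 0.0.0.0/0 -j ACCEPT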
The rules in this example will drop any traffic from the specific host 10.1.0.192 and allow ICMP echo requests (pings), DNS and HTTP/HTTPS traffic either from the local subnet (10.1.0.0/24) or from any network (0.0.0.0/0).
10.2 - Firewall Implementation
Firewall appliances - this is a stand-alone firewall deployed to monitor traffic passing into and out of a network zone. It can be deployed in two ways:
● Bridged (layer 2) - the firewall inspects traffic between two nodes such as a router and a switch.
Application-Based Firewalls
A reverse proxy can be deployed on the network edge and configured to listen for client requests from a public network.
The most specific rules are placed at the top, and the final default rule is typically to block any traffic that has not matched a rule (implicit deny).
Each rule can specify whether to block or allow traffic based on several parameters, often referred to as tuples.
Tuples can include protocol, source and destination address, and source and destination port.
● Static & dynamic source NAT - performs 1:1 mappings between private and public addresses. The mappings can be static or dynamically assigned.
● Overloaded NAT/network address port translation (NAPT)/port address translation (PAT) - provides a means for multiple private IP addresses to be mapped onto a single public address.
Port forwarding means that the router takes requests from the internet for a particular application (HTTP/port 80) and sends them to a designated host and port in the DMZ or LAN.
Virtual firewalls - these are usually deployed within data centers and cloud services and can be implemented in three different ways.
taps & port mirrors - typically the packet capture sensor is placed inside a firewall
or close to an important server and the idea is to identify malicious traffic that has
managed to get past the firewall. depending on network size and resources, one or
just a few sensors will be deployed to monitor key assets and network paths.
content/url filter - a firewall typically has to sustain high loads of traffic which can
increase latency and even cause network outages. a solution is to treat security
solutions for server traffic differently from that of user traffic.
One other core feature is file integrity monitoring (FIM). FIM software will audit key system files to make sure they match the authorized versions.
They are used to monitor load status for CPU/memory, state tables, disk capacity, fan speeds/temperature and so on. This information can be collected using the Simple Network Management Protocol (SNMP).
Logs - very valuable and can be used to diagnose availability issues and record both authorized and unauthorized use of a resource or privilege. Reviewing logs is a crucial part of security assurance.
The core function is to aggregate traffic data and logs from the OS, routers, firewalls, switches, malware scanners, databases, etc.
Log aggregation - log aggregation refers to normalizing data from different sources so that it is consistent and searchable.
The basis of SOAR is to scan the organization's store of security and threat intelligence, analyze it using machine/deep learning techniques, and then use that data to automate and provide data enrichment for incident response and threat hunting.
Domain name resolution - the domain name system (dns) resolves fully qualified
domain names (FQDNS) to ip addresses by making use of a distributed database
system that contains information on domains and hosts within those domains.
Name servers work over port 53.
uniform resource locator (url) redirection - a url is an address for the pages and
files published as a website and consists of a FQDN, file path and script parameters.
url redirection refers to the use of http redirects to open a page other than the one
the user requested.this is often used for legitimate purposes like redirecting a user
from a broken link to the updated link.
However, if the redirect is not properly validated by the web application, an attacker
can craft a phishing link that might appear legitimate to a naive user.
https://trusted.foo/login.php?url = "https://tru5ted.foo"
domain reputation - if a domain website has been hijacked and used for spam or
distributing malware, the domain could end up being blacklisted.
dns poisoning - this is an attack that compromises the process by which clients
query name servers to locate the ip address for a fqdn.
● man in the middle - if the attacker has access to the same local network as
the victim, the attacker can use arp poisoning to respond to dns queries from
the victim with spoofed replies.
● dns client cache poisoning - here the attacker inserts fake information into
the dns or web cache for the purpose of diverting traffic from a legitimate
server to a malicious one. The attack is aimed primarily at the hosts text file.
● dns server cache poisoning - this aims to corrupt the records held by the dns
server itself. This can be accomplished by performing dos against the server
that holds the authorized records for the domain and then spoofing replies
to requests from other name servers. the nslookup or dig tool can be used to
query the name records and cached records held by a server to discover
whether any false records have been inserted.
11.2 - DNS Security, Directory Services & SNMP
DNS security - to ensure DNS security on a private network, local DNS servers should only accept recursive queries from local authenticated hosts and not from the internet.
DNS Security Extensions (DNSSEC) - these help to mitigate against spoofing and poisoning attacks by providing a validation process for DNS responses.
Generally, two levels of access to the directory can be granted, which are read-only access (query) and read/write access (update), implemented using an access control policy.
NTP has historically lacked any sort of security mechanism, but there are moves to create a security extension for the protocol called Network Time Security.
The server uses its key pair and the TLS protocol to agree mutually supported ciphers with the client and negotiate an encrypted communications session.
SSL/TLS version - a server can provide support for legacy clients, meaning a TLS 1.2 server could be configured to allow clients to downgrade to TLS 1.1 or 1.0.
Cipher suites - this is the set of algorithms supported by both the client and server to perform the different encryption and hashing operations required by the protocol.
ECDHE-RSA-AES128-GCM-SHA256
This means that the server can use Elliptic Curve Diffie-Hellman Ephemeral mode for session key agreement, RSA signatures, 128-bit AES-GCM (Galois Counter Mode) for symmetric bulk encryption and 256-bit SHA for HMAC functions.
Internet Protocol Security (IPsec) - TLS is applied at the application level, either by using a separate secure port or by using commands in the application protocol to negotiate a secure connection.
IPsec operates at the network layer (layer 3), so it can operate without having to configure specific application support.
The recipient performs the same function on the packet and key and should derive the same value to confirm that the packet has not been modified.
ESP attaches three fields to the packet: a header, a trailer (providing padding for the cryptographic function) and an ICV.
IPsec transport and tunnel modes - IPsec can be used in two modes:
Internet Key Exchange (IKE) - IPsec's encryption and hashing functions depend on a shared secret. The secret must be communicated to both hosts, and the hosts must confirm one another's identity (mutual authentication); otherwise the connection is vulnerable to MITM and spoofing attacks. The IKE protocol handles authentication and key exchange, referred to as security associations (SA).
● Phase 1 establishes the identity of the two hosts and performs key agreement using the DH algorithm to create a secure channel. Digital certificates and pre-shared keys are used for authenticating hosts.
● Phase 2 uses the secure channel created in phase 1 to establish which ciphers and key sizes will be used with AH and/or ESP in the IPsec session.
VPN client configuration - to configure a VPN client, you may need to install the client software if the VPN type is not natively supported by the OS.
Always-on VPN - this means that the computer establishes the VPN whenever an internet connection over a trusted network is detected, using the user's cached credentials to authenticate.
When a client connected to a remote access VPN tries to access other sites on the internet, there are two ways to manage the connection:
Split tunnel - the client accesses the internet directly using its "native" IP configuration and DNS servers.
Full tunnel - internet access is mediated by the corporate network, which will alter the client's IP address and DNS servers and may use a proxy.
Full tunnel offers better security, but the network address translations and DNS operations required may cause problems with some websites, especially cloud services.
Out-of-band management - remote management methods can be described as either in-band or out-of-band (OOB).
Secure Shell - this is the principal means of obtaining secure remote access to a command line terminal. It is mostly used for remote administration and secure file transfer (SFTP).
SSH servers are identified by a public/private key pair (the host key).
Each TPM is hard-coded with a unique unchangeable asymmetric private key called
the endorsement key which can be used to create various other types of subkeys
used in key storage, signature and encryption operations.
In windows, a TPM can be managed via the tpm.msc console or through group
policy.
A major concern with establishing a rot is that devices are used in environments
where anyone can get complete control over them.
UEFI - most pcs and smartphones implement the unified extensible firmware
interface (uefi). uefi provides the code that allows the host to boot to an os and can
also enforce a number of boot integrity checks.
UEFI is configured with digital certificates from valid OS vendors, so the system firmware will check the OS boot loader and kernel using the stored certificate to ensure that it has been digitally signed by the OS vendor.
it won't prevent the system from booting but will record the presence of unsigned
kernel-level code.
boot attestation - this is the capability to transmit a boot log report signed by the
TPM via a trusted process to a remote server.
This report can be analyzed for signs of compromise and the host can be prevented
from accessing the network if it does not meet the required health policy or if no
attestation report is received.
An end of service life (EOSL) system is one that is no longer supported by its developer or vendor.
Windows versions are given five years of mainstream support and five years of extended support (during which only security updates are provided).
The essential principle is least functionality, meaning the system should run only the protocols and services required by legitimate users and no more.
Interfaces, services and application service ports not in use should be disabled.
● Processor Capability
● System Memory
● Persistent Storage
● Cost
● Power (Battery)
● Authentication Technologies
● Cryptographic Identification
● Network And Range Constraints
System on chip - this is a system where all processors, controllers and devices are provided on a single processor die or chip. This is often very power efficient and is commonly used with embedded systems.
An FPGA solves the problem because the structure is not fully set at the time of manufacture, giving the end customer the ability to configure the programming logic of the device to run a specific application.
Also known as baseband radio; there are two main radio technologies:
Any LTE-based cellular radio uses a subscriber identity module (SIM) card as an identifier. The SIM is issued by a cellular provider, with roaming to allow the use of other suppliers' tower relays.
12.6 - Industrial Control Systems & Internet Of Things
Industrial systems have different priorities to IT systems and tend to prioritize availability and integrity over confidentiality (reversing the CIA triad as the AIC triad).
ICS/SCADA applications - these types of systems are used within many sectors of industry:
● Network Segmentation
● Wrappers
● Firmware Code Control & Inability To Patch
● Bring your own device (BYOD) - the mobile device is owned by the employee and will have to meet whatever security profile is required. It's the most common model for employees but poses the most difficulties for security managers.
● Choose your own device (CYOD) - very similar to COPE, except that here the employee is given a choice of device from a list.
ios in the enterprise - in apple's ios ecosystem, third-party developers can create
apps using apple's software development kit available only on macos.
Corporate control over ios devices and distribution of corporate and b2b apps is
facilitated by participating in the device enrollment program, the volume purchase
program and the developer enterprise program.
android in the enterprise - android is open source meaning there is more scope
for vendor-specific versions and the app model is far more relaxed.
screen lock - the screen lock can also be configured with a lockout policy. For
example, the device can be locked out for a period of time after a certain number of
incorrect password attempts.
For example, even if a device has been unlocked, the user might need to
reauthenticate in order to access the corporate workspace.
remote wipe - if the phone is stolen, it can be set to factory defaults or cleared of
any personal data with the use of the remote wipe feature. it can also be triggered
by several incorrect password attempts.
In theory, the thief could prevent the remote wipe by ensuring the phone cannot
connect to the network then hacking the phone and disabling its security.
full device encryption & external media - in ios, there are various levels of
encryption:
● all user data on the device is always encrypted but the key is stored on the
device. It's this key that is deleted in a remote wipe to ensure the data is
inaccessible.
● Email data and any apps using the "data protection" option are subject to a
second round of encryption using a key derived from the user's credential.
● rooting - associated with android devices and typically involves using custom
firmware
● jailbreaking - associated with ios and is accomplished by booting the device
with a patched kernel
● carrier unlocking - for either ios or android and it means removing the
restrictions that lock a device to a single carrier.
Rooting or jailbreaking mobile devices involves subverting the security measures
on the device to gain super administrative access to it but also has the side effect of
permanently disabling certain security features.
ad hoc wi-fi and Wi-Fi Direct - an ad hoc network involves a set of wireless stations establishing peer-to-peer connections with one another rather than using an access point.
Wi-Fi Direct allows one-to-one connections between stations, though one of them will serve as a soft access point.
tethering and hotspots - a smartphone can share its internet connection with
other devices via wi-fi making it a hotspot.
Infrared & RFID connection methods - infrared has been used for PANs, but its use in modern smartphones and wearable technology focuses on two other uses:
● IR blaster - this allows the device to interact with an IR receiver and operate a device such as a TV as though it were the remote control.
● IR sensor - these are used as proximity sensors and to measure health information (heart rate & blood oxygen levels).
Radio frequency ID (RFID) is a means of encoding information into passive tags which can easily be attached to devices, clothing and almost anything else.
Skimming involves using a fraudulent RFID reader to read the signals from a contactless bank card.
● point-to-point (p2p) - microwave uses high gain antennas to link two sites
and each antenna is pointed directly at the other. It's very difficult to
eavesdrop on the signal as an intercepting antenna would have to be
positioned within the direct path.
Privilege escalation - a design flaw that allows a normal user or threat actor to suddenly gain extended capabilities or privileges on a system.
Improper input handling - good programming practice dictates that any input accepted by a program or software must be tested to ensure that it is valid. Most application attacks work by passing invalid or maliciously constructed data to the vulnerable process.
Integer overflow - an integer is a whole number, and integers are used as a data type with fixed lower and upper bounds. An integer overflow attack causes the target software to calculate a value that exceeds these bounds, which can even cause a positive number to become negative.
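A small sketch of that wraparound behaviour, emulating a signed 8-bit integer in Python (Python's own integers do not overflow, so the helper below is purely illustrative):
def to_int8(value):
    # keep only the low 8 bits, then interpret them as a signed byte
    value &= 0xFF
    return value - 256 if value > 127 else value

print(to_int8(127 + 1))   # -128: the positive result wraps around to a negative number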
This occurs when the outcome from an execution process is directly dependent on
the order and timing of certain events and those events fail to execute in the order
and timing intended by the developer.
memory leaks & resource exhaustion - a process should release its block of
memory used when it no longer requires it but if it doesn't, it can lead to memory
leaks. such a situation can lead to less memory available for other applications and
could lead to a system crash.
resources refer to cpu time, system memory, fixed disk capacity & network
utilization. a malicious process could spawn multiple looping threads to use cpu
time or write thousands of files to disk.
dll injection & driver manipulation - dll (dynamic link library) is a binary package
that implements some sort of standard functionality such as establishing a network
connection or performing cryptography.
The main process of a software application is likely to load several DLLS during the
normal course of operations.
DLL injection is a vulnerability where the OS allows one process to attach to another
and a malware can force a legitimate process to load a malicious link library.
To perform dll injection, the malware must already be operating with sufficient
privileges and evade detection by anti-virus software.
Avoiding detection is done through a process called code refactoring where the
code performs the same function by using different methods (variable types and
control blocks).
pass the hash attack - pth is the process of harvesting an account's cached
credentials when the user is logged into a single sign-on (sso) system so the
attacker can use the credentials on other systems.
If the attacker can obtain the hash of the user password, it is possible to use it
(without cracking) to authenticate to network protocols that accept ntlm (windows
new technology lan manager) hashes as authentication credentials.
14.3 - Uniform Resource Locator Analysis & Percent
Encoding
Uniform resource locator analysis - besides pointing to the host or service location on the internet, a URL can encode some action or data to submit to the server host. This is a common vector for malicious activity.
● POST - send data to the server for processing by the requested resource.
● PUT - create or replace the resource. DELETE can be used to remove the resource.
● HEAD - retrieve the headers for a resource only (not the body).
Data can be submitted to the server using a POST or PUT method and the HTTP headers and body, or by encoding the data within the URL used to access the resource.
Data submitted via a URL is delimited by the ? character, which follows the resource path; query parameters are usually formatted as one or more name=value pairs, with ampersands delimiting each pair.
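For example (an illustrative URL, not one from the original notes):
https://example.foo/search?q=security&page=2
Here the path is /search and the query string contains two name=value pairs, q=security and page=2, separated by an ampersand.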
Percent encoding - a URL can contain only unreserved and reserved characters from the ASCII set. Reserved ASCII characters are used as delimiters within the URL syntax.
There are also unsafe characters which cannot be used in a URL. Control characters such as null string termination, carriage return, line feed, end of file and tab are unsafe.
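A few common percent-encoded values (each character is replaced by % followed by its ASCII value in hexadecimal): a space becomes %20, / becomes %2F, ? becomes %3F and = becomes %3D. Threat actors sometimes use encodings such as %2E%2E%2F (../) to disguise directory traversal strings.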
14.4 - API & Replay Attacks, Cross-Site Request Forgery,
Clickjacking & SSL Strip Attacks
Application programming interface attacks - web applications and cloud services implement application programming interfaces (APIs) to allow consumers to automate services.
If the API isn't secure, threat actors can easily take advantage of it to compromise the services and data stored on the web application. API calls over plain HTTP are not secure and could easily be modified by a third party.
To establish a session, the server normally gives the client some type of token, and a replay attack works by sniffing or guessing the token value and then submitting it to re-establish the session illegitimately.
A cookie has a name, value and optional security and expiry attributes. Cookies can be either persistent or non-persistent.
In order to work, the attacker must convince the victim to start a session with the target site. The attacker must then pass an HTTP request to the victim's browser that spoofs an action on the target site, such as changing a password or an email address.
If the target site assumes the browser is authenticated because there is a valid session cookie, it will accept the attacker's input as genuine.
Clickjacking - this is an attack where what the user sees and trusts as a web application with some sort of login page or form contains a malicious layer or invisible iframe that allows an attacker to intercept or redirect user input.
Clickjacking can be launched using any type of compromise that allows the adversary to run arbitrary code as a script. It can be mitigated by using HTTP response headers that instruct the browser not to open frames from different origins.
SSL strip - this is launched against clients on a local network as they try to make connections to websites. The threat actor first performs a MITM attack via ARP poisoning to masquerade as the default gateway.
When a client requests an HTTP site that redirects to an HTTPS site in an unsafe way, the sslstrip utility proxies the request and response, serving the client the HTTP site with an unencrypted login form, thus capturing any user credentials.
For example, a web form could construct a query to authenticate the valid credentials for the user bob with the password pa$$w0rd like this:
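The query itself was not preserved in these notes; a typical LDAP filter of that form would be:
(&(username=bob)(password=pa$$w0rd))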
If the form input is not sanitized, the threat actor could bypass the password check by entering a valid username plus an LDAP filter string.
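As an illustration (my own example, not from the original notes): submitting the username bob)(&)) would produce (&(username=bob)(&))(password=...)), where the first complete filter, (&(username=bob)(&)), is always true for the user bob, so the password clause is never evaluated.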
The threat actor submits a request for a file outside the web server’s root directory
by submitting a path to navigate to the parent directory (../)
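For example (a hypothetical URL):
http://victim.foo/show.php?file=../../../../etc/passwd
Here the ../ sequences climb out of the web root toward a sensitive system file.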
The threat actor might use a canonicalization attack to disguise the nature of the
malicious input.
canonicalization refers to the way the server converts the different forms in which a resource (file path or url) may be represented and submitted into the single, simplest form the server uses to process the input.
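A minimal sketch of defending against traversal and encoded (non-canonical) paths, assuming a hypothetical document root:

```python
import os
from urllib.parse import unquote

WEB_ROOT = os.path.realpath("/var/www/html")   # hypothetical document root

def resolve(requested_path):
    # Decode percent-encoding first so "..%2f" style tricks are caught too,
    # then canonicalize and confirm the result stays under the web root.
    decoded = unquote(requested_path)
    full = os.path.realpath(os.path.join(WEB_ROOT, decoded.lstrip("/")))
    if os.path.commonpath([full, WEB_ROOT]) != WEB_ROOT:
        raise PermissionError("directory traversal attempt")
    return full

resolve("reports/q1.txt")              # allowed
resolve("..%2f..%2fetc%2fpasswd")      # raises PermissionError
```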
Server-side request forgery (SSRF) exploits both the lack of authentication between internal servers and services and weak input validation, allowing the attacker to submit unsanitized requests or api parameters.
To Mitigate This, There Should Be Routines To Check User Input And Anything That
Does Not Conform To What Is Required Must Be Rejected.
Output Encoding Means That A String Is Re-Encoded Safely For The Context In
Which It Is Being Used.
The Main Issue With Client-Side Validation Is That The Client Will Always Be More
Vulnerable To Some Sort Of Malware Interfering With The Validation Process.
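A minimal sketch of server-side validation and output encoding using the standard library (the allow-list pattern is an assumed policy, not one from the guide):

```python
import html
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,30}$")   # assumed allow-list policy

def validate_username(value):
    # Reject anything that does not conform to the expected format.
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

def render_comment(comment):
    # Output encoding: re-encode the string for the HTML context so that
    # characters like < and > are displayed rather than interpreted.
    return "<p>" + html.escape(comment) + "</p>"

print(render_comment("<script>alert(1)</script>"))
# <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```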
● Http Strict Transport Security (Hsts) - Forces The Browser To Connect Using Https Only, Mitigating Downgrade Attacks Such As Ssl Stripping.
● Content Security Policy (Csp) - Mitigates Clickjacking, Script Injection And
Other Client-Side Attacks.
● Cache Control - Sets Whether The Browser Can Cache Responses. Preventing Caching Of Data Protects Confidential And Personal Information Where The Client Device Might Be Shared By Multiple Users (See The Example Below).
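A sketch of setting these headers on every response, assuming a Flask application (the header values shown are typical, not mandated):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    # Force HTTPS for a year (including subdomains) - mitigates SSL stripping.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    # Restrict where scripts and other content may be loaded from.
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    # Stop shared browsers caching sensitive responses.
    response.headers["Cache-Control"] = "no-store"
    return response
```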
The Error Must Not Reveal Any Platform Information Or Inner Workings Of The
Code To An Attacker.
Secure Code Usage - A Program May Make Use Of Existing Code In The Following
Ways:
dead code is executed but has no effect on the program flow (a calculation is
performed but the result is never stored as a variable or used to evaluate a
condition).
static code analysis - this is performed against the application code before it is
packaged as an executable process. The software will scan the source code for
signatures of known issues.
dynamic code analysis - static code review will not reveal any vulnerabilities that
exist in the runtime environment. dynamic analysis means that the application is
tested under real world conditions using a staging environment.
fuzzing means submitting large volumes of unexpected, malformed or random input to an application to see whether it handles them safely. associated with fuzzing is the concept of stress testing an application to see how it performs under extreme performance or usage scenarios.
Finally, the fuzzer needs some means of detecting an application crash and
recording which input sequence generated the crash.
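A very simplified fuzzing harness along those lines (the target command ./parser is hypothetical; real fuzzers use much smarter input generation and instrumentation):

```python
import random
import subprocess

def random_input(max_len=512):
    # Generate a random byte string, including control characters.
    return bytes(random.randint(0, 255) for _ in range(random.randint(1, max_len)))

def fuzz(target_cmd, iterations=1000):
    for i in range(iterations):
        data = random_input()
        try:
            proc = subprocess.run(target_cmd, input=data,
                                  capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            continue                      # a hang could also be recorded
        # On POSIX a negative return code means the process was killed by a
        # signal (e.g. -11 for SIGSEGV), which counts as a crash worth keeping.
        if proc.returncode < 0:
            with open(f"crash_{i}.bin", "wb") as f:
                f.write(data)             # record the input that caused the crash

fuzz(["./parser"])   # './parser' is a hypothetical target binary
```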
● Allow List - A Highly Restrictive Policy That Means Only Running Authorized
Processes And Scripts.
● Block List - A Permissive Policy That Only Prevents Execution Of Listed
Processes And Scripts. It Is Vulnerable To Software That Has Not Previously
Been Identified As Malicious.
Code Signing - This Is The Principal Means Of Proving The Authenticity And
Integrity Of Code. The Developer Creates A Cryptographic Hash Of The File Then
Signs The Hash Using His/Her Private Key.
The Program Is Shipped With A Copy Of The Developer’s Code Signing Certificate
Which Contains A Public Key That The Destination Computer Uses To Read And
Verify The Signature.
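A sketch of the underlying signing and verification steps using the Python cryptography package (real code signing uses a CA-issued code-signing certificate rather than a freshly generated key, and the file name is hypothetical):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Developer side: hash the code with SHA-256 and sign it with the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
code = open("installer.bin", "rb").read()            # hypothetical file
signature = private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

# Destination side: the public key (shipped in the code-signing certificate)
# verifies the signature; an exception is raised if the code was altered.
public_key = private_key.public_key()
public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
```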
● Scalability - Means That The Costs Involved In Supplying The Service To More
Users Are Linear (Double Users = Double Cost)
● Elasticity - Refers To The Ability Of The System To Handle Changes In Demand In Real Time.
When a developer commits new or changed code to a product, the new source
code is tagged with an updated version number and the old version archived. This
allows changes to be rolled back if a problem is discovered.
The more recent agile paradigm uses iterative processes to release well-tested code
in smaller blocks or units. In this model, development and provisioning tasks are
conceived as continuous.
continuous delivery - this is about testing all of the infrastructure that supports
the app including networking, database functionality, client software and so on.
software diversity - this can refer to obfuscation techniques to make code difficult
to detect as malicious. This is widely used by threat actors in the form of shellcode
compilers to avoid signature detection but can also be used as a defensive
technique.
obfuscating api methods and automation code makes it harder for a threat actor to
reverse engineer and analyze the code to discover weaknesses.
Section 15 - Implement Secure Cloud
Solutions
Hosted Private - Hosted By A Third Party For The Exclusive Use Of An Organization.
Better Performance But More Expensive Than Public.
Examples Of Platform As A Service (Paas) Offerings Include Oracle Database, Microsoft Azure Sql Database And Google App Engine.
Security As A Service
● Host Hardware - The Platform That Will Host The Virtual Environment
● Hypervisor/Virtual Machine Monitor (Vmm) - Manages The Virtual Machine
Environment
● Guest Operating Systems/ Virtual Machines
Virtual Desktop Infrastructure (Vdi) & Thin Clients - Vdi Refers To Using A Vm As A Means Of Provisioning Corporate Desktops.
Containerization is used in many cloud services like serverless architecture and also
used to implement corporate workspaces on mobile devices.
A wrong username will be rejected quickly but a valid one will take longer allowing
the attacker to harvest a list of valid usernames.
virtual machine life cycle management (vmlm) software can be deployed to enforce
vm sprawl avoidance as it provides a centralized dashboard for maintaining and
monitoring all the virtual environments.
15.3 - Cloud Security Solutions
Cloud computing is also a means of transferring risk and as such it is important to
identify which risks are being transferred and what responsibilities both the
company and service provider will undertake.
a company will always still be held liable for legal and regulatory consequences in
case of a security breach though the service provider could be sued for the breach.
the company will also need to consider the legal implications of using a csp if its
servers are located in a different country.
Application security in the cloud refers both to the software development process and to the identity and access management (IAM) features designed to ensure authorized use of applications.
cloud provides resources abstracted from physical hardware via one or more layers
of virtualization and the compute component provides process and system
memory (ram) resources as required for a particular workload.
high availability - one of the benefits of using the cloud is the potential for
providing services that are resilient to failures at different levels.
The terms hot and cold storage refer to how quickly data is retrieved and hot
storage is quicker but also more expensive to manage.
● local replication - replicates data within a single data center in the region
where the storage account was created.
● regional replication - replicates data across multiple data centers within one
or two regions.
● geo-redundant storage (GRS) - replicates data to a secondary region that is
distant from the primary region. This safeguards data in the event of a
regional outage or a disaster.
virtual private clouds (VPCs) - each customer can create one or more VPCs
attached to their account. By default, a VPC is isolated from other csp accounts and
from other VPCs operating in the same account.
Each subnet within a VPC can be either private or public. A public subnet provides external connectivity, while a private subnet is used for instances where external connectivity isn't appropriate.
Routing can be configured between subnets in a vpc and between vpcs in the
same account or with vpcs belonging to different accounts.
configuring additional vpcs rather than subnets within a vpc allows for a greater
degree of segmentation between instances.
cloud firewall security - filtering decisions can be made based on packet headers
and payload contents at various layers
cloud access security brokers (casb) -CASBs provide you with visibility into how
clients and other network nodes are using cloud services.
● forward proxy - positioned at the client network edge that forwards user
traffic to the cloud network
● reverse proxy - positioned at the cloud network edge and directs traffic to
cloud services
● api - rather than being placed inline as a proxy, the casb uses the cloud provider's api to audit cloud service usage and enforce policy.
Service Functions Are Self-Contained, Do Not Rely On The State Of Other Services
And Expose Clear Input/Output (I/O) Interfaces.
The Main Difference From Soa Is That While Soa Allows A Service To Be Built From
Other Services, Each Microservice Should Be Capable Of Being Developed, Tested
And Deployed Independently (Highly Decoupled)
● Simple Object Access Protocol (Soap) - Uses Xml Format Messaging And Has
A Number Of Extensions In The Form Of Web Services Standards That
Support Common Features Such As Authentication, Transport Security And
Asynchronous Messaging.
serverless architecture - this is a modern design pattern for service delivery and is strongly associated with modern web applications such as netflix.
billing is based on execution time rather than hourly charges and this type of
service provision is also called function as a service (FAAS).
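For example, a function deployed to a FaaS platform is typically just a handler invoked per event; a minimal AWS Lambda-style sketch in Python (the field names are hypothetical):

```python
import json

def handler(event, context):
    # The platform invokes this function per request/event and bills only
    # for the execution time it consumes.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```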
The main objective of iac is to eliminate snowflake systems - systems whose configuration has drifted from the standard build, typically because of inconsistent patching, which leads to stability issues.
iac means using carefully developed and tested scripts and orchestration runbooks
to generate consistent builds.
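This is not a real provisioning tool, but a sketch of the idea behind iac: the desired state is declared as data and an idempotent routine works out what must change, so every build comes out the same (the package and service names are hypothetical):

```python
DESIRED_STATE = {
    "packages": ["openssh-server", "auditd"],        # hypothetical packages
    "services": {"sshd": "enabled", "telnet": "disabled"},
}

def plan_changes(desired, current):
    # Compare desired state with the current state and return only the
    # actions needed - running it twice produces no further changes.
    changes = []
    for pkg in desired["packages"]:
        if pkg not in current.get("packages", []):
            changes.append(("install", pkg))
    for svc, wanted in desired["services"].items():
        if current.get("services", {}).get(svc) != wanted:
            changes.append(("configure", svc, wanted))
    return changes   # an orchestration runbook would apply these consistently
```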
fog & edge computing - in a traditional data center architecture, sensors are quite likely to have low-bandwidth, higher-latency WAN links to the data networks.
Fog computing developed by cisco addresses this by placing fog node processing
resources close to the physical location for the iot sensors. The sensors
communicate with the fog node using wi-fi or 4g/5g and the fog node prioritizes
traffic, analyzes and remediates alertable conditions.
● Edge Devices Collect And Depend Upon Data For Their Operation.
● Edge Gateways Perform Some Pre-Processing Of Data To And From Edge
Devices To Enable Prioritization.
● Fog Nodes Can Be Incorporated As A Data Processing Layer Positioned Close To The Edge Gateways.
● The Cloud Or Data Center Layer Provides The Main Storage And Processing
Resources Plus Distribution And Aggregation Of Data Between Sites.
Instead of depending on a centralized cloud for computing and data storage, edge computing leverages local computing (routers, PCs, smartphones) to produce shorter response times, as the data is processed locally.
It Is Important To Consider How Sensitive Data Must Be Secured Not Just At Rest
But Also In Transit.
● Creation/Collection
● Distribution/Use
● Retention
● Disposal
Data roles & responsibilities - a data governance policy describes the security
controls that will be applied to protect data at each stage of its life cycle.
data owner - a senior executive role with ultimate responsibility for maintaining the
cia of the information asset. The owner also typically chooses a steward and
custodian and directs their actions and sets the budget and resource allocation for
controls.
data steward - primarily responsible for data quality. ensuring data is labeled and
identified with appropriate metadata and that it is stored in a secure format
data custodian - this role handles managing the system on which the data assets
are stored. This includes responsibility for enforcing access control, encryption and
backup measures.
data privacy officer (DPO) - this role is responsible for oversight of any personally
identifiable information (PII) assets managed by the company.
data controller - the entity responsible for determining why and how data is stored, collected and used, and for ensuring that these purposes and means are lawful. the controller has ultimate responsibility for privacy breaches and is not permitted to transfer that responsibility.
data processor - an entity engaged by the data controller to assist with technical
collection, storage or analysis tasks. a data processor follows the instructions of a
data controller with regard to collection or processing.
Data Types
Personally Identifiable Information (Pii) - This Is Data That Can Be Used To Identify,
Contact Or Locate An Individual Such As A Social Security Number.
Financial Information - This Refers To Data Held About Bank And Investment
Accounts Plus Tax Returns And Even Credit/Debit Cards. The Payment Card
Industry Data Security Standard (Pci Dss) Defines The Safe Handling And Storage
Of This Information.
The First Responders Might Be Able To Handle The Incident If It's A Minor Issue; However, In More Serious Cases, It May Need To Be Escalated To A More Senior Manager.
In Certain Cases, A Timescale Might Also Be Applied. For Example, Under Gdpr The Regulator Must Be Notified Within 72 Hours Of The Organization Becoming Aware Of The Breach.
● Data Sharing And Use Agreement - personal data can only be collected for a specific purpose, but data sets can be subject to de-identification to remove personal data. however, there is a risk of re-identification if the data is combined with other data sources. A data sharing and use agreement is a legal means of preventing this risk; it can specify terms for the way a data set can be analyzed and proscribe the use of re-identification techniques.
16.3 - Privacy And Data Controls
Data Can Be Described As Being In One Of Three States:
● Data At Rest - Data Is In Some Sort Of Persistent Storage Media. This Data
Can Be Encrypted And Acls Can Also Be Applied To It
● Data In Transit - This Is The State When Data Is Transmitted Over A Network. In This State It Can Be Protected By A Transport Encryption Protocol Such As Tls Or Ipsec.
● Data In Use/Processing - This Is The State When Data Is Present In Volatile Memory Such As System Ram, Cpu Registers And Cache While It Is Being Processed.
Data Exfiltration - Data Exfiltration Can Take Place Via A Wide Variety Of
Mechanisms:
DLP Products Automate The Discovery And Classification Of Data Types And
Enforce Rules So That Data Is Not Viewed Or Transferred Without Proper
Authorization.
● Policy Server - To Configure Classification, Confidentiality And Privacy Rules
And Policies, Log Incidents And Compile Reports
● Endpoint Agents - To Enforce Policy On Client Computers Even When They
Are Not Connected To The Network
● Network Agents - To Scan Communications At Network Borders And
Interface With Web And Messaging Servers To Enforce Policy.
Remediation Is The Action The Dlp Software Takes When It Detects A Policy
Violation.
● Alert Only
● Block - The User Is Prevented From Copying The Original File But Retains Access To It. The User May Not Be Alerted To The Policy Violation But It Will Be Logged As An Incident By The Management Engine.
● Quarantine - Access To The Original File Is Denied To The User.
● Tombstone - The Original File Is Quarantined And Replaced With One
Describing The Policy Violation And How The User Can Release It Again.
Data Minimization Affects The Data Retention Policy, And It's Necessary To Track How Long A Data Point Has Been Stored For And Whether Continued Retention Is Necessary For A Legitimate Processing Function.
Data Masking - Can Mean That All Or Part Of The Contents Of A Field Are Redacted
By Substituting All Character Strings With "X".
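A minimal masking sketch (keeping the last four characters is an assumed policy):

```python
def mask(value, keep=4, pad="x"):
    # Redact all but the last `keep` characters, e.g. a card number
    # "4111111111111111" becomes "xxxxxxxxxxxx1111".
    return pad * max(len(value) - keep, 0) + value[-keep:]
```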
First Task Is To Define And Categorize Types Of Incidents. In Order To Identify And
Manage Incidents, You Should Develop Some Method Of Reporting, Categorizing
And Prioritizing Them.
For Major Incidents, Expertise From Other Business Divisions Might Be Needed
Status And Event Details Should Be Circulated On A Need-To-Know Basis And Only
To Trusted Parties Identified On A Call List.
A Key Tool For Threat Research Is A Framework To Use To Describe The Stages Of An
Attack And These Stages Are Referred To As A Cyber Kill Chain.
Each Event May Also Be Described By Meta-Features Such As Date/Time, Kill Chain
Phase Etc.
● Tabletop - Least Costly Where The Facilitator Presents A Scenario And The
Responders Explain What Action They Would Take To Identify, Contain And
Eradicate The Threat. Flashcards Are Used In Place Of Computer Systems.
disaster recovery plan - also called the emergency response plan. This is a
document meant to minimize the effects of a disaster or disruption. meant for short
term events and implemented during the event itself.
business continuity plan - identifies how business processes should deal with
both minor and disaster-level disruption. a continuity plan ensures that business
processes can still function during an incident even if at a limited scale.
retention policy - a retention policy for historic logs and data captures sets the
period of which these are retained. indicators of a breach might be discovered only
months after the breach and this would not be possible without the retention
policy to keep logs and other digital evidence.
A Single-User Logon Failure Might Not Raise An Alert However Multiple Failed
Logins For The Same Account Over A Short Period Of Time Should Raise One.
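A sketch of that kind of correlation rule, assuming login events are available as (timestamp, account, success) tuples; real SIEM rules are written in the platform's own rule language:

```python
from collections import defaultdict
from datetime import timedelta

def failed_login_alerts(events, threshold=5, window=timedelta(minutes=10)):
    failures = defaultdict(list)
    alerts = []
    for ts, account, success in sorted(events):
        if success:
            continue
        failures[account].append(ts)
        # Keep only the failures inside the sliding time window.
        failures[account] = [t for t in failures[account] if ts - t <= window]
        if len(failures[account]) >= threshold:
            alerts.append((account, ts))      # raise an alert for this account
    return alerts
```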
One of the biggest challenges in operating a SIEM is tuning the system sensitivity
to reduce false positive indicators being reported as an event.
The correlation rules are likely to assign a criticality level to each match.
● log only - an event is produced and added to the siem's database, but no alert or alarm is automatically raised.
● alert - the event is listed on a dashboard or incident handling system for an
agent to assess.
● alarm - the event is automatically classified as critical and a priority alarm is
raised.
trend analysis - this is the process of detecting patterns or indicators within a data
set over a time series and using those patterns to make predictions about future
events.
● volume-based trend analysis - this can be based on logs growing much faster
than usual. This analysis can also be based on network traffic and endpoint
disk usage.
● statistical deviation analysis can show when a data point should be treated as suspicious. For example, a data point that appears outside the two clusters for standard and admin users might indicate some suspicious activity by that account (see the sketch below).
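A minimal sketch of flagging deviations using the sample mean and standard deviation (the threshold and sample values are illustrative; in practice the baseline would come from historical data):

```python
from statistics import mean, stdev

def outliers(samples, threshold=1.5):
    # Flag data points more than `threshold` standard deviations from the mean.
    m, s = mean(samples), stdev(samples)
    return [x for x in samples if s and abs(x - m) / s > threshold]

# e.g. daily log volume in MB - the final spike is flagged as suspicious.
print(outliers([210, 195, 220, 205, 980]))    # [980]
```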
logging platforms - log data from network appliances and hosts can be
aggregated by a siem either by installing a local agent to collect the data or by
using a forwarding system to transmit logs directly to the siem server.
syslog - provides an open format, protocol and server software for logging event
messages and it's used by a very wide range of host types.
a syslog message comprises a pri code, a header containing a timestamp and host
name and a message part. usually uses UDP port 514
● rsyslog uses the same configuration file syntax but can work over tcp and use
a secure connection.
● syslog-ng uses a different configuration file syntax but can also use
tcp/secure communications and more advanced options for message
filtering.
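For example, Python's standard logging module can forward events to a syslog/SIEM collector over UDP port 514 (the collector address is a placeholder):

```python
import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
# SysLogHandler defaults to the UDP transport described above.
logger.addHandler(SysLogHandler(address=("siem.example.com", 514)))
logger.warning("login failure for user bob")
```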
In linux, rather than writing events to syslog-format text files, logs from processes can be written to a binary format managed by journald (the systemd journal).
system & security logs - the five main categories of windows event logs are application, security, setup, system and forwarded events.
network logs can be generated from routers, firewalls, switches and access points.
authentication attempts for each host are likely to be written to the security log.
DNS event logs may be logged by a dns server while web servers are typically
configured to log http traffic that encounters an error or traffic that matches some
predefined rule set.
The status code of a response can reveal something about both the request and the
server's behavior.
It can also be a means of accessing data that is encrypted when stored on a mass
storage device.
file - file metadata is stored as attributes. The file system tracks when a file was
created, accessed and modified. The acl attached to a file showing its permissions
also represents another type of attribute.
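For example, the timestamps tracked by the file system can be read with a standard os.stat() call (the file name is hypothetical; note that st_ctime means creation time on Windows but metadata-change time on Linux):

```python
import os
from datetime import datetime, timezone

st = os.stat("report.docx")        # hypothetical file
for label, ts in (("modified", st.st_mtime),
                  ("accessed", st.st_atime),
                  ("changed/created", st.st_ctime)):
    print(label, datetime.fromtimestamp(ts, tz=timezone.utc))
```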
web - when a client requests a resource from a web server, the server returns the
resource plus headers setting or describing its properties. headers describe the
type of data returned.
email - an email's internet header contains address information for the recipient
and sender plus details of the servers handling transmission of the message
between them.
Recovery Means Restoring The Affected Systems Back To Their Original Working
State Before The Incident And Ensuring That The System Cannot Be Attacked Again
Using The Same Attack Vector.
● Social Engineering
● Vulnerabilities
● Lack Of Security Controls
● Configuration Drift
● Weak Configuration
The purpose of SOAR is to solve the problem of the volume of alerts overwhelming
the analyst's ability to respond. An incident response workflow is usually defined as
a playbook. A playbook is a checklist of actions to perform to detect and respond to
a specific type of incident.
Prosecuting External Threat Sources Can Be Difficult As The Threat Actor May Be In
A Different Country Or Have Taken Effective Steps To Disguise Their Location.
Like Dna Or Fingerprints, Digital Evidence Is Latent Meaning That The Evidence
Cannot Be Seen With The Naked Eye; Rather It Must Be Interpreted Using A
Machine Or Process.
Due Process - Term Used In Us And Uk Common Law That Requires That People
Only Be Convicted Of Crimes Following The Fair Application Of The Laws Of The
Land.
The First Response Period Following Detection And Notification Is Often Critical. To
Gather Evidence Successfully, It‘s Vital That Staff Do Not Panic Or Act In A Way That
Would Compromise The Investigation.
Legal Hold - This Refers To The Fact That Information That May Be Relevant To A
Court Case Must Be Preserved. This Means That Computer Systems May Be Taken
As Evidence With All The Obvious Disruption To A Network That Entails.
timelines - a very important part of a forensic investigation will involve tying events
to specific times to establish a consistent and verifiable narrative. This visual
representation of events in a chronological order is called a timeline.
Operating systems and files use a variety of methods to identify the time at which
something occurred but the benchmark time is coordinated universal time (utc).
Local time will be offset from UTC by several hours and this local time offset may
also vary if a seasonal daylight saving time is in place.
NTFS uses utc "internally", but many operating systems and file systems record timestamps as the local system time. When collecting evidence, it is vital to establish how a timestamp is calculated and to note the offset between the local system time and utc.
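A small sketch of normalizing a local timestamp to UTC and recording the offset, using Python's datetime module:

```python
from datetime import datetime, timezone

local = datetime.now().astimezone()      # local time, tagged with its UTC offset
utc = local.astimezone(timezone.utc)     # the same instant normalized to UTC
print(local.isoformat(), "->", utc.isoformat(), "offset:", local.utcoffset())
```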
Event logs and network traffic - an investigation may also obtain the event logs
for one or more network appliances and/or server hosts. network captures might
provide valuable evidence.
For forensics, data records that are not supported by physical evidence (data drive)
must meet many tests to be admissible in court. if the records were captured by a
SIEM, it must demonstrate accuracy and integrity.
The intelligence gathered from a digital forensic activity can be used in two
different ways:
Data acquisition is also more complicated when capturing evidence from a digital
scene compared to a physical one (evidence may be lost due to system glitches or
loss of power).
Data acquisition usually proceeds by using a tool to make an image from the data
held on the target device. the image can be acquired from either volatile or
nonvolatile storage.
system memory acquisition - system memory is volatile data held in the ram
modules. A system memory dump creates an image file that can be analyzed to
identify the processes that are running, the contents of temporary file systems,
registry data, network connections and more.
There are three main ways to collect the contents of the system memory
● live acquisition
● crash dump
● hibernation file and pagefile
Disk image acquisition refers to acquiring data from non-volatile storage. it could
also be referred to as device acquisition meaning the ssd storage in a smartphone
or media player.
Live acquisition - means copying the data while the host is still running. this may
capture more evidence or more data for analysis and reduce the impact on overall
services. however the data on the actual disks will have changed so this method
may not produce legally acceptable evidence.
Static acquisition by shutting down the host - runs the risk that the malware will
detect the shut-down process and perform anti-forensics to try and remove traces
of itself.
Static acquisition by pulling the plug - this means disconnecting the power at the
wall socket. This will likely preserve the storage device in a forensically clean state
but there is the risk of corrupting data.
To obtain a clean forensic image from a non-volatile storage, you need to ensure
nothing you do alters the data or metadata on the source disk or file system. A write
blocker can ensure this by preventing any data from being changed by filtering
write commands.
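Alongside a write blocker, a common way to demonstrate that an acquired image has not changed afterwards is to hash it at acquisition time and re-compute the hash later; a minimal sketch (the image file name is hypothetical):

```python
import hashlib

def image_hash(path, chunk_size=1024 * 1024):
    # Hash the disk image in chunks so very large files need not fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Recording this digest when the image is acquired and re-computing it later
# demonstrates that the image has not been altered.
print(image_hash("evidence01.dd"))
```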
The host devices and media taken from the crime scene should be labeled, bagged
and sealed using tamper-evident bags. bags should have anti-static shielding to
reduce the possibility that data will be damaged or corrupted on the electronic
media by electrostatic discharge.
The evidence should be stored in a secure facility.
● network - packet captures and traffic flows can contain valuable evidence. on most networks, this evidence will come from a SIEM.
● cache - software cache can be acquired as part of a disk image. the contents
of hardware cache are generally not recoverable.
● artifacts and data recovery - artifact refers to any type of data that is not part
of the mainstream data structures of an os. Data recovery refers to analyzing
a disk for file fragments that might represent deleted or overwritten files. The
process of recovering them is referred to as carving.
● snapshot - is a live acquisition image of a persistent disk and may be the only
means of acquiring data from a virtual machine or cloud process.
● firmware - is usually implemented as flash memory. Some types like the pc
firmware can potentially be extracted from the device or from the system
memory using an imaging utility.
Risk Types
● external
● internal
● multiparty (supply chain attack)
● intellectual property (ip) theft
● software compliance/licensing
● legacy systems
Quantitative risk assessment - this aims to assign concrete values to each risk
factor:
● single loss expectancy (sle) - the amount that would be lost in a single
occurrence of the risk factor. it's calculated by multiplying the value of the
asset by an exposure factor (ef). ef is the percentage of the asset value that
would be lost.
● annualized loss expectancy (ale) - the amount that would be lost over the
course of a year. done by multiplying the sle by the annualized rate of
occurrence (aro)
it's important to realize that the value of an asset isn't just about its material value
but also the damage its compromise could cost the company (e.g a server is worth
more than its cost).
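A worked example using these formulas (all figures are hypothetical):

```python
asset_value = 50_000          # value of the asset, including business impact
exposure_factor = 0.2         # 20% of the asset value lost per occurrence
aro = 3                       # annualized rate of occurrence (times per year)

sle = asset_value * exposure_factor   # single loss expectancy     = 10,000
ale = sle * aro                       # annualized loss expectancy = 30,000
print(sle, ale)
```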
Qualitative risk assessment - seeks out people's opinions of which risk factors
are significant. assets and risks may be placed in categories such as high, medium
or low value and critical, high, medium or low probability respectively.
19.2 - Risk Controls
Risk Mitigation - This Is The Most Common Method Of Handling Risk And Typically Involves The Use Of Countermeasures Or Safeguards. The Likelihood Of The Risk Occurring Must Be Reduced To The Absolute Minimum.
Risk Avoidance - The Cost Of The Risk Involved Is Too High And Must Be Avoided.
Mitigation Means The Likelihood Or Impact Of The Risk Is Reduced As Far As Possible, While Avoidance Means The Risk Is Eliminated Completely.
Risk Acceptance - The Cost Of Mitigating The Risk Outweighs The Cost Of Losing
The Asset. Risk Can Also Be Accepted When There Isn't A Better Solution.
Risk Appetite & Residual Risk - Where Risk Acceptance Has The Scope Of A
Single System, Risk Appetite Has A Project Or Institution-Wide Scope And Is
Typically Constrained By Regulation And Compliance. Where Inherent Risks Are The
Risks Before Security Controls Have Been Applied, Residual Risks Are Those Carried
Over After The Controls Have Been Applied.
Control Risk Is A Measure Of How Much Less Effective A Security Control Has
Become Over Time E.G Antivirus.
Risk Register - A Document Showing The Results Of Risk Assessments In A
Comprehensible Format.
Where business impact analysis (BIA) identifies risks, the business continuity plan (BCP) identifies controls and processes that enable an organization to maintain critical workflows in the face of an incident.
mission essential function (MEF) - this is one that cannot be deferred. the
business must be able to perform the function as close to continually as possible.
recovery time objective (RTO) - the targeted amount of time to recover business
operations after a disaster.
work recovery time (WRT) - following systems recovery, there may be additional
work to reintegrate different systems, test overall functionality and brief system
users on any changes.
recovery point objective (RPO) - refers to the maximum amount of data that can
be lost after recovery from a disaster before the loss exceeds what is tolerable to an
organization.
● People
● Tangible Assets
● Intangible Assets (Ideas, Reputation, Brand)
● Procedures (Supply Chains, Critical Procedures)
Single Points Of Failure - A SPOF Is An Asset That Causes The Entire Workflow To
Collapse If It Is Damaged Or Unavailable. Can Be Mitigated By Provisioning
Redundant Components.
Mean Time To Failure (Mttf) And Mean Time Between Failures (Mtbf) Represent The Expected Lifetime Of A Product. Mttf Should Be Used For Non-Repairable Assets; For Example, A Hard Drive Can Be Described With An Mttf While A Server Can Be Described With An Mtbf.
● Calculation For Mtbf Is The Total Time Divided By The Number Of Failures.
For Example 10 Devices That Run For 50 Hours And Two Of Them Fail, The
Mtbf Is 250.
● Calculation For Mttf For The Same Test Is The Total Time Divided By The Number Of Devices, So (10 × 50) / 10 = 50 Hours Per Device (See The Worked Example Below).
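The same example expressed as a short calculation:

```python
# 10 devices run for 50 hours and 2 of them fail.
devices, hours, failures = 10, 50, 2
total_time = devices * hours          # 500 device-hours

mtbf = total_time / failures          # 500 / 2  = 250 hours between failures
mttf = total_time / devices           # 500 / 10 = 50 hours per device
print(mtbf, mttf)
```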
Mean Time To Repair (Mttr) Is A Measure Of The Time Taken To Correct A Fault So
That The System Is Restored To Full Operation. This Metric Is Important For
Determining The Overall Rto.
Disasters
A Site Risk Assessment Should Be Conducted To Identify Risks From These Factors.
● Identify Scenarios For Natural And Non-Natural Disasters And Options For
Protecting Systems
● Identify Tasks, Resources And Responsibilities For Responding To A Disaster
● Train Staff In The Disaster Planning Procedures And How To React Well To
Change.
High Availability Also Means That A System Is Able To Cope With Rapid Growth In
Demand.
Fault Tolerance & Redundancy - A System That Can Experience Failures And
Continue To Provide The Same Or Nearly The Same Level Of Service Is Said To Be
Fault Tolerant.
Power Redundancy
Nic Teaming - For Example, Four 1 Gbps Ports Give An Overall Bandwidth Of 4 Gbps, So If One Port Goes Down, 3 Gbps Of Bandwidth Will Still Be Provided.
Switching & Routing - Network Cabling Should Be Designed To Allow For Multiple
Paths Between The Various Switches And Routers So That During A Failure Of One
Part Of The Network, The Rest Remains Operational.
Load Balancers - Nic Teaming Provides Load Balancing At The Adapter Level; Load Balancing And Clustering Can Also Be Provisioned At A Service Level.
● Storage Area Networks - Redundancy Can Be Provided Within The San And
Replication Can Also Take Place Between Sans Using Wan Links.
● Database
● Virtual Machine - The Same Vm Instance Can Be Deployed In Multiple
Locations. This Can Be Achieved By Replicating The Vm's Disk Image And
Configuration Settings.
Geographical Dispersal Refers To Replicating Data Between Hot And Warm Sites That Are Physically Distant From One Another. This Means That Data Is Protected Against A Natural Disaster Wiping Out Storage At One Of The Sites.
The Recovery Window Is Determined By The Recovery Point Objective (Rpo) Which
Is Determined Through Business Continuity Planning.
Backup Types
● full includes all files and directories while incremental and differential check
the status of the archive attribute before including a file. The archive
attribute is set whenever the file is modified so the backup software knows
which files have been changed and need to be copied.
● incremental makes a backup of all new files as well as files modified since the
last backup while differential makes a backup of all new and modified files
since the last full backup. Incremental backups save backup time but can be more time-consuming when the system must be restored: the system is restored first from the last full backup set and then from each incremental backup that has subsequently occurred (see the sketch of the archive-attribute logic below).
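A sketch of the archive-attribute logic described above (`files` is a simple name-to-flag mapping; real backup software tracks much more state):

```python
def backup(files, mode):
    # The archive flag is True when a file has changed since it was last backed up.
    selected = [name for name, flag in files.items()
                if mode == "full" or flag]
    if mode in ("full", "incremental"):
        # Full and incremental backups clear the archive attribute...
        for name in selected:
            files[name] = False
    # ...while a differential backup leaves it set, so the next differential
    # still captures everything changed since the last full backup.
    return selected
```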
Snapshots And Images - Snapshots Are Used For Open Files That Are Being Used
All The Time Because Copy-Based Mechanisms Are Not Able To Backup Open Files.
In Windows, Snapshots Are Provided For On Ntfs By The Volume Shadow Copy
Service (Vss).
Backup Storage Issues - Backups Require Cia As Well And Must Be Secured At
All Times. Natural Disasters Such As Fires And Earthquakes Must Also Be Accounted
For.
The 3-2-1 Rule States That You Should Have Three Copies Of Your Data (Including The Production Copy) Across Two Media Types, With One Copy Held Offline And Offsite.
● disk
● network attached storage (nas) - an appliance that is a specially configured
type of server that makes raid storage available over common network
protocols
● tape - very cost effective and can be transported offsite but slow compared
to disk-based solutions especially for restore operations
● san & cloud
● enable and test power delivery systems (grid power, ups, secondary
generators and so on)
● enable and test switch infrastructure then routing appliances and systems
● enable and test network security appliances (firewalls, ids)
● enable and test critical network servers (dhcp, dns, ntp and directory services)
● enable and test back-end and middleware (databases). verify data integrity
● enable and test front-end applications
● enable client workstations and devices and client browser access.
non-persistence
● master image - the "gold copy" of a server instance with the os applications
and patches all installed and configured.
● automated build from a template - similar to a master image and is the build
instructions for an instance. rather than storing a master image, the software
may build and provision an instance according to the template instructions.
Change control and change management reduce the risk that changes to these
components could cause service disruption.
The naming strategy should allow admins to identify the type and function of any
particular resource or location at any point in the network directory.
Change control & change management - a change control process can be used
to request and approve changes in a planned and controlled way. change requests
are usually generated when
In a formal change management process, the need or reasons for change and the
procedure for implementing the change is captured in a request for change (rfc)
document and submitted for approval.
For major changes, a trial change should be attempted first and every change
should be accompanied by a rollback plan so the change can be reversed if it has a
negative impact.
site resiliency - an alternate processing site might always be available and in use
while a recovery site might take longer to set up or only be used in an emergency.
Vendor diversity - as well as deploying multiple types of controls, there are also
advantages in leveraging vendor diversity.
While single-vendor solutions provide interoperability and can reduce training and support costs, they do have several disadvantages.
Active defense means an engagement with the adversary and can mean the
deployment of decoy assets to act as lures or bait.
A honey pot is a system set up to attract threat actors, with the intention of
analyzing attack strategies and tools to provide early warnings of attack attempts. it
could also be used to detect internal fraud, snooping and malpractice.
Site layout, fencing & lighting - given constraints of cost and existing
infrastructure, try to plan the site using the following principles
Gateways and locks - in order to secure a gateway, it must be fitted with a lock.
lock types can be categorized as follows:
● physical - a conventional lock that prevents the door handle from being
operated without the use of a key.
● electronic - rather than a key, the lock is operated by entering a pin on an
electronic keypad. This type of lock is also referred to as cipher, combination
or keyless.
● biometric - a lock may be integrated with biometric scanner
Physical attacks against smart cards and usb - smart cards used to bypass
electronic locks can be vulnerable to cloning and skimming attacks.
● card cloning - making one or more copies of an existing card. a lost or stolen card with no cryptographic protections can be physically duplicated.
● skimming - refers to using a counterfeit card reader to capture card details, which are then used to program a duplicate.
Malicious usb charging cables and plugs are also a widespread problem. a usb data
blocker can provide mitigation against "juice-jacking" attacks by preventing any
sort of data transfer when the smartphone is connected to a charge point.
alarm systems & sensors - there are five main types of alarms
Security guards can be placed in front of secure and important zones and can act
as a very effective intrusion detection and deterrence mechanism but can be
expensive.
The other big advantage is that movement and access can also be recorded but the
main drawback is that response times are longer and security may be
compromised if not enough staff are present to monitor the camera feeds.
Reception personnel & id badges - a very important aspect of surveillance is the challenge policy, which can be quite effective against social engineering attacks.
An access list can be held at the reception area for each secure area to determine
who is allowed to enter.
Reception areas for high-security zones might be staffed by at least two people at
all times
Air gap/ DMZ - an air gapped host is one that is not physically connected to any
network. such a host would normally have stringent physical access controls.
an air gap within a secure area serves the same function as a DMZ. as well as being
disconnected from any network, the physical space around the host makes it easier
to detect unauthorized attempts to approach the asset.
heating, ventilation & air conditioning - environmental controls mitigate the loss
of availability through mechanical issues with equipment such as overheating.
For computer rooms and data centers, the environment is typically kept at a
temperature of about 20-22 degrees centigrade and relative humidity of 50%.
Hot and cold aisles - a server room or data center should be designed in such a
way as to maximize air flow across the server or racks.
The servers are placed back-to-back rather than front-to-back so that the warm exhaust from one bank of servers does not form the air intake for another bank. This is referred to as a hot aisle/cold aisle arrangement.
Fire detection & suppression - fire suppression systems work on the basis of the
fire triangle. This triangle works on the principle that a fire requires heat, oxygen
and fuel to ignite and burn so removing any one of them will suppress the fire.
Overhead sprinklers may also be installed but there is the risk of a burst pipe and
accidental triggering as well as the damage it could cause in the event of an actual
fire.
secure data destruction - physical security controls also need to take account of
the disposal phase of the data life cycle. Media sanitization and remnant removal
refer to erasing data from hard drives, flash drives and tape media before they are
disposed of.
● burning
● shredding and pulping
● pulverization
● degaussing - exposing a hard disk to a powerful electromagnet disrupts the
magnetic pattern that stores the data.
secure erase (se) - since 2001, the sata and serial attached scsi (sas) specifications
have included a secure erase (se) command. This command can be invoked using a
drive/array utility or the hdparm linux utility. on hdds, this performs a single pass of zero-filling.
Instant secure erase (ise) - hdds and ssds that are self-encrypting drives (seds) support another option, invoking the sanitize command set defined in the sata and sas standards from 2012 to perform a crypto erase. drive vendors implement this as ise. with ise, all data on the drive is encrypted using a media encryption key (mek), and when the erase command is issued, the mek is erased, rendering the data unrecoverable.