
Unit-2

Infrastructure Security & Virtualization Security


Cloud Computing
• Cloud computing is the delivery of computing as a service rather than a product, whereby
shared resources, software, and information are provided to computers and other devices
as a metered service over a network
• Cloud computing provides computation, software, data access, and storage resources
without requiring cloud users to know the location and other details of the computing
infrastructure.
• End users access cloud-based applications through a web browser or a lightweight
desktop or mobile app, while the business software and data are stored on servers at a
remote location. Cloud application providers strive to give the same or better service and
performance as if the software programs were installed locally on end-user computers.
• At the foundation of cloud computing is the broader concept of infrastructure convergence
and shared services. This type of data center environment allows enterprises to get their
applications up and running faster, with easier manageability and less maintenance, and
enables IT to more rapidly adjust IT resources (such as servers, storage, and networking)
to meet fluctuating and unpredictable business demand.

2
Cloud Computing characteristics:
• Empowerment of end users, by putting the provisioning of computing resources under their
own control rather than that of a centralized IT service.
• Agility improves with users' ability to re-provision technological infrastructure resources.
• Application programming interface (API) accessibility to software that enables machines
to interact with cloud software in the same way the user interface facilitates interaction
between humans and computers. Cloud computing systems typically use REST-based
APIs (a brief REST call sketch follows this list).
• Cost is claimed to be reduced, and in a public cloud delivery model capital expenditure is
converted to operational expenditure. This is purported to lower barriers to entry, as
infrastructure is typically provided by a third party and does not need to be purchased for
one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is
fine-grained, with usage-based options, and fewer in-house IT skills are required for
implementation.
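As a brief illustration of the REST-based access mentioned above, the following Python sketch uses the requests library against a purely hypothetical endpoint and token (no specific provider's API is implied): listing servers is a GET on a resource collection, and provisioning a new one is a POST.

    import requests

    API_BASE = "https://cloud.example.com/api/v1"          # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <api-token>"}       # placeholder credential

    # Read operation: list the caller's virtual machines
    resp = requests.get(f"{API_BASE}/servers", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    for server in resp.json().get("servers", []):
        print(server["id"], server["status"])

    # Create operation: request a new virtual machine
    payload = {"name": "web-01", "flavor": "small", "image": "ubuntu-22.04"}
    resp = requests.post(f"{API_BASE}/servers", json=payload, headers=HEADERS, timeout=10)
    print("Provisioning request accepted:", resp.status_code)

The same HTTP verbs a browser uses are what let machines drive cloud software programmatically.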

3
Cloud Computing characteristics:
• Device and location independence enables users to access systems using a web browser regardless
of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site
(typically provided by a third party) and accessed via the Internet, users can connect from anywhere.
• Virtualization technology allows servers and storage devices to be shared and utilization to be
increased. Applications can be easily migrated from one physical server to another.
• Multi-tenancy enables sharing of resources and costs across a large pool of users, thus allowing for:
• Centralization of infrastructure in locations with lower costs (such as real estate, electricity,
etc.)
• Peak-load capacity increases (users need not engineer for highest possible load levels)
• Utilization and efficiency improvements for systems that are often only 10–20% utilized.
• Reliability is improved if multiple redundant sites are used, which makes well-designed cloud
computing suitable for business continuity and disaster recovery.
• Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained,
self-service basis in near real time, without users having to engineer for peak loads.

4
Cloud Computing characteristics:
• Performance is monitored, and consistent and loosely coupled architectures are constructed using
web services as the system interface.
• Security could improve due to centralization of data, increased security-focused resources, etc., but
concerns can persist about loss of control over certain sensitive data, and the lack of security for
stored kernels
• Security is often as good as or better than other traditional systems, in part because providers are able
to devote resources to solving security issues that many customers cannot afford.
• However, the complexity of security is greatly increased when data is distributed over a wider area or
greater number of devices and in multi-tenant systems that are being shared by unrelated users.
• In addition, user access to security audit logs may be difficult or impossible.
• Private cloud installations are in part motivated by users' desire to retain control over the
infrastructure and avoid losing control of information security.
• Maintenance of cloud computing applications is easier, because they do not need to be installed on
each user's computer and can be accessed from different places.

5
Service Models
• Cloud computing providers offer their services according to three fundamental models: Infrastructure
as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), where IaaS is the
most basic and each higher model abstracts away the details of the models below it.
• Infrastructure as a Service (IaaS)
• In this most basic cloud service model, cloud providers offer computers (as physical or, more often, virtual
machines), raw (block) storage, firewalls, load balancers, and networks. IaaS providers supply these
resources on demand from their large pools installed in data centers. Local area networks, including IP
addresses, are part of the offer. For wide-area connectivity, the Internet can be used or, in carrier clouds,
dedicated virtual private networks can be configured.
• To deploy their applications, cloud users then install operating system images on the machines as well as their
application software. In this model, it is the cloud user who is responsible for patching and maintaining the
operating systems and application software. Cloud providers typically bill IaaS services on a utility
computing basis; that is, cost reflects the amount of resources allocated and consumed (a small provisioning
sketch follows this slide).
• Platform as a Service (PaaS)
• In the PaaS model, cloud providers deliver a computing platform and/or solution stack typically including
operating system, programming language execution environment, database, and web server. Application
developers can develop and run their software solutions on a cloud platform without the cost and complexity
of buying and managing the underlying hardware and software layers. With some PaaS offers, the underlying
compute and storage resources scale automatically to match application demand such that the cloud user does
not have to allocate resources manually.
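To make the IaaS and utility-billing idea concrete, here is a minimal sketch using Amazon's boto3 SDK; the region, image ID, and key pair name are invented placeholders, and a real account would use its own values.

    import boto3

    # Connect to the provider's compute API (region chosen for the example)
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Ask the provider to allocate one small virtual machine from its pool
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical OS image chosen by the cloud user
        InstanceType="t3.micro",           # the size drives the utility-computing charge
        MinCount=1,
        MaxCount=1,
        KeyName="my-keypair",              # hypothetical SSH key pair
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Provisioned instance:", instance_id)

    # From here on, the cloud user (not the provider) is responsible for patching
    # the operating system and application software on this instance.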

6
Service Models
• Software as a Service (SaaS)
• In this model, cloud providers install and operate application software in the cloud and cloud users access the
software from cloud clients.
• The cloud users do not manage the cloud infrastructure and platform on which the application is running.
This eliminates the need to install and run the application on the cloud user's own computers simplifying
maintenance and support. What makes a cloud application different from other applications is its elasticity.
• This can be achieved by cloning tasks onto multiple virtual machines at run-time to meet the changing work
demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the
cloud user who sees only a single access point.
• To accommodate a large number of cloud users, cloud applications can be multitenant, that is, any machine
serves more than one cloud user organization.
• It is common to refer to special types of cloud-based application software with a similar naming convention:
desktop as a service, business process as a service, test environment as a service, communication as a
service.
• The pricing model for SaaS applications is typically a monthly or yearly flat fee per user.

7
Cloud Clients
• Users access cloud computing using networked client devices, such as desktop computers,
laptops, tablets, and smartphones. Some of these devices (cloud clients) rely on cloud
computing for all or most of their applications, to the point of being essentially useless without it.
Examples are thin clients and the browser-based Chromebook.
• Many cloud applications do not require specific software on the client and instead use a web
browser to interact with the cloud application.
• With Ajax and HTML5, these web user interfaces can achieve a look and feel similar to, or even better
than, native applications. Some cloud applications, however, support specific client software
dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy
applications (line of business applications that until now have been prevalent in thin client
Windows computing) are delivered via a screen-sharing technology.

8
Deployment Models
• Public cloud
• A public cloud is one based on the standard cloud computing model, in which a service provider
makes resources, such as applications and storage, available to the general public over the
Internet. Public cloud services may be free or offered on a pay-per-usage model.
• Community cloud
• Community cloud shares infrastructure between several organizations from a specific
community with common concerns (security, compliance, jurisdiction, etc.), whether managed
internally or by a third-party and hosted internally or externally. The costs are spread over fewer
users than a public cloud (but more than a private cloud), so only some of the cost savings
potential of cloud computing are realized.
• Hybrid cloud
• Hybrid cloud is a composition of two or more clouds (private, community or public) that remain
unique entities but are bound together, offering the benefits of multiple deployment models. It
can also be defined as multiple cloud systems that are connected in a way that allows programs
and data to be moved easily from one deployment system to another.

9
Deployment Models
• Private cloud
• Private cloud is infrastructure operated solely for a single organization, whether managed
internally or by a third party and hosted internally or externally.
• They have attracted criticism because users "still have to buy, build, and manage them" and thus
do not benefit from less hands-on management.
• Private Cloud Rentals
• Private cloud rentals are a cost-effective option to consider when security is a concern.
Companies might consider the hybrid cloud model when replacing obsolete data center
equipment. When moving critically important company private data off site to a public cloud is
not an option, renting a modular data center can be considered.

10
Other Concepts
• Cloud architecture, the systems architecture of the software systems involved in the delivery of
cloud computing, typically involves multiple cloud components communicating with each other
over a loose coupling mechanism such as a messaging queue. Elastic provision implies
intelligence in the use of tight or loose coupling as applied to mechanisms such as these and
others.
• The Intercloud
• The intercloud is an interconnected global "cloud of clouds" and an extension of the Internet
"network of networks" on which it is based.
• Cloud engineering
• Cloud engineering is the application of engineering disciplines to cloud computing. It brings a
systematic approach to the high level concerns of commercialisation, standardisation, and
governance in conceiving, developing, operating and maintaining cloud computing systems. It is
a multidisciplinary method encompassing contributions from diverse areas such as systems,
software, web, performance, information, security, platform, risk, and quality engineering.

11
Is Cloud Computing Secure?
• For most organizations, the journey to cloud is no longer a question of “if”
but rather “when”, and a large number of enterprises have already
travelled some way down this path.
• Is cloud computing secure?
• A simple answer is: Yes, if you approach cloud in the right way, with the
correct checks and balances to ensure all necessary security and risk
management measures are covered.

12
Is Cloud Computing Secure?

• Companies ready to adopt cloud services are right to place security at the top of
their agendas.

• The consequences of getting your cloud security strategy wrong could not be more
serious.

• As many unwary businesses have found to their cost in recent high-profile cases, a
single cloud-related security breach can result in an organization severely
damaging its reputation – or, worse, the entire business being put at risk.
13
Is Cloud Computing Secure?
• Those further along their cloud path are finding that, like all forms of
information security, the question boils down to effective risk management.
The different layers in the cloud services stack are:
• Infrastructure-as-a-Service (IaaS)
• Platform-as-a-Service (PaaS)
• Software-as-a-Service (SaaS)
• Business Process-as-a-Service (BPaaS).
• These layers – and their associated standards, requirements and solutions –
are all at different levels of maturity.
14
Is Cloud Computing Secure?
• The world of business is becoming more uncertain, as with new system
architectures come new cyber threats. No longer can the mechanisms
deployed in the past be relied on for protection”
--Nick Gaines, Group IS Director, Volkswagen UK

• Different types of cloud have different security characteristics. The table on the
next page shows a simple comparison. (The number of stars indicates how
suitable each type of cloud is for each area.)
• We choose to characterize these types as private, public and community
clouds – or “hybrid” to refer to a combination of approaches.
15
Issues
• Privacy
• The cloud model has been criticised by privacy advocates for the greater ease with which the
companies hosting the cloud services can control, and thus monitor at will, lawfully or unlawfully,
the communication and data stored between the user and the host company.
• Instances such as the secret NSA program, which worked with AT&T and Verizon and recorded
over 10 million phone calls between American citizens, cause uncertainty among privacy
advocates, as do the greater powers such arrangements give telecommunication companies to
monitor user activity.
• Using a cloud service provider (CSP) can complicate privacy of data because of the extent to
which virtualization for cloud processing (virtual machines) and cloud storage are used to
implement cloud services. The point is that because of CSP operations, customer or tenant data
may not remain on the same system, or in the same data center or even within the same
provider's cloud. This can lead to legal concerns over jurisdiction.
• While there have been efforts (such as US-EU Safe Harbor) to "harmonise" the legal
environment, providers such as Amazon still cater to major markets (typically the United States
and the European Union) by deploying local infrastructure and allowing customers to select
"availability zones." Cloud computing poses privacy concerns because the service provider may,
at any point in time, access the data that is on the cloud. They could accidentally or
deliberately alter or even delete some information.

16
Issues
• Compliance
• In order to obtain compliance with regulations including FISMA, HIPAA, and SOX in the
United States, the Data Protection Directive in the EU and the credit card industry's PCI DSS,
users may have to adopt community or hybrid deployment modes that are typically more
expensive and may offer restricted benefits. This is how Google is able to "manage and meet
additional government policy requirements beyond FISMA" and Rackspace Cloud or
QubeSpace are able to claim PCI compliance.
• Many providers also obtain SAS 70 Type II certification, but this has been criticised on the
grounds that the hand-picked set of goals and standards determined by the auditor and the
auditee are often not disclosed and can vary widely. Providers typically make this information
available on request, under a non-disclosure agreement.
• Customers in the EU contracting with cloud providers established outside the EU/EEA have to
adhere to the EU regulations on export of personal data.
• Legal
• As can be expected with any revolutionary change in the landscape of global computing, certain
legal issues arise, ranging from trademark infringement and security concerns to the sharing of
proprietary data resources.

17
Issues
• Open source
• Open-source software has provided the foundation for many cloud computing implementations,
one prominent example being the Hadoop framework. In November 2007, the Free Software
Foundation released the Affero General Public License, a version of GPLv3 intended to close a
perceived legal loophole associated with free software designed to be run over a network.
• Open standards
• Most cloud providers expose APIs that are typically well-documented (often under a Creative
Commons license) but also unique to their implementation and thus not interoperable. Some
vendors have adopted others' APIs and there are a number of open standards under development,
with a view to delivering interoperability and portability
• Security
• As cloud computing is achieving increased popularity, concerns are being voiced about the
security issues introduced through adoption of this new model. The effectiveness and efficiency
of traditional protection mechanisms are being reconsidered as the characteristics of this
innovative deployment model can differ widely from those of traditional architectures. An
alternative perspective on the topic of cloud security is that this is but another, although quite
broad, case of "applied security" and that similar security principles that apply in shared multi-
user mainframe security models apply with cloud security

18
Issues
• The relative security of cloud computing services is a contentious issue that may be delaying its
adoption.
• Physical control of the Private Cloud equipment is more secure than having the equipment off
site and under someone else’s control. Physical control and the ability to visually inspect the
data links and access ports is required in order to ensure data links are not compromised. Issues
barring the adoption of cloud computing are due in large part to the private and public sectors'
unease surrounding the external management of security-based services. It is the very nature of
cloud computing-based services, private or public, that promotes external management of
provided services. This gives cloud computing service providers a strong incentive to
prioritize building and maintaining strong management of secure services.
• Security issues have been categorised into sensitive data access, data segregation, privacy, bug
exploitation, recovery, accountability, malicious insiders, management console security, account
control, and multi-tenancy issues. Solutions to various cloud security issues vary, from
cryptography, particularly public key infrastructure (PKI), to use of multiple cloud providers,
standardisation of APIs, and improving virtual machine support and legal support.

19
Issues
• Sustainability
• Although cloud computing is often assumed to be a form of "green computing", there is as yet
no published study to substantiate this assumption.
• Finland, Sweden and Switzerland are trying to attract cloud computing data centers. Energy
efficiency in cloud computing can result from energy-aware scheduling and server consolidation.
• However, in the case of distributed clouds over data centers with different sources of energy,
including renewable sources, a small compromise on energy consumption reduction
could result in a large reduction in carbon footprint.
• Abuse
• As with privately purchased hardware, crackers posing as legitimate customers can purchase the
services of cloud computing for nefarious purposes. This includes password cracking and
launching attacks using the purchased services.
• In 2009, a banking trojan illegally used the popular Amazon service as a command and control
channel that issued software updates and malicious instructions to PCs that were infected by the
malware

20
Secure Cloud Computing
• Cloud computing security (sometimes referred to simply as "cloud security") is an evolving
sub-domain of computer security, network security, and, more broadly, information security. It
refers to a broad set of policies, technologies, and controls deployed to protect data, applications,
and the associated infrastructure of cloud computing. Cloud security is not to be confused with
security software offerings that are "cloud-based" (a.k.a. security-as-a-service).
• There are a number of security issues/concerns associated with cloud computing but these issues
fall into two broad categories: Security issues faced by cloud providers (organizations providing
Software-, Platform-, or Infrastructure-as-a-Service via the cloud) and security issues faced by
their customers. In most cases, the provider must ensure that their infrastructure is secure and
that their clients’ data and applications are protected while the customer must ensure that the
provider has taken the proper security measures to protect their information
• The extensive use of virtualization in implementing cloud infrastructure brings unique security
concerns for customers or tenants of a public cloud service. Virtualization alters the relationship
between the OS and underlying hardware - be it computing, storage or even networking. This
introduces an additional layer - virtualization - that itself must be properly configured, managed
and secured. Specific concerns include the potential to compromise the virtualization software,
or "hypervisor". While these concerns are largely theoretical, they do exist.

21
Dimensions of Cloud Computing Security

• Correct security controls should be implemented according to asset, threat, and
vulnerability risk assessment matrices. Cloud security concerns can be grouped
into any number of dimensions (Gartner names seven, while the Cloud Security
Alliance identifies fourteen areas of concern).
• These dimensions have been aggregated into three general areas:
• Security and Privacy,
• Compliance, and
• Legal or Contractual Issues.

22
Security and Privacy
• Security and privacy
• In order to ensure that data is secure (that it cannot be accessed by unauthorized users
or simply lost) and that data privacy is maintained, cloud providers attend to the
following areas:
• Data protection
• To be considered protected, data from one customer must be properly segregated from that of another; it
must be stored securely when “at rest” and it must be able to move securely from one location to
another. Cloud providers have systems in place to prevent data leaks or access by third parties. Proper
separation of duties should ensure that auditing or monitoring cannot be defeated, even by privileged
users at the cloud provider

• Physical Control
• Physical control of the Private Cloud equipment is more secure than having the equipment off site and
under someone else’s control. Having the ability to visually inspect the data links and access ports is
required in order to ensure data links are not compromised.

• Identity management
• Every enterprise will have its own identity management system to control access to information and
computing resources. Cloud providers either integrate the customer’s identity management system into
their own infrastructure, using federation or SSO technology, or provide an identity management
solution of their own.

23
Security and Privacy
• Physical and personnel security
• Providers ensure that physical machines are adequately secure and that access to these machines as well
as all relevant customer data is not only restricted but that access is documented.

• Availability
• Cloud providers assure customers that they will have regular and predictable access to their data and
applications.

• Application security
• Cloud providers ensure that applications available as a service via the cloud are secure by implementing
testing and acceptance procedures for outsourced or packaged application code. It also requires that
application security measures (application-level firewalls) be in place in the production environment.

• Privacy
• Finally, providers ensure that all critical data (credit card numbers, for example) are masked and that
only authorized users have access to data in its entirety. Moreover, digital identities and credentials must
be protected as should any data that the provider collects or produces about customer activity in the
cloud.

• Legal issues
• In addition, providers and customers must consider legal issues, such as Contracts and E-Discovery, and
the related laws, which may vary by country

24
Security Characteristics

25
Security Risks
• Organizations with defined controls for externally sourced services or access to IT risk-assessment
capabilities should still apply these to aspects of cloud services where appropriate.

• But while many of the security risks of cloud overlap with those of outsourcing and offshoring, there
are also differences that organizations need to understand and manage.

“When adopting cloud services, there are four key considerations:


1. Where is my data?
2. How does it integrate?
3. What is my exit strategy?
4. What are the new security issues?”
--Tony Mather, CIO, Clear Channel International
26
Security Risks
• Processing sensitive or business-critical data outside the enterprise introduces a level of risk
because any outsourced service bypasses an organization's in-house security controls. With cloud,
however, it is possible to establish compatible controls if the provider offers a dedicated service.
An organisation should ascertain a provider’s position by asking for information about the control
and supervision of privileged administrators.
• Organizations using cloud services remain responsible for the security and integrity of their own
data, even when it is held by a service provider. Traditional service providers are subject to
external audits and security certifications. Cloud providers may not be prepared to undergo the
same level of scrutiny.
• When an organisation uses a cloud service, it may not know exactly where its data resides or
have any ability to influence changes to the location of data.
27
Security Risks
• Most providers store data in a shared environment. Although this may be segregated from
other customers’ data while it’s in that environment, it may be combined in backup and archive
copies. This could especially be the case in multi-tenanted environments.
• Companies should not assume service providers will be able to support electronic discovery,
or internal investigations of inappropriate or illegal activity. Cloud services are especially difficult
to investigate because logs and data for multiple customers may be either co-located or spread
across an ill-defined and changing set of hosts.
• Organisations need to evaluate the long-term viability of any cloud provider. They should
consider the consequences to service should the provider fail or be acquired, since there will be far
fewer readily identifiable assets that can easily be transferred in-house or to another provider.

28
Cloud Security Simplified

• As with all coherent security strategies, cloud security can seem dauntingly complex, involving many different

aspects that touch all parts of an organization.

• CIOs and their teams need to plot effective management strategies as well as understand the implications for

operations and technology.

• We outline the key considerations below:

• Management
• Operation
• Technology
29
Compliance
• Compliance
• Numerous regulations pertain to the storage and use of data, including the Payment Card Industry Data
Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), and the
Sarbanes-Oxley Act, among others. Many of these regulations require regular reporting and audit trails.
Cloud providers must enable their customers to comply appropriately with these regulations.

• Business continuity and data recovery


• Cloud providers have business continuity and data recovery plans in place to ensure that service can be
maintained in case of a disaster or an emergency and that any data loss will be recovered. These plans
are shared with and reviewed by their customers.

• Logs and audit trails


• In addition to producing logs and audit trails, cloud providers work with their customers to ensure that
these logs and audit trails are properly secured, maintained for as long as the customer requires, and are
accessible for the purposes of forensic investigation

• Unique compliance requirements


• In addition to the requirements to which customers are subject, the data centers maintained by cloud
providers may also be subject to compliance requirements. Using a cloud service provider (CSP) can
lead to additional security concerns around data jurisdiction since customer or tenant data may not
remain on the same system, or in the same data center or even within the same provider's cloud.

30
Compliance
• Legal and contractual issues
• Aside from the security and compliance issues enumerated above, cloud providers and
their customers will negotiate terms around liability (stipulating how incidents involving
data loss or compromise will be resolved, for example), intellectual property, and end-of-
service (when data and applications are ultimately returned to the customer).
• Public records
• Legal issues may also include records-keeping requirements in the public sector, where
many agencies are required by law to retain and make available electronic records in a
specific fashion. This may be determined by legislation, or law may require agencies to
conform to the rules and practices set by a records-keeping agency. Public agencies using
cloud computing and storage must take these concerns into account.

31
Cloud Security Simplified
• Management
1. Updated security policy
2. Cloud security strategy
3. Cloud security governance
4. Cloud security processes
5. Security roles & responsibilities
6. Cloud security guidelines
7. Cloud security assessment
8. Service integration
9. IT & procurement security requirements
10. Cloud security management
32
Cloud Security Simplified
• Operation
1. Awareness & training
2. Incident management
3. Configuration management
4. Contingency planning
5. Maintenance
6. Media protection
7. Environmental protection
8. System integrity
9. Information integrity
10. Personnel security
33
Cloud Security Simplified
• Technology
1. Access control
2. System protection
3. Identification
4. Authentication
5. Cloud security audits
6. Identity & key management
7. Physical security protection
8. Backup, recovery & archive
9. Core infrastructure protection
10. Network protection
34
Fault Tolerance in Cloud Computing

Fault tolerance in cloud computing means creating a blueprint for ongoing work
whenever some parts are down or unavailable. It helps enterprises evaluate their
infrastructure needs and requirements and provides services in case the respective
device becomes unavailable for some reason.

It does not mean that the alternative system can provide 100% of the entire
service. Still, the concept is to keep the system usable and, most importantly, at a
reasonable level of operation. This is important if enterprises are to continue
growing continuously and increase their productivity levels.
Main Concepts behind Fault Tolerance in Cloud Computing System

•Replication: Fault-tolerant systems work on running multiple replicas for each


service. Thus, if one part of the system goes wrong, other instances can be used to
keep it running instead. For example, take a database cluster that has 3 servers with the
same information on each. All the actions like data entry, update, and deletion are
written on each. Redundant servers will remain idle until a fault tolerance system
demands their availability.

•Redundancy: When a system part fails or goes down, it is important to have a
backup-type system. The server works with emergency databases that include many
redundant services. For example, a website program with MS SQL as its database may
fail midway due to some hardware fault. The redundancy mechanism then takes
advantage of a standby database while the original is offline.
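A minimal sketch of the failover idea follows, assuming a hypothetical primary endpoint and two replicas; connect() stands in for whatever real database driver would actually be used.

    import time

    # Hypothetical endpoints: one primary plus two redundant replicas holding the same data
    ENDPOINTS = ["db-primary.example.com", "db-replica-1.example.com", "db-replica-2.example.com"]

    def connect(host):
        """Placeholder for a real database driver's connect call."""
        raise ConnectionError(f"{host} unreachable")   # simulate a fault for illustration

    def query_with_failover(sql, retries_per_host=2):
        # Try each endpoint in turn; redundant servers sit idle until a fault demands them
        for host in ENDPOINTS:
            for attempt in range(retries_per_host):
                try:
                    conn = connect(host)
                    return conn.execute(sql)
                except ConnectionError:
                    time.sleep(0.5 * (attempt + 1))    # brief back-off before retrying
        raise RuntimeError("All replicas unavailable: degraded, but the caller is told why")

    # query_with_failover("SELECT 1")   # would walk the endpoint list until one answers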
Intrusion Detection Systems
• Intrusion Detection System (IDS)
• Passive
• Hardware/software based
• Uses attack signatures
• Configuration
• SPAN/Mirror Ports
• Generates alerts (email, pager)
• After the fact response
Intrusion Prevention Systems
• Intrusion Prevention System (IPS)
• Also called Network Defense Systems (NDS)
• Inline & active
• Hardware/software based
• Uses attack signatures
• Configuration
• Inline with failover features.
• Generates alerts (email, pager)
• Real time response
IDS vs. IPS
• IPS evolved from IDS
• Need to stop attacks in real time
• After the fact attacks have lesser value
• IDS is cheaper.
• Several Open Source IDS/IPS
• Software based
• IPS = EXPENSIVE
• Hardware based (ASIC & FPGA)
Detection Capabilities
• Signatures
• Based on current exploits (worm, viruses)
• Detect malware, spyware and other malicious programs.
• Bad traffic detection, traffic normalization
• Anomaly Detection
• Analyzes TCP/IP parameters
• Normalization
• Fragmentation/reassembly
• Header & checksum problems
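As a toy illustration of signature-based detection (not a real IDS engine), the sketch below scans packet payloads for known byte patterns, roughly the way rule-based tools match signatures against traffic; both signatures are invented for the example.

    # Hypothetical signature database: name -> byte pattern seen in a known exploit
    SIGNATURES = {
        "example-worm-probe": b"GET /default.ida?XXXXXXXX",
        "example-sql-injection": b"' OR '1'='1",
    }

    def inspect_payload(payload: bytes):
        """Return the names of any signatures found in a packet payload."""
        return [name for name, pattern in SIGNATURES.items() if pattern in payload]

    # A captured payload (in practice taken from a SPAN/mirror port or an inline tap)
    packet = b"GET /login.php?user=' OR '1'='1 HTTP/1.1\r\n"
    hits = inspect_payload(packet)
    if hits:
        print("ALERT:", ", ".join(hits))   # an IDS alerts; an inline IPS could also drop the packet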
Evasion Techniques
• Encryption
• IPSec, SSH, Blowfish, SSL, etc.
• Placement of IPS sensors is crucial
• Can lead to architectural problems
• False sense of security
• Encryption Key Exchange
• IPS sensors can “usually” detect/see encryption key exchanges
• IPS sensors can “usually” detect unknown protocols
Evasion Techniques (cont.)
• Packet Fragmentation
• Reassembly – 1.) out of order, 2.) storage of fragments (D.o.S)
• Overlapping – different size packets arrive out of order and in overlapping positions.
• Newly arrived packets can overwrite older data.
Evasion Techniques (cont.)
• Zero day exploits (XSS, SQL Injection)
• Not caught by signatures
• Not detected by normalization triggers
• Specific to custom applications/DB’s.
• Social engineering
• Verbal communication
• Malicious access via legitimate credentials
• Poor configuration management
• Misconfigurations allow simple access that goes undetected.
• Increases attack vectors
Vendors
• Open Source
• SNORT (IDS/IPS) – my favorite
• Prelude (IDS)
• HoneyNet (Honey Pot/IDS)
• Commercial
• TippingPoint
• Internet Security Systems
• Juniper
• RadWare
• Mirage Networks
Tools of the Trade
• Fuzzers – SPIKE, WebScarab, ADMmutate, ISIC, Burp Suite
• Scanners - Nessus, NMAP, Nikto, Whisker
• Fragmentation – ADMmutate, Fragroute, Fragrouter, ettercap, dSniff
• Sniffers – ethereal, dSniff, ettercap, TCPDump
• Web Sites
• www.thc.org
• packetstormsecurity.nl
• www.packetfactory.net
Future of IDS/IPS
• Many security appliances → ONE
• IDS/IPS, SPAM, AV, Content Filtering
• IDS will continue to lose market share
• IPS, including malware, spyware, av are gaining market share
• Security awareness is increasing
• Attacks are getting sophisticated
• Worms, XSS, SQL Injection, etc.
Cloud-based visitor logbooks (Access Logging)

While nearly all the benefits explained in the previous sections can be gained by having either a paper or a digital, cloud-
based logbook, digital logbooks hold a clear advantage over paper. They make reaping the benefits described so much easier
and more efficient. The primary advantages of paper logbooks have been cost and ease. All you need are a few pieces of
paper in a three-ring binder and a pen. Guests walk in, write down their information, and move on. But that is really where
the advantages end. When it comes to security, paper logbooks cannot guard the data privacy of your guests the way a cloud-
based logbook can. Any guest or employee could stop by and see names and contact information listed on the paper log. And
that is assuming that your guests take the time to fill in all the required contact information. A digital system can require
guests to include phone numbers and email address information, something that is much easier to skip when writing on
paper.
Cloud-based logbooks automatically store important information about guest arrivals and departures. If the need arises to
search the logs for a particular period or name — let’s say a theft occurred between 10 a.m. and 2 p.m. last Tuesday, or a
regular vendor is suspected of a crime — an electronic log can retrieve the data far quicker than an employee can go to a file
cabinet and riffle through pages. With paper logs, a name is especially hard to retrieve. Unless the information has been
retyped or scanned into a database of some sort, finding a particular name over the course of several weeks or months
requires an employee to spend valuable time poring over the logs looking for the name. The same issue holds for data
analytics. If the information is all contained in paper logs, the ability to analyze the data is greatly hampered. It requires
scanning or retyping the information that a digital log would already have neatly parsed. And lastly, in the case of an
emergency, nothing can beat a digital log, especially one that is stored in the cloud. If a facility is on fire or is otherwise too
dangerous for people to be in, having access to the log of visitors from offsite is especially important. While a paper log may
be destroyed or inaccessible in such a situation, a digital logbook provides this useful information through an internet
connection.
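As a small sketch of why digital logs are easier to query, assume visitor records are stored as simple dictionaries with ISO timestamps (the data below is invented for the example).

    from datetime import datetime

    # Hypothetical cloud-stored visitor records
    visits = [
        {"name": "A. Vendor", "arrived": "2024-03-12T09:45", "departed": "2024-03-12T14:10"},
        {"name": "B. Guest",  "arrived": "2024-03-12T11:05", "departed": "2024-03-12T11:50"},
    ]

    def visitors_between(records, start, end):
        """Return everyone present at any point in the [start, end] window."""
        s, e = datetime.fromisoformat(start), datetime.fromisoformat(end)
        return [r["name"] for r in records
                if datetime.fromisoformat(r["arrived"]) <= e
                and datetime.fromisoformat(r["departed"]) >= s]

    # Who was on site between 10 a.m. and 2 p.m. last Tuesday?
    print(visitors_between(visits, "2024-03-12T10:00", "2024-03-12T14:00"))

The equivalent search in a paper binder means paging through every sheet by hand.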
Opportunities Working In A Data Center.
Though data centers may house major technology, the roles within them may require skills beyond technology. The
roles and responsibilities of data center infrastructure staff can be quite broad, ranging from design and construction
to equipment installation, operation and maintenance, network and systems configuration and testing, and
mechanical and electrical equipment. To meet this demand, the industry has taken a different approach to help widen
the pool of talent by tapping other industries and providing training and skills transfer. So what are some of the
roles and areas in the data center?
1. IT & Telecommunication management
2. Project management
3. Network Engineering
4. Application Management
5. Security
6. Cloud Computing
7. Facilities management
8. Real Estate Management
9. Customer Experience
10. Sales and Marketing, etc.
Security Awareness Training for Staff in a Data Center
Malicious entities often employ phishing attacks and business email compromise attacks (BEC) to infiltrate an organization.
Such attacks intend to trick employees into performing an action or series of actions to give hackers unauthorized access to
your data center systems. Hackers also employ social phishing by using in-person and voice communication techniques
to gain unauthorized access. The key to preventing employees from being tricked and minimizing phishing attacks is to train,
train, and train them some more by providing security awareness training.
Osterman Research conducted an in-depth survey of organizations during May and June 2019 and found security awareness
programs that don’t continuously challenge employees have little to no effect. This finding is not surprising because past
studies have concluded that effective training works best through repetitive tasks that challenge a person. Security awareness
training is no different.
To effectively train your employees on security awareness, skip the long lectures and tiresome reading materials.
Instead, provide Continuous Awareness Bites and analytics. This approach offers these advantages:
•It incorporates regular security awareness training and phishing simulations customized according to each person’s language,
role, and experience.
•The short bites of information engage employees in real-time right in their workflow.
•It collects analytics so you can adjust, train, improve, and gauge your overall employee awareness training program.
•The analytics collected from the bite-size training modules enable you to evaluate how security awareness evolves over time
for each employee, team, department, and company.
•The collected data also determines which employees are struggling to understand the information and require additional
resources to learn it.
Reasons why security awareness is important
1.Protection against phishing: Educating staff on recognizing and avoiding phishing attempts is crucial, as these
attacks are often the entry point for cybercriminals.
2.Mitigating insider threats: Employees who are aware of the risks they pose are less likely to become unwitting
accomplices or threats themselves. Employees can also spot suspicious insider activity and report it.
3.Regulatory compliance: Compliance with regulations like GDPR (General Data Protection Regulation), HIPAA
(Health Insurance Portability and Accountability Act), or CCPA (California Consumer Privacy Act) is imperative, and
staff awareness training is a key component of meeting these requirements to avoid financial and legal ramifications
for your organization.
4.Data security: Staff members who understand the value of data are more likely to take security measures
seriously, thus safeguarding your organization’s assets.
5.Reducing human errors: Awareness training can significantly decrease human errors, which are a leading cause
of data breaches. Training, simulations, and games train new and safer behaviors in employees, reducing the
likelihood of human errors.
6.Creating a security culture: An organization with a strong security culture is better equipped to prevent, detect,
and respond to cyber threats.
What threats and topics does security awareness need to cover?
Effective security awareness training should cover a wide range of topics, including:
•Security induction: Start new employees off with your company security baseline, protocols, and non-negotiables.
•Password management: Teach employees how to create and manage secure passwords.
•Phishing awareness: Recognizing phishing emails, texts, and phone calls, as well as how to report them.
•Social engineering: Understanding manipulative tactics used by cybercriminals so you can spot attacks before they even
get a chance to strike.
•Remote and mobile working: This may not apply to all organizations, but remote and mobile working security practices
should be included for employees who work in that capacity.
•Safe internet browsing/social media: Educating about malicious websites, downloads, and social media use so employees
and organizations can manage their digital footprint and safeguard their data.
•Physical security: Ensuring the safety of company premises from threats like tailgating or unlocked desktops.
•Supply chain security: For large organizations that operate in a complex network of vendors or third-party suppliers, your
supply chain must have the same level of security and awareness as your primary organization. A vulnerability in your
supply chain can compromise your organization.
•Data protection: Handling sensitive data with care including classification, regulations, and compliance protocols.
•Incident reporting: Encouraging staff to report suspicious activities and how to do so effectively.
•Mobile device security: Safeguarding data on personal and company-issued devices.
Change Management
Any time an engineer, technician, or data center operations staff member needs to make a change to physical or
logical pieces of the data center, they must follow a five-step process. Even things as benign as changing a system
clock must go through the change management process.

Schedule & plan – multiple processes should not take place at the same time, because if something goes wrong, it
will be very difficult to trace back to a single cause. It could be that vCloud update, or it could be the new switch
that was being installed in the data center (for example).

At this stage, an SMOP is also created. SMOP is a tongue-in-cheek programming term meaning Simple Matter of
Programming (it usually isn’t so simple), used to suggest additional features or code edits. We use it more as a
Simple Method of Procedure, so to speak. It spells out the complete plan for the change. That means exactly what
will happen, step by step, and what will happen if something goes wrong, including backup plans or ways to back
out of the process and revert to the original state.
Once a SMOP has been created for a given process it is often reused or copied and modified. They can therefore
become a roadmap or document of more-or-less standard procedure.
Data incident response process
Every data incident is unique, and the goal of the data incident response process is to protect customer data, restore normal
service as quickly as possible, and meet both regulatory and contractual compliance requirements. The following table
describes the main steps in the Google incident response program.
Definition
• Virtualization is the ability to run multiple operating
systems on a single physical system and share the
underlying hardware resources*
• It is the process by which one computer hosts the
appearance of many computers.
• Virtualization is used to improve IT throughput and
costs by using physical resources as a pool from which
virtual resources can be allocated.

*VMWare white paper, Virtualization Overview


Virtualization Architecture
• A Virtual machine (VM) is an isolated runtime
environment (guest OS and applications)
• Multiple virtual systems (VMs) can run on a single
physical system
Hypervisor
•A hypervisor, a.k.a. a virtual machine
manager/monitor (VMM), or virtualization manager,
is a program that allows multiple operating systems to
share a single hardware host.
• Each guest operating system appears to have the host's
processor, memory, and other resources all to itself.
However, the hypervisor is actually controlling the
host processor and resources, allocating what is
needed to each operating system in turn and making
sure that the guest operating systems (called virtual
machines) cannot disrupt each other.
Benefits of Virtualization
• Sharing of resources helps cost reduction
• Isolation: Virtual machines are isolated from each
other as if they are physically separated
• Encapsulation: Virtual machines encapsulate a
complete computing environment
• Hardware Independence: Virtual machines run
independently of underlying hardware
• Portability: Virtual machines can be migrated between
different hosts.
Virtualization in Cloud Computing
Cloud computing takes virtualization one step further:
• You don’t need to own the hardware
• Resources are rented as needed from a cloud
• Various providers allow creating virtual servers:
• Choose the OS and software each instance will have
• The chosen OS will run on a large server farm
• Can instantiate more virtual servers or shut down existing
ones within minutes
• You get billed only for what you used
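Continuing the earlier IaaS provisioning sketch (the instance ID and region below are hypothetical), shutting a rented server down is just another API call, which is what makes pay-for-what-you-use billing workable.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")    # region assumed for the example

    # Shut down a virtual server that is no longer needed; billing for it stops as well
    ec2.terminate_instances(InstanceIds=["i-0abc1234def567890"])   # hypothetical instance ID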
Virtualization Security Challenges
The trusted computing base (TCB) of a virtual machine is
too large.
• TCB: A small amount of software and hardware that
security depends on and that we distinguish from a much
larger amount that can misbehave without affecting
security*
• Smaller TCB → more security

*Lampson et al., “Authentication in distributed systems: Theory and practice,” ACM TCS 1992
Xen Virtualization Architecture and
the Threat Model
• Management VM – Dom0
• Guest VM – DomU
• Dom0 may be malicious
– Vulnerabilities
– Device drivers
– Careless/malicious
administration
• Dom0 is in the TCB of DomU because it can access the
memory of DomU, which may cause information
leakage/modification
Virtualization Security Requirements

• Scenario: A client uses the service of a cloud computing


company to build a remote VM
• A secure network interface

• A secure secondary storage

• A secure run-time environment


• Build, save, restore, destroy
Virtualization Security Requirements

• A secure run-time environment is the most fundamental

• The first two problems already have solutions:


• Network interface: Transport layer security (TLS)
• Secondary storage: Network file system (NFS)

• The security mechanism in the first two rely on a secure run-


time environment
• All the cryptographic algorithms and security
protocols reside in the run-time environment
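For the secure network interface requirement, a minimal sketch of wrapping a connection in TLS with Python's standard ssl module follows; the host name is a placeholder, not a real service.

    import socket
    import ssl

    context = ssl.create_default_context()   # verifies the server certificate by default

    # Open a TCP connection and wrap it in TLS so traffic to the remote VM is encrypted
    with socket.create_connection(("vm.example.com", 443)) as raw_sock:     # placeholder host
        with context.wrap_socket(raw_sock, server_hostname="vm.example.com") as tls_sock:
            print("Negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.3'
            tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: vm.example.com\r\n\r\n")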
Smaller TCB Solution

[Diagram: the smaller TCB compared with the actual TCB of the virtual machine]

*Secure Virtual Machine Execution under an Untrusted Management OS. C. Li, A. Raghunathan, N. K. Jha. IEEE CLOUD, 2010.
Domain building
• Building process
Domain save/restore
Hypervisor Vulnerabilities
Malicious software can run on the same server:
• Attack hypervisor
• Access/Obstruct other VMs

[Diagram: Guest VM1 and Guest VM2, each running apps on a guest OS, on top of a hypervisor running on the physical hardware]
219
NoHype*
• NoHype removes the hypervisor
• There’s nothing to attack
• Complete systems solution
• Still retains the needs of a virtualized cloud infrastructure

[Diagram: Guest VM1 and Guest VM2, each running apps on a guest OS, directly on the physical hardware with no hypervisor]
220

*NoHype: Virtualized Cloud Infrastructure without the Virtualization. E. Keller, J. Szefer, J. Rexford, R. Lee. ISCA 2010.
Roles of the Hypervisor
• Isolating/emulating resources
• CPU: scheduling virtual machines → push to HW / pre-allocation
• Memory: managing memory → pre-allocation
• I/O: emulating I/O devices → remove
• Networking → remove
• Managing virtual machines → push to the side
Removing the Hypervisor
• Scheduling virtual machines
• One VM per core
• Managing memory
• Pre-allocate memory with processor support
• Emulating I/O devices
• Direct access to virtualized devices
• Networking
• Utilize hardware Ethernet switches
• Managing virtual machines
• Decouple the management from operation
Why isolation and segmentation matter

Isolation and segmentation are essential for container security, as they


prevent unauthorized access, data leakage, and cross-contamination
between containers and VMs. Isolation means that each container has its
dedicated resources and namespace and cannot interfere with or access
other containers or the host VM. Segmentation means that each container
has a defined network scope and policy and cannot connect to or receive
traffic from unwanted network segments. These measures help you
protect your containers from external and internal threats and comply with
regulatory and organizational requirements.
How to isolate containers from VMs

One way to isolate containers from VMs is to use a hypervisor that


supports nested virtualization, such as VMware ESXi, Hyper-V, or KVM.
This allows you to run containers inside a guest VM that is itself running
on a host VM, creating an additional layer of isolation. However, this also
adds some overhead and complexity, and may not be compatible with some
container platforms or orchestration tools. Another way to isolate
containers from VMs is to use lightweight virtualization technology, such
as Kata Containers, Visor, or Firecracker. These tools create a minimal VM
for each container, providing strong isolation without sacrificing
performance or compatibility.
How to isolate containers from each other

One way to isolate containers from each other is to use a container


runtime that enforces strict security policies, such as SELinux, AppArmor,
or Seccomp. These tools restrict the system calls, capabilities, and
resources that each container can use, preventing them from accessing or
modifying other containers or the host VM. Another way to isolate
containers from each other is to use a container orchestration tool that
supports pod security policies, such as Kubernetes, Docker Swarm, or
Mesos. These tools allow you to define and apply rules that control the
security context and privileges of each pod, which is a group of co-located
and co-scheduled containers.
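As one simplified illustration (not a complete hardening recipe), the Docker SDK for Python can start a container with reduced privileges; the image name is a placeholder and the exact options would depend on the runtime and the policies in force.

    import docker

    client = docker.from_env()

    # Run a container with capabilities dropped and a read-only filesystem,
    # so a compromise inside it has less reach over neighbours and the host.
    container = client.containers.run(
        "example/web-app:latest",             # placeholder image name
        detach=True,
        read_only=True,                       # immutable root filesystem
        cap_drop=["ALL"],                     # drop Linux capabilities
        security_opt=["no-new-privileges"],   # block privilege escalation
        mem_limit="256m",                     # bound resource usage
    )
    print(container.short_id, container.status)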
How to segment container networks

One way to segment container networks is to use a network plugin


that supports network policies, such as Calico, Cilium, or Weave Net.
These tools allow you to create and apply rules that define which
network segments each container can access or communicate with,
based on labels, ports, protocols, or IP addresses. Another way to
segment container networks is to use a service mesh that supports
network encryption, such as Istio, Linkerd, or Consul. These tools
allow you to establish and manage secure connections between
containers, using mutual TLS, certificates, or keys.
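A sketch of the segmentation idea as a Kubernetes NetworkPolicy, written here as a Python dictionary that could be serialized and applied with kubectl; the namespace, labels, and port are hypothetical.

    import json

    # Allow pods labelled app=web to receive traffic only from pods labelled app=frontend on port 8080
    policy = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "web-allow-frontend", "namespace": "demo"},   # hypothetical names
        "spec": {
            "podSelector": {"matchLabels": {"app": "web"}},
            "policyTypes": ["Ingress"],
            "ingress": [{
                "from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}],
                "ports": [{"protocol": "TCP", "port": 8080}],
            }],
        },
    }
    print(json.dumps(policy, indent=2))   # kubectl apply -f accepts JSON as well as YAML

Whether such a policy is enforced depends on the network plugin (e.g. Calico or Cilium) installed in the cluster.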
How to monitor and audit container isolation and segmentation

One way to monitor and audit container isolation and segmentation is


to use a container security tool that provides visibility and alerts, such
as Aqua, Sysdig, or Twistlock. These tools allow you to scan and
inspect your containers and VMs for vulnerabilities, misconfigurations,
or anomalies, and notify you of any breaches or violations. Another
way to monitor and audit container isolation and segmentation is to use
a log management and analysis tool that collects and correlates data
from your containers and VMs, such as Splunk, ELK, or Fluentd. These
tools allow you to track and investigate the activity and behavior of
your containers and VMs, and generate reports and dashboards.
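As a small monitoring sketch, the Docker SDK can stream runtime events so that container starts and stops are recorded for later audit; which events matter, and where the records are shipped, is an assumption made for the example.

    import docker

    client = docker.from_env()

    # Stream daemon events and note container lifecycle changes
    for event in client.events(decode=True):
        if event.get("Type") == "container" and event.get("Action") in ("start", "die"):
            attrs = event.get("Actor", {}).get("Attributes", {})
            print(event["Action"], attrs.get("name"), attrs.get("image"))
            # In practice this record would be forwarded to a log platform such as an ELK stack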
How to improve container isolation and segmentation

One way to improve container isolation and segmentation is to


follow the principle of least privilege, which means that you
only grant the minimum permissions and resources that each
container needs to function. This reduces the attack surface and
the potential impact of a compromise. Another way to improve
container isolation and segmentation is to follow the principle of
defense in depth, which means that you layer multiple security
controls and mechanisms to protect your containers and VMs.
This increases the resilience and the detection capabilities of
your system.
Isolation of virtual machines
Network isolation

One way to isolate VMs is to use network segmentation, which divides the
network into smaller subnets that have different access rules and policies.
Network segmentation can be implemented using virtual switches, firewalls,
routers, and VLANs. Virtual switches are software-based switches that connect
VMs to the network and can enforce security policies and traffic filtering.
Firewalls are devices or software that monitor and control the network traffic
between different segments and VMs. Routers are devices or software that route
network packets between different segments and VMs. VLANs are logical
groups of network devices that share the same broadcast domain and can be
isolated from other VLANs.
Storage isolation

Another way to isolate VMs is to use storage segmentation, which separates


the storage resources that are used by different VMs. Storage segmentation
can be implemented using storage pools, encryption, and access control.
Storage pools are logical units of storage that are composed of physical disks
or partitions and can be assigned to different VMs. Encryption is a process of
transforming data into an unreadable form that can only be decrypted by
authorized parties. Encryption can be applied to the storage pools or the
individual files and disks that are used by the VMs. Access control is a
mechanism of granting or denying access to storage resources based on the
identity and role of the users and VMs.
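A minimal sketch of encrypting data at rest with the cryptography package's Fernet recipe follows; in a real deployment the key would be held in a key-management service rather than generated alongside the data.

    from cryptography.fernet import Fernet

    # Generate (or, in practice, fetch from a KMS) a symmetric key for this tenant's storage pool
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b"tenant-42: customer ledger entry"     # invented sample data
    ciphertext = cipher.encrypt(record)              # what actually lands on the shared disk
    print(ciphertext[:16], b"...")

    # Only a party holding the key can recover the plaintext
    assert cipher.decrypt(ciphertext) == record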
Hypervisor isolation

A third way to isolate VMs is to use hypervisor segmentation, which separates the hypervisor layer from the VMs and the host operating system. Hypervisor segmentation can be implemented using type 1 (bare-metal) hypervisors, security patches, and hardening. Type 1 or bare-metal hypervisors run directly on the hardware and do not depend on a host operating system. They are more secure and efficient than type 2 or hosted hypervisors, which run on top of a host operating system and share its vulnerabilities and overhead. Security patches are updates that fix known flaws and bugs in the hypervisor software and prevent potential exploits and attacks. Hardening is the process of reducing the attack surface and strengthening the security posture of the hypervisor by disabling unnecessary services, features, and ports, and applying strict configurations and policies.
What is an intrusion detection system (IDS)?

An intrusion detection system (IDS) is a system that monitors network traffic for suspicious activity and alerts when such activity is discovered.

While anomaly detection and reporting are the primary functions of an IDS, some intrusion detection systems can take action when malicious activity or anomalous traffic is detected, including blocking traffic sent from suspicious Internet Protocol (IP) addresses.

An IDS can be contrasted with an intrusion prevention system (IPS), which, like an IDS, monitors network packets for potentially damaging traffic, but has the primary goal of preventing threats once detected, as opposed to primarily detecting and recording them.
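
A toy illustration of the detection idea: scan an event stream for repeated failed logins from one source address and raise an alert when a threshold is crossed. The event format and threshold are assumptions; real IDS products inspect packets and signatures, and an IPS would additionally block the offending address.

# Minimal IDS-style brute-force detector over (timestamp, source IP, outcome) events.
from collections import defaultdict

FAILED_LOGIN_THRESHOLD = 3

def detect_bruteforce(events):
    failures = defaultdict(int)
    alerts = []
    for ts, src_ip, outcome in events:
        if outcome == "FAILED_LOGIN":
            failures[src_ip] += 1
            if failures[src_ip] == FAILED_LOGIN_THRESHOLD:
                alerts.append(f"ALERT {ts}: possible brute force from {src_ip}")
        elif outcome == "SUCCESS":
            failures[src_ip] = 0            # reset the counter on a successful login
    return alerts

sample = [
    ("10:00:01", "203.0.113.5", "FAILED_LOGIN"),
    ("10:00:03", "203.0.113.5", "FAILED_LOGIN"),
    ("10:00:05", "203.0.113.5", "FAILED_LOGIN"),
]
print(detect_bruteforce(sample))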
What is an anti-malware and Intrusion Detection System for OS
Instances?

An intrusion detection system (IDS) is a device or software application that monitors a network for malicious activity or policy violations. Any malicious activity or violation is typically reported or collected centrally using a security information and event management system. Some IDSs can respond to a detected intrusion upon discovery; these are classified as intrusion prevention systems (IPS).
Secure Boot for integrity validation
This policy setting allows you to configure whether Secure Boot will be allowed as the platform integrity provider for
BitLocker operating system drives.

Secure Boot ensures that the PC's pre-boot environment only loads firmware that is digitally signed by authorized
software publishers. Secure Boot also provides more flexibility for managing pre-boot configuration than legacy
BitLocker integrity checks.

If you enable or do not configure this policy setting, BitLocker will use Secure Boot for platform integrity if the
platform is capable of Secure Boot-based integrity validation.
If you disable this policy setting, BitLocker will use legacy platform integrity validation, even on systems capable of
Secure Boot-based integrity validation.

When this policy is enabled and the hardware can use Secure Boot for BitLocker scenarios, the "Use enhanced Boot
Configuration Data validation profile" group policy setting is ignored and Secure Boot verifies BCD settings according
to the Secure Boot policy setting, which is configured separately from BitLocker.
SECURE BOOT PATTERN

•Intent: How to ensure that violations of integrity properties of the software stack are detected.

•Example: How can the user be sure that the system software is in the intended operational state?

•Context: On conventional platforms, software can be manipulated or exchanged.
SECURE BOOT PATTERN

Problem:
• Before applications can be used on a computer system,
the system has to be bootstrapped.

• The bootloader loads the operating system kernel, and


the operating system kernel loads system services,
device drivers, and other applications.

• At any stage of the bootstrap process, software


components could have been exchanged or modified by
another user or by malicious software that has been
executed before.
SECURE BOOT PATTERN

• The following forces have to be resolved:
• You want to ensure the integrity of the software loaded on the system.
• You want the computer system to always boot in a well-defined secure state.
• You want to allow modifications of the operating system or application binaries.
SECURE BOOT PATTERN

Solution:
• Every stage is responsible for checking the integrity of the next stage.
• Integrity checking can be performed in different ways:
• comparing hash values
• verifying digital signatures.
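
The hash-comparison variant of this solution can be sketched as follows: reference measurements are recorded at provisioning time, and each stage is measured and compared against them before control is transferred. The stage file names and contents are placeholders; real firmware usually verifies digital signatures from authorized publishers rather than bare hashes.

# Toy model of the Secure Boot pattern: measure each stage, compare to a reference,
# and halt the boot chain on any mismatch.
import hashlib

def measure(path):
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

# Provisioning step (platform owner): create toy stages and record reference hashes.
STAGES = ["bootloader.bin", "kernel.img"]
for name, content in [("bootloader.bin", b"toy bootloader"), ("kernel.img", b"toy kernel")]:
    with open(name, "wb") as fh:
        fh.write(content)
EXPECTED = {stage: measure(stage) for stage in STAGES}

def verified_boot(stages):
    for stage in stages:
        if measure(stage) != EXPECTED[stage]:
            raise SystemExit(f"Integrity violation in {stage}: halting boot")
        print(f"{stage}: integrity OK, transferring control to next stage")

verified_boot(STAGES)                    # boots cleanly
with open("kernel.img", "ab") as fh:     # simulate tampering with the kernel
    fh.write(b"malicious patch")
verified_boot(STAGES)                    # now halts with an integrity violation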
SECURE BOOT PATTERN

•Structure:

Figure 1. Elements of the Secure Boot pattern.
SECURE BOOT PATTERN
Known Uses:
• AEGIS
• The Cell Broadband Engine processor
Consequences:
Example Resolved:
Related Patterns:
• Boot Loader
• Authenticator
SECURE STORAGE PATTERN

•Intent: Secure storage provides confidentiality and integrity for stored data, and additionally enforces access restrictions on entities that want to access data.

•Example: Consider the problem of storing passwords (e.g., for web services) securely on a computer.

•Context: You need to provide storage that protects the confidentiality and integrity of stored data.
SECURE STORAGE PATTERN

Problem:
• Cryptographic techniques exist to protect the confidentiality and integrity of data.
• The following forces have to be resolved:
• confidentiality and integrity of data
• secret cryptographic keys
• modifications of the operating system or application binaries
SECURE STORAGE PATTERN

Solution:
• Root Key
• Root Key and Root Key Control are both protected by
trusted hardware
SECURE STORAGE PATTERN
•Structure:

Figure 2. Elements of the Secure Storage pattern.


SECURE STORAGE PATTERN

Known Uses:
• The Cell processor features storage that can only be
accessed when the processor is in a “secure state”.
Example Resolved:
Consequences:
• Only software where the integrity verification succeeded
can access the protected data.
• Data can be stored on a system, such that it can be
accessed only when the authorized operating system and
software has been started.
SECURE STORAGE PATTERN

Related Patterns:

• Secure Storage requires Secure Boot to protect the integrity verification data
• Secure Storage also requires Controlled Virtual Address Space
• Information Obscurity
Access Control
• Access control is a system which enables an authority to control access to areas and resources in a given
physical facility or computer-based information system.
• In computer security, access control includes authentication, authorization and audit. It also includes
measures such as physical devices, including biometric scans and metal locks, hidden paths, digital
signatures, encryption, social barriers, and monitoring by humans and automated systems.
• In any access control model, the entities that can perform actions in the system are called subjects, and the
entities representing resources to which access may need to be controlled are called objects (see also Access
Control Matrix). Subjects and objects should both be considered as software entities and as human users
Access Control
• Access control models used by current systems tend to fall into one of two classes: those based on capabilities
and those based on access control lists (ACLs).
• In a capability-based model, holding an unforgeable reference or capability to an object provides access to
the object
• Access is conveyed to another party by transmitting such a capability over a secure channel.
• In an ACL-based model, a subject's access to an object depends on whether its identity is on a list associated
with the object
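
The two models can be contrasted with a small sketch: in the ACL model the object carries the list of permitted subjects, while in the capability model the subject holds references to the objects it may use. The subjects, objects, and permissions below are illustrative only.

# ACL model: the object carries a list of (subject, permission) entries.
acl = {"payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}}}

def acl_allows(subject: str, obj: str, perm: str) -> bool:
    return perm in acl.get(obj, {}).get(subject, set())

# Capability model: the subject holds unforgeable references granting access.
capabilities = {"alice": {("payroll.xlsx", "read"), ("payroll.xlsx", "write")}}

def capability_allows(subject: str, obj: str, perm: str) -> bool:
    return (obj, perm) in capabilities.get(subject, set())

print(acl_allows("bob", "payroll.xlsx", "write"))           # False
print(capability_allows("alice", "payroll.xlsx", "write"))  # True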
Identification, Authentication, Authorization
• Access control systems provide the essential services of identification and authentication (I&A),
authorization, and accountability where:
• identification and authentication determine who can log on to a system, and the association of users with the
software subjects that they are able to control as a result of logging in;
• authorization determines what a subject can do;
• accountability identifies what a subject (or all subjects associated with a user) did.
Identification, Authentication, Authorization
• Identification and authentication (I&A): Identification and authentication (I&A) is the process of verifying
that an identity is bound to the entity that makes an assertion or claim of identity. The I&A process assumes
that there was an initial validation of the identity, commonly called identity proofing. Various methods of
identity proofing are available ranging from in person validation using government issued identification to
anonymous methods that allow the claimant to remain anonymous, but known to the system if they return.
The method used for identity proofing and validation should provide an assurance level commensurate with
the intended use of the identity within the system. Subsequently, the entity asserts an identity together with an
authenticator as a means for validation. The only requirement for the identifier is that it must be unique
within its security domain.
Identification, Authentication, Authorization
• Authenticators are commonly based on at least one of the following four factors:
• Something you know, such as a password or a personal identification number (PIN).
This assumes that only the owner of the account knows the password or PIN needed
to access the account.
• Something you have, such as a smart card or security token. This assumes that only
the owner of the account has the necessary smart card or token needed to unlock the
account.
• Something you are, such as fingerprint, voice, retina, or iris characteristics.
• Where you are, for example inside or outside a company firewall, or proximity of
login location to a personal GPS device.
Identification, Authentication, Authorization
• Authorization: Authorization applies to subjects. Authorization determines what a
subject can do on the system.
• Most modern operating systems define sets of permissions that are variations or
extensions of three basic types of access:
• Read (R): The subject can
• Read file contents, List directory contents
• Write (W): The subject can change the contents of a file or directory with the
following tasks:
• Add, Create, Delete, Rename
• Execute (X): If the file is a program, the subject can cause the program to be run.
(In Unix systems, the 'execute' permission doubles as a 'traverse directory'
permission when granted for a directory.)
Identification, Authentication, Authorization
• These rights and permissions are implemented differently in systems based on discretionary
access control (DAC) and mandatory access control (MAC).
• Accountability: Accountability uses such system components as audit trails (records) and logs
to associate a subject with its actions. The information recorded should be sufficient to map
the subject to a controlling user.
• Audit trails and logs are important for Detecting security violations, Re-creating security
incidents
• If no one is regularly reviewing your logs and they are not maintained in a secure and
consistent manner, they may not be admissible as evidence.
• Many systems can generate automated reports based on certain predefined criteria or
thresholds, known as clipping levels. For example, a clipping level may be set to generate a
report for more than three failed logon attempts in a given period, or for any attempt to use a
disabled user account. These reports help a system administrator or security administrator to
more easily identify possible break-in attempts.
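
A minimal sketch of such a clipping-level report, assuming a simple (user, event) audit-trail format and an invented list of disabled accounts:

# Build report lines for users exceeding the clipping level and for disabled accounts.
from collections import Counter

DISABLED_ACCOUNTS = {"svc_legacy"}
CLIPPING_LEVEL = 3          # more than three failed logons triggers a report line

def clipping_report(audit_trail):
    events = list(audit_trail)
    failures = Counter(user for user, event in events if event == "LOGON_FAIL")
    lines = [f"{user}: {count} failed logon attempts"
             for user, count in failures.items() if count > CLIPPING_LEVEL]
    lines += [f"{user}: attempt to use a disabled account"
              for user, event in events if user in DISABLED_ACCOUNTS]
    return lines

trail = [("alice", "LOGON_FAIL")] * 4 + [("svc_legacy", "LOGON_FAIL")]
print("\n".join(clipping_report(trail)))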
Single Sign-On
• Single sign-on (SSO) is a property of access control of multiple, related, but
independent software systems. With this property a user logs in once and gains
access to all systems without being prompted to log in again at each of them. Single
sign-off is the reverse property whereby a single action of signing out terminates
access to multiple software systems.
• As different applications and resources support different authentication
mechanisms, single sign-on has to internally translate to and store different
credentials compared to what is used for initial authentication.
Single Sign-on Kerberos
• Kerberos is a computer network authentication protocol, which allows nodes
communicating over a non-secure network to prove their identity to one another in
a secure manner. It is also a suite of free software published by MIT that
implements this protocol. Its designers aimed primarily at a client–server model,
and it provides mutual authentication — both the user and the server verify each
other's identity. Kerberos protocol messages are protected against eavesdropping
and replay attacks.
• Kerberos builds on symmetric key cryptography and requires a trusted third party,
and optionally may use public-key cryptography by utilizing asymmetric key
cryptography during certain phases of authentication
Kerberos
• Kerberos uses as its basis the symmetric Needham-Schroeder protocol. It makes use
of a trusted third party, termed a key distribution center (KDC), which consists of
two logically separate parts: an Authentication Server (AS) and a Ticket Granting
Server (TGS). Kerberos works on the basis of "tickets" which serve to prove the
identity of users.
• The KDC maintains a database of secret keys; each entity on the network —
whether a client or a server — shares a secret key known only to itself and to the
KDC. Knowledge of this key serves to prove an entity's identity. For
communication between two entities, the KDC generates a session key which they
can use to secure their interactions.
• The security of the protocol relies heavily on participants maintaining loosely
synchronized time and on short-lived assertions of authenticity called Kerberos
tickets.
Kerberos
• The client authenticates itself to the Authentication Server and receives a ticket.
(All tickets are time-stamped.)
• It then contacts the Ticket Granting Server, and using the ticket it demonstrates its
identity and asks for a service.
• If the client is eligible for the service, then the Ticket Granting Server sends
another ticket to the client.
• The client then contacts the Service Server, and using this ticket it proves that it has
been approved to receive the service.
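
The exchanges above can be mimicked with a highly simplified toy that uses Fernet symmetric encryption (from the "cryptography" package) as a stand-in for Kerberos cryptography. Real Kerberos also uses timestamps, authenticators, nonces, realms, and ticket lifetimes; the principals and keys here are invented for illustration only.

# Toy Kerberos-style flow: AS issues a TGT, TGS issues a service ticket.
import json
from cryptography.fernet import Fernet

# Long-term secret keys shared with the KDC (normally derived from passwords).
k_client = Fernet.generate_key()                       # alice <-> KDC
k_tgs = Fernet.generate_key()                          # Ticket Granting Server key
service_keys = {"fileserver": Fernet.generate_key()}   # each service <-> KDC

def as_exchange(user):
    """Authentication Server: return (session key for the TGS, ticket-granting ticket)."""
    sk_tgs = Fernet.generate_key()
    tgt = Fernet(k_tgs).encrypt(json.dumps({"user": user, "sk": sk_tgs.decode()}).encode())
    return Fernet(k_client).encrypt(sk_tgs), tgt       # only the user can read the session key

def tgs_exchange(tgt, service):
    """Ticket Granting Server: validate the TGT and issue a service ticket."""
    info = json.loads(Fernet(k_tgs).decrypt(tgt))
    sk_svc = Fernet.generate_key()
    ticket = Fernet(service_keys[service]).encrypt(
        json.dumps({"user": info["user"], "sk": sk_svc.decode()}).encode())
    return Fernet(info["sk"].encode()).encrypt(sk_svc), ticket

# Client side: obtain a TGT, then a ticket for the file server.
enc_sk_tgs, tgt = as_exchange("alice")
sk_tgs = Fernet(k_client).decrypt(enc_sk_tgs)          # decryptable only with alice's key
enc_sk_svc, service_ticket = tgs_exchange(tgt, "fileserver")
sk_svc = Fernet(sk_tgs).decrypt(enc_sk_svc)            # session key for talking to the service

# Service side: the file server opens the ticket with its own long-term key.
print(json.loads(Fernet(service_keys["fileserver"]).decrypt(service_ticket))["user"])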
Kerberos: Drawbacks
• Single point of failure: It requires continuous availability of a central server. When the
Kerberos server is down, no one can log in. This can be mitigated by using multiple Kerberos
servers and fallback authentication mechanisms.
• Kerberos requires the clocks of the involved hosts to be synchronized. The tickets have a time
availability period and if the host clock is not synchronized with the Kerberos server clock, the
authentication will fail. The default configuration requires that clock times are no more than
five minutes apart. In practice Network Time Protocol daemons are usually used to keep the
host clocks synchronized.
• The administration protocol is not standardized and differs between server implementations.
• Since all authentication is controlled by a centralized KDC, compromise of this authentication
infrastructure will allow an attacker to impersonate any user.
Access Control Techniques
• Role based access control
• Constrained user interfaces
• Access control Matrix
• Content dependent access control
• Context dependent access control
Access Control
• Access control techniques: Access control techniques are sometimes categorized as either
discretionary or non-discretionary. The three most widely recognized models are Discretionary
Access Control (DAC), Mandatory Access Control (MAC), and Role Based Access Control
(RBAC). MAC and RBAC are both non-discretionary.
• Attribute-based Access Control: In attribute-based access control, access is granted not based
on the rights of the subject associated with a user after authentication, but based on attributes
of the user. The user has to prove so-called claims about his attributes to the access control
engine. An attribute-based access control policy specifies which claims need to be satisfied in
order to grant access to an object. For instance, the claim could be "older than 18". Any user
that can prove this claim is granted access. Users can be anonymous as authentication and
identification are not strictly required. One does however require means for proving claims
anonymously. This can for instance be achieved using Anonymous credentials.
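
A sketch of such a claim check, with an invented policy and attribute names:

# ABAC-style decision: access depends on the presented claims, not on identity.
POLICY = {"min_age": 18, "required_role": None}   # e.g. the claim "older than 18"

def abac_allows(claims: dict) -> bool:
    if claims.get("age", 0) < POLICY["min_age"]:
        return False
    if POLICY["required_role"] and POLICY["required_role"] not in claims.get("roles", []):
        return False
    return True

print(abac_allows({"age": 21}))   # True: the claim "older than 18" is proven
print(abac_allows({"age": 16}))   # False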
Access Control
• Discretionary access control: (DAC) is an access policy determined by the owner
of an object. The owner decides who is allowed to access the object and what
privileges they have.
• Two important concepts in DAC are
• File and data ownership: Every object in the system has an owner. In most DAC
systems, each object's initial owner is the subject that caused it to be created. The
access policy for an object is determined by its owner.
• Access rights and permissions: These are the controls that an owner can assign to
other subjects for specific resources.
• Access controls may be discretionary in ACL-based or capability-based access
control systems. (In capability-based systems, there is usually no explicit concept of
'owner', but the creator of an object has a similar degree of control over its access
policy.)
Access Control
• Mandatory access control: (MAC) is an access policy determined by the system, not the
owner. MAC is used in multilevel systems that process highly sensitive data, such as classified
government and military information. A multilevel system is a single computer system that
handles multiple classification levels between subjects and objects.
• Sensitivity labels: In a MAC-based system, all subjects and objects must have labels assigned
to them. A subject's sensitivity label specifies its level of trust. An object's sensitivity label
specifies the level of trust required for access. In order to access a given object, the subject
must have a sensitivity level equal to or higher than the requested object.
• Data import and export: Controlling the import of information from other systems and export
to other systems (including printers) is a critical function of MAC-based systems, which must
ensure that sensitivity labels are properly maintained and implemented so that sensitive
information is appropriately protected at all times.
Access Control
• Two methods are commonly used for applying mandatory access control:
• Rule-based (or label-based) access control: This type of control further defines
specific conditions for access to a requested object. All MAC-based systems
implement a simple form of rule-based access control to determine whether access
should be granted or denied by matching:
• An object's sensitivity label
• A subject's sensitivity label
• Lattice-based access control: These can be used for complex access control
decisions involving multiple objects and/or subjects. A lattice model is a
mathematical structure that defines greatest lower-bound and least upper-bound
values for a pair of elements, such as a subject and an object.
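
A minimal sketch of the label-comparison rule ("no read up"), with an illustrative ordering of sensitivity levels and without compartments:

# MAC decision: a subject may read an object only if its clearance dominates the label.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def mac_read_allowed(subject_label: str, object_label: str) -> bool:
    return LEVELS[subject_label] >= LEVELS[object_label]   # "no read up"

print(mac_read_allowed("SECRET", "CONFIDENTIAL"))   # True
print(mac_read_allowed("CONFIDENTIAL", "SECRET"))   # False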
Access Control
• Role-based access control: (RBAC) is an access policy determined by the system,
not the owner. RBAC is used in commercial applications and also in military
systems, where multi-level security requirements may also exist. RBAC differs
from DAC in that DAC allows users to control access to their resources, while in
RBAC, access is controlled at the system level, outside of the user's control.
• Although RBAC is non-discretionary, it can be distinguished from MAC primarily
in the way permissions are handled. MAC controls read and write permissions
based on a user's clearance level and additional labels. RBAC controls collections
of permissions that may include complex operations such as an e-commerce
transaction, or may be as simple as read or write. A role in RBAC can be viewed as
a set of permissions.
Access Control
• Three primary rules are defined for RBAC:
• 1. Role assignment: A subject can execute a transaction only if the subject has
selected or been assigned a role.
• 2. Role authorization: A subject's active role must be authorized for the subject.
With rule 1 above, this rule ensures that users can take on only roles for which they
are authorized.
• 3. Transaction authorization: A subject can execute a transaction only if the
transaction is authorized for the subject's active role. With rules 1 and 2, this rule
ensures that users can execute only transactions for which they are authorized.
• Additional constraints may be applied as well, and roles can be combined in a
hierarchy where higher-level roles subsume permissions owned by sub-roles.
• Most IT vendors offer RBAC in one or more products.
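
The three rules can be sketched as a small check, with invented users, roles, and transactions:

# RBAC decision: the subject must activate an assigned role, and the transaction
# must be permitted to that active role.
ROLE_ASSIGNMENTS = {"alice": {"cashier", "auditor"}}
ROLE_PERMISSIONS = {"cashier": {"open_till", "record_sale"},
                    "auditor": {"read_ledger"}}

def rbac_allows(user: str, active_role: str, transaction: str) -> bool:
    if active_role not in ROLE_ASSIGNMENTS.get(user, set()):        # rules 1 and 2
        return False
    return transaction in ROLE_PERMISSIONS.get(active_role, set())  # rule 3

print(rbac_allows("alice", "cashier", "record_sale"))   # True
print(rbac_allows("alice", "cashier", "read_ledger"))   # False: wrong active role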
What is Biometrics?
• Biometrics are automated methods of recognizing a person based on a
physiological or behavioral characteristic
• Features measured: Face, Fingerprints, Hand geometry, handwriting, Iris, Retinal,
Vein and Voice
• Identification and personal certification solutions for highly secure applications
• Numerous applications: medical, financial, child care, computer access etc.
• Biometrics replaces Traditional Authentication Methods
• Provides better security
• More convenient
• Better accountability
• Applications on Fraud detection and Fraud deterrence
• Dual purpose: Cyber Security and National Security
What is the Process?
• Three-steps: Capture-Process-Verification
• Capture: A raw biometric is captured by a sensing
device such as fingerprint scanner or video camera
• Process: The distinguishing characteristics are extracted
from the raw biometrics sample and converted into a
processed biometric identifier record
• Called biometric sample or template
• Verification and Identification
• Matching the enrolled biometric sample against a single
record; is the person really what he claims to be?
• Matching a biometric sample against a database of
identifiers
Why Biometrics?
• Authentication mechanisms often used are User ID and Passwords
• However password mechanisms have vulnerabilities: Stealing passwords
• Biometrics systems are less prone to attacks
• Need sophisticated techniques for attacks
• Cannot steal facial features and fingerprints
• Need sophisticated image processing techniques for modifying facial features
• Biometrics systems are more convenient, Need not have multiple passwords or
difficult passwords
• E.g., characters, numbers and special symbols, Need not remember passwords
• Need not carry any cards or tokens
• Better accountability: Can determine who accessed the system with less complexity
What is Secure Biometrics?
• Study the attacks of biometrics systems
• Modifying fingerprints
• Modifying facial features
• Develop a security policy and model for the system
• Application independent and Application specific policies
• Enforce Security constraints
• Entire face is classified but the nose can be displayed
• Develop a formal model
• Formalize the policy
• Design the system and identify security critical
components
• Reference monitor for biometrics systems
Security Vulnerabilities
• Type 1 attack: present a fake biometric, such as a synthetic
biometric
• Type 2 attack: Submit a previously intercepted biometric
data: replay
• Type 3 attack: Compromising the feature extractor
module to give results desired by attacker
• Type 4 attack: Replace the genuine feature values
produced by the system by fake values desired by attacker
• Type 5 attack: Produce a high number of matching results
• Type 6 attack: Attack the template database: add
templates, modify templates etc.
Biometric Terms: Verification and
Identification
• Verification
• User claims an identity for biometric comparison
• User then provides biometric data
• System compares the user’s biometric data with the single enrolled
record for the claimed identity (a one-to-one match)
• Determines whether there is a match or a no match
• Network security utilizes this process
• Identification
• User does not claim an identity, but gives biometric data
• System searches the database to see if the biometric provided is stored
in the database
• Positive or negative identification
• Prevents from enrolling twice for claims
• Used to enter buildings
Biometric Process
• User enrolls in a system and provides biometric data
• Data is converted into a template
• Later on user provides biometric data for verification or
identification
• The latter biometric data is converted into a template
• The verification/identification template is compared with the
enrollment template
• The result of the match is specified as a confidence level
• The confidence level is compared to the threshold level
• If the confidence score exceeds the threshold, then there is a
match
• If not, there is no match
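
A toy sketch of this final decision step: compute a confidence score for the live template against the enrolled template and compare it with the administrator-set threshold. The similarity function and the feature values are purely illustrative.

# Threshold-based biometric matching decision.
def similarity(a, b):
    # Toy score in [0, 1]: fraction of matching template features.
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

THRESHOLD = 0.8   # tuned per system and application; higher means fewer false matches

def verify(live_template, enrolled_template) -> bool:
    score = similarity(live_template, enrolled_template)
    return score >= THRESHOLD     # match only if the confidence exceeds the threshold

enrolled = [3, 7, 1, 9, 4, 4, 2, 8, 6, 5]
live     = [3, 7, 1, 9, 4, 4, 2, 8, 6, 0]   # slightly different capture
print(verify(live, enrolled))                # True: score 0.9 >= 0.8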
Enrollment and Template Creation
• Enrollment
• This is the process by which the user’s biometric data is
acquired
• Templates are created
• Presentation
• User presents biometric data using hardware such as
scanning systems, voice recorders, etc.
• Biometric data
• Unprocessed image or recording
• Feature extraction
• Locate and encode distinctive characteristics from biometric
data
Data Types and Associated Biometric
Technologies
• Finger scan: Fingerprint Image
• Voice scan: Voice recording
• Face scan: Facial image
• Iris scan: Iris image
• Retina scan: Retina image
• Hand scan: Image of hand
• Signature scan: Image of signature
• Keystroke scan: Recording of character types
Templates
• Templates are NOT compressions of biometric data; they
are constructed from distinctive features extracted
• Cannot reconstruct the biometric data from templates
• Same biometric data supplied by a user at different times
may result in different templates
• When the biometric algorithm is applied to these templates,
it will recognize them as the same biometric data
• Templates may consist of strings of characters and numeric
values
• Vendor systems are heterogeneous; standards are used for
common templates and for interoperability
Biometric Matching
• Part of the Biometric process: Compares the user provided
template with the enrolled templates
• Scoring:
• Each vendor may use a different score for matching; 1-10 or -1 to
1
• Scores also generated during enrollment depending on the quality
of the biometric data
• User may have to provide different data if enrollment score is low
• Threshold is set by the system administrator and varies
from system to system and application to application
• Decision depends on match / no match
• 100% accuracy is generally not possible
False Match Rate
• System gives a false positive by matching a user’s
biometric with another user’s enrollment
• Problem as an imposter can enter the system
• Occurs when two people have high degree of similarity
• Facial features, shape of face etc.
• Template match gives a score that is higher than the
threshold
• If threshold is increased then false match rate is reduced, but
False no match rate is increased
• False match rate may be used to eliminate the non-
matches and then do further matching
False Nonmatch rate
• User’s template is matched with the enrolled
templates and an incorrect decision of nonmatch is
made
• Consequence: user is denied entry
• False nonmatch occurs for the following reasons
• Changes in user’s biometric data
• Changes in how a user presents biometric data
• Changes in environment in which data is presented
• Major focus has been on reducing false match rate
and as a result there are higher false nonmatch rates
Access Control Administration
• Access control administration determines how the organization will administer access
control: centralized or distributed.
• Terminal Access Controller Access-Control System (TACACS) is a remote authentication
protocol that is used to communicate with an authentication server commonly used in UNIX
networks.
• TACACS allows a client to accept a username and password and send a query to a TACACS
authentication server, sometimes called a TACACS daemon (TACACSD). This server was
normally a program running on a host. The host would determine whether to accept or deny
the request and send a response back. The TIP (routing node accepting dial-up line
connections, which the user would normally want to log in into) would then allow access or
not, based upon the response.
• TACACS+ and RADIUS have generally replaced TACACS. TACACS+ is an entirely new
protocol and not compatible with TACACS or XTACACS. TACACS+ uses the Transmission
Control Protocol (TCP) and RADIUS uses the User Datagram Protocol (UDP).
Intrusion Detection System
• An IDS is a device (or application) that monitors network and/or system activities
for malicious activities or policy violations and produces reports to a Management
Station.
• Intrusion prevention is the process of performing intrusion detection and attempting
to stop detected possible incidents.
• Intrusion detection and prevention systems (IDPS) are primarily focused on
identifying possible incidents, logging information about them, attempting to stop
them, and reporting them to security administrators.
Intrusion Detection System
• For the purpose of dealing with IT, there are two main types of IDS's: network-
based and host-based IDS.
• In a network-based intrusion-detection system (NIDS), the sensors are located at
choke points in the network to be monitored, often in the demilitarized zone (DMZ)
or at network borders. The sensor captures all network traffic and analyzes the
content of individual packets for malicious traffic.
• In a host-based system, the sensor usually consists of a software agent, which
monitors all activity of the host on which it is installed, including file system, logs
and the kernel. Some application-based IDS are also part of this category.
Threats to Access Control
• Dictionary Attack
• Brute Force Attack
• Spoofing at Logon
• Phishing
• Identity Theft
Secure Shell (SSH)

SSH is a cryptographic network protocol used for securely operating network services over an unsecured network. It primarily provides encrypted remote login and command execution capabilities, allowing users to access and manage remote systems and servers.

SSH uses a client-server architecture and public-key cryptography for authentication, ensuring that the connection between the client and server is secure and protected from eavesdropping and tampering.

SSH was developed as a more secure alternative to plaintext protocols like Telnet, Rlogin, and Rsh, which have significant security vulnerabilities. It is widely implemented through the OpenSSH software package, an open-source implementation of the SSH protocol.
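
For completeness, a minimal sketch of an SSH session from Python using the third-party paramiko library (assumed installed); the host, user, and key path are placeholders, and the ordinary OpenSSH command-line client ("ssh user@host") is the more common way to do the same thing.

# Public-key-authenticated SSH session that runs one remote command.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                                # trust known_hosts entries
client.set_missing_host_key_policy(paramiko.RejectPolicy())   # refuse unknown hosts
client.connect("server.example.com", username="ops",
               key_filename="/home/ops/.ssh/id_ed25519")      # public-key authentication
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()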
Thank You!
