CGT 312 ETHICAL HACKING
Module 4
Web Services and Intrusion Detection- Architecture strategies for computer fraud
prevention – Protection of Web sites – Intrusion detection system – NIDS, HIDS –
Penetration testing process – Web Services – Reducing transaction risks.
WEB SERVICES
The Internet is the worldwide connectivity of hundreds of thousands of computers of various types
that belong to multiple networks. On the World Wide Web, a web service is a standardized method
for propagating messages between client and server applications.
A web service is a software module that is intended to carry out a specific set of functions. Web
services in cloud computing can be found and invoked over the network.
The web service would be able to deliver functionality to the client that invoked the web service.
A web service is a set of open protocols and standards that allow data to be exchanged between
different applications or systems. Web services can be used by software programs written in a variety
of programming languages and running on a variety of platforms to exchange data via computer
networks such as the Internet in a similar way to inter-process communication on a single computer.
Any software, application, or cloud technology that uses standardized web protocols (HTTP or HTTPS)
to connect, interoperate, and exchange data messages – commonly XML (Extensible Markup
Language) – across the internet is considered a web service.
Web services range from major services such as storage management and customer relationship
management to more limited services such as furnishing a stock quote and checking bids for an
auction item.
Users can access some web services through a peer-to-peer arrangement rather than a central
server. Some services can communicate with other services. Middleware generally accommodates
this exchange of procedures and data. The need for web services emerged as all major platforms
were able to access the internet, but different platforms couldn't interact with each other. Web
services took platforms to the next level by publishing functions, messages, programs and objects to
the rest of the internet.
How web services work
Web services let different organizations or applications from multiple sources communicate
without the need to share sensitive data or IT infrastructure. Instead, all information moves
through a programmatic interface across a network. This interface can then be added to a GUI,
like a webpage, to deliver specific functionality to users. This means web services aren't specific
to one programming language or operating system (OS) and don't require the use of browsers or
Hypertext Markup Language (HTML).
Most web services operate using a typical client-server behavior. An application running on a
client computer or mobile device has access to a network such as the internet. A server provides
data and compute capabilities.
The process includes the following three steps:
1. The client sends a request to the server that includes details and data the web service requires.
The data can be in any common format such as JavaScript Object Notation (JSON) or XML.
2. The server side receives and authenticates the request, parses the required details, and processes the request.
3. The server then sends the appropriate results back to the client application, which displays them in a form and style appropriate for the application.
Fig: Simplified version of how a web service would function
The client would use requests to send a sequence of web service calls to a server that would host
the actual web service.
These requests are made using Remote Procedure Calls (RPC): calls to methods hosted by the relevant web service.
Example: Flipkart offers a web service that displays prices for items offered on Flipkart.com.
The front end or presentation layer can be written in .Net or Java, but the web service can be
communicated with from either programming language.
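The request/response pattern described above can be sketched in Python using the third-party requests library. The endpoint URL, query parameter, and response field below are hypothetical placeholders, not an actual Flipkart interface; the sketch only illustrates the three steps listed earlier.

    # Minimal sketch of a client invoking a web service over HTTP.
    # The endpoint URL and response fields are hypothetical illustrations.
    import requests  # third-party HTTP client library

    def get_item_price(item_id: str) -> float:
        # Step 1: the client sends a request with the details the service needs.
        response = requests.get(
            "https://example.com/api/prices",   # hypothetical service endpoint
            params={"item_id": item_id},
            timeout=10,
        )
        # Steps 2-3: the server processes the request and returns results (JSON here).
        response.raise_for_status()
        data = response.json()
        return data["price"]                     # hypothetical response field

    if __name__ == "__main__":
        print(get_item_price("SKU-12345"))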
The data that is exchanged between the client and the server, which is XML, is the most important
part of a web service design. XML (Extensible Markup language) is a simple intermediate language
that is understood by various programming languages. It is a counterpart to HTML. As a result,
when programs communicate with one another, they do so using XML. This creates a common
platform for applications written in different programming languages to communicate with one
another.
For transmitting XML data between applications, web services employ SOAP (Simple Object
Access Protocol). The data is sent using standard HTTP. A SOAP message is data that is sent from
the web service to the application. An XML document is all that is contained in a SOAP message.
The client application that calls the web service can be created in any programming language
because the content is written in XML.
Characteristics of web services
Web services have the following features:
(a) XML-Based: The data representation and data transport layers of a web service employ XML. Using
XML eliminates any networking, operating system, or platform binding. Web service-based applications
are therefore highly interoperable.
(b) Loosely Coupled: A consumer of a web service is not tied directly to that service. The web service
interface can change over time without compromising the client’s ability to interact with the service.
A tightly coupled system means that the client and server logic are inextricably linked, so that if one
interface changes, the other must be updated as well.
A loosely coupled architecture makes software systems more manageable and allows easier
integration between different systems.
(c) Capability to be Synchronous or Asynchronous: Synchronicity refers to the client’s connection to the
function’s execution. The client is blocked and the client has to wait for the service to complete its
operation, before continuing in synchronous invocations. Asynchronous operations allow a client to
invoke a task and then continue with other tasks.
Synchronous clients get their result immediately when the service call completes, whereas asynchronous
clients get their results later. Asynchronous capability is a key factor in enabling loosely coupled systems.
(d) Coarse-Grained: Object-oriented systems, such as Java, make their services available through
individual methods. An individual method is too fine-grained an operation to be useful at the corporate level.
Building a Java application from the ground up requires the development of several fine-grained
methods, which are then composed into a coarse-grained service that is consumed by either a client or
another service.
Businesses, and the interfaces they expose, should be coarse-grained. Web services technology provides
an easy way to define coarse-grained services that have access to sufficient business
logic.
(e) Supports Remote Procedure Calls: Consumers can use an XML-based protocol to call procedures,
functions, and methods on remote objects through web services. A web service must support the input
and output framework exposed by the remote system.
A number of RPC mechanisms can be exposed and accessed in this way.
(f) Supports Document Exchange: One of XML’s most appealing features is its simple, generic way of
representing data and complex entities. The documents exchanged can be as simple as a current
address or as complex as an entire book or a Request for Quotation. Web services
facilitate the straightforward exchange of documents, which aids business integration.
The web service architecture can be viewed in two ways:
(i) The first is to examine each web service actor (service provider, service requester, and service registry) in detail.
(ii) The second is to examine the rapidly emerging web service protocol stack.
Web services typically use XML for information exchange and have certain other common
characteristics, including the following:
They're accessible to users over the web, allowing web services to be published, discovered and
invoked.
They're modular, so the services can be used independently or aggregated in an arranged or chained
setup to form more complex sets of web services. This is an important means of software modularity
and reusability.
Open standards let them interoperate over any programming language or OS.
They're self-contained so no additional software is needed on the client side other than a
programming language with XML and HTTP support.
They self-describe, using common XML semantics so a WSDL file provides all the information needed
to invoke a service.
They're discoverable through a common mechanism such as UDDI.
They're often open source because of the common underlying standards.
They're independent of any user interface in favor of code-only programmatic access and operation.
Advantages of web services
Web services offer many compelling advantages, such as the following:
1. Simplicity. Web services use standardized technologies such as WSDL, XML and HTTP.
2. Interoperability. The key to web services is abstraction, allowing systems to interoperate without
any knowledge of the underlying systems or architectures involved.
3. Cost-effectiveness. The simplicity and well-established technologies involved make web services
fast and inexpensive to build, deploy and maintain.
4. Modularity and reusability. Web services allow any application to use routine features and
functions as web services rather than incorporating them into each application. This lends itself
to software development that's modular and reusable.
5. Independence. Web services maintain little or no dependence between the client and server
sides of an exchange. This makes web services stateless in a way that allows for network
disruption and data loss without compromising application performance.
6. Security. Web services use authentication, authorization, encryption and other security
measures to protect the data being transmitted between clients and servers.
7. Business Functions can be exposed over the Internet: A web service is a unit of managed code
that delivers functionality to client applications or end users. This functionality can be
accessed over the HTTP protocol, which means it can be accessed from anywhere on the internet.
Because all applications are now accessible over the internet, web services have become increasingly useful.
That is to say, the web service can be located anywhere on the internet and still provide the required
functionality.
8. Interoperability: Web services allow diverse applications to communicate with one another and
exchange data and services. Different applications can also make use of the same web services. A .NET
application, for example, can communicate with Java web services and vice versa. Web services make
the application platform and technology independent.
9. Communication with Low Cost: Because web services employ the SOAP over HTTP protocol, you
can use your existing low-cost internet connection to implement them. Web services can be
developed using additional dependable transport protocols, such as FTP, in addition to SOAP over
HTTP.
10. A Standard Protocol that Everyone Understands: Web services communicate via a defined
industry protocol. In the web services protocol stack, all four layers (Service Transport, XML
Messaging, Service Description, and Service Discovery) use well-defined protocols.
11. Reusability: A single web service can be used simultaneously by several client applications.
Challenges of web services
1. Connectivity. Web services are contingent on the availability of network connectivity. Besides
bandwidth limitations, issues such as network reliability and latency can contribute to performance
problems. And network disruptions and downtime can render a web service unavailable.
2. Overhead. Although web services use standardized, well-established communication layers and
protocols, these technologies also introduce some processing overhead that can impair
communications performance. In addition, resource consumption associated with web services can
raise operational overhead and costs.
3. Complexity and compatibility. Web services can be complex entities to build, implement and
maintain. Varied communication protocols, data formats and security precautions increase that
complexity. Web services are basically software with version control, often requiring applications to
update and upgrade over time as versions evolve and grow.
4. Security risks. While security tools exist to protect data exchanges through web services,
those security measures must be implemented properly and tested thoroughly to prevent privacy
issues, data breaches, unauthorized use and other attacks.
5. Troubleshooting. Web services enable the creation of more complex communication and data
exchange environments. This makes troubleshooting more difficult because problems can arise at
the client, the server, the network or the web service itself.
6. Vendor lock-in. When using a third-party provider's web service, a business can become dependent
on that provider. This type of vendor lock-in makes it difficult or impossible to use alternative web
services in the future.
Types of web services
• Web services are built using open standards and protocols to integrate with various applications.
Extensible Markup Language (XML): XML is used to tag, code and decode data. XML remote
procedure call, or XML-RPC, is a basic XML protocol that uses HTTP to exchange data and
communicate between client and server systems.
• The protocols web services use include the following:
1. Simple Object Access Protocol(SOAP)
SOAP is an XML-based web service protocol used to transfer data using SOAP messages. The SOAP
protocol was developed to enable different programming languages to communicate quickly and
with minimal effort using HTTP or Simple Mail Transfer Protocol (SMTP).
• SOAP stands for “Simple Object Access Protocol.” It is a transport-independent messaging
protocol. SOAP is built on sending XML data in the form of SOAP Messages. A document known
as an XML document is attached to each message. Only the structure of the XML document, not
the content, follows a pattern. The best thing about Web services and SOAP is that everything is
sent through HTTP, the standard web protocol.
• A root element known as the Envelope element is required in every SOAP document. In an XML document,
the root element is the first element. The envelope is divided into two parts: the header
comes first, followed by the body. The header contains the routing data, that is, information directing the
XML document to the client it should be sent to. The actual message is contained in the
body.
• SOAP is highly structured. It uses an XML data format and typically HTTP or SMTP for data
transmission. SOAP is stateful and uses WSDL to describe the web service model that defines how
its requests and responses are structured; it includes well-defined security standards.
• SOAP-based web services are easier to consume than REST. They involve more standards and
support advanced capabilities such as distributed computing. However, SOAP web services are
more complex to develop, implement and support than REST.
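A minimal sketch of what such a SOAP exchange looks like on the wire is given below, assuming Python with the third-party requests library. The endpoint URL, SOAPAction value, and operation name are hypothetical; only the Envelope/Header/Body structure follows the description above.

    # Minimal sketch of a SOAP request: an XML Envelope with a Header and a Body,
    # posted over HTTP. The service URL, SOAPAction and operation name are
    # hypothetical placeholders.
    import requests

    SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header>
        <!-- routing / security information goes here -->
      </soap:Header>
      <soap:Body>
        <GetPrice xmlns="http://example.com/stock">  <!-- hypothetical operation -->
          <ItemId>SKU-12345</ItemId>
        </GetPrice>
      </soap:Body>
    </soap:Envelope>"""

    response = requests.post(
        "https://example.com/soap/endpoint",          # hypothetical endpoint
        data=SOAP_ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.com/stock/GetPrice"},
        timeout=10,
    )
    print(response.status_code)
    print(response.text)                              # SOAP response envelope (XML)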
2. WSDL (Web Services Description Language)
WSDL is used to tell the client application what's included in the web service and how to connect.
Variations on WSDL include Web Services Conversation Language and Web Services Flow Language.
If a web service can’t be found, it can’t be used. The client invoking the web service should be aware of
the location of the web service. Second, the client application must understand what the web service
does in order to invoke the correct web service. The WSDL, or Web services description language, is used
to accomplish this. The WSDL file is another XML-based file that explains what the web service does to
the client application.
The client application will be able to understand where the web service is located and how to use it by
using the WSDL document.
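As a sketch of how a client might use a WSDL file, the snippet below assumes the third-party Python zeep SOAP client and a hypothetical WSDL URL and operation name; zeep reads the WSDL to learn where the service lives and which operations it exposes.

    # Sketch of consuming a web service from its WSDL, assuming the third-party
    # `zeep` SOAP client is installed (pip install zeep). The WSDL URL and the
    # operation name are hypothetical.
    from zeep import Client

    # zeep downloads and parses the WSDL, which tells the client what operations
    # the service offers, where it is located, and how messages are structured.
    client = Client("https://example.com/stockservice?wsdl")  # hypothetical WSDL

    # Operations described in the WSDL become callable methods on client.service.
    result = client.service.GetPrice(ItemId="SKU-12345")      # hypothetical operation
    print(result)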
3. Representational State Transfer (REST). Based on HTTP, REST provides interoperability between devices and
the internet for application programming interface (API)-based tasks. While not every web service that calls
itself RESTful follows REST strictly, applications built with RESTful APIs are more lightweight, manageable and
scalable.
By comparison, REST is a flexible, stateless protocol that supports data exchange in varied
formats. REST is object-based and uses HTTP for key processes such as DELETE, GET, POST and
PUT. But REST formally defines little else, allowing data transfers using plain text, HTML, XML and
JSON. REST is considered a lightweight protocol that's easier to build and understand than SOAP
services. However, REST is typically limited to point-to-point communications, and the lack of
deeper standards can lead to performance and interoperability limitations.
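The snippet below sketches the four main REST verbs against a hypothetical resource URL, using Python's third-party requests library; the endpoint and payload fields are illustrative only.

    # Sketch of the four main REST verbs against a hypothetical resource-oriented
    # API (https://example.com/api/items). Payloads and fields are illustrative.
    import requests

    BASE = "https://example.com/api/items"

    # POST: create a new resource
    created = requests.post(BASE, json={"name": "widget", "price": 9.99}, timeout=10)

    # GET: read a resource (JSON is a common, but not the only, representation)
    item = requests.get(f"{BASE}/1", timeout=10).json()

    # PUT: replace/update the resource
    requests.put(f"{BASE}/1", json={"name": "widget", "price": 11.50}, timeout=10)

    # DELETE: remove the resource
    requests.delete(f"{BASE}/1", timeout=10)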
SOAP vs. REST
Web services are all about accessing and exchanging information. These services depend on data
protocols, the rules that define how data is handled and moved so both sides of a communication
link can understand it. SOAP and REST are the two most popular data exchange protocols.
4.Universal Description, Discovery and Integration. UDDI is an XML-based standard that lists and details
what services are available in an application. It makes web services discoverable to other services and
facilitates digital transactions and e-commerce.
UDDI is a standard for specifying, publishing and discovering a service provider’s online services. It
provides a specification that aids in the hosting of data via web services. UDDI provides a repository
where WSDL files can be hosted so that a client application can discover a WSDL file to learn about the
various actions that a web service offers. As a result, the client application will have full access to the
UDDI, which serves as a database for all WSDL files.
The UDDI registry holds the required information for the online service, just as a telephone directory
holds the name, address, and phone number of an individual, so that a client application can figure
out where the service is located.
ARCHITECTURE STRATEGIES FOR COMPUTER FRAUD PREVENTION
Components of an Information Technology Infrastructure
Many organizations have Internet-facing, Web-based applications that can be accessed by an insider
either within the confines of the organization or remotely. Conversely, many applications within
organizations are not network or Web based and function as stand-alone applications; these can also
be used to perpetrate computer fraud.
Consequently, the risk exposure for insider abuse is significantly higher for organizations that have
Web-based versus traditional applications that can be accessed only within the organization.
o Firewall.
Packet Filter Firewall
Packet Inspection Firewall
Application gateway Firewall
Circuit Level gateway
Proxy Server
o Router
o Host
o Server
o PC Workstation
o Intrusion Detection systems.
Architectural Strategies to Prevent and Detect ICF:
When developing a system architectural design for an enterprise, there are several key
considerations:
1. Scalability: This is the ease with which additional system processing capacity, throughput, or
transactions can be increased (or decreased) over time.
2. Replication: Add processing resources that replicate and share part of the workload.
3. Clustering: Physically centralize but logically distribute the processing load.
4. Locality of Decision-Making Authority: Distribute the ability to affect modification, while
centralizing the decision for which modifications to incorporate.
5. Adaptability: This is the ease with which the existing architectural design or configuration of a
system can be updated to respond to changing conditions, performance congestion, security
attacks, and so forth.
6. Mitigating Architectural Mismatches: This may occur when system components cannot
interconnect to exchange data.
7. Architectural Security Gaps: There may be architectural security gaps that allow for unauthorized
access and update capabilities to system resources.
8. Component Based: Configuring architecture using separable components.
9. Multitier: Configuring architecture into tiers that separate user interface from network access
gateways (i.e., Web servers, security firewalls), from data storage/retrieval repositories.
Types of System Architectural Designs for Information Processing:
The primary types of system architectures for information processing include Service Oriented
Architecture (SOA), distributive, client–server, and centralized information systems processing more
commonly associated with mainframe and midrange computers. Management’s decision to choose one
or both of the architectural designs is a business decision and should be primarily based on the mission
statement and the business goals and objectives of the enterprise.
1. Service Oriented Architecture (SOA)
The primary focus of this research is a SOA, which is essentially a collection of services. These services
communicate with each other. The communication can involve either simple data exchange or it could
involve two or more services coordinating some activity.
Web services are the most likely connection technology for SOAs and use eXtensible Markup Language (XML) to create
a robust connection. A Web service is a software system designed to support interoperable machine-to-
machine interaction over a network. It has an interface described in a machine-processable format
(specifically Web Services Description Language [WSDL]). Other systems interact with the Web service in
a manner prescribed by its description using Simple Object Access Protocol (SOAP) messages, typically
conveyed using Hypertext Transfer Protocol (HTTP). An organization using Web services internally could
easily have those services disrupted through the insider threat.
2. Centralized Processing
This refers to the processing of all data at one single central location by a large mainframe computer.
Mainframes became popular in the 1960s and 1970s because of their unprecedented computer power.
During the 1980s and early 1990s, concepts such as client–server and distributed computing caused
many to realize that although computing power could be purchased at a significantly lower capital cost,
there were hidden costs involved.
3. Distributive Systems Architecture
A Distributive Systems Architecture refers to any of a variety of computer systems that use more than
one computer, or processor, to run an application. This includes parallel processing, in which a single
computer uses more than one central processing unit (CPU) to execute programs. More often, however,
distributed processing refers to local area networks (LANs) designed so that a single program can run
simultaneously at various sites.
Another form of distributed processing involves distributed databases— databases in which the data
is stored across two or more computer systems.
The database system keeps track of where the data is so that the distributed nature of the database is
not apparent to users.
4. Client–Server Architecture
This is a network architecture in which each computer or process on the network is either a client or a
server. Servers are powerful computers or processes dedicated to managing disk drives (file servers),
printers (print servers), or network traffic (network servers).
Clients are PCs or workstations on which users run applications. Clients rely on servers for resources,
such as files, devices, and even processing power.
A newer client–server architecture, called a three-tier architecture, introduces a middle tier for the
application logic. A special type of client–server architecture consists of three well-defined and separate
processes, each running on a different platform:
The user interface, which runs on the user’s computer (the client).
The functional modules that actually process data (this middle tier runs on a server and is often called
the application server).
A database management system (DBMS) that stores the data required by the middle tier (this tier runs
on a second server called the database server).
The three-tier design has many advantages over traditional two-tier or single tier designs, the chief ones
being as follows:
The added modularity makes it easier to modify or replace one tier without affecting the other tiers.
Separating the application functions from the database functions makes it easier to implement load
balancing.
Fig: SOA Architecture Strategies for Computer Fraud Prevention
PROTECTION OF WEB SITES
1. Secure Website with HTTPS:
SSL/TLS Certificates:
Ensure the website uses HTTPS (Hypertext Transfer Protocol Secure) by installing an SSL/TLS
certificate. This encrypts the communication between the user's browser and the website,
protecting sensitive data like passwords and credit card information.
HTTPS Protocol:
HTTPS is a secure version of HTTP, using encryption to protect data transmitted between the
server and the user's browser.
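As an illustration of the HTTPS point above, the following Python sketch (standard library only) connects to a host over TLS and prints details of the certificate it presents; the hostname is a placeholder.

    # Sketch: inspect the TLS certificate a site presents, using only the Python
    # standard library. The hostname is a placeholder.
    import socket
    import ssl

    hostname = "example.com"
    context = ssl.create_default_context()        # verifies the certificate chain

    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()              # parsed certificate details
            print("TLS version:", tls.version())
            print("Issued to:", cert.get("subject"))
            print("Expires:", cert.get("notAfter"))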
2. Implement Strong Authentication:
Strong Passwords:
Require users to create strong, unique passwords and enforce regular password changes.
Multi-Factor Authentication (MFA):
Implement MFA, where users need to verify their identity using multiple methods (e.g.,
password and a code from a mobile app).
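The sketch below illustrates the two controls above: salted password hashing with the Python standard library (PBKDF2) and a time-based one-time code check for MFA, assuming the third-party pyotp package is available.

    # Sketch of two controls described above: storing passwords as salted hashes
    # (standard-library PBKDF2) and verifying a time-based one-time code for MFA
    # (assuming the third-party `pyotp` package is installed).
    import hashlib
    import hmac
    import os
    import pyotp

    def hash_password(password: str):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)

    # MFA: the secret is provisioned to the user's authenticator app once;
    # each login then requires the current 6-digit code in addition to the password.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    print("Current code:", totp.now())
    print("Code accepted:", totp.verify(totp.now()))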
3. Keep Software Updated:
Regular Updates:
Regularly update the website's software, plugins, and themes to patch security vulnerabilities.
Security Patches:
Ensure the installation of security patches promptly to address known vulnerabilities.
4. Protect Against Web Attacks:
Web Application Firewall (WAF):
Use a WAF to protect the website from common web attacks, such as SQL injection and cross-
site scripting (XSS).
Content Delivery Network (CDN):
A CDN can help improve website performance and protect against DDoS attacks by distributing
the website's content across multiple servers.
Firewall:
Implement a firewall to block malicious traffic and unauthorized access to the website.
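A WAF filters hostile input at the perimeter; at the application layer the most common of these attacks, SQL injection, is mitigated by parameterized queries. The following Python sketch uses the standard-library sqlite3 module and a made-up table purely for illustration.

    # A WAF filters hostile input at the network edge; at the application layer
    # SQL injection is mitigated by parameterized queries. Minimal sketch using
    # the standard-library sqlite3 module and a made-up table.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"   # hostile input attempting injection

    # UNSAFE: string concatenation would let the input rewrite the query.
    # query = "SELECT role FROM users WHERE name = '" + user_input + "'"

    # SAFE: the placeholder keeps the input as data, never as SQL syntax.
    row = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchone()
    print(row)   # None -- the injected string matches no user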
5. Monitor and Audit Your Website:
Regular Security Audits:
Conduct regular security audits to identify and address potential vulnerabilities.
Log Monitoring:
Monitor the website logs for suspicious activity and potential security breaches.
Penetration Testing:
Consider penetration testing to simulate real-world attacks and identify vulnerabilities.
6. Protect Website Content:
Terms and Conditions:
Clearly state the website's terms and conditions regarding content usage and copyright.
Watermarks:
Protect the images with watermarks to deter unauthorized use.
Disable Easy Copying:
Implement measures to prevent easy copying of the content, such as disabling right-click
options.
Anti-Bot Technology:
Use anti-bot technology to prevent bots from scraping the website content.
7. Backups and Recovery:
Regular Backups: Regularly back up the website's files and database to ensure you can recover
from a security breach or other issues.
Recovery Plan: Develop a recovery plan to address potential security incidents.
INTRUSION DETECTION SYSTEMS
Intrusion detection systems (IDS) perform a number of different functions, including the following:
Monitoring and analysis of user and system activity.
Auditing of system configurations and vulnerabilities.
Assessment of the integrity of critical system and data files.
Recognition of activity patterns reflecting known attacks.
Statistical analysis for abnormal activity patterns.
Operating system audit trail management, with recognition of user activity reflecting policy violations.
1. NIDS-Network Intrusion Detection Systems
Network intrusion detection systems (NIDS) use raw network packets as the primary data source.
Additionally, the IDS uses a network adapter in promiscuous mode that listens and analyzes all traffic in
real-time as it travels across the network. The network IDS usually has two logical components: the
sensor and the management station. The sensor sits on a network segment, monitoring it for suspicious
traffic. The management station receives alarms from the sensors and displays them to an operator. The
sensors are usually dedicated systems that exist only to monitor the network.
Strengths
Network Attack Detection: The NIDS can detect some of the external and internal attacks that use the
network, particularly new attack forms. A NIDS will alert for an external attack or an insider accessing the
network remotely and wishing to conduct potential ICF activities.
Anomaly Detectors Detection Ability: Anomaly detectors have the ability to detect unusual behaviour
and therefore the ability to detect symptoms of attacks without specific knowledge of details.
Anomaly Detectors—Attack Signatures: Anomaly detectors have the ability to produce information
that serves as the basis for the development of new attack signatures.
No Modifications of Production Servers or Hosts: A network-based IDS does not require modification
of production servers or hosts.
No Production Impact: NIDS will generally not negatively impact any production services or processes,
because the device does not function as a router.
Self-Contained: NIDS runs on a dedicated system that is simple to install, generally plug-and-play after
some configuration changes, and then is set to monitor network traffic.
Weaknesses
Limited to a Network Segment: NIDS examines only network traffic on the segment to which it
is directly connected. It cannot detect an attack that travels through a different network segment.
Expensive: The problem may require that an organization purchase many sensors in order to
meet its network coverage goals.
Limited Detection: Detection is limited to programmed attacks from external sources; however,
it is not considered effective for detecting the more complex information threats.
Limited Coordination: There is little coordination among sensors, which creates a significant
amount of analysis traffic.
Difficulty with Encryption: A network-based IDS may have a difficult time handling attacks within
encrypted sessions.
False Positives: Anomaly detection generates a significant number of false positives.
2. Host-Based Intrusion Detection Systems(HIDS)
Host-based IDSs operate on information collected from within a computer system, which gives a HIDS a
distinct advantage over a NIDS: on the host it is easy to see the intended outcome of an attempted attack,
which is difficult to determine from network traffic alone. The host-based IDS looks for signs of intrusion on the
local host system. These systems frequently use the host operating system's audit trails and system logs as sources
of information for analysis. Every platform differs in the audit trails and system log reports it produces;
in Windows NT, for example, there are system, event, and security logs, while in UNIX there are syslog and
other operating-system-specific log files. This IDS architecture generally uses rule-based engines for analyzing
activity; an example of such a rule might be: superuser privilege can only be attained through the su command.
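The following Python sketch illustrates a rule-based host check of the kind just described: scanning a UNIX authentication log for use of the su command. The log path and line format vary by platform and are assumptions here.

    # Toy illustration of a host-based, rule-based check: scan a UNIX auth log for
    # uses of the `su` command. The log path and line format are assumptions.
    import re

    RULE = re.compile(r"su(\[\d+\])?:.*(session opened|FAILED)", re.IGNORECASE)

    def scan_auth_log(path="/var/log/auth.log"):   # path differs per UNIX flavour
        alerts = []
        with open(path, errors="replace") as log:
            for line in log:
                if RULE.search(line):
                    alerts.append(line.strip())
        return alerts

    if __name__ == "__main__":
        for entry in scan_auth_log():
            print("HIDS event:", entry)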
Strengths:
A host-based IDS usually provides much more detailed and relevant information than a network-based
IDS.
Host-based systems tend to have lower false-positive rates than do network based systems.
Host-based systems can be used in environments where broad intrusion detection is not needed, or
where the bandwidth is not available for sensor-to-analysis communications.
Finally, a host-based system may be less risky to configure with an active response, such as terminating
a service or logging off an offending user.
Weaknesses:
Host-based systems require installation on the particular device that you wish to protect.
If the server is not configured to do adequate logging and monitoring, you have to change the
configuration of, possibly, a production machine, which is a tremendous change management
problem.
Host-based systems are relatively expensive.
They are almost totally ignorant of the network environment. Thus, the analysis time required to
evaluate damage from a potential intrusion increases linearly with the number of protected hosts.
THE PENETRATION TESTING PROCESS
Penetration testing is an emerging practice used by organizations to identify and test their vulnerabilities
before a malicious agent has the opportunity to exploit them. The various techniques presented here attempt to
penetrate a target network from a particular frame of reference, both internal and external to the organization.
The Pen testing process and objective at its most fundamental level is trying to emulate a hacker by assessing
the security strengths and weaknesses of a target network.
A comprehensive Pen test will surface and test for real-world vulnerabilities (for example, open services)
detected during the network security scan.
One of the primary goals in Pen testing is to identify security vulnerabilities in a network and to assess how an
outsider or insider to an enterprise may either deny service or gain access to information to which the attacker is
not authorized.
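As one concrete reconnaissance step in this process, the Python sketch below performs a simple TCP connect scan of a few common ports using only the standard library; it should only ever be run against hosts the tester is authorized to probe.

    # Minimal sketch of one Pen-test reconnaissance step: a TCP connect scan of
    # a handful of common ports. Only run against hosts you are authorized to test.
    import socket

    COMMON_PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 3306, 3389]

    def scan(host: str, ports=COMMON_PORTS, timeout=1.0):
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:   # 0 means connect succeeded
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        print(scan("127.0.0.1"))    # scan the local host as a harmless example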
Methodology
Determining what type of Pen test to perform depends largely on the threat management is trying to
replicate, how the test should be conducted (internal versus external), and who should conduct it.
Different types of Pen tests are as follows:
Internal (Consultant Scenario): Typically, during an internal Pen test, an attempt will be made to emulate a
consultant with zero knowledge of the entity; however, the person making the attempt will possess a sufficient
skill level to perform the ethical hack but will have no access rights to the network other than physical access.
Internal (Internal Employee Scenario): Another threat profile that should be tested within an internal threat
scenario would be that of an internal employee.
An internal Pen test searches for potential security vulnerabilities within the internal (trusted) network.
External: External Pen tests are intended to identify vulnerabilities introduced through the
organization's connection to the Internet via a firewall or gateway.
Fundamentally, external Pen tests are designed to test the adequacy of the perimeter defence mechanisms,
such as remote access controls, firewalls, encryption, intrusion detection, and incident response.
Pen Teams: Pen tests are performed by teams from the Big Four accounting firms and other consulting firms. The
term "Tiger team" is typically used to describe Pen testing teams. The term "Spider team" is commonly used to
refer to those individuals involved in Pen testing against Web servers that are located outside of the enterprise's
network.
WEB SERVICES-REDUCING TRANSACTION RISKS:
Web Services Security for a Service Oriented Architecture:
Web services were selected as the primary architecture based upon their growing use within the marketplace.
The primary players in the Web services space include Microsoft, IBM, Oracle, BEA, and others. There is concern
in the marketplace that reliance on vendor alliances is questionable, which may lead to vendor politics, thereby
preventing a single, consistent set of standards. The front-runners in Web services design and development tools
vendors are Microsoft VS.net, BEA WebLogic Workshop, and IBM/Rational Websphere.
Major Groups Involved in Establishing Standards for Web Services Security
OASIS (Organization for the Advancement of Structured Information Standards):
A global consortium that drives the development and adoption of e-business standards. OASIS produces
worldwide standards for security, Web services, XML conformance, business transactions, electronic
publishing, topic maps, and interoperability within and between marketplaces. Key standards include
extensible rights markup language, WS-Security, Security Assertion Markup Language (SAML)
provisioning, biometrics, and eXtensible Access Control Markup Language (www.oasis-open.org).
W3C (World Wide Web Consortium): Develops interoperable technologies (specifications, guidelines,
software, and tools). Key standards include XML encryption, XML signature, and XKMS (XML Key
Management Specifications) (www.w3c.org).
Liberty Alliance: The Liberty Alliance project was formed in 2001 to establish an open standard for
federated network identity. Key standards include SAML to pass standards-based security tokens between
identity and authentication systems (www.projectliberty.org).
WS-I (Web Services Interoperability Organization): An open-industry organization chartered to promote
Web services interoperability across platforms, operating systems, and programming languages. The
organization works across all industries and standards organizations to respond to customer needs by
providing guidance, best practices, and resources for developing solutions for Web services (www.ws-i.org).
Web Services Security—Technical Security Concerns:
The technical security governing Web Services continues to evolve and mature (i.e., SAML). There are some
organizations that are moving slowly into Web services, by deploying such activity internally as a pilot prior to
engaging directly into external e-commerce and data transmissions.
Transmission of Executables: Web services allow the hidden transmission of executables and other malicious
content. Web services allow for the transmission of any kind of data within the flow of message transactions.
Cyber Attacks: Web services security vendors and other perimeter security vendors have not yet achieved a
significant level of sophistication with regard to the aforementioned types of attacks.
Security Assertion Markup Language (SAML)
SAML is an XML vocabulary used to convey trustworthy, digitally signed authentication and user credential
information between applications or domains and independent from actual authentication mechanisms, user
directories, or security policies.
Browser-Based SSO: The browser-based SSO usage of SAML allows an identity provider to transparently
pass identity attributes to service providers as part of a SSO environment. The underlying mechanics of
SAML allow the service provider to validate that the information has integrity.
SOAP-Based Web Services: A SAML assertion can be directly attached to the header of a SOAP envelope
to indicate who the end user is behind any particular transaction.
Other Web Services Security Proposals: There are other efforts underway to evaluate the security aspects
of Web services, which include evaluating a Web services proof of concept, which involves interoperability
testing between Java and .NET platforms across financial firewalls. There are extensions of SOAP being
designed to provide data confidentiality, integrity, authentication, and message reliability.
Specific Types of Web Services Security Solutions:
Security Infrastructure: A number of existing standards can be used to enhance the security of Web services.
Although some like SSL and HTTPS can be used on their own to provide point-to-point security, all of the following
can be incorporated in the more elaborate schemes currently being developed.
SSL (Secure Sockets Layer): SSL is a network encryption and authentication mechanism developed by Netscape,
which is used in HTTPS. Currently a de facto standard, SSL has been submitted to the Internet Engineering Task
Force (IETF) for standardization, which will take the form of Transport Layer Security (TLS) Protocol.
TLS (Transport Layer Security): A Transport Layer Security Protocol was adopted by IETF as RFC 2246. TLS is based
on SSL, which it may eventually supersede.
HTTPS: HTTPS is a secure form of HTTP, implemented by the use of SSL instead of plain text. Data is encrypted
in both directions, and the server is authenticated by an SSL certificate
XML Digital Signature: This is a set of XML syntax and processing rules for creating and representing digital
signatures (a necessary building block for the WS-Security Framework). As well as its obvious purpose of
guaranteeing message integrity, XML Digital Signature can be used to implement nonrepudiation.
XKMS (XML Key Management Specification): A specification submitted to W3C by Microsoft, VeriSign, and
webMethods in March 2001, XKMS aims to provide a Web service public key infrastructure (PKI) interface in such
a way as to hide as much complexity as possible from the client.
Message-Level Authentication: This includes HTTP basic and HTTP digest.
Security Assertion Markup Language (SAML)
X.509
Data-Element Level Encryption and Digital Signatures
XML Application Firewall: An XML application firewall fits well as a non-invasive drop-in solution in front of any
XML or Web services interface.
Extensible Markup Language (XML)
XML provides a context by which applications can send and receive messages and documents that describe the
nature of their content in machine-readable form. Web services will define how data will be requested and passed,
with the standards incorporated in an interface within an existing application, which will allow any other
application with a similar interface to connect with it and to exchange data. Web services are sometimes referred
to as "XML in motion" because Web services protocols define the transport mechanisms for XML-based
communications. There are two formats for defining XML document declarations: the XML document type definition
(DTD) and XML Schema. The XML DTD was the original representation that provided a road map to the syntax used
within a given class of XML document. Because XML tags are not predefined as in HTML, a DTD describes tag
names and distinguishes the required and optional elements that may appear in a document.
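The snippet below illustrates XML's machine-readable structure by parsing a made-up order fragment with Python's standard-library ElementTree API.

    # Small sketch of machine-readable XML content: parse a made-up purchase-order
    # fragment with the standard-library ElementTree API.
    import xml.etree.ElementTree as ET

    DOCUMENT = """
    <order id="1001">
      <customer>ACME Corp</customer>
      <item sku="SKU-12345" quantity="2"/>
    </order>
    """

    root = ET.fromstring(DOCUMENT)
    print(root.tag, root.attrib["id"])         # order 1001
    print(root.findtext("customer"))           # ACME Corp
    for item in root.findall("item"):
        print(item.get("sku"), item.get("quantity"))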
XML and Security.
At the current time, encrypting a complete XML document, testing its integrity, and confirming the authenticity
of its sender is fairly straightforward; however, at times it may be necessary to use encryption and authenticate in
arbitrary sequences and involve different users and originators. At the present time, the most important sets of
developing specifications in the area of XML-related security are XML encryption, XML signatures, XACL
(eXtensible Access Control Language), SAML, and XKMS. XML encryption will allow encryption of digital content,
such as GIF, SVG, or XML fragments, while leaving other parts of the XML document unencrypted.
SOAP and Security.
When securing SOAP messages, various types of threats should be considered:
The message could be modified or read by antagonists or an antagonist could send messages to a service that,
while well-formed, lacks appropriate security claims to warrant processing.
The OASIS Web Services Security Model applies message security tokens combined
with digital signatures to protect and authenticate SOAP messages. Security tokens assert claims and can be used
to assert the binding between authentication secrets or keys and security identities. Protecting the message
content from being disclosed (confidentiality) or modified without detection (integrity) is a primary security
concern.
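The Python sketch below illustrates only the integrity idea: a keyed digest of the SOAP body that the receiver recomputes to detect tampering. Real WS-Security uses XML Signature with X.509 or SAML tokens rather than a shared-secret HMAC; this is a simplified stand-in.

    # Simplified illustration of message integrity: the sender attaches a keyed
    # digest of the SOAP body, and the receiver recomputes it to detect tampering.
    import hashlib
    import hmac

    SHARED_SECRET = b"demo-key"        # placeholder key for illustration only

    def sign(body: bytes) -> str:
        return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

    def verify(body: bytes, signature: str) -> bool:
        return hmac.compare_digest(sign(body), signature)

    soap_body = b"<GetPrice><ItemId>SKU-12345</ItemId></GetPrice>"
    token = sign(soap_body)

    print(verify(soap_body, token))                                        # True: intact
    print(verify(b"<GetPrice><ItemId>SKU-9</ItemId></GetPrice>", token))   # False: modified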
Problems with Web Services Security.
There are no industry best or even sound processes or practices for risk mitigation for Web services threats and
vulnerabilities within the financial services sector or other industries. Web services are based on standards that
are simple and portable but provide no built-in security mechanisms. Therefore, data transferred via Web services
can be exposed to threats. Security standards for Web services are being developed that attempt to define a
standard set of message headers to achieve integrity and security, but they are not yet available. The risk
assessment process should include, at the minimum, the following general categories involved in Web services:
Authentication: Authentication is a core requirement of a secure Web services architecture—only free
and public Web services do not require some form of authentication. The major choices for authentication
include how credentials are bound to the Web services request, the type of credentials, whether a session
token is used, and where and how authentications are processed.
Authorization: Web services architects must decide on the richness of authorization policy and where and
how authorization is performed.
Richness of Authorization: The business situation drives authorization policy requirements. This can range
from simple implied entitlement to complex policy-based and instance-based authorization.
Administration: Determining the appropriate Web services administration is a security challenge. A
comprehensive secure Web services architecture involves securing more than an independent, stand-
alone Web services tier, which in turn involves multiple software infrastructure elements, each with its
own security.
For example, a business partner security infrastructure integration (federation) provides emerging
solutions and standards for federated security. A security infrastructure is needed to enable business
partners (or divisions within a single enterprise) to recognize and trust each other’s users.
************************************************************************************