Testing and Evaluation of Service Oriented Systems
Abstract:
In computer science we have many established testing methods and tools for evaluating software systems, but unfortunately they do not work well for systems that are made up of services. For example, to users and system integrators, services are just interfaces. This hinders white-box testing methods based on knowledge of code structure and data flow. Lack of access to source code also prevents classical mutation-testing approaches, which require seeding the code with errors. The evaluation of service-oriented systems has therefore been a challenge: although a large number of evaluation metrics exist, none of them evaluates these systems effectively. This paper discusses the different testing tools and evaluation methods available for SOA and summarizes their limitations and support in the context of service-oriented architectures.
1. Introduction
Most organizations that want to build an SOA do not have a clue about how to approach
the cost estimate, so calculating the cost of an SOA has been a challenge. We cannot
cost out an SOA like a construction project, where every resource required is tangible
and can easily be accounted for when calculating the total project cost. Understanding
the domain in its proper context, understanding how much the required resources cost,
understanding how the work will get done, and analyzing what can go wrong are
intangible activities that are always required and are difficult to measure. According to
D. Linthicum, the risk and impact of SOA are distributed and pervasive across
applications; therefore, it is critical to perform an architecture evaluation early in the
software life cycle [D. Linthicum, (2007)]. Because SOA involves the connectivity of
multiple systems, business entities, and technologies, its overall complexity and the
political forces involved need to be factored into architecture trade-off considerations
more than in single-application designs, where technical concerns predominate.
SOA is a widely used architectural approach for constructing large distributed systems,
which may integrate several systems that offer services and span multiple organizations.
In this context, it is important that technical aspects be considered carefully at
architectural design time. In a software architecture evaluation, we weigh the relevance of
each design concern only after we understand the importance of each quality attribute
requirement. Because decisions about SOA tend to be pervasive and have a significant
and broad impact on business, performing an early architecture evaluation is
particularly valuable and is always recommended.
1.1. Service Oriented Architecture (SOA)
There are many definitions of SOA but none are universally accepted. What is central to
all, however, is the notion of service. According to Phil B. et al. (2007), a service in an
SOA system is defined as follows. A service:
• is self-contained, highly modular, and can be independently deployed;
• is a distributed component, available over the network and accessible through a name
or locator other than the absolute network address;
• has a published interface, so users of the service only need to see the interface and
can be oblivious to implementation details;
• stresses interoperability, such that users and providers can use different
implementation languages and platforms;
• is discoverable, meaning users can look it up in a special directory service where all
the services are registered;
• is dynamically bound, signifying that the service is located and bound at run time, so
the service user does not need to have the service implementation available at build
time.
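The discovery and dynamic-binding properties above can be sketched with a toy registry. The class, method, and service names here are illustrative assumptions, not part of any standard SOA API:

```python
# Minimal sketch of service discovery and dynamic binding (illustrative names).

class ServiceRegistry:
    """A toy directory service where providers register implementations."""

    def __init__(self):
        self._services = {}

    def register(self, name, implementation):
        # The provider publishes the service under a logical name,
        # not an absolute network address.
        self._services[name] = implementation

    def lookup(self, name):
        # The consumer discovers the service at run time.
        return self._services[name]


def currency_convert(amount, rate):
    """A self-contained service implementation (hypothetical example)."""
    return amount * rate


registry = ServiceRegistry()
registry.register("currency-converter", currency_convert)

# Dynamic binding: the service is located and bound only at run time,
# so the consumer never needs the implementation at build time.
service = registry.lookup("currency-converter")
print(service(100, 0.5))  # 50.0
```

In a real SOA the registry would be a networked directory (for example, a UDDI-style registry) rather than an in-process dictionary, but the lookup-then-bind flow is the same.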
1.2. Service
A service is an implementation of a well-defined business functionality that operates
independently of the state of any other service defined within the system. It has a well-
defined set of interfaces and operates through a pre-defined contract between the client
of the service and the service itself, which must be dynamic and flexible enough to add,
remove, or modify services according to business requirements [Seth A, (2011)].
Services are loosely coupled, autonomous, and reusable; they have well-defined,
platform-independent interfaces and provide access to data, business processes, and
infrastructure, ideally in an asynchronous manner. A service receives requests from any
source and makes no assumptions about the functional correctness of an incoming
request. Services can be written today without knowing how they will be used in the
future, and a service may stand on its own or be part of a larger set of functions that
constitute a larger service. Thus, services within an SOA:
• provide a network-discoverable and accessible interface;
• keep units of work together that change together (high cohesion);
• build separation between independent units (low coupling).
From a dynamic perspective, there are three fundamental concepts which are important
to understand: the service must be visible to service providers and consumers; a clear
interface for interaction between them must be defined; and the real world is affected by
the interaction between services (see Figure 1). These services should be loosely
coupled and have minimal interdependency; otherwise, they can cause disruptions when
any service fails or changes.
Earlier models for integration, like 'point-to-point' and 'spoke and wheel', had certain
limitations. The complexity of application integration in a point-to-point model rises
substantially with every new application that needs to communicate and share data.
Every new application needs custom code written to 'glue' it to the existing network,
thus increasing maintenance costs. This inefficient model gave rise to a new 'spoke and
wheel' paradigm called Enterprise Application Integration (EAI), in which all
communication is facilitated by a message broker. The message broker was designed
not just for routing but is often used for data transformation as well. However, this
architecture has scalability issues and introduces a single point of failure into the
network.
Figure 2: Comparison of ESB and Point-to-Point Integration Approaches [P. Bianco, 2007]
The Enterprise Service Bus (ESB) is an improvement over these two architectures and
plays a critical role in connecting heterogeneous applications and services in a Service-
Oriented Architecture [Stojanovic, (2005)]. This middleware layer is responsible not
only for transporting data but also serves as a 'transformation' layer. This transformation
of data allows legacy systems to communicate and share data with newer applications.
Mean Response Time: one can calculate the mean response time as the amount of time
elapsed from the moment a request was sent to the time a reply was received. After
retrieving the test data to compare performances, we need a method for analyzing the
results; simply calculating the throughput or the mean response times and generating
graphs is not sufficient for the analysis.
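As a sketch, the mean response time defined above can be computed from per-request timestamps. The `call_service` function below is a stand-in stub of my own, since the paper does not prescribe a particular measurement harness:

```python
import time

def call_service(request):
    """Stand-in for a real service invocation (assumption for illustration)."""
    time.sleep(0.01)  # simulate network and processing delay
    return "reply"

def mean_response_time(requests):
    """Average elapsed time from sending each request to receiving its reply."""
    elapsed = []
    for request in requests:
        start = time.perf_counter()                  # moment the request is sent
        call_service(request)
        elapsed.append(time.perf_counter() - start)  # moment the reply arrives
    return sum(elapsed) / len(elapsed)

print(f"mean response time: {mean_response_time(range(5)):.4f} s")
```

A monotonic clock such as `time.perf_counter` is used because wall-clock time can jump during a measurement run.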
• Testing perspectives. Various stakeholders, such as service providers and end users,
have different needs and raise different testing requirements.
• Testing level. Each SOA testing level, such as integration and regression testing, poses
unique challenges.
Further, in order to understand the testing of service architectures completely, one needs
to be clear about the roles of services from different perspectives: service developer,
service provider, service integrator, service user, and third-party certifier. Gerardo C. et
al. (2006) describe these roles as follows (see Table 1):
Service developer: the service developer tests the service to detect the maximum
possible number of failures, with the aim of releasing a highly reliable service.
Service provider: the service provider tests the service to ensure it can guarantee the
requirements stipulated in the SLA with the consumer.
Service integrator: the service integrator tests to gain confidence that any service to be
bound to their own composition fits the functional and non-functional assumptions
made at design time.
Third-party certifier: the service integrator can use a third-party certifier to assess a
service's fault-proneness.
Service user: the service user's only concern is that the application works while he or
she is using it.
Regardless of the test method, testing a service-centric system requires the invocation
of actual services on the provider's machine. This has several drawbacks: in most cases,
service testing implies several service invocations, leading to unacceptably high costs
and bandwidth use [Gerardo and Massimiliano, (2006)].
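A common mitigation for these invocation costs (not elaborated in this paper) is to exercise the composition against a local stub rather than the real provider during repeated test runs. The service name and canned rates below are invented for illustration:

```python
# Sketch: replacing a costly remote invocation with a local stub during testing.
# The point is only that repeated test runs hit the stub, not the provider.

class RemoteTaxService:
    """The real service; each call would consume provider resources."""
    def rate(self, country):
        raise RuntimeError("would invoke the real provider (costly)")

class StubTaxService:
    """Local stand-in returning canned responses for test purposes."""
    def __init__(self, canned):
        self._canned = canned
        self.invocations = 0

    def rate(self, country):
        self.invocations += 1
        return self._canned[country]

def total_price(net, country, tax_service):
    """The integrator's composition logic under test."""
    return net * (1 + tax_service.rate(country))

stub = StubTaxService({"IT": 0.25, "DE": 0.5})
assert total_price(100, "IT", stub) == 125.0
assert total_price(100, "DE", stub) == 150.0
print("stub invocations:", stub.invocations)  # prints: stub invocations: 2
```

The trade-off is that a stub cannot reveal provider-side regressions, so some invocations of the actual service remain necessary.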
Table 1. Highlights per testing dimension. Each stakeholder's needs and responsibilities are shown in black,
advantages in green, issues and problems in red
For most organizations, the first step of an SOA project is to figure out how much the
SOA will cost, so that a budget can be estimated to obtain funding. The problem is that
cost estimation for all the components of an SOA is not easy and requires a clear
understanding of the work that has to be done.
Dave Linthicum proposed the following formula to estimate how much an SOA project
will cost [Dave Linthicum, 2011]:
Cost of SOA = (Cost of Data Complexity + Cost of Service Complexity + Cost of Process
Complexity + Enabling Technology Solution)
He further provides an example of how to arrive at the first variable, the cost of data
complexity, as follows:
Further, Dave suggested applying the same formulas to determine the costs of the other
variables, including the Cost of Service Complexity, Cost of Process Complexity, and
Enabling Technology Solution (which should be straightforward). Once you arrive at
your Cost of SOA, Dave advises figuring in "10 to 20 percent variations in cost for the
simple reason that we've not walked down this road before."
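Linthicum's additive formula, together with his suggested 10 to 20 percent variation band, can be sketched as follows. All component cost figures below are made-up placeholders, not values from the paper:

```python
# Sketch of the Cost of SOA formula: the sum of the four component costs,
# plus the suggested 10-20 percent variation band around the point estimate.

def cost_of_soa(data, service, process, technology):
    """Cost of Data, Service, and Process Complexity plus Enabling Technology."""
    return data + service + process + technology

def with_variation(cost, low=0.10, high=0.20):
    """Return the (low, high) band around the point estimate."""
    return cost * (1 + low), cost * (1 + high)

# Placeholder component costs for a hypothetical project (in dollars).
estimate = cost_of_soa(data=250_000, service=400_000,
                       process=150_000, technology=200_000)
low, high = with_variation(estimate)
print(f"point estimate: {estimate}, band: {low:.0f}-{high:.0f}")
```

The variation band reflects Linthicum's caveat that SOA projects carry unfamiliar risks, so the point estimate alone should not be used for budgeting.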
Different surveys and studies have concluded that the COCOMO II model by itself is
inadequate for estimating the effort required when reusing service-oriented resources.
Although the COCOMO II model has a large number of coefficients, such as effort
multipliers and scale factors, it is difficult to directly justify these coefficients in the
context of cost estimation for SOA-based software development.
3.3. Functional Size Measurement Methods
More issues appear when applying IFPUG to software system size measurement.
Measuring with the COSMIC approach, on the contrary, is supposed to satisfy the
typical sizing aspects of SOA-based software. However, there is a lack of guidelines for
the practical application of COSMIC measurement in an SOA context. In addition to the
application of Function Points, Liu et al. (2009) use Service Points to measure the size
of SOA-based software. The software size estimate is based on the sum of the sizes of
each service:
Size = Σ_{i=1}^{n} (Pi × P)
where Pi is an infrastructure factor with an empirical value related to the supporting
infrastructure, technology, and governance processes, and P represents a single specific
service's estimated size, which varies with the service type: an existing service, a
service built from existing resources, or a service built from scratch. This approach
implies that the size of a service-oriented application depends significantly on the
service type. However, the calculation of P for the various service types is not discussed
in detail.
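Since Liu et al. do not publish the calculation of P in detail, the summation itself can only be sketched. The infrastructure factors and per-service sizes below are invented placeholders chosen to reflect the three service types named above:

```python
# Sketch of the Service Points size formula, Size = sum over i of (Pi * P).
# All numeric values are assumptions; the paper gives no concrete figures.

def service_points_size(services):
    """services: list of (infrastructure_factor_Pi, service_size_P) pairs."""
    return sum(p_i * p for p_i, p in services)

portfolio = [
    (1.0, 30),   # existing service, reused as-is
    (1.3, 45),   # service built from existing resources
    (1.8, 80),   # service built from scratch
]
print(service_points_size(portfolio))
```

The per-type factors encode the intuition that a service built from scratch contributes more to the application's size than a reused one of comparable functionality.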
Here x is the original problem that will be solved through the solution procedure. IsBase
is used to verify whether the problem x is primitive, returning TRUE if x is a basic
problem unit and FALSE otherwise. SolveDirectly represents the conquer step, while
Decompose is referred to as the decomposing operation and Compose as the composing
operation [Zheng Li, Keung J, (2010)].
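The divide-and-conquer scheme described above can be sketched using the operation names from the text (IsBase, SolveDirectly, Decompose, Compose). The concrete estimation rules here, leaves carrying effort values and composition by simple summation, are my own assumptions:

```python
# Sketch of divide-and-conquer cost estimation. Composite problems are
# modeled as lists of sub-problems; leaves carry an effort estimate directly.

def is_base(x):
    """TRUE if x is a basic (primitive) problem unit."""
    return not isinstance(x, list)

def solve_directly(x):
    """Conquer step: estimate a primitive unit directly (assumed trivial here)."""
    return x

def decompose(x):
    """Split a composite problem into its sub-problems."""
    return x

def compose(results):
    """Combine sub-estimates (assumption: a simple sum)."""
    return sum(results)

def solve(x):
    if is_base(x):
        return solve_directly(x)
    return compose(solve(sub) for sub in decompose(x))

# A composite estimation problem: two subsystems, one with nested parts.
problem = [5, [3, 2], 10]
print(solve(problem))  # 20
```

In the framework of Zheng Li and Keung, the decomposition would follow SOA-specific boundaries (services, processes, infrastructure) rather than an arbitrary nesting, but the recursive control flow is the same.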
It is developed by starting with the end objective and successively subdividing it into
manageable components in terms of size, duration, and responsibility [T. Y. Lin, 2005].
In large projects, the approach is quite complex and can be as much as five or six levels
deep.
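A work breakdown structure of this kind supports cost estimation by rolling leaf-level costs up to the end objective. The task names and figures below are illustrative, not drawn from the paper:

```python
# Sketch of a WBS cost roll-up: the end objective is subdivided into
# manageable components, and costs are aggregated bottom-up.

wbs = {
    "SOA project": {                                   # end objective
        "Service design": {"Interface specs": 20, "Contracts": 15},
        "Implementation": {"Build services": 60, "Integration": 40},
        "Testing": 25,                                 # leaf with direct estimate
    }
}

def rollup(node):
    """Dict nodes are decomposition levels; numbers are leaf cost estimates."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

print(rollup(wbs))  # 160
```

A real WBS for a large project would nest five or six levels deep, as noted above, but the roll-up logic does not change with depth.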
Table 2. Summary of different SOA-based project evaluation approaches with their assumptions and limitations
4. Conclusion
Software cost estimation plays a vital role in software development projects, especially
for SOA-based software development. However, current cost estimation approaches for
SOA-based software are inadequate due to the architectural differences and the
complexity of SOA applications. This paper discussed different testing and cost
evaluation methods for service-oriented systems. Using these techniques and identifying
the support each provides in the context of service-oriented systems can help simplify
the complexity of SOA cost estimation. By bringing together different sets of metrics,
this survey helps not only with complete cost estimation work but also with estimating
the overall cost and effort through independent estimation activities in the different
development areas of an SOA application.
References
A. Bosworth, (2001) Developing Web Services, Proc. 17th International Conference on Data
Engineering (ICDE), IEEE Press, pp. 477-481,
A. Umar and A. Zordan, (2009). Reengineering for Service Oriented Architectures: A Strategic
Decision Model for Integration versus Migration, Journal of Systems and Software, vol. 82, pp.
448-462,
B. W. Boehm, C. Abts, A.W. Brown, S. Chulani, B.K. Clark, E. Horowitz, R. Madachy, D.J.
Reifer, and B. Steece, (2000). Software Cost Estimation with COCOMO II. New Jersey:
Prentice Hall PTR
D. Krafzig, K. Banke, and D. Slama, (2004). Enterprise SOA: Service- Oriented Architecture Best
Practices, Upper Saddle River: Prentice Hall PTR.
D. Linthicum, (2007) How Much Will Your SOA Cost?, SOAInstitute.org, Mar. 2007
D. Norfolk, (2007). SOA Innovation and Metrics, IT-Director.com.
E. Jamil, (2009). SOA in Asynchronous Many-to-One Heterogeneous Bi-Directional Data
Synchronization for Mission Critical Applications, WeDoWebSphere.
G. Lewis, E. Morris, L. O'Brien, D. Smith and L. Wrage, (2005). SMART: The Service-Oriented
Migration and Reuse Technique CMU/SEI- 2005-TN-029, Software Engineering Institute,
USA,
Gerardo Canfora and Massimiliano Di Penta, (2006). Testing Services and Service-Centric Systems:
Challenges and Opportunities, IEEE Computer Society, pp. 10-17
L. O'Brien, (2009). A Framework for Scope, Cost and Effort Estimation for Service Oriented
Architecture (SOA) Projects, Proc. 20th Australian Software Engineering Conference
(ASWEC'09), IEEE Press, pp. 101-110.
Phil Bianco, Rick Kotermanski, Paulo Merson (2007). Evaluating a Service-Oriented
Architecture, Software Engineering Institute.
Sanjay P. Ahuja, Amit Patel, (2011). Enterprise Service Bus: A Performance Evaluation,
Communications and Network, vol. 3, pp. 133-140
Seth Ashish, Himanshu Agarwal, A. R. Singla (2011) Designing a SOA Based Model, ACM
SIGSOFT Software Engineering Notes, Volume 36 Issue 5, ACM New York, NY, USA,2011,
pp 5-12
T. Erl, (2005). Service-Oriented Architecture: Concepts, Technology, and Design. Crawfordsville:
Prentice Hall PTR.
T. Y. Lin, (2005). Divide and Conquer in Granular Computing Topological Partitions, Proc.
Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS 2009),
IEEE Press, pp. 282-285.
Van Latum, F. (1998). Adopting GQM-based measurement in an industrial environment. IEEE
Software 15(1), 78-86
Yusuf Lateef Oladimeji, Olusegun Folorunso, Akinwale Adio Taofeek, Adejumobi, A. I., (2011). A
Framework for Costing Service-Oriented Architecture (SOA) Projects Using Work Breakdown
Structure (WBS) Approach, Global Journal of Computer Science and Technology, Volume 11,
Issue 15, Version 1.0
Zheng Li, Jacky Keung (2010). Software Cost Estimation Framework for Service-Oriented
Architecture Systems using Divide-and-Conquer Approach, Fifth IEEE International Symposium
on Service Oriented System Engineering, pp. 47-54
Z. Stojanovic and A. Dahanayake, (2005). Service-Oriented Software System Engineering:
Challenges and Practices. Hershey, PA: IGI Global.