UNIT 4 SE
A. Requirements and definition: In this phase the developing organization interacts with the
customer organization to specify the software system to be built. Ideally, the requirements should
define the system completely and unambiguously. In actual practice, there is often a need to do
corrective revisions during software development. A review or inspection during this phase is
generally done by the design team to identify conflicting or missing requirements. A significant
number of errors can be detected by this process. A change in the requirements in the later phases
can cause increased defect density.
B. Design: In this phase, the system is specified as an interconnection of units, such that each unit is
well defined and can be developed and tested independently. The design is reviewed to recognize errors.
C. Coding: In this phase, the actual program for each unit is written, generally in a higher level
language such as C or C++. Occasionally, assembly level implementation may be required for high
performance or for implementing input/output operations. The code is inspected by analyzing the
code (or specification) in a team meeting to identify errors.
D. Testing: This phase is a critical part of the quest for high reliability and can take 30 to 60% of the
entire development time. It is generally divided into the following separate phases.
1. Unit test: In this phase, each unit is separately tested, and changes are made to remove the
defects found. Since each unit is relatively small and can be tested independently, it can be
exercised much more thoroughly than a large program.
2. Integration testing: During integration, the units are gradually assembled and partially assembled
subsystems are tested. Testing subsystems allows the interface among modules to be tested. By
incrementally adding units to a subsystem, the unit responsible for a failure can be identified more
easily.
3. System testing: The system as a whole is exercised during system testing. Debugging is continued
until some exit criterion is satisfied. The objective of this phase is to find defects as fast as possible.
In general the input mix may not represent what would be encountered during actual operation.
4. Acceptance testing: The purpose of this test phase is to assess the system reliability and
performance in the operational environment. This requires collecting (or estimating) information
about how the actual users would use the system. This is also called alpha-testing. This is often
followed by beta-testing, which involves actual use by the users.
5. Operational use: Once the software developer has determined that an appropriate reliability
criterion is satisfied, the software is released. Any bugs reported by the users are recorded but are
not fixed until the next release.
6. Regression testing: When significant additions or modifications are made to an existing version,
regression testing is done on the new or "build" version to ensure that it still works and has not
"regressed" to lower reliability.
Software Reliability Measurement Techniques
Reliability metrics are used to quantitatively express the reliability of the software
product. The choice of which metric to use depends upon the type of system
to which it applies and the requirements of the application domain.
Product Metrics
Product metrics measure the artifacts that are built during development, i.e., requirement
specification documents, system design documents, and so on. These metrics help in assessing
whether the product is good enough, through records on attributes like usability,
reliability, maintainability, and portability. In many cases these measurements are taken
from the actual body of the source code.
Process Metrics
Process metrics quantify useful attributes of the software development process and its
environment. They tell whether the process is functioning optimally, as they report on
characteristics like cycle time and rework time. The goal of a process metric is to do the right
job the first time through the process. The quality of the product is a direct function of
the process, so process metrics can be used to estimate, monitor, and improve the
reliability and quality of software. Process metrics describe the effectiveness and quality
of the processes that produce the software product.
Fault and Failure Metrics
A fault is a defect in a program which appears when the programmer makes an error and
causes a failure when the program is executed under particular conditions. These metrics are
used to determine the failure-free execution of the software.
To achieve this objective, the number of faults found during testing and the failures or
other problems reported by users after delivery are collected, summarized,
and analyzed. Failure metrics are based upon customer information regarding faults
found after release of the software. The failure data collected is therefore used to
calculate failure density, Mean Time Between Failures (MTBF), or other parameters
to measure or predict software reliability.
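As a simple illustration of how such failure data can be turned into numbers, the sketch below computes failure density and MTBF from a hypothetical failure log; the function names and the example figures are assumptions made for the example, not taken from the text.

```python
# Minimal sketch (illustrative values, not from the source): turning collected
# failure data into simple reliability figures.

def failure_density(num_failures, size_kloc):
    """Failures reported per thousand lines of code (KLOC)."""
    return num_failures / size_kloc

def mtbf(total_operating_hours, num_failures):
    """Mean Time Between Failures: operating time divided by failure count."""
    return total_operating_hours / num_failures

if __name__ == "__main__":
    # Hypothetical field data: 8 failures over 2,000 hours in a 50 KLOC system.
    print(failure_density(8, 50))   # 0.16 failures per KLOC
    print(mtbf(2000, 8))            # 250.0 hours between failures
```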
Software Reliability Models
Software reliability models have appeared as people try to understand the characteristics of
how and why software fails, and attempt to quantify software reliability.
Over 200 models have been established since the early 1970s, but how to quantify
software reliability remains mostly unsolved.
There is no single model that can be used in all situations; no model is complete or
even representative. Most software reliability models contain the following parts:
o Assumptions
o Factors
o A mathematical function that relates reliability to these factors; the function is
generally higher-order exponential or logarithmic.
1) Time between Failures Models
Under these models, the study is based on the time between failures. They work on the
assumption that the time between the (i-1)th and ith failures is a random variable which
follows a distribution whose parameters depend on the number of faults remaining in the
program during this interval. Estimates of the parameters are obtained from the observed
values of the times between failures; quantities such as the mean time to next failure are then
obtained from the fitted model.
Jelinski-Moranda Model
The Jelinski-Moranda (JM) model is an exponential model, but it differs from the geometric
model in that the parameter used is proportional to the remaining number of faults rather
than constant [6]. In the JM model, there are N software faults at the start of testing; each is
independent of the others and is equally likely to cause a failure during testing. A fault
removal technique is applied to remove defects, and no new defects are introduced during
debugging. The basic assumptions of this model are:
1. Every fault has the same chance to be encountered during software operation.
2. The fault detection rate remains constant over the intervals between fault occurrences.
3. The fault detection rate is proportional to the current fault content of the software.
4. Each detected error is corrected without delay.
5. Failures are independent.
MODEL FORM: The number of predicted errors, or the mean value function µ(tj), is given by
µ(tj) = (1/b)(a - (j - 1))
where b is the roundness or shape factor (the rate at which the failure rate decreases), a is the
total number of software errors, and tj is the occurrence time of the jth fault. The number of
residual errors, once the entire number of bugs is detected, is calculated as
ER = a - µ(tj)
The Reliability Factor (RF) is a measure of software reliability; its value varies between 0 and 1.
If RF = 1, the software under consideration is perfect; if RF = 0, the software is highly
vulnerable. When RF approaches 1, the software can be considered reliable.
RF = 1 - (ER / a)
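A minimal sketch of the JM expressions above, assuming illustrative values for a, b, and the fault index j (none of these numbers come from the text):

```python
# Minimal sketch of the Jelinski-Moranda expressions quoted above.

def jm_mean_value(a, b, j):
    """Predicted number of errors: mu(tj) = (1/b) * (a - (j - 1))."""
    return (a - (j - 1)) / b

def residual_errors(a, mu):
    """ER = a - mu(tj)."""
    return a - mu

def reliability_factor(a, er):
    """RF = 1 - ER/a, bounded between 0 (highly vulnerable) and 1 (perfect)."""
    return 1 - er / a

if __name__ == "__main__":
    a, b, j = 100, 2.0, 41            # assumed example values, not from the source
    mu = jm_mean_value(a, b, j)       # 30.0 predicted errors
    er = residual_errors(a, mu)       # 70.0 residual errors
    print(reliability_factor(a, er))  # 0.3
```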
2. Failure Count Models
This group refers to models that are based on the number of failures that occur in each time
interval. The random variable of interest is the number of faults (failures) occurring during
specified time intervals. It is assumed that the failure counts follow a known stochastic
process, usually a Poisson process with a time-dependent discrete or continuous failure rate.
The time can be calendar time or CPU time. Parameters of the failure rate can be estimated
from the observed values of the failure counts, and the software reliability parameters are
then obtained from the appropriate expressions.
Goel-Okumoto Non-homogeneous Poisson Process Model
In this model, Goel and Okumoto [9] assumed that a software system is subject to failures at
random times caused by faults present in the system. The Non-Homogeneous Poisson Process
(NHPP) model is a Poisson-type model that takes the number of faults per unit of time as
independent Poisson random variables. The basic assumptions of this model are:
4. Defect repair is perfect; that is, no new defect is introduced during test.
6. Each unit of execution time during test is equally likely to find a defect if the same code is
executed at the same time.
MODEL FORM: The mean value function, or the cumulative failure count, must be of the form
µ(t) = a(1 - e^(-bt))
for some constants a > 0 and b > 0, where a is the expected total number of faults (failures) to
be eventually detected and b is the fault detection rate per fault.
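The following is a small sketch of the Goel-Okumoto mean value function with assumed parameter values; the survival expression R(x | t) = exp(-(µ(t+x) - µ(t))) is the standard NHPP form and is added here only for illustration, not quoted from the text.

```python
import math

# Minimal sketch of the Goel-Okumoto mean value function mu(t) = a * (1 - exp(-b*t)).
# a = expected total faults, b = detection rate per fault (values below are assumed).

def go_mean_value(a, b, t):
    return a * (1 - math.exp(-b * t))

def go_reliability(a, b, t, x):
    """Probability of no failure in (t, t+x] for an NHPP model:
    R(x | t) = exp(-(mu(t+x) - mu(t)))."""
    return math.exp(-(go_mean_value(a, b, t + x) - go_mean_value(a, b, t)))

if __name__ == "__main__":
    a, b = 120.0, 0.05                   # assumed example values, not from the source
    print(go_mean_value(a, b, 30))       # expected faults detected by t = 30
    print(go_reliability(a, b, 30, 5))   # chance of surviving 5 more time units
```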
3. Error or Fault Seeding Models
In the error seeding model, a predefined number of artificially generated errors is "seeded"
into the program code. Test runs are then used to detect the errors and to examine the ratio
between actual and artificial errors, based on the total number of detected errors. Naturally,
the artificially generated errors are not known to the testers. In a first approximation, the
number of undetected errors can be estimated as follows:
FU = FG · (FE / FEG)
where FU is the number of undetected errors, FG is the number of non-seeded errors detected,
FE is the number of seeded errors, and FEG is the number of seeded errors detected. By seeding
errors into a document and then letting the document undergo testing of some kind, it is
possible to estimate how many real errors exist. From this, an estimate of the fault content of
the program prior to seeding is obtained and used to assess software reliability and other
relevant measures. The basic assumptions of this model are:
Mills' hypergeometric model is one model of the fault-seeding type [7]. This model is
based on the approach that a number of known faults are randomly seeded in the program to be
tested. The program is then tested for some interval of time. The original indigenous fault count
can be estimated from the numbers of indigenous and seeded faults uncovered during the test
by using the hypergeometric distribution.
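A minimal sketch of the seeding estimate FU = FG · (FE / FEG) quoted above; the counts used are made up for illustration.

```python
# Minimal sketch of the error-seeding estimate quoted above: FU = FG * (FE / FEG),
# where FG = unseeded (real) errors detected, FE = errors seeded, FEG = seeded errors detected.

def estimate_undetected_errors(fg, fe, feg):
    if feg == 0:
        raise ValueError("At least one seeded error must be detected to estimate.")
    return fg * (fe / feg)

if __name__ == "__main__":
    # Suppose 20 of 25 seeded errors and 40 real errors were found during testing (assumed values).
    print(estimate_undetected_errors(fg=40, fe=25, feg=20))  # 50.0
```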
4. Input-Domain Based Models
The input-domain based category includes models that assess the reliability of a program when
the test cases are sampled randomly from a well-known operational distribution of inputs to the
program. By finding all unique paths through the program and then executing each and every
one, it is possible to guarantee that everything is tested. The Nelson model is an example of the
input-domain type [8]. The basic assumptions of this model are as follows:
Nelson Model
In this model, the reliability of the software is calculated by executing the software for a sample
of n inputs. The inputs are randomly selected from the input domain set S = (Si, i = 1, ..., N),
where each Si is the set of data values required for an execution. Each Si is selected with
probability Pi; the set (Pi, i = 1, ..., N) is the operational profile, or simply the user input
distribution, and random sampling is done according to this probability distribution. Suppose ne
is the number of executions that lead to failure. Then the estimate of reliability R1 is:
R1 = 1 - ne/n
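A small sketch of the Nelson estimate R1 = 1 - ne/n, using a toy program and a stand-in operational profile (both are assumptions, not from the text):

```python
import random

# Minimal sketch of the Nelson input-domain estimate R1 = 1 - ne/n:
# run the program on n inputs sampled from an assumed operational profile and
# count the executions (ne) that fail.

def nelson_reliability(program, inputs):
    failures = 0
    for x in inputs:
        try:
            program(x)
        except Exception:
            failures += 1
    return 1 - failures / len(inputs)

if __name__ == "__main__":
    def program(x):            # toy program that fails on a zero input
        return 1 / x
    sample = [random.randint(-5, 5) for _ in range(1000)]  # stand-in operational profile
    print(nelson_reliability(program, sample))
```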
Software Maintainability
Maintainability is a measure of how easy it is to keep a software system running smoothly and
effectively. A maintainable system can be easily adapted to changing needs, whether those
changes are made by the original developers or by new members of the team.
Developers should aim to create software that is maintainable from the start. However, even
established code bases can be improved to increase maintainability. By refactoring code and
improving documentation, teams can make it easier to keep their systems up-to-date and
effective. Key characteristics of maintainable code include:
1. Readability: The code should be easy to read and understand. This makes it easier for
new developers to jump in and start working on the project, and also makes it easier to
spot potential bugs.
2. Modularity: The code should be organized into logical modules that can be independently
maintained and updated. This allows changes to be made without affecting the rest of the
codebase and makes it easier to reuse code in other projects (a short sketch after this list
illustrates modularity and testability).
3. Testability: The code should be written in a way that makes it easy to write automated
tests. This helps ensure that new changes don’t break existing functionality, and also
makes it easier to catch bugs before they make it into production.
4. Flexibility: The code should be flexible enough to accommodate changing requirements
over time. This includes using abstractions and design patterns that make the code more
resilient to change and avoiding tightly-coupled dependencies that make it difficult to
modify individual components.
5. Scalability: The code should be designed for scalability from the outset. This means
considering things like performance, caching, load balancing, and other factors that can
impact the system’s ability to handle increased traffic or data volume.
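A brief sketch of the modularity and testability points above: a small, single-purpose function plus an automated test that guards against regressions. The function and test names are illustrative, not from the text.

```python
import unittest

# Illustrative sketch: one readable, single-purpose function and an automated test
# that catches regressions before they reach production.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount; rejects invalid percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```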
Reverse Engineering
The process of analyzing a system or product to learn how it functions
without having access to the source code or design is known as reverse
engineering. This is frequently used with hardware, software, and other
technologies. "Reverse engineering" refers to the process of reproducing
the final product structure, design, or functionality by going backward
from it.
Uses of reverse engineering include:
2. Interoperability
3. Security Analysis
4. Competitive Analysis
5. Software Compatibility
Challenges of reverse engineering include:
2. Incomplete Information
3. Time-Consuming
4. Maintenance Difficulties
5. Changing Environments
Software Reengineering
What do you know about software reengineering? Even if you developed the best
software of the era, you can still reengineer this to be something much better. Learn
what reengineering is, why your software needs it, and how it is done.
So what would you do if you created something and you think that it is in its most
perfect form? Should you just stop there? Should you improve it anyway? You may
prefer to say ‘Yes’ which is not bad at all. However, in the technological world, it is
not easy to just stop.
In a fast and constantly changing industry, your business needs to keep up,
accelerate even. As far as satisfaction goes, it would be a waste to stop your pursuit
to be better. So when you feel like you’ve already engineered a good software,
platform, or business, what should you do next? Simple: REENGINEER!
Software reengineering generally involves three activities:
1. Reverse Engineering
2. Restructuring
3. Forward Engineering
The flow ends with forward engineering. This is the process of integrating
the latest specifications based on the results of the evaluations from
reverse engineering and restructuring.
Software Reuse
Software reuse is the use of existing artifacts to build new software components. Software reuse
also ensures that teams aren't reinventing the wheel when it comes to solving repetitive
problems. Software libraries, design patterns, and frameworks are all great ways to achieve this.
By taking the time to code with reuse principles in mind as well as reusing existing artifacts, you'll
save time in the long term. In short, reuse is how technologists can avoid the proverbial
reinvention of the wheel.
There are several benefits to reusing existing artifacts like code or design patterns. With a
robust reuse strategy, tech organizations are likely to find:
Elimination of duplicative work: Duplicate efforts among many teams that net the
same result mean wasted effort, wasted cycles, and less time spent on potentially higher-value
work. The potential productivity gains on higher-value business constructs through the
avoidance of duplicate efforts may be the most significant result for an organization.
Consistency of system construction and behavior: Creating an ecosystem in which
approved patterns, frameworks, and libraries are reused across many disparate systems
provides for consistent, highly predictable system behavior. Additionally, as problems
arise and are solved within these ecosystems, they can be more easily shared and
implemented across organizations by leveraging those same components.
Shortened software development time: Developers save time when an application
they are working on requires a piece of code that already exists. Time spent reinventing
wheels is always more effectively used innovating.
Improved speed and quality in feature and system delivery: Consistent architecture
and design patterns, coupled with a finite set of already codified components, can be
used to quickly construct large portions of systems and features with a baseline range of
acceptable security, performance, and reliability characteristics. This frees cycles and
allows engineers to focus on higher value deliverables and solve ever more challenging
problems.
Client-Server Architecture
Client-server development typically involves designing and building software where a client (the user-facing application)
interacts with a server (a backend system that processes and stores data). The client sends requests
to the server, which processes them and sends back the appropriate responses.
Key Components:
1. Client:
o The client is typically the part of the system that interacts directly with the user. It
can be a web browser, mobile app, or desktop application.
o The client sends requests to the server to retrieve or send data.
2. Server:
o The server is a powerful computer or system that stores resources, processes data,
and responds to client requests.
o Servers usually run back-end software that handles logic, database interactions, and
processing tasks.
3. Communication Protocol:
o HTTP/HTTPS: Commonly used for web applications (client-server communication
over the web).
o Sockets: Low-level communication typically used in real-time systems.
o WebSockets: For bidirectional communication between client and server, often used
for real-time applications (like chat apps or live updates).
4. Architecture:
o Monolithic: Both client and server code are bundled together.
o Microservices: The server-side system is broken into smaller, independent services
that communicate via APIs.
5. APIs:
o The server typically exposes a set of APIs (Application Programming Interfaces) that
the client can interact with. This allows for modular and scalable development.
Example Flow:
1. User Action: A user opens a web application and interacts with the interface (e.g., clicking a
button).
2. Client Request: The client sends an HTTP request to the server asking for some data or
performing an action.
3. Server Processing: The server processes the request, maybe querying a database or
performing some logic.
4. Response: The server sends back a response to the client with the requested data or a
success/error message.
5. Client Update: The client processes the response and updates the user interface (a minimal
sketch of this round trip follows).
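A minimal sketch of this request/response flow using only the Python standard library; the endpoint name, port, and payload are assumptions made for the example.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal sketch of the client-server flow described above.

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):                          # 3. Server processing
        body = json.dumps({"message": "hello from the server"}).encode()
        self.send_response(200)                # 4. Response
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):              # keep the demo output quiet
        pass

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 8080), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # 2. Client request; 5. Client processes the response and updates its view.
    with urllib.request.urlopen("http://127.0.0.1:8080/greet") as resp:
        print(json.loads(resp.read()))         # {'message': 'hello from the server'}
    server.shutdown()
```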
Design Considerations:
Service-Oriented Architecture (SOA)
1. Definition:
SOA is an architectural pattern in which software components (called services) are provided over a
network, and each service is independent and can interact with other services to achieve specific
business processes.
2. Key Characteristics of SOA:
Loose Coupling: Services are independent, meaning they don’t need to know the details of
how other services are implemented. Communication between services is abstracted
through standardized interfaces and protocols.
Interoperability: Services can communicate with each other regardless of the platform,
technology, or language they are implemented in (as long as they adhere to common
standards like HTTP, XML, or JSON).
Reusability: Services can be reused across different applications, reducing redundancy in
development.
Discoverability: Services are often registered in a Service Registry, allowing other services or
clients to discover and interact with them.
Abstraction: Services hide their internal workings from consumers. Consumers only need to
understand how to use the service (its interface).
3. Components of SOA:
1. Services:
o The core building blocks of SOA, representing reusable business functions or
operations.
o Each service is defined by an interface (contract) and can be called by other services
or client applications.
2. Service Registry:
o A repository where services are registered, discovered, and accessed. It stores
metadata about services (e.g., service name, interface, and location).
o Examples: UDDI (Universal Description, Discovery, and Integration), which is used to
store web service definitions.
3. Service Consumer:
o An application or system that consumes the functionality provided by services. It
sends requests to services and processes responses.
4. Service Provider:
o The implementation of a service. It is responsible for carrying out the business logic
when a service is called.
5. Enterprise Service Bus (ESB):
o A middleware layer that facilitates communication between services, often
providing features like message routing, transformation, and security.
o The ESB acts as a message broker and can help orchestrate service calls and manage
workflow between services. (A minimal registry/provider/consumer sketch follows this list.)
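A deliberately simplified sketch of the registry, provider, and consumer roles above, using an in-process dictionary in place of a real registry or ESB; the service name and function are illustrative.

```python
# Illustrative sketch only: not a real UDDI registry or ESB.

registry = {}

def register(name, provider):
    """Service provider publishes its callable interface under a name."""
    registry[name] = provider

def discover(name):
    """Service consumer looks a service up by name instead of hard-coding it."""
    return registry[name]

# Provider: implements the business logic behind the contract.
def currency_conversion(amount, rate):
    return round(amount * rate, 2)

register("currency-conversion", currency_conversion)

# Consumer: only knows the service name and its interface, not its implementation.
convert = discover("currency-conversion")
print(convert(100, 0.92))   # 92.0
```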
4. Types of Services:
Web Services: Services accessible over the web using standard protocols like HTTP and
SOAP/REST.
Microservices: A variant of SOA where services are small, independently deployable, and
work together to form a larger application. Unlike traditional SOA, microservices emphasize
very fine-grained service decomposition.
5. Communication in SOA:
SOAP (Simple Object Access Protocol): A messaging protocol used to exchange structured
information in the implementation of web services. SOAP is more rigid but very robust and
secure.
REST (Representational State Transfer): A more lightweight alternative to SOAP, used for
creating web services with standard HTTP methods like GET, POST, PUT, DELETE, etc.
JMS (Java Message Service): An asynchronous messaging protocol in Java.
MQ (Message Queues): Systems like RabbitMQ or Kafka used for asynchronous
communication.
6. SOA vs. Microservices:
While SOA and Microservices share some similarities (they both involve service-based
architectures), there are key differences:
Granularity: Microservices are smaller, more focused services that are designed to be
independently deployable and can be built and maintained by a smaller team.
Technology Stack: Microservices are more flexible when it comes to using different
technologies and programming languages (Polyglot architecture).
Communication: Microservices typically communicate over REST or messaging systems,
while SOA often uses SOAP and more formal standards like WS-*.
7. Benefits of SOA:
Scalability: As services are decoupled, you can scale them independently based on demand.
Reusability: Services are reusable across multiple applications or clients.
Flexibility: SOA allows for easier modification or replacement of individual services without
affecting the entire system.
Faster Development: Teams can work on different services concurrently, speeding up
overall development.
8. Challenges of SOA:
Complexity: Managing multiple services, each with its own lifecycle, versioning, and
dependencies can be complex.
Performance Overhead: Network latency and overhead can increase with the need for
multiple network calls between services.
Governance: It can be difficult to enforce consistent rules and standards across many
different services, leading to potential issues with compatibility or security.
Service Management: Maintaining a catalog of services, dealing with service discovery, and
ensuring all services are up and running can require significant operational overhead.
9. SOA Design Best Practices:
Loose Coupling: Services should not depend on one another’s implementation details.
Single Responsibility Principle: Each service should have a clear, specific function (e.g., don’t
combine user authentication and product recommendation in one service).
Autonomy: Services should be independent and able to operate without depending too
much on external systems.
10. Security in SOA:
Security must be handled both at the network level (e.g., using SSL/TLS) and at the service level:
Authentication: Ensure that users are who they claim to be. Can use OAuth, JWT, etc.
Authorization: Ensure that users have the necessary permissions to access resources.
Data Encryption: Sensitive data should be encrypted during transmission.
Service-Level Security: Each service should verify that requests and responses are
authorized and valid (a minimal token-based sketch follows).
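A minimal sketch of token-based service-level security, assuming the third-party PyJWT package; the secret, claims, and scope names are illustrative assumptions, not from the text.

```python
import jwt   # assumes the third-party PyJWT package (pip install PyJWT)

# Illustrative sketch of authentication (signature check) and authorization (scope check)
# with a signed JWT shared between services.

SECRET = "shared-signing-secret"   # assumed secret for the example

def issue_token(user, scope):
    return jwt.encode({"sub": user, "scope": scope}, SECRET, algorithm="HS256")

def authorize(token, required_scope):
    """Verify the signature (authentication) and the scope claim (authorization)."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return required_scope in claims.get("scope", "").split()

if __name__ == "__main__":
    token = issue_token("alice", "orders:read")
    print(authorize(token, "orders:read"))    # True
    print(authorize(token, "orders:write"))   # False
```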
Software as a Service (SaaS) is a cloud computing model where applications are provided over the
internet as a service rather than being hosted locally on a user's machine. This model eliminates the
need for organizations to install and maintain software, as the SaaS provider handles everything
from infrastructure to software updates.
1. Definition of SaaS:
SaaS is a cloud-based delivery model where software is hosted on the cloud by a service provider
and made available to customers via the internet. Rather than purchasing a software license and
installing it on local machines, users subscribe to the software and access it remotely.
Example SaaS products: Google Workspace (Docs, Sheets), Microsoft 365, Dropbox,
Salesforce, Zoom, Slack, etc.
Key Feature: SaaS applications are typically accessed through a web browser, requiring no
installation or maintenance from the user side.
Key Components of a SaaS Application:
Application: The actual software being used by customers, which provides the functionality
the service is designed to offer (e.g., email, CRM, collaboration tools).
Database: The storage system for customer data. SaaS applications often use cloud
databases, which allow for flexible scaling and reliability.
API: Many SaaS products offer APIs for integration with other systems, such as CRMs, ERPs,
or external databases.
Authentication: Users log in using credentials, which may be integrated with single sign-on
(SSO), OAuth, or multi-factor authentication (MFA) for security.
Admin Panel: A backend interface for managing user accounts, settings, and configurations.
Cloud Infrastructure: SaaS is hosted on the cloud, with major providers like AWS, Azure, and
Google Cloud providing the underlying infrastructure.
4. Benefits of SaaS:
Cost-Effective:
o No upfront costs for infrastructure or licensing. SaaS is typically subscription-based,
with pay-per-user or pay-per-feature models.
o Reduced IT overhead (maintenance, updates, hardware).
Scalability:
o SaaS applications can easily scale to accommodate more users or larger data
volumes. Service providers typically offer various subscription plans to match
different levels of usage.
Accessibility:
o Accessible from anywhere with an internet connection, often supporting multiple
devices and platforms (e.g., desktop, tablet, mobile).
Automatic Updates:
o The SaaS provider handles all software updates and maintenance, ensuring the
application is always up-to-date without any intervention from users.
Security:
o Many SaaS providers invest heavily in security infrastructure, ensuring high levels of
protection for customer data, including encryption, firewalls, and backup strategies.
Rapid Deployment:
o SaaS applications can be up and running quickly, with minimal setup and
configuration required from the end user.
5. Challenges of SaaS:
6. Common Types of SaaS Applications:
Collaboration Software:
o Tools for team communication and collaboration, such as Slack, Microsoft Teams, or
Google Workspace (Docs, Sheets, Drive).
Customer Relationship Management (CRM):
o Software for managing customer relationships and sales pipelines, such as
Salesforce, HubSpot, and Zoho CRM.
Enterprise Resource Planning (ERP):
o Applications that help manage business processes, such as SAP, Oracle NetSuite, and
Microsoft Dynamics.
Accounting & Finance:
o Tools for managing finances, payroll, invoicing, etc., like QuickBooks Online, Xero,
and FreshBooks.
E-commerce Platforms:
o Software for building and managing online stores, such as Shopify, BigCommerce,
and Squarespace.
Project Management:
o Tools for managing projects, tasks, and workflows, such as Trello, Asana, and
Monday.com.
7. SaaS vs. Traditional Software:
Aspect | SaaS | Traditional Software
Updates | Automatically updated by the vendor. | Requires manual updates from IT.
8. Popular SaaS Examples:
Google Workspace: A suite of productivity and collaboration tools, including Gmail, Docs,
Sheets, Drive, etc.
Salesforce: A widely used CRM platform for sales, marketing, and customer service
management.
Microsoft 365: Cloud-based office suite with tools like Word, Excel, Outlook, and OneDrive.
Slack: A collaboration and messaging platform for teams.
Zoom: A cloud-based video conferencing platform.
Dropbox: A cloud storage service with collaboration features.
Shopify: A SaaS platform for creating and managing online stores.
9. Security Considerations:
Data Encryption: Data should be encrypted in transit (SSL/TLS) and at rest (AES encryption).
Authentication: Support for strong authentication methods (e.g., SSO, two-factor
authentication).
Compliance: Ensure the SaaS provider complies with relevant industry regulations (e.g.,
HIPAA, GDPR).
Service Level Agreement (SLA): Ensure the provider offers clear uptime guarantees and
support response times.
Data Backups: Ensure the provider has a solid data backup and recovery plan.
10. Trends in SaaS:
AI and Machine Learning: SaaS providers are integrating more AI-driven features for
personalization, automation, and predictive analytics.
Vertical SaaS: SaaS tailored to specific industries (e.g., healthcare, legal, real estate) is
growing in popularity.
No-code/Low-code Platforms: SaaS platforms that allow users to build their own apps with
minimal programming knowledge are gaining traction.
Edge Computing: As SaaS moves towards more decentralized models, there’s a push to
handle data processing closer to the user’s location (e.g., on edge devices).
11. Choosing a SaaS Solution:
User Needs: Evaluate whether the SaaS solution addresses specific business requirements.
Cost-Effectiveness: Compare the total cost of ownership for SaaS versus on-premises
software.
Vendor Reputation: Research the provider’s reliability, reputation, and service history.
Integration: Ensure the SaaS integrates well with existing systems and tools.
Customization: Understand the limitations and customization options of the platform.