
Test Case Reduction and SWOA Optimization for Distributed Agile Software Development Using Regression Testing

Madan Singh ([email protected])
Research Scholar, J. C. Bose University of Science and Technology, YMCA Faridabad
Naresh Chauhan
J. C. Bose University of Science and Technology, YMCA Faridabad
Rashmi Popli
J. C. Bose University of Science and Technology, YMCA Faridabad

Research Article

Keywords: Regression Testing, Test Case Reduction, Fractional Sigmoid based K-means Clustering,
Support-based Whale Optimization Algorithm

Posted Date: January 24th, 2023

DOI: https://doi.org/10.21203/rs.3.rs-2498593/v1

License: This work is licensed under a Creative Commons Attribution 4.0 International License.
Read Full License

Additional Declarations: No competing interests reported.


Test Case Reduction and SWOA Optimization for Distributed Agile
Software Development Using Regression Testing

Madan Singh^a*, Dr. Naresh Chauhan^b, Dr. Rashmi Popli^c

^a Research Scholar, J. C. Bose University of Science and Technology, YMCA Faridabad, India.
^b Professor, Department of Computer Engineering, J. C. Bose University of Science and Technology, Faridabad, India.
^c Deputy Dean (Consultancy), J. C. Bose University of Science and Technology, YMCA Faridabad, India.
*Email Id: [email protected]

Abstract:
Regression testing is a well-established practice in software development, but its position and importance have shifted in recent years as agile approaches have grown in popularity, emphasizing the fundamental role of regression testing in preserving software quality. In previous techniques, the challenges to address are determining the number and size of clusters and optimizing to balance the cost and efficacy of the strategy. To overcome these existing drawbacks, this research study proposes Test Case Reduction and SWOA Optimization for Distributed Agile Software Development Using Regression Testing. The purpose of this study is to examine regression testing strategies in agile development teams and to determine the optimal clustered test cases. The proposed strategy is divided into two stages: prioritization and selection. Prioritization and selection are carried out once the test cases have been retrieved and grouped. The test clusters are sorted and prioritized in this stage to ensure that the most critical cases are chosen first. Second, the Support-based Whale Optimization Algorithm (SWOA) is used to choose test cases with a greater frequency of failure or coverage criterion. The results of the assessment metrics show that the proposed approach substantially outperforms other current regression testing strategies. According to the experimental findings, our suggested technique outperforms current methods.

Keywords: Regression Testing, Test Case Reduction, Fractional Sigmoid based K-means Clustering, Support-based Whale Optimization Algorithm.

1. Introduction
Software project development entails a variety of tasks, ranging from requirement collection through testing and maintenance, all of which must be completed within a defined period and budget [1]. The foundation of software engineering is logical and analytical work. Because of the fast pace of change in client needs and rapid technological innovation, software development is more complicated than other types of engineering projects. As a result, achieving particular objectives while satisfying a variety of constraints becomes a problem for successful software project management [2, 3].
In the context of software estimation, estimation is defined as anticipating measures such as cost and effort, measured in capital and person-hours. Estimation is a major task in the management of software projects as it affects both the client and developer sides. If the estimation is accurate, then the development can be planned, the progress can be monitored, and the cost and completion date can be negotiated by the client side. Also, as the major reason for software failure lies in the inaccurate estimation of relevant parameters, estimation becomes a crucial task in predicting the reliability of software [4]. Estimations are completed within a specific period, and they need to be updated regularly as the project progresses. The effort essentially forecasts the number of person-hours required. In recent years, the software industry has adopted Agile as a software development framework since it has proven superior to other traditional development frameworks such as waterfall. The software size, which is determined by the number of story points (a metric of user stories) and the team velocity, determines the majority of the effort in agile. We looked at a publicly available dataset of projects with various features that are needed to measure effort in agile software projects. Prediction is often challenging due to the high number of features [5, 6].
The Agile Methodology values developer interactions over a variety of processes, working software over extensive documentation, customer collaboration over contract negotiations, and responding to change over keeping to plans. It has outperformed the existing standard methods in terms of results. An information system audit is required to guarantee that the high-quality goods created by these approaches can be delivered to customers promptly [7]. The agile method, on the other hand, is not universally viewed as the "killer" solution: "I've spent two decades working in banks, delivering successful technology projects, and the time's come to raise my hand and say what desperately needs to be said: the Agile methodology and banks do not mix". This is a controversial viewpoint, especially among the new generation of technology professionals who use agile software development methodologies, but it must be addressed and publicly debated [8, 9].
The quality assurance team can go through the latest merged build and detect faults. The faults can be that the build does not satisfy the requirements, the build is not adaptable, or the build affects earlier functionalities [10]. Optimization, or mathematical programming, refers to the selection of a suitable element, subject to certain constraints, from some set of available alternatives. The majority of population-based evolutionary computing approaches are inspired by natural evolution. Test case optimization is exercised for reducing, minimizing, and prioritizing test cases so that cost and time can be reduced. The key stages in the handling of faults should involve recognition of faults, classification of defects, and analysis of the faults. In addition, detecting and eliminating the fault form a critical part of the handling process [11]. Many real-world operations in areas like transportation and logistics, manufacturing and production, finance and banking, bioinformatics, health care, energy systems, and telecommunication networks may be modeled using combinatorial optimization techniques. In real-world optimization issues, rich and soft constraints, as well as non-smooth objective functions, are prevalent, making many of them NP-hard. They are also frequently huge in scale, with hundreds or thousands of factors to consider (e.g., customers, products, assets, facilities). In this case, exact optimization procedures are unsuccessful, thus metaheuristics are an excellent way to generate near-optimal solutions in a reasonable amount of time [12].
Regression testing (RT) is a key component of agile methods, and it necessitates a substantial amount of time and resources throughout deployment. To decrease the time between tests, agile methods use recursive implementations and test sequences. RT is frequently used to check the quality of software programs after they have been changed during the development phase. It is a maintenance task carried out to ensure that the software's current functioning has not been harmed by the changes and/or alterations [13]. Several digital appliances have been created in recent years for a variety of processes. However, unexpected outcomes have raised various concerns, so an inspection scheme is required to check each application's behavior, and replica regression testing is carried out on many platforms. Furthermore, the regression testing cycle is broken down into six stages: selection, reduction and priority-based testing, test setup or arrangement, ordering of test cases, execution of test cases, and progress evaluation with flaw mitigation [14].
Regression test selection (RTS) decreases testing expenses by purposely covering the modules whose runtime behavior may be altered by the changes introduced in each iteration, selecting a subset from the initial test cases. The subset is made up of two sorts of test cases: one is used to cover the modules that have been modified, and the other is used to cover the modules that have not been modified but are related to modified modules. To confirm the validity of system behavior, test cases covering the changed services, for example, must be executed; this sort of test case can be selected directly from the modified services. Test cases covering services with calling ties to the altered services, for example, should be run to check that the modifications do not cause any other functional failures. As a result, in RTS studies, change effect analysis is required to choose the predicted test cases. According to the state of the art in regression testing research, most RTS techniques require artifacts such as requirements, design models, and code files to determine which test cases are affected by the modifications [15]. The contributions of this paper are as follows:
• Product requirements are collected, and user stories are extracted based on the product requirements. Then, test cases are extracted and the test case change frequencies are calculated.
• Test cases are prioritized and selected using the FSK-means module, based on frequently changing test cases and failed test cases.
• Test clusters are sorted and prioritized, ensuring that the most important cases are selected, and an optimization algorithm is used to select the test cases with the highest failure or coverage criteria.
The rest of the paper is organized as follows: Section 2 describes related works relevant to the proposed methodology, and Section 3 explains the proposed Test Case Reduction and SWOA Optimization for Distributed Agile Software Development Using Regression Testing. The experimental results are analyzed in Section 4, and the conclusion is presented in Section 5.

2. Related Works
This section summarises the regression test prioritization literature. The existing literature demonstrates the various techniques researchers have employed to enhance TCP (Test Case Prioritization) and ORT (Optimization-based Regression Testing) techniques for regression testing.

Al-Hajjaji et al. [33] proposed a similarity-based TCP technique for product lines with diverse feature interaction coverage. The study analysed the effectiveness in both real and seeded fault detection, after evaluation on three different applications of distinctive feature size. Horváth et al. [34] investigated the impact of code coverage-based Java language tools on TCP and TSM, and found that coverage information is useful to highlight the number of code lines covered by each test case for optimization during RT. Wang et al. [35] proposed location-based TCP for embedded systems using the law of gravitation for high reliability after modification. Shin et al. [36] proposed a multi-objective TCP method for uncertainty prediction in cyber-physical systems. Azizi and Do [37] proposed a TCP-based collaborative filtering recommender system using change-history information in a dynamic environment for the decision-making process. They observed that multiple criteria can improve the effectiveness of TCP and that the fault rate needs to increase with new item additions in an intelligent way.

Haghighatkhah et al. [38] proposed RT for fault detection in a continuous integration environment. Availability of failure history data is an important criterion, but it only improves effectiveness to a certain extent, while history-based diversity is more effective but has the disadvantage of high execution time. Ouriques et al. [39] compared different existing TCP techniques in the context of model-based testing using a replicated study to investigate the influence of test case size on the fault detection rate.
Qiang Qu et al. [16] proposed a hybrid WOA based on complementary differential evolution, dubbed CDEWOA. To increase the variety of the starting population, a novel uniform initialization strategy is used first. Second, to improve search accuracy and speed, the WOA incorporates differential evolution with a complementary mutation operator. Third, CDEWOA can jump out of local optima thanks to the addition of a local peak avoidance method. Finally, the proposed CDEWOA is put to the test using 14 mathematical optimization tasks. Ali Abdullah Hassan et al. [17] noted that extensive testing is practically impossible due to the possibly numerous software system input combinations; combinatorial t-way analysis was used to solve this problem. Their paper introduces a novel constraint-supporting approach based on the WOA, which supports the No Free Lunch theorem while also possibly giving new insights into the whole t-way development process. This is the first time the WOA has been used to generate t-way test suites with constraint support as part of a search-based software engineering (SBSE) method. Yousef Hassouneh et al. [18] created an improved version of the WOA by merging the WOA with a single-point crossover approach. By enhancing the exploration phase, the proposed adjustment helps the WOA avoid local optima. The event, roulette wheel, linear rank, stochastic universal sampling, and random-based selection are the five approaches most commonly used. The performance of the suggested enhancement is evaluated using 17 accessible SFP datasets from the PROMISE repository. Huiling Chen et al. [19] addressed the main flaws of the original technique when dealing with multi-dimensional issues, namely slow convergence and the ease of slipping into local optima, with a strengthened variation called RDWOA. Two strategies are introduced into the original WOA. One approach for hastening the convergence of this method is to employ random spare or random replacement. The other is the double adaptive weight strategy, which was developed to improve exploratory search patterns in the early stages and exploitative behaviors in the later stages. The algorithm's convergence speed and overall searchability are considerably improved when the two procedures are combined. To evaluate and study the benefits of the proposed RDWOA, three well-known engineering design problems, as well as typical benchmark examples such as unimodal, multi-modal, and fixed multi-modal functions, are employed. Ersin Kaya et al. [20] observed that the standard WOA update modifies all dimensions of the candidate solutions, which has the disadvantage of causing the algorithm to stagnate as it converges. Some well-known meta-heuristic techniques address this problem by modifying one or more dimensions in their update scheme. Their paper proposes a fuzzy logic controller (FLC) based adaptive WOA (FAWOA) to improve WOA exploitation behavior. The proposed FLC determines the rate of change in terms of dimension, and the WOA update scheme is achieved using the FLC. The proposed FAWOA is examined and compared to various metaheuristic techniques utilizing 23 well-known benchmark problems. The differential evolution method earns the best results on only 10 of the benchmark issues, while FAWOA achieves the best results on 11 of them.
Table 1 provides a summary of the existing techniques based on the relevant factors and their shortcomings, such as low fault detection ability, considering only code criteria, continuous integration, etc. According to the literature, the primary goals of various regression testing techniques are to improve fault detection while decreasing redundant faults and irrelevant test cases by employing various criteria like coverage and historical data. Due to a few factors, these studies have not improved the rate of fault detection. These factors are test cases (TC), number of test cases, number of faults, precision, recall, and F-measure, as described in Table 1, with reasons described in the existing literature. These elements, which are relevant to the scope and context of our work and form the basis for comparing results, have been identified in various earlier studies.
Table 1: Comparative study of the existing methods

| Author                   | Methodology                   | TC     | No of TC | Number of faults | Precision | Recall  | F-measure |
|--------------------------|-------------------------------|--------|----------|------------------|-----------|---------|-----------|
| Qiang Qu [16]            | CDEWOA                        | 643    | 950      | 32               | 46.0%     | 16.8%   | 79%       |
| Ali Abdullah Hassan [17] | CSA                           | 33,018 | 2334     | 45               | High      | -       | -         |
| Yousef Hassouneh [18]    | WOA                           | 24,541 | 2420     | 40               | 82.4%     | 58.4%   | 86.27%    |
| Huiling Chen [19]        | RDWOA                         | 879    | 3220     | 30               | High      | Average | Average   |
| Ersin Kaya [20]          | FLC & FAWOA                   | 30,178 | 2890     | 32               | 36%       | 54%     | 74%       |
| Al-Hajjaji [33]          | Prioritizes TC                | 25,321 | 1101     | 48               | Low       | -       | -         |
| Horváth [34]             | Java bytecode and source code | 30,210 | 950      | 40               | Low       | -       | -         |
| Wang X [35]              | Law of gravitation            | 50,121 | 2334     | 32               | Low       | -       | -         |
| Shin, S. Y [36]          | CPS                           | 642    | 2420     | 37               | Average   | 74%     | 74%       |
| Azizi, M. [37]           | Collaborative filtering       | 32,118 | 990      | 42               | High      | Average | 50%       |
| Haghighatkhah [38]       | Regression testing            | 20,541 | 1560     | 45               | 82.4%     | -       | -         |
| Ouriques, J. F. S. [39]  | Model-based testing           | 31,210 | 1238     | 39               | High      | -       | -         |

3. Proposed System
This paper proposes a test case reduction and optimization mechanism to improve the quality of releases by determining which test cases vary frequently between software user story versions. There are two steps to the proposed scheme. First, the test cases that change most frequently are grouped: based on the frequency of change, the test cases are separated into groups by the proposed fractional sigmoid-based K-means (FSK-mean) module. Afterward, once the test cases have been retrieved and grouped, prioritization and selection are carried out. Throughout this stage, the test clusters are sorted and prioritized to ensure that the most critical cases are selected first. Second, the Support-based Whale Optimization Algorithm (SWOA) is utilized to select test cases that have a higher failure frequency or coverage criterion. All of the selected test cases from all of the significant test case clusters that have the highest failure frequency or coverage are included in the test suite. Figure 1 shows a block schematic of the suggested technique.

Figure 1: Proposed process flow of test case reduction and optimization approach

The product requirements are retrieved from the database, and then user stories $US = \{Us_1, Us_2, Us_3, \ldots, Us_n\}$ are derived based on the product requirements, as shown in Figure 1. After that, test cases are extracted and test case frequencies are estimated. The proposed model uses the fractional sigmoid-based K-means (FSK-mean) module to prioritize and choose test cases based on frequently changing test cases and failed test cases. Following that, test clusters are sorted and prioritized to ensure that the most critical cases are identified, and an optimization method is utilized to select test cases with a higher failure frequency or coverage criterion. Previous studies have some drawbacks; to overcome these existing issues, this research study introduces new techniques based on [13] and [32]. The proposed processes are explained in depth in the sections that follow.
3.1. Test Case Reduction and SWOA Optimization for Distributed Agile Software Development Using Regression Testing:
This section introduces the suggested model and explains how it works. To address the drawbacks of existing methodologies, such as irrelevant test case selection and redundant faults, the suggested model uses the fractional sigmoid-based K-means (FSK-mean) module as well as the Support-based Whale Optimization technique for regression testing. The test case development histories can be used for clustering, which lessens the number of test cases; test cases are then prioritized using code coverage, ambiguity is removed in cases of the same priority, and afterward the primary-concern test cases from each cluster are picked to find the most defects as rapidly as feasible.
The main phases of the proposed model are briefly discussed in the following sections:

3.1.1. User stories: In an active sprint, user stories $TS = \{Ts_1, Ts_2, Ts_3, \ldots, Ts_n\}$ are used to specify the system functionality to be implemented. A user story explains the user's expectations of the system and represents work that a responsive cross-functional team should deliver as business value to the product within a sprint. The user stories for the selected sprint are saved in a file at this step. Each user story may have multiple test cases linked to it.

3.1.2. Extract test case: To get a list of all related test cases $T = \{T_1, T_2, T_3, \ldots, T_n\}$ (where $T$ denotes a test case) in the current release/iteration's test suite, the user stories related to the release are gathered. This process is explained in Algorithm 1 as follows:

Input : US
Output: TS , T
Step 1: Select US
/* Split user stories */
Step 2: For i=1 to n do
Step 3: If USi ==" User stories" Then
TSi  USi
Step 4: End if
Step 5: End For

/* Extract Test case */


Step 6: For i=1 to n do
Step 7: If TSi ==" TestCase" Then
Ti  TSi
Z c  Ti each version
Step 8: End if
Step 9: End For
Step 10: Stop
Algorithm 1: for user stories and test case extraction
3.1.3. Identify frequently changed test cases / calculate frequency: Determine which test cases vary frequently between software user story versions. Averaging the changes to related test cases over each modified version yields the frequency of change, which can be calculated using the formula below:

$$Z_q = \frac{\sum Z_c}{n} \qquad (1)$$

where $Z_q$ is the change frequency of a test case, $Z_c$ is the number of related test cases that have changed in each version, and $n$ is the total number of times the program has been changed.
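
As a minimal illustration of Eq. (1), the following Python sketch averages per-version change counts for one test case; the list-based data layout and function name are our assumptions rather than part of the paper (whose stated implementation platform is Java):

```python
def change_frequency(changes_per_version):
    """Eq. (1): Z_q = (sum of Z_c) / n, the average number of changed
    related test cases across the n modifications of the program."""
    n = len(changes_per_version)  # total number of program changes
    return sum(changes_per_version) / n if n else 0.0

# Hypothetical example: a test case whose related cases changed
# 3, 1, 0 and 2 times across four versions.
print(change_frequency([3, 1, 0, 2]))  # -> 1.5
```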
3.1.4. Cluster: The test cases are organized into groups with comparable change frequencies in the cluster module; in other words, test cases with similar change frequencies are clustered together. The fractional sigmoid-based K-means (FSK-mean) module is used in the FSK method, a semi-supervised clustering strategy that combines K-means with semi-supervised nonlinear dimensionality reduction. The number of components utilized to classify tests into clusters is $K$, which determines the overall number of clusters in the K-means clustering technique. Figure 2 shows the procedure, which begins with data extraction using the $k$ value and ends with data grouping based on similarity. To reduce complexity and ambiguity, we end up with a variety of groups of the unstructured large data. The distance between a test case frequency and the centroid of a cluster is calculated using the Euclidean distance (ED) as per equation (2):

$$e_{bq} = \sqrt{\sum_{k=1}^{n}\left(U_{bk} - U_{qk}\right)^2} \qquad (2)$$

where $e_{bq}$ is the Euclidean distance between test cases, $n$ is the number of instances, $U_{qk}$ denotes the extracted test instance feature values, and $U_{bk}$ the feature values over the total set of test cases.

3.1.5. Create variables: To begin, metrics are created to help prioritize tasks, with the number of frequently failed test cases being the most important. Prioritization of test system functionality is based on previous test runs as follows:

$$Q_r = \frac{\sum_j ZQ_j}{n} \qquad (3)$$

where $ZQ_j$ represents the entire quantity of unsuccessful test cases and $n$ is the total number of test cases that have failed. If there are numerous clusters with comparable failed-frequency percentages, code coverage criteria are utilized to prioritize them. In that scenario, the percentage of test case coverage, i.e., the total number of lines of code that each test case covers, is utilized instead.
3.2. Prioritize test cases based on the fractional sigmoid-based K-means module: The test clusters are then organized and prioritized in this step, ensuring that the most critical cases are chosen first with the help of fractional sigmoid-based K-means clustering. The step-by-step procedure is explained in detail as follows. This research study applies a fractional value to the prioritization criteria, which is mathematically represented as condition (4):

$$fract = Z_q \cdot Q_r \qquad (4)$$

Since the basic form of a fractional differential equation has an $m$-dimensional kernel, $m$ initial conditions must be given to obtain a unique solution based on the sigmoid function. Here, the fractional values are passed to the next stage, the sigmoid function calculation, which is represented as follows:
3.2.1. Sigmoid function calculation:
After the fractional calculation, the sigmoid function is computed using equation (5):

$$Sig_q = \frac{1}{1 + e^{-Q_i(t)}} \cdot fract \qquad (5)$$

where $Sig_q$ is the sigmoid function value. Here, the fractional value is multiplied by the sigmoid function for the prioritized test case cluster. After that, clustering is done using K-means.
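
Eqs. (4)-(5) combine into a single priority score per test case cluster. The sketch below shows one way to compute it; treating the sigmoid argument $Q_i(t)$ as the failed ratio $Q_r$ is our assumption, since the paper leaves it implicit:

```python
import math

def priority_score(z_q, q_r):
    """Eqs. (4)-(5): the fractional value fract = Z_q * Q_r weighted
    by a sigmoid of the failed-test-case ratio (our interpretation)."""
    fract = z_q * q_r                              # Eq. (4)
    return (1.0 / (1.0 + math.exp(-q_r))) * fract  # Eq. (5)

# Hypothetical example: change frequency 1.5, failed ratio 0.4.
print(priority_score(1.5, 0.4))  # -> ~0.359
```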

3.2.2. K-means clustering:

The preliminary data is separated into small partitions that are distributed among different hubs in a computer cluster; hubs are the central points of communication. The proposed calculation works on two levels: the main stage simply looks at the initial clustering result under a large value of $k$, and the subsequent stage combines the centroids of the primary stage to get the final arrangement of the clusters. Unlike basic K-means, where random probability sampling is used to determine the starting seeds, our method selects better seeds to reduce the number of rounds required to converge. The first point is randomly selected via probability sampling, while the remaining $k-1$ points are sampled using the formula below:

$$P_x = \frac{distance\left(Z_q, Sig_q\right)}{\sum_{x \in D} \min\left(distance\left(Z_q, Sig_q\right)\right)} \qquad (6)$$

Algorithm 2 provides the pseudocode for selecting the initial seeds. Following the selection of the initial centers, data points are assigned to them. In the second stage, the centroids produced by the main stage are combined utilizing a threshold value, determined by the following equation:

$$\theta = \frac{2}{n(n-1)} \sum_{i=1}^{n}\sum_{j=1}^{n} d_{ij}, \quad i \neq j \qquad (7)$$

Here, $n$ is the number of clusters created by the first stage and $d_{ij}$ signifies the distance between any two points $i$ and $j$. The threshold value is the average distance between any two centroids, analogous to the fractional and sigmoid values. The combining criterion is as follows: the threshold is compared to the distance between all of the cluster centers, and when the distance between two centers is less than the threshold, they are merged into a single cluster. The new cluster center is determined by the average of all the consolidated points.

Input: Z_q
Output: Centroids
Step 1: Select one point arbitrarily from Z_q
Step 2: While |Centroids| < k do
            sample the remaining centroids with the probability given in condition (6)
Step 3: End while
Step 4: Stop

Algorithm 2: Probability sampling

Input: product requirement data set and initial k
Output: set of clusters

Select k centers from Z_q utilizing Algorithm 2
Centroid = Centre
While m < itr do
    For i = 1 to n do
        For j = 1 to k do
            Distance[j] = CalculateDistance(Data[i], Centroid[j])
        End for
        average mean of distance = average mean(Distance[j])
        centroid[i] = (sum of data in cluster_i) / |cluster_i|
    End for
End while
For each i in centroid do
    For each j in centroid do
        If i = j then add i to merge[i]; continue
        End if
        If Distance[i, j] <= theta then
            add j to merge_i
            remove j from centroid
        End if
    End for
    calculate new centroid: centroid_final = (sum of merge_i) / |merge_i|
End for
Stop

Algorithm 3: Proposed fractional sigmoid-based K-means clustering algorithm

The FSK module is explained in the above section. The proposed algorithm for sampling the centroids is given in Algorithm 2, and the distance to every centroid is assessed in Algorithm 3. Finally, the test case clusters are prioritized based on the fractional sigmoid-based K-means cluster module. After the test case clusters are prioritized, SWOA is used to choose the optimal test case clusters, as explained in detail in the following sections.
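
To make the two-stage procedure concrete, here is a compact Python sketch of Algorithms 2 and 3, assuming one-dimensional priority scores as input; the iteration cap and function names are illustrative assumptions, and the paper's actual implementation is in Java:

```python
import random

def seed_centroids(scores, k):
    """Algorithm 2: pick the first seed at random, then sample the
    remaining k-1 seeds with probability proportional to their
    distance from the nearest existing seed (Eq. 6)."""
    centroids = [random.choice(scores)]
    while len(centroids) < k:
        dists = [min(abs(s - c) for c in centroids) for s in scores]
        total = sum(dists)
        if total == 0:                      # all points already covered
            centroids.append(random.choice(scores))
            continue
        r, acc = random.uniform(0, total), 0.0
        for s, d in zip(scores, dists):
            acc += d
            if acc >= r:
                centroids.append(s)
                break
    return centroids

def fsk_means(scores, k, iters=20):
    """Algorithm 3: Lloyd iterations on the priority scores, then merge
    centroid pairs closer than the mean pairwise distance (Eq. 7)."""
    centroids = seed_centroids(scores, k)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for s in scores:
            nearest = min(range(len(centroids)),
                          key=lambda j: abs(s - centroids[j]))
            clusters[nearest].append(s)
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    n = len(centroids)
    if n > 1:
        # Eq. (7): threshold = average distance over all centroid pairs
        theta = sum(abs(centroids[i] - centroids[j])
                    for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
        merged, used = [], set()
        for i in range(n):
            if i in used:
                continue
            group = [centroids[i]]
            for j in range(i + 1, n):
                if j not in used and abs(centroids[i] - centroids[j]) < theta:
                    group.append(centroids[j])
                    used.add(j)
            merged.append(sum(group) / len(group))  # mean of consolidated centers
        centroids = merged
    return centroids

# Hypothetical example: cluster 1-D priority scores into (up to) 3 groups.
print(fsk_means([0.1, 0.12, 0.5, 0.52, 0.9, 0.95], k=3))
```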

3.3. Support-based Whale Optimization Algorithm (SWOA):

Prioritization and selection are carried out after the test cases have been retrieved and grouped. The test clusters are selected and prioritized at this stage to ensure that the most important ones are given the most attention and the most critical cases are identified; then the Support-based Whale Optimization Algorithm is used to select test cases with a greater frequency of failure or coverage criterion. All of the test cases selected from across the significant test case clusters with the highest failure frequency or coverage are included in the test suite.

Whales are said to be extremely intelligent animals when it comes to mobility. The WOA is inspired by humpback whales' unique hunting behavior. Humpback whales are known for their elegance in hunting krill or small fish near the ocean's surface. Humpback whales use a one-of-a-kind hunting strategy recognized as bubble-net feeding: they swim about the prey and blow bubbles in a circle or a 9-shaped pattern. The steps are as follows:
3.3.1. Encircling prey
Humpback whales recognize the presence of prey and surround it. Because the location of the optimal design in the search space is unknown a priori, the WOA considers the present best candidate result to be the objective prey or very close to it. Once the top search agent has been identified, the remaining search agents attempt to update their locations in the direction of the best search agent. This is reflected in the following equations:

$$\vec{A} = \left|\vec{B} \cdot \vec{S}^*(v) - \vec{S}(v)\right| \qquad (8)$$

$$\vec{S}(v+1) = \vec{S}^*(v) - \vec{M} \cdot \vec{A} \qquad (9)$$

where $v$ indicates the current iteration, $\vec{M}$ and $\vec{B}$ are coefficient vectors, $\vec{S}^*$ is the position vector of the best solution achieved so far, $\vec{S}$ is the position vector, $|\cdot|$ is the absolute value, and $\cdot$ is an element-by-element multiplication. It is important to point out that $\vec{S}^*$ should be updated whenever a better result is obtained in an iteration.

The vectors $\vec{M}$ and $\vec{B}$ are determined as follows:

$$\vec{M} = 2\vec{m} \cdot \vec{z} - \vec{m} \qquad (10)$$

$$\vec{B} = 2 \cdot \vec{z} \qquad (11)$$

where $\vec{m}$ is reduced linearly from 2 to 0 over the course of the iterations and $\vec{z}$ is a random vector in the range [0, 1]. A search agent's position $(S, T)$ can be modified based on the current best record's position $(S^*, T^*)$. By altering the values of the $\vec{M}$ and $\vec{B}$ vectors, several locations around the best agent can be attained from the present position.
A similar concept can be used in an n-dimensional search space, where the search agents travel in hyper-cubes around the best solution discovered so far. As described in the preceding section, humpback whales employ the bubble-net method to attack their prey. The following is a numerical representation of this technique:

3.3.2. Bubble-net attacking method
To model the bubble-net behavior of humpback whales numerically, two methodologies are structured as follows:

Shrinking encircling mechanism: This is accomplished by lowering the value of $m$ in equation (10). The fluctuation range of $\vec{M}$ decreases with $m$; in other words, $\vec{M}$ is a random value in the interval $[-m, m]$, where $m$ is diminished from 2 to 0 over the course of the iterations. Setting random values for $\vec{M}$ in [-1, 1], a search agent's new position can lie anywhere between its original position and the current best agent's position.

Spiral updating position: this approach first determines the distance between the whale located at $(S, T)$ and the prey located at $(S^*, T^*)$. A spiral equation is then developed between the position of the whale and the position of the prey to imitate the helix-shaped movement of humpback whales:

$$\vec{S}(v+1) = \vec{A'} \cdot e^{bl} \cdot \cos(2\pi l) + \vec{S}^*(v) \qquad (12)$$

where $\vec{A'} = \left|\vec{S}^*(v) - \vec{S}(v)\right|$ indicates the distance between the $i$th whale and the prey, $b$ is the logarithmic spiral's shape constant, $l$ is a random number in [-1, 1], and $\cdot$ is an element-by-element multiplication. Humpback whales swim around their prey in a spiral-shaped fashion along a shrinking circle. To simulate this concurrent behavior, we assume a 50% likelihood of updating the position of whales during optimization with either the shrinking encircling mechanism or the spiral model. The mathematical model is given below:

$$\vec{S}(v+1) = \begin{cases} \vec{S}^*(v) - \vec{M} \cdot \vec{A} & \text{if } p < 0.5 \\ \vec{A'} \cdot e^{bl} \cdot \cos(2\pi l) + \vec{S}^*(v) & \text{if } p \geq 0.5 \end{cases} \qquad (13)$$
where $p$ is a random number in [0, 1]. In addition to using the bubble-net approach, humpback whales hunt for prey at random. The numerical model of the search is as follows.

3.3.3. Searching for prey

Based on the variation of the $\vec{M}$ vector, the same strategy can be used to search for prey. Humpback whales search at random based on their relative positions. As a result, we employ $\vec{M}$ with random values greater than 1 or less than -1 to force the search agent to move away from a reference whale. In contrast to the exploitation phase, in the exploration phase we update a search agent's position based on a randomly picked search agent rather than the best search agent found so far. This mechanism, together with $|\vec{M}| > 1$, allows the WOA algorithm to perform a global search, emphasizing exploration. The numerical model is as follows:

$$\vec{A} = \left|\vec{B} \cdot \vec{S}_{rand} - \vec{S}\right| \qquad (14)$$

$$\vec{S}(v+1) = \vec{S}_{rand} - \vec{M} \cdot \vec{A} \qquad (15)$$
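
The update rules of Eqs. (8)-(15) can be combined into a single per-agent step. The following is a minimal Python sketch for real-valued position vectors; the parameter defaults and the per-dimension coefficient draws are our assumptions:

```python
import math
import random

def woa_update(pos, best, rand_pos, m, b=1.0):
    """One WOA position update (Eqs. 8-15) for a single search agent.
    pos, best and rand_pos are equal-length position vectors; m shrinks
    linearly from 2 to 0 over the iterations; b is the spiral constant."""
    p = random.random()                    # choose encircling vs. spiral
    new_pos = []
    for s, s_star, s_rand in zip(pos, best, rand_pos):
        z = random.random()
        M = 2 * m * z - m                  # Eq. (10)
        B = 2 * random.random()            # Eq. (11)
        if p < 0.5:
            if abs(M) < 1:                 # exploit: encircle the best (Eqs. 8-9)
                A = abs(B * s_star - s)
                new_pos.append(s_star - M * A)
            else:                          # explore: random agent (Eqs. 14-15)
                A = abs(B * s_rand - s)
                new_pos.append(s_rand - M * A)
        else:                              # spiral bubble-net move (Eq. 12)
            l = random.uniform(-1, 1)
            A_prime = abs(s_star - s)
            new_pos.append(A_prime * math.exp(b * l)
                           * math.cos(2 * math.pi * l) + s_star)
    return new_pos
```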

In general, this method combines three humpback whale operations: searching for prey (exploration phase), surrounding prey (encircling phase), and bubble-net foraging (exploitation phase). The following specifies the arithmetical representation of the Support-based Whale Optimization Algorithm (SWOA):

Step 1: Initialization
The whale optimization algorithm is used to produce the support for optimal cluster selection. Initialization, or solution creation, is a key phase in an optimization process that aids in the rapid identification of the best solution. Create a population of whales $L_i$ $(i = 1, 2, 3, \ldots, m)$, the iteration counter $x$, the iteration limit $u$, the coefficients $U$ and $V$, and the maximum number of iterations MaxIter. The population $N$ (i.e., $N$ test case cluster centroids) is generated randomly, and every search agent $L_i$ is generated indiscriminately. The population's fitness (i.e., a single set of test case cluster centroids) is evaluated using a fitness function $f(L_i)$. The acquired solution is used in the next stage, the fitness assessment.
Step 2: Fitness function

The fitness function is used to establish the fitness evaluation and to assess the highest exactness of cluster selection in the given optimization problem. The fitness function evaluates each solution after it is generated and then selects the best option; for the most part, an optimization algorithm relies on its fitness function to find the optimum solution. The choice of fitness is an important part of SWOA. The fitness function is generated here using the PSNR, which is used to choose the best clusters. The higher the PSNR rating, the higher the quality; PSNR is a metric for assessing the output's quality and is the ratio of input to output:

$$PSNR = 10 \log_{10}\left(\frac{Max_1^2}{MSE}\right) \qquad (16)$$

$$MSE = \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left(I(x, y) - I'(x, y)\right)^2 \qquad (17)$$

The MSE is used to find the PSNR; the cumulative squared error is computed using the MSE. The output improves with a higher PSNR and a lower MSE. Here $I(x, y)$ specifies the input and $I'(x, y)$ specifies the output.

Step 3: SWOA-based update of the solution

After the fitness evaluation, the solution is updated using the Support-based Whale Optimization Algorithm, via formula (18). The position of the whale $x_i$ can be characterized mathematically as follows:

$$x_i = L_i + G_i + A_i, \quad i = 1, 2, \ldots, N \qquad (18)$$

where $L_i$, the interaction between the $i$th and $j$th whales, can be mathematically described as:

$$L_i = \sum_{j=1, j \neq i}^{N} L(d_{ij})\,\hat{d}_{ij}, \qquad d_{ij} = |x_i - x_j| \qquad (19)$$

The distance between the $i$th and $j$th whales at any location is represented by $d_{ij}$; $L$, on the other hand, denotes the intensity of the social forces function, which can be expressed mathematically as:

$$L(y) = f e^{-y/l} - e^{-y} \qquad (20)$$

where $f$ is the intensity of attraction and $l$ is the attractive length scale. $G_i$ and $A_i$ specify the gravity force and wind advection for the $i$th whale, which can be characterized mathematically as:

$$G_i = -g\,\hat{e}_g, \qquad A_i = u\,\hat{e}_w \qquad (21)$$

where $g$ is the gravitational constant and $u$ a constant drift, while $\hat{e}_g$ and $\hat{e}_w$ denote the unity vector toward the earth's center and the wind direction, respectively. Still, equation (18) cannot be applied directly to the optimization problem, so it is rewritten as follows:

$$x_i = c\left(\sum_{j=1, j \neq i}^{N} c\,\frac{u - l}{2}\,L\left(|x_j - x_i|\right)\frac{x_j - x_i}{d_{ij}}\right) + \hat{T}_d \qquad (22)$$

where $u$ is the search space's upper bound, $l$ is the search space's lower bound, and $\hat{T}_d$ is the best solution value.
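
The support-based update of Eq. (22) can be sketched for one-dimensional positions as follows; all parameter values are illustrative assumptions, and the pairwise social-force form follows Eq. (20):

```python
import math

def sbwo_update(positions, best, f=0.5, l_scale=1.5, c=0.5, ub=1.0, lb=0.0):
    """Step 3, Eq. (22) for 1-D positions: each whale moves under the
    pairwise social force L(y) of Eq. (20), drawn toward the best
    solution T_d (here, the best position found so far)."""
    def L(y):
        return f * math.exp(-y / l_scale) - math.exp(-y)   # Eq. (20)
    new_positions = []
    for i, xi in enumerate(positions):
        social = 0.0
        for j, xj in enumerate(positions):
            if i == j:
                continue
            d = abs(xj - xi)                               # d_ij from Eq. (19)
            if d > 0:
                social += c * (ub - lb) / 2 * L(d) * (xj - xi) / d
        new_positions.append(c * social + best)            # Eq. (22), T_d = best
    return new_positions
```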
Step 4: Termination criterion
The approach comes to a halt only when the maximum number of iterations is reached, and the solution with the best fitness value is chosen as the optimum condition.

4. Result and Discussion:

In this section, we look at the outcomes of our Test Case Reduction and SWOA Optimization for Distributed Agile Software Development Using Regression Testing experiments. Evaluation metrics, experimental outcomes, and comparative analyses with graphs are all given in this part. The presented method is implemented on the Java platform with cloud simulation. The FSK-mean module is used to rank TC clusters in this methodology, and SWOA is used to select the best-clustered test case. One of the cases is the open-source system iTrust [13], and the others are from real-world industrial applications; links to the involved companies are omitted for confidentiality reasons, and their names are replaced by letters in alphabetical order.

4.1 Evaluation metrics:

The following metrics, as given by the equations below, are used to evaluate the proposed system:

• Precision: Precision is the ratio of regression testing to the overall prioritized and selected clustered test cases, as given in equation (23):

$$P = \frac{TP}{TP + FP} \qquad (23)$$

• Recall: Recall is the ratio of regression testing to the overall prioritized and selected clustered test cases available in the dataset, as given in equation (24):

$$R = \frac{TP}{TP + FN} \qquad (24)$$

• F-measure: The F-measure is defined as the harmonic mean of the recall and precision metrics, as given in equation (25):

$$F = \frac{2PR}{P + R} \qquad (25)$$

where TP denotes true positives, FP false positives, and FN false negatives.
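
For completeness, a small Python sketch of Eqs. (23)-(25); the counts in the example are hypothetical:

```python
def evaluation_metrics(tp, fp, fn):
    """Eqs. (23)-(25): precision, recall and their harmonic mean."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

# Hypothetical counts from one prioritized-selection run.
print(evaluation_metrics(tp=90, fp=10, fn=7))  # -> (0.9, ~0.928, ~0.914)
```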

4.2. Performance and Comparative Analysis:

The performance of Test Case Reduction and SWOA Optimization for Distributed Agile Software Development Using Regression Testing is shown in Figures 2-7, along with a comparison study. The following are the outcomes of the practical evaluation of the RQs.

RQ1: Do the FSK-mean and SWOA use clustering to keep track of regularly changing test cases and past fault data?

We found that both the FSK-mean and the SWOA models enhance the number of errors exposed. Because the majority of previous studies in the literature did not cluster frequently changing test cases or initiate the process based on test case modifications, we decided to create our own. The evaluation findings revealed that clustering frequently changing test cases during process initialization using the FSK-mean and SWOA models considerably boosted the fault detection rate. When the Cluster head A, Cluster head B, and Cluster head C datasets of Table 2 were subjected to the FSK-mean and SWOA procedures, high-priority test cases were discovered. For comparison, the performance of random and fault-based ordering algorithms was evaluated using identical scenarios. Applying clustering in the FSK-mean and SWOA techniques, we recovered various clusters with similar change frequencies of varying TC sizes in all three situations. Then, using Eq. (3), the failed frequency is calculated by dividing the total number of times a TC failed by the total number of changes. Following the selection of frequently failed test cases from each cluster for Cluster head A, Cluster head B, and Cluster head C, we prioritized TCs in order of highest failed test case frequency using the FSK-mean and SWOA techniques. In the event of a tie in failed test case frequency, we broke the tie using test case coverage as the second priority criterion. Table 4 contains the details of the priority and frequency criteria for the TCs selected by all approaches.
Table 2: Prioritized cluster criteria

| Object | Use cases | No. of versions | Size (LOC) | No. of faults | No. of test cases |
|--------|-----------|-----------------|------------|---------------|-------------------|
| CHA    | 643       | 5               | 3532       | 32            | 950               |
| CHB    | 33,018    | 8               | 276,456    | 45            | 2334              |
| CHC    | 24,541    | 6               | 155,987    | 40            | 2420              |

Table 3: FFTC, FTCF, and TCC of the clusters

| FFTC (CHA) | FFTC (CHB) | FFTC (CHC) | FTCF CHA (%) | FTCF CHB (%) | FTCF CHC (%) | TCC CHA (%) | TCC CHB (%) | TCC CHC (%) |
|------------|------------|------------|--------------|--------------|--------------|-------------|-------------|-------------|
| TSC-35     | TC-18      | TC-45      | 95           | 88           | 99           | 39.5        | 39.5        | 39.5        |
| TSC-37     | TC-32      | TC-27      | 95           | 86           | 95           | 11.0        | 20.0        | 11.0        |
| TSC-40     | TC-23      | TC-30      | 85           | 88           | 95           | 40.0        | 40.0        | 40.0        |
| TSC-41     | TC-43      | TC-38      | 75           | 65           | 80           | 39.0        | 39.0        | 39.0        |
| TSC-28     | TC-25      | TC-22      | 65           | 55           | 70           | 35.0        | 35.0        | 35.0        |
| TSC-20     | TC-40      | TC-41      | 65           | 45           | 60           | 30.0        | 30.0        | 30.0        |
| TSC-39     | TC-39      | TC-39      | 55           | 45           | 50           | 27.1        | 27.1        | 27.1        |
| TSC-23     | TC-23      | TC-23      | 45           | 35           | 50           | 30.0        | 30.0        | 30.0        |
| TSC-15     | TC-15      | TC-15      | 45           | 65           | 40           | 28.7        | 28.7        | 28.7        |
| TSC-24     | TC-24      | TC-24      | 35           | 45           | 50           | 30.0        | 30.0        | 30.0        |
| TSC-11     | TC-11      | TC-11      | 25           | 35           | 40           | 28.5        | 28.5        | 28.5        |
| TSC-43     | TC-43      | TC-43      | 25           | 25           | 10           | 28.1        | 28.1        | 28.1        |
| TSC-18     | TC-18      | TC-18      | 15           | 25           | 10           | 27.3        | 27.3        | 27.3        |
Table 4 shows the order of TCs utilized by the various strategies for all selected situations, as well as the different TCs to execute to discover the maximum defects in the initial execution, which is represented as follows:
Table 4: Prioritized test suites

| Sl. No | Technique                  | Cluster | Test Cases                                                                              |
|--------|----------------------------|---------|------------------------------------------------------------------------------------------|
| 1      | CTFF approach [31]         | CHA     | TSC-25, TSC-30, TSC-31, TSC-18, TSC-29, TSC-13, TSC-14, TSC-01, TSC-33, TSC-08            |
|        |                            | CHB     | TSC08, TSC22, TSC13, TSC33, TSC15, TSC30, TSC39, TSC43, TSC03, TSC24, TSC21, TSC53, TSC18 |
|        |                            | CHC     | TSC35, TSC17, TSC20, TSC28, TSC13, TSC03, TSC19, TSC33, TSC15, TSC12, TSC61, TSC23, TSC18 |
| 2      | Random Prioritization [32] | CHA     | TSC-01, TSC-03, TSC-05, TSC-07, TSC-09, TSC-11, TSC-13, TSC-15, TSC-17, TSC-19            |
|        |                            | CHB     | TSC-05, TSC-13, TSC-25, TSC-37, TSC-49, TSC-51, TSC-63, TSC-75, TSC-87, TSC-99            |
|        |                            | CHC     | TSC-15, TSC-23, TSC-25, TSC-33, TSC-35, TSC-43, TSC-45, TSC-53, TSC-55, TSC-63            |
| 3      | Fault-based [13]           | CHA     | TSC-29, TSC-19, TSC-18, TSC-17, TSC-16, TSC-15, TSC-14, TSC-13, TSC-12, TSC-11            |
|        |                            | CHB     | TSC-25, TSC-03, TSC-35, TSC-13, TSC-15, TSC-34, TSC-65, TSC-23, TSC-75, TSC-43            |
|        |                            | CHC     | TSC-11, TSC-12, TSC-21, TSC-03, TSC-35, TSC-14, TSC-65, TSC-35, TSC-25, TSC-63            |
| 4      | Proposed                   | CHA     | TSC-20, TSC-30, TSC-31, TSC-28, TSC-29, TSC-13, TSC-14, TSC-01, TSC-33, TSC-08            |
|        |                            | CHB     | TSC08, TSC22, TSC13, TSC33, TSC25, TSC30, TSC39, TSC43, TSC03, TSC24, TSC18               |
|        |                            | CHC     | TSC36, TSC17, TSC10, TSC28, TSC13, TSC03, TSC19, TSC33, TSC15, TSC16, TSC31, TSC24        |
Figure 2(a): Performance and comparative analysis of cluster A

Figure 2(b): Performance and comparative analysis of cluster B

Figure 2(c): Performance and comparative analysis of cluster C


Figures 2(a), (b), and (c) indicate that FSK-mean and SWOA are more efficient than other approaches for CHA, CHB, and CHC, respectively. The results showed that the test suite selected by FSK-mean and SWOA for CHA, CHB, and CHC is the best collection of TCs for fault detection. Using the RP, FB, and CTFF approaches for CHA, CHB, and CHC, on the other hand, shows a lower level of fault detection ability. From the results, we can see that our proposed methodology achieves better results than the other approaches.
RQ2: Is it possible to use clustering to improve the fault detection rate in the FSK-mean and SWOA models?

We compared the efficacy of the FSK-mean and SWOA to other approaches in all circumstances, as well as the efficiency of the suggested fault detection ability against other strategies, to eliminate irrelevancy in TC selection and redundant fault detection in test prioritization. In comparison to existing approaches, the results show that CH-A, B, and C for the FSK-mean and SWOA model have a higher fault detection rate.

Figure 4: Performance and comparative analysis of the fault detection rate of cluster A

Figure 5: Performance and comparative analysis of the fault detection rate of cluster B

Figure 6: Performance and comparative analysis of the fault detection rate of cluster C
The results reveal that during the initial execution of TCs, the FSK-mean and SWOA techniques detected nearly 100, 98, and 99 percent of the defects in CHA, CHB, and CHC, respectively. Other methods found just 20, 30, and 40% of the problems on the first run (fault-based) and 40, 20, and 50% for the three situations (random prioritization), respectively. The box plots in Figures 4, 5, and 6 show that most errors were recognized earlier in all situations, i.e., CHA, CHB, and CHC, utilizing the FSK-mean and SWOA techniques, as opposed to RP and FB. Many executions were required in RP and FB to find flaws, which increased development time and expense.

RQ3: Is the computation time lower compared to the existing methodologies?

Investigating the effectiveness of the FSK-mean and SWOA in terms of computation time, our proposed methodology attains better outcomes compared with the existing CTFF, RP, and FB techniques.

Figure 7: Performance and comparative analysis of computation time


Figure 7 depicts the computation time performance and comparison analysis. The length of time necessary to complete a computational process is referred to as computation time; when a computation is represented as a sequence of rule applications, the calculation time is proportional to the number of rule applications. In this paper, we compare our methodology to the existing RP, FB, and CTFF methodologies. According to the examination of Figure 7, the proposed method achieves the shortest computing time.

RQ4: How effective is our proposed approach when compared with other previous research studies?

We compared the proposed method's outcome with various research studies such as the CTFF, RP, and FB performances. From the analysis, the proposed method performed significantly better compared to various previous studies in terms of precision, recall, and F-measure, as shown in Table 5.
Table 5: Comparison of the proposed method's outcome with various research studies

| Reference | Methodology | Year | Precision | Recall  | F-measure | Fault detection rate |
|-----------|-------------|------|-----------|---------|-----------|----------------------|
| [21]      | CRMRT       | 2019 | 36.0%     | 16.8%   | 89%       | 90%                  |
| [22]      | DARSR WOA   | 2020 | High      | -       | -         | 89%                  |
| [23]      | ARM & PSO   | 2021 | 72.4%     | 58.4%   | 86.27%    | 95%                  |
| [24]      | MA          | 2021 | High      | Average | Average   | 95%                  |
| [25]      | LM & BP     | 2021 | 36%       | 54%     | 74%       | 89%                  |
| [26]      | PSR         | 2018 | Low       | -       | -         | -                    |
| [27]      | CASS        | 2018 | Low       | -       | -         | -                    |
| [28]      | SMT         | 2018 | Low       | -       | -         | 80%                  |
| [29]      | URTSM       | 2019 | Average   | 74%     | 74%       | 89%                  |
| [30]      | HRA         | 2018 | Low       | -       | -         | 90%                  |
| Proposed  | SWOA        | -    | High      | 90%     | 93%       | 99%                  |

5. CONCLUSION:
To overcome the shortcomings of previous techniques, we introduced the FSK-mean and SWOA models, which prioritize and select test cases by first clustering frequently changing test cases. In the event of a tie, test cases are selected based on the number of frequently failed test cases and coverage criteria. As a result, FSK-mean and SWOA improve regression testing for agile software projects in particular and have substantial implications for software companies. The proposed outcome is compared to several algorithms and existing works using the CTFF, RP, and FB techniques. In terms of precision, recall, and F-measure, the proposed technique outperformed the existing approaches.

Declarations
Funding: This research article has not received any funding.
Conflict of interest: The authors declare no conflict of interest.
Data availability: Not applicable.
Ethical approval: This article does not contain any studies with human participants or animals performed by any of the authors.

REFERENCES
1) M. Usman, R. Britto, L.-O. Damm, and J. Börstler, "Effort Estimation in Large-Scale Software Development: An Industrial Case Study," Inf. Softw. Technol., vol. 99, pp. 21-40, Jul. 2018.
2) S. Mensah, J. Keung, M. F. Bosu, and K. E. Bennin, "Duplex Output Software Effort Estimation Model with Self-Guided Interpretation," Inf. Softw. Technol., vol. 94, pp. 1-13, Feb. 2018.
3) Khan, Muhammad Sufyan, Farhana Jabeen, Sanaa Ghouzali, Zobia Rehman, Sheneela Naz,
And Wadood Abdul. "Metaheuristic Algorithms In Optimizing Deep Neural Network
Model For Software Effort Estimation." Ieee Access 9 (2021): 60309-60327.
4) Vyas, M., and N. Hemrajani. "Predicting Effort of Agile Software Projects Using Linear Regression, Ridge Regression and Logistic Regression."
5) Malgonde, O. And Chari, K., “An Ensemble-Based Model For Predicting Agile Software
Development Effort”, Empirical Software Engineering, Pp.1-39 (2018)
6) Vyas, Ms Manju, And Naveen Hemrajani. "Effect Of Dimensionality Reduction On
Prediction Accuracy Of Effort Of Agile Projects Using Principal Component Analysis."
In Iop Conference Series: Materials Science And Engineering, Vol. 1099, No. 1, P. 012008.
Iop Publishing, 2021.
7) Kim, Hee Wan, And Yong Gyu Jung. "A Study On The Design Of An Efficient Audit
Model In The Area Of Information System Testing Activities." International Journal Of
Advanced Culture Technology 9, No. 1 (2021): 210-217.
8) Satapathy S. M., Rath, S. K., “Empirical Assessment Of Machine Learning Models For
Agile Software Development Effort Estimation Using Story Points” Innovations In Systems
And Software Engineering, Springer, 1-10 (2017)
9) Beerbaum, Dirk. "Regsafe© Manifesto-An Agile Management Control Methodology For
Regulatory-Driven Programs." (2021).
10) Prabhu, Shridhar, Manoj Naik, A. D. Firdosh, S. A. Sohan, And Neeta B. Malvi.
"Automation In Testing With Jenkins For Software Development."
11) Alattas, Khalid. "System Error Estimate Using Combination of Classification and Optimization Technique." (2021).
12) Juan, Angel A., Peter Keenan, Rafael Martí, Seán Mcgarraghy, Javier Panadero, Paula
Carroll, And Diego Oliva. "A Review Of The Role Of Heuristics In Stochastic
Optimisation: From Metaheuristics To Learnheuristics." Annals Of Operations
Research (2021): 1-31.
13) Ali, Sadia, Yaser Hafeez, Shariq Hussain, And Shunkun Yang. "Enhanced Regression
Testing Technique For Agile Software Development And Continuous Integration
Strategies." Software Quality Journal (2019): 1-27.
14) Sivaji, U., And P. Srinivasa Rao. "Improving Regression Testing Query Replying
Procedure Using Secure Optimized Graph Walk Scheme." Journal Of Theoretical And
Applied Information Technology 99, No. 9 (2021).
15) Chen, Lizhe, Ji Wu, Haiyan Yang, And Kui Zhang. "A Microservice Regression Testing
Selection Approach Based On Belief Propagation." (2021).
16) Qu, Qiang, Yi-Han Huang, Xiao-Li Wang, and Xue-Bo Chen. "Complementary
Differential Evolution-based Whale Optimization Algorithm for Function
Optimization." IAENG International Journal of Computer Science 47, no. 4 (2020).
17) Hassan, Ali Abdullah, Salwani Abdullah, Kamal Z. Zamli, and Rozilawati Razali.
"Combinatorial test suites generation strategy utilizing the whale optimization
algorithm." IEEE Access 8 (2020): 192288-192303.
18) Hassouneh, Yousef, Hamza Turabieh, Thaer Thaher, Iyad Tumar, Hamouda Chantar, and
Jingwei Too. "Boosted whale optimization algorithm with natural selection operators for
software fault prediction." IEEE Access 9 (2021): 14239-14258.
19) Chen, Huiling, Chenjun Yang, Ali Asghar Heidari, and Xuehua Zhao. "An efficient double
adaptive random spare reinforced whale optimization algorithm." Expert Systems with
Applications 154 (2020): 113018.
20) Kaya, Ersin, and Ahmet Babalık. "FUZZY ADAPTIVE WHALE OPTIMIZATION
ALGORITHM FOR NUMERIC OPTIMIZATION." Malaysian Journal of Computer
Science 34, no. 2 (2021): 184-198.
21) Quach T, Oinonen T, Karjalainen A. Continuous and Resource Managed Regression
Testing: An Industrial Use Case. arXiv preprint arXiv:1905.01928. 2019 May 6.
22) Chen H, Yang C, Heidari AA, Zhao X. An efficient double adaptive random spare
reinforced whale optimization algorithm. Expert Systems with Applications. 2020 Sep
15;154:113018.
23) Khalid Alattas,"System Error Estimate using Combination of Classification and
Optimization Technique",Journal of Computer Science · March 2021.
24) Khan MS, Jabeen F, Ghouzali S, Rehman Z, Naz S, Abdul W. Metaheuristic Algorithms
in Optimizing Deep Neural Network Model for Software Effort Estimation. IEEE Access.
2021 Apr 12;9:60309-27.
25) Chen L, Wu J, Yang H, Zhang K. A Microservice Regression Testing Selection Approach
Based on Belief Propagation.
26) Marijan D, Liaaen M. Practical selective regression testing with effective redundancy in
interleaved tests. InProceedings of the 40th International Conference on Software
Engineering: Software Engineering in Practice 2018 May 27 (pp. 153-162).
27) Sparr CJ, Fox RA, Song YB. Optimizing Regression Testing of Software for the
Consolidated Automated Support System. In2018 IEEE AUTOTESTCON 2018 Sep 17
(pp. 1-5). IEEE.
28) Chen L, Zhang L. Speeding up mutation testing via regression test selection: An extensive
study. In2018 IEEE 11th International Conference on Software Testing, Verification and
Validation (ICST) 2018 Apr 9 (pp. 58-69). IEEE.
29) Vazgen SM, Hovhannes HH, Taron KK, Arsen MM. Unit Regression Test Selection
Mechanism Based on Hashing Algorithm. In2019 IEEE East-West Design & Test
Symposium (EWDTS) 2019 Sep 13 (pp. 1-5). IEEE.
30) Shengzhe SH, Bin WU, Jun JI, Xu LI. Hydrodynamic Regression Analysis of Seaplane
Fuselage Tests in Fixed Navigate State. In2018 IEEE 8th International Conference on
Underwater System Technology: Theory and Applications (USYS) 2018 Dec 1 (pp. 1-5).
IEEE.
31) Ashraf E, Rauf A, Mahmood K. Value based regression test case prioritization.
InProceedings of the world congress on engineering and computer science 2012 (Vol. 1,
pp. 24-26).
32) Geetha U, Sankar S, Sandhya M. Acceptance testing based test case prioritization. Cogent
Engineering. 2021 Jan 1;8(1):1907013.
33) Al-Hajjaji, M., Thüm, T., Lochau, M., Meinicke, J., & Saake, G. (2019). Effective product-
line testing using similarity-based product prioritization. Software and Systems Modeling,
18(1), 499–521. https://doi.org/10.1007/s10270-016-0569-2.
34) Horváth, F., Gergely, T., Beszédes, Á. Tengeri, D., Balogh, G., & Gyimóthy, T. (2019).
Code coverage differences of Java bytecode and source code instrumentation tools.
Software Quality Journal, 27(1), 79–123. https://doi.org/10.1007/s11219-017-9389-z.
35) Wang, X., Zeng, H., Gao, H., Miao, H., & Lin, W. (2019). Location-based test case
prioritization for software embedded in mobile devices using the law of gravitation. Mobile
Information Systems, 2019, 1-14. https://doi.org/10.1155/2019/9083956.
36) Shin, S. Y., Nejati, S., Sabetzadeh, M., Briand, L. C., & Zimmer, F. (2018). Test case
prioritization for acceptance testing of cyber physical systems: a multi-objective search-
based approach. Proceedings of the 27th ACM SIGSOFT International Symposium on
Software Testing and Analysis - ISSTA 2018, 49–60.
https://doi.org/10.1145/3213846.3213852.
37) Azizi, M., & Do, H. (2018). A collaborative filtering recommender system for test case
prioritization in web applications. Proceedings of the 33rd Annual ACM Symposium on
Applied Computing - SAC '18, 1560–1567. https://doi.org/10.1145/3167132.3167299.
38) Haghighatkhah, A., Mäntylä, M., Oivo, M., & Kuvaja, P. (2018). Test prioritization in
continuous integration environments. Journal of Systems and Software, 146, 80–98.
https://doi.org/10.1016/j.jss.2018.08.061
39) Ouriques, J. F. S., Cartaxo, E. G., & Machado, P. D. L. (2018). Test case prioritization
techniques for modelbased testing: a replicated study. Software Quality Journal, 26(4),
1451–1482. https://doi.org/10.1007/s11219-017-9398-y

Authors Biography

Madan Singh is Research Scholar at J. C. Bose University of Science and Technology,


YMCA Faridabad. He has 15 years of experience in Teaching and Research. His areas
of specialization include Software Engineering, Testing, Image Processing and
Network Security. He has published more than 30 research papers in various
International Journals and conferences. He is a lifetime member of ISTE.

Email id: [email protected]

Dr.Naresh Chauhan received his Ph.D. (Computer Engg.) from MD University,


Rohtak (Haryana) in 2008, M.Tech. (Information Technology) from GGS
IndraPrastha University, Delhi in 2004 and B.Tech. (Computer Engg.) from NIT
Kurukshetra, in the year 1992. He has about 28 years of experience in teaching and industry. He served Bharat Electronics Ltd. and Motorola India Ltd. Presently, he is working as a Professor in the Department of Computer Engineering at JC Bose University of Science & Technology, Faridabad (India). His research interests include Internet technologies, Software Engineering, Software Testing and Real-time Systems. He has published two books, on Software Testing and Operating Systems, with Oxford University Press, India.

Email id: [email protected]

Dr. Rashmi Popli is Deputy Dean, Consultancy in J.C Bose University of Science and
Technology, YMCA Faridabad. She has 15 years of rich experience in teaching, and 4 research scholars are pursuing their PhD under her guidance and supervision. Her areas of
specialization include Software Engineering, Testing, Network Security and
automation of software. She has published more than 50 research papers in various
International Journals and conferences. She is a lifetime member of ISTE and CSI. She is also holding
the position of Director, Industrial Relations Cell since 2017 working towards opening the various
avenues where University can collaborate with Industry.

Email id: [email protected]
