Integrated Personalized and Diversified Search Based on Search Logs

Jiongnan Liu, Zhicheng Dou, Jian-Yun Nie, and Ji-Rong Wen

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 36, NO. 2, FEBRUARY 2024
Abstract—Personalized search and search result diversification are two possible solutions to cope with the query ambiguity problem in search engines. In most existing studies, they have been investigated separately, but intuitively, they address the problem from two complementary perspectives and should be combined. Some recent work tried to combine them by restricting result diversification to the subtopics corresponding to the user's personal profile. However, diversification can be required even when the subtopics are outside the user's profile. In this paper, we propose a more general approach to integrate them based on users' implicit feedback in query logs. The proposed approach PER+DIV aggregates a document's novelty score and personal relevance score dynamically according to how much the query falls into the user's interests. To train the model based on user clicks in the logs, we consider a user click as a result of both personal relevance and result diversity, and a new method is proposed to isolate and model these two factors. To evaluate the model, we design several diversified and personalized metrics in addition to the traditional click-based metrics. Experimental results on a large-scale query log dataset show that the proposed integrated method significantly outperforms the existing personalization and diversification approaches.

Index Terms—Integration, personalized search, search result diversification.

Manuscript received 8 November 2022; revised 12 May 2023; accepted 18 June 2023. Date of publication 30 June 2023; date of current version 11 January 2024. This work was supported in part by the National Natural Science Foundation of China under Grants 62272467 and 61872370, in part by the Beijing Outstanding Young Scientist Program under Grant BJJWZYJH012019100020098, in part by the Fundamental Research Funds for the Central Universities, in part by the fund for building world-class universities (disciplines) of Renmin University of China, in part by the Research Funds of Renmin University of China under Grant 22XNKJ34, in part by the Outstanding Innovative Talents Cultivation Funded Programs 2023 of Renmin University of China, Public Computing Cloud, Renmin University of China, and in part by the Intelligent Social Governance Platform, Major Innovation & Planning Interdisciplinary Platform for the "Double-First Class" Initiative, Renmin University of China. The work was partially done at the Beijing Key Laboratory of Big Data Management and Analysis Methods, and the Key Laboratory of Data Engineering and Knowledge Engineering, MOE. Recommended for acceptance by R. Chi-Wing Wong. (Corresponding author: Zhicheng Dou.)

Jiongnan Liu, Zhicheng Dou, and Ji-Rong Wen are with the Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 100872, China, and also with the Engineering Research Center of Next-Generation Intelligent Search and Recommendation, Ministry of Education, Beijing 100872, China (e-mail: [email protected]; [email protected]; [email protected]).

Jian-Yun Nie is with DIRO, Université de Montréal, Montreal, QC H3C 3J7, Canada (e-mail: [email protected]).

Digital Object Identifier 10.1109/TKDE.2023.3291006

I. INTRODUCTION

Studies have shown that many queries issued to search engines by users are broad or ambiguous [1], [2]. Different users may intend to retrieve different information with the same query. For example, the query "apple" may be used to search for information about Apple Inc. or the fruit apple. To provide better search results for ambiguous queries, two main approaches have been proposed: search result diversification [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13] and personalized search [14], [15], [16], [17], [18], [19], [20], [21], [22]. Search result diversification tries to provide a list of documents covering all subtopics related to the query so that all users can find relevant documents among the top ranked results. On the other hand, personalized search aims to directly identify the user's personalized intent. It creates a user profile from her search history and returns a ranked list corresponding to her interests. Both approaches try to solve the same problem (i.e., query ambiguity) from different perspectives. However, they have been mostly studied separately in the past.

There are a few exceptions, which proposed approaches to combine personalization and diversification. For example, Radlinski and Dumais [23] proposed to use similar queries for a specific user to address the diversification problem. Vallet et al. [24] and Liang et al. [25] proposed methods for personalized diversification of results using probability estimation and structured learning. However, all these approaches focused on the problem of personalized diversification, i.e., making result diversification more consistent with the user's interests. In particular, one first determines the subtopics corresponding to the user's interests, and the results are selected to cover these subtopics. While such approaches can be useful in some circumstances (the user is only interested in documents related to her interests), in a general search context, the search intents of a user are much broader than her known interests: users frequently explore new topics in search. For such search queries, the above approaches may wrongly bias the results toward the user's interests, even when the query is unrelated to them. In addition, most of these approaches require the subtopics of queries to be determined in advance, which may not be possible in large-scale real search engines, making them hard to apply.

Instead of framing diversification within personalization, we consider personalization and diversification as two complementary ingredients that we can incorporate in general search when appropriate. Intuitively, personalization and diversification play different roles in search. Personalized search assumes that we know the user's interests well, thus the search results for a query within the user's interests can be tuned toward these interests. On the other hand, diversification does not assume any knowledge about the user's interests. It ranks documents according to their novelty.
II. RELATED WORK

A. Personalized Search

Personalized search tailors rankings to individual users according to their interests. Early approaches in personalized search are mainly based on personal features extracted from query logs [15], [27] or on topic models [28]. In recent years, more advanced approaches applying deep learning and neural network techniques have been proposed.

1) Personalized Search Based on User Profile: As we discussed earlier, personalized search tries to extract user interests from search histories. Several approaches have been proposed to construct user profiles. Ge et al. [16] proposed HRNN, which uses the hierarchical RNN technique to capture user long-term and short-term interests and then applies attention to refine both interests using the query terms. Lu et al. [17] introduced generative adversarial networks (GAN) into personalization and proposed PSGAN based on HRNN. Ma et al. [18] replaced the traditional RNN with a time-aware RNN to incorporate temporal information into personalized search. Yao et al. [29] adopted reinforcement learning methods that mimic user behaviors to capture user preferences. Zhou et al. [22] utilized a memory network to enlarge the capacity of the model to build more detailed user profiles. With a user profile obtained by the above approaches, document ranking can be provided based on the combination of document-profile and document-query similarities.

2) Context-Aware Personalized Search: Recently, several context-aware approaches have been proposed to improve the performance of personalized search. To better understand user intent, these approaches combine the profile and query together to refine the query representation itself. Yao et al. [19] used personalized embeddings to construct a personalized word embedding table for each user to rebuild the query representation. Zhou et al. [20] proposed HTPS, applying a transformer to conduct query disambiguation. Deng et al. [30] focused on the multiple positive and negative feedback provided by users and improved HTPS. PSSL [21] further improved HTPS by introducing several self-supervised learning tasks. Recently, some researchers [31] tried to find similar users from social networks to augment the representation of the current user.

B. Search Result Diversification

In parallel to personalized search, search result diversification is another approach to cope with the query ambiguity problem in information retrieval. It aims to make the top results cover as many subtopics of the query as possible so that users with different intents can likely find documents corresponding to their interests. According to whether the model relies on a set of subtopics, existing approaches can be divided into explicit approaches (with subtopics) and implicit approaches (no subtopics). Several recent approaches such as DESA [9] and DVGAN [10] have combined explicit and implicit approaches: DVGAN introduced generative adversarial networks into diversification, and DESA adopted a transformer for the interaction between documents and subtopics.

1) Explicit Approaches: Explicit search result diversification approaches explicitly use subtopics as inputs for modeling diversity. They usually select a document relevant to subtopics that have not been well covered before, inferring that such a document should be more related to the subtopics which are not covered by previously ranked documents. In order to evaluate the coverage of subtopics at each step, different approaches use different methods to assess the subtopic distribution. xQuAD [4] and PM2 [32] are the representative unsupervised explicit approaches. xQuAD defined the distribution of subtopics as the probability that the previously chosen documents do not cover them. PM2 counted the number of documents relevant to each subtopic to get the distribution of subtopics. Many approaches have been derived from these two representatives by using hierarchical information (HPM2 and HxQuAD [33]) or term-level information (TPM2 and TxQuAD [34]). Jiang et al. [8] proposed DSSA, which introduced deep learning into explicit approaches and used RNN and attention techniques to model subtopic coverage. However, explicit approaches need the subtopic and document-subtopic relevance information of each query, which takes a lot of time to annotate and is hard to obtain in real search situations. Therefore, we mainly utilize the implicit diversification method in our framework.

2) Implicit Approaches: Different from explicit approaches, implicit approaches consider document novelty in the diversifying process. They score a document according to whether the document is different from the selected document set. MMR [3] evaluated document novelty by calculating the similarity between the candidate document and the documents already selected (a minimal sketch of this greedy scheme is given at the end of this section). R-LTR [6] and PAMM [7] introduced the Plackett-Luce model [35] and calculated the similarity from different elements such as titles. Both of them require manually defined dissimilarity features between documents. In contrast, NTN [36] modeled document similarity with a neural tensor network over distributed representations of documents to avoid feature engineering. Recently, some attention-based approaches [12], [13] have been proposed. Daletor [12] applied metric learning to directly optimize the diversification metrics. Graph4DIV [13] constructed a document graph and devised a GNN to calculate the diversification score; it used BERT to classify whether two documents belong to the same subtopic and built edges between documents accordingly.

3) Personalized Diversification Approaches: To improve diversification performance by exploiting user information, several personalized search result diversification approaches have been proposed. Radlinski and Dumais [23] solved this problem by finding similar follow-up queries to construct subtopics. Vallet et al. [24] optimized xQuAD by adding user variance into its score function. Liang et al. [25] introduced a structured learning method to deal with the problem. It is, however, important to stress that personalized search result diversification is different from our model. The former mainly focused on determining the specific subtopics for users and used diversification approaches to cover them, and most of these approaches require the subtopics of queries to be determined in advance. In contrast, we directly optimize user satisfaction (a click-based metric) based on large-scale search logs without labelling of subtopics. Personalization and diversification are two latent aspects involved in the process rather than the final objectives. In other words, our approach can determine dynamically the extent to which the results need to be personalized or diversified according to what we learn from the click data.
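To make the implicit family concrete, the following is a minimal sketch of MMR-style greedy re-ranking. It is not code from any of the cited systems: the `relevance` scores, the document vectors, and the cosine-based redundancy term are illustrative stand-ins for whatever relevance model and dissimilarity features a concrete implementation would use.

```python
import numpy as np

def mmr_rerank(relevance, doc_vectors, lam=0.7, k=10):
    """Greedy MMR: repeatedly pick the candidate that best trades off
    query relevance against similarity to the documents already selected."""
    n = len(relevance)
    # Normalize once so dot products become cosine similarities.
    vecs = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    selected, candidates = [], set(range(n))
    while candidates and len(selected) < k:
        best, best_score = None, -np.inf
        for i in candidates:
            # Novelty penalty: max similarity to any already selected document.
            redundancy = max((float(vecs[i] @ vecs[j]) for j in selected),
                             default=0.0)
            score = lam * relevance[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage: re-rank five candidates into a diversified top-3.
rng = np.random.default_rng(0)
print(mmr_rerank(rng.random(5), rng.normal(size=(5, 8)), k=3))
```

With `lam = 1` this degenerates to pure relevance ranking, which is why the trade-off parameter is tuned when MMR is used as a baseline later in this paper.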
For each candidate document $d_k$ in D (recall that |D| = m), we first concatenate the list of term embeddings of all words in all candidate documents $T_{d_k}$ ($T_{d_k} = [t_{d_k}^1, \ldots, t_{d_k}^M]$) and of the query itself $T_q$ ($T_q = [t_q^1, \ldots, t_q^M]$) to form a term embedding list. Note that due to the length limitation of the transformer encoder, we only keep the word embeddings of the terms in the documents' titles in this list. We argue that term-level integration is helpful in refining the query and document representations as it introduces more context information.

We apply the first, term-level transformer to make interactions between documents and the query, i.e.,

$$\tilde{T}_q, \tilde{T}_{d_1}, \ldots, \tilde{T}_{d_m} = \mathrm{Trm}_{\mathrm{term}}(T_q, T_{d_1}, \ldots, T_{d_m}).$$

Note that $\tilde{T}_{d_k}$ and $\tilde{T}_q$ are still term embedding lists ($\tilde{T}_{d_k} = [\tilde{t}_{d_k}^1, \ldots, \tilde{t}_{d_k}^M]$ and $\tilde{T}_q = [\tilde{t}_q^1, \ldots, \tilde{t}_q^M]$). Then we slice the term embedding list and calculate the context-enriched representations $d_k^w$ and $q^w$ for document $d_k$ and query $q$ by summing their own term embedding vectors, i.e.,

$$d_k^w = \mathrm{sum}(\tilde{T}_{d_k}) = \sum_{j} \tilde{t}_{d_k}^{j}, \quad (4)$$

$$q^w = \mathrm{sum}(\tilde{T}_q) = \sum_{j} \tilde{t}_q^{j}. \quad (5)$$

However, by using only the first-level term-level transformer encoder, the model ignores some non-semantic information about the documents, such as the click signals and the displayed positions. Unlike existing personalized search approaches, which mostly use only the clicked documents to capture the user's interests, we argue that in our case the unclicked documents can also provide useful information to help model user interests. To illustrate this, we show an example in Table III, where A, B, C, D, E denote diverse subtopics of the query.

TABLE III: EXAMPLE SEARCH RESULTS AND CORRESPONDING CLICKS

If we only use the clicked documents to calculate the historical vector, the three document ranking lists will have the same representation, but they actually differ in user behaviors. The reason that documents A2, A3, A4 in Ranking 2 are unclicked is probably the redundancy between documents, and this is different from the reason for C1, D1, E1 in Ranking 3.

To distinguish different rankings with the same clicked documents, as in Table III, and to bring more non-semantic information into user modeling, we design a second, document-level transformer structure to build document and search representations. In particular, inspired by BERT [37], we add a position embedding and a click embedding to enhance the document representations. Furthermore, as we only use the terms in the titles in the first term-level transformer, the embedding vectors may be inaccurate and noisy. To better represent documents, we also enhance the document representations with a distributed embedding computed on the document contents.¹ Finally, the refined document representation is calculated through the second document-level transformer:

$$\tilde{D}^w = D^w + D^{pos} + D^{clk} + D^{rep}, \qquad D^v = \mathrm{Trm}_{\mathrm{doc}}(\tilde{D}^w), \quad (6)$$

where $D^v = [d_1^v, \ldots, d_k^v, \ldots, d_m^v]$ is the list of refined document representations; $D^w = [d_1^w, \ldots, d_k^w, \ldots, d_m^w]$ is calculated by (4); $D^{pos}$ denotes the position embedding; $D^{clk}$ denotes the click embedding; and $D^{rep}$ denotes the distributed embedding by doc2vec.

¹We simply use the doc2vec method in our experiments. However, it can be easily replaced by other methods such as BERT representations.
B. Personalization Module

In our framework, the personalization module is adopted to evaluate the document's relevance to the user's interests. In this module, we design a transformer-based structure to extract the user profile for measuring personal relevance, since it achieves significant improvements [20], [21] over traditional RNN-based encoders [16], [18].

We first aggregate the query and document representations together to capture the representation of a historical search and click behavior. Take the i-th short-term query $q_i^s$ as an example. Its representation $h_i^s$ is calculated by:

$$h_i^s = \sum_{j} d_j^v + q_i^w, \quad (7)$$

where $d_j^v$ is the document representation calculated by (6), and $q_i^w$ is the interactive representation of $q_i$ based on (5). Because we introduce the position and click signals in the second document-level transformer in (6), the resulting historical search representation $h_i^s$ can better represent user interests from both the positive (clicked documents) and negative (unclicked documents) feedback.

Then we concatenate these vectors to form the short-term historical vector list $H^s = [h_1^s, \ldots, h_i^s, \ldots, h_{|s|}^s]$ and the long-term historical vector list $H^l = [h_1^l, \ldots, h_i^l, \ldots, h_{|l|}^l]$. Short-term historical vectors may contain more information about the user intent behind the current query because the queries are in the same session. Long-term interests are more stable and reflect the general interests of the user; they can help refine short-term interests. Following existing approaches [16], [18], [20], we consider that users' long-term and short-term interests may have hierarchical structures. We conduct a hierarchical transformer to capture the final user profile as follows:

1) First, we need to capture user interests in the current session, since the user intents behind current queries may be derived from and stimulated by the search behaviors in current sessions. We construct the short-term user interests in this session to help refine the current query representation. Since the user interests may continuously evolve in the search and browsing flow during the current session, we apply a transformer encoder to integrate the historical search representations $h_i^s$ into the short-term profile. More specifically, we add a "[CLS]" token to the end of the short-term historical vector list $H^s$ and apply a position-aware transformer to capture the short-term user profile vector $u^s$, i.e.,

$$u^s = \mathrm{Trm}_{\mathrm{short}}([H^s, \mathrm{CLS}] + [H^s, \mathrm{CLS}]^{pos})[|s| + 1], \quad (8)$$

where $u^s$ is captured by slicing the last embedding vector, which corresponds to the "[CLS]" token.

2) However, utilizing only the user's behaviors in the current session may be inadequate for capturing user preferences; we need to model the user's overall interests over all her histories to reflect her general preferences and to tune her short-term interests. As we only need to build the overall long-term interests, it is not necessary to extract interest vectors for each session. Therefore, we simply apply a transformer to integrate the user's search representations across all histories. In particular, we add $u^s$ to the end of the long-term historical vector list $H^l$ and apply the long-term transformer on the concatenated list to obtain the representation:

$$u^l = \mathrm{Trm}_{\mathrm{long}}([H^l, u^s] + [H^l, u^s]^{pos})[|l| + 1], \quad (9)$$

where $u^l$ is obtained the same way as $u^s$.

3) Finally, having constructed the long-term user profile vector $u^l$, the short-term user profile vector $u^s$, and the integrated query vector $q^{int}$, following previous approaches such as HTPS [20] and PEPS [19], we apply gate functions to aggregate them into the final refined query representation:

$$\mathrm{gate}(x, y) = zx + (1 - z)y, \quad z = \sigma(\phi([x; y])),$$
$$u^f = \mathrm{gate}(u^s, u^l), \quad q^s = \mathrm{gate}(q^{int}, u^s),$$
$$q^l = \mathrm{gate}(q^{int}, u^l), \quad q^f = \mathrm{gate}(q^s, q^l), \quad (10)$$

where $u^f$ denotes the final profile vector; $q^s$ and $q^l$ denote the refined query representations enhanced with the short-term and long-term profiles, respectively; and $q^f$ denotes the final refined query representation.

Previous approaches [19], [20], [21] have shown that calculating the similarities between the scored document representation and multiple query representations mixed with different history interests can be beneficial for personalization performance. Since we have already obtained the initial, integrated, and refined query representations in (3) and (10), we can calculate several representation-based similarities between vectors with a similarity function $s^R$, following previous approaches. In this paper, we adopt cosine similarity as the $s^R$ function, but it can be any other function such as Euclidean distance or dot product. Inspired by PEPS [19], we also apply K-NRM [40] to obtain an interaction-based similarity $s^I$ between the initial and integrated query and document vectors to better model the ad hoc similarity between them. Then we use an MLP layer to integrate these similarities. Finally, the personalization score is calculated by:

$$S^{per}(d|q, U) = \phi([s^I(d^0, q^0), s^I(d^{int}, q^{int}), s^R(d^{int}, q^{int}), s^R(d^{int}, q^s), s^R(d^{int}, q^l), s^R(d^{int}, q^f), \psi(F_{q,d})]), \quad (11)$$

where $F_{q,d}$ denotes the feature vector, and $\phi, \psi$ denote MLP layers.
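The gate functions in (10) are compact enough to show directly. The sketch below is illustrative, not the authors' code: it assumes $u^s$, $u^l$, and $q^{int}$ have already been produced by Trm_short, Trm_long, and the query encoder, and it instantiates one linear gate network $\phi$ per combination, which is one plausible reading of (10) (whether $\phi$ is shared across the gates is not specified in this excerpt).

```python
import torch
import torch.nn as nn

class GatedAggregation(nn.Module):
    """Eq. (10): gate(x, y) = z*x + (1-z)*y with z = sigmoid(phi([x; y]))."""
    def __init__(self, dim=100):
        super().__init__()
        # One gate network per combination; phi is a single linear layer here.
        self.phi = nn.ModuleDict({name: nn.Linear(2 * dim, 1)
                                  for name in ["u_f", "q_s", "q_l", "q_f"]})

    def gate(self, name, x, y):
        z = torch.sigmoid(self.phi[name](torch.cat([x, y], dim=-1)))
        return z * x + (1 - z) * y

    def forward(self, u_s, u_l, q_int):
        u_f = self.gate("u_f", u_s, u_l)     # final user profile vector
        q_s = self.gate("q_s", q_int, u_s)   # query refined by short-term profile
        q_l = self.gate("q_l", q_int, u_l)   # query refined by long-term profile
        q_f = self.gate("q_f", q_s, q_l)     # final refined query representation
        return u_f, q_f

# Toy usage with 8-dimensional vectors.
agg = GatedAggregation(dim=8)
u_f, q_f = agg(torch.randn(8), torch.randn(8), torch.randn(8))
print(u_f.shape, q_f.shape)  # torch.Size([8]) torch.Size([8])
```

The gated vectors then feed, together with the $s^R$ and $s^I$ similarities, into the MLP of (11).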
C. Diversification Module

The diversification module in the PER+DIV framework focuses on improving the novelty of candidate documents and the diversity of results. As it is hard to capture the subtopics in real search engines, we measure the diversification score in an implicit manner.

To model the diversity of search results, we use the same method introduced in Section III-A to conduct interactions between the current candidate documents. Following existing methods such as DESA [9], we do not take the current query into consideration while modeling document novelty. Thus we omit the query part in (5), i.e., we set $T_q = \emptyset$ and only conduct interactions between documents. We use a second-level transformer with parameters shared with (6) to conduct interactions between document representations and obtain the interactive document representation matrix $D^v$. As we do not have click information for the candidate documents of the current query, we regard them all as clicked.

Previous approaches such as DESA [9] and Daletor [12] mostly slice document d's embedding vector from the representation matrix $D^v$ and apply an MLP layer to it to calculate the diversification score. We believe that this simple method cannot explicitly model the dissimilarity between documents. A more convincing and natural way is to compare all the candidate documents to measure their uniqueness. Thus, in our PER+DIV framework, we adopt the Neural Tensor Network (NTN) method, which computes document similarities inside the model to directly capture document novelty. However, different from the original NTN [36] method, which only measures the dissimilarity between the current document and the previously selected documents, we calculate the dissimilarities among all candidate documents.

To capture document dissimilarity in different aspects, we adapt the NTN method and apply $z$ trainable weight matrices $W_i \in \mathbb{R}^{\alpha \times \alpha}$ in our model, where $\alpha$ denotes the embedding length. Given a matrix $W_i$, we calculate the multiple-perspective similarities between documents using the above representation matrix $D^v$:

$$S_i = \mathrm{softmax}(D^v \cdot W_i \cdot D^{v\top}), \quad (12)$$

where $S_i \in \mathbb{R}^{m \times m}$ denotes the similarity matrix between documents and the softmax function is applied row-wise for normalization. This similarity calculation can be regarded as a generalized dot product: if we set $W_i$ to the identity matrix, it reduces to the dot-product similarity between documents, and as $W_i$ changes, the similarity calculation can focus on different dimensions of the document representations. As we devise $z$ trainable matrices, we can evaluate the similarities among documents from different perspectives. We therefore get $z$ similarity matrices $S_{[1:z]}$ by applying (12) $z$ times:

$$S_{[1:z]} = \mathrm{softmax}(D^v \cdot W_{[1:z]} \cdot D^{v\top}).$$

To capture the novelty of the scored document $d$, we slice $z$ similarity vectors $s_{[1:z]} = S_{[1:z]}[\mathrm{index}(d)] \in \mathbb{R}^{m \times z}$ from the whole similarity tensor. Then we aggregate the similarity vectors to obtain the document novelty in each aspect. The aggregation function can be the sum, the average, etc. We use a linear combination and the $\tanh(\cdot)$ function to align the range with the cosine similarity in the personalization module, as it yields the best performance:

$$\xi = \tanh(\psi(s_{[1:z]})),$$

where $\xi \in \mathbb{R}^z$ refers to the document novelty in the $z$ aspects. Finally, we use an MLP layer to compute the final diversification score, i.e.,

$$S^{div}(d|D) = \phi(\xi) = \phi(\tanh(\psi(s_{[1:z]}))). \quad (13)$$

D. Combination

To better integrate personal relevance and document novelty, the combination weight between the personalization and diversification scores should be determined according to the query and the user. Intuitively, we should emphasize the personal relevance part if the current query is highly relevant to the user's interests. Otherwise, if the current query has little to do with the user's interests, we should provide more diversified results to cover the user's potential intents as much as possible. To implement this idea, we calculate the similarity between the final profile vector $u^f$ and the integrated query representation $q^{int}$ and use it to determine the combination weight in (1) between the personalization and diversification scores:

$$\lambda(q, U) = s^R(u^f, q^{int}).$$

In this paper we use cosine similarity for $s^R$; more complex combinations can be explored in future work.

E. Time Complexity

As described above, the PER+DIV framework can be divided into the common hierarchical transformer module, the personalization module, and the diversification module. We analyze their complexity respectively. Preliminarily, we assume that the embedding length is $\alpha$ for all vectors, and the inner hidden size of the transformer's FFN layer is $\beta$.

First, in the hierarchical transformer module, the length of the whole term embedding list is $(m+1)M$ for one query, where $M$ denotes the maximum length among the query and the titles of the documents, and $m$ denotes the maximum number of candidate documents. Therefore, the overall time complexity of the hierarchical transformer module is $O(m^2 M^2 \alpha + mM\alpha\beta)$ for one query.

Second, in the personalization module, there are two one-layer transformers, so the overall complexity is $O(|s|^2\alpha + |s|\alpha\beta + |l|^2\alpha + |l|\alpha\beta) = O((|s|^2 + |l|^2)\alpha + (|s| + |l|)\alpha\beta)$, where $|s|$ and $|l|$ denote the lengths of the user's short history $H^s$ and long history $H^l$.

Third, in the diversification module, the time complexity of the NTN module is $O(z(m^2\alpha + m\alpha^2))$, corresponding to the two matrix multiplication operations in (12), where $z$ is the number of trainable matrices $W_i$.

In summary, the overall time complexity of PER+DIV is $O((|s| + |l|)(m^2 M^2 \alpha + mM\alpha\beta) + (|s|^2 + |l|^2)\alpha + (|s| + |l|)\alpha\beta + z(m^2\alpha + m\alpha^2))$. Note that the complexity of the diversification module is relatively low compared to the personalization module and the hierarchical transformer module. Therefore, the time complexity of PER+DIV is comparable to the HTPS method, which also adopts a hierarchical transformer structure to construct user profiles.
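To make the scoring pipeline of Sections III-C and III-D concrete, here is a sketch of the multi-perspective similarity (12), the novelty score (13), and the λ-weighted combination. The transpose in $D^v \cdot W_i \cdot D^{v\top}$ follows from $S_i \in \mathbb{R}^{m \times m}$; treating $\psi$ as a linear layer and using the mixture $\lambda S^{per} + (1-\lambda) S^{div}$ for (1), whose definition is not reproduced in this excerpt, are our assumptions.

```python
import torch
import torch.nn as nn

class NTNDiversity(nn.Module):
    """Eqs. (12)-(13): z bilinear similarity matrices over all candidates,
    sliced per document and mapped to a novelty score."""
    def __init__(self, dim=100, z=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(z, dim, dim) * 0.01)  # z matrices W_i
        self.psi = nn.Linear(z, z)   # linear combination inside tanh(.)
        self.phi = nn.Linear(z, 1)   # final layer producing S^div

    def forward(self, d_v):
        # d_v: (m, dim) interactive document representations D^v.
        # S_[1:z] = softmax(D^v . W_i . D^v^T), row-normalized: (z, m, m).
        S = torch.softmax(torch.einsum("md,zde,ne->zmn", d_v, self.W, d_v), dim=-1)
        s = S.permute(1, 2, 0)                    # per-document slices: (m, m, z)
        xi = torch.tanh(self.psi(s.sum(dim=1)))   # aggregate over candidates: (m, z)
        return self.phi(xi).squeeze(-1)           # S^div for every candidate: (m,)

def combine(s_per, s_div, u_f, q_int):
    """Eq. (1)-style mixture with lambda(q, U) = cos(u^f, q^int)."""
    lam = torch.cosine_similarity(u_f, q_int, dim=0)
    return lam * s_per + (1 - lam) * s_div

# Toy usage: novelty scores for 5 candidates with 8-dimensional vectors.
div = NTNDiversity(dim=8, z=4)
print(div(torch.randn(5, 8)).shape)  # torch.Size([5])
```

Setting each $W_i$ to the identity recovers the plain dot-product similarity, matching the remark after (12).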
where $f(d)$ is an abbreviation of $f(d|q, U, D)$. Given the predicted probability and the true label $p_{ij}$, the loss is calculated by the weighted cross-entropy function as follows:

$$\mathcal{L}_{unified} = \mathrm{CE}(\hat{p}_{ij}, p_{ij}) = -\sum |\Delta_{ij}| \left( p_{ij} \log(\hat{p}_{ij}) + (1 - p_{ij}) \log(1 - \hat{p}_{ij}) \right), \quad (15)$$

where $\Delta_{ij}$ denotes the change in the metric, such as MAP, when swapping the positions of $d_i$ and $d_j$ in the ranking list.

B. The Separate Method

As we stated above, the reason why a user clicks a document can be affected by both its relevance to the user's interests and its novelty, and we measure the personalization and diversification scores separately in our model. However, this makes it difficult to train the model using clicks as mixed signals as in the unified training method. Thus, we propose another training method, which aims to separate personalization and diversification in the click behavior and train each module respectively.

First, we estimate which additional documents would have been clicked if the user did not consider their novelty. To do this, we calculate the similarity between each unclicked document and the clicked documents.² If the average similarity is higher than a threshold τ, we consider that this document should have been clicked based on personalization alone, and label it as a pseudo click. The loss of the personalization module is then calculated by the same score function as in (14) and (15), but we remove the pseudo clicked documents from the negative samples and add them to the positive ones. As we only consider personalization here, the predicted probability is also calculated from the personalization score $S^{per}$.

²We calculate the similarities between documents mentioned in the rest of this paper using their doc2vec representations, unless otherwise specified.

Then, we design the diversification loss. The reason why users did not click these pseudo clicked documents may be that they are redundant; in other words, the clicked documents are more diversified than the pseudo clicked ones. Similar to (14) and (15), we design the diversification loss, but use only the pseudo clicked documents as negative documents, rather than all the unclicked documents. The predicted probability is also calculated from $S^{div}$:

$$\hat{p}^{div}_{ij} = \sigma(S^{div}(d_i) - S^{div}(d_j)), \qquad \mathcal{L}_{div} = \mathrm{CE}(\hat{p}^{div}_{ij}, p^{div}_{ij}).$$

We show the formation of positive and negative samples for the different loss functions in Table IV. The final loss of the separate method is the combination of $\mathcal{L}_{per}$ and $\mathcal{L}_{div}$:

$$\mathcal{L}_{separate} = \mathcal{L}_{per} + \mu \mathcal{L}_{div}, \quad (16)$$

where $\mu$ is a hyper-parameter.
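A compact sketch of the pseudo-click labeling that drives the separate method (illustrative function and variable names; the doc2vec inputs and the threshold value τ = 0.8 follow the text):

```python
import numpy as np

def pseudo_clicks(doc_vecs, clicked, tau=0.8):
    """Mark an unclicked document as a pseudo click when its average
    cosine similarity to the clicked documents exceeds tau."""
    vecs = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    clicked_vecs = vecs[clicked == 1]
    sims = vecs @ clicked_vecs.T          # (m, n_clicked) cosine similarities
    avg = sims.mean(axis=1)
    return (clicked == 0) & (avg > tau)   # boolean pseudo-click mask

# Toy usage: document 2 is near the clicked document 0, so it is labeled
# a pseudo click; document 1 is dissimilar and stays a plain negative.
vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
clicked = np.array([1, 0, 0])
print(pseudo_clicks(vecs, clicked))  # [False False  True]
```

Documents flagged this way join the positives of $\mathcal{L}_{per}$ and serve as the negatives of $\mathcal{L}_{div}$, matching the sample formation summarized in Table IV.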
V. EXPERIMENTAL SETUP

A. Datasets

There are some public datasets for personalized search, such as the AOL dataset and the WEBIS dataset. However, the candidate documents of queries in AOL are generated by vanilla retrieval methods such as BM25 and are not provided by real search engines; users may not have seen them in real situations. As we focus on user behaviors in this paper, such a pseudo dataset is not appropriate. Similarly, the WEBIS dataset lacks the original ranking results, which also makes it unsuitable for our experiments.

Therefore, we conduct experiments on a search log dataset from a commercial search engine. The basic statistics of this commercial dataset are shown in Table V. The searches in the dataset date from 1st Jan. 2013 to 28th Feb. 2013. We regard the first four weeks as the user's history and run our experiments on the last two weeks. We extract the documents from the HTML source of the web pages. Following [16], we use 30 minutes of user inactivity as the boundary to divide sessions. We regard a document click with more than 30 seconds of dwelling time as a satisfied click, which eliminates the effects of ranking bias as much as possible. For each user, we divide the training, validation, and test sets by sessions with a 4:1:1 ratio.

B. Baselines

1) Adhoc Ranking Models:

Org. We directly use the original ranking of the commercial dataset as a baseline.

K-NRM [40]. We take K-NRM as the adhoc search baseline. K-NRM is a kernel-based neural ranking model; it uses k kernels to calculate the interaction-based similarity between document and query. We take k = 11 and use the same LambdaRank algorithm to train K-NRM.

2) Personalized Search Models:

SLTB [15]. SLTB is a feature-based personalized search model. It extracts 102 features from the query history, covering topic features, time features, etc.

P-Click [14]. P-Click assumes users will click the same documents that they clicked most often for the same query before. It ranks the documents based on the number of historical clicks made by the same user.

HRNN [16]. HRNN uses hierarchical RNNs to construct long-term and short-term user profiles and adopts an attention mechanism to integrate the profiles with the current query. The personalization score is calculated from query-document matching, profile-document matching, and a feature-based score.

HTPS [20]. HTPS is one of the state-of-the-art context-aware personalized search baselines. It uses a hierarchical transformer to disambiguate the query and designs a next-query-prediction language model to help training.

PEPS [19]. PEPS is another state-of-the-art context-aware personalized baseline. It constructs a personalized word embedding for each user and rebuilds the query representation with his/her unique embedding matrix.

3) Search Result Diversification Models:

MMR [3]. MMR is the representative unsupervised search result diversification baseline. It scores a document by the linear combination of document-query relevance and document-document dissimilarity. We try combination rates λ = 0.5 and 0.7 in our experiments.

ORG+MMR. It uses $1/\sqrt{r_i}$ as the relevance score in MMR, following [41], given the fact that the original ranking quality is quite good but the ranking scores are not available to us.

DESA-IM [9]. DESA-IM is a transformer-based implicit model. We use the implicit part of DESA to build this model; it applies a transformer encoder to model the interactions between candidate documents. We use a pairwise cross-entropy loss function to train this model.

4) Personalized and Diversified Search Models:

PEPS+MMR. PEPS+MMR is a simple pipeline model that integrates personalization and diversification by replacing the relevance score in MMR with the score calculated by the PEPS model. We use λ = 0.7 in the PEPS+MMR baseline.

C. Implementation Details

Our model PER+DIV³ is trained via both the unified method, PER+DIV(u), and the separate method, PER+DIV(s). The size m of the candidate document sets is set to 50. The word embedding size α is 100. The inner length of the transformer FFN β is 256. The number of heads in all transformers is 6, and the number of layers is set to 2. The number of kernels in K-NRM is 11. For the diversification module, we use z = 4 as it yields the best performance. We adopt τ = 0.8 for the cosine similarity and μ = 0.5 for the separate method. For the other models, we use the configurations in their papers to conduct experiments.

For all of the supervised methods, we tune the learning rate r from $10^{-7}$ to $10^{-1}$ and adopt the Adam optimizer to train the model.

³https://github.com/rucliujn/PER-DIV

D. Evaluation Metrics

We use three kinds of evaluation metrics: unified metrics, personalized metrics, and diversified metrics.

1) Unified Metrics: We use metrics based on users' satisfied clicks, such as MAP, MRR, and P@1, as our unified metrics. These metrics are calculated as follows:

$$\mathrm{MAP} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{c_i} \sum_{j=1}^{c_i} \frac{j}{p_i^j},$$

$$\mathrm{MRR} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{p_i^1},$$

$$\mathrm{P@1} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{I}_{[p_i^1 = 1]},$$

where N is the number of queries, $c_i$ is the number of user clicks for query i, and $p_i^j$ is the position of the j-th click of query i.

2) Personalized Metrics: Because we cannot obtain users' real intents or interests, we need to design pseudo judgments to measure personalization performance. We replace the user click label with the union of the real satisfied clicks and the pseudo clicks we design in Section IV and calculate the MAP as the personalized metric "P-MAP" (Personalized MAP). Because these additional pseudo clicked documents are very similar to the originally clicked documents, users should click them if document redundancy is not considered.

3) Diversified Metrics: Evaluating diversity is very hard in our experiments because we do not have human-created intent-aware relevance labels for queries in the log. Hence we mine subtopics for queries and assess the relevance between documents and subtopics. We apply two methods to extract the subtopics.

In the first way, following [23], we regard the extension queries of a query q in the query corpus as the subtopics of q.
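The satisfied-click metrics of Section V-D1 can be computed directly from the click positions; below is a minimal sketch assuming a hypothetical input format of one ascending list of 1-based click ranks per query.

```python
def click_metrics(click_positions):
    """MAP, MRR, and P@1 over queries, where each entry lists the
    1-based ranks of that query's satisfied clicks in ascending order."""
    n = len(click_positions)
    # Average precision per query: precision at the j-th click is j / p_i^j.
    ap = [sum(j / p for j, p in enumerate(pos, start=1)) / len(pos)
          for pos in click_positions]
    rr = [1.0 / pos[0] for pos in click_positions]          # reciprocal rank
    p1 = [1.0 if pos[0] == 1 else 0.0 for pos in click_positions]
    return sum(ap) / n, sum(rr) / n, sum(p1) / n

# Two queries: clicks at ranks (1, 3) and (2,).
print(click_metrics([[1, 3], [2]]))  # (0.666..., 0.75, 0.5)
```

For the toy input, MAP is (5/6 + 1/2) / 2 ≈ 0.667, matching the printed values.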
TABLE VII: OVERALL PERFORMANCES OF MODELS

TABLE VIII: THE NUMBER OF DOCUMENTS CHANGED FROM UNCLICK LABEL TO PSEUDO CLICK LABEL

For PEPS and PER+DIV(u), we use the user's clicks as a mixed label to train the model, so the MAP results remain unchanged for these two models. In PER+DIV(s), the loss function is related to the threshold τ, and the results of both metrics change with τ. The MAP results show that as τ increases, the number of pseudo clicked documents decreases and the performance of our PER+DIV(s) model becomes closer to PER+DIV(u). These results suggest that the simple labeling of pseudo clicks based on cosine similarity in PER+DIV(s) may be harmful to the final performance; we need a more accurate way to mark the clicks that are due to personalization alone. For the P-MAP results, we can observe that our PER+DIV(s) model outperforms PER+DIV(u) regardless of τ, which shows that the separate training method can improve personalization performance. Furthermore, the improvement over PEPS in P-MAP verifies the benefit of exploiting the unclicked documents.
TABLE IX: PERFORMANCE OF MODELS IN S@K AND ERR-IA@5

TABLE X: PERFORMANCE OF MODELS IN THE ABLATION STUDY

These results (Table IX) show that PEPS+MMR achieves the best performance in S@k, as it reranks the search results in a pipelined way and MMR directly uses document similarity to model diversification. Excluding PEPS+MMR, our PER+DIV(u) outperforms PEPS by a 1.7% relative improvement in terms of S@3. The results on both the subtopic-based (ERR-IA) and non-subtopic (S) metrics show that the diversification module of our model has positive effects on the final rankings and can improve the diversity of the top results.

D. Ablation Study

In this section, we conduct an ablation study of the main modules in PER+DIV to verify their effectiveness. All these ablation models are trained with the unified method. The models are as follows:

w/o. doc Trm. We remove the document-level transformer Trm_doc of Section III-A.

w/o. per Trm. We remove the two transformer structures Trm_short and Trm_long of the personalization module in Section III-B.

w/o. NTN. We remove the neural tensor network from the diversification module and calculate the diversification score in the same way as SetRank [43] and DESA [9]:

$$S^{div}(d|D) = \phi(D^v[\mathrm{index}(d)]). \quad (17)$$

w/o. DIV, w/o. PER. We remove one of the diversification / personalization modules from PER+DIV.

w/o. COMB. We remove the combination module measuring the weight λ(q, U) from our framework and calculate the score by simply adding $S^{per}$ and $S^{div}$ together.

w/o. INT. We remove the interaction-based scores $s^I(d^0, q^0)$ and $s^I(d^{int}, q^{int})$ from the final personalization score calculation in (11).

w. BERT. Since pre-trained language models have achieved great performance in other information retrieval tasks, in this ablation model we try to incorporate BERT into our PER+DIV framework. However, due to the limitations of BERT's input length and the long user histories, we can only use BERT to help calculate the relevance between the current query and the scored documents. More specifically, we add a relevance score $S^{BERT}(d, q)$, calculated by a BERT-based cross-encoder, to (11).

The ablation results are shown in Table X. All these ablation models underperform our PER+DIV(u) model. Using only the diversification module leads to the worst results among the ablation models, which indicates that it is necessary to exploit the click history. The w/o. DIV results, using only the personalization score, are lower than those of the unified model but outperform all other baselines. This demonstrates that considering the unclicked documents is useful in improving result satisfaction. The results of the w/o. NTN model show that the structure designed for the diversification module actually benefits model performance; the SetRank-style scoring in (17) does not model dissimilarity explicitly. The low results of PER+DIV w/o. doc Trm may be explained by its failure to capture the click and position information in the term-level transformer. Removing the personalization transformers in PER+DIV w/o. per Trm also leads to a decrease in performance, especially in the personalization metric P-MAP, which indicates the effectiveness of the designed personalization module in modeling personal relevance. The results of w/o. COMB show that a vanilla summing strategy cannot achieve the best performance in the unified metrics; we need to take the user's search history and the current query into consideration to better estimate the combination weight. From the slightly decreased results of PER+DIV w/o. INT compared to the full PER+DIV model, we can conclude that the main improvements of our proposed framework come from the designed hierarchical-transformer-based representation module, the personalization module, and the diversification module. Its higher results on P-MAP may be due to the fact that removing the two ad hoc interaction-based scores makes the ranking list more personalized. The BERT-enhanced model PER+DIV w. BERT shows only comparable results to the original PER+DIV model. The reason may be that the core of our problem is to integrate personalization and diversification; simply improving the adhoc relevance will not significantly improve the performance.

VII. CONCLUSION

In this paper, we proposed an integrated personalized and diversified search framework, PER+DIV, to enhance both personal relevance and document novelty. We adopted a hierarchical transformer structure to extract information from historical logs and current candidates and to calculate the personalization and diversification scores of a document. These scores are then integrated through a combination weight estimated according to the similarity of the query and the user profile. We put forward two different training methods, treating the user's clicks in mixed and separate ways respectively, to better train our model. Experimental results showed that our model can significantly outperform all personalized search and search result diversification baselines in the unified metrics. This paper shows that personalization and result diversification are two complementary approaches to dealing with ambiguous queries that can be combined. Our work can be applied in web search situations to provide more satisfying results
for users. However, due to the high complexity of the PER+DIV framework, it can only be utilized in the final re-ranking stage.

There is still potential for improvement in combining personalization and diversification in ranking. User click behavior in web search is quite noisy and can be caused by a variety of elements, including both personal interests and document novelty. Therefore, it may be inaccurate to use clicks to evaluate personalization and diversification simultaneously. A better way may be to construct a new dataset that contains accurate signals for both sides.

REFERENCES

[1] C. Silverstein, M. R. Henzinger, H. Marais, and M. Moricz, "Analysis of a very large web search engine query log," ACM SIGIR Forum, vol. 33, no. 1, pp. 6–12, 1999.
[2] Y. Yano, Y. Tagami, and A. Tajima, "Quantifying query ambiguity with topic distributions," in Proc. 25th ACM Int. Conf. Inf. Knowl. Manage., 2016, pp. 1877–1880.
[3] J. G. Carbonell and J. Goldstein, "The use of MMR, diversity-based reranking for reordering documents and producing summaries," in Proc. 21st Annu. Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, 1998, pp. 335–336.
[4] R. L. Santos, C. Macdonald, and I. Ounis, "Exploiting query reformulations for web search result diversification," in Proc. 19th Int. Conf. World Wide Web, New York, NY, USA, 2010, pp. 881–890.
[5] R. L. Santos, "Explicit web search result diversification," ACM SIGIR Forum, vol. 47, no. 1, pp. 67–68, Jun. 2012.
[6] Y. Zhu, Y. Lan, J. Guo, X. Cheng, and S. Niu, "Learning for search result diversification," in Proc. 37th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2014, pp. 293–302.
[7] L. Xia, J. Xu, Y. Lan, J. Guo, and X. Cheng, "Learning maximal marginal relevance model via directly optimizing diversity evaluation measures," in Proc. 38th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2015, pp. 113–122.
[8] Z. Jiang, J.-R. Wen, Z. Dou, W. X. Zhao, J.-Y. Nie, and M. Yue, "Learning to diversify search results via subtopic attention," in Proc. 40th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2017, pp. 545–554.
[9] X. Qin, Z. Dou, and J.-R. Wen, "Diversifying search results using self-attention network," in Proc. 29th ACM Int. Conf. Inf. Knowl. Manage., New York, NY, USA, 2020, pp. 1265–1274.
[10] J. Liu, Z. Dou, X. Wang, S. Lu, and J.-R. Wen, "DVGAN: A minimax game for search result diversification combining explicit and implicit features," in Proc. 43rd Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2020, pp. 479–488.
[11] S. Yigit-Sert, I. S. Altingovde, C. Macdonald, I. Ounis, and Özgür Ulusoy, "Supervised approaches for explicit search result diversification," Inf. Process. Manage., vol. 57, no. 6, 2020, Art. no. 102356.
[12] L. Yan, Z. Qin, R. K. Pasumarthi, X. Wang, and M. Bendersky, "Diversification-aware learning to rank using distributed representation," in Proc. Web Conf., 2021, pp. 127–136.
[13] Z. Su, Z. Dou, Y. Zhu, X. Qin, and J. Wen, "Modeling intent graph for search result diversification," in Proc. 44th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, 2021, pp. 736–746.
[14] Z. Dou, R. Song, and J.-R. Wen, "A large-scale evaluation and analysis of personalized search strategies," in Proc. 16th Int. Conf. World Wide Web, New York, NY, USA, 2007, pp. 581–590.
[15] P. N. Bennett et al., "Modeling the impact of short- and long-term behavior on search personalization," in Proc. 35th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2012, pp. 185–194.
[16] S. Ge, Z. Dou, Z. Jiang, J.-Y. Nie, and J.-R. Wen, "Personalizing search results using hierarchical RNN with query-aware attention," in Proc. 27th ACM Int. Conf. Inf. Knowl. Manage., New York, NY, USA, 2018, pp. 347–356.
[17] S. Lu, Z. Dou, X. Jun, J.-Y. Nie, and J.-R. Wen, "PSGAN: A minimax game for personalized search with limited and noisy click data," in Proc. 42nd Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2019, pp. 555–564.
[18] Z. Ma, Z. Dou, G. Bian, and J.-R. Wen, "PSTIE: Time information enhanced personalized search," in Proc. 27th ACM Int. Conf. Inf. Knowl. Manage., New York, NY, USA, 2020, pp. 1075–1084.
[19] J. Yao, Z. Dou, and J.-R. Wen, "Employing personal word embeddings for personalized search," in Proc. 43rd Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2020, pp. 1359–1368.
[20] Y. Zhou, Z. Dou, and J.-R. Wen, "Encoding history with context-aware representation learning for personalized search," in Proc. 43rd Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2020, pp. 1111–1120.
[21] Y. Zhou, Z. Dou, Y. Zhu, and J. Wen, "PSSL: Self-supervised learning for personalized search with contrastive sampling," in Proc. 30th ACM Int. Conf. Inf. Knowl. Manage., 2021, pp. 2749–2758.
[22] Y. Zhou, Z. Dou, and J.-R. Wen, "Enhancing re-finding behavior with external memories for personalized search," in Proc. 13th Int. Conf. Web Search Data Mining, New York, NY, USA, 2020, pp. 789–797.
[23] F. Radlinski and S. Dumais, "Improving personalized web search using result diversification," in Proc. 29th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2006, pp. 691–692.
[24] D. Vallet and P. Castells, "Personalized diversification of search results," in Proc. 35th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2012, pp. 841–850.
[25] S. Liang, Z. Ren, and M. de Rijke, "Personalized search result diversification via structured learning," in Proc. 20th ACM SIGKDD Int. Conf. Knowl. Discov. Data Mining, New York, NY, USA, 2014, pp. 751–760.
[26] C. Burges et al., "Learning to rank using gradient descent," in Proc. 22nd Int. Conf. Mach. Learn., New York, NY, USA, 2005, pp. 89–96.
[27] M. Volkovs, "Context models for web search personalization," 2015, arXiv:1502.00527.
[28] M. J. Carman, F. Crestani, M. Harvey, and M. Baillie, "Towards query log based personalization using topic models," in Proc. 19th ACM Int. Conf. Inf. Knowl. Manage., New York, NY, USA, 2010, pp. 1849–1852.
[29] J. Yao, Z. Dou, J. Xu, and J. Wen, "RLPS: A reinforcement learning-based framework for personalized search," ACM Trans. Inf. Syst., vol. 39, no. 3, pp. 1–29, 2021.
[30] C. Deng, Y. Zhou, and Z. Dou, "Improving personalized search with dual-feedback network," in Proc. 15th ACM Int. Conf. Web Search Data Mining, 2022, pp. 210–218.
[31] Y. Zhou, Z. Dou, B. Wei, R. Xie, and J. Wen, "Group based personalized search by integrating search behaviour and friend network," in Proc. 44th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, 2021, pp. 92–101.
[32] V. Dang and W. B. Croft, "Diversity by proportionality: An election-based approach to search result diversification," in Proc. 35th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2012, pp. 65–74.
[33] S. Hu, Z. Dou, X. Wang, T. Sakai, and J.-R. Wen, "Search result diversification based on hierarchical intents," in Proc. 24th ACM Int. Conf. Inf. Knowl. Manage., New York, NY, USA, 2015, pp. 63–72, doi: 10.1145/2806416.2806455.
[34] C. L. Clarke, M. Kolla, and O. Vechtomova, "An effectiveness measure for ambiguous and underspecified queries," in Proc. 2nd Int. Conf. Theory Inf. Retrieval: Adv. Inf. Retrieval Theory, 2009, pp. 188–199, doi: 10.1007/978-3-642-04417-5_17.
[35] J. I. Marden, Analyzing and Modeling Rank Data. Boca Raton, FL, USA: CRC Press, 1996.
[36] L. Xia, J. Xu, Y. Lan, J. Guo, and X. Cheng, "Modeling document novelty with neural tensor network for search result diversification," in Proc. 39th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2016, pp. 395–404.
[37] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," 2018, arXiv:1810.04805.
[38] A. Vaswani et al., "Attention is all you need," in Proc. 31st Int. Conf. Adv. Neural Inf. Process. Syst., 2017, vol. 30, pp. 6000–6010.
[39] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," 2013, arXiv:1301.3781.
[40] C. Xiong, Z. Dai, J. Callan, Z. Liu, and R. Power, "End-to-end neural ad-hoc ranking with kernel pooling," in Proc. 40th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2017, pp. 55–64.
[41] Z. Dou, S. Hu, K. Chen, R. Song, and J.-R. Wen, "Multi-dimensional search result diversification," in Proc. 4th ACM Int. Conf. Web Search Data Mining, New York, NY, USA, 2011, pp. 475–484.
[42] O. Chapelle, D. Metlzer, Y. Zhang, and P. Grinspan, "Expected reciprocal rank for graded relevance," in Proc. 30th ACM Int. Conf. Inf. Knowl. Manage., 2009, pp. 621–630.
[43] L. Pang, J. Xu, Q. Ai, Y. Lan, X. Cheng, and J. Wen, "SetRank: Learning a permutation-invariant ranking model for information retrieval," in Proc. 43rd Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, New York, NY, USA, 2020, pp. 499–508.
Jiongnan Liu received the BE degree in computer science and technology in 2017 from the Renmin University of China, Beijing, China, where he is currently working toward the PhD degree with the Gaoling School of Artificial Intelligence. His research interests include search result diversification, personalized search, and product search.

Jian-Yun Nie (Member, IEEE) is currently a professor with the University of Montreal, Montreal, QC, Canada. He has been an invited professor and researcher with several universities and companies. He has authored or coauthored more than 150 papers on information retrieval and natural language processing in journals and conferences. He was the general co-chair of the ACM SIGIR Conference in 2011. He is currently on the editorial boards of seven international journals.