
Applied Sciences | Review

Review: A Survey on Objective Evaluation of Image Sharpness

Mengqiu Zhu 1, Lingjie Yu 1,*, Zongbiao Wang 2, Zhenxia Ke 1 and Chao Zhi 1,3,*

1 School of Textile Science and Engineering, Xi’an Polytechnic University, Xi’an 710048, China
2 Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia
3 Key Laboratory of Functional Textile Material and Product, Xi’an Polytechnic University, Ministry of Education, Xi’an 710048, China
* Correspondence: [email protected] (L.Y.); [email protected] (C.Z.)
Abstract: Establishing an accurate objective evaluation metric of image sharpness is crucial for image
analysis, recognition and quality measurement. In this review, we highlight recent advances in no-
reference image quality assessment research, divide the reported algorithms into four groups (spatial
domain-based methods, spectral domain-based methods, learning-based methods and combination
methods) and outline the advantages and disadvantages of each method group. Furthermore, we
conduct a brief bibliometric study with which to provide an overview of the current trends from 2013
to 2021 and compare the performance of representative algorithms on public datasets. Finally, we
describe the shortcomings and future challenges in the current studies.

Keywords: evaluation metric; image sharpness; no-reference; image quality; evaluation algorithm

Citation: Zhu, M.; Yu, L.; Wang, Z.; Ke, Z.; Zhi, C. Review: A Survey on Objective Evaluation of Image Sharpness. Appl. Sci. 2023, 13, 2652. https://doi.org/10.3390/app13042652

Academic Editor: Byung-Gyu Kim

Received: 1 February 2023; Revised: 12 February 2023; Accepted: 15 February 2023; Published: 18 February 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

In the overview of image quality evaluation, the common evaluation indicators [1] include image noise, image color, artifacts, sharpness, etc. Image noise evaluation methods [2] mainly rely on image spatial and temporal noise, signal-to-noise ratio and grayscale noise to obtain evaluation results. The image color evaluation methods [3] usually evaluate the color degree and uniformity of the image. The image artifact evaluation methods [4] pay more attention to chromatic aberration, distortion and vignetting factors, while the image sharpness evaluation method [5] is based on the comprehensive evaluation of the edges and details of the image and is currently one of the most popular image quality evaluation methods; it is closely related to research fields such as bionics [6], nonwoven materials [7] and medicine [8].

According to the dependence on the reference image, the evaluation methods are divided into three types: Full-Reference (FR), Reduced-Reference (RR) and No-Reference (NR) [9]. The FR method uses the distorted image and the corresponding undistorted reference image to generate the image quality score. In the RR method, the image quality is evaluated on partial information extracted using feature extraction methods. Unlike the FR and RR methods, the NR method can use the distorted image alone to complete the quality assessment. Since it is usually impossible to obtain an undistorted image for reference in practical applications, research on NR methods has become the current mainstream direction [10,11].

Image sharpness refers to the clarity of the texture and borders of the detailed parts of an image, which affects the perception of information, image acquisition and subsequent processing, especially in applications based on high-quality images [12–14]. The ideal image sharpness evaluation function should have high sensitivity, good robustness and low computational cost [15–17]. Most traditional image sharpness evaluation methods [18] are based on the spatial or spectral domain. The methods in the spatial domain mainly evaluate the image by extracting the image gradient and edge information [19,20], which have the advantages of simple calculation and high real-time
performance but are easily disturbed by noise. The methods in the spectral domain mainly
use transformation methods, such as Fourier transform and wavelet transform, to extract
image frequency features for sharpness evaluation [21]. This type of method has excellent
sensitivity but high computational complexity. In recent years, learning-based methods have emerged, evolving from machine learning to deep learning approaches [22]. At the same time, an increasing number of combination methods have been studied and developed by scholars. A combination method is usually a new method formed by combining two or more single evaluation methods in a certain relationship. Such methods incorporate the advantages of the single methods being combined and effectively improve the accuracy of quality evaluation.
Although the above-reported research has obtained fruitful results, the objective eval-
uation standards for image sharpness are still not mature enough; few evaluation methods
are suitable for most scenarios. It is unrealistic to ask one sharpness evaluation algorithm
to handle all potential images due to the sophisticated and various image textures and
features [23]. Therefore, this paper reviews and clusters the existing sharpness evaluation methods and conducts systematic comparative analyses on several representative methods, aiming to offer directions for researchers to choose or develop a sharpness evaluation algorithm for different types of images.
The paper reviews, classifies and summarizes the past decade’s sharpness evaluation
methods for no-reference images. The reviewed evaluation methods are grouped into four
categories with their evaluation results compared and the advantages and disadvantages
discussed. An outlook on the application of image sharpness in image processing is given,
and a direction for further research on sharpness evaluation methods for no-reference
images is discussed. Section 1 presents the background of the evaluation method. Section 2
summarizes the current sharpness evaluation methods by characterizing them into four
groups. Section 3 offers a bibliometric study to evaluate and compare the performance
of state-of-the-art algorithms on public datasets. Section 4 highlights the shortcomings of
current research and provides an outlook on future challenges.

2. Evaluation Methods and Analysis

The sharpness evaluation methods for no-reference images can be divided into four categories: spatial domain methods, spectral domain methods, learning methods and combination methods. The specific classification is shown in Figure 1.

Figure 1. Image sharpness evaluation classification.

2.1. Spatial Domain-Based Methods


Early work [24–26] on the sharpness evaluation of no-reference images was mainly
performed in the spatial domain. The spatial domain evaluation functions use the char-
acteristics of the image in the spatial domain to distinguish between blurred and clear
images [27]. This type of evaluation function generally evaluates images directly by calculating the relationship between image pixels and their neighbors. We mainly divide spatial domain methods into grayscale gradient-based and edge detection-based evaluation methods.

2.1.1. Grayscale Gradient-Based Methods


The grayscale gradient function is one of the most commonly used image sharpness evaluation functions; it evaluates sharpness by computing the differences between adjacent pixels of the image [28,29]. Classical methods include the
Brenner function [30], the energy gradient function [31], Laplacian function [32], Tenengrad
function [33] and so on. In addition to the classical evaluation methods based on gray
gradient mentioned above, other novel methods have also been studied. Zhan et al. [34]
used the maximum gradient and the variability of gradients to predict the quality of blurry
images in a highly consistent way with subjective scoring. Li et al. [35] proposed a blur
evaluation algorithm for no-reference images based on discrete orthogonal moments, where
gradient images are divided into equal-sized blocks and orthogonal moments are calculated
to characterize the image shape and obtain sharpness metrics. Zhang et al. [36] evaluated
the edge sharpness by calculating the grayscale values of the eight directions of the pixels.
Most of the grayscale gradient-based methods are less computationally intensive and
have high real-time performance but are susceptible to noise interference, which affects the
accuracy of the evaluation.
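The classical grayscale gradient functions named above are simple enough to sketch directly. The following is a minimal NumPy sketch; the function names and the absence of any normalization or thresholding are our simplifications, not the original formulations:

```python
import numpy as np

def brenner(img):
    # Brenner function: sum of squared differences between pixels two columns apart
    f = img.astype(float)
    d = f[:, 2:] - f[:, :-2]
    return np.sum(d ** 2)

def energy_gradient(img):
    # Energy gradient function: squared horizontal plus vertical first differences
    f = img.astype(float)
    dx = f[:, 1:] - f[:, :-1]
    dy = f[1:, :] - f[:-1, :]
    return np.sum(dx[:-1, :] ** 2) + np.sum(dy[:, :-1] ** 2)

def laplacian_energy(img):
    # Laplacian function: energy of the discrete 3x3 Laplacian response
    f = img.astype(float)
    lap = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
           - 4 * f[1:-1, 1:-1])
    return np.sum(lap ** 2)
```

All three scores grow with the strength of local intensity differences, so a defocused version of the same scene scores lower; this is also why plain noise inflates them, as noted above.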

2.1.2. Edge Detection-Based Methods

The image edge is the most basic feature of an image, referring to the discontinuity of the local characteristics of the image [37,38]. Among the edge detection-based methods, the most widely used algorithms are the Canny operator [39], the Sobel operator [40], the Prewitt operator [41], etc.

The Canny operator is a multi-level edge detection algorithm. The Sobel operator combines Gaussian smoothing and differential derivatives to detect the horizontal and vertical edges. The Prewitt operator uses the difference generated by the grayscale values of pixels in a specific region to achieve edge detection. The process of image sharpness evaluation based on the Prewitt operator is shown in Figure 2.

Figure 2. The process of the image sharpness evaluation based on the Prewitt operator.

As shown in Figure 3, an image is plotted with the results of the Canny operator, the Sobel operator and the Prewitt operator, respectively.

Figure 3. Edge detection results: (a) Original image; (b) Canny operator; (c) Sobel operator; (d) Prewitt operator.

Marziliano et al. [42] proposed a method to detect the edges of an image using the Sobel operator and utilized the image edge width as the sharpness evaluation score. Zhang et al. [43] proposed an image-filtering evaluation method based on the Sobel operator and image entropy. Liu et al. [44] used the Canny edge detection algorithm based on the activation mechanism to obtain the image edge position and direction, established the histogram of edge width and obtained the sharpness evaluation metric by weighting the average edge width. The method was proven to have good accuracy and predictive monotonicity. Chen et al. [45] used the Prewitt operator and the Gabor filter to calculate the average gradient of the image to predict the local sharpness value of a fabric surface image.

Among the edge detection-based methods, the Canny operator, the Sobel operator and the Prewitt operator are widely used, and each of these operators has its own advantages. The Canny operator is sensitive to weak edges but computationally intensive; the Sobel operator is fast but susceptible to noise interference; the Prewitt operator is better at extracting the edges of images disturbed by neighboring pixels.

2.1.3. Other Methods Based on Spatial Domain

Bahrami et al. [46] obtained the quality score by calculating the maximum local variation (MLV) of image pixels. The standard deviation of the weighted MLV distribution was used as a metric to measure sharpness. The study shows that this method is characterized by high real-time performance. Gu et al. [47] developed a sharpness model by analyzing the autoregressive (AR) model parameters point-by-point to calculate the energy and contrast differences in the locally estimated AR coefficients and then quantified the image sharpness using a percentile pool to predict the overall score. This evaluation method, which calculates local contrast and energy based on a mathematical model, is also a spatial domain method. Chang et al. [48] proposed a new independent feature similarity index to evaluate the sharpness by calculating the structure and texture differences between two images.

Niranjan et al. [49] presented a no-reference image blur metric based on the study of human blur perception for varying contrast values. The method gathered information by estimating the probability of detecting blurring at each edge in the image and then calculated the cumulative probability of blur detection to obtain the evaluation result. Lin et al. [50] proposed an adaptive sharpness evaluation algorithm, which achieved a better evaluation effect than the standard adaptive algorithm. Zhang et al. [51] presented a no-reference image quality evaluation metric based on Cartoon Texture Decomposition (CTD). Using the characteristics of CTD, the image was separated into cartoon parts with prominent edges and texture parts with noise. Then the ambiguity and noise levels were estimated respectively to predict the results.

The spatial domain-based methods require less computation, but the above methods rely much on the details of the image and are easily affected by noise.
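The Figure 2 pipeline (filter the image with a gradient operator, keep the strong responses as edges, pool them into a score) can be sketched as follows. This is a minimal illustration, not any cited method: the `edge_ratio` threshold rule and the mean-magnitude pooling are our own simplifying choices.

```python
import numpy as np

def conv2_valid(img, k):
    # Naive 'valid' 2-D correlation; adequate for a 3x3 kernel demo
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T

def prewitt_sharpness(img, edge_ratio=0.1):
    # Prewitt-based sharpness: mean gradient magnitude over the strongest responses
    f = img.astype(float)
    gx = conv2_valid(f, PREWITT_X)
    gy = conv2_valid(f, PREWITT_Y)
    mag = np.hypot(gx, gy)
    # keep only the top `edge_ratio` fraction of responses as "edge" pixels
    thresh = np.quantile(mag, 1.0 - edge_ratio)
    edges = mag[mag >= thresh]
    return edges.mean() if edges.size else 0.0
```

Smoothing an image lowers its Prewitt responses everywhere, so the pooled score drops, which is the monotone behavior a sharpness metric needs.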

2.2. Spectral Domain-Based Methods


Frequency is an indicator that characterizes the intensity of grayscale changes in the
image [52]. In spectral domain evaluation functions, the high-frequency and low-frequency
components of the image correspond to sharp and blurred parts, respectively. The sharper
the image, the more detail and edge information it contains. Therefore, image sharpness can
be assessed by a high–low frequency transformation in the spectral domain [53]. This type of evaluation method is usually based on the Fourier transform (FT) [54], the wavelet transform (WT) [55], etc. The general framework of spectral domain-based methods is shown in
Figure 4, where H and L represent the high-pass filter and the low-pass filter, respectively,
and LL, HL, LH and HH are the corresponding components after filtering again.
Figure 4. The general framework of spectral domain-based methods.

2.2.1. Fourier Transform-Based Methods
The physical meaning of the Fourier transform is to convert the gray distribution function of an image into a frequency distribution function, while the inverse transform is to convert the frequency distribution function of an image into a gray distribution function [56]. Fast Fourier transform (FFT), discrete Fourier transform (DFT) and discrete cosine transform (DCT) [57] are common forms based on the Fourier transform.

Kanjar et al. [58] utilized the Fourier transform spectrum to simulate the uniform blurring of Gaussian blurred images and fixed the threshold of high-frequency components for image sharpness assessment. In a related study, Kanjar et al. [59] presented a new image sharpness measure that used Discrete Cosine Transform (DCT) coefficient-based features for generating the model of image sharpness assessment. Bae et al. [60] presented a novel DCT-based Quality Degradation Metric, called DCT-QM, which was based on the probability summation theory. Bae et al. [61] also proposed a visual quality assessment method that characterized local image features and various distortion types. The visual quality perception characteristics of the HVS for local image features and various distortion types are characterized by adopting the Structural Contrast Index (SCI) and DCT blocks. Baig et al. [62] proposed a no-reference image quality assessment method based on the Discrete Fourier Transform (DFT), calculated the image block-based DFT, averaged it at the block level and then combined the results to obtain a clear metric for estimating the overall image perception quality.

The Fourier transform, as one of the most basic time-frequency transforms, can convert the image from the spatial domain to the spectral domain, synthesizing multi-scale features but also increasing the computational effort.
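The idea shared by these FFT/DCT measures (sharp images put more energy into high frequencies) can be illustrated in a few lines of NumPy. This is a generic sketch, not a reimplementation of [58–62]; the circular low-frequency cutoff and the `radius_frac` value are our illustrative choices:

```python
import numpy as np

def fft_sharpness(img, radius_frac=0.25):
    # Sharpness as the fraction of spectral energy outside a low-frequency disk.
    # radius_frac (our illustrative choice) sets the low/high frequency cutoff.
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    power = np.abs(f) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    low = (y - cy) ** 2 + (x - cx) ** 2 <= (radius_frac * min(h, w)) ** 2
    total = power.sum()
    return power[~low].sum() / total if total > 0 else 0.0
```

A defocus blur attenuates exactly the coefficients outside the low-frequency disk, so the ratio falls as the image gets blurrier.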
2.2.2. Wavelet Transform-Based Methods
The wavelet transform method can obtain the evaluation results by using the localization characteristics of the image, and its process is shown in Figure 5, which is more suitable for the global or local evaluation of the image. In Figure 5, H and L represent the high-pass filter and the low-pass filter, respectively. The high-pass filter and the low-pass filter are used to extract edge features and image approximation, respectively.

Figure 5. The process of wavelet decomposition and reconstruction.

Kerouh et al. [63] proposed a no-reference blur image evaluation method based on the
wavelet transform, which extracts the high-frequency components of the image and obtains
edge-defined evaluation results by analyzing multi-resolution decomposition. Vu et al. [64]
presented a global and local sharpness evaluation algorithm based on fast wavelet, which
decomposes the image through a three-level separable discrete wavelet transform and
calculates the logarithmic energy of wavelet sub-bands for obtaining the sharpness of the
image. Hassen et al. [65] proposed a method to evaluate the image sharpness of strong
local phase coherence near different image features based on complex wavelet transform.
Gvozden et al. [66] proposed a fast blind image sharpness/ambiguity evaluation model
(BISHARP). The local contrast information of the image was obtained by calculating the
root mean square of the image. At the same time, the diagonal wavelet coefficients in
the wavelet transform were used for ranking and weighting to obtain the final evaluation
result. Wang et al. [55] proposed a no-reference stereo image quality assessment model
based on quaternion wavelet transform (QWT), which extracted a series of quality-aware
features in QWT and MSCN coefficients of high-frequency sub-bands and finally predicted
the sharpness score.
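A one-level Haar decomposition is enough to illustrate the sub-band energy idea behind several of these methods, for example the log-energy pooling of Vu et al. [64], which actually uses three decomposition levels. The single level, the Haar filters and the `log1p` pooling here are our simplifications; a real implementation would typically use a wavelet library such as PyWavelets:

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2-D Haar transform; the image sides must be even.
    # Returns the LL (approximation) and LH/HL/HH (detail) sub-bands of Figure 5.
    f = img.astype(float)
    lo = (f[0::2, :] + f[1::2, :]) / 2.0   # low-pass over row pairs
    hi = (f[0::2, :] - f[1::2, :]) / 2.0   # high-pass over row pairs
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_sharpness(img):
    # Log energy of the detail sub-bands: blur drains exactly these coefficients
    _, lh, hl, hh = haar_dwt2(img)
    energy = np.mean(lh ** 2) + np.mean(hl ** 2) + np.mean(hh ** 2)
    return np.log1p(energy)
```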
The spectral domain-based methods decompose the image into high-frequency and
low-frequency components or sub-images of different resolution layers. They then use
different functions to process these sub-images so that complex edge information can be
extracted more clearly. However, the spectral domain-based methods require converting
image information from the spatial domain to the spectral domain, which greatly increases
the computational complexity.
The advantages and disadvantages of the different methods based on the spatial domain/spectral domain are shown in Table 1.

Table 1. Advantages and disadvantages of different methods based on spatial/spectral domain.

Methods | Advantages | Disadvantages
Grayscale gradient-based methods | Simple and fast calculation | Rely on image edge information
Edge detection-based methods | High sensitivity | Susceptible to noise
Fourier transform-based methods | Extract edge features clearly | High computational complexity
Wavelet transform-based methods | High accuracy and robustness | High computational complexity and poor real-time performance
2.3. Learning-Based Methods
Unlike traditional methods, learning-based methods [67] can improve the accuracy of the evaluation results by learning the training image features and achieving the mapping of quality scores. The general framework of learning-based methods is shown in Figure 6. The methods can be divided into SVM-based, deep learning-based and dictionary-based methods.

Figure 6. The general framework of learning-based


learning‐based methods.

2.3.1. Machine Learning‐based


2.3.1. Machine Learning-Based Methods
Methods
Early learning-based sharpness evaluation methods [68,69] for no-reference images are mainly support vector machine (SVM) models based on machine learning, including support vector regression (SVR)/support vector clustering (SVC) methods.

Pei et al. [70] presented a sharpness evaluation method of no-reference images based on large-scale structures. The statistical scale of edges with different widths and the average value of the maximum gradient were taken as the image features, and SVR was used to map these features to the evaluation results. This method can avoid the interference of small textures and contrast around the edge to the evaluation results. Liu et al. [71]
used the anisotropy of the orientation selectivity mechanism and the influence of gradient
orientation effect on vision to extract structural information and then used Toggle operator
to extract edge information as the weight of local patterns. Finally, support vector regression
(SVR) was used to train prediction models with optimization characteristics and subjective
scores. Moorthy et al. [72] proposed a new two-step framework for no-reference image
quality assessment based on natural scene statistics (NSS). SVM is used to classify the
distortion types of the fitted parameter features, and then SVR is used to calculate the
image quality evaluation results under different distortion types.
Machine learning-based evaluation methods can achieve better results than other
algorithms on small-sample training sets, but the extracted features determine the quality
of the evaluation results.
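The feature-then-regressor pipeline these methods share can be sketched end to end. Everything below is a toy illustration: the gradient-statistic features stand in for those of [70], synthetic blur levels stand in for subjective scores, and a closed-form ridge regressor replaces the SVR (swapping in `sklearn.svm.SVR` would leave the rest of the pipeline unchanged):

```python
import numpy as np

def sharpness_features(img):
    # Hand-crafted stand-in features: mean and max of absolute gradients
    f = img.astype(float)
    gx = np.abs(np.diff(f, axis=1))
    gy = np.abs(np.diff(f, axis=0))
    return np.array([gx.mean(), gy.mean(), gx.max(), gy.max()])

def fit_ridge(X, y, lam=1e-3):
    # Closed-form ridge regression as a dependency-free stand-in for SVR
    X1 = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    return np.linalg.solve(X1.T @ X1 + lam * np.eye(X1.shape[1]), X1.T @ y)

def predict(w, x):
    return np.append(x, 1.0) @ w

def box_blur(img, n):
    # Repeated 2x2 averaging as a crude, controllable blur
    f = img.astype(float)
    for _ in range(n):
        f = (f + np.roll(f, 1, 0) + np.roll(f, 1, 1)
             + np.roll(np.roll(f, 1, 0), 1, 1)) / 4.0
    return f

# Toy training set: blur levels 0..5 act as inverse "subjective" scores
rng = np.random.default_rng(1)
X, y = [], []
for _ in range(60):
    n = int(rng.integers(0, 6))
    X.append(sharpness_features(box_blur(rng.uniform(0, 255, (16, 16)), n)))
    y.append(5.0 - n)
w = fit_ridge(np.array(X), np.array(y))
```

On held-out images, the learned weights rank a sharp image above a heavily blurred copy of itself, which is the monotone mapping from features to quality scores that these methods train for.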

2.3.2. Deep Learning-Based Methods


An increasing number of related deep learning methods are applied to image sharpness
evaluation with the continuous improvement of deep learning methods [73]. The deep
learning-based methods do not need to extract features manually but directly build a deep
learning model and obtain the evaluation score of the image after training. These types of
methods include a variety of network models, such as the convolutional neural network (CNN), the deep convolutional neural network (DCNN) and the generative adversarial network (GAN), and enable image quality prediction networks to be learned directly from data.
Zhu et al. [74] evaluated image quality by a method based on an optimized convolu-
tional neural network structure, aiming to automatically extract distinctive image quality
features, to improve the network learning ability and to predict the evaluation score through
normalization and packet loss. Li et al. [75] divided the entire image into blocks and then
used a deep convolutional neural network (DCNN) to extract their advanced features. They
then aggregated information from different blocks and fed these aggregated features into
a least-squares regression model to obtain sharpness evaluation values. Lin et al. [76] pro-
posed the generation of a pseudo-reference image from the distorted image first, then paired
the information of the pseudo-reference image with the distorted images and input them
into the quality regression network to obtain the quality prediction result. Zhang et al. [77]
proposed a deep bilinear model for blind image quality assessment (BIQA) that works for
both synthetically and authentically distorted images and is able to predict image quality
values. Bianco et al. [78] used the features extracted by the pre-trained convolutional neural
network (CNN) as the general image description and estimated the overall image score by
averaging the predicted scores in multiple sub-regions of the original image. Gao et al. [79]
extracted multi-level representations of images from VGGNet (Visual Geometry Group Net),
calculated a feature on each layer, then estimated the quality score of each feature vector
and finally obtained the final evaluation result by averaging.
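The block-based pipeline shared by several of the methods above (split the image into blocks, extract features per block, aggregate the block features, then regress to a quality score) can be sketched as follows. This is an illustrative sketch only: `block_features` is a simple hand-crafted stand-in for a DCNN feature extractor, and all function names are our own rather than those of the cited works.

```python
import numpy as np

def extract_blocks(img, block=32):
    """Split a grayscale image into non-overlapping block x block patches."""
    h, w = img.shape
    return [img[r:r + block, c:c + block]
            for r in range(0, h - block + 1, block)
            for c in range(0, w - block + 1, block)]

def block_features(patch):
    """Hand-crafted stand-in for DCNN features: pooled statistics per patch."""
    gy, gx = np.gradient(patch.astype(float))
    grad_mag = np.hypot(gx, gy)
    return np.array([patch.mean(), patch.std(), grad_mag.mean(), grad_mag.max()])

def image_descriptor(img, block=32):
    """Aggregate per-block features into one vector by mean pooling."""
    feats = np.stack([block_features(p) for p in extract_blocks(img, block)])
    return feats.mean(axis=0)

def fit_quality_model(images, scores, block=32):
    """Least-squares regression from aggregated features to subjective scores."""
    X = np.stack([image_descriptor(im, block) for im in images])
    X = np.column_stack([X, np.ones(len(X))])  # append a bias term
    w, *_ = np.linalg.lstsq(X, np.asarray(scores, dtype=float), rcond=None)
    return w

def predict_quality(img, w, block=32):
    """Predict a sharpness/quality score for one image."""
    x = np.append(image_descriptor(img, block), 1.0)
    return float(x @ w)
```

Swapping `block_features` for a real pretrained CNN and the least-squares step for a learned regressor recovers the general shape of the deep methods surveyed here.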
Deep learning-based methods can automatically learn multi-layer representations of
image features from large amounts of data and thereby obtain the feature information needed for image
quality assessment; however, their network models typically perform well only when large datasets are available.

2.3.3. Dictionary Learning-Based Methods


Dictionary learning-based methods have been applied to image sharpness evaluation to a certain
extent and are often combined with sparse coding or clustering algorithms.
Li et al. [80] presented a no-reference SPArse Representation based Image SHarpness
(SPARISH) index. The blurred image is represented block-by-block over a learned dictionary; the
energy of each block is computed from its sparse coefficients; and the sharpness evaluation score is
obtained by normalizing the energy values with a pooling step. The method is insensitive
to the training images, and the dictionary representation improves the evaluation of
image sharpness. Lu et al. [81] proposed a no-reference image sharpness
measurement method based on a sparse representation of structural information. In this
method, a learned dictionary is used to encode the patches of the blurred image, and a
multi-scale spatial max-pooling scheme is introduced to obtain the final sharpness score.
Appl. Sci. 2023, 13, 2652 8 of 20

Xu et al. [82] proposed a blind image quality evaluation method based on high-order
statistics aggregation (HOSA). This method extracts locally normalized image patches over a regular grid as local
features and constructs a codebook of 100 codewords through
K-means clustering. Each local feature is assigned to several nearest clusters, and the
higher-order statistical differences between the local features and their corresponding clusters are
aggregated into the global evaluation result. Jiang et al. [83] proposed a no-reference image
evaluation method based on an optimized multi-level discriminative dictionary (MSDD).
MSDDs are learned by implementing the label-consistent K-SVD (LC-KSVD) algorithm in a
stage-wise mode.
In summary, dictionary learning-based methods establish a mapping between the
image features and a learned dictionary and then match the features against the dictionary to obtain the
image evaluation results.
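The core step shared by these dictionary-based indices — encoding a patch over a dictionary and measuring the energy of its sparse coefficients — can be illustrated with a minimal sketch. This is not the SPARISH implementation: the tiny orthogonal matching pursuit and the random dictionary used below are simplified stand-ins for the learned dictionary and solver of the cited works.

```python
import numpy as np

def omp(D, x, n_nonzero=5):
    """Tiny orthogonal matching pursuit: approximate x as a sparse
    combination of dictionary atoms (columns of D, assumed unit-norm)."""
    residual = x.astype(float).copy()
    support, coef = [], np.zeros(D.shape[1])
    sol = np.zeros(0)
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        if k not in support:
            support.append(k)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

def patch_energy(D, patch_vec, n_nonzero=5):
    """Energy of a patch measured on its sparse coefficients (L2 norm);
    this per-block energy is the quantity that gets pooled into a
    sharpness score in SPARISH-style methods."""
    return float(np.linalg.norm(omp(D, patch_vec, n_nonzero)))
```

A flat (featureless) patch yields zero coefficient energy, while a structured patch yields a positive energy, which is why pooled block energies can serve as a sharpness cue.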

2.3.4. Other Methods Based on Learning


Wu et al. [84] proposed a new local learning method for blind image evaluation,
which uses the perceptually similar neighbors of the queried test image as its training
set and evaluates the image through a sparse Gaussian process. Deng et al. [85] presented
a content-insensitive blind image blurriness evaluation index based on Weibull statistics.
This method models the gradient magnitude of the blurred image by adjusting the scale parameter,
shape parameter and skewness of the Weibull distribution and uses a sparse extreme
learning machine to predict the final image evaluation. Zhang et al. [86] proposed a
no-reference image sharpness evaluation method based on learning to rank and block
extraction. Performance evaluation shows that the method correlates strongly with human
perception and is robust to image content. He et al. [87] proposed a deep model combining
the spatial and visual features of images for image quality assessment. The algorithm takes
multiple image features as input and learns feature weights through end-to-end training to
obtain the evaluation results.
Analyzing the learning-based methods shows that most of the above
approaches combine the structural, local or perceptual features of images; the
parameters of the corresponding indicators are estimated by regression analysis, from which
the evaluation results are obtained.
The advantages and disadvantages of different learning-based methods are shown
in Table 2.

Table 2. Advantages and disadvantages of different learning-based methods.

Methods | Advantages | Disadvantages
Machine-based methods | Good performance on small-sample training sets | Evaluation results depend on feature extraction.
Deep learning-based methods | Automatically learn features from a large number of samples | A large amount of training data is required.
Dictionary learning-based methods | Advanced features of samples can be extracted. | The evaluation effect depends on dictionary size.

2.4. Combination Methods


As concluded from the literature reviewed above, different evaluation methods have different characteristics. Combination evaluation methods combine two or more methods
to give full play to their respective advantages. For example, combining spatial domain
methods with deep learning methods can improve the accuracy of the evaluation results based
on the integrity of the extracted features. The combined framework of the combination methods
is shown in Figure 7.

Figure 7. The combined framework of combination methods.
Vu et al. [88] utilized both the spectral and spatial properties of the image to quantify the
overall perceived sharpness of the entire image, combining the slope of the magnitude
spectrum with the total spatial variation through a weighted geometric average. Liu et al. [89]
combined the spatial domain characteristics of images with ResNet-based clarity features to
evaluate the clarity of power transmission components, helping maintain the reliability and
security of power transmission through image inspection. Yue et al. [90] proposed a sharpness assessment method
that combines geometric distortion and scale invariance by analyzing the local similar-
ity of images and the similarity between their neighboring regions. The effectiveness of
this method is better than other competing methods, but the disadvantage is that it is
time-consuming. Zhang et al. [91] obtained spatial and shape information by calculating
grayscale and gradient maps, obtained salient maps by using scale invariant feature trans-
form (SIFT) and then generated fuzzy evaluation scores by discrete cosine transform (DCT).
The results demonstrated that the fuzzy scores generated by the proposed method were
highly correlated with subjective ratings. Zhan et al. [92] presented a new image structure
change model for image quality assessment, which uses fuzzy logic to classify and score
each pixel’s structure change in the distorted image. Li et al. [93] proposed a semantic
feature aggregation (SFA)-based evaluation method to mitigate the effect of complex image
content on the evaluation results. This method extracts features using a trained DCNN
model and maps the global features to image quality scores. Li et al. [94] proposed a general
no-reference quality assessment framework based on the shearlet transform and deep neural
networks. Coefficient amplitudes extracted from the sub-bands of the multi-scale directional
(shearlet) transform serve as the main features describing the behavior of
natural images; a softmax classifier is then used to discriminate the evolved
features; and finally the evaluation results are obtained. The experimental results show the
excellent performance of this method.
A combination method inherits the advantages of the single methods it includes,
and its evaluation results are highly accurate and consistent with subjective
evaluation, although at the cost of increased computational complexity. Currently,
scholars are also studying more advanced methods along various directions.
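As a toy illustration of the combination idea (in the spirit of the S3 index by Vu et al. [88], though with deliberately simplified cues of our own choosing, not the published algorithm), a spectral measure and a spatial measure can be fused with a weighted geometric mean:

```python
import numpy as np

def spectral_sharpness(img):
    """Spectral cue: fraction of magnitude-spectrum mass at high
    frequencies (sharper images retain more high-frequency content)."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)          # radial frequency
    high = f[r > min(h, w) / 4].sum()
    return high / (f.sum() + 1e-12)

def spatial_sharpness(img):
    """Spatial cue: mean absolute difference between neighboring pixels
    (a simple total-variation-style measure)."""
    return np.abs(np.diff(img, axis=0)).mean() + np.abs(np.diff(img, axis=1)).mean()

def combined_sharpness(img, alpha=0.5):
    """Fuse the two cues with a weighted geometric mean, the combination
    rule used by S3-style methods."""
    return (spectral_sharpness(img) ** alpha) * (spatial_sharpness(img) ** (1.0 - alpha))
```

Because the geometric mean is small whenever either cue is small, blur is penalized whether it manifests in the spectrum, in the spatial gradients, or both.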

3. Bibliometrics Analysis
3.1. Research Distribution Trend Analysis
This paper surveys the distribution of recent research on no-reference image quality
evaluation across three periods (2013 to 2015, 2016 to 2018 and 2019 to 2021).
In these three periods, 74, 92 and 105 related papers were published, respectively.
The distribution of methods used in the related papers is shown in Figure 8. We also found
that from 2013 to 2015, the frequency rankings of search keywords from more to less were
spatial domain methods, spectral domain methods and deep learning methods; from 2016
to 2018, the search frequency ranking changed to spatial domain methods, deep learning
methods, spectral domain methods and other methods; from 2019 to 2021, the ranking
was updated to deep learning methods, combination methods, spatial domain methods,
spectral domain methods and other methods.

Figure 8. Distribution of evaluation methods.

By analyzing Figure 8, during 2013–2015 the spatial domain-based methods were
the most numerous, with a share of 36%, followed by the spectral domain-based methods, while
the learning-based methods accounted for only 19% of the articles. In the following years, the
learning-based methods grew rapidly, most probably owing to the explosion of deep learning
techniques. During 2016–2018, the numbers of spatial domain-based and learning-based methods
were similar, at 27% and 28%, respectively. From 2019 to 2021, the majority of the research in
image quality evaluation was learning-based, and combination methods also showed great
potential, with a share of 24%. The learning-based methods are currently the most widely
used owing to their excellent ability to extract features automatically. The
statistical data reported in this paper come from the Web of Science.

3.2. The Performance of the Representative Methods on Public Datasets
In this section, we select several representative methods from the above four main
groups and analyze their evaluation performance on public datasets. MLV [46],
BIBLE [35], MGV [46], ARISM [47] and CPBD [49] are selected for the spatial domain-
based methods; DCT-QM [60], SC-QI [61], FISH [62], LPC-SI [65] and BISHARP [66] are
selected for the spectral domain-based methods; BIQI [72], DB-CNN [77], DeepBIQ [78],
SPARISH [79], SR [81] and MSFF [87] are selected to represent the learning-based methods;
and S3 [88], RFSV [91], SVC [92] and SFA [93] are selected as the combination methods. The
specific information of the compared methods is described in Table 3. We also introduce six
commonly used public datasets and four performance indicators commonly used in the literature
to measure sharpness methods, from which two indicators are selected to evaluate the image quality
evaluation results.

Table 3. Specific information of different methods for comparison.


Group | Method Category | Method | Published Time | Characteristic
Spatial domain-based | Grayscale gradient-based | MLV [46] | 2014 | Calculates the maximum local variation in the image
Spatial domain-based | Grayscale gradient-based | BIBLE [35] | 2015 | Calculates gradients and Tchebichef moments of images
Spatial domain-based | Edge detection-based | MGV [46] | 2018 | Calculates the maximum gradient and the gradient variability in the image
Spatial domain-based | Other spatial domain-based | ARISM [47] | 2014 | Calculates the energy difference and contrast difference of the AR model coefficients for each pixel
Spatial domain-based | Other spatial domain-based | CPBD [49] | 2011 | Calculates the cumulative probability of blur detection
Spectral domain-based | Fourier transform-based | DCT-QM [60] | 2016 | Computes the weighted average L2 norm in the DCT domain
Spectral domain-based | Fourier transform-based | SC-QI [61] | 2016 | Employs structural contrast indices and DCT blocks to characterize local image features and the visual quality perception properties of various distortion types
Spectral domain-based | Fourier transform-based | FISH [62] | 2022 | Computes image derivatives and a block-based DFT
Spectral domain-based | Wavelet transform-based | LPC-SI [65] | 2013 | Calculates LPC intensity change
Spectral domain-based | Wavelet transform-based | BISHARP [66] | 2018 | Calculates the root mean square of the image to obtain local contrast information
Learning-based | Machine-based | BIQI [72] | 2010 | Uses the NSS model to parameterize sub-band coefficients and predict sharpness values
Learning-based | Deep learning-based | DB-CNN [77] | 2020 | Uses two convolutional neural networks for feature extraction on distorted images and bilinear pooling for quality prediction
Learning-based | Deep learning-based | DeepBIQ [78] | 2018 | Uses features extracted from a pretrained CNN as generic image descriptions
Learning-based | Dictionary learning-based | SPARISH [79] | 2016 | Represents the image as blocks over a dictionary, computes block energy from the sparse coefficients, then normalizes via a pooling layer to obtain the sharpness score
Learning-based | Dictionary learning-based | SR [81] | 2016 | Uses sparse representation (SR) to extract structural information and introduces a multi-scale spatial max-pooling scheme to represent image locality
Learning-based | Other learning-based | MSFF [87] | 2019 | Takes multiple image features as input and learns feature weights through end-to-end training to obtain evaluations
Combination | Grayscale + DCT | S3 [88] | 2012 | Calculates the slope of the magnitude spectrum in the spectral domain and the spatial variation of image patches in the spatial domain
Combination | DCT + SIFT | RFSV [91] | 2016 | Converts blocks of the gradient maps to DCT coefficients to obtain shape information and uses the scale-invariant feature transform (SIFT) to obtain saliency maps
Combination | SVC + Gradient | SVC [92] | 2017 | Characterizes image quality using the distribution of different structural changes and the extent of structural differences
Combination | DCNN + SFA | SFA [93] | 2019 | Uses a pretrained DCNN model to extract features; after feature aggregation, partial least-squares regression is used for quality prediction

3.2.1. Public Datasets and Evaluation Indicators


The most commonly used datasets in the field include Laboratory for Image and
Video Engineer (LIVE) [95], Categorical Subjective Image Quality (CSIQ) [96], Tampere
Image Database 2008 (TID2008) [97], Tampere Image Database 2013 (TID2013) [98], Blurred
Image Dataset (BID) [99], image dataset from the University of Helsinki CID2013 [100], etc.
Among them, LIVE, CSIQ, TID2008 and TID2013 datasets are analog distortion datasets;
BID and CID2013 datasets are natural distortion datasets. The specific information of these
public datasets is shown in Table 4.
The sharpness performance evaluation indicators are used to determine whether and
to what extent the evaluation result of an image sharpness evaluation algorithm is accurate
and consistent with the subjective human judgment, which requires certain standards. The
most commonly used sharpness performance evaluation indicators are Pearson Linear
Correlation Coefficient (PLCC), Spearman's rank ordered correlation coefficient (SROCC),
Kendall Rank Order Correlation Coefficient (KROCC) and Root Mean Square Error (RMSE).

Table 4. Specific information of the public datasets.

Dataset | Distortion Type | Number of Reference Images | Number of Distorted Images | Image Size (Pixel) | Subjective Scoring | Score Range
LIVE | Analog distortion | 29 | 779 | 428 × 634–512 × 768 | DMOS | [1, 100]
CSIQ | Analog distortion | 30 | 866 | 512 × 512 | DMOS | [0, 9]
TID2008 | Analog distortion | 25 | 1700 | 512 × 384 | MOS | [0, 9]
TID2013 | Analog distortion | 25 | 3000 | 512 × 384 | MOS | [0, 9]
BID | Natural distortion | - | 585 | 1280 × 960–2272 × 1704 | MOS | [0, 5]
CID2013 | Natural distortion | - | 480 | 1600 × 1200 | MOS | [0, 100]

Pearson Linear Correlation Coefficient (PLCC) describes the correlation between the
algorithm evaluation value and the human subjective score. It is mainly used to calculate
the accuracy, as shown in Equation (1).

$$\mathrm{PLCC}=\frac{1}{n-1}\sum_{i=1}^{n}\left(\frac{x_i-\bar{x}}{\sigma_x}\right)\left(\frac{y_i-\bar{y}}{\sigma_y}\right) \tag{1}$$

where $\bar{x}$ and $\bar{y}$ are the mean values of $x_i$ and $y_i$, respectively, and $\sigma_x$ and $\sigma_y$ are the corresponding standard deviations.
Spearman’s rank ordered correlation coefficient (SROCC) is mainly used to measure
the monotonicity of algorithm prediction, as shown in Equation (2).

$$\mathrm{SROCC}=1-\frac{6}{n\left(n^{2}-1\right)}\sum_{i=1}^{n}\left(r_{x_i}-r_{y_i}\right)^{2} \tag{2}$$

where $r_{x_i}$ and $r_{y_i}$ are the rank positions of $x_i$ and $y_i$ in their respective data sequences.
Kendall Rank Order Correlation Coefficient (KROCC) can also effectively measure the
monotonicity of the algorithm, as shown in Equation (3).

$$\mathrm{KROCC}=\frac{2\left(n_c-n_d\right)}{n(n-1)} \tag{3}$$

where nc is the number of consistent element pairs in the dataset and nd is the number of
inconsistent element pairs in the dataset.
Root Mean Square Error (RMSE) is used to directly measure the absolute error between
the algorithm evaluation score and the human subjective score, as shown in Equation (4).
$$\mathrm{RMSE}=\left[\frac{1}{n}\sum_{i=1}^{n}\left(x_i-y_i\right)^{2}\right]^{\frac{1}{2}} \tag{4}$$

where xi is the subjective MOS value and yi is the predicted score of the algorithm.
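Equations (1)–(4) translate directly into code. The following sketch implements the four indicators with NumPy (the SROCC version assumes no tied values, so ranks are obtained by a simple double argsort; production code would handle ties):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation (Equation (1)): prediction accuracy."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    zx = (x - x.mean()) / x.std(ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    return float((zx * zy).sum() / (len(x) - 1))

def srocc(x, y):
    """Spearman rank-order correlation (Equation (2)): monotonicity."""
    rx = np.argsort(np.argsort(x)) + 1   # 1-based ranks, assumes no ties
    ry = np.argsort(np.argsort(y)) + 1
    n = len(rx)
    return float(1 - 6 * ((rx - ry) ** 2).sum() / (n * (n ** 2 - 1)))

def krocc(x, y):
    """Kendall rank-order correlation (Equation (3)): monotonicity."""
    n = len(x)
    nc = nd = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = np.sign((x[i] - x[j]) * (y[i] - y[j]))
            nc += s > 0   # concordant pair
            nd += s < 0   # discordant pair
    return float(2 * (nc - nd) / (n * (n - 1)))

def rmse(x, y):
    """Root mean square error (Equation (4)): absolute prediction error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(((x - y) ** 2).mean()))
```

In practice these are often computed with `scipy.stats.pearsonr`, `spearmanr` and `kendalltau`, which also handle ties.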

3.2.2. Performance Analysis of Representative Methods


To observe the performance of various no-reference image quality evaluation methods,
this paper compares several representative methods on several public datasets. PLCC and
SROCC are adopted to assess the image quality evaluation results. The experimental results
in the tables are taken from the reported references, and each group of compared methods
was tested on the same dataset. The comparisons of different evaluation methods on the
LIVE and CSIQ datasets, the TID2008 and TID2013 datasets, and the BID and CID2013
datasets are shown in Tables 5–7, respectively.

Table 5. Comparison of different evaluation methods on the LIVE dataset and CSIQ dataset.

Group | Method | LIVE PLCC | LIVE SROCC | CSIQ PLCC | CSIQ SROCC
Spatial domain-based | MLV [46] | 0.938 | 0.937 | 0.894 | 0.851
Spatial domain-based | BIBLE [35] | 0.962 | 0.961 | 0.940 | 0.913
Spatial domain-based | MGV [46] | 0.960 | 0.963 | 0.907 | 0.950
Spatial domain-based | ARISM [47] | 0.959 | 0.956 | 0.948 | 0.931
Spatial domain-based | CPBD [49] | 0.895 | 0.918 | 0.882 | 0.885
Spectral domain-based | DCT-QM [60] | 0.925 | 0.938 | 0.872 | 0.926
Spectral domain-based | SC-QI [61] | 0.937 | 0.948 | 0.927 | 0.943
Spectral domain-based | FISH [62] | 0.904 | 0.841 | 0.923 | 0.894
Spectral domain-based | LPC-SI [65] | 0.922 | 0.950 | 0.906 | 0.893
Spectral domain-based | BISHARP [66] | 0.952 | 0.960 | 0.942 | 0.927
Learning-based | BIQI [72] | 0.920 | 0.914 | 0.846 | 0.773
Learning-based | DB-CNN [77] | 0.970 | 0.968 | 0.959 | 0.946
Learning-based | DeepBIQ [78] | 0.912 | 0.893 | 0.975 | 0.967
Learning-based | SPARISH [79] | 0.956 | 0.959 | 0.938 | 0.914
Learning-based | SR [81] | 0.961 | 0.955 | 0.950 | 0.921
Learning-based | MSFF [87] | 0.949 | 0.950 | - | -
Combination | S3 [88] | 0.943 | 0.944 | 0.893 | 0.911
Combination | RFSV [91] | 0.974 | 0.971 | 0.942 | 0.920
Combination | SVC [92] | 0.949 | 0.941 | 0.952 | 0.954
Combination | SFA [93] | 0.942 | 0.953 | - | -

Table 6. Comparison of different evaluation methods on the TID2008 dataset and TID2013 dataset.

Group | Method | TID2008 PLCC | TID2008 SROCC | TID2013 PLCC | TID2013 SROCC
Spatial domain-based | MLV [46] | 0.811 | 0.812 | 0.883 | 0.879
Spatial domain-based | BIBLE [35] | 0.893 | 0.892 | 0.905 | 0.898
Spatial domain-based | MGV [46] | 0.937 | 0.942 | 0.914 | 0.921
Spatial domain-based | ARISM [47] | 0.854 | 0.868 | 0.898 | 0.902
Spatial domain-based | CPBD [49] | 0.824 | 0.841 | 0.855 | 0.852
Spectral domain-based | DCT-QM [60] | 0.819 | 0.837 | 0.852 | 0.854
Spectral domain-based | SC-QI [61] | 0.890 | 0.905 | 0.907 | 0.905
Spectral domain-based | FISH [62] | 0.816 | 0.786 | 0.911 | 0.912
Spectral domain-based | LPC-SI [65] | 0.846 | 0.843 | 0.892 | 0.889
Spectral domain-based | BISHARP [66] | 0.877 | 0.880 | 0.892 | 0.896
Learning-based | BIQI [72] | 0.794 | 0.799 | 0.825 | 0.815
Learning-based | DB-CNN [77] | 0.873 | 0.859 | 0.865 | 0.816
Learning-based | DeepBIQ [78] | 0.951 | 0.952 | 0.920 | 0.922
Learning-based | SPARISH [79] | 0.889 | 0.887 | 0.900 | 0.893
Learning-based | SR [81] | 0.895 | 0.911 | 0.899 | 0.892
Learning-based | MSFF [87] | 0.926 | 0.917 | 0.917 | 0.922
Combination | S3 [88] | 0.851 | 0.842 | 0.879 | 0.861
Combination | RFSV [91] | 0.915 | 0.924 | 0.924 | 0.932
Combination | SVC [92] | 0.889 | 0.874 | 0.857 | 0.787
Combination | SFA [93] | 0.916 | 0.907 | 0.954 | 0.948

Table 7. Comparison of different evaluation methods on the BID dataset and CID2013 dataset.

Group | Method | BID PLCC | BID SROCC | CID2013 PLCC | CID2013 SROCC
Spatial domain-based | MLV [46] | 0.375 | 0.317 | 0.689 | 0.621
Spatial domain-based | BIBLE [35] | 0.392 | 0.361 | - | -
Spatial domain-based | MGV [46] | 0.307 | 0.201 | 0.511 | 0.499
Spatial domain-based | ARISM [47] | 0.193 | 0.151 | - | -
Spatial domain-based | CPBD [49] | - | - | 0.418 | 0.293
Spectral domain-based | DCT-QM [60] | 0.383 | 0.376 | 0.662 | 0.653
Spectral domain-based | SC-QI [61] | - | - | - | -
Spectral domain-based | FISH [62] | 0.485 | 0.474 | 0.638 | 0.587
Spectral domain-based | LPC-SI [65] | 0.315 | 0.216 | 0.634 | 0.609
Spectral domain-based | BISHARP [66] | 0.356 | 0.307 | 0.678 | 0.681
Learning-based | BIQI [72] | 0.513 | 0.472 | 0.742 | 0.723
Learning-based | DB-CNN [77] | 0.471 | 0.464 | 0.686 | 0.672
Learning-based | DeepBIQ [78] | - | - | - | -
Learning-based | SPARISH [79] | 0.482 | 0.402 | 0.661 | 0.652
Learning-based | SR [81] | 0.415 | 0.467 | 0.621 | 0.634
Learning-based | MSFF [87] | - | - | - | -
Combination | S3 [88] | 0.427 | 0.425 | 0.687 | 0.646
Combination | RFSV [91] | 0.391 | 0.335 | 0.701 | 0.694
Combination | SVC [92] | - | - | 0.425 | 0.433
Combination | SFA [93] | 0.546 | 0.526 | - | -

As can be seen from Table 5, RFSV and DeepBIQ have the highest score and best
performance on the LIVE and CSIQ analog distortion databases, respectively. Table 6 shows
that DeepBIQ and SFA have the highest PLCC and SROCC values and the best performance
on TID2008 and TID2013 analog distortion databases, respectively. From Table 7, the best
performers on BID and CID2013 natural distortion datasets are SFA and BIQI, respectively.
The spatial domain-based methods are generally built on basic image processing, and
the obtained results are intuitive. However, although the low-level features they extract
are rich in local detail, global semantic information is missing, so their evaluation
accuracy is often unsatisfactory. The performance of the spectral domain-based methods
tends to fluctuate significantly across datasets: they perform well on analog distortion
datasets but less so on natural distortion datasets, because spectral domain-based methods
excel at analyzing high-frequency detail information. The learning-based methods outperform
the other methods on the analog distortion datasets, but their performance varies greatly
on the natural distortion datasets. Combination methods can flexibly combine the above
methods in various ways to match the characteristics of different images and can thus
achieve good scores on their target datasets. However, no single method currently
guarantees optimal performance on all datasets.
Figures 9 and 10 are the PLCC and SROCC values of different groups of methods on the
analog distortion datasets, respectively, and Figure 11 shows the PLCC and SROCC values
of different groups of methods on the natural distortion datasets. The data in Figures 9–11
are all taken from the average of the comparison results in Tables 5–7. It can be concluded
that most of the methods based on spatial-domain and combination methods perform
well in the analog distortion datasets. Meanwhile, the learning-based methods show great
potential in the comprehensive performance of both analog and natural distortion datasets.

Figure 9. PLCC value of different groups of methods on analog distortion datasets.

Figure 10. SROCC value of different groups of methods on analog distortion datasets.

Figure 11. PLCC and SROCC values of different groups of methods on natural distortion datasets.

4. Conclusions and Outlook


Through the evolution of no-reference image sharpness assessment research, from
traditional methods (spatial or spectral domain-based) to learning-based methods to method
combinations, this field has witnessed rapid development. The following
conclusions can be drawn from analyzing a large number of works in the literature.
(1) Every group of evaluation methods depends on a feature extraction pro-
cess. Spatial domain-based feature extraction is simple and efficient, which benefits
real-time applications, but it is easily disturbed by image noise. Spectral domain fea-
ture extraction can effectively suppress noise but at the cost of increased time complexity.
Reported research reveals that a reasonable combination of the spatial and
spectral domains helps improve the accuracy of image quality evaluation [101,102]. Among the
learning-based methods, machine learning-based methods mainly extract features man-
ually, while, with the wide application of deep learning, CNN-based feature extraction
has become a popular choice among scholars.
(2) In recent years, methods based on deep learning have achieved good results in
evaluating no-reference image sharpness. However, owing to the complexity of deep learning
network models, a large amount of training data is often required. In practice, it is quite
difficult to collect enough real-world images for network training, making it urgent to
design deep learning evaluation methods suitable for small-scale datasets.
In future research, sharpness evaluation methods that combine multiple approaches
will gradually become a popular trend. We believe that the overall evaluation performance
can be further improved by using deep learning network models to extract features,
combined with the high real-time performance of spatial domain methods and the interference
suppression of spectral domain methods. To summarize, image quality evaluation research
still has room to develop new ideas, both in practical applications on real-world
image datasets and in taking full advantage of combined methods.

Author Contributions: Conceptualization, methodology, writing—original draft preparation, data


curation, M.Z.; methodology, formal analysis, writing—review and editing, project administration,
L.Y.; investigation, Z.W.; resources, Z.K.; funding acquisition, supervision, project administration,
C.Z. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the National Natural Science Foundation of China (Grant
No. 62201441 and 51903199), Key Research and Development Program of Shaanxi (Program
No. 2023-YBGY-490), Natural Science Basic Research Program of Shaanxi (No. 2019JQ-182, 2018JQ5214,
and 2021JQ-691), Innovation Capability Support Program of Shaanxi (Program No. 2022KJXX-40),
Outstanding Young Talents Support Plan of Shaanxi Universities (2020), Scientific Research Program
Funded by Shaanxi Provincial Education Department (Program No. 22JK0394), Science and Technol-
ogy Guiding Project of China National Textile and Apparel Council (No.2020044 and No.2020046),
and Innovation Capacity Support Plan of Shaanxi, China (No. 2020PT-043).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: This work was supported by the National Natural Science Foundation of China
(Grant No. 62201441 and 51903199), Key Research and Development Program of Shaanxi (Program
No. 2023-YBGY-490), Natural Science Basic Research Program of Shaanxi (No. 2019JQ-182, 2018JQ5214,
and 2021JQ-691), Innovation Capability Support Program of Shaanxi (Program No. 2022KJXX-40),
Outstanding Young Talents Support Plan of Shaanxi Universities (2020), Scientific Research Program
Funded by Shaanxi Provincial Education Department (Program No. 22JK0394), Science and Technol-
ogy Guiding Project of China National Textile and Apparel Council (No.2020044 and No.2020046),
and Innovation Capacity Support Plan of Shaanxi, China (No. 2020PT-043).
Conflicts of Interest: The authors declare no potential conflict of interest.

References
1. Mahajan, P.; Jakhetiya, V.; Abrol, P.; Lehana, P.K.; Guntuku, S.C. Perceptual quality evaluation of hazy natural images. IEEE Trans.
Ind. Inform. 2021, 17, 8046–8056. [CrossRef]
2. Li, Y.L.; Jiang, Y.J.; Yu, X.; Ren, B.; Wang, C.Y.; Chen, S.H.; Su, D.Y. Deep-learning image reconstruction for image quality
evaluation and accurate bone mineral density measurement on quantitative CT: A phantom-patient study. Front. Endocrinol. 2022,
13, 884306. [CrossRef] [PubMed]
3. Zhang, Y.Q.; Chen, H.; Wang, L.; Xiao, Y.J.; Huang, H.B. Color image segmentation using level set method with initialization
mask in multiple color spaces. Int. J. Eng. Manuf. 2011, 1, 70–76. [CrossRef]
4. Dickmann, J.; Sarosiek, C.; Rykalin, V.; Pankuch, M.; Coutrakon, G.; Johnson, R.P.; Bashkirov, V.; Schulte, R.W.; Parodi, K.;
Landry, G.; et al. Proof of concept image artifact reduction by energy-modulated proton computed tomography (EMpCT). Phys.
Med. 2021, 81, 237–244. [CrossRef] [PubMed]
5. Liu, S.Q.; Yu, S.; Zhao, Y.M.; Tao, Z.; Yu, H.; Jin, L.B. Salient region guided blind image sharpness assessment. Sensors 2021, 21, 3963.
[CrossRef]
6. David, K.; Mehmet, C.; Xiaoping, A.S. State estimation based echolocation bionics and image processing based target pattern
recognition. Adv. Sci. Technol. Eng. Syst. 2019, 4, 73–83.
7. Ke, Z.X.; Yu, L.J.; Wang, G.; Sun, R.; Zhu, M.; Dong, H.R.; Xu, Y.; Ren, M.; Fu, S.D.; Zhi, C. Three-Dimensional modeling of
spun-bonded nonwoven meso-structures. Polymers 2023, 15, 600. [CrossRef]
8. Zhu, H.; Zhi, C.; Meng, J.; Wang, Y.; Liu, Y.; Wei, L.; Fu, S.; Miao, M.; Yu, L. A self-pumping dressing with multiple liquid transport
channels for wound microclimate management. Macromol. Biosci. 2022, 23, 2200356. [CrossRef]
9. Wang, Z.; Bovik, A.C. Reduced- and no-reference image quality assessment. IEEE Signal Process. Mag. 2011, 28, 29–40. [CrossRef]
10. Wu, Q.; Li, H.; Meng, F.; Ngan, K.N.; Zhu, S.Y. No reference image quality assessment metric via multi-domain structural
information and piecewise regression. Vis. Commun. Image Represent. 2015, 32, 205–216. [CrossRef]
11. Lu, Y.; Xie, F.; Liu, T.; Jiang, Z.J.; Tao, D.C. No reference quality assessment for multiply-distorted images based on an improved
bag-of-words model. IEEE Signal Process. Lett. 2015, 22, 1811–1815. [CrossRef]
12. Cai, H.; Wang, M.J.; Mao, W.D.; Gong, M.L. No-reference image sharpness assessment based on discrepancy measures of
structural degradation. J. Vis. Commun. Image Represent. 2020, 71, 102861. [CrossRef]
13. Qi, L.Z.; Zhi, C.; Meng, J.G.; Wang, Y.Z.; Liu, Y.M.; Song, Q.W.; Wu, Q.; Wei, L.; Dai, Y.; Zou, J.; et al. Highly efficient acoustic
absorber designed by backing cavity-like and filled-microperforated plate-like structure. Mater. Des. 2023, 225, 111484. [CrossRef]
14. Chen, G.; Zhai, M. Quality assessment on remote sensing image based on neural networks. J. Vis. Commun. Image Represent. 2019,
63, 102580. [CrossRef]
15. Huang, S.; Liu, Y.; Du, H. A no-reference objective image sharpness metric for perception and estimation. Sixth Int. Conf. Digit.
Image Process. (ICDIP 2014) 2014, 915914, 1–7.
16. Ferzli, R.; Karam, L.J. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE
Trans. Image Process. 2009, 18, 717–728. [CrossRef]
17. Qian, J.Y.; Zhao, H.J.; Fu, J.; Song, W.; Qian, J.D.; Xiao, Q.B. No-reference image sharpness assessment via difference quotients.
Electron. Imaging 2019, 28, 013032. [CrossRef]
18. Zhai, G.T.; Min, X.K. Perceptual image quality assessment: A survey. Sci. China Inf. Sci. 2020, 63, 211301. [CrossRef]
19. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process.
2012, 21, 4695–4708. [CrossRef]
20. Li, Q.; Lin, W.; Fang, Y. No-reference quality assessment for multiply-distorted images in gradient domain. IEEE Signal Process.
Lett. 2016, 23, 541–545. [CrossRef]
21. Bovik, A.C.; Liu, S. DCT-domain blind measurement of blocking artifacts in DCT-coded images. In Proceedings of the IEEE
International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, UT, USA, 7–11 May 2001; Volume 3,
pp. 1725–1728.
22. Hou, W.L.; Gao, X.B.; Tao, D.C.; Li, X.L. Blind image quality assessment via deep learning. IEEE Trans. Neural Netw. Learn. Syst.
2015, 26, 1275–1286. [PubMed]
23. Zhang, Y.; Ngan, K.N.; Ma, L.; Li, H.L. Objective quality assessment of image retargeting by incorporating fidelity measures and
inconsistency detection. IEEE Trans. Image Process. 2017, 26, 5980–5993. [CrossRef] [PubMed]
24. Thakur, N. An efficient image quality criterion in spatial domain. Indian J. Sci. Technol. 2016, 9, 1–6.
25. Hong, Y.Z.; Ren, G.Q.; Liu, E.H. A no-reference image blurriness metric in the spatial domain. Optik 2016,
127, 5568–5575. [CrossRef]
26. Feichtenhofer, C.; Fassold, H.; Schallauer, P. A perceptual image sharpness metric based on local edge gradient analysis. IEEE
Signal Process. Lett. 2013, 20, 379–382. [CrossRef]
27. Yan, X.Y.; Lei, J.; Zhao, Z. Multidirectional gradient neighbourhood-weighted image sharpness evaluation algorithm. Math. Probl.
Eng. 2020, 1, 7864024. [CrossRef]
28. Min, X.; Zhai, G.; Gu, K.; Yang, X.; Guan, X. Objective quality evaluation of dehazed images. IEEE Trans. Intell. Transp. Syst. 2019,
20, 2879–2892. [CrossRef]
29. Wang, F.; Chen, J.; Zhong, H.N.; Ai, Y.B.; Zhang, W.D. No-reference image quality assessment based on image multi-scale contour
prediction. Appl. Sci. 2022, 12, 2833. [CrossRef]
30. Wang, T.G.; Zhu, L.; Cao, P.L.; Liu, W.J. Research on Vickers hardness image definition evaluation function. Adv. Mater. Res. 2010,
121, 134–138. [CrossRef]
31. Dong, W.; Sun, H.W.; Zhou, R.X.; Chen, H.M. Autofocus method for SAR image with multi-blocks. J. Eng. 2019, 19, 5519–5523.
[CrossRef]
32. Jiang, S.X.; Zhou, R.G.; Hu, W.W. Quantum image sharpness estimation based on the Laplacian operator. Int. J. Quantum Inf.
2020, 18, 2050008. [CrossRef]
33. Zeitlin, A.D.; Strain, R.C. Augmenting ADS-B with traffic information service-broadcast. IEEE Aerosp. Electron. Syst. Mag. 2003,
18, 13–18. [CrossRef]
34. Zhan, Y.B.; Zhang, R. No-reference image sharpness assessment based on maximum gradient and variability of gradients. IEEE
Trans. Multimed. 2017, 20, 1796–1808. [CrossRef]
35. Li, L.D.; Lin, W.S.; Wang, X.S.; Yang, G.B.; Bahrami, K.; Kot, A.C. No-reference image blur assessment based on discrete orthogonal
moments. IEEE Trans. Cybern. 2015, 46, 39–50. [CrossRef]
36. Zhang, K.N.; Huang, D.; Zhang, B.; Zhang, D. Improving texture analysis performance in biometrics by adjusting image sharpness.
Pattern Recognit. 2017, 66, 16–25. [CrossRef]
37. Sun, R.; Lei, T.; Chen, Q.; Wang, Z.X.; Du, X.G.; Zhao, W.Q. Survey of image edge detection. Front. Signal Process. 2022, 2, 1–13.
[CrossRef]
38. Dong, L.; Zhou, J.; Tang, Y.Y. Effective and fast estimation for image sensor noise via constrained weighted least squares. IEEE
Trans. Image Process. 2018, 27, 2715–2730. [CrossRef]
39. Xu, Z.Q.; Ji, X.Q.; Wang, M.J.; Sun, X.B. Edge detection algorithm of medical image based on Canny operator. J. Phys. Conf. Ser.
2021, 1955, 012080. [CrossRef]
40. Ren, X.; Lai, S.N. Medical image enhancement based on Laplace transform, Sobel operator and Histogram equalization. Acad. J.
Comput. Inf. Sci. 2022, 5, 48–54.
41. Balochian, S.; Baloochian, H. Edge detection on noisy images using Prewitt operator and fractional order differentiation. Multimed.
Tools Appl. 2022, 81, 9759–9770. [CrossRef]
42. Marziliano, P.; Dufaux, F.; Winkler, S.; Ebrahimi, T. Perceptual blur and ringing metrics: Application to JPEG2000. Signal Process.
Image Commun. 2004, 19, 163–172. [CrossRef]
43. Zhang, R.K.; Xiao, Q.Y.; Du, Y.; Zuo, X.Y. DSPI filtering evaluation method based on Sobel operator and image entropy. IEEE
Photonics J. 2021, 13, 7800110. [CrossRef]
44. Liu, Z.Y.; Hong, H.J.; Gan, Z.H.; Wang, J.H.; Chen, Y.P. An improved method for evaluating image sharpness based on edge
information. Appl. Sci. 2022, 12, 6712. [CrossRef]
45. Chen, M.Q.; Yu, L.J.; Zhi, C.; Sun, R.J.; Zhu, S.W.; Gao, Z.Y.; Ke, Z.X.; Zhu, M.Q.; Zhang, Y.M. Improved faster R-CNN for fabric
defect detection based on Gabor filter with genetic algorithm optimization. Comput. Ind. 2022, 134, 103551. [CrossRef]
46. Bahrami, K.; Kot, A.C. A fast approach for no-reference image sharpness assessment based on maximum local variation. IEEE
Signal Process. Lett. 2014, 21, 751–755. [CrossRef]
47. Gu, K.; Zhai, G.T.; Lin, W.S.; Yang, X.K.; Zhang, W.J. No-reference image sharpness assessment in autoregressive parameter space.
IEEE Trans. Image Process. 2015, 24, 3218–3231.
48. Chang, H.W.; Zhang, Q.W.; Wu, Q.G.; Gan, Y. Perceptual image quality assessment by independent feature detector. Neurocomput-
ing 2015, 151, 1142–1152. [CrossRef]
49. Narvekar, N.D.; Karam, L.J. A no-reference image blur metric based on the cumulative probability of blur detection (CPBD). IEEE
Trans. Image Process. 2011, 20, 2678–2683. [CrossRef]
50. Lin, L.H.; Chen, T.J. A novel scheme for image sharpness using inflection points. Int. J. Imaging Syst. Technol. 2020, 30, 1–8.
[CrossRef]
51. Zhang, F.Y.; Roysam, B. Blind quality metric for multidistortion images based on cartoon and texture decomposition. IEEE Signal
Process. Lett. 2016, 23, 1265–1269. [CrossRef]
52. Anju, M.; Mohan, J. Deep image compression with lifting scheme: Wavelet transform domain based on high-frequency subband
prediction. Int. J. Intell. Syst. 2021, 37, 2163–2187. [CrossRef]
53. Marichal, X.; Ma, W.Y.; Zhang, H. Blur determination in the compressed domain using DCT information. In Proceedings of the
IEEE International Conference on Image Processing, Kobe, Japan, 24–28 October 1999; pp. 386–390.
54. Mankar, R.; Gajjela, C.C.; Shahraki, F.F.; Prasad, S.; Mayerich, D.; Reddy, R. Multi-modal image sharpening in fourier transform
infrared (FTIR) microscopy. The Analyst 2021, 146, 4822. [CrossRef] [PubMed]
55. Wang, H.; Li, C.F.; Guan, T.X.; Zhao, S.H. No-reference stereoscopic image quality assessment using quaternion wavelet transform
and heterogeneous ensemble learning. Displays 2021, 69, 102058. [CrossRef]
56. Pan, D.; Shi, P.; Hou, M.; Ying, Z.; Fu, S.; Zhang, Y. Blind predicting similar quality map for image quality assessment.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018;
pp. 6373–6382.
57. Golestaneh, S.A.; Chandler, D.M. No-reference quality assessment of JPEG images via a quality relevance map. IEEE Signal
Process. Lett. 2014, 21, 155–158. [CrossRef]
58. Kanjar, D.; Masilamani, V. Image sharpness measure for blurred images in frequency domain. Procedia Eng. 2013, 64, 149–158.
59. Kanjar, D.; Masilamani, V. No-reference image sharpness measure using discrete cosine transform statistics and multivariate
adaptive regression splines for robotic applications. Procedia Comput. Sci. 2018, 133, 268–275.
60. Bae, S.H.; Kim, M. DCT-QM: A DCT-based quality degradation metric for image quality optimization problems. IEEE Trans.
Image Process. 2016, 25, 4916–4930. [CrossRef]
61. Bae, S.H.; Kim, M. A novel image quality assessment with globally and locally consilient visual quality perception. IEEE Trans.
Image Process. 2016, 25, 2392–2406. [CrossRef]
62. Baig, M.A.; Moinuddin, A.A.; Khan, E.; Ghanbari, M. DFT-based no-reference quality assessment of blurred images. Multimed.
Tools Appl. 2022, 81, 7895–7916. [CrossRef]
63. Kerouh, F. A no reference quality metric for measuring image blur in wavelet domain. Int. J. Digit. Form. Wirel. Commun. 2012, 4, 803–812.
64. Vu, P.V.; Chandler, D.M. A fast wavelet-based algorithm for global and local image sharpness estimation. IEEE Signal Process. Lett.
2012, 19, 423–426. [CrossRef]
65. Hassen, R.; Wang, Z.; Salama, M.A. Image sharpness assessment based on local phase coherence. IEEE Trans. Image Process. 2013,
22, 2798–2810. [CrossRef]
66. Gvozden, G.; Grgic, S.; Grgic, M. Blind image sharpness assessment based on local contrast map statistics. J. Vis. Commun. Image
Represent. 2018, 50, 145–158. [CrossRef]
67. Bosse, S.; Maniry, D.; Muller, K.R.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment.
IEEE Trans. Image Process. 2018, 27, 206–219. [CrossRef] [PubMed]
68. Burges, C.; Shaked, T.; Renshaw, E. Learning to rank using gradient descent. In Proceedings of the International Conference on
Machine Learning, New York, NY, USA, 7 August 2005; pp. 89–96.
69. Ye, P.; Kumar, J.; Kang, L. Unsupervised feature learning framework for no-reference image quality assessment. In Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 1098–1105.
70. Pei, B.; Liu, X.; Feng, Z. A no-reference image sharpness metric based on large-scale structure. J. Phys. Conf. Ser. 2018, 960, 012018.
[CrossRef]
71. Liu, L.; Gong, J.; Huang, H.; Sang, Q.B. Blind image blur metric based on orientation-aware local patterns. Signal Process.-Image
Commun. 2020, 80, 115654. [CrossRef]
72. Moorthy, A.K.; Bovik, A.C. A two-step framework for constructing blind image quality indices. IEEE Signal Process. Lett. 2010,
17, 513–516. [CrossRef]
73. Kim, J.; Nguyen, A.D.; Lee, S. Deep CNN-based blind image quality predictor. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 11–24.
[CrossRef] [PubMed]
74. Zhu, M.L.; Ge, D.Y. Image quality assessment based on deep learning with FPGA implementation. Signal Process. Image Commun.
2020, 83, 115780. [CrossRef]
75. Li, D.Q.; Jiang, T.T.; Jiang, M. Exploiting high-level semantics for no-reference image quality assessment of realistic blur images.
In Proceedings of the 25th ACM International Conference on Multimedia, Mountain View, CA, USA, 23–27 October 2017;
pp. 378–386.
76. Lin, K.Y.; Wang, G.X. Hallucinated-IQA: No-reference image quality assessment via adversarial learning. In Proceedings of the
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018;
pp. 732–741.
77. Zhang, W.X.; Ma, K.D.; Yan, J.; Deng, D.X.; Wang, Z. Blind image quality assessment using a deep bilinear convolutional neural
network. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 36–47. [CrossRef]
78. Bianco, S.; Celona, L.; Napoletano, P.; Schettini, R. On the use of deep learning for blind image quality assessment. Signal Image
Video Process. 2018, 12, 355–362. [CrossRef]
79. Gao, F.; Yu, J.; Zhu, S.G.; Huang, Q.M.; Tian, Q. Blind image quality prediction by exploiting multi-level deep representations.
Pattern Recognit. 2018, 81, 432–442. [CrossRef]
80. Li, L.; Wu, D.; Wu, J.; Li, H.L.; Lin, W.S.; Kot, A.C. Image sharpness assessment by sparse representation. IEEE Trans. Multimed.
2016, 18, 1085–1097. [CrossRef]
81. Lu, Q.B.; Zhou, W.G.; Li, H.Q. A no-reference image sharpness metric based on structural information using sparse representation.
Inf. Sci. 2016, 369, 334–346. [CrossRef]
82. Xu, J.T.; Ye, P.; Li, Q.H.; Du, H.Q.; Liu, Y.; Doermann, D. Blind image quality assessment based on high order statistics aggregation.
IEEE Trans. Image Process. 2016, 25, 4444–4457. [CrossRef] [PubMed]
83. Jiang, Q.P.; Shao, F.; Lin, W.S.; Gu, K.; Jiang, G.Y.; Sun, H.F. Optimizing multistage discriminative dictionaries for blind image
quality assessment. IEEE Trans. Multimed. 2018, 20, 2035–2048. [CrossRef]
84. Wu, Q.B.; Li, H.L.; Ngan, K.; Ma, K.D. Blind image quality assessment using local consistency aware retriever and uncertainty
aware evaluator. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 2078–2089. [CrossRef]
85. Deng, C.W.; Wang, S.G.; Li, Z.; Huang, G.B.; Lin, W.S. Content-insensitive blind image blurriness assessment using Weibull
statistics and sparse extreme learning machine. IEEE Trans. Syst. Man Cybern.-Syst. 2019, 49, 516–527. [CrossRef]
86. Zhang, Y.B.; Wang, H.Q.; Tan, F.F.; Chen, W.J.; Wu, Z.R. No-reference image sharpness assessment based on rank learning. In
Proceedings of the 2019 International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 2359–2363.
87. He, S.Y.; Liu, Z.Z. Image quality assessment based on adaptive multiple Skyline query. Signal Process.-Image Commun. 2019,
80, 115676. [CrossRef]
88. Vu, C.T.; Phan, T.D.; Chandler, D.M. S3: A spectral and spatial measure of local perceived sharpness in natural image. IEEE Trans.
Image Process. 2012, 21, 934–945. [CrossRef]
89. Liu, X.Y.; Jin, Z.H.; Jiang, H.; Miao, X.R.; Chen, J.; Lin, Z.C. Quality assessment for inspection images of power lines based on
spatial and sharpness evaluation. IET Image Process. 2022, 16, 356–364. [CrossRef]
90. Yue, G.H.; Hou, C.P.; Gu, K.; Zhou, T.W.; Zhai, G.T. Combining local and global measures for DIBR-Synthesized image quality
evaluation. IEEE Trans. Image Process. 2018, 28, 2075–2088. [CrossRef]
91. Zhang, S.; Li, P.; Xu, X.H.; Li, L.; Chang, C.C. No-reference image blur assessment based on response function of singular values.
Symmetry 2018, 10, 304. [CrossRef]
92. Zhan, Y.; Zhang, R.; Wu, Q. A structural variation classification model for image quality assessment. IEEE Trans. Multimed. 2017,
19, 1837–1847. [CrossRef]
93. Li, D.Q.; Jiang, T.T.; Lin, W.S.; Jiang, M. Which has better visual quality: The clear blue sky or a blurry animal? IEEE Trans.
Multimed. 2019, 21, 1221–1234. [CrossRef]
94. Li, Y.; Po, L.M.; Xu, X. No-reference image quality assessment with shearlet transform and deep neural networks. Neurocomputing
2015, 154, 94–109. [CrossRef]
95. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE
Trans. Image Process. 2006, 15, 3440–3451. [CrossRef] [PubMed]
96. Larson, E.C.; Chandler, D.M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. J. Electron.
Imaging 2010, 19, 011006.
97. Ponomarenko, N.; Lukin, V.; Zelensky, A. TID2008-a database for evaluation of full-reference visual quality assessment metrics.
Adv. Mod. Radio Electron. 2009, 10, 30–45.
98. Ponomarenko, N.; Ieremeiev, O.; Lukin, V. Color image database TID2013: Peculiarities and preliminary results. In Proceedings
of the European Workshop on Visual Information Processing (EUVIP), Paris, France, 10–12 June 2013; pp. 106–111.
99. Ciancio, A.; Da, S.; Said, A.; Obrador, P. No-reference blur assessment of digital pictures based on multifeature classifiers. IEEE
Trans. Image Process. 2011, 20, 64–75. [CrossRef]
100. Virtanen, T.; Nuutinen, M.; Vaahteranoksa, M.; Oittinen, P. CID2013: A database for evaluating no-reference image quality
assessment algorithms. IEEE Trans. Image Process. 2015, 24, 390–402. [CrossRef] [PubMed]
101. Varga, D. No-reference quality assessment of authentically distorted images based on local and global features. J. Imaging 2022, 8, 173.
[CrossRef] [PubMed]
102. Li, L.D.; Xia, W.H.; Wang, S.Q. No-reference and robust image sharpness evaluation based on multiscale spatial and spectral
features. IEEE Trans. Multimed. 2017, 19, 1030–1040. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.