Review
Review: A Survey on Objective Evaluation of Image Sharpness
Mengqiu Zhu 1 , Lingjie Yu 1, * , Zongbiao Wang 2 , Zhenxia Ke 1 and Chao Zhi 1,3, *
1 School of Textile Science and Engineering, Xi’an Polytechnic University, Xi’an 710048, China
2 Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia
3 Key Laboratory of Functional Textile Material and Product, Xi’an Polytechnic University,
Ministry of Education, Xi’an 710048, China
* Correspondence: [email protected] (L.Y.); [email protected] (C.Z.)
Abstract: Establishing an accurate objective evaluation metric of image sharpness is crucial for image
analysis, recognition and quality measurement. In this review, we highlight recent advances in no-
reference image quality assessment research, divide the reported algorithms into four groups (spatial
domain-based methods, spectral domain-based methods, learning-based methods and combination
methods) and outline the advantages and disadvantages of each method group. Furthermore, we
conduct a brief bibliometric study with which to provide an overview of the current trends from 2013
to 2021 and compare the performance of representative algorithms on public datasets. Finally, we
describe the shortcomings and future challenges in the current studies.
Keywords: evaluation metric; image sharpness; no-reference; image quality; evaluation algorithm
1. Introduction
In the overview of image quality evaluation, the common evaluation indicators [1]
include image noise, image color, artifacts, sharpness, etc. Image noise evaluation meth-
ods [2] mainly rely on image spatial and temporal noise, signal-to-noise ratio and grayscale
noise to obtain evaluation results. The image color evaluation methods [3] usually evaluate
the color degree and uniformity of the image. The image artifact evaluation methods [4]
pay more attention to chromatic aberration, distortion and vignetting factors, while the
image sharpness evaluation method [5] is based on the comprehensive evaluation of the
edges and details of the image, which is currently one of the most popular image quality
evaluation methods; it is closely related to research fields such as bionics [6], nonwoven
materials [7] and medicine [8].

Citation: Zhu, M.; Yu, L.; Wang, Z.; Ke, Z.; Zhi, C. Review: A Survey on Objective
Evaluation of Image Sharpness. Appl. Sci. 2023, 13, 2652. https://doi.org/10.3390/app13042652
Academic Editor: Byung-Gyu Kim
Received: 1 February 2023; Revised: 12 February 2023; Accepted: 15 February 2023;
Published: 18 February 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an
open access article distributed under the terms and conditions of the Creative Commons
Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

According to their dependence on a reference image, evaluation methods are divided
into three types: Full-Reference (FR), Reduced-Reference (RR) and No-Reference (NR) [9].
The FR method uses the distorted image and the corresponding undistorted reference
image to generate the image quality score. In the RR method, image quality is evaluated
on partial information extracted using feature extraction methods. Unlike the FR and RR
methods, the NR method can complete the quality assessment using the distorted image
alone. Since an undistorted reference image is usually unavailable in practical applications,
research on NR methods has become the current mainstream direction [10,11].

Image sharpness refers to the clarity of the texture and borders of the various detailed
parts of an image; it affects the perception of information, image acquisition and subsequent
processing, especially in applications based on high-quality images [12–14]. An ideal image
sharpness evaluation function should have high sensitivity, good robustness and low
computational cost [15–17]. Most traditional image sharpness evaluation methods [18] are
based on the spatial or spectral domain. The methods in the spatial domain mainly evaluate
the image by extracting the image gradient and edge information [19,20], which have the
advantages of simple calculation and high real-time
performance but are easily disturbed by noise. The methods in the spectral domain mainly
use transformation methods, such as Fourier transform and wavelet transform, to extract
image frequency features for sharpness evaluation [21]. This type of method has excellent
sensitivity but high computational complexity. In recent years, learning-based methods
have emerged, evolving from machine learning approaches to deep learning ones [22]. At
the same time, an increasing number of combination methods have been studied and
developed. A combination method is usually a new method formed by combining two or
more single evaluation methods in a certain relationship. Such methods incorporate the
advantages of the single methods being combined and effectively improve the accuracy of
quality evaluation.
Although the above-reported research has obtained fruitful results, the objective eval-
uation standards for image sharpness are still not mature enough; few evaluation methods
are suitable for most scenarios. It is unrealistic to ask one sharpness evaluation algorithm
to handle all potential images due to the sophisticated and various image textures and
features [23]. Therefore, this paper reviews and clusters the existing sharpness evaluation
methods and conducts systematic comparative analyses of several representative methods,
aiming to offer directions for researchers choosing or developing a sharpness evaluation
algorithm for different types of images.
The paper reviews, classifies and summarizes the past decade’s sharpness evaluation
methods for no-reference images. The reviewed evaluation methods are grouped into four
categories with their evaluation results compared and the advantages and disadvantages
discussed. An outlook on the application of image sharpness in image processing is given,
and a direction for further research on sharpness evaluation methods for no-reference
images is discussed. Section 1 presents the background of the evaluation method. Section 2
summarizes the current sharpness evaluation methods by characterizing them into four
groups. Section 3 offers a bibliometric study to evaluate and compare the performance
of state-of-the-art algorithms on public datasets. Section 4 highlights the shortcomings of
current research and provides an outlook on future challenges.
As shown in Figure 3, an image is plotted with the results of the Canny operator, the
Sobel operator and the Prewitt operator, respectively.
Marziliano et al. [42] proposed a method to detect the edges of an image using
the Sobel operator and utilized the image edge width as the sharpness evaluation score.
Zhang et al. [43] proposed an image-filtering evaluation method based on the Sobel op-
erator and image entropy. Liu et al. [44] used the Canny edge detection algorithm based
on the activation mechanism to obtain the image edge position and direction, established
the histogram of edge width and obtained the sharpness evaluation metric by weighting
the average edge width. The method was proven to have good accuracy and predictive
monotonicity. Chen et al. [45] used the Prewitt operator and the Gabor filter to calculate the
average gradient of the image to predict the local sharpness value of a fabric surface image.
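To make the gradient-based scoring idea above concrete, here is a minimal sketch, not the exact algorithm of any cited work: a Tenengrad-style score that averages the squared Sobel gradient magnitude, so that blurring an edge lowers the score. The synthetic step-edge test image and the box blur are illustrative assumptions.

```python
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2D convolution (small kernels, loop-based for clarity)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_sharpness(img):
    """Mean squared Sobel gradient magnitude: higher = sharper edges."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = convolve2d(img, kx)       # horizontal gradient
    gy = convolve2d(img, kx.T)     # vertical gradient
    return np.mean(gx ** 2 + gy ** 2)

# Synthetic check: a hard step edge vs. the same edge softened by a box blur.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
blurred = convolve2d(img, np.ones((3, 3)) / 9.0)    # 30x30, softened edge
print(sobel_sharpness(img[1:-1, 1:-1]) > sobel_sharpness(blurred))  # True
```

Squaring the gradient is what makes the score blur-sensitive: blurring spreads the same total intensity change over more pixels, and the sum of squares of many small values is smaller than that of a few large ones.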
Figure 3. Edge detection results: (a) Original image; (b) Canny operator; (c) Sobel operator; (d)
Prewitt operator.
Figure 2. The process of the image sharpness evaluation based on the Prewitt operator.

Among the edge detection-based methods, the Canny operator, the Sobel operator and
the Prewitt operator are widely used, and each of these operators has its own advantages.
The Canny operator is sensitive to weak edges but computationally intensive; the Sobel
operator is fast but susceptible to noise interference; the Prewitt operator is better at
extracting the edges of images disturbed by neighboring pixels.

2.1.3. Other Methods Based on Spatial Domain

Bahrami et al. [46] obtained the quality score by calculating the maximum local variation
(MLV) of image pixels. The standard deviation of the weighted MLV distribution was used
as a metric to measure sharpness. The study shows that this method is characterized by
high real-time performance. Gu et al. [47] developed a sharpness model by analyzing the
autoregressive (AR) model parameters point-by-point to calculate the energy and contrast
differences in the locally estimated AR coefficients and then quantified the image sharpness
using a percentile pool to predict the overall score. This evaluation method, which calculates
local contrast and energy based on a mathematical model, is also a spatial domain method.
Chang et al. [48] proposed a new independent feature similarity index to evaluate sharpness
by calculating the structure and texture differences between two images.

Niranjan et al. [49] presented a no-reference image blur metric based on the study of
human blur perception for varying contrast values. The method gathered information by
estimating the probability of detecting blur at each edge in the image and then calculated
the cumulative probability of blur detection to obtain the evaluation result. Lin et al. [50]
proposed an adaptive definition evaluation algorithm, which achieved a better evaluation
effect than the standard adaptive definition algorithm. Zhang et al. [51] presented a
no-reference image quality evaluation metric based on Cartoon Texture Decomposition
(CTD). Using the characteristics of CTD, the image was separated into cartoon parts with
prominent edges and texture parts with noise. The ambiguity and noise levels were then
estimated respectively to predict the results.
The spatial domain-based methods require less computation, but they rely heavily on
the details of the image and are easily affected by noise.
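The MLV idea of [46] can be sketched roughly as follows. This is a simplified stand-in: the published metric additionally rank-weights the MLV values before taking the standard deviation, a step omitted here, and the random test image is an illustrative assumption.

```python
import numpy as np

def mlv_map(img):
    """Maximum local variation: for each interior pixel, the largest
    absolute difference to any of its 8 neighbours."""
    h, w = img.shape
    c = img[1:-1, 1:-1]
    diffs = [np.abs(c - img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx])
             for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    return np.max(diffs, axis=0)

def mlv_sharpness(img):
    """Standard deviation of the (here unweighted) MLV distribution."""
    return mlv_map(img).std()

# Sanity check: smoothing an image should lower the score.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
soft = sharp.copy()
for _ in range(5):                                  # crude 5-point smoothing
    soft = (soft + np.roll(soft, 1, 0) + np.roll(soft, -1, 0)
                 + np.roll(soft, 1, 1) + np.roll(soft, -1, 1)) / 5.0
print(mlv_sharpness(sharp) > mlv_sharpness(soft))   # True
```

The whole map is computed with eight shifted-array subtractions, which is why this family of measures is attractive for real-time use.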
Kerouh et al. [63] proposed a no-reference blur image evaluation method based on the
wavelet transform, which extracts the high-frequency components of the image and obtains
edge-defined evaluation results by analyzing multi-resolution decomposition. Vu et al. [64]
presented a global and local sharpness evaluation algorithm based on fast wavelet, which
decomposes the image through a three-level separable discrete wavelet transform and
calculates the logarithmic energy of wavelet sub-bands for obtaining the sharpness of the
image. Hassen et al. [65] proposed a method to evaluate the image sharpness of strong
local phase coherence near different image features based on complex wavelet transform.
Gvozden et al. [66] proposed a fast blind image sharpness/ambiguity evaluation model
(BISHARP). The local contrast information of the image was obtained by calculating the
root mean square of the image. At the same time, the diagonal wavelet coefficients in
the wavelet transform were used for ranking and weighting to obtain the final evaluation
result. Wang et al. [55] proposed a no-reference stereo image quality assessment model
based on quaternion wavelet transform (QWT), which extracted a series of quality-aware
features in QWT and MSCN coefficients of high-frequency sub-bands and finally predicted
the sharpness score.
The spectral domain-based methods decompose the image into high-frequency and
low-frequency components or sub-images of different resolution layers. They then use
different functions to process these sub-images so that complex edge information can be
extracted more clearly. However, the spectral domain-based methods require converting
image information from the spatial domain to the spectral domain, which greatly increases
the computational complexity.
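The decompose-then-measure pattern described above can be illustrated with a Haar transform: split the image into sub-bands, then score the log-energy of the high-frequency ones. This is a rough sketch of the idea only; the sub-band weighting and normalization are illustrative assumptions, not those of any specific cited index.

```python
import numpy as np

def haar_level(img):
    """One level of a 2D Haar transform: returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_sharpness(img, levels=3):
    """Weighted log-energy of the high-frequency sub-bands over `levels`,
    with larger (illustrative) weights on the finer scales."""
    score, weight = 0.0, 1.0
    for _ in range(levels):
        img, lh, hl, hh = haar_level(img)     # keep LL, score the rest
        energy = np.mean(lh ** 2) + np.mean(hl ** 2) + np.mean(hh ** 2)
        score += weight * np.log1p(energy)
        weight /= 2.0
    return score

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
soft = sharp.copy()
for _ in range(5):                                  # crude smoothing
    soft = (soft + np.roll(soft, 1, 0) + np.roll(soft, -1, 0)
                 + np.roll(soft, 1, 1) + np.roll(soft, -1, 1)) / 5.0
print(wavelet_sharpness(sharp) > wavelet_sharpness(soft))   # True
```

Blur removes energy mainly from the finest-scale detail sub-bands, which is exactly what the first loop iteration measures.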
The advantages and disadvantages of the different methods based on the spatial
domain/spectral domain are shown in Table 1.

Table 1. Advantages and disadvantages of different methods based on spatial/spectral domain.

Methods                            Advantages                      Disadvantages
Grayscale gradient-based methods   Simple and fast calculation     Rely on image edge information
Edge detection-based methods       High sensitivity                Susceptible to noise
Fourier transform-based methods    Extract edge features clearly   High computational complexity
Wavelet transform-based methods    High accuracy and robustness    High computational complexity and poor real-time performance
2.3. Learning-Based Methods

Unlike traditional methods, learning-based methods [67] can improve the accuracy of
the evaluation results by learning features of the training images and achieving the mapping
to quality scores. The general framework of learning-based methods is shown in Figure 6.
The methods can be divided into SVM-based, deep learning-based and dictionary-based
methods.
used the anisotropy of the orientation selectivity mechanism and the influence of the
gradient orientation effect on vision to extract structural information, and then used the
Toggle operator to extract edge information as the weight of local patterns. Finally, support
vector regression (SVR) was used to train prediction models on the optimized features and subjective
(SVR) was used to train prediction models with optimization characteristics and subjective
scores. Moorthy et al. [72] proposed a new two-step framework for no-reference image
quality assessment based on natural scene statistics (NSS). SVM is used to classify the
distortion types of the fitted parameter features, and then SVR is used to calculate the
image quality evaluation results under different distortion types.
Machine learning-based evaluation methods can achieve better results than other
algorithms on small-sample training sets, but the extracted features determine the quality
of the evaluation results.
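The feature-extraction-plus-SVR pipeline described above can be sketched minimally with scikit-learn. Everything here is a placeholder assumption, not the setup of any cited method: the "quality-aware" features are simple gradient statistics, the blur model is a crude box filter, and the subjective scores are mocked as decreasing linearly with blur level.

```python
import numpy as np
from sklearn.svm import SVR

def box_blur(img, k):
    """Apply k rounds of a crude 5-point smoothing (stand-in for real blur)."""
    out = img.copy()
    for _ in range(k):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

def extract_features(img):
    """Placeholder quality-aware features: gradient statistics + variance."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.std(), img.var()])

rng = np.random.default_rng(0)
base = rng.random((64, 64))
levels = range(10)
X = np.array([extract_features(box_blur(base, k)) for k in levels])
y = np.array([1.0 - 0.1 * k for k in levels])       # mock subjective scores

# Regress features onto the (mock) subjective scores, then predict.
model = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X, y)
pred_sharp = model.predict(extract_features(box_blur(base, 0))[None])[0]
pred_blurred = model.predict(extract_features(box_blur(base, 9))[None])[0]
print(pred_sharp > pred_blurred)
```

The same skeleton applies to the two-step framework of [72]: a classifier first picks the distortion type, and a per-type SVR like the one above produces the score.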
Xu et al. [82] proposed a blind image quality evaluation method based on high-order
statistical aggregation (HOSA). This method extracts local normalized image blocks as local
features through a regular grid and constructs a codebook containing 100 codewords through
K-means clustering. Each local feature is assigned to several nearest clusters, and the
higher-order statistical differences between local features and corresponding clusters are
aggregated as the global evaluation results. Jiang et al. [83] proposed a no-reference image
evaluation method based on an optimized multilevel discriminant dictionary (MSDD).
MSDDs are learned by implementing a label consistent K-SVD (LC-KSVD) algorithm in a
phased mode.
The dictionary learning method establishes a transfer relationship between the image
features and the dictionary and then matches the features against the dictionary to obtain
the image evaluation results.
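The codebook idea behind HOSA can be sketched roughly as follows: cluster local features with k-means to build a codebook, then aggregate the residuals between features and their nearest codeword into a global descriptor. This simplified version keeps only first-order residual statistics (the published method also aggregates higher-order ones), and all sizes and the random features are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    """Tiny k-means: returns the cluster centers (the 'codebook')."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

def codebook_descriptor(features, codebook):
    """Per-codeword mean residual between assigned features and the
    codeword, flattened into one global descriptor."""
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    desc = np.zeros_like(codebook)
    for j in range(len(codebook)):
        pts = features[labels == j]
        if len(pts):
            desc[j] = pts.mean(0) - codebook[j]
    return desc.ravel()

local_feats = rng.normal(size=(500, 8))    # stand-in for normalized patches
codebook = kmeans(local_feats, k=10)
global_desc = codebook_descriptor(local_feats, codebook)
print(global_desc.shape)
```

A regression model trained on such descriptors (as in the SVR example earlier) then maps them to quality scores.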
3. Bibliometrics Analysis
3.1. Research Distribution Trend Analysis
This paper examines the distribution of research on no-reference image quality evaluation
across three recent periods (2013 to 2015, 2016 to 2018 and 2019 to 2021). In these three
periods, 74, 92 and 105 related papers were published, respectively.
The distribution of methods used in related papers is shown in Figure 8. We also found
that from 2013 to 2015, the search keywords ranked by frequency from highest to lowest were
spatial domain methods, spectral domain methods and deep learning methods; from 2016
to 2018, the search frequency ranking changed to spatial domain methods, deep learning
methods, spectral domain methods and other methods; from 2019 to 2021, the ranking
was updated to deep learning methods, combination methods, spatial domain methods,
spectral domain methods and other methods.
Figure 8. Distribution of evaluation methods.
By analyzing Figure 8, during 2013–2015 the spatial domain-based methods were the
most common with a share of 36%, followed by the spectral domain-based methods, while
the learning-based methods accounted for only 19% of the total articles. In the following
years, the learning-based methods grew rapidly, most likely owing to the explosion of deep
learning techniques. During 2016–2018, the shares of methods based on the spatial domain
and on learning were similar, 27% and 28%, respectively. From 2019 to 2021, the majority
of the research in image quality evaluation was learning-based, and combination methods
also showed great potential with a share of 24%. The learning-based methods are currently
the most widely used owing to their excellent ability to extract features automatically. The
statistical data in this paper come from Web of Science.

3.2. The Performance of the Representative Methods on Public Datasets
In this section, we selected several representative methods from the above four main
groups and analyzed their evaluation performance on the public datasets. MLV [46],
BIBLE [35], MGV [46], ARISM [47] and CPBD [49] are selected for the spatial domain-based
methods; DCT-QM [60], SC-QI [61], FISH [62], LPC-SI [65] and BISHARP [66] are selected
for the spectral domain-based methods; BIQI [72], DB-CNN [77], DeepBIQ [78], SPARISH [79],
SR [81] and MSFF [87] are selected to represent the learning-based methods; S3 [88],
RFSV [91], SVC [92] and SFA [93] are selected as the combination methods. The specific
information of the compared methods is described in Table 3. We also introduce six
commonly used public datasets and four commonly used indicators from the literature to
measure the sharpness methods, and select two indicators to evaluate the image quality
evaluation results.
Pearson Linear Correlation Coefficient (PLCC) describes the correlation between the
algorithm evaluation value and the human subjective score. It is mainly used to calculate
the accuracy, as shown in Equation (1).
PLCC = \frac{1}{n-1} \sum_{i=1}^{n} \left( \frac{x_i - \bar{x}}{\sigma_x} \right) \left( \frac{y_i - \bar{y}}{\sigma_y} \right)   (1)
where \bar{x} and \bar{y} are the mean values of x_i and y_i, respectively, and \sigma_x and \sigma_y are the
corresponding standard deviations.
Spearman’s rank ordered correlation coefficient (SROCC) is mainly used to measure
the monotonicity of algorithm prediction, as shown in Equation (2).
SROCC = 1 - \frac{6}{n(n^2 - 1)} \sum_{i=1}^{n} \left( r_{x_i} - r_{y_i} \right)^2   (2)
where r_{x_i} and r_{y_i} are the rank positions of x_i and y_i in their respective data sequences.
Kendall Rank Order Correlation Coefficient (KROCC) can also effectively measure the
monotonicity of the algorithm, as shown in Equation (3).
KROCC = \frac{2(n_c - n_d)}{n(n-1)}   (3)
where nc is the number of consistent element pairs in the dataset and nd is the number of
inconsistent element pairs in the dataset.
Root Mean Square Error (RMSE) is used to directly measure the absolute error between
the algorithm evaluation score and the human subjective score, as shown in Equation (4).
RMSE = \left[ \frac{1}{n} \sum_{i=1}^{n} (x_i - y_i)^2 \right]^{\frac{1}{2}}   (4)
where xi is the subjective MOS value and yi is the predicted score of the algorithm.
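The four indicators map directly to code. A minimal NumPy sketch with synthetic scores follows; the tie-free form of SROCC is assumed, and the example score vectors are illustrative.

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation between objective scores x and MOS y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())

def ranks(a):
    """1-based rank position of each element (assumes no ties)."""
    order = np.argsort(a)
    r = np.empty(len(a))
    r[order] = np.arange(1, len(a) + 1)
    return r

def srocc(x, y):
    """Spearman rank-order correlation, Equation (2)."""
    d = ranks(x) - ranks(y)
    n = len(x)
    return 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

def krocc(x, y):
    """Kendall rank-order correlation via concordant/discordant pairs."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[j] - x[i]) * np.sign(y[j] - y[i])
    return 2 * s / (n * (n - 1))

def rmse(x, y):
    """Root mean square error between predictions and MOS."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sqrt(np.mean((x - y) ** 2))

mos = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pred = np.array([1.1, 2.3, 2.9, 4.2, 4.8])
# Monotone predictions: SROCC = KROCC = 1.0, PLCC ~ 0.993.
print(round(plcc(pred, mos), 3), srocc(pred, mos), krocc(pred, mos))
```

PLCC and RMSE measure accuracy of the raw scores, while SROCC and KROCC ignore the scale entirely and only reward correct ordering, which is why both kinds of indicator are reported in the tables below.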
Table 5. Comparison of different evaluation methods on LIVE dataset and CSIQ dataset.

Group                   Method         LIVE PLCC  LIVE SROCC  CSIQ PLCC  CSIQ SROCC
Spatial domain-based    MLV [46]       0.938      0.937       0.894      0.851
Spatial domain-based    BIBLE [35]     0.962      0.961       0.940      0.913
Spatial domain-based    MGV [46]       0.960      0.963       0.907      0.950
Spatial domain-based    ARISM [47]     0.959      0.956       0.948      0.931
Spatial domain-based    CPBD [49]      0.895      0.918       0.882      0.885
Spectral domain-based   DCT-QM [60]    0.925      0.938       0.872      0.926
Spectral domain-based   SC-QI [61]     0.937      0.948       0.927      0.943
Spectral domain-based   FISH [62]      0.904      0.841       0.923      0.894
Spectral domain-based   LPC-SI [65]    0.922      0.950       0.906      0.893
Spectral domain-based   BISHARP [66]   0.952      0.960       0.942      0.927
Learning-based          BIQI [72]      0.920      0.914       0.846      0.773
Learning-based          DB-CNN [77]    0.970      0.968       0.959      0.946
Learning-based          DeepBIQ [78]   0.912      0.893       0.975      0.967
Learning-based          SPARISH [79]   0.956      0.959       0.938      0.914
Learning-based          SR [81]        0.961      0.955       0.950      0.921
Learning-based          MSFF [87]      0.949      0.950       -          -
Combination             S3 [88]        0.943      0.944       0.893      0.911
Combination             RFSV [91]      0.974      0.971       0.942      0.920
Combination             SVC [92]       0.949      0.941       0.952      0.954
Combination             SFA [93]       0.942      0.953       -          -
Table 6. Comparison of different evaluation methods on TID2008 dataset and TID2013 dataset.

Group                   Method         TID2008 PLCC  TID2008 SROCC  TID2013 PLCC  TID2013 SROCC
Spatial domain-based    MLV [46]       0.811         0.812          0.883         0.879
Spatial domain-based    BIBLE [35]     0.893         0.892          0.905         0.898
Spatial domain-based    MGV [46]       0.937         0.942          0.914         0.921
Spatial domain-based    ARISM [47]     0.854         0.868          0.898         0.902
Spatial domain-based    CPBD [49]      0.824         0.841          0.855         0.852
Spectral domain-based   DCT-QM [60]    0.819         0.837          0.852         0.854
Spectral domain-based   SC-QI [61]     0.890         0.905          0.907         0.905
Spectral domain-based   FISH [62]      0.816         0.786          0.911         0.912
Spectral domain-based   LPC-SI [65]    0.846         0.843          0.892         0.889
Spectral domain-based   BISHARP [66]   0.877         0.880          0.892         0.896
Learning-based          BIQI [72]      0.794         0.799          0.825         0.815
Learning-based          DB-CNN [77]    0.873         0.859          0.865         0.816
Learning-based          DeepBIQ [78]   0.951         0.952          0.920         0.922
Learning-based          SPARISH [79]   0.889         0.887          0.900         0.893
Learning-based          SR [81]        0.895         0.911          0.899         0.892
Learning-based          MSFF [87]      0.926         0.917          0.917         0.922
Combination             S3 [88]        0.851         0.842          0.879         0.861
Combination             RFSV [91]      0.915         0.924          0.924         0.932
Combination             SVC [92]       0.889         0.874          0.857         0.787
Combination             SFA [93]       0.916         0.907          0.954         0.948
Table 7. Comparison of different evaluation methods on BID dataset and CID2013 dataset.

Group                   Method         BID PLCC  BID SROCC  CID2013 PLCC  CID2013 SROCC
Spatial domain-based    MLV [46]       0.375     0.317      0.689         0.621
Spatial domain-based    BIBLE [35]     0.392     0.361      -             -
Spatial domain-based    MGV [46]       0.307     0.201      0.511         0.499
Spatial domain-based    ARISM [47]     0.193     0.151      -             -
Spatial domain-based    CPBD [49]      -         -          0.418         0.293
Spectral domain-based   DCT-QM [60]    0.383     0.376      0.662         0.653
Spectral domain-based   SC-QI [61]     -         -          -             -
Spectral domain-based   FISH [62]      0.485     0.474      0.638         0.587
Spectral domain-based   LPC-SI [65]    0.315     0.216      0.634         0.609
Spectral domain-based   BISHARP [66]   0.356     0.307      0.678         0.681
Learning-based          BIQI [72]      0.513     0.472      0.742         0.723
Learning-based          DB-CNN [77]    0.471     0.464      0.686         0.672
Learning-based          DeepBIQ [78]   -         -          -             -
Learning-based          SPARISH [79]   0.482     0.402      0.661         0.652
Learning-based          SR [81]        0.415     0.467      0.621         0.634
Learning-based          MSFF [87]      -         -          -             -
Combination             S3 [88]        0.427     0.425      0.687         0.646
Combination             RFSV [91]      0.391     0.335      0.701         0.694
Combination             SVC [92]       -         -          0.425         0.433
Combination             SFA [93]       0.546     0.526      -             -
As can be seen from Table 5, RFSV and DeepBIQ have the highest score and best
performance on the LIVE and CSIQ analog distortion databases, respectively. Table 6 shows
that DeepBIQ and SFA have the highest PLCC and SROCC values and the best performance
on TID2008 and TID2013 analog distortion databases, respectively. From Table 7, the best
performers on BID and CID2013 natural distortion datasets are SFA and BIQI, respectively.
The spatial domain-based methods are generally based on basic image processing, and
the obtained results are intuitive. However, although the low-level features extracted by
spatial domain-based methods are rich in local details, global semantic information is
missing, so the evaluation accuracy is often unsatisfactory. The performance of the spectral
domain-based methods fluctuates significantly across datasets: they perform well on the
analog distortion datasets but less well on the natural distortion datasets, because their
strength lies in analyzing high-frequency detail information. The learning-based methods
outperform the other methods on the analog distortion datasets, but their performance on
the natural distortion datasets varies greatly. Combination methods can flexibly combine
the above methods in a variety of ways to match the characteristics of different images and
can thus achieve good scores on their target datasets. However, no single method currently
guarantees optimal results on all datasets.
Figures 9 and 10 show the PLCC and SROCC values of the different groups of methods
on the analog distortion datasets, respectively, and Figure 11 shows the PLCC and SROCC
values of the different groups of methods on the natural distortion datasets. The data in
Figures 9–11 are all taken from the averages of the comparison results in Tables 5–7. It can
be concluded that most of the spatial domain-based and combination methods perform
well on the analog distortion datasets. Meanwhile, the learning-based methods show great
potential in their comprehensive performance on both analog and natural distortion datasets.
Figure 9. PLCC value of different groups of methods on analog distortion datasets.
Figure 10. SROCC value of different groups of methods on analog distortion datasets.
Figure 11. PLCC and SROCC values of different groups of methods on natural distortion datasets.
4. Conclusions and Outlook

Through the evolution of no-reference image sharpness assessment research, from
traditional methods (spatial or spectral domain-based) to learning-based methods to
method combination, this technique has witnessed rapid development. The following
conclusions can be drawn from analyzing a large number of works of literature.
References
1. Mahajan, P.; Jakhetiya, V.; Abrol, P.; Lehana, P.K.; Guntuku, S.C. Perceptual quality evaluation of hazy natural images. IEEE Trans.
Ind. Inform. 2021, 17, 8046–8056. [CrossRef]
2. Li, Y.L.; Jiang, Y.J.; Yu, X.; Ren, B.; Wang, C.Y.; Chen, S.H.; Su, D.Y. Deep-learning image reconstruction for image quality
evaluation and accurate bone mineral density measurement on quantitative CT: A phantom-patient study. Front. Endocrinol. 2022,
13, 884306. [CrossRef] [PubMed]
3. Zhang, Y.Q.; Chen, H.; Wang, L.; Xiao, Y.J.; Huang, H.B. Color image segmentation using level set method with initialization
mask in multiple color spaces. Int. J. Eng. Manuf. 2011, 1, 70–76. [CrossRef]
4. Dickmann, J.; Sarosiek, C.; Rykalin, V.; Pankuch, M.; Coutrakon, G.; Johnson, R.P.; Bashkirov, V.; Schulte, R.W.; Parodi, K.;
Landry, G.; et al. Proof of concept image artifact reduction by energy-modulated proton computed tomography (EMpCT). Phys.
Med. 2021, 81, 237–244. [CrossRef] [PubMed]
5. Liu, S.Q.; Yu, S.; Zhao, Y.M.; Tao, Z.; Yu, H.; Jin, L.B. Salient region guided blind image sharpness assessment. Sensors 2021, 21, 3963.
[CrossRef]
6. David, K.; Mehmet, C.; Xiaoping, A.S. State estimation based echolocation bionics and image processing based target pattern
recognition. Adv. Sci. Technol. Eng. Syst. 2019, 4, 73–83.
7. Ke, Z.X.; Yu, L.J.; Wang, G.; Sun, R.; Zhu, M.; Dong, H.R.; Xu, Y.; Ren, M.; Fu, S.D.; Zhi, C. Three-Dimensional modeling of
spun-bonded nonwoven meso-structures. Polymers 2023, 15, 600. [CrossRef]
8. Zhu, H.; Zhi, C.; Meng, J.; Wang, Y.; Liu, Y.; Wei, L.; Fu, S.; Miao, M.; Yu, L. A self-pumping dressing with multiple liquid transport
channels for wound microclimate management. Macromol. Biosci. 2022, 23, 2200356. [CrossRef]
9. Wang, Z.; Bovik, A.C. Reduced- and no-reference image quality assessment. IEEE Signal Process. Mag. 2011, 28, 29–40. [CrossRef]
10. Wu, Q.; Li, H.; Meng, F.; Ngan, K.N.; Zhu, S.Y. No reference image quality assessment metric via multi-domain structural
information and piecewise regression. Vis. Commun. Image Represent. 2015, 32, 205–216. [CrossRef]
11. Lu, Y.; Xie, F.; Liu, T.; Jiang, Z.J.; Tao, D.C. No reference quality assessment for multiply-distorted images based on an improved
bag-of-words model. IEEE Signal Process. Lett. 2015, 22, 1811–1815. [CrossRef]
12. Cai, H.; Wang, M.J.; Mao, W.D.; Gong, M.L. No-reference image sharpness assessment based on discrepancy measures of
structural degradation. J. Vis. Commun. Image Represent. 2020, 71, 102861. [CrossRef]
13. Qi, L.Z.; Zhi, C.; Meng, J.G.; Wang, Y.Z.; Liu, Y.M.; Song, Q.W.; Wu, Q.; Wei, L.; Dai, Y.; Zou, J.; et al. Highly efficient acoustic
absorber designed by backing cavity-like and filled-microperforated plate-like structure. Mater. Des. 2023, 225, 111484. [CrossRef]
14. Chen, G.; Zhai, M. Quality assessment on remote sensing image based on neural networks. J. Vis. Commun. Image Represent. 2019,
63, 102580. [CrossRef]
15. Huang, S.; Liu, Y.; Du, H. A no-reference objective image sharpness metric for perception and estimation. Sixth Int. Conf. Digit.
Image Process. (ICDIP 2014) 2014, 915914, 1–7.
16. Ferzli, R.; Karam, L.J. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE
Trans. Image Process. 2009, 18, 717–728. [CrossRef]
17. Qian, J.Y.; Zhao, H.J.; Fu, J.; Song, W.; Qian, J.D.; Xiao, Q.B. No-reference image sharpness assessment via difference quotients.
Electron. Imaging 2019, 28, 013032. [CrossRef]
18. Zhai, G.T.; Min, X.K. Perceptual image quality assessment: A survey. Sci. China Inf. Sci. 2020, 63, 211301. [CrossRef]
19. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process.
2012, 21, 4695–4708. [CrossRef]
20. Li, Q.; Lin, W.; Fang, Y. No-reference quality assessment for multiply-distorted images in gradient domain. IEEE Signal Process.
Lett. 2016, 23, 541–545. [CrossRef]
21. Bovik, A.C.; Liu, S. DCT-domain blind measurement of blocking artifacts in DCT-coded images. In Proceedings of the IEEE
International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, UT, USA, 7–11 May 2001; Volume 3,
pp. 1725–1728.
22. Hou, W.L.; Gao, X.B.; Tao, D.C.; Li, X.L. Blind image quality assessment via deep learning. IEEE Trans. Neural Netw. Learn. Syst.
2015, 26, 1275–1286. [PubMed]
23. Zhang, Y.; Ngan, K.N.; Ma, L.; Li, H.L. Objective quality assessment of image retargeting by incorporating fidelity measures and
inconsistency detection. IEEE Trans. Image Process. 2017, 26, 5980–5993. [CrossRef] [PubMed]
24. Thakur, N. An efficient image quality criterion in spatial domain. Indian J. Sci. Technol. 2016, 9, 1–6.
25. Hong, Y.Z.; Ren, G.Q.; Liu, E.H. A no-reference image blurriness metric in the spatial domain. Opt.-Int. J. Light Electron. Opt. 2016,
127, 5568–5575. [CrossRef]
26. Feichtenhofer, C.; Fassold, H.; Schallauer, P. A perceptual image sharpness metric based on local edge gradient analysis. IEEE
Signal Process. Lett. 2013, 20, 379–382. [CrossRef]
27. Yan, X.Y.; Lei, J.; Zhao, Z. Multidirectional gradient neighbourhood-weighted image sharpness evaluation algorithm. Math. Probl.
Eng. 2020, 1, 7864024. [CrossRef]
28. Min, X.; Zhai, G.; Gu, K.; Yang, X.; Guan, X. Objective quality evaluation of dehazed images. IEEE Trans. Intell. Transp. Syst. 2019,
20, 2879–2892. [CrossRef]
29. Wang, F.; Chen, J.; Zhong, H.N.; Ai, Y.B.; Zhang, W.D. No-Reference image quality assessment based on image multi-scale contour
prediction. Appl. Sci. 2022, 12, 2833. [CrossRef]
30. Wang, T.G.; Zhu, L.; Cao, P.L.; Liu, W.J. Research on Vickers hardness image definition evaluation function. Adv. Mater. Res. 2010,
121, 134–138. [CrossRef]
31. Dong, W.; Sun, H.W.; Zhou, R.X.; Chen, H.M. Autofocus method for SAR image with multi-blocks. J. Eng. 2019, 19, 5519–5523.
[CrossRef]
32. Jiang, S.X.; Zhou, R.G.; Hu, W.W. Quantum image sharpness estimation based on the Laplacian operator. Int. J. Quantum Inf.
2020, 18, 2050008. [CrossRef]
33. Zeitlin, A.D.; Strain, R.C. Augmenting ADS-B with traffic information service-broadcast. IEEE Aerosp. Electron. Syst. Mag. 2003,
18, 13–18. [CrossRef]
34. Zhan, Y.B.; Zhang, R. No-Reference image sharpness assessment based on maximum gradient and variability of gradients. IEEE
Trans. Multimed. 2017, 20, 1796–1808. [CrossRef]
35. Li, L.D.; Lin, W.S.; Wang, X.S.; Yang, G.B.; Bahrami, K.; Kot, A.C. No-reference image blur assessment based on discrete orthogonal
moments. IEEE Trans. Cybern. 2015, 46, 39–50. [CrossRef]
36. Zhang, K.N.; Huang, D.; Zhang, B.; Zhang, D. Improving texture analysis performance in biometrics by adjusting image sharpness.
Pattern Recognit. 2017, 66, 16–25. [CrossRef]
37. Sun, R.; Lei, T.; Chen, Q.; Wang, Z.X.; Du, X.G.; Zhao, W.Q. Survey of image edge detection. Front. Signal Process. 2022, 2, 1–13.
[CrossRef]
38. Dong, L.; Zhou, J.; Tang, Y.Y. Effective and fast estimation for image sensor noise via constrained weighted least squares. IEEE
Trans. Image Process. 2018, 27, 2715–2730. [CrossRef]
39. Xu, Z.Q.; Ji, X.Q.; Wang, M.J.; Sun, X.B. Edge detection algorithm of medical image based on Canny operator. J. Phys. Conf. Ser.
2021, 1955, 012080. [CrossRef]
40. Ren, X.; Lai, S.N. Medical image enhancement based on Laplace transform, Sobel operator and Histogram equalization. Acad. J.
Comput. Inf. Sci. 2022, 5, 48–54.
41. Balochian, S.; Baloochian, H. Edge detection on noisy images using Prewitt operator and fractional order differentiation. Multimed.
Tools Appl. 2022, 81, 9759–9770. [CrossRef]
42. Marziliano, P.; Dufaux, F.; Winkler, S.; Ebrahimi, T. Perceptual blur and ringing metrics: Application to JPEG2000. Signal Process.
Image Commun. 2004, 19, 163–172. [CrossRef]
43. Zhang, R.K.; Xiao, Q.Y.; Du, Y.; Zuo, X.Y. DSPI filtering evaluation method based on Sobel operator and image entropy. IEEE
Photonics J. 2021, 13, 7800110. [CrossRef]
44. Liu, Z.Y.; Hong, H.J.; Gan, Z.H.; Wang, J.H.; Chen, Y.P. An improved method for evaluating image sharpness based on edge
information. Appl. Sci. 2022, 12, 6712. [CrossRef]
45. Chen, M.Q.; Yu, L.J.; Zhi, C.; Sun, R.J.; Zhu, S.W.; Gao, Z.Y.; Ke, Z.X.; Zhu, M.Q.; Zhang, Y.M. Improved faster R-CNN for fabric
defect detection based on Gabor filter with genetic algorithm optimization. Comput. Ind. 2022, 134, 103551. [CrossRef]
46. Bahrami, K.; Kot, A.C. A fast approach for no-reference image sharpness assessment based on maximum local variation. IEEE
Signal Process. Lett. 2014, 21, 751–755. [CrossRef]
47. Gu, K.; Zhai, G.T.; Lin, W.S.; Yang, X.K.; Zhang, W.J. No-reference image sharpness assessment in autoregressive parameter space.
IEEE Trans. Image Process. 2015, 24, 3218–3231.
48. Chang, H.W.; Zhang, Q.W.; Wu, Q.G.; Gan, Y. Perceptual image quality assessment by independent feature detector.
Neurocomputing 2015, 151, 1142–1152. [CrossRef]
49. Narvekar, N.D.; Karam, L.J. A no-reference image blur metric based on the cumulative probability of blur detection (CPBD). IEEE
Trans. Image Process. 2011, 20, 2678–2683. [CrossRef]
50. Lin, L.H.; Chen, T.J. A novel scheme for image sharpness using inflection points. Int. J. Imaging Syst. Technol. 2020, 30, 1–8.
[CrossRef]
51. Zhang, F.Y.; Roysam, B. Blind quality metric for multidistortion images based on cartoon and texture decomposition. IEEE Signal
Process. Lett. 2016, 23, 1265–1269. [CrossRef]
52. Anju, M.; Mohan, J. Deep image compression with lifting scheme: Wavelet transform domain based on high-frequency subband
prediction. Int. J. Intell. Syst. 2021, 37, 2163–2187. [CrossRef]
53. Marichal, X.; Ma, W.Y.; Zhang, H. Blur determination in the compressed domain using DCT information. In Proceedings of the
IEEE International Conference on Image Processing, Kobe, Japan, 24–28 October 1999; pp. 386–390.
54. Mankar, R.; Gajjela, C.C.; Shahraki, F.F.; Prasad, S.; Mayerich, D.; Reddy, R. Multi-modal image sharpening in fourier transform
infrared (FTIR) microscopy. The Analyst 2021, 146, 4822. [CrossRef] [PubMed]
55. Wang, H.; Li, C.F.; Guan, T.X.; Zhao, S.H. No-reference stereoscopic image quality assessment using quaternion wavelet transform
and heterogeneous ensemble learning. Displays 2021, 69, 102058. [CrossRef]
56. Pan, D.; Shi, P.; Hou, M.; Ying, Z.; Fu, S.; Zhang, Y. Blind predicting similar quality map for image quality assessment.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018;
pp. 6373–6382.
57. Golestaneh, S.A.; Chandler, D.M. No-reference quality assessment of JPEG images via a quality relevance map. IEEE Signal
Process. Lett. 2014, 21, 155–158. [CrossRef]
58. Kanjar, D.; Masilamani, V. Image sharpness measure for blurred images in frequency domain. Procedia Eng. 2013, 64, 149–158.
59. Kanjar, D.; Masilamani, V. No-reference image sharpness measure using discrete cosine transform statistics and multivariate
adaptive regression splines for robotic applications. Procedia Comput. Sci. 2018, 133, 268–275.
60. Bae, S.H.; Kim, M. DCT-QM: A DCT-based quality degradation metric for image quality optimization problems. IEEE Trans.
Image Process. 2016, 25, 4916–4930. [CrossRef]
61. Bae, S.H.; Kim, M. A novel image quality assessment with globally and locally consilient visual quality perception. IEEE Trans.
Image Process. 2016, 25, 2392–2406. [CrossRef]
62. Baig, M.A.; Moinuddin, A.A.; Khan, E.; Ghanbari, M. DFT-based no-reference quality assessment of blurred images. Multimed.
Tools Appl. 2022, 81, 7895–7916. [CrossRef]
63. Kerouh, F. A no reference quality metric for measuring image blur in wavelet domain. Int. J. Digit. Form. Wirel. Commun. 2012, 4, 803–812.
64. Vu, P.V.; Chandler, D.M. A fast wavelet-based algorithm for global and local image sharpness estimation. IEEE Signal Process. Lett.
2012, 19, 423–426. [CrossRef]
65. Hassen, R.; Wang, Z.; Salama, M.A. Image sharpness assessment based on local phase coherence. IEEE Trans. Image Process. 2013,
22, 2798–2810. [CrossRef]
66. Gvozden, G.; Grgic, S.; Grgic, M. Blind image sharpness assessment based on local contrast map statistics. J. Vis. Commun. Image
Represent. 2018, 50, 145–158. [CrossRef]
67. Bosse, S.; Maniry, D.; Muller, K.R.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment.
IEEE Trans. Image Process. 2018, 27, 206–219. [CrossRef] [PubMed]
68. Burges, C.; Shaked, T.; Renshaw, E. Learning to rank using gradient descent. In Proceedings of the International Conference on
Machine Learning, New York, NY, USA, 7 August 2005; pp. 89–96.
69. Ye, P.; Kumar, J.; Kang, L. Unsupervised feature learning framework for no-reference image quality assessment. In Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 1098–1105.
70. Pei, B.; Liu, X.; Feng, Z. A No-Reference image sharpness metric based on large-scale structure. J. Phys. Conf. Ser. 2018, 960, 012018.
[CrossRef]
71. Liu, L.; Gong, J.; Huang, H.; Sang, Q.B. Blind image blur metric based on orientation-aware local patterns. Signal Process. Image
Commun. 2020, 80, 115654. [CrossRef]
72. Moorthy, A.K.; Bovik, A.C. A two-step framework for constructing blind image quality indices. IEEE Signal Process. Lett. 2010,
17, 513–516. [CrossRef]
73. Kim, J.; Nguyen, A.D.; Lee, S. Deep CNN-based blind image quality predictor. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 11–24.
[CrossRef] [PubMed]
74. Zhu, M.L.; Ge, D.Y. Image quality assessment based on deep learning with FPGA implementation. Signal Process. Image Commun.
2020, 83, 115780. [CrossRef]
75. Li, D.Q.; Jiang, T.T.; Jiang, M. Exploiting high-level semantics for no-reference image quality assessment of realistic blur images.
In Proceedings of the 25th ACM International Conference on Multimedia, Mountain View, CA, USA, 23–27 October 2017;
pp. 378–386.
76. Lin, K.Y.; Wang, G.X. Hallucinated-IQA: No-reference image quality assessment via adversarial learning. In Proceedings of the
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018;
pp. 732–741.
77. Zhang, W.X.; Ma, K.D.; Yan, J.; Deng, D.X.; Wang, Z. Blind image quality assessment using a deep bilinear convolutional neural
network. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 36–47. [CrossRef]
78. Bianco, S.; Celona, L.; Napoletano, P.; Schettini, R. On the use of deep learning for blind image quality assessment. Signal Image
Video Process. 2018, 12, 355–362. [CrossRef]
79. Gao, F.; Yu, J.; Zhu, S.G.; Huang, Q.M.; Tian, Q. Blind image quality prediction by exploiting multi-level deep representations.
Pattern Recognit. 2018, 81, 432–442. [CrossRef]
80. Li, L.; Wu, D.; Wu, J.; Li, H.L.; Lin, W.S.; Kot, A.C. Image sharpness assessment by sparse representation. IEEE Trans. Multimed.
2016, 18, 1085–1097. [CrossRef]
81. Lu, Q.B.; Zhou, W.G.; Li, H.Q. A no-reference image sharpness metric based on structural information using sparse representation.
Inf. Sci. 2016, 369, 334–346. [CrossRef]
82. Xu, J.T.; Ye, P.; Li, Q.H.; Du, H.Q.; Liu, Y.; Doermann, D. Blind image quality assessment based on high order statistics aggregation.
IEEE Trans. Image Process. 2016, 25, 4444–4457. [CrossRef] [PubMed]
83. Jiang, Q.P.; Shao, F.; Lin, W.S.; Gu, K.; Jiang, G.Y.; Sun, H.F. Optimizing multistage discriminative dictionaries for blind image
quality assessment. IEEE Trans. Multimed. 2018, 20, 2035–2048. [CrossRef]
84. Wu, Q.B.; Li, H.L.; Ngan, K.; Ma, K.D. Blind image quality assessment using local consistency aware retriever and uncertainty
aware evaluator. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 2078–2089. [CrossRef]
85. Deng, C.W.; Wang, S.G.; Li, Z.; Huang, G.B.; Lin, W.S. Content-insensitive blind image blurriness assessment using Weibull
statistics and sparse extreme learning machine. IEEE Trans. Syst. Man Cybern.-Syst. 2019, 49, 516–527. [CrossRef]
86. Zhang, Y.B.; Wang, H.Q.; Tan, F.F.; Chen, W.J.; Wu, Z.R. No-reference image sharpness assessment based on rank learning. In
Proceedings of the 2019 International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 2359–2363.
87. He, S.Y.; Liu, Z.Z. Image quality assessment based on adaptive multiple Skyline query. Signal Process. Image Commun. 2019,
80, 115676. [CrossRef]
88. Vu, C.T.; Phan, T.D.; Chandler, D.M. S3: A spectral and spatial measure of local perceived sharpness in natural images. IEEE Trans.
Image Process. 2012, 21, 934–945. [CrossRef]
89. Liu, X.Y.; Jin, Z.H.; Jiang, H.; Miao, X.R.; Chen, J.; Lin, Z.C. Quality assessment for inspection images of power lines based on
spatial and sharpness evaluation. IET Image Process. 2022, 16, 356–364. [CrossRef]
90. Yue, G.H.; Hou, C.P.; Gu, K.; Zhou, T.W.; Zhai, G.T. Combining local and global measures for DIBR-Synthesized image quality
evaluation. IEEE Trans. Image Process. 2018, 28, 2075–2088. [CrossRef]
91. Zhang, S.; Li, P.; Xu, X.H.; Li, L.; Chang, C.C. No-reference image blur assessment based on response function of singular values.
Symmetry 2018, 10, 304. [CrossRef]
92. Zhan, Y.; Zhang, R.; Wu, Q. A structural variation classification model for image quality assessment. IEEE Trans. Multimed. 2017,
19, 1837–1847. [CrossRef]
93. Li, D.Q.; Jiang, T.T.; Lin, W.S.; Jiang, M. Which has better visual quality: The clear blue sky or a blurry animal? IEEE Trans.
Multimed. 2019, 21, 1221–1234. [CrossRef]
94. Li, Y.; Po, L.M.; Xu, X. No-reference image quality assessment with shearlet transform and deep neural networks. Neurocomputing
2015, 154, 94–109. [CrossRef]
95. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE
Trans. Image Process. 2006, 15, 3440–3451. [CrossRef] [PubMed]
96. Larson, E.C.; Chandler, D.M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. J. Electron.
Imaging 2010, 19, 011006.
97. Ponomarenko, N.; Lukin, V.; Zelensky, A. TID2008-a database for evaluation of full-reference visual quality assessment metrics.
Adv. Mod. Radio Electron. 2009, 10, 30–45.
98. Ponomarenko, N.; Ieremeiev, O.; Lukin, V. Color image database TID2013: Peculiarities and preliminary results. In Proceedings
of the European Workshop on Visual Information Processing (EUVIP), Paris, France, 10–12 June 2013; pp. 106–111.
99. Ciancio, A.; Da, S.; Said, A.; Obrador, P. No-reference blur assessment of digital pictures based on multifeature classifiers. IEEE
Trans. Image Process. 2011, 20, 64–75. [CrossRef]
100. Virtanen, T.; Nuutinen, M.; Vaahteranoksa, M.; Oittinen, P. CID2013: A database for evaluating no-reference image quality
assessment algorithms. IEEE Trans. Image Process. 2015, 24, 390–402. [CrossRef] [PubMed]
101. Varga, D. No-Reference quality assessment of authentically distorted images based on local and global features. J. Imaging 2022, 8, 173.
[CrossRef] [PubMed]
102. Li, L.D.; Xia, W.H.; Wang, S.Q. No-Reference and robust image sharpness evaluation based on multiscale spatial and spectral
features. IEEE Trans. Multimed. 2017, 19, 1030–1040. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.