
Journal of Physics: Conference Series

PAPER • OPEN ACCESS

Image texture model based on energy features


To cite this article: M M Lyasheva et al 2021 J. Phys.: Conf. Ser. 1902 012120


Image texture model based on energy features

M M Lyasheva, S A Lyasheva and M P Shleymovich


Kazan National Research Technical University named after A.N. Tupolev-KAI,
10, K. Marks, Kazan, 420111, Russia

E-mail: [email protected]

Abstract. In information processing and control systems based on computer vision
technologies, it is necessary to provide an effective feature representation of images for their
subsequent analysis. One of the widely used approaches is based on the construction of texture
models. In this paper, a description of texture using energy features is considered. The
proposed model is a set of weights of image pixels reflecting their significance from the point
of view of image perception. The significance of a pixel is estimated using the energy of the
coefficients of the orthogonal discrete multiresolution wavelet transform. The paper presents
expressions for calculating the pixel weights and shows that the resulting texture models can be
used to classify images.

1. Introduction
Currently, it is difficult to imagine a field of human activity in which computer vision technologies are
not used. Examples of systems based on such technologies are geoinformation systems [1], biometric
identification systems [2], close range photogrammetry systems [3], intelligent transport systems [4],
aviation systems [5], remote sensing systems [6], pavement health monitoring systems [7], etc.
These systems implement various processes when performing the following stages of image
processing and analysis:
1) image registration;
2) primary image processing;
3) image feature extraction;
4) image feature analysis.
Despite the importance of all stages, the processes of extracting image features are of particular
importance. This is due to the fact that the features should adequately reflect the characteristics of
images, the analysis of which provides the solution of specific functional problems in the
corresponding systems.
As a rule, color, texture, and shape features are used for image analysis [8, 9]. Among the various
color features, the color histogram, color coherence vector, color correlogram, color moments, and
dominant color descriptor are often used [10-14].
Examples of textural features are descriptors obtained on the basis of statistical property analysis,
spatial frequency analysis, co-occurrence matrix analysis, edge analysis, primitive run-length analysis,
Laws' texture energy measures, models of human texture perception, independent component analysis,
local binary patterns, fractal analysis, multiscale representation analysis, morphological
characteristics, chain grammars of shape, graph grammars, and grouping of primitives [8, 9, 15-21].
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution
of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Published under licence by IOP Publishing Ltd

As shape features, one can use the area, perimeter length, center of gravity, roundness,
rectangularity, characteristics of the bounding rectangle, characteristics of the convex hull, raw
moments, central moments, invariant moments, Zernike moments, chain codes, boundary signatures,
Fourier descriptors, and the shape histogram [8, 9, 17, 22, 23].
Image features are represented as a vector of numerical values, which is formed in two stages. At the
first stage, image segmentation is performed: sets of pixels (image segments) that meet a given
criterion are determined. At the second stage, feature values that reflect the properties of the image
segments are calculated.
The set of feature values is a certain image model that reflects its features from the point of view of
specific tasks of information extraction and analysis. The effectiveness of the subsequent analysis and
decision-making based on its results largely depends on the quality of this model. Therefore, the
development of effective feature models is relevant and practically significant.

2. Texture analysis of images
One of the widely used approaches to describing images is texture analysis. Within the framework of
this approach, transformations are carried out that make it possible to represent an image in the form
of a texture model, i.e. a set of texture features, some of which were listed above.
According to reviews [15, 16], the following texture description methods can be distinguished:
1) geometric (structural) methods;
2) statistical methods;
3) model-based methods;
4) signal processing methods (transformation-based methods).
Geometric (structural) approaches represent texture with well-defined primitives (microtexture)
and a hierarchy of spatial arrangements (macrotexture) of these primitives. To describe a texture, you
need to define primitives and placement rules. The choice of a primitive (from a set of primitives) and
the likelihood that the selected primitive will be placed at a specific location can be a function of the
location or primitives near that location. The advantage of the structural approach is that it provides a
good symbolic description of the image. However, this feature is more useful for synthesis than for
image analysis. Abstract descriptions can be poorly defined for natural textures due to the variability
of both micro- and macrostructure and the lack of a clear distinction between them.
Unlike geometric (structural) methods, statistical approaches do not attempt to explicitly describe
the hierarchical structure of a texture. Instead, they represent texture indirectly using non-deterministic
properties defined by distributions and relationships between pixels. Methods based on second-order
statistics achieve high-quality texture recognition.
Model-based texture analysis methods are based on the construction of an image model, which can
be used not only to describe the texture, but also to synthesize it. The model parameters reflect the
essential properties of the texture. This approach uses generative and stochastic models for texture
interpretation. The model parameters are estimated and then used to analyze the images.
According to the monograph [19], approaches to texture representation can be grouped into three
main classes – statistical, syntactic, and hybrid. Statistical methods are suitable if the dimensions of
the primitives are comparable to the pixel dimensions. Syntactic and hybrid methods are more suitable
for textures where primitives can be assigned a label (the primitive type), meaning that primitives can
be described using a greater variety of properties than just tonal properties (for example, a shape
description can be used).
Methods based on edge analysis and multiscale representation are of particular importance in our
work. Edge analysis is based on appropriate edge detectors. To describe a texture, for example, the
gradient function g(d) can be used:

$$g(d) = |f(i,j) - f(i+d,j)| + |f(i,j) - f(i-d,j)| + |f(i,j) - f(i,j+d)| + |f(i,j) - f(i,j-d)|, \quad (1)$$

where d is the value of the distance between pixels horizontally or vertically; f is the image; i, j are the
coordinates of the image pixel. The function g(d) is similar to the negative autocorrelation function –
its minimum (maximum) corresponds to the maximum (minimum) of the autocorrelation function.
The dimension of the feature space of the texture description is given by the number of distance values
d used to calculate the gradient.
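As an illustration, the following C++ sketch evaluates g(d) for a grayscale image stored as a row-major array; averaging the per-pixel sums over all pixels with a complete d-neighbourhood is our assumption, since (1) is written for a single pixel.

```cpp
// Sketch: gradient function g(d) from equation (1), averaged over all
// pixels whose d-neighbourhood lies inside the image (the averaging is
// an assumption; the formula itself is per pixel).
#include <cmath>
#include <vector>

double textureGradient(const std::vector<double>& f, int w, int h, int d) {
    double sum = 0.0;
    long count = 0;
    for (int i = d; i < h - d; ++i)
        for (int j = d; j < w - d; ++j) {
            double c = f[i * w + j];
            sum += std::fabs(c - f[(i + d) * w + j])   // vertical offsets
                 + std::fabs(c - f[(i - d) * w + j])
                 + std::fabs(c - f[i * w + j + d])     // horizontal offsets
                 + std::fabs(c - f[i * w + j - d]);
            ++count;
        }
    return count > 0 ? sum / count : 0.0;
}
```

Evaluating this function for several values of d then yields the texture description, one feature dimension per distance.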
Texture properties can also be derived from first- and second-order edge distribution statistics:
1) graininess – the finer the texture, the more edges are present in the edge image of the texture;
2) contrast – high-contrast textures are characterized by large values of the gradient magnitude;
3) randomness – the randomness of the pixel scatter is characterized by the entropy of the edge
magnitude histogram;
4) directivity – an approximate measure of directivity can be defined as the entropy of the edge
direction histogram; directional textures have an even number of significant histogram peaks,
non-directional textures have a uniform edge direction histogram;
5) linearity – the linearity of the texture is indicated by the co-occurrence of pairs of edges with
the same direction at constant distances, when some edges are located in the direction of
other edges;
6) periodicity – the periodicity of the texture can be determined from the co-occurrence of pairs
of edges of the same direction at constant distances in directions perpendicular to the
direction of other edges;
7) size – the size of the texture can be based on the co-occurrence of pairs of edges with
opposite directions at a constant distance in a direction perpendicular to the directions of
other edges.
Another texture feature based on edge analysis is the edge density (the number of edges per unit area):

$$F_{\text{edgeness}} = \frac{|\{p : \mathrm{Mag}(p) > T\}|}{N}, \quad (2)$$

where p is a pixel of the image region; Mag(p) is the edge magnitude at pixel p; N is the number of
pixels in the image region; T is a threshold value. This characteristic can be extended to take into
account both the filling of the texture and the properties of its orientation. For this, normalized
histograms of the gradient magnitude Hmag(R) and direction Hdir(R) over the region R are
constructed (the histograms are normalized by dividing the frequencies by the size of the region).
These histograms usually contain a small fixed number of bins (about 10). The pair of histograms
gives a quantitative description of the texture of the region R:

$$F_{\text{magdir}} = (H_{\text{mag}}(R), H_{\text{dir}}(R)). \quad (3)$$
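A minimal sketch of features (2) and (3) follows, assuming the gradient magnitudes and directions of a region have already been computed (for example, with a Sobel operator); the bin count of 10 follows the remark above, and the function names are illustrative.

```cpp
// Sketch: edgeness (2) and the normalized magnitude/direction
// histograms that make up F_magdir (3), for one image region.
#include <array>
#include <vector>

double edgeness(const std::vector<double>& mag, double T) {
    long edges = 0;
    for (double m : mag)
        if (m > T) ++edges;                               // pixels with Mag(p) > T
    return static_cast<double>(edges) / mag.size();       // divided by N, as in (2)
}

// Normalized histogram over [lo, hi) with a small fixed number of bins.
std::array<double, 10> normHist(const std::vector<double>& v, double lo, double hi) {
    std::array<double, 10> h{};
    for (double x : v) {
        int b = static_cast<int>((x - lo) / (hi - lo) * 10.0);
        if (b >= 0 && b < 10) h[b] += 1.0;
    }
    for (double& c : h) c /= v.size();                    // normalize by region size
    return h;
}
// F_magdir of (3) is then the pair (normHist(mag, ...), normHist(dir, ...)).
```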

The description of texture based on the multiscale representation relies on the wavelet transform. In
general, the wavelet transform of a function is expressed as follows:

$$Wf(u, s) = \int f(x)\, \frac{1}{s^{D/2}}\, \psi^{*}\!\left(\frac{x - u}{s}\right) dx, \quad (4)$$

where Wf is the transform result; f is the original function; ψ* is the complex conjugate of the shifted
and scaled function ψ, which has zero mean, is centered at the origin, and has unit norm; D is the
dimension of the signal; u is the D-dimensional vector of shift parameters; s is the scale parameter [24].
The wavelet transform decomposes the signal into basis functions of the form:

$$\psi_{u,s}(x) = \frac{1}{s^{D/2}}\, \psi\!\left(\frac{x - u}{s}\right), \quad (5)$$


which allow us to identify its features in a local area (determined by the shift parameters) at a certain
scale (determined by the scale parameter). For images, we have D = 2, x = (x_1, x_2)^T, u = (u_1, u_2)^T.
The wavelet transform in the form (4) is continuous. When using discrete sets of shift and scale
parameters, we obtain a discrete wavelet transform. A special case of the discrete wavelet transform is
the multiresolution wavelet transform.
Wavelet analysis procedures are based on the wavelet transform. Its advantages include the ability
to identify local spatial and frequency features of images [25]. In addition, fast algorithms exist for
orthogonal discrete multiresolution wavelet transforms.
The wavelet transform makes it possible to obtain a multiscale description of the texture of images.
The most commonly used wavelet transform-based texture features are wavelet energy signatures and
their second-order statistics.
As a result of the wavelet transform, the image is represented in the form:

$$y(m, n) = [y_i(m, n)]_{i=1,\dots,N} = [a_L(m, n), d_L(m, n), \dots, d_1(m, n)]^{T}, \quad (6)$$

where N is the number of frequency ranges (channels); L is the number of decomposition levels;
a_L(m, n) are the approximation coefficients; d_L(m, n), ..., d_1(m, n) are the detail coefficients;
m, n are the coefficient coordinates. For an image we have:

$$N = 1 + 3L. \quad (7)$$

Consequently, the texture can be described by a set of N first-order probability density functions
p(y_i) for i = 1, ..., N.
A more compact representation can be obtained using a set of variance features:

$$v_i = \mathrm{Var}\{y_i\}, \quad (8)$$

where v_i is the variance for the i-th frequency range (i-th channel), i = 1, ..., N, and Var is the
variance operator. The channel variances v_i can be estimated from the mean sum of squares over the
region of interest R of the analyzed texture:

$$v_i = \frac{1}{N_R} \sum_{(m,n) \in R} y_i^{2}(m, n), \quad (9)$$

where N_R is the number of pixels in the region R. To obtain a better estimate of the variance of the
low-frequency range, it is usually recommended to subtract the square of the average image brightness
from the obtained variance value.
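As a sketch, estimate (9) amounts to a mean sum of squares over the coefficients of one channel; the detail channels are zero-mean, so for them no mean subtraction is needed.

```cpp
// Sketch of (9): channel variance as the mean sum of squares of the
// channel's coefficients over the region of interest R.
#include <vector>

double channelVariance(const std::vector<double>& yi) {  // y_i over R
    double s = 0.0;
    for (double y : yi) s += y * y;
    return yi.empty() ? 0.0 : s / yi.size();              // divide by N_R
}
```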
Next, we consider the approach developed by the authors for constructing a texture model of
images based on energy features calculated using the orthogonal discrete multiresolution Haar wavelet
transform. This approach combines the use of variance features, typical for multiscale representations
of image textures, with edge analysis.

3. Energy features
The energy features of an image are estimates of the energy values of pixel attributes. The set of
energy features forms a weight model of the image – a set of weights that reflect the significance of
the image pixels from the point of view of its perception [26].
In the statistical approach, the texture is defined using non-deterministic properties related to the
distributions and ratios of brightness values (gray levels) in the image. The texture can be evaluated
from this point of view by analyzing changes in brightness values.
Changes in the brightness values in an image characterize its local features. According to research
in the field of biological vision, such features are the most important for the perception of scenes in
images. In turn, a relatively large brightness change at a pixel characterizes it as a pixel belonging to
an edge. The density of edges in different areas of the image reflects the texture of the

image as a whole. Thus, by evaluating the significance of a pixel in terms of image perception, we can
build a texture model.
In this paper, it is proposed to use the wavelet transform coefficients associated with the pixels as
the attributes from which their significance is estimated. The simplest and fastest transform is the
orthogonal multiresolution Haar transform. When using it, the original image itself is treated as the
result of a wavelet decomposition at the last level J, i.e. as a matrix of approximation coefficients LL_J
whose values are equal to the brightness values of the pixels (the matrices of detail coefficients of this
level, LH_J, HL_J and HH_J, are assumed to contain only zeros). To calculate the coefficients of the
LL_{j-1}, LH_{j-1}, HL_{j-1} and HH_{j-1} matrices of level j − 1 from the values of the approximation
coefficients of the LL_j matrix of level j (j = J, ..., j_0, where j_0 is the initial level), the following
expressions are used:

$$LL_{j-1}(m, n) = \frac{LL_j(2m, 2n) + LL_j(2m+1, 2n) + LL_j(2m, 2n+1) + LL_j(2m+1, 2n+1)}{4}, \quad (10)$$

$$LH_{j-1}(m, n) = \frac{LL_j(2m, 2n) + LL_j(2m+1, 2n) - LL_j(2m, 2n+1) - LL_j(2m+1, 2n+1)}{4}, \quad (11)$$

$$HL_{j-1}(m, n) = \frac{LL_j(2m, 2n) - LL_j(2m+1, 2n) + LL_j(2m, 2n+1) - LL_j(2m+1, 2n+1)}{4}, \quad (12)$$

$$HH_{j-1}(m, n) = \frac{LL_j(2m, 2n) - LL_j(2m+1, 2n) - LL_j(2m, 2n+1) + LL_j(2m+1, 2n+1)}{4}. \quad (13)$$
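The following sketch implements one level of expressions (10)-(13), assuming even image dimensions and row-major std::vector<double> storage; the struct and function names are illustrative.

```cpp
// One step of the 2-D Haar decomposition, equations (10)-(13): every
// 2x2 block of LL_j yields one coefficient in each sub-band of level j-1.
#include <vector>

struct HaarLevel {
    std::vector<double> LL, LH, HL, HH;
    int w, h;  // sub-band dimensions (half of the input)
};

HaarLevel haarStep(const std::vector<double>& LLj, int w, int h) {
    HaarLevel out{{}, {}, {}, {}, w / 2, h / 2};
    int sz = out.w * out.h;
    out.LL.resize(sz); out.LH.resize(sz); out.HL.resize(sz); out.HH.resize(sz);
    for (int m = 0; m < out.h; ++m)
        for (int n = 0; n < out.w; ++n) {
            double a = LLj[(2 * m) * w + 2 * n];          // LL_j(2m,   2n)
            double b = LLj[(2 * m + 1) * w + 2 * n];      // LL_j(2m+1, 2n)
            double c = LLj[(2 * m) * w + 2 * n + 1];      // LL_j(2m,   2n+1)
            double e = LLj[(2 * m + 1) * w + 2 * n + 1];  // LL_j(2m+1, 2n+1)
            out.LL[m * out.w + n] = (a + b + c + e) / 4;  // (10)
            out.LH[m * out.w + n] = (a + b - c - e) / 4;  // (11)
            out.HL[m * out.w + n] = (a - b + c - e) / 4;  // (12)
            out.HH[m * out.w + n] = (a - b - c + e) / 4;  // (13)
        }
    return out;
}
```

Applying haarStep repeatedly to the LL output of the previous step produces the multi-level decomposition used below.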
Figures 1 and 2 show the original standard images and the results of their three-level orthogonal
multiresolution Haar wavelet transform (shown from top to bottom and from left to right: boat,
cameraman, house, jetplane, lake, livingroom, mandril, peppers, pirate). The original images were
obtained from the sites [27, 28]. They are grayscale images with dimensions of 512 × 512 pixels.
Figure 2 shows copies of the original images, which are the matrices of approximation coefficients of
level j_0, and the matrices of scaled absolute values of the detail coefficients; coefficients with larger
absolute values are shown as darker pixels.
When considering figure 2, it is necessary to take into account the layout of the matrices of the
detail coefficients of the multiresolution wavelet transform. In this case, the scheme shown in
figure 3 is used.
The advantage of the orthogonal multiresolution Haar wavelet transform for image processing is
the ability to localize the points of brightness change. Indeed, a comparison of figures 1 and 2 shows
that significant changes in brightness correspond to detail coefficients with large absolute values. At
the same time, there is also a correspondence between the values of the detail coefficients at different
scales (levels).
There are various ways to calculate the weights for estimating the significance of pixels from the
values of the coefficients of the orthogonal multiresolution Haar wavelet transform; one of them is
defined as follows:

$$w_j(m_j, n_j) = k_{1,j}\, w_{j-1}(m_j/2, n_j/2) + k_{2,j}\, E_j(m_j, n_j), \quad (14)$$

where

$$E_j(m_j, n_j) = LH_j^{2}(m_j, n_j) + HL_j^{2}(m_j, n_j) + HH_j^{2}(m_j, n_j). \quad (15)$$

Expressions (14) and (15) are used for levels from j_0 to J − 1; for the initial level j_0, the weight
values w_{j_0 - 1}(m_{j_0}/2, n_{j_0}/2) are set to zero. The weight values for the pixels of the
original image, considered at level J, are calculated by the formula:


$$w_J(m_J, n_J) = w_{J-1}(m_J/2, n_J/2), \quad (16)$$

and then normalized:

$$w_J(m_J, n_J) = \frac{w_J(m_J, n_J)}{\max\{w_J(m_J, n_J)\}}, \quad (17)$$

where max{w_J(m_J, n_J)} is the maximum weight value.
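A sketch of the full recursion (14)-(17) is given below, assuming the detail sub-bands of each level are available (e.g. from repeated haarStep calls above) and stored coarsest level first; the default factors 0.25 and 1 match the experiments reported in section 4, and the names are illustrative.

```cpp
// Sketch: pixel-weight model via (14)-(17). Levels run from j0 (coarsest)
// to J-1; the full-resolution weight map inherits the level-(J-1) weights
// per (16) and is normalized by its maximum per (17).
#include <algorithm>
#include <vector>

struct Level { std::vector<double> LH, HL, HH; int w, h; };

std::vector<double> weightModel(const std::vector<Level>& lv,  // levels j0..J-1
                                int W, int H,                  // image size
                                double k1 = 0.25, double k2 = 1.0) {
    std::vector<double> prev;  // w_{j-1}; empty at level j0 (treated as zeros)
    for (const Level& L : lv) {
        std::vector<double> cur(L.w * L.h);
        for (int m = 0; m < L.h; ++m)
            for (int n = 0; n < L.w; ++n) {
                int i = m * L.w + n;
                double E = L.LH[i] * L.LH[i] + L.HL[i] * L.HL[i]
                         + L.HH[i] * L.HH[i];                    // (15)
                double parent = prev.empty() ? 0.0
                              : prev[(m / 2) * (L.w / 2) + n / 2];
                cur[i] = k1 * parent + k2 * E;                   // (14)
            }
        prev = std::move(cur);
    }
    std::vector<double> wJ(W * H);
    for (int m = 0; m < H; ++m)                                  // (16)
        for (int n = 0; n < W; ++n)
            wJ[m * W + n] = prev[(m / 2) * (W / 2) + n / 2];
    double mx = *std::max_element(wJ.begin(), wJ.end());
    if (mx > 0) for (double& v : wJ) v /= mx;                    // (17)
    return wJ;
}
```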

Figure 1. Standard images.
Figure 2. Orthogonal multiresolution Haar wavelet transform of standard images.
The weight models of the original standard images in figure 1 are shown in figure 4; in the
visualization, more significant weight values are shown as darker pixels. It should be noted that the
weight models characterize the contribution of each pixel to the perception of an image in terms of
the significance of the changes in brightness at that pixel.
The considered source images do not have a regular texture, i.e. they are weakly textured images.
In contrast, the images shown in figure 5 are characterized by a more pronounced texture. They are
the texture images D1–D9 from the well-known Brodatz database [29], reduced to the size of
512 × 512 pixels. Figures 6 and 7 show the results of the orthogonal multiresolution Haar wavelet
transform and examples of weight models of these images, respectively.

4. Texture model
Based on the proposed energy features, a texture model of the image can be built using the following
steps (a code sketch follows the list):
1. Split the image into rectangular regions R_1, ..., R_N.
2. For each region R_i, i = 1, ..., N, calculate the average weight of the pixels belonging to this region:

$$w_i = \frac{1}{|R_i|} \sum_{(m_i, n_i) \in R_i} w(m_i, n_i), \quad (18)$$

where |R_i| is the number of pixels included in the region R_i.
3. Form the feature vector:

$$f = (f_1, \dots, f_N), \quad (19)$$


where f_i = w_i is the value of the i-th texture feature of the image.
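The sketch below covers steps 1-3 and the comparison criterion, assuming the weight map produced by weightModel above, the 8 × 8 grid (64 regions) used in the experiments, and image dimensions divisible by the grid size; the function names are illustrative.

```cpp
// Sketch: feature vector (18)-(19) from the weight map, and the mean
// square error between two feature vectors used to compare images.
#include <vector>

std::vector<double> textureFeatures(const std::vector<double>& w,
                                    int W, int H, int grid = 8) {
    std::vector<double> f(grid * grid, 0.0);
    int rw = W / grid, rh = H / grid;              // region size (W, H divisible)
    for (int m = 0; m < H; ++m)
        for (int n = 0; n < W; ++n)
            f[(m / rh) * grid + (n / rw)] += w[m * W + n];
    for (double& v : f) v /= rw * rh;              // average weight, as in (18)
    return f;
}

double mse(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        s += (a[i] - b[i]) * (a[i] - b[i]);
    return s / a.size();
}
```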

Figure 3. Wavelet transform scheme.
Figure 4. Weight models of standard images.
Figure 5. Texture images.
Figure 6. Orthogonal multiresolution Haar wavelet transform of texture images.
Table 1 shows the results of comparing the texture descriptions of the standard images boat,
cameraman, house, jetplane, lake, livingroom, mandril, peppers, and pirate (in the table, the images
are designated I1–I9, respectively). When calculating the weights, the scale factors k_{1,j} for all
levels j from j_0 to J − 1 are 0.25, and the scale factors k_{2,j} for all levels are equal to 1. The initial
level j_0, the final level J, and the number of wavelet decomposition levels L are 6, 9, and 3,
respectively. The number of texture features per image is 64 (the images are evenly divided into 64
regions). To compare the images, the mean square error criterion calculated from the feature vectors
was used.
Table 2 shows the results of a similar comparison of descriptions of texture images D1 – D9
obtained using the same parameters as for standard images.


All images whose processing results are given in tables 1 and 2 are grayscale, have dimensions of
512 × 512 pixels, and are stored as TIFF files. The values given in tables 1 and 2 show that both
images of real-world objects (the standard images) and texture images can be classified on the basis
of the described features.

Figure 7. Weight models of texture images.


Experiments have shown that the presented method of forming texture features has a relatively
high computational speed. For the experiments, we used a software implementation developed in C++
in Microsoft Visual Studio 2017 using the computer vision library OpenCV 3.4.9. An average
processing time of 0.007 s was obtained for grayscale images with dimensions of 512 × 512 pixels.
The experiments were carried out on a personal computer with a quad-core Intel(R) Core(TM)
i5-8300H processor at 2.30 GHz and 8 GB RAM under Microsoft Windows 10. A further advantage
of the method is the possibility of its parallel implementation, which follows from the structure of the
multiresolution wavelet transform: it defines independent sets of interrelated coefficients at different
levels. The method can be used for image analysis in various problems in systems based on computer
vision technologies, including contextual image search, object detection and recognition in images,
image compression, etc.
Table 1. Mean square error values when comparing standard images.
Image I1 I2 I3 I4 I5 I6 I7 I8 I9
I1 0.0000 0.0275 0.0264 0.0314 0.0167 0.0157 0.0654 0.0129 0.0178
I2 0.0275 0.0000 0.0323 0.0367 0.0324 0.0322 0.0706 0.0285 0.0259
I3 0.0264 0.0323 0.0000 0.0349 0.0289 0.0258 0.0659 0.0281 0.0289
I4 0.0314 0.0367 0.0349 0.0000 0.0323 0.0301 0.0653 0.0325 0.0259
I5 0.0167 0.0324 0.0289 0.0323 0.0000 0.0166 0.0559 0.0126 0.0159
I6 0.0157 0.0322 0.0258 0.0301 0.0166 0.0000 0.0591 0.0166 0.0213
I7 0.0654 0.0706 0.0659 0.0653 0.0559 0.0591 0.0000 0.0608 0.0599
I8 0.0129 0.0285 0.0281 0.0325 0.0126 0.0166 0.0608 0.0000 0.0167
I9 0.0178 0.0259 0.0289 0.0259 0.0159 0.0213 0.0599 0.0167 0.0000


Table 2. Mean square error values when comparing texture images.
Image D1 D2 D3 D4 D5 D6 D7 D8 D9
D1 0.0000 0.0367 0.1088 0.0764 0.0265 0.0107 0.0336 0.0305 0.1136
D2 0.0367 0.0000 0.0799 0.0516 0.0214 0.0364 0.0349 0.0624 0.0852
D3 0.1088 0.0799 0.0000 0.0397 0.0881 0.1052 0.0924 0.1372 0.0252
D4 0.0764 0.0516 0.0397 0.0000 0.0567 0.0716 0.0614 0.1043 0.0421
D5 0.0265 0.0214 0.0881 0.0567 0.0000 0.0243 0.0298 0.0516 0.0944
D6 0.0107 0.0364 0.1052 0.0716 0.0243 0.0000 0.0307 0.0342 0.1098
D7 0.0336 0.0349 0.0924 0.0614 0.0298 0.0307 0.0000 0.0551 0.0973
D8 0.0305 0.0624 0.1372 0.1043 0.0516 0.0342 0.0551 0.0000 0.1423
D9 0.1136 0.0852 0.0252 0.0421 0.0944 0.1098 0.0973 0.1423 0.0000

5. Conclusion
The proposed method for constructing a texture model of images based on energy features is simple to
implement and allows a high processing speed to be obtained. The results show that the described
features make it possible to ensure stable discrimination of images with different textural
characteristics. This can be used to construct effective classifiers for solving problems in systems
based on computer vision technologies; for example, the method can be used for image compression
or contextual image search.
It should be noted that practical application of the considered approach requires taking into
account the operating conditions of the corresponding systems, for example, the effect of interference
of various kinds in control and information processing systems, including interference from
electrostatic discharges [30-32].

References
[1] Rizaev I S, Miftakhutdinov D I and Takhavova E G 2018 Solution of the problem of
superposing image and digital map for detection of new objects J. Phys.: Conf. Ser. 944 pp
1–7
[2] Suvorov N V and Shleymovich M P 2020 Mathematical model of the biometric iris recognition
system Computer Research and Modeling 12 (3) pp 629–639
[3] Luhmann T, Robson S, Kyle S and Boehm J 2013 Close-Range Photogrammetry and 3D
Imaging (Berlin: De Gruyter) p 708
[4] Makhmutova A, Anikin I V and Dagaeva M 2020 Object Tracking Method for
Videomonitoring in Intelligent Transport Systems Proceedings - 2020 International Russian
Automation Conference RusAutoCon 2020 (Sochi) pp 535–540
[5] Kostyashkin L N and Nikiforov M B (Eds.) 2016 Image processing in aircraft vision systems
(Moscow: FIZMATLIT) p 240 (in Russian)
[6] Schowengerdt R A 2006 Remote Sensing: Models and Methods for Image Processing (London:
Academic Press) p 560
[7] Lyasheva S, Tregubov V and Shleymovich M 2019 Detection and recognition of pavement
cracks based on computer vision technology 2019 International Conference on Industrial
Engineering, Applications and Manufacturing ICIEAM 2019 (Sochi) pp 1–5
[8] Gonzalez R C and Woods R E 2018 Digital Image Processing (London: Pearson) p 1168
[9] Shapiro L G and Stockmann G C 2001 Computer Vision (London: Pearson) p 608
[10] Long F, Zhang H and Feng D D 2003 Fundamentals of Content-Based Image Retrieval
Multimedia Information Retrieval and Management. Series: Signals and Communication
Technology ed D D Feng, W C Siu and H J Zhang (Berlin, Heidelberg: Springer) pp 1–26
[11] Pass G and Zabih R 1996 Histogram refinement for content-based image retrieval Proceedings
Third IEEE Workshop on Applications of Computer Vision WACV '96 (Sarasota) pp 96–102
[12] Huang J, Kumar S R, Mitra M, Zhu W and Zabih R 1999 Spatial Color Indexing and
Applications International Journal of Computer Vision 35 (3) pp 245–268
[13] Stricker M and Orengo M 1995 Similarity of color images SPIE Proc. of Storage and Retrieval
for Image and Video Databases (San Jose) vol 2420 pp 381–392
[14] Deng Y, Manjunath B S, Kenney C, Moore M S and Shin H 2001 An efficient color
representation for image retrieval IEEE Transactions on Image Processing 10 (1) pp 140–147
[15] Tuceryan M and Jain A K 1998 Texture analysis The handbook of pattern recognition and
computer vision ed C H Chen, L F Pau and P S P Wang (Singapore: World Scientific
Publishing Company) pp 207–248
[16] Materka A 2004 Texture analysis methodologies for magnetic resonance imaging Dialogues in
Clinical Neuroscience 6 (2) pp 243–250
[17] Davies E R 2017 Computer Vision: Principles, Algorithms, Applications, Learning (London:
Academic Press) p 900
[18] Hung C-C, Song E and Lan Y 2019 Image Texture Analysis: Foundations, Models and
Algorithms (Cham: Springer) p 270
[19] Sonka M, Hlavac V and Boyle R 2015 Image Processing, Analysis, and Machine Vision
(Boston: Cengage Learning) p 920
[20] Li S Z 2009 Markov Random Field Modeling in Image Analysis (London: Springer) p 384
[21] Snitkowska E and Kasprzak W 2006 Independent Component Analysis of Textures in
Angiography Images Computer Vision and Graphics. Series: Computational Imaging and
Vision ed K Wojciechowski, B Smolka, H Palus, R Kozera, W Skarbek and L Noakes
(Dordrecht: Springer) vol 32 pp 367–372
[22] Nixon M and Aguado A 2019 Feature Extraction and Image Processing for Computer Vision
(London: Academic Press) p 650
[23] Flusser J, Suk T and Zitova B 2009 Moments and Moment Invariants in Pattern Recognition
(New York: Wiley) p 312
[24] Addison P S 2002 The Illustrated Wavelet Transform Handbook: Introductory Theory and
Applications in Science, Engineering, Medicine and Finance (Bristol and Philadelphia: IOP
Publishing) p 368
[25] Mallat S 2009 A Wavelet Tour of Signal Processing (London: Academic Press) p 832
[26] Gizatullin Z M, Lyasheva S A, Morozov O G and Shleymovich M P 2020 The method of
contour detection based on the weight model of image Computer Optics 44 (3) pp 393–400
[27] The USC-SIPI Image Database. Available at: http://sipi.usc.edu/database/database.php
[28] Website of the leading digital image processing books by Gonzalez and Woods. Available at:
http://www.imageprocessingplace.com/
[29] Multiband Texture Database. Available at: https://multibandtexture.recherche.usherbrooke.ca/
original_brodatz.html
[30] Shkinderov M S and Gizatullin Z M 2018 Study of an Access Monitoring and Control System
Working in the Presence of Electrostatic Discharges Journal of Communications Technology
and Electronics 63 (11) pp 1319–1325
[31] Gizatullin Z M and Shkinderov M S 2019 Research of Noise Immunity of Computing
Equipment under Exposure of Electrostatic Discharge Proceedings of 2019 International
Russian Automation Conference RusAutoCon 2019 (Sochi) pp 1–5
[32] Gizatullin R M, Gizatullin Z M, Shkinderov M S and Khuziyakhmetova E A 2018 The Analysis
of the Noise Immunity of an Electronic Device under the Action of Electrostatic Discharge
Proceedings of the 2018 14th International Scientific-Technical Conference on Actual
Problems of Electronic Instrument Engineering APEIE (Novosibirsk) vol 1 part 3 pp 332–335
