
CV Lecture 7

This document discusses various image segmentation techniques, including thresholding, region-based segmentation, and clustering-based segmentation. It provides details on global and adaptive thresholding, region growing, and region splitting and merging. It also describes types of clustering algorithms, such as hierarchical, hard, and soft clustering, and discusses what makes a good clustering. The goal of image segmentation is to separate an image into meaningful regions or objects.


Computer Vision

CSC-455
Muhammad Najam Dar
Three Lectures

Image Segmentation Algorithms (Techniques)

 Thresholding: Global vs. Adaptive
 Region Growing
 Region Splitting and Merging
 Cluster Analysis:
   k-Means Clustering
   k-Modes Clustering
   Hierarchical Clustering
   Fuzzy C-Means Clustering
   Mean Shift Segmentation
Image Segmentation
 Group similar components (such as pixels in an image, or image frames in a video)
 Applications: finding tumors, veins, etc. in medical images; finding targets in satellite/aerial images; finding people in surveillance images; summarizing video; etc.
Image Segmentation
 Segmentation algorithms are based on one of two basic properties of gray-scale values:
 Discontinuity
   Partition an image based on abrupt changes in gray-scale levels.
   Detection of isolated points, lines, and edges in an image.
 Similarity
   Thresholding, region growing, and region splitting/merging.
Three Lectures

Image Segmentation Algorithms (Techniques)

 Thresholding: Global vs. Adaptive
 Region Growing
 Region Splitting and Merging
 Cluster Analysis:
   k-Means Clustering
   k-Modes Clustering
   Hierarchical Clustering
   Fuzzy C-Means Clustering
   Mean Shift Segmentation
Thresholding
 Segmentation into two classes/groups:
 Foreground (Objects)
 Background
Thresholding

g(x, y) = 1 if f(x, y) > T   (Objects)
g(x, y) = 0 if f(x, y) ≤ T   (Background)

 Global Thresholding
 Local/Adaptive Thresholding
Global Thresholding
 A single threshold value for the entire image
 Fixed?
 Automatic
 Intensity histogram
Global Thresholding
1. Estimate an initial threshold T.
2. Segment the image using T into two groups of pixels, G1 and G2.
3. Compute the average gray values m1 and m2 of the two groups.
4. Compute a new threshold T = (m1 + m2) / 2.
5. Repeat steps 2 to 4 until abs(Ti - Ti-1) < epsilon, as sketched below.
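A minimal Python/NumPy sketch of this iteration (the array name image, the mean-based initial estimate, and the tolerance eps are assumptions, not part of the slide):

import numpy as np

def iterative_threshold(image, eps=0.5):
    # Step 1: initial estimate of T (here: the global mean, an assumed choice)
    t = image.mean()
    while True:
        g1 = image[image > t]                # step 2: group G1
        g2 = image[image <= t]               # step 2: group G2
        m1 = g1.mean() if g1.size else t     # step 3: average gray values
        m2 = g2.mean() if g2.size else t
        t_new = 0.5 * (m1 + m2)              # step 4: new threshold
        if abs(t_new - t) < eps:             # step 5: convergence test
            return t_new
        t = t_new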


Global Thresholding
(figure: multilevel thresholding)
Thresholding
 Non-uniform illumination (figure: global vs. adaptive thresholding)
Adaptive Thresholding
 Threshold: a function of the neighboring pixels, for example
   T = mean of the neighborhood
   T = median of the neighborhood
   T = (max + min) / 2
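A minimal sketch of local mean/median thresholding with SciPy filters (the window size, the offset constant c, and the function name are illustrative assumptions):

import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def adaptive_threshold(image, size=7, c=0.0, stat='mean'):
    # T(x, y) is computed from the size x size neighborhood of each pixel
    img = image.astype(float)
    if stat == 'mean':
        t = uniform_filter(img, size)    # T = neighborhood mean
    else:
        t = median_filter(img, size)     # T = neighborhood median
    # optional constant offset, as in T = mean - Const. on the example slide
    return (img > t - c).astype(np.uint8)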
Adaptive Thresholding
(figures: Original Image | Global Thresholding | Adaptive Thresholding,
T = mean, neighborhood = 7x7 | T = mean - Const., neighborhood = 7x7)
Adaptive Thresholding
 Niblack algorithm:
   T = m + k * s
   where m is the local mean, s the local standard deviation, and k the Niblack constant.
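A minimal sketch of Niblack thresholding (the window size and the value of k are typical choices assumed here, not taken from the slide):

import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(image, size=15, k=-0.2):
    # T = m + k*s, with m and s computed over a local window
    img = image.astype(float)
    m = uniform_filter(img, size)              # local mean m
    m2 = uniform_filter(img * img, size)       # local mean of squares
    s = np.sqrt(np.maximum(m2 - m * m, 0.0))   # local standard deviation s
    return (img > m + k * s).astype(np.uint8)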
Document Binarization
• Local Thresholding – Examples
(figures: Original, Niblack, Sauvola, Wolf, Feng, NICK)
Region-Based Segmentation
 Divide the image into regions R1, R2, …, RN
 The following properties must hold:
   – The regions cover the whole image: R1 ∪ R2 ∪ … ∪ RN = R
   – The regions are disjoint: Ri ∩ Rj = ∅ for i ≠ j
   – P(Ri) = TRUE for every region Ri
   – P(Ri ∪ Rj) = FALSE for adjacent regions Ri and Rj
Three Lectures

Image Segmentation Algorithms (Techniques)

 Thresholding: Global vs. Adaptive
 Region Growing
 Region Splitting and Merging
 Cluster Analysis:
   k-Means Clustering
   k-Modes Clustering
   Hierarchical Clustering
   Fuzzy C-Means Clustering
   Mean Shift Segmentation
Region-Based Segmentation
 Region Growing
 Region growing groups pixels or subregions into larger regions.
 Pixel aggregation starts with a set of “seed” points, and from these grows regions by appending to each seed point those neighboring pixels that have similar properties (such as gray level).

1. Choose the seed pixel(s).
2. Check the neighboring pixels and add them to the region if they are similar to the seed.
3. Repeat step 2 for each of the newly added pixels; stop if no more pixels can be added.

Predicate: for example, abs(zj - seed) < epsilon, as in the sketch below.
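A minimal sketch of single-seed region growing (the queue-based traversal and 4-connected neighborhood are implementation assumptions):

import numpy as np
from collections import deque

def region_grow(image, seed, eps=10.0):
    # Grow a region from `seed` using the predicate abs(z - seed_value) < eps
    h, w = image.shape
    seed_val = float(image[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                if abs(float(image[ny, nx]) - seed_val) < eps:
                    region[ny, nx] = True      # pixel joins the region
                    queue.append((ny, nx))     # its neighbors are checked next
    return region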


Region-Based Segmentation
 Example
Three Lectures

Image Segmentation Algorithms (Techniques)

 Thresholding: Global vs. Adaptive
 Region Growing
 Region Splitting and Merging
 Cluster Analysis:
   k-Means Clustering
   k-Modes Clustering
   Hierarchical Clustering
   Fuzzy C-Means Clustering
   Mean Shift Segmentation
Region-Based Segmentation
 Region Splitting
 Region growing starts from a set of seed points.
 Region splitting starts with the whole image as a single region and subdivides the regions that do not satisfy a condition:
 Image = one region R
 Select a predicate P (based on gray values, etc.)
 Successively divide each region into smaller and smaller quadrant regions so that P(Ri) = TRUE for every resulting region Ri
Region-Based Segmentation
 Region Splitting
 Problem? Adjacent regions could be the same.
 Solution? Allow merging.
Region-Based Segmentation
 Region Merging
 Region merging is the opposite of region splitting.
 Merge adjacent regions Ri and Rj for which P(Ri ∪ Rj) = TRUE.
 Region Splitting/Merging
 Stop when no further split or merge is possible.
Region-Based Segmentation
 Example (a code sketch follows this list)

1. Split into four disjoint quadrants any region Ri where P(Ri) = FALSE.
2. Merge any adjacent regions Rj and Rk for which P(Rj ∪ Rk) = TRUE.
3. Stop when no further merging or splitting is possible.

Application: finding the outline and shape of image objects, e.g. character recognition.
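A minimal sketch of the split phase (quadtree subdivision; the merge pass is omitted for brevity, and the predicate "standard deviation ≤ max_std" is an assumed example of P):

import numpy as np

def quadtree_split(image, min_size=8, max_std=10.0):
    # Label image: each quadrant satisfying P(Ri) gets its own region label
    labels = np.zeros(image.shape, dtype=int)
    counter = [0]

    def split(y0, y1, x0, x1):
        block = image[y0:y1, x0:x1]
        if block.std() <= max_std or min(y1 - y0, x1 - x0) <= min_size:
            counter[0] += 1                      # P(Ri) = TRUE (or region too small)
            labels[y0:y1, x0:x1] = counter[0]
            return
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2  # P(Ri) = FALSE: split into quadrants
        split(y0, ym, x0, xm); split(y0, ym, xm, x1)
        split(ym, y1, x0, xm); split(ym, y1, xm, x1)

    split(0, image.shape[0], 0, image.shape[1])
    return labels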
The goals of segmentation
• Separate an image into coherent “objects”
(figure: an image and several human segmentations of it)

Berkeley segmentation database:
http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/
Three Lectures

Image Segmentation Algorithms (Techniques)

 Thresholding: Global vs. Adaptive
 Region Growing
 Region Splitting and Merging
 Cluster Analysis:
   k-Means Clustering
   k-Modes Clustering
   Hierarchical Clustering
   Fuzzy C-Means Clustering
   Mean Shift Segmentation
Segmentation as Clustering
(figures; source: K. Grauman)
What is Cluster Analysis?
• Cluster: a collection of data objects
 – Similar to one another within the same cluster
 – Dissimilar to the objects in other clusters
• Cluster analysis
 – Finding similarities between data according to the characteristics found in the data, and grouping similar data objects into clusters
What is Cluster Analysis?
• Cluster analysis is an important human activity
• Early in childhood, we learn how to distinguish between cats and dogs
• Unsupervised learning: no predefined classes
• Typical applications
 – As a stand-alone tool to get insight into data distribution
 – As a preprocessing step for other algorithms
Types of Clustering
• Hierarchical: clusters form a tree
 – Agglomerative
 – Divisive
• Hard vs. Soft
 – Hard: an object can belong to only a single cluster, e.g., k-Means, k-Medoids
 – Soft: an object can belong to different clusters, e.g., Fuzzy C-Means clustering
Quality: What Is Good Clustering?
• A good clustering method will produce high-quality clusters with
 – high intra-class similarity (objects are similar to one another within the same cluster)
 – low inter-class similarity (objects are dissimilar to the objects in other clusters)
Quality: What Is Good Clustering?
• Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups
• Intra-cluster distances are minimized; inter-cluster distances are maximized
Similarity and Dissimilarity Between Objects
• Distances are normally used to measure the similarity or dissimilarity between two data objects
• Some popular ones include the Minkowski distance:

d(i, j) = (|xi1 - xj1|^q + |xi2 - xj2|^q + … + |xip - xjp|^q)^(1/q)

where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-dimensional data objects, and q is a positive integer
• If q = 1, d is the Manhattan distance:

d(i, j) = |xi1 - xj1| + |xi2 - xj2| + … + |xip - xjp|
Similarity and Dissimilarity Between Objects
• If q = 2, d is the Euclidean distance:

d(i, j) = sqrt(|xi1 - xj1|^2 + |xi2 - xj2|^2 + … + |xip - xjp|^2)

• Also, one can use weighted distance, parametric Pearson correlation, or other dissimilarity measures
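These distances are one line of NumPy each (a sketch; the function name is illustrative):

import numpy as np

def minkowski(x, y, q=2):
    # q = 1: Manhattan distance; q = 2: Euclidean distance
    return float(np.sum(np.abs(np.asarray(x) - np.asarray(y)) ** q) ** (1.0 / q))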
Clustering Algorithms: Basic Concept
• Given k, find a partition of k clusters that optimizes the chosen partitioning criterion
• k-means and k-medoids algorithms
 • k-means (MacQueen ’67): each cluster is represented by the center of the cluster
 • k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw ’87): each cluster is represented by one of the objects in the cluster
Three Lectures

Image Segmentation Algorithms (Techniques)

 Thresholding: Global vs. Adaptive
 Region Growing
 Region Splitting and Merging
 Cluster Analysis:
   k-Means Clustering
   k-Modes Clustering
   Hierarchical Clustering
   Fuzzy C-Means Clustering
   Mean Shift Segmentation
K-Means Clustering
1. Choose the number (K) of clusters and randomly select the centroids of each cluster.
2. For each data point:
  Calculate the distance from the data point to each cluster centroid.
  Assign the data point to the closest cluster.
3. Recompute the centroid of each cluster.
4. Repeat steps 2 and 3 until there is no further change in the assignment of data points (or in the centroids), as in the sketch below.
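A minimal NumPy sketch of these four steps (the random initialization and the (n, p) input layout are assumptions):

import numpy as np

def kmeans(points, k, n_iter=100, seed=0):
    points = np.asarray(points, dtype=float)      # shape (n, p)
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]  # step 1
    for _ in range(n_iter):
        # step 2: distance from each point to each centroid; assign the closest
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 3: recompute each centroid as the mean of its assigned points
        new = np.array([points[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):           # step 4: stop when nothing moves
            break
        centroids = new
    return labels, centroids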
K-Means Clustering
(figures: step-by-step iterations of K-means on example data)
Clustering
 Example (figure)

D. Comaniciu and P. Meer, Robust Analysis of Feature Spaces: Color Image Segmentation, 1997.
K-Means Clustering
 Example
(figures: Original, K=5, K=11)
The K-Means Clustering Method
(figure, K=2: arbitrarily choose K objects as the initial cluster centers; assign each object to the most similar center; update the cluster means; reassign the objects and update the means again, until the assignments stop changing)
Example
• Run K-means clustering with 3 clusters (initial centroids: 3, 16, 25) for at least 2 iterations
Example
• Centroids after iteration 1:
 3 – {2, 3, 4, 7, 9}              new centroid: 5
 16 – {10, 11, 12, 16, 18, 19}    new centroid: 14.33
 25 – {23, 24, 25, 30}            new centroid: 25.5

Example
• Centroids after iteration 2 (the assignments are unchanged, so the algorithm has converged):
 5 – {2, 3, 4, 7, 9}              new centroid: 5
 14.33 – {10, 11, 12, 16, 18, 19} new centroid: 14.33
 25.5 – {23, 24, 25, 30}          new centroid: 25.5
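The two iterations above can be checked with a short script (a sketch; the data set is read off the cluster memberships on this slide):

import numpy as np

data = np.array([2, 3, 4, 7, 9, 10, 11, 12, 16, 18, 19, 23, 24, 25, 30], float)
centroids = np.array([3.0, 16.0, 25.0])       # initial centroids from the slide
for it in (1, 2):
    labels = np.abs(data[:, None] - centroids).argmin(axis=1)  # nearest centroid
    centroids = np.array([data[labels == j].mean() for j in range(3)])
    print(it, np.round(centroids, 2))
# prints [ 5.   14.33  25.5 ] for both iterations: the assignments no longer change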


In-Class Practice
• Run K-means clustering with 3 clusters (initial centroids: 3, 12, 19) for at least 2 iterations
Three Lectures

Image Segmentation Algorithms (Techniques)

 Thresholding: Global vs. Adaptive
 Region Growing
 Region Splitting and Merging
 Cluster Analysis:
   k-Means Clustering
   k-Modes Clustering
   Hierarchical Clustering
   Fuzzy C-Means Clustering
   Mean Shift Segmentation
Hierarchical Clustering
• Two main types of hierarchical clustering
 – Agglomerative:
  • Start with the points as individual clusters
  • At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
  Matlab (Statistics Toolbox): clusterdata performs all these steps; see also pdist, linkage, cluster
 – Divisive:
  • Start with one, all-inclusive cluster
  • At each step, split a cluster until each cluster contains a point (or there are k clusters)
• Traditional hierarchical algorithms use a similarity or distance matrix
 – Merge or split one cluster at a time
 – Image segmentation mostly uses simultaneous merge/split
Hierarchical Clustering
• Agglomerative (bottom-up)
 – Compute all pairwise pattern-pattern similarity coefficients
 – Place each of the n patterns into a class of its own
 – Merge the two most similar clusters into one
  • Replace the two clusters by the new cluster
  • Recompute inter-cluster similarity scores w.r.t. the new cluster
 – Repeat the above step until there are k clusters left (k can be 1)
Hierarchical Clustering
• Agglomerative (bottom-up)
(figure sequence, iterations 1–6: at each iteration the closest pair of points or clusters is merged, until k clusters are left)
• Divisive (top-down)
 – Start at the top with all patterns in one cluster
 – The cluster is split using a flat clustering algorithm
 – This procedure is applied recursively until each pattern is in its own singleton cluster

Hierarchical Clustering
• Divisive (top-down) (figure)
Hierarchical Clustering: The Algorithm
• Hierarchical clustering takes as input a set of points
• It creates a tree in which the points are leaves and the internal nodes reveal the similarity structure of the points
 – The tree is often called a “dendrogram”
• The method is summarized below (and in the sketch that follows):

 Place all points into their own clusters
 While there is more than one cluster, do
  Merge the closest pair of clusters

 The behavior of the algorithm depends on how the “closest pair of clusters” is defined
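A minimal sketch with SciPy's hierarchical clustering routines (the five 2-D points are made up for illustration):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9], [5.0, 5.0]])
Z = linkage(points, method='single')             # repeatedly merge the closest pair
labels = fcluster(Z, t=2, criterion='maxclust')  # cut the dendrogram into k=2 clusters
print(labels)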
Hierarchical Clustering: Example
This example illustrates single-link clustering in Euclidean space on 6 points.
(figure: points A–F in the plane, and the resulting dendrogram over A, B, C, D, E, F)
Hierarchical clustering
• Produces a set of nested clusters organized as a hierarchical tree
• Can be visualized as a dendrogram
 – A tree-like diagram that records the sequences of merges or splits
(figure: six points in the plane and the corresponding dendrogram)
Strengths of Hierarchical Clustering
• Do not have to assume any particular number of clusters
 – Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level
Hierarchical Clustering: Merging Clusters

Single Link: the distance between two clusters is the distance between their closest points. Also called “neighbor joining.”

Average Link: the distance between clusters is the average distance between all pairs of points, one from each cluster.

Complete Link: the distance between clusters is the distance between the farthest pair of points.
How to Define Inter-Cluster Similarity
(figure: a proximity matrix over points p1, p2, p3, p4, p5, repeated to highlight each criterion)
• MIN
• MAX
• Group Average
• Distance Between Centroids
Example
Let us consider a gene measured in a set of 5 experiments: A, B, C, D and E. The values measured in the 5 experiments are: A=100, B=200, C=500, D=900, E=1100.
We will construct the hierarchical clustering of these values using Euclidean distance, centroid linkage, and an agglomerative approach.
Example
SOLUTION:
• The closest two values are 100 and 200
 => the centroid of these two values is 150.
• Now we are clustering the values: 150, 500, 900, 1100
• The closest two values are 900 and 1100
 => the centroid of these two values is 1000.
• The remaining values to be joined are: 150, 500, 1000.
• The closest two values are 150 and 500
 => the centroid of these two values is 325.
• Finally, the two resulting subtrees are joined in the root of the tree.
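The same merge order falls out of SciPy's centroid linkage (a sketch reproducing the slide's example):

import numpy as np
from scipy.cluster.hierarchy import linkage

values = np.array([[100.0], [200.0], [500.0], [900.0], [1100.0]])  # A..E
Z = linkage(values, method='centroid')   # Euclidean distance, centroid linkage
print(Z)  # merges (A,B) at 100, then (D,E) at 200, then {A,B}+C at 350, then the root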
An example: two hierarchical clusterings of the expression values of a single gene measured in 5 experiments.
(figure: two dendrograms over A=100, B=200, C=500, D=900, E=1100, drawn with different leaf orders)

The dendrograms are identical: both diagrams show that
• A is most similar to B
• C is most similar to the group (A, B)
• D is most similar to E
In the left dendrogram, A and E are plotted far from each other; in the right dendrogram, A and E are immediate neighbors.
THE PROXIMITY IN A HIERARCHICAL CLUSTERING DOES NOT NECESSARILY CORRESPOND TO SIMILARITY.
What Is the Problem of the K-Means Method?
• The k-means algorithm is sensitive to outliers!
 – An object with an extremely large value may substantially distort the distribution of the data.
• K-Medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, which is the most centrally located object in a cluster.
(figure: the same data clustered around means vs. around medoids)
Limitations of K-means: Differing Sizes
(figure: Original Points | K-means, 3 clusters)

Limitations of K-means: Differing Density
(figure: Original Points | K-means, 3 clusters)

Limitations of K-means: Non-globular Shapes
(figure: Original Points | K-means, 2 clusters)
Three Lectures

Image Segmentation Algorithms (Techniques)

 Thresholding: Global vs. Adaptive
 Region Growing
 Region Splitting and Merging
 Cluster Analysis:
   k-Means Clustering
   k-Modes Clustering
   Hierarchical Clustering
   Fuzzy C-Means Clustering
   Mean Shift Segmentation
K-Modes
• Handling categorical data: k-modes (Huang ’98)
 – Replaces the means of clusters with modes
 • Mode of an attribute: the most frequent value
 • Mode of instances: for an attribute A, mode(A) = the most frequent value
• K-modes otherwise works like K-means
 – Uses a frequency-based method to update the modes of clusters (see the sketch below)
 – For a mixture of categorical and numerical data: the k-prototype method
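A minimal sketch of one k-modes assignment/update step (simple matching distance = number of mismatched attributes; the function name and array layout are assumptions, and the full algorithm would iterate this step to convergence):

import numpy as np

def kmodes_step(X, modes):
    # X: (n, p) array of categorical values; modes: (k, p) current cluster modes
    dist = (X[:, None, :] != modes[None, :, :]).sum(axis=2)  # mismatch counts
    labels = dist.argmin(axis=1)                 # assign each object to nearest mode
    new_modes = modes.copy()
    for j in range(len(modes)):
        members = X[labels == j]
        for a in range(X.shape[1]):
            if len(members):
                vals, counts = np.unique(members[:, a], return_counts=True)
                new_modes[j, a] = vals[counts.argmax()]  # most frequent value
    return labels, new_modes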


K-Medoids Example

Data points:
X1 (2, 6), X2 (3, 4), X3 (3, 8), X4 (4, 7), X5 (6, 2),
X6 (6, 4), X7 (7, 3), X8 (7, 4), X9 (8, 5), X10 (7, 6)

• Initialize k medoids. Let us assume c1 = (3, 4) and c2 = (7, 4).
• Calculate the (Manhattan) distances so as to associate each data object with its nearest medoid:

 Object       Cost to c1 = (3, 4)   Cost to c2 = (7, 4)
 X1 (2, 6)           3                     7
 X3 (3, 8)           4                     8
 X4 (4, 7)           4                     6
 X5 (6, 2)           5                     3
 X6 (6, 4)           3                     1
 X7 (7, 3)           5                     1
 X9 (8, 5)           6                     2
 X10 (7, 6)          6                     2

Cluster1 = {(3, 4), (2, 6), (3, 8), (4, 7)}
Cluster2 = {(7, 4), (6, 2), (6, 4), (7, 3), (8, 5), (7, 6)}
Total cost = (3 + 4 + 4) + (3 + 1 + 1 + 2 + 2) = 20

• Select one of the non-medoids O′. Let us assume O′ = (7, 3).
• Now the medoids are c1 = (3, 4) and O′ = (7, 3):

 Object       Cost to c1 = (3, 4)   Cost to O′ = (7, 3)
 X1 (2, 6)           3                     8
 X3 (3, 8)           4                     9
 X4 (4, 7)           4                     7
 X5 (6, 2)           5                     2
 X6 (6, 4)           3                     2
 X8 (7, 4)           4                     1
 X9 (8, 5)           6                     3
 X10 (7, 6)          6                     3

New total cost = (3 + 4 + 4) + (2 + 2 + 1 + 3 + 3) = 22, so the change in cost is S = 22 − 20 = 2.
• Do not change the medoid, as S > 0.


Three Lectures

Image Segmentation Algorithms (Techniques)

 Thresholding: Global vs. Adaptive
 Region Growing
 Region Splitting and Merging
 Cluster Analysis:
   k-Means Clustering
   k-Modes Clustering
   Hierarchical Clustering
   Fuzzy C-Means Clustering
   Mean Shift Segmentation
Fuzzy C-Means Clustering
• Fuzzy C-means (FCM) is a method of clustering which allows one piece of data to belong to two or more clusters.
• This method (developed by Dunn in 1973 and improved by Bezdek in 1981) is frequently used in pattern recognition.
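A minimal sketch of the standard FCM loop (the fuzzifier m = 2, the random initialization, and the membership-update formula follow Bezdek's usual formulation, not these slides):

import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    # u[i, j] = degree to which point i belongs to cluster j; rows of u sum to 1
    X = np.asarray(X, dtype=float)                # shape (n, p)
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m                                      # fuzzified memberships
        centers = (um.T @ X) / um.sum(axis=0)[:, None]   # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                    # standard FCM update
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centers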
Fuzzy C-Means Clustering
(figures: step-by-step FCM example slides)
Three Lectures

Image Segmentation Algorithms (Techniques)

 Thresholding: Global vs. Adaptive
 Region Growing
 Region Splitting and Merging
 Cluster Analysis:
   k-Means Clustering
   k-Modes Clustering
   Hierarchical Clustering
   Fuzzy C-Means Clustering
   Mean Shift Segmentation
Mean Shift Segmentation
• An advanced and versatile technique for clustering-based segmentation

http://www.caip.rutgers.edu/~comanici/MSPAMI/msPamiResults.html

D. Comaniciu and P. Meer, Mean Shift: A Robust Approach Toward Feature Space Analysis.
Mean Shift Segmentation
• The mean shift algorithm seeks a mode, or local maximum of density, of a given distribution
 – Choose a search window (width and location)
 – Compute the mean of the data in the search window
 – Center the search window at the new mean location
 – Repeat until convergence (see the sketch below)
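A minimal sketch of one mean-shift trajectory with a flat kernel (the bandwidth parameter and the flat window are assumptions; it is assumed the window always contains at least one point):

import numpy as np

def mean_shift_mode(points, start, bandwidth, tol=1e-3):
    points = np.asarray(points, dtype=float)
    center = np.asarray(start, dtype=float)
    while True:
        in_window = np.linalg.norm(points - center, axis=1) < bandwidth  # search window
        new_center = points[in_window].mean(axis=0)    # mean of the data in the window
        if np.linalg.norm(new_center - center) < tol:  # convergence: a mode is found
            return new_center
        center = new_center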
Mean Shift Segmentation
(figure sequence: a region of interest is placed over the data; at each step the center of mass of the points inside the window is computed, and the window is translated along the mean shift vector to that center of mass; the steps repeat until the window settles on the densest region)

Slides by Y. Ukrainitz & B. Sarel
Mean Shift Segmentation
• Cluster: all data points in the attraction basin of a mode
• Attraction basin: the region for which all trajectories lead to the same mode

Slide by Y. Ukrainitz & B. Sarel
Mean shift clustering/segmentation
• Find features (color, gradients, texture, etc.)
• Initialize windows at individual pixel locations
• Perform mean shift for each window until convergence
• Merge windows that end up near the same “peak” or mode
Mean shift segmentation results
(figures: segmentation results, plus more results)
http://www.caip.rutgers.edu/~comanici/MSPAMI/msPamiResults.html
Mean shift pros and cons
• Pros
 – Does not assume spherical clusters
 – Just a single parameter (window size)
 – Finds a variable number of modes
 – Robust to outliers
• Cons
 – Output depends on window size
 – Computationally expensive
 – Does not scale well with the dimension of the feature space
References
 Some slide material has been taken from Dr. M. Usman Akram's Computer Vision lectures
 CSCI 1430: Introduction to Computer Vision, James Tompkin
 “Statistical Pattern Recognition: A Review”, A. K. Jain et al., IEEE PAMI 22, 2000
 Pattern Recognition and Analysis course, A. K. Jain, MSU
 “Pattern Classification”, Duda et al., John Wiley & Sons
 “Digital Image Processing”, Rafael C. Gonzalez & Richard E. Woods, Addison-Wesley, 2002
 “Machine Vision: Automated Visual Inspection and Robot Vision”, David Vernon, Prentice Hall, 1991
 www.eu.aibo.com/
 Advances in Human Computer Interaction, Shane Pinder, InTech, Austria, October 2008
 “Computer Vision: A Modern Approach”, Forsyth
 http://www.cs.cmu.edu/~16385/s18/
