P52 Report
ON
“AI-Based tool for preliminary diagnosis of dermatological manifestations”
BACHELOR OF ENGINEERING
In
Computer Science & Engineering
SUBMITTED BY
Tusheet Pal Singh (2020a1r127)
Arijit Manhas (2020a1r31)
Aman Kumar (2020a)
Paaras Kapoor (2020a126)
SUBMITTED TO
Department of Computer Science & Engineering
Model Institute of Engineering and Technology (Autonomous)
Jammu, India
2022
CANDIDATES’ DECLARATION
We, Tusheet Pal Singh, Arijit Manhas, Aman Kumar and Paras Kapoor, hereby declare that the work being presented in the minor project report entitled “AI-based tool for preliminary diagnosis of dermatological manifestations”, in partial fulfillment of the requirements for the award of the degree of B.E. (CSE) and submitted to the Computer Science and Engineering Department, Model Institute of Engineering and Technology (Autonomous), Jammu, is an authentic record of our own work carried out by us under the supervision of Ms. Shafalika Vijayal (Asst. Professor, CSE, Model Institute of Engineering and Technology, Jammu). The matter presented in this project report has not been submitted in this or any other University/Institute for the award of a B.E. degree.
Dated: 16-12-2022
Department of Computer Science & Engineering
Model Institute of Engineering and Technology (Autonomous)
Kot Bhalwal, Jammu, India
(NAAC “A” Grade Accredited)
CERTIFICATE
Certified that this minor project report entitled “AI-based tool for preliminary diagnosis of dermatological manifestations” has been submitted by the candidates of Model Institute of Engineering and Technology (Autonomous), Jammu, who carried out the minor project work under my supervision.
This is to certify that the above statement is correct to the best of my knowledge.
ACKNOWLEDGEMENTS
Minor projects serve as vital platforms for students to explore emerging technologies and
gain firsthand experience in the dynamic field of engineering. The culmination of our
collaborative effort has shaped the present work, and I extend my sincere appreciation to all
those who have contributed to its success.
I am deeply grateful to Prof. (Dr.) Ankur Gupta, Director of MIET, Prof. (Dr.) Ashok
Kumar, Dean Academics at MIET, and Ms. Shafalika Vijayal, Assistant Professor &
Program Manager CSE at MIET. Their unwavering guidance, constant inspiration, and
encouragement were pivotal to the completion of this project. Their keen involvement
throughout the entire duration of our work has been truly valuable.
Reflecting on our journey, I extend heartfelt thanks to each member of our team. Together,
we navigated through challenges, shared ideas, and collectively worked towards achieving
our goals. The commitment and hard work of every team member have been indispensable.
In expressing gratitude, I also acknowledge the teachers who, despite their busy schedules,
provided us with their invaluable time, guidance, and support. Their assistance allowed us to
carry out our project within the esteemed organization and enriched our training experience.
This opportunity marks a significant milestone in our collective career development.
Our team is sincerely thankful to Model Institute of Engineering and Technology
(Autonomous), Jammu for providing us with this valuable opportunity. We are committed to
applying the skills and knowledge gained during this project in the most effective manner. As
a team, we look forward to future collaborations and endeavors.
ABSTRACT
Background: Deep learning, which is part of the broader concepts of artificial intelligence (AI) and machine learning, has achieved remarkable success in vision tasks. While there is growing interest in the use of this technology in diagnostic support for skin-related neglected tropical diseases (skin NTDs), there have been limited studies in this area and fewer focused on dark skin. In this study, we aimed to develop deep learning based AI models with clinical images we collected for five skin NTDs, namely Buruli ulcer, leprosy, mycetoma, scabies, and yaws, to understand how diagnostic accuracy can or cannot be improved using different models and training patterns.
Methodology: This study used photographs collected prospectively in Côte d’Ivoire and Ghana through our ongoing studies with the use of digital health tools for clinical data documentation and for teledermatology. Our dataset included a total of 1,709 images from 506 patients. Two convolutional neural networks, the ResNet-50 and VGG-16 models, were adopted to examine the performance of different deep learning architectures and validate their feasibility in the diagnosis of the targeted skin NTDs.
Contents
Candidates’ Declaration i
Internship Certificate ii
Certificate iii
Acknowledgement iv
Self-Evaluation v
Abstract vi
Contents vii
List of Figures ix
List of Tables x
Abbreviations Used xi
Chapter 1 Python 1-3
1.1 Introduction to Python 1
1.2 History of Python 1
1.3 Development in Python 1
1.4 Features of Python 2
1.5 Use of Python 2
Chapter 2 Artificial Intelligence 4-8
2.1 About Artificial Intelligence 4
2.2 History of Artificial Intelligence 5
2.3 Types of Artificial Intelligence 7
2.4 Applications of Artificial Intelligence 7
Chapter 3 Machine Learning 9-14
3.1 About Machine Learning 9
3.2 Difference - Human Learning & Machine Learning 9
3.3 Difference - Rule Based Approach & Machine Learning 10
3.4 Problems solved using Machine Learning 10
3.5 Types of Machine Learning 11
3.6 Procedure of Machine Learning 14
Chapter 4 Deep Learning 15-17
4.1 About Deep Learning 15
4.2 Neural Networks 15
4.3 Difference - Machine Learning & Deep Learning 16
Chapter 5 Modules & Libraries 18-19
5.1 Python Modules 18
5.2 Python Libraries 18
Chapter 6 Project Description 20-23
6.1 Problem Statement 20
6.2 Workflow of Project 20
6.3 Development Environment 21
6.4 Dataset 21
6.5 Significance of important code segments 22
6.6 Accuracy & Loss 23
Conclusion 24
References 25
Appendix: Machine Learning vs Deep Learning 26
Appendix: Datasets Used 28
List of Figures
List of Tables
ABBREVIATIONS USED
AI Artificial Intelligence
ANN Artificial Neural Network
CNN Convolutional Neural Network
DL Deep Learning
ML Machine Learning
NN Neural Networks
RNN Recurrent Neural Network
Chapter 1
Python
Python was created by Guido van Rossum, and first released on February 20, 1991. While
you may know the python as a large snake, the name of the Python programming language
comes from an old BBC television comedy sketch series called Monty Python’s Flying
Circus.
One of the amazing features of Python is the fact that it is actually one person’s work.
Usually, new programming languages are developed and published by large companies
employing lots of professionals, and due to copyright rules, it is very hard to name any of the
people involved in the project. Python is an exception.
Of course, Guido van Rossum did not develop and evolve all the Python components himself.
The speed with which Python has spread around the world is a result of the continuous work
of thousands (very often anonymous) programmers, testers, users (many of them aren’t IT
specialists) and enthusiasts, but it must be said that the very first idea (the seed from which
Python sprouted) came to one head – Guido’s.
1.4 Features of Python
Python is omnipresent, and people use numerous Python-powered devices on a daily basis,
whether they realize it or not. There are billions of lines of code written in Python, which
means almost unlimited opportunities for code reuse and learning from well-crafted
examples. What’s more, there is a large and very active Python community, always happy to
help.
There are also a couple of factors that make Python great for learning:
● It is easy to learn – the time needed to learn Python is shorter than for many other
languages; this means that it’s possible to start the actual programming faster;
● It is easy to use for writing new software – it’s often possible to write code faster when using Python;
● It is easy to obtain, install and deploy – Python is free, open and multiplatform; not all languages can boast that.
Programming skills prepare you for careers in almost any industry, and are required if you
want to continue to more advanced and higher-paying software development and engineering
roles. Python is the programming language that opens more doors than any other. With a
solid knowledge of Python, you can work in a multitude of jobs and a multitude of industries.
And the more you understand Python, the more you can do in the 21st Century. Even if you
don’t need it for work, you will find it useful to know.
Many developing tools are implemented in Python. More and more everyday use applications
are being written in Python. Lots of scientists have abandoned expensive proprietary tools
and switched to Python. Lots of IT project testers have started using Python to carry out
repeatable test procedures. The list is long.
● Web and Internet development (e.g., Django and Pyramid frameworks, Flask and
Bottle micro-frameworks)
● Scientific and numeric computing (e.g., SciPy – a collection of packages for the fields of mathematics, science, and engineering)
● Games (e.g., Battlefield series, Sid Meier’s Civilization IV…), websites and services
Chapter 2
Artificial Intelligence
Artificial Intelligence (AI) is a new technical science that studies and develops theories,
methods, techniques, and application systems for simulating and extending human
intelligence. In 1956, the concept of AI was first proposed by John McCarthy, who defined
the subject as "science and engineering of making intelligent machines, especially intelligent
computer programs". AI is concerned with making machines work in an intelligent way,
similar to the way that the human mind works. At present, AI has become an interdisciplinary
course that involves various fields.
2.2 History of Artificial Intelligence
From 1957 to 1974, AI flourished. Computers could store more information and became
faster, cheaper, and more accessible. Machine learning algorithms also improved and people
got better at knowing which algorithm to apply to their problem. Early demonstrations such
as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed
promise toward the goals of problem solving and the interpretation of spoken language
respectively. These successes, as well as the advocacy of leading researchers (namely the
attendees of the DSRPAI) convinced government agencies such as the Defense Advanced
Research Projects Agency (DARPA) to fund AI research at several institutions. The
government was particularly interested in a machine that could transcribe and translate
spoken language as well as high throughput data processing. Optimism was high and
expectations were even higher. In 1970 Marvin Minsky told Life Magazine, “from three to
eight years we will have a machine with the general intelligence of an average human being.”
However, while the basic proof of principle was there, there was still a long way to go before
the end goals of natural language processing, abstract thinking, and self-recognition could be
achieved.
In the 1980’s, AI was reignited by two sources: an expansion of the algorithmic toolkit, and a
boost of funds. John Hopfield and David Rumelhart popularized “deep learning” techniques
which allowed computers to learn using experience. On the other hand Edward Feigenbaum
introduced expert systems which mimicked the decision making process of a human expert.
The program would ask an expert in a field how to respond in a given situation, and once this
was learned for virtually every situation, non-experts could receive advice from that program.
Expert systems were widely used in industries. The Japanese government heavily funded
expert systems and other AI related endeavors as part of their Fifth Generation Computer
Project (FGCP). From 1982 to 1990, they invested $400 million with the goals of
revolutionizing computer processing, implementing logic programming, and improving
artificial intelligence. Unfortunately, most of the ambitious goals were not met. However, it
could be argued that the indirect effects of the FGCP inspired a talented young generation of
engineers and scientists. Regardless, funding of the FGCP ceased, and AI fell out of the
limelight.
Ironically, in the absence of government funding and public hype, AI thrived. During the
1990s and 2000s, many of the landmark goals of artificial intelligence had been achieved. In
1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer and served as a huge step
towards an artificially intelligent decision-making program. In the same year, speech
recognition software, developed by Dragon Systems, was implemented on Windows. This
was another great step forward but in the direction of the spoken language interpretation
endeavor. It seemed that there wasn’t a problem machines couldn’t handle. Even human
emotion was fair game as evidenced by Kismet, a robot developed by Cynthia Breazeal that
could recognize and display emotions.
One could imagine interacting with an expert system in a fluid conversation, or having a
conversation in two different languages being translated in real time. We can also expect to
see driverless cars on the road in the next twenty years (and that is conservative). In the long
term, the goal is general intelligence, that is a machine that surpasses human cognitive
abilities in all tasks. This is along the lines of the sentient robot we are used to seeing in
movies. Even if the capability is there, the ethical questions would serve as a strong barrier
against fruition. When that time comes (but better even before the time comes), we will need
to have a serious conversation about machine policy and ethics (ironically both
fundamentally human subjects), but for now, we’ll allow AI to steadily improve and run
amok in society.
● Strong AI: The strong AI view holds that it is possible to create intelligent machines
that can really reason and solve problems. Such machines are considered to be
conscious and self-aware, can independently think about problems and work out
optimal solutions to problems, have their own system of values and world views, and
have all the same instincts as living things, such as survival and security needs. It can
be regarded as a new civilization in a certain sense.
● Weak AI: The weak AI view holds that intelligent machines cannot really reason and
solve problems. These machines only look intelligent, but do not have real
intelligence or self-awareness.
● Speech processing: a general term for various processing technologies used to
research the voicing process, statistical features of speech signals, speech recognition,
machine-based speech synthesis, and speech perception.
Also, these technologies can be categorized as sub-fields according to their use in that field.
Chapter 3
Machine Learning
Machine learning is a core research field of AI, and it is also necessary background knowledge for deep learning.
Humans acquire knowledge through experience either directly or shared by others. Machines
acquire knowledge through experience shared in the form of past data.
3.3 Difference – Rule Based Approach & Machine Learning
Fig 3.3: Rule Based Approach Fig 3.4: Machine Learning Approach
Machine learning can deal with many types of tasks. The following describes the most typical
and common types of tasks.
● Classification:
A computer program needs to specify which of the k categories some input belongs to. To accomplish this task, learning algorithms usually output a function f: Rⁿ → {1, 2, …, k}. For example, the image classification algorithm in computer vision is developed to handle classification tasks.
● Regression:
For this type of task, a computer program predicts the output for the given input. Learning algorithms typically output a function f: Rⁿ → R. An example of this task type is to predict the claim amount of an insured person (to set the insurance premium) or predict the security price.
● Clustering:
A large amount of data from an unlabelled dataset is divided into multiple categories
according to internal similarity of the data. Data in the same category is more similar than
that in different categories. This feature can be used in scenarios such as image retrieval and
user profile management.
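To make these task types concrete, the short sketch below (an illustrative example, not part of the original project code) uses the scikit-learn library and its built-in Iris dataset to run one classification model and one clustering model; the dataset and estimator choices are assumptions made only for this example.

```python
# Minimal sketch of the classification and clustering task types described above,
# assuming scikit-learn is installed. Dataset and models are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Classification: learn a mapping f: R^n -> {1, ..., k} from labelled samples.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Clustering: group unlabelled samples purely by their internal similarity.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster labels of first 10 samples:", kmeans.labels_[:10])
```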
In general, Machine Learning is classified in four types. They can be highlighted as:
● Supervised learning:
Obtain an optimal model with required performance through training and learning based on
the samples of known categories. Then, use the model to map all inputs to outputs and check
the output for the purpose of classifying unknown data.
Fig 3.5: Supervised Machine Learning.
● Unsupervised learning:
For unlabeled samples, the learning algorithms directly model the input datasets. Clustering is
a common form of unsupervised learning. We only need to put highly similar samples
together, calculate the similarity between new samples and existing ones, and classify them
by similarity.
Fig 3.6: Unsupervised Machine Learning
● Semi-supervised learning:
In one task, a machine learning model automatically uses a large amount of unlabelled data to assist the learning from a small amount of labelled data.
● Reinforcement learning:
It is an area of machine learning concerned with how agents ought to take actions in an
environment to maximize some notion of cumulative reward. The difference between
reinforcement learning and supervised learning is the teacher signal. The reinforcement signal
provided by the environment in reinforcement learning is used to evaluate the action (scalar
signal) rather than telling the learning system how to perform correct actions.
The basic procedure of model building through a machine learning algorithm can be understood with the help of the following flowchart:
Each of the mentioned steps has its own significance for the machine learning process and can affect the accuracy and efficiency of the model if not configured correctly.
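Because the flowchart itself is not reproduced here, the illustrative sketch below walks through the same basic procedure (load data, split it, preprocess, train, evaluate); the dataset and estimator are placeholder choices for the example, not the project's actual configuration.

```python
# Illustrative model-building procedure: load -> split -> preprocess/train -> evaluate.
# The breast-cancer toy dataset and SVM classifier are example choices only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# 1. Collect / load the data
X, y = load_breast_cancer(return_X_y=True)

# 2. Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 3. Preprocess (feature scaling) and train the model
model = make_pipeline(StandardScaler(), SVC())
model.fit(X_train, y_train)

# 4. Evaluate on unseen data
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```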
Chapter 4
Deep Learning
Deep learning is a subset of machine learning, which is essentially a neural network with
three or more layers. These neural networks attempt to simulate the behavior of the human
brain—albeit far from matching its ability—allowing them to “learn” from large amounts of
data.
Artificial neural network (neural network): Formed by artificial neurons connected to each
other, the neural network extracts and simplifies the human brain's microstructure and
functions. It is an important approach to simulate human intelligence and reflect several basic
features of human brain functions, such as concurrent information processing, learning,
association, model classification, and memory.
Fig 4.1: Neural Network Representation.
Machine Learning:
● Low hardware requirements on the computer: given the limited amount of computation, the computer generally does not need a GPU for parallel computing.
● Applicable to training with a small amount of data, and the performance cannot be improved continuously as the data amount increases.
Deep Learning:
● Higher hardware requirements on the computer: to execute matrix operations on massive data, the computer needs a GPU to perform parallel computing.
● The performance can be high when high-dimensional weight parameters and massive training data are provided.
As mentioned, we can observe that Deep Learning is more efficient as compared to Machine
Learning in terms of model optimization. We generally use TensorFlow & Keras for Deep
Learning Development.
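As a minimal illustration of this workflow (a sketch only, assuming TensorFlow 2.x is installed; the MNIST dataset and layer sizes are example choices, not the project's actual configuration), a small Keras network can be defined, compiled and trained as follows.

```python
# Minimal Keras example: define, compile, train and evaluate a small dense network.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 2-D image -> 1-D vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, validation_split=0.1)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```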
Chapter 5
Modules & Libraries
5.1 Python Modules
A Python module is a file containing Python definitions and statements. A module can define
functions, classes, and variables. A module can also include runnable code. Grouping related
code into a module makes the code easier to understand and use. It also makes the code
logically organized.
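A small hypothetical example of a module and its reuse is sketched below; the file name skin_utils.py and its contents are invented purely for illustration.

```python
# skin_utils.py — a hypothetical module grouping related helper code.
IMAGE_SIZE = (224, 224)                     # a module-level variable

def normalise(pixels):
    """Scale pixel values from 0-255 into the range 0-1."""
    return [p / 255.0 for p in pixels]

# Any other file can then reuse these definitions via import, e.g.:
# import skin_utils
# print(skin_utils.IMAGE_SIZE)
# print(skin_utils.normalise([0, 128, 255]))
```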
5.2 Python Libraries
A Python library is a collection of related modules. It contains bundles of code that can be used repeatedly in different programs, which makes Python programming simpler and more convenient for the programmer, as we don’t need to write the same code again and again for different programs. Python libraries play a very vital role in fields such as Machine Learning, Data Science and Data Visualization.
● NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.
● SciPy is a free and open-source Python library used for scientific computing and technical computing.
● Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy.
● Scikit-learn is a machine learning library for the Python programming language. It features various classification, regression and clustering algorithms.
● TensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.
● Keras is an open-source software library that provides a Python interface for artificial neural networks. Keras acts as an interface for the TensorFlow library. Up until version 2.3, Keras supported multiple backends, including TensorFlow, Microsoft Cognitive Toolkit, Theano, and PlaidML.
● Librosa is a Python package for music and audio analysis. Librosa is basically used when we work with audio data, for example in music generation (using LSTMs) and automatic speech recognition. It provides the building blocks necessary to create music information retrieval systems.
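As a quick illustration of how two of these libraries are typically combined (an example only, not project code), the sketch below uses NumPy to generate numerical data and Matplotlib to plot and save it.

```python
# NumPy generates the data; Matplotlib draws and saves the figure.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)   # 100 evenly spaced points
y = np.sin(x)

plt.plot(x, y, label="sin(x)")
plt.legend()
plt.savefig("sine.png")              # write the plot to an image file
```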
Chapter 6
Project Description
Machine learning and its sub-field deep learning are foundations of the AI framework. “Machine learning” refers to the automatic improvement of AI algorithms through experience and massive historical data (training datasets) to build models based on datasets that allow the algorithm to generate predictions and make decisions without explicit programming. “Deep learning” is a division of machine learning founded on artificial neural networks (ANNs) and representation learning. The ANN is a mathematical model that simulates the structure and function of biological neural networks, and an adaptive system with learning capabilities. The performance of an ANN depends on the number and structure of its neural layers and on the training dataset. Deep learning is already widely used to detect and classify skin cancers and other skin lesions. The most prominent deep learning networks can be divided into recursive neural networks (RvNNs), recurrent neural networks (RNNs), Kohonen self-organizing neural networks (KNNs), generative adversarial neural networks (GANs) and convolutional neural networks (CNNs). CNNs, a subtype of ANNs, are most frequently used for image processing and detection in medicine, particularly in dermatology, pathology and radiology. Currently, the most implemented CNN architectures in the field of dermatology are GoogleNet, Inception-V3, V4, ResNet, Inception-ResNet V2 and DenseNet. As the raw data source for training CNN architectures for deep learning, image sets with a large number of high-quality images are decisive for the diagnostic accuracy, sensitivity and specificity of the final trained AI algorithm. An image set object can be used to manage image data; it contains a description of the images, their location and the number of images in the set. The most common image sets used to train AI CAD systems in dermatology today are the ISIC archives (2016–2021), HAM10000, and PH2 image sets. The concepts and components related to AI in the dermatology field are displayed systematically in Table 1.
Table 1: Essential terminologies involved in AI in dermatology.
Terminology: Convolutional Neural Networks (CNNs)
Paraphrase: CNNs are a class of neural networks; they are feed-forward neural networks. Their artificial neurons can respond to a part of the surrounding units in the coverage area, and they are most commonly applied to analysing visual imagery.
Terminology: Generative Adversarial Networks (GANs)
Paraphrase: GANs are a method of unsupervised learning that learns by playing two neural networks against each other.
For each study, information on the paper (author and publication time), study location, study disease, the type and aim of the AI algorithm, the image number of the learning dataset, outcomes, accuracy, sensitivity and specificity was extracted. A detailed summary of each is provided in.
Figure 2: Timeline and major nodes of AI development.
4. The Implementation of AI in Dermatology
The diagnosis of skin diseases is mainly based on the characteristics of the lesions. However, there are more than 2000 different types of dermatological diseases, and some skin lesions of different diseases show similarities, which makes the differential diagnosis difficult. At present, the global shortage of dermatologists is increasing with the high incidence of skin diseases. There is a serious deficit of dermatologists and an uneven distribution, especially in developing countries and remote areas, which urgently require more medical facilities, professional
consultation and clinical assistance. Rapid iteration in big data, image recognition technology
and the widespread use of smartphones worldwide may be creating the largest
transformational opportunity for skin diseases’ diagnosis and treatment in this era. In addition
to addressing the needs of underserved areas and the poor, AI now has the ability to provide
rapid diagnoses, leading to more diverse and accessible treatment approaches. AI-aided systems and algorithms will quickly become routine diagnosis- and evaluation-related techniques. The morphological analysis of a lesion is the classic basis of dermatological diagnostics, and face recognition and aesthetic analysis from AI have also matured and become more reliable. Currently, some applications of AI in dermatology have already found their way into clinical practice. Table 2, Table 3 and Table 4 illustrate specific implementations of AI in dermatology, visualized with a mind map (Figure 3). AI systems
based on a deep learning algorithm use plentiful public skin lesion image datasets to
distinguish between benign and malignant skin cancers. These datasets contain massive
original images in diverse modalities, such as dermoscopy, clinical photographs or histopathological images. In addition, deep learning was used to process the disagreements
of human annotations for skin lesion images. An ensemble of Bayesian fully convolutional
networks (FCNs) trained with ISIC archive was applied for the lesion image’s segmentation
by considering two major factors in the aggregations of multiple truth annotations. The FCNs
implemented a robust-to-annotation noise learning scheme to leverage multiple experts’
opinions towards improving the generalization performance using all available annotations
efficiently. Currently, the most representative and commonly used AI model is the CNN. It
transmits input data through a series of interconnected nodes that resemble biological
neurons. Each node is a unit of mathematical operation, a group of interconnected nodes in
the network is called a layer and multiple layers build the overall framework of the network
(Figure 4) . Deep CNNs have also been applied to the automatic understanding of skin lesion
images in recent years. Mirikharaji et al., proposed a new framework for training fully
convolutional segmentation networks from a large number of cheap unreliable annotations, as
well as a small fraction of expert clean annotations to handle both clean and noisy pixel-level
annotations accordingly in the loss function. The results show that their spatially adaptive re-weighting method can significantly decrease the requirement for the careful labelling of images without sacrificing segmentation accuracy.
Figure 3: A schematic illustrating the hierarchy of the implementation of AI in dermatology.
Figure 4: Information from the image dataset is transmitted through a structure composed of multi-layer connection nodes. Each line is a weight connecting one layer to the next, with each circle representing an input, neuron or output. In convolutional neural networks, these layers contain unique convolutional layers that act as filters. The network, made up of many layered filters, learns increasingly high-level representations of the image.
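For illustration only, the sketch below shows how such a stack of convolutional filter layers might be written with the Keras API; the layer sizes, input shape and the assumed seven output classes are arbitrary example values, not the architecture used in any of the cited studies.

```python
# Hedged sketch of a small image-classification CNN; sizes are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(224, 224, 3)),          # an RGB lesion image
    layers.Conv2D(32, 3, activation="relu"),    # low-level filters (edges, colour blobs)
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),    # higher-level texture/shape filters
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),      # e.g. seven lesion classes (assumption)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```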
4.1. AI in Aid-Diagnosis and Multi-Classification for Skin Lesions
4.1.1. Multi-Classification for Skin Lesions in ISIC Challenges
In recent years, the classification of multiple skin lesions has become a hotspot with the
increasing popularity of using deep learning algorithms in medical image analysis. Before,
metadata indicating information such as site, age, gender, etc., were not included, even
though this information is collected by doctors in daily clinical practice and has an impact on
their diagnostic decisions. Therefore, the algorithm or AI system that includes this
information is better able to reproduce the actual diagnostic scenario, and its diagnostic
performance will be more credible. The ISIC challenges consider AI systems that can identify
the presence of many different pathologies and provide metadata for labelled cases, thus
allowing for a more realistic comparison between AI systems and clinical scenarios. Since the International Skin Imaging Collaboration (ISIC) challenge was first held in 2016, it has represented the benchmark for diverse research groups working in this area. To date, their database has
accumulated over 80,000 labelled training and testing images, which are openly accessible to
all researchers and have been used for training algorithms to diagnose and classify various
skin lesions. In ISIC 2016–2018, subsets of the image datasets were divided into seven
classes: (1) actinic keratosis and intraepithelial carcinoma, (2) basal cell carcinoma, (3)
benign keratosis, (4) dermatofibroma, (5) melanocytic nevi, (6) melanoma and (7) vascular
skin lesion. From 2019, the atypical nevi were added as the eighth subset. Garcia-Arroyo and
Garcia-Zapirain designed a CAD system to participate in the ISIC 2016 and 2017 challenges and were ranked 9th and 15th, respectively. In 2018, Rezvantalab et al. investigated the effectiveness and capability of four pre-trained state-of-the-art architectures (DenseNet 201, ResNet 152, Inception v3, Inception ResNet v2) with the HAM10000 (comprising a large part of the ISIC datasets) and PH2 datasets in the classification of eight skin diseases. Their overall results show that all deep learning models outperform dermatologists (by at least 11%). Iqbal et al. proposed a deep convolutional neural network (DNN) model trained using
ISIC 2017–2019 datasets that proved to be able to automatically and efficiently classify skin lesions with an AUC of 0.964 in the ROC curve. Similarly, Lucius’ team developed a DNN trained with HAM10000 to classify seven types of skin lesions. Statistics showed that the diagnostic
accuracy of dermatologists is significantly improved with the help of DNNs. Minagawa et al. trained a DNN using the ISIC-2017, HAM10000 and Shinshu datasets to narrow the diagnostic accuracy gap for dermatologists facing patients from different regions. Qin et al.
established a skin lesion style-based generative adversarial network (GAN) and tested it in
the ISIC 2018 dataset, showing that the GAN can efficiently generate high-quality images of
skin lesions, resulting in an improved performance of the classification model . Cano et al.,
applied CNNs based on NASNet architecture trained with a skin image lesion from the ISIC
archive for multiple skin lesion classification, which has been cross validated. Its excellent
performance suggests that it can be utilized as a novel classification system for multiple
classes of skin diseases . Al-masni et al., integrated a deep learning full-resolution
convolutional network and a convolutional neural network classifier for segmenting and
classifying various skin lesions. The proposed integrated deep learning model was evaluated
in ISIC 2016–2018 datasets and achieved an over 80% accuracy in all three for segmentation
and discrimination among seven classes of skin lesions, with the highest accuracy of 89.28%
in ISIC 2018. In 2018, Gessert et al. employed an ensemble of CNNs in the ISIC 2018 challenge and achieved second place. The next year, they exploited a set of deep learning models
trained with BCN20000 and HAM10000 datasets to solve the skin lesion classification
problem, including EfficientNets, SENet and ResNeXt WSL to address the classification of
skin lesions and predict unknown classes by analyzing patients’ metadata. Their approach
achieved first place in the ISIC 2019 challenge .
In recent years, transfer learning technology has also been applied for classifying multiple
skin lesions. Transfer learning allows a model developed from one task to be transferred for
another task after fine-tuning and augmentation. It is very helpful when we don’t have
enough training data sources. When lesion images are difficult to acquire, the algorithmic
model can be initially performed with natural images and subsequently fine-tuned with an
enhanced lesion dataset to increase the accuracy and specificity of the algorithm, thereby
improving the performance on image processing tasks. Singhal et al., utilized transfer
learning to train four different state-of-the-art architectures with the ISIC 2018 dataset and
demonstrated their practicability for the detection of skin lesions . Barhoumi et al., trained
content-based dermatological lesion retrieval (CBDLR) systems using transfer learning, and
their results showed that it outperformed a similar CBDLR systems using standard distances .
There are also some more studies that have devised AI systems or architectures trained or
tested in ISIC datasets and that have gained outstanding performances; we summarize them
in detail. Lately, the ISIC-2021 datasets have just been released. In addition to the ISIC 2018, ISIC 2019 and ISIC 2020 melanoma datasets, they also contain seven extra datasets with a total of approximately 30,000 images, such as Fitzpatrick 17k, PAD-UFES-20, Derm7pt and Dermofit Image. This greatly increases the richness and diversity of the ISIC-2021 archive
and correlates the patient’s skin lesion condition with the other disorders of the body, which
will provide the basis for the future training of AI algorithms with a more comprehensive and
higher diagnostic accuracy. We are also looking forward to the publication of high-quality
papers based on this archive.
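As a rough sketch of this idea (not the cited authors' code; the class count, input size and training schedule are assumptions), a ResNet50 backbone pre-trained on ImageNet can be frozen and given a new classification head with Keras.

```python
# Transfer-learning sketch: reuse a pre-trained ResNet50 and train only a new head.
import tensorflow as tf

base = tf.keras.applications.ResNet50(include_top=False,
                                      weights="imagenet",
                                      input_shape=(224, 224, 3),
                                      pooling="avg")
base.trainable = False                               # freeze the pre-trained filters

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(8, activation="softmax")(x)  # e.g. 8 classes (assumption)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)   # train the head first,
# base.trainable = True                                    # then optionally fine-tune
```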
4.1.2. Multi-Classification for Skin Lesions in Specific Dermatosis
In addition to the eight major categories of skin diseases defined in the ISIC challenge, in
many specific skin diseases, a differential diagnosis for multiple subtypes is also an urgent
issue to be solved. For example, in melanoma, while the common melanoma subtypes
superficial spreading melanoma (SSM) and lentigo maligna melanoma (LMM) are relatively
easy to diagnose, the morphological features of melanomas on other specific anatomical sites
(e.g., mucosa, limb skin and nail units) are often overlooked. On top of that, some benign nevi of melanocytic origin can also be easily confused with malignant melanoma in morphology. Among the common pigmentation disorders, many are caused by
abnormalities in melanin in the skin. Although they are similar in appearance, they are
diseases with different pathological structures and treatment strategies. Diagnostic models
based on AI algorithms can improve the diagnostic accuracy and specificity of these diseases
so as to benefit dermatologists by reducing the time and financial cost of the diagnosis .
Melanocytic Skin Lesions
Since Binder’s team applied an ANN to discriminate between benign naevi and malignant
melanoma in 1994, increasing numbers of AI algorithms have been employed for the multi-classification of melanocytic skin lesions. Moleanalyzer pro is a proven commercial CNN system for the classification of melanocytic lesions. Winkler and his team used the system,
which was trained with more than 150,000 images, to investigate its diagnostic performance
across different melanoma localizations and subtypes in six benign/malignant dermoscopic
image sets. The CNN showed a high-level performance in most sets, except for the melanoma
in mucosal and subungual sites, suggesting that the CNN may partly offset the impact of a
reduced human accuracy . In two studies by HA Haenssle et al., in 2018 and 2020, CNNs
were also used in comparison with specialist dermatologists to detect melanocytic/non-
melanocytic skin cancers and benign lesions. In 2018, the CNN trained with Google’s
Inception v4 CNN architecture was compared with 58 physicians. The results showed that
most dermatologists outperformed the CNN, but the CNN ROC curves revealed a higher
specificity and doctors may benefit from assistance by a CNN’s image classification . In
2020, Moleanalyzer pro was compared with 96 dermatologists. Even though dermatologists
accomplish better results when they have richer clinical and textual case information, the
overall results show that the CNN and most dermatologists perform at the same level in less
artificial conditions and a wider range of diagnoses. Sies et al. utilized the Moleanalyzer pro and Moleanalyzer daynamole systems for the classification of melanoma, melanocytic nevus and other dermatomas. The results showed that the two market-approved CAD systems offer
a significantly superior diagnostic performance compared to conventional image analyzers
without AI algorithms (CIA) .
Benign Pigmented Skin Lesions
Based on a wealth of experience and successful clinical practice, scholars have gradually
tried to apply AI to differentiate a variety of pigmented skin diseases with promising results.
Lin’s team pioneered the use of deep learning to diagnose common benign pigmented
disorders. They developed two CNN models (DenseNet-96 and ResNet-152) to identify six
facial pigmented dermatoses (the nevus of Ota, acquired nevus of Ota, chloasma, freckles,
seborrheic keratosis and cafe-au-lait spots). Then, they introduced ResNet.99 to build a fusion
network, and evaluated the performance of the two CNN with fusion networks separately.
The results showed that the fusion network performance was the best and could reach a level
comparable to that of dermatologists. In 2019, Tschandl et al. conducted the world's largest
comparison study between the machine-learning algorithm and 511 dermatologists for the
diagnosis accuracy of pigmented skin lesion classification. The algorithm was, on average,
2.01% more correct in its diagnosis compared to all human readers. The result disclosed that
machine-learning classifiers outperform dermatologists in the diagnosis of pigmented skin
lesions and should be more widely used in clinical practice . In the latest study, Lyakhov et
al., established a multimodal neural network for the hair removal preliminary process and
differentiation of the 10 most common pigmented lesions (7 benign and 3 malignant). They
found that fusing metadata from various sources could provide additional information,
thereby improving the efficiency of the neural network analysis and classification system, as
well as the accuracy of the diagnosis. Experimental results showed that the fusion of
metadata led to an increase in recognition accuracy of 4.93–6.28%, with a maximum
diagnosis rate of 83.56%. The study demonstrated that the fusion of patient statistics and
visual data makes it possible to find extra connections between dermatoscopic images and
medical diagnoses, significantly improving the accuracy of neural network classification .
Inflammatory Dermatoses
Inflammatory dermatoses are a group of diseases caused by the destruction of skin tissue as a
result of immune system disorders, including eczema, atopic dermatitis, psoriasis, chronic urticaria and pemphigus. Newly recorded histological findings and neoteric applications of immunohistochemistry have also refined the diagnosis of inflammatory skin diseases. AI
CAD systems are able to optimize the workflow of highly routinely diagnosed inflammatory
dermatoses. A multi-model, multi-level system using an ANN architecture was designed for
eczema detection. This system is conceived as an architecture with different models matching
input features, and the output of these models are integrated through a multi-level decision
layer to calculate the probability of eczema, resulting in a system with a higher confidence
level than a single-level system . From 2017 onwards, neural networks have been shown to
be useful for diagnosing acne vulgaris . The latest publications on the use of computer-aided
systems in acne vulgaris are based on a wealth of data from cell phone photographs of
affected patients, which enable the development of AI-based algorithms to determine the
severity of facial acne and to identify different types of acne lesions or post-inflammatory
hyperpigmentation . Scientists in South Korea trained various image analysis algorithms to
recognize images of fungal nails. For this purpose, they used datasets of almost 50,000 nail
images and 4 validation datasets of a total of 1358 images. A comparison of the respective
diagnostic accuracy (measured in this study by the Youden index) of differently trained assessors and the AI algorithm showed the highest diagnostic accuracy for the computer-based image analysis, which was significantly superior to dermatologists (p = 0.01).
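For reference, the small sketch below shows how the sensitivity, specificity and Youden index figures quoted in this chapter are derived from a confusion matrix; the labels and predictions are made-up toy values, not data from any cited study.

```python
# Toy computation of sensitivity, specificity and the Youden index J.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 1 = diseased, 0 = healthy (toy data)
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)              # true positive rate
specificity = tn / (tn + fp)              # true negative rate
youden = sensitivity + specificity - 1    # Youden index J

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, Youden J={youden:.2f}")
```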
4.2. AI in Aid-Diagnosis and Binary-Classification for Specific Dermatosis
4.2.1. Skin Cancer
The incidence of skin cancer has been increasing yearly. Although its mortality rate is relatively low, it remains a heavy economic burden on health services and can cause severe mental problems, especially as most skin cancers occur in highly visible areas of the body. Due to low screening awareness, a lack of specific lesion features in early skin cancer and insufficient clinical expertise and services, most patients are only diagnosed at an advanced stage, leading to a poor prognosis, so there is an urgent need for AI systems to help clinicians in this field.
Melanoma
Melanoma is the deadliest type of skin cancer. The early screening and early diagnosis of
melanoma is essential to improve patient survival . Currently, dermatologists diagnose
melanoma mainly by applying the ABCD principle based on the morphological
characteristics of melanoma lesions. However, even for experienced dermatologists, this
manual examination is non-trivial, time consuming and can be easily confused with other
benign skin lesions . Thus, most AI-driven skin cancer research has focused on the
classification of melanocytic lesions to aid melanoma screening. In 2004, Blum et al.,
pioneered the use of computer algorithms for the diagnosis of cutaneous melanoma and
proved that a diagnostic algorithm for the digital image analysis of melanocytic diseases
could achieve a similar accuracy to expert dermatoscopy . In 2017, Esteva et al., trained a
GoogleNet-Inception-v3-based CNN with the training dataset, including 129,450 clinical
images of 2032 different diseases from 18 sites. The performance of the CNN was compared
with 21 dermatologists in two critical binary classifications (the most common cancer and the
deadliest skin cancer) of biopsy-confirmed clinical images. The CNN’s performance on both
tasks was competent, and comparable to that of dermatologists, demonstrating its ability to
classify skin cancer . The ISIC Melanoma Project has also created a publicly accessible
archive of images of skin lesions for education and research. Marchetti et al. summarized the results of a melanoma classification task in the ISIC 2016 challenge, which involved 25 competing teams. They compared the algorithms' diagnoses with those of eight experienced dermatologists. The outcomes showed that automated algorithms significantly outperformed the dermatologists in diagnosing melanoma. Subsequently, they made a comparison of the
computer algorithms’ performance of 32 teams in the ISIC 2017 challenge with 17 human
readers. The results also demonstrated that deep neural networks could classify skin images
of melanoma and its benign simulants with a high precision and have the potential to boost
the performance of human readers . Filho and Tangs’ team have utilized the ISIC 2016, 2017
challenge and PH2 datasets to develop the algorithm for the classification and segmentation
of the melanoma area automatically. Their test outcomes indicated that these algorithms
could dramatically improve the doctors’ efficiency in diagnosing melanoma . In MacLellan’s
study, three AI-aid diagnosis systems were compared with dermatologists using 209 lesions
in 184 patients. The statistics showed that the Moleanalyzer pro had a relatively high sensitivity and the highest specificity (88.1%, 78.8%), whereas local dermatologists had the highest sensitivity but a low specificity (96.6%, 32.2%). Consistently, Moleanalyzer pro also showed its reliability in the differentiation of combined naevi and melanomas. It is also possible for dermatologists to build a whole-body map using a 3D imaging AI system; its application is of
particular relevance in the context of skin cancer diagnostics. The 360° scanner uses whole-
body images to create a “map” of pigmented skin lesions. Using a dermatoscope, atypical and
altered nevi can also be examined microscopically and stored digitally. With the help of
intelligent software, emerging lesions or lesions that change over time are automatically
marked during follow-up checks—an important feature for recognizing a malignancy and
initiating therapeutic measures. In addition, in the long term, high-risk melanoma populations
will benefit from a clinical management approach that combines an AI-based 3D total-body
photography monitor with sequential digital dermoscopy imaging and teledermatologist
evaluation .
Non-Melanoma Skin Cancer
AI is also widely used to differentiate between malignant and benign skin lesions, along with
the detection of non-melanoma skin cancer (NMSC). Rofman et al., proposed a multi-
parameter ANN system based on personal health management data that can be used to
forecast and analyze the risk of NMSC. The system was trained and validated by 2056
NMSC and 460,574 non-cancer cases from the 1997–2015 NHIS adult survey data, and was
then further tested on 28,058 individuals from the 2016 NHIS survey data. The ANN system
is available for the risk assessment of non-melanoma skin cancer with a high sensitivity
(88.5%). It can classify patients into high, medium and low cancer risk categories to provide
clinical decision support and personalized cancer risk management. The study’s model is
therefore a prediction, where clinicians can obtain information and the patient risk status to
detect and prevent non-melanoma skin cancer at an early stage. Alzubaidi et al. proposed a novel approach to overcome the lack of enough input-labeled raw skin lesion images by
retraining a deep learning model based on large unlabeled medical images on a small number
of labeled medical images through transfer learning. The model has an F1-score value of
98.53% in distinguishing skin cancer from normal skin.
Neurofibroma
Neurofibromatosis (NF) is a group of three conditions in which tumors grow in the nervous
system, namely NF1, NF2 and schwannomatosis. NF1 is the most common neurofibromatosis and
cancer susceptibility disease. Most patients with NF1 have a normal life expectancy, but 10%
of them develop malignant peripheral nerve sheath tumors (MPNST), which is a major cause
of morbidity. Therefore, the timely differentiation of benign and malignant lesions has direct
significance for improving the survival rate of patients. Wei et al., successfully established a
Keras-based machine-learning model that can discriminate between NF1-related benign and
malignant craniofacial lesions with a very high accuracy (96.99 and 100%) in validation
cohorts 1 and 2, and a 51.27% accuracy in various other body regions. Plexiform
neurofibroma (PN) is a prototypical and most common NF1 tumor. Ho et al., created a DNN
algorithm to conduct a semi-automated volume segmentation of PNs based on multiple b-
value diffusion-weighted MRI. They evaluated the accuracy of semi-automated tumor
volume maps constructed by a DNN compared to manual segmentation and revealed that the
volumes generated by the DNN from multiple diffusion data on PNs have a good correlation with manual volumes, and that there is a significant difference between PN and normal tissue. Interestingly, Bashat and his colleagues also demonstrated that a quantitative image representation method based on machine learning may assist in the classification between benign PNs and MPNST in NF1. In a similar initiative, Duarte et al. used grey matter density
maps obtained from magnetic resonance (MR) brain structure scans to create a multivariate
pattern analysis algorithm to differentiate between NF1 patients and healthy controls. A total
of 83% of participants were correctly classified, with 82% sensitivity and 84% specificity,
demonstrating that multivariate techniques are a useful and powerful tool .
4.2.2. Application of AI for Inflammatory Dermatosis
Psoriasis
The prevalence of psoriasis is 0% to 2.1% in children and 0.91% to 8.5% in adults. The
psoriasis area and severity index (PASI), body surface area (BSA) and physician global
assessment (PGA) are the three most commonly used indicators to evaluate psoriasis
severity. However, both PASI and BSA have been repeatedly questioned for their objectivity and reliability. It would therefore be of great help to use AI algorithms to make a
standardized and objective assessment. Nowadays, machine-learning-based algorithms are
available to determine BSA scores. Although this algorithm had slight limitations in detecting flaking as diseased skin, it has reached an expert level in BSA assessment. At present, there are already computer-assisted programs for PASI evaluation, which, however, still require human assistance and function by recognizing predefined threshold values for certain characteristics. Another study by Fink’s team is also based on image analysis with the FotoFinder™. The accuracy and reproducibility of PASI have been impressively improved with the help of semi-automatic computer-aided algorithms. These technological advances in
BSA and PASI measurements are expected to greatly reduce the workload of doctors while
ensuring a high degree of repeatability and standardization. In addition to the three above
indicators, Anabik Pal et al. used erythema, scaling and induration to build a DNN to determine the severity of psoriatic plaques. The algorithm is given a psoriasis image and then
makes a prediction about the severity of the three parameters. This task is seen as a new
multi-task learning (MTL) problem formed by three interdependent subtasks in addition to
three different single task learning (STL) problems, so the DNN is trained accordingly. The
training dataset consists of 707 photographs and the training results show that the deep CNN-
based MTL approach performs well when grading the disease parameters alone, but slightly
less well when all three parameters are correctly graded at the same time .
AI can also assist in evaluating and diagnosing psoriasis. Munro’s microabscesses (MM) are a sign of psoriasis. Anabik Pal et al. presented a computational framework (MICaps) to detect neutrophils in the epidermal stratum corneum of the skin from biopsy images (a component of MM detection in skin biopsies). Using MICaps, the diagnostic performance was increased by 3.27% and model parameters were reduced by 50%. A CNN algorithm that differentiated
among nine diagnoses based on photos made fewer misdiagnoses and had a lower missed-diagnosis rate for psoriasis compared to 25 dermatologists. In addition, Emma et al. used machine learning to find out which psoriasis patient characteristics are associated with long-term responses to biologics. Thanks to AI, an improvement in diagnosis and treatment can be expected for psoriasis patients.
Eczema
The challenge in the computer-aided image diagnosis of eczematous diseases is to correctly
differentiate not only between disease and health, but also between different forms of
eczema. The eczema stage and affected area are the most essential factors in effectively
assessing the dynamics of the disease. It is not trivial to accurately identify the eczema area
and other inflammatory dermatoses on the basis of photographic documentation. The
macroscopic forms of eczema are diverse, with different stages and varying degrees of
distribution and severity. The prerequisite for training algorithms for the AI-supported image
analysis of all of these various assessment parameters is therefore a correspondingly large
initial quantity of image files that have been optimized and adjusted in terms of the recording
technology. Forms of eczema with disseminated eruption, such as the corresponding
manifestation patterns of atopic dermatitis, would also be linked to the availability of
automated digital, serial whole-body photography for an efficient and time-saving AI-
supported calculation of an area score. Han et al., trained a deep neural-network-based
algorithm. The algorithm is able to differentiate between eczema and other infectious skin
diseases and to classify very rare skin lesions, which has direct clinical significance, and to
serve as augmented intelligence to empower medical professionals in diagnostic
dermatology. They even showed that treatment recommendations (e.g., topical steroids versus
antiseptics) could also be learned by differentiating between inflammatory and infectious
causes. It remains to be seen and questioned, however, whether an AI-aided severity
assessment and a clinically practicable area score can be derived from this as a prerequisite
for a valid follow-up in the case of eczema. Schnuerle et al. designed a support-vector-machine-based image processing method for hand eczema segmentation with industry swiss4ward for operational use at the University Hospital Zurich. This system uses the F1-score as the primary measurement and is likewise superior to a few advanced methods that were tested on their gold standard dataset. Presumably, a combination of such an AI-aided
image analysis and molecular diagnostics can optimize the future differential diagnostic
classification of eczema diseases, as recently predicted for various clinical manifestations of
hand dermatitis.
Atopic Dermatitis
Atopic dermatitis (AD) is the most common chronic inflammatory disease, with a prevalence
of 10% to 20% in developed countries. It usually starts in childhood and recurs multiple times in adulthood, greatly affecting patients’ quality of life. In 2017, Gustofson’s team identified
patients with AD via a machine-learning-based phenotype algorithm. The algorithm
combined code information with the collection of electronic health records to achieve a high
positive predictive value and sensitivity. These results demonstrate the utility of natural
language processing and machine learning in EHR-based phenotyping. An ANN algorithm
was developed to assess the influence of air contaminants and weather variation on AD
patients; their results proved that the severity of AD symptoms was positively correlated with
outdoor temperatures, RH, precipitation, NO2, O3 and PM10. In the latest study, a fully automatic approach based on a CNN was proposed to analyse multiphoton tomography (MPT)
data. The proposed algorithm correctly diagnosed AD in 97.0 ± 0.2% of all images presenting
living cells, with a sensitivity of 0.966 ± 0.003 and specificity of 0.977 ± 0.003, indicating
that MPT imaging can be combined with AI to successfully diagnose AD .
Acne
AI-based assessment has been very effective. Melina et al. showed an excellent correlation between the automatic and manual evaluation of the investigator’s global assessment, with r = 0.958. In the case of acne vulgaris in particular, such a procedure could prevent far-reaching consequences with permanent skin damage in the form of scars.
Vitiligo
The depigmented macules of vitiligo are usually in high contrast to unaffected skin. Vitiligo
is more easily recognized by AI systems than features of eczema or psoriasis lesions with
poorly defined borders. Computer-based algorithms used for the detection of vitiligo with an F1 score of 0.8462 demonstrated an impressive superiority to pustular psoriasis. Luo designed a vitiligo AI diagnosis system employing cycle-consistent adversarial networks (CycleGANs) to generate images under Wood’s lamp and improved the image resolution via an attention-aware dense net with residual deconvolution (ADRD). The system achieved a 9.32% improvement in classification accuracy compared to direct classification of the original images using ResNet-50. Makena’s team built a CNN that performs vitiligo
skin lesion segmentation quickly and robustly. The network was trained on 308 images with
various lesion sizes, intricacies and anatomical locations. The modified network
outperformed the state-of-the-art U-Net with a much higher Jaccard index score (73.6%
versus 36.7%) and shorter segmentation time than the previously proposed semi-autonomous
watershed approach .These novel systems have proved promising for clinical applications by
greatly saving the testing time and improving the diagnostic accuracy.
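The Jaccard index (intersection over union) used above to compare the vitiligo CNN with U-Net can be computed from two binary masks as in the following minimal sketch; it assumes NumPy arrays and toy masks rather than the published data.

```python
import numpy as np

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard index (intersection over union) of two binary lesion masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection) / union if union else 1.0

# toy 3x3 masks: predicted vitiligo region versus ground truth
pred  = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
truth = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
print(f"IoU = {jaccard_index(pred, truth):.2f}")   # 0.75
```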
Fungal Dermatosis
Gao et al. developed an automated microscope for fungal detection in dermatology based on deep learning. The system is as proficient as a dermatologist in examining skin and nail specimens, with sensitivities of 99.5% and 95.2% and specificities of 91.4% and 100%, respectively.
4.3. Applications of AI for Aesthetic Dermatology
AI combined with new optical technologies is also increasingly applied in aesthetic dermatology. Examples include face recognition and automatic beautification in smartphones and related software. So-called smart mirror analyzers are now available on the Internet: AI-assisted technologies with image recognition systems that analyze the skin based on its appearance and current external environment and recommend skin care products accordingly. The program ArcSoft Portrait can automatically identify wrinkles, moles, acne and cicatrices and intelligently soften, moisturize and smooth the skin while retaining maximum skin texture and detail, greatly simplifying the cumbersome and time-consuming portrait retouching process. AI also plays an essential role in facial aesthetic assessment. For this purpose, ANNs are trained on facial images that people have judged independently according to aesthetic criteria. The ANN learns from photos and their respective attractiveness ratings to make human-like judgments about facial aesthetics. New applications objectively evaluate each photo on the basis of over 80 facial coordinates and nearly 7000 associated distances and angles.
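To make the figure of roughly 80 coordinates and nearly 7000 derived distances and angles concrete, the small sketch below computes pairwise distances from a handful of hypothetical facial landmarks; with 80 landmarks there are already 3160 pairwise distances before any angles are counted. The landmark positions are invented.

```python
import itertools
import math

def pairwise_distances(landmarks):
    """landmarks: list of (x, y) facial coordinate tuples."""
    return [math.dist(p, q) for p, q in itertools.combinations(landmarks, 2)]

# toy example with 4 landmarks (eye corners, nose tip, chin)
points = [(30, 40), (70, 40), (50, 60), (50, 90)]
dists = pairwise_distances(points)
print(len(dists), [round(d, 1) for d in dists])     # 6 distances

# with n landmarks there are n*(n-1)/2 pairwise distances
n = 80
print("distance features from 80 landmarks:", n * (n - 1) // 2)  # 3160
```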
4.4. Applications of AI for Skin Surgery
Radical resection and amputation are the best means of preventing recurrence and fatal metastasis in malignant skin tumors. A skin or flap graft via microsurgery and the application of prostheses play a crucial role in improving patients’ quality of life after resection. Adequate microvascular anastomosis is the key to a successful microvascular free tissue transfer, and as a basic requirement the surgeon must have excellent microsurgical skills. Thanks to auxiliary equipment such as surgical microscopes, magnifications of 10 to 15 times are possible and allow for the anastomosis of small vessels. Nevertheless, due to physiological tremor, only vessels down to approximately 0.5–1 mm in size can be safely anastomosed; in lymphatic surgery or perforator-based flaps, where the vascular caliber may be even smaller, surgeons reach their limits. Against this background, the expansion of surgical microscopes to include robotics and AI capabilities represents a promising and innovative approach for surpassing the capabilities of the human hand. The aim is to use AI-equipped robots to eliminate human tremor and to enable motion scaling for increased precision and dexterity in the smallest of spaces. By downscaling human movements, finer vessels can be joined. In the future, advances could be achieved in the field of ultra-microsurgery, with anastomoses in the range of 0.1–0.8 mm on the smallest vessels or nerve fascicles. In the long term, intelligent robotics could also automate technically demanding tasks, such as robot-performed microsurgical anastomosis, or provide a real-time feedback system for the surgeon.
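As a rough illustration of the motion scaling and tremor suppression described above, the sketch below downscales hand displacements by a fixed factor and applies simple exponential smoothing as a low-pass filter. It is a conceptual toy under assumed parameters, not the control algorithm of any actual surgical robot.

```python
import numpy as np

def scale_and_filter(hand_positions, scale=0.1, alpha=0.2):
    """Downscale surgeon hand displacements and smooth out tremor.

    hand_positions: (N, 3) array of hand coordinates in mm.
    scale: motion-scaling factor (0.1 => 10:1 downscaling).
    alpha: exponential-smoothing weight acting as a simple low-pass filter.
    """
    pos = np.asarray(hand_positions, dtype=float)
    displacements = np.diff(pos, axis=0) * scale       # scaled incremental motion
    tool = [pos[0] * scale]
    smoothed = np.zeros(3)
    for d in displacements:
        smoothed = alpha * d + (1 - alpha) * smoothed  # attenuate high-frequency tremor
        tool.append(tool[-1] + smoothed)
    return np.array(tool)

# toy trajectory: slow drift plus a small high-frequency tremor component
t = np.linspace(0, 1, 50)
hand = np.stack([10 * t + 0.3 * np.sin(60 * t), 5 * t, np.zeros_like(t)], axis=1)
print(scale_and_filter(hand)[-1])   # final tool-tip position
```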
Prosthetics have also evolved with the implementation of AI. After amputation injuries, prostheses can now restore not only the shape but also essential functions of the amputated extremity; in this way, they make a significant contribution to the patient’s reintegration into society. The mental control of the extremity remains in the brain even after amputation: when movement patterns are imagined, neurons still transmit the corresponding nerve signals despite the lack of end organs to perform them. Prostheses can now record these electrical potentials via up to eight electrodes and assign them to the respective functions via AI-equipped pattern recognition, helping patients make better use of the prosthesis in their daily lives. This enables the patient to directly control different grip shapes and movements, so that gripping movements can be realized much faster and more naturally.
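A hedged sketch of the pattern-recognition idea behind such myoelectric control is shown below: mean-absolute-value features from eight simulated EMG channels are classified into grip commands using scikit-learn's linear discriminant analysis. The synthetic data, feature choice and classifier are illustrative assumptions, not the method used in any commercial prosthesis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def emg_features(window):
    """window: (samples, 8) raw EMG; return mean absolute value per electrode."""
    return np.mean(np.abs(window), axis=0)

# synthetic training data: three grip patterns from 8-electrode windows
grips = ["power", "pinch", "open"]
X, y = [], []
for label, amplitude in zip(grips, (0.2, 0.6, 1.0)):
    for _ in range(60):
        window = amplitude * rng.random((200, 8))   # fake 200-sample EMG window
        X.append(emg_features(window))
        y.append(label)

clf = LinearDiscriminantAnalysis().fit(np.array(X), y)

# classify a new window into a grip command for the prosthesis
new_window = 0.6 * rng.random((200, 8))
print("predicted grip:", clf.predict([emg_features(new_window)])[0])
```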
The application of AI-based surgical robots in skin surgery is also becoming widespread. Compared to traditional open surgery, robotic-assisted surgery offers 3D vision systems and flexible operating instruments, with potentially fewer postoperative complications as a result. In 2010, Sohn et al. first applied this technique to treat two patients with pelvic metastatic melanoma. In 2017, Kim successfully treated one case of vaginal malignant melanoma using robotic-assisted anterior pelvic organ resection with ileocystostomy. One year later, Hyde successfully treated four cases of malignant melanoma using robotic-assisted inguinal lymph node dissection. Miura et al. found that robotic assistance provided a safe, effective and minimally invasive method of removing pelvic lymph nodes from melanoma patients with pelvic metastases, with shorter hospital stays compared to normal open surgery. Medical robots are also involved in the field of hair transplantation. In 2011, the ARTAS system was officially approved by the US FDA for male hair transplantation; it provides clear and detailed characteristics of the donor area by capturing microscopically magnified images and computer-aided parameters to facilitate the acquisition of complete follicular units from the donor area. The system reduces labor, eliminates human fatigue and potential errors, and significantly shortens the procedure time.
5. Computer-Aided Dermatology AI Systems on the Market
With the rapid development of AI over the past decade, a number of ‘skin’ medical systems
and instruments with multiple applications have been commercialized. These systems and
instruments have ample image datasets to assist in skin examination, monitoring the skin
condition, clinical follow-up and providing treatment advice or guidance. Here, we briefly
summarize the most widely used dermatology AI systems and smartphone apps of the last 15
years.
As a state-of-the-art full-body scanning imaging and intelligent identification system, the
Vectra WBS360 allows the entire skin surface to be acquired with a macroscopic quality
resolution through a single capture. Clinicians can map and survey pigmented lesions and
distributed dermatoses with integrated software. Other applications include documenting
pigmented lesions, psoriasis and vitiligo with the help of 3D imaging systems that allow for
detailed documentation and organization of pre- and post-operative image records. Its companion dermoscope, the VEOS DS3, combines optics and illumination with wireless capture. The AI-based DermaGraphix imaging software also helps in assessing the risk of a lesion’s malignancy: it allows physicians to label and monitor lesions and process images within a protected image management system.
Another AI skin system from Canfield, VISIA, has been on the market for over 15 years and has evolved into its seventh generation. The system uses cross-polarization and UV illumination to record and measure surface and sub-surface skin conditions. Canfield’s RBX® technology isolates the distinctive color characteristics of red and brown skin components, such as spider veins and hyperpigmentation. Its new AI wrinkle algorithm dramatically increases the detection precision of fine lines and wrinkles. It can also simulate the effect on each region after injecting different volumes, and how patients might appear at ages from 18 to 80. It provides a finer visualization of sub-surface melanin and vascular conditions for all skin types and ethnicities. In addition, it grades patients’ skin using the world’s largest database of skin characteristics, measuring blemishes, wrinkles, texture, pores, UV spots, red areas and porphyrins. A study assessing the clinical value of VISIA found that 86% of respondents agreed that VISIA analysis had improved their understanding of and attention toward their skin health; all would recommend VISIA analysis to other people, and 62% preferred a clinical practice with a VISIA system.
FotoFinder, a system specifically designed to identify skin cancer, debuted in 1991. It performs skin cancer diagnosis through automated whole-body mapping and digital dermoscopy, as well as psoriasis documentation and aesthetic imaging, and FotoFinder systems are used both in daily practice and in related studies. Its AI-based software Moleanalyzer pro, working with deep learning algorithms, allows for a risk-of-malignancy evaluation. It is a market-approved CNN and currently has the largest dataset of dermoscopic images together with their associated diagnoses. The CNN has already been involved in several comparative studies of skin lesion diagnosis, and its reliability and feasibility have been recognized. Dermoscan is also a medical imaging system focusing on monitoring and differential diagnostics in skin cancer. It uses polarization to capture the skin surface and automatically analyzes traces of hyperpigmentation. All patient- and localization-related images are saved in a database and linked to the video-dermoscopy system. By using digital photo documentation, the system can identify emerging pigmentation marks and diagnose changes in existing lesions.
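Conceptually, a CNN-based risk score such as the one produced by Moleanalyzer pro maps an image tensor to a probability of malignancy. The PyTorch toy model below illustrates that idea only; its architecture and the random input image are assumptions and are unrelated to FotoFinder's actual network.

```python
import torch
import torch.nn as nn

# A toy CNN that maps a dermoscopic image tensor to a malignancy risk score in [0, 1].
class RiskCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return torch.sigmoid(self.head(x))   # risk score between 0 and 1

model = RiskCNN().eval()
image = torch.rand(1, 3, 224, 224)            # placeholder dermoscopic image
with torch.no_grad():
    print(f"malignancy risk score: {model(image).item():.2f}")
```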
Miravex’s Antera 3D imaging system is a device-and-software complex with powerful and versatile data handling and consultation tools for the analysis and quantitative measurement of wrinkles, texture, pigmentation, redness and various other dermatologic conditions. Antera 3D uses an AI algorithm to reconstruct full 3D images of the skin surface and is particularly suitable for the analysis of topographical features such as wrinkles, skin texture, pores and volume. For morphological analysis (wrinkles, texture, volumes, etc.), tests on artificial skin samples under controlled conditions have established an instrumental error of less than 2%, demonstrating the high level of measurement reproducibility offered by the Antera 3D camera.
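The reproducibility claim above (instrumental error below 2%) is the kind of figure obtained by repeating a measurement and computing its coefficient of variation, as in this small sketch with invented wrinkle-depth readings.

```python
import numpy as np

# Hypothetical repeated wrinkle-depth measurements (in mm) of the same artificial
# skin sample; the coefficient of variation approximates the instrumental error.
repeats = np.array([0.412, 0.409, 0.415, 0.408, 0.413])
cv = repeats.std(ddof=1) / repeats.mean()
print(f"coefficient of variation: {cv * 100:.2f}%")   # well below 2%
```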
Following these commercially available AI skin systems, AIDERMA was launched as the first AI-assisted comprehensive platform for the diagnosis and treatment of skin diseases. With leading AI image recognition technology at its core, AIDERMA provides doctors with integrated support for assisted diagnosis, case management, professional education and patient management, helping them improve their diagnosis and treatment efficiency across all aspects of their clinical work. AIDERMA can intelligently identify skin lesion photos and directly name the skin disease. In a 2018 comparison with the FotoFinder system, its diagnostic accuracy reached 80%. Smart Skin is now open to certified Chinese physicians and can identify 90 types of diseases with an average accuracy of 86%. The product has been clinically tested in more than 3400 hospitals since its launch, helping doctors complete nearly 80,000 auxiliary diagnoses and giving them access to over three million items of clinical content.
Beyond such large and complex AI systems and platforms, some lightweight AI-based dermatology diagnostic apps for smartphones have also recently emerged. Dermacompass by swiss4ward is a learning tool for dermatologists. It contains skin disease images along with treatment algorithms and also provides individual case diagnosis and image comparison. The app uses automatic image analysis to grade the medical severity of hand eczema, detecting the condition through computer vision and machine learning. DermoScanner is an application leveraging the power of AI and deep learning that allows users to analyze skin moles and detect skin cancers via a mobile camera.
Table 5. On-market dermatology AI systems and apps.

Name | Manufacturer | Country | Year on Market | Platform | Application
Moleanalyzer pro | FotoFinder | Germany | 2018 | Windows | Analyzes melanocytic as well as non-melanocytic skin lesions and calculates an AI score for mole risk assessment
Dermoscan X2 | Dermoscan | Germany | 2017 | Windows | Identification of new or modified lesions with digital photo documentation and automatic comparison of pigmentation marks
AIDERMA | Dingxiangyuan | China | 2018 | Online | Automatic identification of skin disorders and safe storage of the patient’s medical record in the cloud
DermEngine | MetaOptima Technology Inc. | Canada | 2015 | Android and iOS | Imaging, documentation and analysis of skin conditions including skin cancer; offers business intelligence features designed for practice management
Dermacompass | Swiss4ward | Switzerland | 2017 | Android and iOS | Contains skin disease pictures and treatment algorithms and provides individual case diagnosis and image comparison for dermatologists