Mobile application based speech and voice analysis for COVID-19 detection using computational audit techniques

Udhaya Sankar S.M. and Ganesan R.
Department of Computer Engineering, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, India
Jeevaa Katiravan
Department of Information Technology, Velammal Engineering College, Chennai, India
Ramakrishnan M.
Department of Computer Applications, Madurai Kamaraj University, Madurai, India, and
Ruhin Kouser R.
Department of Computer Science and Engineering, Kingston Engineering College, Vellore, India

Received 24 September 2020; Revised 24 September 2020; Accepted 24 September 2020
Abstract
Purpose – It has been six months since the first case was registered, and nations are still working on countermeasures and regulations. The model proposed in this paper encompasses a novel methodology that equips systems with artificial intelligence and computational audition techniques over voice recognition for detecting the symptoms. Regular and irregular speech/voice patterns are recognized using in-built tools and sensors on a hand-held device. Phenomenal patterns can be contextually differentiated between normal conditions and the presence of asymptomatic indications.
Design/methodology/approach – The lives of patients and healthy individuals are seriously affected by various precautionary measures and social distancing. Governments and nations have taken the necessary actions to mitigate the spread of the infection. With a rising death toll, the novel coronavirus is certainly a serious pandemic, spreading through unhygienic practices and contact with airborne droplets from infected patients. With minimal means to detect the symptoms from the early onset, and with the rise of asymptomatic cases, coronavirus becomes even more difficult to detect and diagnose.
Findings – A number of significant parameters are considered for the analysis: dry cough, wet cough, sneezing, speech under a blocked nose or cold, sleeplessness, chest pain, eating behaviours and other potential indicators of the disease. Risk- and symptom-based measurements are imposed to deliver a symptom-subsiding diagnosis plan. Monitoring and tracking symptom-afflicted areas, social distancing and its outcomes, treatments, planning and delivery of healthy food intake and immunity improvement measures are other potential guidelines to mitigate the disease.
Originality/value – This paper also lists the challenges for a solution to work satisfactorily in actual scenarios. Emphasizing the early detection of symptoms, this work highlights the importance of such a mechanism in the absence of medication or a vaccine and given the demand for large-scale screening. A mobile and ubiquitous application is definitely a useful means of alerting officials to take the necessary actions while eliminating expensive modes of tests and medical investigations.
Keywords Artificial intelligence, Application
Paper type Research paper

International Journal of Pervasive Computing and Communications
© Emerald Publishing Limited
1742-7371
DOI 10.1108/IJPCC-09-2020-0150
Introduction
The first case was registered in Wuhan, China, back in December 2019, according to the World Health Organization (WHO) Office in China. Ever since, the disease has spread to around 200 countries at the time of writing, six months from its inception. In March, the disease was identified as COVID-19, caused by a virus of the SARS-CoV-2 family, and declared a life-threatening pandemic. According to Johns Hopkins University, the coronavirus assumes exponential growth, affecting immunity and resulting in pneumonic infection of patients’ lungs that induces breathing problems. To date, 12,631,237 affected cases have resulted in 562,700 deaths worldwide.
The number of recovered cases has reinforced belief in the countermeasures adopted by the WHO and governments. Medical institutions are yet to analyse and present the after-effects of coronavirus following recovery. With cases doubling from one day to the next, governments have taken a wide range of measures to control the spread, shutting down entire nations and closely monitoring patients in recognized institutions. Social distancing has been seriously followed in all places to ensure control of the disease’s spread.
Various research activities have been taken up by organizations globally, producing massive efforts to create social awareness among the population. The research domain has been interdisciplinary, spanning artificial intelligence (AI), medical equipment, medical testing, machine learning and embedded systems. The integration of mobile and pervasive computing has opened up a next level of research with early detection techniques and proper tracking. Digital health tracking, with a defined system for analysing the observations to the fullest, simplifies the process of symptom detection. AI, combined with ubiquitous computing, delivers a systematic and powerful tool for predicting coronavirus symptoms, tracking the origin and alerting medical institutions. Screening and result production should be automated and consume less time, for faster analysis, reporting and isolation of affected persons in their respective containment zones. Computer vision, computer audition and image processing techniques over medical images (World Health Organization, 2020; Hu et al., 2020; Gozes et al., 2020), along with wireless sensor networks (Speech – The physiology of speech, 2020) for managing signals from embedded tools or portable devices, are established to assist in the detection of COVID-19 symptoms.
The research work intends to document a survey of computer vision, computer auditing techniques, audio processing and machine learning in combination with computational technology. This study advances the existing speech analysis and sound analysis; speech synthesis is a remarkable contribution in the present scenario. Screening densely populated nations, with minimally observable symptoms, adds challenges to timely detection and diagnosis. The current methods of screening and testing patients are time consuming and often misleading, resulting in further spread of the contagion. Many developed nations are also facing shortages of testing kits, precautionary wear, protective films and hospital facilities. Ranging from breathing troubles to fever, the recorded symptoms also include body pains, diarrhoea, sore throat and runny nose. These symptoms are often detected only up to two weeks after onset, though mild symptoms can be sensed from around five days. These are the challenging factors of the current research work.
The challenges are addressed by attempting to narrow down the symptoms of COVID-19 with speech and voice analysis. Individuals are scanned for voice and speech variations, and AI attempts to identify the variations and confirm infection with the virus. Other sensors lead to confirmation of the same using temperature, heart rate, breathing rate differences and pressure variations. These readings can be used directly or indirectly to narrow down the symptoms of coronavirus, more importantly, before the inception of severe symptoms. Numerous research studies have been carried out relating breathing variations and speech characteristics to the emotional, physiological and psychological characteristics of an individual. The same set of characteristics can be accurate, and the results can lead to a definite solution for detecting asymptomatic indications. The following sections comprise a detailed survey of similar techniques and methodologies for early detection, sound and speech recognition, machine learning and AI-enabled computational technologies. Various observations are collected and evaluated, such as speech variations when an individual is wearing a mask, suffering from cold, nasal congestion, sneezing and coughing, and in the presence of the disease. Multiple use cases, challenges and global test cases are compared for a comprehensive design and implementation, to satisfy the need of the current scenario across the globe. The presented model will be in the form of a mobile app, in touch with individuals throughout, sensing for alarming signals through AI models for sound and speech processing. Once such signals are observed, they will be transmitted to officials for immediate medical attention.
Literature survey
Medical diagnosis usually commences with gathering potential signs from the human body: heart rate, blood pressure, temperature and respiration rate. These observations are made with expensive devices and equipment that demand medical expertise and cannot easily be moved from one place to another. When detection of symptoms is based on voice and speech analysis, the task can simply be performed with the ordinary recorder available on a smartphone. This provides a ubiquitous option for the current requirement, where almost every individual owns a smartphone. Computational paralinguistics is a domain of computer science and engineering for speech analysis and recognition (Wang et al., 2020). A similar model has been designed and implemented in the Interspeech Computational Paralinguistics Challenge (ComParE) (Borkovec et al., 1974) to investigate the changes in speech and voice when influenced by cold. The acoustic parameters of regular voice and cold-induced voice are compared to differentiate the levels. Hence, the idea is extended into a suitable solution for analysing the changes in voice and speech (Maghdid et al., 2020). Shortness of breath and other associated difficulties can be sensed by comparing against regular patterns, recorded periodically, to produce the results.
An additional challenge for the proposed model is detecting voice through masks, as a facial mask can disrupt the actual recordings. This factor should not lead to misleading results or be deemed a breathing trouble; monitoring elders and emotion analysis are other challenges of the proposed system. Coronavirus also affects children under the age of five, for whom crying is the only symptom that can be differentiated. Crying as an indicator has to be deeply analysed to confirm the presence of coronavirus symptoms (Trouvain and Truong, 2015; Schuller and Batliner, 2013). Another notable symptom is loss of appetite, which cannot be confirmed with the present model. ComParE also included lack of sleep, another notable symptom of COVID-19, extended by the Karolinska sleepiness test in 2019 (Schuller et al., 2017; James, 2015). Regular symptoms such as headaches and body pains can be accounted for by other causes and thus cannot be the sole deciding factors for the disease (Burton et al., 2004; Usman et al., 2019a).
Association of breathing variations with physiological and psychological conditions is observed in the surveyed methodologies (Mesleh et al., 2012). Reliable evidence relating heartbeats and speech has been investigated in signal processing models and the necessary AI models. The accuracy of different models varies according to the conditions under which the patients are tested, including their environmental conditions. Regression analysis implemented for measuring heartbeat variations, with classification of normal and abnormal heartbeats, delivered an accuracy of 94%. With a suitable AI algorithm, the classification accuracy improved to almost 100% (Skopin and Baglikov, 2009; Sakai, 2015). Speech and ECG reports were analysed together for detecting vital symptoms (Schuller et al., 2019; Schuller and Schuller, 2020). A method was implemented to correlate blood pressure with speech-related variations (Schuller et al., 2014). Hence, the existing work has concentrated on the potential signs and their different combinations for detecting the symptoms. Biological parameters can be related to the detection of COVID-19 symptoms through the construction of waveforms, measuring spatial and temporal characteristics against standard values. Temperature symptoms can be mapped to fever and tiredness of an affected patient, and this induces variations in speech. Similarly, blood pressure variation will lead to a change in speech, and heart rate and respiratory rate are directly associated with speech, ultimately leading to speech variations. Almost all the symptoms of COVID-19 result in a change of speech and acoustics (Moradshahi et al., 2012; Schröder et al., 2016; Murphy et al., 2004). This motivates the present study and model for further analysis. Table 1 lists the possible measures of general speech analysis.
Proposed methodology
The speech signal will be recorded in the installed app and converted into a digital format. Background noise, pauses, stammering and unwanted components will be removed from the digital signal for a clear analysis. General signal processing techniques will clear the areas to be processed through filtering and voice activity detection phases. Once the actual signal is obtained, it is processed by a feature selection algorithm, which categorizes the input signal into a specific characterized speech signal. The patterns are vital elements for further classification by the implemented algorithm, where the input pattern is compared against pre-processed sample patterns (Schröder et al., 2016; Murphy et al., 2004; Song, 2015; Janott et al., 2018). The obtained parameters are compared with default parameters for a binary classification as the outcome of the solution. Correlations between the obtained parameters and existing biomarkers are carefully studied in the literature (Pokorny et al., 2019; Ruskin et al., 1973; Usman et al., 2019b; Schuller et al., 2020; Cardone et al., 1987; Wei et al., 1983).
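The filtering and voice activity detection steps above can be sketched with a simple short-time-energy detector. This is a minimal illustration, not the paper's implementation; the frame length, hop size and energy threshold are assumed values.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def voice_activity_mask(x, frame_len=400, hop=160, energy_ratio=0.1):
    """Mark frames whose short-time energy exceeds a fraction of the peak
    frame energy; the 0.1 ratio is an illustrative threshold."""
    frames = frame_signal(x, frame_len, hop)
    energy = (frames ** 2).mean(axis=1)
    return energy > energy_ratio * energy.max()

# Synthetic example: near-silence, a voiced burst, then near-silence.
rng = np.random.default_rng(0)
sr = 16000
silence = 0.001 * rng.standard_normal(sr // 2)
voiced = np.sin(2 * np.pi * 220 * np.arange(sr // 2) / sr)
signal = np.concatenate([silence, voiced, silence])
mask = voice_activity_mask(signal)   # True only around the voiced burst
```

Only the frames flagged by such a mask would then be passed on to the feature selection stage.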
As far as speech and voice processing is concerned for the detection of COVID-19 symptoms, the default biomarkers are retrieved from existing studies and reports globally. Automatic recognition of dry and wet coughs, sneezing, throat clearing, swallowing and productive coughs has shown a high recognition rate with good accuracy. The cause of fatality in COVID-19 cases is found to be pneumonic, as patients require a ventilator for respiratory support. Pneumonic conditions have caused the most fatalities since the onset of the disease, providing the different breathing conditions with which to test and train our models. Computer auditing techniques have shown promising results in differentiating breathing patterns under normal and cold conditions, lung sounds and other respiratory troubles. The actual benefit
of the proposed model is the implementation of the same in a mobile phone (31-32).

Table 1. Use cases of coronavirus symptoms and speech/sound analysis

Symptom                  Risk analysis   Diagnosis   Monitoring
Coughing (wet and dry)   No              Yes         Yes
Cardiovascular disease   Yes             No          Yes
Sneezing                 No              Yes         Yes
Emotion                  No              No          Yes
Crying                   Yes             No          Yes
Snoring                  Yes             No          Yes
Masks                    Yes             No          Yes
Breathing                Yes             Yes         Yes
Depression               No              No          Yes

Deployed models can be implemented in different working conditions, such as sleep, emotional distress, workouts and so on. Different types of snoring can be detected when a patient is sleeping, and periodical tracking will enable the system to gain insights about actual symptoms. In existing models, the ComParE technique analysed the relationship between
cardiovascular symptoms and respiratory variations. Detected heart rates were categorized
into normal, mild and abnormal conditions based on patterns derived from classification AI
algorithms. The model can be extended to identify individuals failing to maintain social distancing: if the system detects more than two voices in close proximity, it will advise the users to maintain social distance from each other. Implemented in mobile phones, the model will ensure social distancing in crowded places by configuring the local pointers.
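As a toy illustration of the normal/mild/abnormal heart-rate categorization mentioned above, a rule-based sketch might look as follows; the band edges are assumptions for illustration only, not thresholds taken from ComParE or any clinical source.

```python
def categorize_heart_rate(bpm, normal=(60, 100), mild_margin=15):
    """Map an estimated heart rate (beats/min) to a coarse category.
    The band edges are illustrative assumptions, not clinical values."""
    lo, hi = normal
    if lo <= bpm <= hi:
        return "normal"
    if lo - mild_margin <= bpm <= hi + mild_margin:
        return "mild"
    return "abnormal"

print(categorize_heart_rate(72))    # within the normal band
print(categorize_heart_rate(110))   # slightly outside: mild
print(categorize_heart_rate(140))   # well outside: abnormal
```

In a deployed system the heart rate itself would first have to be estimated from the speech signal, as in the surveyed heart-rate-from-speech studies.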
Processing audio in a real-time environment
The voices obtained will come from different environments, and processing will depend on additional noises, the quality of the device, and the recording, storage and retrieval of audio from the devices. When periodical recordings are obtained, they cannot always be of the same sort: the first recording might happen at home, while the respective individual might record the second audio during travel. Various reverberations, noises and transmission losses will be added to the original source recording. The proposed model should also treat multicultural and multilingual variability as an important factor. The efficiency of the model is determined by the mobile application, speech analysis, audio analysis and data analysis. Energy consumption of mobile devices and transmission of the data to the machine learning and AI algorithms should also be considered. The biological markers are represented in Figure 1.
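One standard pre-processing step for such variable recording conditions is spectral subtraction, which removes an estimated noise spectrum before analysis. The paper does not prescribe a specific method; the following is a textbook-style sketch that assumes a noise-only reference clip from the same device is available.

```python
import numpy as np

def spectral_subtraction(x, noise_ref):
    """Subtract an estimated noise magnitude spectrum from the signal,
    keeping the noisy phase; negative magnitudes are floored at zero."""
    n = len(x)
    X = np.fft.rfft(x)
    noise_mag = np.abs(np.fft.rfft(noise_ref, n=n))
    mag = np.maximum(np.abs(X) - noise_mag, 0.0)
    return np.fft.irfft(mag * np.exp(1j * np.angle(X)), n=n)

# Synthetic check: a 500 Hz tone buried in white noise.
rng = np.random.default_rng(1)
sr, n = 16000, 2048
tone = np.sin(2 * np.pi * 500 * np.arange(n) / sr)
noisy = tone + 0.3 * rng.standard_normal(n)
cleaned = spectral_subtraction(noisy, 0.3 * rng.standard_normal(n))
err_before = np.sqrt(np.mean((noisy - tone) ** 2))
err_after = np.sqrt(np.mean((cleaned - tone) ** 2))
```

A production system would apply this frame by frame with a tracked noise estimate, but the single-block version shows the idea.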
Assessment of risk factors
Risk assessment commences with an individual evaluation determining the age, gender and health condition of the person and correlating these with the mortality risk level. Users can also ensure that other individuals are wearing a mask and maintaining social distance from each other. Counting voices in close proximity, estimating the ambience and other functionalities of the proposed model can also be performed.

Figure 1. COVID-19 biological symptoms

The current mechanism of
testing and confirming the presence of COVID-19 is by a swab sample of bodily fluids and
blood tests, CT scans of the chest area for monitoring the respiratory rates. Available
techniques for testing include huge and expensive devices, which cannot be ported from one
location to another. There are difficulties related to mass screening and setting up in
different locations. Real-time difficulties are overcome with an individually available mobile application on everyone’s phone. Though this cannot serve as a professional diagnostic technique, its non-invasive and ubiquitous features definitely increase the benefits of the model.
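The individual assessment described above could be prototyped as a simple additive score. The factors follow the text (age, health condition, breathing state); the weights and bands below are purely illustrative assumptions, not epidemiological estimates from the paper.

```python
def mortality_risk_score(age, has_comorbidity, breathing="normal"):
    """Toy additive risk score; every weight here is an illustrative
    assumption, chosen only to show the shape of such an assessment."""
    score = 0
    if age >= 60:
        score += 2
    elif age >= 40:
        score += 1
    if has_comorbidity:
        score += 2
    score += {"normal": 0, "mild": 1, "severe": 3}[breathing]
    return score

print(mortality_risk_score(70, True, "severe"))   # high-risk profile
print(mortality_risk_score(30, False))            # low-risk profile
```

A real deployment would calibrate such weights against outcome data rather than fix them by hand.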
Very minimal investigation has been carried out on the audio/speech processing methodologies. Owing to the source-tracking difficulties, influenza, cold and COVID-19 symptoms have to be carefully segregated. However, sneezing and coughing sound the same in both a normal cold and COVID-19, which makes it unfortunately difficult to find unique patterns among the different conditions. When short audio/speech files are obtained at intervals, they may not yield a reliable solution. The series of information is therefore constituted into a histogram for deriving meaningful insights and extracting useful patterns to make appropriate decisions. Each symptom mentioned in Table 1 indicates the presence of a treatment-demanding symptom in the present crisis. Similarly, all these indications can be recorded with an intelligent sensor used to record audio. A mobile application designed to collect the frequencies with respect to time, together with the histograms, will deliver a higher success rate in distinguishing the symptoms of COVID-19 from a normal cold. With the right AI algorithm, comparisons can yield the COVID-19 symptom. This is delivered by the onset gradient, which tracks the conversion of one symptom to the next in a defined unit of time. The changes will be gradual in a normal condition and will subside after a certain period of time, whereas in COVID-19 the conditions worsen abruptly, in a matter of hours or days, and the addition of new symptoms will highlight the features of the pandemic.
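The onset gradient described above can be sketched as the steepest rise in a per-symptom severity series; the 0–3 severity scale and the flag threshold of 1.5 are assumptions for illustration, not values given in the paper.

```python
import numpy as np

def onset_gradient(severity):
    """Largest step-to-step increase in a symptom-severity time series
    (one entry per defined unit of time, e.g. per day)."""
    return np.diff(np.asarray(severity, dtype=float)).max()

def flag_abrupt(severity, threshold=1.5):
    """Flag a series whose steepest rise exceeds an assumed threshold."""
    return onset_gradient(severity) > threshold

# Illustrative severity scores: 0 = absent, 3 = extreme.
common_cold = [0, 1, 1, 2, 2, 1, 1, 0]   # gradual rise, then subsides
suspect = [0, 0, 1, 3, 3, 3, 3, 3]       # abrupt jump, no improvement
print(flag_abrupt(common_cold), flag_abrupt(suspect))
```

The gradual series stays below the threshold while the abrupt series exceeds it, mirroring the cold-versus-COVID-19 distinction drawn in the text.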
Results and discussions
Machine learning and pattern recognition methodologies back the mobile application to detect and record voice samples at regular intervals. Computer auditing techniques with statistical learning approaches delivered promising results by reducing the chances of errors. Chances of errors and limitations are estimated by a confidence measure calculation and a trust measure for reliable results. The results will be forwarded to the respective officials for diagnosis and treatment. Obtained results should deliver the same outcomes in every test taken, without misleading results. Table 2 indicates the recorded symptoms of cold, influenza and COVID-19. In COVID-19, the intensity of every symptom can be higher, increasing abruptly without gradual improvement. This is a notable sign of COVID-19, which can easily be detected with the algorithm and mobile application.
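A minimal version of the confidence measure mentioned above could treat the classifier's distance from its decision boundary as the confidence score; the 0.25 reporting margin below is an assumed value, not one specified in the paper.

```python
def confidence(p_positive, margin=0.25):
    """Treat a binary classifier's probability as trustworthy only when it
    is far from the 0.5 decision boundary; the margin is an assumption."""
    score = abs(p_positive - 0.5) * 2   # 0 at the boundary, 1 at certainty
    label = "positive" if p_positive >= 0.5 else "negative"
    reliable = score >= margin
    return label, score, reliable

print(confidence(0.92))   # confident positive, safe to forward
print(confidence(0.55))   # too close to the boundary to report
```

Results falling inside the margin would be withheld or re-tested rather than forwarded to officials.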
Figure 2 illustrates the registered waveforms and spectral characteristics after analysing
the healthy persons and COVID-19 affected patients.
Table 2. Symptom and severity of COVID-19 vs cold vs influenza

Symptom                Cold      Influenza   Coronavirus
Breathing difficulty   Mild      Severe      Extreme
Runny nose             Extreme   Severe      Mild
Nasal congestion       Extreme   Mild        Mild
Coughing               Dry       Dry         Dry + extreme
Fatigue                Mild      Extreme     Extreme
Tiredness              Mild      Extreme     Extreme
Figure 2. (a) Healthy person respiratory rate and (b) COVID-19 affected patient respiratory waveforms
Using speech and voice analysis for COVID-19 symptom detection with mobile phone applications can help resolve the current problems: the deficiency of testing kits and the difficulty of maintaining social distancing. Available resources can then be used effectively for the needy. The proposed technique can be helpful in the early detection of symptoms; individuals can detect the symptoms themselves, isolate and prevent the spread. If the symptoms become severe, the respective officials will be informed for necessary action. The simplicity of the model means simple mobile phones suffice, with the data analysed by an algorithm. Screening can be done almost anywhere, and standard GSM modules will be responsible for transmitting the reports to medical officials.
The intention and purpose of the proposed methodology is to screen the mass population through their very own mobile phones. Sophisticated devices manufactured today can be used for various purposes, both official and commercial. Inbuilt microphones are powerful enough to detect and amplify the sounds received from the users. Powered by speech recognition, the algorithm can be enhanced to distinguish the various breathing sounds and coughing, which can thus be related to potential symptoms for detecting the novel coronavirus. “Machine listening” and “computational paralinguistics” can be used with speech recognition algorithms to facilitate the symptom detection process. Moreover, implementing the technique on top of the existing use of microphones during calls and voice commands will consume the same amount of energy from the batteries and would not impose heavy draining of battery power. Numerous research studies on the detection of abnormal sounds using computer audition already exist in prior work.
Conclusion
Speech carries numerous vital and intrinsic pieces of information about the emotional, physiological and psychological characteristics of an individual. Biomarkers can be extracted as potential signs of a disease and used for life-saving diagnosis. Real-time, large-scale screening and analysis of speech signals can be obtained from a simple mobile application, and the signals can be fed into AI frameworks such as ComParE for extracting features from asymptomatic individuals. Until a vaccine is created for COVID-19, the ultimate benefit of having such a system is to produce immediate results, to minimize the spread of the disease and, finally, to create awareness among people. The AI algorithms have been refined to produce accurate results and evaluated across the multiple techniques surveyed in the literature.
References
Borkovec, T.D., Wall, L.R. and Stone, N.M. (1974), “False physiological feedback and the maintenance
of speech anxiety”, Journal of Abnormal Psychology, Vol. 83 No. 2, pp. 164-168.
Burton, D.A., Stokes, K. and Hall, G.M. (2004), “Physiological effects of exercise”, Continuing Education
in Anaesthesia Critical Care and Pain, Vol. 4 No. 6, pp. 185-188.
Cardone, C., Bellavere, F., Ferri, M. and Fedele, D. (1987), “Autonomic mechanisms in the heart rate
response to coughing”, Clinical Science, Vol. 72 No. 1, pp. 55-60.
Gozes, O., Frid-Adar, M., Greenspan, H., Browning, P.D., Zhang, H., Ji, W., Bernheim, A. and Siegel, E.
(2020), “Rapid AI development cycle for the coronavirus (covid-19) pandemic: initial results for
automated detection and patient monitoring using deep learning CT image analysis”, arXiv
preprint arXiv:2003.05037.
Hu, Z., Ge, Q., Jin, L. and Xiong, M. (2020), “Artificial intelligence forecasting of Covid-19 in China”,
arXiv preprint arXiv: 2002.07112.
James, A.P. (2015), “Heart rate monitoring using human speech spectral features”, Human-Centric Computing and Information Sciences, Vol. 5 No. 1, pp. 1-12.
Janott, C., Schmitt, M., Zhang, Y., Qian, K., Pandit, V., Zhang, Z., Heiser, C., Hohenhorst, W., Herzog, M., Hemmert, W. and Schuller, B. (2018), “Snoring classified: the Munich Passau SnoreSound corpus”, Computers in Biology and Medicine, Vol. 94 No. 1, pp. 106-118.
Maghdid, H.S., Ghafoor, K.Z., Sadiq, A.S., Curran, K. and Rabie, K. (2020), “A novel AI-enabled framework to diagnose coronavirus covid19 using smartphone embedded sensors: design study”, arXiv preprint arXiv:2003.07434.
Mesleh, A., Skopin, D., Baglikov, S. and Quteishat, A. (2012), “Heart rate extraction from vowel speech
signals”, Journal of Computer Science and Technology, Vol. 27 No. 6, pp. 1243-1251.
Moradshahi, P., Chatrzarrin, H. and Goubran, R. (2012), “Improving the performance of cough
sound discriminator in reverberant environments using microphone array”, Proceedings
International Instrumentation and Measurement Technology Conference (I2MTC), IEEE,
Graz, pp. 20-23.
Murphy, R.L., Vyshedskiy, A., Power-Charnitsky, V.-A., Bana, D.S., Marinelli, P.M., Wong-Tse, A. and Paciej, R. (2004), “Automated lung sound analysis in patients with pneumonia”, Respiratory Care, Vol. 49 No. 12, pp. 1490-1497.
Pokorny, F.B., Fiser, M., Graf, F., Marschik, P.B. and Schuller, B.W. (2019), “Sound and the city: current perspectives on acoustic geo-sensing in urban environment”, Acta Acustica United with Acustica, Vol. 105 No. 5, pp. 766-778.
Ruskin, J., Bache, R.J., Rembert, J.C. and Greenfield, J.C. Jr (1973), “Pressure-flow studies in man: effect of
respiration on left ventricular stroke volume”, Circulation, Vol. 48 No. 1, pp. 79-85.
Sakai, M. (2015), “Feasibility study on blood pressure estimations from voice spectrum analysis”,
International Journal of Computer Applications, Vol. 109 No. 7.
Schröder, J., Anemüller, J. and Goetze, S. (2016), “Classification of human cough signals using spectro-temporal Gabor filter bank features”, Proceedings International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, Shanghai, pp. 6455-6459.
Schuller, B. and Batliner, A. (2013), Computational Paralinguistics: Emotion, Affect and Personality in
Speech and Language Processing, Wiley.
Schuller, D. and Schuller, B. (2020), “The challenge of automatic eating behaviour analysis and tracking”, in Costin, H.N., Schuller, B.W. and Florea, A.M. (Eds), Recent Advances in Intelligent Assistive Technologies: Paradigms and Applications, Intelligent Systems Reference Library, Springer, pp. 187-204.
Schuller, B.W., Schuller, D.M., Qian, K., Liu, J., Zheng, H. and Li, X. (2020), “COVID-19 and computer
audition: an overview on what speech and sound analysis could contribute in the SARS-CoV-2
corona crisis”, available at: https://arxiv.org/pdf/2003.11117.pdf
Schuller, B., Steidl, S., Batliner, A., Schiel, F., Krajewski, J., Weninger, F. and Eyben, F. (2014), “Medium-
term speaker states – a review on intoxication, sleepiness and the first challenge”, Computer
Speech and Language, Vol. 28 No. 2, pp. 346-374.
Schuller, B.W., Batliner, A., Bergler, C., Pokorny, F., Krajewski, J., Cychosz, M., Vollmann, R., Roelen, S.-D.,
Schnieder, S., Bergelson, E., Cristià, A., Seidl, A., Yankowitz, L., Nöth, E., Amiriparian, S., Hantke, S.
and Schmitt, M. (2019), “The INTERSPEECH 2019 computational paralinguistics challenge: styrian
dialects, continuous sleepiness, baby sounds and orca activity”, Proceedings INTERSPEECH,
ISCA, Graz, pp. 2378-2382.
Schuller, B., Steidl, S., Batliner, A., Bergelson, E., Krajewski, J., Janott, C., Amatuni, A., Casillas, M.,
Seidl, A., Soderstrom, M., Warlaumont, A., Hidalgo, G., Schnieder, S., Heiser, C., Hohenhorst, W.,
Herzog, M., Schmitt, M., Qian, K., Zhang, Y., Trigeorgis, G., Tzirakis, P. and Zafeiriou, S. (2017),
“The INTERSPEECH 2017 computational paralinguistics challenge: addressee, cold and
snoring”, Proceedings INTERSPEECH, ISCA, Stockholm, pp. 3442-3446.
Skopin, D. and Baglikov, S. (2009), “Heartbeat feature extraction from vowel speech signal using 2D
spectrum representation”, Proc. 4th International Conference on Information Technology (ICIT),
Amman.
Song, I. (2015), “Diagnosis of pneumonia from sounds collected using low cost cell phones”, Proceedings International Joint Conference on Neural Networks (IJCNN), IEEE, pp. 1-8.
Speech – The physiology of speech (2020), available at: http://science.jrank.org/pages/6371/Speech-
physiology-speech.html (accessed 22 April 2020).
Trouvain, J. and Truong, K.P. (2015), “Prosodic characteristics of read speech before and after treadmill
running”, 16th Annual Conference of the International Speech Communication Association,
Dresden, September 6-10.
Usman, M., Zubair, M., Ahmed, Z. et al. (2019a), “Cloud based predictive analytics for heart rate detection
and classification from speech spectral features”, Manuscript submitted for publication.
Usman, M., Ahmad, Z. and Wajid, M. (2019b), “Dataset of raw and pre-processed speech signals, mel
frequency cepstral coefficients of speech and heart rate measurements”, 2019 5th International
Conference on Signal Processing, Computing and Control (ISPCC), Solan, pp. 376-379.
Wang, S., Kang, B., Ma, J., Zeng, X., Xiao, M., Guo, J., Cai, M., Yang, J., Li, Y., Meng, X. and Xu, B. (2020),
“A deep learning algorithm using CT images to screen for corona virus disease (Covid-19)”,
medRxiv.
Wei, J.Y., Rowe, J.W., Kestenbaum, A.D. and Ben-Haim, S. (1983), “Post-cough heart rate response: influence of age, sex, and basal blood pressure”, American Journal of Physiology-Regulatory, Integrative and Comparative Physiology, Vol. 245 No. 1, pp. R18-R24.
World Health Organization (2020), available at: www.who.int/news-room/q-a-detail/q-a-coronaviruses#:~:text=symptoms (accessed 22 April 2020).
Corresponding author
Udhaya Sankar S.M. can be contacted at: [email protected]