A novel infrasound and audible machine-learning approach for the diagnosis of COVID-19

Guy Dori, Noa Bachner-Hinenzon, Nour Kasim, Haitem Zaidani, Sivan Haia Perl, Shlomo Maayan, Amin Shneifi, Yousef Kian, Tuvia Tiosano, Doron Adler, Yochai Adir
ERJ Open Research 2022; DOI: 10.1183/23120541.00152-2022
Affiliations: 1) HaEmek medical center, Afula, Israel (G. Dori, N. Kasim, T. Tiosano); 2) Sanolla, Nesher, Israel (N. Bachner-Hinenzon, D. Adler); 3) Rambam medical center, Haifa, Israel (H. Zaidani); 4) Shamir medical center, Zerifin, Israel (S.H. Perl); 5) Barzilai medical center, Ashkelon, Israel (S. Maayan, Y. Kian); 6) Clalit health services, Israel (A. Shneifi); 7) Carmel medical center, Israel (Y. Adir).

Correspondence: Noa Bachner-Hinenzon, Sanolla, Nesher, Israel. E-mail: noa@sanolla.com

Abstract

The COVID-19 outbreak has rapidly spread around the world, causing a global public health and economic crisis. A critical limitation in detecting COVID-19 related pneumonia is that it often manifests as a "silent pneumonia", i.e. pulmonary auscultation with a standard stethoscope sounds "normal". Chest CT is the gold standard for detecting COVID-19 pneumonia; however, radiation exposure, availability and cost preclude its use as a screening tool for COVID-19 pneumonia. In this study we hypothesized that COVID-19 pneumonia that is "silent" to the human ear through a standard stethoscope is detectable using a full-spectrum auscultation device coupled with machine-learning analysis.

Lung sound signals were acquired, using a novel full-spectrum (3–2,000 Hz) stethoscope, from 164 patients with COVID-19 pneumonia, 61 patients with non-COVID-19 pneumonia and 141 healthy subjects. A machine-learning classifier was constructed and the data were classified into three groups: 1) normal lung sounds, 2) COVID-19 pneumonia and 3) non-COVID-19 pneumonia.

Standard auscultation found abnormal lung sounds in 72% of the non-COVID-19 pneumonia patients, compared with only 25% of the COVID-19 pneumonia patients. The classifier's sensitivity and specificity for the detection of COVID-19 pneumonia were 97% and 93%, respectively, when analyzing the sound and infrasound data, and fell to 93% and 80% without the infrasound data (p<0.01 for the difference in ROC curves with and without infrasound).

This study reveals that useful clinical information exists in the infrasound spectrum of COVID-19 related pneumonia, and machine-learning analysis applied to the full spectrum of lung sounds is useful in its detection.

Introduction

Severe acute respiratory syndrome due to the novel coronavirus was initially reported in China in December 2019, and the disease is now termed COVID-19 [1]. COVID-19 may lead to severe pneumonia (COVID-19PN), requiring specialized management in the intensive care unit [2–3]. Diagnosing pneumonia in general, and pneumonia due to SARS-CoV-2 infection in particular, relies on clinical evaluation, including physical examination and imaging studies. However, COVID-19PN presents diagnostic challenges. Clinically, this respiratory condition has been termed "silent pneumonia" because of the paucity of pathological pulmonary auscultation findings with a standard stethoscope [4]. One possible cause for this "silence" may lie in the expertise required of the physician to recognize auscultation findings associated with pneumonia [5–6]. Another may be that the frequency of the sound waves produced in COVID-19PN lies outside the hearing spectrum of the human ear.

Chest computed tomography (CT) has become the gold standard for diagnosing the typical lung pathologies associated with COVID-19PN [7–8]. However, in addition to the radiation exposure, availability and cost of CT, recent studies have shown that although its sensitivity is high, chest CT specificity is rather low [9]. Available CT data show that COVID-19 patients with both mild and severe clinical presentations demonstrate bilateral patchy infiltrates or ground-glass opacities [10–11], for which the differential diagnosis is wide. Surprisingly, similar CT abnormalities were also observed in asymptomatic COVID-19 patients [12]. Thus, predicting the course of COVID-19PN and possible pulmonary deterioration based on CT is difficult. As a result, some patients who initially present with only mild symptoms and well-preserved lung mechanics are diagnosed with severe hypoxemia and breathing difficulties only late in the course of the disease [13–14].

In a recent meta-analysis, the overall pooled sensitivity of lung auscultation for identifying pneumonia-type illnesses was 37%, with 89% specificity [15]. Owing to the paucity of suspicious breath sounds, the sensitivity and specificity for COVID-19PN with a regular stethoscope may be even lower. Pathophysiologically, it is possible that COVID-19PN is "silent" because of the diffuse, peripheral nature of its pulmonary inflammation [3], compared with the localized consolidation of non-COVID-19 pneumonia [16].

CT and lung ultrasound have been proposed as first-line screening for COVID-19PN [17]. However, during a pandemic, as the pressure to hospitalize patients increases and imaging resources become depleted, resource allocation becomes a critical issue. Patients may benefit from a medical device, based on artificial intelligence, that supports less trained medical teams (triage staff and novices) with reliable diagnostic measures and allows specialists to attend to the most difficult patients.

There have been advances in the use of the electronic stethoscope in the past decade, and new systems and adaptations are already available [18], along with new tools for analyzing lung sounds using artificial intelligence [19–20]. Simple machine-learning methods have succeeded in diagnosing COVID-19 from breath and cough sounds crudely collected via a mobile application, with an area under the curve of around 70% [20].

As the standard methods for diagnosing COVID-19PN (pulmonary auscultation and imaging) often lack specificity and sensitivity, we investigated the utility of a novel smart digital stethoscope (VoqX, Sanolla) with which pulmonary sounds in the infrasound range (≤20 Hz) were recorded and analyzed using machine-learning algorithms. This infrasound range is inaudible to the human ear yet contains vital information for the diagnosis of lung pathologies [21, 22, 23]. The VoqX may therefore address the unmet need of diagnosing COVID-19PN without relying on a physician's auscultation expertise.

Methods

Subjects and study design

This study was conducted among patients admitted to selected tertiary medical centers in Israel. The study protocol adhered to the principles of the Declaration of Helsinki and was approved by the institutional ethics board of each hospital (IRB ethics committees of Shamir medical center No. 0136-20-ASF, Rambam medical center 0631-18-RMB, HaEmek medical center 0136-19-EMC and 0160-19-EMC, and Barzilai medical center 0036-20-BRZ). Written informed consent was obtained from all patients. The following data were collected from each patient: age, gender, height, weight, smoking status and the results of lung auscultation. Diagnosis of non-COVID-19 pneumonia was confirmed by one of the following: anamnesis, physical examination, chest X-ray (or CT), blood test (blood count) and oximetry <94%. Diagnosis of COVID-19 was made according to criteria based on the WHO recommendation [24]. Patients were excluded if they met at least one of the following criteria: under 18 years of age, pregnant, any type of chest malformation, under another's guardianship, or weight above 150 kg. The healthy group was composed of healthy volunteers who accompanied patients to the hospital and did not suffer from any chronic or acute respiratory disease.

Respiratory sound data

Real-time respiratory sound recordings were performed during the physical examination while the subjects were in a sitting position. The VoqX stethoscope was placed on top of the patient's clothing; acquisition of respiratory sounds was performed over a single layer of clothing for all patients, with no need for direct skin contact. For every patient, 14 sites (A–N) were auscultated within the anterior, posterior and lateral chest walls (figure 1), and 16 s were recorded at each site. At the end of each examination, data were transferred to a hard drive and stored. In total, 1,974 signals were recorded from healthy subjects (141 subjects × 14 locations), 2,296 from patients with COVID-19PN (164 × 14 locations) and 854 from patients with non-COVID-19 pneumonia (61 × 14 locations).

FIGURE 1 A map of the locations for acquiring the acoustic data.

The auscultation was non-blinded for the COVID-19 and non-COVID-19 pneumonia patients:

  A. In the COVID-19 group, clinicians were aware that patients had COVID-19; despite the possible bias from knowing the diagnosis, they reported abnormal breathing sounds in only 25% of cases.

  B. In the non-COVID-19 pneumonia group, clinicians suspected pneumonia after anamnesis and auscultation. The X-rays confirming the presence of pneumonia were performed afterwards.

  C. In the healthy group, the auscultation was part of a different study recording healthy subjects and patients with COPD and asthma. The physician auscultated the respiratory sounds blindly and reported whether abnormal sounds were detected.

VoqX characteristics

Diagnosis by auscultation

Using the stethoscope, the examining physician was able to capture acoustic waves between 3 Hz and 2,000 Hz, as well as amplify the sound using a simple control button.

Diagnosis by visual representation

The VoqX creates a graphical image of the recorded sound, a "sound signature". The image illustrates the sound of breathing and provides an immediate visual tool for the evaluation of abnormalities.

Diagnosis by machine-learning

The VoqX provides a statistical model of acoustic waves associated with various diseases. After conducting signal analysis, the device provides statistical information as to the probability of the existence (or non-existence) of several diseases. In addition, it estimates the severity of the clinical condition of the examined subject.

Connectivity

The VoqX can connect to a master device through a Bluetooth interface. Thereafter, it is possible to download and exchange data between the device and a computer for backup, analysis and patient tracking (figure 2).

FIGURE 2 Optional cloud connectivity.

Signal processing and machine-learning

To optimize the results of the classifier, all recorded acoustic waves were pre-processed by the following steps (an illustrative sketch of steps 2 and 3 follows this list):

  1. Removing ambient noise using a second microphone embedded in the VoqX, which detected the ambient noise so that it could be dynamically subtracted from the data by an LMS (least mean square) adaptive filter.

  2. Truncating the beginning and end of each acoustic signal to eliminate noise generated by placing the stethoscope on, and removing it from, the body surface. This was performed by applying a threshold to the first derivative of the signal in the first and last second of the data.

  3. Removing clicking noises generated by movements of the stethoscope on the patient's clothing during the examination, by identifying spikes in the signal greater than 3× the median amplitude of the signal.
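
The sketch below (Python) shows one way steps 2 and 3 could be implemented for a single recording. It is a minimal sketch written for illustration, not the published MATLAB code: the function names, the derivative threshold and the clipping strategy are our assumptions, and the LMS noise-cancellation step is omitted because it requires the reference-microphone signal.

    import numpy as np

    def truncate_edges(x, fs, deriv_thresh):
        """Step 2 (sketch): within the first and last second, cut away samples around
        large first-derivative values caused by placing/removing the stethoscope."""
        d = np.abs(np.diff(x, prepend=x[0]))
        start, stop = 0, len(x)
        head = np.flatnonzero(d[:fs] > deriv_thresh)
        if head.size:
            start = head[-1] + 1          # keep data after the last spike in the first second
        tail = np.flatnonzero(d[-fs:] > deriv_thresh)
        if tail.size:
            stop = len(x) - fs + tail[0]  # drop data from the first spike in the last second
        return x[start:stop]

    def suppress_clicks(x, factor=3.0):
        """Step 3 (sketch): clip spikes whose amplitude exceeds factor x median |amplitude|."""
        limit = factor * np.median(np.abs(x))
        return np.clip(x, -limit, limit)

    # Usage on a hypothetical 16 s recording `x` sampled at `fs` Hz:
    # x_clean = suppress_clicks(truncate_edges(x, fs, deriv_thresh=0.05))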

Thereafter, 164 features were calculated in the time and frequency domains (Appendix). The time-domain features were statistical features such as the average, standard deviation and median, and features related to the shape of the sound wave (e.g. skewness, kurtosis). The frequency-domain features included the dominant frequencies in various ranges, together with their magnitudes and areas under the curves. In addition, Mel-frequency cepstrum coefficient (MFCC) features were calculated in the frequency domain. Heart and breathing rates were extracted from the data and added to the classifier. The algorithm is described in the flowchart in figure 3. The features were calculated for windows of 4 s and the output for every chest location was the median value. Next, the 25th and 75th percentiles of the features (14 measurements for the 14 locations) were calculated and served as the final features for every patient. Finally, the final features were ranked according to their p-values for distinguishing between the three groups (Appendix); 144 out of 164 final features had a significant p-value (p<0.05) and the best 50 were chosen. These 50 features were analyzed by a Gaussian support vector machine (SVM) classifier [25] to classify: normal, non-COVID-19 pneumonia and COVID-19PN. The Gaussian SVM was chosen from among 32 different methods because it provided the best performance. For a new breathing sound, the classifier calculates the distance between its features and the features of all three groups and chooses the closest group. The Gaussian SVM was run with an automatic kernel scale and a box constraint of 2, and the features were standardized before the analysis. Since the sample sizes of the three groups differed, a standard cost function was applied to the error calculation to equalize the weight of every group. Moreover, to generate an automatic, objective assessment of performance, the data were randomly divided 12 times into a training set of 60% of the data and a test set of the remaining 40%.
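
As a concrete illustration of the selection-and-classification step described above, the sketch below ranks patient-level feature vectors by one-way ANOVA, keeps the 50 best, and trains a Gaussian (RBF) SVM with a box constraint of 2, standardized features and class weighting to balance the unequal group sizes, evaluated over 12 random 60/40 splits. This is a minimal scikit-learn analogue written by us for clarity, not the authors' MATLAB implementation; the function and label names are placeholders.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def evaluate(X, y, n_runs=12):
        """X: (n_patients, 164) array of final features;
        y: labels ('normal', 'covid_pn', 'non_covid_pn') -- placeholder names."""
        accuracies = []
        for seed in range(n_runs):
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.4, stratify=y, random_state=seed)
            clf = make_pipeline(
                SelectKBest(f_classif, k=50),            # one-way ANOVA ranking, keep best 50
                StandardScaler(),                        # standardize the selected features
                SVC(kernel="rbf", C=2.0, gamma="scale",  # Gaussian kernel, box constraint of 2
                    class_weight="balanced"))            # equalize the unequal group sizes
            clf.fit(X_tr, y_tr)
            accuracies.append(clf.score(X_te, y_te))
        return float(np.mean(accuracies)), float(np.std(accuracies))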

FIGURE 3 Flowchart of the preprocessing and machine-learning algorithm.

Statistical analysis

The p-values of the features were calculated by one-way analysis of variance.

The sensitivity, specificity, accuracy, positive predictive value and negative predictive value were calculated for each run of the classifier: for the cross-validation group (during the learning process in the creation of the classifier) and for the test group that did not participate in the learning. ROC analysis was performed and the areas under the curves with and without the infrasound were compared [26].
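
For clarity, the sketch below shows how one-vs-rest sensitivity, specificity and ROC area under the curve could be computed for a single run from predicted labels and a score for the COVID-19PN class. It is an illustrative assumption on our part, not the authors' code, and the statistical comparison of the ROC curves reported in the paper is not reproduced here.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def class_performance(y_true, y_pred, y_score, positive="covid_pn"):
        """One-vs-rest sensitivity, specificity and ROC AUC for one class.
        y_score: score for the positive class (e.g. SVM decision value)."""
        is_pos = np.asarray(y_true) == positive
        pred_pos = np.asarray(y_pred) == positive
        tp = np.sum(is_pos & pred_pos)
        tn = np.sum(~is_pos & ~pred_pos)
        fp = np.sum(~is_pos & pred_pos)
        fn = np.sum(is_pos & ~pred_pos)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        auc = roc_auc_score(is_pos.astype(int), y_score)
        return sensitivity, specificity, auc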

Results

Sixty-one patients with non-COVID-19 pneumonia, 164 patients with COVID-19 pneumonia and 141 healthy subjects were included in the study (table 1). In 44 of the 61 patients (72%) with non-COVID-19 pneumonia, the physical examination revealed an abnormality on lung auscultation: 59% had only crackles, 7% had only wheezes, 3% had both crackles and wheezes, and in 3% the intensity of breathing sounds was perceived as reduced. In contrast, abnormalities on lung auscultation were detected in only 41 of the 164 (25%) patients with COVID-19PN: 19% had crackles, 4% had wheezes, and in 2% breathing sounds were perceived as reduced.

TABLE 1 Selected patient characteristics and physical findings by lung auscultation. STD = standard deviation.

All breathing sounds were analyzed using machine-learning software. The input was the 164 unique features calculated for every signal. Of the 164 features, 144 were able to distinguish between the three groups by analysis of variance (p<0.05); however, only 50 features were selected, to reduce the calculation time, as adding more than 50 features did not improve classification performance. The 50 features were selected according to their level of performance in distinguishing between the three study groups. An example of features that improved the performance is given in figure 4. Figure 4A shows that adding infrasound data to the analysis helps distinguish between the three groups.

FIGURE 4 Two features (out of 164) that strongly depend on the infrasound: MFCC1 as a function of the magnitude of the breathing frequency for COVID-19 (blue), normal (red) and non-COVID-19 (yellow). A) With infrasound; B) without infrasound.

Figure 5A–C shows the visual sound signature created by the device. Figure 5A shows three dense blue columns that represent three respiratory cycles of a healthy subject. Figure 5B shows the visual sound signature of a COVID-19 patient, in whom the breathing cycle is difficult to detect and higher intensities are found at various frequencies. Figure 5C represents a non-COVID-19 pneumonia patient, in whom very high intensities are seen across all frequencies, due to crackles.

FIGURE 5 Visual sound signature of the VoqX recorded from: a healthy subject (A), a COVID-19PN patient (B) and a patient with non-COVID-19 pneumonia (C). The x-axis is time and spans 8.5 s. The y-axis is frequency from 0 to 1,000 Hz. The colors indicate the intensity of breathing sounds in dBFS (dB full scale).

Sixty percent of the signals were used to train the classifier; the other forty percent served as a test group to investigate the performance of the classifier. This process was randomly repeated 12 times. The results are described in table 2, which contains three kinds of results: detection in the test group that did not participate in the learning process, the cross-validation performed during the learning process to optimize the classifier, and the results of auscultation using an acoustic stethoscope. The results show that the performance of the classifier improves significantly when infrasound is added to the analysis (p<0.01), with the sensitivity of detecting COVID-19PN reaching 97% and the specificity 93% (table 2).

TABLE 2 Performance measures [%] from 12 runs of random sets.

In addition, we collected data from eight asymptomatic subjects who were COVID-19 positive by nasopharyngeal PCR examination (112 signals) and were living in a quarantine facility. These subjects served as a second test group to validate our ability to diagnose asymptomatic patients. The classifier classified seven of them as COVID-19PN and one as non-COVID-19 pneumonia. The X-rays of these patients identified three patients with COVID-19PN, while the other five (although positive for SARS-CoV-2) did not show any COVID-19 findings on X-ray.

Finally, the receiver operating characteristic (ROC) curves with and without infrasound were statistically compared and found to be significantly different for the COVID-19PN group (figure 6A, p<0.01; AUC=0.98 with infrasound and 0.92 without infrasound), while for the patients with non-COVID-19 pneumonia the difference between the ROC curves was not significant (figure 6B; AUC=0.92 with infrasound and 0.90 without infrasound).

FIGURE 6 ROC curves after classification with and without infrasound. A) Detection of "silent" COVID-19PN; B) detection of non-COVID-19 pneumonia.

Discussion

In this study, we demonstrated that using the novel digital stethoscope VoqX to acquire breathing signals in the sound and infrasound frequency ranges, and analyzing the signals with machine-learning methods, can distinguish between patients with COVID-19PN, patients with non-COVID-19 pneumonia and healthy subjects with an accuracy of 92%, whereas standard auscultation with an acoustic stethoscope reached an accuracy of 52% (table 2). Infrasound plays an important role in detecting "silent pneumonia", and the VoqX provides this ability (table 2). This is further emphasized when the first MFCC is plotted against the magnitude of the breathing frequency (figure 4). The MFCC approximates the human auditory system's response at different frequency ranges; the magnitude of the breathing frequency is the magnitude, in the frequency domain, of the peak generated by the breathing rate. Figure 4A shows that adding infrasound data to the analysis helps distinguish between the three groups, whereas without infrasound analysis the features of healthy subjects and patients with COVID-19PN are indistinguishable (figure 4B). Comparing the classification results for data containing infrasound with those for band-limited data shows an improvement in specificity from 80% without infrasound to 93% with the full spectrum including infrasound (table 2).
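
To make this comparison concrete, the sketch below shows one way (our assumption, not the authors' feature code) to compute MFCC-type coefficients from a full-band recording and from a version with the infrasound removed by a 20 Hz high-pass filter; the librosa and SciPy calls and all parameter values are illustrative choices only.

    import librosa
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def mfcc_with_and_without_infrasound(x, fs, n_mfcc=13):
        """x: full-spectrum breath recording (3-2,000 Hz band); fs: sampling rate in Hz."""
        sos = butter(4, 20.0, btype="highpass", fs=fs, output="sos")
        x_audible = sosfiltfilt(sos, x)                  # infrasound (<20 Hz) removed
        full = librosa.feature.mfcc(y=np.asarray(x, dtype=float), sr=fs,
                                    n_mfcc=n_mfcc, n_mels=40, fmin=0.0)
        audible = librosa.feature.mfcc(y=np.asarray(x_audible, dtype=float), sr=fs,
                                       n_mfcc=n_mfcc, n_mels=40, fmin=20.0)
        # Average each coefficient over time to obtain one feature vector per condition
        return full.mean(axis=1), audible.mean(axis=1)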

To the best of our knowledge, the VoqX is the first device that applies machine-learning to the infrasound and sound of breathing to detect COVID-19PN more accurately, reaching an area under the curve of 94% (figure 6).

In summary, the current study shows that by using the full range of respiratory sound waves (audible and infrasound) it is possible to diagnose COVID-19PN. We have shown that adding the infrasound data of breathing improves the accuracy of detecting COVID-19PN ("silent pneumonia"). Furthermore, since COVID-19PN is manifested largely in the infrasound range, while non-COVID-19 pneumonia has most of its acoustic energy in the audible sound range, the novel digital stethoscope VoqX helps distinguish between COVID-19PN and non-COVID-19 pneumonia.

Limitations and future work

Acquiring data at 14 locations on the chest wall is time consuming (16 s were used for the processing of each location). A new classifier is therefore being developed to diagnose lung diseases using only 4 locations on the back. Acquiring the data will require 4 × 16 = 64 s and the analysis will take 30 s. Such a time frame for diagnosis enables the integration of the VoqX into in-clinic practice.

It is noted that there was a significant difference between the age of the healthy subjects and that of the patients. However, as far as we know, there is no need to consider the patient's age when auscultating lung sounds [27].

Funder: Israeli Innovation Authority; Grant: Israeli Government.

Appendix

Features for machine learning and their p-values for separating the 3 groups.

Footnotes

  • IRB ethics committees of: Shamir medical center No. 0136-20-ASF, Rambam medical center 0631-18-RMB, HaEmek medical center 0136-19-EMC and 0160-19-EMC, Barzilai medical center 0036-20-BRZ

  • Data availability: All the data and codes are available: GitHub - Sanolla/Lung_Classifier: Sanolla lungs classifier - Matlab code

  • Conflict of interest: Noa Bachner-Hinenzon and Doron Adler are full-time employees of Sanolla.

  • Conflict of interest: Other authors do not have any conflict of interests.

  • Received March 28, 2022.
  • Accepted July 29, 2022.
  • Copyright ©The authors 2022
http://creativecommons.org/licenses/by-nc/4.0/

This version is distributed under the terms of the Creative Commons Attribution Non-Commercial Licence 4.0. For commercial reproduction rights and permissions contact permissions@ersnet.org

References

  1. Bonilla-Aldana DK, Dhama K, Rodriguez-Morales AJ. Revisiting the one health approach in the context of COVID-19: a look into the ecology of this emerging disease. Adv Anim Vet Sci 2020; 8: 234–237.
  2. Zhu N, Zhang D, Wang W, et al. A novel coronavirus from patients with pneumonia in China, 2019. N Engl J Med 2020; 382: 727–733. doi:10.1056/NEJMoa2001017
  3. The Lancet. Emerging understandings of 2019-nCoV. Lancet 2020; 395: P311. doi:10.1016/S0140-6736(20)30186-0
  4. Tan B, Zhang Y, Gui Y, et al. The possible impairment of respiratory-related neural loops may be associated with the silent pneumonia induced by SARS-CoV-2. J Med Virol 2020; 92: 2269–2271. doi:10.1002/jmv.26158
  5. Loudon R, Murphy RLH. Lung sounds. Am Rev Respir Dis 1984; 130: 663–673.
  6. Pasterkamp H, Kraman SS, Wodicka GR. Respiratory sounds. Advances beyond the stethoscope. Am J Respir Crit Care Med 1997; 156: 974–987. doi:10.1164/ajrccm.156.3.9701115
  7. Ye Z, Zhang Y, Wang Y, et al. Chest CT manifestations of new coronavirus disease 2019 (COVID-19): a pictorial review. Eur Radiol 2020; 30: 4381–4389. doi:10.1007/s00330-020-06801-0
  8. Garin N, Marti C, Scheffler M, et al. Computed tomography scan contribution to the diagnosis of community-acquired pneumonia. Curr Opin Pulm Med 2019; 25: 242–248. doi:10.1097/MCP.0000000000000567
  9. Xu B, Xing Y, Peng J, et al. Chest CT for detecting COVID-19: a systematic review and meta-analysis of diagnostic accuracy. Eur Radiol 2020; 30: 5720–5727. doi:10.1007/s00330-020-06934-2
  10. Wang D, Hu B, Hu C, et al. Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus-infected pneumonia in Wuhan, China. JAMA 2020; 323: 1061–1069. doi:10.1001/jama.2020.1585
  11. Huang C, Wang Y, Li X, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet 2020; 395: 497–506. doi:10.1016/S0140-6736(20)30183-5
  12. Zhang R, Ouyang H, Fu L, et al. CT features of SARS-CoV-2 pneumonia according to clinical presentation: a retrospective analysis of 120 consecutive patients from Wuhan city. Eur Radiol 2020; 30: 4417–4426. doi:10.1007/s00330-020-06854-1
  13. Rodriguez-Morales AJ, Cardona-Ospina JA, Gutiérrez-Ocampo E, et al. Clinical, laboratory and imaging features of COVID-19: a systematic review and meta-analysis. Travel Med Infect Dis 2020; 34: 101623. doi:10.1016/j.tmaid.2020.101623
  14. Xie J, Tong Z, Guan X, et al. Critical care crisis and some recommendations during the COVID-19 epidemic in China. Intensive Care Med 2020; 46: 837–840. doi:10.1007/s00134-020-05979-7
  15. Arts L, Lim EHT, van de Ven PM, et al. The diagnostic accuracy of lung auscultation in adult patients with acute pulmonary pathologies: a meta-analysis. Sci Rep 2020; 10: 7347. doi:10.1038/s41598-020-64405-6
  16. Franquet T. Imaging of pneumonia: trends and algorithms. Eur Respir J 2001; 18: 196–208. doi:10.1183/09031936.01.00213501
  17. Prachand VN, Milner R, Angelos P, et al. Medically necessary, time-sensitive procedures: scoring system to ethically and efficiently manage resource scarcity and provider risk during the COVID-19 pandemic. J Am Coll Surg 2020; 231: 281–288. doi:10.1016/j.jamcollsurg.2020.04.011
  18. Vasudevan RS, Horiuchi Y, Torriani FJ, et al. Persistent value of the stethoscope in the age of COVID-19. Am J Med 2020; 133: 1143–1150. doi:10.1016/j.amjmed.2020.05.018
  19. Mlodzinski E, Stone DJ, Celi LA. Machine learning for pulmonary and critical care medicine: a narrative review. Pulm Ther 2020; 6: 67–77. doi:10.1007/s41030-020-00110-z
  20. Brown C, Chauhan J, Grammenos A, et al. Exploring automatic diagnosis of COVID-19 from crowdsourced respiratory sound data. Proc ACM SIGKDD Int Conf Knowl Discov Data Min 2020; 3474–3484. doi:10.1145/3394486.3412865
  21. Gavriely N, Cugell DW. Breath Sounds Methodology. 2019. doi:10.1201/9780429260544
  22. Gavriely N, Nissan M, Rubin AHE, et al. Spectral characteristics of chest wall breath sounds in normal subjects. Thorax 1995; 50: 1292–1300. doi:10.1136/thx.50.12.1292
  23. Kasim N, Bachner-Hinenzon N, Brikman S, et al. A comparison of the power of breathing sounds signals acquired with a smart stethoscope from a cohort of COVID-19 patients at peak disease, and pre-discharge from the hospital. Biomed Signal Process Control 2022; 78: 103920. doi:10.1016/j.bspc.2022.103920
  24. WHO. Clinical management of severe acute respiratory infection when novel coronavirus (2019-nCoV) infection is suspected: interim guidance. WHO, 2020.
  25. Cortes C, Vapnik V. Support-vector networks. Mach Learn 1995; 20: 273–297.
  26. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 1982; 143. doi:10.1148/radiology.143.1.7063747
  27. Gross V, Dittmar A, Penzel T, et al. The relationship between normal lung sounds, age, and gender. Am J Respir Crit Care Med 2000; 162: 905–909. doi:10.1164/ajrccm.162.3.9905104