Abstract
Persistent cough is a symptom common to a number of respiratory disorders; however, reliably monitoring cough frequency and severity over an extended period of time remains a challenge. Traditional methods rely on subjective evaluation by care providers or on patient self-reports. As an alternative, we propose an objective method for monitoring cough using a wearable microphone. Using the VitaloJAK wearable microphone, we collected 24-hour audio recordings from 9 patients suffering from chronic obstructive pulmonary disease, asthma, and lung cancer. Trained professionals carefully listened to each audio stream and manually labeled each cough event. Using this data, we propose a new neural-network-based cough detection scheme: a pre-processing algorithm estimates the start and end of each cough, and a deep neural network is trained on the resulting cough instances. Experiments demonstrate an average leave-one-participant-out cross-validation specificity and sensitivity of 93.7% and 97.6%, respectively.
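The page does not include code, so the following is only a minimal sketch of the evaluation protocol named in the abstract: leave-one-participant-out cross-validation scored by per-participant specificity and sensitivity, averaged across held-out participants. The feature layout, the scikit-learn `MLPClassifier`, and the function name `lopo_cross_validate` are illustrative assumptions, not the authors' actual pipeline or network architecture.

```python
# Hypothetical sketch of leave-one-participant-out (LOPO) cross-validation
# for a binary cough / non-cough classifier. Data layout, features, and the
# classifier are illustrative assumptions, not the paper's implementation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

def lopo_cross_validate(features, labels, participant_ids):
    """features: (n_segments, n_features) array of per-segment audio features.
    labels: 1 for cough, 0 for non-cough. participant_ids: one id per segment."""
    sensitivities, specificities = [], []
    for held_out in np.unique(participant_ids):
        test_mask = participant_ids == held_out
        # Train on all other participants, test on the held-out participant.
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
        clf.fit(features[~test_mask], labels[~test_mask])
        pred = clf.predict(features[test_mask])
        tn, fp, fn, tp = confusion_matrix(
            labels[test_mask], pred, labels=[0, 1]).ravel()
        sensitivities.append(tp / (tp + fn))    # true positive rate on coughs
        specificities.append(tn / (tn + fp))    # true negative rate on non-coughs
    # Averaging per-participant scores mirrors the "average leave-one-participant-out"
    # figures quoted in the abstract, rather than pooling all segments together.
    return float(np.mean(sensitivities)), float(np.mean(specificities))
```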
| Original language | English |
| --- | --- |
| Title of host publication | 2018 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2018 - Proceedings |
| Publisher | IEEE |
| Pages | 2161-2165 |
| Number of pages | 5 |
| Volume | 2018-April |
| ISBN (Print) | 9781538646588 |
| Publication status | Published - 2018 |
| Event | 2018 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2018, Calgary, Canada, 15 Apr 2018 → 20 Apr 2018 |
Conference
| Conference | 2018 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2018 |
| --- | --- |
| Country/Territory | Canada |
| City | Calgary |
| Period | 15/04/18 → 20/04/18 |
Keywords
- Audio processing
- Cough detection
- Deep learning
- Mobile health sensing
- Respiratory disease