Zayed, Yara, Hasasneh, Ahmad and Tadj, Chakib.
2023.
« Infant cry signal diagnostic system using deep learning and fused features ».
Diagnostics, vol. 13, nº 12.
Scopus citation count: 7.
Tadj-C-2023-27102.pdf - Published version. Usage license: Creative Commons CC BY. Download (6MB).
Abstract
Early diagnosis of medical conditions in infants is crucial for ensuring timely and effective treatment. However, infants are unable to verbalize their symptoms, making it difficult for healthcare professionals to accurately diagnose their conditions. Crying is often the only way for infants to communicate their needs and discomfort. In this paper, we propose a medical diagnostic system for interpreting infants' cry audio signals (CAS) using a combination of features from different audio domains and deep learning (DL) algorithms. The proposed system utilizes a dataset of labeled audio signals from infants with specific pathologies. The dataset covers two infant pathologies with high mortality rates, neonatal respiratory distress syndrome (RDS) and sepsis, as well as healthy crying. The system employs the harmonic ratio (HR) as a prosodic feature, the Gammatone frequency cepstral coefficients (GFCCs) as a cepstral feature, and image-based features extracted from the spectrogram using a pretrained convolutional neural network (CNN); these are fused with the other features to leverage multiple domains and improve the classification rate and accuracy of the model. The different combinations of fused features are then fed into multiple machine learning algorithms, including random forest (RF), support vector machine (SVM), and deep neural network (DNN) models. The evaluation of the system using accuracy, precision, recall, F1-score, the confusion matrix, and the receiver operating characteristic (ROC) curve showed promising results for the early diagnosis of medical conditions in infants based on crying signals alone; the system achieved its highest accuracy of 97.50% using the combination of the spectrogram, HR, and GFCC fused through the deep learning process.
The findings demonstrated the importance of fusing the different audio features, especially the spectrogram, through the learning process rather than by simple concatenation, and of using deep learning algorithms to extract sparsely represented features for the subsequent classification problem, which improves the separation between different infant pathologies. The results outperformed the published benchmark paper by extending the task to multiclass classification (RDS, sepsis, and healthy), investigating a new type of feature (the spectrogram), and applying a new feature-fusion technique: fusion through the learning process using the deep learning model.
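The distinction the abstract draws between simple concatenation and fusion through the learning process can be illustrated with a minimal NumPy sketch. This is not the authors' code: the feature dimensions, the shared projection size, and the random stand-in weights are all illustrative assumptions; in the actual system the projections would be trained jointly with the classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-signal feature vectors (dimensions are illustrative only):
spec_emb = rng.standard_normal(128)   # CNN embedding of the spectrogram
gfcc     = rng.standard_normal(40)    # Gammatone frequency cepstral coefficients
hr       = rng.standard_normal(1)     # harmonic ratio (prosodic scalar)

def dense(x, w, b):
    """One fully connected layer with ReLU activation."""
    return np.maximum(w @ x + b, 0.0)

# Simple concatenation: one flat vector handed directly to a classical
# classifier such as RF or SVM.
concat = np.concatenate([spec_emb, gfcc, hr])        # shape (169,)

# Fusion through the learning process: each modality first passes through
# its own learned projection into a shared space, and the network continues
# from the merged representation (weights here are random stand-ins for
# parameters that would be learned end to end).
d = 32
w_spec, b_spec = rng.standard_normal((d, 128)), np.zeros(d)
w_gfcc, b_gfcc = rng.standard_normal((d, 40)),  np.zeros(d)
w_hr,   b_hr   = rng.standard_normal((d, 1)),   np.zeros(d)

fused = np.concatenate([
    dense(spec_emb, w_spec, b_spec),
    dense(gfcc,     w_gfcc, b_gfcc),
    dense(hr,       w_hr,   b_hr),
])                                                   # shape (96,)

print(concat.shape, fused.shape)
```

Because each modality's projection is trained with the downstream classifier, the fused representation can weight the modalities adaptively instead of letting the highest-dimensional feature dominate, which is one plausible reading of why learned fusion outperformed plain concatenation here.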
Document type: Peer-reviewed journal article
Professor: Professor Tadj, Chakib
Affiliation: Electrical Engineering
Deposited: 26 July 2023 17:53
Last modified: 16 Oct. 2023 16:19
URI: https://espace2.etsmtl.ca/id/eprint/27102