Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
References
- C. M. Lee, S. S. Narayanan, "Toward Detecting Emotions in Spoken Dialogs", IEEE Transactions on Speech and Audio Processing, 13(2), pp. 293–303, 2005.
- D. Tacconi, O. Mayora, P. Lukowicz, B. Arnrich, C. Setz, G. Troster, C. Haring, "Activity and Emotion Recognition to Support Early Diagnosis of Psychiatric Diseases", Second International Conference on Pervasive Computing Technologies for Healthcare, pp. 100–102, 2008.
- S. Yildirim, S. Narayanan, A. Potamianos, "Detecting Emotional State of a Child in a Conversational Computer Game", Computer Speech & Language, 25(1), pp. 29–44, 2011.
- D. Ververidis, C. Kotropoulos, "Emotional Speech Recognition: Resources, Features, and Methods", Speech Communication, 48(9), pp. 1162–1181, 2006.
- D. Neiberg, K. Elenius, "Automatic Recognition of Anger in Spontaneous Speech", INTERSPEECH 2008, Brisbane, Australia, pp. 2755–2758, 2008.
- S. B. Alex, L. Mary, B. P. Babu, "Attention and Feature Selection for Automatic Speech Emotion Recognition Using Utterance and Syllable-Level Prosodic Features", Circuits, Systems, and Signal Processing, 39(11), pp. 5681–5709, 2020.
- S. Mirsamadi, E. Barsoum, C. Zhang, "Automatic Speech Emotion Recognition Using Recurrent Neural Networks with Local Attention", IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2227–2231, 2017.
- Mustaqeem, M. Sajjad, S. Kwon, "Clustering-Based Speech Emotion Recognition by Incorporating Learned Features and Deep BiLSTM", IEEE Access, 8, pp. 79861–79875, 2020.
- T. W. Sun, "End-to-End Speech Emotion Recognition with Gender Information", IEEE Access, 8, pp. 152423–152438, 2020.
- M. Hamidi, "Emotion Recognition from Persian Speech with Neural Network", International Journal of Artificial Intelligence & Applications, 3, pp. 107–112, 2012.
- S. Hamsa, I. Shahin, Y. Iraqi, N. Werghi, "Emotion Recognition from Speech Using Wavelet Packet Transform Cochlear Filter Bank and Random Forest Classifier", IEEE Access, 8, pp. 96994–97006, 2020.
- F. Fahmi, M. A. Jiwanggi, M. Adriani, "Speech-Emotion Detection in an Indonesian Movie", Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pp. 185–193, 2020.
- P. Cong, C. Wang, Z. Ren, H. Wang, Y. Wang, J. Feng, "Unsatisfied Customer Call Detection with Deep Learning", Proceedings of the 2016 10th International Symposium on Chinese Spoken Language Processing (ISCSLP), Tianjin, China, pp. 1–5, 2016.
- S. Livingstone, F. Russo, "The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS)", PLoS ONE, 13, 2018.