E3S Web Conf.
Volume 309 (2021): 3rd International Conference on Design and Manufacturing Aspects for Sustainable Energy (ICMED-ICMPC 2021)
Number of pages: 5
Published online: 07 October 2021
- G. Wen, H. Li, J. Huang, D. Li, E. Xun, “Random Deep Belief Networks for Recognizing Emotions from Speech Signals”, Computational Intelligence and Neuroscience, 2017, 9 (2017). [Google Scholar]
- M. S. Hossain, G. Muhammad, “Emotion Recognition Using Deep Learning Approach from Audio-Visual Emotional Big Data,” Inf. Fus. 49, 10 (2019). [Google Scholar]
- Pawan Kumar Mishra, Arti Rawat, “Emotion Recognition through Speech Using Neural Network”, International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE), Volume 5, Issue 5, pp. 422–428, May 2015. [Google Scholar]
- S. Latif et al., “Direct Modelling of Speech Emotion from Raw Speech”, Proc. Interspeech 2019, 5 (2019). [Google Scholar]
- M. Xu et al., “Speech Emotion Recognition with Multiscale Area Attention and Data Augmentation”, arXiv:2102.01813 (2021). [Google Scholar]
- S.R. Livingstone, F.A. Russo, “The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English”, PLoS ONE 13, 5 (2018). [Google Scholar]
- M.K. Pichora-Fuller, K. Dupuis, “Toronto Emotional Speech Set (TESS)”, Scholars Portal Dataverse, Version 1.0, https://doi.org/10.5683/SP2/E8H2MF (2020). [Google Scholar]
- Survey Audio-Visual Expressed Emotion (SAVEE) Database (http://kahlan.eps.surrey.ac.uk/savee/Download.html) [Google Scholar]
- B. Schuller, S. Reiter, R. Muller, M. Al-Hames, M. Lang, and G. Rigoll, “Speaker independent speech emotion recognition by ensemble classification,” in Proc. of IEEE Int. Conf. on Multimedia and Expo, 4 (2005). [Google Scholar]
- M. K. Sarker, K.M.R. Alam, M. Arifuzzaman, “Emotion recognition from speech based on relevant feature and majority voting,” in Proc. of the Int. Conf. on Informatics, Electronics and Vision, 5 (2014). [Google Scholar]
- C. Huang, W. Gong, W. Fu, and D. Feng, “A research of speech emotion recognition based on deep belief network and SVM”, Mathematical Problems in Engineering, 2014, 7 (2014). [Google Scholar]
- Y. Sri Lalitha, et al., “Efficient Tumor Detection in MRI Brain Images”, Intl. J. O. B. M. Engg., 16, 9 (2020). [Google Scholar]
- H.K. Palo, M.N. Mohanty, “Wavelet based feature combination for recognition of emotions”, Ain Shams Engineering Journal 9, 7 (2018). [Google Scholar]
- A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, K. Kavukcuoglu, “WaveNet: A generative model for raw audio”, in 9th ISCA Speech Synthesis Workshop, 10 (2016). [Google Scholar]
- Y. Sri Lalitha, A. Govardhan, “Semantic Framework for Text Clustering with Neighbor”, Proc. of 48th Ann. Conv. of CSI, A.I.S.C., Springer 2, 10 (2013). [Google Scholar]
- Y.J. Nagendra Kumar, B. Mani Sai, V. Shailaja, S. Renuka, P. Bharathi, “Python NLTK Sentiment Inspection using Naive Bayes Classifier”, IJRTE, 8, 4 (2019). [Google Scholar]
- B. Dhanalaxmi, G. Apparao Naidu, K. Anuradha, “Adaptive PSO based association rule mining technique for software defect classification using ANN”, Procedia Computer Science, 46, 11 (2015). [Google Scholar]