Open Access
E3S Web Conf., Volume 627, 2025
VI International Conference on Geotechnology, Mining and Rational Use of Natural Resources (GEOTECH-2025)
Article Number: 04017
Number of pages: 14
Section: Automation, Digital Transformation and Intellectualization for the Sustainable Development of Mining and Transport Systems, Energy Complexes and Mechanical Engineering
DOI: https://doi.org/10.1051/e3sconf/202562704017
Published online: 16 May 2025