Open Access
E3S Web Conf.
Volume 412, 2023
International Conference on Innovation in Modern Applied Science, Environment, Energy and Earth Studies (ICIES’11 2023)
Article Number 01064
Number of page(s) 13
Published online 17 August 2023
  1. M. Telmem, Y. Ghanou, Estimation of the Optimal HMM Parameters for Amazigh Speech Recognition System Using CMU-Sphinx, Proceedings of the First International Conference on Intelligent Computing in Data Sciences (ICDS 2017).
  2. El Ghazi, C. Daoui, N. Idrissi, Automatic Speech Recognition for Tamazight Enchained Digit, World Journal of Control Science and Engineering, 2 (2014), no. 1, 1–5.
  3. H. Satori, F. El Haoussi, Investigation Amazigh speech recognition using CMU tools, International Journal of Speech Technology, 17, 235 (2014).
  4. Abenaou, F. Ataa Allah, B. Nsiri, Vers un système de reconnaissance automatique de la parole amazighe basé sur les transformations orthogonales paramétrables [Towards an automatic Amazigh speech recognition system based on parameterizable orthogonal transformations], Asinag, 9 (2014), 133–145.
  5. M. Telmem, Y. Ghanou, A Comparative Study of HMMs and CNN Acoustic Model in Amazigh Recognition System (2020). 10.1007/978-981-15-0947-6_50.
  6. S. El Ouahabi, M. Atounti, M. Bellouki, Optimal parameters selected for automatic recognition of spoken Amazigh digits and letters using Hidden Markov Model Toolkit, International Journal of Speech Technology (2020). 10.1007/s10772-020-09762-3.
  7. S. El Ouahabi, M. Atounti, M. Bellouki, Toward an automatic speech recognition system for Amazigh-Tarifit language, International Journal of Speech Technology, 22 (2019), 1–12. 10.1007/s10772-019-09617-6.
  8. S. El Ouahabi, M. Atounti, M. Bellouki, Amazigh speech recognition using triphone modeling and clustering tree decision, Annals of the University of Craiova, 46 (2019), 56–65.
  9. A. Boukous, The Planning of Standardizing Amazigh Language: The Moroccan Experience, IRCAM.
  10. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE (1998), 2278–2324.
  11. A. Krizhevsky, I. Sutskever, G. E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks. In F. Pereira, C. J. C. Burges, L. Bottou, K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, 1097–1105. Curran Associates, Inc. (2012).
  12. R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2013).
  13. O. Parkhi, A. Vedaldi, A. Zisserman, Deep Face Recognition, Proceedings of the British Machine Vision Conference (2015), 41.1–41.12.
  14. G. Hu, Y. Yang, D. Yi, J. Kittler, S. Li, T. Hospedales, When Face Recognition Meets with Deep Learning: An Evaluation of Convolutional Neural Networks for Face Recognition (2015).
  15. A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, L. Fei-Fei, Large-Scale Video Classification with Convolutional Neural Networks (2014), 1725–1732.
  16. K. Simonyan, A. Zisserman, Two-Stream Convolutional Networks for Action Recognition in Videos, Advances in Neural Information Processing Systems (2014).
  17. T. Wang, D. J. Wu, A. Coates, A. Y. Ng, End-to-end text recognition with convolutional neural networks, Proceedings of the 21st International Conference on Pattern Recognition (ICPR 2012), 3304–3308.
  18. Y. Kim, Convolutional Neural Networks for Sentence Classification, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (2014).
  19. Y. LeCun, B. Boser, J. S. Denker, R. E. Howard, W. Habbard, L. D. Jackel, D. Henderson, Handwritten Digit Recognition with a Back-propagation Network. In Advances in Neural Information Processing Systems 2, 396–404. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1990).
  21. A. Ouhnini, B. Aksasse, M. Ouanan, Towards an Automatic Speech-to-Text Transcription System: Amazigh Language, International Journal of Advanced Computer Science and Applications, 14 (2023). 10.14569/IJACSA.2023.0140250.
