Open Access

E3S Web Conf., Volume 360, 2022
2022 8th International Symposium on Vehicle Emission Supervision and Environment Protection (VESEP2022)
Article Number: 01092
Number of pages: 11
DOI: https://doi.org/10.1051/e3sconf/202236001092
Published online: 23 November 2022