Open Access
E3S Web Conf.
Volume 260, 2021
2021 International Conference on Advanced Energy, Power and Electrical Engineering (AEPEE2021)
Article Number 03013
Number of page(s) 11
Section Electrical Engineering and Automation
DOI https://doi.org/10.1051/e3sconf/202126003013
Published online 19 May 2021
  1. Yang G. Z., Huang T. S. Human face detection in a complex background[J]. Pattern Recognition, 1994, 27(1): 53–63. [Google Scholar]
  2. Moghaddam B., Pentland A. Probabilistic visual learning for object recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(7): 696–710. [Google Scholar]
  3. Ren S., He K., Girshick R., et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2017, 39(6): 1137–1149. [Google Scholar]
  4. Girshick R. Fast R-CNN[J]. Computer Science, 2015. [Google Scholar]
  5. Zhang K., Zhang Z., Li Z., et al. Joint face detection and alignment using multitask cascaded convolutional networks[J]. IEEE Signal Processing Letters, 2016, 23(10): 1499–1503. [NASA ADS] [CrossRef] [Google Scholar]
  6. Zhang Y., Lv P., Lu X., et al. Face detection and alignment method for driver on highroad based on improved multi-task cascaded convolutional networks[J]. Multimedia Tools and Applications, 2019, 78(18). [Google Scholar]
  7. Lanitis A., Taylor C. J., Cootes T. F. Automatic interpretation and coding of face image using flexible models[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(7): 743–756. [Google Scholar]
  8. Cootes T. F., Wheeler G. V., Walker K. N., et al. View-based active appearance models[J]. Image & Vision Computing, 2002, 20(9-10): 657–664. [Google Scholar]
  9. Lekdioui K., Messoussi R., Ruichek Y., et al. Facial decomposition for expression recognition using texture/shape and SVM classifier[J]. Signal Processing: Image Communication, 2017. [Google Scholar]
  10. Olson. Design and improvement of face recognition system based on SVM[J]. Network Security Technology and Application, 2019(12). [Google Scholar]
  11. Lowe D. G. Object Recognition from Local Scale-Invariant Features// Proceedings of the International Conference on Computer Vision. Corfu, GREECE, 1999: 1150–1157. [Google Scholar]
  12. Ojala T., Pietikainen M., Harwood D. Performance evaluation of texture measures with classification based on Kullback discrimination of distributions// Proceedings of the 12th International Conference on Pattern Recognition. Beijing, China, 1994: 582–585. [Google Scholar]
  13. Levi G., Hassner T. Emotion recognition in the wild via convolutional neural networks and mapped binary patterns[C]. ACM on International Conference on Multimodal Interaction. New York, USA, 2015: 503–510. [Google Scholar]
  14. Zhang T., Zheng W., Cui Z., et al. A deep neural network driven feature learning method for multi-view facial expression recognition[J]. IEEE Transactions on Multimedia, 2016: 1–1. [Google Scholar]
  15. Zhang C., Wang P., Chen K., et al. Identity-aware convolutional neural networks for facial expression recognition[J]. Systems of Engineering and Electronics Journal, 2017, 28(4): 784–792. [Google Scholar]
  16. Wen Y. M., Ou Y. W., Ling Y. Q. Expression recognition oriented dual channel convolutional neural network[J]. Computer Engineering and Design, 2019, 40(7): 46–52. [Google Scholar]
  17. Freeman W. T., Roth M. Orientation histograms for hand gesture recognition// Proceedings of the International workshop on automatic face and gesture recognition. Zurich, Switzerland, 1995: 296–301. [Google Scholar]
  18. Huang A., Abugharbieh R., Tam R. A novel rotationally invariant region-based hidden Markov model for efficient 3-D image segmentation[J]. IEEE Trans on Image Processing, 2010, 19(10): 2737–2748. [Google Scholar]
  19. Choy S. K., Tong C. S. Statistical wavelet subband characterization based on generalized gamma density and its application in texture retrieval[J]. IEEE Trans on Image Processing, 2010, 19(2): 281–289. [Google Scholar]
  20. Ren J. F., Jiang X. D., Yuan J. S. Noise resistant local binary pattern with an embedded error-correction mechanism[J]. IEEE Trans on Image Processing, 2013, 22(10): 4049–4060. [Google Scholar]
  21. Tan X. Y., Triggs B. Enhanced local texture feature sets for face recognition under difficult lighting conditions[J]. IEEE Trans on Image Processing, 2010, 19(6): 1635–1650. [Google Scholar]
  22. Akata Z., Perronnin F., Harchaoui Z., et al. Good practice in large-scale learning for image classification[J]. IEEE Trans on Pattern Analysis and Machine Intelligence, 2014, 36(3): 507–520. [Google Scholar]
  23. Lai C. Analysis of activation function in convolutional neural networks[J]. Science and Technology Innovation, 2019(33): 35–36. [Google Scholar]
  24. Zhang S., Gong Y. H., Wang J. J. Development of deep convolutional neural network and its application in computer vision[J]. Acta Computa Sinica, 2019, 42(3): 453–482. [Google Scholar]
  25. Zhou F. Y., Jin L. P., Dong J. Review of convolutional neural networks[J]. Acta Computa Sinica, 2017, 40(6): 1229–1251. [Google Scholar]
  26. Gulcehre C., Moczulski M., Denil M., et al. Noisy Activation Functions[J]. 2016. [Google Scholar]
  27. Liu K., Zhang M., Pan Z. Facial expression recognition with CNN ensemble[C]. International Conference on Cyberworlds. Chongqing: IEEE Computer Society, 2016: 163–166. [Google Scholar]
  28. Guo Y., Tao D., Yu J., et al. Deep Neural Networks with Relativity Learning for facial expression recognition[C]. IEEE International Conference on Multimedia & Expo Workshops (ICMEW). Seattle: IEEE, 2016: 1–6. [Google Scholar]
  29. Wang J., Yuan C. Facial expression recognition with multi-scale convolution neural network[C]. Pacific Rim Conference on Multimedia. Xi'an: Springer, 2016: 376–385. [Google Scholar]
