Open Access
E3S Web Conf., Volume 399, 2023
International Conference on Newer Engineering Concepts and Technology (ICONNECT-2023)
Article Number: 04045
Number of pages: 8
Section: Computer Science
DOI: https://doi.org/10.1051/e3sconf/202339904045
Published online: 12 July 2023
  1. Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (pp. 91–99).
  2. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., & Berg, A.C. (2016). SSD: Single shot multibox detector. In European Conference on Computer Vision (pp. 21–37).
  3. Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779–788).
  4. Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3431–3440).
  5. Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 234–241).
  6. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A.L. (2018). DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4), 834–848.
  7. Turk, M., & Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), 71–86.
  8. Viola, P., & Jones, M. (2004). Robust real-time face detection. International Journal of Computer Vision, 57(2), 137–154.
  9. Taigman, Y., Yang, M., Ranzato, M., & Wolf, L. (2014). DeepFace: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1701–1708).
  10. Bolme, D.S., Beveridge, J.R., Draper, B.A., & Lui, Y.M. (2010). Visual object tracking using adaptive correlation filters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2544–2550).
  11. Danelljan, M., Hager, G., Shahbaz Khan, F., & Felsberg, M. (2015). Learning spatially regularized correlation filters for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision (pp. 4310–4318).
  12. Wang, N., Zhang, T., Li, J., & Yang, J. (2018). Multi-cue correlation filters for robust visual tracking. International Journal of Computer Vision, 126(5), 484–501.
  13. Dong, C., Loy, C.C., He, K., & Tang, X. (2014). Learning a deep convolutional network for image super-resolution. In European Conference on Computer Vision (pp. 184–199).
  14. Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., … & Shi, W. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4681–4690).
  15. Lim, B., Son, S., Kim, H., Nah, S., & Lee, K.M. (2017). Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 136–144).
  16. Simonyan, K., & Zisserman, A. (2014). Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems (pp. 568–576).
  17. Carreira, J., & Zisserman, A. (2017). Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4724–4733).
  18. Feichtenhofer, C., Pinz, A., & Zisserman, A. (2017). Detect to track and track to detect. In Proceedings of the IEEE International Conference on Computer Vision (pp. 3038–3046).
  19. Quattoni, A., & Torralba, A. (2009). Recognizing indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 413–420).
  20. Gupta, S., Arbelaez, P., & Malik, J. (2010). Perceptual organization and recognition of indoor scenes from RGB-D images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 564–571).
  21. Zhang, C., Song, H.O., Cui, Y., & Darrell, T. (2018). End-to-end semantic role labeling for video captioning. In Proceedings of the European Conference on Computer Vision (pp. 551–567).
  22. Simonyan, K., & Zisserman, A. (2014). Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems (pp. 568–576).
  23. Carreira, J., & Zisserman, A. (2017). Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4724–4733).
  24. Tran, D., Bourdev, L., Fergus, R., Torresani, L., & Paluri, M. (2015). Learning spatiotemporal features with 3D convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 4489–4497).
  25. Saenko, K., Kulis, B., Fritz, M., & Darrell, T. (2010). Adapting visual category models to new domains. In European Conference on Computer Vision (pp. 213–226).
  26. Ganin, Y., & Lempitsky, V. (2015). Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning (pp. 1180–1189).
  27. Hoffman, J., Tzeng, E., Park, T., Zhu, J.Y., Isola, P., Saenko, K., … & Efros, A.A. (2017). CyCADA: Cycle-consistent adversarial domain adaptation. In International Conference on Machine Learning (pp. 1989–1998).
  28. Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 580–587).
  29. He, K., Gkioxari, G., Dollar, P., & Girshick, R. (2017). Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2961–2969).
  30. Chen, L., Yang, Y., Wang, J., Xu, W., & Yuille, A.L. (2020). BlendMask: Top-down meets bottom-up for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 9728–9737).
  31. Chhabra, G. (2023). Comparison of imputation methods for univariate time series. International Journal on Recent and Innovation Trends in Computing and Communication, 11, 286–292. https://doi.org/10.17762/ijritcc.v11i2s.6148
  32. Appe, S.R.N., Arulselvi, G., & Balaji, G.N. (2023). Tomato ripeness detection and classification using VGG based CNN models. International Journal of Intelligent Systems and Applications in Engineering, 11(1), 296–302. Retrieved from www.scopus.com
  33. Rossi, G., Nowak, K., Nielsen, M., Garda, A., & Silva, J. Enhancing collaborative learning in engineering education with machine learning. Kuwait Journal of Machine Learning, 1(2). Retrieved from http://kuwaitjournals.com/index.php/kjml/article/view/119
  34. Makarand, L.M. (2021). Earlier detection of gastric cancer using augmented deep learning techniques in big data with medical IoT (MIoT). Research Journal of Computer Systems and Engineering, 2(2), 22–26. Retrieved from https://technicaljournals.org/RJCSE/index.php/journal/article/view/28
  35. Wiling, B. (2021). Locust genetic image processing classification model-based brain tumor classification in MRI images for early diagnosis. Machine Learning Applications in Engineering Education and Management, 1(1), 19–23. Retrieved from http://yashikajournals.com/index.php/mlaeem/article/view/6
