Open Access
E3S Web Conf., Volume 430, 2023
15th International Conference on Materials Processing and Characterization (ICMPC 2023)
Article Number: 01154
Number of pages: 21
DOI: https://doi.org/10.1051/e3sconf/202343001154
Published online: 06 October 2023
- Aggarwal, J. K., & Ryoo, M. S. (2011). Human activity analysis: A review. ACM Computing Surveys (CSUR), 43(3), 16.
- Popoola, A., & Wang, W. (2018). A comprehensive review of human action recognition techniques. Journal of Image and Graphics, 6(3), 152-164.
- Chaaraoui, A. A., & Climent-Pérez, P. (2013). A review on vision techniques applied to human behavior analysis for ambient-assisted living. Expert Systems with Applications, 40(18), 7447-7467.
- Wang, L., & Wang, L. (2019). Action recognition from depth maps: A survey. IEEE Transactions on Circuits and Systems for Video Technology, 29(11), 3294-3313.
- Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., & Fei-Fei, L. (2014). Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1725-1732).
- Simonyan, K., & Zisserman, A. (2014). Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems (NIPS) (pp. 568-576).
- Tran, D., Bourdev, L., Fergus, R., Torresani, L., & Paluri, M. (2015). Learning spatiotemporal features with 3D convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) (pp. 4489-4497).
- Madhu, B., & Venu Gopalachari, M. (2023). Classification of the severity of attacks on Internet of Things networks. In Sentiment Analysis and Deep Learning: Proceedings of ICSADL 2022 (pp. 411-424). Springer Nature Singapore.
- Wang, H., Kläser, A., Schmid, C., & Liu, C. (2011). Action recognition by dense trajectories. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 3169-3176).
- Laptev, I., Marszalek, M., Schmid, C., & Rozenfeld, B. (2008). Learning realistic human actions from movies. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1-8).
- Ji, S., Xu, W., Yang, M., & Yu, K. (2013). 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(1), 221-231.
- Carreira, J., & Zisserman, A. (2017). Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 4724-4733).
- Feichtenhofer, C., Pinz, A., & Zisserman, A. (2016). Convolutional two-stream network fusion for video action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1933-1941).
- Singh, G., Saha, S., Sapienza, M., Torr, P. H., & Cuzzolin, F. (2016). Online real-time multiple spatiotemporal action localisation and prediction. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 297-314).
- Farha, Y. A., & Gall, J. (2019). MS-TCN: Multi-stage temporal convolutional network for action segmentation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) (pp. 2641-2650).
- Sivakumar, S. A., John, T. J., Selvi, T. G., Madhu, B., Shankar, C. U., & Arjun, K. P. (2021). IoT based intelligent attendance monitoring with face recognition scheme. In 2021 5th International Conference on Computing Methodologies and Communication (ICCMC) (pp. 349-353). IEEE.
- Wang, L., Qiao, Y., & Tang, X. (2015). Action recognition with trajectory-pooled deep-convolutional descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 4305-4314).
- Madhu, B., Venu Gopala Chari, M., Vankdothu, R., Silivery, A. K., & Aerranagula, V. (2023). Intrusion detection models for IoT networks via deep learning approaches. Measurement: Sensors, 25, 100641.
- Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., & Van Gool, L. (2016). Temporal segment networks: Towards good practices for deep action recognition. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 20-36).
- Zhang, Y., & Wang, L. (2019). A survey on recent advances in video-based human action recognition. arXiv preprint arXiv:1907.04653.
- Zolfaghari, M., Singh, K., Brox, T., & Schiele, B. (2018). Ecological video classification with the 3D convolutional neural network. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 334-349).
- Tran, D., Wang, H., & Torresani, L. (2018). A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 6450-6459).
- Feichtenhofer, C., Fan, H., Malik, J., & He, K. (2019). SlowFast networks for video recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) (pp. 6201-6210).
- Wang, X., Girshick, R., Gupta, A., & He, K. (2018). Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 7794-7803).
- Li, Y., Qi, H., Dai, J., Ji, X., & Wei, Y. (2020). Spatio-temporal graph for video-based person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 10129-10138).
- Jiang, Z., Xu, J., & Zhang, Y. (2020). STM: Spatial-temporal memory networks for video action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 11128-11137).
- Lin, T. Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2017). Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) (pp. 2980-2988).
- Damodaram, A. K., Venkateswara Reddy, L., Giri, M., & Manikandan, N. (2022). A study on 'LPWAN' technologies for a drone assisted smart energy meter system in 5G-smart city IoT-cloud environment. Journal of Applied Science and Engineering, 26(8), 1195-1203.
- Simitha, K. M., & Subodh Raj, M. S. (2019). IoT and WSN based air quality monitoring and energy saving system in SmartCity project. In 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT) (Vol. 1, pp. 1431-1437). IEEE.