Open Access
E3S Web Conf., Volume 474, 2024
X International Annual Conference “Industrial Technologies and Engineering” (ICITE 2023)
Article Number: 02022
Number of pages: 9
Section: Applied IT Technologies in Energy and Industry
DOI: https://doi.org/10.1051/e3sconf/202447402022
Published online: 08 January 2024
- P. F. Felzenszwalb, D. P. Huttenlocher, Int. J. of Computer Vision 61, 55–79 (2005)
- A. Schick, R. Stiefelhagen, “3D Pictorial Structures for Human Pose Estimation with Supervoxels”, in IEEE Winter Conference on Applications of Computer Vision (2015)
- B. Sapp, B. Taskar, “MODEC: Multimodal Decomposable Models for Human Pose Estimation”, in CVPR (2013)
- J. Tompson, A. Jain, Y. LeCun, C. Bregler, “Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation”, in NIPS (2014)
- A. Toshev, C. Szegedy, “DeepPose: Human Pose Estimation via Deep Neural Networks”, in IEEE Conference on Computer Vision and Pattern Recognition (2014)
- L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. Gehler, B. Schiele, “DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation”, in IEEE Conference on Computer Vision and Pattern Recognition (2016)
- H.-S. Fang, S. Xie, Y.-W. Tai, C. Lu, “RMPE: Regional Multi-person Pose Estimation” (2018)
- H.-S. Fang, J. Li, H. Tang, C. Xu, H. Zhu, Y. Xiu, Y.-L. Li, C. Lu, “AlphaPose: Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time” (2022)
- Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, Y. Sheikh, “OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields”, in IEEE Transactions on Pattern Analysis and Machine Intelligence (2019)
- B. Cheng, B. Xiao, J. Wang, H. Shi, T. S. Huang, L. Zhang, “HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation”, in CVPR (2020)
- B. Artacho, A. Savakis, “BAPose: Bottom-Up Pose Estimation with Disentangled Waterfall Representations”, in IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (2023)
- PaddleDetection, object detection and instance segmentation toolkit based on PaddlePaddle, https://github.com/PaddlePaddle/PaddleDetection (2019)
- M. A. Fischler, R. A. Elschlager, IEEE Transactions on Computers 22 (1), 67–92 (1973)
- K. Sun, B. Xiao, D. Liu, J. Wang, “Deep High-Resolution Representation Learning for Human Pose Estimation”, in CVPR (2019)
- J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, D. Liu, Y. Mu, M. Tan, X. Wang, W. Liu, B. Xiao, “Deep High-Resolution Representation Learning for Visual Recognition”, in CoRR (2019)
- D. Surís, R. Liu, C. Vondrick, “Learning the Predictability of the Future”, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2021)
- M. Andriluka, U. Iqbal, E. Insafutdinov, L. Pishchulin, A. Milan, J. Gall, B. Schiele, “PoseTrack: A Benchmark for Human Pose Estimation and Tracking”, in IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018)
- A. Khelvas, A. Gilya-Zetinov, E. Konyagin, D. Demyanova, P. Sorokin, R. Khafizov, Advances in Intelligent Systems and Computing 1251, 10–22 (2021)
- Y. Xiu, J. Li, H. Wang, Y. Fang, C. Lu, “Pose Flow: Efficient Online Pose Tracking”, in British Machine Vision Conference (2018)
- A. Antonucci, V. Magnago, L. Palopoli, D. Fontanelli, “Performance Assessment of a People Tracker for Social Robots”, in IEEE Instrumentation and Measurement Society (2019)
- J. Docekal, J. Rozlivek, J. Matas, M. Hoffmann, “Human Keypoint Detection for Close Proximity Human-Robot Interaction”, in IEEE-RAS 21st International Conference on Humanoid Robots (2022)
- O. S. Amosov, S. G. Amosova, S. V. Zhiganov, Y. S. Ivanov, F. F. Pashchenko, Journal of Computer and Systems Sciences International 59 (5), 712–727 (2020)
- O. S. Amosov, S. V. Zhiganov, Y. S. Ivanov, Information Technology in Industry 6 (1), 14–19 (2018)
- G. Yu, Q. Chang, W. Lv, C. Xu, C. Cui, W. Ji, Q. Dang, K. Deng, G. Wang, Y. Du, B. Lai, Q. Liu, X. Hu, D. Yu, Y. Ma, “PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices”, in Computer Vision and Pattern Recognition (2021)
- K. Simonyan, A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition”, in ICLR (2015)
- YOLOv5 in PyTorch, https://github.com/ultralytics/yolov5
- C. Yu, B. Xiao, C. Gao, L. Yuan, L. Zhang, N. Sang, J. Wang, “Lite-HRNet: A Lightweight High-Resolution Network”, in Computer Vision and Pattern Recognition (2021)
- T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, C. L. Zitnick, “Microsoft COCO: Common Objects in Context”, in European Conference on Computer Vision (2014)
- O. S. Amosov, S. G. Amosova, Y. S. Ivanov, S. V. Zhiganov, “Using the Deep Neural Networks for Normal and Abnormal Situation Recognition in the Automatic Access Monitoring and Control System of Vehicles”, in Neural Computing & Applications (2020)
- O. A. Stepanov, O. S. Amosov, IFAC Proceedings Volumes (IFAC-PapersOnline) 37 (12), 213–218 (2004)
- O. S. Amosov, Proceedings of the 2nd International Conference on Intelligent Control and Information Processing (ICICIP 2011) 6008233, 208–213 (2011)
- O. S. Amosov, S. G. Baena, IEEE International Conference on Control and Automation 8003045, 118–123 (2017)
- O. S. Amosov, S. G. Amosova, Y. S. Ivanov, S. V. Zhiganov, Procedia Computer Science 150, 532–539 (2019)
- O. S. Amosov, S. G. Amosova, I. O. Iochkov, “Deep Neural Network Recognition of Rivet Joint Defects in Aircraft Products”, Sensors 22 (9), 3417 (2022)