Open Access
E3S Web Conf.
Volume 448, 2023
The 8th International Conference on Energy, Environment, Epidemiology and Information System (ICENIS 2023)
Article Number 02042
Number of page(s) 17
Section Information System
Published online 17 November 2023
  1. N. A. Giudice, Navigating without vision: principles of blind spatial cognition. Edward Elgar Publishing, 2018.
  2. Y. Zhuang, J. Yang, Y. Li, L. Qi, and N. El-Sheimy, “Smartphone-based indoor localization with bluetooth low energy beacons,” Sensors (Switzerland), vol. 16, no. 5, pp. 1–20, 2016, doi: 10.3390/s16050596.
  3. M. Elgendy, C. Sik-Lanyi, and A. Kelemen, “Making shopping easy for people with visual impairment using mobile assistive technologies,” Appl. Sci., vol. 9, no. 6, 2019, doi: 10.3390/app9061061.
  4. A. Bhowmick and S. M. Hazarika, “An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends,” J. Multimodal User Interfaces, vol. 11, no. 2, pp. 149–172, 2017, doi: 10.1007/s12193-016-0235-6.
  5. E. Kostyra, S. Żakowska-Biemans, K. Śniegocka, and A. Piotrowska, “Food shopping, sensory determinants of food choice and meal preparation by visually impaired people. Obstacles and expectations in daily food experiences,” Appetite, vol. 113, pp. 14–22, 2017, doi: 10.1016/j.appet.2017.02.008.
  6. R. Tapu, B. Mocanu, and T. Zaharia, “DEEP-SEE: Joint object detection, tracking and recognition with application to visually impaired navigational assistance,” Sensors (Switzerland), vol. 17, no. 11, 2017, doi: 10.3390/s17112473.
  7. R. Velázquez, E. Pissaloux, P. Rodrigo, M. Carrasco, N. I. Giannoccaro, and A. Lay-Ekuakille, “An outdoor navigation system for blind pedestrians using GPS and tactile-foot feedback,” Appl. Sci., vol. 8, no. 4, 2018, doi: 10.3390/app8040578.
  8. K. Manjari, M. Verma, and G. Singal, “A survey on Assistive Technology for visually impaired,” Internet of Things (Netherlands), vol. 11, 2020, doi: 10.1016/j.iot.2020.100188.
  9. R. Jafri, S. A. Ali, H. R. Arabnia, and S. Fatima, “Computer vision-based object recognition for the visually impaired in an indoors environment: a survey,” Vis. Comput., vol. 30, no. 11, pp. 1197–1222, 2014, doi: 10.1007/s00371-013-0886-1.
  10. S. A. S. Mohamed, M. H. Haghbayan, T. Westerlund, J. Heikkonen, H. Tenhunen, and J. Plosila, “A Survey on Odometry for Autonomous Navigation Systems,” IEEE Access, vol. 7, pp. 97466–97486, 2019, doi: 10.1109/ACCESS.2019.2929133.
  11. S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez, “Automatic generation and detection of highly reliable fiducial markers under occlusion,” Pattern Recognit., vol. 47, no. 6, pp. 2280–2292, 2014, doi: 10.1016/j.patcog.2014.01.005.
  12. E. Marchand, H. Uchiyama, and F. Spindler, “Pose Estimation for Augmented Reality: A Hands-On Survey,” IEEE Trans. Vis. Comput. Graph., vol. 22, no. 12, pp. 2633–2651, 2016, doi: 10.1109/TVCG.2015.2513408.
  13. S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and R. Medina-Carnicer, “Generation of fiducial marker dictionaries using Mixed Integer Linear Programming,” Pattern Recognit., vol. 51, pp. 481–491, 2016, doi: 10.1016/j.patcog.2015.09.023.
  14. S. Al-Khalifa and M. Al-Razgan, “Ebsar: Indoor guidance for the visually impaired,” Comput. Electr. Eng., vol. 54, pp. 26–39, 2016, doi: 10.1016/j.compeleceng.2016.07.015.
  15. A. Morar et al., “A comprehensive survey of indoor localization methods based on computer vision,” Sensors (Switzerland), vol. 20, no. 9, pp. 1–36, 2020, doi: 10.3390/s20092641.
  16. M. Elgendy, T. Guzsvinecz, and C. Sik-Lanyi, “Identification of markers in challenging conditions for people with visual impairment using convolutional neural network,” Appl. Sci., vol. 9, no. 23, 2019, doi: 10.3390/app9235110.
  17. K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” Lect. Notes Comput. Sci., vol. 8691, part 3, pp. 346–361, 2014, doi: 10.1007/978-3-319-10578-9_23.
  18. R. Girshick, “Fast R-CNN,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2015, pp. 1440–1448, doi: 10.1109/ICCV.2015.169.
  19. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, 2017, doi: 10.1109/TPAMI.2016.2577031.
  20. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 779–788, doi: 10.1109/CVPR.2016.91.
  21. W. Liu et al., “SSD: Single Shot MultiBox Detector,” in Computer Vision – ECCV 2016, Lect. Notes Comput. Sci., vol. 9905, 2016, pp. 21–37, doi: 10.1007/978-3-319-46448-0_2.
  22. P. Soviany and R. T. Ionescu, “Optimizing the Trade-off between Single-Stage and Two-Stage Deep Object Detectors using Image Difficulty Prediction,” in Proc. 20th Int. Symp. Symb. Numer. Algorithms Sci. Comput. (SYNASC 2018), 2018, pp. 209–214, doi: 10.1109/SYNASC.2018.00041.
  23. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788, doi: 10.1109/CVPR.2016.91.
  24. J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6517–6525, doi: 10.1109/CVPR.2017.690.
  25. J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” in Proc. 30th IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR 2017), 2017, pp. 6517–6525, doi: 10.1109/CVPR.2017.690.
  26. J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” arXiv preprint, 2018.
  27. A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” ArXiv, vol. abs/2004.1, 2020.
  28. S. Pang, Z. Yu, and M. A. Orgun, “A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images,” Comput. Methods Programs Biomed., vol. 140, pp. 283–293, 2017, doi: 10.1016/j.cmpb.2016.12.019.
  29. Y. Xiao, J. Wu, Z. Lin, and X. Zhao, “A semi-supervised deep learning method based on stacked sparse auto-encoder for cancer prediction using RNA-seq data,” Comput. Methods Programs Biomed., vol. 166, pp. 99–105, 2018, doi: 10.1016/j.cmpb.2018.10.004.
  30. S. W. Yang and S. K. Lin, “Fall detection for multiple pedestrians using depth image processing technique,” Comput. Methods Programs Biomed., vol. 114, no. 2, pp. 172–182, 2014, doi: 10.1016/j.cmpb.2014.02.001.
  31. J. Tang, Q. Su, B. Su, S. Fong, W. Cao, and X. Gong, “Parallel ensemble learning of convolutional neural networks and local binary patterns for face recognition,” Comput. Methods Programs Biomed., vol. 197, p. 105622, 2020, doi: 10.1016/j.cmpb.2020.105622.
  32. C. González García, D. Meana-Llorián, B. C. Pelayo G-Bustelo, J. M. Cueva Lovelle, and N. Garcia-Fernandez, “Midgar: Detection of people through computer vision in the Internet of Things scenarios to improve the security in Smart Cities, Smart Towns, and Smart Homes,” Futur. Gener. Comput. Syst., vol. 76, pp. 301–313, 2017, doi: 10.1016/j.future.2016.12.033.
  33. B. Al-Madani, F. Orujov, R. Maskeliūnas, R. Damaševičius, and A. Venčkauskas, “Fuzzy logic type-2 based wireless indoor localization system for navigation of visually impaired people in buildings,” Sensors (Switzerland), vol. 19, no. 9, 2019, doi: 10.3390/s19092114.
  34. W. C. S. S. Simões, G. S. Machado, A. M. A. Sales, M. M. de Lucena, N. Jazdi, and V. F. de Lucena, “A review of technologies and techniques for indoor navigation systems for the visually impaired,” Sensors (Switzerland), vol. 20, no. 14, pp. 1–35, 2020, doi: 10.3390/s20143935.
  35. S. Pundlik, M. Tomasi, and G. Luo, “Collision Detection for Visually Impaired from a Body-Mounted Camera,” in 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2013, pp. 41–47, doi: 10.1109/CVPRW.2013.11.
  36. V.-N. Hoang, T.-H. Nguyen, T.-L. Le, T.-H. Tran, T.-P. Vuong, and N. Vuillerme, “Obstacle detection and warning system for visually impaired people based on electrode matrix and mobile Kinect,” Vietnam J. Comput. Sci., vol. 4, no. 2, pp. 71–83, 2016, doi: 10.1007/s40595-016-0075-z.
  37. K. Vetteth, P. Ganesh, and D. Srikar, “Collision avoidance device for visually impaired,” Int. J. Sci. Technol. Res., vol. 2, no. 10, pp. 185–188, 2013.
  38. G. Lee and H. Kim, “A hybrid marker-based indoor positioning system for pedestrian tracking in subway stations,” Appl. Sci., vol. 10, no. 21, pp. 1–20, 2020, doi: 10.3390/app10217421.
  39. Y. Li, S. Zhu, Y. Yu, and Z. Wang, “An improved graph-based visual localization system for indoor mobile robot using newly designed markers,” Int. J. Adv. Robot. Syst., vol. 15, no. 2, pp. 1–15, 2018, doi: 10.1177/1729881418769191.
  40. Y. Bazi, H. Alhichri, N. Alajlan, and F. Melgani, “Scene description for visually impaired people with multi-label convolutional svm networks,” Appl. Sci., vol. 9, no. 23, 2019, doi: 10.3390/app9235062.
  41. M. Elgendy, M. Herperger, T. Guzsvinecz, and C. S. Lanyi, “Indoor Navigation for People with Visual Impairment using Augmented Reality Markers,” in Proc. 10th IEEE Int. Conf. Cogn. Infocommunications (CogInfoCom 2019), 2019, pp. 425–430, doi: 10.1109/CogInfoCom47531.2019.9089960.
  42. G. López, L. Quesada, and L. A. Guerrero, “Alexa vs. Siri vs. Cortana vs. Google Assistant: A Comparison of Speech-Based Natural User Interfaces,” 2017.
  43. D. B. Johnson, “A Note on Dijkstra’s Shortest Path Algorithm,” J. ACM, vol. 20, no. 3, pp. 385–388, 1973, doi: 10.1145/321765.321768.
  44. Aplicaciones de la Visión Artificial, “ArUco: a minimal library for Augmented Reality applications based on OpenCV,” 2020 (accessed Dec. 23, 2020).
  45. S. Kayukawa et al., “BBEEP: A sonic collision avoidance system for blind travellers and nearby pedestrians,” in Proc. CHI Conf. Hum. Factors Comput. Syst., 2019, doi: 10.1145/3290605.3300282.
  46. A. Rodríguez, J. J. Yebes, P. F. Alcantarilla, L. M. Bergasa, J. Almazán, and A. Cela, “Assisting the Visually Impaired: Obstacle Detection and Warning System by Acoustic Feedback,” Sensors (Basel), vol. 12, no. 12, pp. 17476–17496, 2012, doi: 10.3390/s121217476.
  47. S. K. Singh, S. Rathore, and J. H. Park, “Blockiotintelligence: A blockchain-enabled intelligent IoT architecture with artificial intelligence,” Futur. Gener. Comput. Syst., 2020.
  48. K. K. K. Singh Vibha, “Smart Wireless Network Algorithm in the Era of Big Data,” in Lecture Notes in Networks and Systems, 2021, pp. 1–8.
  49. F. S. Bashiri, E. LaRose, J. C. Badger, R. M. D’Souza, Z. Yu, and P. L. Peissig, “Object Detection to Assist Visually Impaired People: A Deep Neural Network Adventure,” in Advances in Visual Computing (ISVC 2018), vol. 11241, Springer International Publishing, 2018, pp. 500–510.
  50. X. Chen and A. L. Yuille, “A Time-Efficient Cascade for Real-Time Object Detection: With applications for the visually impaired,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05) - Workshops, 2005, p. 28, doi: 10.1109/CVPR.2005.399.
  51. M. Afif, R. Ayachi, Y. Said, E. Pissaloux, and M. Atri, “An evaluation of retinanet on indoor object detection for blind and visually impaired persons assistance navigation,” Neural Process. Lett., 2020, doi: 10.1007/s11063-020-10197-9.
  52. M. Afif, R. Ayachi, E. Pissaloux, Y. Said, and M. Atri, “Indoor objects detection and recognition for an ICT mobility assistance of visually impaired people,” Multimed. Tools Appl., 2020, doi: 10.1007/s11042-020-09662-3.
  53. T. Winlock, E. Christiansen, and S. Belongie, “Toward real-time grocery detection for the visually impaired,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, 2010, pp. 49–56, doi: 10.1109/CVPRW.2010.5543576.
  54. R. C. Joshi, S. Yadav, M. K. Dutta, and C. M. Travieso-González, “Efficient Multi-Object Detection and Smart Navigation Using Artificial Intelligence for Visually Impaired People,” Entropy (Basel), vol. 22, no. 9, p. 941, 2020, doi: 10.3390/e22090941.
  55. U. Masud, T. Saeed, H. M. Malaikah, F. U. Islam, and G. Abbas, “Smart Assistive System for Visually Impaired People Obstruction Avoidance Through Object Detection and Classification,” IEEE Access, vol. 10, pp. 13428–13441, 2022, doi: 10.1109/ACCESS.2022.3146320.
  56. B. Cyganek, Object Detection and Recognition in Digital Images. John Wiley & Sons, 2013.
  57. A. Bhalla, S. Goutham, K. Prakash, and T. Sanjana, “VIEW: Optimization of Image Captioning and Facial Recognition on Embedded Systems to Aid the Visually Impaired,” 2021, doi: 10.1109/C2I454156.2021.9689405.
