Open Access
E3S Web Conf., Volume 229, 2021
The 3rd International Conference of Computer Science and Renewable Energies (ICCSRE'2020)
Article Number: 01048
Number of pages: 12
DOI: https://doi.org/10.1051/e3sconf/202122901048
Published online: 25 January 2021
