Open Access

E3S Web Conf., Volume 430, 2023
15th International Conference on Materials Processing and Characterization (ICMPC 2023)

Article Number: 01065
Number of page(s): 10
DOI: https://doi.org/10.1051/e3sconf/202343001065
Published online: 06 October 2023
- I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, (2014).
- T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, (2016).
- K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. In ICML, (2015).
- P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, (2017).
- C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, (2017).
- A. Nguyen, J. Yosinski, Y. Bengio, A. Dosovitskiy, and J. Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. In CVPR, (2017).
- A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, (2016).
- C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, (2011).
- Z. Yang, X. He, J. Gao, L. Deng, and A. J. Smola. Stacked attention networks for image question answering. In CVPR, (2016).
- H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, (2017).
- H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas. StackGAN++: Realistic image synthesis with stacked generative adversarial networks. arXiv:1710.10916, (2017).
- T. Xu, P. Zhang, Q. Huang, H. Zhang, Z. Gan, X. Huang, and X. He. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In CVPR, (2018).
- Z. Zhang and L. Schomaker. DTGAN: Dual attention generative adversarial networks for text-to-image generation. arXiv:2011.02709, (2020).
- Z. Zhang and L. Schomaker. DiverGAN: An efficient and effective single-stage framework for diverse text-to-image generation. arXiv:2111.09267, (2021).
- X. Wang, T. Qiao, J. Zhu, A. Hanjalic, and O. Scharenborg. S2IGAN: Speech-to-image generation via adversarial learning. arXiv:2005.06968, (2020).
- T.-H. Oh, T. Dekel, C. Kim, I. Mosseri, W. T. Freeman, M. Rubinstein, and W. Matusik. Speech2Face: Learning the face behind a voice. arXiv:1905.09773, (2019).
- S. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele, and H. Lee. Learning what and where to draw. In NIPS, (2016).
- S. Reed, Z. Akata, B. Schiele, and H. Lee. Learning deep representations of fine-grained visual descriptions. In CVPR, (2016).
- S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text-to-image synthesis. In ICML, (2016).
- S. E. Reed, A. van den Oord, N. Kalchbrenner, S. G. Colmenarejo, Z. Wang, Y. Chen, D. Belov, and N. de Freitas. Parallel multiscale autoregressive density estimation. In ICML, (2017).
- J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. arXiv:1712.05884, (2017).
- A. Agrawal, J. Lu, S. Antol, M. Mitchell, C. L. Zitnick, D. Parikh, and D. Batra. VQA: Visual question answering. IJCV, 123(1):4-31, (2017).
- G. Ramesh et al. Feature selection based supervised learning method for network intrusion detection. International Journal of Recent Technology and Engineering (IJRTE), ISSN 2277-3878, Volume 8, Issue 1, May (2019).
- Y. Sara, J. Dumne, A. Reddy Musku, D. Devarapaga, and R. Gajula. A deep learning facial expression recognition based scoring system for restaurants. In 2022 International Conference on Applied Artificial Intelligence and Computing (ICAAIC), Salem, India, pp. 630-634, doi: 10.1109/ICAAIC53929.2022.9793219, (2022).
- G. Ramesh, A. Anugu, K. Madhavi, and P. Surekha. Automated identification and classification of blur images, duplicate images using OpenCV. In: A. K. Luhach, D. S. Jat, K. H. Bin Ghazali, X.-Z. Gao, and P. Lingras (eds), Advanced Informatics for Computing Research (ICAICR 2020), Communications in Computer and Information Science, vol. 1393, Springer, Singapore, https://doi.org/10.1007/978-981-16-3660-8_52, (2021).
- G. Ramesh and J. Praveen. Artificial intelligence (AI) framework for multi-modal learning and decision making towards autonomous and electric vehicles. E3S Web Conf. 309, 01167, DOI: 10.1051/e3sconf/202130901167, (2021).
- D. V. L. Parameswari, C. M. Rao, D. Kalyani, et al. Mining images of high spatial resolution in agricultural environments. Appl Nanosci, https://doi.org/10.1007/s13204-021-01969-3, (2021).
- J. Somasekar and G. Ramesh. Beneficial image preprocessing by contrast enhancement technique for SEM images. IJEMS, Vol. 29(6), December 2022, NIScPR-CSIR, India, (2022).