E3S Web of Conferences, Volume 229 (2021)
The 3rd International Conference of Computer Science and Renewable Energies (ICCSRE'2020)
Article Number 01004, 9 pages
DOI: https://doi.org/10.1051/e3sconf/202122901004
Published online: 25 January 2021
Open Access
