Analysis of Multiple Adversarial Machine Learning Attacks on Convolutional Neural Networks

With the growing number of machine learning applications, cyber-attacks on these applications have also increased in recent years. Although machine learning models are used in many fields, including transportation, communications, social media, product recommendations, dynamic pricing, and fraud detection, they remain vulnerable to cyber-attacks. Deep neural networks in particular face a security threat from adversarial examples: inputs that appear normal but cause a misclassification by the network. In this paper, electric load data from ERCOT is first treated as a signal and then converted to an image. We then analyze different gradient-based adversarial attacks on a Convolutional Neural Network (CNN) model designed to classify whether the ERCOT data belongs to the West Station or the Far West Station. We also examine the robustness of the CNN model under the implemented Adversarial Machine Learning (AML) attacks, and we outline future work toward designing more robust models that could resist adversarial attacks.

Keywords - Adversarial Machine Learning, Projected Gradient Descent, Convolutional Neural Network
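To make the gradient-based attack concrete, the sketch below shows Projected Gradient Descent (PGD), the attack named in the keywords, applied to a logistic-regression surrogate rather than the paper's CNN; the surrogate, its weights, and all parameter values here are illustrative assumptions, not the paper's setup. PGD repeatedly takes a signed-gradient ascent step on the loss and projects the perturbed input back into an epsilon-ball around the clean input.

```python
import numpy as np

def pgd_attack(x0, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Minimal PGD sketch against a logistic-regression surrogate
    p(y=1|x) = sigmoid(w.x + b). Names and values are illustrative."""
    x = x0.copy()
    for _ in range(steps):
        z = w @ x + b
        p = 1.0 / (1.0 + np.exp(-z))
        # Gradient of the cross-entropy loss with respect to the input x.
        grad_x = (p - y) * w
        # Signed-gradient ascent step on the loss...
        x = x + alpha * np.sign(grad_x)
        # ...then project back into the L-infinity eps-ball around x0.
        x = np.clip(x, x0 - eps, x0 + eps)
    return x
```

Against a CNN the analytic gradient above would be replaced by backpropagation through the network, but the step-and-project loop is the same.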