Challenges in Safeguarding Machine Learning Models Against Adversarial Attacks

Nilambari Mate, Jyoti Yadav

Abstract

Machine Learning (ML) has proven useful in a wide variety of application settings, making it one of the most actively studied areas today. Applications with significant social impact that rely on automated decisions made by ML systems face several concerns about vulnerabilities introduced by machine learning algorithms. Intelligent attackers have strong incentives to tamper with the models and results produced by machine learning algorithms, using adversarial attacks to accomplish their goals. Adversarial attacks can be carried out in several ways, such as contaminating a model's training data, altering its test data, or polluting a central model. When an attacker manipulates a model's training data, the model's ability to make accurate predictions is degraded; this is known as a data poisoning attack. Even a small perturbation can have large side effects on the output of a machine learning model. This paper lists various strategies for poisoning training data, analyzes attack strategies, and summarizes defensive techniques used to prevent or detect data poisoning attacks. It also demonstrates the impact of poisoning attacks on machine learning models through experimental results. Finally, the paper highlights research opportunities for building robust models that resist data poisoning attacks.
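To make the idea of a data poisoning attack concrete, the sketch below injects mislabeled points into the training set of a simple nearest-centroid classifier and compares test accuracy before and after. The synthetic dataset, the choice of classifier, and the placement of the poison points are illustrative assumptions, not taken from the paper's experiments.

```python
import numpy as np

# Minimal sketch of a data-poisoning (injection) attack on a nearest-centroid
# classifier. Dataset, classifier, and poison placement are assumptions made
# for illustration only.

rng = np.random.default_rng(0)

def make_data(n_per_class):
    """Two well-separated Gaussian clusters in 2-D."""
    x0 = rng.normal(-2.0, 1.0, size=(n_per_class, 2))
    x1 = rng.normal(+2.0, 1.0, size=(n_per_class, 2))
    return np.vstack([x0, x1]), np.array([0] * n_per_class + [1] * n_per_class)

def fit_centroids(X, y):
    """'Train' by computing the mean of each class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    """Assign each point to its nearest class centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

X_train, y_train = make_data(200)
X_test, y_test = make_data(200)

# Accuracy of the model trained on clean data.
clean_acc = (predict(fit_centroids(X_train, y_train), X_test) == y_test).mean()

# Attacker injects points far inside class 0's region but labeled as class 1,
# dragging the learned class-1 centroid away from its true location.
X_poison = np.vstack([X_train, np.full((300, 2), -8.0)])
y_poison = np.concatenate([y_train, np.ones(300, dtype=int)])
poison_acc = (predict(fit_centroids(X_poison, y_poison), X_test) == y_test).mean()
```

Even this crude injection, affecting only the training set, sharply degrades accuracy on untouched test data, which is the core threat the paper analyzes: the attacker never needs access to the deployed model, only to the data it learns from.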
