Convolutional Neural Networks: From Medical Image Analysis to Adversarial Attacks and Defenses

Xavier Wilson Selvaraj, Lourdu Mahimai Doss P, Muthumanickam Gunasekaran, Steffi Jass Jeya Chandra Babu

Abstract

Convolutional neural networks (CNNs) are diverse and complex enough to support research across many domains. In this paper, we present a four-part study. First, we implement a capable CNN on the CT KIDNEY Classification Dataset, whose classes are Normal, Cyst, Tumor, and Stone, training with the Adam optimizer and reaching an accuracy of 69% in classifying kidney images. Second, we examine adversarial attacks using Layer-wise Model Distortion with the Basic Iterative Method; strikingly, under this evasion attack the model's accuracy drops to 0.007%, exposing its vulnerability. Third, to harden the CNN against adversarial attacks, we introduce Layer-wise Model Defense techniques; our newly developed Binary Input Detector demonstrates strong robustness, recovering an accuracy of 57% under the evasion attack. Finally, we analyze the trade-off between classification accuracy and vulnerability to evasion attacks, a balance that must be struck for any effective deployment of robust CNNs. This work is a further step in the dynamic landscape of deep-learning security and underlines the importance of implementing adaptive defenses against evolving adversaries.
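The Basic Iterative Method (BIM) named in the abstract can be illustrated with a minimal sketch: repeated small gradient-sign steps on the input, with the accumulated perturbation clipped to an L-infinity ball. The code below applies it to a simple softmax-linear classifier rather than the paper's CNN; all function and parameter names (`bim_attack`, `eps`, `alpha`, `steps`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bim_attack(x, y, W, b, eps=0.1, alpha=0.02, steps=10):
    """BIM sketch against a softmax-linear classifier logits = W @ x + b.

    Repeats small FGSM-style steps that ascend the cross-entropy loss
    for the true label y, clipping the total perturbation to an
    L-infinity ball of radius eps around the original input x.
    Illustrative only; the paper attacks a trained CNN instead.
    """
    x_adv = x.copy()
    onehot = np.eye(len(b))[y]
    for _ in range(steps):
        logits = W @ x_adv + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # gradient of cross-entropy loss w.r.t. the input
        grad = W.T @ (p - onehot)
        x_adv = x_adv + alpha * np.sign(grad)      # step up the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay in the eps-ball
    return x_adv
```

For an image classifier one would additionally clip `x_adv` to the valid pixel range; the essential structure (iterated sign-gradient steps plus projection) is the same.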
