Copyright (c) 2023 Proceedings on Automation in Medical Engineering
This work is licensed under a Creative Commons Attribution 4.0 International License.
Convolutional neural networks have been applied successfully in many areas, but their security is challenged by their vulnerability to adversarial samples, which are crafted by making minor modifications to legitimate samples. Because these adversarial samples can fool a neural network model with a high success rate, they can be developed into an index of a model's classification robustness. In this paper, we use a gradient search method to generate adversarial samples from real samples. The model was trained to recognize surgical tools in cholecystectomy videos. Instead of setting a target misclassification class, we use a non-targeted gradient-space search to find the nearest adversarial class region.
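The core idea of a non-targeted gradient-based attack can be sketched with a minimal example. The code below is an illustrative FGSM-style step on a toy linear softmax classifier, not the paper's actual model or search procedure: the input is perturbed in the direction of the sign of the loss gradient, which pushes the sample away from its current class toward whichever adversarial class region is nearest. All names (`fgsm_nontargeted`, the toy weights `W`, `b`) and the step size `eps` are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_nontargeted(x, y_true, W, b, eps=0.1):
    """One non-targeted gradient-sign step for a linear softmax model.

    Increases the cross-entropy loss of the current class y_true,
    without choosing a specific target class.
    """
    p = softmax(W @ x + b)
    grad_logits = p.copy()
    grad_logits[y_true] -= 1.0          # dL/dz for cross-entropy loss
    grad_x = W.T @ grad_logits          # chain rule back to the input
    return x + eps * np.sign(grad_x)    # step away from the current class

# Toy example: 3-class linear model on a 4-dimensional input.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
x = rng.normal(size=4)
y = int(np.argmax(softmax(W @ x + b)))       # model's clean prediction
x_adv = fgsm_nontargeted(x, y, W, b, eps=0.5)
```

Because the cross-entropy loss is convex in the logits and the logits are linear in the input here, this step is guaranteed not to decrease the loss; for a deep network the same perturbation is only a first-order approximation, which is why iterative gradient search (as used in the paper) is typically more effective.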