Submitted on 2024-07-07, 13:37 and posted on 2024-07-07, 14:18. Authored by Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Abdulelah Algosaibi, Adel Aldalbahi, Mohammed Alnaeem, Abdulaziz Alhumam, Muhammad Anan.
<p dir="ltr">From recent research work, it has been shown that neural network (NN) classifiers are vulnerable to adversarial examples which contain special perturbations that are ignored by human eyes while can mislead NN classifiers. In this paper, we propose a practical black-box adversarial example generator, dubbed ManiGen. ManiGen does not require any knowledge of the inner state of the target classifier. It generates adversarial examples by searching along the manifold, which is a concise representation of input data. Through extensive set of experiments on different datasets, we show that (1) adversarial examples generated by ManiGen can mislead standalone classifiers by being as successful as the state-of-the-art white-box generator, Carlini, and (2) adversarial examples generated by ManiGen can more effectively attack classifiers with state-of-the-art defenses.</p><h2>Other Information</h2><p dir="ltr">Published in: IEEE Access<br>License: <a href="https://creativecommons.org/licenses/by/4.0" target="_blank">https://creativecommons.org/licenses/by/4.0</a><br>See article on publisher's website: <a href="https://dx.doi.org/10.1109/access.2020.3029270" target="_blank">https://dx.doi.org/10.1109/access.2020.3029270</a></p>
Funding
Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia (1120).
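As context for the abstract's description of searching along the manifold, the sketch below illustrates the general idea behind such attacks: perturb a latent code so every candidate stays on a learned low-dimensional representation of the data, and query the target classifier only as a black box for its predicted label. This is a minimal illustration under assumed components, not ManiGen's actual procedure; the decoder, classifier query, and search parameters (`decode`, `query_classifier`, `sigma`) are all hypothetical stand-ins.

```python
import numpy as np

# Illustrative stand-ins only -- not ManiGen's actual API. In the paper's
# setting, the manifold would come from a model trained on the input data,
# and the target classifier would be queried as a black box.
rng = np.random.default_rng(0)
W = rng.normal(size=(784, 16))  # toy linear "decoder" weights

def decode(z):
    """Map a 16-dim latent code to a 784-dim input (toy stand-in for a
    pre-trained decoder, i.e., the manifold model)."""
    return np.clip(W @ z, 0.0, 1.0)

def query_classifier(x):
    """Label-only black-box query (toy rule standing in for the target NN)."""
    return int(x[:392].mean() > x[392:].mean())

def manifold_search(z0, steps=500, sigma=0.05):
    """Random search along the manifold: perturb the latent code and keep
    the candidate closest to the original input whose label flips."""
    x0 = decode(z0)
    y0 = query_classifier(x0)
    z, best_z, best_dist = z0.copy(), None, np.inf
    for _ in range(steps):
        cand = z + sigma * rng.normal(size=z.shape)
        x = decode(cand)
        if query_classifier(x) != y0:  # label flipped: record if closer
            dist = np.linalg.norm(x - x0)
            if dist < best_dist:
                best_z, best_dist = cand, dist
        else:
            z = cand  # stay on the manifold and keep exploring
    return best_z, best_dist

adv_z, dist = manifold_search(rng.normal(size=16))
print("found adversarial latent:", adv_z is not None, "| L2 distance:", dist)
```

The point of searching in latent space is that every candidate decodes to an input lying on the learned data manifold, and the classifier is consulted only through label queries, which is consistent with the abstract's claim that no knowledge of the target's inner state is required.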