Abstract: Highly expressive models such as deep neural networks (DNNs) have been widely applied across domains. However, recent studies show that DNNs are vulnerable to adversarial examples: carefully crafted inputs that aim to mislead the model's predictions. Most of these studies focus on perturbations added to image pixels, a manipulation that is not physically realizable. Some works try to overcome this limitation by attaching printable 2D patches or painting patterns onto surfaces, but such attacks can potentially be defended against because the 3D shape features remain intact. In this paper, we propose meshAdv to generate "adversarial 3D meshes" from objects that have rich shape features but minimal textural variation. To manipulate the shape or texture of an object, we use a differentiable renderer to compute accurate shading on the shape and propagate gradients back to the mesh. Extensive experiments show that the generated 3D meshes are effective in attacking both classifiers and object detectors, and we evaluate the attack under different viewpoints. In addition, we design a pipeline to perform a black-box attack on a photorealistic renderer with unknown rendering parameters.
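The core mechanism the abstract describes (optimizing mesh vertex positions by propagating an attack loss gradient through a differentiable renderer) can be illustrated with a toy NumPy sketch. Everything below is an illustrative stand-in, not the paper's implementation: the "renderer" is a simple Lambertian-style shading function, the loss is an arbitrary target pattern, and the gradient is taken by finite differences rather than the backpropagation a real differentiable renderer would provide.

```python
import numpy as np

# Hypothetical fixed light direction for a toy Lambertian-style shader.
LIGHT = np.array([0.0, 0.0, 1.0])

def render(vertices):
    """Toy 'renderer': per-vertex intensity = normalized vertex . light."""
    norms = np.linalg.norm(vertices, axis=1, keepdims=True) + 1e-8
    return (vertices / norms) @ LIGHT

def loss(vertices, target):
    """Attack objective: push rendered intensities toward a target pattern."""
    return np.mean((render(vertices) - target) ** 2)

def numerical_grad(vertices, target, eps=1e-5):
    """Finite-difference gradient w.r.t. vertex positions.

    A real differentiable renderer (as used in meshAdv) would supply this
    gradient via backpropagation instead of finite differences.
    """
    g = np.zeros_like(vertices)
    base = loss(vertices, target)
    for idx in np.ndindex(vertices.shape):
        v = vertices.copy()
        v[idx] += eps
        g[idx] = (loss(v, target) - base) / eps
    return g

rng = np.random.default_rng(0)
verts = rng.normal(size=(20, 3))   # toy "mesh": 20 free vertices
target = np.zeros(20)              # arbitrary target shading pattern

before = loss(verts, target)
for _ in range(50):
    # Gradient step on vertex positions: the shape itself is perturbed,
    # which is what distinguishes mesh attacks from pixel perturbations.
    verts -= 2.0 * numerical_grad(verts, target)
after = loss(verts, target)
assert after < before  # the optimized mesh better matches the attack target
```

In the paper's actual setting, the loss would come from a victim classifier or detector applied to the rendered image, and the perturbation would typically be regularized to keep the mesh perceptually close to the original object.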
Citation: Xiao, Chaowei, Dawei Yang, Bo Li, Jia Deng, and Mingyan Liu. "MeshAdv: Adversarial Meshes for Visual Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019): pp. 6891-6900. doi:10.1109/CVPR.2019.00706
Pages: 6891-6900
Type of Material: Conference Article
Journal/Proceeding Title: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.