Abstract: Adversarial attacks play a critical role in understanding deep neural network predictions and improving their robustness. Existing attack methods aim to deceive convolutional neural network (CNN)-based classifiers by manipulating the RGB images fed directly to the classifiers. However, these approaches typically neglect the influence of the camera optics and image signal processing (ISP) pipeline that produce the network inputs. ISPs transform RAW measurements into RGB images and traditionally are assumed to preserve adversarial patterns. In fact, these low-level pipelines can destroy, introduce, or amplify adversarial patterns that can deceive a downstream detector. As a result, optimized patterns can become adversarial for the classifier after being transformed by one camera ISP or optical lens system but not by others. In this work, we examine and develop such an attack, which deceives a specific camera ISP while leaving others intact, using the same downstream classifier. We frame this camera-specific attack as a multi-task optimization problem, relying on a differentiable approximation of the ISP itself. We validate the proposed method using recent state-of-the-art automotive hardware ISPs, achieving a 92% fooling rate when attacking a specific ISP. We also demonstrate physical optics attacks with a 90% fooling rate for a specific camera lens.
Citation: Phan, Buu, Mannan, Fahim and Heide, Felix. "Adversarial Imaging Pipelines." 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021). doi:10.1109/CVPR46437.2021.01579
Type of Material: Conference Article
Journal/Proceeding Title: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
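The camera-specific attack described in the abstract can be sketched as a multi-task optimization: perturb the RAW measurement so that the classifier's loss rises behind the targeted ISP while a penalty keeps the prediction behind the other ISP undisturbed, using a differentiable ISP stand-in. The following toy NumPy sketch illustrates only the structure of that objective; the ISPs (`isp_a`, `isp_b`), the linear classifier, the finite-difference gradient, and all constants are illustrative assumptions, not the paper's actual models or hardware pipelines.

```python
import numpy as np

rng = np.random.default_rng(0)

def isp_a(raw):
    # targeted ISP: a simple gamma curve stands in for a differentiable ISP proxy
    return np.clip(raw, 0.0, 1.0) ** (1.0 / 2.2)

def isp_b(raw):
    # "protected" ISP: a different rendering the attack should leave intact
    return np.clip(1.1 * raw, 0.0, 1.0)

# toy 3-class linear classifier on a 12-value "image" (hypothetical)
W = rng.normal(size=(3, 12))

def ce_loss(isp, raw, target):
    # softmax cross-entropy of the classifier applied to the ISP output
    z = W @ isp(raw)
    z -= z.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[target] + 1e-12)

def objective(delta, raw, target, lam=5.0):
    # multi-task trade-off: raise the loss through isp_a,
    # keep it low through isp_b
    adv = raw + delta
    return ce_loss(isp_a, adv, target) - lam * ce_loss(isp_b, adv, target)

def fd_grad(f, x, eps=1e-4):
    # finite-difference gradient (stands in for autodiff through the ISP proxy)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

raw = rng.uniform(0.2, 0.8, size=12)   # toy RAW measurement
target = 0                             # label of the clean capture
f = lambda d: objective(d, raw, target)

delta = np.zeros_like(raw)
base = f(delta)
best_delta, best_val = delta.copy(), base
for _ in range(60):
    # gradient ascent, projected onto a small RAW-domain epsilon-ball
    delta = np.clip(delta + 0.01 * fd_grad(f, delta), -0.1, 0.1)
    val = f(delta)
    if val > best_val:
        best_val, best_delta = val, delta.copy()

print(f"objective: {base:.4f} -> {best_val:.4f}")
```

In this sketch the `lam` weight plays the role of the multi-task balance: larger values favor leaving the second pipeline's prediction unchanged over maximally fooling the first. The paper's actual method replaces the toy pieces with a learned differentiable approximation of a hardware ISP (and lens model) and a CNN classifier.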
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.