To refer to this page use:
http://arks.princeton.edu/ark:/88435/pr13f4kn25
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Phan, Buu | - |
dc.contributor.author | Mannan, Fahim | - |
dc.contributor.author | Heide, Felix | - |
dc.date.accessioned | 2023-11-20T21:16:24Z | - |
dc.date.available | 2023-11-20T21:16:24Z | - |
dc.date.issued | 2021 | en_US |
dc.identifier.citation | Phan, Buu, Mannan, Fahim and Heide, Felix. "Adversarial Imaging Pipelines." 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021). doi:10.1109/CVPR46437.2021.01579 | en_US |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | https://openaccess.thecvf.com/content/CVPR2021/html/Phan_Adversarial_Imaging_Pipelines_CVPR_2021_paper.html | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr13f4kn25 | - |
dc.description.abstract | Adversarial attacks play a critical role in understanding deep neural network predictions and improving their robustness. Existing attack methods aim to deceive convolutional neural network (CNN)-based classifiers by manipulating RGB images that are fed directly to the classifiers. However, these approaches typically neglect the influence of the camera optics and image processing pipeline (ISP) that produce the network inputs. ISPs transform RAW measurements to RGB images and traditionally are assumed to preserve adversarial patterns. In fact, these low-level pipelines can destroy, introduce or amplify adversarial patterns that can deceive a downstream detector. As a result, optimized patterns can become adversarial for the classifier after being transformed by a certain camera ISP or optical lens system but not for others. In this work, we examine and develop such an attack that deceives a specific camera ISP while leaving others intact, using the same downstream classifier. We frame this camera-specific attack as a multi-task optimization problem, relying on a differentiable approximation for the ISP itself. We validate the proposed method using recent state-of-the-art automotive hardware ISPs, achieving 92% fooling rate when attacking a specific ISP. We demonstrate physical optics attacks with 90% fooling rate for a specific camera lens. | en_US |
dc.language.iso | en_US | en_US |
dc.relation.ispartof | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | en_US |
dc.rights | Author's manuscript | en_US |
dc.title | Adversarial Imaging Pipelines | en_US |
dc.type | Conference Article | en_US |
dc.identifier.doi | 10.1109/CVPR46437.2021.01579 | - |
dc.identifier.eissn | 2575-7075 | - |
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US |
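The abstract above frames the camera-specific attack as a multi-task optimization problem over a differentiable approximation of the ISP. As a rough illustration of that idea only, and not the authors' implementation, the PyTorch sketch below shows a PGD-style RAW-space perturbation that is pushed to be adversarial after one ISP proxy while staying benign after another, through a shared downstream classifier. Everything here is a hypothetical stand-in: `TinyISPProxy`, `TinyClassifier`, `camera_specific_attack`, and the hyperparameters `eps`, `step`, `iters`, `lam` are invented for this example.

```python
# Illustrative sketch only (not the paper's code): a multi-task, PGD-style attack
# that perturbs a RAW capture so it is misclassified after a *target* differentiable
# ISP proxy but keeps its correct label after another ISP proxy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyISPProxy(nn.Module):
    """Hypothetical differentiable stand-in for a hardware ISP (RAW -> RGB)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, raw):
        return self.net(raw)

class TinyClassifier(nn.Module):
    """Hypothetical downstream RGB classifier shared by all pipelines."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, rgb):
        return self.head(self.features(rgb).flatten(1))

def camera_specific_attack(raw, y_true, isp_target, isp_other, clf,
                           eps=0.03, step=0.005, iters=40, lam=1.0):
    """Find a small RAW-space perturbation that fools `clf` only via `isp_target`."""
    delta = torch.zeros_like(raw, requires_grad=True)
    for _ in range(iters):
        logits_t = clf(isp_target(raw + delta))  # should become wrong
        logits_o = clf(isp_other(raw + delta))   # should stay correct
        # Multi-task trade-off: maximize the loss through the target ISP while
        # penalizing any loss increase through the other ISP.
        loss = -F.cross_entropy(logits_t, y_true) + lam * F.cross_entropy(logits_o, y_true)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)              # keep the perturbation bounded
        delta.grad.zero_()
    return delta.detach()

if __name__ == "__main__":
    raw = torch.rand(1, 1, 64, 64)               # dummy single-channel RAW frame
    y = torch.tensor([3])
    isp_a, isp_b, clf = TinyISPProxy(), TinyISPProxy(), TinyClassifier()
    delta = camera_specific_attack(raw, y, isp_a, isp_b, clf)
    print("prediction via target ISP:", clf(isp_a(raw + delta)).argmax(1).item())
    print("prediction via other ISP: ", clf(isp_b(raw + delta)).argmax(1).item())
```

In the paper, the proxies would be learned differentiable approximations of real automotive hardware ISPs (or camera optics) and the classifier a fixed downstream detector; here all modules are randomly initialized toy networks so the optimization loop is self-contained and runnable.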
Files in This Item:
File | Size | Format |
---|---|---|
AdversarialImagingPipelines.pdf | 6.07 MB | Adobe PDF |