[…]DARPA’s Guaranteeing AI Robustness against Deception (GARD) program […] focuses on a few core objectives, one of which is the development of a testbed for characterizing ML defenses and assessing the scope of their applicability […]
Ensuring that emerging defenses are keeping pace with – or surpassing – the capabilities of known attacks is critical to establishing trust in the technology and ensuring its eventual use. To support this objective, GARD researchers developed a number of resources and virtual tools to help bolster the community’s efforts to evaluate and verify the effectiveness of existing and emerging ML models and defenses against adversarial attacks.
“Other technical communities – like cryptography – have embraced transparency and found that if you are open to letting people take a run at things, the technology will improve,” said Bruce Draper, the program manager leading GARD.
[…]
GARD researchers from Two Six Technologies, IBM, MITRE, the University of Chicago, and Google Research have collaboratively generated a virtual testbed, toolbox, benchmarking dataset, and training materials to enable this effort. Further, they have made these assets available to the broader research community via a public repository.
[…]
Central to the asset list is a virtual platform called Armory that enables repeatable, scalable, and robust evaluations of adversarial defenses. The Armory “testbed” provides researchers with a way to pit their defenses against known attacks within relevant scenarios. It also lets them alter those scenarios, ensuring that defenses deliver repeatable results across a range of attacks.
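As a rough illustration of this workflow (not an official GARD example): Armory evaluations are typically driven by JSON scenario configuration files and launched from the command line. The sketch below writes a minimal, illustrative config and invokes the `armory` CLI; the field names mirror Armory’s scenario config layout, but the specific values are placeholders rather than a runnable official scenario.

```python
# A minimal sketch of launching an Armory evaluation from Python (assumes the
# `armory` CLI is installed). The config keys mirror Armory's scenario format,
# but the values are placeholders, not a runnable official scenario.
import json
import subprocess

config = {
    "_description": "Illustrative evaluation config (placeholder values).",
    "attack": {
        "module": "art.attacks.evasion",
        "name": "FastGradientMethod",
        "kwargs": {"eps": 0.1},
    },
    "defense": None,  # swap in the defense under evaluation here
    "dataset": {"name": "mnist", "batch_size": 64},
    "metric": {"task": ["categorical_accuracy"], "means": True},
    # A complete config also specifies "model", "scenario", and "sysconfig"
    # blocks; they are omitted here for brevity.
}

with open("scenario.json", "w") as f:
    json.dump(config, f, indent=2)

# Armory executes the scenario end to end and logs the results.
subprocess.run(["armory", "run", "scenario.json"], check=True)
```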
Armory utilizes a Python library for ML security called Adversarial Robustness Toolbox, or ART. ART provides tools that enable developers and researchers to defend and evaluate their ML models and applications against a number of adversarial threats, such as evasion, poisoning, extraction, and inference. The toolbox was originally developed outside of the GARD program as an academic-to-academic sharing platform.
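To make that concrete, here is a minimal sketch of ART’s evasion-attack API. The PyTorch model and the random stand-in data are placeholders introduced for illustration; neither is part of the GARD assets.

```python
# A minimal sketch of ART's evasion-attack API (placeholder model and data).
import numpy as np
import torch
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Toy stand-in for a real model; any nn.Module works here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Wrap the model so ART can compute gradients and run attacks against it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Placeholder inputs; in practice these would be real test images.
x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)

# Craft evasion examples with the Fast Gradient Method and compare predictions.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)
clean_preds = classifier.predict(x_test).argmax(axis=1)
adv_preds = classifier.predict(x_adv).argmax(axis=1)
print("predictions changed on", int((clean_preds != adv_preds).sum()), "of 8 inputs")
```

The same wrapped classifier can be handed to ART’s poisoning, extraction, and inference attack classes, which is what makes the toolbox useful as a common evaluation layer beneath Armory.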
[…]
The Adversarial Patches Rearranged In COnText, or APRICOT, benchmark dataset is also available via the repository. APRICOT was created to enable reproducible research on the real-world effectiveness of physical adversarial patch attacks on object detection systems. A unique feature of the dataset is that it lets users project patches in 3D, making it easier to replicate and defeat physical attacks. “Essentially, we’re making it easier for researchers to test their defenses and ensure they are actually solving the problems they are designed to address,” said Draper.
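For a feel of what reproducible patch-attack research involves, the sketch below scores a detector against annotated patches. It assumes COCO-style JSON annotations containing patch bounding boxes; the `detector` callable, file handling, and the metric itself are hypothetical illustrations, not APRICOT’s actual tooling.

```python
# A hedged sketch of scoring a detector against patch annotations. It assumes
# COCO-style JSON annotations with patch bounding boxes; `detector` and the
# file layout are placeholders, not APRICOT's actual API.
import json

def boxes_overlap(a, b, iou_thresh=0.5):
    """Return True when two [x, y, w, h] boxes overlap above the IoU threshold."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return union > 0 and inter / union >= iou_thresh

def patch_fooling_rate(annotation_file, detector):
    """Fraction of annotated patches that trigger a (false) detection."""
    with open(annotation_file) as f:
        coco = json.load(f)
    fooled = total = 0
    for ann in coco["annotations"]:
        total += 1
        # `detector` is a placeholder: image id -> list of predicted [x, y, w, h] boxes.
        preds = detector(ann["image_id"])
        if any(boxes_overlap(ann["bbox"], p) for p in preds):
            fooled += 1
    return fooled / total if total else 0.0
```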
[…]
Often, researchers and developers believe a defense will work across a spectrum of attacks, only to realize it lacks robustness against even minor deviations. To help address this challenge, Google Research has made its Self-Study repository available via the GARD evaluation toolkit. The repository contains “test dummies” – defenses that aren’t designed to be state-of-the-art but represent common ideas or approaches used to build defenses. The “dummies” are known to be broken, but they offer a way for researchers to dive into the defenses and go through the process of properly evaluating their faults.
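The sketch below illustrates the kind of exercise such a self-study walks through, using a hypothetical gradient-masking “dummy” defense (input quantization) rather than code from the repository: a naive attack on the defended model looks ineffective because its gradients are masked, while an adaptive attack on the underlying model is the kind of evaluation intended to expose the flaw.

```python
# A hedged sketch of evaluating a deliberately weak defense: compare naive and
# adaptive robust accuracy. The quantizing defense is a hypothetical stand-in
# for a "test dummy", not code from the Self-Study repository.
import numpy as np
import torch
import torch.nn as nn

from art.attacks.evasion import ProjectedGradientDescent
from art.estimators.classification import PyTorchClassifier

class QuantizingDefense(nn.Module):
    """Toy 'broken' defense: quantize inputs before classifying.

    Rounding has zero gradient almost everywhere, so gradient-based attacks on
    the defended model look weak; a careful evaluation attacks the undefended
    base model instead and transfers the result."""
    def __init__(self, base):
        super().__init__()
        self.base = base

    def forward(self, x):
        return self.base(torch.round(x * 8) / 8)

base = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder model
defended = QuantizingDefense(base)

def wrap(model):
    return PyTorchClassifier(model=model, loss=nn.CrossEntropyLoss(),
                             input_shape=(1, 28, 28), nb_classes=10,
                             clip_values=(0.0, 1.0))

defended_clf, base_clf = wrap(defended), wrap(base)

x = np.random.rand(16, 1, 28, 28).astype(np.float32)  # placeholder test set
y = defended_clf.predict(x).argmax(axis=1)  # labels = clean predictions, so clean accuracy is 1.0

# Naive evaluation: attack the defended model directly (gradients are masked).
x_naive = ProjectedGradientDescent(defended_clf, eps=0.1).generate(x=x)
# Adaptive evaluation: attack the base model, then feed results to the defense.
x_adaptive = ProjectedGradientDescent(base_clf, eps=0.1).generate(x=x)

for name, xa in [("naive", x_naive), ("adaptive", x_adaptive)]:
    acc = (defended_clf.predict(xa).argmax(axis=1) == y).mean()
    print(f"{name} robust accuracy: {acc:.2f}")
```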
[…]
The GARD program’s Holistic Evaluation of Adversarial Defenses repository is available at https://www.gardproject.org/. Interested researchers are encouraged to take advantage of these resources and check back often for updates.
Source: DARPA Open Sources Resources to Aid Evaluation of Adversarial AI Defenses