Unlike adversarial models that attack AIs “from the inside”, attacks developed for black boxes could be used against closed systems such as autonomous cars, security systems (facial recognition, for example), or speech recognition (Alexa or Cortana).

The tool, called Foolbox, is currently under review for presentation at next year’s International Conference on Learning Representations (kicking off at the end of April). Wieland Brendel, Jonas Rauber and Matthias Bethge of the Eberhard Karls University Tübingen, Germany, explained in a paper on arXiv that Foolbox implements a “decision-based” attack called the boundary attack, which “starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial”.
“Its basic operating principle – starting from a large perturbation and successively reducing it – inverts the logic of essentially all previous adversarial attacks. Besides being surprisingly simple, the boundary attack is also extremely flexible”, they wrote.
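To make that operating principle concrete, below is a minimal, simplified sketch of such a decision-based loop in Python. It is not the authors’ Foolbox implementation; the `predict_label(x)` oracle, the step sizes, and the helper name `boundary_attack` are illustrative assumptions, and a real attack would adapt its step sizes as it goes.

```python
import numpy as np

def boundary_attack(original, predict_label, steps=5000,
                    spherical_step=0.01, source_step=0.01, seed=0):
    """Simplified sketch of a decision-based boundary attack.

    `predict_label(x)` is assumed to return only the model's final
    class label for input x -- the sole access this kind of attack needs.
    """
    rng = np.random.default_rng(seed)
    true_label = predict_label(original)

    # 1. Start from a large adversarial perturbation: random noise
    #    that the model already misclassifies.
    adversarial = rng.uniform(0.0, 1.0, size=original.shape)
    while predict_label(adversarial) == true_label:
        adversarial = rng.uniform(0.0, 1.0, size=original.shape)

    for _ in range(steps):
        distance = np.linalg.norm(original - adversarial)

        # 2. Random "orthogonal" step, projected back onto a sphere of the
        #    current distance around the original input.
        candidate = adversarial + rng.normal(size=original.shape) * spherical_step * distance
        candidate = original - (original - candidate) * distance / np.linalg.norm(original - candidate)

        # 3. Small step towards the original input, shrinking the perturbation.
        candidate = candidate + source_step * (original - candidate)
        candidate = np.clip(candidate, 0.0, 1.0)

        # 4. Keep the candidate only if the model still misclassifies it,
        #    i.e. the perturbation shrinks while staying adversarial.
        if predict_label(candidate) != true_label:
            adversarial = candidate

    return adversarial
```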
For example, “transfer-based attacks” need access to the same training data as the models they’re attacking, and rely on “cumbersome substitute models”.
Gradient-based attacks, the paper claimed, also need detailed knowledge about the target model, while score-based attacks need access to the target model’s confidence scores.
The boundary attack, the paper said, only needs to see the final decision of a machine learning model – the class label it applies to an input, for example, or, in a speech recognition model, the transcribed sentence.
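To illustrate what “final decision only” means in practice, here is a small hedged sketch of how a black-box classifier might be wrapped so the attacker sees nothing but its predicted label. The names `make_label_oracle` and `model_logits_fn` are hypothetical; the point is that the confidence scores a score-based attack would need never leave the wrapper.

```python
import numpy as np

def make_label_oracle(model_logits_fn):
    """Expose only a classifier's final decision (its argmax class label).

    `model_logits_fn(batch)` stands in for any black-box model that
    internally computes logits or confidence scores for a batch of inputs.
    """
    def predict_label(x):
        logits = model_logits_fn(x[np.newaxis, ...])   # add a batch dimension
        return int(np.argmax(logits, axis=-1)[0])      # label only, no scores
    return predict_label
```

Under these assumptions, an oracle built this way could be passed straight to the loop sketched above, e.g. `boundary_attack(image, make_label_oracle(my_model))`.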
Source: Another AI attack, this time against ‘black box’ machine learning • The Register