Boffins from Duke University say they have figured out a way to help protect artificial intelligences from adversarial image-modification attacks: by throwing a few imaginary numbers their way.
[…]
The problem with reliability: adversarial attacks, which modify the input imagery in ways imperceptible to the human eye. In an example from a 2015 paper, a clearly recognisable image of a panda, correctly labelled by the object-recognition algorithm with 57.7 per cent confidence, was modified with noise – making the still-very-clearly-a-panda appear to the algorithm as a gibbon with a worrying 93.3 per cent confidence.
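To see how little noise is needed, here is a minimal sketch of a fast-gradient-sign-style attack on a toy logistic "classifier" – all names (`w`, `x`, `epsilon`) are illustrative and have nothing to do with the panda model in the paper:

```python
import numpy as np

# Hypothetical toy model: a fixed linear classifier over a flattened "image".
rng = np.random.default_rng(0)
w = rng.normal(size=784)          # classifier weights
x = rng.normal(size=784)          # the "image" as a flat vector

def score(v):
    # Probability the model assigns to class 1 (a sigmoid over a dot product).
    return 1.0 / (1.0 + np.exp(-w @ v))

# Nudge every pixel by a tiny epsilon in the direction that most increases
# the model's output - imperceptible per pixel, large in aggregate.
epsilon = 0.05
grad = w * score(x) * (1 - score(x))   # d(score)/dx for the sigmoid model
x_adv = x + epsilon * np.sign(grad)

print(score(x), score(x_adv))          # the adversarial score is pushed up
```

Because every pixel moves by at most `epsilon`, the perturbed input looks essentially unchanged, yet the model's confidence shifts sharply – the same effect that turned the panda into a gibbon.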
Guidance counselling
The problem lies in how the algorithms are trained, and it’s a modification to the training process that could fix it – by introducing a few imaginary numbers into the mix.
The team’s work centres on gradient regularisation, a training technique designed to reduce the “steepness” of the learning terrain – like rolling a boulder along a path to reach the bottom, instead of throwing it over the cliff and hoping for the best. “Gradient regularisation throws out any solution that passes a large gradient back through the neural network,” Yeats explained.
“This reduces the number of solutions that it could arrive at, which also tends to decrease how well the algorithm actually arrives at the correct answer. That’s where complex values can help. Given the same parameters and math operations, using complex values is more capable of resisting this decrease in performance.”
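The idea can be sketched as an extra penalty term in the training loss – a minimal illustration on a toy logistic model, assuming a standard input-gradient penalty (the paper's actual setup will differ, and all names here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=16)    # toy model weights
x = rng.normal(size=16)    # one training input
y = 1.0                    # its label
lam = 0.1                  # regularisation strength (hypothetical value)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_with_gradient_penalty(w, x, y):
    p = sigmoid(w @ x)
    base = -(y * np.log(p) + (1 - y) * np.log(1 - p))  # cross-entropy loss
    grad_x = (p - y) * w          # d(base)/dx in closed form for this model
    penalty = np.sum(grad_x ** 2) # the "flatten the terrain" term
    return base + lam * penalty

print(loss_with_gradient_penalty(w, x, y))
```

A steep input gradient is now directly penalised, so training prefers flatter solutions that small pixel perturbations cannot exploit – at the cost, as Yeats notes, of discarding some otherwise-good solutions.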
By adding just two layers of complex values, made up of real and imaginary number components, to the training process, the team found it could boost the quality of the results by 10 to 20 per cent – and help avoid the problem boulder taking what it thinks is a shortcut and ending up crashing through the roof of a very wrong answer.
“The complex-valued neural networks have the potential for a more ‘terraced’ or ‘plateaued’ landscape to explore,” Yeats added. “And elevation change lets the neural network conceive more complex things, which means it can identify more objects with more precision.”
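What a complex-valued layer looks like in practice can be sketched in a few lines – a minimal illustration of the general idea, not the architecture from the paper: weights and activations carry real and imaginary parts, and a real-valued quantity (here the magnitude) is taken before the conventional layers that follow.

```python
import numpy as np

rng = np.random.default_rng(2)

def complex_dense(x, w, b):
    # numpy handles complex matrix multiplication natively
    return w @ x + b

# Complex-valued input, weights, and bias (sizes are illustrative)
x = rng.normal(size=8) + 1j * rng.normal(size=8)
w = (rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))) / np.sqrt(8)
b = np.zeros(4, dtype=complex)

h = complex_dense(x, w, b)
features = np.abs(h)   # magnitudes feed the real-valued layers downstream
print(features)
```

Doubling up each value into real and imaginary components gives the optimiser extra degrees of freedom for the same number of layers, which is one intuition for why the gradient-regularised network loses less accuracy.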
The paper and a stream of its presentation at the conference are available on the event website.
Source: Imaginary numbers help AIs solve the very real problem of adversarial imagery • The Register