On Wednesday, Facebook and Michigan State University debuted a novel method that not only detects deep fakes but also discovers which generative model produced them by reverse engineering the image itself.
Beyond telling you whether an image is a deep fake, many current detection systems can tell whether the image was generated by a model the system saw during its training, known as "closed-set" classification. The problem is that if the image was created by a generative model the detector was never trained on, the system has no prior experience to draw on and can't spot the fake.
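To make that closed-set limitation concrete, here is a minimal sketch in Python. Everything in it is hypothetical for illustration (the generator names, the logits, and the 0.85 threshold are invented, and this is not Facebook's or MSU's actual system): a closed-set classifier must always pick one of its known generators, while an open-set variant can abstain when nothing scores confidently.

```python
import numpy as np

# Hypothetical list of generators the attribution classifier was trained on.
KNOWN_GENERATORS = ["StyleGAN2", "ProGAN", "VQ-VAE"]

def closed_set_attribute(logits):
    """Closed-set attribution: always returns one of the known generators,
    even for an image from a model the classifier never saw."""
    return KNOWN_GENERATORS[int(np.argmax(logits))]

def open_set_attribute(logits, threshold=0.85):
    """Open-set variant: if no known generator scores confidently enough,
    report the image as coming from an unseen model."""
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    if probs.max() < threshold:
        return "unknown generator"
    return KNOWN_GENERATORS[int(np.argmax(probs))]

# An image from an unseen generator tends to produce low, spread-out scores.
logits_unseen = np.array([0.4, 0.5, 0.3])
print(closed_set_attribute(logits_unseen))  # misleadingly answers "ProGAN"
print(open_set_attribute(logits_unseen))    # "unknown generator"
```

The closed-set classifier confidently mislabels the unseen model as one it knows; the open-set version at least admits it is looking at something new.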
[…]
“By generalizing image attribution to open-set recognition, we can infer more information about the generative model used to create a deepfake that goes beyond recognizing that it has not been seen before.”
What's more, this system can trace similarities across a series of deep fakes, enabling researchers to link groups of falsified images back to a single generative source, which should help social media moderators better track coordinated misinformation campaigns.
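One way to picture that grouping step, purely as an illustration and not the researchers' actual method: treat each image's estimated fingerprint as a vector, and cluster images whose fingerprints point in nearly the same direction. The fingerprint values and the 0.9 similarity threshold below are made up.

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_by_source(fingerprints, threshold=0.9):
    """Greedily group fingerprint vectors: two images whose fingerprints
    are nearly parallel are assumed to share a generative source."""
    groups = []  # each group is a list of image indices
    for i, fp in enumerate(fingerprints):
        for group in groups:
            if cosine_sim(fp, fingerprints[group[0]]) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

# Hypothetical fingerprints: images 0 and 2 come from the same model.
fps = [np.array([1.0, 0.1, 0.0]),
       np.array([0.0, 1.0, 0.2]),
       np.array([0.98, 0.12, 0.01])]
print(group_by_source(fps))  # [[0, 2], [1]]
```

A batch of fakes that all cluster together is a hint that one model, and possibly one campaign, produced them.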
[…]
A generative model's hyperparameters are the variables that govern its training process, and together they leave a distinctive signature on the images it produces. So if you can estimate those hyperparameters from an image, you can infer which model used them to create it.
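As a toy illustration of that idea (an assumed setup, not the actual model-parsing network): if an image's fingerprint correlates with the generating model's hyperparameters, even a simple least-squares fit can recover approximate hyperparameter values for images from unseen models. All of the data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each training image carries a fingerprint vector, and
# we know the hyperparameters (say, layer count and a normalized loss weight)
# of the model that generated it.
n_images, fp_dim, n_hparams = 200, 16, 2
fingerprints = rng.normal(size=(n_images, fp_dim))
true_map = rng.normal(size=(fp_dim, n_hparams))
hyperparams = fingerprints @ true_map + 0.01 * rng.normal(size=(n_images, n_hparams))

# Least-squares fit of a linear map: fingerprint -> hyperparameter estimate.
W, *_ = np.linalg.lstsq(fingerprints, hyperparams, rcond=None)

# Estimate the hyperparameters behind an image from an unseen model.
new_fp = rng.normal(size=fp_dim)
print("estimated hyperparameters:", new_fp @ W)
```

The real system learns this mapping with a neural network rather than a linear fit, but the logic is the same: the image betrays the settings of the model that made it.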
[…]
Source: Facebook’s latest AI doesn’t just detect deep fakes, it knows where they came from | Engadget