Following Canada’s lead earlier this week, privacy watchdogs in Britain and Australia today launched a joint investigation into how Clearview AI harvests and uses billions of images it scraped from the internet to train its facial-recognition algorithms.
The startup boasted it had collected a database packed with more than three billion photos downloaded from people’s public social media pages. That data helped train its facial-recognition software, which was then sold to law enforcement as a tool to identify potential suspects.
Cops can feed a snapshot of someone taken from, say, CCTV footage into Clearview's software, which then attempts to identify the person by matching it against images in its database. If there's a positive match, the software links to the matched person's social media profiles, which may reveal personal details such as their name or where they live. It's a way to translate a previously unseen photo of someone's face into an online handle so that person can be tracked down. Conceptually, the matching step boils down to a nearest-neighbour search over face embeddings, as the sketch below illustrates.
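The following is a minimal sketch of that idea only, not Clearview's actual code: it assumes faces have already been reduced to fixed-length embedding vectors by some upstream model (that step is omitted here), and the profile URLs, vector size, and similarity threshold are all invented for the example.

```python
# Toy nearest-neighbour face matching over embeddings. NOT Clearview's code:
# the embedding step is assumed to have already happened, and the database
# simply maps each stored embedding to the profile it was scraped from.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [-1, 1]; higher means the faces look more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def find_match(probe: np.ndarray, database: dict[str, np.ndarray],
               threshold: float = 0.8) -> str | None:
    """Return the profile URL of the closest embedding above threshold,
    or None if nothing in the database is similar enough."""
    best_url, best_score = None, threshold
    for url, stored in database.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_url, best_score = url, score
    return best_url


# Hypothetical example: three "profiles" with random 128-dim embeddings,
# plus a probe that is a slightly noisy copy of one of them.
rng = np.random.default_rng(0)
database = {f"https://example.com/user{i}": rng.normal(size=128)
            for i in range(3)}
probe = database["https://example.com/user1"] + rng.normal(scale=0.1, size=128)
print(find_match(probe, database))  # -> https://example.com/user1
```

A production system would replace the linear scan with an approximate nearest-neighbour index to search billions of vectors quickly, but the principle is the same: the closest embedding above some confidence threshold wins.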
Now, the UK's Information Commissioner's Office (ICO) and the Office of the Australian Information Commissioner (OAIC) are collaborating to examine the New York-based upstart's practices. The investigation will focus "on the company's use of 'scraped' data and biometrics of individuals," the ICO said in a statement.
“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalised data environment,” it added. “No further comment will be made while the investigation is ongoing.”