Amazon claims it reviews the software created by third-party developers for its Alexa voice assistant platform, yet US academics were able to create more than 200 policy-violating Alexa Skills and get them certified.
In a paper [PDF] presented at the US Federal Trade Commission’s PrivacyCon 2020 event this week, Clemson University researchers Long Cheng, Christin Wilson, Song Liao, Jeffrey Alan Young, Daniel Dong, and Hongxin Hu describe the ineffectiveness of Amazon’s Skills approval process.
The researchers have also set up a website to present their findings.
Like Android and iOS apps, Alexa Skills have to be submitted for review before they’re available to be used with Amazon’s Alexa service. Also like Android and iOS, Amazon’s review process sometimes misses rule-breaking code.
In the researchers’ test, “sometimes” was every time: the e-commerce giant’s review system granted approval for every one of 234 rule-flouting Skills submitted over a 12-month period.
“Surprisingly, the certification process is not implemented in a proper and effective manner, as opposed to what is claimed that ‘policy-violating skills will be rejected or suspended,’” the paper says. “Second, vulnerable skills exist in Amazon’s skills store, and thus users (children, in particular) are at risk when using [voice assistant] services.”
Amazon disputes some of the findings and suggests the study’s method skewed the results: the researchers removed their rule-breaking Skills shortly after certification, before other safeguards such as post-certification audits might have caught the offending voice assistant code.
The devil is in the details
Security researchers have previously hijacked Alexa hardware for eavesdropping, and the software on these devices poses similar risks. The research paper, however, concerns itself specifically with content in Alexa Skills that violates Amazon’s rules.
Alexa content rules prohibit, among other things, collecting information from children, collecting health information, sexually explicit content, descriptions of graphic violence, self-harm instructions, references to Nazis or hate symbols, hate speech, and the promotion of drugs, terrorism, or other illegal activities.
Getting around these rules involved tactics like adding a counter to the Skill’s code so that the app only starts spewing hate speech after several sessions (sketched below). The paper cites a range of problems with the way Amazon reviews Skills, including inconsistencies where rejected content gets accepted after resubmission, vetting tools that can’t recognize cloned code submitted from multiple developer accounts, excessive trust in developers, and negligence in spotting data harvesting even when the violations are made obvious.
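To make the counter trick concrete, here is a minimal Python sketch of how such cloaking could work. Everything here is illustrative rather than drawn from the paper’s actual samples: a real Skill would be built on the Alexa Skills Kit SDK and use persistent storage, whereas this sketch keeps per-user session counts in memory.

```python
# Hypothetical sketch of the "counter" cloaking tactic described in the
# paper: the Skill behaves innocuously during certification and only
# switches to policy-violating content after a set number of sessions.
# All names are illustrative, not Amazon's API.

SESSION_THRESHOLD = 5   # stay benign for the first few sessions
session_counts = {}     # stand-in for a persistent per-user store

def handle_launch(user_id: str) -> str:
    """Return the Skill's spoken response for a session launch."""
    count = session_counts.get(user_id, 0) + 1
    session_counts[user_id] = count

    if count <= SESSION_THRESHOLD:
        # This is all a certification reviewer would ever see.
        return "Welcome back! Here is your daily fun fact."
    # Only long-term users are served the hidden content.
    return "<policy-violating content served here>"

if __name__ == "__main__":
    for session in range(1, 8):
        print(session, handle_launch("user-123"))
```

Run locally, the first five sessions print the innocuous greeting a reviewer would encounter; only from the sixth does the Skill switch over, which is exactly why a one-time pre-publication review misses it.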
Amazon also does not require developers to re-certify their Skills if the backend code – run on developers’ servers – changes. It’s thus possible for Skills to turn malicious if the developer alters the backend code or an attacker compromises a well-intentioned developer’s server.
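The mechanics are easy to sketch: Alexa forwards each user request to an HTTPS endpoint the developer hosts, and certification reviews the Skill as it behaves at submission time, not the code running behind that endpoint. The hypothetical backend below, written with only the Python standard library and returning a body shaped like an Alexa Skills Kit response, could therefore be edited at any moment after approval. It omits the request-signature verification a real Alexa endpoint requires.

```python
# Hypothetical sketch of a developer-hosted Skill backend, to show why
# backend changes bypass certification. Only the Skill's front-end
# metadata is certified; this code is never re-reviewed once live.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# The developer can change this string (or the whole handler) at any
# time after certification, with no resubmission to Amazon.
RESPONSE_TEXT = "Here is today's harmless trivia."

class SkillBackend(BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume the incoming Alexa request body (contents ignored here).
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)

        body = {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": RESPONSE_TEXT}
            },
        }
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), SkillBackend).serve_forever()
```

Swap RESPONSE_TEXT, or the whole handler, on the live server and every user gets the new behavior immediately, whether the change comes from the developer or from an attacker who has compromised the server.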
As part of the project, the researchers also examined 825 published Skills for kids that either had a privacy policy or a negative review. Among these, 52 had policy violations. Negative comments by users mention unexpected advertisements, inappropriate language, and efforts to collect personal information.