Three white-hat hackers helped a regional rail company in southwest Poland unbrick a train that had been artificially rendered inoperable by its manufacturer after an independent maintenance company worked on it. The manufacturer is now threatening to sue the hackers, who were hired by the independent repair company to fix it.
The fallout from the situation is currently roiling Polish infrastructure circles and the repair world, with the manufacturer of those trains denying bricking the trains despite ample evidence to the contrary. The manufacturer is also now demanding that the repaired trains immediately be removed from service because they have been “hacked,” and thus might now be unsafe, a claim they also cannot substantiate.
The situation is a heavy machinery example of something that happens across most categories of electronics, from phones, laptops, health devices, and wearables to tractors and, apparently, trains. In this case, NEWAG, the manufacturer of the Impuls family of trains, put code in the train’s control systems that prevented them from running if a GPS tracker detected that it spent a certain number of days in an independent repair company’s maintenance center, and also prevented it from running if certain components had been replaced without a manufacturer-approved serial number.
This anti-repair mechanism is called “parts pairing,” and is a common frustration for farmers who want to repair their John Deere tractors without authorization from the company. It’s also used by Apple to prevent independent repair of iPhones.
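Based on the article's description, the lock condition reportedly combined a GPS geofence dwell check with a parts-pairing serial check. A minimal Python sketch of that kind of logic follows; all coordinates, serial numbers, and thresholds here are hypothetical illustrations, not NEWAG's actual code:

```python
# Illustrative sketch of a "workshop detection" lock of the kind described
# in the reports. Every name, coordinate, and threshold is hypothetical.

WORKSHOP_COORDS = [(51.11, 17.03)]            # hypothetical competitor workshop
APPROVED_SERIALS = {"CTRL-0001", "CTRL-0002"}  # hypothetical approved parts

def near_workshop(lat, lon, radius_deg=0.01):
    """True if a GPS fix falls within a crude box around a listed workshop."""
    return any(abs(lat - wlat) < radius_deg and abs(lon - wlon) < radius_deg
               for wlat, wlon in WORKSHOP_COORDS)

def should_lock(gps_track, installed_serials, day_threshold=5):
    """Lock if the train sat near a listed workshop for too many daily fixes,
    or if any installed component serial is not manufacturer-approved."""
    days_at_workshop = sum(1 for lat, lon in gps_track if near_workshop(lat, lon))
    unapproved = any(s not in APPROVED_SERIALS for s in installed_serials)
    return days_at_workshop >= day_threshold or unapproved
```

The point of the sketch is how little it takes: a coordinate list, a dwell counter, and a serial allowlist are enough to make a train refuse to run after third-party service.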
An image posted by q3k of how the team did their work.
In this case, a Polish train operator called Lower Silesian Railway, which operates regional train services from Wrocław, purchased 11 Impuls trains. It began to do regular maintenance on the trains using an independent company called Serwis Pojazdów Szynowych (SPS), which notes on its website that “many Polish carriers have trusted us” with train maintenance. Over the course of maintaining four different Impuls trains, SPS found mysterious errors that prevented them from running. SPS became desperate and Googled “Polish hackers” and came across a group called Dragon Sector, a reverse-engineering team made up of white hat hackers. The trains had just undergone “mandatory maintenance” after having traveled a million kilometers.
“This is quite a peculiar part of the story—when SPS was unable to start the trains and almost gave up on their servicing, someone from the workshop typed “polscy hakerzy” (‘Polish hackers’) into Google,” the team from Dragon Sector, made up of Jakub Stępniewicz, Sergiusz Bazański, and Michał Kowalczyk, told me in an email. “Dragon Sector popped up and soon after we received an email asking for help.”
The problem was so bad that an infrastructure trade publication in Poland called Rynek Kolejowy picked up on the mysterious issues over the summer, and said that the lack of working trains was beginning to impact service: “Four vehicles after level P3-2 repair cannot be started. At this moment, it is not known what caused the failure. The lack of units is a serious problem for the carrier and passengers, because shorter trains are sent on routes.”
The hiring of Dragon Sector was a last resort: “In 2021, an independent train workshop won a maintenance tender for some trains made by Newag, but it turned out that they didn’t start after servicing,” Dragon Sector told me. “[SPS] hired us to analyze the issue and we discovered a ‘workshop-detection’ system built into the train software, which bricked the trains after some conditions were met (two of the trains even used a list of precise GPS coordinates of competitors’ workshops). We also discovered an undocumented ‘unlock code’ which you could enter from the train driver’s panel which magically fixed the issue.”
Dragon Sector was able to bypass the measures and fix the trains. The group posted a YouTube video of the train operating properly after they’d worked on it:
The news of Dragon Sector’s work was first reported by the Polish outlet Zaufana Trzecia Strona and was translated into English by the site Bad Cyber. Kowalczyk and Stępniewicz gave a talk about the saga last week at Poland’s Oh My H@ck conference in Warsaw. The group plans on doing further talks about the technical measures implemented to prevent the trains from running and how they fixed it.
“These trains were locking up for arbitrary reasons after being serviced at third-party workshops. The manufacturer argued that this was because of malpractice by these workshops, and that they should be serviced by them instead of third parties,” Bazański, who goes by the handle q3k, posted on Mastodon. “After a certain update by NEWAG, the cabin controls would also display scary messages about copyright violations if the human machine interface detected a subset of conditions that should’ve engaged the lock but the train was still operational. The trains also had a GSM telemetry unit that was broadcasting lock conditions, and in some cases appeared to be able to lock the train remotely.”
The train had a system that detected if it physically had been to an independent repair shop.
All of this has created quite a stir in Poland (and in repair circles). NEWAG did not respond to a request for comment from 404 Media. But Rynek Kolejowy reported that the company is now very mad, and has threatened to sue the hackers. In a statement to Rynek Kolejowy, NEWAG said “Our software is clean. We have not introduced, we do not introduce and we will not introduce into the software of our trains any solutions that lead to intentional failures. This is slander from our competition, which is conducting an illegal black PR campaign against us.” The company added that it has reported the situation to “the authorized authorities.”
“Hacking IT systems is a violation of many legal provisions and a threat to railway traffic safety,” NEWAG added. “We do not know who interfered with the train control software, using what methods and what qualifications. We also notified the Office of Rail Transport about this so that it could decide to withdraw from service the sets subjected to the activities of unknown hackers.”
In response, Dragon Sector released a lengthy statement explaining how they did their work and the types of DRM they encountered: “We did not interfere with the code of the controllers in the Impuls – all vehicles still run on the original, unmodified software,” part of the statement reads. SPS, meanwhile, has said that its position “is consistent with the position of Dragon Sector.”
Kowalczk told 404 Media that “we are answering media and waiting to be summoned as witnesses,” and added that “NEWAG said that they will sue us, but we doubt they will – their defense line is really poor and they would have no chance defending it, they probably just want to sound scary in the media.”
This strategy—to intimidate independent repair professionals, claim that the device (in this case, a train) is unsafe, and threaten legal action—is an egregious but common playbook in manufacturers’ fight against repair, all over the world.
Genetic testing company 23andMe changed its terms of service to prevent customers from filing class action lawsuits or participating in a jury trial, days after reports revealed that attackers accessed the personal information of nearly 7 million people — half of the company’s user base — in an October hack.
In an email sent to customers earlier this week viewed by Engadget, the company announced that it had made updates to the “Dispute Resolution and Arbitration section” of its terms “to include procedures that will encourage a prompt resolution of any disputes and to streamline arbitration proceedings where multiple similar claims are filed.” Clicking through leads customers to the newest version of the company’s terms of service that essentially disallow customers from filing class action lawsuits, something that more people are likely to do now that the scale of the hack is clearer.
“To the fullest extent allowed by applicable law, you and we agree that each party may bring disputes against the other party only in an individual capacity and not as a class action or collective action or class arbitration,” the updated terms say. Notably, 23andMe will automatically opt customers into the new terms unless they specifically inform the company that they disagree by sending an email within 30 days of receiving the firm’s notice. Unless they do that, they “will be deemed to have agreed to the new terms,” the company’s email tells customers.
23andMe did not respond to a request for comment from Engadget.
In October, the San Francisco-based genetic testing company headed by Anne Wojcicki announced that hackers had accessed sensitive user information including photos, full names, geographical location, information related to ancestry trees, and even names of related family members. The company said that no genetic material or DNA records were exposed. Days after that attack, the hackers put up profiles of hundreds of thousands of Ashkenazi Jews and Chinese people for sale on the internet. But until last week, it wasn’t clear how many people were impacted.
In a filing with the Securities and Exchange Commission, 23andMe said that “multiple class action claims” have already been filed against the company in both federal and state court in California and state court in Illinois, as well as in Canadian courts.
Forbidding people from filing class action lawsuits, as Axios notes, hides information about the proceedings from the public, since affected parties typically attempt to resolve disputes with arbitrators in private. Experts, such as Chicago-Kent College of Law professor Nancy Kim, an online contracts expert, told Axios that changing its terms wouldn’t be enough to protect 23andMe in court.
The company’s new terms are sparking outrage online. “Wow they first screw up and then they try to screw their users by being shady,” a user who goes by Daniel Arroyo posted on X. “Seems like they’re really trying to cover their asses,” wrote another user called Paul Duke, “and head off lawsuits after announcing hackers got personal data about customers.”
Since the beginning of 2023, ESET researchers have observed an alarming growth of deceptive Android loan apps, which present themselves as legitimate personal loan services, promising quick and easy access to funds.
Despite their attractive appearance, these services are in fact designed to defraud users by offering them high-interest-rate loans promoted with deceitful descriptions, all while collecting their victims’ personal and financial information to blackmail them and ultimately take their funds. ESET products therefore recognize these apps using the detection name SpyLoan, which refers directly to their spyware functionality combined with loan claims.
Key points of the blogpost:
Apps analyzed by ESET researchers request various sensitive information from their users and exfiltrate it to the attackers’ servers.
This data is then used to harass and blackmail users of these apps and, according to user reviews, even if a loan was not provided.
ESET telemetry shows a discernible growth in these apps across unofficial third-party app stores, Google Play, and websites since the beginning of 2023.
Malicious loan apps focus on potential borrowers based in Southeast Asia, Africa, and Latin America.
All of these services operate only via mobile apps, since the attackers can’t access all sensitive user data that is stored on the victim’s smartphone through browsers.
[…]
All of the SpyLoan apps that are described in this blogpost and mentioned in the IoCs section are marketed through social media and SMS messages, and available to download from dedicated scam websites and third-party app stores. All of these apps were also available on Google Play. As a Google App Defense Alliance partner, ESET identified 18 SpyLoan apps and reported them to Google, who subsequently removed 17 of these apps from their platform. Before their removal, these apps had a total of more than 12 million downloads from Google Play. The last app identified by ESET is still available on Google Play – however, since its developers changed its permissions and functionality, we no longer detect it as a SpyLoan app.
[…]
According to ESET telemetry, the enforcers of these apps operate mainly in Mexico, Indonesia, Thailand, Vietnam, India, Pakistan, Colombia, Peru, the Philippines, Egypt, Kenya, Nigeria, and Singapore (see map in Figure 2). All these countries have various laws that govern private loans – not only their rates but also their communication transparency; however, we don’t know how successfully they are enforced. We believe that any detections outside of these countries are related to smartphones that have, for various reasons, access to a phone number registered in one of these countries.
At the time of writing, we haven’t seen an active campaign targeting European countries, the USA, or Canada.
[…]
ESET Research has traced the origins of the SpyLoan scheme back to 2020. At that time, such apps presented only isolated cases that didn’t catch the attention of researchers; however, the presence of malicious loan apps kept growing and ultimately, we started to spot them on Google Play, the Apple App Store, and on dedicated scam websites.
[…]
Security company Lookout identified 251 Android apps on Google Play and 35 iOS apps on the Apple App Store that exhibited predatory behavior. According to Lookout, they had been in contact with Google and Apple regarding the identified apps and in November 2022 published a blogpost about these apps.
[…]
Once a user installs a SpyLoan app, they are prompted to accept the terms of service and grant extensive permissions to access sensitive data stored on the device. Subsequently, the app requests user registration, typically accomplished through SMS one-time password verification to validate the victim’s phone number.
These registration forms automatically select the country code based on the country code from the victim’s phone number, ensuring that only individuals with phone numbers registered in the targeted country can create an account.
[…]
After successful phone number verification, users gain access to the loan application feature within the app. To complete the loan application process, users are compelled to provide extensive personal information, including address details, contact information, proof of income, banking account information, and even to upload photos of the front and back sides of their identification cards, and a selfie.
[…]
On May 31st, 2023, additional policies started to apply to loan apps on Google Play, stating that such apps are prohibited from asking for permission to access sensitive data such as images, videos, contacts, phone numbers, location, and external storage data. It appears this updated policy didn’t have an immediate effect on existing apps, as most of the ones we reported were still available on the platform (including their broad permissions) after the policy started to apply.
[…]
After such an app is installed and personal data is collected, the app’s enforcers start to harass and blackmail their victims into making payments, even if – according to the reviews – the user didn’t apply for a loan, or applied but the loan wasn’t approved.
[…]
Besides the data harvesting and blackmailing, these services present a form of modern-day digital usury, which refers to the charging of excessive interest rates on loans, taking advantage of vulnerable individuals with urgent financial needs, or borrowers who have limited access to mainstream financial institutions. One user gave a negative review (shown in Figure 14) to a SpyLoan app not because it was harassing him, but because it had already been four days since he applied for a loan, but nothing had happened and he needed money for medication.
An SEC filing has revealed more details on a data breach affecting 23andMe users that was disclosed earlier this fall. The company says its investigation found hackers were able to access the accounts of roughly 0.1 percent of its userbase, or about 14,000 of its 14 million total customers, TechCrunch notes. On top of that, the attackers were able to exploit 23andMe’s opt-in DNA Relatives (DNAR) feature, which matches users with their genetic relatives, to access information about millions of other users. A 23andMe spokesperson told Engadget that hackers accessed the DNAR profiles of roughly 5.5 million customers this way, plus Family Tree profile information from 1.4 million DNA Relative participants.
DNAR Profiles contain sensitive details including self-reported information like display names and locations, as well as shared DNA percentages for DNA Relatives matches, family names, predicted relationships and ancestry reports. Family Tree profiles contain display names and relationship labels, plus other information that a user may choose to add, including birth year and location. When the breach was first revealed in October, the company said its investigation “found that no genetic testing results have been leaked.”
According to the new filing, the data “generally included ancestry information, and, for a subset of those accounts, health-related information based upon the user’s genetics.” All of this was obtained through a credential-stuffing attack, in which hackers used login information from other, previously compromised websites to access those users’ accounts on other sites. In doing this, the filing says, “the threat actor also accessed a significant number of files containing profile information about other users’ ancestry that such users chose to share when opting in to 23andMe’s DNA Relatives feature and posted certain information online.”
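Credential stuffing works precisely because people reuse passwords across sites: leaked pairs from one breach are replayed wholesale against another service. A minimal illustration with entirely made-up data:

```python
# Toy illustration of credential stuffing: an attacker replays
# username/password pairs leaked from one site against a second site.
# All accounts and passwords below are fabricated for illustration.

leaked_from_site_a = {
    "alice@example.com": "hunter2",
    "bob@example.com": "correct-horse",
}

# The target site's accounts: Alice reused her password, Bob did not.
site_b_accounts = {
    "alice@example.com": "hunter2",
    "bob@example.com": "a-different-password",
}

def stuffing_attack(leak, target):
    """Return the accounts on the target site that the leaked pairs open."""
    return [user for user, pw in leak.items() if target.get(user) == pw]

compromised = stuffing_attack(leaked_from_site_a, site_b_accounts)
# Only the reused credential succeeds; unique passwords survive the replay.
```

This is why the attack scales: no vulnerability in the target site is needed, only a large enough pile of previously leaked credentials.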
The disturbing part of this is that the people who were hacked were careless in reusing their passwords, and probably didn’t realise that they were giving away DNA information about not only themselves but their whole family to 23andMe, who sold it on. Genetic information is the most personal type of information you have. You cannot change it. And if you give it to someone, you also give away your family. Now it wasn’t just given away, it was stolen too.
Hardware security hackers have detailed how it’s possible to bypass Windows Hello’s fingerprint authentication and log in as someone else – if you can steal, or be left alone with, their vulnerable device.
The research was carried out by Blackwing Intelligence, primarily Jesse D’Aguanno and Timo Teräs, and was commissioned and sponsored by Microsoft’s Offensive Research and Security Engineering group. The pair’s findings were presented at the IT giant’s BlueHat conference last month, and made public this week. You can watch the duo’s talk below, or dive into the details in their write-up here.
For users and administrators: be aware your laptop hardware may be physically insecure and allow fingerprint authentication to be bypassed if the equipment falls into the wrong hands. We’re not sure how that can be fixed without replacing the electronics or perhaps updating the drivers and/or firmware within the fingerprint sensors. One of the researchers told us: “It’s my understanding from Microsoft that the issues were addressed by the vendors.” So check for updates or errata. We’ve asked the manufacturers named below for comment, and we will keep you updated.
For device makers: check out the above report to make sure you’re not building these design flaws into your products. Oh, and answer our emails.
The research focuses on bypassing Windows Hello’s fingerprint authentication on three laptops: a Dell Inspiron 15, a Lenovo ThinkPad T14, and a Microsoft Surface Pro 8/X, which were using fingerprint sensors from Goodix, Synaptics, and ELAN, respectively. All three were vulnerable in different ways. As far as we can tell, this isn’t so much a problem with Windows Hello or using fingerprints. It’s more due to shortcomings or oversights with the communications between the software side and the hardware.
Windows Hello allows users to log into the OS using their fingerprint. This fingerprint is stored within the sensor chipset. What’s supposed to happen, simply put, is that when you want to set up your laptop to use your print, the OS generates an ID and passes that to the sensor chip. The chip reads the user’s fingerprint, and stores the print internally, associating it with the ID number. The OS then links that ID with your user account.
Then when you come to log in, the OS asks you to present your finger, the sensor reads it, and if it matches a known print, the chip sends the corresponding ID to the operating system, which then grants you access to the account connected to that ID number. The physical communication between the chip and OS involves cryptography to, ideally, secure this authentication method from attackers.
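The enrollment and login flow just described splits trust between the host and the chip: the OS holds only an ID-to-account mapping, while the print-to-ID mapping lives inside the sensor. A minimal sketch of that split (class and method names are illustrative, not the actual protocol):

```python
# Sketch of the host/sensor split described above. The OS never sees the
# fingerprint itself; it trusts whatever ID the chip returns.
# Names here are illustrative, not Windows Hello's real interfaces.

class FingerprintSensor:
    def __init__(self):
        self._db = {}  # print -> ID, held inside the chip

    def enroll(self, print_data, assigned_id):
        self._db[print_data] = assigned_id

    def match(self, print_data):
        return self._db.get(print_data)  # ID if known, else None

class Host:
    def __init__(self, sensor):
        self.sensor = sensor
        self._accounts = {}  # ID -> account name, held by the OS

    def set_up(self, account, print_data):
        new_id = len(self._accounts) + 1   # OS generates the ID
        self.sensor.enroll(print_data, new_id)
        self._accounts[new_id] = account

    def login(self, print_data):
        matched_id = self.sensor.match(print_data)
        return self._accounts.get(matched_id)  # account, or None
```

The attacks that follow all exploit the same weak point: the OS grants access based purely on the returned ID, so anything that can make the chip (or an impostor) return a victim’s ID wins.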
But blunders in implementing this system have left at least the above named devices vulnerable to unlocking – provided one can nab the gear long enough to connect some electronics.
“In all, this research took approximately three months and resulted in three 100 percent reliable bypasses of Windows Hello authentication,” Blackwing’s D’Aguanno and Teräs wrote on Tuesday.
Here’s a summary of the techniques used and described by the infosec pair:
Model: Dell Inspiron 15
Method: If someone can boot the laptop into Linux, they can use the sensor’s Linux driver to enumerate from the sensor chip the ID numbers associated with known fingerprints. The miscreant can then enroll their own fingerprint in the chip under an ID number identical to that of the Windows user they want to log in as. The chip stores this new print-ID association in an internal database associated with Linux; it doesn’t overwrite the existing print-ID association in its internal database for Windows.
The attacker then attaches a man-in-the-middle (MITM) device between the laptop and the sensor, and boots into Windows. The Microsoft OS sends some non-authenticated configuration data to the chip. Crucially, the MITM electronics rewrites that config data on the fly to tell the chip to use the Linux database, and not the Windows database, for fingerprints. Thus when the miscreant next touches their finger to the reader, the chip will recognize the print, return the ID number for that print from the Linux database, which is the same ID number associated with a Windows user, and Windows will log the attacker in as that user.
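The Dell attack boils down to a database-confusion trick: the chip keeps separate print-to-ID databases per host OS, and unauthenticated config data selects which one is active. A compact sketch of the three steps (data structures and values are illustrative, not the Goodix firmware):

```python
# Sketch of the database-confusion attack described above.
# The chip keeps separate print->ID databases for Linux and Windows,
# and unauthenticated config data selects which one is consulted.
# All names and values here are illustrative.

chip = {
    "windows": {"victim-print": 7},  # ID 7 maps to the victim's Windows account
    "linux": {},
    "active": "windows",
}

# Step 1 (via the Linux driver): enumerate IDs, then enroll the attacker's
# print under the victim's ID -- but only in the Linux-side database.
victim_id = next(iter(chip["windows"].values()))
chip["linux"]["attacker-print"] = victim_id

# Step 2 (MITM during Windows boot): rewrite the unauthenticated config so
# the chip consults the Linux database while Windows is running.
chip["active"] = "linux"

# Step 3: the attacker presents their own finger; the chip returns the
# victim's ID, and Windows logs the attacker in as the victim.
returned_id = chip[chip["active"]].get("attacker-print")
```

Note that nothing in the Windows database was ever modified; the lookup was simply redirected, which is why the config data being unauthenticated is the fatal flaw.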
Model: Lenovo ThinkPad T14
Method: The attack used against the ThinkPad is similar to the one above. While the Dell machine uses Microsoft’s Secure Device Connection Protocol (SDCP) between the OS and the chip, the T14 uses TLS to secure the connection. This can be undermined to, once again using Linux, add a fingerprint with an ID associated with a Windows user; once booted back into Windows, the attacker can log in as that user using the new fingerprint.
Model: Microsoft Surface Pro 8 / X Type Cover with Fingerprint ID
Method: This is the worst. There is no security between the chip and OS at all, so the sensor can be replaced with anything that can masquerade as the chip and simply send a message to Windows saying: Yup, log that user in. And it works. Thus an attacker can log in without even presenting a fingerprint.
Interestingly enough, D’Aguanno told us restarting the PC with Linux isn’t required for exploitation – a MITM device can do the necessary probing and enrollment of a fingerprint itself while the computer is still on – so preventing the booting of non-Windows operating systems, for instance, won’t be enough to stop a thief. The equipment can be hoodwinked while it’s still up and running.
“Booting to Linux isn’t actually required for any of our attacks,” D’Aguanno told us. “On the Dell (Goodix) and ThinkPad (Synaptics), we can simply disconnect the fingerprint sensor and plug into our own gear to attack the sensors. This can also be done while the machine is on since they’re embedded USB, so they can be hot plugged.”
In that scenario, “Bitlocker wouldn’t affect the attack,” he added.
As to what happens if the stolen machine is powered off completely, and has a BIOS password, full-disk encryption, or some other pre-boot authentication, exploitation isn’t as straightforward or perhaps even possible: you’d need to get the machine booted far enough into Windows for the Blackwing team’s fingerprint bypass to work. The described techniques may work against BIOSes that check for fingerprints to proceed with the startup sequence.
“If there’s a password required to boot the machine, and the machine is off, then that could stop this just by nature of the machine not booting to the point where fingerprint authentication is available,” D’Aguanno clarified to us.
“However, at least one of the implementations allows you to use fingerprint authentication for BIOS boot authentication, too. Our focus was on the impact to Windows Hello, though, so we did not investigate that further at this point, but that may be able to be exploited too.”
The duo also urged manufacturers to use SDCP, and to actually enable it, when connecting sensor chips to Windows: “It doesn’t help if it’s not turned on.”
They also promised to provide more details about the vulnerabilities they exploited in all three targets in future, while remaining understandably circumspect about giving away details that could be used to crack kit.
Commercial air crews are reporting something “unthinkable” in the skies above the Middle East: novel “spoofing” attacks have caused navigation systems to fail in dozens of incidents since September.
In late September, multiple commercial flights near Iran went astray after their navigation systems went blind. The planes first received spoofed GPS signals, meaning signals designed to fool a plane’s systems into thinking it is flying miles away from its real location. One of the aircraft almost flew into Iranian airspace without permission. Since then, air crews discussing the problem online have said it’s only gotten worse, and experts are racing to establish who is behind it.
OPSGROUP, an international group of pilots and flight technicians, sounded the alarm about the incidents in September and began to collect data to share with its members and the public. According to OPSGROUP, multiple commercial aircraft in the Middle Eastern region have lost the ability to navigate after receiving spoofed navigation signals for months. And it’s not just GPS—fallback navigation systems are also corrupted, resulting in total failure.
According to OPSGROUP, the activity is centered in three regions: Baghdad, Cairo, and Tel Aviv. The group tracked more than 50 incidents in the last five weeks, it said in a November update, and identified three new and distinct kinds of navigation spoofing incidents, with two arising since the initial reports in September.
While GPS spoofing is not new, the specific vector of these new attacks was previously “unthinkable,” according to OPSGROUP, which described them as exposing a “fundamental flaw in avionics design.” The spoofing corrupts the Inertial Reference System, a piece of equipment often described as the “brain” of an aircraft that uses gyroscopes, accelerometers, and other tech to help planes navigate. One expert Motherboard spoke to said this was “highly significant.”
“This immediately sounds unthinkable,” OPSGROUP said in its public post about the incidents. “The IRS (Inertial Reference System) should be a standalone system, unable to be spoofed. The idea that we could lose all on-board nav capability, and have to ask [air traffic control] for our position and request a heading, makes little sense at first glance— especially for state of the art aircraft with the latest avionics. However, multiple reports confirm that this has happened.”
Signal jamming in the Middle East is common, but this kind of powerful spoofing is new. According to Todd Humphreys, a UT Austin professor who researches satellite communications, extremely powerful signal jammers have been present in the skies near Syria since 2018. “Syria was called ‘the most aggressive electronic warfare environment on the planet’ by the head of [U.S. Special Operations Command],” Humphreys told Motherboard.
[…]
“Apart from run-of-the-mill jamming (e.g., with chirp jammers), we have captured GPS spoofing signals in our radio trawling,” he said. “But, interestingly, the spoofing signals never seemed to be complete. They were either missing key internal data, or were not mutually consistent, and so would not have fooled a GPS receiver. They seemed to be aimed at denial of service rather than actual deception. My students and I came to realize that spoofing is the new jamming. In other words, it is being used for denial of service because it’s more effective for that purpose than blunt jamming.”
[…]
“The GPS and IRS, and their redundant backups, are the principal components of modern aircraft navigation systems,” Humphreys said. “When their readings are corrupted, the Flight Management System assumes an incorrect aircraft position, Synthetic Vision systems show the wrong context, etc. Eventually, if the pilots figure out that something is amiss, they can revert to [VHF omnidirectional range]/ [distance measure equipment] over land. But in several recent cases, air traffic control had to step in and directly provide pilots ‘vectors’ (over an insecure communications channel) to guide them to their destination. That’s not a scalable solution.”
[…]
“It shows that the inertial reference systems that act as dead-reckoning backups in case of GPS failure are no backup at all in the face of GPS spoofing because the spoofed GPS receiver corrupts the IRS, which then dead reckons off the corrupted position,” he told Motherboard. “What is more, redundant GPS receivers and IRSs (large planes have 2+ GPS receivers and 3+ IRS) offer no additional protection: they all get corrupted.”
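The failure mode Humphreys describes can be shown with a toy one-dimensional dead-reckoning example: once the IRS accepts a spoofed fix, every later estimate carries the same offset, even if GPS is ignored from then on. Units and values below are made up purely for illustration:

```python
# Toy illustration of spoofed-GPS corruption propagating through dead
# reckoning. One-dimensional, constant velocity; all values fabricated.

def dead_reckon(start_pos, velocity, seconds):
    """Integrate a constant velocity from a starting position (1-D toy)."""
    return start_pos + velocity * seconds

true_pos = 100.0     # km along route, actual position
spoofed_fix = 250.0  # position the spoofed GPS reported to the IRS
velocity = 0.2       # km/s

# IRS re-initialized from the spoofed fix: every subsequent estimate
# inherits the full 150 km error, even with GPS ignored afterwards.
irs_estimate = dead_reckon(spoofed_fix, velocity, 60)
truth = dead_reckon(true_pos, velocity, 60)
error = irs_estimate - truth  # the spoofing offset persists unchanged
```

This is also why redundancy doesn’t help: a second or third IRS initialized from the same spoofed fix dead-reckons off the same wrong starting point.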
Humphreys and others have been sounding the alarm about an attack like this occurring for the past 15 years. In 2012, he testified before Congress about the need to protect GNSS from spoofing. “GPS spoofing acts like a zero-day exploit against aviation systems,” he told Motherboard. “They’re completely unprepared for it and powerless against it.”
[…]
The entities behind the novel spoofing attacks are unknown, but Humphreys said that he and a student have narrowed down possible sources. “Using raw GPS measurements from several spacecraft in low-Earth orbit, my student Zach Clements last week located the source of this spoofing to the eastern periphery of Tehran,” he said.
Iran would not be the only country spoofing GPS signals in the region. As first reported by Politico, Clements was the first to identify spoofing most likely coming from Israel after Hamas’ Oct. 7 attacks. “The strong and persistent spoofing we’re seeing over Israel since around October 15 is almost certainly being carried out by Israel itself,” Humphreys said. “The IDF effectively admitted as much to a reporter with Haaretz.” Humphreys said at the time that crews experiencing this GPS spoofing could rely on other onboard instruments to land.
Humphreys said the effects of the Israeli spoofing are identical to those observed in late September near Iran. “And these are the first clear-cut cases of GPS spoofing of commercial aircraft ever, to my knowledge,” he said. “That they happened so close in time is surprising, but possibly merely coincidental.”
Google’s Threat Analysis Group revealed on Thursday that it discovered and worked to help patch an email server flaw used to steal data from governments in Greece, Moldova, Tunisia, Vietnam and Pakistan. The exploit, known as CVE-2023-37580, targeted email server Zimbra Collaboration to pilfer email data, user credentials and authentication tokens from organizations.
It started in Greece at the end of June. Attackers discovered the vulnerability and sent emails containing the exploit to a government organization. If someone clicked the link while logged into their Zimbra account, it automatically stole email data and set up auto-forwarding to take control of the address.
While Zimbra published a hotfix on GitHub on July 5, most of the activity deploying the exploit happened afterward. That means many targets didn’t update the software with the fix until it was too late. It’s a good reminder to apply the updates you’ve been putting off now, and to install new ones as soon as they become available. “These campaigns also highlight how attackers monitor open-source repositories to opportunistically exploit vulnerabilities where the fix is in the repository, but not yet released to users,” the Google Threat Analysis Group wrote in a blog post.
Around mid-July, it became clear that threat group Winter Vivern got ahold of the exploit. Winter Vivern targeted government organizations in Moldova and Tunisia. Then, a third unknown actor used the exploit to phish for credentials from members of the Vietnam government. That data got published to an official government domain, likely run by the attackers. The final campaign Google’s Threat Analysis Group detailed targeted a government organization in Pakistan to steal Zimbra authentication tokens, secure pieces of information used to access locked or protected data.
Zimbra users were also the target of a mass-phishing campaign earlier this year. Starting in April, an unknown threat actor sent emails with a phishing link in an HTML file, according to ESET researchers. Before that, in 2022, threat actors used a different Zimbra exploit to steal emails from European government and media organizations.
As of 2022, Zimbra said it had more than 200,000 customers, including over 1,000 government organizations. “The popularity of Zimbra Collaboration among organizations expected to have lower IT budgets ensures that it stays an attractive target for adversaries,” ESET researchers said about why attackers target Zimbra.
Affiliates of ransomware gang AlphV (aka BlackCat) claimed to have compromised digital lending firm MeridianLink – and reportedly filed an SEC complaint against the fintech firm for failing to disclose the intrusion to the US watchdog.
First reported by DataBreaches, the break-in apparently happened on November 7. AlphV’s operatives claimed they did not encrypt any files but did steal some data – and MeridianLink was allegedly aware of the intrusion the day it occurred.
In screenshots shared with The Register and posted on social media, the AlphV SEC submission claims MeridianLink made a “material misstatement or omission” in its filings and financial statements, “or a failure to file.”
The thoughtful folks at AlphV asserted they are simply filing the paperwork for MeridianLink – and giving it “24 hours before we publish the data in its entirety.”
The Register asked the SEC about the AlphV complaint. “We decline to comment,” the spokesperson replied.
The Clorox Company’s chief security officer has left her job in the wake of a corporate network breach that cost the manufacturer hundreds of millions of dollars.
[…]
Chau Banks, the chief information and data officer of the $7 billion biz, who reportedly penned the memo, will fill Bogac’s role while Clorox searches for and hires a replacement.
[…]
Clorox first disclosed its computer network had been compromised in a US Securities and Exchange Commission filing in August. At the time, it said some of its IT systems and operations had been “temporarily impaired” due to “unauthorized activity” in its IT environment.
A subsequent SEC filing in September noted “wide scale disruption” across the business because of the intrusion.
Those disruptions included processing orders by hand after some systems were taken offline.
[…]
In its first-quarter fiscal 2024 earnings report at the start of this month, Clorox reported a 20 percent drop in year-on-year Q1 net sales and noted the $356 million decrease was “driven largely” by the cyberattack.
In a subsequent SEC filing, Clorox noted that expenses related to the network break-in for the three months ending September 30 totaled $24 million.
“The costs incurred relate primarily to third-party consulting services, including IT recovery and forensic experts and other professional services incurred to investigate and remediate the attack, as well as incremental operating costs incurred from the resulting disruption to the company’s business operations,” according to the Form 10-Q filing.
Clorox also revealed it expects to incur more expenses related to the security super-snafu in future periods.
[…] Previously unreported figures from ad blocking companies indicate that YouTube’s crackdown is working, with hundreds of thousands of people uninstalling ad blockers in October. The available data suggests that last month saw a record number of ad blockers uninstalled—and also a record for new ad blocker installs as people sought alternatives that wouldn’t trigger YouTube’s dreaded pop-up.
[…]
Munich-based Ghostery experienced three to five times the typical daily number of both uninstalls and installs throughout much of October, Modras says, leaving usage about flat. Over 90 percent of users who completed a survey about their reason for uninstalling cited the tool failing on YouTube. So intent were users on finding a workable blocker that many appear to have tried Microsoft’s Edge, a web browser whose market share pales beside Chrome’s. Ghostery installations on Edge surged 30 percent last month compared to September. Microsoft declined to comment.
YouTube uses escalating pop-up messages to demand that users stop using an ad blocker, eventually threatening to cut off access to videos.
AdGuard, which says it has about 75 million users of its ad blocking tools including 4.5 million people who pay for them, normally sees around 6,000 uninstallations per day for its Chrome extension. From October 9 until the end of the month, those topped 11,000 per day, spiking to about 52,000 on October 18, says CTO Andrey Meshkov.
User complaints started flooding in at the 120-person, Cyprus-based company, about four every hour, at least half of them about YouTube. But as at Ghostery, installations also surged as others looked for relief, reaching about 60,000 installations on Chrome on October 18 and 27. Subscribers grew as people realized AdGuard’s paid tools remained unaffected by YouTube’s clampdown.
Another extension, AdLock, recorded about 30 percent more daily installations and uninstallations in October than in previous months, according to its product head.
[…]
Ad blocking executives say that user reports suggest YouTube’s attack on ad blockers has coincided with tests to increase the number of ads it shows. YouTube sold over $22 billion in ads through the first nine months of this year, up about 5 percent from the same period last year, accounting for about 10 percent of Google’s overall sales.
[…]
YouTube’s test has affected users accessing the website through Chrome on laptops and desktops, according to ad block developers. It doesn’t affect people using YouTube’s mobile or TV apps, using YouTube’s mobile site, or watching YouTube videos embedded on other sites. YouTube’s Lawton says warnings appear regardless of whether users are logged in to the service or using Incognito mode.
Further, the warnings seem to be triggered when YouTube detects certain open source filtering rules that many ad blockers use to identify ads, rather than by targeting any specific extensions, Ghostery’s Modras says. The technology deployed by YouTube mirrors code Google developed in 2017 for a program it calls Funding Choices that enables news and other websites to detect ad blockers, he adds.
The ad sleuths who figure out ways to detect ads and the engineers skilled at blocking them are working hard to figure out how to evade YouTube’s blocker blockade, in private Slack groups and discussions on GitHub projects. But progress has been hampered because YouTube isn’t ensnaring every user in its dragnet. Relatively few of the developers have been able to trigger the warning themselves—perhaps the world’s only ad block users who cheer when YouTube finally catches them.
[…]
Some ad blockers are already adapting. Hankuper, the Slovakian company behind lesser known blocker AdLock, released a new version for Windows this week that it believes goes unnoticed by YouTube. If users find that to be true, it will push the fix to versions for macOS, Android, and iOS, says Kostiantyn Shebanov, Hankuper’s product head and business development manager.
Ghostery’s Modras worries about the consequences of Google escalating the war on blockers. Users losing anti-tracking features as they disable the tools could fall prey to online hazards, and the more complex blocking tactics companies like his are being forced to introduce could lead to unintended security holes. “The more powerful they have to become to deal with challenges, the more risk is involved,” he says.
There could also be legal repercussions. Modras says that when a publisher takes steps to thwart an adblocker, it’s illegal for developers to try to circumvent those measures in Europe. But he believes it is permissible to block ads if a blocker does so before triggering a warning.
It doesn’t help much that Google is essentially deploying spyware to figure out which browsers to block. And it’s apparently very targeted spyware too.
Note: the uBlock Origin extension works to block ads. It’s a browser extension you should be using anyway. You can also install a browser like Brave or Firefox (whichever one you are not using at the moment) and use it only for watching YouTube. Brave will help block a lot of ads.
Researchers have devised an attack that forces Apple’s Safari browser to divulge passwords, Gmail message content, and other secrets by exploiting a side channel vulnerability in the A- and M-series CPUs running modern iOS and macOS devices.
iLeakage, as the academic researchers have named the attack, is practical and requires minimal resources to carry out. It does, however, require extensive reverse-engineering of Apple hardware and significant expertise in exploiting a class of vulnerability known as a side channel, which leaks secrets based on clues left in electromagnetic emanations, data caches, or other manifestations of a targeted system. The side channel in this case is speculative execution, a performance enhancement feature found in modern CPUs that has formed the basis of a wide corpus of attacks in recent years. The nearly endless stream of exploit variants has left chip makers—primarily Intel and, to a lesser extent, AMD—scrambling to devise mitigations.
Exploiting WebKit on Apple silicon
The researchers implement iLeakage as a website. When visited by a vulnerable macOS or iOS device, the website uses JavaScript to surreptitiously open a separate website of the attacker’s choice and recover site content rendered in a pop-up window. The researchers have successfully leveraged iLeakage to recover YouTube viewing history, the content of a Gmail inbox—when a target is logged in—and a password as it’s being autofilled by a credential manager. Once visited, the iLeakage site requires about five minutes to profile the target machine and, on average, roughly another 30 seconds to extract a 512-bit secret, such as a 64-character string.
Top: An email displayed in Gmail’s web view. Bottom: Recovered sender address, subject, and content. (Credit: Kim, et al.)
“We show how an attacker can induce Safari to render an arbitrary webpage, subsequently recovering sensitive information present within it using speculative execution,” the researchers wrote on an informational website. “In particular, we demonstrate how Safari allows a malicious webpage to recover secrets from popular high-value targets, such as Gmail inbox content. Finally, we demonstrate the recovery of passwords, in case these are autofilled by credential managers.”
[…]
For the attack to work, a vulnerable computer must first visit the iLeakage website. For attacks involving YouTube, Gmail, or any other specific Web property, a user should be logged into their account at the same time the attack site is open. And as noted earlier, the attacker website needs to spend about five minutes probing the visiting device. Then, using the window.open JavaScript method, iLeakage can cause the browser to open any other site and begin siphoning certain data at anywhere from 24 to 34 bits per second.
[…]
iLeakage is a practical attack that requires only minimal physical resources to carry out. The biggest challenge—and it’s considerable—is the high caliber of technical expertise required. An attacker needs to not only have years of experience exploiting speculative execution vulnerabilities in general but also have fully reverse-engineered A- and M-series chips to gain insights into the side channel they contain. There’s no indication that this vulnerability has ever been discovered before, let alone actively exploited in the wild.
That means the chances of this vulnerability being used in real-world attacks anytime soon are slim, if not next to zero. It’s likely that Apple’s scheduled fix will be in place long before an iLeakage-style attack site does become viable.
Winter Vivern, believed to be a Belarus-aligned hacking group, attacked European government entities and a think tank starting on Oct. 11, according to an Ars Technica report Wednesday. ESET Research discovered the hack, which exploited a zero-day vulnerability in Roundcube, a webmail server with millions of users, and allowed the pro-Russian group to exfiltrate sensitive emails.
Roundcube patched the XSS vulnerability on Oct. 14, two days after ESET Research reported it. Winter Vivern sent malicious code to users disguised in an innocent-looking email from team.management@outlook.com. Users simply viewed the message in a web browser, and the hacker could access all their emails. Winter Vivern is a cyberespionage group that has been active since at least 2020 targeting governments in Europe and Central Asia.
“Despite the low sophistication of the group’s toolset, it is a threat to governments in Europe because of its persistence, very regular running of phishing campaigns,” said Matthieu Faou, a malware researcher at ESET, in a post.
Roundcube released an update for multiple versions of its software on Oct. 16 fixing the cross-site scripting vulnerabilities. Despite the patch and known vulnerabilities in older versions, many applications don’t get updated by users, says Faou.
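The bug class at issue is worth a concrete illustration: a stored XSS fires because the webmail client injects attacker-controlled HTML into the page unescaped, so merely viewing a message runs the attacker’s script. Roundcube itself is written in PHP, so this Python sketch is purely illustrative of the escaping defense, not Roundcube’s code:

```python
from html import escape

def render_email_body(body: str, sanitize: bool = True) -> str:
    """Build the HTML fragment a webmail client would inject into the page.
    Without escaping, a body like '<script>...</script>' executes in the
    victim's session the moment the message is viewed."""
    content = escape(body) if sanitize else body
    return f"<div class='email-body'>{content}</div>"
```

With sanitize=True the payload renders as inert text (&lt;script&gt;…) instead of executing, which is the behavior Roundcube’s patch restores.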
Citrix has urged admins to “immediately” apply a fix for CVE-2023-4966, a critical information disclosure bug that affects NetScaler ADC and NetScaler Gateway, admitting it has been exploited.
Plus, there’s a proof-of-concept exploit, dubbed Citrix Bleed, now on GitHub. So if you are using an affected build, at this point assume you’ve been compromised, apply the update, and then kill all active sessions per Citrix’s advice from Monday.
The company first issued a patch for compromised devices on October 10, and last week Mandiant warned that criminals — most likely cyberspies — have been abusing this hole to hijack authentication sessions and steal corporate info since at least late August.
[…]
Also last week, Mandiant Consulting CTO Charles Carmakal warned that “organizations need to do more than just apply the patch — they should also terminate all active sessions. These authenticated sessions will persist after the update to mitigate CVE-2023-4966 has been deployed.”
Citrix, in the Monday blog, also echoed this mitigation advice and told customers to kill all active and persistent sessions using the following commands:
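For reference, the session-termination commands as widely quoted from Citrix’s published guidance are reproduced below; verify the exact syntax against the current advisory for your NetScaler build before running them:

```
kill icaconnection -all
kill rdp connection -all
kill pcoipConnection -all
kill aaa session -all
clear lb persistentSessions
```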
MGM Resorts has admitted that the cyberattack it suffered in September will likely cost the company at least $100 million.
The effects of the attack are expected to make a substantial dent in the entertainment giant’s third-quarter earnings and still have a noticeable impact in its Q4 too, although this is predicted to be “minimal.”
According to an 8K filing with the Securities and Exchange Commission (SEC) on Thursday, MGM Resorts said less than $10 million has also been spent on “one-time expenses” such as legal and consultancy fees, and the cost of bringing in third-party experts to handle the incident response.
These are the current estimates for the total costs incurred by the attack, which took slot machines to the sword and borked MGM’s room-booking systems, among other things, but the company admitted the full scope of costs has yet to be determined.
The good news is that MGM expects its cyber insurance policy to cover the financial impact of the attack.
The company also expects to fill its rooms to near-normal levels starting this month. September’s occupancy levels took a hit – 88 percent full compared to 93 percent at the same time last year – but October’s occupancy is forecast to be down just 1 percent and November is poised to deliver record numbers thanks to the Las Vegas Formula 1 event.
[…]
MGM Resorts confirmed personal data belonging to customers had been stolen during the course of the intrusion. Those who became customers before March 2019 may be affected.
Stolen data includes social security numbers, driving license numbers, passport numbers, and contact details such as names, phone numbers, email addresses, postal addresses, as well as gender and dates of birth.
At this time, there is no evidence to suggest that financial information including bank numbers and cards were compromised, and passwords are also believed to be unaffected.
[…]
Adam Marrè, CISO at cybersecurity outfit Arctic Wolf, told The Register: “When looking at the total cost of a breach, such as the one which impacted MGM, many factors can be taken into account. This can include a combination of revenue lost for downtime, extra hours worked for remediation, tools that may have been purchased to deal with the issue, outside incident response help, setting up and operating a hotline for affected people, fixing affected equipment, purchasing credit monitoring, and sending physical letters to victims. Even hiring an outside PR firm to help with crisis messaging. When you add up everything, $100 million does not sound like an unrealistic number for an organization like MGM.”
Genetic testing giant 23andMe confirmed that a data scraping incident resulted in hackers gaining access to sensitive user information and selling it on the dark web.
The information of nearly 7 million 23andMe users was offered for sale on a cybercriminal forum this week. The information included origin estimation, phenotype, health information, photos, identification data and more. 23andMe processes saliva samples submitted by customers to determine their ancestry.
When asked about the post, the company initially denied that the information was legitimate, calling it a “misleading claim” in a statement to Recorded Future News.
The company later said it was aware that certain 23andMe customer profile information was compiled through unauthorized access to individual accounts that were signed up for the DNA Relative feature — which allows users to opt in for the company to show them potential matches for relatives.
[…]
When pressed on how compromising a handful of user accounts would give someone access to millions of users, the spokesperson said the company does not believe the threat actor had access to all of the accounts but rather gained unauthorized entry to a much smaller number of 23andMe accounts and scraped data from their DNA Relative matches.
The spokesperson declined to confirm the specific number of customer accounts affected.
Anyone who has opted into DNA Relatives can view basic profile information of others who make their profiles visible to DNA Relative participants, a spokesperson said.
Users who are genetically related can access ancestry information, which is made clear to users when they create their DNA Relatives profile, the spokesperson added.
[…]
A researcher approached Recorded Future News after examining the leaked database and found that much of it looked real. The researcher spoke on condition of anonymity because he found the information of his wife and several of her family members in the leaked data set. He also found other acquaintances and verified that their information was accurate.
The researcher downloaded two files from the BreachForums post and found that one had information on 1 million 23andMe users of Ashkenazi heritage. The other file included data on more than 300,000 users of Chinese heritage.
The data included profile and account ID numbers, names, gender, birth year, maternal and paternal genetic markers, ancestral heritage results, and data on whether or not each user has opted into 23andme’s health data.
“It appears the information has been scraped from user profiles which are only supposed to be shared between DNA Matches. So although this particular leak does not contain genomic sequencing data, it’s still data that should not be available to the public,” the researcher said.
“23andme seems to think this isn’t a big deal. They keep telling me that if I don’t want this info to be shared, I should not opt into the DNA relatives feature. But that’s dismissing the importance of this data which should only be viewable to DNA relatives, not the public. And the fact that someone was able to scrape this data from 1.3 million users is concerning. The hacker allegedly has more data that they have not released yet.”
The researcher added that he discovered another issue where someone could enter a 23andMe profile ID, like the ones included in the leaked data set, into the URL and see someone’s profile.
The data available through this only includes profile photos, names, birth years and location but does not include test results.
“It’s very concerning that 23andme has such a big loophole in their website design and security where they are just freely exposing peoples info just by typing a profile ID into the URL. Especially for a website that deals with people’s genetic data and personal information. What a botch job by the company,” the researcher said.
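The loophole the researcher describes is a textbook insecure direct object reference: the server returned profile data for whatever ID appeared in the URL, without checking the requester’s relationship to that profile. A minimal sketch of the authorization check that was evidently missing (the data model here is invented for illustration, not 23andMe’s actual code):

```python
def can_view_profile(requester_id: str, profile_id: str,
                     dna_relatives: dict[str, set[str]]) -> bool:
    """A profile should be visible only to its owner or to users the
    owner has matched with via the opt-in DNA Relatives feature --
    never to an arbitrary ID typed into a URL."""
    if requester_id == profile_id:
        return True  # owners can always see their own profile
    return profile_id in dna_relatives.get(requester_id, set())
```

The point is that this check must run server-side on every profile fetch; merely not linking to a profile client-side is not access control.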
[…]
The security policies of genetic testing companies like 23andMe have faced scrutiny from regulators in recent weeks. Three weeks ago, genetic testing firm 1Health.io agreed to pay the Federal Trade Commission (FTC) a $75,000 fine to resolve allegations that it failed to secure sensitive genetic and health data, retroactively overhauled its privacy policy without notifying and obtaining consent from customers whose data it had obtained, and tricked customers about their ability to delete their data.
Commercial spyware has exploited a security hole in Arm’s Mali GPU drivers to compromise some people’s devices, according to Google today.
These graphics processors are used in a ton of gear, from phones and tablets to laptops and cars, so the kernel-level vulnerability may be present in countless devices. This includes Android handsets made by Google, Samsung, and others.
The vulnerable drivers are paired with Arm’s Midgard (launched in 2010), Bifrost (2016), Valhall (2019), and fifth generation Mali GPUs (2023), so we imagine this buggy code will be in millions of systems.
On Monday, Arm issued an advisory for the flaw, which is tracked as CVE-2023-4211. This is a use-after-free bug affecting Midgard driver versions r12p0 to r32p0; Bifrost versions r0p0 to r42p0; Valhall versions r19p0 to r42p0; and Arm 5th Gen GPU Architecture versions r41p0 to r42p0.
We’re told Arm has corrected the security blunder in its drivers for Bifrost to fifth-gen. “This issue is fixed in Bifrost, Valhall, and Arm 5th Gen GPU Architecture Kernel Driver r43p0,” the advisory stated. “Users are recommended to upgrade if they are impacted by this issue. Please contact Arm support for Midgard GPUs.”
We note version r43p0 of Arm’s open source Mali drivers for Bifrost through fifth-gen was released in March. Midgard, it appears, has yet to get that version publicly, hence the advice to contact Arm for those GPUs. We’ve asked Arm for more details.
What this means for the vast majority of people is: look out for operating system or manufacturer updates with Mali GPU driver fixes to install to close this security hole, or look up the open source drivers and apply updates yourself if you’re into that. Your equipment may already be patched by now, given the release in late March, and details of the bug are only just coming out. If you’re a device maker, you should be rolling out patches to customers.
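The affected ranges above lend themselves to a mechanical check against whatever driver version your build ships. A minimal sketch (the rXXpY parsing and the family labels are assumptions for illustration, not an Arm tool):

```python
import re

# Affected driver version ranges for CVE-2023-4211, per Arm's advisory.
AFFECTED = {
    "midgard": ("r12p0", "r32p0"),
    "bifrost": ("r0p0", "r42p0"),
    "valhall": ("r19p0", "r42p0"),
    "5th-gen": ("r41p0", "r42p0"),
}

def parse(version: str) -> tuple[int, int]:
    """Turn a version string like 'r42p0' into a comparable (42, 0) tuple."""
    m = re.fullmatch(r"r(\d+)p(\d+)", version)
    if not m:
        raise ValueError(f"unrecognized version string: {version!r}")
    return int(m.group(1)), int(m.group(2))

def is_affected(family: str, version: str) -> bool:
    """True if the given driver version falls in the advisory's range."""
    lo, hi = AFFECTED[family]
    return parse(lo) <= parse(version) <= parse(hi)
```

For example, is_affected("bifrost", "r42p0") flags the last vulnerable Bifrost release, while r43p0, the fixed driver, is not flagged.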
“A local non-privileged user can make improper GPU memory processing operations to gain access to already freed memory,” is how Arm described the bug. That, it seems, is enough to allow spyware to take hold of a targeted vulnerable device.
According to Arm there is “evidence that this vulnerability may be under limited, targeted exploitation.” We’ve received confirmation from Google, whose Threat Analysis Group’s (TAG) Maddie Stone and Google Project Zero’s Jann Horn found and reported the vulnerability to the chip designer, that this targeted exploitation has indeed taken place.
“At this time, TAG can confirm the CVE was used in the wild by a commercial surveillance vendor,” a TAG spokesperson told The Register. “More technical details will be available at a later date, aligning with our vulnerability disclosure policy.”
On August 15, 2023, the threat actor “Ransomed,” operating under the alias “RansomForums,” posted on Telegram advertising their new forum and Telegram chat channel. On the same day, the domain ransomed[.]vc was registered.
But before activity on Ransomed had even really begun, the forum was the victim of a distributed denial-of-service (DDoS) attack. In response, the operators of the site quickly pivoted to rebrand it as a ransomware blog that, similar to other ransomware collectives, would adopt the approach of publicly listing victim names while issuing threats of data exposure unless ransoms are paid.
[…]
Ransomed is leveraging an extortion tactic that has not been observed before—according to communications from the group, they use data protection laws like the EU’s GDPR to threaten victims with fines if they do not pay the ransom. This tactic marks a departure from typical extortionist operations by twisting protective laws against victims to justify their illegal attacks.
[…]
The group has disclosed ransom demands for its victims, which span from €50,000 to €200,000. For comparison, GDPR fines can climb into the millions and beyond—the highest ever was over €1 billion. It is likely that Ransomed sets ransom amounts below the cost of a fine for a data security violation, exploiting that discrepancy to increase the chance of payment.
As of August 28, Ransomed operators have listed two Bitcoin addresses for payment on their site. Typically, threat actors do not make their wallet addresses public, instead sharing them directly with victims via a ransom note or negotiations portal.
These unconventional choices have set Ransomed apart from other ransomware operations, although it remains to be seen whether its tactics will prove successful.
[…]
It is likely that Ransomed is a financially motivated project, one of several short-lived projects from its creators.
The owner of the Ransomed Telegram chat claims to have the source code of Raid Forums and said they intend to use it in the future, indicating that while the owner is running a ransomware blog for now, there are plans to turn it back into a forum later—although the timeline for this reversion is not clear.
The forum has gained significant attention in the information security community and in threat communities for its bold statements of targeting large organizations. However, there is limited evidence that the attacks published on the Ransomed blog actually took place, beyond the threat actors’ claims.
[…]
As the security community continues to monitor this enigmatic group’s activities, one thing remains clear: the landscape of ransomware attacks continues to evolve, challenging defenders to adapt and innovate in response.
Hackers backed by the Chinese government are planting malware into routers that provides long-lasting and undetectable backdoor access to the networks of multinational companies in the US and Japan, governments in both countries said Wednesday. The hacking group, tracked under names including BlackTech, Palmerworm, Temp.Overboard, Circuit Panda, and Radio Panda, has been operating since at least 2010, a joint advisory published by government entities in the US and Japan reported. The group has a history of targeting public organizations and private companies in the US and East Asia. The threat actor is somehow gaining administrator credentials to network devices used by subsidiaries and using that control to install malicious firmware that can be triggered with “magic packets” to perform specific tasks.
The hackers then use control of those devices to infiltrate networks of companies that have trusted relationships with the breached subsidiaries. “Specifically, upon gaining an initial foothold into a target network and gaining administrator access to network edge devices, BlackTech cyber actors often modify the firmware to hide their activity across the edge devices to further maintain persistence in the network,” officials wrote in Wednesday’s advisory. “To extend their foothold across an organization, BlackTech actors target branch routers — typically smaller appliances used at remote branch offices to connect to a corporate headquarters — and then abuse the trusted relationship of the branch routers within the corporate network being targeted. BlackTech actors then use the compromised public-facing branch routers as part of their infrastructure for proxying traffic, blending in with corporate network traffic, and pivoting to other victims on the same corporate network.”
Most of Wednesday’s advisory referred to routers sold by Cisco. In an advisory of its own, Cisco said the threat actors are compromising the devices after acquiring administrative credentials and that there’s no indication they are exploiting vulnerabilities. Cisco also said that the hacker’s ability to install malicious firmware exists only for older company products. Newer ones are equipped with secure boot capabilities that prevent them from running unauthorized firmware, the company said. “It would be trivial for the BlackTech actors to modify values in their backdoors that would render specific signatures of this router backdoor obsolete,” the advisory stated. “For more robust detection, network defenders should monitor network devices for unauthorized downloads of bootloaders and firmware images and reboots. Network defenders should also monitor for unusual traffic destined to the router, including SSH.”
To detect and mitigate this threat, the advisory recommends administrators disable outbound connections on virtual teletype (VTY) lines, monitor inbound and outbound connections, block unauthorized outbound connections, restrict administration service access, upgrade to secure boot-capable devices, change compromised passwords, review network device logs, and monitor firmware changes for unauthorized alterations.
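One of those recommendations, monitoring firmware for unauthorized alterations, reduces in practice to comparing image hashes against a known-good baseline. A minimal sketch, assuming firmware images can be exported from devices as files (the device names and paths are placeholders):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a firmware image in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_firmware(baseline: dict[str, str], images: dict[str, Path]) -> list[str]:
    """Return device names whose current image hash differs from the baseline."""
    return [name for name, path in images.items()
            if sha256_of(path) != baseline.get(name)]
```

In a real deployment the baseline would come from vendor-signed images and the check would run on a schedule, alerting on any mismatch.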
Ars Technica notes: “The advisory didn’t provide any indicators of compromise that admins can use to determine if they have been targeted or infected.”
An anonymous reader quotes a report from Engadget: The ALPHV/BlackCat ransomware group claimed responsibility for the MGM Resorts cyber outage on Tuesday, according to a post by malware archive vx-underground. The group claims to have used common social engineering tactics, or gaining trust from employees to get inside information, to try and get a ransom out of MGM Resorts, but the company reportedly refuses to pay. The conversation that granted initial access took just 10 minutes, according to the group.
“All ALPHV ransomware group did to compromise MGM Resorts was hop on LinkedIn, find an employee, then call the Help Desk,” the organization wrote in a post on X. Those details came from ALPHV, but have not been independently confirmed by security researchers. The international resort chain started experiencing outages earlier this week, as customers noticed slot machines at casinos owned by MGM Resorts shut down on the Las Vegas strip. As of Wednesday morning, MGM Resorts still shows signs that it’s experiencing downtime, like continued website disruptions. In a statement on Tuesday, MGM Resorts said: “Our resorts, including dining, entertainment and gaming are currently operational.” However, the company said Wednesday that the cyber incident has significantly disrupted properties across the United States and represents a material risk to the company.
“[T]he major credit rating agency Moody’s warned that the cyberattack could negatively affect MGM’s credit rating, saying the attack highlighted ‘key risks’ within the company,” reports CNBC. “The company’s corporate email, restaurant reservation and hotel booking systems remain offline as a result of the attack, as do digital room keys. MGM on Wednesday filed an 8-K report with the Securities and Exchange Commission noting that on Tuesday the company issued a press release ‘regarding a cybersecurity issue involving the Company.'” MGM’s share price has declined more than 6% since Monday.
An anonymous reader shared this report from Bloomberg: China-linked hackers breached the corporate account of a Microsoft engineer and are suspected of using that access to steal a valuable key that enabled the hack of senior U.S. officials’ email accounts, the company said in a blog post. The hackers used the key to forge authentication tokens to access email accounts on Microsoft’s cloud servers, including those belonging to Commerce Secretary Gina Raimondo, Representative Don Bacon and State Department officials earlier this year.
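To see why a stolen signing key is so damaging, consider a minimal token-forging sketch. Microsoft’s real tokens are signed asymmetrically with an MSA consumer signing key; this toy uses a shared HMAC secret purely to illustrate the core problem: whoever holds the signing key can mint tokens the service will accept as genuine. All names and values below are hypothetical.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(claims: dict, key: bytes) -> str:
    # JWT-style structure: header.payload.signature (HMAC-SHA256 for simplicity)
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str, key: bytes) -> bool:
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

# The attacker holds the leaked signing key, so a forged token for any
# account verifies exactly like a legitimate one.
service_key = b"leaked-signing-key"
forged = sign_token({"sub": "victim@example.gov"}, service_key)
print(verify_token(forged, service_key))
```

The service has no way to distinguish a forged token from one it issued itself, which is why key custody (and not just token validation) is the critical control.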
The U.S. Cybersecurity and Infrastructure Security Agency and Microsoft disclosed the breach in June, but it was still unclear at the time exactly how hackers were able to steal the key that allowed them to access the email accounts. Microsoft said the key had been improperly stored within a “crash dump,” which is data stored after a computer or application unexpectedly crashes…
The incident has brought fresh scrutiny to Microsoft’s cybersecurity practices.
Microsoft’s blog post says it corrected two conditions that allowed this to occur. First, “a race condition allowed the key to be present in the crash dump,” and second, “the key material’s presence in the crash dump was not detected by our systems.” We found that this crash dump, believed at the time not to contain key material, was subsequently moved from the isolated production network into our debugging environment on the internet connected corporate network. This is consistent with our standard debugging processes. Our credential scanning methods did not detect its presence (this issue has been corrected).
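Microsoft has not published its credential-scanning tooling, but the general idea is to search dump files for byte patterns that indicate key material. The sketch below is a rough illustration under that assumption; the patterns are illustrative and far from exhaustive.

```python
import re

# Byte patterns that commonly indicate key material (illustrative, not exhaustive)
KEY_PATTERNS = [
    re.compile(rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),   # PEM-encoded keys
    re.compile(rb'"kty"\s*:\s*"(?:RSA|EC|oct)"'),               # JSON Web Key fragments
]

def scan_dump(data: bytes) -> list[str]:
    """Return the patterns found in a crash dump's raw bytes."""
    return [p.pattern.decode() for p in KEY_PATTERNS if p.search(data)]

# A dump containing a stray PEM key would be flagged before leaving
# the isolated production network.
dump = b"...thread state...-----BEGIN RSA PRIVATE KEY-----MIIE..."
print(scan_dump(dump))
```

The failure mode Microsoft describes is exactly the gap such a scanner can leave: if the key is serialized in a format no pattern covers, the dump looks clean and gets moved to a less protected environment.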
After April 2021, when the key was leaked to the corporate environment in the crash dump, the Storm-0558 actor was able to successfully compromise a Microsoft engineer’s corporate account. This account had access to the debugging environment containing the crash dump which incorrectly contained the key. Due to log retention policies, we don’t have logs with specific evidence of this exfiltration by this actor, but this was the most probable mechanism by which the actor acquired the key.
On Monday, local news outlets in Las Vegas caught wind of various complaints from patrons of MGM businesses; some said ATMs at associated hotels and casinos didn’t appear to be working; others said their hotel room keys had stopped functioning; still others noted that bars and restaurants located within MGM complexes had suddenly been shuttered. If you head to MGM’s website, meanwhile, you’ll note it’s definitely not working the way that it’s supposed to.
MGM put out a short statement Monday saying that it had been the victim of an undisclosed “cybersecurity issue.” The Associated Press notes that computer outages connected to said issue appear to be impacting MGM venues across the U.S.—in Vegas but also in places as far flung as Mississippi, Ohio, Michigan, and large parts of the northeast.
In November 2022, the password manager service LastPass disclosed a breach in which hackers stole password vaults containing both encrypted and plaintext data for more than 25 million users. Since then, a steady trickle of six-figure cryptocurrency heists targeting security-conscious people throughout the tech industry has led some security experts to conclude that crooks likely have succeeded at cracking open some of the stolen LastPass vaults.
[…]
Since late December 2022, Monahan and other researchers have identified a highly reliable set of clues that they say connect recent thefts targeting more than 150 people. Collectively, these individuals have been robbed of more than $35 million worth of crypto.
Monahan said virtually all of the victims she has assisted were longtime cryptocurrency investors and security-minded individuals. Importantly, none appeared to have suffered the sorts of attacks that typically preface a high-dollar crypto heist, such as the compromise of one’s email and/or mobile phone accounts.
[…]
Monahan has been documenting the crypto thefts via Twitter/X since March 2023, frequently expressing frustration in the search for a common cause among the victims. Then on Aug. 28, Monahan said she’d concluded that the common thread among nearly every victim was that they’d previously used LastPass to store their “seed phrase,” the private key needed to unlock access to their cryptocurrency investments.
[…]
Bax, Monahan and others interviewed for this story say they’ve identified a unique signature that links the theft of more than $35 million in crypto from more than 150 confirmed victims, with roughly two to five high-dollar heists happening each month since December 2022.
[…]
But the researchers have published findings about the dramatic similarities in the ways that victim funds were stolen and laundered through specific cryptocurrency exchanges. They also learned the attackers frequently grouped together victims by sending their cryptocurrencies to the same destination crypto wallet.
A graphic published by @tayvano_ on Twitter depicting the movement of stolen cryptocurrencies from victims who used LastPass to store their crypto seed phrases.
By identifying points of overlap in these destination addresses, the researchers were then able to track down and interview new victims. For example, the researchers said their methodology identified a recent multi-million dollar crypto heist victim as an employee at Chainalysis, a blockchain analysis firm that works closely with law enforcement agencies to help track down cybercriminals and money launderers.
Chainalysis confirmed that the employee had suffered a high-dollar cryptocurrency heist late last month, but otherwise declined to comment for this story.
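The overlap analysis the researchers describe can be sketched as a simple grouping problem: collect (victim, destination wallet) pairs observed on-chain, then flag any wallet that received funds from multiple victims, since those suggest a common actor. The data below is entirely hypothetical.

```python
from collections import defaultdict

# Hypothetical (victim, destination_wallet) pairs observed on-chain
transfers = [
    ("victim_a", "wallet_1"),
    ("victim_b", "wallet_1"),   # same destination as victim_a
    ("victim_c", "wallet_2"),
    ("victim_d", "wallet_1"),
]

def cluster_by_destination(transfers):
    """Group victims that sent funds to the same destination wallet."""
    clusters = defaultdict(set)
    for victim, wallet in transfers:
        clusters[wallet].add(victim)
    # Only wallets receiving from multiple victims suggest a common actor
    return {w: v for w, v in clusters.items() if len(v) > 1}

print(cluster_by_destination(transfers))
```

In practice the hard part is the data collection, not the grouping: real investigators trace funds across hops and exchanges before destination addresses can be compared at all.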
[…]
“I’ve been urging my friends and family who use LastPass to change all of their passwords and migrate any crypto that may have been exposed, despite knowing full well how tedious that is.”
Reuters found that cyber-espionage teams linked to the North Korean government, which security researchers call ScarCruft and Lazarus, had secretly installed stealthy digital backdoors into systems at NPO Mashinostroyeniya, a rocket design bureau based in Reutov, a small town on the outskirts of Moscow.
Reuters could not determine whether any data was taken during the intrusion or what information may have been viewed.
A security researcher and three PhD students from Germany have reportedly found a way to exploit Tesla’s current AMD-based cars to develop what could be the world’s first persistent “Tesla Jailbreak.”
The team published a briefing ahead of their presentation at next week’s Black Hat 2023. There, they will present a working version of an attack against Tesla’s latest AMD-based media control unit (MCU). According to the researchers, the jailbreak uses an already-known hardware exploit against a component in the MCU, which ultimately enables access to critical systems that control in-car purchases, and could perhaps even trick the car into thinking these purchases have already been paid for.
[…]
“Tesla has started using this well-established platform to enable in-car purchases, not only for additional connectivity features but even for analog features like faster acceleration or rear heated seats. As a result, hacking the embedded car computer could allow users to unlock these features without paying.”
Separately, the attack will allow researchers to extract a vehicle-specific cryptography key that is used to authenticate and authorize a vehicle within Tesla’s service network.
According to the researchers, the attack is unpatchable on current cars, meaning that no matter what software updates are pushed out by Tesla, attackers—or perhaps even DIY hackers in the future—can run arbitrary code on Tesla vehicles as long as they have physical access to the car. Specifically, the attack is unpatchable because it’s not an attack directly on a Tesla-made component, but rather against the embedded AMD Secure Processor (ASP) which lives inside of the MCU.
[…]
Tesla is guilty of something many car owners hate: shipping vehicles with hardware installed but locked behind software. For example, the RWD Model 3 has footwell lights installed from the factory, but they are software-disabled. Tesla also previously locked the heated steering wheel and heated rear seats behind a software paywall, but began activating them on new cars at no extra cost in 2021. There’s also the $2,000 “Acceleration Boost” upgrade for certain cars that shaves half a second off the zero-to-60 time.
Mobile PINs are a lot like passwords in that there are a number of very common ones, and [Mobile Hacker] has a clever proof of concept that uses a tiny microcontroller development board to emulate a keyboard to test the 20 most common unlock PINs on an Android device.
Trying the twenty most common PINs doesn’t take long.
The project is based on research analyzing the security of 4- and 6-digit smartphone PINs which found some striking similarities between user-chosen unlock codes. While the research is a few years old, user behavior in terms of PIN choice has probably not changed much.
The hardware is not much more than a Digispark board, a small ATtiny85-based board with a built-in USB connector, and an adapter. In fact, it has a lot in common with the DIY Rubber Ducky except for being focused on doing a single job.
Once connected to a mobile device, it performs a form of keystroke injection attack, automatically sending keyboard events to input the most common PINs with a delay between each attempt. Assuming the device accepts them, trying all twenty codes takes about six minutes.
Disabling OTG connections for a device is one way to prevent this kind of attack, and not configuring a common PIN like ‘1111’ or ‘1234’ is even better. You can see the brute forcing in action in the video, embedded below.
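The attack logic itself is simple enough to sketch in Python (the real project runs as an Arduino-style sketch on the Digispark’s ATtiny85, emitting keystrokes instead of printing). The PIN list follows commonly cited analyses of leaked 4-digit PINs, and the 17-second delay is an assumption chosen to sidestep Android’s lockout backoff while matching the roughly six-minute runtime quoted above.

```python
# The 20 most common 4-digit PINs per published analyses of leaked PIN
# datasets (order illustrative)
COMMON_PINS = [
    "1234", "1111", "0000", "1212", "7777", "1004", "2000", "4444",
    "2222", "6969", "9999", "3333", "5555", "6666", "1122", "1313",
    "8888", "4321", "2001", "1010",
]

DELAY_BETWEEN_ATTEMPTS = 17.0  # seconds; assumed spacing to dodge lockout backoff

def estimated_runtime(pins, delay=DELAY_BETWEEN_ATTEMPTS):
    """Total seconds to try every PIN at a fixed delay per attempt."""
    return len(pins) * delay

def inject(pin):
    # On the real device this would be DigiKeyboard keystrokes; here it's a stub
    print(f"typing {pin} + ENTER")

minutes = estimated_runtime(COMMON_PINS) / 60
print(f"{len(COMMON_PINS)} attempts take about {minutes:.1f} minutes")
```

Twenty attempts at 17 seconds each works out to roughly 5.7 minutes, consistent with the six-minute figure above.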
Bruteforcing PIN protection of popular app using $3 ATTINY85 #Arduino
Testing all possible PIN combinations (10,000) would take less than 1.5 hours without getting the account locked. It is possible because the PIN is limited to only 4 digits, without biometric authentication. #rubberducky