D-Link, Comba network gear leave passwords open for potentially whole world to see

DSL modems and Wi-Fi routers from D-Link and Comba have been found to be leaving owners’ passwords out in the open.

Simon Kenin, a security researcher with Trustwave SpiderLabs, took credit for the discovery of five bugs that leave user credentials accessible to attackers.

For D-Link gear, two bugs were discovered in the firmware for the DSL-2875AL and DSL-2877AL wireless ADSL modem/routers. The first describes a configuration file in the DSL-2875AL that contains the user's password and requires no authentication to view: you just have to be able to reach the web-based admin console, either on the local network or across the internet, depending on the device’s configuration.

“This file is available to anyone with access to the web-based management IP address and does not require any authentication,” Trustwave’s Karl Sigler said on Tuesday. “The path to the file is https://[router ip address]/romfile.cfg and the password is stored in clear text there.”

The second flaw is present in both the 2875AL and 2877AL models. It is less a “flaw” than a glaring security oversight: the source code of the router’s log-in page (again, accessible to anyone who can reach its built-in web UI server) contains the user’s ISP username and password in plain text. This can be pulled up simply by choosing the “view source” option in a browser window.
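For admins who want to check their own device, the advisory boils down to an unauthenticated HTTPS fetch of a known path. A minimal sketch of such a check (the `password=` key in the parser is a hypothetical illustration, not the actual romfile.cfg layout):

```python
from typing import Optional

def romfile_url(router_ip: str) -> str:
    """Build the unauthenticated config URL named in the advisory."""
    return f"https://{router_ip}/romfile.cfg"

def find_cleartext_password(config_text: str) -> Optional[str]:
    """Scan fetched config text for a cleartext password field.

    The 'password=' key is illustrative; the real file layout may differ.
    """
    for line in config_text.splitlines():
        if line.lower().startswith("password="):
            return line.split("=", 1)[1].strip().strip('"')
    return None
```

Fetching the URL with any HTTP client (against your own router only) and running the result through a parser like this is enough to confirm whether a device is exposed.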

Fixes have been released for both models. Those with the 2877AL modem will want to get Firmware 1.00.20AU 20180327, while owners of the 2875AL should update to at least version 1.00.08AU 20161011.

The Register tried to get in touch with D-Link for comment on the matter, but was unable to get a response. Trustwave didn’t fare much better, saying that the bugs were only listed as patched after the researchers told D-Link they were going public with the findings, after waiting months for the router biz to get its act together.

Source: D-Link, Comba network gear leave passwords open for potentially whole world to see • The Register

Cheap GPS kiddie trackers have default password 123456 and send all information unencrypted

GPS trackers are designed to bring you greater peace of mind by helping you to locate your kids, your pets, and even your car. They can help keep the elderly or disabled safe by providing them with a simple SOS button to call for immediate help. Many devices are marketed for these purposes on common sites like Amazon and eBay and can be purchased for $25-$50 USD, making them more financially attractive than using a smartphone for some of the same capabilities.

[…]

As the instructions state, there is a web portal and a mobile application that you can use to manage the tracker. We took the path of least resistance and first opened the web application, which is reachable at http://en.i365gps.com.

[…]

As you can see, the first red flag is that the login form is served over plain HTTP, not the more secure HTTPS. Moreover, you have two options to connect to the cloud: using an account with username and password, or using an ID and password. Which one to pick? We turned to the leaflet for answers. It says:

Figure 5: Default password

This applies to both the Android application and the web application. Also alarming is that last sentence: “…user needs to contact reseller to register a username if need to login by username.” Since you have to call the reseller to request a username, it’s fairly clear you are intended to use the ID, the password for which is “123456.” Not a good start.

[…]

Ok, so let’s get back to the IMEI/ID that, in combination with the default password, serves as the credentials for your account. Remember how easy it was to scan through that 1M of possible IMEI numbers, since they share the same prefix? We scanned an arbitrary 4M sequential serial numbers ourselves just to get an idea of the scale of the devices out there, and we learned that at least six hundred thousand devices are live in the wild with default passwords. We executed a deeper scan of a subset of one million of these devices to determine make, model, and location; of the one million we scanned, over 167,000 were locatable.
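The scale of the problem is easy to appreciate: with a fixed vendor prefix, the device ID suffix is just a counter. A toy sketch of why that is (the prefix below is made up, not a real i365 prefix):

```python
def candidate_ids(prefix: str, start: int, count: int):
    """Yield sequential 15-digit IMEI-style IDs sharing one vendor prefix.

    Pair each ID with the default password "123456" and the whole ID
    space is walkable in a single loop -- which is exactly what makes a
    default-credential scheme with predictable IDs so dangerous.
    """
    width = 15 - len(prefix)
    for n in range(start, start + count):
        yield f"{prefix}{n:0{width}d}"

# A made-up 8-digit prefix leaves a 7-digit (10-million) suffix space
ids = list(candidate_ids("35847100", 0, 3))
```

Nothing here talks to any server; the point is simply that enumerating millions of candidate IDs is a one-line loop, not a feat of engineering.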

Figure 29: a result of a detailed scan of 1M serial numbers for tracker devices
Figure 30: last GPS position of trackers

Now it’s obvious that the same infrastructure is used for all, or at least most, of the trackers from this vendor, as we identified 29 different models of trackers during this scan of 1M IMEIs. All the models are sold by the wholesaler Shenzhen i365, and we were able to determine that some models in this scan are being sold under different product names, which leads us to the conclusion that the infrastructure and devices are being white-labelled and sold under different brand names. In many instances, however, we were only able to determine a generic model number.

Number of trackers Tracker model
60601 T58
36658 A9
26654 T8S
20778 T28
20640 TQ
11480 A16
10263 A6
9121 3G
7452 A18
5092 A21
4083 T28A
3626 A12
2921 A19
2839 A20
2638 A20S
2610 S1
1664 P1
749 FA23
607 A107
280 RomboGPS
79 PM01
55 A21P
26 PM02
16 A16X
15 PM03
4 WA3
4 P1-S
3 S6
1 S9

Figure 31: trackers models and their counts in 1M detailed sample scan

Figure 32: affected models

You are probably already feeling like there is a lot more to this story than meets the eye, as during this scan we found devices that are not produced by this particular company. It turns out that this problem is much bigger than it looks. How big? We’ll show you in the follow-up, which goes deeper into the relationships between different products and companies, and into many surprising facts about the cloud infrastructure. We found more alarming vulnerabilities and many more instances of this cloud and these trackers.

But so far we think we are speaking of approximately 50 different applications sharing the same platform (and probably also the same vulnerabilities) as seen in this picture:

Figure 33: the research continues, see you in part 2 where we uncover more about platform/cloud

Source: The secret life of GPS trackers (1/2) – Avast Threat Labs

Tesla Malfunction Locks Out Owners Who Depended on App for Entry, Forces Them to Scramble for ‘Keys’

Some Tesla users who rely on the app to gain entry to their Model 3 were temporarily unable to get into their electric cars on Labor Day.

The Next Web reported that a number of people tweeted out their frustrations on Monday when they were “locked out” of their car due to phone app issues. Downdetector, a tracker for users to report technical difficulties with web-based services, also showed that many users were having trouble with Tesla’s app.

A Tesla spokesperson confirmed to Gizmodo that Tesla’s app was temporarily unavailable on Monday but full functionality was soon restored. Tweets suggest the app was down for at least three hours.

Source: Tesla Malfunction Locks Out Owners Who Depended on App for Entry, Forces Them to Scramble for ‘Keys’

Well done, Elon Musk!

Hundreds of Millions of Facebook Users’ Phone Numbers Exposed

Facebook is staring down yet another security blunder, this time with an incident involving an exposed server containing hundreds of millions of phone numbers that were previously associated with accounts on its platform.

The situation appears to be pinned to a feature, no longer enabled on the platform, that allowed users to search for someone based on their phone number. TechCrunch’s Zack Whittaker first reported Wednesday that a server—which did not belong to Facebook but was evidently not password protected and therefore accessible to anyone who could find it—was discovered online by security researcher Sanyam Jain and found to contain records on more than 419 million Facebook users, including 133 million records on users based in the U.S.

(A Facebook spokesperson disputed the 419 million figure in a call with Gizmodo, claiming the server contained “closer to half” of that number, but declined to provide a specific figure.)

According to TechCrunch, records contained on the server included a Facebook user’s phone number and individual Facebook ID. Using both, TechCrunch said it was able to cross-check them to verify records and additionally found that in some cases, records included a user’s country, name, and gender. The report stated that it’s unclear who scraped the data from Facebook or why. The Facebook spokesperson said that the company became aware of the situation a few days ago but would not specify an exact date.

Whittaker noted that having access to a user’s phone number could allow a bad actor to force-reset accounts linked to that number, and could further expose them to intrusions like spam calls or other abuse. But it could also allow a bad actor to pull up a host of private information on a person by plugging the number into any number of public databases, or, with some legwork or impersonation, grant a hacker access to apps or even a bank account.

Source: Hundreds of Millions of Facebook Users’ Phone Numbers Exposed

Don’t fly with your Explody MacBook!

Following an Apple notice that a “limited number” of 15-inch MacBook Pros may have faulty batteries that could potentially create a fire safety risk, multiple airlines have barred transporting Apple laptops in their checked luggage—in some cases, regardless of whether they fall under the recall.

Bloomberg reported Wednesday that Qantas Airways and Virgin Australia had joined the growing list of airlines enforcing policies around the MacBook Pros. In a statement by email, a spokesperson for Qantas told Gizmodo that “[u]ntil further notice, all 15 inch Apple MacBook Pros must be carried in cabin baggage and switched off for flight following a recall notice issued by Apple.”

Virgin Australia, meanwhile, said in a “Dangerous Goods” notice on its website that any MacBook model “must be placed in carry-on baggage only. No Apple MacBooks are permitted in checked in baggage until further notice.”

Apple in June announced a voluntary recall program for the affected models of 15-inch Retina display MacBook Pro, which it said were sold between September 2015 and February 2017. Apple said at the time it would fix affected models for free, adding that “[c]ustomer safety is always Apple’s top priority.”

Apple did not immediately return a request for comment about airline policies implemented in response to the recall.

Both Singapore Airlines and Thai Airways also recently instituted policies around the MacBook Pros. In a statement on its website over the weekend, Singapore Airlines said that passengers are prohibited from bringing affected models on its aircraft either in their carry-ons or in their checked luggage “until the battery has been verified as safe or replaced by the manufacturer.”

Bloomberg previously reported that airlines TUI Group Airlines, Thomas Cook Airlines, Air Italy, and Air Transat also introduced bans on the laptops. The cargo activity of all four is managed by Total Cargo Expertise, which reportedly said in an internal notice to its staff that the affected devices are “prohibited on board any of our mandate carriers.”

Both the Federal Aviation Administration and European Union Aviation Safety Agency said they had contacted airlines following Apple’s announcement regarding the recall. The FAA said that it alerted U.S. carriers to the issue in July.

Apple allows MacBook users to see if their devices are affected by inputting a serial number. While checking individual serial numbers for each and every device that comes through security checkpoints has the potential to slow service, banning all MacBooks either outright or in the cabin seems like a severe overreaction and, to be honest, a gigantic pain in the ass for customers.

Source: Airlines Are Banning MacBooks From Checked Luggage

I’d say removing MacBooks from checked luggage and then checking whether the serials are OK would take a stupid amount of time. Banning them from checked luggage makes perfect sense.

Lenovo Solution Centre can turn users into admins – Lenovo backdates LSC’s end-of-life to before the last release in response

Not only has a vulnerability been found in Lenovo Solution Centre (LSC), but the laptop maker fiddled with end-of-life dates to make it seem less important – and is now telling the world it EOL’d the vulnerable monitoring software before its final version was released.

The LSC privilege-escalation vuln (CVE-2019-6177) was found by Pen Test Partners (PTP), which said it has existed in the code since it first began shipping in 2011. It was bundled with the vast majority of the Chinese manufacturer’s laptops and other devices, and requires Windows to run. If you removed the app, or blew it away with a Linux install, say, you’re safe right now.

[…]

The solution? Uninstall Lenovo Solution Centre, and if you’re really keen you can install Lenovo Vantage and/or Lenovo Diagnostics to retain the same branded functionality, albeit without the priv-esc part.

All straightforward. However, it went a bit awry when PTP reported the vuln to Lenovo. “We noticed they had changed the end-of-life date to make it look like it went end of life even before the last version was released,” they told us.

Screenshots of the end-of-life dates – initially 30 November 2018, and then suddenly April 2018 after the bug was disclosed – can be seen on the PTP blog. The last official release of the software is dated October 2018, so Lenovo appears to have moved the EOL date back to April of that year for some reason.

Source: Security gone in 600 seconds: Make-me-admin hole found in Lenovo Windows laptop crapware. Delete it now • The Register

London Transport asked people to write down their Oyster passwords – but don’t worry

London-dwelling Alfie Fresta wanted a National Rail travelcard discount added to his London Oyster card so the discount would work automatically with his pay-as-you-go smartcard.

He was startled when London Overground staff at New Cross Gate station handed him a paper form with a box on it asking for his online Oyster account password.

“I was in utter disbelief,” Fresta told El Reg, having just read about Oyster online accounts being breached by credential-stuffing crooks. “Having worked on a number of web apps, I know storing passwords in clear text is, for lack of a better word, a ginormous no-no.”

Oyster plain text password form from Arriva Rail London, which operates London Overground

The Arriva Rail London form handed to Fresta. ARL is the outsourced operator for TfL’s London Overground services. Click to enlarge

Just to check that this wasn’t a local misunderstanding by station staff, Fresta checked it out at other stations – and was again asked to write down his password in plain text for staff to read.

TfL did not deny that this is its standard procedure for staff adding discounts to Oyster cards, but insisted in a statement to The Register that it doesn’t store those passwords and lets customers take the completed form away afterwards.

A spokeswoman told us: “Customers can add discounts to their Oyster cards at all station ticket machines and our staff are on hand to support them with this process. If a customer prefers to do this via a ticket office rather than a machine, then a password is temporarily provided to the ticket office staff via a form.

“The password is always entered in the presence of the customer and the form is returned to them to ensure it can be disposed of securely. Customers are advised to change the password on first login, if setting up an online Oyster account. We recognise that where possible this process could be improved and work is under way to identify options.”

Fresta was not impressed with TfL’s customer service, telling us he wasn’t given “any explanation as to how the information [would] be handled or why”.

Source: Yes, TfL asked people to write down their Oyster passwords – but don’t worry, they didn’t inhale • The Register

That’s insane!

Here’s a top tip: Don’t trust the new guy – block web domains less than a month old. They are bound to be dodgy

IT admins could go a long way towards protecting their users from malware and other dodgy stuff on the internet if they ban access to any web domain less than a month old.

This advice comes from Unit 42, the security branch of networking house Palo Alto Networks. To be exact, the recommendation is that any domain created in the past 32 days ought to be blocked. This comes after the gang studied newly-registered domains – NRDs for short – and found that more than 70 per cent fell under the classification of “suspicious,” “not safe for work,” or “malicious.”

“While this may be deemed a bit aggressive by some due to potential false-positives, the risk from threats via NRDs is much greater,” noted Unit 42’s Zhanhao Chen, Jun Javier Wang, and Kelvin Kwan. “At the bare minimum, if access to NRDs are allowed, then alerts should be set up for additional visibility.”

According to Unit 42’s study of new domains created on 1,530 different top level domains (TLDs) from March to May of this year, just 8.4 per cent of NRDs could be confirmed as hosting only benign pages. 2.32 per cent were confirmed not safe for work, while 1.27 per cent of the domains were classified as malicious, meaning they were found to host malware, phishing, or botnet command-and-control tools.

The solid majority of the domains, 69.73 per cent to be exact, fell under the label of “suspicious,” meaning the domains appear to have been parked, had insufficient content to be verified as legit, or were considered “questionable,” or “high risk,” but not flat-out malicious. 18.2 per cent were classified as just “other,” rather unhelpfully.

In other words, just under three quarters of new domains are used for sites that vary from completely empty, to shady at best, to verified as attack sites.
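The policy itself reduces to a date comparison. A minimal sketch of the 32-day rule, assuming you already have the domain’s WHOIS creation date from your own tooling:

```python
from datetime import datetime, timedelta

# Unit 42's recommended cutoff: block domains registered in the past 32 days
NRD_WINDOW = timedelta(days=32)

def should_block(creation_date: datetime, now: datetime) -> bool:
    """Block (or at least alert on) any domain younger than the NRD window."""
    return (now - creation_date) < NRD_WINDOW
```

In practice this sits behind a DNS filter or proxy policy; the same comparison also drives the "alert instead of block" option Unit 42 suggests for shops worried about false positives.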

Source: Here’s a top tip: Don’t trust the new guy – block web domains less than a month old. They are bound to be dodgy • The Register

Moscow’s blockchain voting system cracked a month before election, will be fixed due to responsible disclosure, open source and bug bounties

A French security researcher has found a critical vulnerability in the blockchain-based voting system Russian officials plan to use next month for the 2019 Moscow City Duma election.

Pierrick Gaudry, an academic at Lorraine University and a researcher for INRIA, the French research institute for digital sciences, found that he could compute the voting system’s private keys based on its public keys. These private keys are used together with the public keys to encrypt user votes cast in the election.

Gaudry blamed the issue on Russian officials using a variant of the ElGamal encryption scheme that used encryption key sizes that were too small to be secure. This meant that modern computers could break the encryption scheme within minutes.

“It can be broken in about 20 minutes using a standard personal computer, and using only free software that is publicly available,” Gaudry said in a report published earlier this month.

“Once these [private keys] are known, any encrypted data can be decrypted as quickly as they are created,” he added.
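Why do undersized keys fall so fast? In a small group, the discrete logarithm protecting the private key can be recovered by generic methods such as baby-step giant-step, which needs only about the square root of the group size in work. A toy demonstration (the parameters are deliberately tiny and are not Moscow’s actual parameters):

```python
from math import isqrt

def bsgs(g, h, p):
    """Solve g^x = h (mod p) by baby-step giant-step in O(sqrt(p)) time."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}  # baby steps: g^j
    inv_gm = pow(g, -m, p)                      # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range(m):                          # giant steps: h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * inv_gm % p
    return None

# Toy ElGamal key pair in a far-too-small group
p, g = 101, 2             # public parameters
x = 57                    # private key
h = pow(g, x, p)          # public key
recovered = bsgs(g, h, p) # an attacker recovers x from public data alone
```

Generic square-root attacks, plus far faster index-calculus methods at real sizes, are why 256-bit moduli fall on a desktop in minutes while properly sized parameters do not.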

What an attacker can do with these encryption keys is currently unknown, since the voting system’s protocols weren’t yet available in English, so Gaudry couldn’t investigate further.

“Without having read the protocol, it is hard to tell precisely the consequences, because, although we believe that this weak encryption scheme is used to encrypt the ballots, it is unclear how easy it is for an attacker to have the correspondence between the ballots and the voters,” the French researcher said.

“In the worst case scenario, the votes of all the voters using this system would be revealed to anyone as soon as they cast their vote.”

[…]

The French academic was able to test Moscow’s upcoming blockchain-based voting system because officials published its source code on GitHub in July, and asked security researchers to take their best shots.

Following Gaudry’s discovery, the Moscow Department of Information Technology promised to fix the reported issue — the use of a weak private key.

“We absolutely agree that 256×3 private key length is not secure enough,” a spokesperson said in an online response. “This implementation was used only in a trial period. In few days the key’s length will be changed to 1024.”

[…]

However, a public key of a length of 1024 bits may not be enough, according to Gaudry, who believes officials should use one of at least 2048 bits instead.

[…]

“There is a good side to this,” he added. “The fact that Moscow allowed others to look at the code, research it and then help them secure it.”

Furthermore, Moscow officials also approved a monetary reward for Gaudry, who, according to Russian news site Meduza, stands to make one million Russian rubles, which is just over $15,000.

According to a previous report from July, Gaudry’s reward is near the top prize the Moscow local government promised bug hunters when it put the code on GitHub, which was 1.5 million Russian rubles ($22,500).

“The US system COULD learn a lot from Mother Russia on this one,” Roberts said, referring to the plethora of growing pains the US has been going through recently while trying to secure its electronic voting machines.

These growing pains mostly come from voting machine vendors, who are refusing to engage with the cyber-security community, something the Moscow government had no problem doing.

This closed-source nature around electronic voting machines and election systems used in the US is the reason why Microsoft recently announced plans to open-source on GitHub a new technology for securing electronic voting machines.

Source: Moscow’s blockchain voting system cracked a month before election | ZDNet

Bug-hunter finds local privilege escalation in Steam. Valve refuses to acknowledge and so he’s dropped it on the internet.

The way Kravets tells it (Valve did not respond to a request for comment), the whole saga started earlier this month when he went to report a separate elevation-of-privilege flaw in the Steam client, the software gamers use to purchase and run games from the games service.

Valve declined to recognize and pay out for the bug, which they said required local access and the ability to drop files on the target machine in order to run and was therefore not really a vulnerability.

“I received a lot of feedback. But Valve didn’t say a single word, HackerOne sent a huge letter and, mostly, kept silence,” Kravets wrote. “Eventually things escalated with Valve and I got banned by them on HackerOne — I can no longer participate in their vulnerability rejection program (the rest of H1 is still available though).”

Now, some two weeks later, Kravets has discovered and disclosed a second elevation-of-privilege flaw. Like the first, this flaw (a DLL-loading vulnerability) would require the attacker to have access to the target’s machine and the ability to write files locally.

Source: Disgruntled bug-hunter drops Steam zero-day to get back at Valve for refusing him a bounty • The Register

The Register then says something pretty stupid:

While neither flaw would be considered a ‘critical’ risk as they each require the attacker to already have access to the target machine (if that’s the case you’re already in serious trouble, so what’s another flaw)

It’s an escalation flaw, which means that as a normal user you can do things only administrators are supposed to be able to do. That’s a problem.

Cut off your fingers: Data Breach in Biometric Security Platform Affecting Millions of Users in 83 Countries – yes unencrypted and yes, editable

Led by internet privacy researchers Noam Rotem and Ran Locar, vpnMentor’s team recently discovered a huge data breach in security platform BioStar 2.  

BioStar 2 is a web-based biometric security smart lock platform. A centralized application, it allows admins to control access to secure areas of facilities, manage user permissions, integrate with 3rd party security apps, and record activity logs.

As part of the biometric software, BioStar 2 uses facial recognition and fingerprinting technology to identify users.

The app is built by Suprema, one of the world’s top 50 security manufacturers, with the highest market share in biometric access control in the EMEA region. Suprema recently partnered with Nedap to integrate BioStar 2 into their AEOS access control system.

AEOS is used by over 5,700 organizations in 83 countries, including some of the biggest multinational businesses, many small local businesses, governments, banks, and even the UK Metropolitan Police.

The data leaked in the breach is of a highly sensitive nature. It includes detailed personal information of employees and unencrypted usernames and passwords, giving hackers access to user accounts and permissions at facilities using BioStar 2. Malicious agents could use this to hack into secure facilities and manipulate their security protocols for criminal activities. 

This is a huge leak that endangers both the businesses and organizations involved, as well as their employees. Our team was able to access over 1 million fingerprint records, as well as facial recognition information. Combined with the personal details, usernames, and passwords, the potential for criminal activity and fraud is massive. 

Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.
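Unencrypted passwords, at least, have a well-known fix: never store the password at all, only a random salt and a slow hash. A minimal sketch of what a platform like this should do, using only the Python standard library:

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Store only a random salt and a slow, salted hash -- never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```

Fingerprints and face templates cannot be protected this way: a hash is useless for matching noisy biometric samples, and a leaked template cannot be rotated like a password, which is exactly the point of the paragraph above.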

[…]

Our team was able to access over 27.8 million records, a total of 23 gigabytes of data, which included the following information:

  • Access to client admin panels, dashboards, back end controls, and permissions
  • Fingerprint data
  • Facial recognition information and images of users
  • Unencrypted usernames, passwords, and user IDs
  • Records of entry and exit to secure areas
  • Employee records including start dates
  • Employee security levels and clearances
  • Personal details, including employee home address and emails
  • Businesses’ employee structures and hierarchies
  • Mobile device and OS information

[…]

With this leak, criminal hackers have complete access to admin accounts on BioStar 2. They can use this to take over a high-level account with complete user permissions and security clearances, and make changes to the security settings in an entire network. 

Not only can they change user permissions and lock people out of certain areas, but they can also create new user accounts – complete with facial recognition and fingerprints – to give themselves access to secure areas within a building or facility.

Furthermore, hackers can change the fingerprints of existing accounts to their own and hijack a user account to access restricted areas undetected. Hackers and other criminals could potentially create libraries of fingerprints to be used any time they want to enter somewhere without being detected.

This provides a hacker and their team open access to all restricted areas protected with BioStar 2. They also have access to activity logs, so they can delete or alter the data to hide their activities.

As a result, a hacked building’s entire security infrastructure becomes useless. Anybody with this data will have free movement to go anywhere they choose, undetected.

Source: Report: Data Breach in Biometric Security Platform Affecting Millions of Users

And that’s why biometrics are a poor choice for identification – you can’t change your fingertips, but you can edit the records. Using this data it should be fairly easy to print out fingerprints, if you can’t be bothered to edit the database instead.

Researchers Bypass Apple FaceID Using glasses to fool liveness detection

Researchers on Wednesday at Black Hat USA 2019 demonstrated an attack that allowed them to bypass a victim’s FaceID and log into their phone simply by putting a pair of modified glasses on their face. By carefully placing tape over the lenses of a pair of glasses and putting them on the victim’s face, the researchers showed how they could bypass Apple’s FaceID in a specific scenario. The attack itself is difficult, given that the bad actor would need to put the glasses on an unconscious victim without waking them up.

To launch the attack, researchers with Tencent tapped into a feature behind biometrics called “liveness” detection, which is part of the biometric authentication process that sifts through “real” versus “fake” features on people.

[…]

Researchers specifically homed in on how liveness detection scans a user’s eyes. They discovered that the abstraction of the eye for liveness detection renders a black area (the eye) with a white point on it (the iris). They also discovered that if a user is wearing glasses, the way liveness detection scans the eyes changes.

“After our research we found weak points in FaceID… it allows users to unlock while wearing glasses… if you are wearing glasses, it won’t extract 3D information from the eye area when it recognizes the glasses.”

Putting these two factors together, the researchers created a prototype pair of glasses – dubbed “X-glasses” – with black tape on the lenses and white tape inside the black tape. Using this trick, they were able to unlock a victim’s phone and then transfer his money through a mobile payment app, placing the taped glasses on the sleeping victim’s face to bypass the attention-detection mechanism of both FaceID and similar technologies.

The attack comes with obvious drawbacks – the victim must be unconscious, for one, and can’t wake up when the glasses are placed on their face.

Source: Researchers Bypass Apple FaceID Using Biometrics ‘Achilles Heel’ | Threatpost

A reminder why Open Source is so important: Someone audited Kubernetes

The Cloud Native Computing Foundation (CNCF) today released a security audit of Kubernetes, the widely used container orchestration software, and the findings are about what you’d expect for a project with about two million lines of code: there are plenty of flaws that need to be addressed.

The CNCF engaged two security firms, Trail of Bits and Atredis Partners, to poke around Kubernetes code over the course of four months. The companies looked at Kubernetes components involved in networking, cryptography, authentication, authorization, secrets management, and multi-tenancy.

Having identified 34 vulnerabilities – 4 high severity, 15 medium severity, 8 low severity and 7 informational severity – the Trail of Bits report advises project developers to rely more on standard libraries, to avoid custom parsers and specialized configuration systems, to choose “sane defaults,” and to ensure correct filesystem and kernel interactions prior to performing operations.

“The assessment team found configuration and deployment of Kubernetes to be non-trivial, with certain components having confusing default settings, missing operational controls, and implicitly designed security controls,” the Trail of Bits report revealed. “Also, the state of the Kubernetes codebase has significant room for improvement.”

Underscoring these findings, Kubernetes 1.13.9, 1.14.5, and 1.15.2 were released on Monday to fix two security issues in the software, CVE-2019-11247 and CVE-2019-11249. The former could allow a user in one namespace to access a resource scoped to a cluster. The latter could allow a malicious container to create or replace a file on the client computer when the client employs the kubectl cp command.
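The kubectl cp issue belongs to a classic class of bug: extracting an archive whose member names can contain `../`. A sketch of the defensive check (an illustration of the bug class in Python, not the actual Kubernetes patch, which is in Go):

```python
import os

def safe_join(dest_dir: str, member_name: str) -> str:
    """Resolve an archive member path and refuse anything escaping dest_dir."""
    base = os.path.normpath(dest_dir)
    target = os.path.normpath(os.path.join(base, member_name))
    if target != base and not target.startswith(base + os.sep):
        raise ValueError(f"blocked path traversal: {member_name!r}")
    return target
```

Without a check like this, a malicious container can hand the client a tar stream whose member named `../../.bashrc` lands outside the destination directory, which is roughly the file-overwrite scenario CVE-2019-11249 describes.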

As noted by the CNCF, the security auditors found: policy application inconsistencies, which prompt a false sense of security; insecure TLS used by default; environmental variables and command-line arguments that reveal credentials; secrets leaked in logs; no support for certificate revocation, and seccomp (a system-call filtering mechanism in the Linux kernel) not activated by default.

The findings include advice to cluster admins, such as not using both Role-Based Access Controls and Attribute-Based Access Controls because of the potential for inadvertent permission grants if one of these fails.

They also include various recommendations and best practices for developers to follow as they continue making contributions to Kubernetes.

For example, one recommendation is to avoid hardcoding file paths to dependencies. The report points to Kubernetes’ kubelet process, “where a dependency on hardcoded paths for PID files led to a race condition which could allow an attacker to escalate privileges.”
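The hardcoded-path problem is easy to reproduce outside Kubernetes: any process that writes a PID file at a fixed, predictable location can be raced by a local attacker who plants a file or symlink there first. A hedged sketch of the safer pattern using only the standard library — a private directory plus `O_EXCL`/`O_NOFOLLOW`, so a pre-planted entry makes the open fail instead of being followed:

```python
import os
import tempfile

def write_pidfile_unsafe(path):
    # Anti-pattern: fixed path, follows symlinks, truncates whatever is there.
    with open(path, "w") as f:
        f.write(str(os.getpid()))

def write_pidfile_safer(directory=None):
    """Write a PID file into a freshly created private directory.
    O_CREAT|O_EXCL refuses to reuse an existing file; O_NOFOLLOW
    (where available) refuses to traverse a planted symlink."""
    directory = directory or tempfile.mkdtemp(prefix="svc-")
    path = os.path.join(directory, "service.pid")
    flags = os.O_WRONLY | os.O_CREAT | os.O_EXCL | getattr(os, "O_NOFOLLOW", 0)
    fd = os.open(path, flags, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(str(os.getpid()))
    return path
```

The key design choice is that the exclusive create turns the race from “attacker wins silently” into “service fails loudly,” which is the failure mode you want.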

The report also advises enforcing minimum files permissions, monitoring processes on Linux, and various other steps to make Kubernetes more secure.

In an email to The Register, Chris Aniszczyk, CTO and COO of CNCF, expressed satisfaction with the audit process. “We view it positively that the whole process of doing a security audit was handled transparently by the members of the Kubernetes Security Audit WG, from selecting a vendor to working with the upstream project,” he said. “I don’t know of any other open source organization that has shared and open sourced the whole process around a security audit and the results. Transparency builds trust in open source communities, especially around security.”

Asked how he’d characterize the risks present in Kubernetes at the moment, Aniszczyk said, “The Kubernetes developers responded quickly and created appropriate CVEs for critical issues. In the end, we would rather have the report speak for itself in terms of the findings and recommendations.”

Source: Captain, we’ve detected a disturbance in space-time. It’s coming from Earth. Someone audited the Kubernetes source • The Register

Why is this good? Because these holes will be fixed instead of exploited.

Democratic Senate campaign group exposed 6.2 million Americans’ emails

Data breach researchers at security firm UpGuard found the data in late July, and traced the storage bucket back to a former staffer at the Democratic Senatorial Campaign Committee, an organization that seeks grassroots donations and contributions to help elect Democratic candidates to the U.S. Senate.

Following the discovery, UpGuard researchers reached out to the DSCC and the storage bucket was secured within a few hours. The researchers shared their findings exclusively with TechCrunch before publishing them.

The spreadsheet was titled “EmailExcludeClinton.csv” and was found in a similarly named unprotected Amazon S3 bucket without a password. The file was uploaded in 2010 — a year after former Democratic senator and presidential candidate Hillary Clinton, whom the data is believed to be named after, became secretary of state.

UpGuard said the data may be people “who had opted out or should otherwise be excluded” from the committee’s marketing.


A redacted portion of the email spreadsheet (Image: UpGuard/supplied)

Stewart Boss, a spokesperson for the DSCC, denied the data came from Sen. Hillary Clinton’s campaign and claimed the data had been created using the committee’s own information.

“A spreadsheet from nearly a decade ago that was created for fundraising purposes was removed in compliance with the stringent protocols we now have in place,” he told TechCrunch in an email.

Despite several follow-ups, the spokesperson declined to say how the email addresses were collected, where the information came from, what the email addresses were used for, how long the bucket was exposed, or if the committee knew if anyone else accessed or obtained the data.

We also contacted the former DSCC staffer who owned the storage bucket and allegedly created the database, but did not hear back.

Most of the email addresses were from consumer providers, like AOL, Yahoo, Hotmail and Gmail, but the researchers found more than 7,700 U.S. government email addresses and 3,400 U.S. military email addresses, said the UpGuard researchers.

The DSCC security lapse is the latest in a string of data exposures in recent years — some of which were also discovered by UpGuard. Two incidents in 2015 and 2017 exposed 191 million and 198 million Americans’ voter data, respectively, including voter profiles and political persuasions. Last year, 14 million voter records on Texas residents were also found on an exposed server.

Source: Democratic Senate campaign group exposed 6.2 million Americans’ emails | TechCrunch

And Amazon still isn’t securing these buckets by default.

We’ve, um, changed our password policy, says CafePress amid reports of 23m pwned accounts

Twee T-shirts ‘n’ merch purveyor CafePress had 23 million user records swiped – reportedly back in February – and this morning triggered a mass password reset, calling it a change in internal policy.

Details of the security breach emerged when infosec researcher Troy Hunt’s Have I Been Pwned service – which lists websites known to have been hacked, allowing people to check if their information has been stolen – began firing out emails to affected people in the small hours of this morning.

According to HIBP, a grand total of 23,205,290 CafePress customers’ data was swiped by miscreants, including email addresses, names, phone numbers, and physical addresses.

We have asked CafePress to explain itself and will update this article if the company responds. There was no indication on its UK or US websites at the time of writing to indicate that the firm had acknowledged any breach.

[…]

Musing on the 77 per cent of email addresses from the breach having been seen in previous HIBP reports, Woodward said that factoid “brings me to a problem that isn’t being discussed that much, and which this kind of breach does highlight: the use of email as the user name. It’s clearly meant to make life easier for users, but the trouble is once hackers know an email has been used as a username in one place it is instantly useful for mounting credential-stuffing attacks elsewhere.”

“I wonder,” he told The Register, “if we shouldn’t be using unique usernames and passwords for each site. However, it would mean that it becomes doubly difficult to keep track of your credentials, especially if you’re using different strong passwords for each site, which I hope they are. But all users need do is start using a password manager, which I really wish they would.”
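Password managers pair naturally with breach checking. Have I Been Pwned’s Pwned Passwords API uses k-anonymity: the client sends only the first five hex characters of the password’s SHA-1 hash and matches the returned suffixes locally, so the full password (and even its full hash) never leaves the machine. A sketch of the client-side half — the range endpoint shown in the comment is HIBP’s documented one, but the network call itself is deliberately left out:

```python
import hashlib

def hibp_range_query(password):
    """Split a password's SHA-1 into the 5-char prefix sent to the API
    and the 35-char suffix matched locally against the response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query("password")
# The client would fetch https://api.pwnedpasswords.com/range/<prefix>
# and scan the response lines for <suffix>; a hit means the password
# has appeared in a known breach.
```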

Source: We’ve, um, changed our password policy, says CafePress amid reports of 23m pwned accounts • The Register

You Can’t Trust Companies to Tell the Truth About Data Breaches

Last week, online sneaker-trading platform StockX asked its users to reset their passwords due to “recently completed system updates on the StockX platform.” In actuality, the company suffered a large data breach back in May, and only finally came clean about it when pressed by reporters who had access to some of the leaked data.

In other words, StockX lied. And while it disclosed details on the breach in the end, there’s still no explanation for why it took StockX so long to figure out what happened, nor why the company felt the need to muddy the situation with its suspicious password-reset email last week.

While most companies are fairly responsible about security disclosures, there’s no question that plenty would prefer if information about massive security breaches affecting them never hit the public eye. And even when companies have to disclose the details of a breach, they can get cagey—as we saw with Capital One’s recent problems.

Source: You Can’t Trust Companies to Tell the Truth About Data Breaches

Sadly it’s partially understandable, considering the lawsuit shotguns brought to bear on companies following disclosure.

Having said that, many of the disclosures are the result of really, really stupid mistakes, such as storing credentials in plain text and not securing AWS buckets.

Monzo online bank stored bank card codes in log files as plain text

Trendy online-only Brit bank Monzo is telling hundreds of thousands of its customers to pick a new PIN – after it discovered it was storing their codes as plain-text in log files.

As a result, 480,000 folks, a fifth of the bank’s customers, now have to go to a cash machine and reset their PINs.

The bank said the numbers, normally tightly secured with extremely limited access, had accidentally been kept in an encrypted-at-rest log file. The contents of those logs were, however, accessible to roughly 100 Monzo engineers who would normally have neither the clearance nor any need to see customer PINs.

The PINs were logged for punters who had used the “card number reminder” and “cancel a standing order” features.

To hear Monzo tell it, the misconfigured logs, along with the PINs, were discovered on Friday evening. By Saturday morning, the UK bank updated its mobile app so that no new PINs were sent to the log collector. On Monday, the last of the logged data had been deleted.
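Monzo’s mistake is a classic one: logging whole request or response objects without scrubbing them. A common defence is a logging filter that redacts known-sensitive fields before anything reaches the log collector. A minimal sketch — the field names are illustrative, not Monzo’s:

```python
import logging
import re

# Match JSON-style sensitive fields, e.g. "pin": "1234"
SENSITIVE = re.compile(r'("(?:pin|password|card_number)"\s*:\s*)"[^"]*"')

class RedactingFilter(logging.Filter):
    """Scrub sensitive JSON-style fields out of log messages in place."""
    def filter(self, record):
        record.msg = SENSITIVE.sub(r'\1"[REDACTED]"', str(record.msg))
        return True

logger = logging.getLogger("payments")
logger.addFilter(RedactingFilter())
```

A filter like this is a backstop, not a substitute for never passing the sensitive value to the logging layer in the first place — but it would have kept the PINs out of Monzo’s log files.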

Source: PIN the blame on us, says Monzo in mondo security blunder: Bank card codes stored in log files as plain text • The Register

It’s 2019 – and you can completely pwn a Qualcomm-powered Android over the air

It is possible to thoroughly hijack a nearby vulnerable Qualcomm-based Android phone, tablet, or similar gadget, via Wi-Fi, we learned on Monday. This likely affects millions of Android devices.

Specifically, the following two security holes, dubbed Qualpwn and found by Tencent’s Blade Team, can be leveraged one after the other to potentially take over a handheld:

CVE-2019-10540 […] could be exploited by nearby miscreants over the air to silently squirt spyware into your phone to snoop on its wireless communications.

CVE-2019-10538: This vulnerability can be exploited by malicious code running within the Wi-Fi controller to overwrite parts of the Linux kernel running the device’s main Android operating system, paving the way for a full device compromise.

Source: It’s 2019 – and you can completely pwn a Qualcomm-powered Android over the air • The Register

E3 Expo Leaks The Personal Information Of Over 2,000 Journalists

A spreadsheet containing the contact information and personal addresses of over 2,000 games journalists, editors, and other content creators was recently found to have been published and publicly accessible on the website of the E3 Expo.

The Entertainment Software Association, the organization that runs E3, has since removed the link to the file, as well as the file itself, but the information has continued to be disseminated online in various gaming forums. While many of the individuals listed in the documents provided their work addresses and phone numbers when they registered for E3, many others, especially freelance content creators, seem to have used their home addresses and personal cell phones, which have now been publicized. This leak makes it possible for bad actors to misuse this information to harass journalists. Two people who say their private information appeared in the leak have informed Kotaku that they have already received crank phone calls since the list was publicized.

Source: E3 Expo Leaks The Personal Information Of Over 2,000 Journalists

Small aircraft can be quite easily hacked to present wrong readings, change trim and autopilot settings – if someone has physical access to it.

Modern aircraft systems are becoming increasingly reliant on networked communications systems to display information to the pilot as well as control various systems aboard aircraft. Small aircraft typically maintain the direct mechanical linkage between the flight controls and the flight surface. However, electronic controls for flaps, trim, engine controls, and autopilot systems are becoming more common. This is similar to how most modern automobiles no longer have a physical connection between the throttle and the actuator that causes the engine to accelerate.

Before digital systems became common within aircraft instrumentation, the gauges and flight instruments would rely on mechanical and simple electrical controls that were directly connected to the source of the data they were displaying to the pilot. For example, the altitude and airspeed indicators would be connected to devices that measure the speed of airflow through a tube as well as the pressure outside the aircraft. In addition, the attitude and directional indicators would be powered by a vacuum source that drove a mechanical gyroscope. The flight surfaces would be directly connected to the pilot’s control stick or yoke—on larger aircraft, this connection would be via a hydraulic interface. Some flight surfaces, such as flaps and trim tabs, would have simple electrical connections that would directly turn motors on and off.

Modern aircraft use a network of electronics to translate signals from the various sensors and place this data onto a network to be interpreted by the appropriate instruments and displayed to the pilot. Together, the physical network, called a “vehicle bus,” and a common communications method called Controller Area Network (CAN) create the “CAN bus,” which serves as the central nervous system of a vehicle using this method. In avionics, these systems provide the foundation of control systems and sensor systems and collect data such as altitude, airspeed, and engine parameters such as fuel level and oil pressure, then display them to the pilot.

After performing a thorough investigation on two commercially available avionics systems, Rapid7 demonstrated that it was possible for a malicious individual to send false data to these systems, given some level of physical access to a small aircraft’s wiring. Such an attacker could attach a device—or co-opt an existing attached device—to an avionics CAN bus in order to inject false measurements and communicate them to the pilot. These false measurements may include the following:

  • Incorrect engine telemetry readings

  • Incorrect compass and attitude data

  • Incorrect altitude, airspeed, and angle of attack (AoA) data

In some cases, unauthenticated commands could also be injected into the CAN bus to enable or disable autopilot or inject false measurements to manipulate the autopilot’s responses. A pilot relying on these instrument readings would not be able to tell the difference between false data and legitimate readings, so this could result in an emergency landing or a catastrophic loss of control of an affected aircraft.
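The underlying weakness is that classic CAN has no sender authentication: a frame is just an arbitration ID, a length, and up to eight data bytes, and every node trusts whatever appears on the bus. A sketch of how little a spoofed frame needs — the wire layout below is the Linux SocketCAN `struct can_frame` shape, and the message ID and “airspeed” encoding are made up for illustration, since real avionics message layouts are proprietary:

```python
import struct

def build_can_frame(arbitration_id, data):
    """Pack a classic CAN 2.0 frame in the SocketCAN wire layout:
    32-bit ID, 8-bit DLC, 3 padding bytes, then 8 data bytes.
    Note there is no field identifying or authenticating the sender."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("<IB3x8s", arbitration_id, len(data), data.ljust(8, b"\x00"))

# A spoofed "airspeed" reading: hypothetical ID 0x123, value 250
# as a little-endian 16-bit integer. Any node on the bus could send this,
# and receivers have no way to tell it from the real sensor.
frame = build_can_frame(0x123, struct.pack("<H", 250))
```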

While the impact of such an attack could be dire, we want to emphasize that this attack requires physical access, something that is highly regulated and controlled in the aviation sector. While we believe that relying wholly on physical access controls is unwise, such controls do make it much more difficult for an attacker to access the CAN bus and take control of the avionics systems.

Source: [Security Research] CAN Bus Network Integrity in Avionics Systems | Rapid7

Facebook’s answer to the encryption debate: install spyware with content filters! (updated: maybe not)

The encryption debate is typically framed around the concept of an impenetrable link connecting two services whose communications the government wishes to monitor. The reality, of course, is that the security of that encryption link is entirely separate from the security of the devices it connects. The ability of encryption to shield a user’s communications rests upon the assumption that the sender and recipient’s devices are themselves secure, with the encrypted channel the only weak point.

After all, if either user’s device is compromised, unbreakable encryption is of little relevance.

This is why surveillance operations typically focus on compromising end devices, bypassing the encryption debate entirely. If a user’s cleartext keystrokes and screen captures can be streamed off their device in real-time, it matters little that they are eventually encrypted for transmission elsewhere.

[…]

Facebook announced earlier this year preliminary results from its efforts to move a global mass surveillance infrastructure directly onto users’ devices where it can bypass the protections of end-to-end encryption.

In Facebook’s vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user’s device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.
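Mechanically, the scheme described is simple: match each message against a centrally supplied blacklist on the device, before encryption on send and after decryption on receive. A toy sketch of why end-to-end encryption offers no protection against it — the blacklist contents and function names here are invented for illustration:

```python
import hashlib

# Centrally distributed blacklist of content hashes, updated from the cloud.
BLACKLIST = {hashlib.sha256(b"forbidden example").hexdigest()}

def scan_before_send(cleartext):
    """Runs on-device against the cleartext, so the E2E encryption that
    follows never sees the scan and cannot prevent it."""
    return hashlib.sha256(cleartext).hexdigest() in BLACKLIST

def send(cleartext, encrypt):
    if scan_before_send(cleartext):
        # In the described design, a copy would be reported upstream here.
        return None
    return encrypt(cleartext)
```

The point of the sketch is the ordering: the scan sits on the cleartext side of the encryption boundary, which is exactly why it bypasses the encryption debate entirely.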

The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as a true wiretapping service.

Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.

Asked the current status of this work and when it might be deployed in the production version of WhatsApp, a company spokesperson declined to comment.

Of course, Facebook’s efforts apply only to its own encryption clients, leaving criminals and terrorists to turn to other clients like Signal, or to bespoke clients whose source code they control.

The problem is that if Facebook’s model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape. Embedding content scanning tools directly into phones would make it possible to scan all apps, including ones like Signal, effectively ending the era of encrypted communications.

Governments would soon use lawful court orders to require companies to build in custom filters of content they are concerned about and automatically notify them of violations, including sending a copy of the offending content.

Rather than grappling with how to defeat encryption, governments will simply be able to harness social media companies to perform their mass surveillance for them, sending them real-time alerts and copies of the decrypted content.

Source: The Encryption Debate Is Over – Dead At The Hands Of Facebook

Update 4/8/19 Bruce Schneier is convinced that this story has been concocted from a single source and Facebook is not in fact planning to do this currently. I’m inclined to agree.

source: More on Backdooring (or Not) WhatsApp

Robinhood fintech app admits to storing some passwords in cleartext

Stock trading service Robinhood has admitted today to storing some customers’ passwords in cleartext, according to emails the company has been sending to impacted customers, and seen by ZDNet.

“On Monday night, we discovered that some user credentials were stored in a readable format within our internal system,” the company said.

“We resolved the issue, and after thorough review, found no evidence that this information was accessed by anyone outside our response team.”

Robinhood is now resetting passwords out of an abundance of caution, despite not finding any evidence of abuse.

[…]

Storing passwords in cleartext is a huge security blunder; however, Robinhood is in “good company.” This year alone, Facebook, Instagram, and Google have all admitted to storing users’ passwords in cleartext.

Facebook admitted in March to storing passwords in cleartext for hundreds of millions of Facebook Lite users and tens of millions of Facebook users.

Facebook then admitted again in April to storing passwords in cleartext for millions of Instagram users.

Google admitted in May to also storing an unspecified number of passwords in cleartext for G Suite users for nearly 14 years.

And, a year before, in 2018, both Twitter and GitHub admitted to accidentally storing user plaintext passwords in internal logs.

Robinhood is a web and mobile service with a huge following, allowing zero-commission trading in classic stocks, but also cryptocurrencies.
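For contrast, the standard library alone is enough to avoid cleartext storage: derive a salted, deliberately slow hash and store only the salt and digest. A minimal sketch with PBKDF2 — the iteration count is illustrative (tune it to your hardware budget), and real deployments should prefer a memory-hard KDF such as scrypt or Argon2 where available:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; raise until hashing takes a noticeable fraction of a second

def hash_password(password, salt=None):
    """Return (salt, digest); store both, never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

With this scheme, a leaked log line or database dump yields only salts and digests, and each guess against them costs the attacker the full KDF work factor.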

Source: Robinhood admits to storing some passwords in cleartext | ZDNet

Cyberlaw wonks squint at NotPetya insurance smackdown: Should ‘war exclusion’ clauses apply to network hacks?

In June 2017, the notorious file-scrambling software nasty NotPetya caused global havoc that affected government agencies, power suppliers, healthcare providers and big biz.

The ransomware sought out vulnerabilities and used a modified version of the NSA’s leaked EternalBlue SMB exploit, generating one of the most financially costly cyber-attacks to date.

Among the victims was US food giant Mondelez – the parent firm of Oreo cookies and Cadbury’s chocolate – which is now suing insurance company Zurich American for denying a £76m claim (PDF) filed in October 2018, a year after the NotPetya attack. According to the firm, the malware rendered 1,700 of its servers and 24,000 of its laptops permanently dysfunctional.

In January, Zurich rejected the claim, simply referring to a single policy exclusion which does not cover “hostile or warlike action in time of peace or war” by “government or sovereign power; the military, naval, or air force; or agent or authority”.

Mondelez, meanwhile, suffered significant loss as the attack infiltrated the company – affecting laptops, the company network and logistics software. Zurich American claims the damage, as the result of an “act of war”, is therefore not covered by Mondelez’s policy, which states coverage applies to “all risks of physical loss or damage to electronic data, programs, or software, including loss or damage caused by the malicious introduction of a machine code or instruction.”

While war exclusions are common in insurance policies, the court papers themselves refer to the grounds as “unprecedented” in relation to “cyber incidents”.

Previous claims have only been based on conventional armed conflicts.

Zurich’s use of this sort of exclusion in a cybersecurity policy could be a game-changer, with the obvious question being: was NotPetya an act of war, or just another incidence of ransomware?

The UK, US and Ukrainian governments, for their part, blamed the attack on Russian, state-sponsored hackers, claiming it was the latest act in an ongoing feud between Russia and Ukraine.

[…]

The minds behind the Tallinn Manual – the international cyberwar rules of engagement – were divided as to whether the damage caused met the “armed attack” criterion. However, they noted there was a possibility that it could in rare circumstances.

Professor Michael Schmitt, director of the Tallinn Manual project, indicated (PDF) that it is reasonable to extend armed attacks to cyber-attacks. The International Committee of the Red Cross (ICRC) went further to enunciate that cyber operations that only disable certain objects are still qualified as an attack, despite no physical damage.

Source: Cyberlaw wonks squint at NotPetya insurance smackdown: Should ‘war exclusion’ clauses apply to network hacks? • The Register

Apple removes Zoom’s dodgy hidden web server on your Mac without telling you – shows who really pwns your machine

Apple has pushed a silent update to Macs, disabling the hidden web server installed by the popular Zoom web-conferencing software.

A security researcher this week went public with his finding that the mechanism used to bypass a Safari prompt before entering a Zoom conference was a hidden local web server.

Jonathan Leitschuh focused largely on the fact that a user’s webcam would likely be ON automatically, meaning that a crafty bit of web coding would give an attacker a peek into your room if you simply visit their site.

But the presence of the web server was a more serious issue, especially since uninstalling Zoom did not remove it and the web server would reinstall the Zoom client – which is malware-like behaviour.
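Leitschuh’s advisory reported the hidden server listening on a localhost port — 19421 was the widely reported number, treated here as an assumption. A quick, self-contained way to check whether anything is still answering there:

```python
import socket

def localhost_port_open(port, timeout=0.5):
    """Return True if something accepts TCP connections on 127.0.0.1:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex(("127.0.0.1", port)) == 0

# 19421 is the port reported for Zoom's hidden local web server.
if localhost_port_open(19421):
    print("Something is listening where Zoom's web server lived")
```

A closed port is not proof the daemon is gone (it answered only specific HTTP requests), but an open one on an otherwise clean machine is worth investigating.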

[…]

On 9 July the company updated its Mac app to remove the local web server “via a prompted update”.

The next day Apple itself took action, by instructing macOS’s built-in antivirus engine to remove the web server on sight from Macs. Zoom CEO Eric Yuan added on Wednesday:

Apple issued an update to ensure the Zoom web server is removed from all Macs, even if the user did not update their Zoom app or deleted it before we issued our July 9 patch. Zoom worked with Apple to test this update, which requires no user interaction.

Source: Wondering how to whack Zoom’s dodgy hidden web server on your Mac? No worries, Apple’s done it for you • The Register

Kind of scary that Apple can just go about removing software from your machine without any notification.