A team of researchers from the ASSET Research Group in Singapore has published the details of a collection of vulnerabilities in the fifth-generation mobile communication system (5G) used by smartphones and many other devices. The fourteen vulnerabilities are detailed in this paper, and a PoC demonstrating an attack using a software-defined radio (SDR) is provided on GitHub. The core of the PoC attack involves creating a malicious 5G base station (gNB) that nearby 5G modems will attempt to connect to, at which point the vulnerabilities can be exploited, in some cases severely enough that a hard reset (e.g. removal of the SIM card) of the affected device may be required.
Hardware Setup for 5Ghoul PoC testing and fuzzer evaluation. (Credit: Matheus E. Garbelini et al., 2023)
Another attack mode seeks to downgrade the target device’s wireless connection, effectively denying it access to a 5G network and forcing it onto an alternative network (2G, 3G, 4G, etc.). Based on the affected 5G modems, the researchers estimate that about 714 smartphone models are at risk from these attacks. Naturally, these 5G modem chipsets are used not just in smartphones but also in various wireless routers, IoT devices, IP cameras and so on, all of which require their modem software to be patched.
Most of the vulnerabilities concern the radio resource control (RRC) procedure and are caused by flaws in the modem firmware. Android smartphones (where supported) should receive patches for 5Ghoul later this month, but when iPhones will be patched is still unknown.
Most of this is about crashing the modem. The implication (not spelt out here) is that by restarting the modem, or by forcing it to downgrade to a mode probably no longer supported by the national provider, you force the phone to connect to your own access point, where you can then listen in on the traffic and chain other vulnerabilities to attack the phone.
Batteries that exploit quantum phenomena to gain, distribute and store power promise to surpass the abilities and usefulness of conventional chemical batteries in certain low-power applications. For the first time, researchers, including those from the University of Tokyo, take advantage of an unintuitive quantum process that disregards the conventional notion of causality to improve the performance of so-called quantum batteries, bringing this future technology a little closer to reality.
[…]
At present, quantum batteries only exist as laboratory experiments, and researchers around the world are working on the different aspects that are hoped to one day combine into a fully functioning and practical application. Graduate student Yuanbo Chen and Associate Professor Yoshihiko Hasegawa from the Department of Information and Communication Engineering at the University of Tokyo are investigating the best way to charge a quantum battery, and this is where time comes into play. One of the advantages of quantum batteries is that they should be incredibly efficient, but that hinges on the way they are charged.
While it’s still quite a bit bigger than the AA battery you might find around the home, the experimental apparatus acting as a quantum battery demonstrated charging characteristics that could one day improve upon the battery in your smartphone. Credit: Zhu et al, 2023
“Current batteries for low-power devices, such as smartphones or sensors, typically use chemicals such as lithium to store charge, whereas a quantum battery uses microscopic particles like arrays of atoms,” said Chen. “While chemical batteries are governed by classical laws of physics, microscopic particles are quantum in nature, so we have a chance to explore ways of using them that bend or even break our intuitive notions of what takes place at small scales. I’m particularly interested in the way quantum particles can work to violate one of our most fundamental experiences, that of time.”
[…]
the team instead used a novel quantum effect they call indefinite causal order, or ICO. In the classical realm, causality follows a clear path, meaning that if event A leads to event B, then the possibility of B causing A is excluded. However, at the quantum scale, ICO allows both directions of causality to exist in what’s known as a quantum superposition, where both can be simultaneously true.
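The excerpt doesn’t give the formalism, but the standard way to realize ICO in the lab is the quantum SWITCH, where a control qubit in superposition determines the order in which two operations are applied; whether the paper’s charging protocol is exactly this is our assumption, not stated above. A minimal sketch:

$$ W(U_A, U_B) = \lvert 0\rangle\langle 0\rvert_c \otimes U_B U_A \;+\; \lvert 1\rangle\langle 1\rvert_c \otimes U_A U_B $$

With the control prepared in $(\lvert 0\rangle_c + \lvert 1\rangle_c)/\sqrt{2}$, the battery is acted on by a superposition of the orders “A then B” and “B then A”; measuring the control afterwards can leave the battery in states that neither definite order could produce on its own.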
Common intuition suggests that a more powerful charger results in a battery with a stronger charge. However, the discovery stemming from ICO introduces a remarkable reversal in this relationship; now, it becomes possible to charge a more energetic battery with significantly less power. Credit: Chen et al, 2023
“With ICO, we demonstrated that the way you charge a battery made up of quantum particles could drastically impact its performance,” said Chen. “We saw huge gains in both the energy stored in the system and the thermal efficiency. And somewhat counterintuitively, we discovered the surprising effect of an interaction that’s the inverse of what you might expect: A lower-power charger could provide higher energies with greater efficiency than a comparably higher-power charger using the same apparatus.”
The phenomenon of ICO the team explored could find uses beyond charging a new generation of low-power devices. The underlying principles, including the inverse interaction effect uncovered here, could improve the performance of other tasks involving thermodynamics or processes that involve the transfer of heat. One promising example is solar panels, where heat effects can reduce their efficiency, but ICO could be used to mitigate those and lead to gains in efficiency instead.
LaPlante lauded Schmidt’s ability to rapidly field mission data files, packages of information loaded onto F-35s before each flight. “What General Schmidt and his team did, in about a week – week-and-a-half – is turned around these mission data files. That’s the brick that goes into the airplane. And that I think the lessons learned on how you did that can apply all the way around the world.”
[…]
The ‘just-in-time’ logistics strategy and the cloud computing hub that is the foundation for F-35 logistics are of especially high concern. While those systems may be adequate for peacetime operations — and even that is highly debatable — during a time of conflict, relying on them could leave F-35s stranded on the ground.
Those lessons are in addition to the Pentagon’s own review of its long-distance F-35 logistics operations.
[…]
“This program was set up to be very efficient… [a] just-in-time kind of supply chain. I’m not sure that that works always in a contested environment,” Lt. Gen. Schmidt said. “And when you get a just-in-time mentality, which I think is it’s kind of a business model in the commercial industry that works very well in terms of keeping costs down and those kinds of things, it introduces a lot of risk operationally.”
“The biggest risk is that F-35 units have little in terms of spare parts on the shelf to keep their aircraft flying for any sustained amount of time.”
This US Navy graphic provides a very general look at the many layers of complexity just in that service’s logistics chains, including joint service, non-military U.S. government, foreign military, and commercial entities. USN
[…]
“[L]ogistically specifically, it was designed to be maintenance on demand, essentially. So the aircraft could relay a message to the supply warehouse and say, this part is getting ready to fail. And then Lockheed could send that part out to the base and it could be replaced, rather than having to have large warehouses full of supply parts, not knowing which was gonna fail and what you might need. You take that into the maritime service and the challenge, Tyler, is that you can’t logistically operate that way because we could have a ship, in this case, off the coast of Taiwan that needs a part, and Lockheed Martin can guarantee its arrival into Okinawa. But now there is no FedEx, UPS, DHL that’s gonna get it out to the aircraft carrier. So it stops and now you have a delay and it has to go get picked up and the aircraft might be down. I don’t know if they have resolved that challenge…”
[…]
In addition to being supplied by the U.S. with a surge in spare parts and other items, Israel enjoys an advantage with its F-35 fleet no one else has. It’s developed its own additional sustainment and upgrade system and is the only partner that can test modifications and deploy them, including to the jet’s software, on its own. The IAF even has its own specially configured test F-35 to assist in these efforts.
The IAF realized early on that a troubled U.S. centralized support structure for the jets – a centralized cloud-based ‘computer brain’ called the Autonomic Logistics Information System (ALIS) – wouldn’t meet its needs, especially during a large-scale conflict.
The F-35 JPO ultimately decided to abandon efforts to fix the system in favor of a completely reworked architecture called the Operational Data Integrated Network (ODIN). That replacement system is still in development.
Even before the inception of ODIN, Israeli officials negotiated a unique arrangement giving them a degree of independence from the rest of the program.
The F-35Is have a distinct configuration that is importantly not dependent on ALIS. On top of that, it is the only user of the F-35 to have the authority to install entire suites of additional domestically-developed software on its jets and to perform completely independent depot-level maintenance.
“The ingenious, automated ALIS system that Lockheed Martin has built will be very efficient and cost-effective,” an anonymous Israeli Air Force officer told Defense News in 2016. “But the only downfall is that it was built for countries that don’t have missiles falling on them.”
[…]
There is a major lack of spare parts within the F-35 ecosystem as it is, so the Israel case does serve as an example of what readiness can look like if the parts needed to support the fleet during wartime were actually available.
Israel’s experience, however, does offer, as was previously pointed out: “an important example of how things might be structured differently and that it can be done. If nothing else, the drivers behind the IAF’s push for independence from the broader F-35 program all speak directly to many of the issues that Lt. Gen. Schmidt and others are just starting to raise more publicly now.”
In 2018, the European Parliament voted to ban geo-blocking, the practice of blocking or authorising access to online content based on where the user is located.
On Wednesday, following a 2020 evaluation by the Commission on the regulation, MEPs advocated for reassessing geo-blocking, taking into account increased demand for online shopping in recent years.
Polish MEP Beata Mazurek from the Conservative group, who was the rapporteur for the file, said ahead of the vote in her speech that “the geo-blocking regulation will remove unjustified barriers for consumers and companies working within the single market”.
“We need to do something when it comes to online payments and stop discrimination on what your nationality happens to be or where you happen to live. When internet purchases are being made, barriers need to be removed. We need to have a complete right to access a better selection of goods and services through Europe,” she said.
While the original text of the regulation banned geo-blocking on grounds such as the discrimination Mazurek pointed out, a new amendment goes against this, arguing that extending the ban would result in revenue loss and higher prices for consumers.
Audiovisual content
According to Mazurek, fighting price discrimination entails making deliveries easier across borders and making movies, series, and sporting events accessible in one’s native language.
“The Commission should carefully assess the options for updating the current rules and provide the support the audio-visual sector needs,” she added.
However, in a last-minute amendment adopted during the plenary vote, MEP Sabine Verheyen, an influential member of the Parliament’s culture committee, completely flipped the wording that applies to the audiovisual sector, such as streaming platforms’ films.
According to Verheyen’s amendment, removing geo-blocking in this area “would result in a significant loss of revenue, putting investment in new content at risk, while eroding contractual freedom and reducing cultural diversity in content production, distribution, promotion and exhibition”.
It also emphasises that the inclusion would result “in fewer distribution channels”, and so, ultimately, consumers would have to pay more.
Mazurek said before the vote that while the report deals with audiovisual material, they “would like to see this done in a step-by-step way, bearing in mind the particular circumstances pertaining to the creative sector”.
“We want to look at the position of the interested parties without threatening the way cultural projects are financed. That might be regarded as a revolutionary approach, but we need to look at technological progress and the consumer needs which have changed over the last few years,” the MEP explained.
Yet Wednesday’s vote on this specific amendment means the opposite of the original regulation’s stance, with lawmakers now against ending geo-blocking for audiovisual material.
Grégoire Polad, Director General of the Association of Commercial Television and Video on Demand Services in Europe (ACT), stressed that the European Parliament and the EU Council of Ministers “have now made it abundantly clear that there is no political support for any present or future inclusion of the audiovisual sector in the scope of the Geo-blocking regulation.”
However, the European Consumer Organisation threw its weight against the carve-out for the audiovisual and creative sectors in the regulation, calling on policymakers to make audiovisual content available across borders.
A Commission spokesperson told Euractiv that they are aware of the “ongoing debate” and “will carefully analyse its content, including proposals related to the audiovisual content”, once it is adopted.
“The Commission engaged in a dialogue with the audiovisual sector aimed at identifying industry-led solutions to improve the availability and cross-border access to audiovisual content across the EU,” the spokesperson explained.
This stakeholder dialogue ended in December 2022, and the Commission will consider its conclusions in the upcoming stocktaking exercise on the Geo-blocking Regulation.
Strangely enough this is the one sector that is wholly digital, and where geoblocking makes the least sense: digital goods are moved globally for exactly the same cost, whereas physical goods need different logistics chains, in which the last step to the consumer is only a tiny part. For physical goods, the logistical steps before an order ever ships mean that geography actually can have a measurable effect on cost.
The movie / TV / digital rights bozos definitely have a big lobby on this one, and it shows the corruption – or outright stupidity – in the EP. Yes, Sabine Verheyen, you must be one or the other.
Gig workers in the EU will soon get new benefits and protections, making it easier for them to receive employment status. Right now, over 500 digital labor platforms are actively operating in the EU, employing roughly 28 million platform workers. The new rules follow agreements made between the European Parliament and the EU Member States, after policies were first proposed by the European Commission in 2021.
The new rules highlight employment status as a key issue for gig workers, meaning an employed individual can reap the labor and social rights associated with an official worker title. This can include things like a legal minimum wage, the option to engage in collective bargaining, health protections at work, and options for paid leave and sick days. With worker status recognized by the EU, gig workers can also qualify for unemployment benefits.
Given that most gig workers are employed by digital apps, like Uber or Deliveroo, the new directive will require “human oversight of the automated systems” to make sure labor rights and proper working conditions are guaranteed. The workers also have the right to contest any automated decisions by digital employers — such as a termination.
The new rulings will also require employers to inform and consult workers when there are “algorithmic decisions” that affect them. Employers will be required to report where their gig workers are fulfilling labor-related tasks to ensure the traceability of employees, especially when there are cross-border situations to consider in the EU.
Before the new gig worker protections can formally roll out, there needs to be a final approval of the agreement by the European Parliament and the Council. The stakeholders will have two years to implement the new protections into law. Similar protections for gig workers in the UK were introduced in 2021. Meanwhile, in the US, select cities have rolled out minimum wage rulings and benefits — despite Uber and Lyft’s pushback against such requirements.
Before a massive star explodes as a supernova, it convulses and sends its outer layers into space, signalling the explosive energy about to follow. When the star does explode, it sends a shockwave out into its own ejected outer layer, lighting it up as different chemical elements shine with different energies and colours. Intermingled with this is any pre-existing matter near the supernova. The result is a massive expanding shell with filaments and knots of ionized gas, populated by even smaller bubbles.
“With NIRCam’s resolution, we can now see how the dying star absolutely shattered when it exploded, leaving filaments akin to tiny shards of glass behind.”
Danny Milisavljevic, Purdue University
Cassiopeia A exploded about 10,000 years ago, and the light may have reached Earth around 1667. But there’s much uncertainty, and it’s possible that English astronomer John Flamsteed observed it in 1680. It’s also a possibility that it was first observed in 1630. That’s for historians to determine.
But whatever the exact date, the light has reached us and continues to reach us, making Cassiopeia A an object of astronomical fascination. It’s one of the most-studied SNRs, and astronomers have observed it in multiple wavelengths with different telescopes.
The SNR is about 10 light-years across and is expanding at between 4,000 and 6,000 km/s. Some outlying knots are moving much more quickly, with velocities from 5,500 to 14,500 km/s. The expanding shell is also extremely hot, at about 30 million kelvin (30 million °C, or 54 million °F).
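Those numbers hang together: a back-of-the-envelope estimate (our arithmetic, not the article’s) dates the remnant at roughly three centuries old, consistent with its light first arriving in the late 1600s.

```python
# Back-of-envelope age estimate for Cas A from its quoted size and speed.
# Assumed round numbers: 10 light-year diameter, ~5,000 km/s shell velocity.
KM_PER_LIGHT_YEAR = 9.461e12

radius_km = 5 * KM_PER_LIGHT_YEAR        # half of the ~10 ly diameter
velocity_km_s = 5_000                    # mid-range shell speed from above
age_seconds = radius_km / velocity_km_s  # assumes constant expansion
age_years = age_seconds / (365.25 * 24 * 3600)

print(f"Estimated age: {age_years:.0f} years")  # ~300 years, fitting a ~1667 arrival
```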
The JWST’s NIRCam high-resolution image of Cas A reveals intricate detail that remains hidden from other telescopes. Image Credit: NASA, ESA, CSA, STScI, Danny Milisavljevic (Purdue University), Ilse De Looze (UGent), Tea Temim (Princeton University)
But none of our prior images are nearly as breathtaking as these JWST images. These images are far more than just pretty pictures. The cursive swirls and knotted clumps of gas reveal some of nature’s detailed interactions between light and matter.
The JWST sees in infrared, so its images need to be translated for our eyes. The wavelengths the telescope can see are translated into different visible colours. Clumps of bright orange and light pink are most noticeable in these images, and they signify the presence of sulphur, oxygen, argon, and neon. These elements came from the star itself, and gas and dust from the region around the star are intermingled with it.
The image below highlights some parts of the Cassiopeia A SNR.
1 shows tiny knots of gas comprised of sulphur, oxygen, argon, and neon from the star itself. 2 shows what’s known as the Green Monster, a loop of green light in Cas A’s inner cavity, which is visible in the MIRI image of the SNR. Circular holes are outlined in white and purple and represent ionized gas. This is likely where debris from the explosion punched holes in the surrounding gas and ionized it. 3 shows a light echo, where light from the ancient explosion has warmed up dust which shines as it cools down. 4 shows an especially large and intricate light echo known as Baby Cas A. Baby Cas A is actually about 170 light-years beyond Cas A. Image Credit: NASA, ESA, CSA, STScI, Danny Milisavljevic (Purdue University), Ilse De Looze (UGent), Tea Temim (Princeton University)
The JWST’s MIRI image shows different details. The outskirts of the main shell aren’t orange and pink; instead, they look more like smoke lit up by campfire flames.
Comparing the NIRCam image (L) and the MIRI image (R) tells us about both the SNR and the JWST. First of all, the NIRCam image is sharper because of its higher resolution. It also appears less colourful, but that’s because some of the emitted wavelengths show up only in the mid-infrared. In the MIRI image, the outer ring is lit up more brightly than in the NIRCam image, and the MIRI image also shows the ‘Green Monster,’ the green inner ring that is invisible in the NIRCam image. Image Credit: NASA, ESA, CSA, STScI, Danny Milisavljevic (Purdue University), Ilse De Looze (UGent), Tea Temim (Princeton University)
The Hubble Space Telescope, the Spitzer Space Telescope, and the Chandra X-Ray Observatory have all studied Cas A. In fact, Chandra’s first light image back in 1999 was of Cas A.
This X-ray image of the Cassiopeia A (Cas A) supernova remnant is the official first light image of the Chandra X-ray Observatory. The bright object near the center may be the long-sought neutron star or black hole that remained after the explosion that produced Cas A. Image Credit: By NASA/CXC/SAO – http://chandra.harvard.edu/photo/1999/0237/, Public Domain, https://commons.wikimedia.org/w/index.php?curid=33394808
The Hubble has imaged Cas A too. This image is from 2006 and is a composite of 18 separate images. While interesting and stunning at the time, the JWST’s image surpasses it in both visual and scientific detail.
This NASA/ESA Hubble Space Telescope image provides a detailed look at the tattered remains of Cassiopeia A (Cas A). It is the youngest known remnant from a supernova explosion in the Milky Way. Image Credit: NASA, ESA, and the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration. Acknowledgement: Robert A. Fesen (Dartmouth College, USA) and James Long (ESA/Hubble)
The JWST’s incredible images are giving us a more detailed look at Cas A than ever. Danny Milisavljevic leads the Time Domain Astronomy research team at Purdue University and has studied SNRs extensively, including Cas A. He emphasizes how important the JWST is in his work.
“With NIRCam’s resolution, we can now see how the dying star absolutely shattered when it exploded, leaving filaments akin to tiny shards of glass behind,” said Milisavljevic. “It’s really unbelievable after all these years studying Cas A to now resolve those details, which are providing us with transformational insight into how this star exploded.”
Scientists have solved a decades-long puzzle and unveiled a near unbreakable substance that could rival diamond as the hardest material on Earth. The research is published in the journal Advanced Materials.
Researchers found that when carbon and nitrogen precursors were subjected to extreme heat and pressure, the resulting materials—known as carbon nitrides—were tougher than cubic boron nitride, the second hardest material after diamond.
The breakthrough opens doors for multifunctional materials to be used for industrial purposes including protective coatings for cars and spaceships, high-endurance cutting tools, solar panels and photodetectors, experts say.
[…]
The team subjected various forms of carbon nitrogen precursors to pressures of between 70 and 135 gigapascals—around 1 million times our atmospheric pressure—while heating them to temperatures of more than 1,500°C.
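For scale, a quick conversion (ours, not the paper’s) shows where the “around 1 million times” figure comes from:

```python
# Convert the reported synthesis pressures into multiples of atmospheric pressure.
ATM_PA = 101_325   # one standard atmosphere, in pascals
GPA_PA = 1e9       # one gigapascal, in pascals

for gpa in (70, 135):
    atm = gpa * GPA_PA / ATM_PA
    print(f"{gpa} GPa is about {atm:,.0f} atm")
# 70 GPa is about 690,846 atm; 135 GPa is about 1,332,346 atm
```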
To identify the atomic arrangement of the compounds under these conditions, the samples were illuminated by an intense X-ray beam at three particle accelerators—the European Synchrotron Research Facility in France, the Deutsches Elektronen-Synchrotron in Germany and the Advanced Photon Source based in the United States.
Researchers discovered that three carbon nitride compounds have the necessary building blocks for super-hardness.
Remarkably, all three compounds retained their diamond-like qualities when they returned to ambient pressure and temperature conditions.
Further calculations and experiments suggest the new materials contain additional properties including photoluminescence and high energy density, where a large amount of energy can be stored in a small amount of mass.
[…]
More information: Dominique Laniel et al, Synthesis of Ultra‐Incompressible and Recoverable Carbon Nitrides Featuring CN4 Tetrahedra, Advanced Materials (2023). DOI: 10.1002/adma.202308030
Scientists have discovered the first indication of nuclear fission occurring amongst the stars. The discovery supports the idea that when neutron stars smash together, they create “superheavy” elements — heavier than the heaviest elements of the periodic table.
[…]
“People have thought fission was happening in the cosmos, but to date, no one has been able to prove it,” Matthew Mumpower, research co-author and a scientist at Los Alamos National Laboratory, said in a statement.
The team of researchers, led by North Carolina State University scientist Ian Roederer, searched data on a wide range of elements in stars for the first evidence that nuclear fission could be acting when neutron stars merge. These findings could help solve the mystery of where the universe‘s heavy elements come from.
[…]
The picture of so-called nucleosynthesis for heavier elements like gold and uranium, however, has been somewhat more mysterious. Scientists suspect these valuable and rare heavy elements are created when two incredibly dense dead stars — neutron stars — collide and merge, creating an environment violent enough to forge elements that can’t be created even in the most turbulent hearts of stars.
The evidence of nuclear fission discovered by Mumpower and the team comes in the form of a correlation, in some stars, between “light precision metals,” like silver, and “rare earth nuclei,” like europium. When the abundance of one group of elements goes up, the abundance of the other group rises with it, the scientists saw.
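That signature is just a correlation between two groups of element abundances across stars. A toy version with invented numbers (the study used measured abundances in real stars) looks like this:

```python
# Toy illustration of the fission signature: silver vs. europium abundances
# across a handful of stars. These values are invented for illustration only.
import statistics

silver   = [0.10, 0.25, 0.40, 0.55, 0.70]  # hypothetical [Ag/Fe]-style values
europium = [0.12, 0.28, 0.37, 0.58, 0.66]  # hypothetical [Eu/Fe]-style values

r = statistics.correlation(silver, europium)  # Pearson's r (Python 3.10+)
print(f"Pearson r = {r:.2f}")  # close to 1.0: the two groups rise in lockstep
```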
The team’s research also indicates that elements with atomic masses — counts of protons and neutrons in an atomic nucleus — greater than 260 may exist around neutron star mergers, even if only briefly. That is heavier than anything at the “heavy end” of the periodic table.
“The only plausible way this can arise among different stars is if there is a consistent process operating during the formation of the heavy elements,” Mumpower said. “This is incredibly profound and is the first evidence of fission operating in the cosmos, confirming a theory we proposed several years ago.”
[…]
Neutron stars are created when massive stars reach the end of their fuel supplies necessary for intrinsic nuclear fusion processes, which means the energy that has been supporting them against the inward push of their own gravity ceases. As the outer layers of these dying stars are blown away, the stellar cores with masses between one and two times that of the sun collapse into a width of around 12 miles (20 kilometers).
This core collapse happens so rapidly that electrons and protons are forced together, creating a sea of neutrons so dense a mere tablespoon of this neutron star “stuff” would weigh more than 1 billion tons if it were brought to Earth.
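That tablespoon figure is easy to sanity-check with rough numbers (ours, not the article’s):

```python
import math

# Rough neutron star density: ~1.4 solar masses packed into a ~10 km radius.
M_SUN = 1.989e30                      # kg
mass = 1.4 * M_SUN                    # kg, a typical neutron star mass
radius = 10e3                         # m, half the ~20 km width quoted above
volume = (4 / 3) * math.pi * radius**3
density = mass / volume               # ~6.6e17 kg per cubic meter

tablespoon_m3 = 15e-6                 # a 15 mL tablespoon, in cubic meters
tonnes = density * tablespoon_m3 / 1000
print(f"One tablespoon is roughly {tonnes:.1e} tonnes")  # ~1e10, billions of tons
```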
When these extreme stars exist in a binary pairing, they spiral around one another. And as they spiral around one another, they lose angular momentum because they emit intangible ripples in spacetime called gravitational waves. This causes neutron stars to eventually collide, merge and, unsurprisingly given their extreme and exotic nature, create a very violent environment.
This ultimate neutron star merger releases a wealth of free neutrons, which are particles normally bound up with protons in atomic nuclei. This can allow other atomic nuclei in these environments to quickly grab these free neutrons — a process called rapid neutron capture or the “r-process.” This allows the atomic nuclei to grow heavier, creating superheavy elements that are unstable. These superheavy elements can then undergo fission to split down into lighter, stable elements like gold.
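Schematically, in standard nuclear notation (our illustration; the article describes the process only in words): a nucleus captures free neutrons faster than it can decay, then beta decay nudges it up the periodic table,

$$ {}^{A}_{Z}\mathrm{X} + n \to {}^{A+1}_{Z}\mathrm{X}, \qquad {}^{A+1}_{Z}\mathrm{X} \xrightarrow{\beta^-} {}^{A+1}_{Z+1}\mathrm{Y}, $$

and once the chain reaches an unstable superheavy nucleus, fission splits it into two lighter fragments plus spare neutrons, one fragment landing near the light precision metals and the other near the rare earths.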
In 2020, Mumpower predicted how the “fission fragments” of r-process-created nuclei would be distributed. Following this, Mumpower’s collaborator and TRIUMF scientist Nicole Vassh calculated how the r-process would lead to the co-production of light precision metals such as ruthenium, rhodium, palladium and silver — as well as rare earth nuclei, like europium, gadolinium, dysprosium and holmium.
This prediction can be tested not only by looking at neutron star mergers but also by looking at the abundances of elements in stars that have been enriched by r-process-created material.
This new research looked at 42 stars and found the precise correlation predicted by Vassh, showing a clear signature of the fission and decay of elements heavier than any found on the periodic table, and further confirming that neutron star collisions are indeed the sites where elements heavier than iron are forged.
“The correlation is very robust in r-process enhanced stars where we have sufficient data. Every time nature produces an atom of silver, it’s also producing heavier rare earth nuclei in proportion. The composition of these element groups are in lockstep,” Mumpower concluded. “We have shown that only one mechanism can be responsible — fission — and people have been racking brains about this since the 1950s.”
The team’s research was published in the Dec. 6 edition of the journal Science.
Close to a million records containing personally identifiable information belonging to donors who sent money to non-profits were found exposed in an online database.
The database is owned and operated by DonorView – provider of a cloud-based fundraising platform used by schools, charities, religious institutions, and other groups focused on charitable or philanthropic goals.
Infosec researcher Jeremiah Fowler found 948,029 records exposed online including donor names, addresses, phone numbers, emails, payment methods, and more.
Manual analysis of the data revealed what appeared to be the names and addresses of individuals designated as children – though it wasn’t clear to the researcher whether these children were associated with the organization collecting the donation or the funds’ recipients.
Another document seen by Fowler revealed children’s names, medical conditions, names of their attending doctors, and information on whether the child’s image could be used in marketing materials – though in many cases this was not permitted.
In just a single document, more than 70,000 names and contact details were exposed, all believed to be donors to non-profits.
Neither Fowler nor The Register has received a response from the US-based service provider, though Fowler said it did secure the database within days of him filing a disclosure report.
Three white-hat hackers helped a regional rail company in southwest Poland unbrick a train that had been artificially rendered inoperable by the train’s manufacturer after an independent maintenance company worked on it. The manufacturer is now threatening to sue the hackers, who were hired by the independent repair company to fix it.
The fallout from the situation is currently roiling Polish infrastructure circles and the repair world, with the manufacturer of those trains denying bricking the trains despite ample evidence to the contrary. The manufacturer is also now demanding that the repaired trains immediately be removed from service because they have been “hacked,” and thus might now be unsafe, a claim they also cannot substantiate.
The situation is a heavy-machinery example of something that happens across most categories of electronics, from phones, laptops, health devices, and wearables to tractors and, apparently, trains. In this case, NEWAG, the manufacturer of the Impuls family of trains, put code in the trains’ control systems that prevented a train from running if a GPS tracker detected that it had spent a certain number of days in an independent repair company’s maintenance center, or if certain components had been replaced without a manufacturer-approved serial number.
This anti-repair mechanism is called “parts pairing,” and is a common frustration for farmers who want to repair their John Deere tractors without authorization from the company. It’s also used by Apple to prevent independent repair of iPhones.
An image posted by q3k of how the team did their work.
In this case, a Polish train operator called Lower Silesian Railway, which operates regional train services from Wrocław, purchased 11 Impuls trains. It began doing regular maintenance on the trains using an independent company called Serwis Pojazdów Szynowych (SPS), which notes on its website that “many Polish carriers have trusted us” with train maintenance. Over the course of maintaining four different Impuls trains, which had just undergone “mandatory maintenance” after traveling a million kilometers, SPS found mysterious errors that prevented them from running. SPS became desperate, Googled “Polish hackers,” and came across a group called Dragon Sector, a reverse-engineering team made up of white-hat hackers.
“This is quite a peculiar part of the story—when SPS was unable to start the trains and almost gave up on their servicing, someone from the workshop typed “polscy hakerzy” (‘Polish hackers’) into Google,” the team from Dragon Sector, made up of Jakub Stępniewicz, Sergiusz Bazański, and Michał Kowalczyk, told me in an email. “Dragon Sector popped up and soon after we received an email asking for help.”
The problem was so bad that an infrastructure trade publication in Poland called Rynek Kolejowy picked up on the mysterious issues over the summer, and said that the lack of working trains was beginning to impact service: “Four vehicles after level P3-2 repair cannot be started. At this moment, it is not known what caused the failure. The lack of units is a serious problem for the carrier and passengers, because shorter trains are sent on routes.”
The hiring of Dragon Sector was a last resort: “In 2021, an independent train workshop won a maintenance tender for some trains made by Newag, but it turned out that they didn’t start after servicing,” Dragon Sector told me. “[SPS] hired us to analyze the issue and we discovered a ‘workshop-detection’ system built into the train software, which bricked the trains after some conditions were met (two of the trains even used a list of precise GPS coordinates of competitors’ workshops). We also discovered an undocumented ‘unlock code’ which you could enter from the train driver’s panel which magically fixed the issue.”
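Based only on the behaviors Dragon Sector describes (GPS geofences around competitors’ workshops, time-based conditions, and a hidden unlock code), the lockout logic amounts to something like the sketch below. This is a purely illustrative reconstruction, not NEWAG’s actual code; every name, coordinate, and threshold in it is invented.

```python
# Illustrative reconstruction of a "workshop detection" lockout, based only on
# behaviors described publicly by Dragon Sector. All values here are invented.
from dataclasses import dataclass

# Hypothetical geofences: (lat range, lon range) boxes around rival workshops.
WORKSHOP_GEOFENCES = [((51.09, 51.11), (17.01, 17.04))]
DAYS_PARKED_LIMIT = 10        # invented threshold
UNLOCK_CODE = "******"        # the real code was undocumented

@dataclass
class TrainState:
    lat: float
    lon: float
    days_stationary: int
    locked: bool = False

def in_geofence(lat: float, lon: float) -> bool:
    return any(la0 <= lat <= la1 and lo0 <= lon <= lo1
               for (la0, la1), (lo0, lo1) in WORKSHOP_GEOFENCES)

def update_lock(state: TrainState) -> None:
    # Brick the train if it sat inside a competitor's workshop for too long.
    if in_geofence(state.lat, state.lon) and state.days_stationary > DAYS_PARKED_LIMIT:
        state.locked = True

def driver_panel_entry(state: TrainState, code: str) -> None:
    # The hidden escape hatch Dragon Sector found on the driver's panel.
    if code == UNLOCK_CODE:
        state.locked = False
```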
Dragon Sector was able to bypass the measures and fix the trains. The group posted a YouTube video of the train operating properly after they’d worked on it:
The news of Dragon Sector’s work was first reported by the Polish outlet Zaufana Trzecia Strona and was translated into English by the site Bad Cyber. Kowalczyk and Stępniewicz gave a talk about the saga last week at Poland’s Oh My H@ck conference in Warsaw. The group plans on doing further talks about the technical measures implemented to prevent the trains from running and how they fixed it.
“These trains were locking up for arbitrary reasons after being serviced at third-party workshops. The manufacturer argued that this was because of malpractice by these workshops, and that they should be serviced by them instead of third parties,” Bazański, who goes by the handle q3k, posted on Mastodon. “After a certain update by NEWAG, the cabin controls would also display scary messages about copyright violations if the human machine interface detected a subset of conditions that should’ve engaged the lock but the train was still operational. The trains also had a GSM telemetry unit that was broadcasting lock conditions, and in some cases appeared to be able to lock the train remotely.”
The train had a system that detected if it physically had been to an independent repair shop.
All of this has created quite a stir in Poland (and in repair circles). NEWAG did not respond to a request for comment from 404 Media. But Rynek Kolejowy reported that the company is now very mad, and has threatened to sue the hackers. In a statement to Rynek Kolejowy, NEWAG said “Our software is clean. We have not introduced, we do not introduce and we will not introduce into the software of our trains any solutions that lead to intentional failures. This is slander from our competition, which is conducting an illegal black PR campaign against us.” The company added that it has reported the situation to “the authorized authorities.”
“Hacking IT systems is a violation of many legal provisions and a threat to railway traffic safety,” NEWAG added. “We do not know who interfered with the train control software, using what methods and what qualifications. We also notified the Office of Rail Transport about this so that it could decide to withdraw from service the sets subjected to the activities of unknown hackers.”
In response, Dragon Sector released a lengthy statement explaining how they did their work and explaining the types of DRM they encountered: “We did not interfere with the code of the controllers in Impulsa – all vehicles still run on the original, unmodified software,” part of the statement reads. SPS, meanwhile, has said that its position “is consistent with the position of Dragon Sector.”
Kowalczyk told 404 Media that “we are answering media and waiting to be summoned as witnesses,” and added that “NEWAG said that they will sue us, but we doubt they will – their defense line is really poor and they would have no chance defending it, they probably just want to sound scary in the media.”
This strategy—to intimidate independent repair professionals, claim that the device (in this case, a train) is unsafe, and threaten legal action—is an egregious but common playbook in manufacturers’ fight against repair, all over the world.
Using only a sensor-filled helmet combined with artificial intelligence, a team of scientists has announced they can turn a person’s thoughts into written words.
In the study, participants read passages of text while wearing a cap that recorded electrical brain activity through their scalp. These electroencephalogram (EEG) recordings were then converted into text using an AI model called DeWave.
Chin-Teng Lin at the University of Technology Sydney (UTS), Australia, says the technology is non-invasive, relatively inexpensive and easily transportable.
While the system is far from perfect, with an accuracy of approximately 40 per cent, Lin says more recent data currently being peer-reviewed shows an improved accuracy exceeding 60 per cent.
In the study presented at the NeurIPS conference in New Orleans, Louisiana, participants read the sentences aloud, even though the DeWave program doesn’t use spoken words. However, in the team’s latest research, participants read the sentences silently.
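DeWave’s internals aren’t described in this piece. As a generic sketch of the pattern such systems follow (EEG windows encoded into discrete tokens, then decoded to text), under our own assumptions about the architecture:

```python
# Generic sketch of an EEG-to-text pipeline, NOT DeWave itself: encode EEG
# windows, snap them to a discrete codebook, then decode tokens to characters.
# All sizes and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class EEGToText(nn.Module):
    def __init__(self, n_channels=64, n_codes=512, vocab=30, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        self.codebook = nn.Embedding(n_codes, hidden)  # discrete "brain token" codes
        self.decoder = nn.Linear(hidden, vocab)        # per-step character logits

    def forward(self, eeg):                            # eeg: (batch, time, channels)
        feats, _ = self.encoder(eeg)                   # (batch, time, hidden)
        # VQ-style step: map each timestep to its nearest codebook entry.
        book = self.codebook.weight.unsqueeze(0).expand(feats.size(0), -1, -1)
        codes = torch.cdist(feats, book).argmin(dim=-1)
        return self.decoder(self.codebook(codes))      # (batch, time, vocab)

model = EEGToText()
logits = model(torch.randn(2, 100, 64))  # two fake EEG clips, 100 timesteps each
print(logits.shape)                      # torch.Size([2, 100, 30])
```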
America’s eight largest pharmacy providers have shared customers’ prescription records with law enforcement when faced with subpoena requests, The Washington Post reported Tuesday. The news arrives amid patients’ growing privacy concerns in the wake of the Supreme Court’s 2022 overturning of Roe v. Wade.
The new look into the legal workarounds was first detailed in a letter sent by Sen. Ron Wyden (D-OR) and Reps. Pramila Jayapal (D-WA) and Sara Jacobs (D-CA) on December 11 to the secretary of the Department of Health and Human Services.
Pharmacies can hand over detailed, potentially compromising information due to legal fine print. Health Insurance Portability and Accountability Act (HIPAA) regulations restrict patient data sharing between “covered entities” like doctor offices, hospitals, and other medical facilities—but these guidelines are looser for pharmacies. And while search warrants require a judge’s approval to serve, subpoenas do not.
[…]
Given each company’s national network, patient records are often shared interstate between any pharmacy location. This could become legally fraught for medical history access within states that already have—or are working to enact—restrictive medical access laws. In an essay written for The Yale Law Journal last year, cited by WaPo, University of Connecticut associate law professor Carly Zubrzycki argued, “In the context of abortion—and other controversial forms of healthcare, like gender-affirming treatments—this means that cutting-edge legislative protections for medical records fall short.”
Whether the platform had any impact on pirate IPTV providers offering the big game last Friday is unclear, but plans supporting a full-on assault are pressing ahead.
[…]
When lawmakers gave Italy’s new blocking regime the green light during the summer, the text made it clear that blocking instructions would not be limited to regular ISPs. The relevant section (Paragraph 5, Art. 2) is referenced below:
The document issued by AGCOM acts as a clear reminder of the above and specifically highlights that VPN and DNS providers are no exception.
“[A]ll parties in any capacity involved in the accessibility of illegally disseminated content – and therefore also, by way of example and not limitation – VPN and open DNS service providers, will have to execute the blocks requested by the Authority [AGCOM] including through accreditation to the Piracy Shield platform or otherwise implementing measures that prevent the user from reaching that content,” the notice reads.
The relevant section of the new law is in some ways even broader when it comes to search engines such as Google. Whether they are directly involved in accessibility or not, they are still required to take action.
AGCOM suggests that Google understands its obligations and is also prepared to take things further. The company says it will deindex offending platforms from search and also remove their ability to advertise.
“Since this is a dynamic blocking, the search engine therefore undertakes to perform de-indexing of all websites/telematic addresses that are the subject of subsequent reports that can also be communicated by rights holders accredited to the platform,” AGCOM writes.
Wow. This means we can force an ISP, VPN provider, DNS host and Google to shut down shit without explanation or recourse within 30 minutes. That’s pretty totalitarian.
The case was heard by the United States District Court for the Northern District of California. As The Register has reported, the matter tested Epic’s allegations that Google stifles competition by requiring developers to pay it commissions even if they use third-party payment services, and paid some developers to secure their exclusive presence on the Play store.
The case commenced in early November and on Monday a nine-member jury found in Epic’s favor.
As it was a jury case, the reasoning was not revealed.
Epic Games CEO Tim Sweeney thanked the jurors anyway in a post that declared “Today’s verdict is a win for all app developers and consumers around the world.”
Sweeney wrote that the verdict “proves that Google’s app store practices are illegal and they abuse their monopoly to extract exorbitant fees, stifle competition and reduce innovation.”
In Sweeney’s telling, the jurors heard “evidence that Google was willing to pay billions of dollars to stifle alternative app stores by paying developers to abandon their own store efforts and direct distribution plans, and offering highly lucrative agreements with device manufacturers in exchange for excluding competing app stores.”
Google denies such skulduggery and, in a statement reported by Axios, vowed to appeal on the grounds that the search and ads giant faces strong competition from Apple and rival app stores for Android.
[…]
The Apple case produced some small wins for Epic. But the Google decision is … erm … Epic, as it appears to be a full-throated declaration that the Play store is a monopoly.
The case will return to court in early 2024, when the presiding judge will consider remedies – which could include forcing Google to offload the Play store.
But this is far from the end of the matter – both for Epic Games and for the wider issue of tech monopolies.
[…]
legislators are increasingly taking action to erode Big Tech’s power, with the UK’s Digital Markets, Competition and Consumers Bill and the EU’s Digital Markets Act as exemplars of such activity. ®
Under rules being considered, any telecom service provider or business with custodial access to telecom equipment – a hotel IT technician, an employee at a cafe with Wi-Fi, or a contractor responsible for installing a home broadband router – could be compelled to enable electronic surveillance. And this would apply not only to those directly involved in data transit and data storage.
This week, the US House of Representatives is expected to conduct a floor vote on two bills that reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA), which is set to expire in 2024.
Section 702, as The Register noted last week, permits US authorities to intercept the electronic communications of people outside the US for foreign intelligence purposes – without a warrant – even if that communication involves US citizens and permanent residents.
As the Electronic Frontier Foundation argues, Section 702 has allowed the FBI to conduct invasive, warrantless searches of protesters, political donors, journalists, and even members of Congress.
More than a few people would therefore be perfectly happy if the law lapsed – on the other hand, law enforcement agencies insist they need Section 702 to safeguard national security.
The pending vote is expected to be conducted under “Queen-of-the-Hill Rules,” which in this instance might also be described as “Thunderdome” – two bills enter, one bill leaves, with the survivor advancing to the US Senate for consideration. The prospect that neither would be approved and Section 702 would lapse appears … unlikely.
The two bills are: HR 6570, the Protect Liberty and End Warrantless Surveillance Act; and HR 6611, the FISA Reform and Reauthorization Act of 2023 (FRRA).
The former reauthorizes Section 702, but with strong civil liberties and privacy provisions. The civil rights community has lined up to support it.
As for the latter, Elizabeth Goitein, co-director of the Liberty and National Security Program at legal think tank the Brennan Center for Justice, explained that the FRRA changes the definition of electronic communication service provider (ECSP) in a way that expands the range of businesses required to share data with the US.
“Going forward, it would not just be entities that have direct access to communications, like email and phone service providers, that could be required to turn over communications,” argues a paper prepared by the Brennan Center. “Any business that has access to ‘equipment’ on which communications are stored and transmitted would be fair game.”
According to Goitein, the bill’s sponsors have denied the language is intended to be interpreted so broadly.
A highly redacted FISA Court of Review opinion [PDF], released a few months ago, showed that the government has already pushed the bounds of the definition.
The court document discussed a petition to compel an unidentified entity to conduct surveillance. The petition was denied because the entity did not satisfy the definition of “electronic communication service provider,” and was instead deemed to be a provider of a product or service. That definition may change, it seems.
Goitein is not alone in her concern about the ECSP definition. She noted that a FISA Court amicus – the law firm ZwillGen – has taken the unusual step of speaking out against the expanded definition of an ECSP.
In an assessment published last week, ZwillGen attorneys Marc Zwillinger and Steve Lane raised concerns about the FRRA covering a broad set of businesses and their employees.
“By including any ‘service provider’ – rather than any ‘other communication service provider’ – that has access not just to communications, but also to the ‘equipment that is being or may be used to transmit or store … communications,’ the expanded definition would appear to cover datacenters, colocation providers, business landlords, shared workspaces, or even hotels where guests connect to the internet,” they explained. They added that the addition of the term “custodian” to the service provider definition makes it apply to any third party providing equipment, storage – or even cleaning services.
The Brennan Center paper also raised other concerns – like the exemption for members of Congress from such surveillance. The FRRA bill requires the FBI to get permission from a member of Congress when it wants to conduct a query of their communications. No such courtesy is afforded to the people these members of Congress represent.
Goitein urged Americans to contact their representative and ask for a “no” vote on the FRRA and a “yes” on HR 6570, the Protect Liberty and End Warrantless Surveillance Act. ®
At the Zooniverse, anyone can be a researcher. You don’t need any specialised background, training, or expertise to participate in any Zooniverse projects. We make it easy for anyone to contribute to real academic research, on their own computer, at their own convenience.

You’ll be able to study authentic objects of interest gathered by researchers, like images of faraway galaxies, historical records and diaries, or videos of animals in their natural habitats. By answering simple questions about them, you’ll help contribute to our understanding of our world, our history, our Universe, and more.

With our wide-ranging and ever-expanding suite of projects, covering many disciplines and topics across the sciences and humanities, there’s a place for anyone and everyone to explore, learn and have fun in the Zooniverse. To volunteer with us, just go to the Projects page, choose one you like the look of, and get started.
[…]
Zooniverse projects are constructed with the aim of converting volunteers’ efforts into measurable results. These projects have produced a large number of published research papers, as well as several open-source sets of analyzed data. In some cases, Zooniverse volunteers have even made completely unexpected and scientifically significant discoveries.
A significant amount of this research takes place on the Zooniverse discussion boards, where volunteers can work together with each other and with the research teams. These boards are integrated with each project to allow for everything from quick hashtagging to in-depth collaborative analysis. There is also a central Zooniverse board for general chat and discussion about Zooniverse-wide matters.
Many of the most interesting discoveries from Zooniverse projects have come from discussion between volunteers and researchers. We encourage all users to join the conversation on the discussion boards for more in-depth participation.
A new study has identified a potentially growing natural hazard in the north: frostquakes. With climate change contributing to many observed changes in weather extremes, such as heavy precipitation and cold waves, these seismic events could become more common. Researchers were surprised by the role that wetlands and drainage channels in irrigated wetlands play in the origin of frostquakes.
Frostquakes are seismic events caused by the rapid freezing of water in the ground. They are most common during extreme winter conditions, when wet, snow-free ground freezes rapidly. They have been reported in northern Finland in 2016, 2019 and 2022, as well as in Chicago in 2019 and Ottawa in 2022, among others.
Roads and other areas cleared of snow in winter are particularly vulnerable to frostquakes.
[…]
“We found that during the winter of 2022–2023 the main sources of frostquakes in Oulu, Finland were actually swamps, wetlands and areas with high water tables or other places where water accumulates,” says Elena Kozlovskaya, Professor of applied geophysics at the University of Oulu Mining School.
When water in the ground, accumulated during heavy autumn rainfall or the melting of snow in warm winter weather, freezes and expands rapidly, it causes cracks in the ground, accompanied by tremors and booms. When they occur in populated areas, frostquakes, or cryoseisms, are felt by people and can be accompanied by distinctive noises. Ground motions during frostquakes are comparable to those of other seismic events, such as more distant earthquakes, mining explosions and vibrations produced by freight trains. Frostquakes are also a known phenomenon in permafrost regions.
The new study, currently available as a preprint and set to be published in the journal EGUsphere, is the first applied study of seismic events from marsh and wetland areas. Researchers from the University of Oulu, Finland and the Geological Survey of Finland (GTK) showed that fracturing in the uppermost frozen ground can be initiated once the frozen layer is about 5 cm thick or more. Ruptures can propagate deeper and damage infrastructure such as buildings, basements, pipelines and roads.
“With climate change, rapid changes in weather patterns have brought frostquakes to the attention of the wider audience, and they may become more common. Although their intensity is usually low, a series of relatively strong frostquakes in Oulu in 2016, which ruptured roads, was the starting point for our research.”
[…]
During several days when the air temperature was decreasing rapidly, local residents reported ground tremors and unusual sounds to the researchers. These observations were used to identify frostquakes in the seismic data. The conditions for a frostquake are favorable when the temperature drops below −20°C at a rate of about one degree per hour.
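Stated as code, that trigger condition is a simple check over an hourly temperature series (thresholds from the article; the framing is ours):

```python
# Flag hours when frostquake-favorable conditions hold, per the article:
# temperature below -20 C, after falling by about a degree per hour or more.
def frostquake_risk_hours(hourly_temps_c):
    risky = []
    for i in range(1, len(hourly_temps_c)):
        drop = hourly_temps_c[i - 1] - hourly_temps_c[i]   # degrees lost this hour
        if hourly_temps_c[i] <= -20 and drop >= 1.0:
            risky.append(i)
    return risky

temps = [-15, -17, -18.5, -20.5, -22, -22.2]  # example cold-snap readings
print(frostquake_risk_hours(temps))            # [3, 4]: hours meeting both criteria
```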
There are many wetlands near the residential areas in Oulu, close to the seismic stations, where the main sources of frostquakes were detected. In Sodankylä, frostquakes were additionally caused by ice fracturing in the Kitinen river. “Frostquakes have often occurred in January, but other times are also possible,” says Moisio.
During frostquakes, seismic surface waves produce high ground accelerations at distances of up to hundreds of meters. “The fractures during frostquakes seem to propagate along drainage channels near roads and in irrigated wetlands,” Kozlovskaya says.
Irrigated wetlands and drainage channels are also abundant around residential areas.
[…]
Further studies will help to identify areas at risk of frostquakes, which will help to prepare and protect the built environment from this specific natural hazard. Researchers at the University of Oulu and GTK aim to create a system that could predict frostquakes based on soil analysis and satellite data.
More information: Nikita Afonin et al, Frost quakes in wetlands in northern Finland during extreme winter weather conditions and related hazard to urban infrastructure (2023). DOI: 10.5194/egusphere-2023-1853
Mechanical engineers Shervin Foroughi and Mohsen Habibi were painstakingly maneuvering a tiny ultrasound wand over a pool of liquid when they first saw an icicle shape emerge and solidify.
[…]
Most commercial forms of 3-D printing involve extruding fluid materials—plastics, ceramics, metals or even biological compounds—through a nozzle and hardening them layer-by-layer to form computer-drafted structures. That hardening step is key, and it relies on energy in the form of light or heat.
[…]
Using ultrasound to trigger chemical reactions in room-temperature liquids isn’t new in itself. The field of sonochemistry and its applications, which matured in the 1980s at the University of Illinois Urbana-Champaign (UIUC), relies on a phenomenon called acoustic cavitation. This happens when ultrasonic vibrations create tiny bubbles, or cavities, within a fluid. When these bubbles collapse, the vapors inside them generate immense temperatures and pressures; this applies rapid heating at minuscule, localized points.
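To get a feel for the scales involved, the classical Rayleigh collapse time gives the order of magnitude; the formula is standard, though the example numbers are ours:

```python
import math

# Rayleigh collapse time of a cavitation bubble: t = 0.915 * R0 * sqrt(rho / dp).
R0 = 10e-6       # initial bubble radius: 10 microns (assumed example value)
rho = 1000.0     # density of water, kg/m^3
dp = 101_325.0   # driving pressure difference: ~1 atm (assumed example value)

t_collapse = 0.915 * R0 * math.sqrt(rho / dp)
print(f"Collapse time is about {t_collapse * 1e6:.2f} microseconds")  # ~0.91 us
```

A collapse lasting about a microsecond at a micron-sized spot is why the heating stays so localized and brief.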
[…]
In their experiments, which were published in Nature Communications in 2022, the researchers filled a cylindrical, opaque-shelled chamber with a common polymer (polydimethylsiloxane, or PDMS) mixed with a curing agent. They submerged the chamber in a tank of water, which served as a medium for the sound waves to propagate into the chamber (similar to the way ultrasound waves from medical imaging devices travel through gel spread on a patient’s skin). Then, using a biomedical ultrasound transducer mounted to a computer-controlled motion manipulator, the scientists traced the ultrasound beam’s focal point along a calculated path 18 millimeters deep into the liquid polymer. Tiny bubbles started to appear in the liquid along the transducer’s path, and solidified material quickly followed. After fastidiously trying many combinations of ultrasound frequencies, liquid viscosity and other parameters, the team finally succeeded in using the approach to print maple-leaf shapes, seven-toothed gears and honeycomb structures within the liquid bath. The researchers then repeated these experiments using various polymers and ceramics, and they presented their results at the Canadian Acoustical Association’s annual conference this past October.
[…]
A crucial next step for sound-based printing would be to show how this process can function in real applications that meet the strict requirements of engineers and product designers, such as materials strength, surface finish and repeatability.
The research team will soon publish new work that discusses improvements in printing speed and, significantly, resolution. In the 2022 paper the team demonstrated the ability to print “pixels” that measure 100 microns on a side. In comparison, traditional 3-D printing can achieve pixels half that size.
Balls of human brain cells linked to a computer have been used to perform a very basic form of speech recognition. The hope is that such systems will use far less energy for AI tasks than silicon chips.
“This is just proof-of-concept to show we can do the job,” says Feng Guo at Indiana University Bloomington. “We do have a long way to go.”
Brain organoids are lumps of nerve cells that form when stem cells are grown in certain conditions. “They are like mini-brains,” says Guo.
It takes two or three months to grow the organoids, which are a few millimetres wide and consist of as many as 100 million nerve cells, he says. Human brains contain around 100 billion nerve cells.
The organoids are then placed on top of a microelectrode array, which is used both to send electrical signals to the organoid and to detect when nerve cells fire in response. The team calls its system “Brainoware”.
For the speech recognition task, the organoids had to learn to recognise the voice of one individual from a set of 240 audio clips of eight people pronouncing Japanese vowel sounds. The clips were sent to the organoids as sequences of signals arranged in spatial patterns.
The organoids’ initial responses had an accuracy of around 30 to 40 per cent, says Guo. After training sessions over two days, their accuracy rose to 70 to 80 per cent.
“We call this adaptive learning,” he says. If the organoids were exposed to a drug that stopped new connections forming between nerve cells, there was no improvement.
The training simply involved repeating the audio clips, and no form of feedback was provided to tell the organoids if they were right or wrong, says Guo. This is what is known in AI research as unsupervised learning.
There are two big challenges with conventional AI, says Guo. One is its high energy consumption. The other is the inherent limitations of silicon chips, such as the separation of information storage from processing.
Titouan Parcollet at the University of Cambridge, who works on conventional speech recognition, doesn’t rule out a role for biocomputing in the long run.
“However, it might also be a mistake to think that we need something like the brain to achieve what deep learning is currently doing,” says Parcollet. “Current deep-learning models are actually much better than any brain on specific and targeted tasks.”
Guo and his team’s task is so simplified that it only identifies who is speaking, not what is being said, he says. “The results aren’t really promising from the speech recognition perspective.”
Even if the performance of Brainoware can be improved, another major issue with it is that the organoids can only be maintained for one or two months, says Guo. His team is working on extending this.
“If we want to harness the computation power of organoids for AI computing, we really need to address those limitations,” he says.
23andMe is a terrific concept. In essence, the company takes a sample of your DNA and tells you about your genetic makeup. For some of us, this is the only way to learn about our heritage. Spotty records, diaspora, mistaken family lore and slavery can make tracing one’s roots incredibly difficult by traditional methods.
What 23andMe does is wonderful because your DNA is fixed. Your genes tell a story that supersedes any rumors that you come from a particular country or are descended from so-and-so.
[…]
You can replace your Social Security number, albeit with some hassle, if it is ever compromised. You can cancel your credit card with the click of a button if it is stolen. But your DNA cannot be returned for a new set — you just have what you are given. If bad actors steal or sell your genetic information, there is nothing you can do about it.
This is why 23andMe’s Oct. 6 data leak, although it reads like science fiction, is not an omen of some dark future. It is, rather, an emblem of our dangerous present.
23andMe has a very simple interface with some interesting features. “DNA Relatives” matches you with other members to whom you are related. This could be an effective, thoroughly modern way to connect with long-lost family, or to learn more about your origins.
But the Oct. 6 leak perverted this feature into something alarming. By gaining access to individual accounts through weak and recycled passwords, hackers were able to create an extensive list of people with Ashkenazi heritage. This list was then posted on forums with the names, sex and likely heritage of each member under the title “Ashkenazi DNA Data of Celebrities.”
First and foremost, collecting lists of people based on their ethnic backgrounds is a personal violation with tremendously insidious undertones. If you saw yourself and your extended family on such a list, you would not take it lightly.
[…]
I find it troubling because, in 2018, Time reported that 23andMe had sold a $300 million stake in its business to GlaxoSmithKline, allowing the pharmaceutical giant to use users’ genetic data to develop new drugs. So because you wanted to know if your grandmother was telling the truth about your roots, you spat into a cup and paid 23andMe to give your DNA to a drug company to do with it as they please.
Although 23andMe is in the crosshairs of this particular leak, there are many companies in murky waters. Last year, Consumer Reports found that 23andMe and its competitors had decent privacy policies where DNA was involved, but that these businesses “over-collect personal information about you and overshare some of your data with third parties…CR’s privacy experts say it’s unclear why collecting and then sharing much of this data is necessary to provide you the services they offer.”
[…]
As it stands, your DNA can be weaponized against you by law enforcement, insurance companies, and big pharma. But this will not be limited to you. Your DNA belongs to your whole family.
Pretend that you are going up against one other candidate for a senior role at a giant corporation. If one of these genealogy companies determines that you are at an outsized risk for a debilitating disease like Parkinson’s and your rival is not, do you think that this corporation won’t take that into account?
[…]
Insurance companies are not in the business of losing money either. If they gain access to something like that on your record, you can trust that they will use it to blackball you or jack up your rates.
In short, the world risks becoming like that of the film Gattaca, where the genetic elite enjoy access while those deemed genetically inferior are marginalized.
The train has left the station on a lot of these issues. For the people named in the list from the 23andMe leak, there is no putting the genie back in the bottle. If your DNA is on a server for one of these companies, there is a chance that it has already been used as a reference or to help pharmaceutical companies.
[…]
There are things you can do now to avoid further damage. The next time a company asks for something like your phone number or SSN, press them as to why they need it. Make it inconvenient for them to mine you for your personally identifiable information (PII). Your PII has concrete value to these places, and they count on people to be passive, to hand it over without any fuss.
[…]
The time to start worrying about this problem was 20 years ago, but we can still effect positive change today. This 23andMe leak is only the beginning; we must do everything possible to protect our identities and DNA while they still belong to us.
Scientific American has been warning about this since at least 2013. What have we done? Nothing:
If there’s a gene for hubris, the 23andMe crew has certainly got it. Last Friday the U.S. Food and Drug Administration (FDA) ordered the genetic-testing company immediately to stop selling its flagship product, its $99 “Personal Genome Service” kit. In response, the company cooed that its “relationship with the FDA is extremely important to us” and continued hawking its wares as if nothing had happened. Although the agency is right to sound a warning about 23andMe, it’s doing so for the wrong reasons.
Since late 2007, 23andMe has been known for offering cut-rate genetic testing. Spit in a vial, send it in, and the company will look at thousands of regions in your DNA that are known to vary from human to human—and which are responsible for some of our traits.
[…]
Everything seemed rosy until, in what a veteran Forbes reporter calls “the single dumbest regulatory strategy [he had] seen in 13 years of covering the Food and Drug Administration,” 23andMe changed its strategy. It apparently blew through its FDA deadlines, effectively annulling the clearance process, and abruptly cut off contact with the agency in May. Adding insult to injury, the company started an aggressive advertising campaign (“Know more about your health!”).
[…]
But as the FDA frets about the accuracy of 23andMe’s tests, it is missing their true function, and consequently the agency has no clue about the real dangers they pose. The Personal Genome Service isn’t primarily intended to be a medical device. It is a mechanism meant to be a front end for a massive information-gathering operation against an unwitting public.
Sound paranoid? Consider the case of Google. (One of the founders of 23andMe, Anne Wojcicki, is presently married to Sergey Brin, a co-founder of Google.) When it first launched, Google billed itself as a faithful servant of the consumer, a company devoted only to building the best tool to help us satisfy our cravings for information on the web. And Google’s search engine did just that. But as we now know, the fundamental purpose of the company wasn’t to help us search, but to hoard information. Every search query entered into its computers is stored indefinitely. Joined with information gleaned from cookies that Google plants in our browsers, along with personally identifiable data that dribbles from our computer hardware and from our networks, and with the amazing volumes of information that we always seem willing to share with perfect strangers—even corporate ones—that data store has become Google’s real asset.
[…]
23andMe reserves the right to use your personal information—including your genome—to inform you about events and to try to sell you products and services. There is a much more lucrative market waiting in the wings, too. One could easily imagine how insurance companies and pharmaceutical firms might be interested in getting their hands on your genetic information, the better to sell you products (or deny them to you).
[…]
Even though 23andMe currently asks permission to use your genetic information for scientific research, the company has explicitly stated that its database-sifting scientific work “does not constitute research on human subjects,” meaning that it is not subject to the rules and regulations that are supposed to protect experimental subjects’ privacy and welfare.
Those of us who have not volunteered to be a part of the grand experiment have even less protection. Even if 23andMe keeps your genome confidential against hackers, corporate takeovers, and the temptations of filthy lucre forever and ever, there is plenty of evidence that there is no such thing as an “anonymous” genome anymore. It is possible to use the internet to identify the owner of a snippet of genetic information and it is getting easier day by day.
This becomes a particularly acute problem once you realize that every one of your relatives who spits in a 23andMe vial is giving the company a not-inconsiderable bit of your own genetic information along with their own. If you have several close relatives who are already in 23andMe’s database, the company already essentially has all that it needs to know about you.
The following is an extract from our Lost in Space-Time newsletter. Each month, we hand over the keyboard to a physicist or two to tell you about fascinating ideas from their corner of the universe. You can sign up for Lost in Space-Time for free here.
Space-time is a curious thing. Look around and it’s easy enough to visualise what the space component is in the abstract. It’s three dimensions: left-right, forwards-backwards and up-down. It’s a graph with x, y and z axes. Time, too, is easy enough. We’re always moving forwards in time, so we might visualise it as a straight line or one big arrow. Every second is a little nudge forwards.
But space-time, well that’s a little different. Albert Einstein fused space and time together in his theories of relativity. The outcome was a new fabric of reality, a thing called space-time that permeates the universe. How gravity works popped out of the explorations of this new way of thinking. Rather than gravity being a force that somehow operates remotely through space, Einstein proposed that bodies curve space-time, and it is this curvature that causes them to be gravitationally drawn to each other. Our very best descriptions of the cosmos begin with space-time.
Yet, visualising it is next to impossible. The three dimensions of space and one of time give four dimensions in total. But space-time itself is curved, as Einstein proposed. That means to really imagine it, you need a fifth dimension to curve into.
Luckily, all is not lost. There is a mathematical trick to visualising space-time that I’ve come up with. It’s a simplified way of thinking that not only illustrates how space-time can be curved, but also how such curvature can draw bodies towards each other. It can give you new insight into how gravity works in our cosmos.
First, let’s start with a typical way to draw space-time. Pictures like the one below are meant to illustrate Einstein’s idea that gravity arises in the universe from massive objects distorting space-time. Placing a small object, say a marble, near one of these dimples would result in it rolling towards one of the larger objects, in much the same way that gravity pulls objects together.
The weight of different space objects influences the distortion of space-time. Credit: Manil Suri
However, the diagram is missing a lot. While the objects depicted are three dimensional, the space they’re curving is only two dimensional. Moreover, time seems to have been entirely omitted, so it’s pure space – not space-time – that’s curving.
Here’s my trick to get around this: simplify things by letting space be only one dimensional. This makes the total number of space-time dimensions a more manageable two.
Now we can represent our 1-D space by the double-arrowed horizontal line in the left panel of the diagram below. Let time be represented by the perpendicular direction, giving a two-dimensional space-time plane. This plane is then successive snapshots, stacked one on top of the other, of where objects are located in the single space dimension at each instant.
Suppose now there are objects – say particles – at points A and B in our universe. Then if these particles remained at rest, their trajectories through space-time would just be the two parallel paths AA’ and BB’ as shown. This simply represents the fact that for every time instant, the particles remain exactly where they are in 1-D space. Such behaviour is what we’d expect in the absence of gravity or any other forces.
However, if gravity came into play, we would expect the two particles to draw closer to each other as time went on. In other words, A’ would be much closer to B’ than A was to B.
Now what if gravity, as Einstein proposed, wasn’t a force in the usual sense? What if it couldn’t act directly on A and B to bring them closer, but rather, could only cause such an effect by deforming the 2-D space-time plane? Would there be a suitable such deformation that would still result in A’ getting closer to B’?
The 2-D space-time plane (left) wrapped around a sphere via the equirectangular projection (middle and right panels). Credit: Manil Suri
The answer is yes. Were the plane drawn on a rubber sheet, you could stretch it in various ways to easily verify that many such deformations exist. The one we’ll pick (why exactly, we’ll see below) is to wrap the plane around a sphere, as shown in the middle panel. This can be mathematically accomplished by the same method used to project a rectangular map of the world onto a globe. The formula this involves (called the “equirectangular projection”) has been known for almost two millennia: vertical lines on the rectangle correspond to lines of longitude on the sphere and horizontal ones to lines of latitude. You can see from the right panel that A’ has indeed gotten closer to B’, just as we might expect under gravity.
On the plane, the particles follow the shortest paths between A and A’, and B and B’, respectively. These are just straight lines. On the sphere, the trajectories AA’ and BB’ still represent shortest distance paths. This is because the shortest distance between two points on a spherical surface is always along one of the circles of maximal radius (these include, e.g., lines of longitude and the equator). Such curves that produce the shortest distance are called geodesics. So the geodesics AA’ and BB’ on the plane get transformed to corresponding geodesics on the sphere. (This wouldn’t necessarily happen for an arbitrary deformation, which is why we chose our wrapping around the sphere.)
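Those claims are easy to check numerically. What follows is a minimal sketch (my own illustration, not from Suri’s piece), assuming a unit-radius sphere and treating the two worldlines as meridians: a plane point (x, t) maps to longitude x/R and latitude t/R, and we measure the great-circle separation of the two particles at successive instants.

```python
import math

R = 1.0   # sphere radius -- an arbitrary choice that sets the curvature scale
dx = 0.3  # initial separation of particles A and B in 1-D space

def to_sphere(x, t):
    """Equirectangular mapping: plane point (x, t) -> (longitude, latitude).
    Vertical (time) lines become meridians; horizontal lines become parallels."""
    return x / R, t / R

def geodesic_distance(p, q):
    """Great-circle (shortest-path) distance between two (lon, lat) points."""
    (lon1, lat1), (lon2, lat2) = p, q
    return R * math.acos(math.sin(lat1) * math.sin(lat2)
                         + math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2))

# Walk up the two worldlines (the meridians through x = 0 and x = dx) and watch
# the separation shrink: "gravity" emerging from geometry alone.
for t in (0.0, 0.5, 1.0, 1.5):
    d = geodesic_distance(to_sphere(0.0, t), to_sphere(dx, t))
    print(f"t = {t:.1f}: separation = {d:.3f}")
```

Running this prints separations of 0.300, 0.263, 0.162 and 0.021: the distance falls off roughly as cos(t/R)·Δx, because the meridians – which are geodesics – converge as latitude climbs towards the pole, exactly the drawing-together described above.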
Einstein postulated that particles not subject to external forces will always move through space-time along such “shortest path” geodesics. In the absence of gravity, these geodesics are just straight lines. Gravity, when introduced, isn’t counted as an external force. Rather, its effect is to curve space-time, hence changing the geodesics. The particles now follow these new geodesics, causing them to draw closer.
This is the key visualisation afforded by our simplified description of space-time. We can begin to understand how gravity, rather than being a force that acts mysteriously at a distance, could really be a result of geometry. How it can act to pull objects together via curvature built into space-time.
The above insight was fundamental to Einstein’s incorporation of gravity into his general theory of relativity. The actual theory is much more complicated, since space-time only curves in the local vicinity of bodies, not globally, as in our model. Moreover, the geometry involved must also respect the fact that nothing can travel faster than the speed of light. This effectively means that the concept of “shortest distance” has to also be modified, with the time dimension having to be treated very differently from the space dimensions.
Nevertheless, Einstein’s explanation posits, for instance, that the sun’s mass curves space-time in our solar system. That is why planets revolve around the sun rather than flying off in straight lines – they are just following the curved geodesics in this deformed space-time.
This has been confirmed by measuring how light from distant astronomical sources gets distorted by massive galaxies. Space-time truly is curved in our universe; it’s not just a mathematical convenience.
There’s a classical Buddhist parable about a group of blind men relying only on touch to figure out an animal unfamiliar to them – an elephant. Space-time is our elephant here – we can never hope to see it in its full 4-D form, or watch it curve to cause gravity. But the simplified visualisation presented here can help us better understand it.
Genetic testing company 23andMe changed its terms of service to prevent customers from filing class action lawsuits or participating in a jury trial days after reports revealing that attackers accessed personal information of nearly 7 million people — half of the company’s user base — in an October hack.
In an email sent to customers earlier this week viewed by Engadget, the company announced that it had made updates to the “Dispute Resolution and Arbitration section” of its terms “to include procedures that will encourage a prompt resolution of any disputes and to streamline arbitration proceedings where multiple similar claims are filed.” Clicking through leads customers to the newest version of the company’s terms of service that essentially disallow customers from filing class action lawsuits, something that more people are likely to do now that the scale of the hack is clearer.
“To the fullest extent allowed by applicable law, you and we agree that each party may bring disputes against the other party only in an individual capacity and not as a class action or collective action or class arbitration,” the updated terms say. Notably, 23andMe will automatically opt customers into the new terms unless they specifically inform the company that they disagree by sending an email within 30 days of receiving the firm’s notice. Unless they do that, they “will be deemed to have agreed to the new terms,” the company’s email tells customers.
23andMe did not respond to a request for comment from Engadget.
In October, the San Francisco-based genetic testing company headed by Anne Wojcicki announced that hackers had accessed sensitive user information including photos, full names, geographical location, information related to ancestry trees, and even names of related family members. The company said that no genetic material or DNA records were exposed. Days after that attack, the hackers put up profiles of hundreds of thousands of Ashkenazi Jews and Chinese people for sale on the internet. But until last week, it wasn’t clear how many people were impacted.
In a filing with the Securities and Exchange Commission, 23andMe said that “multiple class action claims” have already been filed against the company in both federal and state court in California and state court in Illinois, as well as in Canadian courts.
Forbidding people from filing class action lawsuits, as Axios notes, hides information about the proceedings from the public, since affected parties typically attempt to resolve disputes with arbitrators in private. Experts such as Chicago-Kent College of Law professor Nancy Kim, an expert on online contracts, told Axios that changing its terms wouldn’t be enough to protect 23andMe in court.
The company’s new terms are sparking outrage online. “Wow they first screw up and then they try to screw their users by being shady,” a user who goes by Daniel Arroyo posted on X. “Seems like they’re really trying to cover their asses,” wrote another user called Paul Duke, “and head off lawsuits after announcing hackers got personal data about customers.”
A number of popular mobile password managers are inadvertently spilling user credentials due to a vulnerability in the autofill functionality of Android apps.
The vulnerability, dubbed “AutoSpill,” can expose users’ saved credentials from mobile password managers by circumventing Android’s secure autofill mechanism, according to university researchers at IIIT Hyderabad, who discovered the vulnerability and presented their research at Black Hat Europe this week.
The researchers, Ankit Gangwal, Shubham Singh and Abhijeet Srivastava, found that when an Android app loads a login page in WebView, password managers can get “disoriented” about where they should target the user’s login information and instead expose their credentials to the underlying app’s native fields, they said. This is because WebView, the preinstalled engine from Google, lets developers display web content in-app without launching a web browser; it is during this in-app rendering that the autofill request is generated.
[…]
“When the password manager is invoked to autofill the credentials, ideally, it should autofill only into the Google or Facebook page that has been loaded. But we found that the autofill operation could accidentally expose the credentials to the base app.”
Gangwal notes that the ramifications of this vulnerability, particularly in a scenario where the base app is malicious, are significant. He added: “Even without phishing, any malicious app that asks you to log in via another site, like Google or Facebook, can automatically access sensitive information.”
The researchers tested the AutoSpill vulnerability using some of the most popular password managers, including 1Password, LastPass, Keeper and Enpass, on new and up-to-date Android devices. They found that most apps were vulnerable to credential leakage, even with JavaScript injection disabled. When JavaScript injection was enabled, all the password managers were susceptible to the AutoSpill vulnerability.
It’s pretty well known that you shouldn’t use in-app browsers anyway (PSA: Stop Using In-App Browsers Now), but I am not sure how you would avoid using WebView in this case.
Google is dealing with its second “lost data” fiasco in the past few months. This time, it’s Google Drive, which has been mysteriously losing files for some people. Google acknowledged the issue on November 27, and a week later, it posted what it called a fix.
It doesn’t feel like Google is describing this issue correctly; the company still calls it a “syncing issue” with the Drive desktop app versions 84.0.0.0 through 84.0.4.0. Syncing problems would only mean files don’t make it to or from the cloud, and that doesn’t explain why people are completely losing files. In the most popular issue thread on the Google Drive Community forums, several users describe spreadsheets and documents going missing, which would all have been created and saved in the web interface, not the desktop app, and it’s hard to see how the desktop app could affect that. Many users peg “May 2023” as the time documents stopped saving. Some say they’ve never used the desktop app.
[…]
Google’s recovery instructions outline a few ways to attempt to “recover your files.” One is via a new secret UI in the Google Drive desktop app version 85.0.13.0 or higher. If you hold shift while clicking on the Drive system tray/menu bar icon, you’ll get a special debug UI with an option to “Recover from backups.” Google says, “Once recovery is complete, you’ll see a new folder on your desktop with the unsynced files named Google Drive Recovery.” Google doesn’t explain what this does or how it works.
Option No. 2 is surprising: using the command line to recover files. The new Drive binary comes with flags for ‘--recover_from_account_backups’ and ‘--recover_from_app_data_path’, which tells us a bit about what is going on. When Google first acknowledged the issue, it warned users not to delete or move Drive’s app data folder. These flags from the recovery process make it sound like Google hopes your missing files will be in the Drive cache somewhere. Google also suggests trying Windows Backup or macOS Time Machine to find your files.
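Putting option No. 2 together: Google doesn’t spell out the exact invocation, but it presumably looks something like the sketch below. The executable path is a placeholder – locate your own platform’s Drive for desktop binary – and the flags are the ones Google names above.

```python
# Hedged sketch of the command-line recovery route. The binary path below is a
# guess at a typical Windows install location -- adjust for your machine/OS.
import subprocess

DRIVE_BINARY = r"C:\Program Files\Google\Drive File Stream\GoogleDriveFS.exe"  # placeholder

# Attempt recovery from account backups; swap in --recover_from_app_data_path
# to try the local app-data cache instead.
subprocess.run([DRIVE_BINARY, "--recover_from_account_backups"], check=True)
```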
Google locked the issue thread on the Drive Community Forums at 170 replies before it was clear the problem was solved. It’s also marking any additional threads as “duplicates” and locking them.
[…]
Of the few replies before Google locked the thread, most suggested that Google’s fix did not work. One user calls the fix “complete BS,” adding, “The ‘solution’ doesn’t work for most people.” Another says, “Google Drive DELETED my files so they are not available for recovery. This ‘fix’ is not a fix!” There are lots of other reports of the fix not working, and not many saying people got their files back. The idea that Drive would have months-old copies of files in the app data folder is hard to believe.
A series of attacks against Microsoft Active Directory domains could allow miscreants to spoof DNS records, compromise Active Directory and steal all the secrets it stores, according to Akamai security researchers.
We’re told the attacks – which work against Microsoft Dynamic Host Configuration Protocol (DHCP) servers running in their default configuration – don’t require any credentials.
Akamai says it reported the issues to Redmond, which isn’t planning to fix the issue. Microsoft did not respond to The Register‘s inquiries.
The good news, according to Akamai, is that it hasn’t yet seen a server under this type of attack. The bad news: the firm’s flaw finders also told us that massive numbers of organizations are likely vulnerable, considering 40 percent of the “thousands” of networks that Akamai monitors are running Microsoft DHCP in the vulnerable configuration.
In addition to detailing the security issue, the cloud services biz also provided a tool that sysadmins can use to detect configurations that are at risk.
While the current report doesn’t provide technical details or proof-of-concept exploits, Akamai has promised to publish code in the near future that implements these attacks, called DDSpoof – short for DHCP DNS Spoof.
“We will show how unauthenticated attackers can collect necessary data from DHCP servers, identify vulnerable DNS records, overwrite them, and use that ability to compromise AD domains,” Akamai security researcher Ori David said.
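To make the moving parts concrete, here is a minimal, lab-only sketch (our illustration, not Akamai’s unreleased DDSpoof code) using scapy. It sends the kind of unauthenticated DHCP message that starts the process David describes: the client supplies a name, and a server configured for DHCP DNS Dynamic Updates may then create – or overwrite – a DNS record for it. The interface and hostname are placeholders, and which option the server honors (hostname vs. client FQDN) depends on its configuration.

```python
# Minimal sketch: an unauthenticated DHCP DISCOVER carrying a hostname.
# Run only against a lab network you own; requires scapy and root privileges.
from scapy.all import Ether, IP, UDP, BOOTP, DHCP, get_if_hwaddr, sendp

IFACE = "eth0"                        # lab interface (placeholder)
mac = get_if_hwaddr(IFACE)
mac_raw = bytes.fromhex(mac.replace(":", ""))

discover = (
    Ether(src=mac, dst="ff:ff:ff:ff:ff:ff")
    / IP(src="0.0.0.0", dst="255.255.255.255")
    / UDP(sport=68, dport=67)
    / BOOTP(chaddr=mac_raw, xid=0x1234)
    / DHCP(options=[
        ("message-type", "discover"),
        ("hostname", "victim-host"),  # the name the server may register (placeholder)
        "end",
    ])
)
sendp(discover, iface=IFACE, verbose=False)
# A real lease continues with OFFER -> REQUEST -> ACK; the DNS create/overwrite
# happens server-side when the DHCP server performs its dynamic update.
```

Note that nothing in this exchange authenticates the client against Active Directory – which is exactly why spoofable records can follow.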
The DHCP attack research builds on earlier work by NetSPI’s Kevin Robertson, who detailed ways to exploit flaws in DNS zones.
[…]
In addition to creating non-existent DNS records, unauthenticated attackers can also use the DHCP server to overwrite existing data, including DNS records inside the Active Directory-integrated (ADI) zone in cases where the DHCP server is installed on a domain controller – which David says is the case in 57 percent of the networks Akamai monitors.
“All these domains are vulnerable by default,” he wrote. “Although this risk was acknowledged by Microsoft in their documentation, we believe that the awareness of this misconfiguration is not in accordance with its potential impact.”
[…]
We’re still waiting to hear from Microsoft about all of these issues and will update this story if and when we do. But in the meantime, we’d suggest following Akamai’s advice: disable DHCP DNS Dynamic Updates if you haven’t already, and avoid DNSUpdateProxy altogether.
“Use the same DNS credential across all your DHCP servers instead,” is the advice.