Historically, manufacturers have let buyers unlock that access and customize what software their phones run. Notable exceptions in the US have, for the most part, only included carrier-specific phone variants.
Unlocking a Pixel smartphone, for example, requires adjusting a couple of settings and installing a couple of well-known tools. Then you’re ready to purge preinstalled software or install a new launcher. Roughly a year ago, Xiaomi introduced a policy limiting users to three unlocked devices per account, providing only a limited time window for unlocking, and demanding waiting periods before doing so. It has now gone even further, limiting users to unlocking the bootloader of just a single device per year.
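For readers who haven’t done it, the sketch below shows roughly what those settings and tools amount to on a Pixel. It assumes the Android platform-tools (adb and fastboot) are installed and on your PATH, and that OEM unlocking and USB debugging have already been enabled in Developer options. It is only a sketch: unlocking factory-resets the device, so treat the commands as a reference rather than something to run blindly.

```python
# Rough sketch of the Pixel bootloader-unlock flow described above.
# Assumes adb/fastboot (Android platform-tools) are installed and that
# "OEM unlocking" and "USB debugging" are already enabled on the phone.
# WARNING: "fastboot flashing unlock" wipes all user data.
import subprocess

def run(*cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

def unlock_pixel():
    run("adb", "devices")                  # confirm the phone is visible over USB
    run("adb", "reboot", "bootloader")     # drop into the bootloader (fastboot mode)
    run("fastboot", "flashing", "unlock")  # confirm on-device; this wipes user data
    run("fastboot", "reboot")

if __name__ == "__main__":
    unlock_pixel()
```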
[…]
Custom ROMs usually (but not always) derive from pre-existing OSs like Android or Xiaomi’s HyperOS. To write operating software that works on a certain device, you need to develop it on that specific device. Consequently, individuals and teams throughout the enthusiast phone sphere constantly add to their collections of bootloader-unlocked phones. The new unlocking restrictions could place undue hardship on resource-limited development teams, reducing the number of custom ROMs produced moving forward.
Custom ROMs aren’t just important for letting you do what you want with your own hardware; crucially, they also let you keep updating a device long after manufacturer support ends (e.g. CyanogenMod), keeping “outdated” devices running and useful.
An optical fibre technology can help chips communicate with each other at the speed of light, enabling them to transmit 80 times as much information as they could using traditional electrical connections. That could significantly speed up the training times required for large artificial intelligence models – from months to weeks – while also reducing the energy and emissions costs for data centres.
Most advanced computer chips still communicate using electrical signals carried over copper wires. But as the tech industry races to train large AI models – a process that requires networks of AI superchips to transfer huge amounts of data – companies are eager to link chips using the light-speed communication of fibre optics.
[…]
Khare and his colleagues have developed an optics module that would enable chipmakers to add six times as many optical fibres to the edge of a chip, compared to current technologies. The module uses a structure called an optical waveguide to connect as many as 51 optical fibres per millimetre. It also prevents light signals in one fibre from interfering with those in its neighbours.
[…]
IBM has already put the optical module through stress tests that included high humidity and temperatures ranging from -40°C (-40°F) to 125°C (257°F). Hutcheson expects that major semiconductor manufacturing companies may be interested in licensing the technology.
For years, car safety experts and everyday drivers have bemoaned the loss of the humble button. Modern cars have almost universally replaced dashboards full of tactile knobs with sleek, iPad-like digital displays, despite concerns these alluring devices might be making distracted driving worse. But there are signs the tide might be shifting.
After going all in on touch screens for years, Korean carmaker Hyundai is publicly shifting gears. Hyundai Design North America Vice President Ha Hak-soo remarked on the shift during a recent interview with the JoongAng Daily, admitting the company was lured in by the “wow” factor of massive, all-in-one screen-based infotainment systems. Customers apparently didn’t share that enthusiasm.
“When we tested with our focus group, we realized that people get stressed, annoyed and steamed when they want to control something in a pinch but are unable to do so,” Ha said.
Now the company is reversing course. Hyundai previously announced it would use physical buttons and knobs for many in-cabin controls across its new lineup of vehicles. They aren’t alone. Porsche and Volkswagen are amongst the major brands planning to buck the trend. It’s part of what looks like a broader acknowledgment of so-called “screen fatigue” setting in amongst car buyers.
[…]
It turns out drivers, for the most part, aren’t too interested in all that choice and functionality. A survey of U.S. car owners by JD Power last year found a consecutive two-year decline in overall consumer satisfaction with their vehicles for the first time in 28 years. The main driver of that dissatisfaction was complicated, difficult-to-navigate touch-based infotainment systems. A more recent JD Power survey found that most drivers ranked passenger-side display screens–a growing trend in the industry–as simply “not necessary.” Only 56% of drivers surveyed said they preferred to use their vehicle’s built-in infotainment systems to play audio.
“This year’s study makes it clear that owners find some technologies of little use and/or are continually annoying,” JD Power director of user experience benchmarking and technology Kathleen Rizk said in a statement.
There’s also evidence that a growing reliance on overly complicated touch-based infotainment displays may be a safety hazard. A 2017 study conducted by the AAA Foundation claims drivers navigating through in-car screens to program navigation apps and other features were “visually and mentally” distracted for an average of 40 seconds. A car traveling at 50 mph could cover half a mile during that time. Buttons and knobs aren’t totally distraction-free, but research shows their tactile response allows drivers to use them more easily without looking down and away from the road. The European New Car Assessment Program (NCAP), an independent safety organization, stepped into the debate earlier this year and announced it would grant five-star safety ratings only to cars with physical controls for turn signals, windshield wipers, horns, and other critical features.
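The half-mile figure follows directly from the numbers quoted in the study:

```python
# Sanity check on the AAA figure above: distance covered in 40 s at 50 mph.
speed_mph = 50
seconds_distracted = 40
miles = speed_mph * seconds_distracted / 3600  # mph times hours
print(f"{miles:.2f} miles")  # ~0.56 miles, a bit over half a mile
```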
If you have Spotify’s soon-to-be-bricked Car Thing, there are a few ways you can give it a new lease on life. YouTuber Dammit Jeff has showcased modifications that make the device useful as a desktop music controller, customizable shortcut tool, or a simple digital clock. Ars Technica’s Kevin Purdy reports: Spotify had previously posted the code for its uboot and kernel to GitHub, under the very unassuming name “spsgsb” and with no announcement (as discovered by Josh Hendrickson). Jeff has one idea why the streaming giant might not have made much noise about it: “The truth is, this thing isn’t really great at running anything.” It has half a gigabyte of memory, 4GB of internal storage, and a “really crappy processor” (Amlogic S905D2 SoC), and is mostly good for controlling music.
How do you get in? The SoC has a built-in USB “burning mode,” allowing a connected computer, running the right toolkit, to open up root access and overwrite its firmware. Jeff had quite a few issues getting connected (check his video description for some guidance), but it’s “drag and drop” once you’re in. Jeff runs through a few of the most popular options for a repurposed Car Thing:
– DeskThing, which largely makes Spotify desk-friendly, but adds a tiny app store for weather (including Jeff’s own WeatherWave), clocks, and alternate music controls
– GlanceThing, which keeps the music controls but also provides some Stream-Deck-like app-launching shortcuts for your main computer
– Nocturne, currently invite-only, a wholly redesigned interface that restores all of the device’s original Spotify functionality
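Backing up to the “how do you get in” step: as a purely illustrative sketch (not something from Jeff’s video), here is one way to check whether a Car Thing is visible in that USB burning mode before launching the flashing toolkit. The Amlogic vendor and product IDs and the use of pyusb are my assumptions, not values confirmed by the article.

```python
# Hypothetical check that a Car Thing is exposed in the Amlogic S905D2's USB
# "burning mode" before handing off to a flashing toolkit.
# ASSUMPTIONS: vendor ID 0x1B8E (Amlogic) and product ID 0xC003 (burn mode)
# are guesses for illustration, not confirmed by the article or video.
import usb.core  # pip install pyusb

AMLOGIC_VID = 0x1B8E    # assumed Amlogic USB vendor ID
BURN_MODE_PID = 0xC003  # assumed burning-mode product ID

dev = usb.core.find(idVendor=AMLOGIC_VID, idProduct=BURN_MODE_PID)
if dev is None:
    print("No device in burning mode found; check the USB boot shortcut and cable.")
else:
    print("Device detected in burning mode; safe to launch the flashing toolkit.")
```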
Wind turbines are necessary for ensuring society’s sustainable future, but they still have a recycling problem. Decommissioned installations are destined for landfills in many cases, while the steel parts that actually make it to recycling facilities are only broken down after generating large amounts of (often dirty) greenhouse gas emissions. Two Dutch companies, however, recently proposed new ways to repurpose a wind turbine’s physically largest and most cumbersome pieces into tiny houses, boats, and more.
From October 19 to October 27 at Dutch Design Week 2024, Vattenfall and the design studio Superuse are showcasing a roughly 393-sq-ft home built inside a retired nacelle—the topmost, steel-encased part of a wind turbine that houses its generating components, such as the generator itself, gearbox, brakes, and drive train. After hollowing out the nacelle’s original internal parts, the team used the casing for a prototype that, instead of encasing turbine parts, now features a living space, bathroom, and kitchen with amenities like solar-powered electricity and water heating, as well as a heat pump.
Portions of the home interior were also constructed from recycled wind turbine components. Credit: Vattenfall / Jorrit Lousberg
“We are looking for innovative ways in which you can reuse materials from used turbines… [which necessitates] making something new from them with as few modifications as possible,” Thomas Hjort, Vattenfall’s director of innovation, said in a statement. “That saves raw materials [and] energy consumption, and in this way we ensure that these materials are useful for many years after their first working life.”
Superuse didn’t take the easiest route to the new house. The team—with help from sustainable design firms Blade-Made and Woodwave—reportedly picked the smallest possible nacelle from which to construct a building code-compliant dwelling, rather than selecting a larger, modern nacelle that would have provided more room for installing electrical wiring and appliances. In this case, the model home uses the nacelle of a V80 2MW turbine. But more recent designs are often much roomier than the 20-year-old V80’s source material, meaning future iterations could provide even more space for inhabitants.
An artist’s conceptualization of an entire community space incorporating recycled wind turbine components. Credit: Courtesy of Vattenfall
The project designers estimate that at least 10,000 V80 turbine nacelles currently exist around the world, most of which are still in operation. That will change in the coming years, however, as global wind energy demands increase and more advanced turbines are installed to fulfill those needs.
“If such a complex structure as a house is possible, then numerous simpler solutions are also feasible and scalable,” argued Jos de Krieger, a partner of Superuse and Blade-Made.
And to make their point, Vattenfall recently offered another example of upcycled turbine parts. Earlier this month, the company also revealed that prototype tests indicate comparatively small turbine blades can be made buoyant with a few modifications. Once the blade was properly sealed and reinforced, architects Sonja Draskovic and Jasper Manders topped their 90-foot test blade with green astroturf, an enclosed one-room dwelling, a picket fence, and a lawn table to demonstrate one use case. And the potential uses for these miniature artificial islands may not end there.
“[W]e started thinking, what can we do with this new land?” Draskovic said in a statement. “Solar parks, playgrounds, houses: anything is possible.”
Other potential uses for wind turbine blades include floating solar farms, traffic noise barriers, and boat houses. Credit: Vattenfall / Jorrit Lousberg
Draskovic and collaborators noted that, like the nacelle home, the blade they used is one of the smallest currently available. More recent designs are nearly 328 feet long, which may present challenges in future float tests. But blade repurposing doesn’t need to stick to the seas. Aside from boats, designers believe decommissioned turbine blades or their smaller parts may find their way into traffic noise barriers or parking garages.
It will likely take a combination of reuses to fully close a wind turbine’s circular life cycle, and especially problematic components such as their rare-earth-laden magnets require additional consideration and solutions. Meanwhile, the design teams still need to perform additional experiments and alterations on both the tiny home and the boat before scaling them for wider use. Still, the recycling projects have already inspired people like Vattenfall’s director of innovation to look ahead to additional recycling possibilities.
“With this design, I no longer see images of wind turbine blades that we bury underground like bulky waste,” Hjort said.
Microsoft has removed Windows Mixed Reality from Windows 11.
With Windows 11 24H2, the latest major version of Microsoft’s PC operating system, you can no longer use a Windows MR headset in any way – not even on Steam.
This includes all the Windows MR headsets from Acer, Asus, Dell, HP, Lenovo, and Samsung, including HP’s Reverb G2, released in 2020.
Screenshot taken by UploadVR.
UploadVR tested Windows 11 24H2 with a Reverb G2 and found the above notice. Microsoft had already confirmed to UploadVR that the removal was intentional when it originally announced the move back in December.
In August, 3.49% of SteamVR users were using a Windows MR headset, which we estimate to be around 80,000 people. If they install Windows 11 24H2, their VR headset will effectively become a paperweight.
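Working backwards from those two figures gives a rough sense of the monthly SteamVR population that estimate implies:

```python
# Back-of-envelope: if 3.49% of SteamVR users is roughly 80,000 people,
# the implied monthly SteamVR user base is:
wmr_share = 0.0349
wmr_users = 80_000
total_steamvr_users = wmr_users / wmr_share
print(f"~{total_steamvr_users:,.0f} SteamVR users")  # roughly 2.3 million
```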
“Existing Windows Mixed Reality devices will continue to work with Steam through November 2026, if users remain on their current released version of Windows 11 (version 23H2) and do not upgrade to this year’s annual feature update for Windows 11 (version 24H2).”
The death of Windows MR headsets comes on the same week Microsoft revealed that HoloLens 2 production has ended, and that software support for the AR headset will end after 2027.
Despite the name, all Windows MR headsets were actually VR-only, and are compatible with most SteamVR content via Microsoft’s SteamVR driver.
The first Windows MR headsets arrived in late 2017 from Acer, Asus, Dell, HP, Lenovo, and Samsung, aiming to compete with the Oculus Rift and HTC Vive that had launched a year earlier. They were the first consumer VR products to deliver inside-out positional tracking, for both the headset and controllers.
[…]
In recent years Microsoft has shifted its XR focus to a software-based, long-term strategic partnership with Meta.
Soon, that partnership will also bring the ability to automatically extend Windows 11 laptops into the headset just by looking at them, including spawning entirely virtual extra monitors.
And earlier this year Microsoft announced Windows Volumetric Apps, a new API that lets PC applications streamed to Meta Quest extend their 3D elements into 3D space.
A real crying shame. So another reason people will hang on to their Windows 10 installations even more. Hopefully (but doubtfully) they will release the source code and allow people to chug on under their own steam. Bricking these headsets in under four years should be illegal.
Consumer and digital rights activists are calling on the US Federal Trade Commission to stop device-makers using software to reduce product functionality, bricking unloved kit, or adding surprise fees post-purchase.
In an eight-page letter [PDF] to the Commission (FTC), the activists mentioned the Google/Levi’s collaboration on a denim jacket that contained sensors enabling it to control an Android device through a special app. When the app was discontinued in 2023, the jacket lost that functionality. The letter also mentions the “Car Thing,” an automotive infotainment device created by Spotify, which the company bricked fewer than two years after launch without offering a refund.
Another example highlighted is the $1,695 Snoo connected bassinet, manufactured by an outfit named Happiest Baby. Kids outgrow bassinets, yet Happiest Baby this year notified customers that if they ever sold or gave away their bassinets, the device’s next owner would have to pay a new $19.99 monthly subscription fee to keep certain features. Activists argue that reduces the resale value of the devices.
Signatories to the letter include individuals from Consumer Reports, the Electronic Frontier Foundation, teardown artists iFixit, and the Software Freedom Conservancy. Environmental groups and computer repair shops also signed the letter.
The signatories urged the FTC to create “clear guidance” that would prevent device manufacturers from using software that locks out features and functions in products that are already owned by customers.
The practice of using software to block features and functions is referred to by the signatories as “software tethering.”
“Consumers need a clear standard for what to expect when purchasing a connected device,” stated Justin Brookman, director of technology policy at Consumer Reports and a former policy director of the FTC’s Office of Technology, Research, and Investigation. “Too often, consumers are left with devices that stop functioning because companies decide to end support with little to no warning. This leaves people stranded with devices they once relied on, unable to access features or updates.”
“Consumers increasingly face a death by a thousand cuts as connected products they purchase lose their software support or advertised features that may have prompted the original purchase,” the letter states. “They may see the device turned into a brick or their favorite features locked behind a subscription. Such software tethers also prevent consumers from reselling their purchases, as some software features may not transfer, or manufacturers may shut down devices, causing a second-hand buyer harm.”
More recent examples include Anova suddenly charging for a subscription and Peloton suddenly asking for an extra fee on resold units. The history here is long and littered: orphaned video games are a huge category, and many, many gadget makers (Logitech is really good at this) have abandoned products and bricked them.
Researchers at Cornell University tapped into fungal mycelia to power a pair of proof-of-concept robots. Mycelia, the underground fungal network that can sprout mushrooms as its above-ground fruit, can sense light and chemical reactions and communicate through electrical signals. This makes it a novel component in hybrid robotics that could someday detect crop conditions otherwise invisible to humans.
The Cornell researchers created two robots: a soft, spider-like one and a four-wheeled buggy. The researchers used mycelia’s light-sensing abilities to control the machines using ultraviolet light. The project required experts in mycology (the study of fungi), neurobiology, mechanical engineering, electronics and signal processing.
“If you think about a synthetic system — let’s say, any passive sensor — we just use it for one purpose,” lead author Anand Mishra said. “But living systems respond to touch, they respond to light, they respond to heat, they respond to even some unknowns, like signals. That’s why we think, OK, if you wanted to build future robots, how can they work in an unexpected environment? We can leverage these living systems, and any unknown input comes in, the robot will respond to that.”
The fungal robot uses an electrical interface that (after blocking out interference from vibrations and electromagnetic signals) records and processes the mycelia’s electrophysical activity in real time. A controller, mimicking a portion of animals’ central nervous systems, acts as “a kind of neural circuit.” The team designed the controller to read the fungi’s raw electrical signal, process it and translate it into digital controls, which are then sent to the machine’s actuators.
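The article describes that loop only at a high level, so the toy sketch below is my own illustration of the general pattern (filter a noisy biopotential recording, count spikes, map the spike rate to a motor command), not the Cornell team’s actual controller.

```python
# Illustrative (not Cornell's) control loop: raw mycelium voltage samples in,
# actuator duty cycle out. Spike detection here is a simple threshold crossing.
import numpy as np

def spike_rate(samples_mv, threshold_mv=0.5, sample_rate_hz=100):
    """Count upward threshold crossings and return spikes per second."""
    above = samples_mv > threshold_mv
    crossings = np.sum(~above[:-1] & above[1:])
    return crossings * sample_rate_hz / len(samples_mv)

def actuator_command(rate_hz, max_rate_hz=5.0):
    """Map spike rate to a 0-1 duty cycle for a motor or valve."""
    return float(np.clip(rate_hz / max_rate_hz, 0.0, 1.0))

# Fake one second of noisy recording with a few spikes, as a usage example.
rng = np.random.default_rng(0)
signal = 0.1 * rng.standard_normal(100)
signal[[10, 40, 80]] += 1.0
print(actuator_command(spike_rate(signal)))  # ~0.6
```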
The pair of shroom-bots successfully completed three experiments, including walking and rolling in response to the mycelia’s signals and changing their gaits in response to UV light. The researchers also successfully overrode the mycelia’s signals to control the robots manually, a crucial component if later versions were to be deployed in the wild.
Researchers from Sandia National Laboratories have used silicon photonic microchip components to perform a quantum sensing technique called atom interferometry, an ultra-precise way of measuring acceleration. It is the latest milestone toward developing a kind of quantum compass for navigation when GPS signals are unavailable.
Sandia National Laboratories scientist Jongmin Lee, left, prepares a rubidium cold-atom cell for an atom interferometry experiment while scientists Ashok Kodigala, right, and Michael Gehl initialize the controls for a packaged single-sideband modulator chip. Credit: Craig Fritz, Sandia National Laboratories
The team published its findings and introduced a new high-performance silicon photonic modulator — a device that controls light on a microchip — as the cover story in the journal Science Advances.
[…]
Typically, an atom interferometer is a sensor system that fills a small room. A complete quantum compass — more precisely called a quantum inertial measurement unit — would require six atom interferometers.
But Lee and his team have been finding ways to reduce its size, weight, and power needs. They already have replaced a large, power-hungry vacuum pump with an avocado-sized vacuum chamber and consolidated several components usually delicately arranged across an optical table into a single, rigid apparatus.
The new modulator is the centerpiece of a laser system on a microchip. Rugged enough to handle heavy vibrations, it would replace a conventional laser system typically the size of a refrigerator.
Lasers perform several jobs in an atom interferometer, and the Sandia team uses four modulators to shift the frequency of a single laser to perform different functions.
However, modulators often create unwanted echoes called sidebands that need to be mitigated.
Sandia’s suppressed-carrier, single-sideband modulator reduces these sidebands by an unprecedented 47.8 decibels — a measure often used to describe sound intensity but also applicable to light intensity — resulting in a nearly 100,000-fold drop.
[…]
“Just one full-size single-sideband modulator, a commercially available one, is more than $10,000,” Lee said.
Miniaturizing bulky, expensive components into silicon photonic chips helps drive down these costs.
“We can make hundreds of modulators on a single 8-inch wafer and even more on a 12-inch wafer,” Kodigala said.
And since they can be manufactured using the same process as virtually all computer chips, “This sophisticated four-channel component, including additional custom features, can be mass-produced at a much lower cost compared to today’s commercial alternatives, enabling the production of quantum inertial measurement units at a reduced cost,” Lee said.
As the technology gets closer to field deployment, the team is exploring other uses beyond navigation. Researchers are investigating whether it could help locate underground cavities and resources by detecting the tiny changes these make to Earth’s gravitational force. They also see potential for the optical components they invented, including the modulator, in LIDAR, quantum computing, and optical communications.
Researchers from North Carolina State University and Johns Hopkins University have demonstrated a technology capable of a suite of data storage and computing functions—repeatedly storing, retrieving, computing, erasing or rewriting data—that uses DNA rather than conventional electronics. Previous DNA data storage and computing technologies could complete some but not all of these tasks.
The paper, titled “A Primordial DNA Store and Compute Engine,” appears in the journal Nature Nanotechnology.
[…]
“DNA computing has been grappling with the challenge of how to store, retrieve and compute when the data is being stored in the form of nucleic acids,”
[…]
we have created polymer structures that we call dendricolloids—they start at the microscale, but branch off from each other in a hierarchical way to create a network of nanoscale fibers,
[…]
“This morphology creates a structure with a high surface area, which allows us to deposit DNA among the nanofibrils without sacrificing the data density that makes DNA attractive for data storage in the first place.”
“You could put a thousand laptops’ worth of data into DNA-based storage that’s the same size as a pencil eraser,” Keung says.
“The ability to distinguish DNA information from the nanofibers it’s stored on allows us to perform many of the same functions you can do with electronic devices,”
[…]
“We can copy DNA information directly from the material’s surface without harming the DNA. We can also erase targeted pieces of DNA and then rewrite to the same surface, like deleting and rewriting information stored on the hard drive. It essentially allows us to conduct the full range of DNA data storage and computing functions. In addition, we found that when we deposit DNA on the dendricolloid material, the material helps to preserve the DNA.”
The first reports of instability issues with the 13th-gen Intel desktop CPUs started popping up in late 2022, mere months after the models came out. Those issues persisted, and over time, users reported dealing with unexpected and sudden crashes on PCs equipped with the company’s 14th-gen CPUs, as well. Now, Intel has announced that it finally found the reason why its 13th and 14th-gen desktop processors have been causing crashes and giving out on users, and it promises to roll out a fix by next month.
In its announcement, Intel said that based on extensive analysis of the processors that had been returned to the company, it has determined that elevated operating voltage was causing the instability issues. Apparently, it’s because a microcode algorithm — microcode being the low-level, hardware-internal instructions that tell the processor how to execute machine code — has been sending incorrect voltage requests to the processor.
Intel has now promised to release a microcode patch to address the “root cause of exposure to elevated voltages.” The patch is still being validated to ensure that it can address all “scenarios of instability reported to Intel,” but the company is aiming to roll it out by mid-August.
As wccftech notes, while Intel’s CPUs have been causing issues for users for at least a year and a half, a post on X by Sebastian Castellanos in February put the problem in the spotlight. Castellanos wrote that there was a “worrying trend” of 13th and 14th-gen Intel CPUs having stability issues with Unreal Engine 4 and 5 games, such as Fortnite and Hogwarts Legacy. He also noticed that the issue seems to affect mostly higher-end models, and linked to a discussion on Steam Community. The user who wrote the Steam post wanted to warn those experiencing “out of video memory trying to allocate a rendering resource” errors that their CPU was faulty. They also linked to several Reddit threads with people experiencing the same problem who had determined that their issue lay with their Intel CPUs.
More recently, the indie studio Alderon Games published a post about “encountering significant problems with Intel CPU stability” while developing its multiplayer dinosaur survival game Path of Titans. Its founder, Matthew Cassells, said the studio found that the issue affected end customers, dedicated game servers, developers’ computers, game server providers and even benchmarking tools that use Intel’s 13th and 14th-gen CPUs. Cassells added that even the CPUs that initially work well deteriorate and eventually fail, based on the company’s observations. “The failure rate we have observed from our own testing is nearly 100 percent,” the studio’s post reads, “indicating it’s only a matter of time before affected CPUs fail.”
Modern graphics cards use lots of power and all of it is turned into heat. So if you’re paying many hundreds of dollars for a powerful GPU, you’d expect no expense to be spared on the cooling system. It turns out that for many Nvidia RTX 40-series vendors, the expense is being spared and cheap, poorly applied thermal paste is leading to scorching high hotspot temperatures and performance degradation over time.
That’s the conclusion hardware tester Igor’s Lab has come to after testing multiple GeForce RTX cards, analysing temperatures and performance, and discovering that the thermal paste used by many graphics card vendors is not only sub-standard for the job but is also poorly applied.
I have four RTX 40-series cards in my office (RTX 4080 Super, 4070 Ti, and two 4070s) and all of them have quite high hotspots—the highest temperature recorded by an individual thermal sensor in the die. In the case of the 4080 Super, it’s around 11 °C higher than the average temperature of the chip. I took it apart to apply some decent quality thermal paste and discovered a similar situation to that found by Igor’s Lab.
In the space of a few months, the factory-applied paste had separated and spread out, leaving just an oily film behind, and a few patches of the thermal compound itself. I checked the other cards and found that they were all in a similar state.
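To make the hotspot figure concrete: it is simply the hottest individual die sensor compared against the chip’s average temperature. The sensor values below are invented purely to show the calculation.

```python
# Hotspot delta = hottest individual die sensor minus the average die temperature.
# Sensor values here are made up purely to illustrate the calculation.
sensor_temps_c = [62.0, 63.0, 64.0, 65.0, 66.0, 77.0]
average_c = sum(sensor_temps_c) / len(sensor_temps_c)
hotspot_c = max(sensor_temps_c)
print(f"avg {average_c:.1f} C, hotspot {hotspot_c:.1f} C, delta {hotspot_c - average_c:.1f} C")
```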
[…]
Removing the factory-installed paste from another RTX 4080 graphics card, Igor’s Lab applied a more appropriate amount of a high-quality paste and discovered that it lowered the hotspot temperature by nearly 30 °C.
But it’s not just about the hotspots. Cheap, poorly applied thermal paste will cause the performance of a graphics card to degrade over time because GPUs lower clock speeds when they reach their thermal limits. PC enthusiasts are probably very comfortable with replacing a CPU’s thermal paste regularly but it’s not a simple process with graphics cards.
[…]
While Nvidia enjoys huge margins on its GPUs, graphics card vendors aren’t quite so lucky, but their margins aren’t so thin that spending a few more dollars on better thermal paste would bankrupt them.
Mind you, if they all started using PTM7950, then none of this would be an issue—the cards would run cooler and would stay that way for much longer. The only problem then is that you’d hear the coil whine over the reduced fan noise.
“Intel’s problems with unstable 13th-gen and 14th-gen high-end CPUs appear to run deeper than we thought,” writes TechRadar, “and a new YouTube video diving into these gremlins will do little to calm any fears that buyers of Raptor Lake Core i9 processors (and its subsequent refresh) have.” Level1Techs is the YouTuber in question, who has explored several avenues in an effort to make more sense of the crashing issues with these Intel processors that are affecting some PC gamers and making their lives a misery — more so in some cases than others. Data taken from game developer crash logs — from two different games — clearly indicates a high prevalence of crashes with the mentioned more recent Intel Core i9 chips (13900K and 14900K).
In fact, for one particular type of error (decompression, a commonly performed operation in games), a total of 1,584 occurred in the databases Level1Techs sifted through, and an alarming 1,431 of those happened with a 13900K or 14900K. Yes — that’s 90% of those decompression errors hitting just two specific CPUs. As for other processors, the third most prevalent was an old Intel Core i7 9750H (Coffee Lake laptop CPU) — which had a grand total of 11 instances. All AMD processors combined had just 4 occurrences of decompression errors in these game databases. “In case you were thinking that AMD chips might be really underrepresented here, hence that very low figure, well, they’re not — 30% of the CPUs in the database were from Team Red…”
“The YouTuber also brings up another point here: namely that data centers are noticing these issues with Core i9s.”
ASUS has suddenly agreed “to overhaul its customer support and warranty systems,” writes the hardware review site Gamers Nexus — after a three-video series on its YouTube channel documented bad and “potentially illegal” handling of customer warranties for the channel’s 2.2 million viewers.
The Verge highlights ASUS’s biggest change: If you’ve ever been denied a warranty repair or charged for a service that was unnecessary or should’ve been free, Asus wants to hear from you at a new email address. It claims those disputes will be processed by Asus’ own staff rather than outsourced customer support agents…. The company is also apologizing today for previous experiences you might have had with repairs. “We’re very sorry to anyone who has had a negative experience with our service team. We appreciate your feedback and giving us a chance to make amends.” It started five weeks ago when Gamers Nexus requested service for a joystick problem, according to a May 10 video. First they’d received a response wrongly telling them their damage was out of warranty — which also meant Asus could add a $20 shipping charge for the requested repair. “Somehow that turned into ASUS saying the LCD needs to be replaced, even though the joystick is covered under their repair policies,” the investigators say in the video. [They also note this response didn’t even address their original joystick problem — “only that thing that they had decided to find” — and that ASUS later made an out-of-the-blue reference to “liquid damage.”] The repair would ultimately cost $191.47, with ASUS mentioning that otherwise “the unit will be sent back un-repaired and may be disassembled.” ASUS gave them four days to respond, with some legalese adding that an out-of-warranty repair fee is non-refundable, yet still “does not guarantee that repairs can be made.”
Even when ASUS later agreed to do a free “partial” repair (providing the requested in-warranty service), the video’s investigators still received another email warning of “pending service cancellation” and return of the unit unless they spoke to “Invoice Quotation Support” immediately. The video-makers stood firm, and the in-warranty repair was later performed free — but they still concluded that “It felt like ASUS tried to scam us.” ASUS’s response was documented in a second video, with ASUS claiming it had merely been sending a list of “available” repairs (and promising that in the future ASUS would stop automatically including costs for the unrequested repair of “cosmetic imperfections” — and that they’d also change their automatic emails.)
ASUS promises it’s “created a Task Force team to retroactively go back through a long history of customer surveys that were negative to try and fix the issues.” (The third video from Gamers Nexus warned ASUS was already on the government’s radar over its handling of warranty issues.)
ASUS also announced its repair centers are no longer allowed to claim “customer-induced damage” (which Gamers Nexus believes “will remove some of the financial incentive to fail devices” to speed up workloads).
ASUS is creating a new U.S. support center allowing customers to choose either a refurbished board or a longer repair.
Gamers Nexus says they already have devices at ASUS repair centers — under pseudonyms — and that they “plan to continue sampling them over the next 6-12 months so we can ensure these are permanent improvements.” And there’s one final improvement, according to Gamers Nexus. “After over a year of refusing to acknowledge the microSD card reader failures on the ROG Ally [handheld gaming console], ASUS will be posting a formal statement next week about the defect.”
A Finnish startup called Flow Computing is making one of the wildest claims ever heard in silicon engineering: by adding its proprietary companion chip, any CPU can instantly double its performance, increasing to as much as 100x with software tweaks.
If it works, it could help the industry keep up with the insatiable compute demand of AI makers.
Flow is a spinout of VTT, a Finland state-backed research organization that’s a bit like a national lab. The chip technology it’s commercializing, which it has branded the Parallel Processing Unit, is the result of research performed at that lab (though VTT is an investor, the IP is owned by Flow).
The claim, Flow is first to admit, is laughable on its face. You can’t just magically squeeze extra performance out of CPUs across architectures and code bases. If so, Intel or AMD or whoever would have done it years ago.
But Flow has been working on something that has been theoretically possible — it’s just that no one has been able to pull it off.
Central Processing Units have come a long way since the early days of vacuum tubes and punch cards, but in some fundamental ways they’re still the same. Their primary limitation is that as serial rather than parallel processors, they can only do one thing at a time. Of course, they switch that thing a billion times a second across multiple cores and pathways — but these are all ways of accommodating the single-lane nature of the CPU. (A GPU, in contrast, does many related calculations at once but is specialized in certain operations.)
“The CPU is the weakest link in computing,” said Flow co-founder and CEO Timo Valtonen. “It’s not up to its task, and this will need to change.”
CPUs have gotten very fast, but even with nanosecond-level responsiveness, there’s a tremendous amount of waste in how instructions are carried out simply because of the basic limitation that one task needs to finish before the next one starts. (I’m simplifying here, not being a chip engineer myself.)
What Flow claims to have done is remove this limitation, turning the CPU from a one-lane street into a multi-lane highway. The CPU is still limited to doing one task at a time, but Flow’s PPU, as they call it, essentially performs nanosecond-scale traffic management on-die to move tasks into and out of the processor faster than has previously been possible.
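As a software-level analogy only (Flow’s PPU does its scheduling in hardware, on-die, at nanosecond scale), the difference between running independent tasks strictly one after another and letting them overlap looks something like this:

```python
# Software analogy for the serial-vs-parallel point above; this is NOT how the
# PPU works internally, just an illustration of latent parallelism in a task stream.
import time
from concurrent.futures import ThreadPoolExecutor

def task(i):
    time.sleep(0.1)  # stand-in for independent work with no data dependencies
    return i

start = time.perf_counter()
serial = [task(i) for i in range(8)]             # one at a time: ~0.8 s
print(f"serial:   {time.perf_counter() - start:.2f} s")

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    overlapped = list(pool.map(task, range(8)))  # overlapped: ~0.1 s
print(f"parallel: {time.perf_counter() - start:.2f} s")
```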
[…]
This type of thing isn’t brand new, says Valtonen. “This has been studied and discussed in high-level academia. You can already do parallelization, but it breaks legacy code, and then it’s useless.”
So it could be done. It just couldn’t be done without rewriting all the code in the world from the ground up, which kind of makes it a non-starter. A similar problem was solved by another Nordic compute company, ZeroPoint, which achieved high levels of memory compression while keeping data transparency with the rest of the system.
Flow’s big achievement, in other words, isn’t high-speed traffic management, but rather doing it without having to modify any code on any CPU or architecture that it has tested.
[…]
Therein lies the primary challenge to Flow’s success as a business: Unlike a software product, Flow’s tech needs to be included at the chip-design level, meaning it doesn’t work retroactively, and the first chip with a PPU would necessarily be quite a ways down the road. Flow has shown that the tech works in FPGA-based test setups, but chipmakers would have to commit quite a lot of resources to see the gains in question.
[…]
Further performance gains come from refactoring and recompiling software to work better with the PPU-CPU combo. Flow says it has seen increases up to 100x with code that’s been modified (though not necessarily fully rewritten) to take advantage of its technology. The company is working on offering recompilation tools to make this task simpler for software makers who want to optimize for Flow-enabled chips.
Analyst Kevin Krewell of Tirias Research, who was briefed on Flow’s tech and offered an outside perspective on these matters, was more worried about industry uptake than the fundamentals.
[…]
Flow is just now emerging from stealth, with €4 million (about $4.3 million) in pre-seed funding led by Butterfly Ventures, with participation from FOV Ventures, Sarsia, Stephen Industries, Superhero Capital and Business Finland.
Sonos launched a new version of its app this week, altering the software experience that tens of millions of users rely on to control the company’s premium wireless home speaker systems.
Turns out, people really hate it! The response from users on Reddit, on audio forums, and on social media has been almost total condemnation since the app experience switched over on May 7. Users on the dedicated r/sonos subreddit are particularly peeved about it, expressing frustration at all manner of problems. The quickest way to see the scores of complaints is to visit the megathread the users in the community started to catalog all the problems they’re experiencing.
Many features that had long been a part of the Sonos app are simply missing in the update. Features such as the ability to set sleep timers and alarms, set the speakers at a precise volume level, add songs to the end of a queue, manage Wi-Fi connectivity, and add new speakers are missing or broken, according to the complaints. Users are also reporting that the revamped search engine in the app often can’t search a connected local library running on a networked computer or a network-attached storage drive—the way many of Sonos’ most loyal users listen to their large private music libraries. Some streaming services are partially or completely broken for some users too, like TuneIn and LivePhish+.
Worse, the new app is not as accessible as the previous version, with one Reddit user calling it “an accessibility disaster.” The user, Rude-kangaroo6608, writes: “As a blind guy, I now have a system that I can hardly use.”
Also, they got rid of the next and previous buttons, and you can’t scrub through the song in the small player. You can’t add all the files in a directory in your library to the Sonos playlist at once – you have to add them one by one. The shuffle is gone. You can’t re-arrange queues. The system loses speakers randomly. So basically, you can’t really use the app to play music.
On Tuesday May 14th there will be an Ask Me Anything (AMA) – I would feel sorry for the Sonos people taking the questions, but I don’t, because they caused this fiasco in the first place. It certainly is “courageous” (i.e. stupid) to release an incomplete and broken app on top of expensive hardware.
Devices sold in Europe already offer minimum two-year warranties, but the new rules impose additional requirements. If a device is repaired under warranty, the customer must be given a choice between a replacement or a repair. If they choose the latter, the warranty is to be extended by a year.
Once it expires, companies are still required to repair “common household products” that are repairable under EU law, like smartphones, TVs and certain appliances (the list of devices can be extended over time). Consumers may also borrow a device during the repair or, if it can’t be fixed, opt for a refurbished unit as an alternative.
The EU says repairs must be offered at a “reasonable” price such that “consumers are not intentionally deterred” from them. Manufacturers need to supply spare parts and tools and not try to weasel out of repairs through the use of “contractual clauses, hardware or software techniques.” The latter, while not stated, may make it harder for companies to sunset devices by halting future updates.
In addition, manufacturers can’t stop the use of second-hand, original, compatible or 3D-printed spare parts by independent repairers as long as they’re in conformity with EU laws. They must provide a website that shows prices for repairs, can’t refuse to fix a device previously repaired by someone else and can’t refuse a repair for economic reasons.
While applauding the expanded rules, Europe’s Right to Repair group said there were missed opportunities. It would have liked to see more product categories included, priority for repair over replacement, the right for independent repairers to have access to all spare parts and repair information, and more. “Our coalition will continue to push for ambitious repairability requirements… as well as working with members focused on the implementation of the directive in each member state.”
Along with helping consumers save money, right-to-repair rules help reduce e-waste, CO2 pollution and more. The area is currently a battleground in the US as well, with legislation under debate in around half the states. California’s right-to-repair law — going into effect on July 1 — forces manufacturers to stock replacement parts, tools and repair manuals for seven years for smartphones and other devices that cost over $100.
[…] As reported by Android Authority, more and more users are complaining about their Pixel phones not working as, well, phones. Users will miss phone calls entirely, and only notice after they see the call went directly to voicemail, while text messages don’t appear as they’re received, but rather pop in all at once in batches. It’s affecting multiple types of Pixel, as well, including Pixel 7a, Pixel 7, Pixel 7 Pro, Pixel 8, and Pixel 8 Pro.
In a Google Support thread about the issue, users blame the March 2024 update for causing this chaos, and suggest the April 2024 update didn’t include a patch for it, either. (It isn’t present in the release notes.) One alleges this update somehow messed with the phone’s IMS (IP Multimedia Subsystem), which is responsible for powering different communication standards on the Pixel. One commenter goes so far as to say the SMS issues have nearly driven them to iPhone, saying, “Google – are you getting the message?”
We don’t know exactly what is causing this network issue with Pixel, and it’s not affecting each and every Pixel user, as this Android Police commenter would like readers to know. But there are enough Pixel devices experiencing network problems around the world that this seems to be an issue Google can address.
[…]
It seems like the only temporary workaround is to toggle Wi-Fi off and on again, which essentially toggles Wi-Fi calling off and on again as well. Reports suggest the workaround will allow calls and texts through as normal, but only temporarily, as the issue does seem to come back in time.
One promising technology is the Rotating Detonation Engine (RDE), which relies on one or more detonations that continuously travel around an annular channel.
In a recent hot fire test at NASA’s Marshall Space Flight Center in Huntsville, Alabama, the agency achieved a new benchmark in developing RDE technology. On September 27th, engineers successfully tested a 3D-printed rotating detonation rocket engine (RDRE) for 251 seconds, producing more than 2,630 kg (5,800 lbs) of thrust. This sustained burn meets several mission requirements, such as deep-space burns and landing operations. NASA recently shared the footage of the RDRE hot fire test (see below) as it burned continuously on a test stand at NASA Marshall for over four minutes.
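A quick note on the units: thrust is a force, so the kilogram figure above is kilograms-force. Converting the 5,800 lbf number confirms the pairing:

```python
# The thrust figures above pair pounds-force with kilograms-force.
LBF_TO_N = 4.44822   # newtons per pound-force
KGF_TO_N = 9.80665   # newtons per kilogram-force

thrust_lbf = 5800
thrust_n = thrust_lbf * LBF_TO_N
thrust_kgf = thrust_n / KGF_TO_N
print(f"{thrust_n / 1000:.1f} kN ~= {thrust_kgf:,.0f} kgf")  # ~25.8 kN, ~2,630 kgf
```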
While RDEs have been developed and tested for many years, the technology has garnered much attention since NASA began researching it for its “Moon to Mars” mission architecture. Theoretically, detonation-based engines are more efficient than conventional propulsion methods, which rely on controlled combustion rather than detonation. The first hot fire test with the RDRE was performed at Marshall in the summer of 2022 in partnership with advanced propulsion developer In Space LLC and Purdue University in West Lafayette, Indiana.
During that test, the RDRE fired for nearly a minute and produced more than 1815 kg (4,000 lbs) of thrust. According to Thomas Teasley, who leads the RDRE test effort at NASA Marshall, the primary goal of the latest test is to understand better how they can scale the combustor to support different engine systems and maximize the variety of missions they could be used for. This ranges from landers and upper-stage engines to supersonic retropropulsion – a deceleration technique that could land heavy payloads and crewed missions on Mars. As Teasley said in a recent NASA press release:
“The RDRE enables a huge leap in design efficiency. It demonstrates we are closer to making lightweight propulsion systems that will allow us to send more mass and payload further into deep space, a critical component to NASA’s Moon to Mars vision.”
Meanwhile, engineers at NASA’s Glenn Research Center and Houston-based Venus Aerospace are working with NASA Marshall to identify ways to scale the technology for larger mission profiles.
At the Parliament’s plenary session in Strasbourg, the right to repair was adopted with 590 votes in favour.
The legislative file, first presented by the EU Commission in March, aims to support the European Green Deal targets by increasing incentives for a circular economy, such as making repair a more attractive option than replacement for consumers.
[…]
Apart from ensuring favourable conditions for an independent repair market and preventing manufacturers from undermining repair as an attractive choice, the IMCO position also extended the product categories covered by the right to repair to include bicycles.
“We do need this right to repair. What we are currently doing is simply not sustainable. We are living in a market economy where after two years, products have to be replaced, and we must lead Europe to a paradigm shift in that regard,” Repasi said.
Sunčana Glavak (EPP), the rapporteur for the opinion of the ENVI (Environment, Public Health and Food Safety) Committee, added it was “necessary to strengthen the repair culture through awareness raising campaigns, above all at the national level”.
[…]
To incentivise the choice for repair, the Parliament introduced an additional one-year guarantee period on the repaired goods, “once the minimum guarantee period has elapsed”, Repasi explained, as well as the possibility for a replacement product during repair if the repair takes too long.
Moreover, the Parliament intends to create a rule that market authorities can intervene to lower prices for spare parts to a realistic price level.
“Manufacturers must also be obliged to provide spare parts and repair information at fair prices. The European Parliament has recognised this correctly,” Holger Schwannecke, secretary general of the German Confederation of Skilled Crafts and Small Businesses, said.
He warned that customer claims against vendors and manufacturers must not result in craftspeople being held liable for third-party repairs.
To ensure that operating systems of smartphones continue to work after repair by an independent repairer, the Parliament aims to ban phone makers’ practice of running a closed system that limits access to alternative repair services.
Particle accelerators range in size from a room to a city. Now, however, scientists are taking a closer look at chip-sized electron accelerators, a new study finds. Potential near-term applications for the technology include radiation therapy for zapping skin cancer and, longer-term, new kinds of laser and light sources.
Particle accelerators generally propel particles within metal tubes or rings. The rate at which they can accelerate particles is limited by the peak fields the metallic surfaces can withstand. Conventional accelerators range in size from a few meters for medical applications to kilometers for fundamental research. The fields they use are often on the scale of millions of volts per meter.
In contrast, electrically insulating dielectric materials (stuff that doesn’t conduct electricity well but does support electrostatic fields well) can withstand light fields thousands of times stronger. This has led scientists to investigate creating dielectric accelerators that rely on lasers to hurl particles.
[…]
physicists fabricated a tiny channel 225 nanometers wide and up to 0.5 millimeters long. An electron beam entered one end of the channel and exited the other end.
The researchers shone infrared laser pulses 250 femtoseconds long on top of the channel to help accelerate electrons down it. Inside the channel, two rows of up to 733 silicon pillars, each 2 micrometers high, interacted with these laser pulses to generate accelerating forces.
The electrons entered the accelerators with an energy of 28,400 electron-volts, traveling at roughly one-third the speed of light. They exited it with an energy of 40,700 electron-volts, a 43 percent boost in energy.
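Both the one-third-the-speed-of-light and 43 percent figures can be checked from the quoted kinetic energies, using the electron’s rest energy of about 511 keV:

```python
# Check the quoted electron speeds and energy gain from the kinetic energies.
import math

ELECTRON_REST_ENERGY_EV = 510_998.95  # m_e * c^2 in electron-volts

def beta(kinetic_ev):
    """Speed as a fraction of c for an electron with the given kinetic energy."""
    gamma = 1 + kinetic_ev / ELECTRON_REST_ENERGY_EV
    return math.sqrt(1 - 1 / gamma**2)

e_in, e_out = 28_400, 40_700
print(f"entry: {beta(e_in):.2f} c")                      # ~0.32 c, about one-third c
print(f"exit:  {beta(e_out):.2f} c")                     # ~0.38 c
print(f"energy gain: {(e_out / e_in - 1) * 100:.0f} %")  # ~43 %
```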
This new type of particle accelerator can be built using standard cleanroom techniques, such as electron beam lithography. “This is why we think that our results represent a big step forward,” Hommelhoff says. “Everyone can go ahead and start engineering useful machines from this.”
[…]
Applications for these nanophotonic electron accelerators depend on the energies they can reach. Electrons of up to about 300,000 electron-volts are typical for electron microscopy, Hommelhoff says. For treatment of skin cancer, 10 million electron-volt electrons are needed. Whereas such medical applications currently require an accelerator 1 meter wide, as well as additional large, heavy and expensive parts to help drive the accelerator, “we could in principle get rid of both and have just a roughly 1-centimeter chip with a few extra centimeters for the electron source,” adds study lead author Tomáš Chlouba, a physicist at the University of Erlangen-Nuremberg in Germany.
The scientists note there are many ways to improve their device beyond their initial proof-of-concept structures. They now aim to experiment with greater acceleration and higher electron currents to help enable applications, as well as boosting output by fabricating many accelerator channels next to each other that can all be driven by the same laser pulses.
In addition, although the new study experimented with structures made from silicon due to the relative ease of working with it, “silicon is not really a high-damage threshold material,” Hommelhoff says. Structures made of glass or other materials may allow much stronger laser pulses and thus more powerful acceleration, he says.
The researchers are interested in building a small-scale accelerator, “maybe with skin cancer treatment applications in mind first,” Hommelhoff says. “This is certainly something that we should soon transfer to a startup company.”
The scientists detailed their findings in the 19 October issue of the journal Nature.
IBM’s massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do — while consuming vastly less power.
“Its energy efficiency is just mind-blowing,” says Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau. The work, published in Science, shows that computing and memory can be integrated on a large scale, he says. “I feel the paper will shake the common thinking in computer architecture.”
NorthPole runs neural networks: multi-layered arrays of simple computational units programmed to recognize patterns in data. A bottom layer takes in data, such as the pixels in an image; each successive layer detects patterns of increasing complexity and passes information on to the next layer. The top layer produces an output that, for example, can express how likely an image is to contain a cat, a car or other objects.
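In code, the layered pattern-matching described above reduces to a stack of matrix multiplications and nonlinearities. The untrained toy network below (random weights, made-up layer sizes) only shows the shape of the computation, not anything NorthPole itself runs:

```python
# Minimal, untrained feedforward network illustrating the layered structure
# described above: pixels in, class probabilities ("cat", "car", "other") out.
import numpy as np

rng = np.random.default_rng(42)
W1, b1 = rng.standard_normal((64, 784)) * 0.01, np.zeros(64)  # hidden layer
W2, b2 = rng.standard_normal((3, 64)) * 0.01, np.zeros(3)     # output layer

def forward(pixels):
    h = np.maximum(0, W1 @ pixels + b1)  # detect simple patterns (ReLU)
    logits = W2 @ h + b2                 # combine them into class scores
    p = np.exp(logits - logits.max())
    return p / p.sum()                   # softmax: probability per class

image = rng.random(784)                  # a fake 28x28 image, flattened
print(dict(zip(["cat", "car", "other"], forward(image).round(3))))
```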
[…]
NorthPole is made of 256 computing units, or cores, each of which contains its own memory. “You’re mitigating the Von Neumann bottleneck within a core,” says Modha, who is IBM’s chief scientist for brain-inspired computing at the company’s Almaden research centre in San Jose.
The cores are wired together in a network inspired by the white-matter connections between parts of the human cerebral cortex, Modha says. This and other design principles — most of which existed before but had never been combined in one chip — enable NorthPole to beat existing AI machines by a substantial margin in standard benchmark tests of image recognition. It also uses one-fifth of the energy of state-of-the-art AI chips, despite not using the most recent and most miniaturized manufacturing processes. If the NorthPole design were implemented with the most up-to-date manufacturing process, its efficiency would be 25 times better than that of current designs, the authors estimate.
[…]
NorthPole brings memory units as physically close as possible to the computing elements in the core. Elsewhere, researchers have been developing more-radical innovations using new materials and manufacturing processes. These enable the memory units themselves to perform calculations, which in principle could boost both speed and efficiency even further.
Another chip, described last month, does in-memory calculations using memristors, circuit elements able to switch between being a resistor and a conductor. “Both approaches, IBM’s and ours, hold promise in mitigating latency and reducing the energy costs associated with data transfers,”
[…]
Another approach, developed by several teams — including one at a separate IBM lab in Zurich, Switzerland — stores information by changing a circuit element’s crystal structure. It remains to be seen whether these newer approaches can be scaled up economically.
Today, we’re delighted to announce the launch of Raspberry Pi 5, coming at the end of October. Priced at $60 for the 4GB variant, and $80 for its 8GB sibling (plus your local taxes), virtually every aspect of the platform has been upgraded, delivering a no-compromises user experience. Raspberry Pi 5 comes with new features, it’s over twice as fast as its predecessor, and it’s the first Raspberry Pi computer to feature silicon designed in‑house here in Cambridge, UK.
Key features include:
2.4GHz quad-core 64-bit Arm Cortex-A76 CPU
VideoCore VII GPU, supporting OpenGL ES 3.1, Vulkan 1.2
Dual 4Kp60 HDMI® display output
4Kp60 HEVC decoder
Dual-band 802.11ac Wi-Fi®
Bluetooth 5.0 / Bluetooth Low Energy (BLE)
High-speed microSD card interface with SDR104 mode support
2 × USB 3.0 ports, supporting simultaneous 5Gbps operation
2 × USB 2.0 ports
Gigabit Ethernet, with PoE+ support (requires separate PoE+ HAT, coming soon)
2 × 4-lane MIPI camera/display transceivers
PCIe 2.0 x1 interface for fast peripherals
Raspberry Pi standard 40-pin GPIO header
Real-time clock
Power button
In a break from recent tradition, we are announcing Raspberry Pi 5 before the product arrives on shelves. Units are available to pre-order today from many of our Approved Reseller partners, and we expect the first units to ship by the end of October.
Last year, BMW drew media and customer hellfire over its decision to offer a monthly subscription for heated seats. While seat heating wasn’t the only option available by subscription, it was the one that seemed to infuriate everyone the most, since it concerned hardware already present in the car from the factory. After months of customers continuously expressing their displeasure with the plan, BMW has finally decided to abandon recurring charges for hardware-based functions.
“What we don’t do any more—and that is a very well-known example—is offer seat heating by [monthly subscriptions],” BMW marketing boss Pieter Nota said to Autocar. “It’s either in or out. We offer it by the factory and you either have it or you don’t have it.”
BMW’s move wasn’t solely about charging customers monthly for heated seats. Rather, the luxury automaker wanted to streamline production and reduce costs by physically installing heated seats in every single car, since 90% of all BMWs are bought with seat heaters anyway. Then, owners who didn’t spec heated seats from the factory could digitally unlock them later with either a monthly subscription or a one-time perma-buy option. Nota still believes it was a good idea.
[…]
BMW was absolutely double dipping with heated seat subscriptions. The company started down that route to reduce production costs, making each car cheaper to build by streamlining the process. Fair enough. However, those reduced costs weren’t then passed down to buyers via lower MSRPs. Customers were technically paying for those heated seats anyway, no matter whether they wanted them. Then, BMW was not only charging extra to use a feature already installed in the car, but also subjecting it to subscription billing, even though seat heating is static hardware not designed to change or improve over time.
Customers weren’t happy, and rightfully made their grievance known. While it’s good that BMW ultimately buckled to the public’s wishes here, it doesn’t seem like the automaker’s board members truly understand why the outrage happened in the first place.
Magic Leap 1 AR headsets will “cease to function” from 31 December 2024, the company announced.
Magic Leap sent an email to all customers containing the following:
As such, we are announcing that Magic Leap 1 end of life date will be December 31, 2024. Magic Leap 1 is no longer available for purchase, but will continue to be supported through December 31, 2024 as follows:
• OS Updates: Magic Leap will only address outages that impact core functionality (as determined by Magic Leap) until December 31, 2024.
• Customer Care will continue to offer Magic Leap 1 product troubleshooting assistance through December 31, 2024.
• Warranties: Magic Leap will continue to honor valid warranty claims under the Magic Leap 1 Warranty Policy available here.
• Cloud Services: On December 31, 2024, cloud services for Magic Leap 1 will no longer be available, core functionality will reach end-of-life and the Magic Leap 1 device and apps will cease to function.
Former Magic Leap Senior Manager Steve Lukas said on X that his understanding is that the device will cease to function due to a hardcoded cloud security check it runs every six months.
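To illustrate the kind of mechanism Lukas describes, here is a generic sketch of a time-limited cloud entitlement check. This is not Magic Leap’s code; the URL, interval, and function names are placeholders invented for illustration.

```python
# Generic illustration of a periodic cloud "entitlement" check of the sort
# described above; NOT Magic Leap's implementation. URL and names are placeholders.
import time
import requests

CHECK_INTERVAL_S = 182 * 24 * 3600  # roughly every six months
ENTITLEMENT_URL = "https://cloud.example.com/device/entitlement"  # placeholder

def disable_core_functionality():
    print("Entitlement check failed: core features disabled.")

def cloud_check_ok(device_id: str) -> bool:
    try:
        r = requests.get(ENTITLEMENT_URL, params={"device": device_id}, timeout=30)
        return r.status_code == 200
    except requests.RequestException:
        return False

def maintenance_loop(device_id: str):
    while True:
        if not cloud_check_ok(device_id):
            disable_core_functionality()  # once the servers go away, the device bricks
        time.sleep(CHECK_INTERVAL_S)
```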
[…]
Content for the device included avatar chat, a floating web browser, a Wayfair app for seeing how furniture might look in your room, two games made by Insomniac Games, and a Spotify background app.
But Magic Leap 1’s eye-watering $2300 price and the limitations of transparent optics (even today) meant it reportedly fell significantly short of sales expectations. Transparent AR currently provides a much smaller field of view than the opaque display systems of VR-style headsets, despite costing significantly more. And Magic Leap 1’s form factor wasn’t suitable for outdoor use, so it didn’t provide the out-of-home functionality AR glasses promise to one day deliver, such as on-foot navigation, translation, and contextual information.
[…]
The Information reported that Magic Leap’s founder, the CEO at the time, originally expected it to sell over one million units in the first year. In reality it reportedly sold just 6000 units in the first six months.
[…]
The company today is still fully focused on enterprise. Magic Leap 2 launched last year at $3300, leapfrogging HoloLens 2 with a taller field of view, brighter displays, and unique dynamic dimming.
So after promising stuff that took years to arrive and, when it did arrive, was an intense and hugely expensive disappointment, the company will now ensure that the fortune you spent on junk really is turned into a brick.