[…] As reported by Android Authority, more and more users are complaining about their Pixel phones not working as, well, phones. Users will miss phone calls entirely, and only notice after they see the call went directly to voicemail, while text messages don’t appear as they’re received, but rather pop in all at once in batches. It’s affecting multiple types of Pixel, as well, including Pixel 7a, Pixel 7, Pixel 7 Pro, Pixel 8, and Pixel 8 Pro.
In a Google Support thread about the issue, users blame the March 2024 update for causing this chaos, and suggest the April 2024 update didn’t include a patch for it, either. (It isn’t present in the release notes.) One alleges this update somehow messed with the phone’s IMS (IP Multimedia Subsystem), which is responsible for powering different communication standards on the Pixel. One commenter goes so far as to say the SMS issues have nearly driven them to iPhone, saying, “Google – are you getting the message?”
We don’t know exactly what is causing this network issue with Pixel, and it’s not affecting each and every Pixel user, as this Android Police commenter would like readers to know. But there are enough Pixel devices experiencing network problems around the world that this seems to be an issue Google can address.
[…]
It seems the only temporary workaround is to toggle Wi-Fi off and on again, which essentially toggles Wi-Fi calling off and on as well. Reports suggest the workaround lets calls and texts through as normal, but only temporarily, as the issue does seem to come back in time.
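For anyone who would rather script the cycle than dig through Settings, here is a rough sketch over adb. The `svc wifi disable`/`enable` shell commands are standard Android; the function names and the five-second pause are our own choices, not from the reports.

```python
# Hypothetical sketch: script the Wi-Fi off/on workaround over adb.
import subprocess
import time

def wifi_toggle_commands() -> list[list[str]]:
    """Return the adb command sequence for one off/on cycle."""
    return [
        ["adb", "shell", "svc", "wifi", "disable"],
        ["adb", "shell", "svc", "wifi", "enable"],
    ]

def run_workaround(pause_s: int = 5) -> None:
    off, on = wifi_toggle_commands()
    subprocess.run(off, check=True)   # Wi-Fi (and Wi-Fi calling) drop
    time.sleep(pause_s)               # give the modem a moment
    subprocess.run(on, check=True)    # Wi-Fi calling re-registers
```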
One promising technology is the Rotating Detonation Engine (RDE), which relies on one or more detonations that continuously travel around an annular channel.
In a recent hot fire test at NASA’s Marshall Space Flight Center in Huntsville, Alabama, the agency achieved a new benchmark in developing RDE technology. On September 27th, engineers successfully tested a 3D-printed rotating detonation rocket engine (RDRE) for 251 seconds, producing more than 2,630 kg (5,800 lbs) of thrust. This sustained burn meets several mission requirements, such as deep-space burns and landing operations. NASA recently shared the footage of the RDRE hot fire test (see below) as it burned continuously on a test stand at NASA Marshall for over four minutes.
While RDEs have been developed and tested for many years, the technology has garnered much attention since NASA began researching it for its “Moon to Mars” mission architecture. Theoretically, the engine technology is more efficient than conventional propulsion and similar methods that rely on controlled detonations. The first hot fire test with the RDRE was performed at Marshall in the summer of 2022 in partnership with advanced propulsion developer In Space LLC and Purdue University in Lafayette, Indiana.
During that test, the RDRE fired for nearly a minute and produced more than 1,815 kg (4,000 lbs) of thrust. According to Thomas Teasley, who leads the RDRE test effort at NASA Marshall, the primary goal of the latest test was to better understand how to scale the combustor to support different engine systems and maximize the variety of missions it could serve. These range from landers and upper-stage engines to supersonic retropropulsion – a deceleration technique that could land heavy payloads and crewed missions on Mars. As Teasley said in a recent NASA press release:
“The RDRE enables a huge leap in design efficiency. It demonstrates we are closer to making lightweight propulsion systems that will allow us to send more mass and payload further into deep space, a critical component to NASA’s Moon to Mars vision.”
Meanwhile, engineers at NASA’s Glenn Research Center and Houston-based Venus Aerospace are working with NASA Marshall to identify ways to scale the technology for larger mission profiles.
At the Parliament’s plenary session in Strasbourg, the right to repair was adopted with 590 votes in favour.
The legislative file, first presented by the EU Commission in March, aims to support the European Green Deal targets by increasing incentives for a circular economy, such as making repair a more attractive option than replacement for consumers.
[…]
Apart from ensuring favourable conditions for an independent repair market and preventing manufacturers from undermining repairs as an attractive choice, the IMCO position also extended the product category for a right-to-repair to bicycles.
“We do need this right to repair. What we are currently doing is simply not sustainable. We are living in a market economy where after two years, products have to be replaced, and we must lead Europe to a paradigm shift in that regard,” Repasi said.
Sunčana Glavak (EPP), the rapporteur for the opinion of the ENVI (Environment, Public Health and Food Safety) Committee, added it was “necessary to strengthen the repair culture through awareness raising campaigns, above all at the national level”.
[…]
To incentivise the choice for repair, the Parliament introduced an additional one-year guarantee period on the repaired goods, “once the minimum guarantee period has elapsed”, Repasi explained, as well as the possibility for a replacement product during repair if the repair takes too long.
Moreover, the Parliament intends to create a rule allowing market authorities to intervene and bring spare-part prices down to a realistic level.
“Manufacturers must also be obliged to provide spare parts and repair information at fair prices. The European Parliament has recognised this correctly,” Holger Schwannecke, secretary general of the German Confederation of Skilled Crafts and Small Businesses, said.
He warned that customer claims against vendors and manufacturers must not result in craftspeople being held liable for third-party repairs.
To ensure that operating systems of smartphones continue to work after repair by an independent repairer, the Parliament aims to ban phone makers’ practice of running a closed system that limits access to alternative repair services.
Particle accelerators range in size from a room to a city. Now, however, scientists are taking a closer look at chip-sized electron accelerators, a new study finds. Potential near-term applications for the technology include radiation therapy for zapping skin cancer and, longer-term, new kinds of laser and light sources.
Particle accelerators generally propel particles within metal tubes or rings. The rate at which they can accelerate particles is limited by the peak fields the metallic surfaces can withstand. Conventional accelerators range in size from a few meters for medical applications to kilometers for fundamental research. The fields they use are often on the scale of millions of volts per meter.
In contrast, electrically insulating dielectric materials (stuff that doesn’t conduct electricity well but does support electrostatic fields well) can withstand light fields thousands of times stronger. This has led scientists to investigate creating dielectric accelerators that rely on lasers to hurl particles.
[…]
physicists fabricated a tiny channel 225 nanometers wide and up to 0.5 millimeters long. An electron beam entered one end of the channel and exited the other end.
The researchers shone infrared laser pulses 250 femtoseconds long on top of the channel to help accelerate electrons down it. Inside the channel, two rows of up to 733 silicon pillars, each 2 micrometers high, interacted with these laser pulses to generate accelerating forces.
The electrons entered the accelerators with an energy of 28,400 electron-volts, traveling at roughly one-third the speed of light. They exited it with an energy of 40,700 electron-volts, a 43 percent boost in energy.
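The reported figures hang together under the standard relativistic energy-speed relation; a quick sanity check in Python (electron rest energy is a physical constant, the kinetic energies are the article's numbers):

```python
# Sanity-check the reported numbers: kinetic energies in eV, speed as
# a fraction of c, using the relation gamma = 1 + KE / (m_e c^2).
M_E_C2 = 510_998.95  # electron rest energy in eV

def beta(ke_ev: float) -> float:
    """Speed as a fraction of c for a given kinetic energy in eV."""
    gamma = 1 + ke_ev / M_E_C2
    return (1 - 1 / gamma**2) ** 0.5

ke_in, ke_out = 28_400.0, 40_700.0
gain = (ke_out - ke_in) / ke_in        # ~0.433, the "43 percent boost"
print(f"entry speed ~{beta(ke_in):.2f} c, energy gain {gain:.0%}")
# -> entry speed ~0.32 c, energy gain 43%
```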
This new type of particle accelerator can be built using standard cleanroom techniques, such as electron beam lithography. “This is why we think that our results represent a big step forward,” Hommelhoff says. “Everyone can go ahead and start engineering useful machines from this.”
[…]
Applications for these nanophotonic electron accelerators depend on the energies they can reach. Electrons of up to about 300,000 electron-volts are typical for electron microscopy, Hommelhoff says. For treatment of skin cancer, 10 million electron-volt electrons are needed. Whereas such medical applications currently require an accelerator 1 meter wide, as well as additional large, heavy and expensive parts to help drive the accelerator, “we could in principle get rid of both and have just a roughly 1-centimeter chip with a few extra centimeters for the electron source,” adds study lead author Tomáš Chlouba, a physicist at the University of Erlangen-Nuremberg in Germany.
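A back-of-envelope calculation from the reported numbers shows why stronger gradients matter here: at the gradient demonstrated in this first device, reaching the 10 MeV needed for skin-cancer treatment would take tens of centimetres, so the roughly 1-centimetre chip the authors envision implies substantially higher gradients (for instance from the glass structures mentioned below). This assumes a constant gradient, which is a simplification.

```python
# Implied gradient of the demo device, and the distance needed to reach
# 10 MeV if that same gradient were simply sustained.
gain_ev = 40_700 - 28_400          # 12,300 eV gained in the demo
length_m = 0.5e-3                  # over a 0.5 mm channel
gradient = gain_ev / length_m      # ~24.6 MeV per metre
metres_to_10mev = 10e6 / gradient  # ~0.41 m at the demo gradient
print(f"{gradient/1e6:.1f} MeV/m, {metres_to_10mev:.2f} m to reach 10 MeV")
```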
The scientists note there are many ways to improve their device beyond their initial proof-of-concept structures. They now aim to experiment with greater acceleration and higher electron currents to help enable applications, as well as boosting output by fabricating many accelerator channels next to each other that can all be driven by the same laser pulses.
In addition, although the new study experimented with structures made from silicon due to the relative ease of working with it, “silicon is not really a high-damage threshold material,” Hommelhoff says. Structures made of glass or other materials may allow much stronger laser pulses and thus more powerful acceleration, he says.
The researchers are interested in building a small-scale accelerator, “maybe with skin cancer treatment applications in mind first,” Hommelhoff says. “This is certainly something that we should soon transfer to a startup company.”
The scientists detailed their findings in the 19 October issue of the journal Nature.
Their massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do — while consuming vastly less power.
“Its energy efficiency is just mind-blowing,” says Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau. The work, published in Science1, shows that computing and memory can be integrated on a large scale, he says. “I feel the paper will shake the common thinking in computer architecture.”
NorthPole runs neural networks: multi-layered arrays of simple computational units programmed to recognize patterns in data. A bottom layer takes in data, such as the pixels in an image; each successive layer detects patterns of increasing complexity and passes information on to the next layer. The top layer produces an output that, for example, can express how likely an image is to contain a cat, a car or other objects.
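The layered pattern described above can be sketched as a toy fully connected network: raw inputs at the bottom, a hidden layer extracting features, and per-class scores at the top. The weights here are random placeholders, not a trained model, and this is vastly smaller than anything NorthPole runs.

```python
# Illustrative toy network: input -> hidden layer (ReLU) -> class scores.
import random

def forward(x, layers):
    """Pass input through successive weight layers, ReLU between them."""
    for i, weights in enumerate(layers):
        x = [sum(w * v for w, v in zip(row, x)) for row in weights]
        if i < len(layers) - 1:                 # hidden layers only
            x = [max(0.0, v) for v in x]        # ReLU non-linearity
    return x                                    # per-class scores

def rand_layer(n_out, n_in):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

random.seed(0)
pixels = [0.2, 0.8, 0.5, 0.1]                   # a tiny "image"
net = [rand_layer(6, 4), rand_layer(3, 6)]      # hidden layer, 3 classes
scores = forward(pixels, net)
print(len(scores))                              # one score per class
```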
[…]
NorthPole is made of 256 computing units, or cores, each of which contains its own memory. “You’re mitigating the Von Neumann bottleneck within a core,” says Modha, who is IBM’s chief scientist for brain-inspired computing at the company’s Almaden research centre in San Jose.
The cores are wired together in a network inspired by the white-matter connections between parts of the human cerebral cortex, Modha says. This and other design principles — most of which existed before but had never been combined in one chip — enable NorthPole to beat existing AI machines by a substantial margin in standard benchmark tests of image recognition. It also uses one-fifth of the energy of state-of-the-art AI chips, despite not using the most recent and most miniaturized manufacturing processes. If the NorthPole design were implemented with the most up-to-date manufacturing process, its efficiency would be 25 times better than that of current designs, the authors estimate.
[…]
NorthPole brings memory units as physically close as possible to the computing elements in the core. Elsewhere, researchers have been developing more-radical innovations using new materials and manufacturing processes. These enable the memory units themselves to perform calculations, which in principle could boost both speed and efficiency even further.
Another chip, described last month2, does in-memory calculations using memristors, circuit elements able to switch between being a resistor and a conductor. “Both approaches, IBM’s and ours, hold promise in mitigating latency and reducing the energy costs associated with data transfers,”
[…]
Another approach, developed by several teams — including one at a separate IBM lab in Zurich, Switzerland3 — stores information by changing a circuit element’s crystal structure. It remains to be seen whether these newer approaches can be scaled up economically.
Today, we’re delighted to announce the launch of Raspberry Pi 5, coming at the end of October. Priced at $60 for the 4GB variant, and $80 for its 8GB sibling (plus your local taxes), virtually every aspect of the platform has been upgraded, delivering a no-compromises user experience. Raspberry Pi 5 comes with new features, it’s over twice as fast as its predecessor, and it’s the first Raspberry Pi computer to feature silicon designed in‑house here in Cambridge, UK.
Key features include:
2.4GHz quad-core 64-bit Arm Cortex-A76 CPU
VideoCore VII GPU, supporting OpenGL ES 3.1, Vulkan 1.2
Dual 4Kp60 HDMI® display output
4Kp60 HEVC decoder
Dual-band 802.11ac Wi-Fi®
Bluetooth 5.0 / Bluetooth Low Energy (BLE)
High-speed microSD card interface with SDR104 mode support
2 × USB 3.0 ports, supporting simultaneous 5Gbps operation
2 × USB 2.0 ports
Gigabit Ethernet, with PoE+ support (requires separate PoE+ HAT, coming soon)
2 × 4-lane MIPI camera/display transceivers
PCIe 2.0 x1 interface for fast peripherals
Raspberry Pi standard 40-pin GPIO header
Real-time clock
Power button
In a break from recent tradition, we are announcing Raspberry Pi 5 before the product arrives on shelves. Units are available to pre-order today from many of our Approved Reseller partners, and we expect the first units to ship by the end of October.
Last year, BMW underwent media and customer hellfire over its decision to offer a monthly subscription for heated seats. While seat heating wasn’t the only option available for subscription, it was the one that seemed to infuriate everyone the most, since it concerned hardware already present in the car from the factory. After months of customers continuously expressing their displeasure with the plan, BMW has finally decided to abandon recurring charges for hardware-based functions.
“What we don’t do any more—and that is a very well-known example—is offer seat heating by [monthly subscriptions],” BMW marketing boss Pieter Nota said to Autocar. “It’s either in or out. We offer it by the factory and you either have it or you don’t have it.”
BMW’s move wasn’t solely about charging customers monthly for heated seats. Rather, the luxury automaker wanted to streamline production and thereby cut costs by physically installing heated seats in every single car, since 90% of all BMWs are bought with seat heaters anyway. Then, owners who didn’t spec heated seats from the factory could digitally unlock them later with either a monthly subscription or a one-time perma-buy option. Nota still believes it was a good idea.
[…]
BMW was absolutely double dipping with heated seat subscriptions. The company started down that route to reduce production costs, making each car cheaper to build by streamlining the process. Fair enough. However, those reduced costs weren’t then passed down to buyers via lower MSRPs. Customers were technically paying for those heated seats anyway, no matter whether they wanted them. Then, BMW was not only charging extra to use a feature already installed in the car, but also subjecting it to subscription billing, even though seat heating is static hardware not designed to change or improve over time.
Customers weren’t happy, and rightfully made their grievance known. While it’s good that BMW ultimately buckled to the public’s wishes here, it doesn’t seem like the automaker’s board members truly understand why the outrage happened in the first place.
Magic Leap 1 AR headsets will “cease to function” from 31 December 2024, the company announced.
Magic Leap sent an email to all customers containing the following:
As such, we are announcing that Magic Leap 1 end of life date will be December 31, 2024. Magic Leap 1 is no longer available for purchase, but will continue to be supported through December 31, 2024 as follows:
• OS Updates: Magic Leap will only address outages that impact core functionality (as determined by Magic Leap) until December 31, 2024.
• Customer Care will continue to offer Magic Leap 1 product troubleshooting assistance through December 31, 2024.
• Warranties: Magic Leap will continue to honor valid warranty claims under the Magic Leap 1 Warranty Policy available here.
• Cloud Services: On December 31, 2024, cloud services for Magic Leap 1 will no longer be available, core functionality will reach end-of-life and the Magic Leap 1 device and apps will cease to function.
Former Magic Leap Senior Manager Steve Lukas said on X that his understanding is that the device will cease to function due to a hardcoded cloud security check it runs every six months.
[…]
Content for the device included avatar chat, a floating web browser, a Wayfair app for seeing how furniture might look in your room, two games made by Insomniac Games, and a Spotify background app.
But Magic Leap 1’s eye-watering $2300 price and the limitations of transparent optics (even today) meant it reportedly fell significantly short of sales expectations. Transparent AR currently provides a much smaller field of view than the opaque display systems of VR-style headsets, despite costing significantly more. And Magic Leap 1’s form factor wasn’t suitable for outdoor use, so it didn’t provide the out-of-home functionality AR glasses promise to one day deliver, such as on-foot navigation, translation, and contextual information.
[…]
The Information reported that Magic Leap’s founder, the CEO at the time, originally expected it to sell over one million units in the first year. In reality it reportedly sold just 6000 units in the first six months.
[…]
The company today is still fully focused on enterprise. Magic Leap 2 launched last year at $3300, leapfrogging HoloLens 2 with a taller field of view, brighter displays, and unique dynamic dimming.
So after promising stuff that took years to arrive, and that was an intense and hugely expensive disappointment when it did, the company will now ensure that the fortune you spent on junk really is turned into a brick.
Last week we wrote about a lawsuit against Western Digital that alleged that the firm’s solid-state drives didn’t live up to its marketing promises. More lawsuits have been filed against the company since. Ars Technica: On Thursday, two more lawsuits were filed against Western Digital over its SanDisk Extreme series and My Passport portable SSDs. That brings the number of class-action complaints filed against Western Digital to three in two days. In May, Ars Technica reported about customer complaints that claimed SanDisk Extreme SSDs were abruptly wiping data and becoming unmountable. Ars senior editor Lee Hutchinson also experienced this problem with two Extreme SSDs. Western Digital, which owns SanDisk, released a firmware update in late May, saying that currently shipping products weren’t impacted. But the company didn’t mention customer complaints of lost data, only that drives could “unexpectedly disconnect from a computer.”
Further, last week The Verge claimed a replacement drive it received after the firmware update still wiped its data and became unreadable, and there are some complaints on Reddit pointing to recent problems with Extreme drives. All three cases filed against Western Digital this week seek class-action certification (Ars was told it can take years for a judge to officially state certification and that cases may proceed with class-wide resolutions possibly occurring before official certification). Ian Sloss, one of the lawyers representing Matthew Perrin and Brian Bayerl in a complaint filed yesterday, told Ars he doesn’t believe class-action certification will be a major barrier in a case “where there is a common defect in the firmware that is consistent in all devices.” He added that defect cases are “ripe for class treatment.”
German semiconductor maker Infineon Technologies AG announced that it’s producing a printed circuit board (PCB) that dissolves in water. Sourced from UK startup Jiva Materials, the plant-based Soluboard could provide a new avenue for the tech industry to reduce e-waste as companies scramble to meet climate goals by 2030.
Jiva’s biodegradable PCB is made from natural fibers and a halogen-free polymer with a much lower carbon footprint than traditional boards made with fiberglass composites. A 2022 study by the University of Washington College of Engineering and Microsoft Research saw the team create an Earth-friendly mouse using a Soluboard PCB as its core. The researchers found that the Soluboard dissolved in hot water in under six minutes. However, it can take several hours to break down at room temperature.
In addition to dissolving the PCB fibers, the process makes it easier to retrieve the valuable metals attached to it. “After [it dissolves], we’re left with the chips and circuit traces which we can filter out,” said UW assistant professor Vikram Iyer, who worked on the mouse project.
[…]
Jiva says the board has a 60 percent smaller carbon footprint than traditional PCBs — specifically, it can save 10.5 kg of carbon and 620 g of plastic per square meter of PCB.
Today, the Institute of Electrical and Electronics Engineers (IEEE) has added 802.11bb as a standard for light-based wireless communications. The publishing of the standard has been welcomed by global Li-Fi businesses, as it will help speed the rollout and adoption of the data-transmission technology standard.
Advantages of using light rather than radio frequencies (RF) are highlighted by Li-Fi proponents including pureLiFi, Fraunhofer HHI, and the Light Communications 802.11bb Task Group. Li-Fi is said to deliver “faster, more reliable wireless communications with unparalleled security compared to conventional technologies such as Wi-Fi and 5G.” Now that the IEEE 802.11bb Li-Fi standard has been released, it is hoped that interoperability between Li-Fi systems and established Wi-Fi will be fully addressed.
[…]
Where Li-Fi shines (pun intended) is not just in its purported speeds of up to 224 Gbps. Fraunhofer’s Dominic Schulz points out that because it works in an exclusive optical spectrum, it ensures higher reliability and lower latency and jitter. Moreover, “Light’s line-of-sight propagation enhances security by preventing wall penetration, reducing jamming and eavesdropping risks, and enabling centimetre-precision indoor navigation,” says Schulz.
[…]
One of the big wheels of Li-Fi, pureLiFi, has already prepared the Light Antenna ONE module for integration into connected devices.
The concept of Continuous Integration (CI) is a powerful tool in software development, and it’s not every day we get a look at how someone integrated automated hardware testing into their system. [Michael Orenstein] brought to our attention the Hardware CI Arena, a framework for doing exactly that across a variety of host OSes and microcontroller architectures.
[…]
The Hardware CI Arena (GitHub repository) was created to allow automated testing to be done across a variety of common OS and hardware configurations. It does this by allowing software-controlled interactions to a bank of actual, physical hardware options. It’s purpose-built for a specific need, but the level of detail and frank discussion of the issues involved is an interesting look at what it took to get this kind of thing up and running.
The value of automatic hardware testing with custom rigs is familiar ground to anyone who develops hardware, but tying that idea into a testing and CI framework for a software product expands the idea in a useful way. When it comes to identifying problems, earlier is always better.
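The shape of the idea is easy to miniaturise. The sketch below is purely illustrative: `Board`, `flash`, and the serial-log check are stand-ins of our own invention, not the Hardware CI Arena's actual API, but they show the pattern a CI job follows — pick a board from the bank, flash the build, assert on what the device reports back.

```python
# Hypothetical miniature of hardware CI: flash each board in a bank and
# assert on its (faked) serial output. All names here are stand-ins.
from dataclasses import dataclass

@dataclass
class Board:
    name: str
    arch: str
    serial_log: str = ""   # stand-in for a real serial connection

def flash(board: Board, firmware: bytes) -> None:
    # A real harness would invoke the vendor flasher over USB/JTAG;
    # here we just fake the boot banner the device would print.
    board.serial_log = f"boot:{board.arch}:ok\n"

def run_smoke_test(board: Board, firmware: bytes) -> bool:
    """Flash the build and verify the device comes up cleanly."""
    flash(board, firmware)
    return f"boot:{board.arch}:ok" in board.serial_log

bank = [Board("uno", "avr"), Board("pico", "rp2040")]
results = {b.name: run_smoke_test(b, b"firmware-image") for b in bank}
print(results)  # one pass/fail per board in the bank
```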
When should you be concerned about a NAS hard drive failing? Multiple factors are at play, so many might turn to various SMART (self-monitoring, analysis, and reporting technology) data. When it comes to how long the drive has been active, there are backup companies like Backblaze using hard drives that are nearly 8 years old. That may be why some customers have been panicked, confused, and/or angered to see their Western Digital NAS hard drive automatically given a warning label in Synology’s DiskStation Manager (DSM) after they were powered on for three years. With no other factors considered for these automatic flags, Western Digital is accused of age-shaming drives to push people to buy new HDDs prematurely.
The practice’s revelation is the last straw for some users. Western Digital already had a steep climb to win back NAS customers’ trust after shipping NAS drives with SMR (shingled magnetic recording) instead of CMR (conventional magnetic recording). Now, some are saying they won’t use or recommend the company’s hard drives anymore.
“Warning,” your NAS drive’s been on for 3 years
As users have reported online, including on Synology-focused forums and Synology’s own forums, as well as on Reddit and YouTube, Western Digital drives using Western Digital Device Analytics (WDDA) are getting a “warning” stamp in Synology DSM once their power-on hours count hits the three-year mark. WDDA is similar to SMART monitoring and rival offerings, like Seagate’s IronWolf, and is supposed to provide analytics and actionable items.
The recommended action says: “The drive has accumulated a large number of power on hours [throughout] the entire life of the drive. Please consider to replace the drive soon.” There seem to be no discernible problems with the hard drives otherwise.
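As users describe it, the flag keys on nothing but elapsed power-on time. A minimal reproduction of that logic (the threshold is assumed here to be exactly three years of power-on hours; Western Digital has not published the number):

```python
# Reproduce the age-only flag users describe: no health data consulted,
# just a fixed power-on-hours threshold (assumed: exactly three years).
THRESHOLD_H = 3 * 365 * 24   # 26,280 hours

def wdda_style_flag(power_on_hours: int) -> str:
    return "warning" if power_on_hours >= THRESHOLD_H else "healthy"

print(wdda_style_flag(26_000))   # healthy
print(wdda_style_flag(26_280))   # warning, regardless of drive health
```

In practice the power-on-hours count comes from SMART attribute 9, readable with e.g. `smartctl -A /dev/sda`.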
Synology confirmed this to Ars Technica and noted that the labels come from Western Digital, not Synology. A spokesperson said the “WDDA monitoring and testing subsystem is developed by Western Digital, including the warning after they reach a certain number of power-on-hours.”
The practice has caused some, like YouTuber SpaceRex, to stop recommending Western Digital drives for the foreseeable future. In May, the YouTuber and tech consultant described his outrage, saying three years is “absolutely nothing” for a NAS drive and lamenting the flags having nothing to do with anything besides whether or not a drive has been in use for three years.
[…]
Users are also concerned that this could prevent people from noticing serious problems with their drive.
Further, you can’t repair a pool with a drive marked with a warning label.
“Only drives with a healthy status can be used to repair or expand a storage pool,” Synology’s spokesperson said. “Users will need to first suppress the warning or disable WDDA to continue.”
[…]
Since Western Digital’s questionable practice has come to light, there has been discussion about how to disable WDDA via SSH.
Synology’s spokesperson said if WDDA is enabled in DSM, one could disable WDDA in Storage Manager and see the warning removed.
“Because the warning is triggered by a fixed power-on-hour count, we do not believe [disabling WDDA] to be a risk. However, administrators should still pay close attention to their systems, including if other warnings or I/O disruptions occur,” the Synology rep said. “Indicators such as significantly slower reads/writes are more evident signs that a drive’s health may be deteriorating.”
A US federal court this week gave final approval to the $50 million class-action settlement Apple came to last July resolving claims the company knew about and concealed the unreliable nature of keyboards on MacBook, MacBook Air and MacBook Pro computers released between 2015 and 2019. Per Reuters (via 9to5Mac), Judge Edward Davila on Thursday called the settlement involving Apple’s infamous “butterfly” keyboards “fair, adequate and reasonable.” Under the agreement, MacBook users impacted by the saga will receive settlements between $50 and $395. More than 86,000 claims for class member payments were made before the application deadline last March, Judge Davila wrote in his ruling.
Apple debuted the butterfly keyboard in 2015 with the 12-inch MacBook. At the time, former design chief Jony Ive boasted that the mechanism would allow the company to build ever-slimmer laptops without compromising on stability or typing feel. As Apple re-engineered more of its computers to incorporate the butterfly keyboard, Mac users found the design was susceptible to dust and other debris. The company introduced multiple revisions to make the mechanism more resilient before eventually returning to a more conventional keyboard design with the 16-inch MacBook Pro in late 2019.
Some HP “Officejet” printers can disable this “dynamic security” through a firmware update, PC World reported earlier this week. But HP still defends the feature, arguing it’s “to protect HP’s innovations and intellectual property, maintain the integrity of our printing systems, ensure the best customer printing experience, and protect customers from counterfeit and third-party ink cartridges that do not contain an original HP security chip and infringe HP’s intellectual property.”
Meanwhile, Engadget now reports that “a software update Hewlett-Packard released earlier this month for its OfficeJet printers is causing some of those devices to become unusable.” After downloading the faulty software, the built-in touchscreen on an affected printer will display a blue screen with the error code 83C0000B. Unfortunately, there appears to be no way for someone to fix a printer broken in this way on their own, partly because factory resetting an HP OfficeJet requires interacting with the printer’s touchscreen display. For the moment, HP customers report the only solution to the problem is to send a broken printer back to the company for service. BleepingComputer says the firmware update “has been bricking HP Office Jet printers worldwide since it was released earlier this month…” “Our teams are working diligently to address the blue screen error affecting a limited number of HP OfficeJet Pro 9020e printers,” HP told BleepingComputer… Since the issues surfaced, multiple threads have been started by people from the U.S., the U.K., Germany, the Netherlands, Australia, Poland, New Zealand, and France who had their printers bricked, some with more than a dozen pages of reports.
“HP has no solution at this time. Hidden service menu is not showing, and the printer is not booting anymore. Only a blue screen,” one customer said.
“I talked to HP Customer Service and they told me they don’t have a solution to fix this firmware issue, at the moment,” another added.
Hewlett-Packard, or HP, has sparked fury after issuing a recent “firmware” update which blocks customers from using cheaper, non-HP ink cartridges in its printers.
Customers’ devices were remotely updated in line with new terms which mean their printers will not work unless they are fitted with approved ink cartridges.
It prevents customers from using any cartridges other than those fitted with an HP chip, which are often more expensive. If the customer tries to use a non-HP ink cartridge, the printer will refuse to print.
HP printers used to display a warning when a “third-party” ink cartridge was inserted, but now printers will simply refuse to print altogether.
[…]
This is not the first time HP has angered its customers by blocking the use of other ink cartridges.
The firm has been forced to pay out millions in compensation to customers in America, Australia and across Europe since it first introduced dynamic security measures back in 2016.
Just last year the company paid $1.35m (£1m) to consumers in Belgium, Italy, Spain and Portugal who had bought printers not knowing they were equipped with the cartridge-blocking feature.
Last year consumer advocates called on the Competition and Markets Authority to investigate whether branded ink costs and “dynamic security” measures were fair to consumers, after finding that lesser-known brands of ink cartridges offered better value for money than major names.
The consumer group Which? said manufacturers were “actively blocking customers from exerting their right to choose the cheapest ink and therefore get a better deal”.
Samsung Electronics has been stung for more than $303 million in a patent infringement case brought by US memory company Netlist.
Netlist, headquartered in Irvine, California, styles itself as a provider of high-performance modular memory subsystems. The company initially filed a complaint that Samsung had infringed on three of its patents, later amended to six [PDF]. Following a six-day trial, the jury found for Netlist in five of these and awarded a total of $303,150,000 in damages.
The exact patents in question are 10,949,339 (‘339), 11,016,918 (‘918), 11,232,054 (‘054), 8,787,060 (‘060), and 9,318,160 (‘160). The products that are said to infringe on these are Samsung’s DDR4 LRDIMM, DDR5 UDIMM, SODIMM, and RDIMM, plus the high-bandwidth memory HBM2, HBM2E and HBM3 technologies.
The patents appear to apply to various aspects of DDR memory modules. According to reports, Samsung’s representatives had argued that Netlist’s patents were invalid because they were already covered by existing technology and that its own memory chips did not function in the same way as described by the patents, but this clearly did not sway the jurors.
However, it appears that the verdict did not go all Netlist’s way because its lawyers had been arguing for more damages, saying that a reasonable royalty figure would be more like $404 million.
In the court filings [PDF], Netlist claims that Samsung had knowledge of the patents in question “no later than August 2, 2021” via access to Netlist’s patent portfolio docket.
The company states that Samsung and Netlist were initially partners under a 2015 Joint Development and License Agreement (JDLA), which granted Samsung a five-year paid-up license to Netlist’s patents.
Samsung had used Netlist’s technologies to develop products such as DDR4 memory modules and emerging new technologies, including DDR5 and HBM, Netlist said.
Under the terms of the agreement, Samsung was to supply Netlist certain memory products at competitive prices, but Netlist claimed Samsung repeatedly failed to honor these promises. As a result, Netlist claims, it terminated the JDLA on July 15, 2020.
Netlist alleged in its court filing that Samsung has continued to make and sell memory products “with materially the same structures” as those referenced in the patents, despite the termination of the agreement.
According to investor website Seeking Alpha, the damages awarded are for the infringement of Netlist technology covering only about five quarters. The website also said that Netlist now has the cash to not only grow its business but pursue other infringers of its technology.
Netlist chief executive CK Hong said in a statement that the company was pleased with the case. He claimed the verdict “left no doubt” that Samsung had wilfully infringed Netlist patents, and is “currently using Netlist technology without a license” on many of its strategic product lines.
Hong also claimed that it was an example of the “brazen free ride” carried out by industry giants against intellectual property belonging to small innovators.
“We hope this case serves as a reminder of this problem to policymakers as well as a wakeup call to those in the memory industry that are using our IP without permission,” he said.
We asked Samsung Electronics for a statement regarding the verdict in this case, but did not hear back from the company at the time of publication.
Netlist is also understood to have other cases pending against Micron and Google. Those against Micron are said to involve infringement of many of the same patents that were involved in the Samsung case. ®
The Council and the European Parliament today reached a provisional political agreement on the regulation to strengthen Europe’s semiconductor ecosystem, better known as the ‘Chips Act’. The deal is expected to create the conditions for the development of an industrial base that can double the EU’s global market share in semiconductors from 10% to at least 20% by 2030.
[…]
The Commission proposed three main lines of action, or pillars, to achieve the Chips Act’s objectives:
The “Chips for Europe Initiative”, to support large-scale technological capacity building
A framework to ensure security of supply and resilience by attracting investment
A Monitoring and Crisis Response system to anticipate supply shortages and provide responses in case of crisis.
The Chips for Europe Initiative is expected to mobilise €43 billion in public and private investments, with €3.3 billion coming from the EU budget. These actions will be primarily implemented through a Chips Joint Undertaking, a public-private partnership involving the Union, the member states and the private sector.
Main elements of the compromise
On pillar one, the compromise reached today reinforces the competences of the Chips Joint Undertaking, which will be responsible for the selection of the centres of excellence as part of its work programme.
On pillar two, the final compromise widens the scope of the so-called ‘First-of-a-kind’ facilities to include those producing equipment used in semiconductor manufacturing. ‘First-of-a-kind’ facilities contribute to the security of supply for the internal market and can benefit from fast-tracked permit-granting procedures. In addition, design centres that significantly enhance the Union’s capabilities in innovative chip design may receive a European label of ‘design centre of excellence’, which will be granted by the Commission. Member states may apply support measures for design centres that receive this label, in accordance with existing legislation.
The compromise also underlines the importance of international cooperation and the protection of intellectual property rights as two key elements for the creation of an ecosystem for semiconductors.
[…]
The provisional agreement reached today between the Council and the European Parliament needs to be finalised, endorsed, and formally adopted by both institutions.
Once the Chips Act is adopted, the Council will pass an amendment of the Single Basic Act (SBA) for institutionalised partnerships under Horizon Europe, to allow the establishment of the Chips Joint Undertaking, which builds upon and renames the existing Key Digital Technologies Joint Undertaking. The SBA amendment is adopted by the Council following consultation of the Parliament.
After placing an early bet on OpenAI, the creator of ChatGPT, Microsoft has another secret weapon in its arsenal: its own artificial intelligence chip for powering the large-language models responsible for understanding and generating humanlike language. The Information:

The software giant has been developing the chip, internally code-named Athena, since as early as 2019, according to two people with direct knowledge of the project. The chips are already available to a small group of Microsoft and OpenAI employees, who are testing the technology, one of them said. Microsoft is hoping the chip will perform better than what it currently buys from other vendors, saving it time and money on its costly AI efforts. Other prominent tech companies, including Amazon, Google and Facebook, also make their own in-house chips for AI. The chips — which are designed for training software such as large-language models, along with supporting inference, when the models use the intelligence they acquire in training to respond to new data — could also relieve a shortage of the specialized computers that can handle the processing needed for AI software. That shortage, reflecting the fact that primarily just one company, Nvidia, makes such chips, is felt across tech. It has forced Microsoft to ration its computers for some internal teams, The Information has reported.
DGIST Professor Yoonkyu Lee’s research team used intense light on the surface of a copper wire to synthesize graphene, thereby increasing the production rate and lowering the production cost of the high-quality transparent-flexible electrode materials and consequently enabling its mass production. The results were published in the February 23 issue of Nano Energy.
This technology is applicable to various 2D materials, and its applicability can be extended to the synthesis of various metal-2D material nanowires.
The research team used copper-graphene nanowires to implement high-performance transparent-flexible electronic devices such as transparent-flexible electrodes, transparent supercapacitors and transparent heaters and to thereby demonstrate the commercial viability of this material.
DGIST Professor Yoonkyu Lee said, “We developed a method of mass-producing at a low production cost the next-generation transparent-flexible electrode material based on high-quality copper-graphene nanowires. In the future, we expect that this technology will contribute to the production of core electrode materials for high-performance transparent-flexible electronic devices, semitransparent solar cells, or transparent displays.”
More information: Jongyoun Kim et al, Ultrastable 2D material-wrapped copper nanowires for high-performance flexible and transparent energy devices, Nano Energy (2022). DOI: 10.1016/j.nanoen.2022.108067
The European Commission has adopted a new set of right to repair rules that, among other things, will add electronic devices like smartphones and tablets to a list of goods that must be built with repairability in mind.
The new rules [PDF] will need to be negotiated between the European Parliament and member states before they can be turned into law. If they are, a lot more than just repairability requirements will change.
One provision will require companies selling consumer goods in the EU to offer repairs (as opposed to just replacing a damaged device) free of charge within a legal guarantee period unless it would be cheaper to replace a damaged item.
Note: this means any company can get out of it quite easily.
Beyond that, the directive also adds a set of rights for device repairability outside of legal guarantee periods that the EC said will help make repair a better option than simply tossing a damaged product away.
Under the new post-guarantee period rule, companies that produce goods the EU defines as subject to repairability requirements (eg, appliances, commercial computer hardware, and soon cellphones and tablets) are obliged to repair such items for five to 10 years after purchase if a customer demands it and the repair is possible.
[…]
The post-guarantee period repair rule also establishes the creation of an online “repair matchmaking platform” for EU consumers, and calls for the creation of a European repair standard that will “help consumers identify repairers who commit to a higher quality.”
[…]
New rules don’t do enough, say right to repair advocates
The Right to Repair coalition said in a statement that, while it welcomes the step forward taken by the EU’s new repairability rules, “the opportunity to make the right to repair universal is missed.”
While the EC’s rules focus on cutting down on waste by making products more easily repairable, they don’t do anything to address repair affordability or anti-repair practices, R2R said. Spare parts and repair charges, the group argues, could still be exorbitantly priced and inaccessible to the average consumer.
[…]
Ganapini said that truly universal right to repair laws would include assurances that independent providers were available to conduct repairs, and that components, manuals and diagnostic tools would be affordably priced. She also said that, even with the addition of smartphones and tablets to repairability requirements, the range of products the rules apply to is still too narrow.
RGB on your PC is cool, it’s beautiful, and it can be quite nuts, but it’s also quite complex, and getting it to do what you want isn’t always easy. This article is the result of many, many reboots and much Googling.
I set up a PC with 2×3 Lian Li Unifan SL 120 (top and side), 2 Lian Li Strimmer cables (an ATX and a PCIe), an NZXT Kraken Z73 CPU cooler (with LED screen, but cooled by the Lian Li Unifan SL 120 on the side, not the NZXT fans that came with it), 2 RGB DDR5 DRAM modules, an ASUS ROG GeForce RTX 2070 Super, an Asus ROG Strix Z690-F Gaming WiFi and a Corsair K95 RGB keyboard.
Happy rainbow colours! It seems to default to this every time I change stuff
It’s no mean feat doing all the wiring on the fan controllers nowadays, and the instructions don’t make it much easier. Here is the wiring setup for this (excluding the keyboard):
The problem is that all of this hardware comes with its own bloated, janky software in order to get it to do anything.
ASUS: Armory Crate / ASUS AURA
This thing takes up loads of memory and breaks often.
I decided to get rid of it once it had problems updating my drivers. You can still download Aura separately (although there is a warning that it will no longer be updated). To uninstall Armory Crate you can’t just uninstall everything from Add or Remove Programs; you need the uninstall tool, which will also get rid of the scheduled tasks and a directory the Windows uninstallers leave behind.
Once you install Aura separately, it still spawns an insane number of processes, but you don’t actually need to run Aura to change the RGBs on the VGA and DRAM. Oddly enough, not on the motherboard itself, though.
Just running AURA, not Armory Crate
You can also use other programs. Theoretically. That’s what the rest of this article is about. But in the end, I used Aura.
If you read on, bear in mind that I may be unable to get a lot of the other stuff to work because I don’t have Armory Crate installed. Nothing will work if I don’t have Aura installed, so I may as well use that.
Note: if you want to follow your driver updates, there’s a thread on the Republic of Gamers website that follows a whole load of them.
Problem I never solved: getting the Motherboard itself to show under Aura.
Corsair: iCUE
Yup, this takes up memory, works pretty well, keeps updating for no apparent reason, and quite often I have to slide the switch left and right to get the keyboard to detect as a USB device so the lighting works again. In terms of interface, it’s quite easy to use.
Woohoo! all these processes for keyboard lighting!
It detects the motherboard and can monitor the motherboard, but can’t control the lighting on it. Once upon a time it did. Maybe this is because I’m not running the whole Armory Crate thing any more.
No idea.
Note: if you do put everything on in the dashboard, memory usage goes up to 500 MB
In fact, just having the iCUE screen open uses up ~200MB of memory.
It’s the most user friendly way of doing keyboard lighting effects though, so I keep it.
OpenRGB
When I first started running it, it told me I needed to run it as an administrator to get a driver working. I ran it and it hung my computer at device detection. Later on it started rebooting it. After installing the underlying Asus Aura services, running it worked for me. [Note: the following is for the standard 0.8 build: Once. It reboots my PC after device detection now. Lots of people on Reddit have it working; maybe it needs the Armory Crate software. I have opened an issue, hopefully it will get fixed? According to a Reddit user, this could be because “If you have armoury crate installed, OpenRGB cannot detect your motherboard, if your ram is ddr5 [note: which mine is], you’ll gonna have to wait or download the latest pipeline version”]
OK, so the Pipeline build does work and even detects my motherboard! Unfortunately it doesn’t write the setting to the motherboard, so after a reboot it goes back to rainbow. After my second attempt the setting seems to have stuck and survived the reboot. However, it still hangs the computer on a reboot (everything turns off except the PC itself) and it can take quite some time to open the interface. It also sometimes does and sometimes doesn’t detect the DRAM modules. Issue opened here
Even with the interface open, the memory footprint is tiny!
Note that it saves the settings to C:\Users\razor\AppData\Roaming\OpenRGB and you can find the logs there too.
SignalRGB
This looks quite good at first glance: it detected my devices and was able to apply effects to all of them at once. Awesome! Unfortunately it has a huge memory footprint (around 600MB!) and doesn’t write the settings to the devices, so if you don’t run SignalRGB after a reboot the hardware won’t show any lighting at all; it will all be turned off.
It comes in a free tier with most of what you need and a paid subscription tier, which costs $4 per month, i.e. $48 per year! Considering what this does and the price of most of these kinds of one-trick-pony utilities (a one-time fee of ~$20), this is incredibly high. On Reddit the developers are aggressive in saying they need to keep developing in order to support new hardware, and that if you think they are charging a lot of money for this you are nuts. Also, in order to download the free effects you need an account with them.
So nope, not using this.
JackNet RGBSync
Another open source RGB program; I got it to detect my keyboard and not much else. Development stopped in 2020. The UI leaves a lot to be desired.
Gigabyte RGB Fusion
Googling alternatives to Aura, you will run into this one. It’s not compatible with my rig and doesn’t detect anything. Not really too surprising, considering my gear is all from their competitor, Asus.
L-Connect 2 and 3
For the Lian Li fans and the Strimmer cables I use L-Connect 2. It has a setting saying it should take over the motherboard setting, but this has stopped working. Maybe I need Armory Crate. It’s a bit clunky (to change settings you need to select which fans in the array you want to send an effect to and it always shows 4 arrays of 4 fans, which I don’t actually have), but it writes settings to the devices so you don’t need it running in the background.
L-Connect 3 runs extremely slowly. It’s not hung, it’s just incredibly slow. Don’t know why, but could be Armory Crate related.
NZXT CAM
This one you need running in the background, or the LED screen on the Kraken will show the default: CPU temperature only. It takes a very long time to start up. It also requires quite a bit of memory to run, which is pretty bizarre if all you want to do is show a few animated GIFs on your CPU cooler in carousel mode.
Interface up on the screen / Running in the background
So, it’s shit but you really really need it if you want the display on the CPU cooler to work.
Fan Control
So this isn’t really RGB, but related: Fan Control for Windows.
G-Helper also works for fan control and GPU switching.
Conclusion
None of the alternatives really works very well for me. None of them can control the Lian Li Strimmer devices, and most of them only control a few of my devices or have prohibitive licenses attached for what they are. What is more, in order to use the alternatives, you still need to install the ASUS motherboard driver, which is exactly what I had been hoping to avoid. OpenRGB shows the most promise but is still not quite there yet; it does work for a lot of people, though, so hopefully it will work for you too. Good luck and prepare to reboot… a lot!
[…] “With the help of a quantum annealer, we demonstrated a new way to pattern magnetic states,” said Alejandro Lopez-Bezanilla, a virtual experimentalist in the Theoretical Division at Los Alamos National Laboratory. Lopez-Bezanilla is the corresponding author of a paper about the research in Science Advances.
“We showed that a magnetic quasicrystal lattice can host states that go beyond the zero and one bit states of classical information technology,” Lopez-Bezanilla said. “By applying a magnetic field to a finite set of spins, we can morph the magnetic landscape of a quasicrystal object.”
[…]
Lopez-Bezanilla selected 201 qubits on the D-Wave computer and coupled them to each other to reproduce the shape of a Penrose quasicrystal.
Since Roger Penrose conceived the aperiodic structures named after him in the 1970s, no one had put a spin on each of their nodes to observe their behavior under the action of a magnetic field.
“I connected the qubits so all together they reproduced the geometry of one of his quasicrystals, the so-called P3,” Lopez-Bezanilla said. “To my surprise, I observed that applying specific external magnetic fields on the structure made some qubits exhibit both up and down orientations with the same probability, which leads the P3 quasicrystal to adopt a rich variety of magnetic shapes.”
Manipulating the interaction strength between qubits, and between the qubits and the external field, causes the quasicrystals to settle into different magnetic arrangements, offering the prospect of encoding more than one bit of information in a single object.
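The competition between couplings and an external field can be illustrated with a toy Ising model. This is a hedged sketch, not the paper’s actual model: the three-spin ring, the coupling values, and the field strengths below are invented for illustration, whereas the real experiment coupled 201 qubits into a Penrose P3 lattice on a D-Wave annealer.

```python
from itertools import product

def ising_energy(spins, couplings, field):
    """Energy of a spin configuration: E = -sum_ij J_ij s_i s_j - h * sum_i s_i."""
    e = -field * sum(spins)
    for (i, j), J in couplings.items():
        e -= J * spins[i] * spins[j]
    return e

def ground_states(n, couplings, field):
    """Enumerate all 2^n configurations and return those with minimal energy."""
    configs = list(product([-1, +1], repeat=n))
    energies = {c: ising_energy(c, couplings, field) for c in configs}
    emin = min(energies.values())
    return [c for c, e in energies.items() if abs(e - emin) < 1e-9]

# Three antiferromagnetically coupled spins on a ring (J < 0): frustrated,
# so with no field the minimum energy is shared by six arrangements.
ring = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): -1.0}
print(len(ground_states(3, ring, field=0.0)))  # 6 degenerate arrangements
print(ground_states(3, ring, field=3.0))       # a strong field singles out all-up
```

With no field the frustrated ring has six equally good arrangements; a strong enough field collapses that degeneracy to the single all-up state. The same interplay, scaled up to 201 coupled spins, is what lets the field "morph the magnetic landscape" of the quasicrystal.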
Some of these configurations exhibit no precise ordering of the qubits’ orientation.
“This can play in our favor,” Lopez-Bezanilla said, “because they could potentially host a quantum quasiparticle of interest for information science.” A spin quasiparticle is able to carry information immune to external noise.
A quasiparticle is a convenient way to describe the collective behavior of a group of basic elements. Properties such as mass and charge can be ascribed to several spins moving as if they were one.
Upon first glance, the Unconventional Computing Laboratory looks like a regular workspace, with computers and scientific instruments lining its clean, smooth countertops. But if you look closely, the anomalies start appearing. A series of videos shared with PopSci show the weird quirks of this research: On top of the cluttered desks, there are large plastic containers with electrodes sticking out of a foam-like substance, and a massive motherboard with tiny oyster mushrooms growing on top of it.
[…]
Why? Integrating these complex dynamics and system architectures into computing infrastructure could in theory allow information to be processed and analyzed in new ways. And it’s definitely an idea that has gained ground recently, as seen through experimental biology-based algorithms and prototypes of microbe sensors and kombucha circuit boards.
In other words, they’re trying to see if mushrooms can carry out computing and sensing functions.
A mushroom motherboard. Andrew Adamatzky
With fungal computers, mycelium—the branching, web-like root structure of the fungus—acts as both the conductors and the electronic components of a computer. (Remember, mushrooms are only the fruiting body of the fungus.) It can receive and send electric signals, as well as retain memory.
“I mix mycelium cultures with hemp or with wood shavings, and then place it in closed plastic boxes and allow the mycelium to colonize the substrate, so everything then looks white,” says Andrew Adamatzky, director of the Unconventional Computing Laboratory at the University of the West of England in Bristol, UK. “Then we insert electrodes and record the electrical activity of the mycelium. So, through the stimulation, it becomes electrical activity, and then we get the response.” He notes that this is the UK’s only wet lab—one where chemical, liquid, or biological matter is present—in any department of computer science.
Preparing to record dynamics of electrical resistance of hemp shaving colonized by oyster fungi. Andrew Adamatzky
Classical computers today see problems in binary: the ones and zeros of the traditional approach these devices use. However, most dynamics in the real world cannot be captured through that system. This is why researchers are working on technologies like quantum computers (which could better simulate molecules) and chips based on living brain cells (which could better mimic neural networks): they can represent and process information in different ways, using complex, multi-dimensional functions, and provide more precise calculations for certain problems.
Already, scientists know that mushrooms stay connected with the environment and the organisms around them using a kind of “internet” communication. You may have heard this referred to as the wood wide web. By deciphering the language fungi use to send signals through this biological network, scientists might be able not only to get insights about the state of underground ecosystems, but also to tap into them to improve our own information systems.
An illustration of the fruit bodies of Cordyceps fungi. Irina Petrova Adamatzky
Mushroom computers could offer some benefits over conventional computers. Although they can’t ever match the speeds of today’s modern machines, they could be more fault tolerant (they can self-regenerate), reconfigurable (they naturally grow and evolve), and consume very little energy.
Before stumbling upon mushrooms, Adamatzky worked on slime mold computers—yes, that involves using slime mold to carry out computing problems—from 2006 to 2016. Physarum, as slime molds are called scientifically, is an amoeba-like creature that spreads its mass amorphously across space.
Slime molds are “intelligent,” which means that they can figure out their way around problems, like finding the shortest path through a maze without programmers giving them exact instructions or parameters about what to do. Yet, they can be controlled as well through different types of stimuli, and be used to simulate logic gates, which are the basic building blocks for circuits and electronics.
Recording electrical potential spikes of hemp shaving colonized by oyster fungi. Andrew Adamatzky
Much of the work with slime molds was done on what are known as “Steiner tree” or “spanning tree” problems that are important in network design, and are solved by using pathfinding optimization algorithms. “With slime mold, we imitated pathways and roads. We even published a book on bio-evaluation of the road transport networks,” says Adamatzky. “Also, we solved many problems with computational geometry. We also used slime molds to control robots.”
When he had wrapped up his slime mold projects, Adamatzky wondered if anything interesting would happen if they started working with mushrooms, an organism that’s both similar to, and wildly different from, Physarum. “We found actually that mushrooms produce action potential-like spikes. The same spikes as neurons produce,” he says. “We’re the first lab to report about spiking activity of fungi measured by microelectrodes, and the first to develop fungal computing and fungal electronics.”
An example of how spiking activity can be used to make gates. Andrew Adamatzky
In the brain, neurons use spiking activities and patterns to communicate signals, and this property has been mimicked to make artificial neural networks. Mycelium does something similar. That means researchers can use the presence or absence of a spike as their zero or one, and code the different timings and spacings of the spikes they detect to correspond to the various gates seen in computer programming languages (or, and, etc.). Further, if you stimulate mycelium at two separate points, conductivity between them increases, and they communicate faster and more reliably, allowing memory to be established. This is similar to how brain cells form habits.
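As a rough sketch of how spike presence or absence could be read as bits and combined into gates, the toy below thresholds two hypothetical electrode traces and combines them per time window. The threshold, the windowing scheme, and the voltage values are all invented for illustration; this is not the lab’s actual encoding.

```python
def spikes_to_bits(voltages, threshold=0.5):
    """Binarize a recorded trace: a window with a spike above threshold reads as 1."""
    return [1 if v >= threshold else 0 for v in voltages]

def fungal_or(a_bits, b_bits):
    """OR gate: a spike on either input channel in a window counts as a 1 output."""
    return [a | b for a, b in zip(a_bits, b_bits)]

def fungal_and(a_bits, b_bits):
    """AND gate: both channels must spike in the same window."""
    return [a & b for a, b in zip(a_bits, b_bits)]

# Hypothetical voltage traces from two electrodes, one value per time window.
chan_a = [0.9, 0.1, 0.7, 0.0]
chan_b = [0.8, 0.6, 0.2, 0.1]
a, b = spikes_to_bits(chan_a), spikes_to_bits(chan_b)
print(fungal_or(a, b))   # [1, 1, 1, 0]
print(fungal_and(a, b))  # [1, 0, 0, 0]
```

In the real substrate the gate would be realized by the mycelium’s own electrical response to paired stimulation, with the recorded spike train decoded this way afterwards.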
Mycelium with different geometries can compute different logical functions, and they can map these circuits based on the electrical responses they receive from it. “If you send electrons, they will spike,” says Adamatzky. “It’s possible to implement neuromorphic circuits… We can say I’m planning to make a brain from mushrooms.”
Hemp shavings in the shaping of a brain, injected with chemicals. Andrew Adamatzky
So far, they’ve worked with oyster fungi (Pleurotus djamor), ghost fungi (Omphalotus nidiformis), bracket fungi (Ganoderma resinaceum), Enoki fungi (Flammulina velutipes), split gill fungi (Schizophyllum commune) and caterpillar fungi (Cordyceps militaris).
“Right now it’s just feasibility studies. We’re just demonstrating that it’s possible to implement computation, and it’s possible to implement basic logical circuits and basic electronic circuits with mycelium,” Adamatzky says. “In the future, we can grow more advanced mycelium computers and control devices.”
[…] In the latest advance in nano- and micro-architected materials, engineers at Caltech have developed a new material made from numerous interconnected microscale knots.
The knots make the material far tougher than identically structured but unknotted materials: they absorb more energy and are able to deform more while still being able to return to their original shape undamaged. These new knotted materials may find applications in biomedicine as well as in aerospace applications due to their durability, possible biocompatibility, and extreme deformability.
[…]
Each knot is around 70 micrometers in height and width, and each fiber has a radius of around 1.7 micrometers (around one-hundredth the radius of a human hair). While these are not the smallest knots ever made—in 2017 chemists tied a knot made from an individual strand of atoms—this does represent the first time that a material composed of numerous knots at this scale has ever been created. Further, it demonstrates the potential value of including these nanoscale knots in a material—for example, for suturing or tethering in biomedicine.
The knotted materials, which were created out of polymers, exhibit a tensile toughness that far surpasses materials that are unknotted but otherwise structurally identical, including ones where individual strands are interwoven instead of knotted. When compared to their unknotted counterparts, the knotted materials absorb 92 percent more energy and require more than twice the amount of strain to snap when pulled.
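Tensile toughness here means the energy a sample absorbs per unit volume before failing, which is the area under its stress-strain curve. The sketch below integrates two invented curves with the trapezoidal rule to illustrate why a sample that sustains more strain before snapping absorbs more energy; the numbers are made up and are not the paper’s data.

```python
def toughness(strains, stresses):
    """Tensile toughness: area under the stress-strain curve (trapezoidal rule)."""
    area = 0.0
    for i in range(1, len(strains)):
        area += 0.5 * (stresses[i] + stresses[i - 1]) * (strains[i] - strains[i - 1])
    return area

# Made-up illustrative curves: the "knotted" sample sustains more than twice
# the strain before snapping, so it absorbs more energy overall even though
# its peak stress is lower.
unknotted = ([0.0, 0.1, 0.2], [0.0, 1.0, 2.0])
knotted   = ([0.0, 0.1, 0.3, 0.5], [0.0, 0.8, 1.5, 1.8])
print(toughness(*unknotted))
print(toughness(*knotted))
```

The 92 percent figure in the study is exactly this kind of comparison, measured on real knotted and unknotted polymer samples rather than toy curves.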
The knots were not tied but rather manufactured in a knotted state by using advanced high-resolution 3D lithography capable of producing structures in the nanoscale. The samples detailed in the Science Advances paper contain simple knots—an overhand knot with an extra twist that provides additional friction to absorb additional energy while the material is stretched. In the future, the team plans to explore materials constructed from more complex knots.
[…]
More information: Widianto P. Moestopo et al, Knots are not for naught: Design, properties, and topology of hierarchical intertwined microarchitected materials, Science Advances (2023). DOI: 10.1126/sciadv.ade6725