Balls of human brain cells linked to a computer have been used to perform a very basic form of speech recognition. The hope is that such systems will use far less energy for AI tasks than silicon chips.
“This is just proof-of-concept to show we can do the job,” says Feng Guo at Indiana University Bloomington. “We do have a long way to go.”
Brain organoids are lumps of nerve cells that form when stem cells are grown in certain conditions. “They are like mini-brains,” says Guo.
It takes two or three months to grow the organoids, which are a few millimetres wide and consist of as many as 100 million nerve cells, he says. Human brains contain around 100 billion nerve cells.
The organoids are then placed on top of a microelectrode array, which is used both to send electrical signals to the organoid and to detect when nerve cells fire in response. The team calls its system “Brainoware”.
For the speech recognition task, the organoids had to learn to recognise the voice of one individual from a set of 240 audio clips of eight people pronouncing Japanese vowel sounds. The clips were sent to the organoids as sequences of signals arranged in spatial patterns.
The organoids’ initial responses had an accuracy of around 30 to 40 per cent, says Guo. After training sessions over two days, their accuracy rose to 70 to 80 per cent.
“We call this adaptive learning,” he says. If the organoids were exposed to a drug that stopped new connections forming between nerve cells, there was no improvement.
The training simply involved repeating the audio clips, and no form of feedback was provided to tell the organoids if they were right or wrong, says Guo. This is what is known in AI research as unsupervised learning.
There are two big challenges with conventional AI, says Guo. One is its high energy consumption. The other is the inherent limitations of silicon chips, such as their separation of information storage and processing.
Titouan Parcollet at the University of Cambridge, who works on conventional speech recognition, doesn’t rule out a role for biocomputing in the long run.
“However, it might also be a mistake to think that we need something like the brain to achieve what deep learning is currently doing,” says Parcollet. “Current deep-learning models are actually much better than any brain on specific and targeted tasks.”
Guo and his team’s task is so simplified that it only identifies who is speaking, not what the speech is, he says. “The results aren’t really promising from the speech recognition perspective.”
Even if the performance of Brainoware can be improved, another major issue with it is that the organoids can only be maintained for one or two months, says Guo. His team is working on extending this.
“If we want to harness the computation power of organoids for AI computing, we really need to address those limitations,” he says.
23andMe is a terrific concept. In essence, the company takes a sample of your DNA and tells you about your genetic makeup. For some of us, this is the only way to learn about our heritage. Spotty records, diaspora, mistaken family lore and slavery can make tracing one’s roots incredibly difficult by traditional methods.
What 23andMe does is wonderful because your DNA is fixed. Your genes tell a story that supersedes any rumors that you come from a particular country or are descended from so-and-so.
[…]
You can replace your Social Security number, albeit with some hassle, if it is ever compromised. You can cancel your credit card with the click of a button if it is stolen. But your DNA cannot be returned for a new set — you just have what you are given. If bad actors steal or sell your genetic information, there is nothing you can do about it.
This is why 23andMe’s Oct. 6 data leak, although it reads like science fiction, is not an omen of some dark future. It is, rather, an emblem of our dangerous present.
23andMe has a very simple interface with some interesting features. “DNA Relatives” matches you with other members to whom you are related. This could be an effective, thoroughly modern way to connect with long-lost family, or to learn more about your origins.
But the Oct. 6 leak perverted this feature into something alarming. By gaining access to individual accounts through weak and recycled passwords, hackers were able to create an extensive list of people with Ashkenazi heritage. This list was then posted on forums with the names, sex and likely heritage of each member under the title “Ashkenazi DNA Data of Celebrities.”
First and foremost, collecting lists of people based on their ethnic backgrounds is a personal violation with tremendously insidious undertones. If you saw yourself and your extended family on such a list, you would not take it lightly.
[…]
I find it troubling because, in 2018, Time reported that 23andMe had sold a $300 million stake in its business to GlaxoSmithKline, allowing the pharmaceutical giant to use users’ genetic data to develop new drugs. So because you wanted to know if your grandmother was telling the truth about your roots, you spat into a cup and paid 23andMe to give your DNA to a drug company to do with it as they please.
Although 23andMe is in the crosshairs of this particular leak, there are many companies in murky waters. Last year, Consumer Reports found that 23andMe and its competitors had decent privacy policies where DNA was involved, but that these businesses “over-collect personal information about you and overshare some of your data with third parties…CR’s privacy experts say it’s unclear why collecting and then sharing much of this data is necessary to provide you the services they offer.”
[…]
As it stands, your DNA can be weaponized against you by law enforcement, insurance companies, and big pharma. But this will not be limited to you. Your DNA belongs to your whole family.
Pretend that you are going up against one other candidate for a senior role at a giant corporation. If one of these genealogy companies determines that you are at an outsized risk for a debilitating disease like Parkinson’s and your rival is not, do you think that this corporation won’t take that into account?
[…]
Insurance companies are not in the business of losing money either. If they gain access to something like that on your record, you can trust that they will use it to blackball you or jack up your rates.
In short, the world risks becoming like that of the film Gattaca, where the genetic elite enjoy access while those deemed genetically inferior are marginalized.
The train has left the station for a lot of these issues. For the people on that leaked 23andMe list, the genie cannot be put back in the bottle. If your DNA is on a server for one of these companies, there is a chance that it has already been used as a reference or to help pharmaceutical companies.
[…]
There are things you can do now to avoid further damage. The next time a company asks for something like your phone number or SSN, press them as to why they need it. Make it inconvenient for them to mine you for your personally identifiable information (PII). Your PII has concrete value to these places, and they count on people to be passive, to hand it over without any fuss.
[…]
The time to start worrying about this problem was 20 years ago, but we can still effect positive change today. This 23andMe leak is only the beginning; we must do everything possible to protect our identities and DNA while they still belong to us.
Scientific American has been warning about this since at least 2013. What have we done? Nothing:
If there’s a gene for hubris, the 23andMe crew has certainly got it. Last Friday the U.S. Food and Drug Administration (FDA) ordered the genetic-testing company immediately to stop selling its flagship product, its $99 “Personal Genome Service” kit. In response, the company cooed that its “relationship with the FDA is extremely important to us” and continued hawking its wares as if nothing had happened. Although the agency is right to sound a warning about 23andMe, it’s doing so for the wrong reasons.
Since late 2007, 23andMe has been known for offering cut-rate genetic testing. Spit in a vial, send it in, and the company will look at thousands of regions in your DNA that are known to vary from human to human—and which are responsible for some of our traits
[…]
Everything seemed rosy until, in what a veteran Forbes reporter calls “the single dumbest regulatory strategy [he had] seen in 13 years of covering the Food and Drug Administration,” 23andMe changed its strategy. It apparently blew through its FDA deadlines, effectively annulling the clearance process, and abruptly cut off contact with the agency in May. Adding insult to injury, the company started an aggressive advertising campaign (“Know more about your health!”)
[…]
But as the FDA frets about the accuracy of 23andMe’s tests, it is missing their true function, and consequently the agency has no clue about the real dangers they pose. The Personal Genome Service isn’t primarily intended to be a medical device. It is a mechanism meant to be a front end for a massive information-gathering operation against an unwitting public.
Sound paranoid? Consider the case of Google. (One of the founders of 23andMe, Anne Wojcicki, is presently married to Sergey Brin, a co-founder of Google.) When it first launched, Google billed itself as a faithful servant of the consumer, a company devoted only to building the best tool to help us satisfy our cravings for information on the web. And Google’s search engine did just that. But as we now know, the fundamental purpose of the company wasn’t to help us search, but to hoard information. Every search query entered into its computers is stored indefinitely. Joined with information gleaned from cookies that Google plants in our browsers, along with personally identifiable data that dribbles from our computer hardware and from our networks, and with the amazing volumes of information that we always seem willing to share with perfect strangers—even corporate ones—that data store has become Google’s real asset.
[…]
23andMe reserves the right to use your personal information—including your genome—to inform you about events and to try to sell you products and services. There is a much more lucrative market waiting in the wings, too. One could easily imagine how insurance companies and pharmaceutical firms might be interested in getting their hands on your genetic information, the better to sell you products (or deny them to you).
[…]
Even though 23andMe currently asks permission to use your genetic information for scientific research, the company has explicitly stated that its database-sifting scientific work “does not constitute research on human subjects,” meaning that it is not subject to the rules and regulations that are supposed to protect experimental subjects’ privacy and welfare.
Those of us who have not volunteered to be a part of the grand experiment have even less protection. Even if 23andMe keeps your genome confidential against hackers, corporate takeovers, and the temptations of filthy lucre forever and ever, there is plenty of evidence that there is no such thing as an “anonymous” genome anymore. It is possible to use the internet to identify the owner of a snippet of genetic information and it is getting easier day by day.
This becomes a particularly acute problem once you realize that every one of your relatives who spits in a 23andMe vial is giving the company a not-inconsiderable bit of your own genetic information along with their own. If you have several close relatives who are already in 23andMe’s database, the company already essentially has all that it needs to know about you.
The following is an extract from our Lost in Space-Time newsletter. Each month, we hand over the keyboard to a physicist or two to tell you about fascinating ideas from their corner of the universe. You can sign up for Lost in Space-Time for free here.
Space-time is a curious thing. Look around and it’s easy enough to visualise what the space component is in the abstract. It’s three dimensions: left-right, forwards-backwards and up-down. It’s a graph with an x, y and z axis. Time, too, is easy enough. We’re always moving forwards in time so we might visualise it as a straight line or one big arrow. Every second is a little nudge forwards.
But space-time, well that’s a little different. Albert Einstein fused space and time together in his theories of relativity. The outcome was a new fabric of reality, a thing called space-time that permeates the universe. How gravity works popped out of the explorations of this new way of thinking. Rather than gravity being a force that somehow operates remotely through space, Einstein proposed that bodies curve space-time, and it is this curvature that causes them to be gravitationally drawn to each other. Our very best descriptions of the cosmos begin with space-time.
Yet, visualising it is next to impossible. The three dimensions of space and one of time give four dimensions in total. But space-time itself is curved, as Einstein proposed. That means to really imagine it, you need a fifth dimension to curve into.
Luckily, all is not lost. There is a mathematical trick to visualising space-time that I’ve come up with. It’s a simplified way of thinking that not only illustrates how space-time can be curved, but also how such curvature can draw bodies towards each other. It can give you new insight into how gravity works in our cosmos.
First, let’s start with a typical way to draw space-time. Pictures like the one below are meant to illustrate Einstein’s idea that gravity arises in the universe from massive objects distorting space-time. Placing a small object, say a marble, near one of these dimples would result in it rolling towards one of the larger objects, in much the same way that gravity pulls objects together.
The weight of different space objects influences the distortion of space-time
Manil Suri
However, the diagram is missing a lot. While the objects depicted are three dimensional, the space they’re curving is only two dimensional. Moreover, time seems to have been entirely omitted, so it’s pure space – not space-time – that’s curving.
Here’s my trick to get around this: simplify things by letting space be only one dimensional. This makes the total number of space-time dimensions a more manageable two.
Now we can represent our 1-D space by the double-arrowed horizontal line in the left panel of the diagram below. Let time be represented by the perpendicular direction, giving a two-dimensional space-time plane. This plane then consists of successive snapshots, stacked one on top of the other, of where objects are located in the single space dimension at each instant.
Suppose now there are objects – say particles – at points A and B in our universe. Then if these particles remained at rest, their trajectories through space-time would just be the two parallel paths AA’ and BB’ as shown. This simply represents the fact that for every time instant, the particles remain exactly where they are in 1-D space. Such behaviour is what we’d expect in the absence of gravity or any other forces.
However, if gravity came into play, we would expect the two particles to draw closer to each other as time went on. In other words, A’ would be much closer to B’ than A was to B.
Now what if gravity, as Einstein proposed, wasn’t a force in the usual sense? What if it couldn’t act directly on A and B to bring them closer, but rather, could only cause such an effect by deforming the 2-D space-time plane? Would there be a suitable such deformation that would still result in A’ getting closer to B’?
Manil Suri
The answer is yes. Were the plane drawn on a rubber sheet, you could stretch it in various ways to easily verify that many such deformations exist. The one we’ll pick (why exactly, we’ll see below) is to wrap the plane around a sphere, as shown in the middle panel. This can be mathematically accomplished by the same method used to project a rectangular map of the world onto a globe. The formula this involves (called the “equirectangular projection”) has been known for almost two millennia: vertical lines on the rectangle correspond to lines of longitude on the sphere and horizontal ones to lines of latitude. You can see from the right panel that A’ has indeed gotten closer to B’, just as we might expect under gravity.
On the plane, the particles follow the shortest paths between A and A’, and B and B’, respectively. These are just straight lines. On the sphere, the trajectories AA’ and BB’ still represent shortest distance paths. This is because the shortest distance between two points on a spherical surface is always along one of the circles of maximal radius, known as great circles (these include, e.g., lines of longitude and the equator). Such curves that produce the shortest distance are called geodesics. So the geodesics AA’ and BB’ on the plane get transformed to corresponding geodesics on the sphere. (This wouldn’t necessarily happen for an arbitrary deformation, which is why we chose our wrapping around the sphere.)
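The wrap-onto-a-sphere step can be checked numerically. Here is a toy Python sketch (my own illustration, not from the article): it maps plane points onto the unit sphere via the equirectangular rule (x becomes longitude, t becomes latitude) and shows that two particles with a fixed separation in 1-D space end up a shorter geodesic distance apart at a later time, i.e. at a higher latitude.

```python
import math

def to_sphere(x, t):
    """Equirectangular mapping onto the unit sphere:
    x -> longitude, t -> latitude (both in radians)."""
    return (math.cos(t) * math.cos(x),
            math.cos(t) * math.sin(x),
            math.sin(t))

def great_circle_distance(p, q):
    """Shortest (geodesic) distance between two points on the unit sphere,
    via the angle between their position vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    return math.acos(max(-1.0, min(1.0, dot)))

dx = 0.5  # separation of A and B in 1-D space (as a longitude difference)

# At time t = 0 (the equator), A and B are exactly dx apart.
d_start = great_circle_distance(to_sphere(0.0, 0.0), to_sphere(dx, 0.0))

# Later, at t = 1.2, the same longitudinal separation spans a shorter arc.
d_later = great_circle_distance(to_sphere(0.0, 1.2), to_sphere(dx, 1.2))

print(d_start > d_later)  # True – the particles have drawn closer
```

The effect is just lines of longitude converging: equal steps in x span ever shorter arcs as t carries the points towards the pole.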
Einstein postulated that particles not subject to external forces will always move through space-time along such “shortest path” geodesics. In the absence of gravity, these geodesics are just straight lines. Gravity, when introduced, isn’t counted as an external force. Rather, its effect is to curve space-time, hence changing the geodesics. The particles now follow these new geodesics, causing them to draw closer.
This is the key visualisation afforded by our simplified description of space-time. We can begin to understand how gravity, rather than being a force that acts mysteriously at a distance, could really be a result of geometry. How it can act to pull objects together via curvature built into space-time.
The above insight was fundamental to Einstein’s incorporation of gravity into his general theory of relativity. The actual theory is much more complicated, since space-time only curves in the local vicinity of bodies, not globally, as in our model. Moreover, the geometry involved must also respect the fact that nothing can travel faster than the speed of light. This effectively means that the concept of “shortest distance” has to also be modified, with the time dimension having to be treated very differently from the space dimensions.
Nevertheless, Einstein’s explanation posits, for instance, that the sun’s mass curves space-time in our solar system. That is why planets revolve around the sun rather than flying off in straight lines – they are just following the curved geodesics in this deformed space-time.
This has been confirmed by measuring how light from distant astronomical sources gets distorted by massive galaxies. Space-time truly is curved in our universe, it’s not just a mathematical convenience.
There’s a classical Buddhist parable about a group of blind men relying only on touch to figure out an animal unfamiliar to them – an elephant. Space-time is our elephant here – we can never hope to see it in its full 4-D form, or watch it curve to cause gravity. But the simplified visualisation presented here can help us better understand it.
Genetic testing company 23andMe changed its terms of service to prevent customers from filing class action lawsuits or participating in a jury trial days after reports revealing that attackers accessed personal information of nearly 7 million people — half of the company’s user base — in an October hack.
In an email sent to customers earlier this week and viewed by Engadget, the company announced that it had made updates to the “Dispute Resolution and Arbitration section” of its terms “to include procedures that will encourage a prompt resolution of any disputes and to streamline arbitration proceedings where multiple similar claims are filed.” Clicking through leads customers to the newest version of the company’s terms of service that essentially disallow customers from filing class action lawsuits, something that more people are likely to do now that the scale of the hack is clearer.
“To the fullest extent allowed by applicable law, you and we agree that each party may bring disputes against the other party only in an individual capacity and not as a class action or collective action or class arbitration,” the updated terms say. Notably, 23andMe will automatically opt customers into the new terms unless they specifically inform the company that they disagree by sending an email within 30 days of receiving the firm’s notice. Unless they do that, they “will be deemed to have agreed to the new terms,” the company’s email tells customers.
23andMe did not respond to a request for comment from Engadget.
In October, the San Francisco-based genetic testing company headed by Anne Wojcicki announced that hackers had accessed sensitive user information including photos, full names, geographical location, information related to ancestry trees, and even names of related family members. The company said that no genetic material or DNA records were exposed. Days after that attack, the hackers put up profiles of hundreds of thousands of Ashkenazi Jews and Chinese people for sale on the internet. But until last week, it wasn’t clear how many people were impacted.
In a filing with the Securities and Exchange Commission, 23andMe said that “multiple class action claims” have already been filed against the company in both federal and state court in California and state court in Illinois, as well as in Canadian courts.
Forbidding people from filing class action lawsuits, as Axios notes, hides information about the proceedings from the public, since affected parties typically attempt to resolve disputes with arbitrators in private. Experts such as Chicago-Kent College of Law professor Nancy Kim, an expert on online contracts, told Axios that changing its terms wouldn’t be enough to protect 23andMe in court.
The company’s new terms are sparking outrage online. “Wow they first screw up and then they try to screw their users by being shady,” a user who goes by Daniel Arroyo posted on X. “Seems like they’re really trying to cover their asses,” wrote another user called Paul Duke, “and head off lawsuits after announcing hackers got personal data about customers.”
A number of popular mobile password managers are inadvertently spilling user credentials due to a vulnerability in the autofill functionality of Android apps.
The vulnerability, dubbed “AutoSpill,” can expose users’ saved credentials from mobile password managers by circumventing Android’s secure autofill mechanism, according to university researchers at the IIIT Hyderabad, who discovered the vulnerability and presented their research at Black Hat Europe this week.
The researchers, Ankit Gangwal, Shubham Singh and Abhijeet Srivastava, found that when an Android app loads a login page in WebView, password managers can get “disoriented” about where they should target the user’s login information and instead expose their credentials to the underlying app’s native fields, they said. This is because WebView, the preinstalled engine from Google, lets developers display web content in-app without launching a web browser; when such a page loads, an autofill request is generated.
[…]
“When the password manager is invoked to autofill the credentials, ideally, it should autofill only into the Google or Facebook page that has been loaded. But we found that the autofill operation could accidentally expose the credentials to the base app.”
Gangwal notes that the ramifications of this vulnerability, particularly in a scenario where the base app is malicious, are significant. He added: “Even without phishing, any malicious app that asks you to log in via another site, like Google or Facebook, can automatically access sensitive information.”
The researchers tested the AutoSpill vulnerability using some of the most popular password managers, including 1Password, LastPass, Keeper and Enpass, on new and up-to-date Android devices. They found that most apps were vulnerable to credential leakage, even with JavaScript injection disabled. When JavaScript injection was enabled, all the password managers were susceptible to the AutoSpill vulnerability.
It’s pretty well known that you shouldn’t use in-app browsers anyway (PSA: Stop Using In-App Browsers Now), but I am not sure how you would avoid using WebView in this case.
Google is dealing with its second “lost data” fiasco in the past few months. This time, it’s Google Drive, which has been mysteriously losing files for some people. Google acknowledged the issue on November 27, and a week later, it posted what it called a fix.
It doesn’t feel like Google is describing this issue correctly; the company still calls it a “syncing issue” with the Drive desktop app versions 84.0.0.0 through 84.0.4.0. Syncing problems would only mean files don’t make it to or from the cloud, and that doesn’t explain why people are completely losing files. In the most popular issue thread on the Google Drive Community forums, several users describe spreadsheets and documents going missing, which all would have been created and saved in the web interface, not the desktop app, and it’s hard to see how the desktop app could affect that. Many users peg “May 2023” as the time documents stopped saving. Some say they’ve never used the desktop app.
[…]
Google’s recovery instructions outline a few ways to attempt to “recover your files.” One is via a new secret UI in the Google Drive desktop app version 85.0.13.0 or higher. If you hold shift while clicking on the Drive system tray/menu bar icon, you’ll get a special debug UI with an option to “Recover from backups.” Google says, “Once recovery is complete, you’ll see a new folder on your desktop with the unsynced files named Google Drive Recovery.” Google doesn’t explain what this does or how it works.
Option No. 2 is surprising: using the command line to recover files. The new Drive binary comes with flags for '--recover_from_account_backups' and '--recover_from_app_data_path', which tells us a bit about what is going on. When Google first acknowledged the issue, it warned users not to delete or move Drive’s app data folder. These flags from the recovery process make it sound like Google hopes your missing files will be in the Drive cache somewhere. Google also suggests trying Windows Backup or macOS Time Machine to find your files.
Google locked the issue thread on the Drive Community Forums at 170 replies before it was clear the problem was solved. It’s also marking any additional threads as “duplicates” and locking them.
[…]
Of the few replies before Google locked the thread, most suggested that Google’s fix did not work. One user calls the fix “complete BS,” adding, “The ‘solution’ doesn’t work for most people.” Another says, “Google Drive DELETED my files so they are not available for recovery. This ‘fix’ is not a fix!” There are lots of other reports of the fix not working, and not many that say they got their files back. The idea that Drive would have months-old copies of files in the app data folder is hard to believe.
A series of attacks against Microsoft Active Directory domains could allow miscreants to spoof DNS records, compromise Active Directory and steal all the secrets it stores, according to Akamai security researchers.
We’re told the attacks – which work against Microsoft Dynamic Host Configuration Protocol (DHCP) servers in their default configuration – don’t require any credentials.
Akamai says it reported the issues to Redmond, which isn’t planning to fix the issue. Microsoft did not respond to The Register‘s inquiries.
The good news, according to Akamai, is that it hasn’t yet seen a server under this type of attack. The bad news: the firm’s flaw finders also told us that massive numbers of organizations are likely vulnerable, considering 40 percent of the “thousands” of networks that Akamai monitors are running Microsoft DHCP in the vulnerable configuration.
In addition to detailing the security issue, the cloud services biz also provided a tool that sysadmins can use to detect configurations that are at risk.
While the current report doesn’t provide technical details or proof-of-concept exploits, Akamai has promised to publish code in the near future, dubbed DDSpoof – short for DHCP DNS Spoof – that implements these attacks.
“We will show how unauthenticated attackers can collect necessary data from DHCP servers, identify vulnerable DNS records, overwrite them, and use that ability to compromise AD domains,” Akamai security researcher Ori David said.
The DHCP attack research builds on earlier work by NETSPI’s Kevin Roberton, who detailed ways to exploit flaws in DNS zones.
[…]
In addition to creating non-existent DNS records, unauthenticated attackers can also use the DHCP server to overwrite existing data, including DNS records inside the ADI zone in instances where the DHCP server is installed on a domain controller, which David says is the case in 57 percent of the networks Akamai monitors.
“All these domains are vulnerable by default,” he wrote. “Although this risk was acknowledged by Microsoft in their documentation, we believe that the awareness of this misconfiguration is not in accordance with its potential impact.”
[…]
We’re still waiting to hear from Microsoft about all of these issues and will update this story if and when we do. But in the meantime, we’d suggest following Akamai’s advice: disable DHCP DNS Dynamic Updates if you haven’t already, and avoid DNSUpdateProxy altogether.
“Use the same DNS credential across all your DHCP servers instead,” is the advice.
Profiteering has played a significant role in boosting inflation during 2022, according to a report that calls for a global corporation tax to curb excess profits.
Analysis of the financial accounts of many of the UK’s biggest businesses found that profits far outpaced increases in costs, helping to push up inflation last year to levels not seen since the early 1980s.
The report from the IPPR and Common Wealth thinktanks found that business profits rose by 30% among UK-listed firms, driven by just 11% of firms that made super-profits based on their ability to push through stellar price increases – often dubbed greedflation.
Excessive profits were even larger in the US, where many important sections of the economy are dominated by a few powerful companies.
This surge in profits happened as wage increases largely failed to keep pace with inflation, and workers suffered their largest fall in disposable incomes since the second world war.
Researchers said the energy companies ExxonMobil and Shell, mining firms Glencore and Rio Tinto, and food and commodities businesses Kraft Heinz, Archer-Daniels-Midland and Bunge all saw their profits far outpace inflation in the aftermath of Russia’s invasion of Ukraine.
“Because energy and food prices feed so significantly into costs across all sectors of the wider economy, this exacerbated the initial price shock – contributing to inflation peaking higher and lasting longer than had there been less market power,” the report said.
After the analysis of 1,350 companies listed on the stock markets in the UK, US, Germany, Brazil and South Africa, the report said firms in the technology sector, telecommunications and the banking industry also pushed through significant price increases that raised their profit margins.
[…]
The report echoes research by the Unite union, which last year revealed how the biggest price increases affecting the UK consumer prices index (CPI) were driven by firms that either maintained or improved their profit margins.
[…]
Four food companies – the listed suppliers Archer-Daniels-Midland and Bunge, plus the privately owned Cargill and Dreyfus – control an estimated 70%–90% of the world grain market.
“This has caused significant harm to the economy as a whole,” the report said. “Global GDP could be 8% higher than it is now had market power not risen.
[…]
Last year, Isabel Schnabel, a member of the executive board of the European Central Bank, said that “on average, profits have recently been a key contributor to total domestic inflation, above their historical contribution”.
Jung and the Common Wealth economist Chris Hayes said a tax on the estimated $4tn of excess global profits was needed alongside moves to break up monopolistic practices that allowed firms to exploit their market power.
Jung said the Bank of England had fallen behind in the debate and needed to “catch up”.
The number of birthdays you’ve had—better known as your chronological age—now appears to be less important in assessing your health than ever before. A new study shows that bodily organs get “older” at extraordinarily different rates, and each one’s biological age can be at odds with a person’s age on paper.
[…]
The team sampled the blood of more than 5,500 people, all with no active disease or clinically abnormal biomarkers, to look for proteins that originated from specific organs. The scientists were able to determine where those proteins came from by measuring their gene activity: when genes for a protein were expressed four times more in one organ, that designated its origin. Next the team measured the concentrations of thousands of proteins in a drop of blood and found that almost 900 of them—about 18 percent of the proteins measured—tended to be specific to a single organ. When those proteins varied from the expected concentration for a particular chronological age, that indicated accelerated aging in the corresponding organ.
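One reading of the organ-assignment rule described above, a protein counts as organ-specific when its gene is expressed roughly fourfold more in one organ than anywhere else, can be sketched as a small function. All names and numbers here are hypothetical illustrations, not the study's actual code or data:

```python
def organ_of_origin(expression, fold=4.0):
    """Assign a protein to an organ if its gene is expressed at least
    `fold` times higher there than in any other organ; else None.
    (A sketch of the enrichment rule described in the article.)"""
    ranked = sorted(expression.items(), key=lambda kv: kv[1], reverse=True)
    top_organ, top_level = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    if runner_up == 0.0 or top_level >= fold * runner_up:
        return top_organ
    return None

# A brain-enriched protein vs. a broadly expressed one (made-up values)
print(organ_of_origin({"brain": 120.0, "liver": 20.0, "heart": 15.0}))  # brain
print(organ_of_origin({"brain": 40.0, "liver": 35.0, "heart": 30.0}))   # None
```

Deviations of such organ-specific proteins from the concentration expected for a person's chronological age are then what flags accelerated aging in that organ.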
“We could say with reasonable certainty that [a particular protein] likely comes from the brain and somehow ends up in the blood,” explains Tony Wyss-Coray, a professor of neurology at Stanford University and co-author of the new study. If that protein concentration changes in the blood, “it must also likely change in the brain—and [that] tells us something about how the brain ages,” Wyss-Coray says.
By comparing study participants’ organ-specific proteins, the researchers were able to estimate an age gap—the difference between an organ’s biological age and its chronological age. Participants found to have at least one organ with accelerated aging had an increased risk of disease and death over the next 15 years, depending on the organ involved. For example, those whose heart was “older” than usual had more than twice the risk of heart failure of people with a typically aging heart. Aging in the heart was also a strong predictor of heart attack. Similarly, those with a quickly aging brain were more likely to experience cognitive decline. Accelerated aging in the brain and vascular system predicted the progression of Alzheimer’s disease just as strongly as plasma pTau-181—the current clinical blood biomarker for the condition. Extreme aging in the kidneys was a strong predictor of hypertension and diabetes.
[…]
Wyss-Coray anticipates this research could lead to a simple blood test that could guide prognostic work—in other words, a test that could help foretell future illness. “You could start to do interventions before that person develops disease,” he says, “and potentially reverse this accelerating aging or slow it down.”
[…]
The momentum of commercial epigenetic testing is a “gold rush,” Shiels says. “There is a degree of oversell on what [the tests] can do.”
A single organ doesn’t tell the whole story of aging because deterioration processes are interconnected and affect an entire organism. “We understand a lot about the aging process on sort of a micro level,” Shiels says. “But a lot of the factors that drive age-related organ dysfunction are environmental. So it’s lifestyle, pollution, what you eat, microbes in your gut.”
“Researchers have identified a large number of bugs to do with the processing of images at boot time,” writes longtime Slashdot reader jd. “This allows malicious code to be installed undetectably (since the image doesn’t have to pass any validation checks) by appending it to the image. None of the current secure boot mechanisms are capable of blocking the attack.” Ars Technica reports: LogoFAIL is a constellation of two dozen newly discovered vulnerabilities that have lurked for years, if not decades, in Unified Extensible Firmware Interfaces responsible for booting modern devices that run Windows or Linux. The vulnerabilities are the product of almost a year’s worth of work by Binarly, a firm that helps customers identify and secure vulnerable firmware. The vulnerabilities are the subject of a coordinated mass disclosure released Wednesday. The participating companies comprise nearly the entirety of the x64 and ARM CPU ecosystem, starting with UEFI suppliers AMI, Insyde, and Phoenix (sometimes still called IBVs or independent BIOS vendors); device manufacturers such as Lenovo, Dell, and HP; and the makers of the CPUs that go inside the devices, usually Intel, AMD or designers of ARM CPUs. The researchers unveiled the attack on Wednesday at the Black Hat Security Conference in London.
As its name suggests, LogoFAIL involves logos, specifically those of the hardware seller that are displayed on the device screen early in the boot process, while the UEFI is still running. Image parsers in UEFIs from all three major IBVs are riddled with roughly a dozen critical vulnerabilities that have gone unnoticed until now. By replacing the legitimate logo images with identical-looking ones that have been specially crafted to exploit these bugs, LogoFAIL makes it possible to execute malicious code at the most sensitive stage of the boot process, which is known as DXE, short for Driver Execution Environment. “Once arbitrary code execution is achieved during the DXE phase, it’s game over for platform security,” researchers from Binarly, the security firm that discovered the vulnerabilities, wrote in a whitepaper. “From this stage, we have full control over the memory and the disk of the target device, thus including the operating system that will be started.” From there, LogoFAIL can deliver a second-stage payload that drops an executable onto the hard drive before the main OS has even started. The following video demonstrates a proof-of-concept exploit created by the researchers. The infected device — a Gen 2 Lenovo ThinkCentre M70s running an 11th-Gen Intel Core with a UEFI released in June — runs standard firmware defenses, including Secure Boot and Intel Boot Guard. LogoFAIL vulnerabilities are tracked under the following designations: CVE-2023-5058, CVE-2023-39538, CVE-2023-39539, and CVE-2023-40238. However, this list is currently incomplete.
“A non-exhaustive list of companies releasing advisories includes AMI (PDF), Insyde, Phoenix, and Lenovo,” reports Ars. “People who want to know if a specific device is vulnerable should check with the manufacturer.”
“The best way to prevent LogoFAIL attacks is to install the UEFI security updates that are being released as part of Wednesday’s coordinated disclosure process. Those patches will be distributed by the manufacturer of the device or the motherboard running inside the device. It’s also a good idea, when possible, to configure UEFIs to use multiple layers of defenses. Besides Secure Boot, this includes both Intel Boot Guard and, when available, Intel BIOS Guard. There are similar additional defenses available for devices running AMD or ARM CPUs.”
In a letter to the Department of Justice, Senator Ron Wyden said foreign officials were demanding the data from Alphabet’s (GOOGL.O) Google and Apple (AAPL.O). Although details were sparse, the letter lays out yet another path by which governments can track smartphones.
Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. These are the audible “dings” or visual indicators users get when they receive an email or their sports team wins a game. What users often do not realize is that almost all such notifications travel over Google and Apple’s servers.
That gives the two companies unique insight into the traffic flowing from those apps to their users, and in turn puts them “in a unique position to facilitate government surveillance of how users are using particular apps,” Wyden said. He asked the Department of Justice to “repeal or modify any policies” that hindered public discussions of push notification spying.
In a statement, Apple said that Wyden’s letter gave them the opening they needed to share more details with the public about how governments monitored push notifications.
“In this case, the federal government prohibited us from sharing any information,” the company said in a statement. “Now that this method has become public we are updating our transparency reporting to detail these kinds of requests.”
Google said that it shared Wyden’s “commitment to keeping users informed about these requests.”
The Department of Justice did not return messages seeking comment on the push notification surveillance or whether it had prevented Apple or Google from talking about it.
Wyden’s letter cited a “tip” as the source of the information about the surveillance. His staff did not elaborate on the tip, but a source familiar with the matter confirmed that both foreign and U.S. government agencies have been asking Apple and Google for metadata related to push notifications to, for example, help tie anonymous users of messaging apps to specific Apple or Google accounts.
The source declined to identify the foreign governments involved in making the requests but described them as democracies allied to the United States.
The source said they did not know how long such information had been gathered in that way.
Most users give push notifications little thought, but they have occasionally attracted attention from technologists because of the difficulty of deploying them without sending data to Google or Apple.
Earlier this year French developer David Libeau said users and developers were often unaware of how their apps emitted data to the U.S. tech giants via push notifications, calling them “a privacy nightmare.”
The world has reached a pivotal moment as threats from Earth system tipping points – and progress towards positive tipping points – accelerate, a new report shows
Story highlights
Rapid changes to nature and societies already happening, and more coming
The report makes six key recommendations to change course fast
A cascade of positive tipping points would save millions of lives
Humanity is currently on a disastrous trajectory, according to the Global Tipping Points report, the most comprehensive assessment of tipping points ever conducted.
The report makes six key recommendations to change course fast, including coordinated action to trigger positive tipping points.
Behind the report is an international team of more than 200 scientists, coordinated by the University of Exeter, in partnership with Bezos Earth Fund. Centre researchers David Armstrong McKay, Steven Lade, Laura Pereira, and Johan Rockström have all contributed to the report.
A tipping point occurs when a small change sparks an often rapid and irreversible transformation, and the effects can be positive or negative.
Based on an assessment of 26 negative Earth system tipping points, the report concludes “business as usual” is no longer possible – with rapid changes to nature and societies already happening, and more coming.
With global warming now on course to breach 1.5°C, at least five Earth system tipping points are likely to be triggered – including the collapse of major ice sheets and widespread mortality of warm-water coral reefs.
As Earth system tipping points multiply, there is a risk of catastrophic, global-scale loss of capacity to grow staple crops. Without urgent action to halt the climate and ecological crisis, societies will be overwhelmed as the natural world comes apart.
Impacts of physical tipping points could trigger social tipping points such as financial destabilization, disruption of social cohesion, and violent conflict that would further amplify impacts on people.
Centre researcher Steven Lade
Positive tipping points
But there are ways forward. Emergency global action – accelerated by leaders meeting now at COP28 – can harness positive tipping points and steer us towards a thriving, sustainable future.
The report’s authors lay out a blueprint for doing this, saying bold, coordinated policies could trigger positive tipping points across multiple sectors including energy, transport, and food.
A cascade of positive tipping points would save millions of lives, billions of people from hardship, trillions of dollars in climate-related damage, and begin restoring the natural world upon which we all depend.
Phase out fossil fuels and land-use emissions now, stopping them well before 2050.
Strengthen adaptation and “loss and damage” governance, recognising inequality between and within nations.
Include tipping points in the Global Stocktake (the world’s climate “inventory”) and Nationally Determined Contributions (each country’s efforts to tackle climate change).
Coordinate policy efforts to trigger positive tipping points.
Convene an urgent global summit on tipping points.
Deepen knowledge of tipping points. The research team supports calls for an IPCC Special Report on tipping points.
This report was released at COP28 and is being taken extremely seriously by scientists and news people alike – as it should be. Change really does need to happen, and it’s encouraging that there may be tipping points we can use to tip the balance in our favour.
IBM and Meta Launch the AI Alliance in collaboration with over 50 Founding Members and Collaborators globally including AMD, Anyscale, CERN, Cerebras, Cleveland Clinic, Cornell University, Dartmouth, Dell Technologies, EPFL, ETH, Hugging Face, Imperial College London, Intel, INSAIT, Linux Foundation, MLCommons, MOC Alliance operated by Boston University and Harvard University, NASA, NSF, Oracle, Partnership on AI, Red Hat, Roadzen, ServiceNow, Sony Group, Stability AI, University of California Berkeley, University of Illinois, University of Notre Dame, The University of Tokyo, Yale University and others
[…]
While there are many individual companies, start-ups, researchers, governments, and others who are committed to open science and open technologies and want to participate in the new wave of AI innovation, more collaboration and information sharing will help the community innovate faster and more inclusively, and identify specific risks and mitigate those risks before putting a product into the world.
[…]
We are:
The creators of the tooling driving AI benchmarking, trust and validation metrics and best practices, and application creation such as MLPerf, Hugging Face, LangChain, LlamaIndex, and open-source AI toolkits for explainability
The universities and science agencies that educate and support generation after generation of AI scientists and engineers and push the frontiers of AI research through open science.
The builders of the hardware and infrastructure that supports AI training and applications – from the needed GPUs to custom AI accelerators and cloud platforms;
The champions of frameworks that drive platform software including PyTorch, Transformers, Diffusers, Kubernetes, Ray, Hugging Face Text Generation Inference, and Parameter-Efficient Fine-Tuning.
The creators of some of today’s most used open models including Llama2, Stable Diffusion, StarCoder, Bloom, and many others.
Sir Richard Branson is leaving his space tourism company, Virgin Galactic, to stand or fall on its own two feet after declaring that his business empire will not be tipping any more cash into the project.
Branson told the Financial Times: “We don’t have the deepest pockets after COVID, and Virgin Galactic has got $1 billion, or nearly. It should, I believe, have sufficient funds to do its job on its own.”
Branson’s flight proved controversial, and attracted the ire of the Federal Aviation Administration (FAA) for venturing outside of its allocated airspace. Other issues have kept Virgin Galactic’s suborbital tourism ambitions on the ground until 2023.
Things appeared to be looking up this year as the luxury operator began commercial business again after a successful suborbital test flight and approached a near-monthly cadence. But with tickets starting at $450,000 and a maximum of four paying passengers per flight, turning a profit using the VSS Unity spaceplane and VMS Eve carrier aircraft combination is wishful thinking.
To that end, Virgin Galactic is looking to its upcoming Delta class of spaceplane, which can carry up to six passengers. It also expects eight flights – and revenues of between $21.6 million and $28.8 million per ship – per month from the forthcoming class, according to its third quarter 2023 earnings update [PDF].
However, Virgin Galactic will still be burning cash to get there. Revenue guidance for Q4 2023 stood at $3 million, while its cash flow was expected to be between $125 million and $135 million. Virgin Galactic will also be switching to a quarterly cadence before pausing flights of VSS Unity in mid-2024 to focus on building the Delta ships.
Why the need to pause? As well as calling a halt to unprofitable flights, this is likely due, at least in part, to staff cuts announced by boss Michael Colglazier. All told, approximately 185 employees – around 18 percent of the workforce – are to leave the building as the biz seeks to cut costs and focus on what is most likely to make money: the Delta class spaceplanes.
Those employees will not be alone. While Branson told the FT he was “still loving” the Virgin Galactic project, that love does not appear to extend to the entrepreneur’s wallet.
His other rocket startup, Virgin Orbit, perished earlier this year.
A somewhat obscure guideline for developers of U.S. government websites may be about to accelerate the long, sad decline of Mozilla’s Firefox browser. There already are plenty of large entities, both public and private, whose websites lack proper support for Firefox; and that will get only worse in the near future, because the ’fox’s auburn paws are perilously close to the lip of the proverbial slippery slope.
. . . we officially support any browser above 2% usage as observed by analytics.usa.gov.
At this writing, that analytics page shows the following browser traffic for the previous ninety days:
Browser             Share
Chrome              49%
Safari              34.8%
Edge                8.4%
Firefox             2.2%
Safari (in-app)     1.9%
Samsung Internet    1.6%
Android Webview     1%
Other               1%
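Applying the 2% cutoff quoted above to these figures is mechanical; a quick sketch (shares hard-coded from the analytics.usa.gov numbers in the text):

```python
# Browser shares from analytics.usa.gov as quoted above (percent).
shares = {
    "Chrome": 49.0,
    "Safari": 34.8,
    "Edge": 8.4,
    "Firefox": 2.2,
    "Safari (in-app)": 1.9,
    "Samsung Internet": 1.6,
    "Android Webview": 1.0,
    "Other": 1.0,
}

# USWDS guideline: officially support any browser above 2% usage.
supported = [browser for browser, pct in shares.items() if pct > 2.0]
print(supported)  # Firefox clears the bar, but only just
```

At 2.2%, a drop of a few tenths of a point is all it would take for Firefox to fall off the supported list.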
I am personally unaware of any serious reason to believe that Firefox’s numbers will improve soon. Indeed, for the web as a whole, they’ve been declining consistently for years, as this chart shows:
Chrome vs. Firefox vs. Safari for January, 2009, through November, 2023. Image: StatCounter.
Firefox peaked at 31.82% in November, 2009 — and then began its long slide in almost direct proportion to the rise of Chrome. The latter shot from 1.37% use in January, 2009, to its own peak of 66.34% in September, 2020, before falling back to a “measly” 62.85% in the very latest data.1
While these numbers reflect worldwide trends, the U.S.-specific picture isn’t really better. In fact, because the iPhone is so popular in the U.S. — which is obvious from what you see on that aforementioned government analytics page — Safari pulls large numbers that also hurt Firefox.
[…]
Firefox is quickly losing “web space,” thanks to a perfect storm that’s been kicked up by the dominance of Chrome, the popularity of mobile devices that run Safari by default, and many corporate and government IT shops’ insistence that their users rely on only Microsoft’s Chromium-based Edge browser while toiling away each day.
With such a continuing free-fall, Firefox is inevitably nearing the point where the U.S. Web Design System (USWDS) will remove it, like Internet Explorer before it, from the list of supported browsers.
Firefox is a very good browser with some awesome addons – and not beholden to the Google or Microsoft or Apple overlords. And it’s the only private one offering you a real choice outside of the Chromium reach.
No, it isn’t your imagination. Windows really is installing the HP Smart App and renaming printers without user interaction.
Microsoft has updated its Windows release health dashboard to admit a problem exists. The title of the issue says it all: “Printer names and icons might be changed and HP Smart app automatically installs.”
The problem appears widespread – as well as Windows 11, versions of Windows 10 going right back to the Windows 10 Enterprise 2015 LTSB have been hit by the issue, which appears to affect Windows devices with access to the Microsoft Store. Windows Server, including Windows Server 2012, is also affected.
As a reminder, symptoms of an affected Windows 10 or 11 device include the unexpected and unasked-for installation of the HP Smart App, even if no HP hardware is connected.
However, things can get progressively weirder, and Microsoft has reported that existing printers can end up being renamed HP printers, regardless of manufacturer. We’ve reported on how much HP would like to take control of its ecosystem, but this seems extreme even for the inveterate ink pusher.
According to Microsoft, when renaming occurs, most printers are dubbed the “HP LaserJet M101-M106,” and the printer icons might also be changed. Double-clicking the printer displays the error “No tasks are available for this page.”
So, what is happening? Microsoft said it was still investigating the issue and coordinating with its partners on a solution. It all seems to stem from the mystery automatic installation of the HP Smart App. Windows devices that don’t have access to the Microsoft Store should not be affected, according to the Windows giant.
The Register is awaiting a response from Microsoft on the issue and will update should the company respond.
Since the beginning of 2023, ESET researchers have observed an alarming growth of deceptive Android loan apps, which present themselves as legitimate personal loan services, promising quick and easy access to funds.
Despite their attractive appearance, these services are in fact designed to defraud users by offering them high-interest-rate loans endorsed with deceitful descriptions, all while collecting their victims’ personal and financial information to blackmail them, and in the end gain their funds. ESET products therefore recognize these apps using the detection name SpyLoan, which directly refers to their spyware functionality combined with loan claims.
Key points of the blogpost:
Apps analyzed by ESET researchers request various sensitive information from their users and exfiltrate it to the attackers’ servers.
This data is then used to harass and blackmail users of these apps – according to user reviews, even when a loan was not provided.
ESET telemetry shows a discernible growth in these apps across unofficial third-party app stores, Google Play, and websites since the beginning of 2023.
Malicious loan apps focus on potential borrowers based in Southeast Asia, Africa, and Latin America.
All of these services operate only via mobile apps, since through a browser the attackers couldn’t access all of the sensitive user data stored on the victim’s smartphone.
[…]
All of the SpyLoan apps that are described in this blogpost and mentioned in the IoCs section are marketed through social media and SMS messages, and available to download from dedicated scam websites and third-party app stores. All of these apps were also available on Google Play. As a Google App Defense Alliance partner, ESET identified 18 SpyLoan apps and reported them to Google, who subsequently removed 17 of these apps from their platform. Before their removal, these apps had a total of more than 12 million downloads from Google Play. The last app identified by ESET is still available on Google Play – however, since its developers changed its permissions and functionality, we no longer detect it as a SpyLoan app.
[…]
According to ESET telemetry, the enforcers of these apps operate mainly in Mexico, Indonesia, Thailand, Vietnam, India, Pakistan, Colombia, Peru, the Philippines, Egypt, Kenya, Nigeria, and Singapore (see map in Figure 2). All these countries have various laws that govern private loans – not only their rates but also their communication transparency; however, we don’t know how successfully they are enforced. We believe that any detections outside of these countries are related to smartphones that have, for various reasons, access to a phone number registered in one of these countries.
At the time of writing, we haven’t seen an active campaign targeting European countries, the USA, or Canada.
[…]
ESET Research has traced the origins of the SpyLoan scheme back to 2020. At that time, such apps presented only isolated cases that didn’t catch the attention of researchers; however, the presence of malicious loan apps kept growing and ultimately, we started to spot them on Google Play, the Apple App Store, and on dedicated scam websites.
[…]
Security company Lookout identified 251 Android apps on Google Play and 35 iOS apps on the Apple App Store that exhibited predatory behavior. According to Lookout, they had been in contact with Google and Apple regarding the identified apps and in November 2022 published a blogpost about these apps.
[…]
Once a user installs a SpyLoan app, they are prompted to accept the terms of service and grant extensive permissions to access sensitive data stored on the device. Subsequently, the app requests user registration, typically accomplished through SMS one-time password verification to validate the victim’s phone number.
These registration forms automatically select the country code based on the victim’s phone number, ensuring that only individuals with phone numbers registered in the targeted country can create an account.
[…]
After successful phone number verification, users gain access to the loan application feature within the app. To complete the loan application process, users are compelled to provide extensive personal information, including address details, contact information, proof of income, banking account information, and even to upload photos of the front and back sides of their identification cards, and a selfie.
[…]
On May 31st, 2023, additional policies started to apply to loan apps on Google Play, stating that such apps are prohibited from asking for permission to access sensitive data such as images, videos, contacts, phone numbers, location, and external storage data. It appears this updated policy didn’t have an immediate effect on existing apps, as most of the ones we reported were still available on the platform (including their broad permissions) after the policy started to apply.
[…]
After such an app is installed and personal data is collected, the app’s enforcers start to harass and blackmail their victims into making payments, even if – according to the reviews – the user didn’t apply for a loan or applied but the loan wasn’t approved.
[…]
Besides the data harvesting and blackmailing, these services present a form of modern-day digital usury, which refers to the charging of excessive interest rates on loans, taking advantage of vulnerable individuals with urgent financial needs, or borrowers who have limited access to mainstream financial institutions. One user gave a negative review (shown in Figure 14) to a SpyLoan app not because it was harassing him, but because it had already been four days since he applied for a loan, but nothing had happened and he needed money for medication.
There’s been this weird idea lately, even among people who used to recognize that copyright only empowers the largest gatekeepers, that in the AI world we have to magically flip the script on copyright and use it as a tool to get AI companies to pay for the material they train on. But, as we’ve explained repeatedly, this would be a huge mistake. Even if people are concerned about how AI works, copyright is not the right tool to use here, and the risk of it being used to destroy all sorts of important and useful tools is quite high (ignoring Elon Musk’s prediction that “Digital God” will obsolete all of this).
However, because so many people think that they’re supporting creators and “sticking it” to Big Tech in supporting these copyright lawsuits over AI, I thought it might be useful to play out how this would work in practice. And, spoiler alert, the end result would be a disaster for creators, and a huge benefit to big tech. It’s exactly what we should be fighting against.
And, we know this because we have decades of copyright law and the internet to observe. Copyright law, by its very nature as a monopoly right, has always served the interests of gatekeepers over artists. This is why the most aggressive enforcers of copyright are the very middlemen with long histories of screwing over the actual creatives: the record labels, the TV and movie studios, the book publishers, etc.
This is because the nature of copyright law is such that it is most powerful when a few large entities act as central repositories for the copyrights and can lord around their power and try to force other entities to pay up. This is how the music industry has worked for years, and you can see what’s happened. After years of fighting internet music, it finally devolved into a situation where there are a tiny number of online music services (Spotify, Apple, YouTube, etc.) who cut massive deals with the giant gatekeepers on the other side (the record labels, the performance rights orgs, the collection societies) while the actual creators get pennies.
This is why we’ve said that AI training will never fit neatly into a licensing regime. The almost certain outcome (because it’s what happens every other time a similar situation arises) is that there will be one (possibly two) giant entities who will be designated as the “collection society” with whom AI companies will have to negotiate or to just purchase a “training license” and that entity will then collect a ton of money, much of which will go towards “administration,” and actual artists will… get a tiny bit.
And, because of the nature of training data, which only needs to be collected once, it’s not likely that this will be a recurring payment, but a minuscule one-off for the right to train on the data.
But, given the sheer volume of content involved, and the structure of this kind of thing, the cost will be extremely high for the AI companies (a few pennies for every creator online adds up in aggregate), meaning that only the biggest of big tech will be able to afford it.
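A back-of-envelope calculation, with every number hypothetical, shows both halves of this dynamic at once: the aggregate license is huge for the buyer, while each creator's share stays tiny:

```python
# All figures are hypothetical illustrations, not estimates.
creators = 300_000_000          # hypothetical: creators with work online
payment_per_creator = 0.05      # hypothetical: a few pennies, one-off
admin_cut = 0.30                # hypothetical: collection-society overhead

total_license_cost = creators * payment_per_creator
payout_per_creator = payment_per_creator * (1 - admin_cut)

print(f"upfront cost to an AI company: ${total_license_cost:,.0f}")
print(f"per-creator payout after admin: ${payout_per_creator:.3f}")
```

Only a handful of companies can write a cheque that size up front, and each creator still ends up with a fraction of a nickel.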
In other words, the end result of a win in this kind of litigation (or, if Congress decides to act to achieve something similar) would be the further locking-in of the biggest companies. Google, Meta, and OpenAI (with Microsoft’s money) can afford the license, and will toss off a tiny one-time payment to creators (while whatever collection society there is takes a big cut for administration).
And then all of the actually interesting smaller companies and open source models are screwed.
End result? More lock-in of the biggest of big tech in exchange for… a few pennies for creators?
That’s not a beneficial outcome. It’s a horrible outcome. It will not just limit innovation, but it will massively limit competition and provide an even bigger benefit to the biggest incumbents.
Alexandre Pouget at the University of Geneva, Switzerland, and his colleagues used machine learning to analyse the chemical composition of 80 red wines from 12 years between 1990 and 2007. All the wines came from seven wine estates in the Bordeaux region of France.
“We were interested in finding out whether there is a chemical signature that is specific to each of those chateaux that’s independent of vintage,” says Pouget, meaning one estate’s wines would have a very similar chemical profile, and therefore taste, year after year.
To do this, Pouget and his colleagues used a machine to vaporise each wine and separate it into its chemical components. This technique gave them a readout for each wine, called a chromatogram, with about 30,000 points representing different chemical compounds.
The researchers used 73 of the chromatograms to train a machine learning algorithm, along with data on the chateaux of origin and the year. Then they tested the algorithm on the seven chromatograms that had been held back.
They repeated the process 50 times, changing the wines used each time. The algorithm correctly guessed the chateau of origin 100 per cent of the time. “Not that many people in the world will be able to do this,” says Pouget. It was also about 50 per cent accurate at guessing the year when the wine was made.
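For illustration, the evaluation protocol described above can be sketched in code. This is a hypothetical toy version: the synthetic “chromatograms” and the simple nearest-centroid classifier are stand-ins invented here (the paper’s real features have ~30,000 points and its model differs); only the shape of the protocol – hold out 7 of 80 wines, train on the remaining 73, repeat 50 times – mirrors the study.

```python
import random
import statistics
from collections import defaultdict

# Toy sketch: 80 synthetic "wines" from 7 estates, each estate with its own
# chemical "signature" plus noise. Repeatedly hold out 7 chromatograms,
# train on the remaining 73, and average the hold-out accuracy.

def nearest_centroid_predict(train, test_x):
    """Classify test_x by the closest per-estate mean of the training features."""
    sums, counts = {}, defaultdict(int)
    for x, y in train:
        if y not in sums:
            sums[y] = list(x)
        else:
            sums[y] = [a + b for a, b in zip(sums[y], x)]
        counts[y] += 1
    best, best_dist = None, float("inf")
    for y, s in sums.items():
        centroid = [v / counts[y] for v in s]
        dist = sum((a - b) ** 2 for a, b in zip(test_x, centroid))
        if dist < best_dist:
            best, best_dist = y, dist
    return best

random.seed(0)
data = []
for i in range(80):
    estate = i % 7                        # estate label 0..6
    signature = [float(estate)] * 10      # each estate gets its own signature
    data.append(([v + random.gauss(0, 0.2) for v in signature], estate))

accuracies = []
for _ in range(50):                       # 50 repeats, as in the study
    random.shuffle(data)
    test, train = data[:7], data[7:]      # hold out 7, train on 73
    hits = sum(nearest_centroid_predict(train, x) == y for x, y in test)
    accuracies.append(hits / len(test))

print(f"mean hold-out accuracy: {statistics.mean(accuracies):.2f}")
```

On this cleanly separated toy data the classifier is near-perfect; the point of the sketch is the repeated hold-out loop, which is what lets a small 80-wine dataset yield a stable accuracy estimate.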
The algorithm could even guess the estate when it was trained using just 5 per cent of each chromatogram, using portions where there are no notable peaks in chemicals visible to the naked eye, says Pouget.
This shows that a wine’s unique taste and feel in the mouth doesn’t depend on a handful of key molecules, but rather on the overall concentration of many, many molecules, says Pouget.
By plotting the chromatogram data, the algorithm could also sort the wines into groups whose members were most alike. It grouped the wines from the right bank of the river Garonne – the Pomerol and St-Emilion wines – separately from those from left-bank estates, known as Medoc wines.
The work is further evidence that local geography, climate, microbes and wine-making practices, together known as the terroir, do give a unique flavour to a wine. Which precise chemicals are behind each wine wasn’t looked at in this study, however.
“It really is coming close to proof that the place of growing and making really does have a chemical signal for individual wines or chateaux,” says Barry Smith at the University of London’s School of Advanced Study. “The chemical compounds and their similarities and differences reflect that elusive concept of terroir.”
Sam Altman reportedly has no equity in OpenAI – a strange arrangement for a tech founder – but new reporting from Wired this weekend shows the CEO stood to profit from an OpenAI deal to buy AI chips. Back in 2019, OpenAI signed a previously unknown deal to spend $51 million on advanced chips from a startup in which Altman is reportedly personally invested. According to the report, Altman’s web of private business interests seems to have played some role in his recent firing.
OpenAI’s board fired Sam Altman last month, saying he had not been consistently candid with it and had hindered its ability to safely develop artificial general intelligence – but it never provided a concrete reason. Everyone’s looking for the smoking gun, and Altman’s business dealings affecting his responsibilities as OpenAI’s CEO could be what was behind the board’s decision. For now, though, it’s unclear, and Altman is back at the helm while the board that fired him is gone.
The startup, Rain AI, is building computer chips modeled on the human brain, which it promises will power the next phase of AI. Rain claims its neuromorphic processing units, or NPUs, will be 100 times more powerful than the Nvidia GPUs that OpenAI and Microsoft currently depend on. While NPUs are not yet on the market, OpenAI has a deal to get first dibs.
Altman personally invested more than $1 million in Rain in 2018, according to The Information, and he’s listed on Rain’s website as a backer. OpenAI’s CEO is invested in dozens of startups, however. He previously led the startup incubator, Y Combinator, and became one of the most prominent dealmakers in Silicon Valley.
The AI chip company Rain has had no shortage of drama in the past week. The Biden administration forced a Saudi venture capital firm to sell its $25 million stake in Rain AI, and the company’s founder and CEO, Gordon Wilson, stepped down without providing a reason. Wilson posted his resignation on LinkedIn at about the same time that Sam Altman was reinstated at OpenAI.
The blurry lines between Sam Altman’s private investments and OpenAI’s business could have been a key reason for his firing, but we still don’t have a clear explanation from the board. Helen Toner, a former board member who voted to fire Altman, gave her best hint yet as she stepped down last week. In a Nov. 29 tweet, Toner said the firing was not about slowing OpenAI’s progress towards AGI but about “the board’s ability to effectively supervise the company” – which sounds like it had more to do with business disclosures than with breakthroughs around AGI.
US Senator Edward Markey (D-Mass.) is one of the more technologically engaged of our elected lawmakers. And like many technologically engaged Ars Technica readers, he does not like what he sees in terms of automakers’ approach to data privacy. On Friday, Sen. Markey wrote to 14 car companies with a variety of questions about data privacy policies, urging them to do better.
As Ars reported in September, the Mozilla Foundation published a scathing report on the subject of data privacy and automakers. The problems were widespread—most automakers collect too much personal data and are too eager to sell or share it with third parties, the foundation found.
Markey noted the Mozilla Foundation report in his letters, which were sent to BMW, Ford, General Motors, Honda, Hyundai, Kia, Mazda, Mercedes-Benz, Nissan, Stellantis, Subaru, Tesla, Toyota, and Volkswagen. The senator is concerned about the large amounts of data that modern cars can collect, including the troubling potential to use biometric data (like the rate a driver blinks and breathes, as well as their pulse) to infer mood or mental health.
Sen. Markey is also worried about automakers’ use of Bluetooth, which he said has expanded “their surveillance to include information that has nothing to do with a vehicle’s operation, such as data from smartphones that are wirelessly connected to the vehicle.”
“These practices are unacceptable,” Markey wrote. “Although certain data collection and sharing practices may have real benefits, consumers should not be subject to a massive data collection apparatus, with any disclosures hidden in pages-long privacy policies filled with legalese. Cars should not—and cannot—become yet another venue where privacy takes a backseat.”
The 14 automakers have until December 21 to answer the following questions:
Does your company collect user data from its vehicles, including but not limited to the actions, behaviors, or personal information of any owner or user?
If so, please describe how your company uses data about owners and users collected from its vehicles. Please distinguish between data collected from users of your vehicles and data collected from those who sign up for additional services.
Please identify every source of data collection in your new model vehicles, including each type of sensor, interface, or point of collection from the individual and the purpose of that data collection.
Does your company collect more information than is needed to operate the vehicle and the services to which the individual consents?
Does your company collect information from passengers or people outside the vehicle? If so, what information and for what purposes?
Does your company sell, transfer, share, or otherwise derive commercial benefit from data collected from its vehicles to third parties? If so, how much did third parties pay your company in 2022 for that data?
Once your company collects this user data, does it perform any categorization or standardization procedures to group the data and make it readily accessible for third-party use?
Does your company use this user data, or data on the user acquired from other sources, to create user profiles of any sort?
How does your company store and transmit different types of data collected on the vehicle? Do your company’s vehicles include a cellular connection or Wi-Fi capabilities for transmitting data from the vehicle?
Does your company provide notice to vehicle owners or users of its data practices?
Does your company provide owners or users an opportunity to exercise consent with respect to data collection in its vehicles?
If so, please describe the process by which a user is able to exercise consent with respect to such data collection. If not, why not?
If users are provided with an opportunity to exercise consent to your company’s services, what percentage of users do so?
Do users lose any vehicle functionality by opting out of or refusing to opt in to data collection? If so, does the user lose access only to features that strictly require such data collection, or does your company disable features that could otherwise operate without that data collection?
Can all users, regardless of where they reside, request the deletion of their data? If so, please describe the process through which a user may delete their data. If not, why not?
Does your company take steps to anonymize user data when it is used for its own purposes, shared with service providers, or shared with non-service provider third parties? If so, please describe your company’s process for anonymizing user data, including any contractual restrictions on re-identification that your company imposes.
Does your company have any privacy standards or contractual restrictions for the third-party software it integrates into its vehicles, such as infotainment apps or operating systems? If so, please provide them. If not, why not?
Please describe your company’s security practices, data minimization procedures, and standards in the storage of user data.
Has your company suffered a leak, breach, or hack within the last ten years in which user data was compromised?
If so, please detail the event(s), including the nature of your company’s system that was exploited, the type and volume of data affected, and whether and how your company notified its impacted users.
Is all the personal data stored on your company’s vehicles encrypted? If not, what personal data is left open and unprotected? What steps can consumers take to limit this open storage of their personal information on their cars?
Has your company ever provided to law enforcement personal information collected by a vehicle?
If so, please identify the number and types of requests that law enforcement agencies have submitted and the number of times your company has complied with those requests.
Does your company provide that information only in response to a subpoena, warrant, or court order? If not, why not?
Does your company notify the vehicle owner when it complies with a request?
UK telecoms regulator Ofcom has laid out how porn sites could verify users’ ages under the newly passed Online Safety Act. Although the law gives sites the choice of how they keep out underage users, the regulator is publishing a list of measures they’ll be able to use to comply. These include having a bank or mobile network confirm that a user is at least 18 years old (with that user’s consent) or asking a user to supply valid details for a credit card that’s only available to people who are 18 and older. The regulator is consulting on these guidelines starting today and hopes to finalize its official guidance in roughly a year’s time.
The measures have the potential to be contentious and come a little over four years after the UK government scrapped its last attempt to mandate age verification for pornography. Critics raised numerous privacy and technical concerns with the previous approach, and the plans were eventually shelved with the hope that the Online Safety Act (then emerging as the Online Harms White Paper) would offer a better way forward. Now we’re going to see if that’s true, or if the British government was just kicking the can down the road.
[…]
Ofcom lists six age verification methods in today’s draft guidelines. As well as turning to banks, mobile networks, and credit cards, other suggested measures include asking users to upload photo ID like a driver’s license or passport, or for sites to use “facial age estimation” technology to analyze a person’s face to determine that they’ve turned 18. Simply asking a site visitor to declare that they’re an adult won’t be considered strict enough.
Once the duties come into force, pornography sites will be able to choose from Ofcom’s approaches or implement their own age verification measures so long as they’re deemed to hit the “highly effective” bar demanded by the Online Safety Act. The regulator will work with larger sites directly and keep tabs on smaller sites by listening to complaints, monitoring media coverage, and working with frontline services. Noncompliance with the Online Safety Act can be punished with fines of up to £18 million (around $22.7 million) or 10 percent of global revenue (whichever is higher).
[…]
“It is very concerning that Ofcom is solely relying upon data protection laws and the ICO to ensure that privacy will be protected,” ORG program manager Abigail Burke said in a statement. “The Data Protection and Digital Information Bill, which is progressing through parliament, will seriously weaken our current data protection laws, which are in any case insufficient for a scheme this intrusive.”
“Age verification technologies for pornography risk sensitive personal data being breached, collected, shared, or sold. The potential consequences of data being leaked are catastrophic and could include blackmail, fraud, relationship damage, and the outing of people’s sexual preferences in very vulnerable circumstances,” Burke said, and called for Ofcom to set out clearer standards for protecting user data.
There’s also the risk that any age verification implemented will end up being bypassed by anyone with access to a VPN.
City lawmakers in Brazil have enacted what appears to be the nation’s first legislation written entirely by artificial intelligence — even if they didn’t know it at the time.
The experimental ordinance was passed in October in the southern city of Porto Alegre and city councilman Ramiro Rosário revealed this week that it was written by a chatbot, sparking objections and raising questions about the role of artificial intelligence in public policy.
Rosário told The Associated Press that he asked OpenAI’s chatbot ChatGPT to craft a proposal to prevent the city from charging taxpayers to replace water consumption meters if they are stolen. He then presented it to his 35 peers on the council without making a single change or even letting them know about its unprecedented origin.
“If I had revealed it before, the proposal certainly wouldn’t even have been taken to a vote,” Rosário told the AP by phone on Thursday. The 36-member council approved it unanimously and the ordinance went into effect on Nov. 23.
“It would be unfair to the population to run the risk of the project not being approved simply because it was written by artificial intelligence,” he added.
[…]
“We want work that is ChatGPT generated to be watermarked,” he said, adding that the use of artificial intelligence to help draft new laws is inevitable. “I’m in favor of people using ChatGPT to write bills as long as it’s clear.”
There was no such transparency for Rosário’s proposal in Porto Alegre. Sossmeier said Rosário did not inform fellow council members that ChatGPT had written the proposal.
Keeping the proposal’s origin secret was intentional. Rosário told the AP his objective was not just to resolve a local issue, but also to spark a debate. He said he entered a 49-word prompt into ChatGPT and it returned the full draft proposal within seconds, including justifications.
[…]
And the council president, who initially decried the method, already appears to have been swayed.
“I changed my mind,” Sossmeier said. “I started to read more in depth and saw that, unfortunately or fortunately, this is going to be a trend.”
In a paper released on arXiv last week, a team of researchers from Hugging Face and Carnegie Mellon University calculated the amount of power AI systems use when asked to perform different tasks.
After asking AIs to perform 1,000 inferences for each task, the researchers found text-based AI tasks are more energy-efficient than jobs involving images.
Text generation consumed 0.042kWh per 1,000 inferences, while image generation required 1.35kWh. The boffins assert that charging a smartphone requires 0.012kWh – making image generation a very power-hungry application.
“The least efficient image generation model uses as much energy as 950 smartphone charges (11.49kWh), or nearly one charge per image generation,” the authors wrote, noting the “large variation between image generation models, depending on the size of image that they generate.”
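The headline comparison is easy to sanity-check from the figures quoted above (all kWh numbers are per 1,000 inferences; the variable names below are just for illustration):

```python
# Figures quoted in the article, per 1,000 inferences
phone_charge_kwh = 0.012    # energy to fully charge a smartphone, per the paper
text_gen_kwh = 0.042        # 1,000 text generations
image_gen_kwh = 1.35        # 1,000 image generations
worst_image_kwh = 11.49     # 1,000 images on the least efficient model

# Worst-case image model, expressed in smartphone charges
charges_per_1000_images = worst_image_kwh / phone_charge_kwh
print(round(charges_per_1000_images))   # roughly 950 charges, i.e. nearly one per image

# Typical image generation vs. text generation
image_vs_text = image_gen_kwh / text_gen_kwh
print(round(image_vs_text))             # image generation uses ~32x the energy of text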
The authors also measured the carbon dioxide emitted by different AI workloads. As depicted in the graphic below, image creation topped that chart, too.
Would you trust Elon Musk with your mortgage? Or Big Tech with your benefits?
Us neither.
That’s what’s at stake as the EU’s Artificial Intelligence Act reaches the final stage of negotiations. For all its big talk, it seems like the EU is buckling to Big Tech.
EU lawmakers have been tasked with developing the world’s first comprehensive law to regulate AI products. With AI systems already in use in public life, they are rushing to catch up.
[…]
The principle of precaution urges us to exercise care and responsibility in the face of potential risks. It is crucial not only to foster innovation but also to prevent the unchecked expansion of AI from jeopardising justice and fundamental rights.
At the Left in the European Parliament, we called for this principle to be applied to the AI Act. Unfortunately, other political groups disagreed, prioritising the interests of Big Tech over those of the people. They settled on a three-tiered approach to risk whereby products are categorised into those that do not pose a significant risk, those that are high risk and those that are banned.
However, this approach contains a major loophole that risks undermining the entire legislation.
Like asking a tobacco company whether smoking is risky
When the Act was first proposed, the Commission outlined a list of ‘high-risk uses’ of AI, including AI systems used to select students, assess consumers’ creditworthiness, evaluate job-seekers, and determine who can access welfare benefits.
Using AI in these assessments has significant real-life consequences. It can mean the difference between being accepted or rejected to university, being able to take out a loan or even being able to access welfare to pay bills, rent or put food on the table.
Under the three-tiered approach, however, AI developers get to decide for themselves whether their product is high-risk. This self-assessment loophole is akin to a tobacco company deciding cigarettes are safe for our health, or a fossil fuel company saying its fumes don’t harm the environment.
[…]
Experience shows us that when corporations have this kind of freedom, they prioritise their profits over the interests of people and the planet. If the development of AI is to be accountable and transparent, negotiators must eliminate provisions on self-assessment.
AI gives us the opportunity to change our lives for the better. But as long as we let big corporations make the rules, we will continue to replicate inequalities that are already ravaging our societies.
OK, so this seems to be a little breathless – surely a mechanism could be put in place for the EU to check an AI system’s risk level when notified of a potential breach, including harsh penalties for misclassifying an AI?
However, the discussion around the EU AI Act – which had the potential to be one of the first and best pieces of AI regulation on the planet – has descended into farce since ChatGPT appeared, along with the strange idea that the original act had no provisions for General Purpose / Foundational AI models (it did – they were high-risk models). The silly discussions this has provoked have only served to delay the AI Act’s coming into force by over a year – something that big businesses are very, very happy to see.