AI can tell which chateau Bordeaux wines come from with 100% accuracy

Alexandre Pouget at the University of Geneva, Switzerland, and his colleagues used machine learning to analyse the chemical composition of 80 red wines from 12 years between 1990 and 2007. All the wines came from seven wine estates in the Bordeaux region of France.

“We were interested in finding out whether there is a chemical signature that is specific to each of those chateaux that’s independent of vintage,” says Pouget, meaning one estate’s wines would have a very similar chemical profile, and therefore taste, year after year.

To do this, Pouget and his colleagues used a machine to vaporise each wine and separate it into its chemical components. This technique gave them a readout for each wine, called a chromatogram, with about 30,000 points representing different chemical compounds.

The researchers used 73 of the chromatograms to train a machine learning algorithm, along with data on the chateaux of origin and the year. Then they tested the algorithm on the seven chromatograms that had been held back.

They repeated the process 50 times, changing the wines used each time. The algorithm correctly guessed the chateau of origin 100 per cent of the time. “Not that many people in the world will be able to do this,” says Pouget. It was also about 50 per cent accurate at guessing the year when the wine was made.
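For readers who want a feel for the workflow, the repeated 73/7 split described above maps onto a standard repeated hold-out loop. The sketch below is a minimal illustration, not the authors’ pipeline: it assumes scikit-learn, uses synthetic stand-in data in place of the real chromatograms, and picks a generic classifier purely for demonstration.

    # Minimal sketch of the repeated train/test protocol described above.
    # Synthetic data stands in for the real chromatograms; the classifier is an arbitrary choice.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import ShuffleSplit

    n_wines, n_points, n_estates = 80, 30_000, 7
    rng = np.random.default_rng(0)
    X = rng.normal(size=(n_wines, n_points))   # one ~30,000-point chromatogram per wine
    y = np.arange(n_wines) % n_estates         # stand-in chateau label for each wine

    # 50 repeats, holding out 7 wines each time, mirroring the setup in the article
    splitter = ShuffleSplit(n_splits=50, test_size=7, random_state=0)
    scores = []
    for train_idx, test_idx in splitter.split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))

    print(f"mean held-out accuracy over 50 splits: {np.mean(scores):.2f}")

On the real chromatograms the paper reports 100 per cent chateau accuracy; on random stand-in data like this, the score should hover around chance (about 1 in 7).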

The algorithm could even guess the estate when it was trained using just 5 per cent of each chromatogram, using portions where there are no notable peaks in chemicals visible to the naked eye, says Pouget.

This shows that a wine’s unique taste and feel in the mouth doesn’t depend on a handful of key molecules, but rather on the overall concentration of many, many molecules, says Pouget.

By plotting the chromatogram data, the algorithm could also sort the wines into groups with similar profiles. It grouped wines from estates on the right bank of the Dordogne river – Pomerol and Saint-Émilion – separately from those from left-bank estates, known as Médoc wines.

The work is further evidence that local geography, climate, microbes and wine-making practices, together known as the terroir, do give a unique flavour to a wine. The study did not, however, identify which specific chemicals are behind each wine’s character.

“It really is coming close to proof that the place of growing and making really does have a chemical signal for individual wines or chateaux,” says Barry Smith at the University of London’s School of Advanced Study. “The chemical compounds and their similarities and differences reflect that elusive concept of terroir.”

 

Journal reference:

Communications Chemistry DOI: 10.1038/s42004-023-01051-9

Source: AI can tell which chateau Bordeaux wines come from with 100% accuracy | New Scientist

Sam Altman May Have Found a Loophole to Cash in at OpenAI – having OpenAI buy products from a company he’s invested in

Sam Altman reportedly has no equity in OpenAI, a strange move for a tech founder, but new reporting from Wired this weekend shows the CEO would profit from an OpenAI deal to buy AI chips. OpenAI signed a previously unknown deal back in 2019 to spend $51 million on advanced chips from a startup Sam Altman is reportedly personally invested in. Altman’s web of private business interests seems to have played some role in his recent firing, according to the report.

OpenAI’s board fired Sam Altman last month, saying he was not consistently candid and that this hindered its ability to safely develop artificial general intelligence, but it gave no concrete reason. Everyone is looking for the smoking gun, and Altman’s business dealings conflicting with his responsibilities as OpenAI’s CEO could be what’s behind the board’s decision. However, it’s unclear, and Altman is back at the helm while the board that fired him is gone.

The startup, Rain AI, is building computer chips modelled on the human brain, an approach that promises to be the next phase of AI hardware. Its neuromorphic processing units, or NPUs, are claimed to be 100 times more powerful than the Nvidia GPUs that OpenAI and Microsoft are currently beholden to. While NPUs are not on the market yet, OpenAI has a deal to get first dibs.

Altman personally invested more than $1 million in Rain in 2018, according to The Information, and he’s listed on Rain’s website as a backer. OpenAI’s CEO is invested in dozens of startups, however. He previously led the startup incubator Y Combinator and became one of the most prominent dealmakers in Silicon Valley.

The AI chip company Rain has had no shortage of drama in the past week. The Biden administration forced a Saudi venture capital firm to sell its $25 million stake in Rain AI, and Gordon Wilson, Rain’s founder and CEO, stepped down without providing a reason. Wilson posted his resignation on LinkedIn at about the same time that Sam Altman was reinstated at OpenAI.

The blurry lines between Sam Altman’s private investments and OpenAI business could have been a key reason for his firing, but we still don’t have a clear explanation from the board. Helen Toner, a former board member who voted to fire Altman, gave her best hint yet as she stepped down last week. In a Nov. 29 tweet, Toner said the firing was not about slowing OpenAI’s progress towards AGI but about “the board’s ability to effectively supervise the company,” which sounds like it has more to do with business disclosures than breakthroughs around AGI.

Source: Sam Altman May Have Found a Loophole to Cash in at OpenAI

Automakers’ data privacy practices “are unacceptable,” says US senator

US Senator Edward Markey (D-Mass.) is one of the more technologically engaged of our elected lawmakers. And like many technologically engaged Ars Technica readers, he does not like what he sees in terms of automakers’ approach to data privacy. On Friday, Sen. Markey wrote to 14 car companies with a variety of questions about data privacy policies, urging them to do better.

As Ars reported in September, the Mozilla Foundation published a scathing report on the subject of data privacy and automakers. The problems were widespread—most automakers collect too much personal data and are too eager to sell or share it with third parties, the foundation found.

Markey noted the Mozilla Foundation report in his letters, which were sent to BMW, Ford, General Motors, Honda, Hyundai, Kia, Mazda, Mercedes-Benz, Nissan, Stellantis, Subaru, Tesla, Toyota, and Volkswagen. The senator is concerned about the large amounts of data that modern cars can collect, including the troubling potential to use biometric data (like the rate a driver blinks and breathes, as well as their pulse) to infer mood or mental health.

Sen. Markey is also worried about automakers’ use of Bluetooth, which he said has expanded “their surveillance to include information that has nothing to do with a vehicle’s operation, such as data from smartphones that are wirelessly connected to the vehicle.”

“These practices are unacceptable,” Markey wrote. “Although certain data collection and sharing practices may have real benefits, consumers should not be subject to a massive data collection apparatus, with any disclosures hidden in pages-long privacy policies filled with legalese. Cars should not—and cannot—become yet another venue where privacy takes a backseat.”

The 14 automakers have until December 21 to answer the following questions:

  • Does your company collect user data from its vehicles, including but not limited to the actions, behaviors, or personal information of any owner or user?
    • If so, please describe how your company uses data about owners and users collected from its vehicles. Please distinguish between data collected from users of your vehicles and data collected from those who sign up for additional services.
    • Please identify every source of data collection in your new model vehicles, including each type of sensor, interface, or point of collection from the individual and the purpose of that data collection.
    • Does your company collect more information than is needed to operate the vehicle and the services to which the individual consents?
    • Does your company collect information from passengers or people outside the vehicle? If so, what information and for what purposes?
    • Does your company sell, transfer, share, or otherwise derive commercial benefit from data collected from its vehicles to third parties? If so, how much did third parties pay your company in 2022 for that data?
    • Once your company collects this user data, does it perform any categorization or standardization procedures to group the data and make it readily accessible for third-party use?
    • Does your company use this user data, or data on the user acquired from other sources, to create user profiles of any sort?
    • How does your company store and transmit different types of data collected on the vehicle? Do your company’s vehicles include a cellular connection or Wi-Fi capabilities for transmitting data from the vehicle?
  • Does your company provide notice to vehicle owners or users of its data practices?
  • Does your company provide owners or users an opportunity to exercise consent with respect to data collection in its vehicles?
    • If so, please describe the process by which a user is able to exercise consent with respect to such data collection. If not, why not?
    • If users are provided with an opportunity to exercise consent to your company’s services, what percentage of users do so?
    • Do users lose any vehicle functionality by opting out of or refusing to opt in to data collection? If so, does the user lose access only to features that strictly require such data collection, or does your company disable features that could otherwise operate without that data collection?
  • Can all users, regardless of where they reside, request the deletion of their data? If so, please describe the process through which a user may delete their data. If not, why not?
  • Does your company take steps to anonymize user data when it is used for its own purposes, shared with service providers, or shared with non-service provider third parties? If so, please describe your company’s process for anonymizing user data, including any contractual restrictions on re-identification that your company imposes.
  • Does your company have any privacy standards or contractual restrictions for the third-party software it integrates into its vehicles, such as infotainment apps or operating systems? If so, please provide them. If not, why not?
  • Please describe your company’s security practices, data minimization procedures, and standards in the storage of user data.
    • Has your company suffered a leak, breach, or hack within the last ten years in which user data was compromised?
    • If so, please detail the event(s), including the nature of your company’s system that was exploited, the type and volume of data affected, and whether and how your company notified its impacted users.
    • Is all the personal data stored on your company’s vehicles encrypted? If not, what personal data is left open and unprotected? What steps can consumers take to limit this open storage of their personal information on their cars?
  • Has your company ever provided to law enforcement personal information collected by a vehicle?
    • If so, please identify the number and types of requests that law enforcement agencies have submitted and the number of times your company has complied with those requests.
    • Does your company provide that information only in response to a subpoena, warrant, or court order? If not, why not?
    • Does your company notify the vehicle owner when it complies with a request?

Source: Automakers’ data privacy practices “are unacceptable,” says US senator | Ars Technica

The UK tries, once again, to age-gate pornography and keep a list of porn watchers

UK telecoms regulator Ofcom has laid out how porn sites could verify users’ ages under the newly passed Online Safety Act. Although the law gives sites the choice of how they keep out underage users, the regulator is publishing a list of measures they’ll be able to use to comply. These include having a bank or mobile network confirm that a user is at least 18 years old (with that user’s consent) or asking a user to supply valid details for a credit card that’s only available to people who are 18 and older. The regulator is consulting on these guidelines starting today and hopes to finalize its official guidance in roughly a year’s time.

The measures have the potential to be contentious and come a little over four years after the UK government scrapped its last attempt to mandate age verification for pornography. Critics raised numerous privacy and technical concerns with the previous approach, and the plans were eventually shelved with the hope that the Online Safety Act (then emerging as the Online Harms White Paper) would offer a better way forward. Now we’re going to see if that’s true, or if the British government was just kicking the can down the road.

[…]

Ofcom lists six age verification methods in today’s draft guidelines. As well as turning to banks, mobile networks, and credit cards, other suggested measures include asking users to upload photo ID like a driver’s license or passport, or for sites to use “facial age estimation” technology to analyze a person’s face to determine that they’ve turned 18. Simply asking a site visitor to declare that they’re an adult won’t be considered strict enough.

Once the duties come into force, pornography sites will be able to choose from Ofcom’s approaches or implement their own age verification measures so long as they’re deemed to hit the “highly effective” bar demanded by the Online Safety Act. The regulator will work with larger sites directly and keep tabs on smaller sites by listening to complaints, monitoring media coverage, and working with frontline services. Noncompliance with the Online Safety Act can be punished with fines of up to £18 million (around $22.7 million) or 10 percent of global revenue (whichever is higher).

[…]

“It is very concerning that Ofcom is solely relying upon data protection laws and the ICO to ensure that privacy will be protected,” ORG program manager Abigail Burke said in a statement. “The Data Protection and Digital Information Bill, which is progressing through parliament, will seriously weaken our current data protection laws, which are in any case insufficient for a scheme this intrusive.”

“Age verification technologies for pornography risk sensitive personal data being breached, collected, shared, or sold. The potential consequences of data being leaked are catastrophic and could include blackmail, fraud, relationship damage, and the outing of people’s sexual preferences in very vulnerable circumstances,” Burke said, and called for Ofcom to set out clearer standards for protecting user data.

There’s also the risk that any age verification implemented will end up being bypassed by anyone with access to a VPN.

[…]

Source: The UK tries, once again, to age-gate pornography – The Verge

1. Age verification doesn’t work

2. Age verification doesn’t work

3. Age verification doesn’t work

4. Really, having to register as a porn watcher and then have your name in a leaky database?!

Brazilian city enacts an ordinance that was written by ChatGPT – might be the first law written by an AI

City lawmakers in Brazil have enacted what appears to be the nation’s first legislation written entirely by artificial intelligence — even if they didn’t know it at the time.

The experimental ordinance was passed in October in the southern city of Porto Alegre and city councilman Ramiro Rosário revealed this week that it was written by a chatbot, sparking objections and raising questions about the role of artificial intelligence in public policy.

Rosário told The Associated Press that he asked OpenAI’s chatbot ChatGPT to craft a proposal to prevent the city from charging taxpayers to replace water consumption meters if they are stolen. He then presented it to his 35 peers on the council without making a single change or even letting them know about its unprecedented origin.

“If I had revealed it before, the proposal certainly wouldn’t even have been taken to a vote,” Rosário told the AP by phone on Thursday. The 36-member council approved it unanimously and the ordinance went into effect on Nov. 23.

“It would be unfair to the population to run the risk of the project not being approved simply because it was written by artificial intelligence,” he added.

[…]

“We want work that is ChatGPT generated to be watermarked,” he said, adding that the use of artificial intelligence to help draft new laws is inevitable. “I’m in favor of people using ChatGPT to write bills as long as it’s clear.”

There was no such transparency for Rosário’s proposal in Porto Alegre. Sossmeier said Rosário did not inform fellow council members that ChatGPT had written the proposal.

Keeping the proposal’s origin secret was intentional. Rosário told the AP his objective was not just to resolve a local issue, but also to spark a debate. He said he entered a 49-word prompt into ChatGPT and it returned the full draft proposal within seconds, including justifications.

[…]

And the council president, who initially decried the method, already appears to have been swayed.

“I changed my mind,” Sossmeier said. “I started to read more in depth and saw that, unfortunately or fortunately, this is going to be a trend.”

Source: Brazilian city enacts an ordinance that was written by ChatGPT | AP News

One AI image needs as much power as a smartphone charge

In a paper released on arXiv last week, a team of researchers from Hugging Face and Carnegie Mellon University calculated the amount of energy AI systems use when asked to perform different tasks.

After asking AIs to perform 1,000 inferences for each task, the researchers found text-based AI tasks are more energy-efficient than jobs involving images.

Text generation consumed 0.042 kWh per 1,000 inferences, while image generation required 1.35 kWh. The boffins note that charging a smartphone requires about 0.012 kWh – making image generation a particularly energy-hungry application.

“The least efficient image generation model uses as much energy as 950 smartphone charges (11.49kWh), or nearly one charge per image generation,” the authors wrote, noting the “large variation between image generation models, depending on the size of image that they generate.”
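The quoted figures are easy to sanity-check with a few lines of arithmetic, using only the numbers reported above (per-1,000-inference energy and the researchers’ 0.012 kWh phone-charge estimate):

    # Back-of-the-envelope check of the figures quoted above.
    TEXT_KWH_PER_1000 = 0.042          # text generation, per 1,000 inferences
    IMAGE_KWH_PER_1000 = 1.35          # image generation (average), per 1,000 inferences
    WORST_IMAGE_KWH_PER_1000 = 11.49   # least efficient image model, per 1,000 inferences
    PHONE_CHARGE_KWH = 0.012           # one smartphone charge

    print(IMAGE_KWH_PER_1000 / TEXT_KWH_PER_1000)              # ~32x: images vs text, on average
    print(WORST_IMAGE_KWH_PER_1000 / PHONE_CHARGE_KWH)         # ~957 phone charges per 1,000 images
    print(WORST_IMAGE_KWH_PER_1000 / 1000 / PHONE_CHARGE_KWH)  # ~0.96 charges per single image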

The authors also measured the carbon dioxide emissions associated with different AI workloads; image creation topped that chart too.


The full paper is available on arXiv.

Source: One AI image needs as much power as a smartphone charge • The Register

A(I) deal at any cost: Will the EU buckle to Big Tech?

Would you trust Elon Musk with your mortgage? Or Big Tech with your benefits?

Us neither.

That’s what’s at stake as the EU’s Artificial Intelligence Act reaches the final stage of negotiations. For all its big talk, it seems like the EU is buckling to Big Tech.

EU lawmakers have been tasked with developing the world’s first comprehensive law to regulate AI products. Now that AI systems are already being used in public life, lawmakers are rushing to catch up.

[…]

The principle of precaution urges us to exercise care and responsibility in the face of potential risks. It is crucial not only to foster innovation but also to prevent the unchecked expansion of AI from jeopardising justice and fundamental rights.

At the Left in the European Parliament, we called for this principle to be applied to the AI Act. Unfortunately, other political groups disagreed, prioritising the interests of Big Tech over those of the people. They settled on a three-tiered approach to risk whereby products are categorised into those that do not pose a significant risk, those that are high risk and those that are banned.

However, this approach contains a major loophole that risks undermining the entire legislation.

Like asking a tobacco company whether smoking is risky

When the act was first proposed, the Commission outlined a list of ‘high-risk uses’ of AI, including AI systems used to select students, assess consumers’ creditworthiness, evaluate job-seekers, and determine who can access welfare benefits.

Using AI in these assessments has significant real-life consequences. It can mean the difference between being accepted or rejected to university, being able to take out a loan or even being able to access welfare to pay bills, rent or put food on the table.

Under the three-tiered approach, AI developers are allowed to decide for themselves whether their product is high risk. This self-assessment loophole is akin to a tobacco company deciding that cigarettes are safe for our health, or a fossil fuel company saying its fumes don’t harm the environment.

[…]

Experience shows us that when corporations have this kind of freedom, they prioritise their profits over the interests of people and the planet. If the development of AI is to be accountable and transparent, negotiators must eliminate provisions on self-assessment.

AI gives us the opportunity to change our lives for the better. But as long as we let big corporations make the rules, we will continue to replicate inequalities that are already ravaging our societies.

Source: A(I) deal at any cost: Will the EU buckle to Big Tech? – EURACTIV.com

OK, so this seems to be a little breathless – surely we can put in a mechanism for the EU to check the risk level when notified of a potential breach, including harsh penalties for misclassifying an AI?

However, the discussion around the EU AI Act – which had the potential to be one of the first and best pieces of AI regulation on the planet – has descended into farce since the arrival of ChatGPT and the strange idea that the original act made no provision for general-purpose / foundational AI models (it did – they were high-risk models). The debate this has provoked has only served to delay the AI Act coming into force by over a year – something that big businesses are very happy to see.

23andMe hackers accessed DNA information on millions of customers using a feature that matches relatives

An SEC filing has revealed more details on a data breach affecting 23andMe users that was disclosed earlier this fall. The company says its investigation found hackers were able to access the accounts of roughly 0.1 percent of its userbase, or about 14,000 of its 14 million total customers, TechCrunch notes. On top of that, the attackers were able to exploit 23andMe’s opt-in DNA Relatives (DNAR) feature, which matches users with their genetic relatives, to access information about millions of other users. A 23andMe spokesperson told Engadget that hackers accessed the DNAR profiles of roughly 5.5 million customers this way, plus Family Tree profile information from 1.4 million DNA Relative participants.

DNAR Profiles contain sensitive details including self-reported information like display names and locations, as well as shared DNA percentages for DNA Relatives matches, family names, predicted relationships and ancestry reports. Family Tree profiles contain display names and relationship labels, plus other information that a user may choose to add, including birth year and location. When the breach was first revealed in October, the company said its investigation “found that no genetic testing results have been leaked.”

According to the new filing, the data “generally included ancestry information, and, for a subset of those accounts, health-related information based upon the user’s genetics.” All of this was obtained through a credential-stuffing attack, in which hackers used login information from other, previously compromised websites to access those users’ accounts on other sites. In doing this, the filing says, “the threat actor also accessed a significant number of files containing profile information about other users’ ancestry that such users chose to share when opting in to 23andMe’s DNA Relatives feature and posted certain information online.”

[…]

Source: 23andMe hackers accessed ancestry information on millions of customers using a feature that matches relatives

The disturbing part of this is that the people who were hacked were idiots anyway for re-using their passwords, and probably didn’t realise that they were giving away DNA information about not only themselves but their whole families to 23andMe, which sold it on. Genetic information is the most personal type of information you have. You cannot change it. And if you give it to someone, you also give away your family’s. Now it wasn’t just given away – it was stolen too.

Electric Vehicles Are 79% Less Reliable Than Conventional Cars

Electric vehicle owners continue to report far more problems with their vehicles than owners of conventional cars or hybrids, according to Consumer Reports’ newly released annual car reliability survey. The survey reveals that, on average, EVs from the past three model years had 79 percent more problems than conventional cars. Based on owner responses on more than 330,000 vehicles, the survey covers 20 potential problem areas, including engine, transmission, electric motors, leaks, and infotainment systems.

“Most electric cars today are being manufactured by either legacy automakers that are new to EV technology, or by companies like Rivian that are new to making cars,” says Jake Fisher, senior director of auto testing at Consumer Reports. “It’s not surprising that they’re having growing pains and need some time to work out the bugs.” Fisher says some of the most common problems EV owners report are issues with electric drive motors, charging, and EV batteries.

Source: Electric Vehicles Are Less Reliable Than Conventional Cars – Consumer Reports

Plants may be absorbing 20% more CO2 than we thought, new models find

[…]

Using realistic ecological modeling, scientists led by Western Sydney University’s Jürgen Knauer found that the globe’s vegetation could actually be taking on about 20% more of the CO2 humans have pumped into the atmosphere and will continue to do so through to the end of the century.

“What we found is that a well-established climate model that is used to feed into global climate assessments by the likes of the IPCC (Intergovernmental Panel on Climate Change) predicts stronger and sustained carbon uptake until the end of the 21st century when extended to account for the impact of some critical physiological processes that govern how plants conduct photosynthesis,” said Knauer.

[…]

Current models, the team adds, are not that complex, so they likely underestimate future CO2 uptake by vegetation.

[…]

Taking the well-established Community Atmosphere-Biosphere Land Exchange model (CABLE), the team accounted for three physiological factors […] the team found that the most complex version, which accounted for all three factors, predicted the most CO2 uptake, around 20% more than the simplest formula.

[…]

“Our understanding of key response processes of the carbon cycle, such as plant photosynthesis, has advanced dramatically in recent years,” said Ben Smith, professor and research director of Western Sydney University’s Hawkesbury Institute for the Environment. “It always takes a while for new knowledge to make it into the sophisticated models we rely on to inform climate and emissions policy. Our study demonstrates that fully accounting for the latest science in these models can lead to materially different predictions.”

[…]

And while it’s somewhat good news, the team says plants can’t be expected to do all the heavy lifting; the onus remains on governments to stick to emission reduction obligations. However, the modeling makes a strong case for the value of greening projects and their importance in comprehensive approaches to tackling global warming.

[…]

Source: Plants may be absorbing 20% more CO2 than we thought, new models find

HP’s CFO tells world: we are locking in customers for more profit

[…] Tech vendors – software, hardware, and cloud services – generally avoid terms that suggest they’re perhaps in some way pinning down customers in a strategic sales hold.

But when Marie Myers, chief financial officer at HP, spoke at the UBS Global Technology Conference this week, in front of investors, the thrust of her message was geared toward that audience.

“We absolutely see when you move a customer from that pure transactional model … whether it’s Instant Ink, plus adding on that paper, we sort of see a 20 percent uplift on the value of that customer because you’re locking that person, committing to a longer-term relationship.”

Instant Ink is a subscription in which ink or toner cartridges are dispatched when needed, with customers paying for plans that start at $0.99 and run to $25.99 per month. As of May last year, HP had more than 11 million subscribers to the service. Since then it has banked double-digit percentage figures on the revenues front.

By pre-pandemic 2019, HP had grown weary of third-party cartridge makers stealing its supplies business. It pledged to charge more upfront for certain printer hardware (“rebalance the system profitability, capturing more profit upfront”).

HP also set in motion new subscriptions, and launched Smart Tank hardware filled with a pre-defined amount of ink/toner. These now account for 60 percent of total shipments.

Myers told the UBS Conference she was “really proud” that HP could “raise the range on our print margins” based on “bold moves and shifting models.”

[…]

An old industry factoid from 2003 was that HP ink cost seven times more than a bottle of 1985 Dom Perignon. HP isn’t alone in attracting these sorts of comparisons – Epson was called out by Which? a couple of years back.

[…]

Source: Vendor lock-in is a good thing? HP’s CFO thinks so

Months of Google Drive files disappearing randomly

Google Drive users are reporting files mysteriously disappearing from the service, with some netizens on the goliath’s support forums claiming six or more months of work have unceremoniously vanished.

The issue has been rumbling for a few days, with one user logging into Google Drive and finding things as they were in May 2023.

According to the poster, almost everything saved since then has gone, and attempts at recovery failed.

Others chimed in with similar experiences, and one claimed that six months of business data had gone AWOL.

There is little information regarding what has happened; some users reported that synchronization had simply stopped working, so the cloud storage was out of date. Others could get some of their information back by fiddling with cached files, although the limited advice on offer for the affected was to leave things well alone until engineers come up with a solution.

A message purporting to be from Google support also advised not to make changes to the root/data folder while engineers investigate the issue.

[…]

It’s a reminder that just because files are stored in the cloud, there is no guarantee that they are safe. European cloud hosting provider OVH suffered a disastrous fire in 2021 that left some customers scrambling for backups and disaster recovery plans.

[…]

Just because the files have been uploaded one day does not necessarily mean they will still be there – or recoverable – the next.

[…]

MatthewSt reports that he has a fix; obviously this is something worked out by a user rather than official advice, so caution is advised.

Source: The mystery of the disappearing Google Drive files • The Register

3 Vulns expose ownCloud admin passwords, sensitive data

ownCloud has disclosed three critical vulnerabilities, the most serious of which leads to sensitive data exposure and carries a maximum severity score.

The open source file-sharing software company said containerized deployments of ownCloud could expose admin passwords, mail server credentials, and license keys.

Tracked as CVE-2023-49103, the vulnerability carries a maximum severity rating of 10 on the CVSS v3 scale and affects the graphapi app versions 0.2.0 to 0.3.0.

The app relies on a third-party library that provides a URL which, when followed, reveals the PHP environment’s configuration details, allowing an attacker to access sensitive data.

Not only could an intruder access admin passwords when deployed using containers, but the same PHP environment also exposes other potentially valuable configuration details, ownCloud said in its advisory, so even if the software isn’t running in a container, the recommended fixes should still be applied.

To fix the vulnerability, customers should delete the file at the following directory: owncloud/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php.
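If you are scripting that cleanup across several installs, something along the lines of the sketch below does the job; the install root here is an assumption and will differ per deployment, and ownCloud’s own advisory (and any subsequent patch) takes precedence.

    # Hypothetical cleanup helper for CVE-2023-49103: remove the phpinfo test file
    # named in ownCloud's advisory. OWNCLOUD_ROOT is an assumption -- adjust per host.
    from pathlib import Path

    OWNCLOUD_ROOT = Path("/var/www/owncloud")
    target = OWNCLOUD_ROOT / "apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php"

    if target.exists():
        target.unlink()
        print(f"Removed {target}")
    else:
        print(f"Nothing to do: {target} not found")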

Customers are also advised to change their secrets in case they’ve been accessed. These include ownCloud admin passwords, mail server credentials, database credentials, and Object-Store/S3 access-keys.

In a library update, ownCloud said it disabled the phpinfo function in its Docker containers and “will apply various hardenings in future core releases to mitigate similar vulnerabilities.”

The second vulnerability carries another high severity score, a near-maximum rating of 9.8 for an authentication bypass flaw that allows attackers to access, modify, or delete any file without authentication.

Tracked as CVE-2023-49105, the conditions required for a successful exploit are that the target’s username is known to the attacker and that the target has no signing-key configured, which is the default setting in ownCloud.

Exploits work here because pre-signed URLs are accepted when no signing-key is configured for the owner of the files.

The affected core versions are 10.6.0 to 10.13.0 and to mitigate the issue, users are advised to deny the use of pre-signed URLs in scenarios where no signing-key is configured.
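Conceptually, the mitigation amounts to refusing to honour a pre-signed URL unless there is actually a key to verify it against. The snippet below illustrates that principle only; it is not ownCloud’s real signing scheme, and the HMAC construction is an assumption made for the sake of a runnable example.

    # Illustrative check: never accept a pre-signed URL for an account with no signing key.
    # This is NOT ownCloud's actual signing code.
    import hmac
    import hashlib

    def accept_presigned_url(signing_key, url, signature):
        """Honour a pre-signed URL only if a key exists and the signature verifies."""
        if not signing_key or not signature:
            # No signing key configured (ownCloud's default for users) -> reject,
            # rather than trusting the URL as the vulnerable versions did.
            return False
        expected = hmac.new(signing_key, url.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

    # With no key configured, the URL is rejected outright.
    print(accept_presigned_url(None, "https://cloud.example.com/dav/files/alice/report.pdf", "abc123"))  # False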

The final vulnerability was assigned a severity score of 9 by ownCloud, a “critical” categorization, but the National Vulnerability Database has reduced this to 8.7 – a less-severe “high” classification.

It’s a subdomain validation bypass issue that affects all versions of the oauth2 library up to and including 0.6.1 when “Allow Subdomains” is enabled.

“Within the oauth2 app, an attacker is able to pass in a specially crafted redirect-url which bypasses the validation code and thus allows the attacker to redirect callbacks to a TLD controlled by the attacker,” read ownCloud’s advisory.

Source: Vulns expose ownCloud admin passwords, sensitive data • The Register

Roundcube Open-Source Webmail Software Merges With Nextcloud

The open-source Roundcube webmail software project has “merged” with Nextcloud, the prominent open-source personal cloud software.

Roundcube is joining Nextcloud in what has been described as a merger, boosting Nextcloud’s webmail capabilities. In 2024, Nextcloud will invest in Roundcube to accelerate the development of this widely used open-source webmail software. Today’s press release says Roundcube will not replace Nextcloud Mail, with no plans for merging the two in the short term.

The press release adds that there are no immediate changes for Roundcube and Nextcloud users, beyond improved integration and accelerated development beginning in the short term.


More details on today’s announcement via the Nextcloud blog.

Perhaps with this increased investment in Roundcube, some of the original plans laid out years ago for the crowdfunded Roundcube-Next will finally be realized. Roundcube-Next raised more than $100k in funding a number of years ago, only to fail to deliver the revamped software.

Source: Roundcube Open-Source Webmail Software Merges With Nextcloud – Phoronix

Considering Roundcube is used by hundreds of millions of people and is basically programmed by just one guy, the $100k raised was absolute peanuts, especially considering the ambition. Open Source hardliners take note: this shows exactly how unfair the system is – the guy who wrote this should have been a millionaire many times over. Instead, the companies profiting off his work for free have become worth millions, and so have their CEOs.

Windows users report appearance of unwanted HP app – shows you how secure automatic updating is (with no real information about what is in the updates)

Windows users are reporting that Hewlett Packard’s HP Smart application is appearing on their systems, despite them not having any of the manufacturer’s hardware attached.

While Microsoft has remained tight-lipped on what is happening, folks on various social media platforms noted the app’s appearance, which seems to afflict both Windows 10 and Windows 11.

The Windows Update mechanism is used to deploy third-party applications and drivers as well as Microsoft’s updates, and we’d bet someone somewhere has accidentally checked the wrong box.

[…]

WindowsLatest reported the issue occurring on both physical Windows 10 hardware and a Windows 11 virtual machine.

HP Smart is innocuous enough. It’s an application used in conjunction with HP’s printer hardware and can simply be uninstalled.

However, the question is how the application got installed in the first place on machines that, according to affected users, have no HP hardware attached or on the network.

[…]

Source: Windows users report appearance of unwanted HP app • The Register

Web browser suspended because it can browse the web is back on Google Play after being taken down by incomplete DMCA

Google Play has reversed its latest ban on a web browser that keeps getting targeted by vague Digital Millennium Copyright Act (DMCA) notices. Downloader, an Android TV app that combines a browser with a file manager, was restored to Google Play last night.

Downloader, made by app developer Elias Saba, was suspended on Sunday after a DMCA notice submitted by copyright-enforcement firm MarkScan on behalf of Warner Bros. Discovery. It was the second time in six months that Downloader was suspended based on a complaint that the app’s web browser is capable of loading websites.

The first suspension in May lasted three weeks, but Google reversed the latest one much more quickly. As we wrote on Monday, the MarkScan DMCA notice didn’t even list any copyrighted works that Downloader supposedly infringed upon.

Instead of identifying specific copyrighted works, the MarkScan notice said only that Downloader infringed on “Properties of Warner Bros. Discovery Inc.” In the field where a DMCA complainant is supposed to provide an example of where someone can view an authorized example of the work, MarkScan simply entered the main Warner Bros. URL: https://www.warnerbros.com/.

DMCA notice was incomplete

Google has defended its DMCA-takedown process by saying that, under the law, it is obligated to remove any content when a takedown request contains the elements required by the copyright law. But in this case, Google Play removed Downloader even though the DMCA takedown request didn’t identify a copyrighted work—one of the elements required by the DMCA.

[…]

Downloader’s first suspension in May came after several Israeli TV companies complained that the app could be used to load a pirate website. In that case, an appeal that Saba filed with Google Play was quickly rejected. He also submitted a DMCA counter-notice, which gave the complainant 10 business days to file a legal action.

[…]

Saba still needed to republish the app to make it visible to users again. “I re-submitted the app last night in the Google Play Console, as instructed in the email, and it was approved and live a few hours later,” Saba told Ars today.

In a new blog post, Saba wrote that he expected the second suspension to last a few weeks, just like the first did. He speculated that it was reversed more quickly this time because the latest DMCA notice “provided no details as to how my app was infringing on copyrighted content, which, I believe, allowed Google to invalidate the takedown request.”

“Of course, I wish Google bothered to toss out the meritless DMCA takedown request when it was first submitted, as opposed to after taking ‘another look,’ but I understand that Google is probably flooded with invalid takedown requests because the DMCA is flawed,” Saba wrote. “I’m just glad Google stepped in when it did and I didn’t have to go through the entire DMCA counter notice process. The real blame for all of this goes to Warner Bros. Discovery and other corporations for funding companies like MarkScan which has issued DMCA takedowns in the tens of millions.”

Source: Web browser suspended because it can browse the web is back on Google Play | Ars Technica

DMCA is an absolute horror of a system that is an incredibly and unfixably broken “solution” to corporate greed

FBI Director Admits Agency Rarely Has Probable Cause When It Performs Backdoor Searches Of NSA Collections

After years of continuous, unrepentant abuse of surveillance powers, the FBI is facing the real possibility of seeing Section 702 curtailed, if not scuttled entirely.

Section 702 allows the NSA to gather foreign communications in bulk. The FBI benefits from this collection by being allowed to perform “backdoor” searches of NSA collections to obtain communications originating from US citizens and residents.

There are rules to follow, of course. But the FBI has shown little interest in adhering to these rules, just as much as the NSA has shown little interest in curtailing the amount of US persons’ communications “incidentally” collected by its dragnet.

[…]

Somehow, the FBI director managed to blurt out what everyone was already thinking: that the FBI needs this backdoor access because it almost never has the probable cause to support the search warrant normally needed to access the content of US persons’ communications.

“A warrant requirement would amount to a de facto ban, because query applications either would not meet the legal standard to win court approval; or because, when the standard could be met, it would be so only after the expenditure of scarce resources, the submission and review of a lengthy legal filing, and the passage of significant time — which, in the world of rapidly evolving threats, the government often does not have,” Wray said.

Holy shit. He just flat-out admitted it: a majority of FBI searches of US persons’ communications via Section 702 are unsupported by probable cause.

[…]

Unfortunately, both the FBI and the current administration are united in their desire to keep this executive authority intact. Both Wray and the Biden administration call the warrant requirement a “red line.” So, even if the House decides it needs to go (for mostly political reasons) and/or Wyden’s reform bill lands on the President’s desk, odds are the FBI will get its wish: warrantless access to domestic communications for the foreseeable future.

Source: FBI Director Admits Agency Rarely Has Probable Cause When It Performs Backdoor Searches Of NSA Collections | Techdirt

Former GTA Developer’s Blog Removed After Rockstar Complains

Former Rockstar North developer Obbe Vermeij had been enjoying a few weeks of sharing some decades-old tales. Reminiscing on his many years with the GTA developer, Vermeij took to his personal blog to recall revealing inside stories behind games like San Andreas and Vice City, and everyone was having a good time. Until Rockstar North came along.

[…]

In the last few weeks, on his very old-school Blogger blog, Vermeij had been sharing some stories about the development processes behind the games, seemingly without any malice or ill-intent.

These included interesting insights into the original GTA and GTA 2, like how much the PC versions of the games had to be compromised so they would run on the PS1. “I remember one particular time when all of the textures for the PS version had been cut down to 16 colours,” Vermeij writes. “When the artists saw the results there was cursing. There was no choice though. Difficult choices had to be made to get the game to run on a PS.”

[…]

It seems the line was crossed for some at Rockstar after a couple of weeks of these lovely anecdotes and insights. On November 22, Vermeij removed most of the posts from the site, and added a new one explaining that after receiving an email from Rockstar North, “some of the OGs there are upset by my blog.”

I genuinely didn’t think anyone would mind me talking about 20 year old games but I was wrong. Something about ruining the Rockstar mystique or something.

Anyway,

This blog isn’t important enough to me to piss off my former colleagues in Edinburgh so I’m winding it down.

[…]

Of course, you know, nothing goes away on the internet. All the posts are a splendid, positive read.

[…]

 

Source: Former GTA Developer’s Blog Removed After Rockstar Complains

Copyright Bot Can’t Tell The Difference Between Star Trek Ship And Adult Film Actress

Given that the overwhelming majority of DMCA takedown notices are generated by copyright bots that are only moderately good at their job, at best, perhaps it’s not terribly surprising that these bots keep finding new and interesting ways to cause collateral damage.

[…]

a Tumblr site called “Mapping La Sirena.” If you’re a fan of Star Trek: Picard, you will know that’s the name of the main starship in that series. But if you’re a copyright enforcer for a certain industry, the bots you’ve set up for yourself apparently aren’t programmed with Star Trek fandom in mind.

Transparency.automattic reports that Tumblr has received numerous DMCA takedown notices from DMCA Piracy Prevention Inc, a third-party copyright monitoring service used frequently by content creators to prevent infringement of their original work. These complaints occurred all because of the name La Sirena, which also happens to be the name of an adult content creator, La Sirena 69, who is one of Piracy Prevention’s customers.

In one copyright claim, over 90 Tumblr posts were targeted by the monitoring service because of the keyword match to “la sirena.” But instead of Automattic being alerted to La Sirena 69’s potentially infringed content, the company reported many of mappinglasirena.tumblr.com’s original posts.

Pure collateral damage. While not intentional per se, this is obviously still a problem. One of two things has to be the case: either we stop allowing copyright enforcement to be farmed out to a bunch of dumb bots that suck at their jobs or we insist that the bots stop sucking, which ain’t going to happen anytime soon. What cannot be allowed to happen is to shrug this sort of thing off as an innocent accident and oh well, too bad, so sad for the impact on the speech rights of the innocent.

There was nothing that remotely infringed La Sirena 69’s content. Everything about the complaints and takedown notices was wrong.

[…]

 

Source: Copyright Bot Can’t Tell The Difference Between Star Trek Ship And Adult Film Actress | Techdirt

Limits for quantum computers: Perfect clocks are impossible, research finds

[…]

Every clock has two key properties: a certain precision and a certain time resolution. The time resolution indicates how small the time intervals are that can be measured—i.e., how quickly the clock ticks. Precision tells you how much inaccuracy you have to expect with every single tick.

The research team was able to show that since no clock has an infinite amount of energy available (or generates an infinite amount of entropy), it can never have perfect resolution and perfect precision at the same time. This sets fundamental limits to the possibilities of quantum computers.

[…]

Marcus Huber and his team investigated, in general, which laws must always apply to every conceivable clock. “Time measurement always has to do with entropy,” explains Marcus Huber. In every closed physical system, entropy increases and the system becomes more and more disordered. It is precisely this development that determines the direction of time: the future is where the entropy is higher, and the past is where the entropy is lower.

As can be shown, every measurement of time is inevitably associated with an increase in entropy: a clock, for example, needs a battery, the energy of which is ultimately converted into frictional heat and audible ticking via the clock’s mechanics—a process in which the fairly ordered state of the battery is converted into a rather disordered state of heat radiation and sound.

On this basis, the research team was able to derive a formula that basically every conceivable clock must obey. “For a given increase in entropy, there is a tradeoff between resolution and precision,” says Florian Meier, first author of the second paper, now posted to the arXiv preprint server. “That means: Either the clock works quickly or it works precisely—both are not possible at the same time.”

[…]

“Currently, the accuracy of quantum computers is still limited by other factors, for example, the precision of the components used or electromagnetic fields. But our calculations also show that today we are not far from the regime in which the fundamental limits of time measurement play the decisive role.”

[…]

More information: Florian Meier et al, Fundamental accuracy-resolution trade-off for timekeeping devices, arXiv (2023). DOI: 10.48550/arxiv.2301.05173

Source: Limits for quantum computers: Perfect clocks are impossible, research finds

How to bypass Windows Hello fingerprint login

Hardware security hackers have detailed how it’s possible to bypass Windows Hello’s fingerprint authentication and log in as someone else – if you can steal or be left alone with their vulnerable device.

The research was carried out by Blackwing Intelligence, primarily Jesse D’Aguanno and Timo Teräs, and was commissioned and sponsored by Microsoft’s Offensive Research and Security Engineering group. The pair’s findings were presented at the IT giant’s BlueHat conference last month, and made public this week. You can watch the duo’s talk below, or dive into the details in their write-up here.

For users and administrators: be aware your laptop hardware may be physically insecure and allow fingerprint authentication to be bypassed if the equipment falls into the wrong hands. We’re not sure how that can be fixed without replacing the electronics or perhaps updating the drivers and/or firmware within the fingerprint sensors. One of the researchers told us: “It’s my understanding from Microsoft that the issues were addressed by the vendors.” So check for updates or errata. We’ve asked the manufacturers named below for comment, and we will keep you updated.

For device makers: check out the above report to make sure you’re not building these design flaws into your products. Oh, and answer our emails.

The research focuses on bypassing Windows Hello’s fingerprint authentication on three laptops: a Dell Inspiron 15, a Lenovo ThinkPad T14, and a Microsoft Surface Pro 8/X, which were using fingerprint sensors from Goodix, Synaptics, and ELAN, respectively. All three were vulnerable in different ways. As far as we can tell, this isn’t so much a problem with Windows Hello or using fingerprints. It’s more due to shortcomings or oversights with the communications between the software side and the hardware.

Windows Hello allows users to log into the OS using their fingerprint. This fingerprint is stored within the sensor chipset. What’s supposed to happen, simply put, is that when you want to set up your laptop to use your print, the OS generates an ID and passes that to the sensor chip. The chip reads the user’s fingerprint, and stores the print internally, associating it with the ID number. The OS then links that ID with your user account.

Then when you come to log in, the OS asks you to present your finger, the sensor reads it, and if it matches a known print, the chip sends the corresponding ID to the operating system, which then grants you access to the account connected to that ID number. The physical communication between the chip and OS involves cryptography to, ideally, secure this authentication method from attackers.
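To see why the integrity of that chip-to-OS exchange matters, here is a deliberately simplified toy model of the enroll/match flow. It is not the real Windows Hello or SDCP protocol, just an illustration of the trust boundary: if the message that selects which fingerprint database the sensor consults is not authenticated, a man-in-the-middle can point the sensor at a database it controls – essentially the database-swap attack summarised below.

    # Toy model only -- not the real Windows Hello / SDCP protocol.
    class FingerprintSensor:
        """Stores print -> ID mappings in separate per-host 'databases'."""
        def __init__(self):
            self.databases = {"windows": {}, "other": {}}
            self.active_db = "windows"

        def enroll(self, db, print_hash, user_id):
            self.databases[db][print_hash] = user_id

        def configure(self, db):
            # In the flawed setups this configuration message is not authenticated,
            # so a man-in-the-middle device can switch the active database.
            self.active_db = db

        def match(self, print_hash):
            return self.databases[self.active_db].get(print_hash)

    sensor = FingerprintSensor()
    sensor.enroll("windows", "victim_print", user_id=42)    # legitimate user, ID 42

    # Attacker enrolls their own print under the SAME ID, but in a different table...
    sensor.enroll("other", "attacker_print", user_id=42)
    # ...then rewrites the unauthenticated config so the sensor consults that table.
    sensor.configure("other")

    # Windows only ever sees "ID 42 matched" and logs the attacker in as user 42.
    assert sensor.match("attacker_print") == 42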

But blunders in implementing this system have left at least the above named devices vulnerable to unlocking – provided one can nab the gear long enough to connect some electronics.

“In all, this research took approximately three months and resulted in three 100 percent reliable bypasses of Windows Hello authentication,” Blackwing’s D’Aguanno and Teräs wrote on Tuesday.

Here’s a summary of the techniques used and described by the infosec pair:

    • Model: Dell Inspiron 15
    • Method: If someone can boot the laptop into Linux, they can use the sensor’s Linux driver to enumerate from the sensor chip the ID numbers associated with known fingerprints. That miscreant can then store in the chip their own fingerprint with an ID number identical to the ID number of the Windows user they want to login as. The chip stores this new print-ID association in an internal database associated with Linux; it doesn’t overwrite the existing print-ID association in its internal database for Windows.

      The attacker then attaches a man-in-the-middle (MITM) device between the laptop and the sensor, and boots into Windows. The Microsoft OS sends some non-authenticated configuration data to the chip. Crucially, the MITM electronics rewrites that config data on the fly to tell the chip to use the Linux database, and not the Windows database, for fingerprints. Thus when the miscreant next touches their finger to the reader, the chip will recognize the print, return the ID number for that print from the Linux database, which is the same ID number associated with a Windows user, and Windows will log the attacker in as that user.

    • Model: Lenovo ThinkPad T14
    • Method: The attack used against the ThinkPad is similar to the one above. While the Dell machine uses Microsoft’s Secure Device Connection Protocol (SDCP) between the OS and the chip, the T14 uses TLS to secure the connection. This can be undermined to, once again using Linux, add a fingerprint with an ID associated with a Windows user and, once booted back into Windows, log in as that user using the new fingerprint.
    • Model: Microsoft Surface Pro 8 / X Type Cover with Fingerprint ID
    • Method: This is the worst. There is no security between the chip and OS at all, so the sensor can be replaced with anything that can masquerade as the chip and simply send a message to Windows saying: Yup, log that user in. And it works. Thus an attacker can log in without even presenting a fingerprint.

Interestingly enough, D’Aguanno told us restarting the PC with Linux isn’t required for exploitation – a MITM device can do the necessary probing and enrollment of a fingerprint itself while the computer is still on – so preventing the booting of non-Windows operating systems, for instance, won’t be enough to stop a thief. The equipment can be hoodwinked while it’s still up and running.

“Booting to Linux isn’t actually required for any of our attacks,” D’Aguanno told us. “On the Dell (Goodix) and ThinkPad (Synaptics), we can simply disconnect the fingerprint sensor and plug into our own gear to attack the sensors. This can also be done while the machine is on since they’re embedded USB, so they can be hot plugged.”

In that scenario, “Bitlocker wouldn’t affect the attack,” he added.

As to what happens if the stolen machine is powered off completely, and has a BIOS password, full-disk encryption, or some other pre-boot authentication, exploitation isn’t as straightforward or perhaps even possible: you’d need to get the machine booted far enough into Windows for the Blackwing team’s fingerprint bypass to work. The described techniques may work against BIOSes that check for fingerprints to proceed with the startup sequence.

“If there’s a password required to boot the machine, and the machine is off, then that could stop this just by nature of the machine not booting to the point where fingerprint authentication is available,” D’Aguanno clarified to us.

“However, at least one of the implementations allows you to use fingerprint authentication for BIOS boot authentication, too. Our focus was on the impact to Windows Hello, though, so we did not investigate that further at this point, but that may be able to be exploited too.”

The duo also urged manufacturers to use SDCP to connect sensor chips to Windows and to make sure it is enabled: “It doesn’t help if it’s not turned on.”

They also promised to provide more details in future about the vulnerabilities they exploited in all three targets, and were obviously circumspect about giving away details that could be used to crack kit.

Source: How to bypass Windows Hello, log into vulnerable laptops • The Register

Your Tastebuds Help Tell You When to Stop Eating, New Research Suggests

Our mouths might help keep our hunger in check. A recent study found evidence in mice that our brains rely on two separate pathways to regulate our sense of fullness and satiety—one originating from the gut and the other from cells in the mouth that let us perceive taste. The findings could help scientists better understand and develop anti-obesity drugs, the study authors say.

The experiment was conducted by researchers at the University of California San Francisco. They were hoping to definitively answer one of the most important and basic questions about our physiology: What actually makes us want to stop eating?

It’s long been known that the brainstem—the bottom part of the brain that controls many subconscious body functions—also helps govern fullness. The current theory is that neurons in the brainstem respond to signals from the stomach and gut as we’re eating a meal, which then trigger that feeling of having had enough. But scientists have only been able to indirectly study this process until now, according to lead author Zachary Knight, a UCSF professor of physiology in the Kavli Institute for Fundamental Neuroscience. His team was able to directly image and record the fullness-related neurons in the brainstem of alert mice right as they were chowing down.

“Our study is the first to observe these neurons while an animal eats,” Knight told Gizmodo in an email. “We found surprisingly that many of these cells respond to different signals and control feeding in different ways than was widely assumed.”

The team focused on two types of neurons in the brainstem thought to regulate fullness: prolactin-releasing hormone (PRLH) neurons and GCG neurons.

When they fed mice through the stomach alone, they found that PRLH neurons were activated by the gut, in line with prior assumptions. But when the mice ate normally, these gut signals disappeared; instead, the PRLH neurons were almost instantly activated by signals from the mouth, largely from the parts responsible for taste perception. Minutes later, the GCG neurons were activated by gut signals.

The team’s findings, published Wednesday in Nature, indicate that there are two parallel tracks of satiety in the brainstem, ones that operate at different speeds with slightly different purposes.

“We found that the first pathway—which controls how fast you eat and involves PRLH neurons—is unexpectedly activated by the taste of food,” Knight said. “This was surprising, because we all know that tasty food causes us to eat more. But our findings reveal that food tastes also function to limit the pace of ingestion, through a brainstem pathway that likely functions beneath the level of our conscious awareness.”

The second pathway, governed by the gut and GCG neurons, seems to control how much we ultimately eat, Knight added.

Mice are not humans, of course. So more research will be needed to confirm whether we have a similar system.

[…]

Source: Your Tastebuds Help Tell You When to Stop Eating, New Research Suggests

Toxic air killed more than 500,000 people in EU in 2021, data shows

Dirty air killed more than half a million people in the EU in 2021, estimates show, and about half of the deaths could have been avoided by cutting pollution to the limits recommended by doctors.

The researchers from the European Environment Agency attributed 253,000 early deaths to concentrations of fine particulates known as PM2.5 that breached the World Health Organization’s maximum guideline limits of 5µg/m3. A further 52,000 deaths came from excessive levels of nitrogen dioxide and 22,000 deaths from short-term exposure to excessive levels of ozone.

“The figures released today by the EEA remind us that air pollution is still the number one environmental health problem in the EU,” said Virginijus Sinkevičius, the EU’s environment commissioner.

Doctors say air pollution is one of the biggest killers in the world, but death tolls will drop quickly if countries clean up their economies. Between 2005 and 2021, the number of deaths from PM2.5 in the EU fell by 41%, and the EU aims to reach a 55% reduction by the end of the decade.

[…]

Source: Toxic air killed more than 500,000 people in EU in 2021, data shows | Air pollution | The Guardian

Ubisoft blames ‘technical error’ for showing pop-up ads in Assassin’s Creed

Ubisoft is blaming a “technical error” for a fullscreen pop-up ad that appeared in Assassin’s Creed Odyssey this week. Reddit users say they spotted the pop-up on Xbox and PlayStation versions of the game, with an ad appearing just when you navigate to the map screen. “This is disgusting to experience while playing,” remarked one Reddit user, summarizing the general feeling against such pop-ups in the middle of gameplay.

“We have been made aware that some players encountered pop-up ads while playing certain Assassin’s Creed titles yesterday,” says Ubisoft spokesperson Fabien Darrigues, in a statement to The Verge. “This was the result of a technical error that we addressed as soon as we learned of the issue.”

The pop-up ad appeared in the middle of gameplay. Image: triddell24 (Reddit)

While it was unclear at first why the game suddenly started showing Black Friday pop-up ads to promote Ubisoft’s latest versions of Assassin’s Creed, the publisher later explained what went wrong in a post on X (formerly Twitter). Ubisoft says it was trying to put an ad for Assassin’s Creed Mirage in the main menu of other Assassin’s Creed games. However, a “technical error” caused the promotion to show up on in-game menus instead. Ubisoft says the issue has since been fixed.

We recently saw Microsoft use fullscreen Xbox pop-up ads to promote its own games, and they’ve been annoying Xbox owners. Microsoft’s ads only appear when you boot an Xbox, and not everyone seems to be getting them. Microsoft and Ubisoft’s pop-ups are still very different to the ads we’re used to seeing on game consoles. We’ve seen games like Saints Row 2 with ads running on billboards, or plenty of in-game ads in EA Games titles in the mid-to-late 2000s.

Fullscreen pop-up ads in the middle of a game certainly aren’t common. Imagine a world where you’ve paid $70 for a game and ads still pop up in the middle of gameplay. I truly hope that Ubisoft’s “technical error” never becomes a game industry reality.

Source: Ubisoft blames ‘technical error’ for showing pop-up ads in Assassin’s Creed – The Verge

A new way to predict ship-killing rogue waves and, more importantly, to see how an AI arrives at its results

[…]

In a paper in Proceedings of the National Academy of Sciences, a group of researchers led by Dion Häfner, a computer scientist at the University of Copenhagen, describe a clever way to make AI more understandable. They have managed to build a neural network, use it to solve a tricky problem, and then capture its insights in a relatively simple five-part equation that human scientists can use and understand.

The researchers were investigating “rogue waves”, those that are much bigger than expected given the sea conditions in which they form. Maritime lore is full of walls of water suddenly swallowing ships. But it took until 1995 for scientists to measure such a wave—a 26-metre monster, amid other waves averaging 12 metres—off the coast of Norway, proving these tales to be tall only in the literal sense.

[…]

To produce something a human could follow, the researchers restricted their neural network to around a dozen inputs, each based on ocean-wave maths that scientists had already worked out. Knowing the physical meaning of each input meant the researchers could trace their paths through the network, helping them work out what the computer was up to.

The researchers trained 24 neural networks, each combining the inputs in different ways. They then chose the one that was the most consistent at making accurate predictions in a variety of circumstances, which turned out to rely on only five of the dozen inputs.
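The paper’s exact architecture and inputs aren’t reproduced here, but the selection step can be sketched: train a family of small networks, each on a different combination of inputs, and keep the one whose validation error varies least across data splits. A toy scikit-learn version with placeholder features and a synthetic target standing in for the wave data:

```python
# Toy version of the model-selection step: 24 small networks, each fed a
# different 5-input combination of a dozen stand-in wave parameters; keep the
# most consistent one. Features, target and model sizes are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 500, 12
X = rng.normal(size=(n_samples, n_features))            # stand-in wave parameters
y = X[:, 0] ** 2 + 0.5 * X[:, 3] - X[:, 7] * X[:, 0]    # stand-in rogue-wave score

candidates = [sorted(rng.choice(n_features, size=5, replace=False))
              for _ in range(24)]                        # 24 candidate input subsets

best_subset, best_spread = None, np.inf
for subset in candidates:
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    scores = cross_val_score(model, X[:, subset], y, cv=3,
                             scoring="neg_mean_squared_error")
    if scores.std() < best_spread:                       # "most consistent" = smallest spread
        best_subset, best_spread = subset, scores.std()

print("Chosen inputs:", best_subset)
```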

To generate a human-comprehensible equation, the researchers used a method inspired by natural selection in biology. They told a separate algorithm to come up with a slew of different equations using those five variables, with the aim of matching the neural network’s output as closely as possible. The best equations were mixed and combined, and the process was repeated. The result, eventually, was an equation that was simple and almost as accurate as the neural network. Both predicted rogue waves better than existing models.
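One way to reproduce the flavour of that evolutionary step is symbolic regression via genetic programming, for example with the third-party gplearn package; here it is fitted to a stand-in for the network’s output rather than to the authors’ actual model or data:

```python
# Sketch of the equation-evolution step: evolve formulas that mimic the
# neural network's predictions, not the raw measurements. gplearn is used
# as a stand-in for the authors' own evolutionary search.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                          # the five selected inputs
nn_output = X[:, 0] ** 2 + 0.5 * X[:, 1] / (1 + X[:, 2] ** 2)  # stand-in for the trained network

sr = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.01, random_state=0)
sr.fit(X, nn_output)        # mix, mutate and re-select formulas over 20 generations
print(sr._program)          # best human-readable expression found
```

Fitting to the network’s predictions rather than to the raw data is what makes the exercise an explanation of the network, instead of just another curve fit.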

The first part of the equation rediscovered a bit of existing theory: it is an approximation of a well-known equation in wave dynamics. Other parts included some terms that the researchers suspected might be involved in rogue-wave formation but are not in standard models. There were some puzzlers, too: the final bit of the equation includes a term that is inversely proportional to how spread out the energy of the waves is. Current human theories include a second variable that the machine did not replicate. One explanation is that the network was not trained on a wide enough selection of examples. Another is that the machine is right, and the second variable is not actually necessary.

Better methods for predicting rogue waves are certainly useful: some can sink even the biggest ships. But the real prize is the visibility that Dr Häfner’s approach offers into what the neural network was doing. That could give scientists ideas for tweaking their own theories—and should make it easier to know whether to trust the computer’s predictions.

Source: A new way to predict ship-killing rogue waves