About Robin Edgar

Organisational Structures | Technology and Science | Military, IT and Lifestyle consultancy | Social, Broadcast & Cross Media | Flying aircraft

FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services

For the last thirteen years the Free Software Foundation has published its Ethical Tech Giving Guide. What’s notable is that this year’s guide also tags companies and products with negative recommendations to “stay away from.”

Stay away from: iPhones
It’s not just Siri that’s creepy: all Apple devices contain software that’s hostile to users. Although they claim to be concerned about user privacy, they don’t hesitate to put their users under surveillance.

Apple prevents you from installing third-party free software on your own phone, and they use this control to censor apps that compete with or subvert Apple’s profits.

Apple has a history of exploiting their absolute control over their users to silence political activists and help governments spy on millions of users.

Stay away from: M1 MacBook and MacBook Pro
macOS is proprietary software that restricts its users’ freedoms.

In November 2020, macOS was caught alerting Apple each time a user opened an app. Even though Apple is making changes to the service, it just goes to show how badly they behave until there is a public outcry.

It comes crawling with spyware that rats you out to advertisers.

Stay away from: Amazon
Amazon is one of the most notorious DRM offenders. They use this Orwellian control over their devices and services to spy on users and keep them trapped in their walled garden.

Be aware that Amazon isn’t the only peddler of ebook DRM. Disturbingly, it’s enthusiastically supported by most of the big publishing houses.

Read more about the dangers of DRM through our Defective by Design campaign.

Stay away from: Spotify, Apple Music, and all other major streaming services
In addition to streaming music encumbered by DRM, people who want to use Spotify are required to install additional proprietary software. Even Spotify’s client for GNU/Linux relies on proprietary software.

Apple Music is no better, and places heavy restrictions on the music streamed through the platform.

Stay away from: Netflix
Netflix is continuing its disturbing trend of making onerous DRM the norm for streaming media. That’s why they were a target for last year’s International Day Against DRM (IDAD).

They’re also leveraging their place in the Motion Picture Association of America (MPAA) to advocate for tighter restrictions on users, and drove the effort to embed DRM into the fabric of the Web.

“In your gift giving this year, put freedom first,” their guide begins.

And for a freedom-respecting last-minute gift idea, they suggest giving the gift of an FSF membership (which comes with a code and a printable page “so that you can present your gift as a physical object, if you like”). The membership is valid for one year and includes the many benefits of an FSF associate membership: a USB member card, email forwarding, access to the FSF’s Jitsi Meet videoconferencing server and member forum, discounts in the FSF shop and on ThinkPenguin hardware, and more.

If you are in the United States, your gift is also fully tax-deductible.

Source: FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services – Slashdot

Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – finally. What lawsuits lie in wait?

The version of the iconic character from “Steamboat Willie” will enter the public domain in 2024. But those trying to take advantage could end up in a legal mousetrap. From a report: There is nothing soft and cuddly about the way Disney protects the characters it brings to life. This is a company that once forced a Florida day care center to remove an unauthorized Minnie Mouse mural. In 2006, Disney told a stonemason that carving Winnie the Pooh into a child’s gravestone would violate its copyright. The company pushed so hard for an extension of copyright protections in 1998 that the result was derisively nicknamed the Mickey Mouse Protection Act. For the first time, however, one of Disney’s marquee characters — Mickey himself — is set to enter the public domain.

“Steamboat Willie,” the 1928 short film that introduced Mickey to the world, will lose copyright protection in the United States and a few other countries at the end of next year, prompting fans, copyright experts and potential Mickey grabbers to wonder: How is the notoriously litigious Disney going to respond? “I’m seeing in Reddit forums and on Twitter where people — creative types — are getting excited about the possibilities, that somehow it’s going to be open season on Mickey,” said Aaron J. Moss, a partner at Greenberg Glusker in Los Angeles who specializes in copyright and trademark law. “But that is a misunderstanding of what is happening with the copyright.” The matter is more complicated than it appears, and those who try to capitalize on the expiring “Steamboat Willie” copyright could easily end up in a legal mousetrap. “The question is where Disney tries to draw the line on enforcement,” Mr. Moss said, “and if courts get involved to draw that line judicially.”

Only one copyright is expiring. It covers the original version of Mickey Mouse as seen in “Steamboat Willie,” an eight-minute short with little plot. This nonspeaking Mickey has a rat-like nose, rudimentary eyes (no pupils) and a long tail. He can be naughty. In one “Steamboat Willie” scene, he torments a cat. In another, he uses a terrified goose as a trombone. Later versions of the character remain protected by copyrights, including the sweeter, rounder Mickey with red shorts and white gloves most familiar to audiences today. They will enter the public domain at different points over the coming decades. “Disney has regularly modernized the character, not necessarily as a program of copyright management, at least initially, but to keep up with the times,” said Jane C. Ginsburg, an authority on intellectual property law who teaches at Columbia University.

Source: Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – Slashdot

How it’s remotely possible that a company is capitalising on a thought someone had around 100 years ago is beyond me.

The LastPass disclosure of leaked password vaults is being torn apart by security experts

Last week, just before Christmas, LastPass dropped a bombshell announcement: as the result of a breach in August, which led to another breach in November, hackers had gotten their hands on users’ password vaults. While the company insists that your login information is still secure, some cybersecurity experts are heavily criticizing its post, saying that it could make people feel more secure than they actually are and pointing out that this is just the latest in a series of incidents that make it hard to trust the password manager.

LastPass’ December 22nd statement was “full of omissions, half-truths and outright lies,” reads a blog post from Wladimir Palant, a security researcher known for helping originally develop Adblock Plus, among other things. Some of his criticisms deal with how the company has framed the incident and how transparent it’s being; he accuses the company of trying to portray the August incident where LastPass says “some source code and technical information were stolen” as a separate breach, when he says that in reality the company “failed to contain” the breach.

He also highlights LastPass’ admission that the leaked data included “the IP addresses from which customers were accessing the LastPass service,” saying that could let the threat actor “create a complete movement profile” of customers if LastPass was logging every IP address a customer used with its service.

Another security researcher, Jeremi Gosney, wrote a long post on Mastodon explaining his recommendation to move to another password manager. “LastPass’s claim of ‘zero knowledge’ is a bald-faced lie,” he says, alleging that the company has “about as much knowledge as a password manager can possibly get away with.”

LastPass claims its “zero knowledge” architecture keeps users safe because the company never has access to your master password, which is the thing that hackers would need to unlock the stolen vaults. While Gosney doesn’t dispute that particular point, he does say that the phrase is misleading. “I think most people envision their vault as a sort of encrypted database where the entire file is protected, but no — with LastPass, your vault is a plaintext file and only a few select fields are encrypted.”

Palant also notes that the encryption only does you any good if the hackers can’t crack your master password, which is LastPass’ main defense in its post: if you use its defaults for password length and strengthening and haven’t reused it on another site, “it would take millions of years to guess your master password using generally-available password-cracking technology,” wrote Karim Toubba, the company’s CEO.

“This prepares the ground for blaming the customers,” writes Palant, saying that “LastPass should be aware that passwords will be decrypted for at least some of their customers. And they have a convenient explanation already: these customers clearly didn’t follow their best practices.” However, he also points out that LastPass hasn’t necessarily enforced those standards. Despite the fact that it made 12-character passwords the default in 2018, Palant says, “I can log in with my eight-character password without any warnings or prompts to change it.”

LastPass’ post has even elicited a response from a competitor, 1Password — on Wednesday, the company’s principal security architect Jeffrey Goldberg wrote a post for its site titled “Not in a million years: It can take far less to crack a LastPass password.” In it, Goldberg calls LastPass’ claim of it taking a million years to crack a master password “highly misleading,” saying that the statistic appears to assume a 12-character, randomly generated password. “Passwords created by humans come nowhere near meeting that requirement,” he writes, saying that threat actors would be able to prioritize certain guesses based on how people construct passwords they can actually remember.

Of course, a competitor’s word should probably be taken with a grain of salt, though Palant echoes a similar idea in his post — he claims the viral XKCD method of creating passwords would take around 3 years to guess with a single GPU, while some 11-character passwords (that many people may consider to be good) would only take around 25 minutes to crack with the same hardware. It goes without saying that a motivated actor trying to crack into a specific target’s vault could probably throw more than one GPU at the problem, potentially cutting that time down by orders of magnitude.
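The scaling argument above is simple arithmetic: cracking time falls linearly with the number of GPUs thrown at the problem. A minimal sketch of that estimate in Python, where the guess rate is an assumed, illustrative figure (not a benchmark of any real hardware or of LastPass’ actual key-derivation settings):

```python
# Back-of-the-envelope cracking estimate. The per-GPU guess rate below is
# an assumption for illustration only, not a measured benchmark.
GUESSES_PER_SEC_PER_GPU = 1_000_000  # assumed rate against a slow KDF

def crack_time_seconds(keyspace: int, gpus: int = 1) -> float:
    # Worst-case time to exhaust the keyspace at the aggregate guess rate.
    return keyspace / (GUESSES_PER_SEC_PER_GPU * gpus)

# Four random words (XKCD-style) drawn from a 7,776-word Diceware list:
xkcd_keyspace = 7_776 ** 4

one_gpu_years = crack_time_seconds(xkcd_keyspace) / (365 * 24 * 3600)
hundred_gpu_years = crack_time_seconds(xkcd_keyspace, gpus=100) / (365 * 24 * 3600)
print(f"1 GPU: {one_gpu_years:.1f} years; 100 GPUs: {hundred_gpu_years:.2f} years")
```

The point is not the absolute numbers (those depend entirely on the assumed rate and KDF settings) but the linear scaling: a hundred GPUs cut the time by a factor of a hundred.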

Both Gosney and Palant take issue with LastPass’ actual cryptography too, though for different reasons. Gosney accuses the company of basically committing “every ‘crypto 101’ sin” with how its encryption is implemented and how it manages data once it’s been loaded into your device’s memory.

Meanwhile, Palant criticizes the company’s post for painting its password-strengthening algorithm, known as PBKDF2, as “stronger-than-typical.” The idea behind the standard is that it makes it harder to brute-force guess your passwords, as you’d have to perform a certain number of calculations on each guess. “I seriously wonder what LastPass considers typical,” writes Palant, “given that 100,000 PBKDF2 iterations are the lowest number I’ve seen in any current password manager.”
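PBKDF2’s defense is exactly that iteration count: every candidate password an attacker tries costs the full number of HMAC iterations. A minimal sketch using Python’s standard library, where the salt choice and the 100,100-iteration default are assumptions for illustration (LastPass reportedly salts with the account email, but the exact parameters here are not taken from its implementation):

```python
import hashlib

def derive_vault_key(master_password: str, email: str,
                     iterations: int = 100_100) -> bytes:
    # PBKDF2-HMAC-SHA256 stretches the master password into a 256-bit key.
    # Salt and iteration count are illustrative assumptions.
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode("utf-8"),
        email.encode("utf-8"),  # salt: assumed to be the account email
        iterations,
        dklen=32,  # 256-bit key, e.g. for AES-256
    )

weak = derive_vault_key("hunter2", "user@example.com", iterations=5_000)
strong = derive_vault_key("hunter2", "user@example.com", iterations=100_100)
# Same password, different work factor, completely different keys -- and an
# attacker must repeat the full iteration count for every single guess.
print(weak.hex()[:16], strong.hex()[:16])
```

Raising the iteration count does nothing for a weak password’s keyspace; it only multiplies the cost per guess, which is why both the iteration count and the password’s strength matter.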

[…]

Source: The LastPass disclosure of leaked password vaults is being torn apart by security experts – The Verge

EarSpy: Spying on Phone Calls via Ear Speaker Vibrations Captured by Accelerometer

As smartphone manufacturers are improving the ear speakers in their devices, it can become easier for malicious actors to leverage a particular side-channel for eavesdropping on a targeted user’s conversations, according to a team of researchers from several universities in the United States.

The attack method, named EarSpy, is described in a paper published just before Christmas by researchers from Texas A&M University, Temple University, New Jersey Institute of Technology, Rutgers University, and the University of Dayton.

EarSpy relies on the phone’s ear speaker — the speaker at the top of the device that is used when the phone is held to the ear — and the device’s built-in accelerometer for capturing the tiny vibrations generated by the speaker.

[…]

Android security has improved significantly and it has become increasingly difficult for malware to obtain the required permissions.

On the other hand, accessing raw data from the motion sensors in a smartphone does not require any special permissions. Android developers have started placing some restrictions on sensor data collection, but the EarSpy attack is still possible, the researchers said.

A piece of malware planted on a device could use the EarSpy attack to capture potentially sensitive information and send it back to the attacker.

[…]

The researchers discovered that attacks such as EarSpy are becoming increasingly feasible due to the improvements made by smartphone manufacturers to ear speakers. They conducted tests on the OnePlus 7T and the OnePlus 9 smartphones — both running Android — and found that significantly more data can be captured by the accelerometer from the ear speaker due to the stereo speakers present in these newer models compared to the older model OnePlus phones, which did not have stereo speakers.

The experiments conducted by the academic researchers analyzed the reverberation effect of ear speakers on the accelerometer by extracting time-frequency domain features and spectrograms. The analysis focused on gender recognition, speaker recognition, and speech recognition.

In the gender recognition test, whose goal is to determine whether the target is male or female, the EarSpy attack had a 98% accuracy. The accuracy was nearly as high, at 92%, for detecting the speaker’s identity.

When it comes to actual speech, the accuracy was up to 56% for capturing digits spoken in a phone call.

Source: EarSpy: Spying on Phone Calls via Ear Speaker Vibrations Captured by Accelerometer

ETSI’s Activities in Artificial Intelligence: White Paper

[…]

This White Paper, entitled ETSI Activities in the field of Artificial Intelligence, supports all stakeholders and summarizes ongoing effort in ETSI and planned future activities. It also includes an analysis of how ETSI deliverables may support current policy initiatives in the field of artificial intelligence. A section of the document outlines ETSI activities relevant to addressing societal challenges in AI, while another addresses the involvement of the European research community.

AI activities in ETSI also rely on a unique community of testing experts to ensure independently verifiable and repeatable testing of essential requirements in the field of AI. ETSI engages with its highly recognised Human Factors community to develop solutions on human oversight of AI systems.

AI requires a multitude of distinct expertise where, often, AI is not the end goal but a means to achieve the goal. For this reason, ETSI has chosen to implement a distributed approach to AI: specialized communities meet in technically focused groups. Examples include the technical committee Cyber, with a specific focus on cybersecurity aspects; ISG SAI, working towards securing AI systems; and ISG ENI, dealing with the question of how to integrate AI into a network architecture. These are three of the thirteen groups currently working on AI-related technologies within ETSI. The first initiative dates back to 2016 with the publication of a White Paper describing GANA (the Generic Autonomic Networking Architecture).

[…]

Source: ETSI – ETSI’s Activities in Artificial Intelligence: Read our New White Paper

Two people charged with hacking Ring security cameras to livestream swattings

In a reminder of smart home security’s dark side, two people hacked Ring security cameras to livestream swattings, according to a Los Angeles grand jury indictment reported by Bloomberg. The pair called in hoax emergencies to authorities and livestreamed the police response on social media in late 2020.

James Thomas Andrew McCarty, 20, of Charlotte, North Carolina, and Kya Christian Nelson, 21, of Racine, Wisconsin, hacked into Yahoo email accounts to gain access to 12 Ring cameras across nine states in November 2020 (disclaimer: Yahoo is Engadget’s parent company). In one of the incidents, Nelson claimed to be a minor reporting their parents for firing guns while drinking alcohol. When police arrived, the pair used the Ring cameras to taunt the victims and officers while livestreaming — a pattern appearing in several incidents, according to prosecutors.

[…]

Although the smart devices can deter things like robberies and “porch pirates,” Amazon admits to providing footage to police without user consent or a court order when it believes someone is in danger. Inexplicably, the tech giant made a zany reality series using Ring footage, which didn’t exactly quell concerns about the tech’s Orwellian side.

Source: Two people charged with hacking Ring security cameras to livestream swattings | Engadget

Amazing that people don’t realise that Amazon is creating a total and constant surveillance system with hardware that you paid for.

Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit

It’s been four years since Facebook became embroiled in its biggest scandal to date: Cambridge Analytica. In addition to paying $5 billion to the Federal Trade Commission in a settlement, the social network has just agreed to pay $725 million to settle a long-running class-action lawsuit, making it the biggest settlement ever in a privacy case.

To recap, a whistleblower revealed in 2018 that now-defunct British political consulting firm Cambridge Analytica harvested the personal data of almost 90 million users without their consent for targeted political ads during the 2016 US presidential campaign and the UK’s Brexit referendum.

The controversy led to Mark Zuckerberg testifying before Congress, a $5 billion fine levied on the company by the FTC in July 2019, and a $100 million settlement with the US Securities and Exchange Commission. There was also a class-action lawsuit filed in 2018 on behalf of Facebook users who alleged the company violated consumer privacy laws by sharing private data with other firms.

Facebook parent Meta settled the class action in August, thereby ensuring CEO Mark Zuckerberg, chief operating officer Javier Olivan, and former COO Sheryl Sandberg avoided hours of questioning from lawyers while under oath.

[…]

This doesn’t mark the end of Meta’s dealings with the Cambridge Analytica fallout. Zuckerberg is facing a lawsuit from Washington DC’s attorney general Karl A. Racine over allegations that the Meta boss was personally involved in failures that led to the incident and his “policies enabled a multi-year effort to mislead users about the extent of Facebook’s wrongful conduct.”

Source: Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit | TechSpot

OpenAI releases Point-E, an AI that generates 3D point clouds / meshes

[…] This week, OpenAI open sourced Point-E, a machine learning system that creates a 3D object given a text prompt. According to a paper published alongside the code base, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.

[…]

Outside of the mesh-generating model, which stands alone, Point-E consists of two models: a text-to-image model and an image-to-3D model. The text-to-image model, similar to generative art systems like OpenAI’s own DALL-E 2 and Stable Diffusion, was trained on labeled images to understand the associations between words and visual concepts. The image-to-3D model, on the other hand, was fed a set of images paired with 3D objects so that it learned to effectively translate between the two.

When given a text prompt — for example, “a 3D printable gear, a single gear 3 inches in diameter and half inch thick” — Point-E’s text-to-image model generates a synthetic rendered object that’s fed to the image-to-3D model, which then generates a point cloud.

After training the models on a dataset of “several million” 3D objects and associated metadata, Point-E could produce colored point clouds that frequently matched text prompts, the OpenAI researchers say. It’s not perfect — Point-E’s image-to-3D model sometimes fails to understand the image from the text-to-image model, resulting in a shape that doesn’t match the text prompt.

[…]

Earlier this year, Google released DreamFusion, an expanded version of Dream Fields, a generative 3D system that the company unveiled back in 2021. Unlike Dream Fields, DreamFusion requires no prior training, meaning that it can generate 3D representations of objects without 3D data.

[…]

Source: OpenAI releases Point-E, an AI that generates 3D models | TechCrunch

The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular

You may have noticed the world getting excited about the capabilities of ChatGPT, a text-based AI chat bot. Similarly, some are getting quite worked up over generative AI systems that can turn text prompts into images, including those mimicking the style of particular artists. But less remarked upon is the use of AI in the world of music. Music Business Worldwide has written two detailed news stories on the topic. The first comes from China:

Tencent Music Entertainment (TME) says that it has created and released over 1,000 tracks containing vocals created by AI tech that mimics the human voice.

And get this: one of these tracks has already surpassed 100 million streams.

Some of these songs use synthetic voices based on human singers, both dead and alive:

TME also confirmed today (November 15) that – in addition to “paying tribute” to the vocals of dead artists via the Lingyin Engine – it has also created “an AI singer lineup with the voices of trending [i.e. currently active] stars such as Yang Chaoyue, among others”.

The copyright industry will doubtless have something to say about that. It is also unlikely to be delighted by the second Music Business Worldwide story about AI-generated music, this time in the Middle East and North Africa (MENA) market:

MENA-focused Spotify rival, Anghami, is now taking the concept to a whole other level – claiming that it will soon become the first platform to host over 200,000 songs generated by AI.

Anghami has partnered with a generative music platform called Mubert, which says it allows users to create “unique soundtracks” for various uses such as social media, presentations or films using one million samples from over 4,000 musicians.

According to Mohammed Ogaily, VP Product at Anghami, the service has already “generated over 170,000 songs, based on three sets of lyrics, three talents, and 2,000 tracks generated by AI”.

It’s striking that the undoubtedly interesting but theoretical possibilities of ChatGPT and generative AI art are dominating the headlines, while we hear relatively little about these AI-based music services that are already up and running, and hugely popular with listeners. It’s probably a result of the generally parochial nature of mainstream Western media, which often ignores the important developments happening elsewhere.

Source: The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular | Techdirt

AI-Created Comic Has Copyright Protection Revoked by US

The United States Copyright Office (USCO) reversed an earlier decision to grant a copyright to a comic book that was created using “A.I. art,” and announced that the copyright protection on the comic book will be revoked, stating that copyrighted works must be created by humans to gain official copyright protection.

In September, Kris Kashtanova announced that they had received a U.S. copyright on their comic book, Zarya of the Dawn, which was inspired by their late grandmother and created with the text-to-image engine Midjourney. Kashtanova referred to themselves as a “prompt engineer” and explained at the time that they sought the copyright so that they could “make a case that we do own copyright when we make something using AI.”

[…]

Source: AI-Created Comic Has Been Deemed Ineligible for Copyright Protection

I guess there is no big corporate interest in lobbying for AI-created content – yet – and so the copyright masters have no idea what to do without their cash-carrying corporate masters telling them what to do.

ChatGPT Is a ‘Code Red’ for Google’s Search Business

A new wave of chat bots like ChatGPT use artificial intelligence that could reinvent or even replace the traditional internet search engine. From a report: Over the past three decades, a handful of products like Netscape’s web browser, Google’s search engine and Apple’s iPhone have truly upended the tech industry and made what came before them look like lumbering dinosaurs. Three weeks ago, an experimental chat bot called ChatGPT made its case to be the industry’s next big disrupter. […] Although ChatGPT still has plenty of room for improvement, its release led Google’s management to declare a “code red.” For Google, this was akin to pulling the fire alarm. Some fear the company may be approaching a moment that the biggest Silicon Valley outfits dread — the arrival of an enormous technological change that could upend the business.

For more than 20 years, the Google search engine has served as the world’s primary gateway to the internet. But with a new kind of chat bot technology poised to reinvent or even replace traditional search engines, Google could face the first serious threat to its main search business. One Google executive described the efforts as make or break for Google’s future. ChatGPT was released by an aggressive research lab called OpenAI, and Google is among the many other companies, labs and researchers that have helped build this technology. But experts believe the tech giant could struggle to compete with the newer, smaller companies developing these chat bots, because of the many ways the technology could damage its business.

Source: ChatGPT Is a ‘Code Red’ for Google’s Search Business – Slashdot

FBI warns of fake shopping sites – recommends using an ad blocker

The FBI is warning the public that cyber criminals are using search engine advertisement services to impersonate brands and direct users to malicious sites that host ransomware and steal login credentials and other financial information.

[…]

Cyber criminals purchase advertisements that appear within internet search results using a domain that is similar to an actual business or service. When a user searches for that business or service, these advertisements appear at the very top of search results with minimum distinction between an advertisement and an actual search result. These advertisements link to a webpage that looks identical to the impersonated business’s official webpage.

[…]

The FBI recommends individuals take the following precautions:

  • Before clicking on an advertisement, check the URL to make sure the site is authentic. A malicious domain name may be similar to the intended URL but with typos or a misplaced letter.
  • Rather than search for a business or financial institution, type the business’s URL into an internet browser’s address bar to access the official website directly.
  • Use an ad blocking extension when performing internet searches. Most internet browsers allow a user to add extensions, including extensions that block advertisements. These ad blockers can be turned on and off within a browser to permit advertisements on certain websites while blocking advertisements on others.
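The first bullet above (eyeballing a URL for typos) can even be roughly automated: compare a link’s hostname against the official domain and flag anything suspiciously close but not identical. A toy sketch, where the domain names, the edit-distance threshold, and the helper function are all illustrative assumptions, not anything the FBI advisory specifies:

```python
from urllib.parse import urlparse

def edit_distance(a: str, b: str) -> int:
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(url: str, official_domain: str) -> bool:
    """Flag hostnames close to, but not exactly, the official domain.
    The threshold of 2 edits is an arbitrary assumption."""
    host = urlparse(url).hostname or ""
    if host == official_domain or host.endswith("." + official_domain):
        return False  # exact match or a legitimate subdomain
    return edit_distance(host, official_domain) <= 2

print(looks_like_typosquat("https://examp1e.com/login", "example.com"))  # near-miss spelling
print(looks_like_typosquat("https://www.example.com/login", "example.com"))
```

Real typosquat detection is harder than this (homoglyphs, extra TLDs, subdomain tricks), but the sketch captures the advisory’s core idea: a malicious domain is usually only a character or two away from the real one.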

The FBI recommends businesses take the following precautions:

  • Use domain protection services to notify businesses when similar domains are registered to prevent domain spoofing.
  • Educate users about spoofed websites and the importance of confirming destination URLs are correct.
  • Educate users about where to find legitimate downloads for programs provided by the business.

Source: Internet Crime Complaint Center (IC3) | Cyber Criminals Impersonating Brands Using Search Engine Advertisement Services to Defraud Users

For Firefox there are uBlock Origin, NoScript, Disconnect, Facebook Container, Privacy Badger, Ghostery, Super Agent, and LocalCDN – you can run them all at once, but you will sometimes have to whitelist certain sites just to get them to work. It’s a bit of trouble, but the internet will look much better being mainly ad-free.

LastPass admits attackers copied password vaults

Password locker LastPass has warned customers that the August 2022 attack on its systems saw unknown parties copy encrypted files that contain the passwords to their accounts.

In a December 22nd update to its advice about the incident, LastPass brings customers up to date by explaining that in the August 2022 attack “some source code and technical information were stolen from our development environment and used to target another employee, obtaining credentials and keys which were used to access and decrypt some storage volumes within the cloud-based storage service.”

Those creds allowed the attacker to copy information “that contained basic customer account information and related metadata including company names, end-user names, billing addresses, email addresses, telephone numbers, and the IP addresses from which customers were accessing the LastPass service.”

The update reveals that the attacker also copied “customer vault” data – the file LastPass uses to let customers record their passwords.

That file “is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, as well as fully-encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data.”

Which means the attackers have users’ passwords. But thankfully those passwords are encrypted with “256-bit AES encryption and can only be decrypted with a unique encryption key derived from each user’s master password”.

LastPass’ advice is that even though attackers have that file, customers who use its default settings have nothing to do as a result of this update as “it would take millions of years to guess your master password using generally-available password-cracking technology.”

One of those default settings is not to re-use the master password that is required to log into LastPass. The outfit suggests you make it a complex credential and use that password for just one thing: accessing LastPass.

Yet we know that users are often dumbfoundingly lax at choosing good passwords, while two-thirds re-use passwords even though they should know better.

[…]

LastPass therefore offered the following advice to individual and business users:

If your master password does not make use of the defaults above, then it would significantly reduce the number of attempts needed to guess it correctly. In this case, as an extra security measure, you should consider minimizing risk by changing passwords of websites you have stored.

Enjoy changing all those passwords, dear reader.

LastPass’s update concludes with news it decommissioned the systems breached in August 2022 and has built new infrastructure that adds extra protections.

Source: LastPass admits attackers copied password vaults

Epic Forced To Pay $520 Million Fine over Fortnite Privacy and Dark Patterns

Fortnite-maker Epic Games has agreed to pay a massive $520 million fine in settlements with the Federal Trade Commission for allegedly illegally gathering data from children and deploying dark patterns techniques to manipulate users into making unwanted in-game purchases. The fines mark a major regulatory win for the Biden administration’s progressive-minded FTC, which, until now, had largely failed to deliver on its promise of more robust enforcement against U.S. tech companies.

The first $275 million fine will settle allegations that Epic collected personal information from children under the age of 13 without their parents’ consent when they played the hugely popular battle royale game. The FTC claims that unjustified data collection violates the Children’s Online Privacy Protection Act. Internal Epic surveys and the licensing of Fortnite-branded toys, the FTC alleges, show Epic clearly knew at least some of its player base was underage. Worse still, the agency claims Epic forced parents to wade through cumbersome barriers when they requested to have their children’s data deleted.

[…]

The game-maker additionally agreed to pay $245 million to refund customers who the FTC says fell victim to manipulative, unfair billing practices that fall under the category of “dark patterns.” Fortnite allegedly deployed a “counterintuitive, inconsistent, and confusing button configuration” that led players to incur unwanted charges with a single press of a button. In some cases, the FTC claims, a single button press meant users were charged while sitting in a loading screen or while trying to wake the game from sleep mode. Users, the complaint alleges, collectively lost hundreds of millions of dollars to those shady practices. Epic allegedly “ignored more than one million user complaints,” suggesting a high number of users were being wrongly charged.

[…]

And though the FTC’s latest fine is a far cry from the $5 billion penalty the agency issued against Facebook in 2019 and represents just a portion of the billions Fortnite reportedly rakes in each year, supporters said it nonetheless represents more than a mere slap on the wrist.

[…]

Source: Epic Forced To Pay Record-Breaking $520 Million Fine

China’s Setting the Standard for Deepfake Regulation

[…]

On January 10, according to the South China Morning Post, China’s Cyberspace Administration will implement new rules intended to protect people from having their voice or image digitally impersonated without their consent. The regulators refer to platforms and services using the technology to edit a person’s voice or image as “deep synthesis providers.”

Those deep synthesis technologies could include the use of deep learning algorithms and augmented reality to generate text, audio, images or video. We’ve already seen numerous instances over the years of these technologies used to impersonate high profile individuals, ranging from celebrities and tech executives to political figures.

Under the new guidelines, companies and technologists who use the technology must first contact and obtain consent from individuals before editing their voice or image. The rules, officially called the Administrative Provisions on Deep Synthesis for Internet Information Services, come in response to governmental concerns that advances in AI tech could be used by bad actors to run scams or defame people by impersonating their identity. In presenting the guidelines, the regulators also acknowledge areas where these technologies could prove useful. Rather than impose a wholesale ban, the regulator says it would actually promote the tech’s legal use and “provide powerful legal protection to ensure and facilitate” its development.

But, like many of China’s proposed tech policies, political considerations are inseparable. According to the South China Morning Post, news stories reposted using the technology must come from a government-approved list of news outlets. Similarly, the rules require that all so-called deep synthesis providers adhere to local laws and maintain “correct political direction and correct public opinion orientation.” Correct here, of course, is determined unilaterally by the state.

Though certain U.S. states like New Jersey and Illinois have introduced local privacy legislation that addresses deepfakes, the lack of any meaningful federal privacy laws limits regulators’ abilities to address the tech on a national level. In the private sector, major U.S. platforms like Facebook and Twitter have created new systems meant to detect and flag deepfakes, though they are constantly trying to stay one step ahead of bad actors continually looking for ways to evade those filters.

If China’s new rules are successful, it could lay down a policy framework other nations could build upon and adapt. It wouldn’t be the first time China’s led the pack on strict tech reform. Last year, China introduced sweeping new data privacy laws that radically limited the ways private companies could collect an individual’s personal identity. Those rules were built off of Europe’s General Data Protection Regulation.

[…]

That all sounds great, but China’s privacy laws have one glaring loophole tucked within them. Though the law protects people from private companies feeding off their data, it does almost nothing to prevent those same harms being carried out by the government. Similarly, with deepfakes, it’s unclear how the newly proposed regulations would, for instance, prohibit a state-run agency from doctoring or manipulating certain text or audio to influence the narrative around controversial or sensitive political events.

Source: China’s Setting the Standard for Deepfake Regulation

China is also the one setting the bar for anti-monopoly enforcement; the EU and US have been caught with their fingers in the jam jar and their pants down.

Transparent sunlight-activated antifogging metamaterials

[…] Here, guided by nucleation thermodynamics, we design a transparent, sunlight-activated, photothermal coating to inhibit fogging. The metamaterial coating contains a nanoscopically thin percolating gold layer and is most absorptive in the near-infrared range, where half of the sunlight energy resides, thus maintaining visible transparency. The photoinduced heating effect enables sustained and superior fog prevention (4-fold improvement) and removal (3-fold improvement) compared with uncoated samples, and overall impressive performance, indoors and outdoors, even under cloudy conditions. The extreme thinness (~10 nm) of the coating—which can be produced by standard, readily scalable fabrication processes—enables integration beneath other coatings […]

Source: Transparent sunlight-activated antifogging metamaterials | Nature Nanotechnology

Skyglow pollution is separating us from the stars but also killing earth knowledge and species

[…]

It’s not only star gazing that’s in jeopardy. Culture, wildlife and other scientific advancements are being threatened by mass light infrastructure that is costing cities billions of dollars a year as it expands alongside exponential population growth.

Some researchers call light pollution cultural genocide. Generations of complex knowledge systems, built by Indigenous Australians and Torres Strait Islanders upon a once-clear view of the Milky Way, are being lost.

In the natural world, the mountain pygmy possum, a marsupial native to Australia, is critically endangered. Its main food source, the bogong moth, is being affected by artificial outdoor lighting messing with its migration patterns. Sea turtles are exhibiting erratic nesting and migration behaviours due to lights blasting from new coastal developments.

So how bright does our future look under a blanket of light?

“If you go to Mount Coot-tha, basically the highest point in Brisbane, every streetlight you can see from up there is a waste of energy,” Downs says. “Why is light going up and being wasted into the atmosphere? There’s no need for it.”

Skyglow

Around the world, one in three people can’t see the Milky Way at night because their skies are excessively illuminated. Four in five people live in towns and cities that emit enough light to limit their view of the stars. In Europe, that figure soars to 99%.

Blame skyglow – the unnecessary illumination of the sky above, and surrounding, an urban area. It’s easy to see it if you travel an hour from a city, turn around, then look back towards its centre.

[…]

Artificial lights at night cause skyglow in two ways: spill and glare. Light spills from a bulb when it trespasses beyond the area intended to be lit, while glare is a visual sensation caused by excessive brightness.

Streetlights contribute hugely to this skyglow and have been causing astronomers anxiety for decades.

[…]

Source: Blinded by the light: how skyglow pollution is separating us from the stars | Queensland | The Guardian

Hertz Shells Out $168 Million To Settle 364 False Theft Reports

[…]

Months of reporting tied to lawsuits filed by Hertz renters falsely accused of theft should now come to a halt. Maybe.

Here’s the company’s statement on the multi-million dollar settlement, which doesn’t say much about Hertz’s culpability, nor any plans it has in place to prevent something that has only occurred with this rental company from happening again.

Hertz Global Holdings, Inc. (NASDAQ: HTZ) today announced the settlement of 364 pending claims relating to vehicle theft reporting, bringing resolution to more than 95% of its pending theft reporting claims. The company will pay an aggregate amount of approximately $168 million by year-end to resolve these disputes. The company believes it will recover a meaningful portion of the settlement amount from its insurance carriers.  

[…]

First, it’s only “95%” of pending theft reporting claims, which means the company is still somewhat tied up in litigation.

Second, while it may hurt Hertz a bit to cough up roughly a half-million per bogus theft claim, it appears it won’t hurt much. Apparently, its insurance carrier will be footing the bill, which means as long as its insurers are willing to cover costs related to horrendous inventory control practices, there’s really no deterrent in place to prevent this sort of thing (a sort of thing extremely particular to Hertz) from happening again.

Third, the CEO’s statement portrays the false arrest of people as a commonplace customer service failure, rather than the potentially deadly, life disrupting experience it is.

Fourth, the plans for “moving forward” do not address the underlying issues. Instead, the CEO touts a future full of app usage and electric vehicles, something that’s apparently meant to make us forgive its recent past full of sloppy inventory control, outsourcing of repo work to local cops, and a reputation for converting honest renters into criminals.

The statement also says nothing about the company’s unwillingness to drop bogus prosecutions of renters despite being sued multiple times.

[…]

The CEO promised to clean this mess up, but he’s the same person who hasn’t explained why his company has allowed prosecutions over bogus theft reports to proceed even though Hertz was aware the reports were false.

[…]

Source: Hertz Shells Out $168 Million To Settle 364 False Theft Reports | Techdirt

Z-Wave Alliance Announces Z-Wave Source Code Project is Complete, Now Open and Widely Available to Members

The Z-Wave Alliance, the Standards Development Organization (SDO) dedicated to advancing the smart home and Z-Wave® technology, today announced the completion of the Z-Wave Source Code project, which has been published and made available on GitHub to Alliance members.

The Z-Wave Source Code Project opens development of Z-Wave and enables members to contribute code to shape the future of the protocol under the supervision of the new OS Work Group (OSWG).

[…]

For more information on joining the Z-Wave Alliance, please visit http://z-wavealliance.org.

Source: Z-Wave Alliance Announces Z-Wave Source Code Project is Complete, Now Open and Widely Available to Members – z-wavealliance

So Open Source but not FOSS

Epic Cutting Off Online Service, Servers For Some Old Games

Fortnite developer Epic Games announced today that it will no longer provide online service or servers for 17 older games, including six from the Unreal series dating back as far as 1998, and it will end access to some additional games entirely.

[…]

The full list of affected games is as follows:

  • 1000 Tiny Claws
  • Dance Central 1
  • Dance Central 2
  • Dance Central 3 (Epic notes that Dance Central VR online multiplayer “will remain available”)
  • Green Day: Rock Band
  • Monsters (Probably) Stole My Princess
  • Rock Band 1
  • Rock Band 2
  • Rock Band 3 (Epic notes that Rock Band 4 online multiplayer “will remain available”)
  • The Beatles: Rock Band
  • Supersonic Acrobatic Rocket-Powered Battle-Cars
  • Unreal Gold
  • Unreal II: The Awakening
  • Unreal Tournament 2003
  • Unreal Tournament 2004
  • Unreal Tournament 3 (Epic notes that it has “plans to bring back online features via Epic Online Services in the future.”)
  • Unreal Tournament: Game of the Year Edition

[…]

On top of the online service changes, Epic wrote that it has already removed the Mac and Linux versions of the bird dating sim Hatoful Boyfriend, first released in 2011, Hatoful Boyfriend: Holiday Star, and the mobile game DropMix, only five years old, from digital storefronts. As of writing, though, the former two are still available on Steam.

And the last Band-Aid: though you can play those previous games if you own them, Epic is performing a few total shutdowns. Players will lose access to the following titles on their specified removal dates:

  • Battle Breakers on December 30 (“We will automatically refund players for any in-game purchases made via Epic direct payment 180 days prior to today,” Epic said in its blog)
  • Unreal Tournament (Alpha) on January 24
  • Rock Band Blitz on January 24
  • Rock Band Companion app on January 24
  • SingSpace on January 24

For some fans, Epic’s seemingly sudden decision to stop servicing games or obliterate them entirely comes as a disappointing shock, and serves as writing on the wall for the state of digital game preservation. All I can say is this is your last chance to top your Rock Band high score.

Source: Epic Cutting Off Online Service, Servers For Some Old Games

U.S. authorities charge 8 social media influencers in pump and dump plan

U.S. prosecutors on Wednesday said they have charged eight individuals in a securities fraud scheme, alleging they reaped about $114 million by using Twitter and Discord to manipulate stocks.

The eight men allegedly purported to be successful traders on the social media platforms and then engaged in a so-called “pump and dump” scheme by hyping particular stocks to their followers with the intent to dump them once prices had risen, according to prosecutors in the Southern District of Texas.

The U.S. Securities and Exchange Commission (SEC) said it has filed related civil charges against the defendants in the scheme, claiming that seven of the defendants used Twitter and Discord to boost stocks. It said the eighth was charged with aiding and abetting the scheme with his podcast.

[…]

The individuals charged were Texas residents Edward Constantinescu, Perry Matlock, John Rybarczyk and Dan Knight, along with California residents Gary Deel and Tom Cooperman, Stefan Hrvatin of Miami and Mitchell Hennessey of Hoboken, New Jersey.

[…]

Source: U.S. authorities charge 8 social media influencers in securities fraud scheme | Reuters

Only 8? How about the ones on CNBC?

Large Hadron Collider Beauty releases first set of data to the public

[…] While all scientific results from the LHCb collaboration are already publicly available through open access papers, the data used by the researchers to produce these results is now accessible to anyone in the world through the CERN open data portal. The data release is made in the context of CERN’s Open Science Policy, reflecting the values of transparency and international collaboration enshrined in the CERN Convention for more than 60 years.

[…]

The data sample made available amounts to 20% of the total data set collected by the LHCb experiment in 2011 and 2012 during LHC Run 1. It comprises 200 terabytes containing information obtained from proton–proton collision events filtered and recorded with the detector.

[…]

The analysis of LHC data is a complex and time-consuming exercise. Therefore, to facilitate the analysis, the samples are accompanied by extensive documentation and metadata, as well as a glossary explaining several hundred special terms used in the preprocessing. The data can be analyzed using dedicated LHCb algorithms, which are available as open-source software.

[…]

More information: CERN open data portal

Source: Large Hadron Collider Beauty releases first set of data to the public

Google must delete search results about you if they’re fake, EU court rules

People in Europe can get Google to delete search results about them if they prove the information is “manifestly inaccurate,” the EU’s top court ruled Thursday.

The case kicked off when two investment managers requested Google to dereference results of a search made on the basis of their names, which provided links to certain articles criticising that group’s investment model. They say those articles contain inaccurate claims.

Google refused to comply, arguing that it was unaware whether the information contained in the articles was accurate or not.

But in a ruling Thursday, the Court of Justice of the European Union opened the door to the investment managers being able to successfully trigger the so-called “right to be forgotten” under the EU’s General Data Protection Regulation.

“The right to freedom of expression and information cannot be taken into account where, at the very least, a part – which is not of minor importance – of the information found in the referenced content proves to be inaccurate,” the court said in a press release accompanying the ruling.

People who want to scrub inaccurate results from search engines have to provide sufficient proof that what is said about them is false. But it doesn’t have to come from a court case against a publisher, for instance. They have “to provide only evidence that can reasonably be required of [them] to try to find,” the court said.

[…]

Source: Google must delete search results about you if they’re fake, EU court rules – POLITICO

JetBlue no longer plans to offset emissions from domestic flights, will use sustainable fuel instead

[…] Back in 2020, JetBlue became the first US airline to voluntarily offset greenhouse gas emissions from all of its domestic flights. That effort ends in 2023, the company announced this week.

The airline now plans to effectively cut its per-seat emissions in half by 2035. For flights to take off without generating as much pollution, JetBlue says its planes will need to run on sustainable aviation fuels [SAF].

“JetBlue views SAF as the most promising avenue for addressing aviation emissions in a meaningful and rapid way – once cost-effective SAF is made available commercially at scale,” the company said in a December 6th press release.

Since 2020, JetBlue’s routes between San Francisco and Los Angeles have regularly run on sustainable aviation fuels. But the company’s eventually going to need a lot more SAF, which can be made from waste or crops like corn. It’s seen as a potential “bridge fuel” while electric planes and hydrogen-powered jets are still in development. JetBlue has inked deals with several companies to purchase more SAF, but it’s still in pretty limited supply and is more expensive than conventional kerosene jet fuel.

There are environmental challenges with SAF, too. Making and burning SAF still generates CO2 emissions. A lot of that CO2 is supposed to be canceled out by crops grown to produce the fuel, but there are also concerns about those crops leading to more deforestation.

[…]

In October, a report found that eight of Europe’s biggest airlines use carbon offsets to make customers think their flights are greener than they actually are. The airlines purchased poor-quality carbon offsets unlikely to actually reduce carbon dioxide emissions, according to the report.

Carbon offsets are supposed to cancel out the pollution from burning aviation fuel by reducing emissions elsewhere — usually through investments in renewable energy or forestry projects that rely on trees’ ability to trap carbon dioxide. But years of investigations and research have found that most carbon offsets on the market don’t actually represent real-world reductions in pollution.

[…]

Source: JetBlue no longer plans to offset emissions from domestic flights – The Verge