Game Dev Turns Down $500k Exploitative Contract, explains why – looks like music industry contracts

Receiving a publishing deal from an indie publisher can be a turning point for an independent developer. But when one-man team Jakefriend was approached with an offer to invest half a million Canadian dollars into his hand-drawn action-adventure game Scrabdackle, he discovered the contract’s terms could see him signing himself into a lifetime of debt, losing all rights to his game, and even paying for it to be completed by others out of his own money.

In a lengthy thread on Twitter, indie developer Jakefriend explained the reasons he had turned down the half-million publishing deal for his Kickstarter-funded project, Scrabdackle. Having already raised CA$44,552 through crowdfunding, he was offered an investment that could have seen his game released in multiple languages, with full QA testing, and launched simultaneously on PC and Switch. He just had to sign a contract including clauses that could leave him financially responsible for the game’s completion, while receiving no revenue at all, should he breach its terms.

“I turned down a pretty big publishing contract today for about half a million in total investment,” begins Jake’s thread. Without identifying the publisher, he continues, “They genuinely wanted to work with me, but couldn’t see what was exploitative about the terms. I’m not under an NDA, wanna talk about it?”

Over the following 24 tweets, the developer lays out the key issues with the contract, focusing especially on the proposed revenue share. While the unnamed publisher would eventually offer a 50:50 split of revenues (albeit minus up to 10% for other sundry costs, including—very weirdly—international sales taxes), this wouldn’t happen until 50% of the marketing spend (approximately CA$200,000/US$159,000) and the entirety of his development funds (CA$65,000, Jake confirms to me via Discord) had been recouped from sales. That works out to about 24,000 copies of the game, before which its developer would receive precisely 0% of revenue.
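
To put those recoupment terms in perspective, here is a back-of-the-envelope sketch. The per-copy price and the storefront cut are assumptions of mine, not figures from the contract:

```python
# Rough recoupment arithmetic for the deal described above.
# The retail price and the 30% storefront cut are assumed values.
marketing_recoup = 200_000   # CA$, roughly 50% of the marketing spend
dev_funds_recoup = 65_000    # CA$, the full development advance
price_per_copy = 20.00       # CA$, assumed average selling price
storefront_cut = 0.30        # assumed platform fee (Steam, eShop, etc.)

net_per_copy = price_per_copy * (1 - storefront_cut)
copies_needed = (marketing_recoup + dev_funds_recoup) / net_per_copy
print(f"Copies sold before the developer sees any revenue: ~{copies_needed:,.0f}")
# ~18,900 with these assumptions; a lower average selling price (regional
# pricing, launch discounts) pushes the figure toward the ~24,000 copies
# cited in the thread.
```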

Even then, Scrabdackle’s lone developer explains, the contract made clear there would be no payments until a further 30 days after the end of the next quarter, with a further clause that allowed yet another three month delay beyond that. All this with no legal requirement to show him their financial records.

Should Jake want to challenge the sales data for the game, he’d be required to call for an audit, which he’d have to pay for whether there were issues or not. And should it turn out that there were discrepancies, there’d be no financial penalty for the publisher, merely the requirement to pay the missing amount—which he would have to hope would be enough to cover paying for the audit in the first place.

Another section of the contract explained that should there be disagreement about the direction of the game, the publisher could overrule and bring in a third-party developer to make the changes Jake would not, at Jake’s personal expense. With no spending limit on that figure.

But perhaps most surprising was a section declaring that should the developer be found in breach of the contract—something Jake explains is too ambiguously defined—then they would lose all rights to their game, receive no revenue from its sales, have to repay all the money they received, and pay for all further development costs to see the game completed. And here again there was no upper limit on what those costs could be.

It might seem obvious that no one should ever sign a contract containing clauses just so ridiculous. To be liable—at the publisher’s whim—for unlimited costs to complete a game, while also required to pay back all funds (likely already spent), and receiving no income from the game’s sales… Who would ever agree to such a thing? Well, as Jake tells me via Discord, an awful lot of independent developers, desperate for some financial support to finish their project. The contract described in his tweets might sound egregious, but the reality is that most publishing contracts offered to indie devs contain awful terms of some kind.

“My close indie dev friends discuss what we’re able to of contracts frequently,” he says, “and the only thing surprising to them about mine is that it hit all the typical red flags instead of typically most of them. We’re all extremely fatigued and disheartened by how mundane an unjust contract offer is. It’s unfair and it’s tiring.”

Jake makes it clear that he doesn’t believe the people who contacted him were being maliciously predatory, but rather they were simply too used to the shitty terms. “I felt genuinely no sense of wanting to give me a bad deal with the scouts and producers I was speaking to, but I have to assume they are aware of the problems and are just used to that being the norm as well.”

Since posting the thread, Jake tells me he’s heard from a lot of other developers who described the terms to which he objected as “sadly all-too-familiar.” At one point the creator of The Witness, Jonathan Blow, replied to the thread saying, “I can guess who the publisher is because I have seen equivalent contracts.” Except Jake’s fairly certain he’d be wrong.

“The problem is so widespread,” Jake explains, “that when you describe the worst of terms, everyone thinks they know who it is and everyone has a different guess.”

While putting this piece together, I reached out to boutique indie publisher Mike Rose of No More Robots, to see if he had seen anything similar, and indeed who he thought the publisher might be. “Honestly, it could be anyone,” he replied via Discord. “What [Jake] described is very much the norm. All of the big publishers you like, his description is all of their contracts.”

This is very much a point that Jake wants to make clear. In fact, it’s why he didn’t identify the publisher in his thread. It wasn’t to spare their blushes, or to protect his own future opportunities; rather, he explains, it was to ensure his experience couldn’t be taken advantage of by other indie publishers. “I don’t want to let others with equally bad practices off the hook,” he tells me. “As soon as I say ‘It was SoAndSo Publishing’, everyone else can say, ‘Wow, can’t believe it, glad we’re not like that,’ and have deniability.”

I also reached out to a few of the larger indie publishers, listing the main points of contention in Jake’s thread, to see if they had any comments. The only company that replied by the time of publication was Devolver. I was told,

“Publishing contracts have dozens of variables involved and a developer should rightfully decline points and clauses that make them feel uncomfortable or taken advantage of in what should be an equitable relationship with their partner—publisher, investor, or otherwise. Rev share and recoupment in particular should be weighed on factors like investment, risk, and opportunity for both parties and ultimately land on something where everyone feels like they are receiving a fair shake on what was put forth on the project. While I have not seen the full contract and context, most of the bullet points you placed here aren’t standard practice for our team.”

Where does this leave Jake and the future of Scrabdackle? “The Kickstarter funds only barely pay my costs for the next 10 months,” he tells Kotaku. “So there’s no Switch port or marketing budget to speak of. Nonetheless, I feel more motivated than ever going it alone.”

I asked if he would still consider a more reasonable publishing deal at this point. “This was a hobby project that only became something more when popular demand from an incredible and large community rallied for me to build a crowdfunding campaign…A publisher can offer a lot to an indie project, and a good deal is the difference between gamedev being a year-long stint or a long-term career for me, but that’s not worth the pound of flesh I was asked for.”

Source: Game Dev Turns Down Half Million Dollar Exploitative Contract

For the music industry:

Source: Courtney Love does the math

Source: How much do musicians really make from Spotify, iTunes and YouTube?

Source: How Musicians Make Money — Or Don’t at All — in 2018

Source: Kanye’s Contracts Reveal Dark Truths About the Music Industry

Source: Smiles and tears when “slave contract” controls the lives of K-Pop artists.

Source: Youtube’s support for musicians comes with a catch

Stop using Zoom, Hamburg’s DPA warns state government – The US does not safeguard EU citizen data

Hamburg’s state government has been formally warned against using Zoom over data protection concerns.

The German state’s data protection agency (DPA) took the step of issuing a public warning yesterday, writing in a press release that the Senate Chancellery’s use of the popular videoconferencing tool violates the European Union’s General Data Protection Regulation (GDPR) since user data is transferred to the U.S. for processing.

The DPA’s concern follows a landmark ruling (Schrems II) by Europe’s top court last summer which invalidated a flagship data transfer arrangement between the EU and the U.S. (Privacy Shield), finding U.S. surveillance law to be incompatible with EU privacy rights.

The fallout from Schrems II has been slow to manifest — beyond an instant blanket of legal uncertainty. However, a number of European DPAs are now investigating the use of U.S.-based digital services because of the data transfer issue, in some instances publicly warning against the use of mainstream U.S. tools like Facebook and Zoom because user data cannot be adequately safeguarded when it’s taken over the pond.

German agencies are among the most proactive in this respect. But the EU’s data protection supervisor is also investigating the bloc’s use of cloud services from U.S. giants Amazon and Microsoft over the same data transfer concern.

[…]

The agency asserts that use of Zoom by the public body does not comply with the GDPR’s requirement for a valid legal basis for processing personal data, writing: “The documents submitted by the Senate Chancellery on the use of Zoom show that [GDPR] standards are not being adhered to.”

The DPA initiated a formal procedure earlier, via a hearing, on June 17, 2021, but says the Senate Chancellery failed to stop using the videoconferencing tool. Nor did it provide any additional documents or arguments to demonstrate compliant usage. Hence the DPA took the step of issuing a formal warning, under Article 58 (2) (a) of the GDPR.

[…]

Source: Stop using Zoom, Hamburg’s DPA warns state government | TechCrunch

How to Limit Spotify From Tracking You, Because It Knows Too Much – and sells it

Most Spotify users are likely aware the streaming service tracks their listening activity, search history, playlists, and the songs they like or skip—that’s all part of helping the algorithm figure out what you like, right? However, some users may be less OK with how much other data Spotify and its partners are logging.

According to Spotify’s privacy policy, the company tracks:

  • Your name
  • Email address
  • Phone number
  • Date of birth
  • Gender
  • Street address, country, and other GPS location data
  • Login info
  • Billing info
  • Website cookies
  • IP address
  • Facebook user ID, login information, likes, and other data.
  • Device information like accelerometer or gyroscope data, operating system, model, browser, and even some data from other devices on your wifi network.

This information helps Spotify tailor song and artist recommendations to your tastes and is used to improve the in-app user experience, sure. However, the company also uses it to attract advertising partners, who can create personalized ads based on your information. And that doesn’t even touch on the third-party cross-site trackers that are eagerly eyeing your Spotify activity too.

Treating people and their data like a consumable resource is scummy, but it’s common practice for most companies and websites these days, and the common response from the general public is typically a shrug (never mind that a survey of US adults revealed we place a high value on our personal data). However, it’s still a security risk. As we’ve seen repeatedly over the years, all it takes is one poorly-secured server or an unusually skilled hacker to compromise the personal data that companies like Spotify hold onto.

And to top things off, almost all of your Spotify profile’s information is public by default—so anyone else with a Spotify account can easily look you up unless you go out of your way to change your settings.

Luckily, you can limit some of the data Spotify and connected third-party apps collect, and can review the personal information the app has stored. Spotify doesn’t offer that many data privacy options, and many of them are spread out across its web, desktop, and mobile apps, but we’ll show you where to find them all and which ones you should enable for the most private Spotify listening experience possible. You know, relatively.

How to change your Spotify account’s privacy settings

The web player is where to start if you want to tune up your Spotify privacy. Almost all of Spotify’s data privacy settings are found there, rather than in the mobile or desktop apps.

We’ll start by cutting down on how much personal data you share with Spotify.

  1. Log in to Spotify’s web player on desktop.
  2. Click your user icon then go to Account > Edit profile.
  3. Remove or edit any personal info that you’re able to.
  4. Uncheck “Share my registration data with Spotify’s content providers for marketing purposes.”
  5. Click “Save Changes.”

Next, let’s limit how Spotify uses your personal data for advertising.

  1. Go to Account > Privacy settings.
  2. Turn off “Process my personal data for tailored ads.” Note that you’ll still get just as many ads—and Spotify will still track you—but your personal data will no longer be used to deliver you targeted ads.
  3. Turn off “Process my Facebook data.” This will stop Spotify from using your Facebook account data to further refine the ads you hear.

Lastly, go to Account > Apps to review all the external apps linked to your Spotify account and see a list of all devices you’re logged in to. Remove any you don’t need or use anymore.

How to review your Spotify account data

You can also see how much of your personal data Spotify has collected. At the bottom of the Privacy Settings page, there’s an option to download your Spotify data for review. While you can’t remove this data from your account, it shows you a selection of personal information, your listening and search history, and other data the company has collected. Click “Request” to begin the process. Note that it can take up to 30 days for Spotify to get your data ready for download.

How to hide public playlists and listening activity on Spotify

Your Spotify playlists and listening activity are public by default, but you can quickly turn them off or even block certain listening activity in Spotify’s web and desktop apps. While this doesn’t affect Spotify’s data tracking, it’s still a good idea to keep some info hidden if you’re trying to make Spotify as private as possible.

How to turn off Spotify listening activity

Desktop

  1. Click your profile image and go to Settings > Social
  2. Turn off “Make my new playlists public.”
  3. Turn off “Share my listening activity on Spotify.”

Mobile

  1. Tap the settings icon in the upper-right of the app.
  2. Scroll down to “Social.”
  3. Disable “Listening Activity.”

How to hide Spotify Playlists

Don’t forget to hide previously created playlists, which are made public by default. This can be done from the desktop, web, and mobile apps.

Mobile

  1. Open the “Your Library” tab.
  2. Select a playlist.
  3. Tap the three-dot icon in the upper-right of the screen.
  4. Select “Make Secret.”

Desktop app and web player

  1. Open a playlist from the library bar on the left.
  2. Click the three-dot icon by the Playlist’s name.
  3. Select “Make Secret.”

How to use Private Listening mode on Spotify

Spotify’s Private Listening mode also hides your listening activity, but you need to enable it manually each time you want to use it.

Mobile

  1. In the app, go to Settings > Social.
  2. Tap “Enable private session.”

Desktop app and web player

There are three ways to enable a Private session on desktop:

  • Click your profile picture then select “Private session.”
  • Or, click the “…” icon in the upper-left and go to File > Private session.
  • Or, go to Settings > Social and toggle “Start a private session to listen anonymously.”

Note that Private sessions only affect what other users see (or don’t see, rather). It doesn’t stop Spotify from tracking your activity—though as Wired points out, Spotify’s Privacy Policy vaguely implies Private Mode “may not influence” your recommendations, so it’s possible some data isn’t tracked while this mode is turned on. It’s better to use the privacy controls outlined in the sections above if you want to change how Spotify collects data.

How to limit third-party cookie tracking in Spotify

Turning on the privacy settings above will help reduce how much data Spotify tracks and uses for advertising and keep some of your Spotify listening history hidden from other users, but you should also take steps to limit how other apps and websites track your Spotify activity.


The desktop app has built-in cookie blocking controls that can do this:

  1. In the desktop app, click your username in the top right corner.
  2. Go to Settings > Show advanced settings.
  3. Scroll down to “Privacy” and turn on “Block all cookies for this installation of the Spotify desktop app.”
  4. Close and restart the app for the change to take effect.

For iOS and iPad users, you can disable app tracking in your device’s settings. Android users have a similar option, though it’s not as aggressive. And for those listening on the Spotify web player, use browsers with strict privacy controls like Safari, Firefox, or Brave.

The last resort: Delete your Spotify account

Even with all possible privacy settings turned on and Private Listening sessions enabled at all times, Spotify is still tracking your data. If that is absolutely unacceptable to you, the only real option is to delete your account. This will remove all your Spotify data for good—just make sure you download and back up any data you want to import to other services before you go through with it.

  1. Go to the Contact Spotify Support web page and sign in with your Spotify account.
  2. Select the “Account” section.
  3. Click “I want to close my account” from the list of options.
  4. Scroll down to the bottom of the page and click “Close Account.”
  5. Follow the on-screen prompts, clicking “Continue” each time to move forward.
  6. After the final confirmation, Spotify will send you an email with the cancellation link. Click the “Close My Account” button to verify you want to delete your account (this link is only active for 24 hours).

To be clear, we’re not advocating everyone go out and delete their Spotify accounts over the company’s privacy policy and advertising practices, but it’s always important to know how—and why—the apps and websites we use are tracking us. As we said at the top, even companies with the best intentions can fumble your data, unwittingly delivering it into the wrong hands.

Even if you’re cool with Spotify tracking you and don’t feel like enabling the options we’ve outlined in this guide, take a moment to tune up your account’s privacy with a strong password and two-factor sign-in, and remove any unnecessary info from your profile. These extra steps will help keep you safe if there’s ever an unexpected security breach.

Source: How to Limit Spotify From Tracking You, Because It Knows Too Much

Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely

[…]

an AI on your phone will scan all those you have sent and will send to iPhotos. It will generate fingerprints that purportedly identify pictures, even if highly modified, that will be checked against fingerprints of known CSAM material. Too many of these – there’s a threshold – and Apple’s systems will let Apple staff investigate. They won’t get the pictures, but rather a voucher containing a version of the picture. But that’s not the picture, OK? If it all looks too dodgy, Apple will inform the authorities

[…]

In a blog post “Recognizing People in Photos Through Private On-Device Machine Learning” last month, Apple plumped itself up and strutted its funky stuff on how good its new person recognition process is. Obscured, oddly lit, accessorised, madly angled and other bizarrely presented faces are no problemo, squire.

By dint of extreme cleverness and lots of on-chip AI, Apple says it can efficiently recognise everyone in a gallery of photos. It even has a Hawking-grade equation, just to show how serious it is, as proof that “finally, we rescale the obtained features by s and use it as logit to compute the softmax cross-entropy loss based on the equation below.” Go, look. It’s awfully science-y.
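
That “rescale by s and use it as logit” step is, in generic form, the scaled softmax cross-entropy recipe commonly used to train face-recognition embeddings. Here is a minimal sketch of that generic pattern; it is not Apple’s code, and the scale value is an assumption:

```python
import numpy as np

def scaled_softmax_cross_entropy(embedding, class_weights, true_class, s=64.0):
    """Cosine-similarity logits rescaled by a factor s, then fed into a
    softmax cross-entropy loss. A generic sketch, not Apple's implementation."""
    f = embedding / np.linalg.norm(embedding)                    # unit-normalise the face embedding
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    logits = s * (w @ f)                                         # rescaled similarity scores
    log_probs = logits - np.log(np.sum(np.exp(logits)))          # log-softmax
    return -log_probs[true_class]                                # loss for the correct identity
```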

The post is 3,500 words long, complex, and a very detailed paper on computer vision, one of the two tags Apple has given it. The other tag, Privacy, can be entirely summarised in six words: it’s on-device, therefore it’s private. No equation.

That would be more comforting if Apple hadn’t said days later how on-device analysis is going to be a key component in informing law enforcement agencies about things they disapprove of. Put the two together, and there’s a whole new and much darker angle to the fact, sold as a major consumer benefit, that Apple has been cramming in as much AI as it can so it can look at pictures as you take and after you’ve stored them.

We’ve all been worried about how mobile phones are stuffed with sensors that can watch what we watch, hear what we hear, track where we go and note what we do. The evolving world of personal data privacy is based around these not being stored in the vast vaults of big data, keeping them from being grist to the mill of manipulating our digital personas.

But what happens if the phone itself grinds that corn? It may never share a single photograph without your permission, but what if it can look at that photograph and generate precise metadata about what, who, how, when, and where it depicts?

This is an aspect of edge computing that is ahead of the regulators, even those of the EU who want to heavily control things like facial recognition. By the time any such regulation is produced, countless millions of devices will be using it to ostensibly provide safe, private, friendly on-device services that make taking and keeping photographs so much more convenient and fun.

It’s going to be very hard to turn that off, and very easy to argue for exemptions that weaken the regs to the point of pointlessness. Especially if the police and security services lobby hard as well, which they will as soon as they realise that this defeats end-to-end encryption without even touching end-to-end encryption.

So yes, Apple’s anti-CSAM model is capable of being used without impacting the privacy of the innocent, if it is run exactly as version 1.0 is described. It is also capable of working with the advances elsewhere in technology to break that privacy utterly, without setting off the tripwires of personal protection we’re putting in place right now.

[…]

Source: Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely • The Register

Rockstar Begins A War On Modders For ‘GTA’ Games For Totally Unclear Reasons

[…]Rockstar Games has previously had its own run-in with its modding community, banning modders who attempted to shift GTA5’s online gameplay to dedicated servers that would allow mods to be used, since Rockstar’s servers don’t allow mods. What it’s now doing in issuing copyright notices on modders who have been forklifting older Rockstar assets into newer GTA games, however, is totally different.

Grand Theft Auto publisher Take-Two has issued copyright takedown notices for several mods on LibertyCity.net, according to a post from the site. The mods either inserted content from older Rockstar games into newer ones, or combined content from similar Rockstar games into one larger game. The mods included material from Grand Theft Auto 3, San Andreas, Vice City, Manhunt, and Bully.

This has been a legally active year for Take-Two, starting with takedown notices for reverse-engineered versions of GTA3 and Vice City. Those projects were later restored. Since then, Take-Two has issued takedowns for mods that move content from older Grand Theft Auto games into GTA5, as well as mods that combine older games from the GTA3 generation into one. That led to a group of modders preemptively taking down their 14-year-old mod for San Andreas in case they were next on Take-Two’s list.

All of this is partially notable because it’s new. Like many games released for the PC, the GTA series has enjoyed a healthy modding community. And Rockstar, previously, has largely left this modding community alone. Which is generally smart, as mods such as the ones the community produces are fantastic ways to both keep a game fresh as it ages and lure in new players to the original game by enticing them with mods that meet their particular interests. I’ll never forget a Doom mod that replaced all of the original MIDI soundtrack files with MIDI versions of 90’s alternative grunge music. That mod caused me to play Doom all over again from start to finish.

But now Rockstar Games has flipped the script and is busily taking these fan mods down. Why? Well, no one is certain, but likely for the most obvious reason of all.

One reason a company might become more concerned with this kind of copyright infringement is that it’s planning to release a similar product and wants to be sure that its claim to the material can’t be challenged. It’s speculative at this point, but that tracks with the rumors we heard earlier this year that Take-Two is working on remakes of the PS2 Grand Theft Auto games.

In other words, Rockstar appears to be completely happy to reap all the benefits from the modding community right up until the moment it thinks it can make more money with re-releases, at which point the company cries “Copyright!” The company may well be within its rights to operate that way, but why in the world would the modding community ever work on Rockstar games again?

Source: Rockstar Begins A War On Modders For ‘GTA’ Games For Totally Unclear Reasons | Techdirt

Senators ask Amazon how it will use palm print data from its stores

If you’re concerned that Amazon might misuse palm print data from its One service, you’re not alone. TechCrunch reports that Senators Amy Klobuchar, Bill Cassidy and Jon Ossoff have sent a letter to new Amazon chief Andy Jassy asking him to explain how the company might expand use of One’s palm print system beyond stores like Amazon Go and Whole Foods. They’re also worried the biometric payment data might be used for more than payments, such as for ads and tracking.

The politicians are concerned that Amazon One reportedly uploads palm print data to the cloud, creating “unique” security issues. The move also casts doubt on Amazon’s “respect” for user privacy, the senators said.

In addition to asking about expansion plans, the senators wanted Jassy to outline the number of third-party One clients, the privacy protections for those clients and their customers and the size of the One user base. The trio gave Amazon until August 26th to provide an answer.

[…]

The company has offered $10 in credit to potential One users, raising questions about its eagerness to collect palm print data. This also isn’t the first time Amazon has clashed with government

[…]

Amazon declined to comment, but pointed to an earlier blog post where it said One palm images were never stored on-device and were sent encrypted to a “highly secure” cloud space devoted just to One content.

Source: Senators ask Amazon how it will use palm print data from its stores (updated) | Engadget

Basically, keeping all these palm prints in the cloud is an incredibly insecure way to store biometric data that people can never change, short of burning their palms off.

The End Of Ownership: How Big Companies Are Trying To Turn Everyone Into Renters

We’ve talked a lot on Techdirt about the end of ownership, and how companies have increasingly been reaching deep into products that you thought you bought to modify them… or even destroy them. Much of this originated in the copyright space, in which modern copyright law (somewhat ridiculously) gave the power to copyright holders to break products that people had “bought.” Of course, the legacy copyright players like to conveniently change their language on whether or not you’re buying something or simply “licensing” it temporarily based on what’s most convenient (i.e., what makes them the most money) at the time.

Over at the Nation, Maria Bustillos, recently wrote about how legacy companies — especially in the publishing world — are trying to take away the concept of book ownership and only let people rent books. A little over a year ago, picking up an idea first highlighted by law professor Brian Frye, we highlighted how much copyright holders want to be landlords. They don’t want to sell products to you. They want to retain an excessive level of control and power over it — and to make you keep paying for stuff you thought you bought. They want those monopoly rents.

As Bustillos points out, the copyright holders are making things disappear, including “ownership.”

Maybe you’ve noticed how things keep disappearing—or stop working—when you “buy” them online from big platforms like Netflix and Amazon, Microsoft and Apple. You can watch their movies and use their software and read their books—but only until they decide to pull the plug. You don’t actually own these things—you can only rent them. But the titanic amount of cultural information available at any given moment makes it very easy to let that detail slide. We just move on to the next thing, and the next, without realizing that we don’t—and, increasingly, can’t—own our media for keeps.

And while most of the focus on this space has been around music and movies, it’s happening to books as well:

Unfortunately, today’s mega-publishers and book distributors have glommed on to the notion of “expiring” media, and they would like to normalize that temporary, YouTube-style notion of a “library.” That’s why, last summer, four of the world’s largest publishers sued the Internet Archive over its National Emergency Library, a temporary program of the Internet Archive’s Open Library intended to make books available to the millions of students in quarantine during the pandemic. Even though the Internet Archive closed the National Emergency Library in response to the lawsuit, the publishers refused to stand down; what their lawsuit really seeks is the closing of the whole Open Library, and the destruction of its contents. (The suit is ongoing and is expected to resume later this year.) A close reading of the lawsuit indicates that what these publishers are looking to achieve is an end to the private ownership of books—not only for the Internet Archive but for everyone.

[…]

The big publishers and other large copyright holders always insist that they’re “protecting artists.” That’s almost never the case. They regularly destroy and suppress creativity and art with their abuse of copyright law. Culture shouldn’t have to be rented, especially when the landlords don’t care one bit about the underlying art or cultural impact.

Source: The End Of Ownership: How Big Companies Are Trying To Turn Everyone Into Renters | Techdirt

Boffins propose Pretty Good Phone Privacy to end pretty invasive location data harvesting by telcos

[…] In “Pretty Good Phone Privacy,” [PDF] a paper scheduled to be presented on Thursday at the Usenix Security Symposium, Schmitt and Barath Raghavan, assistant professor of computer science at the University of Southern California, describe a way to re-engineer the mobile network software stack so that it doesn’t betray the location of mobile network customers.

“It’s always been thought that since cell towers need to talk to phones then all users have to accept the status quo in which mobile operators track our every movement and sell the data to data brokers (as has been extensively reported),” said Schmitt. “We show how it’s possible to protect users’ mobile privacy while at the same time providing normal connectivity, and to do so without changing any of the hardware in mobile networks.”

In recent years, mobile carriers have been routinely selling and leaking location data, to the detriment of customer privacy. Efforts to alter the status quo have been hampered by an uneven regulatory landscape, the resistance of data brokers that profit from the status quo, and the assumption that cellular network architecture requires knowing where customers are located.

[…]

The purpose of Pretty Good Phone Privacy (PGPP) is to avoid using a unique identifier for authenticating customers and granting access to the network. It’s a technology that allows a Mobile Virtual Network Operator (MVNO) to issue SIM cards with identical SUPIs for every subscriber because the SUPI is only used to assess the validity of the SIM card. The PGPP network can then assign an IP address and a GUTI (Globally Unique Temporary Identifier) that can change in subsequent sessions, without telling the MVNO where the customer is located.

“We decouple network connectivity from authentication and billing, which allows the carrier to run Next Generation Core (NGC) services that are unaware of the identity or location of their users but while still authenticating them for network use,” the paper explains. “Our architectural change allows us to nullify the value of the user’s SUPI, an often targeted identifier in the cellular ecosystem, as a unique identifier.”
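
To make that decoupling concrete, here is a toy sketch of the attach flow as described in the paper’s summary. The names, the token check, and the identifier formats are mine for illustration, not PGPP’s actual code:

```python
import secrets

SHARED_SUPI = "999-99-0000000000"  # every PGPP SIM presents the same SUPI

def attach(presented_supi: str, billing_token_valid: bool):
    """Hypothetical network-attach check: the core only verifies that the SIM
    carries the shared SUPI and a valid (anonymous) proof of payment, so it
    never learns which subscriber it is serving, or where they are."""
    if presented_supi != SHARED_SUPI or not billing_token_valid:
        return None
    # Hand out fresh, meaningless per-session identifiers; they rotate on the
    # next attach, so they cannot be used to build a location history.
    return {
        "guti": secrets.token_hex(8),
        "ip": "10.{}.{}.{}".format(*(secrets.randbelow(254) + 1 for _ in range(3))),
    }

session = attach(SHARED_SUPI, billing_token_valid=True)
```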

[…]

Its primary focus is defending against the surreptitious sale of location data by network providers.

[…]

Schmitt argues PGPP will help mobile operators comply with current and emerging data privacy regulations in US states like California, Colorado, and Virginia, and post-GDPR rules in Europe

Source: Boffins propose Pretty Good Phone Privacy to end pretty invasive location data harvesting by telcos • The Register

Apple App Store, Google Play Store Targeted by Open App Markets Act

The Open App Markets Act, which is being spearheaded by Sens. Richard Blumenthal, and Marsha Blackburn, is designed to crack down on some of the scummiest tactics tech players use to rule their respective app ecosystems, while giving users the power to download the apps they want, from the app stores they want, without retaliation.

“For years, Apple and Google have squashed competitors and kept consumers in the dark—pocketing hefty windfalls while acting as supposedly benevolent gatekeepers of this multibillion-dollar market,” Blumenthal told the Wall Street Journal. As he put it, this bill is tailor-made to “break these tech giants’ ironclad grip, open the app economy to new competitors, and give mobile users more control over their own devices.”

The antitrust issues facing both of these companies—along with fellow tech giants like Facebook and Amazon—have come to a boiling point on Capitol Hill over the past year. We’ve seen lawmakers roll out bill after bill meant to target some of the most lucrative monopolies these companies hold: Amazon’s marketplace, Facebook’s collection of platforms, and, of course, Apple and Google’s respective app stores. Last month, three dozen state attorneys general levied a fresh antitrust suit against Google for the Play Store fees forced on app developers. Meanwhile, Apple is still in a heated legal battle with Epic Games over its own mandated commissions, which can take up to 30% from every in-app purchase users make.

Blumenthal and Blackburn target these fees specifically. The bill would prohibit app stores from requiring that developers use their payment systems, for example. It would also prevent app stores from retaliating against developers who try to implement payment systems of their own, which is the exact scenario that got Epic booted from the App Store last summer.

On top of this, the bill would require that devices allow app sideloading by default. Google’s allowed this practice for a while, but this month started taking steps to narrow the publishing formats developers could use. Apple hardware, meanwhile, has never been sideload-friendly—a choice that’s meant to uphold the “privacy initiatives” baked into the App Store, according to Apple CEO Tim Cook.

Here are some other practices outlawed by the Open App Markets Act: Apple, Google, or any other app store owner would be barred from using a developer’s proprietary app intel to develop their own competing product. They’d also be barred from applying ranking algorithms that rank their own apps over those of their competitors. Users, meanwhile, would (finally) need to be given choices of the app store they can use on their device, instead of being pigeonholed into Apple’s App Store or Google’s Play Store.

Like all bills, this new legislation still needs to go through the regulatory churn before it has any hope of passing, and it might look like a very different set of rules by the time it finally does. But at this point, antitrust action is going to come for these companies whether they like it or not.

Source: Apple App Store, Google Play Store Targeted by Open App Markets Act

I have been talking about this since early 2019, and it’s great to see all the action around it.

Amazon Drops Policy claiming ownership of Games made by employees After Work Hours

Amazon.com Inc. withdrew a set of staff guidelines that claimed ownership rights to video games made by employees after work hours and dictated how they could distribute them, according to a company email reviewed by Bloomberg.

[…]

The old policies mandated that employees of the games division who were moonlighting on projects would need to use Amazon products, such as Amazon Web Services, and sell their games on Amazon digital stores. It also gave the company “a royalty free, worldwide, fully paid-up, perpetual, transferable license” to intellectual property rights of any games developed by its employees.

[…]

The games division has struggled practically since its inception in 2012 and can hardly afford another reputational hit. It has never released a successful game, and some current and former employees have placed the blame with Frazzini. Bloomberg reported in January that Frazzini had hired veteran game developers and executives but largely dismissed or ignored their advice.

Source: Amazon Drops ‘Draconian’ Policy on Making Games After Work Hours – Bloomberg

So tbh, if they can’t make games during work hours, what difference does it make that the products of their incompetence after work hours can’t be sold outside of Amazon? Or are the employees ripping the Amazon Games division off?

China stops networked vehicle data going offshore under new infosec rules

China has drafted new rules required of its autonomous and networked vehicle builders.

Data security is front and centre in the rules, with manufacturers required to store data generated by cars – and describing their drivers – within China. Data is allowed to go offshore, but only after government scrutiny.

Manufacturers are also required to name a chief of network security, who gets the job of ensuring autonomous vehicles can’t fall victim to cyber attacks. Made-in-China auto-autos are also required to be monitored to detect security issues.

Over-the-air upgrades are another requirement, with vehicle owners to be offered verbose information about the purpose of software updates, the time required to install them, and the status of upgrades.

Behind the wheel, drivers must be informed about the vehicle’s capabilities and the responsibilities that rest on their human shoulders. All autonomous vehicles will be required to detect when a driver’s hands leave the wheel, and to detect when it’s best to cede control to a human.

If an autonomous vehicle’s guidance systems fail, it must be able to hand back control.

[…]

Source: China stops networked vehicle data going offshore under new infosec rules • The Register

And again China is doing what the EU and US should be doing to a certain extent.

Have you made sure you have changed these Google Pay privacy settings?

Google Pay is an online payment system and digital wallet that makes it easy to buy anything on your mobile device or with your mobile device. But if you’re concerned about what Google is doing with all your data (which you probably should be), Google doesn’t make it easy: Google Pay keeps some of its privacy settings on a separate page that you won’t find through the normal menus.


A report from Bleeping Computer shows that privacy settings aren’t available through the main Google Pay setting page that is accessible through the navigation sidebar.

The URL for that settings page is:

https://pay.google.com/payments/u/0/home#settings


On that page, users can change general settings like their address and payment information.

But if users want to change privacy settings, they have to go to a separate page:

https://pay.google.com/payments/u/0/home?page=privacySettings#privacySettings


On that screen, users can adjust all the same settings available on the other settings page, but they can also address three additional privacy settings—controlling whether Google Pay is allowed to share account information, personal information, and creditworthiness.

Here’s the full language of those three options:

-Allow Google Payment Corporation to share third party creditworthiness information about you with other companies owned and controlled by Google LLC for their everyday business purposes.

-Allow your personal information to be used by other companies owned and controlled by Google LLC to market to you. Opting out here does not impact whether other companies owned and controlled by Google LLC can market to you based on information you provide to them outside of Google Payment Corporation.

-Allow Google LLC or its affiliates to inform a third party merchant, whose site or app you visit, whether you have a Google Payments account that can be used for payment to that merchant. Opting out may impact your ability to use Google Payments to transact with certain third party merchants.


According to Bleeping Computer, the default of Google Pay is to enable all the above settings. In order to opt out, users have to go to the special URL that is not accessible through the navigation bar.

As the Reddit post that inspired the Bleeping Computer report claims, this discrepancy makes it appear that Google Pay is hiding its privacy options. “Google is not walking the talk when it claims to make it easy for their users to control the privacy and use of their own data,” the Redditor surmised.

A Google spokesperson told Gizmodo they’re working to make the privacy settings more accessible. “The different settings views described here are an issue resulting from a previous software update and we are working to fix this right away so that these privacy settings are always visible on pay.google.com,” the spokesperson told Gizmodo.

“All users are currently able to access these privacy settings via the ‘Google Payments privacy settings page’ link in the Google Pay privacy notice.”

In the meantime, here’s that link again for the privacy settings. Go ahead and uncheck those three boxes, if you feel so inclined.

Source: How To Find Google Pay’s Hidden Privacy Settings

Here’s hoping that my bank can set up its own version of Google Pay instead of integrating with it. I definitely don’t want Google or Apple getting their grubby little paws on my financial data.

Create virtual cards to pay online with Privacy

Protect your card details and your money by creating virtual cards at each place you spend online, or for each purchase

Create single-use cards that close themselves automatically

browser extension to create and auto-fill card numbers at checkout

Privacy Cards put the control in your hands when you make a purchase online. Business or personal, one-time or subscription, now you decide who can charge your card, how much, how often, and you can close a card any time
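
The mechanics are roughly what you would expect from a merchant-locked, single-use card. A hypothetical sketch of the idea follows; it is not Privacy’s actual API or data model:

```python
from dataclasses import dataclass

@dataclass
class VirtualCard:
    """Hypothetical model of a single-use, merchant-locked virtual card."""
    merchant: str           # only this merchant may charge the card
    limit: float            # maximum amount per charge
    single_use: bool = True
    closed: bool = False

    def authorize(self, merchant: str, amount: float) -> bool:
        # Decline anything from the wrong merchant, over the limit,
        # or after the card has closed itself.
        if self.closed or merchant != self.merchant or amount > self.limit:
            return False
        if self.single_use:
            self.closed = True  # the card closes automatically after first use
        return True

card = VirtualCard(merchant="example-shop.com", limit=25.00)
assert card.authorize("example-shop.com", 19.99)        # first charge goes through
assert not card.authorize("example-shop.com", 5.00)     # card has already closed
assert not VirtualCard("example-shop.com", 25.00).authorize("other-store.com", 5.00)
```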

Source: Privacy – Smarter Payments

Post-implementation review of the repeal of section 52 of the CDPA 1988 and associated amendments – Call for views – GOV.UK

The Copyright, Designs and Patents Act 1988 (CDPA) sets the term of protection for works protected by copyright. For artistic works, the term of protection is the life of the author plus 70 years. For more information on the term of copyright, see our Copyright Notice: Duration of copyright (term) on this subject. Section 52 CDPA previously reduced the term of copyright for industrially manufactured artistic works to 25 years.

In 2011, a judgment was made by the Court of Justice of the European Union (CJEU) in relation to copyright for design works. The government concluded that section 52 CDPA should be repealed to provide equal protection for all types of artistic work. This repeal was included in the Enterprise and Regulatory Reform Act 2013. The main copyright works affected were works of artistic craftsmanship. The primary types of work believed to be in scope were furniture, jewellery, ceramics, lighting and other homewares. This would be both the 3D manufacture and retail and the 2D representation in publishing.

[…]

The Copyright (Amendment) Regulations 2016 came into force on 6 April 2017. They amended Schedule 1 CDPA to allow works made before 1957 to attract copyright protection, whatever their separate design status. They also removed a compulsory licensing provision for works with revived copyright from the Duration of Copyright and Rights in Performances Regulations 1995 (1995 Regulations). Existing compulsory licences which had agreed a royalty or remuneration with the rights holder could continue. The relevant documents can be found in the Changes to Schedule 1 CDPA and duration of Copyright Regulations consultation.

[…]

Source: Post-implementation review of the repeal of section 52 of the CDPA 1988 and associated amendments – Call for views – GOV.UK

So if you are interested in copyright in the UK, make sure you fill in the questions at the bottom of the link and email them!

Ancestry.com Gave Itself the Rights to Your Family Photos

The Blackstone-owned genealogy giant Ancestry.com raised a ton of red flags earlier this month with an update to its terms and conditions that give the company a bit more power over your family photos. From here on out, the August 3 update reads, Ancestry can use these pics for any reason, at any time, forever.

[…]

By submitting User Provided Content through any of the Services, you grant Ancestry a perpetual, sublicensable, worldwide, non-revocable, royalty-free license to host, store, copy, publish, distribute, provide access to, create derivative works of, and otherwise use such User Provided Content to the extent and in the form or context we deem appropriate on or through any media or medium and with any technology or devices now known or hereafter developed or discovered. This includes the right for Ancestry to copy, display, and index your User Provided Content. Ancestry will own the indexes it creates.

[…]

The company also noted that it added a helpful clause to clarify that, yes, deleting your documents from Ancestry’s site would also remove any rights Ancestry holds over them. But there’s a catch: if any other Ancestry users copied or saved your content, then Ancestry still holds those rights until these other users delete your documents, too.

[…]

Source: Ancestry.com Gave Itself the Rights to Your Family Photos

WhatsApp head says Apple’s child safety update is a ‘surveillance system’

One day after Apple confirmed plans for new software that will allow it to detect images of child abuse on users’ iCloud photos, Facebook’s head of WhatsApp says he is “concerned” by the plans.

In a thread on Twitter, Will Cathcart called it an “Apple built and operated surveillance system that could very easily be used to scan private content for anything they or a government decides it wants to control.” He also raised questions about how such a system may be exploited in China or other countries, or abused by spyware companies.

[…]

Source: WhatsApp head says Apple’s child safety update is a ‘surveillance system’ | Engadget

Pots and kettles – but he’s right, though. This is a very serious privacy lapse on Apple’s part.

How Google quietly funds Europe’s leading tech policy institutes

A recent scientific paper proposed that, like Big Tobacco in the Seventies, Big Tech thrives on creating uncertainty around the impacts of its products and business model. One of the ways it does this is by cultivating pockets of friendly academics who can be relied on to echo Big Tech talking points, giving them added gravitas in the eyes of lawmakers.

Google highlighted working with favourable academics as a key aim in its strategy, leaked in October 2020, for lobbying the EU’s Digital Markets Act – sweeping legislation that could seriously undermine tech giants’ market dominance if it goes through.

Now, a New Statesman investigation can reveal that over the last five years, six leading academic institutes in the EU have taken tens of millions of pounds of funding from Google, Facebook, Amazon and Microsoft to research issues linked to the tech firms’ business models, from privacy and data protection to AI ethics and competition in digital markets. While this funding tends to come with guarantees of academic independence, this creates an ethical quandary where the subject of research is also often the primary funder of it.

 

The New Statesman has also found evidence of an inconsistent approach to transparency, with some senior academics failing to disclose their industry funding. Other academics have warned that the growing dependence on funding from the industry raises questions about how tech firms influence the debate around the ethics of the markets they have created.

The Institute for Ethics in Artificial Intelligence at the Technical University of Munich (TUM), for example, received a $7.5m grant from Facebook in 2019 to fund five years of research, while the Humboldt Institute for Internet and Society in Berlin has accepted almost €14m from Google since it was founded in 2012, and the tech giant accounts for a third of the institute’s third-party funding.

[Chart in original article: annual funding to the Humboldt Institute by Google and other third-party institutions, showing the institute seeking to diversify its funding sources but still receiving millions from Google.]

Researchers at Big Tech-funded institutions told the New Statesman they did not feel any outward pressure to be less critical of their university’s benefactors in their research.

But one, who wished to remain anonymous, said Big Tech wielded a subtle influence through such institutions. They said that the companies typically appeared to identify uncritical academics – preferably those with political connections – who perhaps already espoused beliefs aligned with Big Tech. Companies then cultivate relationships with them, sometimes incentivising academics by granting access to sought-after data.

[…]

Luciano Floridi, professor of philosophy and ethics of information at Oxford University’s Internet Institute, is one of the most high-profile and influential European tech policy experts, who has advised the European Commission, the Information Commissioner’s Office, the UK government’s Centre for Data Ethics and Innovation, the Foreign Office, the Financial Conduct Authority and the Vatican.

Floridi is one of the best-connected tech policy experts in Europe, and he is also one of the most highly funded. The ethicist has received funding from Google, DeepMind, Facebook, the Chinese tech giant Tencent and the Japanese IT firm Fujitsu, which developed the infrastructure involved in the Post Office’s Horizon IT scandal.

[Chart in original article: funding sources and advisory positions declared by Luciano Floridi, OII digital ethics director, in public integrity statements.]

Although Floridi is connected to several of the world’s most valuable tech companies, he is especially close to Google. In the mid-2010s the academic was described as the company’s “in-house philosopher”, with his role on the company’s “right to be forgotten” committee. When the Silicon Valley giant launched a short-lived ethics committee to oversee its technology development in 2019, Floridi was among those enlisted.

Last year, Floridi oversaw and co-authored a study that found some alternative and commercial search engines returned more misinformation about healthcare to users than Google. The authors of the pro-Google study didn’t disclose any financial interests, despite Floridi’s long-running relationship with the company.

[…]

Michael Veale, a lecturer in law at University College London, said that beyond influencing independent academics, there are other motives for firms such as Google to fund policy research. “By funding very pedantic academics in an area to investigate the nuances of economics online, you can heighten the amount of perceived uncertainty in things that are currently taken for granted in regulatory spheres,” he told the New Statesman.

[…]

This appears to be the case within competition law as well. “I have noticed several common techniques used by academics who have been funded by Big Tech companies,” said Oles Andriychuk, a senior lecturer in law at Strathclyde University. “They discuss technicalities – very technical arguments which are not wrong, but they either slow down the process, or redirect the focus to issues which are less important, or which blur clarity.”

It is difficult to measure the impact of Big Tech on European academia, but Valletti adds that a possible outcome is to make research less about the details, and more about framing. “Influence is not just distorting the result in favour of [Big Tech],” he said, “but the kind of questions you ask yourself.”

Source: How Google quietly funds Europe’s leading tech policy institutes

Major U.K. science funder to require grantees to make papers immediately free to all

[…]

UK Research and Innovation (UKRI) will expand on existing rules covering all research papers produced from its £8 billion in annual funding. About three-quarters of papers recently published from U.K. universities are open access, and UKRI’s current policy gives scholars two routes to comply: Pay journals for “gold” open access, which makes a paper free to read on the publisher’s website, or choose the “green” route, which allows them to deposit a near-final version of the paper on a public repository, after a waiting period of up to 1 year. Publishers have insisted that an embargo period is necessary to prevent the free papers from peeling away their subscribers.

But starting in April 2022, that yearlong delay will no longer be permitted: Researchers choosing green open access must deposit the paper immediately when it is published. And publishers won’t be able to hang on to the copyright for UKRI-funded papers: The agency will require that the research it funds—with some minor exceptions—be published with a Creative Commons Attribution license (known as CC-BY) that allows for free and liberal distribution of the work.

UKRI developed the new policy because “publicly funded research should be available for public use by the taxpayer,” says Duncan Wingham, the funder’s executive champion for open research. The policy falls closely in line with those issued by other major research funders, including the nonprofit Wellcome Trust—one of the world’s largest nongovernmental funding bodies—and the European Research Council.

The move also brings UKRI’s policy into alignment with Plan S, an effort led by European research funders—including UKRI—to make academic literature freely available to read

[…]

It clears up some confusion about when UKRI will pay the fees that journals charge for gold open access, he says: never for journals that offer a mix of paywalled and open-access content, unless the journal is part of an agreement to transition to exclusively open access for all research papers. (More than half of U.K. papers are covered by transitional agreements, according to UKRI.)

[…]

Publishers have resisted the new requirements. The Publishers Association, a member organization for the U.K. publishing industry, circulated a document saying the policy would introduce confusion for researchers, threaten their academic freedom, undermine open access, and leave many researchers on the hook for fees for gold open access—which it calls the only viable route for researchers. The publishing giant Elsevier, in a letter sent to its editorial board members in the United Kingdom, said it had been working to shape the policy by lobbying UKRI and the U.K. government, and encouraged members to write in themselves.

[…]

It would not be in the interest of publishers to refuse to publish these green open-access papers, Rooryck says, because the public repository version ultimately drives publicity for publishers. And even with a paper immediately deposited in a public repository, the final “version of record” published behind a paywall will still carry considerable value, Prosser says. Publishers who threaten to reject such papers, Rooryck believes, are simply “saber rattling and posturing.”

Source: Major U.K. science funder to require grantees to make papers immediately free to all | Science | AAAS

It’s pretty bizarre that publicly funded research is hidden behind paywalls – the public that paid for it can’t get to it, and innovation is stifled because the people who need the research can’t get at it either.

Apple confirms it will begin scanning your iCloud Photos

[…] Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology — NeuralHash — instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content is cleared.
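To make the threshold idea concrete, here is a deliberately naive sketch in Python. It is not Apple’s system (NeuralHash is a perceptual image hash and the real design wraps the matching in cryptography); this toy uses an ordinary SHA-256 digest and a plain set lookup purely to illustrate “compare against a known list, only act after several hits”:

import hashlib
from pathlib import Path

# Hypothetical set of digests of known prohibited images (illustrative only, left empty here).
KNOWN_BAD_DIGESTS = set()

# Only report once several distinct photos match, never on a single hit.
MATCH_THRESHOLD = 3

def digest(path: Path) -> str:
    # A real system would use a perceptual hash; SHA-256 is a stand-in for clarity.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def library_exceeds_threshold(photo_dir: str) -> bool:
    matches = sum(
        1
        for photo in Path(photo_dir).rglob("*.jpg")
        if digest(photo) in KNOWN_BAD_DIGESTS
    )
    return matches >= MATCH_THRESHOLD

The threshold is the point both articles keep coming back to: a single stray match is never supposed to be enough to trigger anything.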

News of Apple’s effort leaked Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with resistance from some security experts and privacy advocates, but also from users accustomed to Apple’s approach to security and privacy, which most other companies don’t offer.

Apple is trying to calm fears by baking in privacy through multiple layers of encryption, fashioned in a way that requires multiple steps before it ever makes it into the hands of Apple’s final manual review.

[…]

Source: Apple confirms it will begin scanning iCloud Photos for child abuse images | TechCrunch

No matter what the cause, they have no right to be scanning your stuff at all, for any reason, at any time.

Apple is about to start scanning iPhone users’ photos

Apple is about to announce a new technology for scanning individual users’ iPhones for banned content. While it will be billed as a tool for detecting child abuse imagery, its potential for misuse is vast based on details entering the public domain.

The neural network-based tool will scan individual users’ iDevices for child sexual abuse material (CSAM), respected cryptography professor Matthew Green told The Register today.

Rather than using age-old hash-matching technology, however, Apple’s new tool – due to be announced today along with a technical whitepaper, we are told – will use machine learning techniques to identify images of abused children.

[…] Indiscriminately scanning end-user devices for CSAM is a new step in the ongoing global fight against this type of criminal content. In the UK the Internet Watch Foundation’s hash list of prohibited content is shared with ISPs who then block the material at source. Using machine learning to intrusively scan end-user devices is new, however – and may shake public confidence in Apple’s privacy-focused marketing.

[…]

Governments in the West and authoritarian regimes alike will be delighted by this initiative, Green feared. What’s to stop China (or some other censorious regime such as Russia or the UK) from feeding images of wanted fugitives into this technology and using that to physically locate them?

[…]

“Apple will hold the unencrypted database of photos (really the training data for the neural matching function) and your phone will hold the photos themselves. The two will communicate to scan the photos on your phone. Alerts will be sent to Apple if *multiple* photos in your library match, it can’t just be a single one.”

The privacy-busting scanning tech will be deployed against America-based iThing users first, with the idea being to gradually expand it around the world as time passes. Green said it would be initially deployed against photos backed up in iCloud before expanding to full handset scanning.

[…]

Source: Apple is about to start scanning iPhone users’ devices for banned content, warns professor • The Register

Wow, no matter what the pretext (and the pretext of catching sex offenders is very often the first step on a much longer road, because hey, who can be against bringing sex offenders to justice, right?), Apple has just basically said that they think they have the right to read whatever they like on your phone. So much for privacy! So what will be next? Your emails? Text messages? Location history (again)?

As a user, you actually bought this hardware – anyone you don’t explicitly give consent to (and that means consent not coerced by, e.g., limiting functionality) should stay out of it!

Australian Court Rules That AI Can Be an Inventor, as does South Africa

In what can only be considered a triumph for all robot-kind, this week, a federal court has ruled that an artificially intelligent machine can, in fact, be an inventor—a decision that came after a year’s worth of legal battles across the globe.

The ruling came on the heels of a years-long quest by University of Surrey law professor Ryan Abbott, who started putting out patent applications in 17 different countries across the globe earlier this year. Abbott—whose work focuses on the intersection between AI and the law—first launched two international patent filings as part of The Artificial Inventor Project at the end of 2019. Both patents (one for an adjustable food container, and one for an emergency beacon) listed a creative neural system dubbed “DABUS” as the inventor.

The artificially intelligent inventor listed here, DABUS, was created by Dr. Stephen Thaler, who describes it as a “creativity engine” capable of generating novel ideas (and inventions) based on communications between the trillions of computational neurons it’s been outfitted with. However impressive a piece of machinery DABUS may be, last year the US Patent and Trademark Office (USPTO) ruled that an AI cannot be listed as the inventor in a patent application—specifically stating that under the country’s current patent laws, only “natural persons” are allowed to be recognized. Not long after, Thaler sued the USPTO, with Abbott representing him in the suit.

More recently, the case has been caught in legal limbo—with the overseeing judge suggesting that the matter might be better handled by Congress instead.

DABUS had issues being recognized in other countries, too. One spokesperson for the European Patent Office told the BBC in a 2019 interview that systems like DABUS are merely “a tool used by a human inventor” under its current rules. Australian courts initially declined to recognize AI inventors as well, noting earlier this year that much like in the US, patents can only be granted to people.

Or at least, that was Australia’s stance until Friday, when Justice Jonathan Beach overturned the decision in Australia’s Federal Court. Per Beach’s new ruling, DABUS can be neither the applicant nor the grantee for a patent—but it can be listed as the inventor. In this case, those other two roles would be filled by Thaler, DABUS’s designer.

“In my view, an inventor as recognised under the act can be an artificial intelligence system or device,” Beach wrote. “I need to grapple with the underlying idea, recognising the evolving nature of patentable inventions and their creators. We are both created and create. Why cannot our own creations also create?”

It’s not clear what made the Australian courts change their tune, but it’s possible South Africa had something to do with it. The day before Beach walked back the country’s official ruling, South Africa’s Companies and Intellectual Property Commission became the first patent office to officially recognize DABUS as an inventor of the aforementioned food container.

It’s worth pointing out here that every country has a different set of standards as part of the patent rights process; some critics have noted that it’s “not shocking” for South Africa to give the idea of an AI inventor a pass, and that “everyone should be ready” for future patent allowances to come. So while the US and UK might have given Thaler the thumbs down on the idea, we’re still waiting to see how the patents filed in any of the other countries—including Japan, India, and Israel—will shake out. But at the very least, we know that DABUS will finally be recognized as an inventor somewhere.

Source: Australian Court Rules That AI Can Be an Inventor

Amazon hit with $887 million fine by European privacy watchdog

Amazon has been issued with a fine of 746 million euros ($887 million) by a European privacy watchdog for breaching the bloc’s data protection laws.

The fine, disclosed by Amazon on Friday in a securities filing, was issued two weeks ago by Luxembourg’s privacy regulator.

The Luxembourg National Commission for Data Protection said Amazon’s processing of personal data did not comply with the EU’s General Data Protection Regulation.

[…]

Source: Amazon hit with $887 million fine by European privacy watchdog

Pretty massively strange that they don’t tell us what exactly they are fining Amazon for…

Bungie & Ubisoft Sue Destiny 2 Cheatmakers Ring-1 For Copyright Infringement

Bungie and Ubisoft have filed a lawsuit against five individuals said to be behind Ring-1, the claimed creator and distributor of cheat software targeting Destiny and Rainbow Six Siege. Among other offenses the gaming companies allege copyright infringement and trafficking in circumvention devices, estimating damages in the millions of dollars.

[…]

Filed in a California district court, the lawsuit targets Andrew Thorpe (aka ‘Krypto’), Jonathan Aguedo (aka ‘Overpowered’), Wesam Mohammed (aka ‘Grizzly’), Ahmad Mohammed, plus John Does 1-50. According to the plaintiffs, these people operate, oversee or participate in Ring-1, an operation that develops, distributes and markets a range of cheats for Destiny 2 and Rainbow Six Siege, among others.

Ring-1 is said to largely operate from Ring-1.io but is also active on hundreds of forums, websites and social media accounts selling cheats that enable Ubisoft and Bungie customers to automatically aim their weapons, reveal the locations of opponents, and see information that would otherwise be obscured.

“Defendants’ conduct has caused, and is continuing to cause, massive and irreparable harm to Plaintiffs and their business interests. The success of Plaintiffs’ games depends on their being enjoyable and fair for all players,” the lawsuit reads.

[…]

According to the lawsuit, the cheats developed and distributed by Ring-1 are not cheap. Access to Destiny 2 cheats via the Ring-1 website costs 30 euros per week or 60 euros per month while those for Rainbow Six Siege cost 25 euros and 50 euros respectively, netting the defendants up to hundreds of thousands of dollars in revenue.

The plaintiffs believe that Ring-1 or those acting in concert with them fraudulently obtained access to the games’ software clients before disassembling, decompiling and/or creating derivative works from them. These tools were then tested on Destiny 2 and Rainbow Six Siege servers under false pretenses by using “throwaway accounts” and false identities.

Copyright Infringement Offenses

Since the cheating software developed and distributed by Ring-1 is primarily designed for the purpose of circumventing technological measures that control access to their games, the plaintiffs state that the defendants are trafficking in circumvention devices in violation of the DMCA (17 U.S.C. § 1201(a)(2)).

[…]

In addition, it’s alleged that the defendants unlawfully reproduced and displayed the plaintiffs’ artwork on the Ring-1 website, adapted the performance of the games, and reproduced game client files without a license during reverse engineering and similar processes.

In the alternative, Ubisoft and Bungie suggest that the defendants can be held liable for inducing and contributing to the copyright-infringing acts of their customers when they deploy cheats that effectively create unauthorized derivative works.

[…]

In addition to the alleged copyright infringement offenses, Bungie and Ubisoft say the defendants are liable for trademark infringement due to the use of various marks on the Ring-1 website and elsewhere. They are also accused of ‘false designation of origin’ due to false or misleading descriptions that suggest a connection with the companies, and intentional interference with contractual relations by encouraging Destiny 2 and Rainbow Six Siege players to breach their licensing conditions.

[…]

Source: Bungie & Ubisoft Sue Destiny 2 Cheatmakers Ring-1 For Copyright Infringement * TorrentFreak

Wow, this seems to me to be a stretch. Nobody likes playing online against a cheat, but calling it copyright infringement and the creation of derivative works seems far-fetched, as does claiming people might mistake the cheat makers (whose work looks original to me) for being affiliated with the companies. Even Trump and QAnon followers aren’t that stupid. Then as for the licenses imposed: yes, people click yes on the usage licenses, but I’m pretty sure almost no one has any idea what they are clicking yes to.

Edward Snowden calls for spyware trade ban amid Pegasus revelations

Governments must impose a global moratorium on the international spyware trade or face a world in which no mobile phone is safe from state-sponsored hackers, Edward Snowden has warned in the wake of revelations about the clients of NSO Group.

Snowden, who in 2013 blew the whistle on the secret mass surveillance programmes of the US National Security Agency, described for-profit malware developers as “an industry that should not exist”.

He made the comments in an interview with the Guardian after the first revelations from the Pegasus project, a journalistic investigation by a consortium of international media organisations into the NSO Group and its clients.

[…]

For traditional police operations to plant bugs or wiretap a suspect’s phone, law enforcement would need to “break into somebody’s house, or go to their car, or go to their office, and we’d like to think they’ll probably get a warrant”, he said.

But commercial spyware made it cost-efficient for targeted surveillance against vastly more people. “If they can do the same thing from a distance, with little cost and no risk, they begin to do it all the time, against everyone who’s even marginally of interest,” he said.

“If you don’t do anything to stop the sale of this technology, it’s not just going to be 50,000 targets. It’s going to be 50 million targets, and it’s going to happen much more quickly than any of us expect.”

Part of the problem arose from the fact that different people’s mobile phones were functionally identical to one another, he said. “When we’re talking about something like an iPhone, they’re all running the same software around the world. So if they find a way to hack one iPhone, they’ve found a way to hack all of them.”

He compared companies commercialising vulnerabilities in widely used mobile phone models to an industry of “infectioneers” deliberately trying to develop new strains of disease.

“It’s like an industry where the only thing they did was create custom variants of Covid to dodge vaccines,” he said. “Their only products are infection vectors. They’re not security products. They’re not providing any kind of protection, any kind of prophylactic. They don’t make vaccines – the only thing they sell is the virus.”

Snowden said commercial malware such as Pegasus was so powerful that ordinary people could in effect do nothing to stop it. Asked how people could protect themselves, he said: “What can people do to protect themselves from nuclear weapons?

“There are certain industries, certain sectors, from which there is no protection, and that’s why we try to limit the proliferation of these technologies. We don’t allow a commercial market in nuclear weapons.”

He said the only viable solution to the threat of commercial malware was an international moratorium on its sale. “What the Pegasus project reveals is the NSO Group is really representative of a new malware market, where this is a for-profit business,” he said. “The only reason NSO is doing this is not to save the world, it’s to make money.”

He said a global ban on the trade in infection vectors would prevent commercial abuse of vulnerabilities in mobile phones, while still allowing researchers to identify and fix them.

“The solution here for ordinary people is to work collectively. This is not a problem that we want to try and solve individually, because it’s you versus a billion dollar company,” he said. “If you want to protect yourself you have to change the game, and the way we do that is by ending this trade.”

[…]

Source: Edward Snowden calls for spyware trade ban amid Pegasus revelations | Edward Snowden | The Guardian

How To Check If Your iPhone Is Infected With Pegasus Using MVT

The revelation that our government might be using spyware called Pegasus to hack into its critics’ phones has started a whole new debate on privacy. The opposition is taking a dig at the ruling party every chance it gets, while the latter is trying to do damage control after facing such serious allegations.

Amidst the chaos, one of the members of The Pegasus Project, Amnesty, recently released a public toolkit that can check if your phone is infected with Pegasus. The toolkit, known as MVT, requires users to know their way around the command line.

In a previous post, we wrote about how it works and successfully traces signs of Pegasus. Moreover, we mentioned how MVT is more effective on iOS than on Android (where the most you can do is scan APKs and SMSes). Hence, in this guide, we’re breaking the process of detecting Pegasus on an iPhone down into step-by-step instructions.

First off, you’ll need to create an encrypted backup and transfer it to either a Mac or PC. You can also do this on Linux instead, but you’ll have to install libimobiledevice beforehand for that.
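For reference, on Linux the relevant libimobiledevice tool is idevicebackup2; something along the following lines should turn on backup encryption and take a full backup, though the exact flags can differ between versions, so check idevicebackup2 --help (the password and the /backup path are placeholders):

idevicebackup2 encryption on MyBackupPassword
idevicebackup2 backup --full /backup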

Once the phone backup is transferred, you need to download Python 3.6 (or newer) on your system — if you don’t have it already. Here’s how you can install it for Windows, macOS, and Linux.

After that, go through Amnesty’s manual to install MVT correctly on your system. Installing MVT will give you new utilities (mvt-ios and mvt-android) that you can use in the Python command line.
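For most setups the manual boils down to installing MVT from PyPI, roughly:

pip3 install mvt

but follow the manual itself in case the installation steps have changed since this was written.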

Now, let’s go through the steps for detecting Pegasus on an iPhone backup using MVT.

Steps To Detect Pegasus On iPhone

First of all, you have to decrypt your data backup. To do that, you’ll need to enter the following instruction format while replacing the placeholder text (marked with a forward slash) with your custom path.

mvt-ios decrypt-backup -p password -d /decrypted /backup

Note: Replace “/decrypted” with the directory where you want to store the decrypted backup and “/backup” with the directory where your encrypted backup is located.

Now, we will run a scan on the decrypted backup, referencing it with the latest IOCs (possible signs of Pegasus spyware), and store the result in an output folder.

To do this, first, download the newest IOCs from here (use the folder with the latest timestamp). Then, enter the instruction format as given below with your custom directory path.

mvt-ios check-backup -o /output -i /pegasus.stix2 /backup

Note: Replace “/output” with the directory where you want to store the scan result, “/backup” with the path where your decrypted backup is stored, and “/pegasus.stix2” with the path where you downloaded the latest IOCs.

After the scan completion, MVT will generate JSON files in the specified output folder. If there is a JSON file with the suffix “_detected,” then that means your iPhone data is most likely Pegasus-infected.
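If you’d rather not eyeball the output folder by hand, a few lines of Python (a hypothetical helper, not part of MVT) will list any detection files for you:

from pathlib import Path

output_dir = Path("/output")  # the same folder you passed to -o above
detections = sorted(output_dir.glob("*_detected*.json"))
if detections:
    print("Possible Pegasus indicators found in:")
    for f in detections:
        print(" -", f.name)
else:
    print("No *_detected JSON files; no known indicators matched.")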

However, the IOCs are regularly updated by Amnesty’s team as they develop a better understanding of how Pegasus operates. So, you might want to keep running scans as the IOCs are updated to make sure there are no false positives.

Source: How To Check If Your Phone Is Infected With Pegasus Using MVT