Major U.K. science funder to require grantees to make papers immediately free to all

[…]

UK Research and Innovation (UKRI) will expand on existing rules covering all research papers produced from its £8 billion in annual funding. About three-quarters of papers recently published from U.K. universities are open access, and UKRI’s current policy gives scholars two routes to comply: pay journals for “gold” open access, which makes a paper free to read on the publisher’s website, or choose the “green” route, which allows them to deposit a near-final version of the paper in a public repository after a waiting period of up to 1 year. Publishers have insisted that an embargo period is necessary to prevent the free papers from peeling away their subscribers.

But starting in April 2022, that yearlong delay will no longer be permitted: Researchers choosing green open access must deposit the paper immediately when it is published. And publishers won’t be able to hang on to the copyright for UKRI-funded papers: The agency will require that the research it funds—with some minor exceptions—be published with a Creative Commons Attribution license (known as CC-BY) that allows for free and liberal distribution of the work.

UKRI developed the new policy because “publicly funded research should be available for public use by the taxpayer,” says Duncan Wingham, the funder’s executive champion for open research. The policy falls closely in line with those issued by other major research funders, including the nonprofit Wellcome Trust—one of the world’s largest nongovernmental funding bodies—and the European Research Council.

The move also brings UKRI’s policy into alignment with Plan S, an effort led by European research funders—including UKRI—to make academic literature freely available to read.

[…]

It clears up some confusion about when UKRI will pay the fees that journals charge for gold open access, he says: never for journals that offer a mix of paywalled and open-access content, unless the journal is part of an agreement to transition to exclusively open access for all research papers. (More than half of U.K. papers are covered by transitional agreements, according to UKRI.)

[…]

Publishers have resisted the new requirements. The Publishers Association, a member organization for the U.K. publishing industry, circulated a document saying the policy would introduce confusion for researchers, threaten their academic freedom, undermine open access, and leave many researchers on the hook for gold open-access fees—which it calls the only viable route for researchers. The publishing giant Elsevier, in a letter sent to its editorial board members in the United Kingdom, said it had been working to shape the policy by lobbying UKRI and the U.K. government, and encouraged members to write in themselves.

[…]

It would not be in the interest of publishers to refuse to publish these green open-access papers, Rooryck says, because the public repository version ultimately drives publicity for publishers. And even with a paper immediately deposited in a public repository, the final “version of record” published behind a paywall will still carry considerable value, Prosser says. Publishers who threaten to reject such papers, Rooryck believes, are simply “saber rattling and posturing.”

Source: Major U.K. science funder to require grantees to make papers immediately free to all | Science | AAAS

It’s pretty bizarre that publicly funded research is hidden behind paywalls – the public that paid for it can’t get to it, and innovation is stifled because people who need the research can’t get at it either.

Apple confirms it will begin scanning your iCloud Photos

[…] Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology — NeuralHash — instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content is cleared.

News of Apple’s effort leaked Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with resistance from security experts and privacy advocates, as well as from users accustomed to Apple’s approach to security and privacy, which most other companies don’t match.

Apple is trying to calm fears by baking in privacy through multiple layers of encryption, fashioned in a way that requires multiple steps before anything ever makes it into the hands of Apple’s final manual review.

[…]

Source: Apple confirms it will begin scanning iCloud Photos for child abuse images | TechCrunch

No matter what the cause, they have no right to be scanning your stuff at all, for any reason, at any time.

Apple is about to start scanning iPhone users’ photos

Apple is about to announce a new technology for scanning individual users’ iPhones for banned content. While it will be billed as a tool for detecting child abuse imagery, the details entering the public domain suggest its potential for misuse is vast.

The neural network-based tool will scan individual users’ iDevices for child sexual abuse material (CSAM), respected cryptography professor Matthew Green told The Register today.

Rather than using age-old hash-matching technology, however, Apple’s new tool – due to be announced today along with a technical whitepaper, we are told – will use machine learning techniques to identify images of abused children.

[…] Indiscriminately scanning end-user devices for CSAM is a new step in the ongoing global fight against this type of criminal content. In the UK, the Internet Watch Foundation’s hash list of prohibited content is shared with ISPs, who then block the material at source. Using machine learning to intrusively scan end-user devices is new, however – and may shake public confidence in Apple’s privacy-focused marketing.

[…]

Governments in the West and authoritarian regimes alike will be delighted by this initiative, Green feared. What’s to stop China (or some other censorious regime such as Russia or the UK) from feeding images of wanted fugitives into this technology and using that to physically locate them?

[…]

“Apple will hold the unencrypted database of photos (really the training data for the neural matching function) and your phone will hold the photos themselves. The two will communicate to scan the photos on your phone. Alerts will be sent to Apple if *multiple* photos in your library match, it can’t just be a single one.”

The privacy-busting scanning tech will be deployed against America-based iThing users first, with the idea being to gradually expand it around the world as time passes. Green said it would be initially deployed against photos backed up in iCloud before expanding to full handset scanning.

[…]

Source: Apple is about to start scanning iPhone users’ devices for banned content, warns professor • The Register

Wow. No matter what the pretext (and the pretext of catching sex offenders is very very often the first step on a much longer road, because hey, who can be against bringing sex offenders to justice, right?), Apple has just basically said that it thinks it has the right to read whatever it likes on your phone. So much for privacy! So what will be next? Your emails? Text messages? Location history (again)?

As a user, you actually bought this hardware – anyone you don’t explicitly give consent to (and that means consent that isn’t coerced, e.g. by limiting functionality) should stay out of it!

Australian Court Rules That AI Can Be an Inventor, as does South Africa

In what can only be considered a triumph for all robot-kind, this week, a federal court has ruled that an artificially intelligent machine can, in fact, be an inventor—a decision that came after a year’s worth of legal battles across the globe.

The ruling came on the heels of a years-long quest by University of Surrey law professor Ryan Abbott, who started putting out patent applications in 17 different countries across the globe earlier this year. Abbott—whose work focuses on the intersection between AI and the law—first launched two international patent filings as part of The Artificial Inventor Project at the end of 2019. Both patents (one for an adjustable food container, and one for an emergency beacon) listed a creative neural system dubbed “DABUS” as the inventor.

The artificially intelligent inventor listed here, DABUS, was created by Dr. Stephen Thaler, who describes it as a “creativity engine” that’s capable of generating novel ideas (and inventions) based on communications between the trillions of computational neurons that it’s been outfitted with. Despite DABUS being an impressive piece of machinery, the US Patent and Trademark Office (USPTO) ruled last year that an AI cannot be listed as the inventor in a patent application—specifically stating that under the country’s current patent laws, only “natural persons” are allowed to be recognized. Not long after, Thaler sued the USPTO, with Abbott representing him in the suit.

More recently, the case has been caught in legal limbo—with the overseeing judge suggesting that the matter might be better handled by Congress instead.

DABUS had issues being recognized in other countries, too. A spokesperson for the European Patent Office told the BBC in a 2019 interview that systems like DABUS are merely “a tool used by a human inventor” under current European patent law. Australian courts initially declined to recognize AI inventors as well, noting earlier this year that, much like in the US, patents can only be granted to people.

Or at least, that was Australia’s stance until Friday, when Justice Jonathan Beach overturned the decision in Australia’s federal court. Per Beach’s new ruling, DABUS can be neither the applicant nor the grantee of a patent—but it can be listed as the inventor. In this case, those other two roles would be filled by Thaler, DABUS’s designer.

“In my view, an inventor as recognised under the act can be an artificial intelligence system or device,” Beach wrote. “I need to grapple with the underlying idea, recognising the evolving nature of patentable inventions and their creators. We are both created and create. Why cannot our own creations also create?”

It’s not clear what made the Australian courts change their tune, but it’s possible South Africa had something to do with it. The day before Beach walked back the country’s official ruling, South Africa’s Companies and Intellectual Property Commission became the first patent office to officially recognize DABUS as an inventor of the aforementioned food container.

It’s worth pointing out here that every country has a different set of standards as part of the patent rights process; some critics have noted that it’s “not shocking” for South Africa to give the idea of an AI inventor a pass, and that “everyone should be ready” for future patent allowances to come. So while the US and UK might have given Thaler the thumbs down on the idea, we’re still waiting to see how the patents filed in the other countries—including Japan, India, and Israel—will shake out. But at the very least, we know that DABUS will finally be recognized as an inventor somewhere.

Source: Australian Court Rules That AI Can Be an Inventor

Amazon hit with $887 million fine by European privacy watchdog

Amazon has been issued with a fine of 746 million euros ($887 million) by a European privacy watchdog for breaching the bloc’s data protection laws.

The fine, disclosed by Amazon on Friday in a securities filing, was issued two weeks ago by Luxembourg’s privacy regulator.

The Luxembourg National Commission for Data Protection said Amazon’s processing of personal data did not comply with the EU’s General Data Protection Regulation.

[…]

Source: Amazon hit with $887 million fine by European privacy watchdog

Pretty massively strange that they don’t tell us what exactly they are fining Amazon for…

Bungie & Ubisoft Sue Destiny 2 Cheatmakers Ring-1 For Copyright Infringement

Bungie and Ubisoft have filed a lawsuit against five individuals said to be behind Ring-1, the claimed creator and distributor of cheat software targeting Destiny 2 and Rainbow Six Siege. Among other offenses, the gaming companies allege copyright infringement and trafficking in circumvention devices, estimating damages in the millions of dollars.

[…]

Filed in a California district court, the lawsuit targets Andrew Thorpe (aka ‘Krypto’), Jonathan Aguedo (aka ‘Overpowered’), Wesam Mohammed (aka ‘Grizzly’), Ahmad Mohammed, plus John Does 1-50. According to the plaintiffs, these people operate, oversee or participate in Ring-1, an operation that develops, distributes and markets a range of cheats for Destiny 2 and Rainbow Six Siege, among others.

Ring-1 is said to largely operate from Ring-1.io but is also active on hundreds of forums, websites and social media accounts selling cheats that enable Ubisoft and Bungie customers to automatically aim their weapons, reveal the locations of opponents, and see information that would otherwise be obscured.

“Defendants’ conduct has caused, and is continuing to cause, massive and irreparable harm to Plaintiffs and their business interests. The success of Plaintiffs’ games depends on their being enjoyable and fair for all players,” the lawsuit reads.

[…]

According to the lawsuit, the cheats developed and distributed by Ring-1 are not cheap. Access to Destiny 2 cheats via the Ring-1 website costs 30 euros per week or 60 euros per month, while those for Rainbow Six Siege cost 25 euros and 50 euros respectively, netting the defendants up to hundreds of thousands of dollars in revenue.

The plaintiffs believe that Ring-1 or those acting in concert with them fraudulently obtained access to the games’ software clients before disassembling, decompiling and/or creating derivative works from them. These tools were then tested on Destiny 2 and Rainbow Six Siege servers under false pretenses by using “throwaway accounts” and false identities.

Copyright Infringement Offenses

Since the cheating software developed and distributed by Ring-1 is primarily designed for the purpose of circumventing technological measures that control access to their games, the plaintiffs state that the defendants are trafficking in circumvention devices in violation of the DMCA (17 U.S.C. § 1201(a)(2)).

[…]

In addition, it’s alleged that the defendants unlawfully reproduced and displayed the plaintiffs’ artwork on the Ring-1 website, adapted the performance of the games, and reproduced game client files without a license during reverse engineering and similar processes.

In the alternative, Ubisoft and Bungie suggest that the defendants can be held liable for inducing and contributing to the copyright-infringing acts of their customers when they deploy cheats that effectively create unauthorized derivative works.

[…]

In addition to the alleged copyright infringement offenses, Bungie and Ubisoft say the defendants are liable for trademark infringement due to the use of various marks on the Ring-1 website and elsewhere. They are also accused of ‘false designation of origin’ due to false or misleading descriptions that suggest a connection with the companies, and of intentional interference with contractual relations by encouraging Destiny 2 and Rainbow Six Siege players to breach their licensing conditions.

[…]

Source: Bungie & Ubisoft Sue Destiny 2 Cheatmakers Ring-1 For Copyright Infringement * TorrentFreak

Wow, this seems like a stretch to me. Nobody likes playing online against a cheater, but calling cheats copyright infringement and derivative works is far-fetched (to me they look like original work), as is claiming people might mistake the cheat makers for being affiliated with the companies. Even Trump and QAnon followers aren’t that stupid. Then as for the licenses imposed: yes, people click yes on the usage licenses, but I’m pretty sure almost no one has any idea what they are clicking yes to.

Edward Snowden calls for spyware trade ban amid Pegasus revelations

Governments must impose a global moratorium on the international spyware trade or face a world in which no mobile phone is safe from state-sponsored hackers, Edward Snowden has warned in the wake of revelations about the clients of NSO Group.

Snowden, who in 2013 blew the whistle on the secret mass surveillance programmes of the US National Security Agency, described for-profit malware developers as “an industry that should not exist”.

He made the comments in an interview with the Guardian after the first revelations from the Pegasus project, a journalistic investigation by a consortium of international media organisations into the NSO Group and its clients.

[…]

For traditional police operations to plant bugs or wiretap a suspect’s phone, law enforcement would need to “break into somebody’s house, or go to their car, or go to their office, and we’d like to think they’ll probably get a warrant”, he said.

But commercial spyware has made targeted surveillance of vastly more people cost-efficient. “If they can do the same thing from a distance, with little cost and no risk, they begin to do it all the time, against everyone who’s even marginally of interest,” he said.

“If you don’t do anything to stop the sale of this technology, it’s not just going to be 50,000 targets. It’s going to be 50 million targets, and it’s going to happen much more quickly than any of us expect.”

Part of the problem arose from the fact that different people’s mobile phones were functionally identical to one another, he said. “When we’re talking about something like an iPhone, they’re all running the same software around the world. So if they find a way to hack one iPhone, they’ve found a way to hack all of them.”

He compared companies commercialising vulnerabilities in widely used mobile phone models to an industry of “infectioneers” deliberately trying to develop new strains of disease.

“It’s like an industry where the only thing they did was create custom variants of Covid to dodge vaccines,” he said. “Their only products are infection vectors. They’re not security products. They’re not providing any kind of protection, any kind of prophylactic. They don’t make vaccines – the only thing they sell is the virus.”

Snowden said commercial malware such as Pegasus was so powerful that ordinary people could in effect do nothing to stop it. Asked how people could protect themselves, he said: “What can people do to protect themselves from nuclear weapons?

“There are certain industries, certain sectors, from which there is no protection, and that’s why we try to limit the proliferation of these technologies. We don’t allow a commercial market in nuclear weapons.”

He said the only viable solution to the threat of commercial malware was an international moratorium on its sale. “What the Pegasus project reveals is the NSO Group is really representative of a new malware market, where this is a for-profit business,” he said. “The only reason NSO is doing this is not to save the world, it’s to make money.”

He said a global ban on the trade in infection vectors would prevent commercial abuse of vulnerabilities in mobile phones, while still allowing researchers to identify and fix them.

“The solution here for ordinary people is to work collectively. This is not a problem that we want to try and solve individually, because it’s you versus a billion dollar company,” he said. “If you want to protect yourself you have to change the game, and the way we do that is by ending this trade.”

[…]

Source: Edward Snowden calls for spyware trade ban amid Pegasus revelations | Edward Snowden | The Guardian

How To Check If Your iPhone Is Infected With Pegasus Using MVT

The revelation that our government might be using spyware called Pegasus to hack into its critics’ phones has started a whole new debate on privacy. The opposition is taking a dig at the ruling party every chance it gets, while the latter is trying to do damage control after facing such serious allegations.

Amidst the chaos, Amnesty International, one of the members of The Pegasus Project, recently released a public toolkit that can check if your phone is infected with Pegasus. The toolkit, known as MVT (Mobile Verification Toolkit), requires users to know their way around the command line.

In a previous post, we wrote about how it works and successfully traces signs of Pegasus. Moreover, we mentioned how MVT is more effective on iOS than Android (the most you can do is scan APKs and SMSes). Hence, in this guide, we’re focusing on breaking down the process to detect Pegasus on iPhone into a step-by-step guide.

First off, you’ll need to create an encrypted backup and transfer it to either a Mac or PC. You can also do this on Linux instead, but you’ll have to install libimobiledevice beforehand for that.
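
For illustration, on Linux the backup can be taken with libimobiledevice’s idevicebackup2 tool once the phone is paired; the exact invocation below is a sketch (the password and directory are placeholders), so check the tool’s help for your version:

idevicebackup2 encryption on "choose-a-backup-password"
idevicebackup2 backup --full ~/iphone-backup

The encryption step is worth it: encrypted iOS backups contain artifacts that unencrypted ones omit, which is why Amnesty’s documentation recommends it.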

Once the phone backup is transferred, you need to download Python 3.6 (or newer) on your system — if you don’t have it already. Here’s how you can install it for Windows, macOS, and Linux.

After that, go through Amnesty’s manual to install MVT correctly on your system. Installing MVT gives you two new command-line utilities, mvt-ios and mvt-android.
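
In most environments, the installation itself boils down to a single pip command (assuming Python and pip from the previous step are on your path):

pip3 install mvt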

Now, let’s go through the steps for detecting Pegasus on an iPhone backup using MVT.

Steps To Detect Pegasus On iPhone

First of all, you have to decrypt your data backup. To do that, enter the following instruction format, replacing the placeholder paths (the ones starting with a forward slash) with your own.

mvt-ios decrypt-backup -p password -d /decrypted /backup

Note: Replace “/decrypted” with the directory where you want to store the decrypted backup and “/backup” with the directory where your encrypted backup is located.
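
For example, on a Mac (where Finder/iTunes stores backups under ~/Library/Application Support/MobileSync/Backup/), a concrete invocation might look like this; the password, output directory, and backup UDID are illustrative placeholders:

mvt-ios decrypt-backup -p "my-backup-password" -d ~/pegasus-check/decrypted ~/Library/Application\ Support/MobileSync/Backup/<udid>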

Now, we will run a scan on the decrypted backup, checking it against the latest IOCs (indicators of compromise: possible signs of Pegasus spyware), and store the result in an output folder.

To do this, first, download the newest IOCs from here (use the folder with the latest timestamp). Then, enter the instruction format as given below with your custom directory path.

mvt-ios check-backup -o /output -i /pegasus.stix2 /backup

Note: Replace “/output” with the directory where you want to store the scan result, “/backup” with the path where your decrypted backup is stored, and “/pegasus.stix2” with the path where you downloaded the latest IOCs.
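
Continuing with the illustrative paths from the previous step, and assuming the IOC file was saved to ~/Downloads, the scan would look something like:

mvt-ios check-backup -o ~/pegasus-check/output -i ~/Downloads/pegasus.stix2 ~/pegasus-check/decrypted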

After the scan completes, MVT will generate JSON files in the specified output folder. If there is a JSON file with the suffix “_detected,” that means your iPhone data is most likely Pegasus-infected.
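
A quick way to see whether anything was flagged is to list the output folder and look for that suffix (again using the illustrative path from above):

ls ~/pegasus-check/output | grep _detected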

However, the IOCs are regularly updated by Amnesty’s team as they develop a better understanding of how Pegasus operates. So, you might want to keep re-running scans as the IOCs are updated, both to catch anything an earlier scan missed and to rule out false positives.

Source: How To Check If Your Phone Is Infected With Pegasus Using MVT

Huge data leak shatters the lie that the innocent need not fear surveillance – governments are spying on critics, journos, etc without a warrant using commercial Pegasus spyware by NSO

Billions of people are inseparable from their phones. Their devices are within reach – and earshot – for almost every daily experience, from the most mundane to the most intimate.

Few pause to think that their phones can be transformed into surveillance devices, with someone thousands of miles away silently extracting their messages, photos and location, activating their microphone to record them in real time.

Such are the capabilities of Pegasus, the spyware manufactured by NSO Group, the Israeli purveyor of weapons of mass surveillance.

NSO rejects this label. It insists only carefully vetted government intelligence and law enforcement agencies can use Pegasus, and only to penetrate the phones of “legitimate criminal or terror group targets”.

Yet in the coming days the Guardian will be revealing the identities of many innocent people who have been identified as candidates for possible surveillance by NSO clients in a massive leak of data.

Without forensics on their devices, we cannot know whether governments successfully targeted these people. But the presence of their names on this list indicates the lengths to which governments may go to spy on critics, rivals and opponents.

First we reveal how journalists across the world were selected as potential targets by these clients prior to a possible hack using NSO surveillance tools.

Over the coming week we will be revealing the identities of more people whose phone numbers appear in the leak. They include lawyers, human rights defenders, religious figures, academics, businesspeople, diplomats, senior government officials and heads of state.

Our reporting is rooted in the public interest. We believe the public should know that NSO’s technology is being abused by the governments who license and operate its spyware. But we also believe it is in the public interest to reveal how governments look to spy on their citizens and how seemingly benign processes such as HLR (Home Location Register) lookups can be exploited in this environment.

[…]

Companies such as NSO operate in a market that is almost entirely unregulated, enabling tools that can be used as instruments of repression for authoritarian regimes such as those in Saudi Arabia, Kazakhstan and Azerbaijan.

The market for NSO-style surveillance-on-demand services has boomed since Snowden’s revelations prompted the mass adoption of encryption across the internet. As a result the internet became far more secure, and mass harvesting of communications much more difficult.

But that in turn spurred the proliferation of companies such as NSO offering solutions to governments struggling to intercept messages, emails and calls in transit. The NSO answer was to bypass encryption by hacking devices.

Two years ago the then UN special rapporteur on freedom of expression, David Kaye, called for a moratorium on the sale of NSO-style spyware to governments until viable export controls could be put in place. He warned of an industry that seemed “out of control, unaccountable and unconstrained in providing governments with relatively low-cost access to the sorts of spying tools that only the most advanced state intelligence services were previously able to use”.

His warnings were ignored. The sale of surveillance continued unabated. That GCHQ-like surveillance tools are now available for purchase by repressive governments may give some of Snowden’s critics pause for thought.

[…]

Source: Huge data leak shatters the lie that the innocent need not fear surveillance | Surveillance | The Guardian

Police Are Telling ShotSpotter to Alter Evidence From Gunshot-Detecting AI

On May 31 last year, 25-year-old Safarain Herring was shot in the head and dropped off at St. Bernard Hospital in Chicago by a man named Michael Williams. He died two days later.

Chicago police eventually arrested the 64-year-old Williams and charged him with murder (Williams maintains that Herring was hit in a drive-by shooting). A key piece of evidence in the case is video surveillance footage showing Williams’ car stopped on the 6300 block of South Stony Island Avenue at 11:46 p.m.—the time and location where police say they know Herring was shot.

How did they know that’s where the shooting happened? Police said ShotSpotter, a surveillance system that uses hidden microphone sensors to detect the sound and location of gunshots, generated an alert for that time and place.

Except that’s not entirely true, according to recent court filings.

That night, 19 ShotSpotter sensors detected a percussive sound at 11:46 p.m. and determined the location to be 5700 South Lake Shore Drive—a mile away from the site where prosecutors say Williams committed the murder, according to a motion filed by Williams’ public defender. The company’s algorithms initially classified the sound as a firework. That weekend had seen widespread protests in Chicago in response to George Floyd’s murder, and some of those protesting lit fireworks.

But after the 11:46 p.m. alert came in, a ShotSpotter analyst manually overrode the algorithms and “reclassified” the sound as a gunshot. Then, months later and after “post-processing,” another ShotSpotter analyst changed the alert’s coordinates to a location on South Stony Island Avenue near where Williams’ car was seen on camera.

A screenshot of the ShotSpotter alert from 11:46 p.m., May 31, 2020, shows that the sound was manually reclassified from a firecracker to a gunshot.

“Through this human-involved method, the ShotSpotter output in this case was dramatically transformed from data that did not support criminal charges of any kind to data that now forms the centerpiece of the prosecution’s murder case against Mr. Williams,” the public defender wrote in the motion.

[…]

The case isn’t an anomaly, and the pattern it represents could have huge ramifications for ShotSpotter in Chicago, where the technology generates an average of 21,000 alerts each year. The technology is also currently in use in more than 100 cities.

Motherboard’s review of court documents from the Williams case and other trials in Chicago and New York State, including testimony from ShotSpotter’s favored expert witness, suggests that the company’s analysts frequently modify alerts at the request of police departments—some of which appear to be grasping for evidence that supports their narrative of events.

[…]

Untested evidence

Had the Cook County State’s Attorney’s office not withdrawn the evidence in the Williams case, it would likely have become the first time an Illinois court formally examined the science and source code behind ShotSpotter, Jonathan Manes, an attorney at the MacArthur Justice Center, told Motherboard.

“Rather than defend the evidence, [prosecutors] just ran away from it,” he said. “Right now, nobody outside of ShotSpotter has ever been able to look under the hood and audit this technology. We wouldn’t let forensic crime labs use a DNA test that hadn’t been vetted and audited.”

[…]

A pattern of alterations

In 2016, Rochester, New York, police looking for a suspicious vehicle stopped the wrong car and shot the passenger, Silvon Simmons, in the back three times. They charged him with firing first at officers.

The only evidence against Simmons came from ShotSpotter. Initially, the company’s sensors didn’t detect any gunshots, and the algorithms ruled that the sounds came from helicopter rotors. After Rochester police contacted ShotSpotter, an analyst ruled that there had been four gunshots—the number of times police fired at Simmons, missing once.

Paul Greene, ShotSpotter’s expert witness and an employee of the company, testified at Simmons’ trial that “subsequently he was asked by the Rochester Police Department to essentially search and see if there were more shots fired than ShotSpotter picked up,” according to a civil lawsuit Simmons has filed against the city and the company. Greene found a fifth shot, despite there being no physical evidence at the scene that Simmons had fired. Rochester police had also refused his multiple requests for them to test his hands and clothing for gunshot residue.

Curiously, the ShotSpotter audio files that were the only evidence of the phantom fifth shot have disappeared.

Both the company and the Rochester Police Department “lost, deleted and/or destroyed the spool and/or other information containing sounds pertaining to the officer-involved shooting,”

[…]

Greene—who has testified as a government witness in dozens of criminal trials—was involved in another altered report in Chicago, in 2018, when Ernesto Godinez, then 27, was charged with shooting a federal agent in the city.

The evidence against him included a report from ShotSpotter stating that seven shots had been fired at the scene, including five from the vicinity of a doorway where video surveillance showed Godinez to be standing and near where shell casings were later found. The video surveillance did not show any muzzle flashes from the doorway, and the shell casings could not be matched to the bullets that hit the agent, according to court records.

During the trial, Greene testified under cross-examination that the initial ShotSpotter alert only indicated two gunshots (those fired by an officer in response to the original shooting). But after Chicago police contacted ShotSpotter, Greene re-analyzed the audio files.

[…]

Prior to the trial, the judge ruled that Godinez could not contest ShotSpotter’s accuracy or Greene’s qualifications as an expert witness. Godinez has appealed the conviction, in large part due to that ruling.

“The reliability of their technology has never been challenged in court and nobody is doing anything about it,” Gal Pissetzky, Godinez’s attorney, told Motherboard. “Chicago is paying millions of dollars for their technology and then, in a way, preventing anybody from challenging it.”

The evidence

At the core of the opposition to ShotSpotter is the lack of empirical evidence that it works—in terms of both its sensor accuracy and the system’s overall effect on gun crime.

The company has not allowed any independent testing of its algorithms, and there’s evidence that the claims it makes in marketing materials about accuracy may not be entirely scientific.

Over the years, ShotSpotter’s claims about its accuracy have increased, from 80 percent accurate to 90 percent accurate to 97 percent accurate. According to Greene, those numbers aren’t actually calculated by engineers, though.

“Our guarantee was put together by our sales and marketing department, not our engineers,” Greene told a San Francisco court in 2017. “We need to give them [customers] a number … We have to tell them something. … It’s not perfect. The dot on the map is simply a starting point.”

In May, the MacArthur Justice Center analyzed ShotSpotter data and found that over a 21-month period 89 percent of the alerts the technology generated in Chicago led to no evidence of a gun crime and 86 percent of the alerts led to no evidence a crime had been committed at all.

[…]

Meanwhile, a growing body of research suggests that ShotSpotter has not led to any decrease in gun crime in cities where it’s deployed, and several customers have dropped the company, citing too many false alarms and the lack of return on investment.

[…]

a 2021 study by New York University School of Law’s Policing Project that determined that assaults (which include some gun crime) decreased by 30 percent in some districts in St. Louis County after ShotSpotter was installed. The study authors disclosed that ShotSpotter has been providing the Policing Project unrestricted funding since 2018, that ShotSpotter’s CEO sits on the Policing Project’s advisory board, and that ShotSpotter has previously compensated Policing Project researchers.

[…]

Motherboard recently obtained data demonstrating the stark racial disparity in how Chicago has deployed ShotSpotter. The sensors have been placed almost exclusively in predominantly Black and brown communities, while the white enclaves in the north and northwest of the city have no sensors at all, despite Chicago police data that shows gun crime is spread throughout the city.

Community members say they’ve seen little benefit from the technology in the form of less gun violence—the number of shootings in 2021 is on pace to be the highest in four years—or better interactions with police officers.

[…]

Source: Police Are Telling ShotSpotter to Alter Evidence From Gunshot-Detecting AI

QR Menu Codes Are Tracking You More Than You Think

If you’ve returned to the restaurants and bars that have reopened in your neighborhood lately, you might have noticed a new addition to the post-quarantine decor: QR codes. Everywhere. And as they’ve become more ubiquitous on the dining scene, so has the quiet tracking and targeting that they do.

That’s according to a new analysis by the New York Times, which found these QR codes have the ability to collect customer data—enough to create what Jay Stanley, a senior policy analyst at the American Civil Liberties Union, called an “entire apparatus of online tracking” that remembers who you are every time you sit down for a meal. While the data itself contains pretty uninteresting information, like your order history or contact information, it turns out there’s nothing stopping that data from being passed to whomever the establishment wants.

[…]

But as the Times piece points out, these little pieces of tech aren’t as innocuous as they might initially seem. Aside from storing data like menus or drink options, QR codes are often designed to transmit certain data about the person who scanned them in the first place—like their phone number or email address, along with how often the user might be scanning the code in question. This data collection comes with a few perks for the restaurants that use the codes (they know who their repeat customers are and what they might order). The only problem is that we don’t actually know where that data goes.

Source: QR Menu Codes Are Tracking You More Than You Think

Note for ant fuckers: the QR code does not in fact “transmit” anything – the server behind it detects that you have visited it (if you follow a URL in the code) and then collects data based on what you do on the server, but also on the initial connection (e.g. location through IP address, URL parameters which can include location information, OS, browser type, etc. etc. etc.).
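
For illustration, a menu QR code typically just encodes a URL, and any identifying parameters are baked into that URL by whoever generated the code. A purely hypothetical example:

https://menu.example-vendor.com/venue/1234?table=7&utm_source=qr&utm_medium=table-sticker

Everything after the “?” is chosen by the operator; the rest (IP address, browser, OS) comes along for free with the HTTP request itself.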

Want unemployment benefits in the US? You may have to submit to facial recognition with a little known company ID.me

[…]

Watkins, a self-described privacy advocate whose mother and grandmother shredded personal information when he was growing up, said he is unwilling to complete the identity verification process his state now requires, which includes having his face analyzed by a little-known company called ID.me.

He sent a sharply worded letter to his state’s unemployment agency criticizing ID.me’s service, saying he would not take part in it given his privacy concerns. In response, he received an automated note from the agency: “If you do not verify your identity soon, your claim will be disqualified and no further benefit payments will be issued.” (A spokesperson for the Colorado Department of Labor and Employment said the agency only allows manual identity verification “as a last resort” for unemployment claimants who are under 18 — because ID.me doesn’t work with minors — and those who have “technological barriers.”)

[…]

Watkins is one of millions across the United States who are being instructed to use ID.me, along with its facial recognition software, to get their unemployment benefits. A rapidly growing number of US states, including Colorado, California and New York, turned to ID.me in hopes of cutting down on a surge of fraudulent claims for state and federal benefits that cropped up during the pandemic alongside a tidal wave of authentic unemployment claims.

As of this month, 27 states’ unemployment agencies had entered contracts with ID.me, according to the company, with 25 of them already using its technology. ID.me said it is in talks with seven more. ID.me also verifies user identities for numerous federal agencies, such as the Department of Veterans Affairs, Social Security Administration and IRS.

[…]

The face-matching technology ID.me employs comes from a San Francisco-based startup called Paravision.

[…]

Facial recognition technology, in general, is contentious. Civil rights groups frequently oppose it for privacy issues and other potential dangers. For instance, it has been shown to be less accurate when identifying people of color, and several Black men, at least, have been wrongfully arrested due to the use of facial recognition. It’s barely regulated — there are no federal laws governing its use, though some states and local governments have passed their own rules to limit or prohibit its use. Despite these concerns, the technology has been used across the US federal government, as a June report from the Government Accountability Office showed.

Several ID.me users told CNN Business about problems they had verifying their identities with the company, which ranged from the facial recognition technology failing to recognize their face to waiting for hours to reach a human for a video chat after encountering problems with the technology. A number of people who claim to have had issues with ID.me have taken to social media to beg the company for help with verification, express their own concerns about its face-data collection or simply rant, often in response to ID.me’s own posts on Twitter. And some, like Watkins, are simply frustrated not to have a say in the matter.

[…]

ID.me said it does not sell user data — which includes biometric and related information such as selfies people upload, data related to facial analyses, and recordings of video chats users participate in with ID.me — but it does keep it. Biometric data, like the facial geometry produced from a user’s selfie, may be kept for years after a user closes their account.

Hall said ID.me keeps this information only for auditing purposes, particularly for government agencies in cases of fraud or identity theft. Users, according to its privacy policy, can ask ID.me to delete personally identifiable information it has gathered from them, but the company “may keep track of certain information if required by law” and may not be able to “completely delete” all user information since it “periodically” backs up such data. (As Ryan Calo, codirector of the University of Washington’s Tech Policy Lab, put it, this data retention policy is “pretty standard,” but, he added, that “doesn’t make it great!”)

[…]

Beyond state unemployment agencies, ID.me is also becoming more widespread among federal agencies such as the IRS, which in June began using ID.me to verify identities of people who want to use its Child Tax Credit Update Portal.

“We’re verifying more than 1% of the American adult population each quarter, and that’s starting to compress more to like 45 or 50 days,” Hall said. The company has more than 50 million users, he said, and signs up more than 230,000 new ones each day.

[…]

Vasquez said that, when a state chooses to use a tool it knows has a tendency to not work as well on some people, she thinks that “starts to invade something more than privacy and get at questions of what society values and how it values different members’ work and what our society believes about dignity.”

Hall claims ID.me’s facial recognition software is over 99% accurate and said an internal test conducted on hundreds of faces of people who had failed to pass the facial recognition check for logging in to the social security website did not show statistically significant evidence of racial bias.

In cases where users are able to opt out of the ID.me process, it can still be arduous and time-consuming: California’s Employment Development Department website, for instance, instructs people who can’t verify their identity via ID.me when applying online to file their claim over the phone or by mail or fax.

Most people aren’t doing this, however; it’s time-consuming to deal with snail mail or wade through EDD’s phone system, and many people don’t have access to a fax machine. An EDD spokesperson said that such manual identity verification, which used to be a “significant” part of EDD’s backlog, now accounts for “virtually none” of it.

Long wait times for some

Eighty-five percent of people are able to verify their identity with ID.me immediately for state workforce agencies without needing to go through a video chat, Hall said.

What happens to the remaining 15% worries Akselrod, of the ACLU, since users must have access to a device with a camera — like a smartphone or computer — as well as decent internet access. According to recent Pew research, 15% of American adults surveyed don’t have a smartphone and 23% don’t have home broadband.

“These technologies may be inaccessible for precisely the people for whom access to unemployment insurance is the most critical,” Akselrod said.
[…]

Source: Want your unemployment benefits? You may have to submit to facial recognition first – CNN

What this excellent article doesn’t go into is what a terrible idea huge centralised databases are, especially ones filled with the biometric information (which you can’t change) of an entire population.

Commission starts legal action against 23 EU countries over copyright rules they won’t implement that favour big tech over small business and forced censorship

EU countries may be taken to court for their tardiness in enacting landmark EU copyright rules into national law, the European Commission said on Monday as it asked the group to explain the delays.

The copyright rules, adopted two years ago, aim to ensure a level playing field between the European Union’s trillion-euro creative industries and online platforms such as Google, owned by Alphabet (GOOGL.O), and Facebook (FB.O).

Note: level if you are one of the huge tech giants, not so much if you’re a small business or startup – in fact, this makes it very very difficult for startups to enter some sectors at all.

Some of Europe’s artists and broadcasters, however, are still not happy, in particular over the interpretation of a key provision, Article 17, which is intended to force sharing platforms such as YouTube and Instagram to filter copyrighted content.

[…]

The EU executive also said it had asked France, Spain and 19 other EU countries to explain why they missed a June 7 deadline to enact separate copyright rules for online transmission of radio and TV programmes.

The other countries are Austria, Belgium, Bulgaria, Croatia, Cyprus, the Czech Republic, Estonia, Greece, Finland, Ireland, Italy, Lithuania, Luxembourg, Latvia, Poland, Portugal, Romania, Slovenia and Slovakia.

Source: Commission starts legal action against 23 EU countries over copyright rules | Reuters

For more information see:
Article 11, Article 13: EU’s Dangerous Copyright Bill Advances: massive censorship and upload filters (which are impossible) and huge taxes for links.

European Commission Betrays Internet Users By Cravenly Introducing Huge Loophole For Copyright Companies In Upload Filter Guidance

EU Copyright Companies Want Legal Memes Blocked Too Because They Now Admit Upload Filters Are ‘Practically Unworkable’

Wow, the EU actually voted to break the internet for big business copyright gain

Anyway, well done those 23 countries for fighting for freedom of expression and going against big tech and non-democratic authoritarianism in Europe.

Japanese Police Arrest Man For Selling Modded Save Files For Single-Player Nintendo Game

Japan’s onerous Unfair Competition Prevention Law has created what looks from here like a massive overreach on the criminalization of copyright laws. Past examples include Japanese journalism executives being arrested over a book that tells people how to back up their own DVDs, along with more high-profile cases in which arrests occurred over the selling of cheats or exploits in online multiplayer video games. While these too seem like an overreach of copyright law, or at least an over-criminalization of relatively minor business problems facing electronic media companies, they are nothing compared with the idea that a person could be arrested and face jail time for the crime of selling modded save-game files for a single-player game like The Legend of Zelda: Breath of the Wild.

A 27-year-old man in Japan was arrested after he was caught attempting to sell modified Zelda: Breath of the Wild save files.

As reported by the Broadcasting System of Niigata (and spotted by Dextro), Ichimin Sho was arrested on July 8 after he posted about modified save files for the Nintendo Switch version of Breath of the Wild. He advertised his services on an unspecified auction site, describing them as “the strongest software.” He would provide modded save files that gave the player improved in-game abilities, and items that were difficult to obtain were made available on request. In his original listing, he reportedly charged 3,500 yen (around $31 USD) for his service.

Upon arrest, Sho admitted that he’s made something like $90k over 18 months selling modded saves and software. Whatever his other ventures, the fact remains that Sho was arrested for selling modded saves for this one Zelda game to the public. And this game is entirely a single-player game. In other words, there is no aspect of this arrest that involved staving off cheating in online multiplayer games, which is one of the concerns that has typically led to these arrests in Japan within the gaming industry.

[…]

Source: Japanese Police Arrest Man For Selling Modded Save Files For Single-Player Nintendo Game | Techdirt

Google fined €500m for not paying French publishers after copying their text in search results

Google was fined €500m ($590m, £425m) by the French Competition Authority on Tuesday for failing to negotiate fees with news publishers for using their content.

In April last year, the regulator ruled the American search giant had to compensate French publishers for using snippets of their articles in Google News, citing European antitrust rules and copyright law. Google was given three months to figure out how much to pay publishers. More than a year later, no licensing deals have been struck, and Google did not “enter into negotiations in good faith,” we’re told. For one thing, it just stopped including snippets from French publishers in all Google services.

[…]

Now, the FCA has sanctioned the Chocolate Factory €500m and has given it two months to negotiate with French publishers. If the web giant continues to dilly-dally after this point, it’ll be fined up to €900,000 (over $1m or around £767,000) a day until it complies with the FCA’s demands.

[…]

Source: Google fined €500m for not paying French publishers after using their words on web • The Register

Inside the Industry That Unmasks People at Scale: yup your mobile advertising ID isn’t anonymous either

Tech companies have repeatedly reassured the public that trackers used to follow smartphone users through apps are anonymous or at least pseudonymous, not directly identifying the person using the phone. But what they don’t mention is that an entire overlooked industry exists to purposefully and explicitly shatter that anonymity.

They do this by linking mobile advertising IDs (MAIDs) collected by apps to a person’s full name, physical address, and other personally identifiable information (PII). Motherboard confirmed this by posing as a potential customer to a company that offers to link MAIDs to PII.

“If shady data brokers are selling this information, it makes a mockery of advertisers’ claims that the truckloads of data about Americans that they collect and sell is anonymous,” Senator Ron Wyden told Motherboard in a statement.

“We have one of the largest repositories of current, fresh MAIDS<>PII in the USA,” Brad Mack, CEO of data broker BIGDBM told us when we asked about the capabilities of the product while posing as a customer. “All BIGDBM USA data assets are connected to each other,” Mack added, explaining that MAIDs are linked to full name, physical address, and their phone, email address, and IP address if available. The dataset also includes other information, “too numerous to list here,” Mack wrote.

A MAID is a unique identifier a phone’s operating system assigns to each user’s device. For Apple, that is the IDFA, which Apple has recently moved to largely phase out. For Google, that is the AAID, or Android Advertising ID. Apps often grab a user’s MAID and provide it to a host of third parties. In one leaked dataset from a location tracking firm called Predicio previously obtained by Motherboard, the data included the precise locations of users of a Muslim prayer app. That data was somewhat pseudonymized, because it didn’t contain the specific users’ names, but it did contain their MAIDs. Because of firms like BIGDBM, another company that buys the sort of data Predicio had could take that or similar data and attempt to unmask the people in the dataset simply by paying a fee.
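
For reference, a MAID is just a UUID-format string tied to the device; Google’s developer documentation has used the example advertising ID below. On its own it looks perfectly anonymous, which is exactly what the linking services described here undo:

38400000-8cf0-11bd-b23e-10b96e40000d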

[…]

“This real-world research proves that the current ad tech bid stream, which reveals mobile IDs within them, is a pseudonymous data flow, and therefore not-compliant with GDPR,” Edwards told Motherboard in an online chat.

“It’s an anonymous identifier, but has been used extensively to report on user behaviour and enable marketing techniques like remarketing,” a post on the website of the Internet Advertising Bureau, a trade group for the ad tech industry, reads, referring to MAIDs.

In April, Apple launched iOS 14.5, which introduced sweeping changes to how apps can track phone users by making each app explicitly ask for permission to track them. That move has resulted in a dramatic dip in the amount of data available to third parties, with just 4 percent of U.S. users opting in. Google said it plans to implement a similar opt-in measure broadly across the Android ecosystem in early 2022.

[…]

Source: Inside the Industry That Unmasks People at Scale

Samsung Washing Machine App Requires Access to Your Contacts and Location

A series of Samsung apps that allow customers to control their internet-connected appliances require access to all the phone’s contacts and, in some cases, the phone call app, phone’s location, and camera. Customers have been furious about this for years.

On Wednesday, a Reddit user complained that their washing machine app, the Samsung Smart Washer, wouldn’t work “unless I give it access to my contacts, location and camera.”

This is a common complaint.

[…]

These situations speak to two issues: apps that demand permissions they don’t need, and “smart” internet-of-things devices that make formerly simple tasks very complicated and open up potential privacy and security concerns.

Generally speaking, over the last few years, people have become more sensitive to what they’re giving up in privacy and potentially security when they deal with big tech companies. Smart TVs (Samsung included), for example, have been caught listening to users and automatically delivering ads. Tech companies have had to adapt and do better. For example, both Apple and Google allow users to see what data an app has access to, and in some cases users can toggle the permissions individually. The upcoming new version of Android will even have a dedicated “Privacy Dashboard” where users can see which apps used what permissions, and revoke them if they want. Apple’s iOS has similar functionality. But none of this stops app developers from asking users to accept unnecessary permissions.

It’s unclear why apps that are designed to let you set the type of washing cycle you want, or see how long it’s gonna take for the dryer to be done, would need access to your phone’s contacts. In an FAQ for another Samsung app, the company says it needs access to contacts “to check if you already have a Samsung account set up in your device. Knowing this information helps mySamsung to make the sign-in process seamless.”

[…]

Source: Samsung Washing Machine App Requires Access to Your Contacts and Location

DRM Strikes Again: Ubisoft Makes Its Own Game Unplayable By Shutting Down DRM Server

DRM has been shown time after time to be almost no hindrance whatsoever to those seeking to pirate video games, but it has done an excellent job of hindering those who actually bought a game in playing what they’ve bought. Ubisoft, in particular, has had issues with this over the years, with DRM servers failing and preventing customers from playing games that can no longer ping the DRM server.

And while those instances involved unforeseen downtime or migrations impacting customers’ ability to play their games, this time it turns out that Ubisoft simply stopped supporting the DRM server for Might and Magic X-Legacy. And now basically everyone is screwed.

Last month, Ubisoft decided to end online support for a bunch of older games, but in doing so also brought down the DRM servers for Might & Magic X: Legacy, meaning players couldn’t access the game’s single-player content or DLC.

As Eurogamer reports, fans were not happy, having to cobble together an unofficial workaround to be able to continue playing past a certain point in the single-player. But instead of Ubisoft taking the intervening weeks to release something official to fix this, or reversing their original move to shut down the game’s DRM servers, they’ve decided to do something else.

They have simply removed the game for sale on Steam.

This, of course, does nothing for the people who already bought the game and now suddenly cannot progress through it completely, as all the DLC is non-functional. They can play the game up to a point, but then it just doesn’t work.

There are multiple bad actions on Ubisoft’s part here. First, using DRM like this is a terrible idea with almost no good consequences. But once it’s in use, you would think it would be the obligation of the company to ensure any changes it makes on its end don’t suddenly render purchases made by its customers unplayable. In other words, rather than ending support for a DRM server that nixes parts of a paid-for game, the company could have rolled out patches to remove the DRM completely so that none of this happened. After all, with the game no longer even available as a new purchase, what would be the harm in removing the DRM? And, of course, there’s the total lack of communication to Ubisoft customers about basically all of this.

Which is what has people so understandably pissed.
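
To make the argument concrete, here’s a hypothetical Kotlin sketch of the pattern at issue, emphatically not Ubisoft’s actual code: an online licence gate that fails closed once the server disappears, next to the trivial patch a publisher could ship when decommissioning it.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Online gate: content unlocks only if the vendor's server answers.
// Retire the server and every paying customer fails this check.
fun drmCheckOnline(endpoint: String): Boolean = try {
    val conn = URL(endpoint).openConnection() as HttpURLConnection
    conn.connectTimeout = 5_000
    conn.responseCode == 200
} catch (e: Exception) {
    false // server unreachable, so the DLC refuses to load
}

// What a decommissioning patch could ship instead: the same entry
// point with the network dependency removed.
fun drmCheckPatched(): Boolean = true
```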

Source: DRM Strikes Again: Ubisoft Makes Its Own Game Unplayable By Shutting Down DRM Server | Techdirt

Audacity users stick the knife – and fork – in to strip audio editor of unwanted features and govt / police spyware

Contributors disgruntled with the recent direction of cross-platform FOSS audio software Audacity are forking the sound editor to a version that does not have the features or requirements that have upset some in the community.

One such project can be found on GitHub, with user “cookiengineer” proclaiming themselves “evil benevolent temporary dictator” in order to get the ball rolling.

“Being friendly seemed to have invited too many trolls,” observed the engineer, “and we must stop this behaviour.”

Presumably that refers to the trolling rather than being friendly. And goodness, the project has had something of a baptism of fire in recent hours, as a number of 4chan users elected to launch a raid on it.

This is why we can’t have nice things.

The project is blunt with regard to the causes of the fork – Audacity’s privacy policy updates, the contributor licence agreement, and the furore over introducing telemetry have all played a part.

[…]

Source: Audacity users stick the knife – and fork – in to strip audio editor of unwanted features • The Register

Sam Altman’s New Startup Wants to Give You Crypto for Eyeball Scans – yes, this is a terrible Dr. Evil plan

You should probably sit down for this one. Sam Altman, the former CEO of famed startup incubator Y Combinator, is reportedly working on a new cryptocurrency that’ll be distributed to everyone on Earth. Once you agree to have your eyeballs scanned.

Yes, you read correctly.

You can thank Bloomberg for inflicting this cursed news on the rest of us. In its report, Bloomberg says Altman’s forthcoming cryptocurrency and the company behind it, both dubbed Worldcoin, recently raised $25 million from investors. The company is purportedly backed by Andreessen Horowitz, LinkedIn founder Reid Hoffman, and Day One Ventures.

“I’ve been very interested in things like universal basic income and what’s going to happen to global wealth redistribution and how we can do that better,” Altman told Bloomberg, explaining what fever dream inspired this.

[…]

What supposedly makes Worldcoin different is it adds a hardware component to cryptocurrency in a bid to “ensur[e] both humanness and uniqueness of everybody signing up, while maintaining their privacy and the overall transparency of a permissionless blockchain.” Specifically, Bloomberg says the gadget is a portable “silver-colored spherical gizmo the size of a basketball” that’s used to scan people’s irises. It’s undergoing testing in some cities, and since Worldcoin is not yet ready for distribution, the company is giving volunteers other cryptocurrencies like Bitcoin in exchange for participating. There are supposedly fewer than 20 prototypes of this eyeball scanning orb, and currently, each reportedly costs $5,000 to make.

Supposedly the whole iris scanning thing is “essential” as it would generate a “unique numerical code” for each person, thereby discouraging scammers from signing up multiple times. As for the whole privacy problem, Worldcoin says the scanned image is deleted afterward and the company purportedly plans to be “as transparent as possible.”
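
Mechanically, the dedup idea is simple enough to sketch. Here’s a hedged Kotlin illustration of the concept only; real iris templates are noisy, so an actual system would need fuzzy matching or an error-tolerant encoding rather than a plain hash:

```kotlin
import java.security.MessageDigest

// Illustrative only: keep a one-way digest of the iris template,
// never the image, and reject a second signup with the same digest.
object SignupRegistry {
    private val seen = HashSet<String>()

    fun register(irisTemplate: ByteArray): Boolean {
        val code = MessageDigest.getInstance("SHA-256")
            .digest(irisTemplate)
            .joinToString("") { "%02x".format(it) }
        return seen.add(code) // false means this iris already signed up
    }
}
```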

Source: Sam Altman’s New Startup Wants to Give You Crypto for Eyeball Scans

Advertisers Are Selling Americans’ Data to Hundreds of Shady Foreign Businesses

Senator Ron Wyden has released a list of hundreds of secretive, foreign-owned companies that are buying up Americans’ data. Some of the customers include companies based in states that are ostensibly “unfriendly” to the U.S., like Russia and China.

First reported by Motherboard, the news comes after recent information requests made by a bipartisan coalition of Senators, who asked prominent advertising exchanges to provide a transparent list of any “foreign-headquartered or foreign-majority owned” firms to whom they sell consumer “bidstream data.” Such data is typically collected, bought, and sold amidst the intricate advertising ecosystem, which uses “real-time bidding” to monetize consumer preferences and interests.

Wyden, who helped lead the effort, has expressed concerns that Americans’ data could fall into the hands of foreign intelligence agencies to “supercharge hacking, blackmail, and influence campaigns,” as a previous letter from him and other Senators puts it.

“Few Americans realize that some auction participants are siphoning off and storing ‘bidstream’ data to compile exhaustive dossiers about them. In turn, these dossiers are being openly sold to anyone with a credit card, including to hedge funds, political campaigns, and even to governments,” the letter states.

In response to the information requests, most companies seem to have responded with vague, evasive answers. However, advertising firm Magnite has provided a list of over 150 different companies it sells to while declining to note which countries they are based in. Wyden’s staff spent time researching the companies and Motherboard reports that the list includes the likes of Adfalcon—a large ad firm based in Dubai that calls itself the “first mobile advertising network in the Middle East”—as well as Chinese companies like Adtiming and Mobvista International.

Magnite’s response further shows that the kinds of data it provides to these companies may include all sorts of user information—including age, name, and the site names and domains they visit, device identifiers, IP address, and other information that would help any discerning observer piece together a fairly comprehensive picture of who you are, where you’re located, and what you’re interested in.
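
To picture what a single bidstream record might carry, here’s a simplified, illustrative Kotlin model built from the fields named above; it is not Magnite’s actual schema:

```kotlin
// Roughly the shape of one bidstream record, per the fields listed above.
data class BidstreamRecord(
    val deviceId: String,    // mobile advertising ID
    val ip: String,          // already a coarse location on its own
    val siteDomain: String,  // what you were reading when the ad was auctioned
    val age: Int?,
    val name: String?,
    val lat: Double?,        // sometimes precise geolocation
    val lon: Double?,
)

// Every auction participant receives records like this, win or lose,
// which is what makes "siphoning off" the stream so easy.
fun siphon(record: BidstreamRecord, dossier: MutableList<BidstreamRecord>) {
    dossier += record
}
```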

You can peruse the full list of companies that Magnite works with and, foreign ownership aside, they just naturally sound creepy. With confidence-inspiring names like “12Mnkys,” “Freakout,” “CyberAgent Dynalst,” and “Zucks,” these firms—many of which you’d be hard-pressed to even find an accessible website for—are doing God knows what with the data they procure.

The question naturally arises: How is it that these companies that we know literally nothing about seem to have access to so much of our personal information? Also: Where are the federal regulations when you need them?

Source: Advertisers Are Selling Americans’ Data to Hundreds of Shady Foreign Businesses

And that’s why Europe has GDPR

Microsoft exec: Targeting of Americans’ records ‘routine’

Federal law enforcement agencies secretly seek the data of Microsoft customers thousands of times a year, according to congressional testimony Wednesday by a senior executive at the technology company.

Tom Burt, Microsoft’s corporate vice president for customer security and trust, told members of the House Judiciary Committee that federal law enforcement in recent years has been presenting the company with between 2,400 and 3,500 secrecy orders a year, or about seven to 10 a day.

“Most shocking is just how routine secrecy orders have become when law enforcement targets an American’s email, text messages or other sensitive data stored in the cloud,” said Burt, describing the widespread clandestine surveillance as a major shift from historical norms.

[…]

Brad Smith, Microsoft’s president, called for an end to the overuse of secret gag orders, arguing in a Washington Post opinion piece that “prosecutors too often are exploiting technology to abuse our fundamental freedoms.” Attorney General Merrick Garland, meanwhile, has said the Justice Department will abandon its practice of seizing reporter records and will formalize that stance soon.

[…]

Burt said that while the revelation that federal prosecutors had sought data about journalists and political figures was shocking to many Americans, the scope of surveillance is much broader. He criticized prosecutors for reflexively seeking secrecy through boilerplate requests that “enable law enforcement to just simply assert a conclusion that a secrecy order is necessary.”

[…]

As possible solutions, Burt said, the government should end indefinite secrecy orders and should also be required to notify the target of the data demand once the secrecy order has expired.

Just this week, he said, prosecutors sought a blanket gag order affecting the government of a major U.S. city for a Microsoft data request targeting a single employee there.

“Without reform, abuses will continue to occur and they will occur in the dark,” Burt said.

Source: Microsoft exec: Targeting of Americans’ records ‘routine’

Dutch Supreme Court disallows Dutch FilmWorks from forcing ISPs to give out personal details of potential movie downloaders

As expected, the Dutch Supreme Court rejected the appeal in cassation by Dutch FilmWorks (DFW). The country’s highest court followed the reasoning of the Procurator General, who had previously issued an advisory opinion on the case.

DFW announced in 2015 that it would take enforcement action against people who illegally download films; the matter was widely publicized. DFW wanted to pursue individual users and possibly even fine them, and it engaged an outside company to collect the relevant IP addresses, data collection for which it also received permission. In order to approach those users, however, DFW needed their names and addresses, which are known only to internet providers. Ziggo refused to provide that information. The lower court ruled against Dutch FilmWorks, and the Supreme Court sees no reason to annul the earlier judgment.

Source: Dutch FilmWorks case fails at the Supreme Court [Zaak Dutch Filmworks strandt bij Hoge Raad] – Emerce

Windows Users Surprised by Windows 11’s Short List of Supported CPUs – and front-facing camera requirements

While a lot of focus has been on the TPM requirements for Windows 11, Microsoft has since updated its documentation to provide a complete list of supported processors. At present the list includes only Intel 8th Generation Core processors or newer, and AMD Ryzen Zen+ processors or newer, effectively limiting Windows 11 to PCs less than 4 to 5 years old.

Notably absent from the list is the Intel Core i7-7820HQ, the processor used in Microsoft’s current flagship $3500+ Surface Studio 2. This has prompted many threads on Reddit from users angry that their (in some cases very new) Surface PC is failing the Windows 11 upgrade check.

The Verge confirms: Windows 11 will only support 8th Gen and newer Intel Core processors, alongside [Intel’s 2016-era] Apollo Lake and newer Pentium and Celeron processors. That immediately rules out millions of existing Windows 10 devices from upgrading to Windows 11… Windows 11 will also only support AMD Ryzen 2000 and newer processors, and 2nd Gen or newer [AMD] EPYC chips. You can find the full list of supported processors on Microsoft’s site…

Originally, Microsoft noted that CPU generation requirements were a “soft floor” limit for the Windows 11 installer, which should have allowed some older CPUs to install Windows 11 with a warning, but hours after we published this story, the company updated that page to explicitly require the list of chips above.
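
For what it’s worth, the difference between a “soft floor” and a hard requirement is a one-line change. An illustrative Kotlin sketch, emphatically not Microsoft’s installer logic:

```kotlin
// Soft floor: warn but allow. Hard requirement: block outright.
fun cpuGate(cpuModel: String, supported: Set<String>, softFloor: Boolean): Boolean {
    if (cpuModel in supported) return true
    if (softFloor) {
        println("Warning: $cpuModel is below the supported floor; upgrading is not advised.")
        return true  // install proceeds with a warning
    }
    return false     // what the updated policy does: the installer refuses
}
```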

Many Windows 10 users have been downloading Microsoft’s PC Health App (available here) to see whether Windows 11 works on their systems, only to find it fails the check… This is the first significant shift in Windows hardware requirements since the release of Windows 8 back in 2012, and the CPU changes are understandably catching people by surprise.

Microsoft is also requiring a front-facing camera for all Windows 11 devices except desktop PCs from January 2023 onwards.

“In order to run Windows 11, devices must meet the hardware specifications,” explains Microsoft’s official compatibility page for Windows 11.

“Devices that do not meet the hardware requirements cannot be upgraded to Windows 11.”

Source: Windows Users Surprised by Windows 11’s Short List of Supported CPUs – Slashdot

Why on earth should Microsoft require that it can look at you?!

Ubisoft Takes Down Fan’s Incredible Far Cry 5 ‘GoldenEye’ Maps

For the past few years, a YouTuber known as Krollywood has painstakingly recreated every level from GoldenEye 007 inside the level editor of Far Cry 5. This week, Ubisoft removed all of those levels from Far Cry 5 due to a copyright infringement claim.

Kotaku first reported on Krollywood’s efforts earlier this month. Over the course of three years, in an endeavor that tallied more than 1,400 hours, Krollywood recreated every stage from GoldenEye 007, the classic N64 shooter (well, save for the two bonus levels). It was an impressive effort: a modernized recreation of a beloved yet tough-to-find old game. And it looked great, too.

Read More: Here’s GoldenEye 007 Remade From The Ground Up In Far Cry 5

You could find and play these levels yourself by hopping into Far Cry 5’s arcade mode and punching in Krollywood’s username. As of this writing, you no longer can. Ubisoft removed them all from Far Cry 5, a move that Krollywood described as “really sad,” noting that he probably won’t be able to restore them since he’s “on their radar now.”

“I’m really sad—not because of myself or the work I put in the last three years, [but] because of the players who wanna play it or bought Far Cry just to play my levels,” Krollywood told Kotaku in an email today.

When reached for comment, a representative for Ubisoft kicked over this statement:

In following the guidelines within the ‘Terms of Use’, there were maps created within Far Cry 5 arcade that have been removed due to copyright infringement claims from a right [sic] holder received by Ubisoft and are currently unavailable. We respect the intellectual property rights of others and expect our users to do the same. This matter is currently with the map’s creator and the rights holder and we have nothing further to share at this time.

Ubisoft did not immediately respond to follow-up requests asking whether the rights holder mentioned is MGM, which controls the license to the original GoldenEye 007.

The rights around the GoldenEye 007 game have been stuck in a quagmire for decades. Famously, Rare, the developer of the original game, planned a remake for the Xbox 360. That was cancelled in 2008. (Years later, Xbox boss Phil Spencer chalked up the cancellation to the legal rights issues being “challenging.”) That canned remake resurfaced as a full 4K60 longplay via a leak this January, with a playable version making the rounds online shortly after. A Kotaku report concluded: It was fun.

It is further unclear how, exactly, Krollywood’s map remakes in Far Cry 5 harm MGM at all—or how they violate Ubisoft’s terms of service in the first place. Krollywood didn’t use any assets or code from the original game. He didn’t attempt to sell it or otherwise turn a profit. And MGM doesn’t own any of the code from Ubisoft’s open-world shooter.

[Images: a sampling of Krollywood’s efforts; corpses representing every attempt to play GoldenEye 007 in any format other than the original game; remade levels that stoke major wanderlust. Image credit: Krollywood / Ubisoft]

Players just want a taste of nostalgia, and MGM has a track record of shattering the plates before they’re even delivered to the table. (Recall GoldenEye 25, the fan remake of GoldenEye 007 remade entirely in Unreal 4 that was lawyered into oblivion last year.) MGM has further neglected to do anything with the license it’s sitting on—for a game that’s older than the Game Boy Color, by the way. At the end of the day, shooting this latest fan-made project out of the sky comes across as a punitive move, at best.

“In the beginning, I started this project just for me and my best friend, because we loved the original game so much,” Krollywood said. “But there are many GoldenEye fans out there … [The project] found many new fans and I’m so happy about it.”

Source: Ubisoft Takes Down Fan’s Incredible Far Cry 5 ‘GoldenEye’ Maps

Bah. Humbug.