The Linkielist

Linking ideas with the world

Samsung has a larger, stretchier OLED display that looks a bit like a rubber sheet

Hot on the heels of LG’s “Real Folding Window” showcase, Samsung is taking its moment to shine with a new stretchable OLED display demo.

At this week’s Global Tech Korea 2021, Samsung presented an impressive 13-inch OLED panel that appears to stretch to varying degrees. The panel displayed a video of flowing lava while different sections rose and fell to mimic the flow, adding another level of 3D immersion to the content.

According to Changhee Lee, executive vice president of Samsung Display, the degree to which stretchable displays could be deformed “was about 5% in the past, but now it has improved significantly.” He went on to suggest that the company plans to use this technology in future products like rollable smartphones and more (via ETNews).

This isn’t the first time Samsung has showcased impressive display technology aimed at future form factors. Earlier this year, the company presented a video showing off display concepts like a slideable smartphone, a display that folds in two parts, and folding tablets. The company also showed off an earlier concept of its stretchable display technology back in 2017, although that panel was smaller at 9.1 inches.

[…]

Source: Samsung’s larger, stretchier OLED display technology is creepy and cool | Android Central

Bing Search Results Erases Images Of ‘Tank Man’ On Anniversary Of Tiananmen Square Crackdown (2021)

On the 32nd anniversary of the Tiananmen Square protests, internet users noticed Microsoft’s Bing search engine was producing some interesting results. Or, rather, it wasn’t producing expected search results for some possibly interesting reasons.

Users searching for the most iconic image of the protests — that of the unidentified person known only as “Tank Man” — were coming up empty. It appeared that Microsoft’s search engine was blocking results for an image that often serves as shorthand for rebellion against the Chinese government.

As reported first by several web users and then by several news outlets, the apparent blocking of search results could be observed in both the United States and the United Kingdom. This left users with the impression that the Chinese government had pressured Microsoft to moderate search results for “tank man” in hopes of reducing any remembrance of the Tiananmen Square Massacre, which resulted in the deaths of 2,500-3,500 protesters.

The apparent censorship was blamed on Microsoft’s close relationship with the Chinese government, which allowed its search engine to be accessed by Chinese residents in exchange for complying with government censorship requests.

This led to Microsoft being criticized by prominent politicians for apparently allowing the Chinese government to dictate what users around the world could access in relation to the Tiananmen Square protests.

[…]

Shortly after the apparent censorship of the iconic “Tank Man” image was reported, Microsoft claimed the very timely removal of relevant search results was the byproduct of “accidental human error.”

However, the company refused to offer any additional explanation. And, while searching the term “Tank Man” produced search results in Bing, it did not generate the expected results.

Image via The Verge

Several hours after the first “fix,” things returned to normal, with “Tank Man” searches bringing up the actual Tank Man, rather than just tanks or tanks with men near or on the tanks.

Image via Twitter user Steven F

More clarification and comment was sought, but Microsoft apparently had nothing more to say about this “human error” and its conspicuous timing. Nor did it offer any details on whether or not this “human error” originated with its Beijing team. It also didn’t explain why the first fix resulted in images very few people would associate with the term “Tank Man.”

Source: Content Moderation Case Study: Bing Search Results Erases Images Of ‘Tank Man’ On Anniversary Of Tiananmen Square Crackdown (2021) | Techdirt

Marvel Files Lawsuit to Keep Iron Man, Spider-Man Rights From Creators

The families of iconic Marvel comic book writers and artists Stan Lee, Steve Ditko, Don Heck, Gene Colan, and Don Rico have filed termination of copyright notices on the superheroes they helped create. Marvel—which Disney has owned since 2009—unsurprisingly disagrees and has filed lawsuits against all five to keep the characters in the Marvel stable and making the company billions.

The Hollywood Reporter broke the news. Without trying to get into too much legalese, creators can file termination of copyright notices to reclaim rights to their work after a set amount of time, with a minimum of 35 years. Marvel’s suits argue that the characters are ineligible for copyright termination because they were made as “work-for-hire”—as in Marvel paid people to create characters for the company, meaning the company owns them outright. According to the report, if the creators’ heirs’ notices were accepted, Marvel would lose rights to characters including Iron Man, Spider-Man, Hawkeye, Black Widow, Doctor Strange, Falcon, Ant-Man, and more. One caveat is this only matters in the United States. According to THR, even if Marvel loses, Disney can continue making money off the characters everywhere else. If the heirs win, Disney would still share ownership.

Since Marvel has proactively sued to keep the copyrights to these characters, I suppose the creators’ claims have some validity to them, but as a layman, the case looks hopeless to me. Not only does the Walt Disney Company have the infinite cash reserves to keep the rights tied up for years, but there have been previous cases where Marvel creators have claimed ownership and had to settle. Additionally, the lawyer representing the heirs is Marc Toberoff, who also represented the families of Superman creators Joe Shuster and Jerry Siegel when they tried to terminate DC Comics’ rights to the Man of Steel. DC was successfully represented by Dan Petrocelli—and he’s the one who just filed the lawsuits for Marvel.

More likely, the case will ultimately be about paying people some kind of fair compensation for turning Marvel into a billion-dollar company, which Disney has no desire to do (remember, Disney has reportedly been paying creators a mere $5,000 for work it’s made those billions on). This is unfair, immoral, and purely greedy; the company has more than enough money to make all of these creators rich without coming close to denting its profits. In the best-case scenario, Disney/Marvel will give these folks as little as possible to make these legal annoyances go away early. It won’t be nearly as much as the company could and should give them, but at least it’ll be something.

Source: Marvel Files Lawsuit to Keep Iron Man, Spider-Man Rights From Creators

Even the fact that these characters are still under copyright after their original creators have died is downright ridiculous

Apple’s App Tracking Transparency Feature Doesn’t Stop Tracking

In 2014, some very pervy creeps stole some very personal iCloud photos from some very high-profile celebs and put them on the open web, creating one very specific PR crisis for Apple’s CEO, Tim Cook. The company was about to roll out Apple Pay as part of its latest software update, a process that took more than a decade bringing high-profile payment processors and retailers on board. The only issue was that nobody seemed to want their credit card details in the hands of the same company whose service had been used to steal dozens of nude photos of Jennifer Lawrence just a week earlier.

Apple desperately needed a rebrand, and that’s exactly what we got. Within days, the company rolled out a polished promotional campaign—complete with a brand new website and an open letter from Cook himself—explaining the company’s beefed-up privacy prowess, and the safeguards adopted in the wake of that leak. Apple wasn’t only a company you could trust, Cook said, it was arguably the company—unlike the other guys (*cough* Facebook *cough*) who built their Silicon Valley empires off of pawning your data to marketing companies, Apple’s business model is built off of “selling great products,” no data-mining needed.

That ad campaign’s been playing out for the last seven years, and by all accounts, it’s worked. It’s worked well enough that in 2021, we trust Apple with our credit card info, our personal health information, and most of what’s inside our homes. And when Tim Cook decried things like the “data-industrial complex” in interviews earlier this year and then rolled out a slew of iOS updates meant to give users the power they deserved, we updated our iPhones and felt a tiny bit safer.

The App Tracking Transparency (ATT) settings that came bundled in an iOS 14 update gave iPhone users everywhere the power to tell their favorite apps (and Facebook) to knock off the whole tracking thing. Saying no, Apple promised, would stop these apps from tracking you as you browse the web, and through other apps on your phone. Well, it turns out that wasn’t quite the case. The Washington Post was first to report on a research study that put Apple’s ATT feature to the test, and found the setting… pretty much useless. As the researchers put it:

In our tests of ten top-ranked apps, we found no meaningful difference in third-party tracking activity when choosing App Tracking Transparency’s “Ask App Not To Track.” The number of active third-party trackers was identical regardless of a user’s ATT choice, and the number of tracking attempts was only slightly (~13%) lower when the user chose “Ask App Not To Track”.

So, what the hell happened? In short, ATT addresses one specific (and powerful) piece of digital data that advertisers use to identify your specific device—and your specific identity—across multiple sites and services: the so-called ID for Advertisers, or IDFA. Telling an app not to track severs their access to this identifier, which is why companies like Facebook lost their minds over these changes. Without the IDFA, Facebook had no way to know whether, say, an Instagram ad translated into a sale on some third-party platform, or whether you downloaded an app because of an ad you saw in your news feed.

Luckily for said companies (but unluckily for us), tracking doesn’t start and end with the IDFA. Fingerprinting—or cobbling together a bunch of disparate bits of mobile data to uniquely identify your device—has come up as a pretty popular alternative for some major digital ad companies, which eventually led Apple to tell them to knock that shit off. But because “fingerprinting” encompasses so many different kinds of data in so many different contexts (and can go by many different names), nobody knocked anything off. And outside of one or two banned apps, Apple really didn’t seem to care.
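To make the idea concrete, here is a minimal Python sketch of what fingerprinting boils down to: hashing a handful of innocuous-looking device attributes into one stable identifier. The attribute names below are hypothetical and chosen purely for illustration; real fingerprinting SDKs pull in far more signals than this.

```python
# Minimal sketch of device fingerprinting: combine loosely identifying
# attributes into one stable identifier. Attribute names are hypothetical,
# not any real SDK's API; real fingerprinting uses many more signals.
import hashlib
import json

def fingerprint(device_attributes: dict) -> str:
    # Serialise the attributes deterministically and hash them, yielding
    # an identifier that survives the loss of the IDFA.
    canonical = json.dumps(device_attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

print(fingerprint({
    "model": "iPhone13,2",
    "os_version": "14.8",
    "timezone": "Europe/Amsterdam",
    "language": "en-NL",
    "screen": "1170x2532",
    "free_disk_mb": 23481,  # low-entropy alone, identifying in combination
}))
```

No single attribute is identifying on its own, which is exactly why this kind of tracking is so hard to police.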

[…]

Some Apple critics in the marketing world have been raising red flags for months about potential antitrust issues with Apple’s ATT rollout, and it’s not hard to see why. It gave Apple exclusive access to a particularly powerful piece of intel on all of its customers, the IDFA, while leaving competing tech firms scrambling for whatever scraps of data they can find. If all of those scraps become Apple’s sole property, too, that’s practically begging for even more antitrust scrutiny to be thrown its way. What Apple seems to be doing here is what any of us would likely do in its situation: picking its battles.

Source: Apple’s App Tracking Transparency Feature Doesn’t Stop Tracking

110,000 Affected by Epik Breach – Including Those Who Trusted Epik to Hide Their Identity as hate mongers

Epik’s massive data breach is already affecting lives. Today the Washington Post describes a real estate agent in Pompano Beach who urged buyers on Facebook to move to “the most beautiful State.” His name and personal details “were found on invoices suggesting he had once paid for websites with names such as racisminc.com, whitesencyclopedia.com, christiansagainstisrael.com and theholocaustisfake.com”. The real estate brokerage where he worked then dropped him as an agent. The brokerage’s owner told the Post they didn’t “want to be involved with anyone with thoughts or motives like that.”

“Some users appear to have relied on Epik to lead a double life,” the Post reports, “with several revelations so far involving people with innocuous day jobs who were purportedly purveyors of hate online.” (Alternate URL here.) Epik, based outside Seattle, said in a data-breach notice filed with Maine’s attorney general this week that 110,000 people had been affected nationwide by having their financial account and credit card numbers, passwords and security codes exposed…. Heidi Beirich, a veteran researcher of hate and extremism, said she is used to spending weeks or months doing “the detective work” trying to decipher who is behind a single extremist domain. The Epik data set, she said, “is like somebody has just handed you all the detective work — the names, the people behind the accounts…”

Many website owners who trusted Epik to keep their identities hidden were exposed, but some who took additional precautions, such as paying in bitcoin and using fake names, remain anonymous….

Aubrey “Kirtaner” Cottle, a security researcher and co-founder of Anonymous, declined to share information about the hack’s origins but said it was fueled by hackers’ frustrations over Epik serving as a refuge for far-right extremists. “Everyone is tired of hate,” Cottle said. “There hasn’t been enough pushback, and these far-right players, they play dirty. Nothing is out of bounds for them. And now … the tide is turning, and there’s a swell moving back in their direction.”

Earlier in the week, the Post reported: Since the hack, Epik’s security protocols have been the target of ridicule among researchers, who’ve marveled at the site’s apparent failure to take basic security precautions, such as routine encryption that could have protected data about its customers from becoming public… The hack even exposed the personal records from Anonymize, a privacy service Epik offered to customers wanting to conceal their identity.

Source: 110,000 Affected by Epik Breach – Including Those Who Trusted Epik to Hide Their Identity – Slashdot

Microsoft Exchange protocol can leak credentials in cleartext

A flaw in Microsoft’s Autodiscover protocol, used to configure Exchange clients like Outlook, can cause user credentials to leak to miscreants in certain circumstances.

The upshot is that your Exchange-connected email client may give away your username and password to a stranger, if the flaw is successfully exploited. In a report scheduled to be published on Wednesday, security firm Guardicore said it has identified a design blunder that leaks web requests to Autodiscover domains that are outside the user’s domain but within the same top-level domain (TLD).

Exchange’s Autodiscover protocol, specifically the version based on POX XML, provides a way for client applications to obtain the configuration data necessary to communicate with the Exchange server. It gets invoked, for example, when adding a new Exchange account to Outlook. After a user supplies a name, email address, and password, Outlook tries to use Autodiscover to set up the client.

As Guardicore explained in a report provided to The Register, the client parses the email address – say, user@example.com – and tries to construct a URL for the configuration data using combinations of the email domain, a subdomain, and a path string as follows:

  • https://Autodiscover.example.com/Autodiscover/Autodiscover.xml
  • http://Autodiscover.example.com/Autodiscover/Autodiscover.xml
  • https://example.com/Autodiscover/Autodiscover.xml
  • http://example.com/Autodiscover/Autodiscover.xml

If the client doesn’t receive any response from these URLs – which would happen if Exchange was improperly configured or was somehow prevented from accessing the designated resources – the Autodiscover protocol tries a “back-off” algorithm that uses Autodiscover with a TLD as a hostname, e.g.:

  • http://Autodiscover.com/Autodiscover/Autodiscover.xml

“This ‘back-off’ mechanism is the culprit of this leak because it is always trying to resolve the Autodiscover portion of the domain and it will always try to ‘fail up,’ so to speak,” explained Amit Serper, Guardicore area vice president of security research for North America, in the report. “This means that whoever owns Autodiscover.com will receive all of the requests that cannot reach the original domain.”
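Based on Guardicore’s description, a simplified Python sketch of the candidate URLs a client might try, including the flawed back-off step, could look like this. It mirrors the reported behaviour, not Microsoft’s actual client code.

```python
# Simplified sketch of the Autodiscover URL candidates described above,
# including the flawed "back-off" step. This mirrors the behaviour
# Guardicore reports, not Microsoft's implementation.
def autodiscover_candidates(email: str) -> list[str]:
    domain = email.split("@", 1)[1]  # e.g. "example.com"
    candidates = [
        f"https://Autodiscover.{domain}/Autodiscover/Autodiscover.xml",
        f"http://Autodiscover.{domain}/Autodiscover/Autodiscover.xml",
        f"https://{domain}/Autodiscover/Autodiscover.xml",
        f"http://{domain}/Autodiscover/Autodiscover.xml",
    ]
    # The "fail up" back-off: strip the left-most label and try
    # Autodiscover.<TLD>, a domain the organisation does not control.
    tld = domain.split(".", 1)[1] if "." in domain else domain
    candidates.append(f"http://Autodiscover.{tld}/Autodiscover/Autodiscover.xml")
    return candidates

for url in autodiscover_candidates("user@example.com"):
    print(url)
```

For user@example.com, the final fallback is http://Autodiscover.com/Autodiscover/Autodiscover.xml, a host that belongs to whoever registered Autodiscover.com.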

In an email to The Register, Serper said, “I believe that this was the consequence of careless, or rather, naïve design. [The] same flaws appear in other Microsoft protocols of similar functions.”

Sensing a potential problem with making credentials available to any old TLD with Autodiscover, Guardicore acquired several variations on that theme: Autodiscover.com.br, Autodiscover.com.cn, Autodiscover.com.co, Autodiscover.uk, and Autodiscover.online, among others.

After assigning these domains to its web server, Guardicore started receiving numerous requests to Autodiscover endpoints from assorted IP addresses and clients. It turns out a lot of Exchange servers and clients aren’t set up very carefully.

“The most notable thing about these requests was that they requested the relative path of /Autodiscover/Autodiscover.xml with the Authorization header already populated with credentials in HTTP basic authentication,” said Serper, who observed that web requests of this sort should not be sent blindly pre-authentication.

HTTP basic access authentication is Base64 encoded but is not encrypted, so this amounts to sending credentials in cleartext.
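A short Python example shows why: the header value below encodes the hypothetical credentials user@example.com / hunter2, and anyone who captures the request can reverse the encoding instantly.

```python
# HTTP Basic authentication only Base64-encodes "username:password";
# anyone who sees the header can recover the credentials.
import base64

header = "Basic dXNlckBleGFtcGxlLmNvbTpodW50ZXIy"  # example header value
encoded = header.split(" ", 1)[1]
username, password = base64.b64decode(encoded).decode("utf-8").split(":", 1)
print(username, password)  # user@example.com hunter2
```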

Between April 16, 2021 and August 25, 2021, Guardicore received about 649,000 HTTP requests aimed at its Autodiscover domains, 372,000 requests with credentials in basic authentication, and roughly 97,000 unique pre-authentication requests.

The credentials came from publicly traded companies in China, food makers, investment banks, power plants, energy delivery firms, real estate businesses, shipping and logistics operations, and fashion/jewelry companies.

There were also many requests that used alternatives to HTTP basic authentication, like NTLM and OAuth, that didn’t expose associated credentials immediately. To obtain access to these, Guardicore set up a downgrade attack.

So upon receiving an HTTP request with an authentication token or NTLM hash, the Guardicore server responded with an HTTP 401 with the WWW-Authenticate: basic header, which tells the client that the server only supports HTTP basic authentication. Then to make the session look legit, the company used a Let’s Encrypt certificate to prevent an SSL warning and ensure the presentation of a proper Outlook authentication prompt so potential victims enter their credentials with confidence.
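As a rough illustration of that server-side behaviour (not Guardicore’s actual tooling), a handler that answers every non-Basic request with a 401 and a WWW-Authenticate: Basic header might look like the sketch below, using Python’s standard library. The host, port, and realm are placeholders, and the article notes a valid TLS certificate is needed in practice to avoid client warnings; this sketch is plain HTTP.

```python
# Minimal sketch of the downgrade response described above: whatever
# authentication scheme the client offers, answer 401 and advertise only
# Basic auth. Illustrative only; host, port, and realm are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DowngradeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        auth = self.headers.get("Authorization", "")
        if auth.startswith("Basic "):
            # The header now carries Base64-encoded credentials
            # (see the decoding example above).
            print("client downgraded to Basic auth")
            self.send_response(200)
            self.end_headers()
        else:
            # Tell a client offering NTLM or OAuth tokens that only
            # Basic authentication is supported.
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="Autodiscover"')
            self.end_headers()

    do_POST = do_GET

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DowngradeHandler).serve_forever()
```

The lesson for client implementers is the mirror image: don’t silently comply when a server demands a weaker authentication scheme than the one you offered.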

[…]

Source: Microsoft Exchange protocol can leak credentials • The Register

Apple Confirms Fortnite Won’t Come Back to iPhones Anytime Soon

Today, Tim Sweeney confirmed on Twitter just how massive of an “L” Epic took in its recent trial against Apple. Apple has effectively “blacklisted” Fortnite from all Apple products until the legal clash between the two massive corporations reaches its conclusion, which could take as long as five years. (It’s even longer in Peely years.)

In the tweet, Sweeney posted a letter Epic had received from Apple confirming that Epic’s Apple developer account will not be reinstated, and that Epic cannot even request reinstatement until “the court’s judgement becomes final and unappealable.” That can take up to five years, according to Sweeney, who also claims that this reneges on Apple’s previous position expressed to both the court and the press. However, given that Epic is currently trying to appeal the decision, I’d argue that Apple’s reluctance to let it return to the platform makes perfect sense.

This letter reinforces the reality of this trial: both Epic and Apple resoundingly lost. There was no court order to get Fortnite back on the store, and Apple lost its ability to block payments made outside of its ecosystem. Both massive corporations lost, and all other developers will reap the rewards of Epic’s hubris.

[…]

Source: Apple Confirms Fortnite Won’t Come Back to iPhones Anytime Soon

I’m not sure Epic minds so much, considering Apple devices are only used by parents, but it sure shows how childish Apple is.

Lithuania tells citizens to throw Xiaomi mobiles away over built-in censorship functionality

In an audit it published yesterday [PDF] the agency called out Xiaomi’s Mi 10T 5G phone handset firmware for being able to censor terms such as “Free Tibet”, “Long live Taiwan independence” or “democracy movement”.

Defence Deputy Minister Margiris Abukevicius told reporters at the audit’s release: “Our recommendation is to not buy new Chinese phones, and to get rid of those already purchased as fast as reasonably possible.”

Although the censorship setting was disabled for phones sold into the manufacturer’s “European region”, the Lithuanian NCSC said (page 22):

It has been established that during the initialisation of the system applications factory-installed on a Xiaomi Mi 10T device, these applications contact a server in Singapore at the address globalapi.ad.xiaomi.com (IP address 47.241.69.153) and download the JSON file MiAdBlacklistConfig, and save this file in the metadata catalogues of the applications.

That file contained a list of more than 400 terms, including “free Tibet”, “89 Democracy Movement” (a reference to Tiananmen Square) and “long live Taiwan’s independence”.

The local security agency’s 32-page report, titled “Assessment of cybersecurity of mobile devices supporting 5G technology sold in Lithuania”, focused on devices from Xiaomi, Huawei and OnePlus.

“It is believed that this functionality allows a Xiaomi device to perform an analysis of the target multimedia content entering the phone; to search for keywords based on the MiAdBlacklist list received from the server,” said the Lithuanian report.

“Once the device determines that the content contains certain keywords, the device performs filtering of this content and the user cannot see it. The principle of data analysis allows analysis not only of words written in letters; the list that is regularly downloaded from the server can be formed in any language.”

The agency said the censorship could be remotely re-enabled at any time by Xiaomi.
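The filtering principle the report describes reduces to a simple keyword match against a downloaded list. The sketch below uses a stand-in list of three phrases taken from the report; the real MiAdBlacklistConfig file reportedly holds more than 400 terms fetched from globalapi.ad.xiaomi.com, and its exact format is not public.

```python
# Sketch of the filtering principle the NCSC report describes: content is
# checked against a downloaded list of phrases (MiAdBlacklistConfig) and
# suppressed if any phrase matches. The list below is a stand-in; the real
# file reportedly holds 400+ terms fetched from globalapi.ad.xiaomi.com.
BLACKLIST = ["free tibet", "89 democracy movement", "long live taiwan's independence"]

def is_blocked(text: str, blacklist: list[str] = BLACKLIST) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for phrase in blacklist)

print(is_blocked("Free Tibet rally announced this weekend"))  # True
print(is_blocked("Weather forecast for Vilnius"))             # False
```

As the report notes, because the list is re-downloaded regularly, the same mechanism can be pointed at any language or topic without a software update.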

Source: Lithuania tells citizens to throw Xiaomi mobiles away • The Register

Ministry of Defence: Another huge Afghanistan email blunder

The Ministry of Defence has reportedly committed a second leak of personal data, raising further questions about the ministry’s commitment to the safety of people in Afghanistan, some of whom are its own former employees.

The BBC reported overnight that the details of a further 55 Afghans  – claimed to be candidates for potential relocation – had been leaked through the classic cc-instead-of-bcc email blunder, echoing the previously reported breach of 250 Afghan interpreters’ data through a similar failure.
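For what it’s worth, the difference between the two fields is purely mechanical: addresses placed in To or Cc are written into headers every recipient can read, while Bcc addresses travel only in the delivery envelope. A small, hypothetical Python sketch (placeholder addresses and server) shows the distinction:

```python
# Addresses in To/Cc end up in headers visible to every recipient; Bcc
# recipients are only passed to the mail server in the envelope.
# All addresses and the SMTP server below are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "team@example.gov.uk"       # placeholder sender
msg["To"] = "team@example.gov.uk"
msg["Bcc"] = "recipient1@example.com, recipient2@example.com"
msg["Subject"] = "Relocation update"
msg.set_content("Please do not reply to this address.")

with smtplib.SMTP("smtp.example.com") as server:  # placeholder server
    # send_message() removes the Bcc header before transmission, so
    # recipients cannot see each other's addresses; putting the same
    # list in Cc would expose them all.
    server.send_message(msg)
```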

An MoD spokeswoman said in a statement: “We have been made aware of a data breach that occurred earlier this month by the Afghan Relocation and Assistance Policy (Arap) team. This week, the defence secretary instigated an investigation into data-handling within that team.”

A defence official has reportedly been suspended from duty, following demands from defence secretary Ben Wallace for an immediate enquiry into how the blunder happened.

After the US-led military coalition left Afghanistan, a number of local civilians employed as translators were left behind as the Taliban re-established control over the country. Some of those civilians have since been murdered for their perceived support of the Western militaries.

[…]

Source: Ministry of Defence: Another huge Afghanistan email blunder • The Register

A Stalkerware Firm Is Leaking Real-Time Screenshots of People’s Phones Online

A stalkerware company that’s designed to let customers spy on their spouses’, children’s, or employees’ devices is exposing victims’ data, allowing anyone on the internet to see screenshots of phones simply by visiting a specific URL.

The news highlights the continuing lax security practices that many stalkerware companies use; not only do these companies sometimes market their tools specifically for illegal surveillance, but the targets are re-victimized by these breaches.

[…]

The stalkerware company, called pcTattleTale, offers the malware for Windows computers and Android phones.

[…]

Security researcher Jo Coscia showed Motherboard that pcTattleTale uploads victim data to an AWS server that requires no authentication to view specific images. Coscia said they found this by using a trial version of the stalkerware. Motherboard also downloaded a copy of the trial version of pcTattleTale and verified Coscia’s findings.

The URL for images that pcTattleTale captures is constructed with the device ID—a code given by pcTattleTale to the infected device that appears to be sequentially generated—the date, and a timestamp. Theoretically, an attacker may be able to churn through different URL combinations to discover images uploaded by other infected devices.

[…]

Coscia said they used the free trial version of pcTattleTale when discovering the issue. In promotional emails, pcTattleTale said it would delete users’ data after the free trial expired. But Coscia found the screenshots were still accessible after their free trial period ended.

[…]

In one video online, Fleming said he built the code for pcTattleTale in 2003 over the better part of a year before launching it. Then he rewrote the code base when he bought out his business partner in 2012, he added. At one point Fleming complains about his server crashing because more and more people are using the service. Later on he says that pcTattleTale receives about 40,000 unique visitors a month.

“The market’s good, you know,” he said.

“To catch a cheating spouse using an android phone you will need to know their pass-code and have access to the phone for about 5 minutes. The best time to do this is when they are sleeping,” one guide on the company’s website reads. Another separate post from the company tells users how to trick their spouse into handing over their iCloud password.

[…]

Source: A Stalkerware Firm Is Leaking Real-Time Screenshots of People’s Phones Online

Apple miffed by EU’s ‘strict’ one-size-fits-all charger plan

Smartphones, tablets, and cameras sold within the European Union could be forced to adopt a single standard charging port by the middle of the decade if the latest plans from the European Commission get the go-ahead.

The proposals for a revised Radio Equipment Directive would mean that charging port and fast-charging technology would be “harmonised” across the EU with USB-C becoming the standard for all tech. Quite where this leaves Apple is open to some debate.

Plans to standardise chargers were hatched all the way back in 2011 and by 2014 MicroUSB was the connector design chosen. Vendors signed an MoU but Cupertino went its own way.

Under the EU’s latest effort, the proposal will be legally binding. A bloc-wide common charging standard was put to MEPs in January 2020 and the measure passed by 582 votes to 40, with 37 abstentions.

Today’s announcement also means that chargers would no longer be sold with gadgets and gizmos. The EU calculated seven years ago that 51,000 metric tons of electronics waste across its member states was attributed annually to old chargers, although that number seems to have fallen dramatically since.

[…]

The direction of travel, however, has flagged concerns for Apple – not for the first time – which appears displeased at being steamrolled into making changes. El Reg understands the tech giant is concerned about the impact this would have on Apple’s bottom line – er, the industry – and the waste it would create (in the short term at least).

Indeed, there are also concerns that if the rules are introduced too quickly it could mean that perfectly good tech with plenty of shelf life gets dumped prematurely.

In a statement, a spokesperson for Apple told The Reg – you heard that right – that while it “shares the European Commission’s commitment to protecting the environment,” it remains “concerned that strict regulation mandating just one type of connector stifles innovation rather than encouraging it, which in turn will harm consumers in Europe and around the world.”

Nevertheless, the EU is prepared to plough on.

[…]

Source: Apple miffed by EU’s ‘strict’ one-size-fits-all charger plan • The Register

Hackers leak 700 million-user LinkedIn data scrape from June

A collection containing data about more than 700 million users, believed to have been scraped from LinkedIn, was leaked online this week after hackers tried to sell it earlier this year in June.

The collection, obtained by The Record from a source, is currently being shared in private Telegram channels in the form of a torrent file containing approximately 187 GB of archived data.

The Record analyzed files from this collection and found the data to be authentic, with data points such as:

  • LinkedIn profile names
  • LinkedIn ID
  • LinkedIn profile URL
  • Location information (town, city, country)
  • Email addresses

While the vast majority of the data points contained in the leak are already public information and pose no threat to LinkedIn users, the leak also contains email addresses that are not normally viewable to the public on the official LinkedIn site.

[…]

Source: Hackers leak LinkedIn 700 million data scrape – The Record by Recorded Future

This Site Can Tell You If Anyone Else Has Taken Pictures With Your Camera

[…]

This website provides an avenue for investigation, and offers a sliver of hope. It’s a tiny sliver of hope to be sure, but it’s better than no hope at all.

It works like this: You upload a picture taken with the missing camera to stolencamerafinder.com, which then uses the camera’s serial number (saved in the photo’s EXIF data) to crawl the internet in search of other photos taken with that same camera. If it finds a match, you may have a lead on where your camera ended up.
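The serial number the site relies on sits in a standard EXIF field. As a rough sketch (the site’s own matching pipeline isn’t public), a recent version of Pillow can read it as shown below; cameras that only record the serial in maker-specific notes won’t be covered.

```python
# Sketch of where a camera serial number lives in a photo's EXIF data,
# using a recent Pillow release that supports Exif.get_ifd(). Some cameras
# only store the serial in proprietary maker notes, which this ignores.
from PIL import Image

EXIF_IFD = 0x8769            # pointer to the Exif sub-IFD
BODY_SERIAL_NUMBER = 0xA431  # standard EXIF tag for the camera body serial

def camera_serial(path: str):
    exif = Image.open(path).getexif()
    return exif.get_ifd(EXIF_IFD).get(BODY_SERIAL_NUMBER)

print(camera_serial("holiday_photo.jpg"))  # placeholder filename
```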

From there, you can try to track down and contact the “new owner” via email to request your camera’s return, file a report with the authorities, or devote your life to hunting the thief yourself, John Wick style.

None of these options is likely to result in the return of your Nikon, but it has worked in the past, and maybe it will help you find closure. Maybe just knowing what the hell happened to your camera is the best you can hope for? And the site also provides a database of lost cameras all over the world, so you’ll at least know you’re not alone.

[…]

Source: This Site Can Tell You If Anyone Else Has Taken Pictures With Your Camera

UK appeals court rules AI cannot be listed as a patent inventor

Add the United Kingdom to the list of countries that says an artificial intelligence can’t be legally credited as an inventor. Per the BBC, the UK Court of Appeal recently ruled against Dr. Stephen Thaler in a case involving the country’s Intellectual Property Office. In 2018, Thaler filed two patent applications in which he didn’t list himself as the creator of the inventions mentioned in the documents. Instead, he put down his AI DABUS and said the patent should go to him “by ownership of the creativity machine.”

The Intellectual Property Office told Thaler he had to list a real person on the application. When he didn’t do that, the agency decided he had withdrawn from the process. Thaler took the case to the UK’s High Court. The body ruled against him, leading to the eventual appeal. “Only a person can have rights. A machine cannot,” Lady Justice Elisabeth Laing of the Appeal Court wrote in her judgment. “A patent is a statutory right and it can only be granted to a person.”

Thaler has filed similar legal challenges in other countries, and the results so far have been mixed. In August, a judge in Australia ruled inventions created by an AI can qualify for a patent. However, earlier this month, US District Judge Leonie M Brinkema upheld a decision by the US Patent and Trademark Office that said “only natural persons may be named as an inventor in a patent application.” Judge Brinkema said there may eventually be a time when AI becomes sophisticated enough to satisfy the accepted definitions of inventorship, but noted, “that time has not yet arrived, and, if it does, it will be up to Congress to decide how, if at all, it wants to expand the scope of patent law.”

Source: UK appeals court rules AI cannot be listed as a patent inventor | Engadget

This is strange, as patents can be granted to companies – which are legally persons, but not really, well, people

China says all cryptocurrency-related transactions are illegal and must be banned

China’s central bank said on Friday that all cryptocurrency-related transactions are illegal in the country and they must be banned, citing concerns around national security and “safety of people’s assets.” The world’s most populated nation also said that foreign exchanges are banned from providing services to users in the country.

In a joint statement, ten Chinese government agencies vowed to work closely to maintain a “high pressure” crackdown on trading of cryptocurrencies in the nation. The People’s Bank of China separately ordered internet, financial, and payment companies not to facilitate cryptocurrency trading on their platforms.

The central bank said cryptocurrencies, including Bitcoin and Tether, cannot be circulated in the market as they are not fiat currency. The surge in usage of cryptocurrencies has disrupted “economic and financial order,” and prompted a proliferation of “money laundering, illegal fund-raising, fraud, pyramid schemes and other illegal and criminal activities,” it said.

Offenders, the central bank warned, will be “investigated for criminal liability in accordance with the law.”

The Chinese government will “resolutely clamp down on virtual currency speculation, and related financial activities and misbehaviour in order to safeguard people’s properties and maintain economic, financial and social order,” the People’s Bank of China said in a statement.

The move has already started to cause panic among some crypto traders, sending the price of bitcoin and several other currencies down. Bitcoin was down 5.5% at the time of publication.

[…]

Source: China says all cryptocurrency-related transactions are illegal and must be banned

Scientists develop the next generation of reservoir computing

A relatively new type of computing that mimics the way the human brain works was already transforming how scientists could tackle some of the most difficult information processing problems.

Now, researchers have found a way to make what is called reservoir computing work between 33 and a million times faster, with significantly fewer computing resources and less data input needed.

In fact, in one test of this next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer.

Using the now current state-of-the-art technology, the same problem requires a supercomputer to solve and still takes much longer, said Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University.

[…]

Reservoir computing is a machine learning algorithm developed in the early 2000s and used to solve the “hardest of the hard” computing problems, such as forecasting the evolution of dynamical systems that change over time, Gauthier said.

Dynamical systems, like the weather, are difficult to predict because just one small change in one condition can have massive effects down the line, he said.

One famous example is the “butterfly effect,” in which—in one metaphorical illustration—changes created by a butterfly flapping its wings can eventually influence the weather weeks later.

Previous research has shown that reservoir computing is well-suited for learning dynamical systems and can provide accurate forecasts about how they will behave in the future, Gauthier said.

It does that through the use of an artificial neural network, somewhat like a human brain. Scientists feed data on a dynamical network into a “reservoir” of randomly connected artificial neurons in a network. The network produces useful output that the scientists can interpret and feed back into the network, building a more and more accurate forecast of how the system will evolve in the future.
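A minimal echo state network captures that loop: a fixed, randomly connected reservoir is driven by the input signal, and only a linear readout is trained. The Python sketch below is an illustration with arbitrary sizes and scalings, not the setup used in the study.

```python
# Minimal echo state network sketch of the reservoir idea described above:
# input drives a fixed, randomly connected reservoir; only a linear readout
# is trained (here with ridge regression). Sizes and scalings are arbitrary
# illustration values, not those used in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_reservoir, leak, ridge = 300, 0.3, 1e-6

# Fixed random weights: input -> reservoir and reservoir -> reservoir.
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, 1))
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

def run_reservoir(inputs):
    states, x = [], np.zeros(n_reservoir)
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in[:, 0] * u)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.linspace(0, 40 * np.pi, 4000)
series = np.sin(t) + 0.01 * rng.standard_normal(t.size)
X = run_reservoir(series[:-1])  # reservoir states
Y = series[1:]                  # next value to predict

# Train the linear readout with ridge regression (the only trained part).
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ Y)
pred = X @ W_out
print("train MSE:", float(np.mean((pred - Y) ** 2)))
```

Everything inside the reservoir stays random and untrained, which is both why the method is cheap and why it has long been treated as a black box.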

The larger and more complex the system, and the more accurate the scientists want the forecast to be, the bigger the network of artificial neurons has to be and the more computing resources and time that are needed to complete the task.

One issue has been that the reservoir of artificial neurons is a “black box,” Gauthier said, and scientists have not known exactly what goes on inside of it—they only know it works.

The artificial neural networks at the heart of reservoir computing are built on mathematics, Gauthier explained.

“We had mathematicians look at these networks and ask, ‘To what extent are all these pieces in the machinery really needed?'” he said.

In this study, Gauthier and his colleagues investigated that question and found that the whole reservoir computing system could be greatly simplified, dramatically reducing the need for computing resources and saving significant time.

They tested their concept on a forecasting task involving a weather system developed by Edward Lorenz, whose work led to our understanding of the butterfly effect.

Their next-generation reservoir computing was a clear winner over today’s state-of-the-art on this Lorenz forecasting task. In one relatively simple simulation done on a desktop computer, the new system was 33 to 163 times faster than the current model.

But when the aim was for great accuracy in the forecast, the next-generation reservoir computing was about 1 million times faster. And the new-generation computing achieved the same accuracy with the equivalent of just 28 neurons, compared to the 4,000 needed by the current-generation model, Gauthier said.

An important reason for the speed-up is that the “brain” behind this next generation of reservoir computing needs a lot less warmup and training compared to the current generation to produce the same results.

Warmup is training data that needs to be added as input into the reservoir computer to prepare it for its actual task.

[…]

“Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that’s all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points,” he said.

[…]

In their test of the Lorenz forecasting task, the researchers could get the same results using 400 data points as the current generation produced using 5,000 or more, depending on the accuracy desired.

[…]

More information: Next generation reservoir computing, Nature Communications (2021). DOI: 10.1038/s41467-021-25801-2

Source: Scientists develop the next generation of reservoir computing

FBI Had REvil’s Kaseya Ransomware Decryption Key for Weeks

The Kaseya ransomware attack, which occurred in July and affected as many as 1,500 companies worldwide, was a big, destructive mess—one of the largest and most unwieldy of its kind in recent memory. But new information shows the FBI could have lightened the blow victims suffered but chose not to.

A new report from the Washington Post shows that, shortly after the attack, the FBI came into possession of a decryption key that could unlock victims’ data—thus allowing them to get their businesses back up and running. However, instead of sharing it with them or Kaseya, the IT firm targeted by the attack, the bureau kept it a secret for approximately three weeks.

The feds reportedly did this because they were planning an operation to “disrupt” the hacker gang behind the attack—the Russia-based ransomware provider REvil—and didn’t want to tip their hand. However, before the FBI could put its plan into action, the gang mysteriously disappeared. The bureau finally shared the decryption key with Kaseya on July 21—about a week after the gang had vanished.

[…]

Source: FBI Had REvil’s Kaseya Ransomware Decryption Key for Weeks: Report

Database containing 106m Thailand travelers’ details over the past decade leaked

A database containing personal information on 106 million international travelers to Thailand was exposed to the public internet this year, a Brit biz claimed this week.

Bob Diachenko, head of cybersecurity research at product-comparison website Comparitech, said the Elasticsearch data store contained visitors’ full names, passport numbers, arrival dates, visa types, residency status, and more. It was indexed by search engine Censys on August 20, and spotted by Diachenko two days later. There were no credentials in the database, which is said to have held records dating back a decade.

[…]

Diachenko said he alerted the operator of the database, which led to the Thai authorities finding out about it, who “were quick to acknowledge the incident and swiftly secured the data,” Comparitech reported. We’re told that the IP address of the exposed database, hidden from sight a day after Diachenko raised the alarm, is still live, though connecting to it reports that the box is now a honeypot.

[…]

We’ve contacted the Thai embassy in the US for further comment. Diachenko told The Register a “server misconfiguration” by an IT outsourcer caused the database to be exposed to the whole world.

[…]

Additionally, it’s possible that if you’ve traveled to Thailand and stayed there during the pandemic, your details have already been leaked. A government website used to sign foreigners up for COVID-19 vaccines spilled names and passport numbers in June.

Additionally, last month, Bangkok Airways was hit by ransomware group LockBit resulting in the publishing of passenger data. And in 2018, TrueMove H, the biggest 4G mobile operator in Thailand, suffered a database breach of around 46,000 records.

Comparitech said the database it found contained several assets, in addition to the 106 million records, making the total leaked information come to around 200 GB.

Source: Database containing 106m Thailand travelers’ details leaked • The Register

India antitrust probe finds Google abused Android dominance

NEW DELHI, Sept 18 (Reuters) – Google abused the dominant position of its Android operating system in India, using its “huge financial muscle” to illegally hurt competitors, the country’s antitrust authority found in a report on its two-year probe seen by Reuters.

Alphabet Inc’s (GOOGL.O) Google reduced “the ability and incentive of device manufacturers to develop and sell devices operating on alternative versions of Android,” says the June report by the Competition Commission of India’s (CCI) investigations unit.

[…]

Its findings are the latest antitrust setback for Google in India, where it faces several probes in the payments app and smart television markets. The company has been investigated in Europe, the United States and elsewhere. This week, South Korea’s antitrust regulator fined Google $180 million for blocking customised versions of Android.

‘VAGUE, BIASED AND ARBITRARY’

Google submitted at least 24 responses during the probe, defending itself and arguing it was not hurting competition, the report says.

Microsoft Corp (MSFT.O), Amazon.com Inc (AMZN.O), Apple Inc (AAPL.O), as well as smartphone makers like Samsung and Xiaomi, were among 62 entities that responded to CCI questions during its Google investigation, the report says.

Android powers 98% of India’s 520 million smartphones, according to Counterpoint Research.

When the CCI ordered the probe in 2019, it said Google appeared to have leveraged its dominance to reduce device makers’ ability to opt for alternate versions of its mobile operating system and force them to pre-install Google apps.

The 750-page report finds the mandatory pre-installation of apps “amounts to imposition of unfair condition on the device manufacturers” in violation of India’s competition law, while the company leveraged the position of its Play Store app store to protect its dominance.

Play Store policies were “one-sided, ambiguous, vague, biased and arbitrary”, while Android has been “enjoying its dominant position” in licensable operating systems for smartphones and tablets since 2011, the report says.

The probe was triggered in 2019 after two Indian junior antitrust research associates and a law student filed a complaint, Reuters reported.

[…]

Source: India antitrust probe finds Google abused Android dominance, report shows | Reuters

MoD apologises after Afghan interpreters’ personal data exposed (yes, the ones still in Afghanistan)

The UK’s Ministry of Defence has launched an internal investigation after committing the classic CC-instead-of-BCC email error – but with the names and contact details of Afghan interpreters trapped in the Taliban-controlled nation.

The horrendous data breach took place yesterday, with Defence Secretary Ben Wallace promising an immediate investigation, according to the BBC.

Included in the breach were profile pictures associated with some email accounts, according to the state-owned broadcaster. The initial email was followed up by a second message urging people who had received the first one to delete it – a way of drawing close attention to an otherwise routine missive.

The email was reportedly sent by the British government’s Afghan Relocations and Assistance Policy (ARAP) unit, urging the interpreters not to put themselves or their families at risk. The ministry was said to have apologised for the “unacceptable breach.”

“This mistake could cost the life of interpreters, especially for those who are still in Afghanistan,” one source told the Beeb.

Since the US-led military coalition pulled out of Afghanistan at the end of August, there have been distressing scenes in the country as the ruling Taliban impose Islamic Sharia law – while hunting down and punishing those who helped the Western militaries. Some interpreters have reportedly been murdered, with others fearing for their lives and the well-being of their families.

[…]

Source: MoD apologises after Afghan interpreters’ data exposed • The Register

Facebook Documents Show It Fumbled the Fight Over Vaccines

The Wall Street Journal has had something of a banner week tearing down Facebook. Its series on a trove of internal company documents obtained by the paper has unveiled Facebook’s secret system for treating certain users as above the rules, company research showing how harmful Instagram is for young girls, how the site’s algorithmic solutions to toxic content have backfired, and that Facebook executives are slow to respond to reports of organized criminal activity. On Friday, it published another article detailing how badly Facebook has fumbled fighting anti-vax content and CEO Mark Zuckerberg’s campaign to get users vaccinated.

[…]

One big problem was that Facebook users were brigading any content addressing vaccination with anti-vax comments. Company researchers, according to the Journal, warned executives that comments on vaccine-related content were flooded with anti-vax propaganda, pseudo-scientific claims, and other false information and lies about the virus and the vaccines.

Global health institutions such as the World Health Organization (WHO) and Unicef had registered their concern with Facebook, with one internal company memo warning of “anti-vaccine commenters that swarm their Pages,” while another internal report in early 2021 made an initial estimate that up to 41% of comments on vaccine-related posts appeared to risk discouraging people from getting vaccinated (referred to within the company as “barrier to vaccination” content). That’s out of a pool of around 775 million vaccine-related comments seen by users daily.

[…]

Facebook had promised in 2019 to crack down on antivax content and summoned WHO reps to meet with tech leaders in February 2020. Zuckerberg personally got in contact with National Institute of Allergy and Infectious Diseases director Dr. Anthony Fauci to discuss funding vaccine trials, offer ad space and user data for government-run vaccination campaigns, and arrange a live Q&A between the two on the site. Facebook had also made adjustments to its content-ranking algorithm that a June 2020 memo claimed reduced health misinformation by 6.7% to 9.9%, the Journal wrote.

But by summer 2020, BS claims about the coronavirus and vaccines were going viral on the site, including the viral “Plandemic” video, a press conference staged by a group of right-wing weirdos calling themselves “America’s Frontline Doctors,” and a handful of anti-vax accounts such as Robert F. Kennedy Jr.’s that advocacy group Avaaz later identified as responsible for a wildly disproportionate share of the offending content. According to the Journal, Facebook was well aware that the phenomenon was being driven by a relatively small but determined and prolific segment of posters and group admins:

As the rollout of the vaccine began early this year, antivaccine activists took advantage of that stance. A later analysis found that a small number of “big whales” were behind many antivaccine posts and groups on the platform. Out of nearly 150,000 posters in Facebook Groups disabled for Covid misinformation, 5% were producing half of all posts, and around 1,400 users were responsible for inviting half the groups’ new members, according to one document.

“We found, like many problems at FB, this is a head-heavy problem with a relatively few number of actors creating a large percentage of the content and growth,” Facebook researchers would write in May, likening the movement to QAnon and efforts to undermine elections.

Zuckerberg waffled and suggested that Facebook shouldn’t be in the business of censoring anti-vax posts in an interview with Axios in September 2020, saying “If someone is pointing out a case where a vaccine caused harm or that they’re worried about it —you know, that’s a difficult thing to say from my perspective that you shouldn’t be allowed to express at all.” This was a deeply incorrect assessment of the problem, as Facebook was well aware that a small group of bad actors was actively and intentionally pushing the anti-vax content.

Another internal assessment conducted earlier this year by a Facebook employee, the Journal wrote, found that two-thirds of randomly sampled comments “were anti-vax” (though the sample size was just 110 comments). In their analysis, the staffer noted one poll that showed actual anti-vaccine sentiment in the general population was 40% lower.

[…]

The Journal reported that one integrity worker flagged a post with 53,000 shares and three million views that asserted vaccines are “all experimental & you are in the experiment.” Facebook’s automated moderation tools had ignored it after somehow concluding it was written in the Romanian language. By late February, researchers came up with a hasty method to scan for “vaccine hesitant” comments, but according to the Journal their report mentioned the anti-vax comment problem was “rampant” and Facebook’s ability to fight it was “bad in English, and basically non-existent elsewhere.”

[…]

Source: Facebook Documents Show It Fumbled the Fight Over Vaccines

FTC releases findings on how Big Tech eats little tech in deals that fly under the radar

Federal Trade Commission chair Lina Khan signaled changes are on the way in how the agency scrutinizes acquisitions after revealing the results of a study of a decade’s worth of Big Tech company deals that weren’t reported to the agency.

Why it matters: Tech’s business ecosystem is built on giant companies buying up small startups, but the message from the antitrust agency this week could chill mergers and acquisitions in the sector.

What they found: The FTC reviewed 616 transactions valued at $1 million or more between 2010 and 2019 that were not reported to antitrust authorities by Amazon, Apple, Facebook, Google and Microsoft.

  • 94 of the transactions actually exceeded the dollar size threshold that would require companies to report a deal. The deals may have qualified for other regulatory exemptions.
  • 79% of transactions used deferred or contingent compensation to founders and key employees, and nearly 77% involved non-compete clauses.
  • 36% of the transactions involved assuming some amount of debt or liabilities.

What they’re saying: In a statement, Khan said the report shows that loopholes may be “unjustifiably enabling deals to fly under the radar.”

  • Matt Stoller, director of research at the American Economic Liberties Project, said the high percentage of non-compete clauses was especially troubling.
  • “If nothing else, it’s a clear anticompetitive intent to just take talent and prevent them from competing with you,” Stoller said. “And there is a limited amount of tech talent.”

The other side: Nothing in the report indicates that rules were broken or that the deals were anticompetitive, Neil Chilson, a former FTC adviser, pointed out.

  • “I think the message is pretty clear from the chair: She’s suspicious of mergers, no matter what the size, just based on a belief that mergers at any size are suspect and should be reviewed,” Chilson, now senior research fellow for Tech and Innovation at Stand Together, told Axios.
  • “The law certainly is not behind her on that, and I don’t think the economics are particularly there either, and nothing in the report supports that assertion.”

Source: FTC releases findings on how Big Tech eats little tech – Axios

There we go – it’s a problem I have been talking about for some time

Facebook’s 2018 Algorithm Change ‘Rewarded Outrage’. Zuck Resisted Fixes

Internal memos show how a big 2018 change rewarded outrage and that CEO Mark Zuckerberg resisted proposed fixes

In the fall of 2018, Jonah Peretti, chief executive of online publisher BuzzFeed, emailed a top official at Facebook Inc. The most divisive content that publishers produced was going viral on the platform, he said, creating an incentive to produce more of it.

He pointed to the success of a BuzzFeed post titled “21 Things That Almost All White People are Guilty of Saying,” which received 13,000 shares and 16,000 comments on Facebook, many from people criticizing BuzzFeed for writing it, and arguing with each other about race. Other content the company produced, from news videos to articles on self-care and animals, had trouble breaking through, he said.

Mr. Peretti blamed a major overhaul Facebook had given to its News Feed algorithm earlier that year to boost “meaningful social interactions,” or MSI, between friends and family, according to internal Facebook documents reviewed by The Wall Street Journal that quote the email.

BuzzFeed built its business on making content that would go viral on Facebook and other social media, so it had a vested interest in any algorithm changes that hurt its distribution. Still, Mr. Peretti’s email touched a nerve.

Facebook’s chief executive, Mark Zuckerberg, said the aim of the algorithm change was to strengthen bonds between users and to improve their well-being. Facebook would encourage people to interact more with friends and family and spend less time passively consuming professionally produced content, which research suggested was harmful to their mental health.

Within the company, though, staffers warned the change was having the opposite effect, the documents show. It was making Facebook’s platform an angrier place.

Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.

“Our approach has had unhealthy side effects on important slices of public content, such as politics and news,” wrote a team of data scientists, flagging Mr. Peretti’s complaints, in a memo reviewed by the Journal. “This is an increasing liability,” one of them wrote in a later memo.

They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.
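To make the mechanism concrete, here is a minimal, hypothetical sketch of an engagement-weighted ranking score of the kind the article describes. The signal names and weights below are purely illustrative assumptions, not Facebook's actual MSI formula or values; the point is only that when comments and reshares are weighted heavily, content that provokes arguments outranks calmer material.

```python
# Hypothetical sketch of engagement-weighted feed ranking.
# All signal names and weights are illustrative assumptions,
# NOT Facebook's actual MSI formula or values.
from dataclasses import dataclass

@dataclass
class PostSignals:
    predicted_comments: float   # expected number of comments
    predicted_reactions: float  # expected likes/angry faces/etc.
    predicted_reshares: float   # expected reshares

# Hypothetical weights: comments and reshares count far more than
# passive reactions, so divisive posts rise to the top of the feed.
WEIGHTS = {"comments": 15.0, "reactions": 5.0, "reshares": 30.0}

def engagement_score(p: PostSignals) -> float:
    """Weighted sum of predicted interactions (illustrative only)."""
    return (WEIGHTS["comments"] * p.predicted_comments
            + WEIGHTS["reactions"] * p.predicted_reactions
            + WEIGHTS["reshares"] * p.predicted_reshares)

# A calm informational post vs. a divisive post that draws angry
# comments and reshares: under this scoring, the divisive post wins.
calm = PostSignals(predicted_comments=2, predicted_reactions=40, predicted_reshares=1)
divisive = PostSignals(predicted_comments=60, predicted_reactions=80, predicted_reshares=25)
print(engagement_score(calm), engagement_score(divisive))  # 260.0 vs. 2050.0
```

Under weights like these, a publisher or political party that wants distribution has a straightforward incentive to optimise for the right-hand column: more comments and reshares, however they are obtained.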

Some political parties in Europe told Facebook the algorithm had made them shift their policy positions so they resonated more on the platform, according to the documents.

“Many parties, including those that have shifted to the negative, worry about the long term effects on democracy,” read one internal Facebook report, which didn’t name specific parties.

Facebook employees also discussed the company’s other, less publicized motive for making the change: Users had begun to interact less with the platform, a worrisome trend, the documents show.

The email and memos are part of an extensive array of internal company communications reviewed by the Journal. They offer an unparalleled look at how much Facebook knows about the flaws in its platform and how it often lacks the will or the ability to address them. This is the third in a series of articles based on that information.

[…]

Anna Stepanov, who led a team addressing those issues, presented Mr. Zuckerberg with several proposed changes meant to address the proliferation of false and divisive content on the platform, according to an April 2020 internal memo she wrote about the briefing. One such change would have taken away a boost the algorithm gave to content most likely to be reshared by long chains of users.

“Mark doesn’t think we could go broad” with the change, she wrote to colleagues after the meeting. Mr. Zuckerberg said he was open to testing the approach, she said, but “We wouldn’t launch if there was a material tradeoff with MSI impact.”

Last month, nearly a year and a half after Ms. Stepanov said Mr. Zuckerberg nixed the idea of broadly incorporating a similar fix, Facebook announced it was “gradually expanding some tests to put less emphasis on signals such as how likely someone is to comment or share political content.” The move is part of a broader push, spurred by user surveys, to reduce the amount of political content on Facebook after the company came under criticism for the way election protesters used the platform to question the results and organize protests that led to the Jan. 6 riot at the Capitol in Washington.

[…]

“MSI ranking isn’t actually rewarding content that drives meaningful social interactions,” Mr. Peretti wrote in his email to the Facebook official, adding that his staff felt “pressure to make bad content or underperform.”

It wasn’t just material that exploited racial divisions, he wrote, but also “fad/junky science,” “extremely disturbing news” and gross images.

Political effect

In Poland, the changes made political debate on the platform nastier, Polish political parties told the company, according to the documents. The documents don’t specify which parties.

“One party’s social media management team estimates that they have shifted the proportion of their posts from 50/50 positive/negative to 80% negative, explicitly as a function of the change to the algorithm,” wrote two Facebook researchers in an April 2019 internal report.

Nina Jankowicz, who studies social media and democracy in Central and Eastern Europe as a fellow at the Woodrow Wilson Center in Washington, said she has heard complaints from many political parties in that region that the algorithm change made direct communication with their supporters through Facebook pages more difficult. They now have an incentive, she said, to create posts that rack up comments and shares—often by tapping into anger—to get exposure in users’ feeds.

The Facebook researchers wrote in their report that in Spain, political parties run sophisticated operations to make Facebook posts travel as far and fast as possible.

“They have learnt that harsh attacks on their opponents net the highest engagement,” they wrote. “They claim that they ‘try not to,’ but ultimately ‘you use what works.’ ”

In the 15 months following fall 2017 clashes in Spain over Catalan separatism, the percentage of insults and threats on public Facebook pages related to social and political debate in Spain increased by 43%, according to research conducted by Constella Intelligence, a Spanish digital risk protection firm.

[…]

Early tests showed how reducing that aspect of the algorithm for civic and health information helped reduce the proliferation of false content. Facebook made the change for those categories in the spring of 2020.

When Ms. Stepanov presented Mr. Zuckerberg with the integrity team’s proposal to expand that change beyond civic and health content—and a few countries such as Ethiopia and Myanmar where changes were already being made—Mr. Zuckerberg said he didn’t want to pursue it if it reduced user engagement, according to the documents.

[…]

Source: Facebook tried to make its platform a healthier place. It got angrier instead

Ig Nobel Prizes blocked by YouTube takedown over 1914 song snippet – can’t find human to fix the error

YouTube, the Ig Nobel Prizes, and the Year 1914

YouTube’s notorious takedown algorithms are blocking the video of the 2021 Ig Nobel Prize ceremony.

We have so far been unable to find a human at YouTube who can fix that. We recommend that you watch the identical recording on Vimeo.

The Fatal Song

This is a photo of John McCormack, whose 1914 recording of “Funiculi, Funicula” is what induced YouTube to block the video of the 2021 Ig Nobel Prize ceremony.

Here’s what triggered this: The ceremony includes bits of a recording (of tenor John McCormack singing “Funiculi, Funicula”) made in the year 1914.

The Corporate Takedown

YouTube’s takedown algorithm claims that the following corporations all own the copyright to that audio recording that was MADE IN THE YEAR 1914: “SME, INgrooves (on behalf of Emerald); Wise Music Group, BMG Rights Management (US), LLC, UMPG Publishing, PEDL, Kobalt Music Publishing, Warner Chappell, Sony ATV Publishing, and 1 Music Rights Societies”

UPDATES: (Sept 19, 2021) There’s an ongoing discussion on Slashdot. (Sept 13, 2021) There’s an ongoing discussion on Hacker News about this problem.

Source: Improbable Research » Blog Archive

First of all, what is copyright doing protecting anything from 1914? The creator is long dead and buried, and the model of creating something once and raking in money off it forever is ridiculous anyway.
Second, this shows the power large copyright holders have over smaller players – and the Ig Nobel Prizes aren’t exactly a small player! If a big corporation throws a DMCA claim at you, there’s nothing you can do – you are caught in a Kafkaesque hole with no hope in sight.

A Stanford Proposal Over AI’s ‘Foundations’ Ignites Debate

Last month, Stanford researchers declared that a new era of artificial intelligence had arrived, one built atop colossal neural networks and oceans of data. They said a new research center at Stanford would build—and study—these “foundation models” of AI.

Critics of the idea surfaced quickly—including at the workshop organized to mark the launch of the new center. Some object to the limited capabilities and sometimes freakish behavior of these models; others warn of focusing too heavily on one way of making machines smarter.

“I think the term ‘foundation’ is horribly wrong,” Jitendra Malik, a professor at UC Berkeley who studies AI, told workshop attendees in a video discussion.

Malik acknowledged that one type of model identified by the Stanford researchers—large language models that can answer questions or generate text from a prompt—has great practical use. But he said evolutionary biology suggests that language builds on other aspects of intelligence like interaction with the physical world.

“These models are really castles in the air; they have no foundation whatsoever,” Malik said. “The language we have in these models is not grounded, there is this fakeness, there is no real understanding.” He declined an interview request.

A research paper coauthored by dozens of Stanford researchers describes “an emerging paradigm for building artificial intelligence systems” that it labeled “foundation models.” Ever-larger AI models have produced some impressive advances in AI in recent years, in areas such as perception and robotics as well as language.

Large language models are also foundational to big tech companies like Google and Facebook, which use them in areas like search, advertising, and content moderation. Building and training large language models can require millions of dollars worth of cloud computing power; so far, that’s limited their development and use to a handful of well-heeled tech companies.

But big models are problematic, too. Language models inherit bias and offensive text from the data they are trained on, and they have zero grasp of common sense or what is true or false. Given a prompt, a large language model may spit out unpleasant language or misinformation. There is also no guarantee that these large models will continue to produce advances in machine intelligence.

[…]

Dietterich wonders if the idea of foundation models isn’t partly about getting funding for the resources needed to build and work on them. “I was surprised that they gave these models a fancy name and created a center,” he says. “That does smack of flag planting, which could have several benefits on the fundraising side.”

[…]

Emily M. Bender, a professor in the linguistics department at the University of Washington, says she worries that the idea of foundation models reflects a bias toward investing in the data-centric approach to AI favored by industry.

Bender says it is especially important to study the risks posed by big AI models. She coauthored a paper, published in March, that drew attention to problems with large language models and contributed to the departure of two Google researchers. But she says scrutiny should come from multiple disciplines.

“There are all of these other adjacent, really important fields that are just starved for funding,” she says. “Before we throw money into the cloud, I would like to see money going into other disciplines.”

[…]


Source: A Stanford Proposal Over AI’s ‘Foundations’ Ignites Debate | WIRED