
We Found Subscription Menus in Our BMW Test Car. And other models have different subscriptions. WTF BMW?

[…]

We were recently playing in the menus of a 2023 BMW X1 when we came across a group of screens offering exactly that sort of subscription. BMW TeleService and Remote Software Upgrade showed a message that read Activated, while BMW Drive Recorder had options to subscribe for one month, one year, three years, or “Unlimited.” Reactions from the Car and Driver staff were swift and emotional. One staff member responded to the menus with a vomiting emoji, while another likened the concept to a video-game battle pass.

We reached out to BMW to ask about the menus we found and to learn more about its plan for future subscriptions. The company replied that it doesn’t post a comprehensive list of prices online because of variability in what each car can receive. “Upgrade availability depends on factors such as model year, equipment level, and software version, so this keeps things more digestible for consumers,” explained one BMW representative.

Our X1, for example, has an optional $25-per-year charge for traffic camera alerts, but that option isn’t available to cars without BMW Live Cockpit. Instead of listing all the available options online, BMW lets owners see which subscriptions are available for their car, either in the menus of the vehicle itself or in a companion app.

[…]

BMW USA may not want to confuse its customers by listing all its options in one place, but BMW Australia has no such reservations. In the land down under, heated front seats and a heated steering wheel are available in a month-to-month format, as is BMW’s parking assistant technology. In contrast, BMW USA released a statement in July saying that if a U.S.-market vehicle is ordered with heated seats from the factory, that option will remain functional throughout the life of the vehicle.

[…]

In 2019, BMW announced it would charge customers $80 per year for wireless Apple CarPlay. After considerable public backlash, BMW walked back the decision and instead offered the technology for free. BMW is wading into mostly uncharted waters here. The court of public opinion forced BMW to reverse a subscription in the past. If people decide these newer subscriptions are as egregious as the old ones, will they force BMW back again? Or will they instead stick to automakers who sell features outright?

Source: We Found Subscription Menus in Our BMW Test Car. Is That Bad?

If the hardware is there, then you bought it and should be allowed to have it. If it’s externally processed data (e.g. an updated database of streets and traffic cameras), then a subscription is fine.

John Deere signs right to repair agreement

As farming has become more technology-driven, Deere has increasingly injected software into its products, with all of its tractors and harvesters now including an autopilot feature as standard.

There is also the John Deere Operations Center, which “instantly captures vital operational data to boost transparency and increase productivity for your business.”

Within a matter of years, the company envisages having 1.5 million machines and half a billion acres of land connected to the cloud service, which will “collect and store crop data, including millions of images of weeds that can be targeted by herbicide.”

Deere also estimates that software fees will make up 10 percent of the company’s revenues by the end of the decade, with Bernstein analysts pegging the average gross margin for farming software at 85 percent, compared to 25 percent for equipment sales.

Just like other commercial software vendors, however, Deere exercises close control and restricts what can be done with its products. This led farm labor advocacy groups to file a complaint to the US Federal Trade Commission last year, claiming that Deere unlawfully refused to provide the software and technical data necessary to repair its machinery.

“Deere is the dominant force in the $68 billion US agricultural equipment market, controlling over 50 per cent of the market for large tractors and combines,” said Fairmark Partners, the groups’ attorneys, in a preface to the complaint [PDF].

“For many farmers and ranchers, they effectively have no choice but to purchase their equipment from Deere. Not satisfied with dominating just the market for equipment, Deere has sought to leverage its power in that market to monopolize the market for repairs of that equipment, to the detriment of farmers, ranchers, and independent repair providers.”

[…]

The MoU, which can be read here [PDF], was signed yesterday at the 2023 AFBF Convention in San Juan, Puerto Rico, and seems to be a commitment by Deere to improve farmers’ access and choice when it comes to repairs.

[…]

Duvall said on a podcast about the matter that the MoU is the result of several years’ work. “As you use equipment, we all know at some point in time, there’s going to be problems with it. And we did have problems with having the opportunity to repair our equipment where we wanted to, or even repair it on the farm,” he added.

“It ensures that our farmers can repair their equipment and have access to the diagnostic tools and product guides so that they can find the problems and find solutions for them. And this is the beginning of a process that we think is going to be real healthy for our farmers and for the company because what it does is it sets up an opportunity for our farmers to really work with John Deere on a personal basis.”

[…]

Source: John Deere signs right to repair agreement • The Register

But… still gives John Deere access to their data for free?

This may also have something to do with the security of John Deere machines being so incredibly piss poor, mainly due to really bad update hygiene.

DoNotPay Offers $1M for Its AI to Argue Before Supreme Court

[…]

“DoNotPay will pay any lawyer or person $1,000,000 with an upcoming case in front of the United States Supreme Court to wear AirPods and let our robot lawyer argue the case by repeating exactly what it says,” Browder wrote on Twitter on Sunday night. “[W]e are making this serious offer, contingent on us coming to a formal agreement and all rules being followed.”

[…]

Although DoNotPay’s robot lawyer is set to make its debut in a U.S. courtroom next month to help someone contest a parking ticket, Browder wants the robot to go before the Supreme Court to address hypothetical skepticism about its abilities.

“We have upcoming cases in municipal (traffic) court next month. But the haters will say ‘traffic court is too simple for GPT,’” Browder tweeted.

[…]

DoNotPay started out as a simple chatbot back in 2015 to help people resolve basic but infuriating scenarios, such as canceling subscriptions or appealing parking tickets. In recent years, the company used AI to ramp up its robot lawyer’s capabilities, equipping it to dispute medical bills and successfully negotiate with Comcast.

[…]

Source: DoNotPay Offers $1M for Its AI to Argue Before Supreme Court

Gizmodo is incredibly disparaging of this idea, but they often are when faced with the future. And the legal profession is one of those most directly in the firing line of AI.

Meet GPTZero: The AI-Powered AI-Plagiarism Detection Program

[…]

Edward Tian, a college student studying computer science and journalism at Princeton University, recently created an app called GPTZero to help detect whether a text was written by AI or a human. The motivation behind the app was to help combat increasing AI plagiarism.

[…]

To analyze text, GPTZero uses metrics such as perplexity and burstiness. Perplexity measures how predictable the text is to a language model, while burstiness measures how much that predictability varies from sentence to sentence; human writing tends to be more uneven than machine output. These signals allow GPTZero to detect whether an essay was written by a human or by ChatGPT.
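The article doesn’t show how these metrics are computed, but both are standard ideas. Below is a minimal sketch, assuming a GPT-2 stand-in via the Hugging Face transformers library; GPTZero’s actual model and thresholds are not public. Perplexity is the exponential of a language model’s average per-token loss, and burstiness can be approximated as the spread of per-sentence perplexities.

```python
# Sketch of perplexity/burstiness scoring with a GPT-2 stand-in model.
# GPTZero's real model and decision thresholds are not public.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: exp of the mean per-token loss."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=ids, labels=ids).loss  # mean cross-entropy
    return math.exp(loss.item())

def burstiness(sentences: list[str]) -> float:
    """Spread of per-sentence perplexity; human prose varies more."""
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

sentences = ["The cat sat on the mat.", "Quantum pickles argue loudly at dawn."]
print(perplexity(" ".join(sentences)), burstiness(sentences))
```

Low perplexity combined with low burstiness is the AI-like signature; human prose tends to score higher, and more unevenly, on both.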

[…]

Source: Meet GPTZero: The AI-Powered Anti-Plagiarism Program | by Liquid Ocelot | InkWater Atlas | Jan, 2023 | Medium

Of course, universities are working with AI developments rather than trying to stop them: University students are using AI to write essays. Teachers are learning how to embrace that

Edit 16/7/23 – Of course you also have GPT minus 1, which takes your GPT output and scrambles it so that these GPT checkers can’t recognise it any more

LastPass is being sued following major cyberattack

[…]

According to the class action complaint filed in a Massachusetts court, names, usernames, billing addresses, email addresses, telephone numbers, and even the IP addresses used to access the service were all made available to wrongdoers.

The final straw could have been the leak of customers’ unencrypted vault data, which includes all manner of information, ranging from website usernames and passwords to secure notes and form data.

According to the lawsuit, “LastPass understood and appreciated the value of this Information yet chose to ignore it by failing to invest in adequate data security measures”.

The case’s plaintiff claims to have invested $53,000 in Bitcoin since July 2022, which was “stolen” several months later, leading to police and FBI reports.

[…]

Source: LastPass is being sued following major cyberattack

There are more articles about LastPass on this blog. It seems they did not take their security quite as seriously as they led us to believe.

Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

A startup says it has begun releasing sulfur particles into Earth’s atmosphere, in a controversial attempt to combat climate change by deflecting sunlight. Make Sunsets, a company that sells carbon offset “cooling credits” for $10 each, is banking on solar geoengineering to cool down the planet and fill its coffers. The startup claims it has already released two test balloons, each filled with about 10 grams of sulfur particles and intended for the stratosphere, according to the company’s website, as first reported by MIT Technology Review.

The concept of solar geoengineering is simple: Add reflective particles to the upper atmosphere to reduce the amount of sunlight that penetrates from space, thereby cooling Earth. It’s an idea inspired by the atmospheric side effects of major volcanic eruptions, which have led to drastic, temporary climate shifts multiple times throughout history, including the notorious “year without a summer” of 1816.

Yet effective and safe implementation of the idea is much less simple. Scientists and engineers have been studying solar geoengineering as a potential climate change remedy for more than 50 years. But almost nobody has actually enacted real-world experiments because of the associated risks, like rapid changes in our planet’s precipitation patterns, damage to the ozone layer, and significant geopolitical ramifications.

[…]

If and when we get enough sulfur into the atmosphere to meaningfully cool Earth, we’d have to keep adding new particles indefinitely to avoid entering an era of climate change about four to six times worse than what we’re currently experiencing, according to one 2018 study. Sulfur aerosols don’t stick around very long: their lifespan in the stratosphere is somewhere between a few days and a couple of years, depending on particle size and other factors.

[…]

Rogue agents independently deciding to impose geoengineering on the rest of us has been a concern for as long as the thought of intentionally manipulating the atmosphere has been around. The Pentagon even has dedicated research teams working on methods to detect and combat such clandestine attempts. But effectively defending against solar geoengineering is much more difficult than just doing it.

In Iseman’s rudimentary first trials, he says he released two weather balloons full of helium and sulfur aerosols somewhere in Baja California, Mexico. The founder told MIT Technology Review that the balloons rose toward the sky but, beyond that, he doesn’t know what happened to them, as the balloons lacked tracking equipment. Maybe they made it to the stratosphere and released their payload, maybe they didn’t.

[…]

Iseman and Make Sunsets claim that a single gram of sulfur aerosols counteracts the warming effects of one ton of CO2. But there is no clear scientific basis for such an assertion, geoengineering researcher Shuchi Talati told the outlet. And so the $10 “cooling credits” the company is hawking are likely bunk (along with most carbon credit/offset schemes).

Even if the balloons made it to the stratosphere, the small amount of sulfur released wouldn’t be enough to trigger significant environmental effects, David Keith told MIT Technology Review.

[…]

The solution to climate change is almost certainly not a single maverick “disrupting” the composition of Earth’s stratosphere. But that hasn’t stopped Make Sunsets from reportedly raising nearly $750,000 in funds from venture capital firms. And for just ~$29,250,000 more per year, the company claims it can completely offset current warming. It’s not a bet we recommend taking.
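For what it’s worth, the company’s own numbers are easy to sanity-check. A minimal back-of-the-envelope sketch, using only figures quoted in the article (the 1 gram : 1 ton claim itself is, as noted above, unsupported):

```python
# Back-of-the-envelope check of Make Sunsets' own numbers. All figures come
# from the article; the 1 gram : 1 ton CO2 claim is the company's, unverified.
price_per_credit_usd = 10        # one "cooling credit" = 1 gram of sulfur
annual_cost_usd = 29_250_000     # quoted yearly cost to "offset current warming"

credits_per_year = annual_cost_usd / price_per_credit_usd
sulfur_tonnes = credits_per_year / 1e6       # 1 g per credit, converted to tonnes
claimed_offset_mt_co2 = credits_per_year / 1e6  # 1 t CO2 per credit, in millions

print(f"{credits_per_year:,.0f} credits/year")
print(f"= {sulfur_tonnes:.1f} tonnes of sulfur aerosol per year")
print(f"claimed to offset {claimed_offset_mt_co2:.1f} million tonnes of CO2 warming")
```

Under its own math, Make Sunsets would be lofting roughly three tonnes of sulfur a year (each test balloon carried about 10 grams), claimed to offset around three million tonnes of CO2, while annual global emissions run to tens of billions of tonnes. That gap is one more reason to treat the credits skeptically.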

Source: Startup Claims It’s Sending Sulfur Into the Atmosphere to Fight Climate Change

University students are using AI to write essays. Teachers are learning how to embrace that

As word of students using AI to automatically complete essays continues to spread, some lecturers are beginning to rethink how they should teach their pupils to write.

Writing is a difficult task to do well. The best novelists and poets write furiously, dedicating their lives to mastering their craft. The creative process of stringing together words to communicate thoughts is often viewed as something complex, mysterious, and unmistakably human. No wonder people are fascinated by machines that can write too.

[…]

Although AI can generate text with perfect spelling, great grammar and syntax, the content often isn’t that good beyond a few paragraphs. The writing becomes less coherent over time with no logical train of thought to follow. Language models fail to get their facts right – meaning quotes, dates, and ideas are likely false. Students will have to inspect the writing closely and correct mistakes for their work to be convincing.

Prof: AI-assisted essays ‘not good’

Scott Graham, associate professor at the Department of Rhetoric & Writing at the University of Texas at Austin, tasked his pupils with writing a 2,200-word essay about a campus-wide issue using AI. Students were free to lightly edit and format their work with the only rule being that most of the essay had to be automatically generated by software.

In an opinion article on Inside Higher Ed, Graham said the AI-assisted essays were “not good,” noting that the best of the bunch would have earned a C or C-minus grade. To score higher, students would have had to rewrite more of the essay using their own words to improve it, or craft increasingly narrower and specific prompts to get back more useful content.

“You’re not going to be able to push a button or submit a short prompt and generate a ready-to-go essay,” he told The Register.

[…]

“I think if students can do well with AI writing, it’s not actually all that different from them doing well with their own writing. The main skills I teach and assess mostly happen after the initial drafting,” he said.

“I think that’s where people become really talented writers; it’s in the revision and the editing process. So I’m optimistic about [AI] because I think that it will provide a framework for us to be able to teach that revision and editing better.

“Some students have a lot of trouble sometimes generating that first draft. If all the effort goes into getting them to generate that first draft, and then they hit the deadline, that’s what they will submit. They don’t get a chance to revise, they don’t get a chance to edit. If we can use those systems to speed write the first draft, it might really be helpful,” he opined.

[…]

Listicles, informal blog posts, or news articles will be easier to imitate than niche academic papers or literary masterpieces. Teachers will need to be thoughtful about the essay questions they set and make sure students’ knowledge is really being tested, if they don’t want them to cut corners.

[…]

“The onus now is on writing teachers to figure out how to get to the same kinds of goals that we’ve always had about using writing to learn. That includes students engaging with ideas, teaching them how to formulate thoughts, how to communicate clearly or creatively. I think all of those things can be done with AI systems, but they’ll be done differently.”

The line between using AI as a collaborative tool or a way to cheat, however, is blurry. None of the academics teaching writing who spoke to The Register thought students should be banned from using AI software. “Writing is fundamentally shaped by technology,” Vee said.

“Students use spell check and grammar check. If I got a paper where a student didn’t use these, it stands out. But it used to be, 50 years ago, writing teachers would complain that students didn’t know how to spell so they would teach spelling. Now they don’t.”

Most teachers, however, told us they would support regulating the use of AI-writing software in education.

[…]

Mills was particularly concerned about AI reducing the need for people to think for themselves, considering language models carry forward biases in their training data. “Companies have decided what to feed it and we don’t know. Now, they are being used to generate all sorts of things from novels to academic papers, and they could influence our thoughts or even modify them. That is an immense power, and it’s very dangerous.”

Lauren Goodlad, professor of English and Comparative Literature at Rutgers University, agreed. If they parrot what AI comes up with, students may end up more likely to associate Muslims with terrorism or mention conspiracy theories, for example.

[…]

“As teachers, we are experimenting, not panicking,” Monroe told The Register.

“We want to empower our students as writers and thinkers. AI will play a role… This is a time of exciting and frenzied development, but educators move more slowly and deliberately… AI will be able to assist writers at every stage, but students and teachers will need tools that are thoughtfully calibrated.”

[…]


Source: University students are using AI to write essays. Now what? • The Register

FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services

For the last thirteen years the Free Software Foundation has published its Ethical Tech Giving Guide. But what’s interesting is this year’s guide also tags companies and products with negative recommendations to “stay away from.”

Stay away from: iPhones
It’s not just Siri that’s creepy: all Apple devices contain software that’s hostile to users. Although they claim to be concerned about user privacy, they don’t hesitate to put their users under surveillance.

Apple prevents you from installing third-party free software on your own phone, and they use this control to censor apps that compete with or subvert Apple’s profits.

Apple has a history of exploiting their absolute control over their users to silence political activists and help governments spy on millions of users.

Stay away from: M1 MacBook and MacBook Pro
macOS is proprietary software that restricts its users’ freedoms.

In November 2020, macOS was caught alerting Apple each time a user opens an app. Even though Apple is making changes to the service, it just goes to show how far they will push until there is an outcry.

Comes crawling with spyware that rats you out to advertisers.

Stay away from: Amazon
Amazon is one of the most notorious DRM offenders. They use this Orwellian control over their devices and services to spy on users and keep them trapped in their walled garden.

Be aware that Amazon isn’t the only peddler of ebook DRM. Disturbingly, it’s enthusiastically supported by most of the big publishing houses.

Read more about the dangers of DRM through our Defective by Design campaign.

Stay away from: Spotify, Apple Music, and all other major streaming services
In addition to streaming music encumbered by DRM, people who want to use Spotify are required to install additional proprietary software. Even Spotify’s client for GNU/Linux relies on proprietary software.

Apple Music is no better, and places heavy restrictions on the music streamed through the platform.

Stay away from: Netflix
Netflix is continuing its disturbing trend of making onerous DRM the norm for streaming media. That’s why they were a target for last year’s International Day Against DRM (IDAD).

They’re also leveraging their place in the Motion Picture Association of America (MPAA) to advocate for tighter restrictions on users, and drove the effort to embed DRM into the fabric of the Web.

“In your gift giving this year, put freedom first,” their guide begins.

And for a freedom-respecting last-minute gift idea, they suggest giving the gift of an FSF membership (which comes with a code and a printable page “so that you can present your gift as a physical object, if you like”). The membership is valid for one year and comes with the many benefits of an FSF associate membership, including a USB member card, email forwarding, access to the FSF’s Jitsi Meet videoconferencing server and member forum, discounts in the FSF shop and on ThinkPenguin hardware, and more.

If you are in the United States, your gift would also be fully tax-deductible in the USA.

Source: FSF Warns: Stay Away From iPhones, Amazon, Netflix, and Music Streaming Services – Slashdot

Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – finally. What lawsuits lie in wait?

The version of the iconic character from “Steamboat Willie” will enter the public domain in 2024. But those trying to take advantage could end up in a legal mousetrap. From a report: There is nothing soft and cuddly about the way Disney protects the characters it brings to life. This is a company that once forced a Florida day care center to remove an unauthorized Minnie Mouse mural. In 2006, Disney told a stonemason that carving Winnie the Pooh into a child’s gravestone would violate its copyright. The company pushed so hard for an extension of copyright protections in 1998 that the result was derisively nicknamed the Mickey Mouse Protection Act. For the first time, however, one of Disney’s marquee characters — Mickey himself — is set to enter the public domain.

“Steamboat Willie,” the 1928 short film that introduced Mickey to the world, will lose copyright protection in the United States and a few other countries at the end of next year, prompting fans, copyright experts and potential Mickey grabbers to wonder: How is the notoriously litigious Disney going to respond? “I’m seeing in Reddit forums and on Twitter where people — creative types — are getting excited about the possibilities, that somehow it’s going to be open season on Mickey,” said Aaron J. Moss, a partner at Greenberg Glusker in Los Angeles who specializes in copyright and trademark law. “But that is a misunderstanding of what is happening with the copyright.” The matter is more complicated than it appears, and those who try to capitalize on the expiring “Steamboat Willie” copyright could easily end up in a legal mousetrap. “The question is where Disney tries to draw the line on enforcement,” Mr. Moss said, “and if courts get involved to draw that line judicially.”

Only one copyright is expiring. It covers the original version of Mickey Mouse as seen in “Steamboat Willie,” an eight-minute short with little plot. This nonspeaking Mickey has a rat-like nose, rudimentary eyes (no pupils) and a long tail. He can be naughty. In one “Steamboat Willie” scene, he torments a cat. In another, he uses a terrified goose as a trombone. Later versions of the character remain protected by copyrights, including the sweeter, rounder Mickey with red shorts and white gloves most familiar to audiences today. They will enter the public domain at different points over the coming decades. “Disney has regularly modernized the character, not necessarily as a program of copyright management, at least initially, but to keep up with the times,” said Jane C. Ginsburg, an authority on intellectual property law who teaches at Columbia University.

Source: Mickey’s Copyright Adventure: Early Disney Creation Will Soon Be Public Property – Slashdot

How it’s remotely possible that a company is capitalising on a thought someone had around 100 years ago is beyond me.

The LastPass disclosure of leaked password vaults is being torn apart by security experts

Last week, just before Christmas, LastPass dropped a bombshell announcement: as the result of a breach in August, which led to another breach in November, hackers had gotten their hands on users’ password vaults. While the company insists that your login information is still secure, some cybersecurity experts are heavily criticizing its post, saying that it could make people feel more secure than they actually are and pointing out that this is just the latest in a series of incidents that make it hard to trust the password manager.

LastPass’ December 22nd statement was “full of omissions, half-truths and outright lies,” reads a blog post from Wladimir Palant, a security researcher known for helping originally develop Adblock Plus, among other things. Some of his criticisms deal with how the company has framed the incident and how transparent it’s being; he accuses the company of trying to portray the August incident, where LastPass says “some source code and technical information were stolen,” as a separate breach, when he says that in reality the company “failed to contain” the breach.

He also highlights LastPass’ admission that the leaked data included “the IP addresses from which customers were accessing the LastPass service,” saying that could let the threat actor “create a complete movement profile” of customers if LastPass was logging every IP address you used with its service.

Another security researcher, Jeremi Gosney, wrote a long post on Mastodon explaining his recommendation to move to another password manager. “LastPass’s claim of ‘zero knowledge’ is a bald-faced lie,” he says, alleging that the company has “about as much knowledge as a password manager can possibly get away with.”

LastPass claims its “zero knowledge” architecture keeps users safe because the company never has access to your master password, which is the thing that hackers would need to unlock the stolen vaults. While Gosney doesn’t dispute that particular point, he does say that the phrase is misleading. “I think most people envision their vault as a sort of encrypted database where the entire file is protected, but no — with LastPass, your vault is a plaintext file and only a few select fields are encrypted.”

Palant also notes that the encryption only does you any good if the hackers can’t crack your master password, which is LastPass’ main defense in its post: if you use its defaults for password length and strengthening and haven’t reused it on another site, “it would take millions of years to guess your master password using generally-available password-cracking technology” wrote Karim Toubba, the company’s CEO.

“This prepares the ground for blaming the customers,” writes Palant, saying that “LastPass should be aware that passwords will be decrypted for at least some of their customers. And they have a convenient explanation already: these customers clearly didn’t follow their best practices.” However, he also points out that LastPass hasn’t necessarily enforced those standards. Despite the fact that it made 12-character passwords the default in 2018, Palant says, “I can log in with my eight-character password without any warnings or prompts to change it.”

LastPass’ post has even elicited a response from a competitor, 1Password — on Wednesday, the company’s principal security architect Jeffrey Goldberg wrote a post for its site titled “Not in a million years: It can take far less to crack a LastPass password.” In it, Goldberg calls LastPass’ claim of it taking a million years to crack a master password “highly misleading,” saying that the statistic appears to assume a 12 character, randomly generated password. “Passwords created by humans come nowhere near meeting that requirement,” he writes, saying that threat actors would be able to prioritize certain guesses based on how people construct passwords they can actually remember.

Of course, a competitor’s word should probably be taken with a grain of salt, though Palant echoes a similar idea in his post — he claims the viral XKCD method of creating passwords would take around 3 years to guess with a single GPU, while some 11-character passwords (that many people may consider to be good) would only take around 25 minutes to crack with the same hardware. It goes without saying that a motivated actor trying to crack into a specific target’s vault could probably throw more than one GPU at the problem, potentially cutting that time down by orders of magnitude.

Both Gosney and Palant take issue with LastPass’ actual cryptography too, though for different reasons. Gosney accuses the company of basically committing “every ‘crypto 101’ sin” with how its encryption is implemented and how it manages data once it’s been loaded into your device’s memory.

Meanwhile, Palant criticizes the company’s post for painting its password-strengthening algorithm, known as PBKDF2, as “stronger-than-typical.” The idea behind the standard is that it makes it harder to brute-force guess your passwords, as you’d have to perform a certain number of calculations on each guess. “I seriously wonder what LastPass considers typical,” writes Palant, “given that 100,000 PBKDF2 iterations are the lowest number I’ve seen in any current password manager.”
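To make the iteration-count point concrete, here is a minimal sketch using only Python’s standard library. The 100,100 figure is LastPass’s documented default; the other counts and the resulting timings are illustrative and depend entirely on your hardware.

```python
# Sketch of PBKDF2 key stretching: higher iteration counts make each
# password guess proportionally more expensive for an attacker too.
import hashlib, os, time

password = b"correct horse battery staple"
salt = os.urandom(16)

for iterations in (5_000, 100_100, 600_000):
    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # An attacker pays roughly this same cost per guess, so the iteration
    # count scales the price of a brute-force attack linearly.
    print(f"{iterations:>7} iterations: {elapsed_ms:6.1f} ms per guess")
```

Because an attacker must repeat the same stretching for every guess, a vault left at 5,000 iterations is roughly twenty times cheaper to attack than one at the 100,100 default; which is exactly why accounts stuck on old, low settings worry the researchers.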

[…]

Source: The LastPass disclosure of leaked password vaults is being torn apart by security experts – The Verge

EarSpy: Spying on Phone Calls via Ear Speaker Vibrations Captured by Accelerometer

As smartphone manufacturers are improving the ear speakers in their devices, it can become easier for malicious actors to leverage a particular side-channel for eavesdropping on a targeted user’s conversations, according to a team of researchers from several universities in the United States.

The attack method, named EarSpy, is described in a paper published just before Christmas by researchers from Texas A&M University, Temple University, New Jersey Institute of Technology, Rutgers University, and the University of Dayton.

EarSpy relies on the phone’s ear speaker — the speaker at the top of the device that is used when the phone is held to the ear — and the device’s built-in accelerometer for capturing the tiny vibrations generated by the speaker.

[…]

Android security has improved significantly and it has become increasingly difficult for malware to obtain the required permissions.

On the other hand, accessing raw data from the motion sensors in a smartphone does not require any special permissions. Android developers have started placing some restrictions on sensor data collection, but the EarSpy attack is still possible, the researchers said.

A piece of malware planted on a device could use the EarSpy attack to capture potentially sensitive information and send it back to the attacker.

[…]

The researchers discovered that attacks such as EarSpy are becoming increasingly feasible due to the improvements smartphone manufacturers are making to ear speakers. They conducted tests on the OnePlus 7T and OnePlus 9 smartphones — both running Android — and found that the accelerometer captures significantly more data from the ear speaker in these newer models, which have stereo speakers, than in older OnePlus phones that did not.

The experiments conducted by the academic researchers analyzed the reverberation effect of ear speakers on the accelerometer by extracting time-frequency domain features and spectrograms. The analysis focused on gender recognition, speaker recognition, and speech recognition.
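The paper’s exact pipeline isn’t reproduced in the article, but the general shape of such an analysis is straightforward. A minimal sketch on a synthetic trace (the real attack records the accelerometer while the ear speaker plays speech, then trains a classifier on features like these; the sampling rate here is an assumption):

```python
# Sketch of time-frequency feature extraction from an accelerometer trace,
# using a synthetic signal in place of real sensor data.
import numpy as np
from scipy.signal import spectrogram

fs = 500                              # Hz; assumed accelerometer sampling rate
t = np.arange(0, 2.0, 1 / fs)
# Stand-in z-axis trace: a faint 120 Hz "speech" reverberation buried in noise.
accel_z = 1e-3 * np.sin(2 * np.pi * 120 * t) + 1e-4 * np.random.randn(t.size)

freqs, times, Sxx = spectrogram(accel_z, fs=fs, nperseg=256, noverlap=128)
features = 10 * np.log10(Sxx + 1e-12)   # log-power spectrogram, the kind of
                                         # input a classifier would train on
print(features.shape)                    # (frequency bins, time frames)
```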

In the gender recognition test, whose goal is to determine whether the target is male or female, the EarSpy attack had 98% accuracy. The accuracy was nearly as high, at 92%, for detecting the speaker’s identity.

When it comes to actual speech, the accuracy was up to 56% for capturing digits spoken in a phone call.

Source: EarSpy: Spying on Phone Calls via Ear Speaker Vibrations Captured by Accelerometer

ETSI’s Activities in Artificial Intelligence: White Paper

[…]

This white paper, entitled ETSI Activities in the Field of Artificial Intelligence, supports all stakeholders and summarizes ongoing efforts in ETSI as well as planned future activities. It also includes an analysis of how ETSI deliverables may support current policy initiatives in the field of artificial intelligence. One section of the document outlines ETSI activities relevant to addressing societal challenges in AI, while another addresses the involvement of the European research community.

AI activities in ETSI also rely on a unique community of testing experts to ensure independently verifiable and repeatable testing of essential requirements in the field of AI. ETSI engages with its highly recognised Human Factors community to develop solutions for human oversight of AI systems.

AI requires many kinds of distinct expertise, and often AI is not the end goal but a means to achieve it. For this reason, ETSI has chosen to implement a distributed approach to AI: specialized communities meet in technically focused groups. Examples include the technical committee CYBER, with a specific focus on cybersecurity aspects; ISG SAI, working towards securing AI systems; and ISG ENI, dealing with the question of how to integrate AI into a network architecture. These are three of the thirteen groups currently working on AI-related technologies within ETSI. The first initiative dates back to 2016 with the publication of a White Paper describing GANA (the Generic Autonomic Networking Architecture).

[…]

Source: ETSI – ETSI’s Activities in Artificial Intelligence: Read our New White Paper

Two people charged with hacking Ring security cameras to livestream swattings

In a reminder of smart home security’s dark side, two people hacked Ring security cameras to livestream swattings, according to a Los Angeles grand jury indictment reported by Bloomberg. The pair called in hoax emergencies to authorities and livestreamed the police response on social media in late 2020.

James Thomas Andrew McCarty, 20, of Charlotte, North Carolina, and Kya Christian Nelson, 21, of Racine, Wisconsin, hacked into Yahoo email accounts to gain access to 12 Ring cameras across nine states in November 2020 (disclaimer: Yahoo is Engadget’s parent company). In one of the incidents, Nelson claimed to be a minor reporting their parents for firing guns while drinking alcohol. When police arrived, the pair used the Ring cameras to taunt the victims and officers while livestreaming — a pattern appearing in several incidents, according to prosecutors.

[…]

Although the smart devices can deter things like robberies and “porch pirates,” Amazon admits to providing footage to police without user consent or a court order when it believes someone is in danger. Inexplicably, the tech giant made a zany reality series using Ring footage, which didn’t exactly quell concerns about the tech’s Orwellian side.

Source: Two people charged with hacking Ring security cameras to livestream swattings | Engadget

Amazing that people don’t realise that Amazon is creating a total and constant surveillance system with hardware that you paid for.

Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit

It’s been four years since Facebook became embroiled in its biggest scandal to date: Cambridge Analytica. In addition to paying a $5 billion fine to the Federal Trade Commission in a settlement, the social network has just agreed to pay $725 million to settle a long-running class-action lawsuit, making it the biggest settlement ever in a privacy case.

To recap, a whistleblower revealed in 2018 that now-defunct British political consulting firm Cambridge Analytica harvested the personal data of almost 90 million users without their consent for targeted political ads during the 2016 US presidential campaign and the UK’s Brexit referendum.

The controversy led to Mark Zuckerberg testifying before Congress, a $5 billion fine levied on the company by the FTC in July 2019, and a $100 million settlement with the US Securities and Exchange Commission. There was also a class-action lawsuit filed in 2018 on behalf of Facebook users who alleged the company violated consumer privacy laws by sharing private data with other firms.

Facebook parent Meta settled the class action in August, thereby ensuring CEO Mark Zuckerberg, chief operating officer Javier Olivan, and former COO Sheryl Sandberg avoided hours of questioning from lawyers while under oath.

[…]

This doesn’t mark the end of Meta’s dealings with the Cambridge Analytica fallout. Zuckerberg is facing a lawsuit from Washington DC’s attorney general Karl A. Racine over allegations that the Meta boss was personally involved in failures that led to the incident and his “policies enabled a multi-year effort to mislead users about the extent of Facebook’s wrongful conduct.”

Source: Meta agrees to $725 million settlement in Cambridge Analytica class-action lawsuit | TechSpot

OpenAI releases Point-E, an AI that generates 3D point clouds / meshes

[…] This week, OpenAI open sourced Point-E, a machine learning system that creates a 3D object given a text prompt. According to a paper published alongside the code base, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.

[…]

Outside of the mesh-generating model, which stands alone, Point-E consists of two models: a text-to-image model and an image-to-3D model. The text-to-image model, similar to generative art systems like OpenAI’s own DALL-E 2 and Stable Diffusion, was trained on labeled images to understand the associations between words and visual concepts. The image-to-3D model, on the other hand, was fed a set of images paired with 3D objects so that it learned to effectively translate between the two.

When given a text prompt — for example, “a 3D printable gear, a single gear 3 inches in diameter and half inch thick” — Point-E’s text-to-image model generates a synthetic rendered object that’s fed to the image-to-3D model, which then generates a point cloud.
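As a schematic, the pipeline looks like the sketch below. The function names and array shapes are placeholders, not Point-E’s actual API; the real sampler classes live in the openai/point-e repository.

```python
# Schematic sketch of Point-E's two-stage pipeline with stub models standing
# in for the real diffusion networks. Names and shapes are hypothetical.
import numpy as np

def text_to_image_model(prompt: str) -> np.ndarray:
    """Stage 1 stub: a DALL-E-style diffusion model would render one
    synthetic view of the object described by the prompt."""
    return np.zeros((256, 256, 3))          # placeholder RGB rendering

def image_to_3d_model(image: np.ndarray) -> np.ndarray:
    """Stage 2 stub: a diffusion model trained on (image, 3D) pairs would
    lift the single rendered view into a colored point cloud."""
    return np.zeros((4096, 6))              # x, y, z, r, g, b per point

prompt = "a 3D printable gear, a single gear 3 inches in diameter and half inch thick"
cloud = image_to_3d_model(text_to_image_model(prompt))
print(cloud.shape)                          # (4096, 6)
```

Splitting the problem into one cheap image generation plus one image-to-cloud lift is part of why Point-E needs only a minute or two on a single GPU, versus the lengthy per-object optimization of approaches like DreamFusion.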

After training the models on a dataset of “several million” 3D objects and associated metadata, Point-E could produce colored point clouds that frequently matched text prompts, the OpenAI researchers say. It’s not perfect — Point-E’s image-to-3D model sometimes fails to understand the image from the text-to-image model, resulting in a shape that doesn’t match the text prompt.

[…]

Earlier this year, Google released DreamFusion, an expanded version of Dream Fields, a generative 3D system that the company unveiled back in 2021. Unlike Dream Fields, DreamFusion requires no prior training, meaning that it can generate 3D representations of objects without 3D data.

[…]

Source: OpenAI releases Point-E, an AI that generates 3D models | TechCrunch

The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular

You may have noticed the world getting excited about the capabilities of ChatGPT, a text-based AI chat bot. Similarly, some are getting quite worked up over generative AI systems that can turn text prompts into images, including those mimicking the style of particular artists. But less remarked upon is the use of AI in the world of music. Music Business Worldwide has written two detailed news stories on the topic. The first comes from China:

Tencent Music Entertainment (TME) says that it has created and released over 1,000 tracks containing vocals created by AI tech that mimics the human voice.

And get this: one of these tracks has already surpassed 100 million streams.

Some of these songs use synthetic voices based on human singers, both dead and alive:

TME also confirmed today (November 15) that – in addition to “paying tribute” to the vocals of dead artists via the Lingyin Engine – it has also created “an AI singer lineup with the voices of trending [i.e. currently active] stars such as Yang Chaoyue, among others”.

The copyright industry will doubtless have something to say about that. It is also unlikely to be delighted by the second Music Business Worldwide story about AI-generated music, this time in the Middle East and North Africa (MENA) market:

MENA-focused Spotify rival, Anghami, is now taking the concept to a whole other level – claiming that it will soon become the first platform to host over 200,000 songs generated by AI.

Anghami has partnered with a generative music platform called Mubert, which says it allows users to create “unique soundtracks” for various uses such as social media, presentations or films using one million samples from over 4,000 musicians.

According to Mohammed Ogaily, VP Product at Anghami, the service has already “generated over 170,000 songs, based on three sets of lyrics, three talents, and 2,000 tracks generated by AI”.

It’s striking that the undoubtedly interesting but theoretical possibilities of ChatGPT and generative AI art are dominating the headlines, while we hear relatively little about these AI-based music services that are already up and running, and hugely popular with listeners. It’s probably a result of the generally parochial nature of mainstream Western media, which often ignores the important developments happening elsewhere.

Source: The Copyright Industry Is About To Discover That There Are Hundreds Of Thousands Of Songs Generated By AI Already Available, Already Popular | Techdirt

AI-Created Comic Has Copyright Protection Revoked by US

The United States Copyright Office (USCO) reversed an earlier decision to grant a copyright to a comic book that was created using “A.I. art,” and announced that the copyright protection on the comic book will be revoked, stating that copyrighted works must be created by humans to gain official copyright protection.

In September, Kris Kashtanova announced that she had received a U.S. copyright on her comic book Zarya of the Dawn, inspired by her late grandmother and created with the text-to-image engine Midjourney. Kashtanova referred to herself as a “prompt engineer” and explained at the time that she sought the copyright so that she could “make a case that we do own copyright when we make something using AI.”

[…]

Source: AI-Created Comic Has Been Deemed Ineligible for Copyright Protection

I guess there is no big corporate interest in lobbying for AI-created content – yet – and so the copyright masters have no idea what to do without their cash-carrying corporate masters telling them what to do.

ChatGPT Is a ‘Code Red’ for Google’s Search Business

A new wave of chat bots like ChatGPT use artificial intelligence that could reinvent or even replace the traditional internet search engine. From a report: Over the past three decades, a handful of products like Netscape’s web browser, Google’s search engine and Apple’s iPhone have truly upended the tech industry and made what came before them look like lumbering dinosaurs. Three weeks ago, an experimental chat bot called ChatGPT made its case to be the industry’s next big disrupter. […] Although ChatGPT still has plenty of room for improvement, its release led Google’s management to declare a “code red.” For Google, this was akin to pulling the fire alarm. Some fear the company may be approaching a moment that the biggest Silicon Valley outfits dread — the arrival of an enormous technological change that could upend the business.

For more than 20 years, the Google search engine has served as the world’s primary gateway to the internet. But with a new kind of chat bot technology poised to reinvent or even replace traditional search engines, Google could face the first serious threat to its main search business. One Google executive described the efforts as make or break for Google’s future. ChatGPT was released by an aggressive research lab called OpenAI, and Google is among the many other companies, labs and researchers that have helped build this technology. But experts believe the tech giant could struggle to compete with the newer, smaller companies developing these chat bots, because of the many ways the technology could damage its business.

Source: ChatGPT Is a ‘Code Red’ for Google’s Search Business – Slashdot

FBI warns of fake shopping sites – recommends using an ad blocker

The FBI is warning the public that cyber criminals are using search engine advertisement services to impersonate brands and direct users to malicious sites that host ransomware and steal login credentials and other financial information.

[…]

Cyber criminals purchase advertisements that appear within internet search results using a domain that is similar to an actual business or service. When a user searches for that business or service, these advertisements appear at the very top of search results with minimum distinction between an advertisement and an actual search result. These advertisements link to a webpage that looks identical to the impersonated business’s official webpage.

[…]

The FBI recommends individuals take the following precautions:

  • Before clicking on an advertisement, check the URL to make sure the site is authentic. A malicious domain name may be similar to the intended URL but with typos or a misplaced letter (one way to automate that check is sketched after these recommendations).
  • Rather than search for a business or financial institution, type the business’s URL into an internet browser’s address bar to access the official website directly.
  • Use an ad blocking extension when performing internet searches. Most internet browsers allow a user to add extensions, including extensions that block advertisements. These ad blockers can be turned on and off within a browser to permit advertisements on certain websites while blocking advertisements on others.
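That first recommendation can be partially automated. A minimal sketch using only the Python standard library; the trusted-domain list and similarity threshold are illustrative, and a real checker would also need to handle Unicode lookalikes and subdomain tricks:

```python
# Flag a displayed domain that is suspiciously close to, but not exactly,
# a domain on a trusted list -- the typosquatting pattern the FBI describes.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED = {"paypal.com", "chase.com", "amazon.com"}  # illustrative list

def suspicious(url: str, threshold: float = 0.8) -> bool:
    domain = (urlparse(url).hostname or "").removeprefix("www.")
    if domain in TRUSTED:
        return False  # exact match: the real site
    # A near-match to a trusted domain suggests a typo or swapped letter.
    return any(SequenceMatcher(None, domain, good).ratio() >= threshold
               for good in TRUSTED)

print(suspicious("https://www.paypa1.com/login"))  # True: one swapped letter
print(suspicious("https://www.paypal.com/login"))  # False: the real site
print(suspicious("https://example.org"))           # False: merely unrelated
```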

The FBI recommends businesses take the following precautions:

  • Use domain protection services to notify businesses when similar domains are registered to prevent domain spoofing.
  • Educate users about spoofed websites and the importance of confirming destination URLs are correct.
  • Educate users about where to find legitimate downloads for programs provided by the business.

Source: Internet Crime Complaint Center (IC3) | Cyber Criminals Impersonating Brands Using Search Engine Advertisement Services to Defraud Users

For Firefox you have uBlock Origin or NoScript / Disconnect / Facebook Container / Privacy Badger / Ghostery / Super Agent / LocalCDN – you can run them all at once, but you will sometimes have to whitelist certain sites just to get them to work. It’s a bit of trouble, but the internet will look much better being mainly ad-free.

LastPass admits attackers copied password vaults

Password locker LastPass has warned customers that the August 2022 attack on its systems saw unknown parties copy encrypted files that contain the passwords to their accounts.

In a December 22nd update to its advice about the incident, LastPass brings customers up to date by explaining that in the August 2022 attack, “some source code and technical information were stolen from our development environment and used to target another employee, obtaining credentials and keys which were used to access and decrypt some storage volumes within the cloud-based storage service.”

Those creds allowed the attacker to copy information “that contained basic customer account information and related metadata including company names, end-user names, billing addresses, email addresses, telephone numbers, and the IP addresses from which customers were accessing the LastPass service.”

The update reveals that the attacker also copied “customer vault” data – the file LastPass uses to let customers record their passwords.

That file “is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, as well as fully-encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data.”

Which means the attackers have users’ passwords. But thankfully those passwords are encrypted with “256-bit AES encryption and can only be decrypted with a unique encryption key derived from each user’s master password”.
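The scheme both LastPass articles describe looks roughly like the sketch below, assuming the third-party cryptography package. The salt handling, iteration count, and cipher mode here are illustrative, not LastPass’s exact implementation:

```python
# Sketch of a vault scheme: a 256-bit AES key derived client-side from the
# master password, so the stolen file is useless without that password.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

master_password = b"a long, unique master password"
salt = os.urandom(16)

# Derive the 256-bit AES key from the master password; this happens on the
# client, so the server (and any thief of its backups) never sees the key.
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                 iterations=100_100)
key = kdf.derive(master_password)

aes = AESGCM(key)
nonce = os.urandom(12)
ciphertext = aes.encrypt(nonce, b"example.com: hunter2", None)

# An attacker holding the vault has `ciphertext` but not `key`; every
# master-password guess costs a full PBKDF2 derivation.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"example.com: hunter2"
```

Everything therefore reduces to the entropy of the master password and the cost of each PBKDF2 derivation, which is exactly why the experts quoted earlier focus on password strength and iteration counts.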

LastPass’ advice is that even though attackers have that file, customers who use its default settings have nothing to do as a result of this update as “it would take millions of years to guess your master password using generally-available password-cracking technology.”

One of those default settings is not to re-use the master password that is required to log into LastPass. The outfit suggests you make it a complex credential and use that password for just one thing: accessing LastPass.

Yet we know that users are often dumbfoundingly lax at choosing good passwords, while two thirds re-use passwords even though they should know better.

[…]

LastPass therefore offered the following advice to individual and business users:

If your master password does not make use of the defaults above, then it would significantly reduce the number of attempts needed to guess it correctly. In this case, as an extra security measure, you should consider minimizing risk by changing passwords of websites you have stored.

Enjoy changing all those passwords, dear reader.

LastPass’s update concludes with news it decommissioned the systems breached in August 2022 and has built new infrastructure that adds extra protections.

Source: LastPass admits attackers copied password vaults

Epic Forced To Pay $520 Million Fine over Fortnite Privacy and Dark Patterns

Fortnite-maker Epic Games has agreed to pay a massive $520 million fine in settlements with the Federal Trade Commission for allegedly illegally gathering data from children and deploying dark patterns techniques to manipulate users into making unwanted in-game purchases. The fines mark a major regulatory win for the Biden administration’s progressive-minded FTC, which, up until now, had largely failed to deliver on its promise of more robust enforcement against U.S. tech companies.

The first $275 million fine will settle allegations Epic collected personal information from children under the age of 13 without their parents’ consent when they played the hugely popular battle royale game. The FTC claims that unjustified data collection violates the Children’s Online Privacy Protection Act. Internal Epic surveys and the licensing of Fortnite-branded toys, the FTC alleges, show Epic clearly knew at least some of its player base was underage. Worse still, the agency claims Epic forced parents to wade through cumbersome barriers when they requested to have their children’s data deleted.

[…]

The game-maker additionally agreed to pay $245 million to refund customers who the FTC says fell victim to manipulative, unfair billing practices that fall under the category of “dark patterns.” Fortnite allegedly deployed a “counterintuitive, inconsistent, and confusing button configuration” that led players to incur unwanted charges with a single press of a button. In some cases, the FTC claims, a single button press meant users were charged while sitting in a loading screen or while trying to wake the game from sleep mode. Users, the complaint alleges, collectively lost hundreds of millions of dollars to those shady practices. Epic allegedly “ignored more than one million user complaints,” suggesting a high number of users were being wrongly charged.

[…]

And though the FTC’s latest fine is a far cry from the $5 billion penalty the agency issued against Facebook in 2019, and represents just a portion of the billions Fortnite reportedly rakes in each year, supporters said it nonetheless represents more than a mere slap on the wrist.

[…]

Source: Epic Forced To Pay Record-Breaking $520 Million Fine

China’s Setting the Standard for Deepfake Regulation

[…]

On January 10, according to the South China Morning Post, China’s Cyberspace Administration will implement new rules intended to protect people from having their voice or image digitally impersonated without their consent. The regulators refer to platforms and services that use the technology to edit a person’s voice or image as “deep synthesis providers.”

Those deep synthesis technologies could include the use of deep learning algorithms and augmented reality to generate text, audio, images or video. We’ve already seen numerous instances over the years of these technologies used to impersonate high profile individuals, ranging from celebrities and tech executives to political figures.

Under the new guidelines, companies and technologists who use the technology must first obtain consent from individuals before editing their voice or image. The rules, officially called the Administrative Provisions on Deep Synthesis for Internet Information Services, come in response to governmental concerns that advances in AI tech could be used by bad actors to run scams or defame people by impersonating their identity. In presenting the guidelines, the regulators also acknowledged areas where these technologies could prove useful. Rather than impose a wholesale ban, the regulator says it will actually promote the tech’s legal use and “provide powerful legal protection to ensure and facilitate” its development.

But, like many of China’s proposed tech policies, political considerations are inseparable. According to the South China Morning Post, news stories reposted using the technology must come from a government approved list of news outlets. Similarly, the rules require all so-called deep synthesis providers adhere to local laws and maintain “correct political direction and correct public opinion orientation.” Correct here, of course, is determined unilaterally by the state.

Though certain U.S. states like New Jersey and Illinois have introduced local privacy legislation that addresses deepfakes, the lack of any meaningful federal privacy law limits regulators’ ability to address the tech on a national level. In the private sector, major U.S. platforms like Facebook and Twitter have created systems meant to detect and flag deepfakes, though they are constantly trying to stay one step ahead of bad actors continually looking for ways to evade those filters.

If China’s new rules are successful, it could lay down a policy framework other nations could build upon and adapt. It wouldn’t be the first time China’s led the pack on strict tech reform. Last year, China introduced sweeping new data privacy laws that radically limited the ways private companies could collect an individual’s personal data. Those rules were built off of Europe’s General Data Protection Regulation.

[…]

That all sounds great, but China’s privacy laws have one glaring loophole tucked within them. Though the law protects people from private companies feeding off their data, it does almost nothing to prevent those same harms being carried out by the government. Similarly with deepfakes: it’s unclear how the newly proposed regulations would, for instance, prohibit a state-run agency from doctoring or manipulating text or audio to influence the narrative around controversial or sensitive political events.

Source: China’s Setting the Standard for Deepfake Regulation

China is also the one setting the bar for anti-monopoly enforcement; the EU and US have been caught with their fingers in the jam jar and their pants down.

Transparent sunlight-activated antifogging metamaterials

[…] Here, guided by nucleation thermodynamics, we design a transparent, sunlight-activated, photothermal coating to inhibit fogging. The metamaterial coating contains a nanoscopically thin percolating gold layer and is most absorptive in the near-infrared range, where half of the sunlight energy resides, thus maintaining visible transparency. The photoinduced heating effect enables sustained and superior fog prevention (4-fold improvement) and removal (3-fold improvement) compared with uncoated samples, and overall impressive performance, indoors and outdoors, even under cloudy conditions. The extreme thinness (~10 nm) of the coating—which can be produced by standard, readily scalable fabrication processes—enables integration beneath other coatings […]

Source: Transparent sunlight-activated antifogging metamaterials | Nature Nanotechnology

Skyglow pollution is separating us from the stars but also killing cultural knowledge and species

[…]

It’s not only star gazing that’s in jeopardy. Culture, wildlife and other scientific advancements are being threatened by mass light infrastructure that is costing cities billions of dollars a year as it expands alongside exponential population growth.

Some researchers call light pollution cultural genocide. Generations of complex knowledge systems, built by Indigenous Australians and Torres Strait Islanders upon a once-clear view of the Milky Way, are being lost.

In the natural world, the mountain pygmy possum, a marsupial native to Australia, is critically endangered. Its main food source, the bogong moth, is being affected by artificial outdoor lighting messing with its migration patterns. Sea turtles are exhibiting erratic nesting and migration behaviours due to lights blasting from new coastal developments.

So how bright does our future look under a blanket of light?

“If you go to Mount Coot-tha, basically the highest point in Brisbane, every streetlight you can see from up there is a waste of energy,” Downs says. “Why is light going up and being wasted into the atmosphere? There’s no need for it.”

Skyglow

Around the world, one in three people can’t see the Milky Way at night because their skies are excessively illuminated. Four in five people live in towns and cities that emit enough light to limit their view of the stars. In Europe, that figure soars to 99%.

Blame skyglow – the unnecessary illumination of the sky above, and surrounding, an urban area. It’s easy to see it if you travel an hour from a city, turn around, then look back towards its centre.

[…]

Artificial lights at night cause skyglow in two ways: spill and glare. Light spills from a bulb when it trespasses beyond the area intended to be lit, while glare is a visual sensation caused by excessive brightness.

Streetlights contribute hugely to this skyglow and have been causing astronomers anxiety for decades.

[…]

Source: Blinded by the light: how skyglow pollution is separating us from the stars | Queensland | The Guardian