About Robin Edgar


Kleiman v. Wright: $65 Billion Bitcoin Case Has Started

The civil trial of Kleiman v. Wright started on Monday in Miami. The estate of David Kleiman is suing Craig Wright, the self-declared inventor of Bitcoin, for 50% ownership of 1.1 million bitcoins. The estate claims Kleiman was in a partnership with Wright to mine the coins, but that after Kleiman died in April 2013, Wright denied any partnership had existed. At over $60,000 per bitcoin, the case is currently worth some $65 billion.

Craig Wright has previously claimed to be Satoshi Nakamoto, the inventor of Bitcoin, a claim met with skepticism given his inability to show any proof. In this case, Wright has made numerous dubious claims. After the case was filed in 2018, Wright claimed he did not have the keys to the coins but that they would be arriving in January 2020 through a “bonded courier.” After January 2020, Wright provided keys to the estate for verification, but the estate claims the associated bitcoins were fake. Expressing skepticism that the courier even existed, the estate asked for more information about him. Wright then claimed that the courier’s identity and all communications with him were protected by attorney-client privilege, as the courier was an attorney.

Source: Kleiman v. Wright: $65 Billion Bitcoin Case Has Started – Slashdot

Code compiled to WASM may lack standard security defenses

[…]

In a paper titled The Security Risk of Lacking Compiler Protection in WebAssembly, distributed via arXiv, the technical trio say that when a C program is compiled to WASM, it may lack anti-exploit defenses that the programmer takes for granted on native architectures.

The reason for this, they explain, is that security protections available in compilers like Clang for x86 builds don’t show up when WASM output is produced.

“We compiled 4,469 C programs with known buffer overflow vulnerabilities to x86 code and to WebAssembly, and observed the outcome of the execution of the generated code to differ for 1,088 programs,” the paper states.

“Through manual inspection, we identified that the root cause for these is the lack of security measures such as stack canaries in the generated WebAssembly: while x86 code crashes upon a stack-based buffer overflow, the corresponding WebAssembly continues to be executed.”

[….]

For those not in the know, a stack is a structure in memory used by programs to store temporary variables and information controlling the operation of the application. A stack canary is a special value stored in the stack. When someone attempts to exploit, say, a buffer overflow vulnerability in an application, and overwrite data on the stack to hijack the program’s execution, they should end up overwriting the canary. Doing so will be detected by the program, allowing it to trap and end the exploitation attempt.

Without these canaries, an exploited WASM program could continue running, albeit at the bidding of whoever attacked it, whereas its x86 counterpart exits for its own protection, and that’s a potential security problem. Stack canaries aren’t a panacea, and they can be bypassed, though not having them at all makes exploitation a lot easier.

And these issues are not necessarily a deal-breaker: WASM bytecode still exists in a sandbox, and has further defenses against control-flow hijacking techniques such as return-oriented programming.

But as the researchers observe, WASM’s documentation insists that stack-smashing protection isn’t necessary for WASM code. The three boffins say their findings indicate security assumptions for x86 binaries should be questioned for WASM builds and should encourage others to explore the consequences of this divergent behavior, as it applies both to stack-based buffer overflows and other common security weaknesses.

[…]

Source: Code compiled to WASM may lack standard security defenses • The Register

Likely Drone Attack On U.S. Power Grid Revealed In New Intelligence Report

U.S. officials believe that a DJI Mavic 2, a small quadcopter-type drone, with a thick copper wire attached underneath it via nylon cords was likely at the center of an attempted attack on a power substation in Pennsylvania last year. An internal U.S. government report that was issued last month says that this is the first time such an incident has been officially assessed as a possible drone attack on energy infrastructure in the United States, but that this is likely to become more commonplace as time goes on. This is a reality The War Zone has sounded the alarm about in the past, including when we were first to report on a still unexplained series of drone flights near the Palo Verde nuclear powerplant in Arizona in 2019.

[…]

“This is the first known instance of a modified UAS [unmanned aerial system] likely being used in the United States to specifically target energy infrastructure,” the JIB states. “We assess that a UAS recovered near an electrical substation was likely intended to disrupt operations by creating a short circuit to cause damage to transformers or distribution lines, based on the design and recovery location.”

ABC and other outlets have reported that the JIB says that this assessment is based in part on other unspecified incidents involving drones dating back to 2017.

[…]

Beyond the copper wire strung up underneath it, the drone reportedly had its camera and internal memory card removed. Steps were also taken to remove any identifying markings, indicating efforts by the operator or operators to conceal their identities and otherwise make it difficult to trace the drone’s origins.

[…]

 

Source: Likely Drone Attack On U.S. Power Grid Revealed In New Intelligence Report

US bans trade with security firm NSO Group over Pegasus spyware

Surveillance software developer NSO Group may have a very tough road ahead. The US Commerce Department has added NSO to its Entity List, effectively banning trade with the firm. The move bars American companies from doing business with NSO unless they receive explicit permission. That’s unlikely, too: the rule doesn’t allow license exceptions for exports, and the US will review applications with a presumption of denial.

NSO and fellow Israeli company Candiru (also on the Entity List) face accusations of enabling hostile spying by authoritarian governments, which allegedly used spyware like NSO’s Pegasus to track activists, journalists and other critics in a bid to crush political dissent. The listing is part of the Biden-Harris administration’s push to make human rights “the center” of American foreign policy, the Commerce Department said.

The latest round of trade bans also affects Russian company Positive Technologies and Singapore’s Computer Security Initiative Consultancy, both of which were accused of peddling hacking tools.

[…]

Source: US bans trade with security firm NSO Group over Pegasus spyware (updated) | Engadget

UK Schools Normalizing Biometric Collection By Using Facial Recognition For Meal Payments

Subjecting students to surveillance tech is nothing new. Most schools have had cameras installed for years. Moving students from desks to laptops allows schools to monitor internet use, even when students aren’t on campus. Bringing police officers into schools to participate in disciplinary problems allows law enforcement agencies to utilize the same tech and analytics they deploy against the public at large. And if cameras are already in place, it’s often trivial to add facial recognition features.

The same tech that can keep kids from patronizing certain retailers is also being used to keep deadbeat kids from scoring free lunches. While some local governments in the United States are trying to limit the expansion of surveillance tech in their own jurisdictions, governments in the United Kingdom seem less concerned about the mission creep of surveillance technology.

Some students in the UK are now able to pay for their lunch in the school canteen using only their faces. Nine schools in North Ayrshire, Scotland, started taking payments using biometric information gleaned from facial recognition systems on Monday, according to the Financial Times. [alt link]

The technology is being provided by CRB Cunningham, which has installed a system that scans the faces of students and cross-checks them against encrypted faceprint templates stored locally on servers in the schools. It’s being brought in to replace fingerprint scanning and card payments, which have been deemed less safe since the advent of the COVID-19 pandemic.

According to the Financial Times report, 65 schools have already signed up to participate in this program, which has supposedly dropped transaction times at the lunchroom register to less than five seconds per student. I assume that’s an improvement, but it seems fingerprints/cards weren’t all that slow and there are plenty of options for touchless payment if schools need somewhere to spend their cafeteria tech money.

CRB says more than 97% of parents have consented to the collection and use of their children’s biometric info to… um… move kids through the lunch line faster. I guess the sooner you get kids used to having their faces scanned to do mundane things, the less likely they’ll be to complain when demands for info cross over into more private spaces.

The FAQ on the program makes it clear it’s a single-purpose collection governed by a number of laws and data collection policies. Parents can opt out at any time and all data is deleted after opt out or if the student leaves the school. It’s good this is being handled responsibly but, like all facial recognition tech, mistakes can (and will) be made. When these inevitably occur, hopefully the damage will be limited to a missed meal.

The FAQ handles questions specifically about this program. The other flyer published by the North Ayrshire Council explains nothing and implies facial recognition is harmless, accurate, and a positive addition to students’ lives.

We’re introducing Facial Recognition!

This new technology is now available for a contactless meal service!

Following this exciting announcement, the flyer moves on to discussing biometric collections and the tech that makes it all possible. It accomplishes this in seven short “land of contrasts” paragraphs that explain almost nothing and completely ignore the inherent flaws in these systems as well as the collateral damage misidentification can cause.

The section titled “The history of biometrics” contains no history. Instead, it says biometric collections are already omnipresent so why worry about paying for lunch with your face?

Whilst the use of biometric recognition has been steadily growing over the last decade or so, these past couple of years have seen an explosion in development, interest and vendor involvement, particularly in mobile devices where they are commonly used to verify the owner of the device before unlocking or making purchases.

If students want to learn more (or anything) about the history of biometrics, I guess they’ll need to do their own research. Because this is the next (and final) paragraph of the “history of biometrics” section:

We are delighted to offer this fast and secure identification technology to purchase our delicious and nutritious school meals

Time is a flattened circle, I guess. The history of biometrics is the present. And the present is the future of student payment options, of which there are several. But these schools have put their money on facial recognition, which will help them raise a generation of children who’ve never known a life where they weren’t expected to use their bodies to pay for stuff.

Source: UK Schools Normalizing Biometric Collection By Using Facial Recognition For Meal Payments | Techdirt

Nintendo Killed Emulation Sites Then Released Garbage N64 Games For The Switch

[…]

You will recall that a couple of years back, Nintendo opened up a new front in its constant IP wars by going after ROM and emulation sites. That caused plenty of sites to simply shut themselves down, but Nintendo also made a point of collecting some scalps to hang on its belt, most famously in the form of RomUniverse. That site, which very clearly had infringing material not only hosted on the site but promoted by the site’s ownership, got slapped around in the courts to the tune of a huge judgement against it, one which the site owners simply cannot pay.

But all of those are details and don’t answer the real question: why did Nintendo do this? Well, as many expected from the beginning, it did this because the company was planning to release a series of classic consoles, namely the NES mini and SNES mini. But what about later consoles, such as the Nintendo 64?

Well, the answer to that is that Nintendo has offered a Nintendo Switch Online service uplift that includes some N64 games that you can play there instead.

After years of “N64 mini” rumors (which have yet to come to fruition), Nintendo announced plans to honor its first fully 3D gaming system late last month in the form of the Nintendo Switch Online Expansion Pack. Pay a bit extra, the company said, and you’d get a select library of N64 classics, emulated by the company that made them, on Switch consoles as part of an active NSO subscription.

One month later, however, Nintendo’s sales proposition grew more sour. That “bit extra” ballooned to $30 more per year, on top of the existing $20/year fee—a 150 percent jump in annual price. Never mind that the price also included an Animal Crossing expansion pack (which retro gaming fans may not want) and Sega Genesis games (which have been mostly released ad nauseam on every gaming system of the past decade). For many interested fans, that price jump was about the N64 collection.

So, a bit of a big price tag and a bunch of extras that are mostly beside the point from the perspective of the buyer. But, hey, at least Nintendo fans will finally get some N64 games to play on their Switch consoles, right?

Well, it turns out that Nintendo’s offering cannot come close to matching the quality of the very emulators and ROMs that Nintendo has worked so hard to disappear. The Ars Technica post linked above goes into excruciating detail, some of which we’ll discuss for the purpose of giving examples, but here are the categories in which Nintendo’s product does worse than an emulator on a PC.

 

  • Game options, such as visual settings for resolution to fit modern screens
  • Visuals, such as N64’s famous blur settings, and visual changes that expose outdated graphical sprites
  • Controller input lag
  • Controller configuration options
  • Multiplayer lag/stutter

 

If that seems like a lot of problems compared with emulators that have been around for quite a while, well, ding ding ding! We’ll get into some examples briefly below, but I’ll stipulate that none of the issues in the categories above are incredibly bad. But there are so many of them that they all add up to bad!

[….]

Source: Nintendo Killed Emulation Sites Then Released Garbage N64 Games For The Switch | Techdirt

NFI decrypts Tesla’s hidden driving data

[…] The Netherlands Forensic Institute (NFI) said it discovered a wealth of information about Tesla’s Autopilot, along with data around speed, accelerator pedal positions, steering wheel angle and more. The findings will allow the government to “request more targeted data” to help determine the cause of accidents, the investigators said.

The researchers already knew that Tesla vehicles encrypt and store accident-related data, but not which data or how much. As such, they reverse-engineered the system and succeeded in “obtaining data from the models S, Y, X and 3,” which they described in a paper presented at an accident analysis conference.

[….]

With knowledge of how to decrypt the storage, the NFI carried out tests with a Tesla Model S so it could compare the logs with real-world data. It found that the vehicle logs were “very accurate,” with deviations less than 1 km/h (about 0.6 MPH).

[…]

It used to be possible to extract Autopilot data from Tesla EVs, but it’s now encrypted in recent models, the investigators said. Tesla encrypts data for good reason, they acknowledged, including protecting its own IP from other manufacturers and guarding a driver’s privacy. They also noted that the company does provide specific data to authorities and investigators if requested.

However, the team said that the extra data they extracted would allow for more detailed accident investigations, “especially into the role of driver assistance systems.” It added that it would be ideal to know if other manufacturers stored the same level of detail over long periods of time. “If we would know better which data car manufacturers all store, we can also make more targeted claims through the courts or the Public Prosecution Service,” said NFI investigator Frances Hoogendijk. “And ultimately that serves the interest of finding the truth after an accident.”

Source: The Dutch government claims it can decrypt Tesla’s hidden driving data | Engadget

Protecting your IP this way basically means the data can’t be used for legitimate purposes – such as investigating accidents – and it also holds back advancement. This whole IP thing has gotten way out of hand, to the detriment of the human race!

Also, this sounds like non-GDPR compliant data collection

Commercial and Military Applications and Timelines for Quantum Technology | RAND

This report provides an overview of the current state of quantum technology and its potential commercial and military applications. The author discusses each of the three major categories of quantum technology: quantum sensing, quantum communication, and quantum computing. He also considers the likely commercial outlook over the next few years, the major international players, and the potential national security implications of these emerging technologies. This report is based on a survey of the available academic literature, news reporting, and government-issued position papers.

Most of these technologies are still in the laboratory. Applications of quantum sensing could become commercially or militarily ready within the next few years. Although limited commercial deployment of quantum communication technology already exists, the most-useful military applications still lie many years away. Similarly, there may be niche applications of quantum computers in the future, but all known applications are likely at least ten years away. China currently leads the world in the development of quantum communication, while the United States leads in the development of quantum computing.

Key Findings

Quantum technology is grouped into three broad categories: quantum sensing, quantum communication, and quantum computing

  • Quantum sensing refers to the ability to use quantum mechanics to build extremely precise sensors. This is the application of quantum technology considered to have the nearest-term operational potential.
  • The primary near-term application of quantum communication technology is security against eavesdroppers, primarily through a method known as quantum key distribution (QKD). Longer-term applications include networking together quantum computers and sensors.
  • Quantum computing refers to computers that could, in principle, perform certain computations vastly more quickly than is fundamentally possible with a standard computer. Certain problems that are completely infeasible to solve on a standard computer could become feasible on a quantum computer.

Every subfield of quantum technology potentially has major implications for national security

  • Some of the primary applications for quantum sensing include position, navigation, and timing and possibly intelligence, surveillance, and reconnaissance.
  • Quantum communication technology could use QKD to protect sensitive encrypted communications against hostile interception, although some experts consider other security solutions to be more promising.
  • Quantum computing could eventually have the most severe impact on national security. A large-scale quantum computer capable of deploying Shor’s algorithm on current encryption would have a devastating impact on virtually all internet security.

There is no clear overall world leader in quantum technology

  • The United States, China, the European Union, the United Kingdom, and Canada all have specific national initiatives to encourage quantum-related research.
  • The United States and China dominate in overall spending and the most-important technology demonstrations, but Canada, the United Kingdom, and the European Union also lead in certain subfields.
  • China is the world leader in quantum communication, and the United States is the world leader in quantum computing.

The highest-impact quantum technologies are still many years away

  • Applications of quantum sensing could become commercially or militarily ready within the next few years.
  • Limited commercial deployment of quantum communication technology already exists, but the most-useful military and commercial applications still lie many years away.
  • There may be niche applications of quantum computers over the next few years, but all currently known applications are likely at least ten years away.

Source: Commercial and Military Applications and Timelines for Quantum Technology | RAND

Google pays fines to Russia over banned content – because fine is paltry

 U.S. tech giant Google has paid Russia more than 32 million roubles ($455,079) in fines for failing to delete content Moscow deems illegal, the company and a Russian lawmaker said after talks on Monday.

[…]

In 2020, Google’s compliance with requests to delete content was 96.2%, Pancini said, and in the first half of this year, it removed over 489,000 videos, but Russia said too much banned content still remained available.

Piskarev said last week that this included child pornography. Russia has ordered other foreign tech firms to delete posts promoting drug abuse and dangerous pastimes, information about homemade weapons and explosives, as well as ones by groups it designates as extremist or terrorist.

Around 2,650 pieces of illegal content on Google’s internet resources remained undeleted as of the start of October, the RIA news agency cited Piskarev as saying.

“Work has been carried out, as we see, however it is still very far from ideal,” he said.

Piskarev said Pancini had cited technical difficulties for Google’s failure to remove all the banned content.

Source: Google pays fines to Russia over banned content

As soon as you hear “child pornography,” alarm bells should be ringing – someone is trying to push through something else that is totally unacceptable

5 notable Facebook fuckups in the recent revelations

The Facebook Papers are based on leaks from former Facebook staffer Frances Haugen and other inside sources. Haugen has appeared before US Congress, British Parliament, and given prominent television interviews. Among the allegations raised are that Facebook:

  • Knows that its algorithms lead users to extreme content and that it employs too few staff or contractors to curb such content, especially in languages other than English. Content glorifying violence and hate therefore spreads on Facebook – which really should know better by now, after The New York Times in 2018 reported that Myanmar’s military used Facebook to spread racist propaganda that led to mass violence against minority groups;
  • Enforces its rules selectively, allowing certain celebrities and websites to get away with behavior that would get others kicked off the platform. Inconsistent enforcement means users don’t get the protection from harmful content Facebook has so often promised, implying that it prioritises finding eyeballs for ads ahead of user safety;
  • Planned a special version of Instagram targeting teenagers, but cancelled it after Haugen revealed the site’s effects on some users – up to three per cent of teenage girls experience depression or anxiety, or self-harm, as a result of using the service;
  • Can’t accurately assess user numbers and may be missing users with multiple accounts. The Social Network™ may therefore have misrepresented its reach to advertisers, or made its advertising look more super-targeted than it really is – or both;
  • Just isn’t very good at spotting the kind of content it says has no place on its platform – like human trafficking – yes, that means selling human beings on Facebook. At one point Apple was so upset by the prevalence of Facebook posts of this sort it threatened to banish Zuckerberg’s software from the App Store.

Outlets including AP News and The Wall Street Journal have more original reporting on the leaks.

Source: Facebook labels recent revelations unfair • The Register

Google deliberately throttled ad load times to promote AMP, locking advertisers into its own advertising marketplace

More detail has emerged from a 173-page complaint filed last week in the lawsuit brought against Google by a number of US states, including allegations that Google deliberately throttled advertisements not served on its Accelerated Mobile Pages (AMP).

The lawsuit – as we explained at the end of last week – was originally filed in December 2020 and concerns alleged anti-competitive practice in digital advertising. The latest document, filed on Friday, makes fresh claims alleging ad-throttling around AMP.

Google introduced AMP in 2015, with the stated purpose of accelerating mobile web pages. An AMP page is a second version of a web page using AMP components and restricted JavaScript, and is usually served via Google’s content delivery network. Until 2018, the AMP project, although open source, had as part of its governance a BDFL (Benevolent Dictator for Life), this being Google’s Malte Ubl, the technical lead for AMP.

In 2018, Ubl posted that this changed “from a single Tech lead to a Technical Steering Committee”. The TSC sets its own membership and has a stated goal of “no more than 1/3 of the TSC from one employer”, though currently has nine members, of whom four are from Google, including operating director Joey Rozier.

According to the Friday court filing, representing the second amended complaint [PDF] from the plaintiffs, “Google ad server employees met with AMP employees to strategize about using AMP to impede header bidding.” Header bidding, as described in our earlier coverage, enabled publishers to offer ad space to multiple ad exchanges, rather than exclusively to Google’s ad exchange. The suit alleges that AMP limited the compatibility with header bidding to just “a few exchanges,” and “routed rival exchange bids through Google’s ad server so that Google could continue to peek at their bids and trade on inside information”.

The lawsuit also states that Google’s claims of faster performance for AMP pages “were not true for publishers that designed their web pages for speed”.

A more serious claim is that: “Google throttles the load time of non-AMP ads by giving them artificial one-second delays in order to give Google AMP a ‘nice comparative boost’. Throttling non-AMP ads slows down header bidding, which Google then uses to denigrate header bidding for being too slow.”

The document goes on to allege that: “Internally, Google employees grappled with ‘how to [publicly] justify [Google] making something slower’.”

Google promoted AMP in part by ranking non-AMP pages below AMP pages in search results, and featuring a “Search AMP Carousel” specifically for AMP content. This presented what the complaint claims was a “Faustian bargain,” where “(1) publishers who used header bidding would see the traffic to their site drop precipitously from Google suppressing their ranking in search and re-directing traffic to AMP-compatible publishers; or (2) publishers could adopt AMP pages to maintain traffic flow but forgo exchange competition in header bidding, which would make them more money on an impression-by-impression basis.”

The complaint further alleges that “According to Google’s internal documents, [publishers made] 40 per cent less revenue on AMP pages.”

A brief history of AMP

AMP was controversial from its inception. In 2017, developer Jeremy Keith described AMP as deceptive, drawing defensive remarks from Ubl. Keith later joined the AMP advisory committee, but resigned in August, saying that “I can’t in good faith continue to advise on the AMP project for the OpenJS Foundation when it has become clear to me that AMP remains a Google product, with only a subset of pieces that could even be considered open source.”

One complaint is that the AMP specification requires a link to Google-hosted JavaScript.

In May 2020 Google stated it would “remove the AMP requirement from Top Stories eligibility”.

This was confirmed in April 2021, when Google posted about an update to its “page experience” whereby “the Top Stories carousel feature on Google Search will be updated to include all news content, as long as it meets the Google News policies. This means that using the AMP format is no longer required.” In addition, “we will no longer show the AMP badge icon to indicate AMP content.” Finally, Google Search signed exchanges – which pre-fetch content to speed page rendering on sites that support the feature – were extended to all web pages, where previously they were restricted to AMP pages.

This is evidence that Google is pulling back from its promotion of AMP, though it also said that “Google continues to support AMP”.

As for the complaint, it alleges that Google has an inherent conflict of interest. According to the filing: “Google was able to demand that it represent the buy-side (i.e., advertisers), where it extracted one fee, as well as the sell-side (i.e., publishers), where it extracted a second fee, and it was also able to force transactions to clear in its exchange, where it extracted a third, even larger, fee.”

The company also has more influence than any other on web standards, thanks to the dominant Chrome browser and Chromium browser engine, and on mobile technology, thanks to Android.

That Google would devise a standard from which it benefited is not surprising, but the allegation of deliberately delaying ads on other formats in order to promote it is disturbing and we have asked the company to comment.

Source: Google deliberately throttled ad load times to promote AMP, claims new court document • The Register

Monopolies eh!

UK government hands secret services cloud contract to AWS

The UK’s intelligence services are to store their secret files in the AWS cloud in a deal inked earlier this year, according to reports.

The GCHQ organisation (electrical/radio communications eavesdropping), MI5 (domestic UK intelligence matters), MI6 (external UK intel) and also the Ministry of Defence (MoD) will access their data in the cloud, albeit in UK-located AWS data centres.

The news was first reported in the Financial Times newspaper (paywall), which said GCHQ drove the deal that was signed earlier this year, and the data will be stored in a high-security way. It is claimed by unknown sources that AWS itself will not have access to the data.

Apparently the three agencies plus the MoD will be able to access information faster and share it more quickly when needed. This is presumably in contrast to each agency storing its own information on its own on-premises computer systems.

[…]

The US’s CIA signed a $600m AWS Cloud contract in 2013. That contract was upgraded in 2020 and involved AWS, Google, IBM, Microsoft and Oracle in a consortium.

Of course, for the US, AWS is a domestic firm. The French government is setting up its own sovereign public cloud called Bleu for sensitive government data. This “Cloud de Confiance” will be based on Microsoft’s Azure platform – and will include Microsoft 365 – but will apparently be “delivered via an independent environment” that has “immunity from all extraterritorial legislation and economic independence” from within an “isolated infrastructure that uses data centres located in France.”

In GCHQ’s reported view, no UK-based public cloud could provide the scale or capabilities needed for the security services’ data storage requirements.

[….]

Source: UK government hands secret services cloud contract to AWS • The Register

Hackers steal $130 million from Cream Finance; the company’s 3rd hack this year

Hackers have stolen an estimated $130 million worth of cryptocurrency assets from Cream Finance, a decentralized finance (DeFi) platform that allows users to loan and speculate on cryptocurrency price variations.

The incident, detected earlier today by blockchain security firms PeckShield and SlowMist, was confirmed by the Cream Finance team shortly afterwards.

The attackers are believed to have found a vulnerability in the platform’s lending system and exploited it using flash loans, draining all of Cream’s assets and tokens running on the Ethereum blockchain, according to blockchain security firm BlockSec, which posted an explanation of the security flaw on Twitter earlier today.

A breakdown of the stolen funds is available below, courtesy of the SlowMist team.

[Image: breakdown of stolen funds, via SlowMist]

Roughly six hours after the attack, Cream Finance said it fixed the bug exploited in the hack with the help of cryptocurrency platform Yearn.

Although the attacker’s initial wallet, used to exfiltrate a large chunk of the funds, has been identified, the funds have already been moved to new accounts, and there appears to be little chance the stolen crypto can be tracked down and returned to the platform.

Third time’s a charm

Today’s hack marks the third time Cream Finance has been hacked this year after the company lost $37 million in February and another $29 million in August.

All of the attacks were flash loan exploits, the most common vector through which DeFi platforms have been hacked over the past two years.
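Flash loans let anyone borrow large sums with no collateral, provided the loan is repaid within the same atomic transaction; exploits typically use that borrowed capital to distort a price the victim contract trusts. Below is a toy Python sketch of the general pattern, with invented numbers and names; it is not Cream Finance's actual contract code or the specific bug used in this hack.

```python
class Amm:
    """Constant-product AMM whose spot price doubles as a naive price oracle."""
    def __init__(self, token, usd):
        self.token, self.usd = token, usd

    def spot_price(self):
        return self.usd / self.token

    def buy_token(self, usd_in):
        # Constant-product swap (x * y = k), fee-free for simplicity.
        k = self.token * self.usd
        self.usd += usd_in
        out = self.token - k / self.usd
        self.token -= out
        return out

def attack():
    amm = Amm(token=1_000, usd=1_000_000)   # honest price: $1,000 per token
    fair_price = amm.spot_price()
    loan = 4_000_000                        # flash-loaned, zero collateral
    tokens = amm.buy_token(loan)            # huge buy pumps the spot price
    pumped_price = amm.spot_price()
    # Deposit the tokens as collateral valued at the pumped oracle price
    # and borrow 75% of that value against them:
    borrowed = tokens * pumped_price * 0.75
    profit = borrowed - loan                # repay the flash loan, keep the rest
    return fair_price, pumped_price, profit

fair, pumped, profit = attack()
```

If the whole sequence fits in one transaction, the attacker risks nothing: should any step fail, the transaction reverts and the loan never happened. The usual fix is to price collateral with a time-weighted or external oracle rather than a single pool's spot reserves.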

DeFi related hacks have accounted for 76% of all major hacks in 2021, and users have lost more than $474 million to attacks on DeFi platforms this year, CipherTrace said in a report in August.

DeFi hacks also made up 21% of all cryptocurrency hacks and stolen funds in 2020, after being almost nonexistent in 2019, CipherTrace said in a report last year.

The Cream heist also marks the second-largest cryptocurrency hack this year after DeFi platform Poly Network lost $600 million in August. However, the individual behind the Poly hack eventually returned all the stolen funds two weeks later on the promise that the company would not press charges.

Source: Hackers steal $130 million from Cream Finance; the company’s 3rd hack this year – The Record by Recorded Future

Ultraleap launches its 5th Gen hand tracking platform Gemini

Ultraleap’s fifth-generation hand tracking platform, known as Gemini, is now fully available on Windows. The most robust, flexible hand tracking ever, it’s already powering amazing experiences from Varjo, has been integrated into Qualcomm’s Snapdragon XR2 platform, and is bringing touchless technology to self-service solutions around the world.

The Gemini Windows release is the first step in making the world’s best hand tracking easier to access and more flexible for multiple platforms, camera systems and third-party hardware.

Ultraleap have rebuilt their tracking engine from the ground up to improve hand tracking across several aspects, including:

  1. Improved two-handed interaction
  2. Faster initialization and hand detection
  3. Improved robustness to challenging environmental conditions
  4. Better adaptation to hand anatomy

Ultraleap have also made significant changes to the tracking platform to extend hand tracking to different platforms and hardware. Varjo’s XR-3 and VR-3 headsets and Qualcomm’s XR2 chipset are two variations already announced, with more in the pipeline.

[…]

Meet Ultraleap Gemini – the best hand tracking ever. Fast initialization, interaction with two hands together, tracks diverse hand sizes, works in challenging environments.

Hand tracking will be to XR what touchscreens were to mobile

The Windows release is the first time the full Gemini platform has been made available to all, and the first full hand tracking release from the company in three years. Since a preview went out earlier in the year, Ultraleap have been further refining the software ahead of the full launch.

[…]

Source: Ultraleap launches its 5th Gen hand tracking platform – Gemini | Ultraleap

Giant, free index to world’s research papers released online

In a project that could unlock the world’s research papers for easier computerized analysis, an American technologist has released online a gigantic index of the words and short phrases contained in more than 100 million journal articles — including many paywalled papers.

The catalogue, which was released on 7 October and is free to use, holds tables of more than 355 billion words and sentence fragments listed next to the articles in which they appear. It is an effort to help scientists use software to glean insights from published work even if they have no legal access to the underlying papers, says its creator, Carl Malamud. He released the files under the auspices of Public Resource, a non-profit corporation in Sebastopol, California, that he founded.

[….]

Computer scientists already text mine papers to build databases of genes, drugs and chemicals found in the literature, and to explore papers’ content faster than a human could read. But they often note that publishers ultimately control the speed and scope of their work, and that scientists are restricted to mining only open-access papers, or those articles they (or their institutions) have subscriptions to. Some publishers have said that researchers looking to mine the text of paywalled papers need their authorization.

And although free search engines such as Google Scholar have — with publishers’ agreement — indexed the text of paywalled literature, they only allow users to search with certain types of text queries, and restrict automated searching. That doesn’t allow large-scale computerized analysis using more specialized searches, Malamud says.
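An index like this is essentially an inverted index: each word or short phrase maps to the identifiers of the articles containing it, so the underlying full text never has to be distributed. A minimal Python sketch of the idea follows; the toy data and function names are illustrative, as the article does not describe Malamud's actual data structures.

```python
from collections import defaultdict
import re

def build_index(articles, max_ngram=3):
    """Map every word and short phrase (up to max_ngram words) to the IDs
    of the articles it appears in, without storing the articles' full text."""
    index = defaultdict(set)
    for doc_id, text in articles.items():
        words = re.findall(r"[a-z0-9]+", text.lower())
        for n in range(1, max_ngram + 1):
            for i in range(len(words) - n + 1):
                index[" ".join(words[i:i + n])].add(doc_id)
    return index

articles = {
    "A1": "CRISPR gene editing in maize",
    "A2": "Gene expression atlas for maize roots",
}
idx = build_index(articles)
# idx["gene"] lists both articles; idx["gene editing"] only the first.
```

This also illustrates Carroll's legal point: the index records which facts (word and phrase occurrences) appear where, but no entry reproduces enough of any article to substitute for reading it.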

[…]

Michael Carroll, a legal researcher at the American University Washington College of Law in Washington DC, says that distributing the index should be legal worldwide because the files do not copy enough of an underlying article to infringe the publisher’s copyright — although laws vary by country. “Copyright does not protect facts and ideas, and these results would be treated as communication of facts derived from the analysis of the copyrighted articles,” he says.

The only legal question, Carroll adds, is whether Malamud’s obtaining and copying of the underlying papers was done without breaching publishers’ terms. Malamud says that he did have to obtain copies of the 107 million articles referenced in the index in order to create it; he declined to say how.

Source: Giant, free index to world’s research papers released online

It is sad indeed that much research – lots of it probably paid for by taxpayers, and all of it eventually subsidised by customers of the companies who paid for it – is impossible or very hard for scientists to look up, because of copyright. This is a clear impediment to the growth of wealth and knowledge, and it’s not hard to see why countries like China, which don’t let people sit on their copyrighted arses but make them innovate for a living, are doing much better at growth than the legally quagmired west.

Location Data Firm Got GPS Data From Apps Even When People Opted Out

Huq, an established data vendor that obtains granular location information from ordinary apps installed on people’s phones and then sells that data, has been receiving GPS coordinates even when people explicitly opted-out of such collection inside individual Android apps, researchers and Motherboard have found.

The news highlights a stark problem for smartphone users: that they can’t actually be sure if some apps are respecting their explicit preferences around data sharing. The data transfer also presents an issue for the location data companies themselves. Many claim to be collecting data with consent, and by extension, in line with privacy regulations. But Huq was seemingly not aware of the issue when contacted by Motherboard for comment, showing that location data firms harvesting and selling this data may not even know whether they are actually getting it with consent or not.

“This shows an urgent need for regulatory action,” Joel Reardon, assistant professor at the University of Calgary and the forensics lead and co-founder of AppCensus, a company that analyzes apps, and who first flagged some of the issues around Huq to Motherboard, said in an email. “I feel that there’s plenty wrong with the idea that—as long as you say it in your privacy policy—then it’s fine to do things like track millions of people’s every moment and sell it to private companies to do what they want with it. But how do we even start fixing problems like this when it’s going to happen regardless of whether you agree, regardless of any consent whatsoever.”

[…]

Huq does not publicly say which apps it has relationships with. Earlier this year Motherboard started to investigate Huq by compiling a list of apps that contained code related to the company. Some of the apps have been downloaded millions or tens of millions of times, including “SPEEDCHECK,” an internet speed testing app; “Simple weather & clock widget,” a basic weather app; and “Qibla Compass,” a Muslim prayer app.

Independently, Reardon and AppCensus also examined Huq and later shared some of their findings with Motherboard. Reardon said in an email that he downloaded one app called “Network Signal Info” and found that it still sent location and other data to Huq after he opted-out of the app sharing data with third parties.

[…]

Source: Location Data Firm Got GPS Data From Apps Even When People Opted Out

5-Day Brain Stimulation Treatment Highly Effective Against Depression, Stanford Researchers Find

Stanford researchers think they’ve devised an effective and quick-acting way to treat difficult cases of depression by improving on an already approved form of brain stimulation. In a new trial published this week, the researchers found that almost 80% of patients improved after going through treatment – a far higher rate than among those given a sham (placebo) treatment.

Brain stimulation has emerged as a promising avenue for depression, particularly depression that hasn’t responded to other treatments. The basic concept behind it is to use electrical impulses to balance out the erratic brain activity associated with neurological or psychiatric disorders. There are different forms of stimulation, which vary in intensity and how they interact with the body. Some require permanent implants in the brain, while others can be used noninvasively, like repetitive transcranial magnetic stimulation (rTMS). As the name suggests, rTMS relies on magnetic fields that are temporarily applied to the head.

[…]

The Stanford neuromodulation therapy (SNT) relies on higher-dose magnetic pulses delivered over a quicker, five-day schedule, meant to mimic about seven months of standard rTMS treatment. The treatment is also personalized to each patient, with MRI scans used beforehand to pick out the best possible locations in the brain to deliver these pulses.

[…]

Last year, Williams and his team published a small study of 21 patients who were given SNT, showing that 90% of people severely affected by their depression experienced remission—in other words, that they no longer met the criteria for an acute depressive episode. Moreover, people’s feelings of suicidal ideation went away as well. The study was open label, though, meaning that patients and doctors knew what treatment was being given. Confirming that any drug or treatment actually works requires more rigorous tests, such as a double-blinded and placebo-controlled experiment. And that’s what the team has done now, publishing the results of their new trial in the American Journal of Psychiatry.

[…]

This time, about 78% of patients given genuine SNT experienced remission, based on standard diagnostic tests, compared to about 13% of the sham group. There were no serious side effects, with the most common being a short-lasting headache. And when participants were asked to guess which treatment they took, neither group did better than chance, indicating that the blinding worked.

[…]

Source: 5-Day Brain Stimulation Treatment Highly Effective Against Depression, Stanford Researchers Find

Scientists discover new phase of water, known as “superionic ice,” inside planets

Scientists have discovered a new phase of water – in addition to liquid, solid and gas – known as “superionic ice.” The “strange black” ice, as scientists called it, normally forms at the core of planets like Neptune and Uranus.

In a study published in Nature Physics, a team of scientists co-led by Vitali Prakapenka, a University of Chicago research professor, detailed the extreme conditions necessary to produce this kind of ice. It had only been glimpsed once before, when scientists sent a massive shockwave through a droplet of water, creating superionic ice that only existed for an instant.

In this experiment, the research team took a different approach. They pressed water between two diamonds, the hardest material on Earth, to reproduce the intense pressure that exists at the core of planets. Then, they used the Advanced Photon Source, or high-brightness X-ray beams, to shoot a laser through the diamonds to heat the water, according to the study.

“Imagine a cube, a lattice with oxygen atoms at the corners connected by hydrogen. When it transforms into this new superionic phase, the lattice expands, allowing the hydrogen atoms to migrate around while the oxygen atoms remain steady in their positions,” Prakapenka said in a press release. “It’s kind of like a solid oxygen lattice sitting in an ocean of floating hydrogen atoms.”

Examining the results with X-rays, the team found the ice became less dense and appeared black because it interacts differently with light.

“It’s a new state of matter, so it basically acts as a new material, and it may be different from what we thought,” Prakapenka said.

What surprised the scientists the most was that superionic ice formed under much lower pressure than they had originally expected. They had thought it would not form until the water was compressed to over 50 gigapascals of pressure – about the pressure inside rocket fuel as it combusts for lift-off – but it took only 20 gigapascals.

[…]

Superionic ice doesn’t exist only inside far-away planets — it’s also inside Earth, and it plays a role in maintaining our planet’s magnetic fields. Earth’s intense magnetism protects the planet’s surface from dangerous radiation and cosmic rays that come from outer space.

[…]

Source: Scientists discover new phase of water, known as “superionic ice,” inside planets – CBS News

What Else Do the Leaked ‘Facebook Papers’ Show? Angry-face emojis had 5x the weight of a thumbs-up like… and more

The documents leaked to U.S. regulators by a Facebook whistleblower “reveal that the social media giant has privately and meticulously tracked real-world harms exacerbated by its platforms,” reports the Washington Post.

Yet it also reports that at the same time Facebook “ignored warnings from its employees about the risks of their design decisions and exposed vulnerable communities around the world to a cocktail of dangerous content.”

In addition, the whistleblower argued that due to Mark Zuckerberg’s “unique degree of control” over Facebook, he is ultimately personally responsible for what the Post describes as “a litany of societal harms caused by the company’s relentless pursuit of growth.” Zuckerberg testified last year before Congress that the company removes 94 percent of the hate speech it finds before a human reports it. But in internal documents, researchers estimated that the company was removing less than 5 percent of all hate speech on Facebook…

For all Facebook’s troubles in North America, its problems with hate speech and misinformation are dramatically worse in the developing world. Documents show that Facebook has meticulously studied its approach abroad, and is well aware that weaker moderation in non-English-speaking countries leaves the platform vulnerable to abuse by bad actors and authoritarian regimes. According to one 2020 summary, the vast majority of its efforts against misinformation — 84 percent — went toward the United States, the documents show, with just 16 percent going to the “Rest of World,” including India, France and Italy…

Facebook chooses maximum engagement over user safety. Zuckerberg has said the company does not design its products to persuade people to spend more time on them. But dozens of documents suggest the opposite. The company exhaustively studies potential policy changes for their effects on user engagement and other factors key to corporate profits.

Amid this push for user attention, Facebook abandoned or delayed initiatives to reduce misinformation and radicalization… Starting in 2017, Facebook’s algorithm gave emoji reactions like “angry” five times the weight as “likes,” boosting these posts in its users’ feeds. The theory was simple: Posts that prompted lots of reaction emoji tended to keep users more engaged, and keeping users engaged was the key to Facebook’s business. The company’s data scientists eventually confirmed that “angry” reaction, along with “wow” and “haha,” occurred more frequently on “toxic” content and misinformation. Last year, when Facebook finally set the weight on the angry reaction to zero, users began to get less misinformation, less “disturbing” content and less “graphic violence,” company data scientists found.
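The weighting change described above amounts to a linear engagement score. A toy Python sketch makes the effect concrete; the weight tables and field names here are hypothetical, and Facebook's real ranking involves thousands of signals beyond reaction counts.

```python
# Hypothetical weights illustrating the ranking change described above.
WEIGHTS_2017 = {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 5}
WEIGHTS_LATER = {**WEIGHTS_2017, "angry": 0}   # angry weight cut to zero

def engagement_score(reactions, weights):
    # Linear score: each reaction type contributes count * weight.
    return sum(weights[kind] * count for kind, count in reactions.items())

rage_bait = {"like": 100, "angry": 40}
engagement_score(rage_bait, WEIGHTS_2017)   # 300: angry reactions dominate
engagement_score(rage_bait, WEIGHTS_LATER)  # 100: the same post ranks far lower
```

With a 5x multiplier, 40 angry reactions outweigh 100 likes twice over, which is exactly why posts that provoke anger float to the top under the old weights and sink once the angry weight is zeroed.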
The Post also contacted a Facebook spokeswoman for a response. The spokeswoman denied that Zuckerberg “makes decisions that cause harm” and dismissed the findings as being “based on selected documents that are mischaracterized and devoid of any context…”

Responding to the spread of specific pieces of misinformation on Facebook, the spokeswoman went as far as to claim that at Facebook, “We have no commercial or moral incentive to do anything other than give the maximum number of people as much of a positive experience as possible.”

She added that the company is “constantly making difficult decisions.”

Source: What Else Do the Leaked ‘Facebook Papers’ Show? – Slashdot

‘A Mistake by YouTube Shows Its Power Over Media’ – and Kafka-esque arbitration rules

“Every hour, YouTube deletes nearly 2,000 channels,” reports the New York Times. “The deletions are meant to keep out spam, misinformation, financial scams, nudity, hate speech and other material that it says violates its policies.

“But the rules are opaque and sometimes arbitrarily enforced,” they write — and sometimes, YouTube does end up making mistakes. (Alternate URL here…) The gatekeeper role leads to criticism from multiple directions. Many on the right of the political spectrum in the United States and Europe claim that YouTube unfairly blocks them. Some civil society groups say YouTube should do more to stop the spread of illicit content and misinformation… Roughly 500 hours of video are uploaded to YouTube every minute globally in different languages. “It’s impossible to get our minds around what it means to try and govern that kind of volume of content,” said Evelyn Douek, senior research fellow at the Knight First Amendment Institute at Columbia University. “YouTube is a juggernaut, by some metrics as big or bigger than Facebook.”

In its email on Tuesday morning, YouTube said Novara Media [a left-leaning London news group] was guilty of “repeated violations” of YouTube’s community guidelines, without elaborating. Novara’s staff was left guessing what had caused the problem. YouTube typically has a three-strikes policy before deleting a channel. It had penalized Novara only once before… Novara’s last show released before the deletion was about sewage policy, which hardly seemed worthy of YouTube’s attention. One of the organization’s few previous interactions with YouTube was when the video service sent Novara a silver plaque for reaching 100,000 subscribers…

Staff members worried it had been a coordinated campaign by critics of their coverage to file complaints with YouTube, triggering its software to block their channel, a tactic sometimes used by right-wing groups to go after opponents…. An editor, Gary McQuiggin, filled out YouTube’s online appeal form. He then tried using YouTube’s online chat bot, speaking with a woman named “Rose,” who said, “I know this is important,” before the conversation crashed. Angry and frustrated, Novara posted a statement on Twitter and other social media services about the deletion. “We call on YouTube to immediately reinstate our account,” it said. The post drew attention in the British press and from members of Parliament.

Within a few hours, Novara’s channel had been restored. Later, YouTube said Novara had been mistakenly flagged as spam, without providing further detail.
“We work quickly to review all flagged content,” YouTube said in a statement, “but with millions of hours of video uploaded on YouTube every day, on occasion we make the wrong call.”

But Ed Procter, chief executive of the Independent Monitor for the Press, told the Times that it was at least the fifth time that a news outlet had material deleted by YouTube, Facebook or Twitter without warning.

Source: ‘A Mistake by YouTube Shows Its Power Over Media’ – Slashdot

So if you have friends in Parliament you can get YouTube to have a look at unbanning you, but if you only have a few hundred thousand followers you are fucked.

It’s a bit like Amazon, except more people depend on the Amazon marketplace for a living:

At Amazon, Some Brands Get More Protection From Fakes Than Others

Dirty dealing in the $175 billion Amazon Marketplace

Amazon’s Alexa Collects More of Your Data Than Any Other Smart Assistant

Our smart devices are listening. Whether it’s personally identifiable information, location data, voice recordings, or shopping habits, our smart assistants know far more than we realize.

[…]

All five services collect your name, phone number, device location, and IP address; the names and numbers of your contacts; your interaction history; and the apps you use. If you don’t like that information being stored, you probably shouldn’t use a voice assistant.

[…]


Keep in mind that no voice assistant provider is truly interested in protecting your privacy. For instance, Google Assistant and Cortana maintain a log of your location history and routes, Alexa and Bixby record your purchase history, and Siri tracks who is in your Apple Family.

[…]

If you’re looking to take control of your smart assistant, you can stop Alexa from sending your recordings to Amazon, turn off Google Assistant and Bixby, and manage Siri‘s data collection habits.

Source: Amazon’s Alexa Collects More of Your Data Than Any Other Smart Assistant

Intel open-sources AI-powered tool to spot bugs in code

Intel today open-sourced ControlFlag, a tool that uses machine learning to detect problems in computer code — ideally to reduce the time required to debug apps and software. In tests, the company’s machine programming research team says that ControlFlag has found hundreds of defects in proprietary, “production-quality” software, demonstrating its usefulness.

[…]

ControlFlag, which works with any programming language containing control structures (i.e., blocks of code that specify the flow of control in a program), aims to cut down on debugging work by leveraging unsupervised learning. With unsupervised learning, an algorithm is subjected to “unknown” data for which no previously defined categories or labels exist. The machine learning system — ControlFlag, in this case — must teach itself to classify the data, processing the unlabeled data to learn from its inherent structure.

ControlFlag continually learns from unlabeled source code, “evolving” to make itself better as new data is introduced. While it can’t yet automatically mitigate the programming defects it finds, the tool provides suggestions for potential corrections to developers, according to Gottschlich.
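The underlying idea, learning which control-structure patterns are common in a corpus and flagging rare variants as likely typos, can be sketched in a few lines of Python. This is a toy illustration of the frequency-based approach only; ControlFlag itself parses real source trees and is far more sophisticated.

```python
import re
from collections import Counter

# Tiny corpus of C-style conditions; the last one is a likely typo
# (assignment instead of comparison inside an if-condition).
corpus = [
    "if (x == 0)", "if (y == limit)", "if (ptr == NULL)",
    "if (count == max)", "if (n == 0)",
    "if (x = 0)",
]

def abstract(pattern):
    # Replace identifiers and literals with a placeholder, keeping the
    # operators and punctuation that define the control structure's shape.
    return re.sub(r"\b\w+\b", "V", pattern)

# Unsupervised step: count how often each abstracted shape occurs.
counts = Counter(abstract(p) for p in corpus)

def is_anomalous(pattern, threshold=0.2):
    # Flag shapes that occur in less than `threshold` of the corpus.
    freq = counts[abstract(pattern)] / sum(counts.values())
    return freq < threshold
```

Here `is_anomalous("if (a = b)")` is true because the assignment-in-condition shape is rare in the corpus, while `is_anomalous("if (a == b)")` is false; no labeled examples of "buggy" vs "correct" code were ever needed, which is the essence of the unsupervised framing described above.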

[…]

AI-powered coding tools like ControlFlag, as well as platforms like Tabnine, Ponicode, Snyk, and DeepCode, have the potential to reduce costly interactions between developers, such as Q&A sessions and repetitive code review feedback. IBM and OpenAI are among the many companies investigating the potential of machine learning in the software development space. But studies have shown that AI has a ways to go before it can replace many of the manual tasks that human programmers perform on a regular basis.

Source: Intel open-sources AI-powered tool to spot bugs in code | VentureBeat

Internet Service Providers Collect, Sell Horrifying Amount of Sensitive Data, Government Study Concludes

The new FTC report studied the privacy practices of six unnamed broadband ISPs and their advertising arms, and found that the companies routinely collect an ocean of consumer location, browsing, and behavioral data. They then share this data with dodgy middlemen via elaborate business arrangements that often aren’t adequately disclosed to broadband consumers.

“Even though several of the ISPs promise not to sell consumers personal data, they allow it to be used, transferred, and monetized by others and hide disclosures about such practices in fine print of their privacy policies,” the FTC report said.

The FTC also found that while many ISPs provide consumers tools allowing them to opt out of granular data collection, those tools are cumbersome to use—when they work at all. 

[…]

The agency’s report also found that while ISPs promise to only keep consumer data for as long as needed for “business purposes,” the definition of what constitutes a “business purpose” is extremely broad and varies among broadband providers and wireless carriers.

The report repeatedly cites Motherboard reporting showing how wireless companies have historically sold sensitive consumer location data to dubious third parties, often without user consent. This data has subsequently been abused by everyone from bounty hunters and stalkers to law enforcement and those posing as law enforcement.

The FTC was quick to note that because ISPs have access to the entirety of the data that flows across the internet and your home network, they often have access to even more data than what’s typically collected by large technology companies, ad networks, and app makers.


That includes the behavior of internet of things devices connected to your network, your daily movements, your online browsing history, clickstream data (not only which sites you visit but how much time you linger there), email and search data, race and ethnicity data, DNS records, your cable TV viewing habits, and more.

In some instances ISPs have even developed tracking systems that embed each packet a user sends over the internet with an individual identifier, allowing monitoring of user behavior in granular detail. Wireless carrier Verizon was fined $1.3 million in 2016 for implementing such a system without informing consumers or letting them opt out.

“Unlike traditional ad networks whose tracking consumers can block through browser or mobile device settings, consumers cannot use these tools to stop tracking by these ISPs, which use ‘supercookie’ technology to persistently track users,” the FTC report said.
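Conceptually, a "supercookie" is an identifier the carrier injects into traffic in transit, outside the browser's control. The Python sketch below is purely illustrative: the X-UIDH header name is modeled on Verizon's system mentioned above, but the token derivation and function are invented for this example.

```python
import hashlib

def inject_tracking_header(request_headers, subscriber_id):
    # Derive a stable per-subscriber token and stamp it onto the outgoing
    # HTTP request in transit -- the browser never sees or controls it.
    token = hashlib.sha256(subscriber_id.encode()).hexdigest()[:16]
    return {**request_headers, "X-UIDH": token}

r1 = inject_tracking_header({"Host": "example.com"}, "subscriber-42")
r2 = inject_tracking_header({"Host": "other.net"}, "subscriber-42")
# r1 and r2 carry the identical X-UIDH token: the same subscriber is
# linkable across every site, and clearing cookies does nothing about it.
```

Because the identifier is attached by the network rather than stored on the device, the usual defences (cookie blocking, private browsing) never touch it; only encrypting the connection end-to-end keeps the carrier from tagging the request.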

[…]

Source: Internet Service Providers Collect, Sell Horrifying Amount of Sensitive Data, Government Study Concludes

Researchers design antibodies that destroy old cells, slowing down aging

No one knows why some people age worse than others and develop diseases – such as Alzheimer’s, fibrosis, type 2 diabetes or some types of cancer – associated with the aging process. One explanation could be the efficiency of each organism’s response to the damage its cells sustain during its life, which eventually causes them to age. In this connection, researchers at the Universitat Oberta de Catalunya (UOC) and the University of Leicester (United Kingdom) have developed a new method to remove old cells from tissues, thus slowing down the aging process.

Specifically, they have designed an antibody that acts as a smart bomb, able to recognize specific proteins on the surface of these aged or senescent cells. It then attaches itself to them and releases a drug that removes them without affecting the rest, thus minimizing potential side effects.

[…]

“We now have, for the first time, an antibody-based drug that can be used to help slow down aging in humans,” noted Salvador Macip, the leader of this research and a doctor and researcher at the UOC and the University of Leicester.

“We based this work on existing cancer therapies that target specific proteins present on the surface of cancer cells, and then applied them to senescent cells,” explained the expert.

All of us have a mechanism known as “cellular senescence” that halts the division of damaged cells and removes them to stop them from reproducing. This mechanism helps slow down the progress of cancer, for example, as well as helping model tissue at the embryo development stage.

However, in spite of being a very beneficial biological mechanism, it contributes to the development of diseases when the organism reaches old age. This seems to be because the immune system is no longer able to efficiently remove these senescent cells, which gradually accumulate in tissues and detrimentally affect their functioning.

[…]

The drug designed by Macip and his team is a second-generation senolytic with high specificity and remote-controlled delivery. They started from the results of a previous study that looked at the “surfaceome,” the proteins on the cell’s surface, to identify those proteins that are only present in senescent cells. “They’re not universal: some are more present than others on each type of aged cell,” said Macip.

In this new work, the researchers used a monoclonal antibody trained to recognize senescent cells and attach to them. “Just like our antibodies recognize germs and protect us from them, we’ve designed these antibodies to recognize old cells. In addition, we’ve given them a toxic load to destroy them, as if they were a remote-controlled missile,” said the researcher, who is the head of the University of Leicester’s Mechanisms of Cancer and Aging Lab.

Treatment could start to be given as soon as the first symptoms of disease – such as Alzheimer’s, type 2 diabetes, Parkinson’s, arthritis, cataracts or some tumors – appear. In the long term, the researchers believe that it could even be used to achieve healthier aging in some circumstances.

Source: Researchers design antibodies that destroy old cells, slowing down aging