The flow of air over airfoils, or how planes fly

In this article we’ll investigate what makes airplanes fly by looking at the forces generated by the flow of air around the aircraft’s wings. More specifically, we’ll focus on the cross section of those wings to reveal the shape of an airfoil – you can see it presented in yellow below:

A wing with airflow and pressure shown, along with a selected angle of attack

We’ll find out how the shape and the orientation of the airfoil helps airplanes remain airborne. We’ll also learn about the behavior and properties of air and other flowing matter.

Source: Airfoil – Bartosz Ciechanowski

The article goes deep into how air flow works and is modelled, how velocity and pressure are visualised as vector fields, the shape of an airfoil, the boundary layer and the angle of attack. It requires a bit of scrolling before you get to the planes, but it’s mesmerising to play with the sliders.
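The article builds up to how lift depends on airspeed and the airfoil's orientation. As a rough back-of-envelope companion (not from the article itself), the classic lift equation ties those quantities together; the numbers below are illustrative values I've chosen, not figures from the source:

```python
import math

def lift_force(rho, v, area, cl):
    """Classic lift equation: L = 0.5 * rho * v^2 * S * C_L.

    rho  -- air density (kg/m^3)
    v    -- airspeed (m/s)
    area -- wing planform area (m^2)
    cl   -- lift coefficient, which depends on the airfoil's
            shape and its angle of attack
    """
    return 0.5 * rho * v ** 2 * area * cl

# Illustrative numbers for a small aircraft at sea level
rho = 1.225   # sea-level air density, kg/m^3
v = 60.0      # airspeed, m/s
area = 16.0   # wing area, m^2
cl = 0.5      # plausible lift coefficient at a moderate angle of attack

print(f"Lift: {lift_force(rho, v, area, cl):.0f} N")  # prints "Lift: 17640 N"
```

Note the `v ** 2` term: doubling airspeed quadruples lift, which is why the article's velocity slider has such a dramatic effect.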

Google researchers unveil ‘VLOGGER’, an AI that can animate a single still photo to make it talk

Described in a research paper titled “VLOGGER: Multimodal Diffusion for Embodied Avatar Synthesis,” the AI model can take a photo of a person and an audio clip as input, and then output a video that matches the audio, showing the person speaking the words and making corresponding facial expressions, head movements and hand gestures. The videos are not perfect, with some artifacts, but represent a significant leap in the ability to animate still images.

VLOGGER generates photorealistic videos of talking and gesturing avatars from a single image. (Credit: enriccorona.github.io)

A breakthrough in synthesizing talking heads

The researchers, led by Enric Corona at Google Research, leveraged a type of machine learning model called diffusion models to achieve the novel result. Diffusion models have recently shown remarkable performance at generating highly realistic images from text descriptions. By extending them into the video domain and training on a vast new dataset, the team was able to create an AI system that can bring photos to life in a highly convincing way.
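To give a feel for what "extending diffusion models into the video domain" means: diffusion models generate output by starting from pure noise and repeatedly removing predicted noise. The toy sketch below shows that reverse-denoising loop in miniature; it is a generic DDPM-style illustration, not VLOGGER's actual code, and `predict_noise` is a zero-returning placeholder where the real system would run a learned network conditioned on the input photo and audio:

```python
import math
import random

random.seed(0)

# A simple linear noise schedule over T steps
T = 50
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alphas = [1.0 - b for b in betas]
alphas_cum = []
c = 1.0
for a in alphas:
    c *= a
    alphas_cum.append(c)

def predict_noise(x, t):
    # Placeholder for the learned denoising network. In a system like
    # VLOGGER this is where the model would condition on the source
    # photo and the audio clip. Here we simply predict zero noise.
    return [0.0] * len(x)

def sample(n):
    """Reverse diffusion: start from pure noise, denoise step by step."""
    x = [random.gauss(0, 1) for _ in range(n)]
    for t in reversed(range(T)):
        eps = predict_noise(x, t)
        a, ac = alphas[t], alphas_cum[t]
        # Remove the predicted noise component (standard DDPM mean update)
        x = [(xi - (1 - a) / math.sqrt(1 - ac) * ei) / math.sqrt(a)
             for xi, ei in zip(x, eps)]
        if t > 0:  # re-inject a little noise on all but the final step
            x = [xi + math.sqrt(betas[t]) * random.gauss(0, 1) for xi in x]
    return x

frame = sample(16)  # a tiny stand-in for one video frame's pixels
```

Generating video rather than a single image means producing a coherent sequence of such frames, which is where the scale of the training data becomes critical.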

“In contrast to previous work, our method does not require training for each person, does not rely on face detection and cropping, generates the complete image (not just the face or the lips), and considers a broad spectrum of scenarios (e.g. visible torso or diverse subject identities) that are critical to correctly synthesize humans who communicate,” the authors wrote.

A key enabler was the curation of a huge new dataset called MENTOR containing over 800,000 diverse identities and 2,200 hours of video — an order of magnitude larger than what was previously available. This allowed VLOGGER to learn to generate videos of people with varied ethnicities, ages, clothing, poses and surroundings without bias.

The paper demonstrates VLOGGER’s ability to automatically dub videos into other languages by simply swapping out the audio track, to seamlessly edit and fill in missing frames in a video, and to create full videos of a person from a single photo.

[…] One could imagine actors being able to license detailed 3D models of themselves that could be used to generate new performances. The technology could also be used to create photorealistic avatars for virtual reality and gaming. And it might enable the creation of AI-powered virtual assistants and chatbots that are more engaging and expressive.[…] the technology also has the potential for misuse, for example in creating deepfakes — synthetic media in which a person in a video is replaced with someone else’s likeness. As these AI-generated videos become more realistic and easier to create, it could exacerbate the challenges around misinformation and digital fakery.

[…]

Source: Google researchers unveil ‘VLOGGER’, an AI that can bring still photos to life | VentureBeat

Apex Legends streamers surprised to find aimbot and other hacks added to their PCs in the middle of major competition

The Apex Legends Global Series is currently in regional finals mode, but the North America finals have been delayed after two players were hacked mid-match. First, Noyan “Genburten” Ozkose of DarkZero suddenly found himself able to see other players through walls, then Phillip “ImperialHal” Dosen of TSM was given an aimbot.

Genburten’s hack happened part of the way through the day’s third match. A Twitch clip of the moment shows the words “Apex hacking global series by Destroyer2009 & R4ndom” repeating over chat as he realizes he’s been given a cheat and takes his hands off the controls. “I can see everyone!” he says, before leaving the match.

ImperialHal was hacked in the game immediately after that. “I have aimbot right now!” he shouts in a clip of the moment, before declaring “I can’t shoot.” Though he continued attempting to play out the round, the match was later abandoned.

The volunteers at the Anti-Cheat Police Department have since issued a PSA announcing, “There is currently an RCE exploit being abused in [Apex Legends]” and that it could be delivered via the game itself, or its anti-cheat protection. “I would advise against playing any games protected by EAC or any EA titles”, they went on to say.

As for players of the tournament, they strongly recommended taking protective measures. “It is advisable that you change your Discord passwords and ensure that your emails are secure. also enable MFA for all your accounts if you have not done it yet”, they said, “perform a clean OS reinstall as soon as possible. Do not take any chances with your personal information, your PC may have been exposed to a rootkit or other malicious software that could cause further damage.”

Source: Apex Legends streamers surprised to find aimbot and other hacks added to their PCs in the middle of major competition | PC Gamer