SkyFi lets you order up fresh satellite imagery in real time with a click

Commercial Earth-observation companies collect an unprecedented volume of images and data every single day, but purchasing even a single satellite image can be cumbersome and time-intensive. SkyFi, a two-year-old startup, is looking to change that with an app and API that make ordering a satellite image as easy as a few clicks on a smartphone or computer.

SkyFi doesn’t build or operate satellites; instead, it partners with over a dozen companies to deliver various kinds of satellite images — including optical, synthetic aperture radar (SAR), and hyperspectral — directly to the customer via a web and mobile app. A SkyFi user can task a satellite to capture a specific image or choose from a library of previously captured images. Some of SkyFi’s partners include public companies like Satellogic, as well as newer startups like Umbra and Pixxel.
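For illustration only, a tasking request through an API like SkyFi's might look something like the sketch below. The endpoint, field names, and authentication scheme are hypothetical stand-ins, not SkyFi's documented API.

```python
# Hypothetical sketch of ordering/tasking a satellite image via a REST API.
# The URL, fields, and auth header below are illustrative assumptions,
# not SkyFi's actual interface.
import requests

payload = {
    "sensor": "optical",                 # could also be "SAR" or "hyperspectral"
    "resolution_m": 0.5,                 # desired ground resolution in meters
    "aoi": {                             # area of interest as a GeoJSON polygon
        "type": "Polygon",
        "coordinates": [[[-97.75, 30.26], [-97.70, 30.26],
                         [-97.70, 30.30], [-97.75, 30.30],
                         [-97.75, 30.26]]],
    },
    "delivery": "archive_or_tasking",    # search the archive first, else task a pass
}

response = requests.post(
    "https://api.example.com/v1/orders",            # hypothetical endpoint
    json=payload,
    headers={"Authorization": "Bearer <API_KEY>"},  # placeholder credential
    timeout=30,
)
response.raise_for_status()
print(response.json())
```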

[…]

SkyFi’s mission has resonated with investors. The company closed a $7 million seed round led by Balerion Space Ventures, with contributions from existing investors J2 Ventures and the Uber alumni VC firm Moving Capital. Bill Perkins also participated. SkyFi has now raised over $17 million to date.

The startup is targeting three types of customers: individual consumers; large enterprise customers from verticals spanning agriculture, mining, finance, insurance and more; and U.S. government and defense customers. SkyFi’s offering appeals even to these latter customers, who may already have plenty of experience working with satellite companies and could afford the high costs of the traditional marketplace.

[…]

Looking ahead, the Austin, Texas–based startup plans to integrate insight and analytics capabilities into the SkyFi app, a feature that will be especially useful for customers interested in hyperspectral or SAR images. The company also plans further feature updates as it brings more providers, from satellites to stratospheric balloons to drones, onto the platform.

“I think of SkyFi as the Netflix of the geospatial world, where I think of Umbra, Satellogic and Maxar as the movie studios of the world,” Fischer said. “I just want them to produce great content and put it on the platform.”

Source: SkyFi lets you order up fresh satellite imagery in real time with a click | TechCrunch

Samsung Display demos long rollable and a health-sensing OLED

The Rollable Flex is an interesting new flexible screen from Samsung Display that can be unrolled from just 49mm to 254.4mm, more than five times its rolled-up length. The display is being shown off at the annual Display Week trade show in Los Angeles alongside another Samsung panel that the company says offers fingerprint and blood pressure sensing built into the OLED itself, without the need for a separate module.

Aside from its maximum and minimum lengths, details on the Rollable Flex in Samsung Display’s press release are relatively slim, and it’s unclear what its overall size or resolution might be. The company says the panel unrolls on an “O-shaped axis like a scroll,” allowing it to “turn a difficult-to-carry large-sized display into a portable form factor.”

[…]

Source: Samsung Display demos long rollable and a health-sensing OLED – The Verge

Samsung’s new Sensor OLED display can read fingerprints anywhere on the screen

Samsung has unveiled a new display technology that could lead to new biometric and health-related capabilities in future phones and tablets. At this year’s SID Display Week in LA, the tech giant debuted what it calls the Sensor OLED Display, which can read your fingerprints regardless of what part of the screen you touch. While most smartphones now have fingerprint readers on the screen, their sensors are attached under the panel as a separate module that only works within a small designated area. For the Sensor OLED, Samsung said it embedded the fingerprint sensor into the panel itself.

Since the display technology can read fingerprints anywhere on the screen, it can also be used to monitor your heart rate and blood pressure. The company said it can even return more accurate readings than available wearables can. To measure your blood pressure, you’d need to place two fingers on the screen. OLED light is apparently reflected differently depending on your blood vessels’ contraction and relaxation. After that information is returned to the panel, the sensor converts it into health metrics.
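Samsung hasn’t published the algorithm behind these readings, but conceptually this is photoplethysmography: pulsing blood changes how much of the panel’s light is reflected back to the sensor. A rough, illustrative sketch of recovering a heart rate from such a reflectance signal (simulated data, not Samsung’s method):

```python
# Illustrative only: estimate heart rate from a reflected-light (PPG-like)
# signal using simple peak detection. The signal here is simulated.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)              # 10 seconds of samples
# Simulated reflectance: a ~1.2 Hz pulse (72 bpm) plus measurement noise
signal = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)

# Each peak corresponds to a heartbeat; enforce a minimum spacing of 0.4 s
peaks, _ = find_peaks(signal, distance=int(0.4 * fs))
bpm = 60 * (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
print(f"Estimated heart rate: {bpm:.0f} bpm")
```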

Samsung explained in its press release: “To accurately measure a person’s blood pressure, it is necessary to measure the blood pressure of both arms. The Sensor OLED display can simultaneously sense the fingers of both hands, providing more accurate health information than existing wearable devices.” The company has yet to announce whether it plans to use this new technology in future devices, but the exhibit at SID Display Week already shows it reading blood pressure and heart rate.

[…]

Source: Samsung’s new Sensor OLED display can read fingerprints anywhere on the screen

Meta’s open-source speech AI recognizes over 4,000 spoken languages | Engadget

Meta has created an AI language model that (in a refreshing change of pace) isn’t a ChatGPT clone. The company’s Massively Multilingual Speech (MMS) project can recognize over 4,000 spoken languages and produce speech (text-to-speech) in over 1,100. As with most of its other publicly announced AI projects, Meta is open-sourcing MMS today to help preserve language diversity and encourage researchers to build on its foundation. “Today, we are publicly sharing our models and code so that others in the research community can build upon our work,” the company wrote.

[…]

Speech recognition and text-to-speech models typically require training on thousands of hours of audio with accompanying transcription labels. (Labels are crucial to machine learning, allowing the algorithms to correctly categorize and “understand” the data.) But for languages that aren’t widely used in industrialized nations — many of which are in danger of disappearing in the coming decades — “this data simply does not exist,” as Meta puts it.

Meta used an unconventional approach to collecting audio data: tapping into audio recordings of translated religious texts. “We turned to religious texts, such as the Bible, that have been translated in many different languages and whose translations have been widely studied for text-based language translation research,” the company said. “These translations have publicly available audio recordings of people reading these texts in different languages.” Incorporating the unlabeled recordings of the Bible and similar texts, Meta’s researchers increased the model’s available languages to over 4,000.

[…]

“While the content of the audio recordings is religious, our analysis shows that this does not bias the model to produce more religious language,” Meta wrote. “We believe this is because we use a connectionist temporal classification (CTC) approach, which is far more constrained compared with large language models (LLMs) or sequence-to-sequence models for speech recognition.” Furthermore, although most of the religious recordings were read by male speakers, that didn’t introduce a male bias either: the model performs equally well on female and male voices.
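For intuition on why CTC is “more constrained”: a CTC model emits one label (or a blank) per audio frame, and the final transcript is simply those frame labels with consecutive repeats merged and blanks removed, so the output is tied frame-by-frame to the audio rather than freely generated. A minimal sketch of that collapse step (illustrative, not Meta’s code):

```python
def ctc_collapse(frame_labels, blank=0):
    """Collapse per-frame CTC labels into an output sequence:
    merge consecutive repeats, then drop blank tokens."""
    out, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# With arbitrary IDs 3='c', 1='a', 20='t', frames [c, c, blank, a, a, t] -> "cat"
print(ctc_collapse([3, 3, 0, 1, 1, 20]))  # -> [3, 1, 20]
```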

[…]

After training an alignment model to make the data more usable, Meta used wav2vec 2.0, the company’s “self-supervised speech representation learning” model, which can train on unlabeled data. Combining unconventional data sources and a self-supervised speech model led to impressive outcomes. “Our results show that the Massively Multilingual Speech models perform well compared with existing models and cover 10 times as many languages.” Specifically, Meta compared MMS to OpenAI’s Whisper, and it exceeded expectations. “We found that models trained on the Massively Multilingual Speech data achieve half the word error rate, but Massively Multilingual Speech covers 11 times more languages.”
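The released MMS recognition checkpoints can be run through the Hugging Face transformers library using the same Wav2Vec2ForCTC interface as wav2vec 2.0, with a small per-language adapter selected at load time. A minimal sketch, assuming the facebook/mms-1b-all checkpoint and a transformers version with MMS adapter support (helper names may differ between versions):

```python
# Sketch: CTC transcription with an MMS checkpoint via Hugging Face transformers.
# Assumes the facebook/mms-1b-all checkpoint and MMS language-adapter support;
# exact helper names may vary by library version.
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Select a target language by ISO 639-3 code, e.g. French ("fra");
# MMS swaps in a small per-language adapter rather than a whole new model.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

# Placeholder input: one second of 16 kHz audio (replace with real speech).
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(predicted_ids))
```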

Meta cautions that its new models aren’t perfect. “For example, there is some risk that the speech-to-text model may mistranscribe select words or phrases,” the company wrote. “Depending on the output, this could result in offensive and/or inaccurate language. We continue to believe that collaboration across the AI community is critical to the responsible development of AI technologies.”

[…]

Source: Meta’s open-source speech AI recognizes over 4,000 spoken languages | Engadget

Establishing a wildflower meadow bolstered biodiversity and reduced greenhouse gas emissions, study finds

A new study examining the effects of planting a wildflower meadow in the historic grounds of King’s College, Cambridge, has demonstrated its benefits to local biodiversity and climate change mitigation.
The study, led by King’s Research Fellow Dr. Cicely Marshall, found that establishing the meadow had made a considerable impact on the wildlife value of the land, while reducing the greenhouse gas emissions associated with its upkeep.

Marshall and her colleagues, among them three King’s undergraduate students, conducted biodiversity surveys over three years to compare the species richness, abundance and composition supported by the meadow and the adjacent lawn.

They found that, in spite of its small size, the wildflower meadow supported three times as many species of plants, spiders and bugs, including 14 species with conservation designations.

Terrestrial invertebrate biomass was found to be 25 times higher in the meadow, with bat activity over the meadow also being three times higher than over the remaining lawn.

The study is published May 23 in the journal Ecological Solutions and Evidence.

As well as looking at the benefits to biodiversity, Marshall and her colleagues modeled the impact of the meadow on climate change mitigation efforts by assessing the changes in reflectivity, soil carbon sequestration, and emissions associated with its maintenance.

The reduced maintenance and fertilization associated with the meadow were found to save an estimated 1.36 tons of CO2-e per hectare per year compared with the grass lawn.

Surface reflectance increased by more than 25%, helping to reduce the urban heat island effect, and the meadow is also more likely than the lawn to tolerate an intensified drought regime.

[…]

Source: Establishing a wildflower meadow bolstered biodiversity and reduced greenhouse gas emissions, study finds

Brain waves can tell us how much pain someone is in

Brain signals can be used to detect how much pain a person is experiencing, which could overhaul how we treat certain chronic pain conditions, a new study has suggested.

The research, published in Nature Neuroscience today, is the first time a human’s chronic-pain-related brain signals have been recorded. It could aid the development of personalized therapies for the most severe forms of pain.

[…]

Researchers from the University of California, San Francisco, implanted electrodes in the brains of four people with chronic pain. The patients then answered surveys about the severity of their pain multiple times a day over a period of three to six months. After they finished filling out each survey, they sat quietly for 30 seconds so the electrodes could record their brain activity. This helped the researchers identify biomarkers of chronic pain in the brain signal patterns, which were as unique to the individual as a fingerprint.

Next, the researchers used machine learning to model the results of the surveys. They found they could successfully predict how the patients would score the severity of their pain by examining their brain activity, says Prasad Shirvalkar, one of the study’s authors.
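The paper’s exact pipeline isn’t reproduced here, but conceptually this is a supervised prediction problem: features computed from each 30-second recording are used to predict the pain score reported in the accompanying survey. A purely illustrative sketch with synthetic stand-in data (the feature choice and model below are assumptions, not the study’s code):

```python
# Illustrative sketch: predict self-reported pain scores from band-power
# features of intracranial recordings. Data here is synthetic stand-in only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_surveys = 200          # pain surveys collected over months
n_features = 8           # e.g. power in several frequency bands per brain region

X = rng.normal(size=(n_surveys, n_features))       # stand-in band-power features
pain_scores = rng.uniform(0, 10, size=n_surveys)   # stand-in 0-10 pain ratings

# Cross-validated fit: how well do the features predict the reported scores?
model = Ridge(alpha=1.0)
r2_scores = cross_val_score(model, X, pain_scores, cv=5, scoring="r2")
print("cross-validated R^2:", r2_scores.mean())
```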

“The hope is that now that we know where these signals live, and now that we know what type of signals to look for, we could actually try to track them noninvasively,” he says. “As we recruit more patients, or better characterize how these signals vary between people, maybe we can use it for diagnosis.”

The researchers also found they were able to distinguish a patient’s chronic pain from acute pain deliberately inflicted using a thermal probe. The chronic-pain signals came from a different part of the brain, suggesting that it’s not just a prolonged version of acute pain, but something else entirely.

Source: Brain waves can tell us how much pain someone is in | MIT Technology Review