Leap Motion brings out TouchFree software – Add Touchless Gesture Control

Touchless, hygienic interaction

TouchFree is a software application that runs on an interactive kiosk or advertising totem. It detects a user’s hand in mid-air and converts it to an on-screen cursor.


Easy to integrate, deploy, and use

• Runs invisibly on top of existing user interfaces

• Add touchless interaction without writing a single line of code

• Familiar touchscreen-style interactions


How users interact

• A user’s hand is detected and shown as a cursor on the screen

• Users can select items without touching the screen using a simple “air push” motion, similar to tapping a screen but in mid-air.

• To drag or scroll, “air push”, then move
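To make the interaction model concrete, here is a minimal sketch of how an “air push” could be turned into press, drag and release events. It is not Ultraleap’s actual code: the tracker interface, thresholds and event names are made up, and all it assumes is a tracking layer that reports the hand’s on-screen position and a normalized forward-push depth.

# Illustrative only: turns hand-tracking frames into cursor and click events.
# `tracker.frames()`, the thresholds, and the event names are hypothetical.

PUSH_THRESHOLD = 0.6      # forward-push depth at which an "air push" registers
RELEASE_THRESHOLD = 0.4   # hysteresis so the press does not flicker on and off

class AirPushCursor:
    def __init__(self):
        self.pressed = False

    def update(self, x, y, push_depth):
        """Return an (event, x, y) tuple for one tracking frame.

        push_depth runs from 0.0 (hand at rest) to 1.0 (fully pushed forward).
        """
        if not self.pressed and push_depth > PUSH_THRESHOLD:
            self.pressed = True
            return ("press", x, y)     # like tapping the screen, but in mid-air
        if self.pressed and push_depth < RELEASE_THRESHOLD:
            self.pressed = False
            return ("release", x, y)   # like lifting the finger
        if self.pressed:
            return ("drag", x, y)      # "air push", then move = drag or scroll
        return ("move", x, y)          # hand shown as an on-screen cursor

# Example use with a hypothetical tracker:
# cursor = AirPushCursor()
# for x, y, depth in tracker.frames():
#     handle(cursor.update(x, y, depth))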



Source: TouchFree | Add Touchless Gesture Control — Leap Motion Developer

TCL’s new paper-like display can also play videos

TCL introduced NXTPAPER today — a new type of display that’s meant to offer better eye protection by reducing flicker, blue light and light output. The company said the effect is similar to E Ink, calling it a “combination of screen and paper.” TCL also said it has received eye protection certifications from the German Rhine laboratory, and has 11 different patents for eye protection.

Don’t expect to see NXTPAPER appear on a smartphone, though. TCL said it’s meant for larger devices like tablets or e-readers. The new screen tech will support Full HD definition and allow for smooth video playback on a paper-like experience. Compared to E Ink, TCL said its version will offer 25 percent higher contrast. It uses a “highly reflective screen” to “reuse natural light,” doing away with backlighting in the process. TCL said NXTPAPER will be 36 percent thinner than typical LCD while offering higher contrast. Because it doesn’t require its own lights, the company said the new screen tech is also 65 percent more power efficient. This way, devices won’t need large unwieldy batteries for prolonged use.

Source: TCL’s new paper-like display can also play videos | Engadget

TCL Announces E Ink Color Display That Can Handle Video

Known for its tablets, TVs, and phones, TCL has this week announced a new technology, NXTPAPER, that could totally change how you think about e ink. E ink displays are known for being great to stare at for hours and perfect for reading books (and sometimes even comics), but the latest color displays from E Ink have low resolution and slow refresh rates, making them unusable for video. TCL claims its new NXTPAPER tech could be a solution.

TCL’s press release is a little confusing, as it appears to compare NXTPAPER both to E Ink’s displays and to traditional LCD displays that you find in most tablets and phones today. But by all accounts, the technology used in NXTPAPER sounds like e ink technology. The press release claims it will be 36% thinner than LCD displays and 65% more power-efficient—which lines up with the gains you get from e ink.

Last week, E Ink told the blog Good Ereader that it had plans to improve its own color E Ink technology. While we adore the first color E Ink devices, they’ve not been without their flaws, including a paltry 100-PPI resolution and slower refresh rates. E Ink promised to at least double the resolution to 200 PPI by 2021, with a goal of hitting 300 PPI—the resolution of high-end LCD and monochrome E Ink displays—at a later date.

We don’t know the exact planned resolution for TCL’s competing NXTPAPER technology, but the company claims it will be full HD and offer 25% higher contrast than traditional e ink devices.

TCL also says it will offer a “paper-like visual experience in full color with no flicker and no harmful blue light” and that it will rely on natural light—which, again, sounds like e ink.

Source: TCL Announces E Ink Color Display That Can Handle Video

Fraunhofer releases H.266/VVC which encodes video 50% smaller

Fraunhofer HHI (together with partners from industry including Apple, Ericsson, Intel, Huawei, Microsoft, Qualcomm, and Sony) is celebrating the release and official adoption of the new global video coding standard H.266/Versatile Video Coding (VVC). This new standard offers improved compression, which reduces the required bit rate by around 50% relative to the previous standard H.265/High Efficiency Video Coding (HEVC) without compromising visual quality. In other words, H.266/VVC offers faster video transmission for equal perceptual quality. Overall, H.266/VVC provides efficient transmission and storage of all video resolutions from SD to HD up to 4K and 8K, while supporting high dynamic range video and omnidirectional 360° video.

[…]

Through a reduction of data requirements, H.266/VVC makes video transmission in mobile networks (where data capacity is limited) more efficient. For instance, the previous standard H.265/HEVC requires ca. 10 gigabytes of data to transmit a 90-min UHD video. With this new technology, only 5 gigabytes of data are required to achieve the same quality. Because H.266/VVC was developed with ultra-high-resolution video content in mind, the new standard is particularly beneficial when streaming 4K or 8K videos on a flat screen TV. Furthermore, H.266/VVC is ideal for all types of moving images: from high-resolution 360° video panoramas to screen sharing contents.
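Those headline figures translate directly into average bit rates; a quick back-of-the-envelope calculation using the 90-minute UHD example above:

# Back-of-the-envelope bit rates for the 90-minute UHD example above.
GB = 10**9  # the press release's "gigabytes"; using GiB barely changes the picture

def avg_bitrate_mbps(size_bytes, duration_s):
    """Average bit rate in megabits per second."""
    return size_bytes * 8 / duration_s / 1e6

duration = 90 * 60                              # 90 minutes in seconds
hevc = avg_bitrate_mbps(10 * GB, duration)      # ~14.8 Mbit/s with H.265/HEVC
vvc = avg_bitrate_mbps(5 * GB, duration)        # ~7.4 Mbit/s with H.266/VVC
print(f"HEVC ~{hevc:.1f} Mbit/s, VVC ~{vvc:.1f} Mbit/s ({vvc / hevc:.0%} of the bit rate)")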

Source: Fraunhofer Heinrich Hertz Institute HHI

Tanvas Haptic Touch Screen

With touch screens getting more and more prevalent, we’re seeing more experimentation with haptics. Being able to feel something other than just the smooth glass surface can be incredibly advantageous. Have you been in a car with a touch screen radio system? If so, you’ll know the frustration.

Tanvas is a system that adds haptics by changing the amount of adhesion your fingertip experiences on the screen. Basically, they’re increasing the friction in a controlled manner. The result is a distinct difference between various areas of the screen. To be clear, you’re not feeling ridges, edges, or other 3-dimensional features, but you can definitely feel where something exists and where something does not.

The touch screen itself isn’t really a consumer product. This is a dev kit, so you could incorporate their tech into your projects. Admittedly, this is only appealing to a very narrow subset of our readership (those developing a product that uses a touch screen) but I felt the tech was very interesting and wanted to share. Personally, I’d love to see this technology employed in popular consumer devices such as iPads!

Source: Quick Look: Tanvas Haptic Touch Screen

Google’s Autoflip Can Intelligently Crop Videos

Google has released an open-source tool, Autoflip, that could make bad cropping a thing of the past by intelligently reframing video to correctly fit alternate aspect ratios.

In a blog post, Google’s AI team wrote that footage shot for television and desktop computers normally comes in a 16:9 or 4:3 format, but with mobile devices now outpacing TV in terms of video consumption, the footage is often displayed in a way that looks odd to the end-user. Fixing this problem typically requires “video curators to manually identify salient contents on each frame, track their transitions from frame-to-frame, and adjust crop regions accordingly throughout the video,” soaking up time and effort that could be better spent on other work.

Autoflip aims to fix that with a framework that applies video stabilizer-esque techniques to keep the camera focused on what’s important in the footage. Using “ML-enabled object detection and tracking technologies to intelligently understand video content” built on the MediaPipe framework, Google’s team wrote, it’s able to adjust the frame of a video on the fly.


What’s more, Autoflip automatically adjusts between scenes by identifying “changes in the composition that signify scene changes in order to isolate scenes for processing,” according to the company. Finally, it analyzes each scene to determine whether it should use a static frame or tracking mode.
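The blog post stays at a high level, but the core idea can be sketched in a few lines. The snippet below is purely illustrative and is not AutoFlip’s actual MediaPipe graph: the box format, the threshold and the smoothing factor are invented. Given per-frame salient boxes for one scene, it decides between a single static crop and a smoothed tracking crop for a narrower target aspect ratio.

# Illustrative sketch of the reframing idea (not AutoFlip's real MediaPipe graph).
# Input: per-frame salient boxes (x, y, w, h) for one scene in a 16:9 source.
# Output: one crop window per frame for a narrower target aspect ratio.

def crop_width(src_h, target_aspect):
    """Width of a crop that keeps full height at the target aspect (e.g. 9/16)."""
    return int(src_h * target_aspect)

def clamp_crop(center_x, w, src_w):
    """Clamp the crop so it stays inside the source frame; full height is kept."""
    left = int(min(max(center_x - w / 2, 0), src_w - w))
    return (left, 0, w)

def plan_scene(salient_boxes, src_w, src_h, target_aspect=9 / 16):
    w = crop_width(src_h, target_aspect)
    centers = [x + bw / 2 for (x, y, bw, bh) in salient_boxes]
    if max(centers) - min(centers) <= 0.5 * w:
        # Subject barely moves: one static crop for the whole scene.
        cx = sum(centers) / len(centers)
        return [clamp_crop(cx, w, src_w)] * len(centers)
    # Subject moves a lot: tracking mode, with smoothing to avoid jitter.
    crops, cx = [], centers[0]
    for c in centers:
        cx += 0.2 * (c - cx)   # simple exponential smoothing of the crop center
        crops.append(clamp_crop(cx, w, src_w))
    return crops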


This is pretty neat and offers obvious advantages over static cropping of videos, though it’s probably better suited to things like news footage and Snapchat videos than movies and TV shows (where being able to view an entire shot is more important).

For a more technical explanation of how all this works, the Google AI team explained the various technologies involved in its blog post. The project’s source code is also available to view on GitHub, along with instructions on how to take it for a spin.

Source: Google’s Autoflip Can Intelligently Crop Videos

Delta and Misapplied Sciences introduce parallel reality – a display that shows different content to different people at the same time without augmentation

In a ritual I’ve undertaken at least a thousand times, I lift my head to consult an airport display and determine which gate my plane will depart from. Normally, that involves skimming through a sprawling list of flights to places I’m not going. This time, however, all I see is information meant just for me:

Hello Harry
Flight DL42 to SEA boards in 33 min
Gate C11, 16 min walk
Proceed to Checkpoint 2

Stranger still, a leather-jacketed guy standing next to me is looking at the same display at the same time—and all he sees is his own travel information:

Hello Albert
Flight DL11 to ATL boards in 47 min
Gate C26, 25 min walk
Proceed to Checkpoint 4

Okay, confession time: I’m not at an airport. Instead, I’m visiting the office of Misapplied Sciences, a Redmond, Washington, startup located in a dinky strip mall whose other tenants include a teppanyaki joint and a children’s hair salon. Albert is not another traveler but rather the company’s cofounder and CEO, Albert Ng. We’ve been play-acting our way through a demo of the company’s display, which can show different things to different people at one time—no special glasses, smartphone-camera trickery, or other intermediary technology required. The company calls it parallel reality.

The simulated airport terminal is only one of the scenarios that Ng and his cofounder Dave Thompson show off for me in their headquarters. They also set up a mock store with a Pikachu doll, a Katy Perry CD, a James Bond DVD, and other goods, all in front of one screen. When I glance up at it, I see video related to whichever item I’m standing near. In a makeshift movie theater, I watch The Sound of Music with closed captions in English on a display above the movie screen, while Ng sits one seat over and sees Chinese captions on the same display. And I flick a wand to control colored lights on Seattle’s Space Needle (or for the sake of the demo, a large poster of it).

At one point, just to definitively prove that their screen can show multiple images at once, Ng and Thompson push a grid of mirrors up in front of it. Even though they’re all reflecting the same screen, each shows an animated sequence based on the flag or map of a different country.
[…]
The potential applications for the technology—from outdoor advertising to traffic signs to theme-park entertainment—are many. But if all goes according to plan, the first consumers who will see it in action will be travelers at the Detroit Metropolitan Airport. Starting in the middle of this year, Delta Air Lines plans to offer parallel-reality signage, located just past TSA, that can simultaneously show almost 100 customers unique information on their flights, once they’ve scanned their boarding passes. Available in English, Spanish, Japanese, Korean, and other languages, it will be a slicked-up, real-world deployment of the demo I got in Redmond.
[…]

At a January 2014 hackathon, a researcher named Paul Dietz came up with an idea to synchronize crowds in stadiums via a smartphone app that gave individual spectators cues to stand up, sit down, or hold up a card. The idea was to “use people as pixels,” he says, by turning the entire audience into a giant, human-powered animated display. It worked. “But the participants complained that they were so busy looking at their phones, they couldn’t enjoy the effect,” Dietz remembers.

That led him to wonder if there was a more elegant way to signal individuals in a crowd, such as beaming different colors to different people. As part of this investigation, he set up a pocket projector in an atrium and projected stripes of red and green. “The projector was very dim,” he says. “But when I looked into it from across the atrium, it was this beautiful, bright, saturated green light. Then I moved over a few inches into a red stripe, and then it looked like an intense red light.”

Based on this discovery, Dietz concluded that it might be possible to create displays that precisely aimed differing images at people depending on their position. Later in 2014, that epiphany gave birth to Misapplied Sciences, which he cofounded with Ng—who’d been his Microsoft intern while studying high-performance computing at Stanford—and Thompson, whom Dietz had met when both were creating theme-park experiences at Walt Disney Imagineering.

[…]

the basic principle—directing different colors in different directions—remains the same. With garden-variety screens, the whole idea is to create a consistent picture, and the wider the viewing angle, the better. By contrast, with Misapplied’s displays, “at one time, a single pixel can emit green light towards you,” says Ng. “Whereas simultaneously that same pixel can emit red light to the person next to you.”

The parallel-reality effect is all in the pixels. [Image: courtesy of Misapplied Sciences]

In one version of the tech, it can control the display in 18,000 directions; in another, meant for large-scale outdoor signage, it can control it in a million. The company has engineered display modules that can be arranged, Lego-like, in different configurations that allow for signage of varying sizes and shapes. A Windows PC performs the heavy computational lifting, and there’s software that lets a user assign different images to different viewing positions by pointing and clicking. As displays reach the market, Ng says that the price will “rival that of advanced LED video walls.” Not cheap, maybe, but also not impossibly stratospheric.
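Conceptually, the authoring step described above reduces to a lookup from viewing direction to color, per pixel. The toy model below is purely illustrative and is not Misapplied Sciences’ software; the one-dimensional azimuth binning is a simplification invented for the example. It just shows how the same pixel can be assigned different colors for viewers in different positions.

# Toy model of a multi-view pixel: the same physical pixel emits a different
# color toward each of a discrete set of directions (18,000 in one version).
# The 1D azimuth binning is a simplification made up for this example.
import math

class MultiViewPixel:
    def __init__(self, num_directions=18000):
        self.colors = [(0, 0, 0)] * num_directions   # one RGB value per direction

    def direction_index(self, viewer_x, viewer_y, pixel_x, pixel_y):
        """Quantize the viewer's bearing from this pixel into a direction bin."""
        angle = math.atan2(viewer_y - pixel_y, viewer_x - pixel_x)   # radians
        frac = (angle + math.pi) / (2 * math.pi)                     # 0..1
        return min(int(frac * len(self.colors)), len(self.colors) - 1)

    def set_color_for_viewer(self, viewer_pos, pixel_pos, rgb):
        self.colors[self.direction_index(*viewer_pos, *pixel_pos)] = rgb

# Two viewers in different positions see different colors from the same pixel:
px = MultiViewPixel()
px.set_color_for_viewer((2.0, 5.0), (0.0, 0.0), (0, 255, 0))    # green toward viewer A
px.set_color_for_viewer((-3.0, 5.0), (0.0, 0.0), (255, 0, 0))   # red toward viewer B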

For all its science-fiction feel, parallel reality does have its gotchas, at least in its current incarnation. In the demos I saw, the pixels were blocky, with a noticeable amount of space around them—plus black bezels around the modules that make up a sign—giving the displays a look reminiscent of a sporting-arena electronic sign from a few generations back. They’re also capable of generating only 256 colors, so photos and videos aren’t exactly hyperrealistic. Perhaps the biggest wrinkle is that you need to stand at least 15 feet back for the parallel-reality effect to work. (Venture too close, and you see one mishmashed image.)

[…]

The other part of the equation is figuring out which traveler is standing where, so people see their own flight details. Delta is accomplishing that with a bit of AI software and some ceiling-mounted cameras. When you scan your boarding pass, you get associated with your flight info—not through facial recognition, but simply as a discrete blob in the cameras’ view. As you roam near the parallel-reality display, the software keeps tabs on your location, so that the signage can point your information at your precise spot.

Delta is taking pains to alleviate any privacy concerns relating to this system. “It’s all going to be housed on Delta systems and Delta software, and it’s always going to be opt-in,” says Robbie Schaefer, general manager of Delta’s airport customer experience. The software won’t store anything once a customer moves on, and the display won’t show any highly sensitive information. (It’s possible to steal a peek at other people’s displays, but only by invading their personal space—which is what I did to Ng, at his invitation, to see for myself.)

The other demos I witnessed at Misapplied’s office involved less tracking of individuals and handling of their personal data. In the retail-store scenario, for instance, all that mattered was which product I was standing in front of. And in the captioning one, the display only needed to know what language to display for each seat, which involved audience members using a smartphone app to scan a QR code on their seat and then select a language.

Source: Delta and Misapplied Sciences introduce parallel reality

LEDs in routers, power strips, and more, can ship data to the LightAnchors AR smartphone app

A pentad of bit boffins have devised a way to integrate electronic objects into augmented reality applications using their existing visible light sources, like power lights and signal strength indicators, to transmit data.

In a recent research paper, “LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces,” Carnegie Mellon computer scientists Karan Ahuja, Sujeath Pareddy, Robert Xiao, Mayank Goel, and Chris Harrison describe a technique for fetching data from device LEDs and then using those lights as anchor points for overlaid augmented reality graphics.

As depicted in a video published earlier this week on YouTube, LightAnchors allow an augmented reality scene, displayed on a mobile phone, to incorporate data derived from an LED embedded in the real-world object being shown on screen.

Unlike various visual tagging schemes that have been employed for this purpose, like using stickers or QR codes to hold information, LightAnchors rely on existing object features (device LEDs) and can be dynamic, reading live information from LED modulations.

The reason to do so is that device LEDs can serve not only as a point to affix AR interface elements, but also as an output port for the binary data being translated into human-readable form in the on-screen UI.

“Many devices such as routers, thermostats, security cameras already have LEDs that are addressable,” Karan Ahuja, a doctoral student at the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University, told The Register.

“For devices such as glue guns and power strips, their LED can be co-opted with a very cheap micro-controller (less than US$1) to blink it at high frame rates.”
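As a rough illustration of the receiving side (this is not the authors’ actual protocol; the preamble, payload length and thresholding are made up), a phone could sample the LED’s brightness once per camera frame, threshold the samples into bits, and look for a sync pattern before reading a payload:

# Rough illustration of decoding on-off-keyed data from an LED's brightness,
# sampled once per camera frame. Not the LightAnchors authors' protocol; the
# preamble and payload length here are invented for the example.

PREAMBLE = [1, 0, 1, 0, 1, 1]   # hypothetical sync pattern
PAYLOAD_BITS = 8                # hypothetical payload length

def to_bits(brightness):
    """Threshold raw brightness samples (e.g. 0-255) into a bit sequence."""
    threshold = (min(brightness) + max(brightness)) / 2
    return [1 if b > threshold else 0 for b in brightness]

def decode(brightness):
    bits = to_bits(brightness)
    n = len(PREAMBLE)
    for i in range(len(bits) - n - PAYLOAD_BITS + 1):
        if bits[i:i + n] == PREAMBLE:
            payload = bits[i + n:i + n + PAYLOAD_BITS]
            return int("".join(map(str, payload)), 2)   # e.g. a temperature reading
    return None   # no anchor found in this window of samples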

Source: LightAnchors array: LEDs in routers, power strips, and more, can sneakily ship data to this smartphone app • The Register

On-Device, Real-Time Hand Tracking with MediaPipe

The ability to perceive the shape and motion of hands can be a vital component in improving the user experience across a variety of technological domains and platforms. For example, it can form the basis for sign language understanding and hand gesture control, and can also enable the overlay of digital content and information on top of the physical world in augmented reality. While coming naturally to people, robust real-time hand perception is a decidedly challenging computer vision task, as hands often occlude themselves or each other (e.g. finger/palm occlusions and hand shakes) and lack high contrast patterns.

Today we are announcing the release of a new approach to hand perception, which we previewed at CVPR 2019 in June, implemented in MediaPipe—an open source cross platform framework for building pipelines to process perceptual data of different modalities, such as video and audio. This approach provides high-fidelity hand and finger tracking by employing machine learning (ML) to infer 21 3D keypoints of a hand from just a single frame. Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands. We hope that providing this hand perception functionality to the wider research and development community will result in an emergence of creative use cases, stimulating new applications and new research avenues.

3D hand perception in real-time on a mobile phone via MediaPipe. Our solution uses machine learning to compute 21 3D keypoints of a hand from a video frame. Depth is indicated in grayscale.

An ML Pipeline for Hand Tracking and Gesture Recognition

Our hand tracking solution utilizes an ML pipeline consisting of several models working together:

  • A palm detector model (called BlazePalm) that operates on the full image and returns an oriented hand bounding box.
  • A hand landmark model that operates on the cropped image region defined by the palm detector and returns high fidelity 3D hand keypoints.
  • A gesture recognizer that classifies the previously computed keypoint configuration into a discrete set of gestures.

This architecture is similar to that employed by our recently published face mesh ML pipeline and that others have used for pose estimation. Providing the accurately cropped palm image to the hand landmark model drastically reduces the need for data augmentation (e.g. rotations, translation and scale) and instead allows the network to dedicate most of its capacity towards coordinate prediction accuracy.
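The same pipeline later shipped as a ready-to-use MediaPipe solution. As a rough usage sketch with the Python package (which postdates this post, and whose API has since evolved), reading the 21 keypoints per frame from a webcam looks something like this:

# Rough usage sketch of MediaPipe's hand tracking solution from Python.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR frames.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    for hand in results.multi_hand_landmarks or []:
        # 21 landmarks per hand, each with normalized x, y and a relative depth z.
        wrist = hand.landmark[0]
        print(f"wrist at ({wrist.x:.2f}, {wrist.y:.2f}), z={wrist.z:.2f}")

cap.release()
hands.close()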

Hand perception pipeline overview.

BlazePalm: Realtime Hand/Palm Detection

To detect initial hand locations, we employ a single-shot detector model called BlazePalm, optimized for mobile real-time uses in a manner similar to BlazeFace, which is also available in MediaPipe. Detecting hands is a decidedly complex task: our model has to work across a variety of hand sizes with a large scale span (~20x) relative to the image frame and be able to detect occluded and self-occluded hands. Whereas faces have high contrast patterns, e.g., in the eye and mouth region, the lack of such features in hands makes it comparatively difficult to detect them reliably from their visual features alone. Instead, providing additional context, like arm, body, or person features, aids accurate hand localization.

Our solution addresses the above challenges using different strategies. First, we train a palm detector instead of a hand detector, since estimating bounding boxes of rigid objects like palms and fists is significantly simpler than detecting hands with articulated fingers. In addition, as palms are smaller objects, the non-maximum suppression algorithm works well even for two-hand self-occlusion cases, like handshakes. Moreover, palms can be modelled using square bounding boxes (anchors in ML terminology) ignoring other aspect ratios, and therefore reducing the number of anchors by a factor of 3-5. Second, an encoder-decoder feature extractor is used for bigger scene context awareness even for small objects (similar to the RetinaNet approach). Lastly, we minimize the focal loss during training to support a large amount of anchors resulting from the high scale variance.

With the above techniques, we achieve an average precision of 95.7% in palm detection. Using a regular cross entropy loss and no decoder gives a baseline of just 86.22%.
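The focal loss mentioned above is the RetinaNet-style loss that down-weights easy, confidently classified anchors so the huge number of background anchors does not swamp training. A minimal per-anchor version (the standard formula, not Google’s training code):

# Minimal per-anchor focal loss (the RetinaNet-style loss referenced above):
# FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """p: predicted probability of 'palm'; y: 1 for palm anchors, 0 for background."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, eps))

# An easy, confidently classified background anchor contributes almost nothing...
print(focal_loss(0.01, 0))   # ~7.5e-07
# ...while a hard, misclassified anchor still produces a large loss.
print(focal_loss(0.01, 1))   # ~1.13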

Source: Google AI Blog: On-Device, Real-Time Hand Tracking with MediaPipe

3D volumetric display creates hologram-like tactile animated objects with sound using a polystyrene bead thrown around at high pace

Researchers in Sussex have built a device that displays 3D animated objects that can talk and interact with onlookers.

A demonstration of the display showed a butterfly flapping its wings, a countdown spelled out by numbers hanging in the air, and a rotating, multicoloured planet Earth. Beyond interactive digital signs and animations, scientists want to use it to visualise and even feel data.

[…]

it uses a 3D field of ultrasound waves to levitate a polystyrene bead and whip it around at high speed to trace shapes in the air.

The 2mm-wide bead moves so fast, at speeds approaching 20mph, that it traces out the shape of an object in less than one-tenth of a second. At such a speed, the brain doesn’t see the moving bead, only the completed shape it creates. The colours are added by LEDs built into the display that shine light on the bead as it zips around.
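Those numbers hang together: at roughly 20 mph the bead covers close to a metre of path in the tenth of a second the eye integrates over, which is plenty to trace a shape inside the prototype’s 10 cm-wide working volume mentioned further down. A quick sanity check:

# Quick sanity check on the numbers quoted above.
MPH_TO_MS = 0.44704
speed = 20 * MPH_TO_MS        # ~8.9 m/s
persistence = 0.1             # the shape is traced in under a tenth of a second
path = speed * persistence    # ~0.9 m of bead travel per "frame"
print(f"~{speed:.1f} m/s gives ~{path * 100:.0f} cm of path per tenth of a second")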

Because the images are created in 3D space, they can be viewed from any angle. And by careful control of the ultrasonic field, the scientists can make objects speak, or add sound effects and musical accompaniments to the animated images. Further manipulation of the sound field enables users to interact with the objects and even feel them in their hands.

[…]

The images are created between two horizontal plates that are studded with small ultrasonic transducers. These create an inaudible 3D sound field that contains a tiny pocket of low pressure air that traps the polystyrene bead. Move the pocket around, by tweaking the output of the transducers, and the bead moves with it.

The most basic version of the display creates 3D colour animations, but writing in the journal Nature, the scientists describe how they improved the display to produce sounds and tactile responses to people reaching out to the image.

Speech and other sounds, such as a musical accompaniment, were added by vibrating the polystyrene bead as it hares around. The vibrations can be tuned to produce soundwaves across the entire range of human hearing, creating, for example, crisp and clear speech. Another trick makes the display tactile by manipulating the ultrasonic field to create a virtual “button” in mid-air.

The prototype uses a single bead and can create images inside a 10cm-wide cube of air. But future displays could use more powerful transducers to make larger animations, and employ multiple beads at once. Subramanian said existing computer software can be used to ensure the tiny beads do not crash into one another, although choreographing the illumination of multiple beads mid-air is another problem.

[…]

“The interesting thing about the tactile content is that it’s created using ultrasound waves. Unlike the simple vibrations most people are familiar with through smartphones or games consoles, the ultrasound waves move through the air to create precise patterns against your hands. This allows multimedia experiences where the objects you feel are just as rich and dynamic as the objects you see in the display.”

Julie Williamson, also at Glasgow, said levitating displays are a first step towards truly interactive 3D displays. “I imagine a future where 3D displays can create experiences that are indistinguishable from the physical objects they are simulating,” she said.

Source: Hologram-like device animates objects using ultrasound waves | Technology | The Guardian

3D Holographic Air Fan Displays

 

  • TOP-NOTCH 3D EFFECT – The image has no borders and backgrounds, which makes it look like it appears completely in the air and creates the best attraction for your products or events. Widely used for signage displays in department stores, shopping malls, casinos, bars and railway stations.
  • 🔥EYE-CATCHING BLACK TECH PRODUCT – Stand out from your competitors. Holo One is far beyond being “just a cool thing”. It is a comprehensive solution that can be seamlessly integrated into your business, delivering a complete media planning system and helping you outshine industry competition.
  • 🔥MAIN PARAMETERS – 224 LED lights, 8 GB Kingston SD card (be careful when inserting it into the card slot), supported display formats: MP4, AVI, RMVB, MKV, GIF, JPG, PNG with a black background. Software compatible with Windows XP / Windows 7 / Windows 8 / Windows 10 (does NOT support MacBook). Resolution is 450*224 px.

https://www.amazon.com/GIWOX-Hologram-Advertising-Display-Holographic/dp/B077YD59RN

Smallest pixels ever created, a million times smaller than on smartphones, could light up color-changing buildings

The smallest pixels yet created—a million times smaller than those in smartphones, made by trapping particles of light under tiny rocks of gold—could be used for new types of large-scale flexible displays, big enough to cover entire buildings.

The colour pixels, developed by a team of scientists led by the University of Cambridge, are compatible with roll-to-roll fabrication on flexible plastic films, dramatically reducing their production cost. The results are reported in the journal Science Advances.

It has been a long-held dream to mimic the colour-changing skin of octopus or squid, allowing people or objects to disappear into the natural background, but making large-area flexible display screens is still prohibitively expensive because they are constructed from highly precise multiple layers.

At the centre of the pixels developed by the Cambridge scientists is a tiny particle of gold a few billionths of a metre across. The grain sits on top of a reflective surface, trapping light in the gap in between. Surrounding each grain is a thin sticky coating which changes chemically when electrically switched, causing the pixel to change colour across the spectrum.

The team of scientists, from different disciplines including physics, chemistry and manufacturing, made the pixels by coating vats of golden grains with an active polymer called polyaniline and then spraying them onto flexible mirror-coated plastic, to dramatically drive down production cost.

The pixels are the smallest yet created, a million times smaller than typical smartphone pixels. They can be seen in bright sunlight and, because they do not need constant power to keep their set colour, have an energy performance that makes large areas feasible and sustainable. “We started by washing them over aluminized food packets, but then found aerosol spraying is faster,” said co-lead author Hyeon-Ho Jeong from Cambridge’s Cavendish Laboratory.

“These are not the normal tools of nanotechnology, but this sort of radical approach is needed to make sustainable technologies feasible,” said Professor Jeremy J Baumberg of the NanoPhotonics Centre at Cambridge’s Cavendish Laboratory, who led the research. “The strange physics of light on the nanoscale allows it to be switched, even if less than a tenth of the film is coated with our active pixels. That’s because the apparent size of each pixel for light is many times larger than their physical area when using these resonant gold architectures.”

The pixels could enable a host of new application possibilities such as building-sized display screens, architecture which can switch off solar heat load, active camouflage clothing and coatings, as well as tiny indicators for coming internet-of-things devices.

The team are currently working on improving the colour range and are looking for partners to develop the technology further.

Source: Smallest pixels ever created could light up color-changing buildings

Color-Changing LEDs Pave the Way to Impossibly High Screen Resolutions

An international collaboration between several universities around the world has led to an innovation in LEDs that could potentially result in a giant leap forward when it comes to increasing the resolution on TV screens and mobile devices. For the first time ever, a single LED can now change color all by itself.

The current design and chemical makeup of LEDs limit the technology to producing light in just a single color. “But Andrew, what about my color-changing LED smart bulbs,” you’re probably asking. Those actually rely on a cluster of LEDs inside that each produce either red, green, or blue light. When their individual intensities are adjusted, the colors that each light produces mix to produce an overall shade of light. LED-backlit LCD TVs work in a similar fashion, but to produce one colored pixel, three filtered LEDs are required. Even the next big breakthrough in flatscreen TV technology, MicroLEDs, requires a trio of ultra-tiny light-producing diodes to create a single pixel, which limits how many can be squeezed into a given area, and thus the resolution.

In a paper recently published in ACS Photonics, researchers from Lehigh University and West Chester University in Pennsylvania, Osaka University in Japan, and the University of Amsterdam detail a new approach to making LEDs that uses a rare earth ion called Europium which, when paired with Gallium Nitride (an alternative to silicon that’s now showing up in electronics other than LEDs, like Anker’s impossibly tiny PowerPort Atom PD 1 laptop charger), allows the LED’s color to be adjusted on the fly. The secret sauce is how power is used to excite the Europium and Gallium Nitride; different ratios and intensities of current can be selectively applied to produce the emission of the three primary colors: red, blue, and green.

Using this approach, LED lightbulbs with specific color temperatures could be produced and sold at much cheaper price points since the colors from multiple tint-specific LEDs don’t have to be mixed. The technology could yield similar benefits for TVs and the screens that end up in mobile devices. Instead of three LEDs (red, green, and blue) needed to generate every pixel, a single Europium-based LED could do the job. Even more exciting than cheaper price tags is the fact that replacing three LEDs with just one could result in a display with three times the resolution. Your eyes probably wouldn’t be able to discern that many pixels on a smartphone screen, but in smaller displays, like those used in the viewfinders of digital cameras, a significant step in resolution would be a noticeable improvement.

Source: Color-Changing LEDs Pave the Way to Impossibly High Screen Resolutions

Intel’s new Vaunt smart glasses actually look good

There is no camera to creep people out, no button to push, no gesture area to swipe, no glowing LCD screen, no weird arm floating in front of the lens, no speaker, and no microphone (for now).

From the outside, the Vaunt glasses look just like eyeglasses. When you’re wearing them, you see a stream of information on what looks like a screen — but it’s actually being projected onto your retina.

The prototypes I wore in December also felt virtually indistinguishable from regular glasses. They come in several styles, work with prescriptions, and can be worn comfortably all day. Apart from a tiny red glimmer that’s occasionally visible on the right lens, people around you might not even know you’re wearing smart glasses.

Like Google Glass did five years ago, Vaunt will launch an “early access program” for developers later this year. But Intel’s goals are different than Google’s. Instead of trying to convince us we could change our lives for a head-worn display, Intel is trying to change the head-worn display to fit our lives.

Source: Exclusive: Intel’s new Vaunt smart glasses actually look good – The Verge

Scientists Found a Way to Make Inexpensive, Solid-Looking 3D Holograms / volumetric displays

Researchers at Brigham Young University in Utah made something they’re calling an Optical Trap Display (OTD). The device traps a tiny opaque particle in mid-air using an invisible laser beam, then moves the beam around a preset path in free space. At the same time, it illuminates the particle with red, green, or blue lights. When the particle moves fast enough, it creates a solid holographic image in the air. Move it even faster, and you can create the illusion of movement.
[…]
“We can think about this image like a 3D-printed object,” lead author Daniel Smalley, an assistant professor in electroholography at Brigham Young University, explained in a Nature video. “A single point was dragged sequentially through all these image points, and as it did, it scattered light. And the accumulated effect of all that scattering and moving was to create this 3D image in space that is visible from all angles.”

Scientifically, what Smalley and his team are creating are known as volumetric images, which differentiates them from 2D-hologram technologies. Other companies and scientists have made devices that create volumetric images, but the researchers say theirs is the first to generate free-floating images that can occupy the same space as other objects, as opposed to volumetric images that need to be contained inside a specially designed field. Other devices often require a much more elaborate set-up as well, while the OTD is relatively cheap, made with commercially available parts and low-cost lasers.
[…]
That said, the device does have its limitations. Namely, that the images produced right now are quite tiny: smaller than a fingernail. Making the images bigger will require the researchers to learn how to manipulate more than one particle at a time. And it’s unlikely the device will be usable outdoors for the foreseeable future, since fast-moving air particles can muck up the process. Video cameras also have a problem capturing the images the way our eyes or still cameras do—a video’s frame rate makes the image look like it’s flickering, while our eyes only see a solid image.

Source: Scientists Found a Way to Make Inexpensive, Solid-Looking 3D Holograms

Asus Bezel-Free Kit uses illusion to hide bezels in multimonitor setups

The concept is simple. Thin lenses are placed along the seams where screens meet; they contain optical micro-structures that refract light, bending it inward to hide the bezels underneath.
[…]
The kit’s optical obfuscation is designed to work at a specific angle. We selected 130° because it offered the best balance of comfort and immersion in internal testing. Proper fit and alignment are extremely important, so the lenses and associated mounting hardware are made for specific monitors.

Source: Bezel-Free Kit makes multi-monitor setups seamless | ROG – Republic of Gamers Global

Does your monitor unplug from HDMI when you turn it off and mess up your desktop? Monitordetectkiller is the solution!


The computer detects when a TV/monitor is ‘turned off’ or ‘switched’ to another input. Then, when it is powered on or switched back, you get the wrong resolution or your extended desktop is rearranged to reflect a single monitor; there may even be crashes and other issues.

Our hardware solution, the “MDK device” is a male to female modified adapter with integrated circuitry.

Now, the computer/device won’t receive a signal telling it the monitor is offline, thus avoiding any issues.

Source: Remove Monitor Detection disable monitor auto detect EDID

Buying a new Monitor / TV

When buying a new monitor there are 5 sites you should have open at all times:
1. The site selling monitors (eg plattetv.nl)
2. The comparison site DisplaySpecifications, which allows you to search for models, add them to comparison lists and then view detailed specifications next to each other
3. A Google search for reviews of the model
4. AV Forums to search for good or bad experiences with the model.
5. Your price comparison site (eg Tweakers Pricewatch)

Also useful are sites that explain what each model name means and how the model number is built up. For Samsung you can use this site.

The important specifications are:
What type of panel is it? (IPS / VA / PQL / OLED / Quantum Dot / QLED / MicroLED / etc)

Panel bit depth: is it 8 bits, 10 bits native or 10 bits (8 bits + FRC)

Colour bit depth: 30 bits?

Resolution: native UHD 3840×2160 pixels

Pixel density: higher is better (see the quick PPI calculation below)

Display area: bigger is better

Static contrast: more is better

Response times (minimum / average) and input lag (for gaming): less is better

3D: if you think that’s important

Frequency: most are 60Hz, some are 120Hz or 200Hz (higher is better)

Interpolation value: most are around 1200, higher is better

Power consumption: less is better
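As promised above, pixel density is easy to work out yourself when a spec sheet omits it. A quick sketch, assuming a flat 16:9 panel:

# Pixel density (PPI) from resolution and diagonal size.
import math

def ppi(width_px, height_px, diagonal_inches):
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(3840, 2160, 55)))   # 55" UHD TV      -> ~80 PPI
print(round(ppi(3840, 2160, 27)))   # 27" UHD monitor -> ~163 PPI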

Other features:

  • connectivity (what kind of USB ports (3.0?), HDMI, Displayport etc fit in)
  • sizes
  • colour
  • stand size at the back
  • network (does it do 802.11n at 5 GHz and 802.11ac?)
  • features

Good luck!

Researchers Discover a Method That Could Triple Our Screen Resolutions

The researchers have outlined the technical details in a new study published in Nature. Basically, what they’ve done is figure out a method to control subpixels with voltage. Each pixel on an LCD screen contains three subpixels. Each of those subpixels handles one of three colors: red, green or blue. A white backlight shines through the pixel and the LCD shutter controls which subpixel is viewable. For instance, if the pixel should be blue, the LCD shutter will cover the red and green subpixels. In order to make purple, the shutter only needs to cover the green subpixel. The white backlight determines how light or dark the color will be.

The team at UCF’s NanoScience Technology Center has demonstrated a way of using an embossed nanostructure surface and reflective aluminum that could eliminate the need for subpixels entirely. On a test device, the researchers were able to control the color of each subpixel individually. Rather than one subpixel being dedicated to blue, it can produce the full range of color that the TV is capable of displaying. With each subpixel suddenly doing the work of three, the potential resolution of the device is suddenly three times as high. Additionally, this would mean that every subpixel (or in this case, a tinier pixel) would be on whenever displaying a color or white. That would lead to displays that are far brighter.
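To make the before/after concrete, here is a toy model of the conventional scheme described above (illustrative only, not the UCF device): each pixel is three shuttered subpixels, and covering a subpixel removes that primary; the UCF approach instead tunes each subpixel’s own color, so one tunable element can stand in for three fixed ones.

# Toy model of the conventional scheme: three shuttered subpixels per pixel.
def shutter_state(lit_primaries):
    """lit_primaries: set of primaries that should shine, e.g. {'R', 'B'} for purple."""
    return {sub: ("open" if sub in lit_primaries else "covered") for sub in "RGB"}

print(shutter_state({"B"}))             # blue: cover red and green
print(shutter_state({"R", "B"}))        # purple: cover only green
print(shutter_state({"R", "G", "B"}))   # white: all open

# With the UCF approach each subpixel's color is tunable, so the same panel's
# 3 * 1920 * 1080 subpixels could in principle act as 6,220,800 full pixels.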

Source: Researchers Discover a Method That Could Triple Our Screen Resolutions

Refresh rates are a bit low, but the biggest hurdle will probably be your TV manufacturer refusing to incorporate this into a software update: they would much rather have you buy a new TV.

World’s thinnest hologram paves path to new 3D world – RMIT University

Now a pioneering team led by RMIT University’s Distinguished Professor Min Gu has designed a nano-hologram that is simple to make, can be seen without 3D goggles and is 1000 times thinner than a human hair.

“Conventional computer-generated holograms are too big for electronic devices but our ultrathin hologram overcomes those size barriers,” Gu said.

“Our nano-hologram is also fabricated using a simple and fast direct laser writing system, which makes our design suitable for large-scale uses and mass manufacture.

“Integrating holography into everyday electronics would make screen size irrelevant – a pop-up 3D hologram can display a wealth of data that doesn’t neatly fit on a phone or watch.”
[…]
Dr Zengji Yue, who co-authored the paper with BIT’s Gaolei Xue, said: “The next stage for this research will be developing a rigid thin film that could be laid onto an LCD screen to enable 3D holographic display.

“This involves shrinking our nano-hologram’s pixel size, making it at least 10 times smaller.

“But beyond that, we are looking to create flexible and elastic thin films that could be used on a whole range of surfaces, opening up the horizons of holographic applications.”

Source: World’s thinnest hologram paves path to new 3D world – RMIT University

feeling things you touch in VR

haptics for VR walls and other objects [CHI17 fullpaper]

In this project, we explored how to add haptics to walls and other heavy objects in virtual reality. Our main idea is to prevent the user’s hands from penetrating virtual objects by means of electrical muscle stimulation (EMS). Figure 1a shows an example. As the shown user lifts a virtual cube, our system lets the user feel the weight and resistance of the cube. The heavier the cube and the harder the user presses the cube, the stronger a counterforce the system generates. Figure 1b illustrates how our system implements the physicality of the cube, i.e., by actuating the user’s opposing muscles with EMS.
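As a rough sketch of that control idea (the stimulator and physics interfaces below are hypothetical; this is not the authors’ implementation), the stimulation intensity can simply be scaled with the virtual object’s weight and with how far the hand presses into it:

# Rough sketch of the EMS control idea described above. The stimulator and
# physics interfaces are hypothetical; this is not the authors' implementation.

MAX_INTENSITY = 1.0   # normalized stimulator output (hardware-specific in reality)

def ems_intensity(object_weight_kg, penetration_m, k_weight=0.1, k_push=8.0):
    """Stronger stimulation for heavier objects and deeper presses, capped for safety."""
    return min(k_weight * object_weight_kg + k_push * penetration_m, MAX_INTENSITY)

def update(stimulator, hand_pos, cube):
    """Per-frame update: actuate the opposing muscle while the hand is inside the cube."""
    depth = cube.penetration_depth(hand_pos)   # hypothetical physics query, in metres
    if depth > 0:
        stimulator.set_channel("opposing_muscle", ems_intensity(cube.weight_kg, depth))
    else:
        stimulator.set_channel("opposing_muscle", 0.0)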

Source: haptics for VR walls and other objects [CHI17 fullpaper] – pedro lopes research