Non-line-of-sight (NLOS) imaging and tracking is an emerging technology that allows the shape or position of objects around corners or behind diffusers to be recovered from transient, time-of-flight measurements. However, existing NLOS approaches require the imaging system to scan a large area on a visible surface, where the indirect light paths of hidden objects are sampled. In many applications, such as robotic vision or autonomous driving, optical access to a large scanning area may not be available, which severely limits the practicality of existing NLOS techniques. Here, we propose a new approach, dubbed keyhole imaging, that captures a sequence of transient measurements along a single optical path, for example, through a keyhole. Assuming that the hidden object of interest moves during the acquisition time, we effectively capture a series of time-resolved projections of the object’s shape from unknown viewpoints. We derive inverse methods based on expectation-maximization to recover the object’s shape and location using these measurements. Then, with the help of long exposure times and retroreflective tape, we demonstrate successful experimental results with a prototype keyhole imaging system.
C. Metzler, D. Lindell, G. Wetzstein, Keyhole Imaging: Non-Line-of-Sight Imaging and Tracking of Moving Objects Along a Single Optical Path, IEEE Transactions on Computational Imaging, 2021.
Overview of results
Keyhole imaging. A time-resolved detector and pulsed laser illuminate and image a point visible through a keyhole (left). As a hidden person moves, the detector captures a series of time-resolved measurements of the indirectly scattered light (center). From these measurements, we reconstruct both hidden object shape (e.g., for a hidden mannequin) and the time-resolved trajectory (right).
Experimental setup. Our optical system sends a laser pulse through the keyhole of a closed door. On the other side of the door, the hidden object moves along a translation stage. When third-bounce photons return, they are recorded and time-stamped by a SPAD. Top-right inset: A beam splitter (BS) is used to place the laser and SPAD in a confocal configuration.
Experimental results. First row: Images of the hidden objects. Second row: Reconstructions of the hidden objects using GD when their trajectories are known. Third row: EM reconstructions of the hidden objects when their trajectories are unknown. Fourth row: EM estimates of the trajectories of the hidden objects, each of which follows a different trajectory, where the dot color indicates position over time.
Computational imaging of moving 3D objects through the keyhole of a closed door.
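To make the expectation-maximization idea concrete, here is a toy Python/NumPy sketch. It is not the paper’s reconstruction code: the hidden object is reduced to a 1D signal, and the time-resolved projections from unknown viewpoints are stood in for by noisy, circularly shifted copies of it. EM then alternates between soft-assigning each measurement to a candidate shift (E-step) and re-estimating the signal as a responsibility-weighted average of the re-aligned measurements (M-step).

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, sigma = 32, 200, 0.5
shape = np.zeros(N)
shape[10:16] = 1.0                          # hidden 1D "shape"
shifts = rng.integers(0, N, size=T)         # unknown position at each time step
obs = np.stack([np.roll(shape, s) for s in shifts])
obs += sigma * rng.normal(size=obs.shape)   # noisy, shifted measurements

est = 0.1 * rng.normal(size=N)              # random initial guess
for _ in range(50):
    # E-step: posterior over the N candidate shifts for every measurement
    logp = np.stack([-np.sum((obs - np.roll(est, s)) ** 2, axis=1)
                     for s in range(N)], axis=1) / (2 * sigma ** 2)
    w = np.exp(logp - logp.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)       # (T, N) responsibilities
    # M-step: re-align each measurement by every candidate shift,
    # weight by responsibility, and average into the new estimate
    est = sum(w[:, s][:, None] * np.roll(obs, -s, axis=1)
              for s in range(N)).sum(axis=0) / T

print(np.round(est, 2))                     # recovers the block, up to a shift
```

As in the real system, the shape is only recovered up to a global shift; disentangling shape from trajectory is exactly what the paper’s EM formulation has to do, with far richer time-resolved measurements.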
Android is implementing this option as part of the accessibility feature, Switch Access. Switch Access adds a blue selection window to your display, and lets you use external switches, a keyboard, or the buttons on your Android to move that selection window through the many different items on your screen until you land on the one you want to select.
The big update to Switch Access is to make facial gestures the triggers that move the selection window across your screen. This new feature is part of Android Accessibility Suite’s 12.0.0 beta, which arrives packed into the latest Android 12 beta (beta 4, to be exact). If you aren’t running the beta on your Android device, you won’t be able to take advantage of this cool new feature until Google seeds Android 12 to the general public.
If you want to try it out right now, however, you can simply enroll your device in the Android 12 beta program, then download and install the work-in-progress software to your phone. Follow along on our walkthrough here to set yourself up.
How to set up facial gestures on Android 12
To get started on a device running Android 12 beta 4, head over to Settings > Accessibility > Switch Access, then tap the toggle next to Use Switch Access. You’ll need to grant the feature full control over your device, which involves viewing and controlling the screen, as well as viewing and performing actions. Tap Allow to confirm.
The first time you do this, Android will automatically open the Switch Access setup guide. Here, tap Camera Switch, then tap Next. On the following page, choose between one switch or two switches, the latter of which Android recommends. With one switch, you use the same gesture to begin highlighting items on screen that you do to select a particular item. With two switches, you set one gesture to start highlighting, and a separate one to select.
We’re going to demonstrate the setup with Two switches selected. On the following page, choose how you’d like Android to scan through a particular page of options:
Linear scanning (except keyboard): Move between items one at a time. If you’re using a keyboard, however, it will scan by row.
Row-column scanning: Scan one row at a time. After the row is selected, move through items in that list.
Group selection (advanced): All items will be assigned a color. You perform a face gesture corresponding to the color of the item you want to select. Narrow down the size of the group until you reach your choice.
We’ll choose Linear scanning for this walkthrough. Once you make your selection, choose Next, then choose a gesture to assign to the action Next (which is what tells the blue selection window to move through the screen). You can choose from Open Mouth, Smile, Raise Eyebrows, Look Left, Look Right, and Look Up, and can assign as many of these gestures as you want to the one action. Just know that when you assign a gesture to an action, you won’t be able to use it with another action. When finished, tap Next.
Now, choose a gesture for the action Select (which selects the item that the blue selection window is hovering over). You can choose from the same list as before, barring any gestures you assigned to Next. Once you make your choice, you can actually start using these gestures to continue, since you can use your first gesture to move through the options, and your second gesture to select.
Finally, choose a gesture to pause or unpause camera switches. You don’t need to use this feature, but Android recommends you do. Pick your gesture or gestures, then choose Next. Once you do, the setup is done and you can now use your facial gestures to move around Android.
Other face gesture settings and options
Once you finish your setup, you’ll find some additional settings you can go through. Under Face Gesture Settings, you’ll find all the gesture options, as well as their assigned actions. Tap on one to test it out, adjust the gesture size, set the gesture duration, and edit the assignment for the gesture.
Beneath Additional settings for Camera Switches, you’ll find four more options to choose from:
Enhanced visual feedback: Show a visual indication of how long you have held a gesture.
Enhanced audio feedback: Play a sound when something on the screen changes in response to a gesture.
Keep screen on: Keep the screen on when Camera Switches is enabled. Camera Switches cannot unlock the screen if it turns off.
Ignore repeated Camera Switch triggers: Choose a duration during which the system will interpret multiple Camera Switch triggers as a single trigger.
How to turn off facial gestures (Camera Switches)
If you find that controlling your phone with facial gestures just isn’t for you, don’t worry; it’s easy to turn off the feature. Just head back to Settings > Accessibility > Switch Access, then choose Settings. Tap Camera Switch gestures, then tap the slider next to Use Camera Switches. That will disable the whole feature, while saving your setup. If you want to reenable the feature, just return to this page at any time, and tap the toggle again.
In a joint project with Shanghai Jiao Tong University, MIT designed a neuroprosthetic that costs about $500 in components. It’s an inflatable hand made from an elastomer called EcoFlex and looks a bit like Baymax from Big Hero 6.
The device foregoes electric motors in favor of a pneumatic system that inflates and bends its balloon-like digits. The hand can assume various grasps that allow an amputee to subsequently do things like pet a cat, pour a carton of milk or even pick up a cupcake. The device translates how its wearer wants to use it through a software program that “decodes” the EMG signals the brain sends to an injured limb.
The prosthetic weighs about half a pound and can even restore some sense of feeling for its user. It does this with a series of pressure sensors. When the wearer touches or squeezes an object, the sensors send an electric signal to a specific position on their amputated arm. Another advantage of the arm is that it doesn’t take long to learn how to use it. After about 15 minutes, two volunteers found they could write with a pen and stack checkers.
“This is not a product yet, but the performance is already similar or superior to existing neuroprosthetics, which we’re excited about,” said Professor Xuanhe Zhao, one of the engineers who worked on the project. “There’s huge potential to make this soft prosthetic very low cost, for low-income families who have suffered from amputation.”
Until recently, there was only one smartphone on the market equipped with an under-screen camera: last year’s ZTE Axon 20 5G. Other players such as Vivo, Oppo and Xiaomi had also been testing this futuristic tech, but given the subpar image quality back then, it’s no wonder that phone makers largely stuck with punch-hole cameras for selfies.
Despite much criticism of its first under-screen camera, ZTE worked what it claims to be an improved version into its new Axon 30 5G, which launched in China last week. Coincidentally, today Oppo unveiled its third-gen under-screen camera which, based on a sample shot it provided, appears to be surprisingly promising — no noticeable haziness or glare. But that was just one photo, of course, so I’ll obviously reserve my final judgement until I get to play with one. Even so, the AI tricks and display circuitry that made this possible are intriguing.
In a nutshell, nothing has changed in terms of how the under-screen camera sees through the screen. Its performance is limited by how much light can travel through the gaps between each OLED pixel. Therefore, AI compensation is still a must. For its latest under-screen camera, Oppo says it trained its own AI engine “using tens of thousands of photos” in order to achieve more accurate corrections on diffraction, white balance and HDR. Hence the surprisingly natural-looking sample shot.
Another noteworthy improvement here lies within the display panel’s consistency. The earlier designs chose to lower the pixel density in the area above the camera, in order to let sufficient light into the sensor. This resulted in a noticeable patch above the camera, which would have been a major turn-off when you watched videos or read fine text on that screen.
But now, Oppo — or the display panel maker, which could be Samsung — figured out a way to boost light transmittance by slightly shrinking each pixel’s geometry above the camera. In other words, we get to keep the same 400-ppi pixel density as the rest of the screen, thus creating a more consistent look.
Oppo added that this is further enhanced by a transparent wiring material, as well as a one-to-one pixel-circuit-to-pixel architecture (instead of two-to-one like before) in the screen area above the camera. The latter promises more precise image control and greater sharpness, with the bonus being a 50-percent longer panel lifespan due to better burn-in prevention.
Oppo didn’t say when or if consumers will get to use its next-gen under-screen camera, but given the timing, I wouldn’t be surprised if this turns out to be the same solution on the ZTE Axon 30 5G. In any case, it would be nice if the industry eventually agreed to dump punch-hole cameras in favor of invisible ones.
A few days ago, the US Federal Trade Commission (FTC) came out with a 5-0 unanimous vote on its position on right to repair. (PDF) It’s great news, in that they basically agree with us all:
Restricting consumers and businesses from choosing how they repair products can substantially increase the total cost of repairs, generate harmful electronic waste, and unnecessarily increase wait times for repairs. In contrast, providing more choice in repairs can lead to lower costs, reduce e-waste by extending the useful lifespan of products, enable more timely repairs, and provide economic opportunities for entrepreneurs and local businesses.
The long version of the “Nixing the Fix” report goes on to list ways that the FTC found firms were impeding repair: ranging from poor initial design, through restrictive firmware and digital rights management (DRM), all the way down to “disparagement of non-OEM parts and independent repair services”.
While the FTC isn’t making any new laws here, they’re conveying a willingness to use the consumer-protection laws that are already on the books: the Magnuson-Moss Warranty Act and Section 5 of the FTC Act, which prohibits unfair competitive practices.
Only time will tell if this dog really has teeth, but it’s a good sign that it’s barking. And given that the European Union is heading in a similar direction, we’d be betting that repairability increases in the future.
iFixit co-founder and CEO Kyle Wiens has exposed how companies including Apple, Samsung, and Microsoft manipulate the design of their products and the supply chain to prevent consumers and third-party repairers from accessing necessary tools and parts to repair products such as smartphones and laptops.
Speaking during the Productivity Commission’s virtual right to repair public hearing on Monday, Wiens took the opportunity to draw on specific examples of how some of the largest tech companies are obstructing consumers from a right to repair.
“We’ve seen manufacturers restrict our ability to buy parts. There’s a German battery manufacturer named Varta that sells batteries to a wide variety of companies. Samsung happens to use these batteries in their Galaxy earbuds … but when we go to Varta and say can we buy that part as a repair part, they’ll say ‘No, our contract with Samsung will not allow us to sell that’. We’re seeing that increasingly,” he said.
“Apple is notorious for doing this with the chips in their computers. There’s a particular charging chip on the MacBook Pro … there is a standard version of the part and then there’s the Apple version of the part that sits very slightly tweaked, but it’s tweaked enough that it’s only required to work in this computer, and that company again is under contractual requirement with Apple.”
He continued, highlighting that a California-based recycler was contracted by Apple to recycle spare parts that were still in new condition.
“In California, Apple stops providing service after seven years, so this was at seven years and Apple had warehouses full of spare parts, and rather than selling them out in the marketplace — so someone like me who eagerly would’ve bought them — they were paying the recycler to destroy them,” Wiens said.
Wiens also pointed to an example involving a Microsoft Surface laptop.
“[iFixit] rated it on our repairability score, we normally rate products from one to 10; the Surface laptop got a zero. It had a glued-in battery … we had to actually cut our way into the product and destroyed it in the process of trying to get inside,” he said.
[…]
The other major point that was covered during the Productivity Commission’s public hearing was whether it would be plausible to introduce a labelling scheme in Australia, much like the one that exists in France.
[…]
Based on his observation, Wiens said the adoption of the French index has been “pretty universal” across all five categories. He also pointed out that a recent Samsung survey showed 86% of French citizens say that the index impacts their purchasing behaviour, while 80% said they would give up their favourite brand for a more repairable product.
“This is really substantially driving consumer behaviour,” he said.
For consumer group Choice, the possibility of introducing a labelling scheme to improve right to repair in Australia could work.
“We know from experience, particularly with the water and energy labelling scheme, that if you want manufacturers to improve the quality of products, start by rating and ranking them,” Choice campaign and communications director Erin Turner said during the hearing.
What are these terms, and how do they translate into control schemes? In this world you generally get what you pay for: if it’s cheap, then it’s probably plasticky and nasty; if it’s expensive, then it’s probably high quality. Saitek and Logitech have equipment running from low to midrange, Thrustmaster from mid to high range.
The VKB Gladiator NXT is currently the most popular midrange joystick; you can find it for around $120–$150, and it comes in left- and right-hand versions.
If you have the money, though, you go for the Virpil (VPC) Constellation Alpha (both left and right hand) and MongoosT-50CM2 grips and bases.
WingWin.cn has a very good F-16 throttle, stick, and instrument panel with desk mounts.
HOTAS
The world of flight sim control used to be fairly straightforward: ideally you had a stick on the right, a throttle unit on the left and rudders in the middle. Some stick makers tried to replace the rudder with a twistable stick grip and maybe a little throttle lever on the stick so you could get full control cheaper – the four degrees of freedom (roll, yaw, pitch and thrust) / 4 DOF on a single stick. You had fewer buttons, but you used the keyboard and mouse more.
HOSAS / Dual Stick
Now, in the resurgence of the age of space sims (Elite Dangerous, Star Citizen, No Man’s Sky, Star Wars Squadrons and the TIE Fighter Total Conversion, to name a few), the traditional HOTAS (Hands on Throttle and Stick) is losing ground to the HOSAS (Hands on Stick and Stick). The HOSAS offers six degrees of freedom (6 DOF): roll, yaw, pitch, thrust + horizontal and vertical translation / strafing, which makes sense for a space plane that can not only go backwards but can also strafe directly upwards and downwards or left and right.
This gives rise to some interesting control schemes:
Left stick
x-axis: translate / strafe left + right
y-axis: throttle
z/twist-axis: translate / strafe up + down

Right stick
x-axis: roll
y-axis: pitch
z/twist-axis: yaw

A variation which seems to be popular in Star Wars Squadrons is:

Right stick
x-axis: yaw
y-axis: pitch
z/twist-axis: roll

Another variation with throttling:

Left stick
x-axis: translate / strafe left + right
y-axis: translate / strafe up + down
z/twist-axis: thrust

often combined with:

rudder left foot: throttle backwards / reverse
rudder right foot: throttle forwards
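To make the mapping concrete, here is a minimal Python sketch of reading the first scheme above with pygame. The joystick indices and axis numbers (the twist axis in particular varies between devices) are assumptions you would adjust for your own hardware:

```python
import pygame

pygame.init()
left = pygame.joystick.Joystick(0)   # assumption: device 0 is the left stick
right = pygame.joystick.Joystick(1)  # assumption: device 1 is the right stick

def read_6dof():
    """Map raw stick axes to the 6-DOF HOSAS scheme: strafe and throttle
    on the left stick, roll/pitch/yaw on the right stick."""
    pygame.event.pump()
    return {
        "strafe_x": left.get_axis(0),    # left x-axis: strafe left/right
        "throttle": -left.get_axis(1),   # left y-axis: throttle (push = forward)
        "strafe_y": left.get_axis(2),    # left twist: strafe up/down
        "roll":     right.get_axis(0),   # right x-axis: roll
        "pitch":    right.get_axis(1),   # right y-axis: pitch
        "yaw":      right.get_axis(2),   # right twist: yaw
    }

while True:
    print(read_6dof())
    pygame.time.wait(100)
```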
Different combinations work better or worse depending on the person and how tiring it is for them personally. As Reddit user Enfiguralimificuleur points out: “It worked best for me with Z/twist being the throttle. I found it very efficient to adjust your speed properly. Very easy to stay at that speed as well. However due to wrist issue and tendinitis, some positions are VERY awkward. Try pulling+right+twisting. Ouch. And even without the pain, this is not comfortable.”
Throttling and the Omnithrottle
The throttle can be set up in different ways: a traditional HOTAS throttle stays wherever you push it, whereas sticks generally have a recentering mechanism. With a recentering stick it’s easy to find reverse, but it can get annoying, because to keep thrusting you need to keep pushing the stick forwards. There are a few solutions to this.
First, the VKB Gunfighter III base has a dry clutch which will remove the centering spring from the pitch axis, meaning you can assign that axis to thrust and basically have a stick that stays in place, mimicking a throttle, while still allowing for the rotation and roll axes.
Second, people can use a traditional throttle as well (so then I guess it becomes a HOTSAS).
Third, you can map a hat to 0, 50, 75, and 100% speed and set speeds that way as a sort of cruise control (see the sketch after this list).
Fourth, you can use the rudder (left foot back, right foot forward) or the z-axis (twist) for thrust / throttle control. This will not eliminate the problem, though.
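Here is the sketch promised in the third option: a hypothetical pygame loop that latches hat directions to speed presets, like cruise control. The hat-to-speed assignments are assumptions; map whatever presets suit you:

```python
import pygame

# Assumed mapping: hat up = 100%, right = 75%, down = 50%, left = stop
SPEED_PRESETS = {(0, 1): 1.00, (1, 0): 0.75, (0, -1): 0.50, (-1, 0): 0.00}

pygame.init()
stick = pygame.joystick.Joystick(0)  # assumption: device 0 has the hat
throttle = 0.0

while True:
    pygame.event.pump()
    hat = stick.get_hat(0)             # (x, y) tuple for the first hat switch
    if hat in SPEED_PRESETS:
        throttle = SPEED_PRESETS[hat]  # latch the preset until the next press
    pygame.time.wait(50)
```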
The omnithrottle is when you angle the left hand stick around 90 degrees downwards so that it looks like a throttle. You retain the three axes and the extra buttons and hats, giving you more freedom.
There’s a limit to what you can learn about cells from 2D pictures, but creating 3D images is a time-intensive process. Now, scientists from UT Southwestern have developed a new “simple and cost-effective” device capable of capturing multi-angle photos that can be retrofitted onto existing lab microscopes. The team say their solution — which involves inserting a unit of two rotating mirrors in front of a microscope’s camera — is 100 times faster than converting images from 2D to 3D.
Currently, this process involves collecting hundreds of photos of a specimen that can be uploaded as an image stack into a graphics software program, which then performs computations in order to provide multiple viewing perspectives. Even with a powerful computer, those two steps can be time-consuming. But, using their optical device, the team found they could bypass that method altogether.
What’s more, they claim their approach is even faster as it requires only one camera exposure instead of the hundreds of camera frames used for entire 3D image stacks. They discovered the technique while de-skewing the images captured by two common light-sheet microscopes. While experimenting with their optical method, they realized that when they used an incorrect amount of de-skew the projected image seemed to rotate.
“This was the aha! moment,” said Reto Fiolka, assistant professor at the Lyda Hill Department of Bioinformatics at UT Southwestern. “We realized that this could be bigger than just an optical de-skewing method; that the system could work for other kinds of microscopes as well.”
Using their modified microscope, the team imaged calcium ions carrying signals between nerve cells in a culture dish and looked at the circulatory system of a zebrafish embryo. They also rapidly imaged cancer cells in motion and a beating zebrafish heart, and applied the optical unit to additional microscopes, including light-sheet and spinning disk confocal systems.
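The geometry behind that observation is easy to reproduce: for a parallel projection, shifting each depth slice sideways in proportion to its depth and then summing over depth is equivalent to projecting the volume along a tilted direction, so varying the shear sweeps the apparent viewing angle. A toy NumPy sketch (not the team’s code; the wrap-around from np.roll is a simplification):

```python
import numpy as np

# Toy volume: a bright diagonal rod inside a (Z, Y, X) image stack
Z, Y, X = 32, 64, 64
vol = np.zeros((Z, Y, X))
for z in range(Z):
    vol[z, 20:44, 16 + z // 2] = 1.0

def sheared_projection(vol, shear):
    """Shift each z-slice by shear*z pixels along x, then sum over z.
    Different shear values correspond to different viewing angles."""
    out = np.zeros(vol.shape[1:])
    for z in range(vol.shape[0]):
        out += np.roll(vol[z], int(round(shear * z)), axis=1)
    return out

# Five projections that appear to rotate the specimen as the shear varies
views = [sheared_projection(vol, s) for s in (-1.0, -0.5, 0.0, 0.5, 1.0)]
```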
A British right to repair law comes into force today, requiring manufacturers to make spares available to both consumers and third-party repair companies.
However, despite claiming to cover “televisions and other electronic displays,” the law somehow excludes smartphones and laptops…
The European Union introduced a right to repair law back in March, and the UK agreed prior to Brexit that it would introduce its own version.
From Thursday, manufacturers will have to make spares available to consumers, with the aim of extending the lifespan of products by up to 10 years, it said […]
The right to repair rules are designed to tackle “built-in obsolescence” where manufacturers deliberately build appliances to break down after a certain period to encourage consumers to buy new ones.
Manufacturers will now be legally obliged to make spare parts available to consumers so appliances can be fixed.
Which? notes that the UK law ensures spares are available for either 7 or 10 years after the discontinuation of a product, but that it only covers four specific consumer product categories (plus some commercial/industrial ones).
Spare parts will have to be available within two years of an appliance going on sale, and up until either seven or 10 years after the product has been discontinued, depending on the part. Some parts will only be available to professional repairers, while others will be available to everyone, so you can fix it yourself.
For now, the right to repair laws only cover:
Dishwashers
Washing machines and washer-dryers
Refrigeration appliances
Televisions and other electronic displays
They also cover non-consumer electronics, such as light sources, electric motors, refrigerators with a direct sales function (e.g. fridges in supermarkets, vending machines for cold drinks), power transformers and welding equipment.
However, while you would expect “other electronic displays” to include iPhones, iPads, and most Macs, Which? states that these product categories are excluded.
Cookers, hobs, tumble dryers, microwaves or tech such as laptops or smartphones aren’t covered.
A cynical person might suspect some behind-the-scenes lobbying by Apple and other phone and computer brands…
As you dive deeper into the world of electronics, a good oscilloscope quickly becomes an indispensable tool. However, for many use cases where you’re debugging low voltage, low speed circuits, that expensive oscilloscope is using only a fraction of its capabilities. As a minimalist alternative for these use cases, [fhdm-dev] created Scoppy, a combination of firmware for the Raspberry Pi Pico and an Android app that together form a functional oscilloscope.
As you would expect, the specifications are rather limited, capturing a maximum of 100 kpts at a speed of 500 kS/s shared between the two channels. Without some additional front-end circuitry to protect the Pico, the input voltage is limited to 0-3.3 V. Neither the app nor the firmware is open source, and getting access to the second channel and removing ads requires a ~$3 in-app purchase. Even so, we can still think of plenty of practical uses for a ~$7 oscilloscope. If you do decide to add some front-end circuitry to change the voltage range, you can define the ranges in the app and switch between them by pulling certain GPIO pins high or low. The app has most of the basic oscilloscope features covered: continuous and single-shot capture, adjustable trigger settings, and a scalable waveform display.
If you’re thinking of buying a new One SL, you ought to keep in mind that it’ll only work with the newer Sonos S2 app.
This won’t be a problem for every Sonos owner, especially if you bought all your Sonos devices in the past year or two. It might be an issue, however, if you’re still operating a mix of newer and older Sonos hardware. Namely, the “legacy” Sonos products that were “killed off” last year. Those legacy gadgets will only work with the S1 app, and although Sonos committed to providing updates for these devices, controlling a mix of legacy and current Sonos gadgets isn’t possible on the S2 app.
You can’t roll back the update, which basically only seems to add rounded corners to backgrounds and break dark mode; except that, with S2, you also allow Sonos to spy on you through the built-in microphone.
The new technique uses a computer to convert attempted handwriting movements from brain activity into on-screen text. As part of their tests, the team worked with a 65-year-old participant (named T5 in the study) who was paralyzed from the neck down due to a spinal cord injury sustained in 2007.
The researchers started by placing two brain chip implants into T5’s motor cortex — the part of the brain that controls movement. They told the participant to imagine he was writing normally with a pen on a piece of ruled paper. The brain chips then sent his neural signal through wires to a computer where an AI algorithm essentially transcribed his “mindwriting” by decoding hand and finger motion.
The end result saw T5 reach a writing speed of about 18 words per minute with 94.1 percent accuracy. Comparatively, an able-bodied adult of a similar age can type about 23 words per minute on a smartphone.
Combined with today’s massive flat panel displays, a nice surround sound system can provide an extremely immersive environment for watching movies or gaming. But a stumbling block many run into is speaker placement. The front speakers generally just go on either side of the TV, but finding a spot for the rear speakers that’s both visually and acoustically pleasing can be tricky.
Which is why [Peter Waldraff] decided to take a rather unconventional approach and hide his rear surround sound speakers in a pair of functioning table lamps. This not only looks better than leaving the speakers out, but raises them up off the floor and into a better listening position. The whole thing looks very sleek thanks to some clever wiring, to the point that you’d never suspect they were anything other than ordinary lamps.
The trick here is the wooden box located at the apex of the three copper pipes that make up the body of the lamp. [Peter] mounted rows of LEDs to the sides of the box that can be controlled with a switch on the bottom, which provides light in the absence of a traditional light bulb. The unmodified speaker goes inside the box, and connects to the audio wires that were run up one of the pipes.
In the base, the speaker and power wires are bundled together so it appears to be one cable. Since running the power and audio wires together like this could potentially have resulted in an audible hum, [Peter] only ran 12 VDC up through the lamp to the LEDs and used an external “wall wart” transformer. For convenience, he also put a USB charging port in the center of the base.
The hack involves popping open the case of the watch and exposing the back of the main PCB. There, a series of jumpers control various features. [Ian]’s theory is that this allows Casio to save on manufacturing costs by sharing one basic PCB between a variety of watches and enabling features via the jumper selection. With a little solder wick, a jumper pad can be disconnected, enabling the hidden countdown feature. Other features, such as the multiple alarms, can be disabled in the same way with other jumpers, suggesting lower-feature models use this same board too.
It’s a useful trick that means [Ian] now always has a countdown timer on his wrist when he needs it. Excuses for over-boiling the eggs will now be much harder to come by, but we’re sure he can deal. Of course, watch hacks don’t have to be electronic – as this custom transparent case for an Apple Watch demonstrates. Video after the break.
With Galaxy Upcycling at Home, users can easily turn their old Galaxy devices into smart home devices like a childcare monitor, a pet care solution and other tools that meet individual lifestyle needs.
Make Any Home a Smart Home
The Galaxy Upcycling at Home program provides enhanced sound and light-control features, by repurposing built-in sensors. Users can transform their old devices through SmartThings Labs, a feature within the SmartThings app.
[…]
For a device to continuously detect sound and light, it needs to be actively operating for long periods of time. For this reason, Samsung equipped the Galaxy Upcycling at Home upgrade with battery optimization solutions to minimize battery usage. Devices will also be able to connect effortlessly to SmartThings, allowing them to interact with countless other IoT devices in the SmartThings ecosystem.
The Logitech Voice M380 wireless mouse looks and acts like a regular mouse but with a special button to initiate voice dictation. Baidu claims its speech recognition facilitates content creation at two to three times the speed of typing.
The device supports dictation in Chinese, English, and Japanese, and can translate content to English, Japanese, Korean, French, Spanish, and Thai. However, as of this month, you can only pick it up in China. There’s no word on when or if it will be available elsewhere.
The Logitech M380 Baidu voice mouse.
The mouse uses Baidu’s AI open platform Baidu Brain speech technology. The Chinese tech company said of the platform:
As of September 2020, Baidu Brain has developed more than 270 core AI capabilities and created over 310,000 models for developers.
Baidu Brain is made of a security module and four components: a foundation layer (uses the open-source Chinese deep learning platform PaddlePaddle, Kunlun AI processors, and databases); the so-called “perception” layer (aggregates the company’s algorithms in voice technology, computer vision and AR/VR); a cognition layer (integrates new information); and a platform layer.
[…]
The mouse comes in three colours: graphite, rose, and off-white. It costs around $30 (£22, €25).
Apparently, if the GPS on your shiny new DJI FPV Drone detects that it’s not in the United States, it will turn down its transmitter power so as not to run afoul of the more restrictive radio limits elsewhere around the globe. So while all the countries that have put boots on the Moon get to enjoy the full 1,412 mW of power the hardware is capable of, the drone’s software limits everyone else to a paltry 25 mW. As you can imagine, that leads to a considerable performance penalty in terms of range.
But not anymore. A web-based tool called B3YOND promises to reinstate the full power of your DJI FPV Drone no matter where you live by tricking it into believing it’s in the USA. Developed by the team at [D3VL], the unlocking tool uses the new Web Serial API to send the appropriate “FCC Mode” command to the drone’s FPV goggles over USB. Everything is automated, so this hack is available to anyone who’s running a recent version of Chrome or Edge and can click a button a few times.
A pair of good dice is a guilty pleasure for a tabletop RPG gamer. You can never have enough, but I can tell you this: None are going to be as flashy as Pixels. These dice have an ace up their sleeve that the rest of your dice don’t have, because they light up and allow you to play online.
Yes, Pixels are electronic dice. Externally they look like ordinary resin dice, but when you throw them, their numbers light up using programmable LEDs. This alone would be enough for many players to smash the buy button on Kickstarter, where the product has already raised $2 million. But there’s more: The Pixels have a Bluetooth connection.
These days it’s not easy to get together with friends to play. Lockdowns have made things very complicated, but even without a pandemic, RPG players live in different cities, move to other countries, or simply can’t meet for several hours at other people’s houses on a regular basis. Online role-playing platforms that allow you to play different games over video calls are popping up, and these dice are compatible with popular services like D&D Beyond, Roll20, and Foundry.
Both the lights and the Bluetooth run on small batteries and one die lasts around five hours on a charge. You can also turn off the LEDs to get in 20,000 rolls before the battery dies. Charging is wireless and uses an inductor hidden under one of its faces. The dice are sold separately or in kits containing D20, D12, D10, D8, D6, and D4 models.
Graphically design your farm by dragging and dropping plants into the map. The game-like interface is learned in just a few minutes so you’ll have the whole growing season planned in no time.
Farm from Anywhere
The FarmBot web app can be loaded on any computer, tablet, or smartphone with a modern web browser, giving you the power to manage your garden from anywhere at any time.
Using the manual controls, you can move FarmBot and operate its tools and peripherals in real-time. Scare birds away while at work, take photos of your veggies, turn the lights on for a night time harvest, or simply impress your friends and neighbors with a quick demo.
The sky is a fascinating place, but the real interesting stuff resides far beyond the thin atmosphere. The Universe, the Milky Way and our Solar System is where it’s at. To be able to peer far out through the sky and observe the galaxy and beyond, one needs a telescope.
This Instructable follows my journey as I develop a miniature GOTO telescope. We’ll look through some of the research I perform, glimpse at my design process, observe the assembly & wiring processes, view instructions for the software configuration and then finally step outside to scope out the cosmos.
The Micro Scope Features.
Raspberry Pi 4B & HQ Camera.
300mm Mirror Lens.
Canon EOS Lens compatible.
NEMA 8 Geared Stepper Motors.
Fully GOTO with tracking.
GPS.
WiFi Enabled.
GT2 Belt Drive.
Hand Controller.
3D Printed Parts.
Tripod.
OnStep Telescope Mount GOTO Controller.
INDI Server.
KStars/Ekos.
Bill Of Materials & 3D Printable Parts.
The BOM & STLs are available from Thingiverse (4708262). However, I recommend downloading The Micro Scope Build Pack as it contains extras not available from Thingiverse!
If you own a 2013 SmartThings hub (that’s the original) or a SmartThings Link for the Nvidia Shield TV, your hardware will stop working on June 30 of this year. The device deprecation is part of the exodus from manufacturing and supporting its own hardware and the Groovy IDE that Samsung SmartThings announced last summer. SmartThings has set up a support page for customers still using those devices to help those users transition to newer hubs.
[…]
Those who purchased one of these products in the last three years (Kevin just missed the window with his March 2018 purchase of the SmartThings Link for the Nvidia Shield) can share their proof-of-purchase at Samsung’s Refund Portal to find out if they are eligible for a refund. And in a win for those of us worried about e-waste, Samsung is also planning to recycle the older gear (or it will at least send you a prepaid shipping label so you can send back the devices for theoretical recycling).
The design started with OBS, but this slick little keyboard turned into a system-wide assistant. It assigns the eight keys dynamically based on the program that has focus, and even updates the icon to show changes like the microphone status.
This is done with a Python script on the PC that monitors the running programs and updates the macro keeb accordingly using a serial protocol that [Sebastian] wrote. Thanks to the flexibility of this design, [Sebastian] can even use it to control the office light over MQTT and make the CO2 monitor send a color-coded warning to the jog wheel when there’s trouble in the air.
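Since neither [Sebastian]’s script nor his serial protocol is reproduced here, the sketch below is a hypothetical Python stand-in for the PC side. It assumes Linux with xdotool and pyserial installed, a made-up one-byte layout ID per program, and a device node of /dev/ttyACM0:

```python
import subprocess
import time

import serial  # pyserial

LAYOUTS = {"obs": 0x01, "firefox": 0x02}  # hypothetical layout IDs

def active_window_name():
    # xdotool prints the title of the currently focused window
    out = subprocess.run(["xdotool", "getactivewindow", "getwindowname"],
                         capture_output=True, text=True)
    return out.stdout.strip().lower()

port = serial.Serial("/dev/ttyACM0", 115200)  # assumption: keeb enumerates here
last = None
while True:
    name = active_window_name()
    layout = next((v for k, v in LAYOUTS.items() if k in name), 0x00)
    if layout != last:
        port.write(bytes([layout]))  # tell the keyboard to switch layouts
        last = layout
    time.sleep(0.5)
```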
This project is wide open with fabulous documentation, and [Sebastian] is eager to see what improvements and alternative enclosure materials people come up with. Be sure to check out the walk-through/build video after the break.
Since late January, most users running a pre-installed Lenovo image of Windows 10 have been bitten by a bug in Lenovo’s System Update Service (SUService.exe) causing it to constantly occupy a CPU thread. This was noticed by many ThinkPad and IdeaPad users as an unexpected increase in fan noise, but many desktop users might not notice the problem. I’m submitting this story to Slashdot because Lenovo does not provide an official support venue for their software, and the problem has persisted for several weeks with no indication of a patch forthcoming. While this bug persists, anyone with a preinstalled Lenovo image of Windows 10 will have greatly reduced battery life on a laptop, and greatly increased power consumption in any case. As a thought experiment, if this causes 1 million systems to increase their idle power consumption by 40 watts, this software bug is currently wasting 40 megawatts, or about 1/20th the output of a typical commercial power station. On my ThinkPad P15, this bug actually wastes 80 watts of power, so 40 watts per system is a very conservative number.
Lenovo’s official forums and unofficial Reddit pages have seen several threads pop up since late January with confused users noticing the issue, but so far Lenovo has yet to issue an official statement. Users have recommended uninstalling the Lenovo System Update Service as a workaround, but that won’t stop this power virus from eating up megawatts of power around the world for those who don’t notice its impact on system performance.
A new company called Metalenz, which emerges from stealth mode today, is looking to disrupt smartphone cameras with a single, flat lens system that utilizes a technology called optical metasurfaces. A camera built around this new lens tech can produce an image of the same, if not better, quality as traditional lenses, collect more light for brighter photos, and can even enable new forms of sensing in phones, all while taking up less space.
[…]
“The optics usually in smartphones nowadays consists of between four and seven lens elements,” says Oliver Schindelbeck, innovation manager at the optics manufacturer Zeiss, which is known for its high-quality lenses. “If you have a single lens element, just by physics you will have aberrations like distortion or dispersion in the image.”
More lenses allow manufacturers to compensate for irregularities like chromatic aberration (when colors appear on the fringes of an image) and lens distortion (when straight lines appear curved in a photo). However, stacking multiple lens elements on top of each other requires more vertical space inside the camera module.
[…]
Phone makers like Apple have increased the number of lens elements over time, and while some, like Samsung, are now folding optics to create “periscope” lenses for greater zoom capabilities, companies have generally stuck with the tried-and-true stacked lens element system.
[…]
Instead of using plastic and glass lens elements stacked over an image sensor, Metalenz’s design uses a single lens built on a glass wafer that is between 1×1 and 3×3 millimeters in size. Look very closely under a microscope and you’ll see nanostructures measuring one-thousandth the width of a human hair. Those nanostructures bend light rays in a way that corrects for many of the shortcomings of single-lens camera systems.
[…]
Light passes through these patterned nanostructures, which look like millions of circles with differing diameters at the microscopic level. “Much in the way that a curved lens speeds up and slows down light to bend it, each one of these allows us to do the same thing, so we can bend and shape light just by changing the diameters of these circles,” Devlin says.
[…]
And the design doesn’t just conserve space. Devlin says a Metalenz camera can deliver more light back to the image sensor, allowing for brighter and sharper images than what you’d get with traditional lens elements.
Another benefit? The company has formed partnerships with two semiconductor leaders (that can currently produce a million Metalenz “chips” a day), meaning the optics are made in the same foundries that manufacture consumer and industrial devices—an important step in simplifying the supply chain.
New Forms of Sensing
Metalenz will go into mass production toward the end of the year. Its first application will be to serve as the lens system of a 3D sensor in a smartphone. (The company did not give the name of the phone maker.)
Researchers have made a smart school of robotic fish that swarm and swim just like the real deal, and they offer promising insights into how developers can improve decentralized, autonomous operations for other gizmos like self-driving vehicles and robotic space explorers. Also, they’re just pretty stinking cute.
These seven 3D-printed robots, or Bluebots, can synchronize their movements to swim in a group, or Blueswarm, without any outside control, per research published in Science Robotics this month from the Harvard John A. Paulson School of Engineering and Applied Sciences and the Wyss Institute for Biologically Inspired Engineering.
Equipped with two wide-angle cameras for eyes, each bot navigates the tank by tracking the LED lights on its peers. Based on the cues they observe, each robot reacts accordingly, using an onboard Raspberry Pi computer and a custom algorithm to gauge distance, direction, and heading.
“Each Bluebot implicitly reacts to its neighbors’ positions,” explains Florian Berlinger, a PhD candidate at SEAS and Wyss and first author of the research paper, per a press release. “So, if we want the robots to aggregate, then each Bluebot will calculate the position of each of its neighbors and move towards the center. If we want the robots to disperse, the Bluebots do the opposite. If we want them to swim as a school in a circle, they are programmed to follow lights directly in front of them in a clockwise direction.”
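That rule is simple enough to sketch. The toy NumPy step below illustrates the described behavior (it is not the Blueswarm code): each of seven agents moves toward, or away from, the centroid of its neighbors’ 3D positions:

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(7, 3))  # seven agents in a unit "tank"

def step(pos, gain=0.05, aggregate=True):
    """One update: each agent heads toward (aggregate) or away from
    (disperse) the mean position of all the other agents."""
    new = pos.copy()
    for i in range(len(pos)):
        neighbors = np.delete(pos, i, axis=0)
        direction = neighbors.mean(axis=0) - pos[i]
        new[i] += gain * direction if aggregate else -gain * direction
    return np.clip(new, 0.0, 1.0)           # stay inside the tank walls

for _ in range(100):
    pos = step(pos, aggregate=True)          # the swarm contracts to a cluster
```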
Previous robotic swarms could navigate in two-dimensional spaces, but operating in three-dimensional spaces like air or water has proven tricky. The goal of this research was to create a robofish swarm that could move in sync all on their own without the need for WiFi or GPS and without input from their human handlers.