Tag Archives: robot

MIT Robot Sees Hidden Objects with Radio Waves

Creating robots that see the world like humans has been a challenge for scientists. While computer vision has come a long way, these systems are still easy to fool. So, why not give robots superhuman perception to compensate? MIT’s Fadel Adib created a robot that uses radio waves to find its target, allowing it to see through walls and other obstructions.

The robot, known as RF-Grasp, has traditional cameras for object recognition. The camera is mounted to the bot’s mechanical grasper, giving it a good view of anything the hand might be trying to pick up. However, what if the target is in a box or under something else? Radio waves can pass through the obstacle, and RF-Grasp can use the reflected signal to spot its target. 

To accomplish this, Adib and his team used radio frequency identification (RFID) tags, not unlike the ones used to identify pets or open secure doors. The reader sends out RF pings, which power and modulate the tag’s circuits. The reflected signal can carry data, but in this case, it’s used to track the physical location of the tag.
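
The article doesn’t detail how the reader converts those reflections into a position (MIT’s system does its own RF localization), but as a generic illustration of the principle, here is a minimal Python sketch that turns the strength of a tag’s reflected signal into a rough range estimate using the standard log-distance path-loss model. All of the constants are assumed example values, not figures from the research:

```python
# Generic illustration only -- not the MIT team's localization method.
# The log-distance path-loss model relates received signal strength (RSSI)
# to distance: RSSI(d) = RSSI(1 m) - 10 * n * log10(d), where n is the
# path-loss exponent for the environment.
def distance_from_rssi(rssi_dbm, rssi_at_1m=-45.0, path_loss_exponent=2.2):
    """Invert the model to estimate distance in meters (example constants)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

for rssi in (-45.0, -56.0, -67.0):
    print(f"RSSI {rssi:6.1f} dBm  ->  roughly {distance_from_rssi(rssi):.1f} m")
```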

For the purposes of testing RF-Grasp, the team deployed a small, focused RF reader next to the robot. The reader scans for RF tags in its field of view and feeds that data into the robot’s computer vision algorithm. So, when told to pick up an object it cannot see, RF-Grasp relies on the RF pings to seek out the target. Once it has uncovered the object, the robot is smart enough to give more weight to the camera feed in its algorithms. The team says merging data from the camera and RF reader into the bot’s decision-making was the most challenging part.
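
The article doesn’t publish the weighting scheme itself, so treat the following as a toy sketch of the general idea rather than RF-Grasp’s actual algorithm: blend an RF-derived position estimate with a camera-derived one, shifting trust toward the camera as visual confidence rises. The names and numbers are illustrative:

```python
# Toy illustration only -- not MIT's algorithm. Blend an RF position estimate
# with a camera position estimate, weighting by how confident the vision
# system is that it can actually see the target object.
import numpy as np

def fuse_estimates(rf_pos, cam_pos, cam_confidence):
    """cam_confidence in [0, 1]: near 0 while the object is still buried
    (trust the RF reader), near 1 once it is uncovered (trust the camera)."""
    w = float(np.clip(cam_confidence, 0.0, 1.0))
    return (1.0 - w) * np.asarray(rf_pos) + w * np.asarray(cam_pos)

rf_estimate = [0.42, 0.10, 0.05]    # meters, hypothetical RF reader output
cam_estimate = [0.45, 0.12, 0.06]   # meters, hypothetical detection output

print(fuse_estimates(rf_estimate, cam_estimate, cam_confidence=0.1))  # buried
print(fuse_estimates(rf_estimate, cam_estimate, cam_confidence=0.9))  # uncovered
```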

Compared with robots that have only visual data, RF-Grasp was much more efficient in laboratory tests involving picking and sorting objects. It can clear clutter out of its way to find its target, guided by RF data that tells it where to dig. For example, it can remove packing material from a box to find something at the bottom. Other robots simply don’t have this extra layer of guidance.

This technology could lead to robots that can find objects no matter where they’re hidden. Lost your keys? Just fire up the RF-Grasp Mk V and it’ll figure out which pocket of which coat they’re in. A more realistic application is in the warehouse industry. Robots like Boston Dynamics’ Stretch can pick up and move heavy boxes, but only if they’re visible and regularly shaped. A robot with RF sensing could sort through a messy shelf to find specific objects, not unlike a human. We could be one step closer to eliminating human labor in these environments. 

Boston Dynamics Unveils ‘Stretch’ Box Lifting Robot

Boston Dynamics has spent years posting creepy videos of lifelike robots, but it started selling its first real product last year in the form of a $75,000 robot dog called Spot. Now, the company has unveiled its second production model robot, and the first designed for commercial warehouse applications. It’s called Stretch, and you’ll be able to buy one next year. You might want to start saving up, though.

This is not Boston Dynamics‘ first foray into the world of box-hoisting robots. A few years ago, it revealed a machine called Handle. This bird-like robot scooted around on two-wheeled legs, dangling a mechanical arm with a grasper in front of it. The robot went on to make an appearance in the company’s 2020 farewell video, but it wasn’t the most practical design for a warehouse environment, and it looks like Boston Dynamics is moving on. 

The new Stretch robot is much less visually impressive (and disconcerting) than Spot or Handle. The boxy base conceals good old-fashioned wheels, and atop that sits the robotic arm, complete with a suction-cup grasper and seven degrees of freedom. There’s also a “perception mast” next to the arm that has cameras and laser sensors to help guide the automaton.

Stretch’s suction pad arm can lift boxes as heavy as 50 pounds (23 kilograms), which is about a third more mass than Handle could manage with its two-wheel design. However, the boxes need to be very boxy. If there’s no flat surface, Stretch can’t attach to it with the suction pad. Boston Dynamics didn’t specify, but I’d wager the computer vision system is also less able to identify oddly shaped objects. 

Boston Dynamics, which became part of Hyundai last year, designed Stretch in this way to make it useful to the maximum number of customers possible. It doesn’t require any existing automation infrastructure — it can roll down an aisle, go up a ramp into a truck, and stack the boxes anywhere there’s physical room for the robot to maneuver.

Boston Dynamics hopes to open sales of Stretch in 2022. For now, it’s looking for some partners who would like to test the robot as part of a pilot program. Interested parties can apply for access, but everyone else will have to wait for next year. Boston Dynamics hasn’t revealed the pricing, but it will no doubt be high. Spot is much less complex, and it’s 75 grand.

New Spherical Robot Could Explore Lunar Caves

We’ve all been laser-focused on Mars as a site for future human outposts, but let’s not forget about the Moon. It’s only marginally less habitable than Mars right now, and it’s a lot closer. Thanks to radiation and temperature variation, however, the safest place for a long-term human presence on the Moon might be underground. We don’t know much about the Moon’s subsurface environment, but the same was true of the Moon’s surface in the past. To explore these unseen depths, the European Space Agency is evaluating a spherical robot bristling with spinning cameras. 

The robot is known as DAEDALUS (Descent And Exploration in Deep Autonomy of Lunar Underground Structures), and it was designed by a team from Germany’s Julius-Maximilians-Universität Würzburg (JMU). DAEDALUS is currently a design study under consideration by the ESA, but the team has built some basic proof-of-concept hardware.

We don’t know what the environment will be like in lunar caves and lava tubes, so DAEDALUS has a generalist design for better adaptability. As envisioned by the JMU team, DAEDALUS will be a 46-centimeter transparent sphere with a cable tether at one end. Operators would lower it into a cave, using the tether to transmit data until the robot is released. 

The spherical design allows DAEDALUS to map its environment in a full 360 degrees with cameras and lidar. As it’s being lowered into a cave, the internal mechanism will spin laterally to photograph its surroundings. The cable will detach to allow the robot to roam, but it will double as a wireless antenna to ensure the robot remains connected as it explores.

Once it’s on the cave floor, the robot needs to flip over, which it does by shifting its battery packs to move its center of mass around the interior. Now on its “side,” DAEDALUS can roll around and explore the cave thanks to an internal motor that spins the outer plastic shell. While rolling, the camera system scans a full 360 degrees, and the lidar scans ahead and behind. However, there’s also a lidar scanning mode that flips things around: DAEDALUS extends legs that lock the outer surface in place, allowing the inner instrument cluster to spin. In doing so, it gives the lidar sensors a look at the full 360-degree space.

DAEDALUS sounds like a very clever robot, but there’s no information on when it might take shape as a real ESA project (if at all). First, humanity has to return to the Moon. A few uncrewed rovers have set down recently, but NASA might begin landing astronauts once again as part of the Artemis Program as soon as 2024.

ET Deals: Save $550 On Dell XPS 8940 Intel Core i5 Nvidia GTX 1660 Ti Gaming Desktop, Roborock S6 Robot Vacuum and Mop for $401

Today you can save $550 on a gaming desktop from Dell that comes equipped with an Intel Core i5 processor and an Nvidia GeForce GTX 1660 Ti graphics card. This makes the system well suited for gaming at 1080p resolutions. There’s also an excellent discount on a Roborock S6 robot vacuum that’s marked down to just $401.99.

  • Dell XPS 8940 Intel Core i5-10400 Gaming Desktop w/ Nvidia GeForce GTX 1660 Ti GPU, 16GB DDR4 RAM, 256GB NVMe SSD and 1TB HDD for $729.99 from Dell with promo code DTXPSAFF323 (List price $1,279.99)
  • Roborock S6 Robot Vacuum and Mop for $401.99 from Amazon with promo code ROBOROCKS6 (List price $649.99)
  • Dell Vostro 5000 Intel Core i7-10700 Desktop w/ Nvidia GeForce GT 730, 8GB DDR4 RAM, 256GB M.2 NVMe SSD and 1TB HDD for $779.00 from Dell (List price $1,427.14)
  • Logitech M330 Silent Plus Wireless Mouse for $12.99 from Amazon (Regularly $29.99)
  • Dell Vostro 15 7500 Intel Core i7-10750H 15.6-Inch 1080p Laptop w/ Nvidia GeForce GTX 1650 GPU, 8GB DDR4 RAM and 256GB NVMe SSD for $949.00 from Dell (List price $1,712.86)
  • Dell UltraSharp U2520D 25-Inch 2K USB-C Monitor + $100 Gift Card for $389.99 from Dell (List price $519.99)

Dell XPS 8940 Intel Core i5-10400 Gaming Desktop w/ Nvidia GeForce GTX 1660 Ti GPU, 16GB DDR4 RAM, 256GB NVMe SSD and 1TB HDD ($729.99)

Dell’s new XPS 8940 features an updated design, and it comes loaded with strong processing hardware that’s able to tackle just about any task you throw at it. The mid-range Intel Core i5-10400 with its six CPU cores is well suited for running numerous applications at the same time. As the system also has an Nvidia GeForce GTX 1660 Ti graphics card, it’s able to run games at high settings fairly well, making it a fitting machine for gaming and work. Currently, you can get one of these systems from Dell marked down from $1,279.99 to just $729.99 with promo code DTXPSAFF323.

Roborock S6 Robot Vacuum and Mop ($401.99)

This high-powered robot vacuum has 2,000Pa of suction power and it has a built-in mop function, which makes it a versatile cleaning tool for your home. The Roborock S6 was also built to be fairly quiet with an average cleaning volume of just 56dB. Currently, these robot vacs are selling on Amazon marked down from $649.99 to just $401.99 with promo code ROBOROCKS6.

Dell Vostro 5000 Intel Core i7-10700 Desktop w/ Nvidia GeForce GT 730, 8GB DDR4 RAM, 256GB M.2 NVMe SSD and 1TB HDD ($779.00)

Dell’s Vostro computers were designed as office and business solutions, and this Vostro 5000 is no different. It’s equipped with an Intel Core i7-10700 processor and 8GB of RAM, which gives the system solid performance that’s perfect for a wide range of office and work tasks. Dell is offering these systems for a limited time marked down from $1,427.14 to $779.00.

Logitech M330 Silent Plus Wireless Mouse ($12.99)

The M330 mouse from Logitech was built to be an affordable wireless mouse with solid performance. The mouse reportedly produces 90 percent less noise when clicked than a standard mouse, and it can last for up to two years on a single battery. Amazon is offering these mice at the moment marked down from $29.99 to just $12.99.

Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.

Hardware Accelerators May Dramatically Improve Robot Response Times

New robotics research at MIT suggests that long-standing bottlenecks in robot responsiveness could be alleviated through the use of dedicated hardware accelerators. The team also suggests it’s possible to develop a general methodology for turning a robot’s physical design into a specific accelerator template, which could then be deployed across various robot models. The researchers envision a combined hardware-software approach to the problem of motion planning.

“A performance gap of an order of magnitude has emerged in motion planning and control: robot joint actuators react at kHz rates,” according to the research team, “but promising online techniques for complex robots e.g., manipulators, quadrupeds, and humanoids (Figure 1) are limited to 100s of Hz by state-of-the-art software.”

Optimizing existing models and the code for specific robot designs has not closed the performance gap. The researchers write that some compute-bound kernels, such as calculating the gradient of rigid body dynamics, take 30 to 90 percent of the available runtime processing power in emerging nonlinear Model Predictive Control (MPC) systems.

The specific field of motion planning has received relatively little attention compared with collision detection, perception, and localization (a robot’s ability to orient itself in 3D space relative to its environment). In order for a robot to function effectively in a 3D environment, it first has to perceive its surroundings, map them, localize itself within the map, and then plan the route it needs to take to accomplish a given task. Collision detection is a subset of motion planning.

The long-term goal of this research isn’t just to perform motion planning more effectively; it’s also to create a template for hardware and software that can be generalized to many different types of robots, speeding both development and deployment. The two key claims of the paper are that per-robot software optimization techniques can be implemented in hardware through specialized accelerators, and that those techniques can be distilled into a design methodology for building said accelerators. This opens up a new field of robot-optimized hardware that the researchers dub “robomorphic computing.”

The team’s methodology relies on implementing an existing control algorithm once as a template that exposes both parallelism and matrix sparsity. The template’s parameters are then programmed with values that correspond to the capabilities of the underlying robot: zero values within the matrices correspond to motions a given robot is incapable of performing. For example, a humanoid bipedal robot would store non-zero values in the areas of the matrices that govern the motion of its arms and legs. A robot with a reversible elbow joint that can bend freely in either direction would be programmed with different values than a robot with a more human-like elbow. Because these specific models are derived from a common motion-planning template, the evaluation code for all of these configurations can be implemented in a specialized hardware accelerator.
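
To make the sparsity point concrete, here is a minimal Python sketch (my own illustration, not code from the paper). The rotation block of a link transform for a joint that only turns about the z-axis has zeros in fixed positions, so a kernel specialized to that robot can skip those entries entirely, and a robomorphic accelerator would simply never build circuitry for them:

```python
# Minimal sketch of morphology-derived sparsity, not the paper's implementation.
import numpy as np

def rz(theta):
    """Dense 3x3 rotation about z; four entries are structurally zero."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rotate_dense(theta, v):
    return rz(theta) @ v                       # 9 multiplies, 6 adds

def rotate_specialized(theta, v):
    """Same result, written against the known sparsity for this joint type:
    only the 2x2 upper block does real work; the z component passes through."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * v[0] - s * v[1],      # 4 multiplies, 2 adds
                     s * v[0] + c * v[1],
                     v[2]])

v = np.array([1.0, 2.0, 3.0])
assert np.allclose(rotate_dense(0.7, v), rotate_specialized(0.7, v))
print(rotate_specialized(0.7, v))
```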

The researchers report that implementing their proposed structure in an FPGA, as opposed to a CPU or GPU, reduces latency by 8x to 86x and improves overall response rates by 1.9x to 2.9x when the FPGA is deployed as a co-processor. Faster reaction times could allow robots to operate effectively in emergency situations where quick responses are required.

A key trait of robots and androids in science fiction is their faster-than-human reflexes. Right now, the kind of speed displayed by an android such as Data is impossible. But part of the reason is that today’s software can’t push robot actuators to their limits. Improve how quickly the machine can “think,” and we will improve how quickly it can move.

Boston Dynamics Says Goodbye to 2020 With a Robot Dance Party

One day, robots may be dancing on our graves, and they’re going to be surprisingly good at it! Boston Dynamics, the robotics firm once owned by Google and now a part of Hyundai, has posted another fascinating and mildly disconcerting video showing off the smooth movement and agility of its robots. This time, the company put together a little dance routine set to the 1962 hit track “Do You Love Me” by The Contours. 

The song, which peaked at number 3 on the Billboard charts, is less than three minutes long, but it’s jam-packed with robots. The video starts with Atlas, a 6-foot humanoid robot that has previously leaped on top of boxes and done a flip, getting down with its bad self. The clever thing about the video is how it ramps up. You start with the single robot, and just as you’re about to get bored, boom, there’s another Atlas dancing in lock-step with the first. They’ve got great rhythm — digital, I assume. 

Again, you don’t have time to truly come to terms with the lifelike movement of the humanoid robots, because here comes Spot just a minute later. This quadrupedal robot is the only product Boston Dynamics sells to the public — you can get your own for a mere $75,000. Although, I imagine it’s not easy to program it to dance like this. Still, this shows how limber Boston Dynamics’ robots can be with a skilled operator, similar to the “Uptown Funk” dance from 2018. Even the clunky-looking Handle box-lifting robot joins the fun, rolling around like Big Bird on wheels.

Boston Dynamics says in the video description that the demo features its “whole crew,” but there’s no sign of the classic BigDog robot that was the company’s first online hit. Presumably, it means just the bots it’s still actively developing. BigDog probably wasn’t agile enough to get its dance on anyway. 

Hyundai recently acquired 80 percent of Boston Dynamics from SoftBank for $880 million. SoftBank kept a 20 percent stake in the company via an affiliate but won’t have any say in how the company is run. Hyundai hasn’t announced any plans for Boston Dynamics, but at least the new management hasn’t put a stop to the company’s cheeky YouTube videos. The videos will have to do until we can all have robotic servants that definitely won’t rise up and destroy humanity while dancing to “Do You Love Me.” To answer that question: We kind of do, but only so we don’t have to be afraid.

MIT Designs Robot That Eliminates Coronavirus With UV Light

The United States is currently experiencing a surge in COVID-19 cases as states begin dropping restrictions and allowing businesses to open once again. With people venturing outside and returning to offices, it’s more important than ever to neutralize coronavirus particles on surfaces before they can add to the infection rate. MIT has developed a robot that navigates around spaces to blast the virus with UV light. The team has even tested the system at a Boston-area food bank with encouraging results. 

The most significant source of coronavirus particles is an infected person, but people can also leave behind virus particles on surfaces and drifting through the air, where they can remain infectious for several days. The UV robot comes from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) as a way to clear these unwanted visitors from public spaces using the power of ultraviolet radiation.

You’ve probably heard about the dangers of UV-A and UV-B light from doctors and the occasional sunscreen bottle. UV-C is a higher-energy form of ultraviolet radiation, with wavelengths between 100 and 280 nanometers; UV-A extends up to 400nm, and X-rays start around 10nm. That makes UV-C much more damaging to living organisms than UV-A and UV-B. Luckily, UV-C from the sun is absorbed by the atmosphere before reaching us. You can, however, use artificial UV-C radiation to effectively sterilize objects.

The robotic base of the CSAIL project comes from Ava Robotics, which makes telepresence machines. The team replaced the screen that usually sits on top of the robot with a custom ultraviolet lighting rig. MIT decided to test the system in the Greater Boston Food Bank (GBFB). Since UV-C is dangerous to all living organisms, the rig can only run when there’s no one around. Because the base is a telepresence robot, a remote operator can easily guide it around the GBFB facility by placing waypoints; later, the robot can follow those waypoints autonomously.

As the robot makes its way down the aisles at 0.22 miles per hour, the UV-C light sweeps over every surface. It takes just half an hour to cover a 4,000-square-foot area, delivering enough UV-C energy to neutralize about 90 percent of coronaviruses (and other organisms) on surfaces. Currently, the team’s focus is on improving the algorithms running the GBFB system, but that may lead to more robotic UV scrubbers. CSAIL hopes to use the data gathered at GBFB to design automated UV cleaning systems for dorms, schools, airplanes, and grocery stores.
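
As a quick back-of-the-envelope check on those figures (the speed and area come from the article; the implied width of the disinfected strip is derived here, not something MIT has stated):

```python
# Sanity-check the quoted coverage: 0.22 mph for 30 minutes over 4,000 sq ft.
MPH_TO_FT_PER_MIN = 5280 / 60

speed_ft_per_min = 0.22 * MPH_TO_FT_PER_MIN      # ~19.4 ft per minute
distance_ft = speed_ft_per_min * 30              # ~581 ft traveled in half an hour
implied_swath_ft = 4000 / distance_ft            # ~6.9 ft of effective width per pass

print(f"{speed_ft_per_min:.1f} ft/min, {distance_ft:.0f} ft in 30 min, "
      f"~{implied_swath_ft:.1f} ft of aisle covered per pass")
```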

Hands On With Nvidia’s New Jetson Xavier NX AI ‘Robot Brain’

Today Nvidia officially launched its most powerful card-sized IoT GPU ever, the Nvidia Jetson Xavier NX (dev kit $399). We covered the basics of the Xavier NX and its industry-leading MLPerf stats when it was announced in November, but since then we’ve had a chance to get our hands on an early version of the device and dev kit and do some real work on them. Along with the dev kit, Nvidia also introduced cloud-native deployment for Jetson using Docker containers, which we also had a chance to try out.

Nvidia Jetson Xavier NX by the Numbers

Built on Nvidia’s Volta architecture, the Jetson Xavier NX is a massive performance upgrade compared with the TX2 and becomes a bigger sibling to the Jetson Nano. It features 384 CUDA cores, 48 Tensor cores, and two Nvidia Deep Learning Accelerator (DLA) engines. Nvidia rates it at 21 trillion operations per second (TOPS) of deep-learning performance. Alongside the GPU is a reasonably capable six-core Nvidia Carmel ARM 64-bit CPU with 6MB of L2 and 4MB of L3 cache. The module also includes 8GB of 128-bit LPDDR4x RAM with 51.8GB/s of bandwidth.

All of that fits in a module the size of a credit card that consumes 15 watts — or 10 watts in a power-limited mode. As with earlier Jetson products, the Xavier NX runs Nvidia’s deep-learning software stack, including advanced analytics systems like DeepStream. For connectivity, the developer kit version includes a microSD slot for the OS and applications, as well as two MIPI camera connectors, Gigabit Ethernet, an M.2 Key E slot with Wi-Fi/Bluetooth, and an open M.2 Key M slot for an optional NVMe SSD. Both HDMI and DisplayPort connectors are provided, along with four USB 3.1 ports and a micro-USB 2.0 port.

Cloud-Native Deployment Thanks to Docker Containers

It’s one thing to come up with a great industrial or service robot product, but another to keep it up to date and competitive over time. As new technologies emerge or requirements evolve, updates and software maintenance become a major issue. With the Xavier NX, Nvidia is also launching its “cloud native” architecture as an option for deploying embedded systems. Now, I’m not personally a fan of slapping “cloud-native” onto technologies just because it is a buzzword. But in this case, at least the benefits of the underlying feature set are clear.

Basically, individual applications and services can be packaged as Docker containers and individually distributed and updated via the cloud. Nvidia sent us a pre-configured SSD loaded with demos, but I was also able to successfully re-format it and download all the relevant Docker containers with just a few commands, which was pretty slick.
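
In practice, that workflow boils down to pulling an updated image from a registry and restarting the service with GPU access. Here is a rough Python sketch using the Docker SDK; the registry path and tag are placeholders rather than the specific containers Nvidia shipped with the review unit:

```python
# Rough sketch of cloud-native-style deployment with the Docker SDK for Python.
# The image name and tag below are placeholders -- substitute the containers
# your application actually publishes.
import docker

client = docker.from_env()

IMAGE = "nvcr.io/nvidia/l4t-base"   # example NGC path; the tag is an assumption
TAG = "r32.4.3"

# Fetch the latest build of the service from the registry.
client.images.pull(IMAGE, tag=TAG)

# Run it with the NVIDIA container runtime so the container can use the GPU.
container = client.containers.run(
    f"{IMAGE}:{TAG}",
    command="uname -a",    # placeholder workload
    runtime="nvidia",
    detach=True,
)
print(container.id)
```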

Putting the Xavier NX Through Its Paces

Nvidia put together an impressive set of demos for the Xavier NX review units. The most sophisticated of them loads a set of Docker containers that demonstrate the variety of applications that might be running on an advanced service robot. That includes recognizing people in four HD camera streams, doing full-body pose detection for nearby people in another stream, gaze detection for someone facing the robot, and natural language processing using one of the BERT family of models and a custom corpus of topics and answers.

Nvidia took pains to point out that the demo models have not been optimized for either performance or memory requirements, but aside from requiring some additional SSD space, they still all ran fairly seamlessly on a Xavier NX that I’d set to 15-watt / 6-core mode. To help mimic a real workday, I left the demo running for 8 hours and the system didn’t overheat or crash. Very impressive for a credit-card-sized GPU!

Running multiple Docker container-based demos on the Nvidia Jetson Xavier NX.

The demo uses canned videos, as otherwise it’d be very hard to recreate in a review. But based on my experience with its smaller sibling, the Jetson Nano, it should be pretty easy to replicate with a combination of directly-attached camera modules, USB cameras, and cameras streaming over the internet. Third-party support during the review period was pretty tricky, as the product was still under NDA. I’m hoping that once it is out I’ll be able to attach a RealSense camera that reports depth along with video, and perhaps write a demo app that shows how far apart the people in a scene are from each other.

Developing for the Jetson Xavier NX

Being ExtremeTech, we had to push past the demos for some coding. Fortunately, I had just the project. I foolishly agreed to help my colleague Joel with his magnum opus project of creating better renderings of various Star Trek series. My task was to come up with an AI-based video upscaler that we could train on known good and poor versions of some episodes and then use to re-render the others. So in parallel to getting set up on my desktop with my Nvidia GTX 1080, I decided to see what would happen if I worked on the Xavier NX.

Nvidia makes development — especially video and AI development — deceptively easy on its Jetson devices. Its JetPack toolset comes with a lot of AI frameworks pre-loaded, and Nvidia’s excellent developer support sites offer downloadable packages for many others. There is also plenty of tutorial content for local development, remote development, and cross-compiling. The deceptive bit is that you get so comfortable that you just about forget that you’re developing on an ARM CPU.

At least until you stumble across a library or module that only runs on x86. That happened to me with my first choice of super-resolution frameworks, an advanced GAN-based approach called mmsr. Mmsr itself is written in Python, which is always encouraging as far as being cross-platform, but it relies on a tricked-out deformation module that I couldn’t get to build on the Jetson. I backed off to an older, simpler, CNN-based scaler, SRCNN, which I was able to get running. Training speed was only a fraction of my 1080’s, but that’s to be expected. Once I get everything working, the Xavier NX should be a great solution for grinding away on the inference side of the actual upscaling.
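
For reference, SRCNN is small enough to sketch in a few lines. This is my own PyTorch rendering of the classic three-layer (9-1-5) architecture, not the training code used for the Star Trek project; SRCNN expects an image that has already been upscaled (e.g., bicubic) to the target resolution and simply restores detail:

```python
# Rough PyTorch sketch of the SRCNN architecture (Dong et al.), for illustration.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.net(x)

# Quick shape check: a pre-upscaled single-channel frame in, same size out.
model = SRCNN()
frame = torch.rand(1, 1, 540, 960)
print(model(frame).shape)   # torch.Size([1, 1, 540, 960])
```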

Is a Xavier NX Coming to a Robot Near You?

In short, probably. To put it in perspective, the highly capable Skydio autonomous drone uses the older TX2 board to navigate obstacles and follow subjects in real time. The Xavier NX provides many times the performance (around 10x in pure TOPS numbers) in an even smaller form factor. It’s also a great option for DIY home video applications or hobby robot projects.

New Robot With ‘Emotional Intelligence’ Arrives at Space Station

The population of the International Space Station (ISS) is about to go up by one, but it won’t be another human occupant. It’ll be an AI-powered flying robot called CIMON-2, a follow-up to the experimental CIMON robot that debuted last year. The ISS’ newest AI bot should reach the station in a few days, and its designers hope CIMON-2 can prove even more useful to the crew than its predecessor.

The CIMON project is a collaboration between the German Aerospace Center (DLR), Airbus, and IBM that aims to design a robotic assistant that can save astronauts time on the ISS. Every minute on the station is valuable, and some of the research carried out there cannot be done anywhere else. CIMON can read out experimental procedures, record video, and even hold simple conversations with the crew. Astronauts can also tell CIMON to go snap a photo of something on the station and report back.

CIMON-2 and the original CIMON robot look like floating spheres with one flattened side. There’s a screen on the flat side that can display images or data, but at rest it shows a simple humanoid face. IBM seems to want to avoid any HAL 9000 comparisons by making CIMON look extra friendly.

The original CIMON debuted in mid-2018 and operated on the ISS for 14 months. Now, CIMON-2 is on its way to the station after launching aboard a SpaceX resupply mission on December 4th. While the new CIMON looks like the old one, IBM says it has improved the robot’s spatial awareness with ultrasonic sensors. It can also respond to human emotions thanks to IBM’s Watson Tone Analyzer. 

Non-robots holding CIMON-2 before its launch.

CIMON-1 proved that an AI-powered robot could operate on the ISS and understand commands given to it by the crew. The emotional angle for CIMON-2 could give the robot more context when interacting with humans. For example, CIMON-2 might be able to understand when a person is in a good mood, causing it to be extra chatty. If its humanoid partner is frustrated or preoccupied, it could change its behavior to be less distracting. IBM even sees a day when CIMON could detect group-think in a conversation and combat it by acting as a devil’s advocate. 

Airbus predicts that CIMON-2 will remain active on the ISS for at least three years. In the future, the companies want to put Watson-based AI on robots that will travel to the Moon and Mars to aid astronauts.

Japan’s Hayabusa2 Spacecraft Drops Off Its Last Robot on Asteroid Ryugu

Japan’s Hayabusa2 spacecraft has been hanging around the asteroid Ryugu for more than a year, dropping off robots and blasting the surface with metal slugs. The Japanese Space Agency (JAXA) hopes to bring Hayabusa2 and its precious cargo of asteroid samples home soon, but first, there’s just one more robot to deploy. 

Hayabusa2 carried several robots with it to Ryugu, and JAXA confirms the spacecraft released its final robotic explorer last night. The Minerva-II2 rover began its descent from an altitude of about 1 kilometer (0.6 miles), but it won’t reach the surface until early next week. After releasing the lander, Hayabusa2 moved back up to a higher orbit to monitor the robot’s progress. 

The first round of robotic explorers consisted of Minerva-II1A and Minerva-II1B, but they weren’t quite rovers. These drum-shaped robots had motors that allowed them to hop along the surface, taking photos and gathering temperature data along the way. This helped JAXA understand the structure of Ryugu, which was much more uneven than they had expected. In late 2018, Hayabusa2 dropped off another robot called MASCOT. This robot used a similar method of moving around the surface, but it was boxy and had no solar panels. It ran just 16 hours on internal battery power before shutting down. 

Minerva-II2 is the third distinct type of robot aboard Hayabusa2. Again, it’s mobile, but calling it a “rover” is a bit misleading. Minerva-II2 is drum-shaped like the previous Minerva explorers, but it’s larger than they were.

The Minerva-II2 lander is similar to JAXA’s last Minerva robots, but it’s substantially larger.

JAXA previously reported possible issues with Minerva-II2’s CPU. That might prevent it from relaying data to the spacecraft, but there’s no harm in the attempt at this point. Hayabusa2 has completed its primary mission, using tantalum slugs to blast material from the surface of Ryugu into its sample-collection compartment. Even if Minerva-II2 doesn’t work, Hayabusa2 can set course for Earth in the coming weeks with a wealth of data about Ryugu. The sample container should land on Earth in late 2020, giving scientists their first chance to study asteroid material that hasn’t been scorched by a trip through the atmosphere.
