
A robot that locates misplaced items

 


A harried commuter is about to walk out the door when they realize they've misplaced their keys and must rummage through piles of belongings to find them. If only they could tell which pile hides the keys before sifting through the clutter.

 

MIT researchers have developed a robotic system capable of performing precisely that function. RFusion is a robotic arm with a camera and a radio frequency (RF) antenna mounted on its gripper. It combines signals from the antenna with visual input from the camera to locate and retrieve an item, even if the item is buried beneath a pile and completely hidden from view.

 

The researchers' RFusion prototype relies on RFID tags, which are inexpensive, battery-free tags that can be attached to an item and reflect signals sent by an antenna. Because RF signals can pass through virtually any surface (including the mound of dirty laundry that may obscure the keys), RFusion is capable of locating a tagged item within a pile.

 

Using machine learning, the robotic arm automatically zeroes in on the object's exact location, moves the items on top of it, grasps the object, and verifies that it picked up the correct item. Because the camera, antenna, robotic arm, and AI are fully integrated, RFusion can operate in any environment without requiring any special setup.

 

While locating lost keys is beneficial, RFusion has the potential to be used for a variety of other tasks in the future, including sorting through piles to fulfill orders in a warehouse, identifying and installing components in an auto manufacturing plant, and assisting an elderly individual with daily tasks in their home, though the current prototype is not yet fast enough for these applications.

 

"The ability to locate items in a chaotic world is an open problem that we've been working on for several years. Having robots capable of searching for items beneath a pile is a growing requirement in industry today. At the moment, this is comparable to a Roomba on steroids, but in the near future, this could have a variety of applications in manufacturing and warehouse environments," said senior author Fadel Adib, associate professor of Electrical Engineering and Computer Science and director of the MIT Media Lab's Signal Kinetics group.

 

Co-authors include Tara Boroushaki, a research assistant; Isaac Perper, a graduate student in electrical engineering and computer science; Mergen Nachin, a research associate; and Alberto Rodriguez, a Class of 1957 Associate Professor in the Department of Mechanical Engineering. The findings will be presented next month at the Association for Computing Machinery's Embedded Networked Sensor Systems Conference.

 

Transmitting signals

 

RFusion begins by searching for an object with its antenna, which bounces signals off the RFID tag (much like sunlight bouncing off a mirror) to determine a spherical region in which the tag must be located. It then combines that sphere with the camera input, which narrows down the object's position; for instance, the camera can rule out places the item cannot be, such as an empty section of a table.
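As a rough illustration of this sphere-plus-camera intersection, the sketch below filters hypothetical camera-detected 3-D points against an RF range estimate. The function names, tolerance, and coordinates are invented for the example, not taken from the RFusion paper.

```python
import math

def filter_candidates(antenna_pos, measured_range, candidates, tol=0.05):
    """Keep camera-detected 3-D points whose distance from the antenna
    matches the RF range estimate within a tolerance (units: meters).
    This mimics intersecting the RF 'sphere' with visual candidates."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [p for p in candidates
            if abs(dist(antenna_pos, p) - measured_range) < tol]

# Invented example: antenna at the origin, tag roughly 1 m away.
candidates = [(1.0, 0.0, 0.0), (0.3, 0.3, 0.0), (0.0, 0.98, 0.1)]
hits = filter_candidates((0.0, 0.0, 0.0), 1.0, candidates)
```

Only the points lying near the 1-meter sphere survive; the point in the middle of the table is ruled out.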

 

Even once the robot has a rough idea of the item's location, however, it would still need to swing its arm widely around the room taking additional measurements, which is slow and inefficient.

 

The researchers trained a neural network using reinforcement learning to optimize the robot's trajectory to the object. In reinforcement learning, the algorithm is trained using a reward system through trial and error.
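The trial-and-error idea can be pictured with a toy tabular Q-learning update, a standard reinforcement learning method. RFusion's actual network and state space are far more complex; the states, actions, and parameters below are invented for illustration.

```python
def q_update(q, state, action, reward, next_best, alpha=0.5, gamma=0.9):
    """One tabular Q-learning update: nudge the value of (state, action)
    toward the observed reward plus the discounted best future value."""
    key = (state, action)
    old = q.get(key, 0.0)
    q[key] = old + alpha * (reward + gamma * next_best - old)
    return q[key]

q = {}
# Rewarded and punished actions diverge in value over repeated trials.
for _ in range(10):
    q_update(q, "near_pile", "move_closer", reward=1.0, next_best=0.0)
    q_update(q, "near_pile", "swing_away", reward=-1.0, next_best=0.0)
```

After a few trials the rewarded action's value climbs toward +1 and the punished one sinks toward -1, which is the "acquiring knowledge" Boroushaki describes.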

 

"This is also how our brain acquires knowledge. We are rewarded by our teachers, our parents, or a computer game, among others. In reinforcement learning, the same thing occurs. We allow the agent to make errors or perform well, and then we punish or reward the network. This is how the network acquires knowledge about something that is extremely difficult to model," Boroushaki explains.

 

In RFusion's case, the optimization algorithm was rewarded for minimizing both the number of moves required to localize the item and the distance traveled to retrieve it.
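A minimal sketch of such a reward, phrased as a negative cost. The weights are illustrative; the paper's actual reward formulation may differ.

```python
def reward(num_moves, distance_traveled, w_moves=1.0, w_dist=0.5):
    """Negative cost: fewer antenna measurements and a shorter arm
    trajectory both yield a higher reward. Weights are illustrative."""
    return -(w_moves * num_moves + w_dist * distance_traveled)
```

Under this shaping, a trajectory that localizes the item in three moves scores higher than one that needs five, all else equal.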

 

Once the system has determined the precise location of the object, the neural network uses combined RF and visual information to predict how the robotic arm should grasp it, including the angle of the hand and gripper width, as well as whether other items must be removed first. Additionally, it scans the item's tag one final time to ensure it picked up the correct item.
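A toy, rule-based stand-in for this grasp-prediction and verification step. RFusion uses a learned model for the prediction; the names, dimensions, and thresholds below are invented.

```python
def plan_grasp(obj_width_m, obstructing_items, max_opening=0.10):
    """Toy planner: pick a gripper opening slightly wider than the object
    and list any items that must be moved out of the way first."""
    if obj_width_m > max_opening:
        return None  # object too wide for this gripper
    return {"gripper_width": min(max_opening, obj_width_m + 0.02),
            "clear_first": list(obstructing_items)}

def verify_pickup(scanned_tag_id, target_tag_id):
    """Final RFID scan: confirm the grasped item is the one requested."""
    return scanned_tag_id == target_tag_id
```

For a 5-centimeter keychain under a sock, the planner would open the gripper to 7 centimeters and flag the sock for removal, then the final tag scan confirms the pickup.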

 

Removing clutter

 

The researchers evaluated RFusion in a variety of environments. They concealed a keychain in a cluttered box and a remote control beneath a pile of items on a couch.

 

However, if they fed all camera and RF data to the reinforcement learning algorithm, the system would have been overwhelmed. As a result, they summarized the RF measurements and limited the visual data to the area directly in front of the robot, similar to how a GPS consolidates data from satellites.
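One way to picture this summarization, using invented helper names and toy data rather than the system's actual pipeline: raw RF readings collapse to a few statistics, and the "image" is cropped to the patch in front of the robot.

```python
import statistics

def summarize_rf(ranges):
    """Collapse a stream of noisy RF range readings into a compact
    summary instead of feeding every raw measurement to the learner."""
    return {"median": statistics.median(ranges),
            "spread": max(ranges) - min(ranges)}

def crop_front(view, center, half=1):
    """Keep only the patch of a (toy) 2-D grid 'image' directly in
    front of the robot, discarding the rest of the scene."""
    r, c = center
    return [row[max(0, c - half):c + half + 1]
            for row in view[max(0, r - half):r + half + 1]]
```

Four noisy range readings become two numbers, and a 4x4 scene becomes a 3x3 patch, so the learning algorithm sees far less data per step.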

 

Their approach was successful — RFusion recovered objects that were completely hidden beneath a pile with a 96 percent success rate.

 

"Occasionally, if you rely solely on RF measurements, an outlier will occur, and if you rely solely on vision, a camera error will occur. However, if you combine them, they will compensate for one another. That is what contributed to the system's robustness," Boroushaki explains.
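A crude sketch of how two noisy estimates can cover for each other's outliers. The rejection rule, threshold, and one-dimensional setup are invented stand-ins for the system's actual fusion method.

```python
def fuse(rf_est, vision_est, prior, max_dev=0.5):
    """If one sensor's estimate deviates wildly from the prior, fall
    back on the other; otherwise average the two (all values in meters)."""
    rf_ok = abs(rf_est - prior) < max_dev
    vis_ok = abs(vision_est - prior) < max_dev
    if rf_ok and vis_ok:
        return (rf_est + vision_est) / 2
    if rf_ok:
        return rf_est
    if vis_ok:
        return vision_est
    return prior  # trust neither outlier
```

When the RF reading spikes to an outlier, the vision estimate carries the answer, and vice versa, mirroring the robustness Boroushaki describes.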

 

In the future, the researchers hope to increase the system's speed so that it moves continuously rather than stopping periodically to take measurements. That would enable RFusion to be deployed in a fast-paced manufacturing or warehouse environment.

 

Apart from its industrial applications, Boroushaki believes that a system like this could be integrated into future smart homes to assist people with a variety of household tasks.

 

"Billions of RFID tags are used each year to identify objects in today's complex supply chains, which include apparel and a variety of other consumer goods. The RFusion approach paves the way for autonomous robots that can dig through a pile of mixed items and sort them out using the data stored in the RFID tags, much more efficiently than manually inspecting each item, especially when the items look similar to a computer vision system," says Matthew S. Reynolds, CoMotion Presidential Innovation Fellow and associate professor of electrical and computer engineering at the University of Washington. "The RFusion approach is a significant step forward for robotics operating in complex supply chains, where quickly and accurately identifying and 'picking' the correct item is critical to completing orders on time and keeping demanding customers happy."
