
New Machine-Learning System Gives Robots Social Skills



As impressive as it is to see robots delivering food on college campuses or hitting a hole-in-one on the golf course, even the most advanced robots still cannot perform the basic social interactions that are critical to everyday human life.


Researchers at MIT have now incorporated certain social interactions into a framework for robotics, allowing machines to understand what it means to help or hinder one another, and to learn to perform these social behaviors on their own. In a simulated environment, a robot watches its companion, guesses the task it wants to accomplish, and then helps or hinders that companion based on its own goals.


The researchers also demonstrated that their model creates realistic and predictable social interactions. When humans were shown videos of these simulated robots interacting with one another, the viewers largely agreed with the model's assessment of the type of social behavior taking place.


Enabling robots to exhibit social skills could lead to smoother and more positive human-robot interactions. A robot in an assisted-living facility, for example, could use these capabilities to help create a more caring environment for elderly people. The new model may also allow scientists to measure social interactions quantitatively, which could help psychologists study autism or analyze the effects of antidepressants, the researchers say.


"We will soon be surrounded by robots, and they will need to learn how to communicate with us in a human way. They must be aware of when it is appropriate to lend a hand and when it is appropriate to consider what they can do to prevent a problem from occurring. Although this is preliminary research and we are only scratching the surface, I believe this is the first serious attempt to understand what it means for humans and machines to interact socially," says Boris Katz, principal research scientist and head of the InfoLab Group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds, and Machines (CBMM).


Joining Katz on the paper are co-lead author Ravi Tejwani, a research assistant at CSAIL; co-lead author Yen-Ling Kuo, a CSAIL PhD student; Tianmin Shu, a postdoctoral researcher in the Department of Brain and Cognitive Sciences; and senior author Andrei Barbu, a research scientist at CSAIL and CBMM. The paper will be presented at the Conference on Robot Learning in November, so stay tuned!


Developing a model of social interaction


To study social interactions, the researchers created a simulated environment in which robots pursue physical and social goals as they move around a two-dimensional grid.


A physical goal relates to the environment: for example, a robot's physical goal might be to navigate to a tree located at a specific point on the grid. A social goal involves guessing what another robot is trying to accomplish and then acting on that estimate, such as helping another robot water the tree.
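
To make this setup concrete, here is a minimal sketch of how such a grid world and its two kinds of goals could be represented in code. The class names, coordinates, and the help/hinder flag are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the 2-D grid world described above.
# All names here are illustrative, not the paper's implementation.
from dataclasses import dataclass
from typing import Optional, Tuple

Position = Tuple[int, int]

@dataclass
class PhysicalGoal:
    """A goal tied to the environment, e.g. reach the tree at a given grid cell."""
    target: Position

@dataclass
class SocialGoal:
    """A goal defined relative to another agent: help (+1) or hinder (-1)
    whatever physical goal that agent is estimated to be pursuing."""
    other_agent_id: int
    attitude: int

@dataclass
class Robot:
    position: Position
    physical_goal: Optional[PhysicalGoal] = None
    social_goal: Optional[SocialGoal] = None

# Example: robot 0 wants to reach the tree at grid cell (3, 4); robot 1's goal
# is to watch robot 0, infer that goal, and help it get there.
tree = PhysicalGoal(target=(3, 4))
robot0 = Robot(position=(0, 0), physical_goal=tree)
robot1 = Robot(position=(5, 5), social_goal=SocialGoal(other_agent_id=0, attitude=+1))
```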


The researchers' model specifies a robot's physical goals, its social goals, and how much emphasis it should place on one over the other. The robot is rewarded for actions that bring it closer to accomplishing its goals. If a robot is trying to help its companion, it adjusts its reward to match that of the other robot; if it is trying to hinder, it adjusts its reward to be the opposite. The planner, an algorithm that decides which actions the robot should take, uses this continually updated reward to guide the robot toward a blend of physical and social goals as it moves through the world.
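
The reward logic described above can be illustrated with a short sketch. Assume a helping robot adopts the other robot's estimated reward while a hindering robot adopts its opposite; the distance measure, weighting, and function names below are hypothetical, not taken from the paper.

```python
# Illustrative reward scheme: blend the robot's own physical progress with an
# estimate of the other robot's reward, matched when helping, negated when hindering.

def physical_reward(robot_pos, goal_pos):
    """Reward for progress toward the robot's own physical goal
    (here: negative Manhattan distance to the goal cell)."""
    return -(abs(robot_pos[0] - goal_pos[0]) + abs(robot_pos[1] - goal_pos[1]))

def total_reward(robot_pos, goal_pos, estimated_other_reward, attitude, social_weight=0.5):
    """attitude = +1 -> helping: adopt the other robot's estimated reward;
    attitude = -1 -> hindering: adopt its opposite;
    attitude =  0 -> purely physical agent.
    social_weight sets how much the social goal matters relative to the physical one."""
    return physical_reward(robot_pos, goal_pos) + social_weight * attitude * estimated_other_reward

# Example: a helper one step from its own goal, whose companion is estimated
# to be four steps away from the tree it wants to reach.
print(total_reward((2, 3), (3, 3), estimated_other_reward=-4, attitude=+1))  # -> -3.0
```

The planner would then pick, at each step, the action whose successor state maximizes this constantly updated reward.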


"We have developed a new mathematical framework for modeling the social interaction between two agents. If you are a robot that wants to travel to location X, and I am another robot that sees you trying to travel to location X, I can cooperate by helping you get to X faster. That might mean moving X closer to you, finding a better X, or taking whatever action you would have had to take at X. Our formulation allows the planner to discover the 'how'; we specify the 'what' in terms of what social interactions mean mathematically," says Tejwani.


It is important for a robot to balance its physical and social goals if its interactions are to be realistic, since people who help one another have limits to how far they will go. As Barbu points out, a rational person would not hand their wallet to a complete stranger.


Using this mathematical framework, the researchers defined three distinct types of robots. A level 0 robot has only physical goals and cannot reason socially. A level 1 robot has physical and social goals but assumes all other robots have only physical goals; it can act on the physical goals of other robots, such as helping or hindering them. A level 2 robot assumes that other robots also have social and physical goals, which allows it to take more sophisticated actions, such as joining in to help another robot.
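
One way to picture these three levels is as nested assumptions about other agents, as in the small sketch below. The class and attribute names are invented for illustration; only the level structure reflects the description above.

```python
# Illustrative sketch of the level 0 / 1 / 2 reasoning hierarchy described above.

class Robot:
    def __init__(self, level, physical_goal=None, social_attitude=0):
        self.level = level                      # 0, 1, or 2
        self.physical_goal = physical_goal      # e.g. a grid cell to reach
        self.social_attitude = social_attitude  # +1 help, -1 hinder, 0 none

    def model_of_others(self):
        """What this robot assumes about the robots around it."""
        if self.level == 0:
            return None                 # no social reasoning at all
        if self.level == 1:
            return Robot(level=0)       # others are assumed to be purely physical
        return Robot(level=1)           # level 2: others may themselves be social

# A level 1 helper models its companion as a purely physical (level 0) agent.
helper = Robot(level=1, social_attitude=+1)
print(helper.model_of_others().level)  # -> 0
```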


Evaluating the model


The researchers compared their model to human judgments of social interactions in 98 different scenarios involving level 0, 1, and 2 robots. Twelve human participants watched 196 video clips of the robots interacting and were then asked to estimate the robots' physical and social goals.


In most instances, the model's assessment of the social interactions occurring in each frame agreed with what the human viewers thought.
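
For readers curious how such an agreement check might be computed, here is a toy sketch comparing the model's label for each clip against the majority label given by the human viewers. The labels and numbers are made-up placeholders, not the study's data.

```python
# Toy agreement check between model labels and human majority labels per clip.
from collections import Counter

def majority_label(human_labels):
    """Most common label among the viewers of one clip."""
    return Counter(human_labels).most_common(1)[0][0]

def agreement_rate(model_labels, human_labels_per_clip):
    matches = sum(
        model == majority_label(humans)
        for model, humans in zip(model_labels, human_labels_per_clip)
    )
    return matches / len(model_labels)

# Example with three clips and three viewers each (placeholder data):
model = ["helping", "hindering", "helping"]
humans = [
    ["helping", "helping", "neither"],
    ["hindering", "hindering", "hindering"],
    ["neither", "helping", "helping"],
]
print(agreement_rate(model, humans))  # -> 1.0 for this toy example
```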


"Our long-term interest is both in developing computational models for robots and in delving more deeply into the human dimensions of these models. We want to learn what features humans extract from these videos to comprehend social interactions in general. Could a standardized test measure your ability to recognize social interactions? Perhaps there is a way to teach people to recognize and improve at these types of social interactions. Being able to effectively measure social interactions, while we are not all the way there yet, is a significant step forward," Barbu says.


Toward greater sophistication


The researchers are now working on a system with three-dimensional agents in an environment that allows a wider range of interactions, such as manipulating everyday household objects. They also plan to modify the model to include scenarios in which actions can fail.


The researchers also intend to incorporate a neural-network-based robot planner into the model, which can learn from experience and perform faster. Finally, they hope to run an experiment to collect data about the features humans use to determine whether two robots are engaging in a social interaction.


According to Barbu, "Hopefully, we'll have a benchmark that encourages all researchers to work on these social interactions and inspires the kinds of scientific and engineering advances that have been seen in other areas such as object and action recognition."


"I believe this is an excellent application of structured reasoning to a complex but urgent problem," says Tomer Ullman, assistant professor of psychology at Harvard University and director of the Computation, Cognition, and Development Lab. "While infants appear to understand social interactions such as helping and hindering, we do not yet have machines capable of this reasoning at a level comparable to humans. I believe that models such as the ones proposed in this work, in which agents consider the rewards of others and socially plan how best to thwart or support them, are an excellent step in that direction."
