Developing Artificial Intelligence That “Thinks” Like Humans

Developing human-like artificial intelligence takes more than mimicking human behavior; to be fully trusted, the technology must also process information, or 'think', in a way similar to humans.

Researchers at the University of Glasgow's School of Psychology and Neuroscience have used 3D modeling to analyze how Deep Neural Networks – a subset of machine learning – process information, visualizing how their processing matches that of humans. The findings are published in the journal Nature Communications.

The researchers hope this work will pave the way for more dependable artificial intelligence technology that processes information the way humans do and makes errors that are understandable and predictable to human observers.

One of the remaining challenges in artificial intelligence development is understanding how machines 'think' and whether their processing of information matches that of humans, so that the accuracy of their results can be trusted. Deep Neural Networks, a type of artificial intelligence used to model human decision-making, are frequently hailed as the best current model of such behavior, matching or even surpassing human performance on certain tasks. Yet when compared against humans, even deceptively simple visual discrimination tasks can reveal significant inconsistencies and errors in AI models.

Although Deep Neural Network technology is already used in applications such as face recognition, scientists do not fully understand how these networks process information, and therefore how and when errors can occur.

In this new study, the researchers addressed the issue by modeling the visual stimuli presented to the Deep Neural Network and transforming them in multiple ways, allowing them to test whether the network's recognition, which had previously been shown to match human performance, relied on the same information that humans use.
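The logic of that test can be sketched in miniature: present the same stimulus to a classifier under progressively stronger transformations and check whether its decision stays stable. Everything below is hypothetical and only illustrates the idea; the feature representation, the nearest-template classifier standing in for a deep network, and the caricaturing transform are toy assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: does a classifier's decision survive transformation?
# Each identity is a template vector of four facial "features" in [0, 1].
templates = {
    "identity_a": [0.9, 0.1, 0.4, 0.7],
    "identity_b": [0.2, 0.8, 0.6, 0.1],
}

def classify(face):
    # Nearest-template classifier standing in for a deep network.
    return min(
        templates,
        key=lambda k: sum((f - t) ** 2 for f, t in zip(face, templates[k])),
    )

def caricature(face, strength):
    # Exaggerate every feature away from the average face (0.5).
    return [0.5 + strength * (f - 0.5) for f in face]

face = [0.85, 0.15, 0.45, 0.65]  # a face close to identity_a
for strength in (1.0, 2.0, 4.0):
    transformed = caricature(face, strength)
    # The decision stays "identity_a" at each strength here, even though
    # the strongly caricatured face no longer looks natural to a human.
    print(strength, classify(transformed))
```

A decision that is stable under such transformations tells you *what* the model concludes, but not yet *which* information it used to get there, which is the question the study's rating procedure addresses.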

The study's senior author, Professor Philippe Schyns, Director of the University of Glasgow's Institute of Neuroscience and Technology, said of the findings: "When developing artificial intelligence models that behave 'like' humans, for example, when they recognize a person's face whenever they see it, just as a human would, we must ensure that the AI model uses the same information from the face that another human would use to make that recognition. The failure to do so may lead to us having the false impression that the system behaves identically to humans, only to discover later that the system makes mistakes in novel or untested situations."

The researchers created a series of randomly generated three-dimensional faces and asked participants to rate their similarity to four well-known individuals. They then used these data to determine whether Deep Neural Networks assigned the same ratings for the same reasons as humans did, testing not only whether humans and the AI made the same decisions, but also whether those decisions were based on the same information. Notably, the approach lets the researchers visualize their findings as the three-dimensional faces that drive human and network behavior. This matters because one such network, trained on 2,000 identities, classified faces correctly yet was driven most strongly by a heavily caricatured face, demonstrating that it identified faces correctly while processing facial information in a very different way than humans do.
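The comparison described above can be illustrated with a toy example: collect similarity ratings over the same set of generated faces from humans and from a network, and ask whether the two agree, and whether they rely on the same underlying features. The feature model and rating functions below are entirely hypothetical; the sketch only shows how agreement in ratings can coexist with disagreement in the information used.

```python
import math
import random

def pearson(xs, ys):
    # Pearson correlation between two equal-length rating lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)

# Hypothetical: each generated face is a vector of five shape features;
# similarity to a target identity is scored only through the features
# a given rater actually uses.
faces = [[random.gauss(0, 1) for _ in range(5)] for _ in range(200)]
target = [0.5, -0.2, 0.8, 0.1, -0.6]

def rating(face, weights):
    # Higher score = face judged more similar to the target identity.
    return -sum(w * (f - t) ** 2 for w, f, t in zip(weights, face, target))

human_weights = [1.0, 1.0, 1.0, 0.0, 0.0]    # humans use features 0-2
network_weights = [0.0, 0.0, 1.0, 1.0, 1.0]  # network uses features 2-4

human_ratings = [rating(f, human_weights) for f in faces]
network_ratings = [rating(f, network_weights) for f in faces]

# The ratings can correlate even though the information used overlaps
# in only one of five features, which is why comparing decisions alone
# is not enough.
agreement = pearson(human_ratings, network_ratings)
```

This is the crux of the study's argument: matching behavior (correlated ratings) does not guarantee matching computation (shared features), so both must be checked.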

These findings, the researchers hope, will pave the way for artificial intelligence technology that behaves more naturally and makes fewer unpredictable errors.
