Measuring Trust in Artificial Intelligence (AI)

According to researchers, public trust in AI varies significantly by application.

In response to the growing prominence of artificial intelligence (AI) in society, researchers at the University of Tokyo examined public attitudes toward the ethics of AI. Their findings quantify the effect of various demographic characteristics and ethical scenarios on these attitudes. The team developed an octagonal visual metric, analogous to a rating system, as part of this study. This metric could be useful for AI researchers interested in determining how their work is perceived by the public.

Many people believe that the rapid advancement of technology frequently outpaces the development of the social structures that guide and regulate it, such as law and ethics. AI in particular exemplifies this, having become pervasive in many people's daily lives seemingly overnight. This proliferation, combined with the relative complexity of AI compared to more familiar technology, has the potential to breed fear and mistrust of a critical component of modern living. Developers and regulators of AI technology would benefit from knowing who distrusts AI and in what ways, but such questions are difficult to quantify.

The University of Tokyo researchers, led by Professor Hiromi Yokoyama of the Kavli Institute for the Physics and Mathematics of the Universe, sought to quantify public attitudes toward ethical issues surrounding artificial intelligence. Through survey analysis, the team set out to answer two specific questions: how attitudes change depending on the scenario presented to a respondent, and how a respondent's demographic characteristics affect those attitudes.

Because ethics cannot be measured directly, the team gauged attitudes toward the ethics of AI using eight themes common to many AI applications that raise ethical concerns: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. These “octagon measurements,” as the group refers to them, were inspired by a 2020 paper by Harvard University researcher Jessica Fjeld and her colleagues.
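
To make the octagon concrete, the sketch below shows one way per-theme survey ratings could be aggregated into the eight vertices of such a chart. This is an illustration only: the theme names come from the article, but the rating scale, sample data, and function names are assumptions, not the team's actual methodology.

```python
import numpy as np

# The eight "octagon" themes from Fjeld et al. (2020), as listed in the article.
THEMES = [
    "privacy",
    "accountability",
    "safety and security",
    "transparency and explainability",
    "fairness and non-discrimination",
    "human control of technology",
    "professional responsibility",
    "promotion of human values",
]

def octagon_scores(ratings):
    """Average per-theme ratings across respondents for one scenario.

    `ratings` is an (n_respondents x 8) array of Likert-style scores
    (the actual response scale used in the study is an assumption here).
    Returns one mean score per theme: the eight vertices of the octagon.
    """
    ratings = np.asarray(ratings, dtype=float)
    return dict(zip(THEMES, ratings.mean(axis=0)))

# Hypothetical example: three respondents rating one scenario on a 1-5 scale.
sample = [
    [4, 3, 5, 2, 4, 3, 4, 5],
    [3, 3, 4, 3, 5, 2, 4, 4],
    [5, 4, 4, 2, 4, 3, 3, 5],
]
for theme, score in octagon_scores(sample).items():
    print(f"{theme}: {score:.2f}")
```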

Respondents evaluated a series of four scenarios against these eight criteria, each examining a different application of artificial intelligence: AI-generated art, AI-assisted customer service, autonomous weapons, and crime prediction.

Respondents also provided demographic information, including their age, gender, occupation, and level of education, and answered an additional set of questions assessing their interest in science and technology. This information was critical in determining which characteristics corresponded to specific attitudes.
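
As a rough illustration of how demographic breakdowns like these might be computed, the following sketch groups ratings by demographic attribute and compares group means. The table schema, column names, and values are invented for demonstration and are not the study's data.

```python
import pandas as pd

# Hypothetical tidy response table; the columns are illustrative, not the study's schema.
df = pd.DataFrame({
    "gender":   ["female", "male", "female", "male", "female", "male"],
    "age_band": ["18-29", "60+", "60+", "18-29", "30-59", "30-59"],
    "scenario": ["autonomous weapons"] * 6,
    "rating":   [2, 3, 1, 4, 2, 3],   # lower = greater ethical concern
})

# Mean perceived acceptability of one scenario, split by demographic group.
by_gender = df.groupby("gender")["rating"].mean()
by_age = df.groupby("age_band")["rating"].mean()
print(by_gender, by_age, sep="\n\n")
```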

“Previous research has established that risk is viewed more negatively by women, the elderly, and those with greater subject knowledge. Given how pervasive AI has become, I expected to see something different in this survey, but surprisingly, we saw similar trends,” Yokoyama explained. “What we did observe, however, was how the various scenarios were perceived, with the idea of AI weapons receiving significantly more skepticism than the other three.”

The team hopes that the findings will result in the development of a sort of universal scale for measuring and comparing ethical issues surrounding artificial intelligence. While this survey focused exclusively on Japan, the team has already begun collecting data in several other countries.

“With a universal scale, researchers, developers, and regulators can more accurately assess the acceptability of specific AI applications or impacts and take appropriate action,” Assistant Professor Tilman Hartwig explained. “One thing I discovered while developing the scenarios and questionnaire is that many topics in artificial intelligence require extensive explanation, far more than we realized. This demonstrates that there is a significant disconnect between perception and reality when it comes to artificial intelligence.”
