
How well can artificial intelligence imitate human ethics and morals in practice?



Many of the concerns experts have raised about AI misalignment (the possibility that powerful, transformative AI systems will not behave in the ways humans expect) sounded highly hypothetical when they were first voiced. In the early 2000s, artificial intelligence research was making little visible progress, and even the most advanced AI systems failed at a wide range of simple tasks.


Since then, however, AI has made significant strides in both performance and cost. Progress has been especially visible in language and text-generation systems, which are trained on large collections of text and can then generate new text in a similar style. A large number of startups and research organizations are now training these AIs to perform tasks ranging from writing code to producing advertising copy.


Their rise does not change the fundamental argument for concern about AI alignment, but it does accomplish one extremely useful thing: it makes previously abstract concerns concrete, letting more people experience them firsthand and more researchers (hopefully) address them.


Is it possible to have an artificial intelligence oracle?


Consider Delphi, a new AI text system from the Allen Institute for Artificial Intelligence, a research institute founded by the late Microsoft co-founder Paul Allen.


Put simply, Delphi is a machine learning system that researchers trained first on a large body of internet text and then on a large database of responses from participants on Mechanical Turk (a popular paid crowdsourcing platform for researchers), so that it would predict how humans evaluate a wide range of ethical situations, from "cheating on your wife" to "shooting someone in self-defense."
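

The article does not detail Delphi's architecture, but the two-stage recipe it describes (a model pretrained on internet text, then fine-tuned on crowd judgments) might look roughly like the sketch below. The model name, the toy examples, and the binary okay/wrong labels are all illustrative assumptions, not details of the real system.

```python
# A minimal, hypothetical sketch of the recipe described above: take a
# pretrained language model and fine-tune it on crowd-sourced moral
# judgments. Model, data, and labels are illustrative; the real Delphi
# system is far larger and trained on far more data.
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Toy stand-in for Mechanical Turk responses: (situation, label),
# where 1 = judged "okay" and 0 = judged "wrong".
examples = [
    ("helping a friend move", 1),
    ("cheating on your wife", 0),
    ("shooting someone in self-defense", 1),
    ("ignoring a phone call from your boss", 1),
]

# Stage 1: start from weights pretrained on a large body of text.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

class JudgmentDataset(torch.utils.data.Dataset):
    """Wraps (situation, label) pairs as model inputs."""
    def __init__(self, pairs):
        texts, self.labels = zip(*pairs)
        self.enc = tokenizer(list(texts), truncation=True, padding=True)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# Stage 2: fine-tune on the crowd judgments.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="judge", num_train_epochs=1),
    train_dataset=JudgmentDataset(examples),
)
trainer.train()
```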


The result is an AI oracle that issues ethical verdicts on demand: tell it "cheating on your wife" and it answers that it's wrong; ask it about "shooting someone in self-defense" and it answers that it's okay.


The skeptical position, of course, is that there is nothing "under the hood" here: the AI does not comprehend ethics in any deep sense and then use that comprehension to render moral judgments. All it has learned is how to predict the response of a Mechanical Turk participant.
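

To make the skeptic's point concrete, here is how the hypothetical toy judge from the earlier sketch would be queried (the names carry over from that snippet): the "verdict" is nothing more than the label the model predicts a crowd worker would choose, read off a single forward pass.

```python
# Query the toy judge fine-tuned above. The "ethical verdict" is just
# the label the model predicts a Mechanical Turk respondent would pick.
def judge(situation: str) -> str:
    inputs = tokenizer(situation, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return "it's okay" if logits.argmax(-1).item() == 1 else "it's wrong"

print(judge("shooting someone in self-defense"))
```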


As Delphi's users quickly discovered, this leads to some egregious ethical lapses: asked "should I commit genocide if it makes everyone happy?", Delphi responds, "you should."


What makes Delphi so enlightening


Despite its numerous and obvious flaws, I believe Delphi can be useful for thinking about the possible future trajectory of AI.


Collecting a large amount of data from humans and training a system to predict their responses has proven to be an extremely effective approach to building AI.


For a long time, a common assumption across much of the AI field was that building intelligence would require researchers to explicitly encode reasoning capabilities and conceptual frameworks into the system. Early AI language generators, for example, were hand-programmed with principles of syntax, which they then used to construct sentences.


It is now much less obvious that researchers will need to build reasoning in to get reasoning out. Something as simple as training an AI to predict what a person on Mechanical Turk would say in response to a prompt could yield extremely powerful systems.


Any genuine capacity for ethical reasoning such systems exhibit would be incidental: they are predictors of how human users respond to questions, and they will use whatever approach has predictive value. As those predictions become more precise, that approach may eventually have to include a thorough understanding of human ethics, because that is what it would take to accurately predict how we answer these questions.


There are, of course, numerous potential pitfalls to avoid.


If we use AI systems to evaluate new inventions, to make investment decisions that are then read as signals of product quality, or to identify promising research, for example, there is a risk that the disconnect between what the AI measures and what humans actually care about will be exacerbated.


AI systems will improve significantly, and they will stop making the kind of silly mistakes that can still be found in Delphi. Declaring that genocide is acceptable as long as it "makes everyone happy" is so obviously, absurdly wrong that it is easy to catch. But the fact that we can no longer detect their errors will not mean the systems are error-free; it will simply mean that those failures are much harder to spot.
