Why can humans make mistakes, but machines can’t?

By César A. Hidalgo

Translation | Wei Shuhao

Review | Sun Linyu, clefable

Before artificial intelligence (AI) technology became popular, computer scientist Henry Lieberman invited me to visit his laboratory at MIT. Henry was preoccupied with the idea that AI lacks common sense. To address this, he and his colleagues Catherine Havasi and Robyn Speer had been collecting common-sense statements from the Internet and analyzing how humans understand and reason about them, hoping to find ways to give AI more general capabilities.

People are used to judging good and bad

Common-sense statements such as "water (and rain) is wet" or "love (and hate) is a feeling" are obvious to humans but hard for machines to master; this remains one of the difficulties of artificial intelligence, and scholars are still trying to understand why. That day, Henry excitedly showed me a chart. Using principal component analysis (PCA), his team had compressed the high-dimensional set of common-sense concepts they collected down to the two dimensions that best differentiate all the words. These two dimensions were set as the horizontal and vertical axes, and words such as "love", "water" and "feeling" were scattered across the resulting picture.
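To make the idea concrete, here is a minimal Python sketch of the kind of dimensionality reduction described above; it is not the team's actual pipeline. It applies scikit-learn's PCA to placeholder word vectors and prints each concept's coordinates along the two strongest axes. The concept list and the random vectors are stand-ins: in the real project, each concept would be represented by features derived from the collected common-sense assertions.

```python
# Minimal sketch: project high-dimensional concept vectors onto their
# two most informative axes with PCA. The vectors here are random
# placeholders, used only to show the mechanics of the technique.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
concepts = ["love", "hate", "water", "rain", "feeling"]
vectors = rng.normal(size=(len(concepts), 50))  # placeholder 50-dim features

pca = PCA(n_components=2)          # keep the two directions of greatest variance
coords = pca.fit_transform(vectors)

for word, (x, y) in zip(concepts, coords):
    print(f"{word:>8}: axis1={x:+.2f}, axis2={y:+.2f}")
```

In the study described here, the first of those two axes turned out to be interpretable as "good versus bad".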


Henry said, "Using PCA to analyze common-sense knowledge is like trying to answer an ancient philosophical question with mathematics: what is human knowledge really about?" I asked him what the axes meant. After keeping me in suspense for a moment, Henry revealed the answer: the dimension that best distinguishes common-sense concepts is good versus bad.

In hindsight, this seems obvious. We make countless judgments of good and bad in everyday cognition and communication. We call the weather "good" or "bad"; we look for better jobs; we enjoy "good" music and avoid "bad" wine. People can, on reflection, accept that a hole in someone's sock is harmless and says nothing about their morality, yet in language we often cannot help slipping into moral criticism. Henry's chart shows that judgments of good and bad pervade human language, and that this moral judgment is also implicit in how we understand and reason about common-sense knowledge.

When AI makes mistakes, is it unforgivable?

Today, AI and the people who develop it draw a great deal of criticism, and the public's anger often makes sense. AI has been involved in a string of technology scandals: offensive mistakes in classifying photos, facial recognition errors leading to the arrest of innocent people, obvious bias in recidivism risk assessments, and gender stereotypes in machine translation. In most cases, AI researchers have listened to the public; by now they are well aware of these problems and are actively trying to solve them.

As the heat of this debate subsides, it is worth asking not only whether AI is "good" or "bad", but also what morality people themselves display when criticizing AI's mistakes. After all, the ethics we apply to AI are formulated by humans, so they too can be examined.

Over the past few years, my team and I have run dozens of experiments in which we recruited thousands of subjects and asked them to react to the behavior of humans and of AI. The experiments included scenarios such as a human or a machine operating an excavator and accidentally digging up a grave, or a human or a machine monitoring a tsunami warning system and failing to alert coastal towns in time. This comparison lets us go beyond judging AI in isolation and observe how people's judgments of machine behavior differ from their judgments of human behavior.

The difference may seem subtle, but it forces us to judge machine behavior against a more realistic reference standard. In the past we tended to measure AI against a standard of perfection, without asking how people would react if a human had behaved the same way and caused the same consequences. So what do these experiments tell us?

Judging the deed, not the intention

Early data already showed clearly that people respond differently to human and machine behavior. In accidents, for example, people are less forgiving of AI than of humans, especially when the AI causes substantial harm. With thousands of data points, we could study these questions in more depth and build a statistical model of how people judge humans and machines. The model successfully predicts how people score a case from the harm caused and the "degree of intention" attributed to the party at fault.

To our surprise, the model shows not that people criticize humans less harshly than AI, but that we apply different kinds of moral requirements to each. The figure below summarizes this finding: the blue and red planes show how people judge, on average, the behavior of other people and of machines respectively.

You can clearly see that the planes are not parallel; there is an offset between the red and blue ones. When judging a machine, people seem to care mostly about the outcome, that is, how much harm the event caused, which is why the red plane rises almost entirely along the harm dimension. When judging other people, however, the blue plane shows a curvature: it rises diagonally along both the harm and intention dimensions. This explains why people judge machines more severely in accidents: people judge machines in a consequentialist way, where intention is irrelevant, but that is not how they judge humans.
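As a rough illustration of this pattern, and not the study's actual model or data, the sketch below generates synthetic judgment scores in which machines are blamed mainly for harm while humans are blamed for harm combined with intention, then fits a separate plane (blame as a function of harm and intention) for each agent type. Under these assumptions the fitted intention slope comes out near zero for machines and clearly positive for humans, mirroring the offset and tilt of the two planes described above.

```python
# Illustrative sketch with synthetic data: fit "blame ~ harm + intention"
# separately for human and machine agents. Coefficients in the synthetic
# ground truth are assumptions chosen only to mimic the qualitative
# pattern reported in the text, not the study's estimates.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
harm = rng.uniform(0, 1, n)          # severity of the outcome
intent = rng.uniform(0, 1, n)        # perceived degree of intention
is_machine = rng.integers(0, 2, n)   # 0 = human agent, 1 = machine agent

# Synthetic scores: machines blamed for harm alone,
# humans blamed more when harm is paired with intention.
blame = np.where(is_machine == 1,
                 0.8 * harm,
                 0.4 * harm + 0.5 * harm * intent) + rng.normal(0, 0.05, n)

def fit_plane(mask):
    """Least-squares fit of blame on [1, harm, intention] for one agent type."""
    X = np.column_stack([np.ones(mask.sum()), harm[mask], intent[mask]])
    coef, *_ = np.linalg.lstsq(X, blame[mask], rcond=None)
    return coef  # [intercept, harm slope, intention slope]

for label, mask in [("human", is_machine == 0), ("machine", is_machine == 1)]:
    _, b_harm, b_intent = fit_plane(mask)
    print(f"{label:>7}: harm slope={b_harm:+.2f}, intention slope={b_intent:+.2f}")
```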

This simple model lets us draw an interesting conclusion: people judge the behavior of machines and of humans according to a rule of thumb that can be summarized as "judge humans by their motives, and machines by their outcomes."

The ubiquitous "double standard"

But what do these experiments imply for our understanding of people's moral intuitions? First, they tell us that people's moral ideas are not fixed. We may tell ourselves that we are principled, but in fact our judgments often shift depending on the person or thing being judged: in short, a double standard. And this wavering moral standard is not limited to judging humans versus machines.

Our moral wavering is not as simple as favoring one group. If people simply sided with humans over machines, the red and blue planes would be parallel, but that is not what we found. It means people do not merely like humans and dislike machines; they judge the two differently. Whereas we judge machines by consequentialist standards, we hold our fellow humans to a more deontological, Kantian standard.

But perhaps the most important point is that techniques developed for machines can help us understand human morality. In my research, we probed people's moral preferences by building a simple, multidimensional model of moral evaluation. In the work of Henry and his colleagues, a popular dimensionality-reduction technique was used to analyze the morality embedded in human common sense.

After telling me that the first dimension was "good versus bad", Henry asked me to guess what the second axis represented. "Easy versus difficult," he said. "Simply put, all common-sense knowledge is about what is 'good' or 'bad', and what is 'easy' or 'difficult'. The rest hardly matters."

Original link:

  https://www.scientificamerican.com/article/why-we-forgive-humans-more-readily-than-machines/


