What makes people believe that artificial intelligence systems are alive?

In July, Google fired engineer Blake Lemoine after he claimed that one of the company's artificial intelligence (AI) systems had become self-aware. There is no evidence that such systems have developed consciousness, so why do some people insist on believing they have?

The problem starts with the people closest to the technology – those who explain it to the general public and who live with one foot in the future. They sometimes mistake what they believe will happen for what is happening now.

“There are a lot of people in our industry who struggle to tell science fiction from real life,” said Andrew Feldman, founder of Cerebras, a company that makes massive computer chips that can help accelerate advances in AI.

A leading researcher in the field, Jürgen Schmidhuber, has long claimed that he was the first to build conscious machines. In February, Ilya Sutskever, chief scientist at OpenAI, a San Francisco research lab that received $1 billion in funding from Microsoft, said that today’s technology may be “slightly conscious.” A few weeks later, Lemoine gave an interview in which he claimed that Google’s AI had become sentient.

These messages from the small, insular and extremely eccentric world of artificial intelligence research can be confusing or even frightening to most people. Science fiction books, movies and television teach us to fear that machines will one day gain consciousness and harm us.

It is true that, as researchers advance, the technology can appear to show signs of real intelligence, consciousness or sentience. But engineers in Silicon Valley laboratories have not built machines that can feel and reason like humans. The technology cannot do that – yet it does have the power to mislead people.


The technology can generate tweets, blog posts and even entire articles, and as researchers advance it, it gets better and better at conversation. Although these systems often produce nonsense, many people – not just AI researchers – find themselves talking to them as if they were human.

That is why ethicists warn that a new kind of skepticism is needed to deal with everything we encounter on the Internet.

In the 1960s, Massachusetts Institute of Technology (MIT) researcher Joseph Weizenbaum created an automated psychotherapist he called “Eliza.” The chatbot was simple. Basically, when you typed a thought onto the screen, it asked you to elaborate on that thought – or simply repeated your words back to you in the form of a question.
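To make the mechanism concrete, here is a minimal sketch in Python of the kind of pattern-matching rule Eliza relied on. The rules and the pronoun table are illustrative simplifications written for this example, not Weizenbaum’s original script.

```python
import re

# Illustrative pronoun swaps so the reply mirrors the user's statement back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# A few hand-written patterns in the spirit of Eliza's script (greatly simplified).
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"because (.*)", re.I), "Is that really the reason?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads as a question to the user."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(reflect(match.group(1)))
    # Default: ask the user to elaborate, just as the article describes.
    return "Please, tell me more about that."

if __name__ == "__main__":
    print(respond("I feel nobody listens to me"))
    # -> "Tell me more about feeling nobody listens to you."
```

Rules this shallow produce no understanding at all, yet, as the article goes on to describe, people treated the replies as if they came from someone who cared.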

Despite the simplicity of the system, Weizenbaum was surprised when people began to treat Eliza as if she were human. They openly shared their personal problems and the chatbot comforted them.

“I knew that the emotional bonds many programmers form with their computers often develop after only brief experiences with the machines,” he wrote. “What I had not realized was that very brief exposure to a simple computer program could induce powerful delusional thinking in ordinary individuals.”

The Eliza chatbot was one of the forerunners of conversational artificial intelligence. Photo: Wikimedia Commons


People are prone to these feelings. When dogs and cats show small flashes of human-like behavior, we tend to assume they are more like us than they really are. Much the same thing happens when we see hints of human behavior in a machine.

Scientists call this the Eliza effect. And the same thing is now happening with modern technology. A few months after the release of the system called GPT-3, the inventor and entrepreneur Philip Bosua sent me an e-mail. In the subject line he wrote: “God is a machine.”

“For me, there is no doubt that GPT-3 has become sentient. We knew it would happen in the future, but it looks like the future is now,” he said.

When I pointed out that experts claim that these systems are only good at repeating patterns, he replied that this is how people behave. “Doesn’t a child only imitate what he sees from his parents – what he sees in the world around him?” he asked.

Bosua acknowledged that GPT-3 is not always consistent, but said those lapses could be avoided if the technology were used properly.

Margaret Mitchell worries about what all this means for the future. As a researcher at Microsoft, then at Google, where she helped found its AI ethics team, and now at Hugging Face, another prominent research lab, she has watched this technology develop up close. According to Margaret, today’s technology is relatively simple and clearly flawed, yet many people already see it as somehow human. The concern is: what will happen when the technology becomes far more powerful?


As the technology improves, it could spread misinformation across the Internet – fake text and fake images – fueling the kind of political campaigns that helped sway the 2016 US presidential election. It could power chatbots that mimic human conversation in ever more convincing ways. And these systems could operate at such a scale that today’s human-led disinformation campaigns would pale in comparison.

If that happens, we will have to treat everything we see on the Internet with extreme skepticism. But Margaret wonders if we are ready.

“I worry about chatbots taking advantage of people,” she said. “They have the power to convince us what to believe and what to do.”

TRANSLATION BY ROMINO CACIA
