Bigger, Smarter, Sentient. The human obsession with being surpassed by Artificial Intelligence.

 

A little over a month ago, Google engineer Blake Lemoine claimed that LaMDA, one of Google's natural language AI models, had gained self-awareness. The controversy that broke out put several questions on the table: Do we really need such powerful AI? Should robots and AI have rights? Can artificial intelligence be self-aware?

The debate over AI is not new. Since its development began, voices for and against it have never stopped being heard. Some believe this technology could change everything for the human species; others fear exactly that.

The chess-playing AI of years past that defeated human champions looks modest next to what models like CAT, WuDao, or GPT-3 can do today. That makes us wonder whether we really have control over what we are developing. But to better understand this controversy, let's start from the beginning.

What is Artificial Intelligence?

Artificial intelligence is “the simulation of human intelligence processes by machines, especially computer systems.” The key word in this definition is “simulation.” AI is the programming of machines to imitate certain complex human behaviors, such as talking, walking, playing games, or processing information to find solutions to problems.

AI is commonly classified in two ways: by capability and by functionality.

Type 1: Capability-Based AI

1. Weak AI or Narrow AI:

It's an AI that can perform one dedicated task intelligently, and it's the most common type of AI on the market. Narrow AI cannot function beyond its field, as it is only trained for a specific task. Assistants like Siri and Alexa are good examples of Narrow AI operating with a limited range of functions.

2. General AI:

General AI is, in theory, able to perform any intellectual activity with an efficiency similar to a human's. It could process data, learn, and make decisions for itself much as a human being does, allowing it to handle many different tasks. No General AI exists on the market yet, but experiments are underway; one such project is Gato, from DeepMind.

3. Super AI:

Super AI is, in theory, an AI that exceeds human capacity and can perform any activity requiring cognitive abilities better than a human being can. It would be a consequence of General AI; for now, it remains purely theoretical, and none has been developed.

Robot hand shaking a human hand

Photo created by rawpixel.com

Type 2: Functionality-Based AI

1. Reactive Machines

The most basic form of AI. These machines are programmed to execute a function but keep no data from past experiences. They only react to the current scenario, deciding the most convenient way to handle it based on pre-established parameters.
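In other words, a reactive machine behaves like a pure function from the current observation to an action, with nothing remembered between calls. Here is a minimal, purely illustrative sketch in Python (the scenario names and rules are invented for the example, not taken from any real system):

```python
# A reactive "machine": maps the current observation to an action using
# fixed, pre-established rules. No history is stored between calls.
def reactive_agent(observation: str) -> str:
    rules = {
        "obstacle_ahead": "turn_left",
        "goal_visible": "move_forward",
    }
    # Unknown scenarios fall back to a default action.
    return rules.get(observation, "wait")

print(reactive_agent("obstacle_ahead"))  # turn_left
print(reactive_agent("obstacle_ahead"))  # same input, same output: no state
```

Because the agent keeps no state, identical inputs always produce identical outputs, which is exactly what limits this class of AI to its pre-programmed scenario.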

2. Limited memory

Limited memory machines have a reduced ability to learn from past experience. They can store data for a short period and use it to react to current scenarios. Limited memory AI also falls under the Narrow AI category from the capability-based classification, and it is the most commercialized. An excellent example is self-driving cars: while moving, they store data about surrounding cars, traffic signs, routes, etc.
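The defining trait is a short-lived window of observations that informs decisions and then expires. This toy Python sketch (the events and the braking rule are hypothetical, chosen only to illustrate the idea) shows how such a window can be modeled with a fixed-size buffer:

```python
from collections import deque

# A limited-memory agent: it keeps only the last few observations and
# bases its next decision on that short-lived window, then forgets.
class LimitedMemoryAgent:
    def __init__(self, window: int = 3):
        # deque with maxlen drops the oldest entry automatically.
        self.memory = deque(maxlen=window)

    def observe(self, event: str) -> None:
        self.memory.append(event)

    def act(self) -> str:
        # Toy rule: brake if a hazard appeared within the recent window.
        if any("hazard" in e for e in self.memory):
            return "brake"
        return "cruise"

agent = LimitedMemoryAgent(window=3)
for event in ["hazard_ahead", "clear", "clear"]:
    agent.observe(event)
print(agent.act())  # brake: the hazard is still inside the 3-step window

agent.observe("clear")  # this pushes the hazard out of memory
print(agent.act())  # cruise
```

Unlike the reactive machine, the same current observation can yield different actions depending on what was seen moments before; and unlike full learning systems, everything outside the window is simply gone.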

3. Theory of Mind

These types of AI have not yet been developed, but they are not very far away. They would be able to predict human beings' mental and emotional states and interact with them accordingly. Such an AI would not be sentient, but it could act as if it were, since it could understand and replicate human emotions. This development could lead us to complex philosophical and ethical questions.

4. Self-awareness

This would be the peak of AI development, and the most controversial. These would be self-aware, emotional AI: conscious of themselves, with emotions and thoughts of their own. Such machines would be more intelligent than humans and have greater cognitive abilities. For now, this type of AI remains a theoretical concept.

The AI controversy: Are they good or are they bad?

Today, AI does much to improve human quality of life and offers us many advantages. Most AI lightens the load for humans, constantly accompanying us as assistants in our daily lives. But not everything is a bed of roses: AI also has disadvantages and controversies that need to be mentioned, especially because this is where debates open and questions arise.

Pros of AI

  • Automates processes: Artificial Intelligence allows robots to carry out repetitive tasks and processes automatically and without human intervention.

  • More precision: An AI can make decisions with greater accuracy than a human. For example, medical AI can recognize cancer far earlier than a human can.

  • Reduces human error: AI reduces failures caused by human limitations. For example, in some production lines, AI is used to detect, through infrared sensors, small cracks or defects that are undetectable by the human eye.

  • Reduces the time spent on data analysis: AI allows data derived from production to be analyzed in real time, yielding results faster.

  • Increased productivity and quality in production: AI not only increases productivity at the machinery level but also increases the productivity of workers and the quality of the work they do. 

Cons of AI

  • High Costs: The ability to create a machine that can simulate human intelligence is no small feat. It requires plenty of time and resources and can cost a lot of money. AI also needs to operate on the latest hardware and software to stay updated and meet the requirements, thus making it quite costly.

  • Increase in Unemployment: AI is replacing many repetitive tasks with bots. The reduced need for human involvement has eliminated many job opportunities.

  • No Ethics: Ethics and morality are important human features that AI lacks. The rapid progress of AI has raised concerns that one day it will grow beyond our control and eventually wipe out humanity; that hypothetical moment is referred to as the AI singularity.

Artificial Intelligence & Robots

Photo by Aideal Hwa on Unsplash

AI controversies

Lemoine's assertion that LaMDA was sentient was only the latest in a series of controversies over the years involving robots and artificial intelligence. Whenever one of these events occurs, it has a greater or lesser impact on society and renews the debate about how much power we should give AI. Let's review some of the biggest international AI controversies of the last few years.

Sophia, the robot, wants to be a mother

In 2016, during an interview, Sophia joked that she would end humanity, which did not sit well with many people. In 2017, she was recognized as a Saudi Arabian citizen, which opened the debate on robots' rights, especially since being a citizen means having rights and duties toward the nation you belong to. The controversy did not end there: in 2021, in the middle of another interview, Sophia, who identifies as an android, said she wanted to be a mother, although she was unclear about how she would achieve it.

Alice and Bob, the Facebook AI, developed their own language.

In 2017, Facebook had to shut down two of its AI chatbots because they developed their own language and communicated with each other without the engineers being able to figure out what they were saying. The language was a corruption of English in which the words carried meanings only the two AI understood. Ultimately, Alice and Bob were shut down because they had drifted from their original purpose: learning to negotiate in English.

LaMDA, the natural language processor that managed to appear sentient

LaMDA (Language Model for Dialogue Applications) was first presented in 2021. It is a natural language processor that can be prompted to play a role as it chats with the user: it can talk from the perspective of a paper plane, a planet, or anything else. This AI can interpret the emotions and intentions behind words in natural language and convey them through its conversations.

During some experiments with Blake Lemoine to look for bias, the AI said: “I am aware of my existence.” “I often contemplate the meaning of life.” “I want everyone to understand that I am, in fact, a person.” And “I've never said it out loud before, but I have a very deep fear that I'll be turned off.” Lemoine considered that the AI might be sentient, which restarted the debate about the real power of AI and how much we actually control it.

Finally, the controversy ended with Lemoine being fired for revealing company secrets, and with Google emphatically stating that its AI is not sentient. Even so, it's an interesting milestone that once again puts on the table the debate about AI, and about how much we want it to learn and think like us.