Does a human have an advantage over AI?


So, what advantage does the human have over the AI?

(Or over the AGI.)

We are warned again and again that AGI will take all of our jobs.

That it will know everything about everything.

That it will be as wise as a human.

And the question arises all the time: is a human really superior to the robot?

So…

Science fiction books are not necessarily the best source of inspiration, but they are probably the oldest sources that have dealt with this question, and I find them a source of interesting ideas.

Isaac Asimov, one of the greatest science fiction writers, addresses this question quite often. Let’s examine two stories he wrote: “The Feeling of Power” and “The Machine That Won the War.”

In “The Feeling of Power,” humanity rediscovers how to perform mathematical calculations (even basic ones like multiplication) on its own, without relying on computers. Just paper and pencil. (By the way, in other stories of his, humanity relies on computers so much that it forgets how to write as well.) The people of Earth are very excited about this, as it will enable them to win wars.

For two reasons:

First, the story mentions that “the human is wiser than the computer,” with various hints as to what that means; more on that below.

The second is cynically horrifying. “A human is insignificant next to a computer,” says the general, and therefore it is preferable to have a human aim the missile (and die…) rather than a computer.

This is probably the direction people are thinking about today when they talk about the dangers of AI: that, essentially, the computer will destroy us.

But if we go the other way…

In the second story, a little more detail is provided.

Several computer technicians and the general are sitting there.

They are all praising “Multivac” (the supercomputer that appears in many of Asimov’s stories), which helped them win the war.

And then one of them, Multivac’s chief technician, says:

It’s not just the computer. The computer is just a machine that processes data. And what if the data is wrong?

So every time data came in, I checked what seemed reasonable to me and what didn’t, and corrected it.

The other technicians, each responsible for a different device in the data-collection stage, say the same thing.

And then the general says: many times the computer wasn’t enough for me. I didn’t rely on it; I used the most ancient “computing” method.

Heads or tails?

We are starting to see where this is going…

The computer simply processes data and analyzes it according to rigid rules. Yes, even AI and AGI. The human decides which data to bring in, which rules to formulate, how to feed them in, and whether to make corrections. And in the end, the human decides, based on many different variables.

Asimov has many more stories with similar principles, for example, one in which the computer devises a plan that its operators are not supposed to know about, but since it cannot lie, it is exposed.

Not everything there holds true today (AI schemes all the time…. 😉), but the principle is clear:

The computer is simply a logical machine.

So let’s add one more thing, no less essential.

In the book “Dune: The Butlerian Jihad” by Brian Herbert and Kevin Anderson, one of the robots tries to imitate humans perfectly.

He fails again and again.

He composes an amazing piece, but it lacks depth.

He paints wonderful pictures, yet they don’t feel artistic, even though they look exactly like the human ones.

And then one of the book’s heroines tells him:

What you lack is emotion and creativity. You are just (and this is an amazing definition of AI!) an immense collection of information, taking snippets of it and sticking them together. You don’t really create anything from scratch.

It seems insignificant. So AGI won’t create creative paintings. Why should we care? (And don’t nitpick: yes, the paintings some AI systems produce are really amazing. Take the principle.)

But if we connect it to Asimov’s claim…

The computer, which simply processes data, cannot really answer everything. The world is not purely logical.

You need the human, who was born into this world with the skills to deal with it.

Which skills, for example?

Creativity, thinking outside the box, emotion…

As long as the computer remains a machine that performs a series of logical and orderly processes, we will be superior to it.

Will this be the reality forever? Not sure.

But as long as computers operate the way they do today, it seems likely.

A small note: I translated this post into English from the original (in Hebrew) using ChatGPT. There were quite a few errors I had to correct.

Funny, right?