Calculations show that it will be impossible to control a super-intelligent AI

The idea of artificial intelligence overthrowing humanity has been discussed for decades, and scientists have just delivered their verdict on whether we would be able to control a high-level computer superintelligence. The answer? Almost certainly not.

The catch is that controlling a superintelligence far beyond human comprehension would require a simulation of that superintelligence which we can analyze. But if we are unable to comprehend it, it is impossible to create such a simulation.

Rules such as “do not harm people” cannot be established if we do not understand the kinds of scenarios an AI might come up with, the authors of the new paper argue. Once a computer system operates at a level beyond the reach of our programmers, we can no longer set limits.

“Superintelligence poses a fundamentally different problem from those normally studied under the banner of ‘robot ethics,'” the researchers write.

“This is because a superintelligence has many facets and is therefore able to mobilize a variety of resources to achieve goals that may be incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning stems from the halting problem posed by Alan Turing in 1936. The problem centres on whether a computer program will reach a conclusion and an answer (so it halts), or simply loop forever trying to find one.

As Turing proved through some clever mathematics, while we can know whether certain specific programs will halt, it is logically impossible to find a method that would let us decide this for every potential program that could ever be written. That brings us back to AI, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.
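To see why no general method can exist, here is a minimal Python sketch of Turing’s classic proof by contradiction. The `halts` oracle is hypothetical; the whole point of the argument is that nothing could ever implement it:

```python
# Sketch of Turing's diagonalization argument. The function `halts`
# is a hypothetical oracle, assumed to exist only for contradiction.

def halts(program, argument):
    """Hypothetical oracle: returns True if program(argument) eventually
    stops, False if it runs forever. No real implementation can exist."""
    ...

def paradox(program):
    # Do the opposite of whatever the oracle predicts about a program
    # analysing itself.
    if halts(program, program):
        while True:   # loop forever if the oracle says we halt
            pass
    else:
        return        # halt immediately if the oracle says we loop

# Now ask: does paradox(paradox) halt?
# - If halts(paradox, paradox) is True, then paradox loops forever.
# - If it is False, then paradox halts at once.
# Either answer contradicts the oracle, so no general `halts` exists.
```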

Any program written to stop an AI from harming people and destroying the world, for example, may reach a conclusion (and halt) or not – it is mathematically impossible for us to be absolutely certain either way, which means such a program is not containable.
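The paper’s containment argument follows the same pattern. As a hedged illustration (the names `is_harmful`, `harm`, and `wrapper` are ours, not the paper’s), a perfect checker for harmful behaviour could be turned into a solver for the halting problem, which we already know cannot exist:

```python
# Sketch of the reduction: a perfect "containment check" would let us
# solve the halting problem, so no such check can exist.

def harm():
    """Stand-in for any behaviour the containment rule must forbid."""
    raise RuntimeError("world destroyed")

def is_harmful(program, argument):
    """Hypothetical perfect checker: True iff running program(argument)
    ever triggers harmful behaviour. Assumed to exist for contradiction."""
    ...

def halts(program, argument):
    """Would decide the halting problem -- which is impossible."""
    def wrapper(arg):
        program(arg)  # run the suspect program to completion...
        harm()        # ...then deliberately misbehave
    # wrapper(argument) is harmful exactly when program(argument) halts,
    # so a perfect is_harmful would answer the halting question directly.
    return is_harmful(wrapper, argument)
```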

“In effect, this renders the containment algorithm unusable,” says computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany.

The alternative to teaching the AI some ethics and telling it not to destroy the world – something which no algorithm can be absolutely certain of doing, the researchers say – is to limit the capabilities of the superintelligence. It could be cut off from parts of the internet or from certain networks, for example.

The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence; the argument goes that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?

If we are going to push ahead with artificial intelligence, we might not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we are going in.

“A super-intelligent machine that controls the world sounds like science fiction,” says computer scientist Manuel Cebrian of the Max Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently, without the programmers fully understanding how they learned it.”

“So the question is whether this could become uncontrollable and dangerous to humanity at some point.”

The research was published in the Journal of Artificial Intelligence Research.
