Learn with us what artificial intelligence means

"Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before, as long as we manage to keep the technology beneficial."

Max Tegmark, President of the Future of Life Institute

What is artificial intelligence:

Artificial intelligence is advancing rapidly, from Siri to self-driving cars such as Tesla's.
While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms and IBM's Watson system to autonomous weapons.

AI today is known as narrow (or weak) AI, because it is designed to perform a narrow task (for example, facial recognition, internet searches, or driving a car). However, the long-term goal of many researchers is to create general (or strong) AI.
While narrow AI may outperform humans at its specific task, such as playing chess or solving equations, general AI would outperform humans at nearly every cognitive task.

Why research AI safety:

The goal of keeping AI's impact beneficial in the near term motivates research in many areas, from economics and law to technical topics such as verification and validation, security, and control. Whereas a laptop that crashes or gets hacked is little more than a minor nuisance, it becomes far more important that an AI system does exactly what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system, or your power grid.
Another short-term challenge is preventing a devastating autonomous arms race.

In the long term, we face an important question: what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks? The mathematician I. J. Good noted in 1965 that designing smarter AI systems is itself a cognitive task, so such a system could potentially improve itself recursively, triggering an "intelligence explosion" that leaves human intelligence far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, so the creation of strong AI might be considered the biggest event in human history. Some experts, however, have expressed concern that it might also be the last,
unless we learn to align the AI's goals with ours before it becomes superintelligent.

Some question whether strong AI will ever be achieved, while others insist that the creation of superintelligent AI is guaranteed to be beneficial.
We believe that research today will help us prepare for and prevent such potentially negative consequences in the future, so that we can enjoy the benefits of AI while avoiding its risks.

How can AI be dangerous:

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions such as love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent.
Instead, when considering how AI might become a risk, experts think the following two scenarios are most likely:

1. The AI is programmed to do something devastating

Autonomous weapons are AI systems programmed to kill, and in the wrong hands they could easily cause mass casualties.
Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties.
To avoid being thwarted by the enemy, such weapons would be designed to be extremely difficult to simply turn off, so humans could plausibly lose control of them.
This risk is present even with narrow or weak AI systems, but it grows as levels of AI intelligence and autonomy increase.

2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal

This can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult.
If you ask an autonomous car to take you to the airport as fast as possible, it might get you there chased by helicopters and sick from the speed, having done not what you wanted but literally what you asked for.
If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on our ecosystem as a side effect, and view human attempts to stop it as a threat to be confronted.

As these examples show, the concern about advanced AI is not malevolence but competence.
A superintelligent AI will be extremely good at accomplishing its goals, and if those goals are not aligned with ours, we will face problems.
To clarify with an example: you probably do not hate ants or go out of your way to trample them, but if you are in charge of an environmentally friendly hydroelectric project and there is an ant colony on the site to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

Why the recent interest in AI safety:

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have expressed concern about the risks of artificial intelligence, both in the media and through open letters on the subject, joined by many leading researchers in the field.
Why has this topic suddenly made headlines?

The idea that the quest for strong AI would ultimately succeed was long regarded as science fiction, centuries or more away.
However, thanks to recent breakthroughs, many AI milestones that experts considered decades away as recently as five years ago have now been reached, leading many experts to take seriously the possibility of superintelligence within our lifetimes.
While some AI experts still believe that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico conference expected it to happen before 2060.
And since the required safety research may take decades to complete, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no foolproof way of predicting how it will behave.
Nor can we use past technological developments as much of a basis, because we have never created anything able to outsmart us, whether intentionally or not.

The Future of Life Institute's position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it.
In the case of AI technology, the best way to win that race is not to impede the technology but to accelerate our wisdom in managing it, by supporting AI safety research.