Artificial intelligence is steadily changing the world through the remarkable developments it is ushering in. However, fans of the "Terminator" franchise understand that advanced AI technologies also carry dangers.
This is why industry giants and experts such as Elon Musk and Stephen Hawking have advocated for humane and "safe" robotics. The two figures, along with hundreds of researchers and experts in the field, have endorsed 23 "guiding principles" intended to ensure that AI is developed for the benefit of mankind.
This compilation is called the Asilomar AI Principles. It was developed during the Beneficial AI 2017 conference, which, according to the Future of Life Institute, was the brainchild of various experts. According to Inverse, the experts who contributed to the principles included at least 892 AI or robotics researchers and 1,445 other experts, including Tesla CEO Elon Musk and Stephen Hawking.
The Asilomar AI Principles call, first and foremost, for the creation of beneficial intelligence rather than undirected intelligence. This means research should go not only toward improving AI's capabilities, but also toward the values aligned with it. The principles aim for a healthy exchange between scientists and policymakers, stating that a culture of trust and transparency should exist in the community.
The principles also cover safety, failure transparency and judicial transparency. Designers and builders of advanced systems should be held accountable for any misconduct by their machines.
Privacy and shared benefits should also be protected, and an AI "arms" race should be avoided. The result is a set of guidelines far more expansive than Isaac Asimov's Three Laws of Robotics. The breadth of fields the principles cover is evidence that robotics affects numerous parts of humanity's life.
The Future of Life Institute's website explained that what remained were 23 guiding principles ranging from research strategies and data rights to future issues, including potential super-intelligence. The collection "is by no means comprehensive and it's open to differing interpretations," and the 23 principles may well be revised in the future. For now, it is best to view them as a guide to keep AI development on the "right path."