Artificial intelligence experts recently warned that the threats machines pose are more serious than initially thought. They said machines could quickly outsmart human beings once they achieve human-level intelligence.
In a YouTube video released by the Future of Humanity Institute, Elon Musk, co-founder of Tesla, SpaceX and PayPal, appeared on a panel earlier this month alongside DeepMind CEO Demis Hassabis and Oxford philosopher Nick Bostrom.
According to The Guardian, Hassabis sold his company to Google for 400 million pounds in 2014.
Musk and his fellow panelists were asked how long it might take for machines to develop superintelligence. Bostrom said it could happen within the next few decades, a remarkable prospect if it comes to pass. Such an outcome, however, has been predicted throughout the development of AI.
Max Tegmark, an MIT professor, also asked about a so-called "intelligence explosion," a sudden spike in intelligence created by AIs. As the singularity theory predicts, there will come a time when AIs themselves, once they become sentient, find ways to improve themselves faster than any human could anticipate, leaving them in a constant state of self-upgrading.
Musk said that if an AI crosses the threshold of being smarter than the smartest human, it would be only a matter of days before it becomes smarter than everyone.
Others predicted that it may take several years before machines become superintelligent, though none said it would take more than 100 years.
Hassabis said the outcome depends on the architecture that ends up delivering human-level AI, and that the timescale will make a big difference. If things happen faster than society can respond, it will be very hard to predict whether a superintelligent AI will work for the good of humanity or not.
Regardless, other scientists are now beginning to question whether "sentient" AI will arrive for the benefit of humanity or usher in its destruction.
The recent news adds to looming concerns about AI-driven automation overtaking livelihoods that were once reserved for people. In an age when humans increasingly depend on automation, can people still find jobs, or, for that matter, a purpose?