Artificial intelligence has held the curiosity and attention of the general public for decades. Whether in books, films, or TED Talks, people have always found occasion to question the ethics and morality surrounding AI. But artificial intelligence is more than fantastical crime-fighting medical companions or malevolent sentient computers capable of guiding rockets.
In a live debate at the Cambridge Union, IBM's Project Debater was given the floor to argue the merits of artificial intelligence before a human audience. During the debate, the system drew on more than a thousand arguments about AI submitted through IBM's website in the week before the showcase. Over 510 comments held that AI had greater potential for good, while around 570 believed it had more potential for harm.
For each side, Project Debater distilled the submissions into five main themes that it used to build a debating strategy. At the start of the debate, the system presented the opening case for each side, such as the benefits of automating repetitive and mundane tasks for the pro-AI side and the risk of rigid bias from static datasets for the anti-AI side.
After laying out the case for both sides, the system stepped aside to let the humans representing the pro and anti teams take over, but not without providing possible counter-arguments for each position. At the end of the presentation, the audience voted in favor of the pro-artificial-intelligence side by a narrow 51% majority.
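IBM has not published the details of how the crowd-sourced submissions were grouped into those five themes, but a minimal sketch of that kind of theme clustering, assuming plain-text submissions, TF-IDF features, and k-means with five clusters (none of which are confirmed as IBM's actual approach), might look like this:

```python
# Hypothetical sketch of grouping crowd-sourced arguments into themes.
# This is not IBM's published pipeline: the TF-IDF features, k-means
# algorithm, and example submissions below are all illustrative choices.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

submissions = [
    "AI will automate repetitive and mundane tasks",
    "Static datasets bake rigid bias into automated decisions",
    "AI can help doctors catch diseases earlier",
    "Autonomous weapons put lethal decisions in machine hands",
    "AI tutors could personalize education for every student",
    "Mass surveillance powered by AI threatens privacy",
    # ... roughly a thousand more crowd-sourced arguments
]

# Turn each argument into a sparse TF-IDF vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(submissions)

# Cluster the vectors into five themes.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(vectors)

# Group the original arguments by their assigned theme.
themes = {}
for text, label in zip(submissions, kmeans.labels_):
    themes.setdefault(label, []).append(text)

for label, arguments in sorted(themes.items()):
    print(f"Theme {label}: {len(arguments)} argument(s)")
```

A production system would of course use far richer language models and would still need to turn each cluster into actual arguments, but the sketch captures the basic idea of condensing a thousand submissions into a handful of usable themes.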
The victory evened the tally at 1-1, as Project Debater had narrowly lost to professional human debater Harish Natarajan in a one-on-one standoff in San Francisco earlier in the year.
Project Debater itself had shown great progress since then. The system was already capable of cracking passable jokes during that earlier debate, and it had displayed steady advances in natural language processing since its infancy in 2016. Noam Slonim, one of the IBM engineers working on the project, compared its development to growing from a toddler to university level.
In many ways, an AI is only as good as the data it's been fed.
For example, Microsoft made its own foray into artificial intelligence with a Twitter chatbot named Tay. Tay quickly became infamous among internet lurkers: the chatbot learned from Twitter comments, which ultimately attracted hordes of trolls who taught the bot to tweet inappropriately.
Dan Lahav, on board as both an IBM scientist and an experienced debater, reported that people had tried to do the same with Project Debater. Submissions laced with racist and obscene language were fed to the system in hopes that the machine would parrot the malicious commentary during the demonstration, but the AI's natural language processing prevailed.
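The article does not say how those submissions were screened out. As a purely hypothetical illustration of the idea, a keyword-based filter, far cruder than whatever a production system actually relies on, could look like this:

```python
import re

# Purely illustrative: a hand-written blocklist stands in for the far
# more capable filtering a production language system would rely on.
BLOCKLIST = {"badword", "slur", "obscenity"}  # placeholder terms

def is_acceptable(submission: str) -> bool:
    """Return False if the submission contains a blocklisted term."""
    tokens = re.findall(r"[a-z']+", submission.lower())
    return not any(token in BLOCKLIST for token in tokens)

submissions = [
    "AI frees people from mundane, repetitive work",
    "AI is a badword invention",  # would be screened out
]

accepted = [s for s in submissions if is_acceptable(s)]
print(f"Accepted {len(accepted)} of {len(submissions)} submissions")
```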
But what does this mean for natural language processing as a whole?
Humans become less tolerant of visual or behavioral inconsistencies the closer robots or simulations get to mimicking human characteristics, such as looking lifelike with faux skin or talking with synthesized speech. This phenomenon is called the uncanny valley.
While the technology is still far from perfect, it would appear that IBM's Project Debater has reached the uncanny valley for natural language processing. An AI capable of arguing soundly with humans suggests that AI could engage in conversations nearly indistinguishable from those with actual humans.
The next question, then, is how such systems can deliver quality customer service while also raising the level of security needed to protect against malicious cyberactivity in the future.