Self-awareness, the ability to understand and explain oneself, is one of the big divides between AI and human intelligence. While we humans still have plenty left to understand about ourselves, in most cases we can at least offer a reason for what we do.

AI algorithms, by contrast, are typically programmed only to produce answers based on the data they have learned from. We often have no idea how they arrived at a conclusion, and that limits our ability to improve them. New research may be about to change that.

QZ explained that, for the most part, the way AI functions can be described with a simple procedure. Take the way AI identifies people: the system is fed data points or pictures, finds patterns in them (individuals), and then draws a line (a specific person) that connects everything.
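To make that "find patterns, then draw a line" idea concrete, here is a minimal sketch using a generic toy classifier. The two clusters, the features, and the choice of scikit-learn are illustrative assumptions only; this is not the researchers' model.

```python
# Toy illustration of "finding patterns, then drawing a line":
# two clusters of data points and a classifier that learns a
# boundary (the "line") separating them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two clusters of "data points" -- stand-ins for features of two people.
person_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
person_b = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))
X = np.vstack([person_a, person_b])
y = np.array([0] * 50 + [1] * 50)

# The classifier "draws a line" (a decision boundary) between the patterns.
clf = LogisticRegression().fit(X, y)
print("boundary coefficients:", clf.coef_, "intercept:", clf.intercept_)
print("prediction for a new point:", clf.predict([[1.8, 2.1]]))
```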

However, these "lines" and "data" tend to be too complex that we have no idea where the line begins and ends.

Researchers from the University of California, Berkeley and the Max Planck Institute for Informatics have now begun working on an AI algorithm that analyzes this kind of data in two ways.

The first answers the original question; the second identifies the data that was used to answer it. That means we not only get answers, we can also finally see how the system arrived at them.
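As a rough illustration of that two-branch idea, the sketch below shows a toy network with one head that produces an answer and one head that weights the input regions it relied on. The layer sizes, names, and the use of PyTorch are assumptions made for illustration, not the researchers' actual architecture.

```python
# Toy two-branch model: one output is the answer, the other is a set of
# attention weights showing which input regions were used as evidence.
import torch
import torch.nn as nn

class AnswerAndEvidence(nn.Module):
    def __init__(self, region_dim=512, num_regions=36, num_answers=10):
        super().__init__()
        self.attend = nn.Linear(region_dim, 1)         # scores each image region
        self.answer = nn.Linear(region_dim, num_answers)

    def forward(self, regions):
        # regions: (batch, num_regions, region_dim), e.g. CNN features per region
        scores = self.attend(regions).squeeze(-1)       # (batch, num_regions)
        weights = torch.softmax(scores, dim=-1)         # "which data was used"
        pooled = (weights.unsqueeze(-1) * regions).sum(dim=1)
        return self.answer(pooled), weights             # answer logits + evidence

model = AnswerAndEvidence()
logits, evidence = model(torch.randn(1, 36, 512))
print(logits.shape, evidence.shape)  # torch.Size([1, 10]) torch.Size([1, 36])
```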

So far, the algorithm can only recognize human actions in pictures, according to the not-yet-peer-reviewed paper posted on arXiv. It is trained on two sets of information -- what is happening and why.

The first set pairs pictures of human activities with descriptions and explanations of the task. The second pairs images with three associated questions and multiple answers per question.
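The hypothetical records below show roughly what training examples of these two kinds might look like. All field names and values are invented for illustration and are not taken from the paper's datasets.

```python
# Illustrative record layouts for the two kinds of training data described
# above: activity images with descriptions and explanations, and images with
# several questions and multiple answers each.
activity_example = {
    "image": "cooking_001.jpg",
    "description": "A person is stirring a pot on a stove.",
    "explanation": "Because they are holding a spoon over a steaming pot.",
}

qa_example = {
    "image": "park_042.jpg",
    "questions": [
        {"question": "What is the man doing?",
         "answers": ["playing frisbee", "throwing a disc", "playing frisbee"]},
        {"question": "Is it daytime?",
         "answers": ["yes", "yes", "yes"]},
        {"question": "What sport is shown?",
         "answers": ["frisbee", "ultimate", "frisbee"]},
    ],
}

print(activity_example["explanation"])
print(qa_example["questions"][0]["answers"])
```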

Once the neural network can explain why a certain picture shows something, it also points to the data it used to reach that conclusion. The researchers call this a "pointing and justification" system: it points to the evidence it used and explains the rationale behind its answer.
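Here is a self-contained toy showing what a pointing-and-justification style output could look like in practice. The function, attention values, and text below are hard-coded stand-ins for illustration; in the real system these outputs are learned, and this is not the paper's API.

```python
# Toy "pointing and justification" output: point at the evidence (a weight
# per image region) and state a rationale in plain language.
def toy_pjx(image_regions, question):
    # Stub standing in for a trained pointing-and-justification model.
    attention = [0.05, 0.10, 0.70, 0.15]        # "pointing": weight per region
    answer = "skateboarding"
    justification = "because he is riding a board on a ramp"
    return answer, attention, justification

regions = ["sky", "crowd", "person on board", "ramp"]  # labels for readability
answer, attention, justification = toy_pjx(regions, "What is the person doing?")

pointed_at = regions[attention.index(max(attention))]
print(f"answer: {answer}")
print(f"points at: {pointed_at} (weight {max(attention):.2f})")
print(f"justification: {justification}")
```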

This matters because, as widely as AI algorithms are used for face recognition and automatic photo tagging, many of them are hard to understand, even for their own programmers.

The problem may lie at the core of machine learning itself: as an algorithm learns, it takes data and rearranges it internally in a way that only the algorithm itself can follow.

Boston University professor of computer science Kate Saenko explained that "we didn't design them in the conventional sense. We only designed the way they think, but the rest is up to them."

While the Berkeley research will not completely "fix" the problem, it does point toward a future where we can ask machines to explain their decisions and arrive at clear answers.