Google's DeepMind will publicly release its AI training platform in its bid to increasingly embrace open-source initiatives. The move also appears to be a response to rival OpenAI, the Elon Musk-backed lab that has made its own Gym training toolkit available to the public.

According to Bloomberg, Alphabet Inc. (the parent company of Google) will finally make the maze-like game platform DeepMind uses to train and test its AI agents available to developers, researchers, and the general public.

It will publish the entire source code for its training environment -- previously called Labyrinth and now known as DeepMind Lab -- on GitHub. Anyone will be able to download the code and customize it to train their own AI systems.
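For illustration, here is a minimal sketch of what a training loop against the released environment might look like, assuming the GitHub code ships Python bindings exposing a `deepmind_lab.Lab` environment with a level name, a list of requested observations, and reset/step calls. The specific level name, observation name, and module name below are assumptions for the sake of the example, not confirmed details from the announcement.

```python
import numpy as np
import deepmind_lab  # assumed name of the Python bindings built from the GitHub source


def random_agent_episode():
    """Run one episode with a random agent in an assumed bundled maze level."""
    # Level, observation, and config names here are illustrative assumptions.
    env = deepmind_lab.Lab(
        'seekavoid_arena_01',             # an example maze level
        ['RGB_INTERLEAVED'],              # request the on-screen pixels as observations
        config={'width': '320', 'height': '240'})
    env.reset()

    spec = env.action_spec()              # one entry per action dimension (look, move, jump, ...)
    total_reward = 0.0
    while env.is_running():
        # Sample a random value for each action dimension within its reported range.
        action = np.array(
            [np.random.randint(a['min'], a['max'] + 1) for a in spec],
            dtype=np.intc)
        total_reward += env.step(action, num_steps=4)
    return total_reward


if __name__ == '__main__':
    print('Episode reward:', random_agent_episode())
```

A real experiment would replace the random sampling with a learned policy that maps the pixel observations to actions, which is exactly the kind of system the release is meant to let outsiders build.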

Interestingly, users will also be able to create their own game levels for DeepMind Lab and upload them to GitHub.

This move cements DeepMind's decision to embrace greater openness around the research it conducts on its AI testbed.

According to Bloomberg, a recent step in this direction is a partnership with Activision Blizzard Inc. that opens the popular game StarCraft 2 to any AI researcher who wants to build systems that can master it.

Now, the code on GitHub will allow researchers to see whether DeepMind's own breakthroughs can be replicated. This means other scientists can also gauge the performance of their own AI agents on the same tests.

Interestingly, OpenAI, the rival shop set up by Elon Musk, venture capitalist Peter Thiel and Y Combinator president Sam Altman, also has its own AI training platform, called Gym, which was made available in April.
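Gym presents its environments through a small reset/step interface; the sketch below runs a random agent on one of Gym's bundled benchmark tasks, following the interface Gym documented at release (the environment name is just one of its standard examples).

```python
import gym

# Create one of Gym's built-in benchmark environments.
env = gym.make('CartPole-v0')

for episode in range(3):
    observation = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        # A random agent: sample an action from the environment's action space.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print('Episode', episode, 'reward:', total_reward)
```

Both Gym and DeepMind Lab ultimately boil down to this same observe-act-reward loop, which is what makes them useful as shared benchmarks.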

OpenAI also plans to release a public interface called Universe that lets an AI agent use a "computer like a human does" by looking at screen pixels and operating a virtual keyboard and mouse. In other words, it is a go-between that allows an AI system to learn the skills needed to play games or operate applications.
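Based on OpenAI's description, Universe wraps ordinary programs as Gym-style environments whose observations are screen pixels and whose actions are keyboard and mouse events. The sketch below follows that description; the environment name, the `universe` import, and the `configure(remotes=...)` call are assumptions drawn from the announced interface rather than confirmed details.

```python
import gym
import universe  # assumed import that registers Universe environments with Gym

# An example Flash racing game wrapped as an environment (name is illustrative).
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)    # launch one remote environment instance
observation_n = env.reset()

for _ in range(1000):
    # Act the way a human would: send a raw keyboard event (hold the up-arrow key)
    # to every environment instance, then read back the resulting pixels and rewards.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```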

DeepMind skyrocketed to prominence after creating an AI agent that beat one of the world's top-ranked players at the ancient strategy game Go.

This was a major achievement in the field of AI: Go has so many possible moves that it cannot be mastered by brute-force calculation alone, so an AI has to rely on something similar to human intuition to play the game successfully.

Meanwhile, in another recent experiment, the company's researchers significantly reduced the time needed to train an AI agent to navigate a game environment and score points by finding digital apples.