DeepMind was a startup founded in 2010 by three people: Shane Legg, Demis Hassabis and Mustafa Suleyman. In an interview, Hassabis noted that they developed their artificial intelligence technology by choosing primitive games from the 1970s and 1980s for the AI to play. As if out of Ready Player One, the AI played the likes of Space Invaders, Breakout and even the retro classic, Pong.
The AI learned through gameplay alone, without being told the rules beforehand. After playing a game and failing for a while, the AI would learn and eventually master it. For example, this video illustrates the AI mastering Atari Breakout. According to a Forbes article, the AI’s cognitive processes are said to be “very like those a human who had never seen the game would use” to grasp the key concepts and then become adept at the game.
Gaming as an Evolutionary Tool
AlphaGo, a computer Go program developed by DeepMind, beat the European Go champion Fan Hui (a two dan out of nine), five games to zero in October of 2015. That was an important milestone for AI technology, as it was the first time a computer had beaten a professional Go player. The reason is the level of complexity: Go is much more difficult for computers to win than games such as chess, because its far larger range of possible positions makes exhaustive search methods impractical.
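To get a sense of that scale, here is a back-of-the-envelope comparison using the commonly cited averages of roughly 35 legal moves over ~80 plies for a chess game versus ~250 legal moves over ~150 plies for Go. These figures are rough estimates, not exact counts:

```python
# Rough game-tree sizes: (average branching factor) ** (average game length).
# The averages below are commonly cited estimates, not exact counts.
chess_tree = 35 ** 80     # chess: ~35 legal moves per position, ~80 plies
go_tree = 250 ** 150      # Go: ~250 legal moves per position, ~150 plies

print(f"chess game tree: ~10^{len(str(chess_tree)) - 1} positions")
print(f"go game tree:    ~10^{len(str(go_tree)) - 1} positions")
```

Even with aggressive pruning, a search space on the order of 10^359 positions rules out the brute-force style of search that worked for chess.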
Then, in March of 2016, AlphaGo defeated Lee Sedol (a nine dan out of nine), one of the highest-ranked Go players globally, four games to one. The following year, AlphaGo won a three-game match against Ke Jie, who had held the No. 1 world spot in Go for the previous two years. To learn, the AI used supervised learning techniques, studying many games that humans had played against one another.
Deep Reinforcement Learning
AlphaGo was built on deep reinforcement learning, a type of machine learning. According to an article by Artificial Intelligence Depot, it “allows machines and software agents to automatically determine the ideal behavior within a specific context, in order to maximize its performance.” When the AI plays Go, for example, simple reward feedback -- known as the “reinforcement signal” -- helps the AI learn from experience.
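The reward-feedback idea can be sketched with tabular Q-learning on a toy problem. The 5-state corridor below is an invented stand-in for Go, and the code illustrates only the "reinforcement signal" concept, not DeepMind's actual deep-network implementation:

```python
import random

N_STATES = 5               # states 0..4; state 4 is the goal
ACTIONS = [1, -1]          # move right or left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[(state, action)] estimates the long-run reward of taking that action
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; the only reward, +1, comes from reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: usually exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, reward = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy heads right (toward the reward) from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

No rules are hard-coded beyond which moves are possible; the agent discovers the rewarding behavior purely from the reinforcement signal, which is the same principle at work in the Atari and Go systems.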
AlphaGo & AlphaGo Zero
As AlphaGo learned from watching others and playing more Go games itself, it learned not only from failures but also from wins. Historical data was then added to its gaming knowledge until it had processed upwards of 30 million games. AlphaGo Zero, an improved version, defeated AlphaGo 100 games to zero in 2017. Amazingly, this feat was possible after only three days of AlphaGo Zero learning the game. Additionally, AlphaZero, a modified version of AlphaGo Zero, became superhumanly skilled at chess and shogi, learning purely through self-play.
DeepMind Health & Ocular Disease Identification
In 2016, DeepMind began a collaboration with Moorfields Eye Hospital to develop AI applications for healthcare. Their focus was on improving the ways in which patients are treated and referred for eye diseases.
Currently, eyecare professionals use optical coherence tomography (OCT) scans to aid in eye disease diagnosis. OCT scans are 3D images that map the back of the eye. They are notoriously hard to read and require expert analysis to interpret. Between the lengthy analysis time and the sheer number of scans (roughly 1,000 per day at Moorfields), the time between scan and treatment can be quite long -- even in urgent cases. Unfortunately, this means that with a sudden-onset problem such as bleeding, the scan-to-treatment delay could cost a patient their sight.
A DeepMind article reports that their AI system “can quickly interpret eye scans from routine clinical practice with unprecedented accuracy” and then recommend courses of treatment “as accurately as world-leading expert doctors.” The system can detect the presence of eye disease in seconds and prioritize the patients who need urgent care most. With this technology, the wait between scan and treatment can be drastically reduced, helping those at risk and lowering their chance of sight loss due to delay.
DeepMind Health & Breast Cancer Detection
Earlier this year, DeepMind partnered with the Cancer Research UK Imperial Centre to assess whether AI technology could help professionals diagnose breast cancer in mammograms more effectively and quickly than a human professional could. Work continues as DeepMind hopes to improve breast cancer detection as it did ocular disease identification.
On October 4, 2018, a DeepMind article announced that the DeepMind Health project was being expanded to Jikei University Hospital -- one of Japan’s leading medical institutions -- for a five-year partnership. The purpose of the partnership is to analyze historic, de-identified mammograms from roughly 30,000 women, alongside the historic, de-identified mammography database provided by the UK, to see whether the technology can locate signs of cancerous tissue on X-rays more effectively than current techniques.
However, the research must be conducted carefully, as bias can occur when an AI system is trained on data that doesn’t accurately reflect the people it is trying to help or assess. For example, breast density can vary between ethnic groups, a factor that could cause some patients to be incorrectly flagged as having cancerous tissue when their breast density is simply higher.
DeepMind & Predicting Eye Disease
Just as DeepMind Health used OCT scans to detect the presence of disease, professionals are trying to see whether they can also be used to predict impending disease before it happens. By analyzing scans from 7,000 patients at Moorfields Eye Hospital who have received eye treatment in only one eye, the system will attempt to predict deterioration in the other eye.
According to a DeepMind article, predicting potential indicators for disease is a “much more complicated -- and computationally intense -- task than identifying existing known symptoms.” To match that computing need, DeepMind and Moorfields have agreed to use Google’s cloud computing infrastructure in both the UK and the US.
AlphaFold Protein Folding
Recently, DeepMind’s latest AI program, AlphaFold, beat out the competition at one particularly grueling feat: predicting the 3D shapes of proteins. While it may be little discussed outside of academic circles, protein folding involves everyone. It’s a form of “molecular origami,” as put by an article in The Guardian. Scientists usually use cryo-electron microscopy, X-ray crystallography and nuclear magnetic resonance to determine protein shapes, but these methods depend on trial and error, time and money.
The bigger the protein, the harder it is to model correctly, because there are more interactions to take into account. DNA encodes the amino acids, which join into long chains; predicting the structure of those chains and how they fold into 3D shapes is the “protein folding problem.” As a DeepMind article notes, Levinthal’s paradox states that “it would take longer than the age of the universe to enumerate all the possible configurations of a typical protein before reaching the right 3D structure.”
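The scale behind that claim is easy to check with a rough calculation, assuming the textbook-style figures of a 100-residue protein, 3 plausible conformations per residue, and an extremely generous sampling rate of 10^13 conformations per second (all three are illustrative assumptions, not measured values):

```python
# Back-of-the-envelope version of Levinthal's paradox.
conformations = 3 ** 100            # ~5e47 possible configurations
rate_per_second = 10 ** 13          # very generous sampling speed
seconds_per_year = 3.156e7

years_to_enumerate = conformations / (rate_per_second * seconds_per_year)
print(f"~{years_to_enumerate:.1e} years to enumerate every configuration")
# versus an age of the universe of roughly 1.4e10 years
```

Even under these charitable assumptions, brute-force enumeration takes on the order of 10^27 years, which is why a predictive shortcut like AlphaFold matters.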
Understanding a protein’s shape goes a long way toward understanding its function. AI-predicted protein folding alone could bring massive progress to scientific and medical fields. Consider that when proteins become tangled or misfolded, they can lead to diabetes, Parkinson’s, Huntington’s, cystic fibrosis and Alzheimer’s disease.
The hope is that if scientists can predict a protein’s shape from its chemical composition, they can work out what it does and how it may fold incorrectly and cause harm. Furthermore, such AI applications could even design new proteins to fight disease or perform other useful functions, ushering in a new era of medicine.