News broke last week that Mustafa Suleyman, one of the three co-founders of DeepMind, had been placed on an extended leave after “ten hectic years” at the popular artificial intelligence lab acquired by Google in 2014.
The media was quick to pick up on the news and concoct stories about disputes and disarray in the AI lab’s leadership. Forbes quoted unnamed former DeepMind employees who spoke of tensions between Suleyman and Demis Hassabis, CEO and co-founder of DeepMind, as well as disagreements with Google’s own AI lab, Google Brain. The Financial Times quoted acclaimed AI researcher and writer Pedro Domingos, who—again citing unnamed DeepMind employees—said Suleyman was disenchanted by Google’s decision to subsume DeepMind’s health unit.
There were mentions of a falling-out among DeepMind’s top brass following the privacy scandals of the lab’s health care research unit, which Suleyman oversaw. There are also rumors of a reshuffling of DeepMind’s structure in light of the lab’s huge losses.
Of course, everything the media says is speculation. Suleyman himself tweeted on Thursday and expressed his desire to return to the AI lab soon. Is he sincere in his comments or is he saving face for DeepMind, the research lab he worked so hard to create and develop into one of the most successful AI outfits? We don’t know yet.
There’s been some speculation over the last 24 hours about what I’m up to. After ten hectic years, I’m taking some personal time for a break to recharge and I’m looking forward to being back in the saddle at DeepMind soon.
— Mustafa Suleyman (@mustafasuleymn) August 22, 2019
But none of that matters. The mere fact that the event caused such hype and panic, especially in tech and business media, highlights one key point: DeepMind’s scientific AI research has become too commercialized.
DeepMind’s stated goal is to “advance the state of the art in artificial intelligence” and use its technologies “for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics.” Those are the kinds of things you would expect from a non-profit AI research lab that is not working on an end-to-end commercial product.
But DeepMind needs a lot of money for its research. Its expenses can’t be covered by government grants and donations. I doubt any entity short of Google’s caliber could afford the huge sums DeepMind doles out to pay its scientists and the compute resources it consumes to train its AI models.
Google also had a vested interest when it acquired DeepMind for a reported $600 million. After all, Google’s AI algorithms give it an edge over its competitors, and owning an outfit that houses some of the most brilliant minds in AI helps ensure its dominance.
However, here’s the problem: Google is a for-profit company that cares about its own bottom line. It expects a short-term return on investment. Therefore, it would expect the acquisition of DeepMind to translate into new products or enhancements to its AI algorithms that increase its revenue.
But that’s not how scientific research works. Science projects can absorb billions of dollars without yielding any commercial fruit. Take the Large Hadron Collider: It was built over roughly a decade at a cost of $4.75 billion, and it costs approximately $1 billion per year to operate, consuming on the order of 700 gigawatt-hours of electricity annually. And that doesn’t account for the salaries of the thousands of researchers and engineers who contribute to its research.
An open-ended goal such as achieving human-level AI, which DeepMind pursues, is not something that can be reached in a few years (or maybe even decades). DeepMind has managed to find a middle ground by creating an “applied” division, which works on practical AI solutions that can be transformed into, or integrated into, commercial products.
Suleyman was at the helm of DeepMind’s applied AI division, which included DeepMind Health, the team that created Streams, an AI technology designed to help doctors identify patients at risk of developing acute kidney injury. DeepMind’s practical AI applications also include improving the efficiency of cooling units in Google’s data centers and improving the battery life of Android phones.
But the revenue of those ventures doesn’t compare to DeepMind’s costs. DeepMind’s losses nearly doubled in 2018, climbing from $340 million to $570 million, and the AI lab already has a billion-dollar debt to Alphabet Inc., Google’s parent company.
Of course, those costs are probably a footnote in Alphabet’s balance sheet, but it’s easy to imagine things becoming frustrating for Alphabet execs when they see that the costs of owning DeepMind are not on par with its yield. And the receiving end of that frustration could well be Suleyman, who was in charge of DeepMind’s for-profit division.
What happens if Alphabet’s patience runs out? In 2017, Alphabet sold the famous robotics research lab Boston Dynamics to SoftBank for an undisclosed sum after it found no benefit in investing in industrial and humanoid robots. Having been handed from one commercial firm to another, Boston Dynamics is still alive, but it is still struggling to provide a workable business model for its owners.
So if history is any guide, Google will absorb the profit-yielding parts of DeepMind, as it did with the health unit (to the apparent chagrin of Suleyman), retain its top scientists, and sell whatever is left of the lab to some other organization. That is, unless DeepMind finds a way to make itself more profitable. But that will probably come at the cost of abandoning or toning down the scientific vision for which the AI lab was founded in the first place.
Of course, this is all speculation. Despite Suleyman’s apparent dispute with DeepMind and Google leadership, Alphabet might continue to support the AI lab’s dreams of reaching for the stars.
But what’s for certain is that mingling commercial and scientific AI is a complicated feat that can easily turn into a disaster. Now that DeepMind has associated itself with Google, the world sees it as a subsidiary of one of the largest companies in the world rather than a non-profit AI research lab. The AI lab will have to tread this minefield carefully and find the best way to maintain its scientific identity.