Last Thursday, MIT hosted a celebration for the new Stephen A. Schwarzman College of Computing, a $1 billion effort to create an interdisciplinary hub of AI research. During an onstage conversation between Schwarzman, the CEO and co-founder of investment firm Blackstone, and the Institute’s president, Rafael Reif, Schwarzman noted, as he has before, that his leading motivation for donating the first $350 million to the college was to give the US a competitive boost in the face of China’s coordinated national AI strategy.
That prompted a series of questions about the technological race between the two countries. They essentially boiled down to this: today's AI is a brute-force game in which more data is better. How can the US outcompete China when the latter has far more people and the former cares more about data privacy? Is it, in other words, a lost cause for the US to try to "win"?
Here was Reif’s response: “That is the state of the art today—that you need tons of data to teach a machine.” He added, “State of the art changes with research.”
Reif's comments served as an important reminder about the nature of AI: throughout its history, the state of the art has evolved quickly. We could very well be one breakthrough away from a day when the technology looks nothing like it does now. In other words, data may not always be king.
AI World Society has created AI-Government, a new model of government with deeply applied AI built around a Decision Making Center. It looks forward to new approaches, algorithms, and methods that would require far less data and would better simulate human thinking.