A recent article in The New York Times by Gary Marcus argues that AI is an industry lost on the road to progress. He says that to reach human-like intelligence, the field needs a top-down approach — like the one the physics community took with CERN to build the Large Hadron Collider — instead of relying on today’s methods. Marcus states that existing organizations, like the Elon Musk-led OpenAI and the Partnership on AI (in which the ACLU, the Allen Institute, Human Rights Watch, Google, Apple, Amazon, Microsoft and OpenAI are all represented), are too small to be effective, and that the large companies armed with the data sets and algorithms to make real headway are too focused on ad optimization to concern themselves with real advancement.
The issue is not the pace of progress, which is moving incredibly fast. The issue is runaway expectations about AI.
It’s true that there is a long way to go before we have AI capable of tackling challenges that require implicit knowledge — the way a human can pick up a guitar and figure out how to play, or listen to a foreign language and learn to speak it without formal classes. But it’s not up to OpenAI and the Partnership on AI to focus computer science on solving this problem. They serve as important meeting places for a community that has more than just technical equations to solve — they tackle other important issues like privacy, ethics and public knowledge, each of which could actually get AI stuck in a rut if left unaddressed.
But AI is not stuck: It is making measurable gains in areas like natural language processing and object classification, surpassing past benchmarks to push toward 90 percent proficiency. Growth through programs like ImageNet and countless others has also shown marked progress in a short time. One day, in the not-too-distant future, as these disparate fields of AI research progress at their current pace, they will likely merge. Then we will have AI agents on their way to capable perception without being fed data, as Marcus desires. In the meantime, incremental advances, apparent in the slew of consumer products like Alexa, Siri and Google Home, are coming because of competition within industry, not in spite of it.
That’s because for AI to truly advance, competition needs to fuel innovation — not top-down bureaucracy, as Marcus suggests. It’s hard to imagine computer science successfully pivoting to physics’ CERN model — not because that model wasn’t wildly successful. It was, and physics deserves credit for it. But with so many questions left to answer, computer science could never mirror physics’ hard-fought landscape, where the entire braintrust of the field is needed to solve the complex mysteries that remain outliers to a grand unified theory. It takes serious dollars to approach the question marks surrounding how quantum mechanics fits into that picture — finding the Higgs boson cost an estimated $13.25 billion. And, perhaps even more importantly, the answers to the questions left in physics can’t be easily monetized.
By comparison, computer science innovation has clearance prices — no machine or storage costs in the tens of billions. So innovation becomes intrinsically decentralized. It wouldn’t take long to come up with a long list of interdisciplinary spaces still left to research before we get anywhere near Marcus’ desired sentient beings.
Despite the absence of a unified research program focused on one grand mystery, small groups of researchers are making a lot of headway on their own, continuing AI’s competition-friendly history. Change is happening so quickly that researchers clamor to be the first to get their ideas into the public eye. This has bred a culture where competition is so fierce that even the wait to get an article into a peer-reviewed journal strikes some as too long. Instead, researchers are leveraging tools like Cornell’s arXiv to share their innovations at their natural pace.
At the same time, there is a lot of organized collaboration going on, but it’s not divorced from competition. AI has a series of collaborative open-source communities, like Kaggle and GitHub, where the spirit of innovation and the spirit of competition aren’t mutually exclusive. The result, once again, is breakthroughs fueled by friendly rivalries. We also can’t ignore AI’s record of challenges designed to further the state of the art. DARPA, X Prize, IBM and others have all hosted challenges where corporate and academic groups competed with each other, and the outcomes let AI climb to the next rung on the innovation ladder.
It’s clear that both competition and collaboration are working as designed to advance AI. To someone with macro-level expectations about superintelligence, it may seem like AI is too slow and too small in its approach. But anyone left lamenting that AI is stuck spinning its wheels hasn’t been paying attention. From identifying lung cancer to increasing public safety, AI is doing much more than counting up ad clicks, or waiting for a big bureaucratic push to solve a single Holy Grail problem set. It’s optimizing the way we live. Instead of fretting about AI’s hype, industry is advancing the field every day — and the credit goes to the careful balance the AI community maintains between competition and collaboration, where bureaucracy is nowhere in the equation.