What is my AI model doing?
That question is critically important to companies today — especially in heavily regulated industries. Banks need to clearly tell their customers and regulators why they blocked someone’s request for more credit or why a certain transaction triggered a fraud warning. But the answer isn’t always easy to find.
This isn’t just an issue for financial services or insurance companies, though. Any business needs the ability to trust its analytics, to be certain its corporate intelligence isn’t leading the business astray. In fact, Gartner predicts that by 2022, enterprise AI projects with built-in transparency will be 100 percent more likely to get funding from chief information officers.
It’s no surprise, then, that within AI’s current capabilities, data scientists are the golden children of the analytics-driven enterprise. They hold the keys to training a model in the first place, emphasizing representation learning (also called feature learning) to build supervised neural networks from labeled input data. In today’s state of AI, models are a precious resource, and data scientists are the ones who can unlock the black box and see what’s actually behind an insight.
But it’s inevitable that one day we’ll progress past highly structured data and labor-intensive feature engineering and move into a world where AI models can leverage unstructured data and derive insights on their own. We’ll no longer assign data scientists the job of wrangling data on the front end. We’ll have semantic smoothing that can accurately understand natural language and sentiment. And even though there are physical limits on computational power itself, one day we’ll push those to the edge, stretching power, space and cooling right up to their environmental constraints. Quantum principles are already being applied to specific problem sets, and these kinds of landmark moments will likely define 21st-century computing.
Alongside all these amazing accomplishments, there must be a proportional effort to build trust into these AI models as they progress. Another one of Gartner’s 2018 predictions is “AI will fuel a broad reaction in terms of growing concerns over liability, privacy violations, ‘fake news’ and pervasive digital distrust.”
The reality is that if you push more intelligence into a machine and then that machine collaborates with other machines, that environment needs to be steeped in trust. Otherwise, unchecked power is bound to be exploited.
But, advancements in an adjacent field that seems poised to get just as hyped as AI could hold some answers — blockchain.
Currently best known as the technology behind cryptocurrencies, blockchain creates immutable ledgers that are a prime example of how to decentralize the way machines interact with each other while ensuring that the output they exchange is embedded with trust. The technology is still in its infancy, but many experts are already imagining how blockchain paired with AI could create massive-scale distributed databases that push out more sophisticated — and still auditable — AI models. As such, it’s no surprise that highly regulated fields, like health care, have the most blockchain hype.
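To make the trust property concrete, here is a minimal sketch of the core idea behind an immutable ledger: each entry’s hash covers both its own contents and the previous entry’s hash, so altering any past record breaks the chain. The `credit-v1` model name and the decision records are purely hypothetical, and a real blockchain adds consensus and distribution on top of this; the sketch shows only the tamper-evidence that makes AI decisions auditable.

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Append-only ledger entry: its hash covers the record AND the previous hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(ledger, genesis="0" * 64):
    """Recompute every hash in order; any tampered record breaks the chain."""
    prev = genesis
    for block in ledger:
        payload = json.dumps({"record": block["record"], "prev": prev}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

# Hypothetical audit trail: each model decision is appended as a block.
ledger = []
prev = "0" * 64  # genesis hash
for decision in [{"model": "credit-v1", "input_id": 17, "output": "deny"},
                 {"model": "credit-v1", "input_id": 18, "output": "approve"}]:
    block = make_block(decision, prev)
    ledger.append(block)
    prev = block["hash"]

print(verify(ledger))                      # True: the trail is intact
ledger[0]["record"]["output"] = "approve"  # quietly rewrite a past decision
print(verify(ledger))                      # False: tampering is detected
```

Because each hash chains to the one before it, a regulator auditing the ledger can detect any retroactive edit to a model’s decision history without trusting the party that stored it.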
This type of trust in AI models isn’t inevitable, however. It’s something that needs to be prioritized to ensure that as the field progresses, it has the fidelity necessary to shape how enterprises make their decisions.