“Lessons from the Sentient Enterprise” is a series of posts timed around the publication of “The Sentient Enterprise”, a new book on big data analytics by Oliver Ratzesberger and Mohan Sawhney. Each post in the series highlights a major theme covered in the book and at executive workshops being held in conjunction with its upcoming release by Wiley publishing.
One of the bedrock principles Mohan Sawhney and I put forth in “The Sentient Enterprise” is that more data is only as good as your ability to keep up with it and leverage it for insight. It’s a sentiment shared by many of the top analytics leaders we interviewed for the book. As Jacek Becla, a former data executive at Stanford University’s prestigious SLAC National Accelerator Laboratory and current Teradata vice president of technology and innovation, told us, analytics don’t progress unless there’s a “symbiotic relationship between capacity and skills.”
Capacity, unfortunately, can easily outpace our skills in managing it. In fact, our book focuses on several “pain points” of data drift, duplication, and error — side effects of poorly governed capacity that can leave people swimming in oceans of data, without much insight to be found. These problems only become more critical as you try to scale the operation.
A ‘Forcing Function’ for Agility
Dell Vice President for Enterprise Services Jennifer Felch and her colleagues learned this first-hand as they worked to aggregate global manufacturing data into one master environment for reporting and analytics. “Scaling is the forcing function for standardizing and becoming as efficient and accurate as possible with your data,” she told us. As we describe in the book, Dell’s solution involved setting up “virtual data marts” — more than two dozen specialized data labs that access, but do not corrupt, the master environment.
The virtual data mart is a feature of the Agile Data Platform, the first of five stages in the Sentient Enterprise journey. That’s where we “decompose” data into architectures that preserve its most granular form, so the data’s more malleable and adaptable to various business needs across the organization. The next couple of stages — the Behavioral Data Platform and the Collaborative Ideation Platform — are where we build capacity and set up a social-media-style “LinkedIn for Analytics” environment for business users to share insights from this newly agile data.
But sharing insights is not the same as prioritizing them. And here’s where I’d like to emphasize a concept from our book that’s surprisingly simple, yet still underutilized in most businesses today: The key is to not just socialize data insights among business users, but to “merchandise” them!
‘Merchandising’ the Value of Data
What do I mean by “merchandising” analytic insights? Think of how we shop on Amazon, eBay or any other major e-commerce site. We search, we promote, we recommend, we follow. All that activity is tracked by analytics, such as eBay’s “Customer DNA” database — which we examine in the book — that can follow patterns of browsing, bidding and other indicators of value amid some 800 million concurrent auction listings. Over time, analytics running underneath learn what’s important in order to tailor searches and increase the relevance of product recommendations.
In the Sentient Enterprise, we’re essentially doing the same thing with data and analytics. We’re applying the same form of merchandising to the analytics network within an enterprise — promoting and recommending questions, people, and answers that a data scientist or business user might be interested in based on previous queries and activity.
Particularly at scale, there really is no other way to go about it. That’s because — as the book explains — we’re carrying the merchandising process forward beyond just data insights, and applying it to the valuation of entire prepackaged workflows (in the Stage 4 Analytical Application Platform) and self-decisioning algorithms (in the Stage 5 Autonomous Decisioning Platform). I’m covering a lot of ground here, which is why I invite you to learn more about the Sentient Enterprise through online resources and, of course, the book itself!
I’m hoping it’s already clear, however, that scaling requires an absolute commitment to rethink old habits — such as extract, transform, load (ETL) and centralized metadata — and embrace new, scalable ways of listening to data and positioning algorithms for “wisdom-of-the-crowd” insights. That’s because, as we’re fond of saying, humans don’t scale the way data does — and a hundred or even a thousand analysts will remain outgunned without some way to automatically “merchandise” insights from the huge volumes of lightning-fast data streams coming at them.