Investment in data and analytics is expensive. At least that’s the view of the CIO, who typically focuses on costs. The issue is compounded by the CFO, who often lacks visibility into the value generated from data and analytics (for more on this, see part one of this blog series). What we needed was a way to overcome these shortcomings by offering alternative approaches to value measurement (you can read more about that in part two of this blog series).
The good news is that CFOs seem to be transforming themselves into growth enablers, rather than remaining old-style ‘bean counters’.
Let’s face it, CFOs are always looking to sweat the assets before new investments are approved, simply because corporate resources are always constrained. So, how can we help drive more value from data assets? Here are four top tips that will make your CFO happy (and maybe even encourage investment):
Project-oriented delivery leads to data silos
The first tip is to develop an understanding of the bottlenecks that inhibit value. This point has been superbly articulated by the head of analytics and modelling for consumer products at a big-brand Australian financial services organisation:
“The data warehouse over here is a ‘Project Oriented Warehouse’, meaning it grows via the execution of discrete projects which create duplicate data by ignoring what is already available, have their own objectives, constraints and data. The impact of this is that each project does not necessarily follow the same standards which dilutes the ‘single source of truth’ value proposition and results in additional cleansing and reconciliation activities at our end.”
Consequently, many organisations accumulate technical debt. A senior manager for forecasting and reporting at a wealth management division said:
“IT can be cumbersome to deal with and are often distracted by ‘shiny new toys’ without due regard for the business value they deliver or their impact on our existing architecture.”
In isolation, these individual projects or applications may pass financial scrutiny, but in aggregate they may slowly destroy shareholder value. Likewise, when considered separately, a project or application may not pass financial scrutiny, but when leveraged against the efforts of other projects, it may be viewed as feasible.
You can make cost savings by leveraging data
The way an organization handles its data directly affects both the perceived and the actual costs of a project. Some organizations are entrenched in the idea that data must be moved to applications, rather than moving applications to a common data infrastructure. The result is data redundancy, a higher aggregate cost of managing information assets, increased data security exposure, and greater data and information complexity.
Companies that take this next tip on board quickly learn to leverage their existing data. As a result, they enjoy a dramatically different business case dynamic: lower total application costs and greater value, realized faster. In this significant paradigm shift, not only are new applications costed at the margin, rather than at their isolated total cost, but time to market can be significantly decreased.
Perform data reuse analyses
Teradata data reuse and overlap analyses change how an organization regards and values its information assets. Each application is still considered incrementally and on its own merits in terms of its added financial value. However, new applications are costed in the context of data leverage and reuse, so that seemingly disparate projects are rationalized in terms of data commonality and the ability to leverage shared assets.
For example, in banking, telecom, retail and utilities, offer optimisation requires the product management team to demonstrate knowledge of its customers, demography, product pricing, buying behaviours etc. in order to determine uptake of offers while maintaining profitability. Since much of this data is also used by other departments – including marketing, digital channels and retention teams – the data can be maintained in a common repository. This makes sense from both a business and IT perspective: lower cost of upkeep, plus increased consistency and accuracy.
If product management developed the application in isolation, it’s unlikely that data would be shared. The project would therefore also have to take into account the cost of compiling and integrating this data.
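To make the idea concrete, here is a minimal sketch of a reuse and overlap analysis. The warehouse contents, application names and subject areas are entirely hypothetical; the point is simply that each new project's marginal scope is the data it needs minus the data already integrated.

```python
# Hypothetical reuse/overlap analysis: each application lists the data
# subject areas it needs; overlap with what is already integrated in the
# warehouse determines the incremental scope of the next project.
existing_warehouse = {"customer", "product", "pricing", "transactions"}

new_apps = {
    "offer_optimisation": {"customer", "product", "pricing", "offers"},
    "churn_prediction": {"customer", "transactions", "complaints"},
}

for app, needed in new_apps.items():
    reused = needed & existing_warehouse      # already available, no new cost
    to_build = needed - existing_warehouse    # genuinely new integration work
    reuse_pct = 100 * len(reused) / len(needed)
    print(f"{app}: {reuse_pct:.0f}% reuse, new subject areas: {sorted(to_build)}")
```

Run across a portfolio of proposed projects, this kind of analysis shows which applications can be costed at the margin and which will genuinely extend the warehouse.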
The picture below provides an overview of how the product management team and other departments can leverage data reuse.
Impress the CFO with simple math
The final tip in this blog is to develop cost/value metrics as shown in the chart below.
Without integrated data, each new application project must be considered and built from the ground up. First, new computing equipment is sourced, and database and network connections set up. Then the data model is established and new extract, transform and load processes created. The data from source systems is loaded and integrated, and users are trained and granted access rights. In the end, all of this entails a significant effort that drives up the cost and lengthens the project’s time to value.
Using data that is not integrated forces the business case for this project to stand on its own. With data leverage from reuse, the time, cost and effort associated with many of the setup tasks indicated above do not have to be incurred again. The result is a streamlined business that is better aligned with, and accountable to, shareholder desires to optimize investment.
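The cost/value comparison really is simple math. The sketch below uses illustrative, made-up figures to contrast a standalone build (every setup task from the list above incurred again) with a build that reuses already-integrated data and pays only for the genuinely new work.

```python
# Illustrative figures only: a standalone build repeats every setup task.
standalone = {
    "hardware_and_database": 250_000,
    "data_modelling": 80_000,
    "etl_development": 150_000,
    "load_and_integration": 120_000,
    "training_and_access": 40_000,
}

# With data reuse, only the marginal work for the new application is incurred.
with_reuse = {
    "new_subject_area_etl": 45_000,
    "training_and_access": 40_000,
}

standalone_cost = sum(standalone.values())
marginal_cost = sum(with_reuse.values())
saving = standalone_cost - marginal_cost

print(f"Standalone: ${standalone_cost:,}")
print(f"With reuse: ${marginal_cost:,}")
print(f"Saving: ${saving:,}")
```

Whatever the actual numbers in your organisation, presenting a new project at its marginal cost, alongside the saving from reuse, is the kind of simple arithmetic that resonates with a CFO.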
So, what are you waiting for?
Some of the content from this blog originally appeared in a white paper titled “Reduce, reuse, recycle”, written by William V. Bishop and Imad Birouty.