
Banks, Declutter Your Data Architecture!


Banks do not need to be wedded to complexity, says Navin Suri, Percipient’s CEO 

Marie Kondō’s bestseller, The Life-Changing Magic of Tidying Up: The Japanese Art of Decluttering and Organizing, is sweeping the world. Her message that simplicity pays off applies as much to a bank’s data architecture as it does to a person’s wardrobe.

Few bankers would argue with the notion that the IT architecture in banks is overly complex and as a result, far less productive than it could be. So how did we get here? Rather than a single blueprint, most banks’ IT evolved out of the global financial industry’s changing consumer demands, regulatory requirements, geographic expansion, and M&As. This has led to a tangled web of diverse operational systems, databases and data tools.

Rapid Digitisation

But rapid digitisation has put this complex architecture under further stress. Amid dire warnings, such as the one from Francisco Gonzales, then CEO of BBVA, that non-tech ready banks “face certain death”, many rushed to pick up the pace of their digital transformation.

Banks rolled out their mobile apps and digital services by adopting a so-called “two-speed infrastructure”, that is, enhanced capabilities at the front, built on a patchwork of legacy systems at the back. Now over a third of all banks, according to a 2015 Capgemini survey, say “building the architecture/ application infrastructure supporting transformation of the apps landscape” is their topmost priority.

Fragmented Infrastructure

Meanwhile a key reward of digitisation – high value business intelligence – remains elusive. Banking circles may be abuzz with talk of big data, but the lack of interoperability across systems makes this difficult to achieve. In some cases, cost effective big data processing technologies like Hadoop have actually deepened the problem by introducing yet more elements to an already unwieldy architecture.

To address the problem, financial institutions have opted for two vastly contrasting approaches. Either paper over the cracks with a growing number of manual processes, or bite the bullet, as UBS is doing. The world’s largest private bank announced in October last year that it will be spending US$ 1 billion on an IT overhaul to integrate its “historically fragmented infrastructure”.

Attack On Complexity

However, for those banks unable or unwilling to rip out and replace their existing systems, there is a third way. The availability of highly innovative open source software offers banks the option of using middleware to declutter and integrate what they have.

Percipient’s data technology solutions, for example, enable banks to pull together all their data without the need for data duplication, enterprise data warehouses, an array of data transformation tools, or new processes and skills. These solutions are, at their core, an attack on the architectural complexity that banks have come to grudgingly accept.

Visible Order

As Marie Kondō points out, “Visible mess helps distract us from the true source of the disorder.” In the case of most banks, the true source of the disorder appears to be an IT infrastructure derived, rather than designed, to meet the huge demands placed on it by digitisation. There is now a real opportunity to turn this visible mess into visible order.

This article was a contribution to, and originally appeared in,


Big Data 3.0: Delivering on the Promise

There is unusual consensus among economists that 2016 will be a more-of-the-same year of moderate growth, low inflation and emerging market risks. Against this unspectacular backdrop, how do corporations achieve spectacular outperformance?

One clear weapon of choice is big data. Last year, the world received further affirmation of big data’s real world applicability. The largest ever deal in IT industry history – courtesy of Michael Dell and data storage provider, EMC – was just icing on the cake.

Here is a recap of big data’s big achievements in 2015, and what lies in store for 2016:

Managed Healthcare


Most heartening for big data proponents were the strides made in healthcare in 2015. In June for example, Forbes reported on the way data is now being collected from every US cancer patient in order to detect patterns, not just of the emergence of the disease, but also of patients’ responses to treatment.

In 2016, the focus looks set to shift from battling illnesses to maintaining good health. Biometric data from fitness apps and IoT-based wearable healthcare devices is likely to come to the fore. The challenge to date has been acquiring the technical and intellectual capabilities required to make sense of this overwhelming amount of data. As the Internet-of-Things comes of age, the opportunities for innovation will fall to those able to master AoT – the Analytics of Things.

Machine Learning


Machine learning was hard to ignore in 2015. “Algorithm” went from a term familiar only to maths geeks to one heard in the trendiest of places. Last month, fashion retailer The North Face launched its virtual personal shopper service, providing style trends, fashion recommendations and practical travel advice, all powered by a big data machine learning platform.

However, machine learning in 2016 may take on a different flavour. Rather than replacing humans, machine learning is moving towards incorporating human intervention. “Human-in-the-loop” computing is based on a continual cycle of (1) ascertaining an algorithm’s ability to solve a problem, (2) seeking human input where necessary, and (3) feeding this back into the model. A range of automated services, from self-driving cars to natural language processing (NLP), will likely continue to rely on humans teaching machines a thing or two.
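For the curious, that three-step cycle can be sketched in a few lines of Python. Everything below is purely illustrative — the toy count-based confidence model, the `ask_human` callback and the 0.8 threshold are assumptions for the sketch, not any particular vendor’s implementation.

```python
# Minimal human-in-the-loop sketch: a toy classifier defers to a human
# reviewer when its confidence is low, then folds the human's label
# back into the data it learns from.

def predict(counts, item):
    """Return (label, confidence) from the label counts seen so far."""
    seen = counts.get(item, {})
    total = sum(seen.values())
    if total == 0:
        return None, 0.0                       # never seen: no confidence
    label = max(seen, key=seen.get)
    return label, seen[label] / total

def human_in_the_loop(stream, ask_human, threshold=0.8):
    counts = {}
    results = []
    for item in stream:
        label, conf = predict(counts, item)    # 1. assess the algorithm
        if conf < threshold:
            label = ask_human(item)            # 2. seek human input
        counts.setdefault(item, {}).setdefault(label, 0)
        counts[item][label] += 1               # 3. feed back into the model
        results.append(label)
    return results

# The first "cat" has zero confidence, so the human labels it;
# subsequent ones reuse the learned label without human help.
labels = human_in_the_loop(["cat", "cat", "cat"],
                           ask_human=lambda item: "animal")
```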

Dynamic Pricing


Talk advanced retail pricing and you are likely to think of price-matching. Some of the biggest names in retail have relied on price-matching strategies to build customer loyalty. However, Walmart took the concept to a new level in 2015 with its Savings Catcher mobile app, which allows shoppers to scan their receipts and claim back the difference on items priced lower at competitor stores. This feature alone is credited with taking Walmart’s app usage from 5 million to 22 million users in just one year.

That said, the evolution of big data use is paving the way for retailers to improve their margins. In 2016, rather than the intense focus on price, big data will offer retailers the ability to price based on a variety of highly dynamic and complex factors, including pop-up trends, time of day, weather conditions, underperforming products, etc. Amazon, for example, is known to change prices up to 2.5 million times a day during the busy holiday season. According to IDC, by the end of 2016, product intelligence will inform 90% of pricing decisions made by the top 10 US e-Commerce retailers.
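As a toy illustration of pricing on multiple signals rather than on competitor prices alone, the sketch below adjusts a base price by a handful of factors of the kind mentioned above. The factor names, thresholds and weights are all hypothetical, not drawn from Amazon, Walmart or IDC.

```python
# Illustrative dynamic pricing: combine several demand signals into a
# single multiplier on the base price. Weights here are made up.

def dynamic_price(base, hour=12, raining=False, stock=100, trending=False):
    multiplier = 1.0
    if 17 <= hour <= 21:      # time of day: evening demand peak
        multiplier += 0.05
    if raining:               # weather conditions lift demand
        multiplier += 0.03
    if stock > 500:           # discount underperforming inventory
        multiplier -= 0.10
    if trending:              # pop-up trend premium
        multiplier += 0.08
    return round(base * multiplier, 2)

# A rainy evening nudges the price up; excess stock pulls it down.
peak = dynamic_price(20.0, hour=19, raining=True)   # 20.0 * 1.08
glut = dynamic_price(20.0, stock=600)               # 20.0 * 0.90
```

In practice the weights themselves would be learned from sales data and re-estimated continuously, which is where the “millions of price changes a day” scale comes from.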

Chief Data Officers


A survey conducted in 2015 found that 61% of CIOs wanted to hire a chief data officer (CDO) within the next 12 months. This suggests that in 2016, CDOs will be a significantly more common sight on company payrolls than at present.

Although the CDO designation has been in place in some US banks since the 2008 financial crisis to oversee data quality, the role in its current form has only surfaced in the last couple of years. In 2014, about 25% of Fortune 500 companies claimed to have one appointed to optimise their use of data.

Despite this, a study by IBM suggests that many CDO employers are not clear what the role is all about. Furthermore, while the growing importance of big data will see CDOs gaining more power in 2016, research firm Forrester notes in an intriguing report that this power may be eroded by a growing trend to outsource data solutions, and by mounting data security concerns.

Big Data 3.0

Forrester’s report also notes that 2016 may see the end of big data’s honeymoon. The report warns that big data tools alone will not be enough unless accompanied by enterprise-wide commitment and visionary management.

If Big Data 1.0 was the era of discovery, and Big Data 2.0 was marked by a period of experimentation and application, then what we see unfolding in 2016 must surely be Big Data 3.0, when tangible results will be key. All I can say is, bring it on!

What airlines can teach banks about big data

In the topsy-turvy world of big data, it has become apparent that some industries are unexpectedly lagging behind others in their data mining capabilities.

For example, one would be justified in assuming that financial companies, given their access to vast amounts of customer profile and transaction data, are leaders in this field. However, the industry can be divided into two camps.

On the one hand are the traditional global banks and insurance firms, which continue to lumber under a mountain of new, legacy, proprietary, departmental and country data systems that do not communicate with each other.

A good example of this is Deutsche Bank, whose CEO, John Cryan, noted at a recent press conference that, “…about 35 percent of the hardware in the data centers has come close to the end of its lifecycles or is already beyond that”. Jost Hopperman, vice president of banking and applications and architecture at Forrester Research, claims that the problem is widespread, “The situation at Deutsche is not unusual at many other large tier 1 or tier 2 banks simply because many of them have similar issues, including too many silos, too fragmented outsourcing, and lost control of applications”.

On the other hand are the new and nimble digital-only banks, the likes of US-based Ally Bank and UK-based Atom Bank. The latter, even before its launch, targeted for early 2016, has received a £45 million investment from Spanish bank BBVA. According to Atom Bank CEO Mark Mullen, the bank aims to be as disruptive to banking as Uber has been to taxis.

Assuming that brick-and-mortar banks are able to overcome their data woes and unify the voluminous data currently sitting untapped, opportunities abound for them to catch up with their retail and online counterparts.

Top of their list of priorities must be the use of data to achieve a meaningful 360-degree view of the customer, as a necessary step towards re-imagining the customer experience. While the airline industry is not an obvious trend-setter in this field, a number of airlines have started to take customer-360 initiatives to heart.

Southwest Airlines, British Airways and American Airlines, for example, all employ mobile apps to keep frontline staff informed in real time of individual customer circumstances, whether this be food preferences, flight delays, or complaints on social media. These airlines, through a process of trial and error, have also determined the limits of communicating personal data (favourite drinks are okay, pets’ names are not). This “know me on the go” approach is one that could easily be adopted by bank tellers and advisors when dealing with their customers, but remains sadly lacking.

There are other transferable practices. In June this year, Qantas launched a programme for frequent flyers, rewarding them with extra points in exchange for information on their consumer habits. This data is used not only within the airline but is shared (in anonymised form) with partner organisations, for example hotels and car rental services. Banks’ credit card businesses have the opportunity to do the same, and to share this information with their merchants, but again, this is not yet common practice.

Finally, in this age of algorithms, banks, like airlines, can do more to target products to specific customer types. United Airlines has sought to connect together its 3.5 petabytes of customer data in order to use it more effectively. For example, according to executive vice president of marketing, technology and strategy, Jeffrey Foland, sales of United’s economy-plus seats have surged since the airline started using data to target fliers who are more likely to buy them.

This kind of sophisticated customer analytics comes on the back of well-rounded data on an individual, linking together his or her buying patterns, demographics, personal preferences and lifestyle. Many banks have made strong strides in their “digital revolution” journey. But only those able to capture customer data as a consolidated whole are likely to enjoy the customer loyalty and ROI results they had hoped for.


The real world is coming a-calling


Data derived from what customers do and feel, rather than what they say, has got a lot more interesting

Big Data scepticism appears to be alive and well in Asia. I have lost count of the number of times people say to me: “There is nothing that big data can tell you that small data can’t”.

While the reasons for this “Big Data Backlash” are still unclear to me, I can only assume that many sceptics are focused on the “Big” of Big Data. That is, the volume of data that companies struggle to store, make sense of, and then use to generate growth.

What is perhaps less obvious are the other “V”s of Big Data: its velocity and variety. The potential utility of real-time and diverse data is still unfolding but clearly, social media-derived data is just the tip of the iceberg.

Even more exciting is what is now called ‘real-world’ data, that is, data based on individual behaviour as customers or prospects interact with the world around them. There is much here that businesses can get their teeth into, especially those based in Asia.

Why so? Well, Asian consumers differ from their Western counterparts in a number of ways, as highlighted by recent studies.

For example, Asian consumers tend to be less loyal. A 2014 survey by Bain and Company found that even for top-selling brands in mainland China, up to 80% of their customers were likely to be new rather than repeats. Asian consumers also complain less, thereby masking their dissatisfaction. A 2014 IPSOS survey found that 60% of Singaporeans did not complain about the poor service they felt they had received.

These findings make it even more important for businesses in Asia to understand what makes their customers tick, while staying mindful of survey fatigue and sample biases.

Many may therefore be intrigued by such real-world inventions as video software capable of detecting facial expressions and corresponding emotions. This technology analyses in real time whether individuals are happy, annoyed, confused, surprised or frustrated. When assessed against other operational factors, such as the waiting time at a bank branch, or the service at a restaurant, this data becomes a powerful customer experience tool.

So how reliable is emotion detection software, I hear you ask? The idea is that humans betray their emotions through “microexpressions”: tiny facial movements so fleeting that people are often not even aware they have displayed an emotion. Trained on trillions of data points and billions of emotional reactions, this software is apparently customisable for different cultures and settings.

Furthermore, with emotion detection video devices and analytics recently marketed for just US$100 per device per month, this technology has become startlingly affordable for many smaller companies. Meanwhile, those serving large numbers of customers, such as airlines, shopping malls and train stations, may find the software’s ability to track multiple faces at once a particular asset. Not forgetting, of course, that video cameras are an already omnipresent (and generally accepted) part of daily life.

Of course, sentiment data is just one piece of the puzzle. The other pieces remain: Which customers are unhappy and why? How can businesses use positive emotion to encourage brand loyalty and advocacy? What can businesses do to improve the customer experience?

These questions can only be answered by accessing even more customer data, and then fitting all the pieces together. So whether the sceptics like it or not, there is in fact a lot that big data can tell them, that small data can’t.

Satisfaction, Engagement & Loyalty are different. Here’s how…


Today’s buzzwords, these terms are great performance indicators, if used properly

Customer “satisfaction”, “engagement” and “loyalty” are now commonplace business lingo, but how these should be used to drive business is perhaps less well understood.

While all three terms relate to customers’ sentiment (ie feelings) and their behaviour, they are often (and mistakenly) used interchangeably. There are also attempts to assign them some sort of linear relationship, for example, Satisfaction → Engagement → Loyalty.

The trouble starts when business managers are asked how these variables are best measured, and even more dauntingly, to demonstrate their impact on business outcomes.

It is useful, therefore, to unpack these terms into their raw ingredients.

So what is Satisfaction?

This term is now most commonly defined as a customer’s post-use evaluation of a product or service.

This evaluation can come from a combination of factors including price, usability and convenience. For example, bank customers are often asked to evaluate their satisfaction (or lack of it) with their credit card’s interest rate or reward features, or the waiting times and teller’s helpfulness at their branch.

Satisfaction is therefore easily measured and deemed to be a good indicator of a customer’s likelihood to continue using a service, or to repurchase a product. A business’s satisfaction index can then be translated into potential customer behaviour, for example, the number of transactions made or products owned.

 Then what is Engagement?

Engagement originates from a different type of experience assessment, one that is more directly related to the customers’ personal desires, needs and goals.

For marketeers, engagement has become a more powerful (although somewhat less easily implementable) concept than satisfaction because it addresses the emotional connection with a customer. At a restaurant, rather than asking the customer to rate its cleanliness and service quality, engagement is about determining whether the customer’s dining experience was “memorable”, and whether he felt “passionate” about the food.

Given that engagement is a customer’s subjective assessment of the restaurant, engagement measures can be more difficult to interpret. However, an understanding of a customer’s engagement can help suggest how often and to what extent a customer may re-use a product or service.

And what is Loyalty?

 Generally defined as a customer’s resistance to switching to a competitor’s products or services, loyalty is not necessarily a result of either satisfaction or engagement.

Nor is loyalty measured using Net Promoter Scores (NPS). NPS is an indication of how much word of mouth (WoM) a product or service can generate. However, unlike loyalty, WoM does not directly translate into business performance. For example, many customers may be willing to recommend a product or service but do not themselves intend to repurchase the product or re-use the service.

Rather, loyalty can be assessed by asking customers a more direct question such as “How often do you use a competitor brand instead of this brand?”. By doing so, businesses can then derive a customer’s lifetime value.
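One common, simplified way to turn a retention estimate into a lifetime value figure is the geometric retention model, which assumes a constant per-period margin and retention rate. The sketch below is illustrative only; the margin, retention and discount figures are hypothetical, not drawn from any survey cited here.

```python
def lifetime_value(margin_per_period, retention_rate, discount_rate):
    """Simplified customer lifetime value under a geometric retention
    model: CLV = m * r / (1 + d - r), where m is the margin earned per
    period, r the probability the customer stays another period, and
    d the per-period discount rate."""
    return margin_per_period * retention_rate / (1 + discount_rate - retention_rate)

# e.g. a customer worth $100 a year who stays with 80% probability,
# with future margins discounted at 10% a year:
clv = lifetime_value(100, 0.8, 0.1)
```

A direct competitor-usage question of the kind above feeds the retention estimate; the formula then converts that behavioural signal into a dollar value a business can act on.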

Why bother?

The time has come for Voice of Customer metrics to receive the same focus as other business KPIs. However, this is not possible without an acute understanding of the relevance of each metric.

A starting point for all businesses is the measurement of satisfaction. Satisfaction ratings provide businesses with a good indicator of customers’ potential for repeat purchases and product deepening. Customer engagement measures, while more difficult to design, help to highlight ways to truly bond with customers. Meanwhile, loyalty measures help businesses ascertain the potential for customer attrition and retention.

All three measures are not the same, but they all matter.

Connecting The Dots


Welcome to Percipient Notes, a blog aimed at shining the spotlight on all aspects of data aggregation, architecture and analytics.

Connected data is the true opportunity of our modern times. Until recently, data silos were a grudgingly accepted consequence of new technology, M&As, proprietary systems, and geographic expansion. But that was when data was growing at relatively manageable rates. Organisations could still turn to batch downloads and Excel spreadsheets to piece together their fragmented datasets.

Today and going forward, the actual and opportunity cost of disjointed data will be far more material. As organisations rush to launch their suites of new digital apps, the data they collect is growing faster, and at a higher storage cost, than ever imagined. At the same time, the inability to tap this data for real-time customer, sales and marketing use means falling drastically short on expected investment returns.

However, the task is not just how to unify data, but how to extract value from it in as efficient, speedy and affordable a manner as possible. This requires organisations to also consider the processes that are keeping data apart, the tools to make sense of unified data, and the application of the insights that this data can bring. Cesare R. Mainardi, former CEO of Booz & Co, puts this task into perspective, “In spite of big data, we will still need to make informed judgments. Connecting the dots, or rather connecting the data, will remain key—in management, in business and in life.”