WHAT LIFE STAGE ARE YOUR CUSTOMERS IN?


Insurers and Wealth Managers have long aspired to segment customers based on their life stage. Big Data can finally make this happen

by Ai Meun Lim, Chief Product Officer, Percipient, and Dr Shidan Murphy, Senior Data Scientist, Angoss Software Corporation

Shakespeare describes “The Seven Ages of Man” as the Infant, Schoolboy, Lover, Soldier, Justice, Pantaloon (roughly translated as foolish old man) and Old Age. While today our view of one’s life stage is perhaps less cynical (and poetic!), the desire for such a clear classification remains a strong business goal across a range of industries.

Most businesses implicitly recognize that an individual’s life stage contributes to a multifaceted view of a customer’s needs. Further, in tightly regulated financial businesses such as wealth management and insurance, assessing a customer’s life stage is required to establish risk tolerance and financial objectives. Understanding a customer’s life stage is therefore essential to any customer-centric business strategy.

However, despite its importance, most financial institutions have failed to adequately monetize the concept of a life stage. The major limitations are:
– Outdated perceptions of a customer’s life stage
– Unavailable or outdated customer data
– Inability to track customer life changes
– A lack of predictive analytics

Why a Dynamic Segmentation of Life Stages?

Traditional methods have based life stage assessments on easily quantifiable factors such as age and income. Yet an age- or income-only lens has proven inadequate for detecting financially important events such as getting married, starting a family, launching a business venture, or taking care of elderly parents.

While financial institutions today seek automatic and fluid tracking of a customer’s life stage, most still rely on ad hoc updates from customers themselves and, at best, annual customer surveys. Such passive tracking methods point to the difficulty financial institutions have in meeting their “duty of care” obligations, let alone running profitable and targeted life stage campaigns.

The development of a dynamic segmentation of life stages is therefore a generational leap in customer understanding. By digesting real-time data from traditional and non-traditional sources and using advanced algorithms to analyse it, it is now possible to create a granular customer segmentation that evolves with the customer’s lifecycle.

Life Stage Analytics

Percipient and Angoss are collaborating to offer dynamic life stage segmentation, combining tools capable of unifying such data with technological advances in big data analytics.

The foundation of this approach is data – and lots of it. For the segmentation to be meaningful, the data must include both conventional and unconventional data sources.  Conventional sources include demographic data, spending patterns, and both assets and liabilities. Although these data are readily available in financial institutions, they are often underutilized.

Advancements in data technology mean there is also scope to incorporate unconventional data sources such as social media (think LinkedIn), wearable devices (think Fitbit), digital footprints and third-party data aggregators. Some financial institutions may already be collecting this data, but have not yet put it to use for this purpose.

Customer segmentation is then created using cutting-edge analytics. The analytical approach combines business rules and next-generation tools and techniques to create granular-level customer categorizations. The life stage categorizations are dynamic and regularly updated to reflect changes in the data feeds.
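As a purely illustrative sketch of what such a hybrid approach might look like, the snippet below clusters customers on a handful of assumed features and then overlays a simple business rule. The feature names, thresholds, cluster count and use of scikit-learn are assumptions made for the sake of the example, not a description of the Percipient–Angoss methodology.

```python
# Purely illustrative: cluster customers on assumed features, then apply a
# business rule on top. Feature names, thresholds and the cluster count are
# hypothetical; they do not describe any vendor's actual model.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

customers = pd.DataFrame({
    "age":              [24, 31, 38, 45, 52, 63],
    "monthly_income":   [3200, 5400, 7800, 9100, 8800, 4100],
    "dependants":       [0, 0, 2, 3, 2, 0],
    "mortgage_balance": [0, 120000, 340000, 280000, 90000, 0],
    "travel_spend_3m":  [900, 2500, 400, 600, 1800, 2200],
})

# Statistical segmentation: group customers with similar financial footprints
scaled = StandardScaler().fit_transform(customers)
customers["segment"] = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(scaled)

# Business-rule overlay: a detected life event trumps the statistical segment
customers.loc[(customers["mortgage_balance"] > 0) & (customers["dependants"] > 0),
              "life_stage"] = "young_family"
customers["life_stage"] = customers["life_stage"].fillna(
    "segment_" + customers["segment"].astype(str))

print(customers[["age", "segment", "life_stage"]])
```

Because the rule layer sits on top of the model output, the categorization can be re-run whenever a new data feed arrives, which is what keeps the segmentation dynamic.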

Such dynamic categorizations create opportunities for, among other things, next-product recommendations, customer retention, and calculation of the actual and expected profitability of the customer base.
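A next-product recommendation, for instance, can then be as simple as a lookup keyed on the maintained life stage. The product mappings below are invented for illustration only.

```python
# Hypothetical mapping from life stage to a next-product suggestion.
NEXT_PRODUCT = {
    "young_family":   "term life insurance",
    "pre_retirement": "annuity review",
    "student":        "starter savings plan",
}

def recommend(life_stage: str) -> str:
    # Fall back to a generic offer when the life stage has no specific mapping
    return NEXT_PRODUCT.get(life_stage, "general financial health check")

print(recommend("young_family"))   # term life insurance
```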

A Revolution in Needs Based Selling

The financial industry’s mantra is that product sales should be underpinned by the customer’s life stage. Today, the data and next-generation tools are in place to create a more accurate, dynamic and granular view of customer needs. An accurate, analytics-driven understanding of life stages is set to revolutionize needs-based selling.


Banks, Declutter Your Data Architecture!


Banks do not need to be wedded to complexity, says Navin Suri, Percipient’s CEO 

Marie Kondō’s bestseller, The Life-Changing Magic of Tidying Up: The Japanese Art of Decluttering and Organizing, is sweeping the world. Her message that simplicity pays off applies as much to a bank’s data architecture as it does to a person’s wardrobe.

Few bankers would argue with the notion that the IT architecture in banks is overly complex and as a result, far less productive than it could be. So how did we get here? Rather than a single blueprint, most banks’ IT evolved out of the global financial industry’s changing consumer demands, regulatory requirements, geographic expansion, and M&As. This has led to a tangled web of diverse operational systems, databases and data tools.

Rapid Digitisation

But rapid digitisation has put this complex architecture under further stress. Amid dire warnings, such as the one from Francisco Gonzales, then CEO of BBVA, that non-tech ready banks “face certain death”, many rushed to pick up the pace of their digital transformation.

Banks rolled out their mobile apps and digital services by adopting a so-called “two-speed infrastructure”, that is, enhanced capabilities at the front, built on a patchwork of legacy systems at the back. Now over a third of all banks, according to a 2015 Capgemini survey, say “building the architecture/ application infrastructure supporting transformation of the apps landscape” is their topmost priority.

Fragmented Infrastructure

Meanwhile a key reward of digitisation – high value business intelligence – remains elusive. Banking circles may be abuzz with talk of big data, but the lack of interoperability across systems makes this difficult to achieve. In some cases, cost effective big data processing technologies like Hadoop have actually deepened the problem by introducing yet more elements to an already unwieldy architecture.

To address the problem, financial institutions have opted for two vastly contrasting approaches. Either paper over the cracks with a growing number of manual processes, or bite the bullet, as UBS is doing. The world’s largest private bank announced in October last year that it would spend US$1 billion on an IT overhaul to integrate its “historically fragmented infrastructure”.

Attack On Complexity

However, for those banks unable or unwilling to rip out and replace their existing systems, there is a third way. The availability of highly innovative open source software offers banks the option of using middleware to declutter and integrate what they have.

Percipient’s data technology solutions, for example, enable banks to pull together all their data without the need for data duplication, enterprise data warehouses, an array of data transformation tools, or new processes and skills. These solutions are, at their core, an attack on the architectural complexity that banks have come to grudgingly accept.
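The sketch below illustrates the general idea behind such middleware: query each source system where it lives and combine only the result sets, with no warehouse load and no duplicated copies. The in-memory SQLite sources and column names are stand-ins for real connectors, not Percipient’s product code.

```python
# Illustrative only: federating queries across systems in place, rather than
# copying data into a warehouse. The sources here are simulated with in-memory
# SQLite; a real deployment would use live connectors to the actual systems.
import sqlite3
import pandas as pd

core_banking = sqlite3.connect(":memory:")
core_banking.execute("CREATE TABLE accounts (cust_id INT, balance REAL)")
core_banking.execute("INSERT INTO accounts VALUES (1, 5200.0), (2, 310.5)")

crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE profiles (cust_id INT, segment TEXT)")
crm.execute("INSERT INTO profiles VALUES (1, 'young_family'), (2, 'student')")

# Each source is queried where it lives; only the result sets are combined
balances = pd.read_sql("SELECT cust_id, balance FROM accounts", core_banking)
segments = pd.read_sql("SELECT cust_id, segment FROM profiles", crm)
unified_view = balances.merge(segments, on="cust_id")
print(unified_view)
```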

Visible Order

As Marie Kondō points out, “Visible mess helps distract us from the true source of the disorder.” In the case of most banks, the true source of the disorder appears to be an IT infrastructure derived, rather than designed, to meet the huge demands placed on it by digitisation. There is now a real opportunity to turn this visible mess into visible order.

This article was a contribution to, and originally appeared in, finews.asia

Packing Real Punch Into Customer 360s

In marketing circles, the buzzphrase for the first quarter of 2017 was Customer Data Platform (CDP).

Although the term was coined in 2013, it was Gartner’s decision in July 2016 to introduce it as a new industry category within its digital marketing “hype cycle” that gave it real legs.

Un-holistic

To date, enterprises have relied on their CRM, channel or transaction systems to provide them with customer views. But these have been far from “holistic”, with the ambition to build a Customer 360 platform largely hampered by data silos and technology bottlenecks.

According to advocates, CDP platforms elevate UYC (Understand Your Customer) initiatives to a whole new level by unifying all customer data from marketing, sales and service channels into one database or interface. This is then made available to the entire organisation as an integrated view of each customer, rather than as an anonymised view of broad customer segments, as is the case with other data platforms.

Hence your platform isn’t a CDP unless it boasts the following features:

  • The ability to track a customer’s activity within an enterprise. This must apply to all touch points, whether traditional or digital, and capture the when, what, how and why of every transaction.
  • The ability to plot the customers’ complete and personalised journey by piecing together data gathered from the customer’s devices, channels, and engagements. By so doing, enterprises are able to define the customer’s choices, experiences, and ultimately, sentiment.
  • The ability to support marketers across multiple customer facing applications. That includes helping such teams design their product recommendations, conduct cross sell optimisations, track customer retention and attrition, and manage their advertising and branding.
  • The ability to present a single source of truth by maintaining a persisted and updated profile of each customer. This profile should be usable across the entire enterprise and hence drive real time insight, decision making and execution relevant to the individual customer. (A minimal sketch of such a profile follows this list.)
  • The ability to ensure data privacy and governance standards are maintained despite the shift from segmented to individualised customer data. This includes strict limits on the number of data copies and minimising the risks of data leakage.
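As flagged in the list above, here is one minimal way a persisted, continuously updated customer profile might be shaped. The field names and event structure are assumptions for illustration, not any vendor’s schema.

```python
# Illustrative only: a minimal "single source of truth" profile, keyed by one
# customer ID and updated in place as events arrive from any channel.
from collections import defaultdict
from datetime import datetime, timezone

profiles = defaultdict(lambda: {"touchpoints": [], "attributes": {}})

def ingest_event(customer_id: str, channel: str, event_type: str, attributes: dict):
    """Merge one channel event into the customer's persisted profile."""
    profile = profiles[customer_id]
    profile["touchpoints"].append({
        "channel": channel,
        "event": event_type,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    profile["attributes"].update(attributes)   # latest value wins

ingest_event("c-001", "mobile_app", "login", {"preferred_channel": "mobile_app"})
ingest_event("c-001", "call_centre", "complaint", {"sentiment": "negative"})
print(profiles["c-001"])
```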

More than just a Single Customer View

Importantly, CDPs are not just about embellishing a customer’s profile data or even establishing a single customer view. Embedded in the concept of CDPs is the ability to act on the intelligence that CDPs provide.

The ownership of the CDP is also important. According to technologist David Raab, founder of the Customer Data Platform Institute, CDPs represent “one of the few fundamental changes in marketing technology in the past decade, because it shifts control of the customer database from IT to marketers.”

At the core of a CDP is a marketer-managed database that is accessible to external systems and designed to support, for example, web-marketing campaigns that go beyond simple targeted promotions. CDPs must be capable of delivering pinpoint customer specificity: web content, product recommendations and service alerts that are entirely customised to the individual.

The making of a Customer Data Platform

Enterprises seeking to build their own CDP, or use one or more vendors to do so, can leverage on the many open source technologies now readily available. Elements of a CDP include connectors to a variety of data sources, a data store for structured and unstructured data, tools for data preparation flows, identity resolution processes, artificial intelligence systems, and integration to customer applications.
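Of these elements, identity resolution is often the least familiar. The deliberately naive sketch below links records from two systems on a normalised email address; real resolution engines add phone numbers, device IDs and fuzzy matching, and every record shape shown here is hypothetical.

```python
# Illustrative only: a naive identity-resolution step that links records from
# two systems on a normalised email address. Real CDPs use richer matching;
# this only shows the shape of the problem.
def normalise_email(email: str) -> str:
    return email.strip().lower()

crm_records = [{"crm_id": 101, "email": "Jane.Doe@example.com "}]
web_records = [{"web_id": "w-9", "email": "jane.doe@example.com"}]

# Index one source by normalised email, then look up records from the other
index = {normalise_email(r["email"]): r for r in crm_records}
resolved = []
for web in web_records:
    match = index.get(normalise_email(web["email"]))
    if match:
        resolved.append({"crm_id": match["crm_id"], "web_id": web["web_id"]})

print(resolved)  # [{'crm_id': 101, 'web_id': 'w-9'}]
```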

While some or all of these elements may already be in operation at an enterprise, CDPs force technology and data science teams to support a key digital shift: personalised interactions based on a holistic, data-driven view of every customer. For most marketers today, this remains wholly out of reach.

Open Banking is Happening, Folks!


On February 2, 2017, the UK’s open banking ambitions took a step closer to reality when the Competition and Markets Authority (CMA) issued its final order for the implementation of open banking reforms.

The final order sets down a strict one-year time frame for nine of the UK’s largest banks to launch their open banking API interfaces. This means that by the end of Q1, 2018, these banks will have enabled their customer data to be securely accessed by competitors and third parties.
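In practice, “securely accessed” means a licensed third party calling the bank’s API with a token issued under the customer’s consent. The sketch below shows only the general shape of such a call; the URL, token handling and response fields are placeholders, not the actual Open Banking UK specification.

```python
# Illustrative only: the general shape of a third-party call to a bank's
# account-information API. The URL, token and response fields are placeholders,
# not the real Open Banking UK standard.
import requests

API_BASE = "https://api.examplebank.co.uk/open-banking/v1"   # placeholder
ACCESS_TOKEN = "customer-consented-oauth-token"              # issued via a consent flow

def list_accounts() -> list:
    response = requests.get(
        f"{API_BASE}/accounts",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("accounts", [])

# A licensed third party would call this only after the customer grants consent:
# print(list_accounts())
```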

A definite shift

For those of us who, having worked in banks, once considered the “open banking” concept to be no more than a pipe dream, this marks a significant milestone. If data is the new oil, and banks are really only just starting to mine its full potential, then sharing this asset with the external world is a very tough decision, new opportunities notwithstanding. Which is why it was unlikely to happen without the CMA’s firm hand.

But all that is history. The UK’s open API banking initiative now looks to be set in stone, with other regulators likely to follow suit. Meanwhile, the European Union already has its own open banking directive – known as PSD2 – due to kick in in 2018. So how will this change the lives of banking customers?

Choice, choice and more choice

The CMA’s Open Banking report, published in August last year, gives us some idea of what is to come. At this point, the most relatable improvement, from a consumer’s point of view, is the potential to view multiple bank accounts via a single app. Many industry players have already begun to envision that such an app could be used not only to keep better track of one’s finances, but also to “cherry-pick” banking services.

The development of a so-called “banking-as-a-service” (BaaS) platform would pave the way for customers to easily select and assemble a set of banking services from across several providers. Similar to other portals, the criteria for choosing one bank’s services over another would extend beyond just fee and features comparisons, and include the available channels and digital experience on offer.

The new era of inter-bank platforms

However, whether this takes off in the way the CMA would like depends in large part on whether individual customers are sufficiently reassured about the security and confidentiality of their data. This in turn rests on the regulations, IT platforms and security protocols being established to support such BaaS projects.

For example, the PSD2’s Access to Account requirement stipulates that banks are obliged to share their customers’ payment account information with third party service providers. While the regulators have indicated that strict conditions apply, including how “payment account” is defined and how service providers will be licensed, the industry is still struggling to interpret the guidelines.

Similarly, the private sector has sought to build common standard open banking protocols and platforms. These include the Ixaris-led Open Payment Ecosystem, the Microsoft and Intuit-led Open Financial Exchange (OFX) standard for the exchange of financial information, and the Banking Industry Architecture Network (BIAN), an association established to define a common IT framework for banking inter-operability. It remains to be seen which of these will take root.

No pain, no gain

 Despite the flurry of activity, it is no surprise that some banks have serious reservations about the effort required to support open banking. Aside from regulations that are extensive and still evolving, open banking brings the risks of fraudulent third parties, digital intrusion, impersonation, and the illegal use of data.

The pain of acquiring the resources and new skill sets, including in risk, compliance, technology, and data science, suggests substantial short term pain ahead of gains that are as yet unquantifiable. Clearly, the first salvos in support of open banking have been made, but there is still much work to do.


2016 Revelations

 

Big data can be funny too…no…seriously! 


As the year draws to a close, and we treat ourselves to some well-earned merry making, here is a look back at some 2016 big data events that prove big data isn’t all dull and serious.

A World of How, What and Why


Google’s annual list of most googled terms is always revealing and this year’s is no different. Australia’s Top Ten “How to” searches in 2016 suggest that Australians are still concerned with the challenges of daily life: “How to tie a tie?” and “How to get rid of pimples?”

UK-based Googlers, on the other hand, were concerned with a somewhat more esoteric question: “How to make slime?”. Their Top Ten also included the more sadly philosophical: “How to accept myself for who I am?”.

Meanwhile, the “Why” question uppermost with Swedish Googlers in 2016 was “Why are eggs brown or white?”. Clearly, the world is still full of innocents.

 (Extra) Ordinary Gifts


This year’s Singles Day has been deemed Alibaba’s most successful yet. Sales this year reached USD 17.79 billion, compared to USD 14.3 billion last year. Unsurprisingly, top sellers were phones and appliances.

But everyday household items are also big hits on Singles Day. Hera BB cream from Korea and Laurier sanitary napkins from Japan are traditional favourites, as is milk.

Last year, one German supplier alone accounted for 2.35 million litres (USD 14.3 million) of Singles Day sales of liquid milk. On the same auspicious day this year, an Australian manufacturer was able to shift 350,000 goat soap items worth over USD 1 million.

Deep Learners?


Alongside top trending searches, Google’s 2016 Breakout Searches (i.e. year-on-year searches that rose by 5,000% or more) were equally eye-catching.

While “Severe Weather” featured in Germany’s list, “Nobel Prize” in Sweden, and “Melania Trump” in Slovenia, China’s list contained only two: “Deep Learning” and “Machine Learning”.

This presumably has something to do with ordinary people wanting to understand what organisations – from the government to corporates – are doing with their data. For example, e-retailer Shangpin.com has determined that Chinese consumers prefer to shop for underwear in the late evening. While it is still unclear how this will drive future marketing, the retailer says such “subtleties” provide valuable insights into the Chinese psyche.

 Better Restroom Service


2016 also provided some insight into Japanese restroom habits and preferences. A report by the Japan Times revealed that highway operator, Central Nippon Expressway Co. (NEXCO Central) had installed 3,000 sensors, including motion detectors for toilet bowls, in 51 restroom locations along the Shin-Tomei Expressway.

Analysis of the data collected suggests that cubicle use is gaining on urinals, and that an average cubicle visit lasts four minutes and four seconds, up 35 seconds in seven years, which NEXCO puts down to the increased use of mobile phones in restrooms.

NEXCO expects this data to help it improve its service. For male commuters faced with long cubicle waiting times, this will be a relief.

It’s Been Weird


Spotify dug deep into its data stores to come up with one of the year’s most interesting end-of-year campaigns. Displayed on billboards at select locations around the world, the campaign highlighted the behaviour of some of the service’s users, along with Spotify’s own wry reactions.

Under a general tagline of “Thanks, 2016. It’s been weird”, examples included:

  • Dear 3,749 people who streamed “It’s The End Of The World As We Know It” the day of the Brexit vote. Hang on in there.
  • To the 1,235 guys who loved the Girls’ Night playlist this year. We Love You.
  • Dear person who played “Sorry” 42 times on Valentine’s Day. What did you do?

The campaign was developed by Spotify’s in-house team. Chief Marketing Officer Seth Farbman said the data was inspiring and gave an insight into people’s emotions.

We agree – where can you detect more emotion than in a person’s playlist?

When the BACK end isn’t running as smoothly as the FRONT

Many organisations’ impressive new digital apps are underpinned by slow and cumbersome backends


The dollar value of the digital transformation market is impressive indeed. IDC’s January 2016 report suggests that worldwide spending on digital transformation technologies is growing at around 17% pa to reach over $2.1 trillion by 2019.

However, it is clear that this spending is not uniform across enterprises’ digital infrastructure. While mobile app development tends to receive the lion’s share of an enterprise’s attention, the focus on back end improvements, such as data security and data integration, appears to lag. For example, a 2016 Gartner survey found that while enterprises expected to increase their mobile app spending by an average of 30%, their overall mobile budgets were expected to grow by only around 10% over the same period.

This trend is worrying for several reasons. Outdated and disjointed systems run a greater risk of failures and security breaches. And coupled with the rapid release of new digital channels, any malfunction is likely to have a wider, more immediate and higher profile impact on customers.

However, system disconnects are not wholly related to cost. Here we look at three underlying reasons for an enterprise’s mismatched IT growth.

  1. The two-speed approach: Inevitable… and useful?

 Firstly, the process of digitisation has caused a shift in the enterprise IT decision-making process. According to a Bain & Company 2014 survey, nearly one third of technology purchasing power has migrated from CIOs and technology teams to business stakeholders.

This shift in power and accountability is in line with the need to respond promptly to customers’ fast changing digital preferences. As a result, business stakeholders and marketing executives are now better able to make “quick fixes” to customer-centric apps and digital channel features without waiting to ensure long term alignment with back-end systems. Some business stakeholders even resort to “Shadow IT”, i.e. new technologies not yet sanctioned by their IT teams.

The consequence of this is a “two-speed” IT infrastructure. Despite the risks, this is a situation some analysts would deem inevitable and even desirable, in order to ensure that customer experience demands are met without delay. The problem starts when, having been launched, interest in the project wanes in favour of the next big thing. Under such circumstances, neither business owners nor IT teams work to ensure that back-end systems “catch-up” and are able to sustainably support front-end innovation. The lesson here is that a two-speed approach is acceptable, but only if the arrangement is temporary.

  2. Which comes first: the skills or the project?

Another weak link in the synchronisation of front and back end systems development is the prevailing skills shortage. According to a survey of 2,600 CIOs conducted by Gartner, the biggest obstacle to digital transformation is the lack of technology talent. In fact, so deep is the crisis that, according to Gartner, there are not enough qualified graduates to fill even 30 percent of the 1.4 million computer specialist job openings projected for 2020 by the U.S. Department of Labor.

So which IT sectors are most affected? While front end and back end developers are equally in vogue, the key need is to bring the two together. Across a variety of HR surveys, project management and solutions architecture are among the top five most hard-to-find IT skills today. These skills encapsulate the ability to scope out a business need, dimension it, identify the frontend solution, and manage its backend delivery.

While enterprises remain committed to launching new digital offerings, this shortage is dampening their enthusiasm for large scale IT upgrades. Albert Ellis, CEO of the Harvey Nash Group, notes that, “In the past, CIOs would set their IT strategy first, followed by a resourcing plan. Now, it’s all changed. Certain skills are in such demand IT executives are facing up to the reality there’s no point in having the right technology platform if you don’t have the right people to build and support it.”

The solution, suggest HR specialists, is to focus on access rather than ownership. In a world undergoing digital transformation, a business model that relies only on in-house talent no longer appears tenable. The question then is where, not whether, to locate external talent.

  3. Embedding new technology: Out with the old?

A third barrier to back end IT growth is the difficulty of incorporating new technology into existing enterprise infrastructure. While front ends can be created afresh, back end upgrades are often a question of merging the old and the new – no easy task. It is perhaps not surprising that a new A.T. Kearney report found that the failure to integrate new technologies with legacy systems was cited by 59% of respondents as the reason they were not able to achieve their digitisation plans.

This is driving enterprises to find creative solutions to the problem. One way is for them to adopt new platforms altogether by leveraging on cloud computing services such as PaaS (platform as a service). This model leaves enterprises free to develop their own apps, while PaaS vendors provide the key backend components – hosting, operating systems, databases and the like – needed to support such applications. Today the global market for PaaS is projected by Global Industry Analysts to reach USD 7.5 billion by 2020, with Asia Pacific the fastest growing market at a projected CAGR of over 20%.

Of course, enterprises that have spent large amounts on their back-end systems will find this solution harder to swallow. However, many are using application programming interfaces (APIs) to leverage on new software components that have been developed internally and externally. According to one study, 67% of “digital disruptors”, that is, enterprises most successful at their digital transformations, put a high emphasis on the managed use of modern API techniques, standards and protocols to integrate their front and back end systems.

Over the next few years, there is likely to be a concerted shift of focus from already polished digital interfaces to an equally polished back end infrastructure. Turning digital offerings into primary profit drivers is unlikely unless enterprises also deliver data security, operational efficiency, and processing speed.


To personalise or not: WhatsApp re-ignites the data sharing debate

WhatsApp announced last month that it would allow its parent company Facebook to sell its user data to advertisers. The news was met with widespread consternation.

The timing too couldn’t be more unfortunate, coming just a fortnight after a landmark US Court of Appeal ruling in favour of Microsoft’s bid not to hand over its customer emails to the US federal government.

Also playing on many minds is the EU’s General Data Protection Regulation (GDPR), adopted in April this year after four years of debate, and due to come into force by mid-2018. Corporations located in EU member states are already scrambling to ensure compliance with requirements that are variously described as “onerous”, “radical” or “doesn’t go far enough”.

As both the private and public sectors struggle to draw the line between data privacy and data utility, it is perhaps time for individuals to ask themselves the same questions. What do I stand to gain from government and corporate use of my data? And when does this cross the line?

This is not just a philosophical exercise. In fact, data privacy advocates strongly espouse the concept that an individual’s personal data belongs to the individual. Governments and corporations are deemed “temporary custodians” of the data they collect, and can choose to offer personalised services, but based only on the information that individuals are willing to share with them.

Billed as a first of its kind, the high profile MyData 2016 conference taking place in Helsinki this week promotes this view of “human-centric data management”. Organisers of the event take pains to stress that their intention is not to stifle innovation, but rather to lay the ground rules for the ethical use of data. However, implicit in the conference themes is the notion that the scales may now have tilted too far in favour of the organisation versus the individual.

Aside from the issues of data security (ie keeping data safe from hackers and unauthorised persons), research suggests that individuals greatly fear the improper use of big data to drive key decisions made about them. Media, internet, telecommunication and insurance companies are said to face the greatest “data trust deficit” and need to make the most effort to ensure that their brand is associated with data transparency and accountability.

On the other hand, individuals should not underestimate the extent to which big data mining has become an expectation. Research by the Aberdeen Group suggests that 74 percent of online consumers actually get frustrated with website offers and promotions that have nothing to do with their interests. The research also found that more than half of all consumers are now more inclined to use a retailer if it offers a good personalised experience.

The key appears to be control. According to CMO.com, more than 60% of online users, while valuing personalisation, sought to understand how websites select such content. A similar number wanted the ability to influence the final results by proactively providing or editing personal information about themselves.

However, it is not marketing websites but IoT devices and wearables that will be the biggest test of users’ embrace of, or disaffection with, big data. Today’s low cost wearables are effectively subsidised by the potential monetisation of the data these devices generate. This data is vast and highly personal, and while the IoT trend is not new, experts expect wearable technology to escalate dramatically over the next few years.

It is therefore increasingly incumbent on the industry to put in place measures that protect the data from abuse. Manulife’s MOVE and AIA’s Vitality programmes are classic examples of how it is possible to align individual and corporate objectives to the benefit of both. Individual policy holders are provided with fitness trackers that record workout data, and rewarded for healthy behaviour through discounts or points. This is done by ensuring workout data is explicitly collected and consent explicitly provided, with a firm undertaking that the data will not be shared or used for other purposes.

It is said that with great power comes great responsibility. Big data has the power to transform organisations and disrupt industry practices. But in delivering the personalisation that individuals now demand, organisations cannot lose sight of their responsibility to maintain the individual’s innate desire for self-determination, even in the digital world.