Data providers - Valuable partners for every player in the financial industry


Expressions like "Data is the new oil" may already be a cliché on social media, but there is still a lot of truth to them. With customer centricity and hyper-personalization on the rise, data becomes more and more valuable. However, just like crude oil, data in its raw form has little use. It needs to be cleaned, filtered, structured, matched and enriched, consent needs to be captured from the data owner (via GDPR-compliant opt-in requests) and the data needs to be analysed, before it can deliver its value in the form of new (customer) insights.

This is where most companies in the financial services sector are facing issues. While collecting and storing raw data is relatively easy, ensuring high-quality data sets (avoiding the "Garbage In – Garbage Out" principle) and converting them into insights is a whole different story. Most financial institutions face numerous data quality issues; at best, most of their data is mostly right, most of the time:

  • According to a Harvard Business Review study, only 3% of companies’ data meets basic quality standards

  • According to the Digital Banking Report, 60% of financial institution executives indicate that the quality of data used by marketing and business intelligence areas is either unacceptable (22%) or acceptable but requiring significant additional support (38%).

  • Gartner reports that poor data quality is a primary reason for 40% of all business initiatives failing to achieve their targeted benefits

  • Experian reports that 33% of customer or prospect data in financial institutions is considered to be inaccurate in some way.

  • …​

It should therefore come as no surprise that the importance of "Good Quality Data Sets" has increased considerably in recent years. The 1-10-100 rule applies here: if it costs you R1 to capture the information, it will cost you R10 to correct it afterwards and R100 to resolve the issues caused by any incorrect information.
Especially in the financial industry, where correct data is the foundation of trust (and trust is the most precious asset of a financial company), data quality needs to be good or even excellent. Data quality problems lead to wrong decisions, both tactical and strategic (unreliable analytics), to increased operational costs (e.g. incorrect addresses increase a bank's mailing costs) and to reputational damage with customers; they block the adoption of new AI/ML models (or exponentially increase the cost of such projects) and can result in high regulatory fines (for data related to KYC, AML, GDPR, MiFID, Basel II/III, Solvency II…​).
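
To make the 1-10-100 rule tangible, here is a back-of-the-envelope calculation. This is a minimal sketch; the record volume, error rate and detection rate are purely hypothetical assumptions:

```python
# Hypothetical illustration of the 1-10-100 rule for a batch of records.
RECORDS = 100_000     # records captured
ERROR_RATE = 0.05     # assumed share of records captured incorrectly
DETECTED = 0.60       # assumed share of errors caught and corrected in time

capture_cost = RECORDS * 1                                  # R1 to capture a record
correction_cost = RECORDS * ERROR_RATE * DETECTED * 10      # R10 per corrected error
failure_cost = RECORDS * ERROR_RATE * (1 - DETECTED) * 100  # R100 per unresolved error

print(f"Capture:    R{capture_cost:,.0f}")     # R100,000
print(f"Correction: R{correction_cost:,.0f}")  # R30,000
print(f"Failure:    R{failure_cost:,.0f}")     # R200,000
# Even at a 5% error rate, the cost of unresolved errors dwarfs the entire
# capture budget - hence the business case for quality at the source.
```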

As a result, new players are trying to provide an answer to this, like solutions automating data capture (OCR and RPA solutions, automated KYC workflows…​), platforms for data governance (Collibra, Datactics, Soda, OwlDQ, Monte Carlo, Bigeye, Ataccama…​), but also specialized vertical solutions, which try to improve specific data sets, like the Securities Master File (GoldenSource, Broadridge, Markit EDM…​), the Customer Master File (specialised CRM packages) or the data sets required for KYC (iComply, Docbyte, KYC Portal…​) or AML (AMLtrac, Clari5, FinAMLOR…​).
Apart from those solutions, which try to improve the governance and quality of the data internally generated in a financial institution, there are of course also dozens of players who provide high-quality, ready-to-use external data sets for specific domains. These so-called external data providers will also grow in importance.

Data providers capture data from different sources (including manual input) and execute all necessary steps to improve the data quality (like validation, transformation, matching, enriching, filtering…​). Afterwards they sell these cleaned and structured data sets to third parties. Their business model obviously relies on the scaling effect: instead of every financial institution doing this work by itself, a data provider can do it once and sell the result to several financial institutions.
Still, data providers have the reputation of being expensive, which is why many financial institutions continue to assemble certain data sets themselves. Internal data collection and processing seems cheaper, but comes with a lot of hidden costs which are not taken into account in the equation. If all costs are correctly accounted for, a data provider will almost always be cheaper, thanks to its scaling effect and its focus (resulting in specific skills and tooling that make the work more efficient).
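
The steps executed by such a data provider can be pictured as a classic data quality pipeline. Below is a minimal sketch; the field names, validation rule and reference registry are hypothetical, not any specific provider's process:

```python
import re

def validate(record: dict) -> bool:
    """Keep only records with a name and a plausible 10-digit company number."""
    return bool(record.get("name")) and bool(re.fullmatch(r"\d{10}", record.get("company_no", "")))

def transform(record: dict) -> dict:
    """Standardize formatting so records from different sources align."""
    return {**record, "name": record["name"].strip().upper()}

def enrich(record: dict, registry: dict) -> dict:
    """Add external reference data, e.g. the official legal form."""
    return {**record, "legal_form": registry.get(record["company_no"], "UNKNOWN")}

def pipeline(raw_records: list[dict], registry: dict) -> list[dict]:
    cleaned = [transform(r) for r in raw_records if validate(r)]  # validate + transform
    deduped = {r["company_no"]: r for r in cleaned}.values()      # match/deduplicate on key
    return [enrich(r, registry) for r in deduped]                 # enrich

raw = [
    {"company_no": "0123456789", "name": " acme nv "},
    {"company_no": "0123456789", "name": "ACME NV"},  # duplicate of the first record
    {"company_no": "bad-id", "name": "Ghost Corp"},   # fails validation
]
print(pipeline(raw, registry={"0123456789": "NV"}))
# [{'company_no': '0123456789', 'name': 'ACME NV', 'legal_form': 'NV'}]
```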

Data providers exist in all shapes and sizes, like

  • Government (associated) institutions, like in Belgium the KBO/BCE, the National Bank, the Staatsblad (SPF Justice), Identifin or the Social Security databases

  • BigTech companies, like Google, Facebook or LinkedIn

  • Specialized niche players for customer, risk and financial data, like Data.be, World-Check, Bureau van Dijk (Moody’s Analytics), Graydon, CrediWeb, Creditsafe, North Data, Bisnode, Infocredit Group, Probe Info, ICP…​

  • Specialized niche players for car (insurance) information, like in Belgium the DIV (the official body for registering vehicles), Veridass, IRES (Informex), FEBIAC (Technicar) or RSR (Datassur)

  • Large international data providers of financial data, like Bloomberg, Refinitiv, Morningstar, Markit, SIX Financial Information, FactSet, EDI, MSCI, S&P, FTSE Russell, Fitch, Moody’s…​

  • …​

Although these data providers are likely to increase in importance, they are also facing several issues:

  • Customers are demanding higher-quality data sets. As such, data providers need to improve (or even reinvent) their current processes, via data profiling and monitoring, extensive data validations, data enrichments…​ (a minimal profiling sketch follows after this list)

  • With data becoming more and more freely accessible on the internet in a reasonably well-structured way, it is no longer sufficient for a data provider to offer raw data. Instead data providers need to bring added value, e.g. in the form of data cleaning and structuring actions, but also by delivering results derived from the data, like calculated figures, ratios and KPIs. E.g. instead of just providing securities prices, a securities data provider can also provide the mean price over the last week/month, the variance of the price or the VaR of a security.
    The same goes for financial data about companies. Instead of just providing access to the annual report data, all kinds of ratios can be calculated, like Solvency ratios (e.g. the Current and Quick ratio), Asset Management ratios (e.g. Inventory Turnover or Days' Receivables), Debt Management ratios (e.g. the TIE ratio or Equity Multiplier) or Profitability ratios (e.g. ROA or ROE) (see the ratio sketch after this list).

  • With flexibility, accuracy and speed becoming more important, data providers need to offer more real-time integrations via simple and standardized integration patterns, like REST APIs, webhooks…​ Today many data providers still work with daily batches of files in very proprietary and complex formats, which are difficult and costly to integrate. This will change rapidly thanks to open standards (like REST APIs, OAuth2 authentication…​), as illustrated in the integration sketch after this list.
    Additionally, API (data provider) marketplaces will rise, which allow consumers to authenticate in a uniform way for different data providers and to profit from value-added services, like single billing, usage optimization, fallback to other data providers (in case of unavailability)…​

  • Data providers are asked to give more transparency and visibility on a customer's data consumption and the associated pricing. Customers want to pay only for what they use, but at the same time want tooling to keep their bill under control. E.g. the Google APIs seem to have a low cost per API call, but with a poor implementation and high usage, the monthly bill can grow exponentially (thus becoming very expensive) (see the usage guard sketch after this list).

  • As customers want to pay only for the data they use, good subscription models are required: models which automatically push any data updates (e.g. via a webhook) for entities to which a consumer is subscribed, but which at the same time allow detailed selection (filtering) criteria to determine which new data records trigger a notification. E.g. an asset manager might want to be notified of securities data updates only for securities in position, while still offering its customers the ability to trade any security (including those not yet in any customer's position) on specific stock markets (see the subscription filter sketch after this list).

  • Data providers are struggling to define a good license model. With data being processed and transformed dozens of times within a financial institution, it becomes difficult to trace the origin of data. As such, it becomes very difficult to check whether data licensed to a single institution is in fact used by multiple institutions (within the banking group) or even resold by the bank to third parties. Most data providers strongly restrict what data consumers may do with the provided data, but this is often extremely difficult to enforce (especially for smaller data providers selling data to large financial institutions).

  • With BigTech companies, like Google, Facebook and LinkedIn, collecting enormous amounts of data on a daily basis and having all the expertise to process this Big Data, these firms are ideally positioned to act as data providers. Especially as they can convince their users to check/correct/update their own data, enabling a crowdsourced data quality process.

  • A major issue for data providers remains the lack of a common (world-wide accepted) unique identifier for data entities. Even for simple entities like a person, a company, a point of sale or a security, it is nearly impossible to find a common identifier. E.g. a person can be identified by name, first name and birth date, but often this is not unique enough. Another option is the National Register Number or ID card number, but this information is often not permanent (e.g. temporary national register numbers or expiring ID cards) and/or heavily regulated. Other options are the Google, Facebook or LinkedIn identifiers, but not every person has an account on those platforms and often there is no formal identification (verification of identity) on those platforms (see the matching sketch after this list).
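
The profiling sketch referenced in the first issue above: a minimal example of the per-column quality metrics (completeness, uniqueness) a data provider could monitor. The sample data set and fields are hypothetical:

```python
def profile(records: list[dict]) -> dict:
    """Basic per-column quality metrics: completeness and number of distinct values."""
    columns = {key for record in records for key in record}
    report = {}
    for col in columns:
        filled = [r.get(col) for r in records if r.get(col) not in (None, "")]
        report[col] = {
            "completeness": round(len(filled) / len(records), 2),
            "distinct": len(set(filled)),
        }
    return report

sample = [
    {"iban": "BE71096123456769", "name": "Alice"},
    {"iban": "", "name": "Bob"},
    {"iban": "BE71096123456769", "name": None},
]
print(profile(sample))
# iban: 67% complete, 1 distinct value; name: 67% complete, 2 distinct values
```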
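
The ratio sketch referenced above: deriving a few of the mentioned ratios from annual report data. The figures are made up purely for illustration:

```python
# Hypothetical annual report figures for one company (in EUR).
report = {
    "current_assets": 500_000, "inventory": 120_000,
    "current_liabilities": 300_000, "total_assets": 2_000_000,
    "equity": 800_000, "net_income": 150_000,
}

current_ratio = report["current_assets"] / report["current_liabilities"]
quick_ratio = (report["current_assets"] - report["inventory"]) / report["current_liabilities"]
roa = report["net_income"] / report["total_assets"]  # Return on Assets
roe = report["net_income"] / report["equity"]        # Return on Equity
equity_multiplier = report["total_assets"] / report["equity"]

print(f"Current ratio: {current_ratio:.2f}, Quick ratio: {quick_ratio:.2f}")  # 1.67, 1.27
print(f"ROA: {roa:.1%}, ROE: {roe:.1%}, Equity multiplier: {equity_multiplier:.2f}")  # 7.5%, 18.8%, 2.50
```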
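
The integration sketch referenced above: fetching securities data over a standardized REST API with OAuth2 client-credentials authentication. The endpoints and field names are entirely hypothetical, not any real provider's API:

```python
import requests

TOKEN_URL = "https://dataprovider.example/oauth2/token"  # hypothetical endpoint
API_URL = "https://dataprovider.example/v1/securities"   # hypothetical endpoint

def get_token(client_id: str, client_secret: str) -> str:
    """Obtain an OAuth2 access token via the client-credentials grant."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]

def get_security(isin: str, token: str) -> dict:
    """Fetch reference data for one security, identified by its ISIN."""
    resp = requests.get(f"{API_URL}/{isin}",
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()

# token = get_token("my-client-id", "my-client-secret")
# print(get_security("US0378331005", token))
```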
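
The usage guard sketch referenced above: a simple client-side counter a data consumer could wrap around a pay-per-call API to keep the monthly bill under control. The price and budget are hypothetical:

```python
class UsageGuard:
    """Track pay-per-call API usage against a monthly budget."""

    def __init__(self, price_per_call: float, monthly_budget: float):
        self.price_per_call = price_per_call
        self.monthly_budget = monthly_budget
        self.calls = 0

    def spent(self) -> float:
        return self.calls * self.price_per_call

    def check(self) -> None:
        """Call before each API request; raises once the budget would be exceeded."""
        if self.spent() + self.price_per_call > self.monthly_budget:
            raise RuntimeError(
                f"Monthly budget of {self.monthly_budget:.2f} exhausted after {self.calls} calls")
        self.calls += 1

guard = UsageGuard(price_per_call=0.004, monthly_budget=200.0)
guard.check()  # under these assumptions, 50,000 calls fit within the budget
```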
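
The subscription filter sketch referenced above: evaluating whether an incoming data update (e.g. delivered via a webhook) matches a consumer's subscription criteria. The criteria format is a hypothetical simplification:

```python
# Hypothetical subscription: push updates for securities in position,
# plus any update on selected stock markets.
subscription = {
    "isins_in_position": {"US0378331005", "BE0974293251"},
    "markets": {"XBRU", "XNAS"},
}

def matches(update: dict, sub: dict) -> bool:
    """Decide whether an incoming update should be pushed to this consumer."""
    return (update["isin"] in sub["isins_in_position"]
            or update.get("market") in sub["markets"])

updates = [
    {"isin": "US0378331005", "market": "XNAS"},  # in position -> push
    {"isin": "FR0000131104", "market": "XPAR"},  # no match -> skip
    {"isin": "US5949181045", "market": "XNAS"},  # market match -> push
]
print([u["isin"] for u in updates if matches(u, subscription)])
# ['US0378331005', 'US5949181045']
```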
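
Finally, the matching sketch referenced in the last issue: without a universal identifier, providers fall back on heuristic matching, e.g. on name plus birth date. A minimal sketch with an illustrative similarity threshold:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def same_person(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    """Heuristic match on full name + birth date, absent a shared unique identifier."""
    if rec_a["birth_date"] != rec_b["birth_date"]:
        return False
    full_a = f"{rec_a['first_name']} {rec_a['last_name']}"
    full_b = f"{rec_b['first_name']} {rec_b['last_name']}"
    return similarity(full_a, full_b) >= threshold

a = {"first_name": "Jean", "last_name": "Dupont", "birth_date": "1980-04-02"}
b = {"first_name": "Jean-Pierre", "last_name": "Dupont", "birth_date": "1980-04-02"}
print(same_person(a, b))  # False at the 0.85 threshold; a lower threshold would merge them
```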

However, data providers will most likely overcome these issues, meaning these firms will continue to grow in the future and the data they provide will become a commodity (i.e. a non-differentiating factor) for financial institutions. The enrichments financial institutions can add and the insights/analytics they can derive from all this data will then determine how they compete in the domain of data management.
