
AI in Financial Services - A buzzword that is here to stay!


In a few of my most recent blogs I tried to demystify some of the buzzwords (like blockchain, Low- and No-Code platforms, RPA…​) commonly used in the financial services industry. These buzzwords often entail interesting innovations, but contrary to their promise, they are not silver bullets that solve every problem.

Another such buzzword is AI (also referred to as Machine Learning, Deep Learning, Reinforcement Learning…​ - the differences between those terms put aside). This term too is seriously hyped and creates unrealistic expectations, but contrary to many other buzzwords, I truly believe it will have a much larger impact on the financial services industry than most of the others.
This opinion is backed by studies of McKinsey and PwC indicating that 72% of company leaders consider AI to be the most important competitive advantage of the future and expect this technology to be the most disruptive force in the decades to come.

Deep Learning (= DL) is a subset of Machine Learning (= ML), which is in turn a subset of Artificial Intelligence (= AI). AI is the all-encompassing concept that emerged first.

Nonetheless, to correctly grasp the impact of AI, it is important to first clear the smokescreen and understand that there is no magic under the hood. AI is (today) nothing more than a mathematical technique (based on neural networks) for advanced pattern recognition, in which statistical correlations identified in large data sets are modelled in an advanced way. At a high level it all comes down to calculating the right weights in an enormous mathematical matrix, which maps the available inputs to the outputs (at least in supervised learning, which is the most used form in the industry).
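
The "calculating the right weights" view of supervised learning can be sketched in a few lines. The example below is purely illustrative (a single linear "neuron" on synthetic data with known true weights, not a real neural network): gradient descent gradually adjusts the weights until the model maps the training inputs to the training outputs.

```python
import random

# Minimal sketch of supervised learning as weight estimation (illustrative
# only): a single linear "neuron" y = w1*x1 + w2*x2 + b, trained by gradient
# descent on a synthetic training set whose true weights are known.
random.seed(0)
TRUE_W1, TRUE_W2, TRUE_B = 2.0, -1.0, 0.5

# Training set: pairs of inputs and the output they map to.
data = []
for _ in range(200):
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    data.append(((x1, x2), TRUE_W1 * x1 + TRUE_W2 * x2 + TRUE_B))

w1 = w2 = b = 0.0          # the weights to be "learned"
lr = 0.1                   # learning rate
for _ in range(500):       # training epochs
    for (x1, x2), y in data:
        pred = w1 * x1 + w2 * x2 + b
        err = pred - y     # gradient of the squared error w.r.t. pred
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

print(round(w1, 2), round(w2, 2), round(b, 2))  # close to 2.0 -1.0 0.5
```

A real AI model does exactly this, only with thousands to millions of weights, non-linear activation functions and far larger data sets, which is also why its individual weights can no longer be interpreted.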

In supervised learning, a model is trained on an existing training set of inputs and their associated outputs (labels). The trained model is then applied to new data.
Other types are unsupervised learning (which finds structure in unlabelled data) and reinforcement learning, which works with a cost function (i.e. the objective) that is gradually optimized by adapting the model based on feedback. Reinforcement learning has the advantage of not requiring large labelled sets of training data, but the disadvantage is that you need quick and good feedback on the resulting cost function (objective). This means the technique is very good for e.g. recommendation engines (as you can see immediately whether a recommendation is clicked on or not), but not so good for credit risk scoring, as a credit default is often months (if not years) away from the credit decision moment.
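
The recommendation-engine case, where immediate click feedback drives the optimization, can be sketched with a simple epsilon-greedy bandit. Everything below is a toy assumption (the three offer names and their click rates are invented): the point is only that each recommendation yields instant feedback the model can learn from.

```python
import random

# Illustrative sketch (not a production recommender): an epsilon-greedy
# bandit learns which of three hypothetical offers gets clicked most,
# purely from the immediate click/no-click feedback after each display.
random.seed(42)
TRUE_CLICK_RATE = {"offer_a": 0.05, "offer_b": 0.20, "offer_c": 0.10}

clicks = {o: 0 for o in TRUE_CLICK_RATE}
shows = {o: 0 for o in TRUE_CLICK_RATE}

def estimated_rate(offer):
    return clicks[offer] / shows[offer] if shows[offer] else 0.0

for _ in range(5000):
    if random.random() < 0.1:                     # explore 10% of the time
        offer = random.choice(list(TRUE_CLICK_RATE))
    else:                                         # otherwise exploit the best estimate
        offer = max(TRUE_CLICK_RATE, key=estimated_rate)
    shows[offer] += 1
    if random.random() < TRUE_CLICK_RATE[offer]:  # immediate feedback: clicked?
        clicks[offer] += 1

# After enough rounds the estimates typically single out the best offer.
print({o: round(estimated_rate(o), 2) for o in TRUE_CLICK_RATE})
```

Credit risk scoring cannot be trained this way, because the "feedback" (a default) only arrives months or years after the decision.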

This technique was until recently only used in academic circles, but has now become widely accessible in the industry, thanks to:

  • User-friendly tooling (like TensorFlow, PyTorch, Keras…​), which abstracts away the complexity of the underlying mathematical modeling.

  • The exponential increase in available computing power (mainly due to the rise of the cloud providers), which makes it possible to train much more complex models.

  • The enormous increase in available data sets, as more activities are digitized, but also as more data can be stored in a cheap and efficient way.

Nonetheless the current AI techniques (sometimes referred to as "narrow AI" or "applied AI", as a model only performs well for a very narrowly defined task) are still very far away from the concept of general AI, which can apply intelligence to any problem.

More than 15 years ago, I did my thesis on non-linear system identification, which was not very different from the current AI techniques, except that we derived much smaller models with only a few dozen weights to estimate. The aim however was the same, i.e. to model an unknown system - treated as a black box - based on a number of well-defined inputs provided to the system and their associated measured outputs.
In the same year, a friend of mine who was studying economics asked me for some help with his exam on "Multivariate analysis", which also came down to pretty much the same thing.
It is interesting to remember this basis, because these "simpler" models apply similar principles, but are much easier to understand. E.g. you can give a meaning (a physical explanation) to every calculated weight in the model, which is no longer possible today (as the models have become too complex to interpret).

Although AI is very similar to these simpler modelling techniques, AI is applied on a much larger scale and is fully automated, meaning it can help us model large amounts of (unstructured) data and can evolve the derived model automatically based on new (changing) data.
Compared to a human being, its main advantages are speed, accuracy and consistency.

Nonetheless this black-box behavior of the model (i.e. the fact that a human being can no longer explain the model) also leads to some strange results:

  • The model can be trapped: e.g. by changing only one pixel in certain images, an AI model for image recognition can be misled (so-called adversarial examples).

  • Certain conclusions cannot be explained. There are examples of AI correctly identifying heart conditions based on ECGs, while cardiologists and researchers are unable to explain on which basis the AI system came to its conclusion.
    For some industries, the importance of the prediction’s explanation might even surpass the importance of the prediction itself, and often an explanation is even legally required. E.g. GDPR gives every person the right to obtain an explanation about an automated decision. Transparency on AI models will therefore become crucial, hence the rise of explainable AI models.

  • Discrimination is very hard to avoid. Even when you avoid providing discriminatory inputs to the model (like gender, race, age…​), the model might still discriminate, as it can derive the discriminatory characteristics from other, correlated inputs. Furthermore, even if the model is not discriminatory, it is still nearly impossible to prove to external auditors and regulatory authorities that this claim is justified.

  • Techniques exist to determine whether a specific record was used as training data for the model or not (so-called membership inference attacks). This is of course information you don’t want to share with every user of the AI model.
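
The discrimination-by-proxy point can be made concrete with a small sketch. All data and the scoring rule below are synthetic assumptions: the model never receives gender as an input, yet its scores still differ between groups because a permitted input (part-time employment, in this invented example) is correlated with gender in the data.

```python
import random

# Sketch of proxy discrimination (synthetic data, hypothetical model):
# 'gender' is never an input, but 'part_time' correlates with it, so the
# scores still end up systematically different between the groups.
random.seed(1)
people = []
for _ in range(2000):
    gender = random.choice(["F", "M"])
    # Invented correlation: part-time work is more frequent for one group.
    part_time = random.random() < (0.6 if gender == "F" else 0.2)
    income = random.gauss(30 if part_time else 50, 5)
    people.append({"gender": gender, "part_time": part_time, "income": income})

def score(p):
    # Hypothetical credit score using only non-protected inputs.
    return p["income"] - (10 if p["part_time"] else 0)

def avg(xs):
    return sum(xs) / len(xs)

avg_f = avg([score(p) for p in people if p["gender"] == "F"])
avg_m = avg([score(p) for p in people if p["gender"] == "M"])
print(round(avg_m - avg_f, 1))  # a clear gap, although gender was never an input
```

This is exactly why simply dropping the protected attributes from the input set is not enough, and why proving non-discrimination to an auditor is so hard.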

Additionally, due to the internal complexity (and the abstraction of this complexity) of AI models, some basic principles of mathematical modelling tend to get overlooked, even though those principles apply just as well to AI modelling:

  • With statistics you can prove anything: it is important to have sufficient data for each category/segment, otherwise the modelling of certain categories/segments will be based on only a few data points, which can give very wrong statistical results. As in statistics, the choice of a sufficiently large and diverse data set is therefore very important.

  • Avoid over-fitting: the number of data points should be considerably larger than the number of coefficients (weights) to be estimated in your model. As AI models are very large (often thousands of weights or more), you need hundreds of thousands of data points in order to avoid over-fitting.

  • Split your data set: when training a model, it is important to split your data set in two, i.e. one part used for training and the other part used for verifying the quality of the model. Too many people still verify the accuracy of their model with their training set, which can lead to very wrong conclusions.

  • Garbage in, garbage out: an AI model is only as good as its training data. If this data contains too many errors, the model will also be poor. As a result, data cleansing is crucial in AI modeling.

  • Don’t ignore what you do know about the system being modelled: most systems are not really black boxes, i.e. you typically know how your business operates or how a product works, meaning a big part of the system can already be modelled as a white box. It would be a pity to throw all this knowledge aside when setting up an AI model. In many cases a rule-based system, in which you define the business rules yourself based on your understanding of the business/process/product, can give an equally good result, if not better. It can therefore be interesting to use a combination, i.e. a rule-based system or programmed algorithm, which is fine-tuned by an AI model.

  • …​
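
The danger of verifying a model on its own training set can be demonstrated in a few lines. The "model" below is a deliberately extreme assumption (it simply memorises its training points, the ultimate over-fit) on synthetic data with no real pattern: it looks perfect on the training set, yet is worthless on held-out data, which only the split reveals.

```python
import random

# Sketch of the train/test split principle: an over-parameterised "model"
# that memorises its training points scores perfectly on the training set
# but only at chance level on held-out data.
random.seed(7)
# Synthetic labelled data: each input gets a random binary label,
# i.e. there is no real pattern to learn.
data = [(i, random.randint(0, 1)) for i in range(200)]
random.shuffle(data)
train, test = data[:100], data[100:]   # split the data set in two

memory = dict(train)                   # a "model" that just memorises

def predict(x):
    return memory.get(x, 0)            # unseen inputs: default guess

def accuracy(dataset):
    return sum(predict(x) == y for x, y in dataset) / len(dataset)

print(accuracy(train))                 # 1.0 - looks perfect on the training set
print(round(accuracy(test), 2))        # roughly 0.5 - chance level on new data
```

Had the accuracy only been measured on the training set, this useless model would have looked flawless; the held-out set immediately exposes the over-fitting.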

All the above arguments don’t however reduce the importance of AI as a technique to bring our digital world to the next level, as long as we understand that there is no magic and that AI is not the solution to every problem.

The technique can however have a profound impact in almost every domain. The most common examples are:

  • Gaming: with the well-known examples of AlphaGo of DeepMind Technologies (Google) beating world champion Lee Sedol in Go in 2016 and IBM Watson winning at Jeopardy.

  • Natural Language Processing (NLP): used a lot in chatbots and in interpreting unstructured documents (e.g. for classification)

  • Real-time translation, which uses NLP not only to understand the text to be translated, but also to compose the new text in the destination language

  • Image recognition: identifying patterns in pictures and videos. Used for face recognition and self-driving cars, but also for quality control on production lines

  • Speech recognition: used in speech-based systems like Alexa (Amazon), Google Assistant and Siri (Apple)

  • Handwriting Recognition: used to improve the quality of OCR, but also for recognizing signatures

  • Prediction: predicting the future behavior of a user, e.g. for recommendations (predicting the next desired action of a user), credit risk scoring (predicting whether a customer will default on his credit) or insurance actuarial models (predicting the number and amount of insurance claims…​)

  • Expert systems: such as interpreting medical X-ray pictures, blood results or ECGs, spam filtering, targeted advertising…​

  • …​

More specifically, in the financial services sector it is commonly used for:

  • Robo-advice: to assist an investment portfolio advisor (allowing the advisor to increase the number of portfolios he is advising) or even replace him

  • Scoring models for credit scoring and insurance underwriting

  • Pricing engines for intelligent pricing, i.e. setting the highest price the consumer is willing to accept and setting risk-based prices (like determining credit interest rates and insurance premiums)

  • Recommendation engines for advising the next-best offer and proposing the best product for a specific need/project (e.g. the best financing solution for a specific project)

  • Reading and processing unstructured data for credit analysis (especially for business credits) and Know Your Customer

  • Anti-Money Laundering (AML): better identifying complex money laundering mechanisms via intelligent identification of cases (suspicious anomalies) and prioritization of those cases

  • Know Your Customer (KYC): analyzing an onboarding request and using the maximum of available data to determine the client risk. AI can be used in multiple forms to automate the KYC process, e.g. face recognition (matching the ID card image with the camera image), NLP (to interpret unstructured data) and an expert system to analyse the request (e.g. identify anomalies) and take a decision.

  • Cyber-security: using AI to identify anomalies in the usage of and access to systems, in order to immediately identify potential breaches (see blog https://bankloch.blogspot.com/2020/02/securing-your-doors-is-not-enough-go.html), but also in risk-based authentication (see blog https://bankloch.blogspot.com/2020/02/multi-factor-authentication-and.html) to authenticate a user in a more user-friendly and secure way (e.g. using AI to identify deviating/abnormal user behavior, which results in a request for additional authentication).

  • Fraud detection and prevention: automatically identifying abnormal patterns in payment transactions, in order to immediately (via real-time monitoring) refuse fraudulent transactions, but also identifying potentially fraudulent claims in insurance claim handling.

  • Chatbots and voice assistants (conversational banking): providing 24/7 personalized advice (mimicking live employees) without overloading customer care departments. A nice example is the advanced AI bot "Kate" of KBC, which will not only help customers with financial questions, but is also foreseen to offer advanced concierge services in the future.

  • Automating manual back-end processes, like automatic reconciliation, data cleansing, data classification…​

  • AI-based PFM and BFM services, like automatically identifying and predicting budget plans and upcoming cash flows, suggesting deals (coupons or cashbacks) based on insights obtained from past transactions, cash management and liquidity planning…​

  • Advanced trading: with AI, complex models can be set up that try to predict how the market will evolve and in this way generate revenues

  • Matching parties on financial marketplaces (cf. blog https://bankloch.blogspot.com/2020/06/marketplaces-in-financial-industry-here.html), which can profit a lot from AI to optimally match consumers and producers on such a marketplace.

  • …​
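
The fraud detection use case above can be illustrated with a deliberately crude sketch. The z-score rule and all amounts below are invented assumptions, a stand-in for the far richer pattern detection a real AI model performs: flag any payment whose amount deviates strongly from the customer's historical behavior.

```python
import math
import random

# Illustrative fraud-screening sketch (synthetic data, simplistic rule):
# flag transactions whose amount deviates strongly from a customer's
# historical mean - a crude stand-in for AI-based pattern detection.
random.seed(3)
history = [random.gauss(100, 15) for _ in range(500)]   # typical past amounts

mean = sum(history) / len(history)
std = math.sqrt(sum((a - mean) ** 2 for a in history) / len(history))

def is_suspicious(amount, threshold=4.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    return abs(amount - mean) / std > threshold

print(is_suspicious(105))    # a normal amount -> False
print(is_suspicious(2500))   # an extreme outlier -> True
```

A real system combines many more signals (merchant, geolocation, timing, device…​) and learns the notion of "abnormal" per customer, but the principle of scoring deviations in real time is the same.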

It’s clear that AI can bring enormous value to a bank or insurance company, but as with any technique, it’s important to make the business case and not fall into the trap of feeling the urge to be innovative and use AI at all costs. Millions are invested in AI algorithms, but for many of them it is debatable whether the investment was worth it. In many use cases AI still has to prove that it really gives better results than the carefully designed algorithms or rule-based/expert systems used before. E.g. new alternative credit scoring algorithms, using AI and alternative data sets to serve the underbanked and unbanked, are already starting to show their first cracks and seem not to be better than traditional scoring systems.
Time will in any case separate the wheat from the chaff with regards to the best use cases for AI.
