
Are banking systems inherently more complex than 10 years ago?


Banks struggle to keep up with changing customer needs and expectations and to adopt the latest technologies. In a world where cost, time to market and flexibility are key business drivers, it is painful to see that the time and cost of implementing changes in banking applications have increased compared to 10 years ago. This lack (and drop) of agility is often attributed to the increased complexity of banking systems.
Admittedly, the functional and non-functional requirements imposed on financial IT systems have grown enormously:
  • Multitude of channels: physical branches, ATMs, contact centers and internet banking have been around for a while, but mobile banking and, more recently, Open Banking APIs and customers’ expectations of cross-channel continuity are new.
  • Real-time interactions: these types of interactions result in a considerably more complex architecture than the traditional batch-oriented systems.
  • 24/7 availability: in the past most applications only needed to be available during business hours. With the rise of internet banking, this evolved to 24/7 availability, but occasional downtimes during the night (for batch processes, maintenance activities or upgrades) were still acceptable. Today customers expect availability all the time, even in case of unexpected failures or when deploying software upgrades. This imposes an architecture with a lot of redundancy, implementing techniques to avoid data migrations, ensuring that multiple versions of the same service can run in parallel…​
  • High performance: customers expect all actions, even complex calculations, to complete within seconds. This requires techniques like data and result caching (at different layers), pre-calculation of results, parallel processing of calculations…​ (a minimal caching sketch follows this list).
  • Personalized services: customers expect banking products and services to be automatically adapted to their individual, current needs, e.g. dynamic pricing, adaptive screens anticipating the next best action for the customer, filtering out irrelevant products and services…​ This requires running real-time analytics on vast amounts of customer data and adapting ongoing business processes in real time based on the results of these analytics.
  • Complexity of banking regulations and new taxes: after the Financial Crisis, banks have been overwhelmed by new regulations (like the Dodd-Frank Act, MiFID II, Basel III, GDPR, PSD2…​) and new taxes (like speculation taxes, taxes on generated revenues…​), which are often the result of a political compromise between conflicting opinions. The result is complex regulations and tax rules, often with specific local flavors, which increase the overall complexity of the banking IT landscape.
  • …​
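To make the high-performance point concrete, below is a minimal sketch of result caching in Java for an expensive calculation. The CachedFeeCalculator and its pricing function are purely illustrative assumptions, not part of any real banking system; the idea is simply that repeated requests are served from an in-memory cache with a time-to-live instead of recomputing the result each time.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

/** Minimal time-based result cache for an expensive calculation (illustrative only). */
public class CachedFeeCalculator {

    private record CacheEntry(double value, Instant computedAt) {}

    private final ConcurrentMap<String, CacheEntry> cache = new ConcurrentHashMap<>();
    private final Function<String, Double> slowCalculation; // e.g. a complex pricing engine
    private final Duration timeToLive;

    public CachedFeeCalculator(Function<String, Double> slowCalculation, Duration timeToLive) {
        this.slowCalculation = slowCalculation;
        this.timeToLive = timeToLive;
    }

    /** Returns the cached result while it is fresh enough, otherwise recomputes and stores it. */
    public double feeFor(String productId) {
        CacheEntry entry = cache.get(productId);
        if (entry != null && entry.computedAt().plus(timeToLive).isAfter(Instant.now())) {
            return entry.value();
        }
        double value = slowCalculation.apply(productId);
        cache.put(productId, new CacheEntry(value, Instant.now()));
        return value;
    }

    public static void main(String[] args) {
        // Simulated slow pricing call; in reality this could be a core-banking lookup.
        CachedFeeCalculator calculator = new CachedFeeCalculator(
                productId -> {
                    try { Thread.sleep(500); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                    return 1.25;
                },
                Duration.ofMinutes(5));

        System.out.println(calculator.feeFor("SAVINGS-01")); // slow: computed
        System.out.println(calculator.feeFor("SAVINGS-01")); // fast: served from cache
    }
}
```

In a real system the same idea is typically applied at several layers (HTTP caches, distributed caches, pre-calculated results in the database), each with its own invalidation strategy, which is exactly why the resulting architecture is more complex than a batch-oriented one.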
This seems to confirm the commonly assumed hypothesis of increased inherent complexity, leading to reduced agility.
These arguments ignore, however, the explosive evolution in the domain of software engineering, which has produced multiple techniques to manage complexity more efficiently or even reduce it. Typical examples of such evolutions are:
  • High-level programming languages, abstracting a large part of the technical complexities
  • The huge number of building blocks today’s software engineers have at their disposal, e.g. open-source software, software libraries, open APIs, development frameworks…​ These components provide state-of-the-art solutions to most of the known complex problems in the financial services industry (and often for free).
  • The commoditization of infrastructure and the drop in infrastructure prices, allowing developers to pay less attention (and thus reduce complexity) to infrastructural issues like infrastructure sizing, storage optimization, performance tuning…​
  • New methodologies and frameworks supporting software delivery, such as Agile, PMI, ITIL, CMMI, TOGAF, DevOps…​ These provide all kinds of techniques to organise project teams more efficiently, increasing productivity, reducing project risks and improving the forecasting of project outcomes.
  • New software delivery tools, like IDEs (e.g. Eclipse or IntelliJ), version control and continuous integration/delivery tools (e.g. Git, Jenkins, Nexus…​), containerization (e.g. Docker), monitoring and debugging tools (e.g. the ELK stack) and communication tools (e.g. Skype, Slack…​), which automate, accelerate and simplify each step of the software delivery lifecycle.
  • Cloud-based services (like Amazon AWS, Microsoft Azure, Google Cloud Platform, IBM Bluemix…​), which hide the complexity of infrastructure and other commoditized services, such as the setup of operating systems, databases…​ This domain is still in full expansion, e.g. serverless computing (e.g. AWS Lambda) hides the notion of infrastructure altogether (a minimal handler sketch follows this list).
  • …​
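As an illustration of how far cloud platforms hide infrastructure, the sketch below shows a minimal AWS Lambda handler in Java. It assumes the aws-lambda-java-core library on the classpath and uses a deliberately simplified, hypothetical IBAN plausibility check; the point is that the code contains only business logic, while provisioning, scaling and patching of servers are handled by the platform.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

/**
 * Illustrative serverless function: only business logic is written here,
 * the cloud provider runs and scales it. The IBAN check is a simplified
 * placeholder, not a production-grade validation.
 */
public class IbanCheckHandler implements RequestHandler<Map<String, String>, Map<String, Object>> {

    @Override
    public Map<String, Object> handleRequest(Map<String, String> request, Context context) {
        // Normalize the input and apply a rough structural check (country code + check digits + body).
        String iban = request.getOrDefault("iban", "").replace(" ", "").toUpperCase();
        boolean plausible = iban.matches("[A-Z]{2}\\d{2}[A-Z0-9]{11,30}");
        return Map.of("iban", iban, "plausible", plausible);
    }
}
```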
Why can’t all those positive evolutions balance out the increased complexity of the business requirements? For the Silicon Valley unicorns this seems to be the case, but unfortunately not in the traditional banking industry. To answer this question, we should first get a better understanding of complexity.
Research (cf. Fred Brooks’ essay "No Silver Bullet") distinguishes two types of complexity:
  • Inherent or essential complexity: the difficulty of the actual problem that the software in question is trying to resolve.
  • Incidental or accidental complexity: anything about software that is hard but doesn’t really have to be.
Inherent complexity is directly linked to the business requirements. The evolutions in software engineering can facilitate managing it, but little optimization is possible on this type of complexity.
It is a common belief, however, that most of the complexity in software does not come from inherent complexity, but rather from incidental complexity, which is almost always caused by a "people issue".
Typical causes of such incidental complexity are:
  • Choosing the wrong delivery tools: programmers often choose the wrong tools to deliver solutions. This can be caused by the multitude of tools available in the market, but also by programmers’ fondness for (modern and exciting) tools that are often overkill for the situation at hand (e.g. building a simple static website with Ruby on Rails).
  • Developers like to build from scratch instead of reusing existing internal or external building blocks. Almost every bank has some kind of messaging protocol, middleware platform, monitoring or logging tool that it built itself, rather than reusing best-practice open-source tools.
  • Most IT applications carry around a lot of dead code (i.e. code that is no longer actively used). As cleaning up such code creates no direct added value for the business, there are typically no (or very few) projects planned for cleaning up (and/or refactoring) the code base. Over time this gradually increases the complexity of the applications.
  • Many banks have bought full business solutions (packages) from third parties. While this should normally reduce complexity, as it transfers most of the inherent complexity to the third party (which can spread the effort of managing that complexity over multiple customers), in reality it often increases complexity. The cause is that banks typically do not adapt their requirements to the built-in logic of the package, but instead try to force their business requirements into the package via extensive customizations and all kinds of satellite tools.
  • As changes requested by the business departments of banks tend to take months if not years (due to the lack of agility of the applications and the strict release calendars), business stakeholders tend to include, during the requirements phase, requirements for which there is no immediate business need. This tendency is strengthened even more by the strict (often bureaucratic) allocation of budgets in many big banks: between projects there is typically no budget nor IT capacity to implement small evolutions, which forces business to push a maximum of requirements into the larger projects.
    Such delivery of functionality for future business needs increases complexity and creates very little added value. Typically, such anticipated functionality will either never be used or turn out to be completely wrong when the business need finally arises (requiring quick patchwork to make it work, leading to even more complexity).
  • In many countries, the banking industry has been shaped by several mergers and acquisitions. These M&As almost always increase the complexity of the IT landscape, as the choice of the target landscape is rarely guided by objective IT parameters, but rather by political objectives.
  • The setup of cost/profit centres in many large banks, combined with the common use of the Waterfall approach to deliver large IT projects, has led to strong silos between business and IT, where each silo tries to protect its own interests rather than create value for the organisation. Furthermore, increasingly formalized quality controls (like sign-offs, quality gates, strict change control processes…​) and a stronger split in roles and responsibilities (enforced by audit and compliance to reduce the risk of fraud) have even led to silos within the silos, e.g. between IT Run (Operations) and IT Change, or between analysts, developers and testers. All those silos result in a lack of open discussion, feedback and challenging of requirements. As a consequence, technical specialists are often forced to build complex systems that could be avoided if they could debate on an equal footing with business, finding the right compromise that limits IT complexity while still fulfilling as many business requirements as possible.
  • The trend of outsourcing parts of the IT value chain, often to different external parties, has also increased complexity. Instead of all IT stakeholders working together to find the best solution for the organisation, outsourcing often introduces a lot of overhead, like contractual discussions, miscommunication due to cultural differences or language problems, difficult cooperation due to time-zone differences, and conflicting interests (e.g. outsourcing partners focusing only on meeting KPIs/SLAs, even when this brings no added value). These tendencies reinforce the previous point about silos (within silos). For example, an open discussion at the beginning of a project becomes very hard when the outsourcing partner requires a formal project intake before spending any time on it.
As mentioned above, all those causes increase incidental complexity. This means they can also be reduced or even eliminated without impacting the business end result. Following a few simple guidelines already yields meaningful results:
  • Avoid large transformation programs; instead, make sure that each project can be executed by a team of no more than 10 people (cf. the so-called two-pizza team advocated by Amazon’s CEO Jeff Bezos). This keeps the implementation lean and avoids high PMO overheads to manage resources, dependencies…​ Furthermore, communication degrades quickly as team size grows, since the number of pairwise communication channels grows quadratically with team size (n(n-1)/2, i.e. 10 channels for 5 people but 45 for 10).
  • Phase projects so that each delivery to production (creating value for the business) happens at most 6-9 months after the kick-off of the project (large multi-year projects are doomed to fail, as the market evolves too much during the project timespan, resulting in continuous replanning exercises). This means applications (and changes in general) should be built up gradually, i.e. instead of considering all requirements from the beginning, subsets should be delivered incrementally. This requires a high degree of agility, which can best be achieved by building applications as a set of loosely coupled micro-services.
  • Set up mixed teams, composed of people from different departments (i.e. business, IT Change and IT Operations), with a common project goal and shared responsibilities.
  • Make sure that the different profiles in the team (project manager, architect, product owner, developer, analyst, tester…​) have similar seniority, remuneration, commitment to the project and decision power in their specific domain. The hierarchical structures existing in most banking project teams should disappear, as they tend to reduce commitment and lead to increased complexity.
  • Ensure that project teams are located together (physically) as much as possible and are working for the same firm (i.e. all internal resources or all working for same external firm). This simplifies collaboration and ensures that all team members are working towards a common goal.
  • Ensure that each project team member (except for ad-hoc subject-matter experts) is allocated to the project full-time. This ensures that every team member is dedicated and committed to the goals of the project.
  • Maximize reuse of existing components (avoid reinventing the wheel), i.e. build applications from building blocks like software libraries, open-source software, existing micro-services within the organisation, open APIs…​ instead of reprogramming everything from scratch.
  • Automate every action you need to perform more than once. This means setting up automated regression test suites, automated pipelines for continuous integration and delivery, automated status reporting to management…​ (see the test sketch after this list).
  • Reserve part of the IT budget not for implementing business added value, but for mandatory technical migrations and for cleaning up and improving the code base.
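As a small illustration of the automation guideline above, the sketch below shows an automated regression test (JUnit 5 assumed on the classpath) for a hypothetical interest calculation. Once such a check is written, it runs on every build in the continuous integration pipeline instead of being repeated manually before each release.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Minimal automated regression test. The interest calculation under test is a
 * hypothetical, simplified example used only to show the principle.
 */
class InterestCalculatorTest {

    /** Simple yearly interest: balance * rate, rounded to 2 decimals (illustrative only). */
    static double yearlyInterest(double balance, double rate) {
        return Math.round(balance * rate * 100.0) / 100.0;
    }

    @Test
    void positiveBalanceEarnsInterest() {
        assertEquals(12.50, yearlyInterest(1000.00, 0.0125));
    }

    @Test
    void zeroBalanceEarnsNothing() {
        assertEquals(0.00, yearlyInterest(0.00, 0.0125));
    }
}
```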
Implementing these guidelines will require a significant organisational change for most banks, but it will allow them to considerably reduce incidental complexity, leading to more agile applications, which in turn will lead to lower costs and shorter timelines for implementing business changes.
