DevOps for dummies


In May of last year, I published the blog post "Docker for dummies" (https://bankloch.blogspot.com/2020/02/docker-for-dummies.html), explaining some basic concepts of Docker containers for non-technical readers.
Another concept closely associated with Docker containers, and one with a strong impact on the financial services industry, is DevOps. Here too, the non-technical implications (on organization, security, processes…​) are sometimes hard to grasp for managers and businesspeople, even though it is essential that everyone is aware of and involved in this transformation.

DevOps (a contraction of "development" and "operations") is a practice within software engineering which aims to bring together the previously strongly separated worlds of software development (building the software) and software operations (keeping systems stable and running). The term was first introduced at a conference in Belgium in 2009, but really took off a bit more than 5 years ago, accelerated by other evolutions like Agile and cloud computing.

DevOps aims to deliver software changes (value) to the end users in a faster, smarter, cheaper and better (repeatable, predictable, efficient, and fully audited) way, which ultimately benefits both the development and operations teams, but of course also the competitiveness of the entire organization.

This goal is achieved in multiple ways:

  • Automation (continuous organizational automation): automating all steps in the software delivery life cycle saves time (no manual intervention is required anymore), but also improves quality (less room for human error) and security (less room for malicious intent). All this leads to increased delivery speed, shortened bug-fix lead times and reduced downtime (e.g. by releasing without service interruption and by a faster mean time to recovery in case of failure).

  • Better collaboration ("you build it, you run it") between teams, by breaking down the silos between development teams, operations teams and other groups such as the architecture and testing teams, i.e. all stakeholders work collaboratively towards a shared goal and have end-to-end accountability and ownership.

  • Shorter feedback cycles: by speeding up the release cycles and by better (proactive) monitoring of how the system is used, it becomes possible to shorten the time to market (faster time to value), reduce the failure rate of releases (as each release is much smaller) and get much faster feedback from users, allowing your organization to change course quickly.

Achieving this goal and implementing these methods is of course a long transformation journey, which is not completed overnight. Instead it is a never-ending process of continuous improvement, where an organization can be at different maturity levels for the different methods.

In more detail, it consists of implementing a number of tools and processes, but also of drastically changing the organizational structure and people’s way of thinking and working (requiring extensive change management). More specifically, the following processes need to be adapted or implemented:

  • Source code management: all source code should be checked in centrally (in a code repository like Git), allowing everyone to have the latest version of the source code and a full history of all changes to it.

  • Code analysis: when code is checked in, it should be automatically analyzed for potential issues like bugs (cf. FindBugs), bad practices (cf. PMD), convention breaches (cf. Checkstyle), copy-paste errors (cf. CPD), insufficient code coverage (cf. Cobertura)…​
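The tools listed above target Java, but the underlying idea of a static check is simple. As an illustration only (not one of the tools above), here is a minimal, hypothetical analyzer in Python that flags one well-known bad practice, the bare `except:` clause:

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return the line numbers of bare 'except:' clauses (a classic bad practice)."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

# A small code snippet with the problematic pattern on line 3:
snippet = "try:\n    risky()\nexcept:\n    pass\n"
print(find_bare_excepts(snippet))  # → [3]
```

A real code-analysis tool runs hundreds of such rules on every check-in and reports the findings back to the developer.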

  • Continuous Integration (CI): code should be automatically built and assembled after each code commit. This is done via a build (automation) tool (like Ant, Maven, Gradle…​) and a continuous integration tool (like Jenkins) respectively.
    This forces developers to make sure that the full code base still builds and executes correctly after each check-in, and it enforces end-to-end accountability: everyone is responsible for keeping the code integration successful.
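Conceptually, a CI server chains build stages and stops at the first failure. A minimal sketch in Python (the stage names and commands are hypothetical; a real Jenkins pipeline would shell out to Maven or Gradle at each stage):

```python
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> str:
    """Run the CI stages in order and stop at the first failing one."""
    for name, stage in stages:
        if not stage():
            return f"FAILED at stage: {name}"
    return "SUCCESS"

result = run_pipeline([
    ("compile", lambda: True),     # e.g. 'mvn compile' exited with code 0
    ("unit-tests", lambda: True),  # e.g. 'mvn test' exited with code 0
    ("package", lambda: True),     # e.g. 'mvn package' exited with code 0
])
print(result)  # → SUCCESS
```

The key point is that a failing stage blocks the whole pipeline, which is what makes a broken build visible to the entire team immediately.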

  • Continuous Testing (CT): after each code commit, there should also be an automatic execution of functional and non-functional tests (using tools like JMeter, LoadRunner, Selenium, Cucumber…​), at different abstraction levels and with different focus areas, i.e. automated unit testing, assembly testing, end-to-end testing, stress testing, performance testing, security testing…​ Real-time dashboards and alerting should notify developers immediately when a recent code commit has led to a (regression) issue.
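As a small illustration of the unit-testing level, consider a hypothetical loan-payment function with regression tests that the CI server would execute on every commit; any commit that breaks the formula fails the build:

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Monthly annuity payment for a fixed-rate loan."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

# Regression tests, executed automatically after each commit:
assert abs(monthly_payment(1200, 0.0, 12) - 100.0) < 1e-9  # zero-rate edge case
assert 500 < monthly_payment(100_000, 0.03, 240) < 600     # roughly 554.6
print("all tests passed")
```

Stress, performance and security tests follow the same principle at a different abstraction level: automated checks with a clear pass/fail outcome.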

  • Automated infrastructure: in order to automate all of the above processes, it should be possible to provision infrastructure in a very short time, e.g. spinning up a test environment fully automatically. This requires infrastructure to be treated as code, i.e. all configuration instructions to set up infrastructure should also be stored and versioned in a code repository. Via Infrastructure as Code tools (such as Chef, Puppet, Terraform, Ansible…​) and containerization/container orchestration (i.e. Docker & Kubernetes), the infrastructure level can be fully automated and abstracted away.
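Infrastructure as Code tools are declarative: you version the desired state and the tool computes the actions needed to get there. A toy Python sketch of that reconciliation idea (the service names and replica counts are invented for illustration):

```python
def reconcile(desired: dict[str, int], actual: dict[str, int]) -> list[str]:
    """Compute the actions needed to move the actual infrastructure
    to the desired (versioned) state."""
    actions = []
    for service, replicas in desired.items():
        current = actual.get(service, 0)
        if current != replicas:
            actions.append(f"scale {service}: {current} -> {replicas}")
    for service in actual:
        if service not in desired:
            actions.append(f"remove {service}")
    return actions

desired = {"web": 3, "db": 1}    # stored and versioned in the code repository
actual = {"web": 1, "cache": 2}  # what is currently running
print(reconcile(desired, actual))
# → ['scale web: 1 -> 3', 'scale db: 0 -> 1', 'remove cache']
```

Terraform's plan/apply cycle and Kubernetes' controllers follow this same pattern, continuously closing the gap between declared and actual state.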

  • Continuous Deployment (CD): each commit should be automatically released to a test environment and, on a frequent basis (e.g. weekly), released to production. Important here is that the same automated pipelines and deployment steps used for the test environments are also used for the production environment.

  • Continuous Monitoring (CM): continuously monitoring different metrics (infrastructure, application & business monitoring) via automated monitoring tools (like Kibana, New Relic, Datadog, Prometheus…​) makes it possible to proactively watch a system, intervene, and rapidly correct issues in an automated way (e.g. an automatic roll-back of the last deployment).
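The automated intervention can be as simple as a rule over the monitored metrics. A minimal, hypothetical sketch in Python of a roll-back trigger based on the observed error rate (the threshold and window are invented for illustration):

```python
def should_roll_back(error_rates: list[float], threshold: float = 0.05) -> bool:
    """Trigger an automatic roll-back when the error rate exceeds the
    threshold for three consecutive measurements after a deployment."""
    return len(error_rates) >= 3 and all(r > threshold for r in error_rates[-3:])

print(should_roll_back([0.01, 0.02, 0.01, 0.02]))  # → False: system is healthy
print(should_roll_back([0.01, 0.09, 0.11, 0.12]))  # → True: roll back the release
```

In practice, monitoring tools let you express such rules as alerting conditions, which can then be wired to an automated roll-back of the last deployment.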

  • Continuous Insights/Improvement: the metrics collected via continuous monitoring of end-user behavior should drive the product roadmap (i.e. data-driven product management). This means that small changes are deployed to a part of the user base (via canary releases or A/B testing) and measured for their impact (value). If the impact is positive, the change is deployed to a larger user base; otherwise a roll-back is done and a new iteration is launched (experiment & fail fast).
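The canary decision itself boils down to comparing the control group with the canary group. A deliberately naive Python sketch (a real setup would also test for statistical significance; the conversion numbers are invented):

```python
def canary_decision(control_conversion: float, canary_conversion: float,
                    min_uplift: float = 0.0) -> str:
    """Promote the change when the canary group outperforms the control
    group by at least min_uplift; otherwise roll back and iterate."""
    if canary_conversion > control_conversion + min_uplift:
        return "promote to a larger user base"
    return "roll back and start a new experiment"

print(canary_decision(control_conversion=0.040, canary_conversion=0.046))
# → promote to a larger user base
print(canary_decision(control_conversion=0.040, canary_conversion=0.031))
# → roll back and start a new experiment
```

The experiment-and-fail-fast loop is exactly this decision, repeated for every small change.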

  • Change Management: introduce a culture of collaboration and continuous improvement in the organization via:

    • Adapting the existing organizational structure, i.e. removing the silos (bridging the gaps) between the development and operations teams and instead working with integrated, cross-functional teams with end-to-end responsibility, supported by transversal teams which set up and improve the DevOps tooling and common infrastructure components

    • Empowering teams and making them as autonomous as possible, i.e. rewarding initiative and pushing decision-making down to the teams themselves, rather than promoting a top-down hierarchical decision process. This also means establishing a culture of rewarding success and avoiding blame (a fault is an opportunity to learn and improve).

    • Training people to become T-shaped, all-round profiles (cf. full-stack engineers)

This transformation journey goes hand-in-hand with other evolutions, which mutually accelerate each other.

  • Agile: the Agile methodology also aims to deliver value to the business as quickly as possible (i.e. in every sprint), meaning that every sprint should include a deployment to at least a test environment (but preferably also to production). Furthermore, the Agile methodology prescribes working with cross-functional teams, which are empowered to be as autonomous as possible and to continually adjust the product backlog based on new insights (gained from measuring).

  • Migration to the (public/private) cloud: cloud providers not only make automated infrastructure provisioning much simpler, but typically also provide a lot of out-of-the-box DevOps tooling that helps automate the different steps in the software delivery life cycle.

  • Microservices: in a microservices-based architecture (cf. my blog https://bankloch.blogspot.com/2020/02/microservices-yet-another-buzzword-or.html) the overall application landscape is split into autonomous, isolated components (called microservices). Such an architecture simplifies the application of DevOps principles, as microservices are usually built and maintained by fully autonomous teams that are end-to-end responsible for a microservice. Furthermore, due to the strong encapsulation of each microservice, continuous deployment becomes easier, as the impact of code changes can be isolated much more easily.

  • Open source: thanks to open source (cf. my blog https://bankloch.blogspot.com/2020/02/banks-are-finally-embracing-open-source.html) hundreds of tools, libraries and best practices are freely available to help automate every step of the software delivery life cycle (at a fraction of the cost). Smaller organizations can benefit here from all the DevOps experience of large, very mature organizations like Facebook, Google, Uber, Netflix or LinkedIn. These components allow your organization to leapfrog in DevOps maturity.

  • Containerization: the rise of (Docker) containers and container orchestration (Kubernetes) also helps to apply DevOps principles, as the released package (i.e. the container) becomes more high-level and more easily portable. Furthermore, via the standard orchestration framework of Kubernetes, all kinds of deployment best practices (releases without downtime, gradual roll-out strategies) and monitoring come practically out of the box.

Despite all the obvious benefits, and the fact that banks and insurers have been working on introducing DevOps practices and all the other accelerators (like Agile, open source…​) for a while now, the maturity level (and thus the achieved benefits) of incumbent financial service firms is still relatively low. Obviously the complexity and historical legacy of a bank or insurer make it much more difficult to introduce DevOps than at a Fintech start-up:

  • A complex layering of processes: today in a bank or insurance company most activities are still subject to formal requests (strict reporting lines), rules, approvals, sign-offs and strict schedules (often imposed by the security, audit, risk and compliance departments). These restrictions are the result of the risk-averse nature of the industry and the fact that the financial industry is highly regulated. The result is that it is typically very hard to get access to production environments, to install something (and experiment) on environments (even development environments), to rapidly release something to production, or to introduce new technologies or new processes.

  • Most banks and insurance companies are still very hierarchically structured, with strictly defined IT departments. Such an organizational structure feeds silo thinking (as each department has its own objectives) and adds to the complexity of the decision process.

  • Outsourced development and operations: as many banks have outsourced (part of) their development and operations, silos are automatically created, as the goals of the financial service company and of the outsourcing vendor are obviously not always aligned.

  • Shared service teams: in an attempt to find economies of scale and thus improve efficiency, many financial service companies have set up so-called shared service teams for developing/configuring and maintaining services that are used in almost every business application (such as database administration, or managing middleware such as message queues or application servers like WebSphere, JBoss or Tomcat). As these teams were initially flooded with demands, they have all introduced very rigid ticketing systems, which considerably reinforce silo thinking and result in a lack of ownership and end-to-end accountability (as the shared service engineer has little connection to the bigger picture).

  • Technical legacy and technology sprawl: the complex and outdated application architecture of most incumbents in the financial services industry makes it very difficult to roll out modern DevOps practices. E.g. DevOps tooling is very difficult (if not impossible) to apply to mainframe systems.
    Furthermore, the ongoing digital transformation programs and regulatory projects typically already consume so much energy within these organizations that very little focus remains for implementing DevOps practices.

Despite these challenges, banks and insurance companies need to take the step of rolling out DevOps practices within their organization. In this transition, it’s everyone’s responsibility, not just the software developer’s, to make sure that user-centric and robust systems are delivered. A bank or insurer is essentially an IT firm, so the IT systems are key assets. Every (line) manager should therefore have a notion of those assets and of how DevOps practices can help improve them.
