
Serverless for dummies


One of my first blogs was an introduction to Docker for non-technical people (https://www.linkedin.com/pulse/docker-dummies-joris-lochy) and last year I wrote a similar blog on DevOps (https://www.linkedin.com/pulse/devops-dummies-joris-lochy). Recently I joined the "Merit-VC" platform (https://www.merit-vc.com/ - a platform to train yourself in VC funding), where I got the assignment to write a report on the trend of "Serverless". I thought it would be interesting to share my conclusions in the form of a third "for dummies" blog, introducing another innovative technical concept to non-technical people, namely "serverless".

Before introducing serverless, it is important to have a basic notion of cloud computing. Cloud computing allows companies to "rent" servers from cloud providers instead of having to set up and maintain their own infrastructure (data centre). The advantages are a switch from CapEx to OpEx (you only pay for what you use), much higher flexibility to onboard/offboard servers (scalability based on your business needs) and no need to attract the specific skills required to properly operate infrastructure. Additionally, most cloud providers have built all kinds of abstraction layers, which make it easier and faster to set up and maintain servers and the software deployed on them.

The cloud market has grown exponentially in recent years, a growth that was further accelerated by the Covid crisis. According to a report by Canalys, the market was estimated at around $142 billion for 2020. The market has four dominant players, i.e. Amazon AWS (31% market share), Microsoft Azure (20%), Google GCP (7%) and Alibaba Cloud (6%, although mainly in Asia). This also means there are dozens of other players, each with less than 5% market share, which together still represent about 35% of the market (like Oracle, IBM, Tencent, DigitalOcean, Salesforce, Verizon Cloud…​).

The most common service bought from a cloud provider is still the server (in reality a virtual machine running at the cloud provider), like Amazon EC2, Azure Virtual Machines, Google Compute Engine or DigitalOcean Droplets. These servers can be added/removed via a simple user interface (with the user able to choose the size of the virtual machine, the availability…​), resulting in very dynamic server management. On such a server, the company can then install any software it wants.

Over the years, however, cloud providers have started to offer more and more services that abstract away the underlying infrastructure. The most common ones are:

  • Storage services, i.e. simple services where files can be stored and retrieved via an easily accessible API (a short code sketch follows this list). The best-known examples are Amazon S3, Azure Blob Storage, Google Cloud Storage, DigitalOcean Spaces…​

  • Database services in all forms, from the common SQL databases (like PostgreSQL, Oracle, MySQL or MariaDB) to NoSQL databases (like MongoDB, Cassandra, Elasticsearch, HBase…​). Typical examples are Amazon RDS, Amazon DynamoDB, Azure SQL Database, Azure Cosmos DB, Google Cloud SQL, Google Cloud Bigtable, MongoDB Atlas…​

  • Docker container management, i.e. running containers in a flexible, scalable and reliable way, whether or not on Kubernetes. E.g. Amazon ECS, Amazon EKS, Google GKE, Azure AKS, Red Hat OpenShift Container Platform…​

  • Content management systems and eCommerce platforms, like Wordpress, Drupal, Magento…​

  • Identity management, like Auth0, Amazon Cognito, Azure AD…​
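
To make the idea of such a managed storage service a bit more tangible, below is a minimal Python sketch of how a file could be stored and retrieved via the Amazon S3 API (using the boto3 library). The bucket name is purely hypothetical and valid AWS credentials are assumed to be configured.

```python
import boto3

# Client for the S3 storage service; credentials are assumed to be
# configured in the environment or in the standard AWS config files.
s3 = boto3.client("s3")

BUCKET = "my-example-bucket"  # hypothetical bucket name

# Store a file (an "object") under a key of our choice.
s3.put_object(Bucket=BUCKET,
              Key="reports/2020-summary.txt",
              Body=b"Total cloud market 2020: ~$142 billion")

# Retrieve the same file again via the API.
response = s3.get_object(Bucket=BUCKET, Key="reports/2020-summary.txt")
print(response["Body"].read().decode("utf-8"))
```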

But this is just the tip of the iceberg. Every cloud provider has an offering of hundreds of managed services, making it sometimes extremely difficult to find your way in this offering.

These abstractions have led to the new term "serverless". This doesn't mean that there are no servers anymore, but rather that the underlying servers are fully abstracted away from the consumer of the service, i.e. the allocation of the infrastructure (servers) is fully managed by the cloud provider.
These can be the services described above (also called managed services, managed cloud or Backend-as-a-Service, BaaS), but they can also be services offered directly to developers in the form of functions whose execution can be requested via an API. This type of serverless is called "Functions-as-a-Service" (FaaS), with the best-known example being AWS Lambda, Amazon's implementation of this concept. The other cloud providers have in the meantime launched similar offerings, like Google Cloud Functions, Azure Functions or IBM OpenWhisk.

Simplified, this type of serverless means the consuming company requests the execution of a specific function (a piece of code). The cloud provider then deploys the software component (at run time) on one of its servers (totally abstracted away from the consumer) and executes it. Theoretically this means that a company no longer has to deal with servers, gets almost unlimited (auto-)scalability and only pays for what it consumes (i.e. you never pay for idle resources).

The downsides are complexity and performance. The complexity comes mainly from the fact that applications become a complex orchestration of small executed units (most cloud providers impose quite strict execution time-outs on a FaaS, requiring the software to be split into small units) and from the fact that each function needs to be fully stateless. This means that any data (used as input or generated as output by the function) and any logging/tracing information needs to be stored by a separate service. The performance impact comes mainly from the time needed to deploy the function on a server before it can be executed. This is a domain where cloud providers try to optimize, by keeping certain functions "hot" (already installed) on a server and by predicting (and pre-installing) which functions will soon be required. However these techniques have their limits, as a cloud provider obviously cannot keep all functions installed on several server instances (to cope with potential load), since this would make the economic model for the cloud provider impossible.
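
As an illustration, below is a minimal sketch of what such a function could look like in Python, following the AWS Lambda handler convention (an event with the input and a context object). Since the function must be stateless, the result is written to a separate storage service; the bucket name and the calculation are purely hypothetical.

```python
import json
import boto3

s3 = boto3.client("s3")          # separate storage service for the function's output
BUCKET = "my-results-bucket"     # hypothetical bucket name

def handler(event, context):
    """Entry point invoked by the FaaS platform for every request."""
    # All input arrives via the event; nothing is kept in memory between calls.
    order_id = event["order_id"]
    amount = event["amount"]
    vat = round(amount * 0.21, 2)  # example calculation (21% VAT)

    # Persist the result in an external service, as the function itself is stateless.
    s3.put_object(Bucket=BUCKET,
                  Key=f"invoices/{order_id}.json",
                  Body=json.dumps({"order_id": order_id, "amount": amount, "vat": vat}))

    # Return a response to the caller (e.g. via an API gateway).
    return {"statusCode": 200, "body": json.dumps({"order_id": order_id, "vat": vat})}
```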

The ultimate goal of both serverless techniques (i.e. BaaS & FaaS) is of course to save money for a company. While in many cases these techniques can save money, this is not always the case. As always, it depends on the specific needs of the customer. Typically, if you are a large corporate with a high and consistent (not variable) usage, these solutions might not be the best option (as the price will likely be higher and the increased complexity and potential performance impacts outweigh the advantages). However for other use cases, e.g. a start-up or scale-up with very bursty and changing loads and a limited team having to cope with all IT and infrastructure aspects, these kinds of services can be ideal.

Next to the infrastructure and scalability impacts, it is however also important to know that FaaS is a methodology, i.e. a way to implement software, with the following characteristics:

  • Code is split into small modular components (microservices), which are orchestrated together to form the application

  • Every component needs to be stateless and its execution should finish within a limited time. As a result, such a component can easily be made fault tolerant, as the function can simply be relaunched in case of issues, without any specific recovery action (see the short sketch below)
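
A minimal sketch of what this means in practice (plain Python, with hypothetical function names): each step is a small stateless function that receives all its input explicitly and returns all its output explicitly, so a failed step can simply be retried.

```python
import time

def enrich_customer(order: dict) -> dict:
    """Small stateless step: all input comes in, all output goes out."""
    return {**order, "segment": "retail" if order["amount"] < 10_000 else "corporate"}

def calculate_fee(order: dict) -> dict:
    """Another stateless step; it never relies on data kept from a previous call."""
    rate = 0.01 if order["segment"] == "retail" else 0.005
    return {**order, "fee": round(order["amount"] * rate, 2)}

def run_with_retry(step, payload: dict, attempts: int = 3) -> dict:
    """Because steps are stateless, they can safely be relaunched after a failure."""
    for attempt in range(1, attempts + 1):
        try:
            return step(payload)
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(1)  # simple back-off before retrying

# Orchestration: the application is a chain of small components,
# with the state passed explicitly from one step to the next.
order = {"order_id": "A-123", "amount": 2_500}
order = run_with_retry(enrich_customer, order)
order = run_with_retry(calculate_fee, order)
print(order)
```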

The long-term gains of this development methodology can potentially outweigh the infrastructure cost savings. In most companies, although infrastructure is a big cost, it is still marginal compared to the human cost of the IT people working in the change and run departments.
As such, the following questions should also be raised:

  • How much time can I gain in the IT teams with this new way of implementing software?

  • How easily can people collaborate in this architecture?

  • How easy is it to onboard additional people?

  • How sure am I that I will not have to refactor everything because the associated technology has become obsolete?

  • How easily can I find IT people who master the underlying technology?

  • …​

The division of software applications into such smaller components is one way to give a more positive answer to those questions, but it depends of course on the type of software, the maturity level and culture of the organization and the character and working style of the IT staff.

Independent of those reflections, it is clear that serverless is here to stay and will likely grow further in the coming years.
As such, the following trends are clearly visible in the market:

  • Managed services offered by cloud providers are becoming more and more powerful and functional. While initially limited to storage and database services, they are becoming much more business oriented, blurring the line with Software-as-a-Service (SaaS) and Business-as-a-Service (BaaS) more and more.

  • Cloud providers build more and more abstraction layers on top of technical components. This makes the consumption of those services easier, but it also means that the setup becomes more opinionated and less flexible (cfr. my blog "Abstraction in Financial IT - How far can and should we go?" - https://bankloch.blogspot.com/2020/02/abstraction-in-financial-it-how-far-can.html)

  • With more and more software becoming open source, cloud providers can offer excellent tooling with very little investment. This creates a strong tension between cloud providers (like Amazon) on the one side and (open-source) software companies (like MongoDB, Elastic, Confluent or Redis Labs) on the other side. A new form of alliance between both types of companies will be essential, with the cloud providers bringing the infrastructure and DevOps tooling and the software providers bringing the in-depth expertise of the specific software. Until now many software companies have tried to fight the cloud providers by changing their license policies (cfr. the story of Redis), but this makes their product at the same time less future-proof and less desirable.

  • Functions-as-a-Service will become more common, as companies move more to the cloud, microservice-based architectures become more popular, developers become more acquainted with this technology, FaaS prices become more competitive and the potential performance issues linked to FaaS (the time to deploy a function) become better managed.

  • After a first wave of companies shifting from on-premise infrastructure to the cloud, a second wave towards multi-cloud setups is now visible. This gives the advantage of not being locked in to one cloud provider, of being able to offer higher worldwide availability and better performance (as none of the cloud providers have data centres in every country) and of being able to choose the best managed service of each cloud provider. E.g. while AWS has the richest and most customer-centric offering, GCP probably has the best offering when it comes to Big Data and Machine Learning processing, while Azure has a lot of advantages via its link with the Microsoft productivity tools (like Office 365, Teams, SharePoint…​). However, in order to be able to work multi-cloud, it is important to use as few cloud-provider-specific abstractions as possible. For FaaS this means that the provider-specific frameworks (like AWS Lambda) should be avoided and open-source frameworks like Knative, OpenWhisk, Kubeless or OpenFaaS should be adopted (see the short sketch below).
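
As a rough illustration of the idea, the sketch below keeps the business logic in a plain, provider-neutral Python function and confines the provider-specific parts to thin adapters. The function names are hypothetical and the exact entry-point signatures depend on the chosen framework or template (the OpenFaaS-style handle function shown here is just one possibility).

```python
import json

def convert_currency(amount: float, rate: float) -> dict:
    """Provider-neutral business logic: no cloud-specific imports or APIs."""
    return {"amount": amount, "rate": rate, "converted": round(amount * rate, 2)}

# --- Thin adapter for AWS Lambda (provider-specific entry point) ---
def lambda_handler(event, context):
    body = json.loads(event["body"])
    return {"statusCode": 200,
            "body": json.dumps(convert_currency(body["amount"], body["rate"]))}

# --- Thin adapter in the style of an open-source FaaS framework (e.g. OpenFaaS) ---
def handle(req: str) -> str:
    body = json.loads(req)
    return json.dumps(convert_currency(body["amount"], body["rate"]))
```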

Even though enormous steps have been made in serverless in recent years, it is clear this domain is still in its embryonic stage, meaning significant developments can be expected in the coming years. I am looking forward to following these exciting innovations further.
