Web Development
March 4, 2025
10 min read
One of the key goals of the Innovantage podcast is to help its audience learn about the latest tech trends and explore how they are transforming the business landscape. The recent episode serves exactly this purpose: this time, serverless technology has taken center stage.
The podcast host and CBDO at Sigli, Max Golikov, invited Michael Dowden to speak about the decreasing significance of servers today, as well as the benefits businesses can gain from this shift.
Michael is a technology leader, international speaker, and serverless expert with more than 30 years of experience in software development and consulting. Of course, serverless technology hasn’t been around for that long; it started gaining adoption nearly 15 years ago. Given this, Michael has a very good understanding of how the shift to the new technology happened. Moreover, he explained why businesses started ditching their servers and in which cases using servers is still a sensible choice.
This and much more were discussed in the episode, and we’ve gathered the most interesting ideas for you in this article.
Serverless doesn’t mean there are no servers. They are still there, but they are managed by providers rather than by businesses directly. The technology represents an additional layer of abstraction. Everything started with servers, then moved to virtual machines, followed by Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS). Serverless takes it a step further by providing only the essential operating containers required to run your tasks.
One key aspect of serverless is Function-as-a-Service (FaaS). Similar to microservices, each function is managed and scaled independently. You can build, deploy, and scale individual units of code without worrying about the entire system. If traffic spikes, serverless platforms scale automatically by creating copies of the function to handle the load. Once the traffic decreases, the platform adjusts by scaling back.
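The scaling behavior described above can be sketched in a few lines. This is a toy model, not how any real platform is implemented: the per-instance capacity is an invented parameter, and real providers such as AWS Lambda manage instance counts internally.

```python
# A toy model of FaaS auto-scaling (illustrative only). Each "instance"
# handles a fixed number of requests per second; the platform adds
# copies under load and removes idle ones when traffic drops.

import math

def instances_needed(requests_per_sec: int, per_instance_capacity: int = 10) -> int:
    """Number of function copies the platform would keep running."""
    return max(1, math.ceil(requests_per_sec / per_instance_capacity))

# Traffic spike: the platform scales out...
assert instances_needed(95) == 10
# ...and scales back in when the spike subsides.
assert instances_needed(3) == 1
```

The key point is that this logic is the provider’s responsibility: your code only ever sees one request at a time.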
With serverless, you are charged only for the actual computing time used, and you don’t have to manage the infrastructure yourself. The platform scales according to your needs, making it efficient and cost-effective.
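To make the pay-per-use model concrete, here is a back-of-the-envelope billing calculation. The rates are illustrative, not any provider’s actual price list, though the shape (a per-GB-second compute charge plus a per-request fee) mirrors how major platforms bill.

```python
# Hedged sketch of pay-per-use billing. Rates are invented for
# illustration; real providers publish their own per-GB-second and
# per-request prices.

def monthly_cost(invocations: int, avg_duration_s: float, memory_gb: float,
                 price_per_gb_s: float = 0.0000166667,
                 price_per_million_requests: float = 0.20) -> float:
    compute = invocations * avg_duration_s * memory_gb * price_per_gb_s
    requests = invocations / 1_000_000 * price_per_million_requests
    return compute + requests

# One million invocations of a 200 ms, 512 MB function in a month:
cost = monthly_cost(1_000_000, 0.2, 0.5)
assert round(cost, 2) == 1.87  # under two dollars at these sample rates
```

Note that an idle function costs nothing at all, which is the core of the economic argument.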
In other words, with serverless technology, you are essentially outsourcing part of your infrastructure to make it lighter, leaner, and more responsive.
Michael shared his experience of transitioning to serverless technology while working at startups. His goal was to find cost-effective solutions that would scale as the businesses grew. The first shift to serverless was not planned; it happened naturally. Michael’s approach to software had always been user-focused. This means he starts by building the front end and UX to quickly prove concepts and gather feedback from users.
One of the startups he worked with began as a progressive web application. As the product evolved, the team realized they needed a backend, and serverless was the logical choice to support their growth.
Though the idea of serverless may sound highly appealing, businesses should also stay aware of its possible downsides.
They become obvious when you need to exercise control at a low system level in a very specific way. While you can adjust parameters like memory allocation on platforms like GCP, AWS, or Azure, you don’t have full access to the underlying system. Additionally, you might lose visibility into the scaling process. For example, it may be impossible for you to control how many instances of your function are running at any given time. This lack of fine-grained control over scaling and thresholds can be a challenge.
Another issue is the cold start problem. When a function is triggered, it often needs to spin up, typically using something like a Docker container. This setup takes time (it can be a couple of seconds). This leads to a lag before the function can serve traffic, which might be noticeable to users.
Observability becomes crucial in serverless environments. Without direct access to the system, you need to rely on external tools to monitor your code and infrastructure. If an issue arises, it can be hard to pinpoint the cause without proper monitoring in place.
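One simple way to get that visibility is to wrap each function so it emits structured records an external tool can collect. The decorator below is a minimal sketch of the idea, not any vendor’s instrumentation API.

```python
# Minimal observability sketch: wrap a handler so duration and outcome
# are logged as structured JSON records for an external monitoring tool.

import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fn")

def observed(fn):
    @wraps(fn)
    def wrapper(event):
        start = time.perf_counter()
        status = "error"
        try:
            result = fn(event)
            status = "ok"
            return result
        finally:
            log.info(json.dumps({
                "function": fn.__name__,
                "status": status,
                "duration_ms": round((time.perf_counter() - start) * 1000, 2),
            }))
    return wrapper

@observed
def greet(event):
    return {"message": f"hello {event['name']}"}

assert greet({"name": "world"})["message"] == "hello world"
```

In production this role is usually played by a managed tracing or logging service, but the principle is the same: if you can’t open the box, instrument its edges.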
Serverless is great for small, independently managed functions. But it has limitations for long-running services (execution time is typically capped, at anywhere from a few minutes to around fifteen depending on the platform) or applications requiring extremely low latency. While serverless offers massive scalability, the slight latency in ramping up can be a concern.
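A common workaround for execution-time caps is to process work in batches and stop before the limit, returning a checkpoint so a follow-up invocation can resume. The sketch below illustrates the pattern; the budget values are invented for illustration.

```python
# Sketch of working within an execution-time cap: track a deadline,
# stop before it, and report where to resume. Budgets are illustrative.

import time

def process_batch(items, start_index=0, budget_s=5.0, safety_margin_s=1.0):
    deadline = time.monotonic() + budget_s - safety_margin_s
    i = start_index
    while i < len(items) and time.monotonic() < deadline:
        _ = items[i] * 2   # stand-in for real per-item work
        i += 1
    done = i >= len(items)
    return {"done": done, "resume_at": None if done else i}

result = process_batch(list(range(1000)))
assert result["done"] is True and result["resume_at"] is None
```

When a batch can’t finish in time, the `resume_at` index would be handed to the next invocation, often via a queue message or step-orchestration service.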
As with any technology, it’s essential to carefully consider your use case before choosing serverless.
Michael admitted that he advocates a serverless-first approach, especially for new projects. He believes that most companies and projects should start with serverless by default. When you are unsure about the application or just starting out, serverless should be the initial choice. You should consider a different architecture only when you understand why serverless might not be suitable, and this may apply not to the entire project but to specific parts of it.
One key reason for implementing the serverless-first approach is that this technology allows you to build faster. You spend less time setting up infrastructure. This enables you to start running your project quickly. Moreover, this technology scales automatically. So if your project experiences unexpected traffic spikes, it can handle them with minimal effort.
By starting with the serverless-first approach, you can focus on getting your product in front of users while managing traffic. This gives you the flexibility to learn what works and what doesn’t, without worrying about infrastructure bottlenecks.
Michael also mentioned the following case.
Amazon is well-known for its use of serverless infrastructure in its video streaming service. The company once published an article explaining how they stopped using serverless for one component of that service. However, the article was widely misinterpreted, and due to the reaction it provoked, Amazon eventually retracted it. Despite the misunderstanding, the article was a great technical explanation of that decision.
The company explained that the specific part of the project had some requirements that their serverless infrastructure couldn’t meet. So, they changed the architecture of just that one component. This helped them save money, improve the user experience, and make the system more efficient.
According to Michael, this is a perfect example of the serverless-first approach. Amazon was able to build and run the service in front of customers for months or even years before realizing they needed a different solution. They had the time to learn, design a better approach, and successfully implement it. This, he believes, is a huge success story for serverless.
One of the key economic advantages of serverless, especially for startups, is its cost-efficiency. Many startups invest heavily in infrastructure before acquiring a single customer. With serverless, infrastructure costs remain low until traffic increases, allowing expenses to scale with revenue. This model ensures that businesses only pay for what they use, aligning costs with growth.
Another major benefit is flexibility. Unlike monolithic architectures, where components are tightly integrated, serverless systems are highly modular and event-driven. Functions, databases, and services operate independently, making it easier to scale, reorganize, or optimize specific parts of the system as needed. If two functions need closer integration, they can be easily adjusted without restructuring the entire architecture.
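The event-driven modularity described above can be illustrated with a toy in-process event bus. In a real serverless system, the wiring would be a managed queue or pub/sub service rather than a Python dictionary; the names here are invented for illustration.

```python
# Toy event-driven wiring: functions subscribe to events independently,
# so any one of them can be added, scaled, or removed without touching
# the others.

from collections import defaultdict

subscribers = defaultdict(list)
results = []

def on(event_name):
    """Register a function as an independent subscriber to an event."""
    def register(fn):
        subscribers[event_name].append(fn)
        return fn
    return register

def publish(event_name, payload):
    for fn in subscribers[event_name]:
        results.append(fn(payload))

@on("order.created")
def send_receipt(order):
    return f"receipt for order {order['id']}"

@on("order.created")
def update_inventory(order):
    return f"inventory updated for order {order['id']}"

publish("order.created", {"id": 42})
assert results == ["receipt for order 42", "inventory updated for order 42"]
```

Neither subscriber knows the other exists, which is exactly what makes it easy to reorganize or optimize one part of the system in isolation.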
Starting with a modular serverless approach allows businesses to adapt more easily over time. It’s much simpler to merge independent services when necessary than to untangle tightly coupled components in a monolithic system. This adaptability makes serverless an ideal choice for companies looking to scale efficiently while maintaining agility.
According to Michael, a key test of adaptability in software is whether a piece of code can be removed without disrupting the rest of the application. It doesn’t matter whether you need to introduce a new implementation, update a feature, or improve functionality, in most cases, code will need to be replaced. Flexibility is essential, and well-structured code should allow for seamless modifications.
In a modular architecture, where functions are deployed independently, removing or replacing a function is straightforward. If a service is no longer needed, it can be deleted without affecting other components. If an update is required, a new function can be introduced without major disruptions.
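Michael’s removability test can be sketched with a routing table: when handlers live behind a simple mapping, deleting or swapping one entry doesn’t disturb the rest. The routes and handlers below are hypothetical placeholders.

```python
# Sketch of the removability test: handlers behind a routing table can
# be swapped or deleted without restructuring anything else.

routes = {
    "/users": lambda req: "users v1",
    "/orders": lambda req: "orders v1",
}

# Replace one function with a new implementation...
routes["/orders"] = lambda req: "orders v2"
# ...or remove one that is no longer needed.
del routes["/users"]

assert routes["/orders"]({}) == "orders v2"
assert "/users" not in routes
```

In a serverless deployment, the "routing table" is the platform’s trigger configuration, and the same property holds: each function can be retired independently.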
However, sometimes the issue lies not in the function itself but in the process behind it. To improve user experience, it is often beneficial to handle certain operations in the background while allowing users to continue to the next step. Even when latency is a concern, effective solutions can mask delays and ensure a smooth and responsive experience.
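The background-work pattern above can be sketched with a thread and a queue: the handler hands off the slow part and responds immediately. In a real serverless setup, the queue would typically be a managed service triggering a second function; everything here is an in-process stand-in.

```python
# Sketch of masking latency: the request handler queues the slow task
# and returns right away, while a background worker processes it.

import queue
import threading

tasks: "queue.Queue[str]" = queue.Queue()
completed = []

def worker():
    while True:
        task = tasks.get()
        if task is None:            # sentinel: shut the worker down
            break
        completed.append(f"processed {task}")
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_request(task_id: str):
    tasks.put(task_id)              # hand off the slow part
    return {"status": "accepted", "task": task_id}  # respond immediately

assert handle_request("resize-image-7")["status"] == "accepted"
tasks.join()                        # wait only so the example can verify
assert completed == ["processed resize-image-7"]
```

From the user’s perspective, the request completes instantly; the heavy lifting finishes moments later, out of the critical path.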
While serverless computing is often associated with startups, it is also highly valuable for large enterprises, if not more so. Some of the biggest adopters of serverless technology are actually the companies that provide it: Google Cloud, Amazon Web Services (AWS), and Microsoft Azure. These tech giants didn’t create serverless computing for startups alone. They use it extensively themselves.
One of the primary reasons serverless is so beneficial at scale is agility. It allows enterprises to deploy new features and services without being directly tied to complex infrastructure work. While these companies invest heavily in infrastructure and employ some of the best hardware specialists in the world, their service developers don’t need to focus on hardware management. Serverless enables them to build and deploy applications faster and keep innovation cycles short and efficient.
Large enterprises also often need to deal with ongoing legacy system upgrades. Many companies operate in a continuous cycle of replacing outdated platforms with modern solutions. Serverless offers a strategic way to introduce modular, scalable services into these transitions. By gradually integrating serverless technology, businesses can break apart monolithic architectures, reduce infrastructure overhead, and create a more flexible and efficient system. And all this is possible without the need for a full-scale overhaul.
A few years ago, serverless computing was surrounded by significant hype, and some people believed that serverless could be used for everything (which is not true). Now this hype seems to be over, as we have reached a so-called plateau, and some developers may not be familiar with the technology at all. However, serverless architecture is not a passing trend. It is here to stay. The question is not whether serverless will remain relevant but rather how widely it will be adopted and in what form.
In the coming years, more companies are expected to adopt a serverless-first approach, where serverless is the default choice unless specific needs dictate otherwise. However, this shift will not happen across the board, as different businesses have varying infrastructure requirements.
It’s difficult to make industry-wide predictions, as serverless is just one of many architectural choices available. Over time, the term “serverless” itself may fade, with the focus shifting toward broader cloud-native patterns rather than a distinct category.
Another key area of change could be the pricing model. Today, serverless operates primarily on a pay-as-you-use basis, which provides cost efficiency and scalability. However, companies may also see opportunities to reduce expenses by purchasing capacity in bulk. This would be similar to traditional cloud computing models, where reserving resources upfront can be more cost-effective. Such a shift could help businesses optimize spending while still leveraging the benefits of serverless technology.
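The trade-off between pay-as-you-use and bulk capacity comes down to a break-even calculation. All the prices below are invented for illustration; the point is the logic, not real rates.

```python
# Back-of-the-envelope comparison of pay-as-you-use vs reserved
# capacity. Rates and fees are hypothetical.

def on_demand_cost(gb_seconds: float, rate: float = 0.0000167) -> float:
    return gb_seconds * rate

def reserved_cost(gb_seconds: float, flat_fee: float = 50.0,
                  discounted_rate: float = 0.0000100) -> float:
    return flat_fee + gb_seconds * discounted_rate

low_usage, high_usage = 1_000_000, 20_000_000
# Light or bursty workloads: pay-as-you-use wins.
assert on_demand_cost(low_usage) < reserved_cost(low_usage)
# Heavy, steady workloads: reserved capacity wins.
assert reserved_cost(high_usage) < on_demand_cost(high_usage)
```

The crossover point depends entirely on how steady the workload is, which is why the reserved model appeals mainly to businesses with predictable baseline traffic.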
Apart from this, Michael explained that serverless computing will continue evolving. It’s highly likely that we will see new patterns emerging to streamline its adoption. Now each company needs to invent its own approach. Nevertheless, established best practices and frameworks could guide serverless implementations and make this technology an even more viable option for businesses of all sizes.
While speaking about the impact that new technologies may have on the world, Michael stated that the consequences of the adoption of some innovations can be quite worrying.
Machine learning (ML) has been around for decades, but large language models (LLMs) are still relatively new. While they have specific strengths and weaknesses, the current hype makes it difficult to fully assess their most effective use cases. As the technology matures, a clearer understanding will emerge regarding what LLMs are truly efficient at and where their limitations lie.
Beyond technical capabilities, a critical aspect of evaluating any architectural decision is its impact on the world. Such decisions should be considered not just in terms of performance and cost but also sustainability. Energy consumption, carbon footprint, and ecological impact are all essential considerations.
The ability to make informed decisions about minimizing negative effects and amplifying positive ones is crucial. However, when it comes to LLMs, this level of control is still lacking, which raises concerns about their long-term viability from an environmental perspective.
Serverless computing, on the other hand, offers a more sustainable approach to infrastructure. It inherently optimizes resource usage, scaling only when needed. Running dedicated servers, by contrast, can lead to wasteful energy consumption, making serverless a more viable option for reducing carbon footprints.
At the end of their discussion, Max also asked Michael to share his recommendations for businesses that need to choose the type of infrastructure for their systems.
According to the expert, your team’s existing skill set should be a key factor. Of course, it’s possible to push a team to learn, adapt, and grow. However, leveraging their current expertise can often be the most efficient approach. If your team has deep knowledge in a specific area, it might make sense to build around that strength rather than forcing a transition to an unfamiliar technology stack.
Michael mentioned that some companies make infrastructure decisions that require them to hire entirely new tech teams just to support the shift. While this can bring in fresh expertise, it also introduces risks, costs, and potential disruptions.
Another crucial consideration is risk mitigation. Relying on a single location for server management or cloud services can create vulnerabilities caused by outages, security breaches, or physical disasters. A robust disaster recovery strategy is essential to ensure business continuity.
For high availability and resilience, companies should consider multi-cloud strategies, such as distributing workloads across multiple providers, using different providers for different services, or maintaining failover capacity with a second vendor.
A multi-cloud approach can significantly enhance reliability and performance. Nevertheless, it requires investment in tools, expertise, and governance to manage complexity effectively.
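The failover side of a multi-cloud strategy can be sketched as a simple chain: try the primary provider and fall back to a secondary on failure. The provider functions below are placeholders, not real SDK calls.

```python
# Minimal multi-cloud failover sketch. call_primary and call_secondary
# are hypothetical stand-ins for calls to two different providers.

def call_primary(request: str):
    raise ConnectionError("primary region is down")  # simulated outage

def call_secondary(request: str):
    return {"provider": "secondary", "result": request.upper()}

def resilient_call(request: str):
    for provider in (call_primary, call_secondary):
        try:
            return provider(request)
        except ConnectionError:
            continue  # try the next provider in the failover chain
    raise RuntimeError("all providers failed")

assert resilient_call("ping")["provider"] == "secondary"
```

Real deployments add health checks, retries with backoff, and data-replication concerns on top of this, which is where the tooling and governance investment mentioned above comes in.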
As you can see, the implementation of every technology (even the most promising one) can be associated with a range of pitfalls. And it is always better to stay aware of them beforehand.
If you want to learn more about tech innovations that are expected to change the world, don’t miss the next episodes of the Innovantage podcast!