Adoption of Microservices – Insights and Lessons Learned Along the Way

June 21st, 2016

It has been a while since my colleague Justin Hart last posted about microservices, so I decided to revisit the topic, discuss some additional aspects of microservices architecture and highlight various insights we’ve gained on our journey toward the adoption of microservices for a virtual, cloud-based Session Border Controller (SBC).

Let’s start with microservices and complexity. From the outside looking in, it might appear that adopting microservices will increase complexity. In reality, much of that complexity was already in the application; it was just hidden within a monolithic code base. By exposing it, you can start to gain value from it. The best approach to microservices is to make the independent functions as self-organizing as possible. For example, to support scaling, have functions join and leave clusters automatically, and have them use update/broadcast mechanisms so other elements can learn about relevant changes.
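
To make the self-organizing idea concrete, here is a minimal, purely illustrative sketch: each function instance joins or leaves a shared registry on its own, and membership changes are broadcast to subscribers so other elements learn about them without an operator pushing configuration. The class, instance and topic names here are hypothetical, not taken from any particular product.

```python
# Illustrative sketch only: self-organizing cluster membership with a
# join/leave registry and broadcast notifications to interested peers.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ClusterRegistry:
    members: Dict[str, str] = field(default_factory=dict)   # instance_id -> address
    subscribers: List[Callable[[str, str, str], None]] = field(default_factory=list)

    def subscribe(self, callback: Callable[[str, str, str], None]) -> None:
        """Register a callback invoked as callback(event, instance_id, address)."""
        self.subscribers.append(callback)

    def _broadcast(self, event: str, instance_id: str, address: str) -> None:
        for callback in self.subscribers:
            callback(event, instance_id, address)

    def join(self, instance_id: str, address: str) -> None:
        self.members[instance_id] = address
        self._broadcast("join", instance_id, address)

    def leave(self, instance_id: str) -> None:
        address = self.members.pop(instance_id, "")
        self._broadcast("leave", instance_id, address)


# Example: a media function scales out, and another element learns about it
# automatically instead of being reconfigured by hand.
registry = ClusterRegistry()
registry.subscribe(lambda event, iid, addr: print(f"{event}: {iid} @ {addr}"))
registry.join("media-1", "10.0.0.11:5060")
registry.join("media-2", "10.0.0.12:5060")
registry.leave("media-1")
```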

Moving on to performance. We gained an appreciation for what really happens to system performance when direct function calls are replaced with Application Programming Interface (API) calls. This boils down to determining the right level of microservices granularity. There is always an inherent trade-off between protocol “chattiness” and scalability, so the level of granularity that is right for one use case may not be the best for another. Another aspect of performance is that some technology choices look good on paper but need to be prototyped before their performance is really understood. And of course they need to be analyzed at scale to expose potential issues that would otherwise go unnoticed.
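
A rough micro-benchmark can illustrate why that trade-off matters. The sketch below, using only the Python standard library, compares a direct in-process call with the same operation exposed over a local HTTP “API”; the function and endpoint names (lookup_route, /<number>) are hypothetical stand-ins, and the absolute numbers will vary by machine, but the relative overhead of going over the wire is what drives granularity decisions.

```python
# Illustrative only: cost of direct calls vs. the same work behind a local HTTP API.
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def lookup_route(number: str) -> dict:
    # Stand-in for real work done by a routing function.
    return {"number": number, "next_hop": "sbc-core-1"}


class RouteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(lookup_route(self.path.lstrip("/"))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


server = HTTPServer(("127.0.0.1", 0), RouteHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

N = 1000

start = time.perf_counter()
for i in range(N):
    lookup_route(str(i))
direct = time.perf_counter() - start

start = time.perf_counter()
for i in range(N):
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/{i}") as resp:
        json.load(resp)
api = time.perf_counter() - start

server.shutdown()
print(f"direct calls: {direct:.4f}s, HTTP API calls: {api:.4f}s "
      f"(~{api / direct:.0f}x overhead)")
```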

Our next insight was to give our Operations teams tools to control behavior. It became clear that it would be easier to provide ways to control microservices behavior at the application level than to expect our Operations team to define how each element should behave from the ground up.

To achieve simplification and cohesion at the application level, the importance of automation became obvious as more parts became visible to operations. We determined we needed to invest in automated discovery, self-organization of components and cluster-wide distribution of configuration data. Without this, the overhead of managing many discrete microservice components was going to become a solution bottleneck.
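
As a sketch of what cluster-wide configuration distribution looks like, the example below shows components bootstrapping from a shared, versioned store and reacting to changes, so operators update one place rather than touching every instance. The ConfigStore API and keys here are hypothetical; in a real deployment this role is typically played by a distributed store such as etcd, Consul or ZooKeeper.

```python
# Illustrative only: versioned configuration shared across a cluster,
# with watch callbacks so components pick up changes automatically.
from typing import Callable, Dict, List, Tuple


class ConfigStore:
    def __init__(self) -> None:
        self.version = 0
        self.data: Dict[str, str] = {}
        self.watchers: List[Callable[[int, Dict[str, str]], None]] = []

    def put(self, updates: Dict[str, str]) -> int:
        """Apply an update, bump the version, and notify every watcher."""
        self.data.update(updates)
        self.version += 1
        for watch in self.watchers:
            watch(self.version, dict(self.data))
        return self.version

    def get(self) -> Tuple[int, Dict[str, str]]:
        return self.version, dict(self.data)

    def watch(self, callback: Callable[[int, Dict[str, str]], None]) -> None:
        self.watchers.append(callback)


# A component bootstraps from the shared store, then reacts to changes.
store = ConfigStore()
store.put({"sip.port": "5060", "max.sessions": "10000"})

version, config = store.get()
print(f"bootstrapped at v{version}: {config}")

store.watch(lambda v, cfg: print(f"reconfigured at v{v}: {cfg}"))
store.put({"max.sessions": "20000"})   # changed once, seen cluster-wide
```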

As we switched from technical insights to business insights, we wanted to better understand our customers’ view of the key drivers for microservices adoption. We gained some good insight into this question from a poll we conducted during our recent webinar with Light Reading. We asked the attendees, “What do you consider the lead driver for microservices deployment?” They had three options: Efficiency, Future Proof, or Agility.

And the top answer was? Agility.

The poll results were as follows:

  • Just over 25% selected Efficiency (scale only with resources that are actually used)
  • Just under 25% chose Future Proof (ability to replace microservices without impacting an entire service)
  • Just under 50% selected Agility (ability to rapidly react to changes in service demand or service functional needs)

I see two takeaways from this information. First, all three drivers carry some level of importance, since at least 25% of respondents considered each of them the lead driver, so we need to keep all of them in mind in our solution. Second, I believe Agility came out on top because the migration to the cloud and the adoption of microservices are really about gaining a competitive differentiator. So while Efficiency (operational cost savings) and Future Proof (optimizing capital expense) are important, they are viewed as secondary to being more responsive to customer demands.

If you want more detail on these topics, please listen to our recent webinar with Light Reading, “Microservices Architecture Adoption.”