Docker and Microservices: enabling the next generation of distributed applications


Used and advocated by organizations like Netflix and Amazon, microservices architectures boast many advantages. Instead of deploying monolithic applications, microservices architectures break them down into small, isolated parts, each responsible for a single feature or function. The rationale is that each of those parts will be easier to write, easier to understand, and therefore easier to debug. The approach also relies on the idea that it is simpler to reason about many small components than about one big system that is the sum of those components.

Those parts, being isolated, communicate with each other through APIs or RPC. If the chosen mechanisms are standard enough, each part can be implemented with entirely different frameworks and languages: a Node.js web frontend can make API calls to a Python backend, and vice versa. It also means that, as long as API contracts are respected, a single service can be deployed independently of the rest of the application. A new implementation can be rolled out without affecting the rest of the code base, and rolled back the same way.
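To make the idea of an API contract concrete, here is a self-contained Python sketch (the endpoint and message are invented for illustration) in which a "frontend" calls a "backend" over HTTP. The caller depends only on the URL and the JSON shape of the response, not on how the backend is implemented:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A minimal "backend" service exposing one JSON endpoint.
class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"message": "hello from the backend"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for this demo

# Port 0 asks the OS for any free port; server_address is updated after bind.
server = HTTPServer(("127.0.0.1", 0), GreetingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "frontend" side: it only knows the URL and the JSON contract.
port = server.server_address[1]
reply = json.loads(urlopen(f"http://127.0.0.1:{port}/greeting").read())
print(reply["message"])  # prints "hello from the backend"
server.shutdown()
```

Because the contract is plain HTTP and JSON, the backend here could be swapped for a service written in any other language without touching the calling code.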

Pushing this idea further, different services can be assigned to different teams. This helps each team build deeper knowledge of "their" code, since that code is now clearly delimited by the boundaries between services. The root causes of failures also become easier to identify, since auditing and tracing are easier to implement at API boundaries than within a single process.

Such architectures bring new requirements, however. API and RPC mechanisms are slower than in-process function calls, and often less powerful (in terms of polymorphism and introspection, for instance). Communicating with a "remote" service (even one deployed locally) means that we need service discovery, and possibly activation mechanisms.
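At its core, service discovery is a mapping from service names to network addresses. The sketch below is a hypothetical in-memory registry (real deployments would typically rely on DNS or a dedicated store such as Consul, etcd, or ZooKeeper); it shows the lookup step that replaces hardcoded addresses in client code:

```python
# Hypothetical in-memory service registry; names and addresses are
# illustrative only.
class ServiceRegistry:
    def __init__(self):
        self._services = {}  # service name -> list of "host:port" strings

    def register(self, name, address):
        # A service instance announces itself under a well-known name.
        self._services.setdefault(name, []).append(address)

    def lookup(self, name):
        # A client asks "where is service X right now?" instead of
        # embedding an address in its configuration.
        instances = self._services.get(name)
        if not instances:
            raise LookupError(f"no instances registered for {name!r}")
        return instances[0]

registry = ServiceRegistry()
registry.register("billing", "10.0.0.5:8000")
print(registry.lookup("billing"))  # prints "10.0.0.5:8000"
```

The indirection is the point: when a service moves or is redeployed, only the registry changes, not every caller.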

When scaling out (i.e., adding more machines, as opposed to "scaling up" by replacing existing machines with bigger ones), the situation becomes even more complex: we now also need fail-over and load balancing. A solid deployment system therefore becomes more crucial than ever to sustain the fast-paced deployments that microservices make possible.
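Client-side round-robin load balancing with fail-over can be sketched in a few lines. Everything here (class names, the fake request function, the addresses) is illustrative, not a prescribed implementation:

```python
import itertools

# Minimal round-robin load balancer with fail-over (illustrative sketch).
class LoadBalancer:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)
        self._count = len(instances)

    def call(self, request_fn):
        # Try each instance at most once before giving up.
        last_error = None
        for _ in range(self._count):
            instance = next(self._cycle)
            try:
                return request_fn(instance)
            except ConnectionError as exc:
                last_error = exc  # fail over to the next instance
        raise last_error

lb = LoadBalancer(["10.0.0.5:8000", "10.0.0.6:8000"])

def fake_request(instance):
    # Stand-in for a real network call; the first instance is "down".
    if instance == "10.0.0.5:8000":
        raise ConnectionError("instance down")
    return f"served by {instance}"

print(lb.call(fake_request))  # prints "served by 10.0.0.6:8000"
```

In production this logic usually lives in a proxy or the platform itself; the sketch only shows why adding instances forces the question of who retries, and where.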

By simplifying the packaging of applications, Docker and containers help solve that deployment challenge. We will see how container-based network setups can enable creative new ways of dealing with service discovery, and how, thanks to Docker, the advanced network functions needed in microservices architectures can be implemented without increasing the complexity of our application code.

Saturday, February 21, 2015 - 18:00 to 19:00