Cashing in on Varnish
When building scalable web application infrastructures, the question is always how to serve content quickly, efficiently and reliably. As DevOps engineers, we typically hear this challenge from our engineering, product and business teams. This presentation shows how Varnish helps address those concerns, and tells the story of our lives before and after Varnish - dealing with the thundering herd from on-air callouts, the need for complex traffic manipulation, load balancing, monitoring and more.
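As a taste of the thundering-herd remedy discussed in the talk, Varnish's grace mode lets you serve slightly stale content while a single background fetch refreshes the object. This is a minimal VCL 4.0 sketch; the backend address and the TTL/grace values are illustrative, not a recommendation:

```vcl
vcl 4.0;

backend default {
    .host = "127.0.0.1";   # assumed application server
    .port = "8080";
}

sub vcl_backend_response {
    # Cache objects for 30s, but keep serving them for up to 2 minutes
    # past expiry while one backend request refreshes the cache.
    # Combined with Varnish's built-in request coalescing, this turns
    # a spike of identical requests into a single origin fetch.
    set beresp.ttl = 30s;
    set beresp.grace = 2m;
}
```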
The reasons for using Varnish are compelling, and integrating it into your infrastructure can be made straightforward with a configuration management framework like Chef. The lessons learned while integrating with Chef are important for successfully automating deployment and making the application environment scalable; this talk covers the strategies and techniques for doing so.
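The Chef side of that integration can be as simple as a recipe that installs the package, renders the VCL from a version-controlled template, and reloads the service on change. A hypothetical sketch (resource names and paths are assumptions, not from the talk):

```ruby
# Install Varnish from the distribution's package repository.
package 'varnish'

# Render the VCL from a template in the cookbook so cache policy
# lives in version control alongside the rest of the infrastructure.
template '/etc/varnish/default.vcl' do
  source 'default.vcl.erb'
  owner 'root'
  group 'root'
  mode '0644'
  # Reload (not restart) so the cache is not emptied on a policy change.
  notifies :reload, 'service[varnish]'
end

service 'varnish' do
  action [:enable, :start]
end
```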
But once you're up and running with Varnish, where do you go next? To operationalize Varnish you need to go further and provide telemetry so that other teams, including Operations, can inspect the behaviour of a web application and understand how it's performing. The logging capabilities within Varnish, married with an ELK stack, will give you rich dashboards, help you identify application performance issues and site issues, and provide analytics in near real-time. In addition, monitoring systems such as New Relic and Zenoss have Varnish plugins that can inspect a running instance and report its runtime metrics.
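One way to feed that ELK pipeline is varnishncsa, which can emit one structured log line per request. A sketch, assuming JSON output shipped to Logstash; the field names and file path are illustrative:

```shell
# Run varnishncsa as a daemon, writing JSON-shaped log lines that
# Logstash/Elasticsearch can index without extra grok parsing.
# %{Varnish:time_firstbyte}x and %{Varnish:hitmiss}x expose cache
# behaviour alongside the standard NCSA fields.
varnishncsa -D \
  -F '{ "timestamp": "%t", "client": "%h", "request": "%r", "status": %s, "bytes": %b, "ttfb": %{Varnish:time_firstbyte}x, "hitmiss": "%{Varnish:hitmiss}x" }' \
  -w /var/log/varnish/varnishncsa.json
```

From there, a Filebeat or Logstash input tails the file and the hit/miss ratio, status codes and time-to-first-byte become dashboard material.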
In conclusion, this talk covers real-world strategies for implementing Varnish as a caching platform, the lessons learned along the way, and how it can add value to your organization.