GlusterFS is an open source, software-only, highly available, scalable, distributed storage system for modern data centers. GlusterFS is one way for public, private, and hybrid cloud environments to scale their data storage capacity to hundreds of petabytes and across multiple geographic locations. Learn how Pandora uses GlusterFS to scale out its file-serving operation around the globe.

Highly available: GlusterFS delivers enterprise-class high availability via local and remote replication capabilities. GlusterFS is the only highly available storage solution for Amazon Web Services (AWS) Elastic Compute Cloud (EC2) and Elastic Block Storage (EBS) and for GoGrid's Cloud Hosting environment. GlusterFS enables business continuity in the public cloud and enhances both business continuity and disaster recovery capabilities in the private cloud.

Superior economics: With GlusterFS, superior economics is the rule. In both public and private cloud environments, you pay only for what you use. In the private cloud you are free to choose and deploy on any certified commodity hardware, paying for performance and capacity only as and when they are needed.

In this session, attendees will learn what is required in a typical GlusterFS rollout: what the typical use cases are, where GlusterFS shines, and where it doesn't. Attendees should walk away with a good understanding of how to start their storage project.
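As a brief illustration of the replication behind that high availability, a two-way replicated volume is created and mounted with the gluster CLI (hostnames and brick paths below are hypothetical):

```shell
# Create a volume replicated across two servers, then start it
gluster volume create datavol replica 2 server1:/export/brick1 server2:/export/brick1
gluster volume start datavol

# Mount it on a client with the native FUSE client; either server can fail
# without interrupting access
mount -t glusterfs server1:/datavol /mnt/datavol
```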
nginx can sit between the web browser and the web server, where it can offload some of the work from the web server(s). In addition to caching data, nginx can handle URL rewriting, data compression, byte ranges, chunked responses, image resizing, SSL, and many other tasks. nginx supports virtual hosts and can proxy for multiple groups of web servers. It can also proxy the IMAP, SMTP, and POP3 mail protocols, including all three over SSL. In October 2011, Netcraft reported that nginx held more than 11% of the web server market and was growing. Learn about the up-and-coming web server that has already arrived.
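A minimal reverse-proxy configuration sketch of that role (server names, addresses, and paths are illustrative, not from the talk):

```nginx
http {
    # Group of backend web servers nginx proxies for
    upstream backend {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    # Cache storage for proxied responses
    proxy_cache_path /var/cache/nginx keys_zone=appcache:10m;

    server {
        listen 80;
        server_name example.com;   # one virtual host of possibly many

        location / {
            proxy_pass http://backend;  # offload request to the pool
            proxy_cache appcache;       # serve repeat requests from cache
            gzip on;                    # data compression
        }
    }
}
```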
This presentation will begin with an introduction to the concepts of "owner", "group", and "other" as well as "read", "write", and "execute" permissions. We will discuss how these permissions affect files and directories differently. With real examples, we will see how complications and undesired access can arise from different permission settings. We will survey common permission schemes. The SUID and SGID bits and the "sticky bit" will be introduced as means of surmounting common security issues in collaborative directories. With a firm conceptual foundation in place, Access Control Lists (ACLs) can be discussed as a means of fine-tuning the access granted by standard permissions. The presentation will close with a discussion of how to implement, manage, and determine ACLs.
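The owner/group/other and read/write/execute concepts map directly onto octal mode bits, which a short Python sketch can demonstrate (file names here are temporary and illustrative):

```python
import os
import stat
import tempfile

# Give a file mode 0o640: owner read/write, group read, other nothing.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)
print(stat.filemode(os.stat(path).st_mode))   # -> -rw-r-----

# A shared directory with the sticky bit (mode 1777, like /tmp) lets anyone
# create files, but only a file's owner may delete it.
d = tempfile.mkdtemp()
os.chmod(d, 0o1777)
print(stat.filemode(os.stat(d).st_mode))      # -> drwxrwxrwt
```

The trailing `t` in the directory listing is the sticky bit the abstract mentions; `s` in the owner or group execute position would indicate SUID or SGID.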
MySQL is the most popular database on the web, but there are relatively few qualified MySQL DBAs. This session provides a guide for Linux admins who are looking to acquire DBA skills or who have inherited servers they do not know how to care for. Did you know your ext2/ext3 file system may be slowing you down? Do you have the right data indexed? And do you know why you should invest in memory rather than processors?
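"Do you have the right data indexed?" is usually answered with EXPLAIN; a sketch (table and column names are hypothetical examples, not from the session):

```sql
-- Ask MySQL how it will run the query; a NULL in the "key" column
-- means a full table scan
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Add an index on the filtered column so the query can use it
CREATE INDEX idx_customer ON orders (customer_id);
```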
While Security-Enhanced Linux (SELinux) is an incredibly powerful tool for securing Linux servers, it has historically had a reputation for being difficult to configure, and as a result many system administrators would simply turn it off. Fortunately, the incredible amount of work done by the SELinux community in recent years has made SELinux much more administrator-friendly. In this session, attendees will learn the basics of SELinux, including configuring, analyzing, and correcting SELinux errors, as well as writing basic policies to enable non-SELinux-aware applications to work on SELinux-protected systems. Real-world examples will be used to demonstrate how to use SELinux.
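As a hedged sketch of the diagnose-and-fix cycle such a session typically covers (module and boolean names below are common examples, not necessarily the ones the talk uses):

```shell
# Show recent SELinux denials from the audit log
ausearch -m avc -ts recent

# Generate a local policy module that would allow the denied actions...
audit2allow -a -M mylocalpolicy

# ...review mylocalpolicy.te, then load the compiled module
semodule -i mylocalpolicy.pp

# Many denials are better fixed with a boolean or a file-context relabel
setsebool -P httpd_can_network_connect on
restorecon -Rv /var/www/html
```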
Infrastructure is code. The separation between how you manage infrastructure and applications is disappearing. System administrators love Chef because it gives them the flexibility to integrate all aspects of their infrastructure, such as monitoring and trending tools, with applications. Software developers love Chef because it takes care of the muck so they can focus on writing great applications. Get beyond configuration management alone: investigate Chef's architecture and design, including its tools and capabilities, and dissect the anatomy of a Chef run.
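Chef recipes are written in a Ruby resource DSL; a minimal sketch (package, template, and service names are illustrative):

```ruby
# Install nginx, render its config from a cookbook template, and keep it running
package 'nginx'

template '/etc/nginx/conf.d/app.conf' do
  source 'app.conf.erb'                 # hypothetical template in this cookbook
  notifies :reload, 'service[nginx]'    # reload nginx when the file changes
end

service 'nginx' do
  action [:enable, :start]
end
```

During a Chef run, each resource is compared against the node's actual state and only corrected when it drifts, which is the convergence behavior the "anatomy of a Chef run" portion examines.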
Formerly called Ensemble, juju is "DevOps Distilled™". Through the use of charms (renamed from formulas), juju provides you with shareable, re-usable, and repeatable expressions of DevOps best practices. You can use them unmodified, or easily change and connect them to fit your needs. Deploying a charm is similar to installing a package on Ubuntu: ask for it and it's there; remove it and it's completely gone. In this talk, we will discuss the concepts behind juju and run a live demonstration of deploying and managing various workloads.
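The "ask for it and it's there" workflow looks like this on the command line (wordpress and mysql are stock public charms; the exact demo workloads may differ):

```shell
# Deploy two charms and wire them together
juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql

# Open the service to outside traffic
juju expose wordpress

# Removing a service tears it down completely
juju destroy-service wordpress
```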
Puppet has dramatically helped solve the coordination and automation problems of configuration management, while Puppet's cloud provisioning capabilities aid IT groups with automated scaling. The most common approach for getting Puppet running in AWS, for example, is to have a base AMI that runs Puppet when launched; Puppet then brings the instance to its assigned role. However, this is not always a fast process. Depending on many factors, it may take upwards of several minutes for Puppet to get all the configurations in place for the instance to fulfill its assigned roles. If your autoscaler (or you, manually) fires up several hundred or several thousand instances, time is of the essence.

By utilizing revision control system hooks, Puppet's cloud provisioning, and a custom Puppet report, your AMIs can be kept as up to date as possible. This dramatically lowers the time to scale, ensuring you're ready for whatever may come.

The entire premise is based on the notion that your Puppet code is what's authoritative in your network. Your build system may know the latest deployable version of your applications, but it can't know what's required for a system to actually deploy them. If you update your Puppet code's requirements for deploying your application, however, that directly describes what a system should look like. Therefore, it's your Puppet code or metadata that should trigger AMI rebuilds. When code is committed to git/svn, a script brings up an instance of each AMI we want to manage, and Puppet runs on the instances to bring them to their assigned roles. If changes occurred and the runs were all successful, the image is rebuilt.
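That commit-triggered rebuild could be sketched as a git post-receive hook; everything here is a hypothetical wrapper around the workflow described above, not the speaker's actual tooling:

```shell
#!/bin/sh
# git post-receive hook: when Puppet code changes, refresh the managed AMIs.
# launch-and-converge and rebundle-if-changed are hypothetical helper scripts:
# the first boots an instance of the image and lets Puppet converge it to its
# role; the second rebundles the AMI only if the Puppet report shows changes
# and no failures.
for ami in web-base db-base; do
    ( ./launch-and-converge "$ami" && ./rebundle-if-changed "$ami" ) &
done
wait
```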
Simplicity: Scaling between massive deployments and smaller systems may seem daunting, but Salt is very simple to set up and maintain, regardless of the size of the project. The architecture of Salt is designed to work with any number of servers, from a handful of local network systems to international deployments across disparate datacenters. The topology is a simple server/client model, with the needed functionality built into a single set of daemons. While the default configuration will work with little to no modification, Salt can be fine-tuned to meet specific needs.
Parallel execution: The core function of Salt is to enable remote commands to be called in parallel rather than serially, using a secure, encrypted protocol, the smallest and fastest network payloads possible, and a simple programmer interface. Salt also brings more granular controls to the realm of remote execution, allowing systems to be targeted not just by hostname but by system properties.
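From the command line, that targeting looks like this (the grain value is an example):

```shell
# Run a command on every minion in parallel
salt '*' cmd.run 'uptime'

# Target by grain (a system property) rather than hostname
salt -G 'os:Ubuntu' pkg.install nginx
```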
Building on proven technology: Salt takes advantage of a number of established technologies and techniques. The networking layer is built with the excellent ZeroMQ networking library, so Salt itself contains a viable, transparent AMQ broker inside the daemon. Salt uses public keys for authentication with the master daemon, then uses faster AES encryption for payload communication; this means that authentication and encryption are built into Salt. Salt serializes its communication with Python's msgpack, enabling fast and light network traffic.
Python client interface: In order to allow for simple expansion, Salt execution routines can be written as plain Python modules, and the data collected from Salt executions can be sent back to the master server or to any arbitrary program. Salt can be called from a simple Python API or from the command line, so it can be used to execute one-off commands as well as operate as an integral part of a larger application.
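A minimal sketch of that Python API, assuming a running Salt master with accepted minion keys (so it is not runnable standalone):

```python
# Requires the salt package and a configured, running master
import salt.client

local = salt.client.LocalClient()

# Equivalent of `salt '*' test.ping` on the command line;
# returns a dict mapping minion id to result
results = local.cmd('*', 'test.ping')
for minion, alive in results.items():
    print(minion, alive)
```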
Fast, flexible, scalable: The result is a system that can execute commands across server groups of any size, from very few to very many, at considerable speed; a system that is fast, easy to set up, and amazingly malleable, able to suit any number of servers working together. Salt's unique architecture brings together the best of the remote execution world, amplifies its capabilities, and expands its range, resulting in a system as versatile as it is practical for any network.
Open: Salt is developed under the Apache 2.0 license and can be used for open and proprietary projects alike. Please submit your extensions back to the Salt project so that we can all benefit together as Salt grows. So please feel free to sprinkle some of this around your systems and let the deliciousness come forth.
Back by popular demand, this standing-room-only talk is updated from last year and introduces participants to the concepts of IPv6, showing how they can easily add IPv6 capabilities to their Linux systems even if their ISP doesn't yet support IPv6. It covers all the basics, including interface configuration, DNS, web services, email, etc. A Q&A period at the end will allow participants to get answers to their IPv6 questions.
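A hedged sketch of the interface-configuration basics (addresses are drawn from the 2001:db8::/32 documentation prefix; your interface name and real prefix will differ):

```shell
# Add a static IPv6 address and a default route
ip -6 addr add 2001:db8::10/64 dev eth0
ip -6 route add default via 2001:db8::1

# Verify connectivity, and check DNS: AAAA records carry IPv6 addresses
ping6 ipv6.google.com
dig AAAA www.example.com
```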