MySQL is the most popular database on the web, but there are relatively few qualified MySQL DBAs. This session provides a guide for Linux admins who are looking to acquire DBA skills, or who have inherited servers and do not know how to care for them. Did you know your ext2/ext3 file system may be slowing you down? Do you have the right data indexed? And do you know why you should invest in memory rather than processors?
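As a taste of the "right data indexed" question, here is a minimal sketch of checking whether a query uses an index. It uses SQLite so it runs anywhere self-contained; in MySQL you would ask the same question with EXPLAIN. The table and column names are made up for illustration.

```python
import sqlite3

# Hypothetical table: does a lookup by email scan the whole table,
# or use an index? (SQLite stand-in; same idea as MySQL's EXPLAIN.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])

def plan_for(query):
    # Last column of each EXPLAIN QUERY PLAN row is the human-readable detail.
    rows = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return " ".join(row[-1] for row in rows)

query = "SELECT * FROM users WHERE email = 'user500@example.com'"
before = plan_for(query)   # full table scan
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan_for(query)    # index lookup via idx_users_email
print(before)
print(after)
```

The same lookup goes from touching every row to touching one; on a table of millions of rows that is the difference between milliseconds and minutes.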
A unique selling point of MySQL is that it supports multiple storage engines: users get the same top-level SQL interface while storing their data in many different ways. These benefits come with trade-offs, and we discuss some of them (and point to solutions):

* Transaction support - not all storage engines support transactions, and the engines that do use different locking strategies, so cross-storage-engine transactions are always interesting. Do you choose a transactional engine for your workload? When is it right to use a non-transactional engine like MyISAM?
* Backup - cross-storage-engine backup does not work unless you use OS-level approaches like LVM/ZFS snapshots, and in-memory engines will naturally not allow snapshots to work. How do you back up across engines?
* Replication - replication differs across storage engines in that MySQL writes a second binary log, even though transactional engines already maintain a log of their own. How do you replicate when you have a mix of engines?
* Monitoring - how do you monitor when you have several engines? What resources do you allocate to each in the configuration?
* The optimiser - how does it deal with all the different storage engines?

Today, in MySQL 5.5 and greater, InnoDB is the default storage engine; it has spawned two large forks, XtraDB and HailDB (for Drizzle). Previously, MyISAM was the default. MySQL by default ships with about a dozen engines, and other branches like MariaDB ship with close to twenty. Naturally we'll cover cool tricks you can do with storage engines. For example, how can you make good use of the Spider storage engine for vertical partitioning? When do you use the Archive storage engine to store log tables? When do you use Federated tables to get different views or execute remote commands? How do you use the Blackhole engine as a replication relay, even though the engine is essentially the /dev/null of Unix?
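To make the transaction-support trade-off concrete, here is a small sketch of what a transactional engine buys you: a failed multi-statement change rolls back atomically. SQLite stands in for a transactional engine like InnoDB; a non-transactional engine such as MyISAM would leave the first half of the transfer applied. The account data is invented for the example.

```python
import sqlite3

# Stand-in for a transactional engine (InnoDB-like behaviour).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

try:
    with conn:  # transaction: commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 70 "
                     "WHERE name = 'alice'")
        # ... crash before the matching credit to bob is written:
        raise RuntimeError("simulated failure mid-transfer")
except RuntimeError:
    pass

# The debit was rolled back: alice still has 100.
# With a non-transactional engine, she would be left with 30 and bob with 50.
print(conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0])
```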
We will go through the entire landscape, including the commercial landscape, and show you which engine is correct for your use case. If you're a developer, you will benefit from learning about the extended storage engine API in MariaDB, which for starters supports extended CREATE TABLE functionality.
MariaDB is a branch of the popular MySQL database. The project began in 2009 centred around a storage engine, but quickly evolved into another database, with two major releases in 2010. MariaDB is community developed, feature enhanced, and backward compatible with MySQL.
This session will introduce the project, and will help a DBA or developer come to grips with MariaDB.
Memcached has long been a simple temporary key/value store, used by Facebook, Twitter, Wikipedia, and many others. In recent years, however, a new protocol was designed that opens up much more potential, and more recently still the project has picked up pace and continues to push the performance barrier. We will quickly cover new features of memcached by showing use cases for improved usage. This talk will start with a short overview of memcached, but you should come prepared with basic knowledge to get the most out of it.

- Learn the benefits of the binary protocol. You've perhaps heard about it, but not enough about how awesome it is.
- Find out about the 1.6 beta tree, currently used by MySQL and others.
- Hear about new commands, features, and fixes for long-standing issues. We push performance to 10 Gbps and beyond!
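One reason the binary protocol is cheaper to parse than the old text protocol is its fixed 24-byte request header. As a hedged sketch (the key name is invented), here is a `get` request built by hand following the binary protocol's header layout:

```python
import struct

# memcached binary protocol request header (24 bytes):
# magic, opcode, key length, extras length, data type, vbucket id,
# total body length, opaque, CAS.
MAGIC_REQUEST = 0x80
OPCODE_GET = 0x00

def get_request(key: bytes) -> bytes:
    header = struct.pack(
        "!BBHBBHIIQ",
        MAGIC_REQUEST,  # magic: this is a request packet
        OPCODE_GET,     # opcode: get
        len(key),       # key length
        0,              # extras length (get has no extras)
        0,              # data type (raw bytes)
        0,              # vbucket id (reserved in plain memcached)
        len(key),       # total body length = extras + key + value
        0,              # opaque, echoed back by the server
        0,              # CAS value
    )
    return header + key

packet = get_request(b"session:42")
print(len(packet))  # 24-byte header plus the 10-byte key
```

Because every field sits at a fixed offset, the server can dispatch on the opcode without scanning for spaces and newlines, which is part of what makes the higher throughput possible.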
As the size and performance requirements of storage systems have increased, file system designers have looked to new architectures to facilitate system scalability. Ceph is a fully open source distributed object store, network block device, and file system designed for reliability, performance, and scalability from terabytes to exabytes. Ceph's architecture consists of two main components: an object storage layer, and a distributed file system that is constructed on top of this object store. The object store provides a generic, scalable storage platform with support for snapshots and distributed computation. This storage backend is used to provide a simple network block device (RBD) with thin provisioning and snapshots, or an S3 or Swift compatible RESTful object storage interface. It also forms the basis for a distributed file system, managed by a distributed metadata server cluster, which similarly provides advanced features like per-directory granularity snapshots, and a recursive accounting feature that provides a convenient view of how much data is stored beneath any directory in the system. This talk will describe the Ceph architecture and then focus on the current status and future of the project. This will include a discussion of Ceph's relationship with btrfs, the file system and RBD clients in the Linux kernel, RBD support for virtual block devices in Qemu/KVM and libvirt, and current engineering challenges.
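The recursive accounting feature mentioned above is easy to picture with a toy model. This is not Ceph code: Ceph's metadata servers keep these subtree statistics up to date incrementally, while the sketch below just computes them once over an invented in-memory tree (path to size for files, None for directories).

```python
# Toy model of "how much data is stored beneath any directory".
tree = {
    "/": None,
    "/logs": None,
    "/logs/a.log": 100,
    "/logs/b.log": 250,
    "/data": None,
    "/data/img.bin": 4096,
}

def rbytes(root: str) -> int:
    """Total size of all files at or below `root` (Ceph exposes this
    per directory without walking the tree; here we walk it)."""
    prefix = root.rstrip("/") + "/"
    return sum(size for path, size in tree.items()
               if size is not None and (path == root or path.startswith(prefix)))

print(rbytes("/logs"))  # 100 + 250
print(rbytes("/"))      # everything in the tree
```

In Ceph the equivalent answer is available as directory metadata, so asking "how big is this subtree?" costs a stat rather than a full traversal.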
Formerly called Ensemble, juju is DevOps Distilled™. Through the use of charms (renamed from formulas), juju provides you with shareable, re-usable, and repeatable expressions of DevOps best practices. You can use them unmodified, or easily change and connect them to fit your needs. Deploying a charm is similar to installing a package on Ubuntu: ask for it and it’s there, remove it and it’s completely gone. In this talk, we will discuss the concepts behind juju, and run a live demonstration of juju for deploying and managing various workloads.
One of the most rapidly growing types of high-load applications today is the "high-volume data collector". Such applications collect thousands of facts per second and store them in one or more database systems for later summarization and analysis. Examples include fault reporting systems, hardware telemetry, and security and web monitoring. The challenges of such systems are several: coping with billions of inserts, storage of terabytes of data, and the integration of disparate processes, databases, and data processing tools. The biggest challenge, though, is allowing for component upgrade, replacement, and failure while continuing to process data 24/7, because the firehose never, ever, shuts off. PostgreSQL Core Team member Josh Berkus has worked on several of these systems in the last year, including Mozilla's Socorro crash reporting system, monitoring of power generation systems, and high-volume financial transaction reporting. This talk will explore some of the lessons he has learned and open-source tools he has employed in dealing with these applications.
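One recurring lesson with firehose workloads is that you cannot afford one INSERT per fact: you buffer and flush in batches. The talk's actual stack differs; the sketch below shows the generic batching pattern with SQLite as a self-contained stand-in, and the batch size and schema are invented.

```python
import sqlite3
import time

# Buffer incoming facts and write them in batches, not one at a time.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (ts REAL, source TEXT, value REAL)")

BATCH_SIZE = 500
buffer = []

def collect(fact):
    """Called once per incoming fact; cheap until the buffer fills."""
    buffer.append(fact)
    if len(buffer) >= BATCH_SIZE:
        flush()

def flush():
    """One transaction per batch amortises commit cost across 500 rows."""
    if buffer:
        with conn:
            conn.executemany("INSERT INTO facts VALUES (?, ?, ?)", buffer)
        buffer.clear()

for i in range(1234):
    collect((time.time(), "sensor-7", float(i)))
flush()  # the firehose never stops, but drain stragglers on shutdown

print(conn.execute("SELECT COUNT(*) FROM facts").fetchone()[0])
```

The same shape appears at every scale: the constants change (batch sizes, COPY instead of INSERT in PostgreSQL), but the amortisation argument does not.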
Puppet has dramatically helped solve the issues of coordination and automation in configuration management, while Puppet's cloud provisioning capabilities aid IT groups with automated scaling. The most common approach for getting Puppet running in AWS, for example, is to have a base AMI that runs Puppet when launched; Puppet then brings the instance to its assigned role. However, this is not always a fast process. Depending on many factors, it may take upwards of several minutes for Puppet to get all the configurations in place for the instance to fulfill its assigned roles. If your auto scaler (or you, manually) fired up several hundred or several thousand instances, time is of the essence. Utilizing revision control system hooks, Puppet's cloud provisioning, and a custom Puppet report, your AMIs can be kept as up to date as possible. This will dramatically lower the time to scale, ensuring you're ready for whatever may come. The entire premise is based on the notion that your Puppet code is what is authoritative in your network. Your build system may know the latest deployable version of your applications, but it can't know what's required for a system to actually deploy the application. If you update your Puppet code's requirements to deploy your application, however, that directly describes what a system should look like. Therefore, it's your Puppet code or metadata that should trigger AMI rebuilds. When code is committed to git/svn, a script brings up instances of each AMI we want to manage, and Puppet runs on those instances to bring them to their assigned roles. If changes occurred, and they were all successful, then the image needs to be updated.
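The final rebuild decision can be reduced to one predicate over the Puppet run's outcome. The sketch below uses an invented two-field summary, not Puppet's actual report schema, just to pin down the logic: rebuild only when the run changed something and nothing failed.

```python
# Hypothetical summary of a Puppet run on a freshly booted AMI copy.
# (Field names are illustrative, not Puppet's real report format.)
def should_rebuild(report: dict) -> bool:
    """Rebuild the AMI only after a changed, fully successful run."""
    return report["changed"] > 0 and report["failed"] == 0

assert should_rebuild({"changed": 3, "failed": 0})        # drifted: rebuild
assert not should_rebuild({"changed": 0, "failed": 0})    # AMI already current
assert not should_rebuild({"changed": 2, "failed": 1})    # broken run: keep old AMI
```

The middle case is what keeps the pipeline cheap: a commit that does not affect a given role leaves that role's AMI untouched.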
The days when automotive software hacking meant trying to get MP3 music to play on a car's audio system are long behind us. The real-time fuel efficiency display of the Prius ably illustrates the driver empowerment that improved information can bring. Tata Motors, which owns Land Rover and Jaguar, has developed lane-departure warning systems that it is planning to deploy. BMW and Tesla already upgrade system firmware when cars are taken into the shop. DARPA Grand Challenge contenders from Stanford and CMU illustrate the potential for self-driving vehicles. Geely and Hawtai in China are already shipping cars running Moblin, a GNU/Linux variant based on Gnome and X11. The GENIVI Alliance, which has been formed in order to promulgate Linux-based automotive software standards, has well over 100 members, including familiar names like Delphi, ARM, Intel, Renault, Alpine, Mitsubishi, Samsung and Canonical. Along with new opportunities, there are new dangers in the auto software space. Do we *want* mechanics to be able to install new firmware in our cars? Can SELinux and iptables, or maybe Android's token-based sandboxing system, address the new security problems? How will we architect "multiseat" installations so that misbehaving applications don't overwhelm critical functions, or perhaps just distract the driver? Many questions remain unanswered, such as what kind of input devices drivers need (touchscreen, voice recognition, video-captured gestures, joysticks, other?) and which information should be presented when to which passengers. Safety aside, avoidance of motion sickness will bring a whole new dimension to Linux user interface design. But what about 2012? The unfortunately named "in-vehicle infotainment" (IVI) space is growing fast, so the field presents opportunities for job-seekers as well as hardware hackers. 
I'll demonstrate how hobbyists of limited means can display real-time fuel-efficiency data in their own cars, using open-source software running on Linux on readily available hardware.
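The arithmetic behind such a display is short. Two standard OBD-II readings, vehicle speed (km/h) and mass air flow (g/s), are enough to estimate instantaneous miles per gallon. The sketch below assumes petrol (stoichiometric air/fuel ratio of about 14.7 and roughly 2800 g of fuel per US gallon); the constants are assumptions that vary with fuel and engine.

```python
# Instantaneous MPG from two OBD-II readings, assuming petrol.
AIR_FUEL_RATIO = 14.7           # grams of air burned per gram of petrol
FUEL_GRAMS_PER_GALLON = 2800.0  # approximate density of petrol, g per US gallon
KM_PER_MILE = 1.609344

def instant_mpg(speed_kmh: float, maf_gs: float) -> float:
    # Air mass flow implies fuel mass flow, hence gallons per hour.
    fuel_gal_per_hour = maf_gs * 3600 / (AIR_FUEL_RATIO * FUEL_GRAMS_PER_GALLON)
    miles_per_hour = speed_kmh / KM_PER_MILE
    return miles_per_hour / fuel_gal_per_hour

# e.g. cruising at 100 km/h with the MAF reading 20 g/s gives ~35 mpg:
print(round(instant_mpg(100.0, 20.0), 1))
```

A dashboard display is then just this function in a loop over readings polled from the car's diagnostic port.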