Explanation and case studies of the Ceph distributed file system for system administrators
As the size and performance requirements of storage systems have increased, file system designers have looked to new architectures to facilitate system scalability. This talk will describe a deployable and highly scalable alternative to the current feature-limited selection of file storage systems. Ceph is an open source distributed file system capable of managing many petabytes of storage with ease. The architecture leverages device intelligence to provide a reliable, scalable, and high-performance file service in a dynamic cluster environment. Ceph's architecture consists of two main components: an object storage layer, and a distributed file system constructed on top of this object store. The object store provides a generic, scalable cloud storage platform (much like Amazon S3) with support for snapshots and distributed computation. The distributed file system similarly provides advanced features like per-directory snapshots and a recursive accounting feature that provides a convenient view of how much data is stored beneath any directory in the system. In addition to a standard file system interface with support in the mainline Linux kernel, we have also built interfaces to integrate directly with the Hadoop and Hypertable distributed computation and database systems. A distributed block device also provides shared reliable storage for virtual machine instances in a cloud environment (much like Amazon EBS), with support in Qemu/KVM and the Linux kernel. The project is licensed under the LGPL/GPL, and aims to play nice with the larger open source cloud, data processing and storage ecosystems.
Learn to write GIMP scripts in Python and Script-Fu
Much of the power of GIMP, the GNU Image Manipulation Program, comes from its plug-in architecture. Most of the functions you use in GIMP, including everything in the Filters menu, are plug-ins or scripts. Wouldn't it be great to be able to write some scripts and plug-ins of your own? In this tutorial, you'll learn to write GIMP plug-ins and scripts in two languages: Python and Script-Fu. Python is rapidly becoming the language of choice for writing GIMP plug-ins on Linux because of its flexibility, power and clean API. You'll see how easy it is to create a simple Python plug-in or modify an existing one. You'll also learn how to use GIMP's built-in developer documentation, as well as where to find documentation online. And you'll see how you can use Python and PyGTK to create interactive plug-ins with custom user interfaces, and how to access image pixels directly. Script-Fu isn't as powerful as Python, but it has a few other advantages. As GIMP's native language, you can count on users having it already, so your Script-Fu scripts will be useful to any GIMP user on any platform. And learning how to use it is easy, since there are at least a hundred helpful Script-Fu examples already installed on your machine. Script-Fu is a variant of Scheme, but you don't need to be fluent in Scheme or Lisp to write Script-Fu. With a few basics of Lisp syntax and a knowledge of how to use GIMP's built-in help, you're ready to write simple scripts you can share with the world.
Open source built the web; now it will power the cloud. The talk will give a brief overview of cloud computing and terminology, followed by specific examples of open source technologies that lend themselves to managing and deploying cloud computing environments.
I. A Brief Cloud Overview

Cloud computing is a network-based, distributed computing environment where resources are shared to deliver on-demand applications and services that are expected to meet a certain quality of service (QoS).

A. Characteristics of a Cloud
1. Agile – rapidly adapt to changes
2. Multi-Tenancy – sharing of resources across a large pool of users
3. Scalability – dynamic expansion to meet user needs
4. High Availability – ability to handle workloads and adapt to multiple points of failure
5. Load Balancing – balance workloads across virtual machines
6. API – ability to interact with the cloud through a well-defined interface, usually via Representational State Transfer (REST)
7. Nice to Have – security, metering, geographical independence, lower maintenance

B. The Hype – The Benefits of a Cloud
1. Reduced Costs – higher utilization, pay for what you need, faster response, automation, etc.
2. Portability – ability to migrate from one type of cloud to another
3. Agility – scale up, scale down, live migrations, etc.
4. Lower Maintenance – standardization via abstraction for target operating systems, heterogeneity, etc.

C. Types of Clouds
• Software-as-a-Service (SaaS) – software that is offered on demand and deployed either in a hosted model or with a subscription that typically provides new features and updates. Common examples of open source software offered by either a software company or hosting provider include Drupal, Linux, MindTouch and SugarCRM.
• Platform-as-a-Service (PaaS) – service offerings where the hardware and operating system have been abstracted. These services aren't free and open source, but they are often powered by open source frameworks, such as JBoss Network by Red Hat, SpringSource by VMware and WSO2.
• Infrastructure-as-a-Service (IaaS) – the delivery of resources in an expandable way, allowing supply to dynamically meet user demand while removing the specifics of provisioning servers, software, data-center space or network equipment. The most common example is Amazon's Elastic Compute Cloud (EC2); this is commonly called a public cloud. The ability to pool internal resources to provide this infrastructure from behind the firewall is often referred to as a private cloud, and the ability for private and public clouds to interact is called a hybrid cloud. Open source examples include Eucalyptus, Ubuntu Enterprise Cloud (UEC), CloudStack and OpenStack.

II. Open Source Cloud Computing Infrastructure, or Open Source IaaS

The type of cloud we will discuss is IaaS, and specifically compute clouds. IaaS takes discrete resources and pools them in a way that allows the users of those resources to draw capacity as needed. The usual limiting factors of individual servers (compute, storage and networking) are drawn from a resource pool that at least meets, if not exceeds, demand. To discuss building, deploying and managing a private cloud with open source software, you need to look at the elements that make up the cloud fabric. These resources abstract the hardware and pool computing resources to create the cloud operating system.

A. Virtualization – "The Hypervisor"

The foundation for cloud computing starts with the abstraction of the hardware. The availability of open source hypervisors has made it possible to build extremely complex and customized virtualization infrastructure. Of the open source hypervisor technologies, two lend themselves to building cloud computing infrastructure:
• Kernel-based Virtual Machine (KVM) – KVM is a virtualization technology for Linux (first released in 2006) that leverages the Linux kernel and the virtualization extensions provided by Intel or AMD to provide a hypervisor. The requirement for virtualization-enabled chips precludes its use on older hardware.
• Xen – Xen is a more mature hypervisor (released in 2002), sponsored by Citrix, and is the most common and widely adopted hypervisor.

B. Cloud Operating System

The ability to orchestrate virtual machines, storage and networking so as to pool resources and balance these loads.
• CloudStack – developed by Cloud.com and released in May 2010; its features include support for multiple hypervisors.
• Eucalyptus (Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems) – sponsored by Eucalyptus Systems (first released in 2008), it formed the initial basis for Ubuntu Enterprise Cloud and was designed to keep complete compatibility with Amazon's EC2 and S3 services.
• OpenNebula – first released in 2008, spawned from a university research project, and now supported by a services company, C12G Labs.
• OpenStack – the newest kid on the block, announced by Rackspace and NASA in July 2010 with a large ecosystem; its first release, Austin, arrived in fall 2010.
• Ubuntu Enterprise Cloud – a Canonical-sponsored fork of Eucalyptus that uses KVM as the preferred hypervisor.

C. The Management Tools

There are numerous open source tools that have been proven in legacy data centers. Many of them have characteristics that lend themselves to cloud computing because of their ability to automate tasks and/or because they are network aware. We'll walk through the components that can be combined to form open source tool chains: combinations of tools where the output of one becomes the input of the next.
• Provisioning and Patching
o Cobbler
o Kickstart
o OpenQRM
o Spacewalk
o Viper
• Configuration Management
o Cfengine
o Chef
o Puppet
• Orchestration – run scripts, take data from one system and export it to another
o AutomateIT
o Capistrano
o ControlTier
o Func
• Monitoring – report and alert on system health
o Nagios
o OpenNMS
o Zabbix
o Zenoss

D. Complementary Cloud-Related Open Source Projects
• DeltaCloud – middleware to stop and start cloud instances on various types of cloud infrastructure; the DeltaCloud Aggregator offers a web UI for the DeltaCloud API (emerging technology from Red Hat)
• libvirt – a toolkit for interacting with the virtualization capabilities of recent versions of Linux (emerging technology from Red Hat)
• jclouds – an abstraction of APIs across compute and storage clouds
• libcloud – a unified interface for the cloud, incubated by Apache

III. Putting It All Together to Build Your Open Source Cloud

The appeal of cloud computing is elasticity, on-demand, and agility. This part of the discussion will focus on how to combine the components mentioned above to build a cloud, and then deploy target operating systems and applications in a complete private cloud:
• Choosing a Hypervisor – Do you have newer hardware (specifically VT-enabled processors), Linux expertise, or a mix of older and newer hardware?
• Choosing a Cloud Orchestration Project – Choose a cloud operating system based on hypervisor support and other features.
• Management Tool Chains – Use open source tool chains to automate the deployment and configuration of target operating systems.

Finally, time permitting, we will look at a live cloud computing management console, showing the interface for monitoring.

Speaker Bio: Mark Hinkle, Vice President of Community

Mark is the Vice President of Community for Cloud.com, where he is responsible for driving all of the community efforts around the company's open source cloud computing software and ecosystem. Before that he was the force behind the Zenoss Core open source management project's adoption and community involvement, growing community membership to over 100,000 members.
He is a co-founder of both the Open Source Management Consortium and the Desktop Linux Consortium, has served as Editor-in-Chief of both LinuxWorld Magazine and Enterprise Open Source Magazine, and authored the book "Windows to Linux Business Desktop Migration" (Thomson, 2006). Mark has also held executive positions at a number of technology start-ups, including EarthLink (previously MindSpring), where he headed the technical support organization recognized by PC Computing and PC World as the best in the industry, as well as Win4Lin and Emu Software.
Deploying OpenStack is a non-trivial effort. This talk will outline how Chef is used to automate deploying OpenStack to your infrastructure, and then how Chef can be used to deploy to the virtual private servers running on that infrastructure.
Chef is an open source systems integration framework for automating the deployment of your entire infrastructure. OpenStack is a collection of open source technologies for delivering a massively scalable cloud operating system. Deploying OpenStack is a non-trivial effort; this talk will outline how Chef was used to automate deploying OpenStack Compute and Object Storage, and then how Chef can deploy to the virtual private servers running on that infrastructure. Founded by Rackspace Hosting and NASA, OpenStack has grown into a global software community of developers, technologists, researchers and corporations collaborating on a standard, massively scalable open source cloud operating system. Developed by Opscode and a vibrant open source community, Chef is being used to automate and deploy large (and small) infrastructures all over the world. Both projects are freely available under the Apache 2.0 license, so anyone can run them, build on them, or submit changes back to the projects. A number of companies collaborated on automating OpenStack deployments, including Rackspace, Opscode and Cloudscaling. The seeds for this collaborative project were sown at the OpenStack Design Summit in November 2010, where over 250 attendees from all over the world came together to plan future releases. Recognizing the need to make development and deployment of OpenStack easier, we started gathering requirements and documentation to automate the process. OpenStack is now deployable with Chef, and it is now a supported platform for automatically deploying cloud instances with Chef.
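As an illustrative sketch of what such automation looks like in Chef's Ruby recipe DSL (this is not the actual cookbook code; the package, template and attribute names below are hypothetical):

```ruby
# Hypothetical recipe: install, configure and start an OpenStack compute service.
package "nova-compute" do
  action :install
end

template "/etc/nova/nova.conf" do
  source "nova.conf.erb"   # an ERB template shipped inside the cookbook
  variables(:rabbit_host => node["openstack"]["rabbit_host"])
  # Restart the service whenever the rendered config changes.
  notifies :restart, "service[nova-compute]"
end

service "nova-compute" do
  action [:enable, :start]
end
```

Because Chef recipes are idempotent resource declarations, re-running the same recipe on an already-configured node converges it rather than reinstalling from scratch, which is what makes this style of deployment repeatable across a whole cluster.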
This session will be a comprehensive overview of the new Asterisk SCF project
Asterisk SCF (Scalable Communications Framework) will be delivered as a system of distributed components that can be deployed in clusters on a single system or on many systems, transparently. The Asterisk SCF platform will support, as a part of its basic architecture, the full range of real-time IP communications, including video, multi-channel wideband and ultra-wideband audio, chat, desktop sharing and other media types that may arise in the future.
Asterisk SCF is not a replacement for Asterisk, the world’s most widely used open source voice communications platform. Digium and the Asterisk community are committed to the continued development and support of Asterisk, the telecommunications software.
Asterisk SCF is currently in the early stages of development, but please join us for a discussion of this exciting new project and an overview of the solution, its capabilities and the wide array of opportunities that it creates for enterprises, carriers and application developers.
A use case for adopting a static analysis tool as part of the development process
Software authors have a toolkit of utilities that make the development process more manageable. Version control systems, bug trackers, compilers and debuggers are a well-established baseline of must-haves. New to the list are automated code-testing tools that use techniques such as static analysis, which examines source code for flaws specific to the programming language used. The technique can identify a wide variety of coding errors with minimal human effort; of course, developer effort is still required to triage the identified issues and write patches for the problems. This talk will use Samba as an example. Samba started using Coverity Static Analysis as part of the Coverity Scan effort sponsored by the Department of Homeland Security. The presentation will include technical details about what a static analysis tool can show you, and will track the resulting changes in the Samba codebase over time.
Membase: the open source simple, fast, elastic NoSQL database for interactive web applications
The kinds of apps we build have evolved: mobile apps, Facebook apps. Responses are needed in milliseconds. Techniques for storing and retrieving that data are starting to evolve too, and the category even has a name: NoSQL. Which one should you choose, though? Your site really runs on memcached, occasionally accessing a SQL database. You need SQL for some types of data access, or you fear the effort involved in breaking free from legacy mapping code. Other types of data access could be serviced by something like memcached, but you would need the same speed; it would need to be compatible with current production applications; and your application data would have to survive the seemingly hostile environment of your cloud computing provider. You want to know that it will never make your application wait for data, and you need to know that it has been deployed for something other than batch-based workloads. Membase is a simple, fast, elastic key-value database. Building upon the memcached engine interface, it is "memcapable," meaning it is completely compatible with existing memcached clients and applications. The Membase project adds persistence, replication of data, extensive statistics on data use, and even streaming data for iterating over every item in the store. The founding sponsors of Membase (Membase, Inc., Zynga and NHN) launched the project at membase.org under an Apache 2.0 license. Learn how to get it, hear about the deployments behind some of the largest sites, and find out how you can get involved in the project.
Scripting Basics
What Makes Python Special?
  Magic - shebang
  Where is Python?
  Comments
  Execution
Debugging
  Python -d
Python Interactive Shell
  IPython
  help()
  dir()
  pdb
Objects and Methods
  Bools, Strs, Ints, Lists, Tuples, Dicts
  What Are Iterators?
Functions Also Objects
  Code Segregation Using a main() Function
Classes Also Objects
  Writing a Class
  Instantiating a Class
Modules Also Objects
  Importing Modules
  Installing Modules
Some Cool Modules
  sys, os, re, time, socket, httplib, subprocess, json
Other Resources
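A short, self-contained script can tie several of these topics together: the shebang line, module imports, a class, and code segregation with a main() function. The names in this sketch are illustrative, not taken from the tutorial materials:

```python
#!/usr/bin/env python
# The "magic" shebang line above lets the script run directly on Linux.
import os
import sys


class Greeter(object):
    """A minimal class; classes, like everything in Python, are objects."""

    def __init__(self, name):
        self.name = name  # instance attribute set at instantiation

    def greet(self):
        return "Hello, %s!" % self.name


def main():
    # Code segregation: keeping the logic in main() means the module
    # can be imported (e.g., from the interactive shell) without side effects.
    names = sys.argv[1:] or [os.environ.get("USER", "world")]  # a list
    for name in names:  # lists are iterable
        print(Greeter(name).greet())


if __name__ == "__main__":
    main()
```

Run it with no arguments to greet the current user, or pass names on the command line; importing the same file from IPython gives you the Greeter class to experiment with interactively.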
Most DevOps organizations have systems configuration management in place, but not many have begun to automate their network. In this session Edmunds will present how their DevOps organization has started to automate network configuration with an open source framework that exposes content routing and load balancer management to web applications.
Bash is good at running many commands at the same time. What if you expect a few to fail, and you don't know which? This talk presents and demonstrates how to focus on the few commands that didn't work, while the computer is still running all the others.
Bash is good at running many commands at the same time. What if you expect a few to fail, and you don't know which? This talk presents and demonstrates how to focus on the few commands that didn't work, while Bash is still running all the others. A loop can launch many processes, but having too many processes at once can be bad for your system, so using xargs with the -P option lets you limit how many processes are created. Since xargs itself doesn't tell you which commands failed, or help you communicate with stalled processes, it is often useful to include file redirection or automatically created terminal windows.
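A minimal sketch of the file-redirection idea (the command list and file name here are illustrative, not from the talk): xargs -P runs commands in parallel while each command records itself in a log only if it fails, so afterward you can look at just the failures.

```shell
#!/bin/sh
# Run up to 4 commands at once; append each failing command to failed.txt.
: > failed.txt   # truncate the failure log before starting
printf '%s\n' "true" "false" "false" "true" |
  xargs -P 4 -I{} sh -c '{} || echo "FAILED: {}" >> failed.txt'
# failed.txt now lists only the commands that did not work:
cat failed.txt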