Learning Chef

By Rishabh Sharma, Mitesh Soni

About this book

Chef automation helps transform infrastructure into simple code. This means that building, rebuilding, configuring, and scaling infrastructure to meet your customers' needs is possible in just a few minutes in a real-time environment.

This book begins with the conceptual architecture of Chef, walking you through detailed descriptions of every Chef element. You will learn the procedure to set up your workstation and how to create a Cookbook in a hosted Chef environment.

Private Chef Server setup is covered in depth, with information on the necessity of on-premise Private Chef deployment, benefits, and installation and configuration procedures for the different types of Private Chef servers including standalone, tiered, and high-availability.

This book sheds light on industry best practices with practical Chef scenarios and examples.

Publication date:
March 2015


Chapter 1. An Overview of Automation and Advent of Chef


"The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency."

 --Bill Gates

Before moving on to the details of the different Chef components and other practical matters, it is recommended that you know the foundations of automation and some of the existing automation tools. This chapter will provide you with a conceptual understanding of automation, and a comparative analysis of Chef against existing automation tools.

In this chapter, we will cover the following topics:

  • An overview of automation

  • The need for automation

  • A brief introduction to Chef

  • The salient features of Chef

  • Automation with Chef

  • Existing automation tools and comparison with Chef



An overview of automation

Automation is the process of automating operations that control, regulate, and administrate machines, disparate systems, or software with little or no human intervention. In simple English, automation means automatic processing with little or no human involvement.

An automated system is expected to perform a function more reliably, efficiently, and accurately than a human operator, and at a lower cost. This is why automation is becoming more and more widespread across various service industries, as well as in the IT and software industry.

Automation basically helps a business in the following ways:

  • It helps to reduce the complexities of processes and sequential steps

  • It helps to reduce the possibilities of human error in repeatable tasks

  • It helps to consistently and predictably improve the performance of a system

  • It helps customers to focus on business rather than how to manage complexities of their system; hence, it increases the productivity and scope of innovation in a business

  • It improves robustness, agility of application deployment in different environments, and reduces the time to market an application

Automation has already helped to solve various engineering problems, such as information gathering and the automatic preparation of bills and reports; with its help, we get higher-quality products at a lower cost.

IT operations are very much dependent on automation. A high degree of automation in IT operations results in a reduced need for manual work, improved quality of service, and productivity.


Why automation is needed

Automation has been serving different types of industries such as agriculture, food and drink, and so on for many years, and its usage is well known; here, we will concentrate on automation related to the information technology (IT) service and software industry.

Escalation of innovation in information technology has created tremendous opportunities for growth in large organizations as well as small- and medium-sized businesses. IT automation is the process of automated integration and management of multifaceted compute resources, middleware, enterprise applications, and services based on workflow. Large organizations with gigantic profits can afford costly IT resources, manpower, and sophisticated management tools, while for small- and medium-scale organizations, this is not feasible. In addition, huge investments are at stake in all these resources, and most of the time their management is a manual process, which is prone to errors. Hence, automation can prove a boon to the IT industry, given how many repeatable and error-prone tasks it involves. Let's drill down into the reasons for the need for automation in more detail:

  • Agile methodology: An agile approach to developing an application results in frequent deployments. Multiple deployments in a short interval involve a lot of manual effort and repeatable activities.

  • Continuous delivery: A large number of application releases within a short span of time, due to the agile approach of business units or organizations, requires speedy and frequent deployment in a production environment. A delivery process involves development and operations teams that have different responsibilities for the proper delivery of the outcome.

  • Ineffective transition between development and production environments: In a traditional environment, the transition of the latest application build from development to production can take weeks. The steps involved are manual, and hence likely to create problems; the complete process is extremely inefficient and exhausting, with a lot of manual effort involved.

  • Inefficient communication and collaboration between teams: The priorities of development and IT operations teams differ across organizations. A development team is focused on the latest releases, considering new feature development, fixing existing bugs, or developing innovative concepts, while an operations team cares about the stability of the production environment. Often, the first deployment to a production-like environment takes place only when the development team completes its part. The operations team manages the deployment environment for the application independently, and there is hardly any interaction between the two teams. More often than not, ineffective or virtually no collaboration and communication between the teams causes problems in the transition of the application package from the deployment environment to the production environment, because of the different roles and responsibilities of the respective teams.

  • Cloud computing: The emergence of cloud computing in the last decade has changed the perspective of business stakeholders. Organizations are attempting to develop and deploy cloud-based applications to keep pace with current market and technology trends. Cloud computing helps to manage a complex IT infrastructure that includes physical, consolidated, virtualized, and cloud resources, and it helps to manage the constant pressure to reduce costs. Infrastructure as code is an innovative concept that models infrastructure as code, pooling resources in an abstract manner with seamless operations to provision and deprovision infrastructure in the flexible environment of the cloud. Hence, we can consider the infrastructure redeployable using configuration management tools. Such unimaginable agility in resources has provided the best platform to develop innovative applications with an agile methodology, rather than the slow and linear waterfall of the Software Development Life Cycle (SDLC) model.

Automation brings the following benefits to the IT industry by addressing the preceding concerns:

  • Agility: It provides promptness and agility to your IT infrastructure. Productivity and flexibility are the significant advantages of automation, which help us to compete in the current agile economic conditions.

  • Scalability: Using automation, we can manage the complications of the infrastructure and leverage the scalability of resources in order to fulfill our customers' demands. It helps to transform infrastructure into simple code, which means that building, rebuilding, configuring, and scaling the infrastructure is possible in just a few minutes, according to the needs of customers, in a real-world environment.

  • Efficiency and consistency: It can handle all the repeated tasks very easily, so that you can concentrate on innovative business. It increases the agility and efficiency of managing a deployment environment and application deployment itself.

  • Effective management of resources: It helps to maintain a consistent model of the infrastructure. It provides a code-based design framework, which leads to a flexible and manageable way of knowing all the fundamentals of a complex network.

  • Deployment accuracy: Application development and delivery is a multifaceted, cumbersome, repetitive, and time-bound endeavor. Automation makes a deployment environment testable, enforces the discipline of accurately scripting the changes to be made to an environment, and allows those changes to be repeated very quickly.

We have covered DevOps-related aspects in the previous section, where we discussed the need for automation and its benefits. Let's understand it in a more precise manner. Recently, the DevOps culture has become very popular. DevOps-based application development can handle quick changes, frequent releases, bug fixes, and continuous delivery-related issues across the entire SDLC process. In simple English, we can say that DevOps is a blend of the tasks undertaken by the development and operations teams to make application delivery faster and more effective. Development activities (which include coding, testing, continuous integration of applications, and version releases) and various IT operations (which include change, incident, and problem management, escalation, and monitoring) can work together in a highly collaborative environment. This means that there must be strong collaboration, integration, and communication between software developers and the IT operations team.

The following figure shows you the applied view of DevOps, and how a development and an operations team collaborate with each other with the help of different types of tools. For different kinds of operations such as configuration management and deployment, both Chef and Puppet are being used. DevOps also shows how cloud management tools such as Dell Cloud Manager, formerly known as Enstratius, RightScale, and Scalr can be used to manage cloud resources for development and operations activities:

DevOps is not a technology or a product, but it is a combination of culture, people, process, and technology. Everyone who is involved in the software development process, including managers, works together and collaboratively on all the aspects of a project. DevOps represents an important opportunity for organizations to stay ahead of their competition by building better applications and services, thus opening the door for increased revenue and improved customer experiences. DevOps is the solution for the problems that arise from the interdependence of IT operations and software development.

There are various benefits of DevOps:

  • DevOps targets application delivery, new feature development, bug fixing, testing, and maintenance of new releases

  • It provides stable operating environments similar to the actual deployment environment, and hence results in fewer errors or unknown scenarios

  • It supports an effective application release management process by providing better control over the distributed development efforts, and by regulating development and deployment environments

  • It provides continuous delivery of applications and hence provides faster solutions to problems

  • It provides faster development and delivery cycles, which help us to increase our response to customer feedback in a timely manner and enhance customer experience and loyalty

  • It improves efficiency, security, reliability, predictability of outcome, and faster development and deployment cycles

In the following figure, we can see all the necessities that are based on the development of DevOps. In order to serve most of the necessities of DevOps, we need a tool for configuration management, such as Chef:

In order to support the DevOps-based application development and delivery approach, infrastructure automation is mandatory, considering the extreme need for agility. The entire infrastructure and platform layer should be configurable in the form of code or scripts. These scripts install operating systems, install and configure servers on different instances or virtual machines, and install and configure the required software and services on particular machines.

Hence, it is an opportune time for organizations that need to deliver innovative business value in terms of services or offerings in the form of a working outcome: deployment-ready applications.

With an automation script, the same configuration can be applied to a single server or to thousands of identical servers simultaneously. It can thereby handle error-prone manual tasks more efficiently, without any intervention, and manage horizontal scalability easily.
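As a plain-Ruby sketch (not actual Chef code; the hash keys and the `converge` helper here are invented purely for illustration), the desired-state idea behind such scripts can be pictured like this:

```ruby
# Plain-Ruby sketch of the desired-state model used by configuration
# management tools: declare WHAT the state should be once, then converge
# every server to it. Convergence is idempotent -- applying it a second
# time changes nothing.
desired_state = { "ntp" => "installed", "nginx" => "running" }

def converge(server, desired_state)
  desired_state.each do |resource, state|
    next if server[resource] == state # already compliant: do nothing
    server[resource] = state          # otherwise apply the change
  end
  server
end

# The same declaration converges one server or a thousand identical ones.
servers = Array.new(3) { { "ntp" => "absent", "nginx" => "stopped" } }
servers.each { |s| converge(s, desired_state) }
```

The point of the sketch is that the declaration is written once and applied uniformly, which is what makes horizontal scaling a non-event for the operator.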

In the past few years, several open source and commercial tools have emerged for infrastructure automation, of which Bcfg2, Cobbler, CFEngine, Puppet, and Chef are the most popular. These automation tools can be used to manage all types of infrastructure environments, such as physical or virtual machines, or clouds. Our objective is to understand Chef in detail, and hence we will look at an overview of the Chef tool in the next section.


Introduction to Chef

Chef is an open source configuration management tool developed by Opscode in 2008; its first edition was launched in January 2009. Opscode is run by individuals from the data center teams of Amazon and Microsoft. Chef supports a variety of operating systems; it typically runs on Linux, but supports Windows 7 and Windows Server too. Chef is written in Ruby and Erlang.

The Chef server, workstation, and nodes are the three major components of Chef. The Chef server stores the data needed to configure and manage nodes effectively. A Chef workstation works as a local Chef repository, and Knife is installed on the workstation. Knife is used to upload cookbooks to the Chef server. A cookbook is a collection of recipes, and recipes execute the actions that are meant to be automated. A node communicates with the Chef server, gets the configuration data related to it, and executes it to install packages or perform any other configuration management operations.

Most of the outages that impact the core services of business organizations are caused by human errors during configuration changes and release management. Chef helps software developers and engineers to manage server and application configurations, and to provision hardware or virtual resources, by writing code rather than running commands manually. Hence, it is possible to apply the best practices of coding and design patterns to automate infrastructure. Chef was developed to handle the most critical infrastructure challenges of the current scenario; it makes the deployment of servers and applications to any physical, virtual, or cloud instance easy. Chef transforms infrastructure into code.
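To give a flavor of what "infrastructure as code" looks like in practice, a minimal Chef recipe might read as follows (the package and service names are illustrative examples, not taken from a specific deployment):

```ruby
# A hypothetical recipe: converge a node into a running web server.
package 'httpd' do
  action :install
end

service 'httpd' do
  action [:enable, :start]
end
```

Each resource declares a desired state rather than a command to run, which is what makes the same recipe safe to apply repeatedly to the same node.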

Considering virtual machines in a cloud environment, we can easily visualize the possibility of keeping versions of infrastructure and its configurations and creating infrastructure repeatedly and proficiently. Additionally, Chef also supports system administration, network management, and continuous delivery of an application.

Why Chef is a preferred tool

Currently, IT operations and processes are very much based on virtual systems and cloud deployments, which have increased the complexity and the number of systems managed. In order to manage these types of systems and environments, we need highly consistent, reliable, and secure automated processes. However, many existing configuration management tools are not sufficient in the current environment; they are actually adding complexity to an already complicated problem.

For these kinds of special scenarios, we need a tool that has built-in functionalities and doesn't require a dedicated team of developers to maintain it. We need a complete automated solution that must be easy to learn and can be used by developers easily. Chef is aligned in this direction. Chef is one of the most popular configuration management tools used by DevOps engineers across the world. To support this argument, let's examine the salient features of Chef in the next section.

The salient features of Chef

Based on comparative analysis with Chef's competitors, the following are the salient features of Chef, which make it an outstanding and the most popular choice among developers in the current IT infrastructure automation scenario:

  • Chef has different flavors of automated solutions for current IT operations such as Open Source Chef, Hosted Chef, and Private Chef.

  • Chef provides highly scalable, secure, and fault-tolerant automation capabilities for your infrastructure.

  • Every flavor has a specific solution to handle different kinds of infrastructure needs. For example, the Open Source Chef server is freely available for all, but supports limited features, while the Hosted Chef server is managed by Opscode as a service with subscription fees for standard and premium support. The Private Chef server provides an on-premise automated solution with a subscription price and licensing plans.

  • Chef has given us flexibility. According to the current industry use cases, we can choose among Open Source, Hosted, and Private Chef server as per our requirement.

  • Chef has the facility to integrate with third-party tools such as Test Kitchen, Vagrant, and Foodcritic. These integrations help developers to test Chef scripts and perform a proof of concept (POC) before deploying actual automation. These tools are very useful for learning and testing Chef scripting.

  • Chef has a very strong community. The website https://www.chef.io/ can help you get started with Chef. Opscode has hosted numerous webinars, publishes training material, and makes it very easy for developers to contribute new patches and releases.

  • Chef can quickly handle all types of traditional dependencies and manual processes of the entire network.

  • Chef has a strong dependency management approach, which means that only the order of resources matters: all dependencies will be met as long as resources are specified in the proper order.

  • Chef is well suited for cloud instances, and it is the first choice of developers who are associated with cloud infrastructure automation. Therefore, demand for Chef automation is growing exponentially. Within a short span of time, Chef has acquired a good market reputation and reliability.
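The order-based dependency handling listed above can be illustrated with a small hypothetical recipe fragment (the resource and file names are invented for illustration): because resources converge in the order they are written, listing the package before the configuration file, and the configuration file before the service, is all the dependency declaration that is needed:

```ruby
package 'nginx'                      # 1. install the software first

template '/etc/nginx/nginx.conf' do  # 2. then lay down its configuration
  source 'nginx.conf.erb'
  notifies :reload, 'service[nginx]' # ask the service to pick up changes
end

service 'nginx' do                   # 3. finally enable and start it
  action [:enable, :start]
end
```

Reordering these resources would break the run, which is exactly the "sequence matters" contract described above.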

In the following figure, we can see the key features of Chef automation, which make it the most popular choice of developers in the current industry scenario:

Automation with Chef

Chef has been around since 2009. As discussed, it was very much influenced by Puppet and CFEngine. Chef supports multiple platforms, including Ubuntu, Debian, RHEL/CentOS, Fedora, Mac OS X, Windows 7, and Windows Server. Chef works on a master/agent architecture with a pull mechanism.

In a short span of time, Chef has become quite popular in all types of industries, including mid-size and giant industries. There are various success stories of effective utilization of Chef by customers. Chef has proven its capabilities in all types of Chef solutions, including Open Source, Private, and Hosted versions of Chef server.

Chef is considered easy to use, and it is very much user- and developer-friendly. Everything in Chef is written in Ruby, following a particular model that developers are used to working with. There is rapid growth in the Chef community, and more open source developers are contributing to enhance its functionality.

In order to practice, learn, or perform a proof of concept (POC) with a Chef script, Chef allows great flexibility in the use of third-party tools, and allows experimentation via Test Kitchen, Vagrant, and Foodcritic. These third-party tools integrate with Chef and help users to test scripts without actually running them on a Chef server. These tools are therefore very useful for testing the accuracy of Chef scripts, and the Chef community fully supports this integration.

In the next section, we will go through the details of existing automation scripts and tools. This will help us to compare Chef with other automation or configuration management tools later in the chapter.


Existing automation tools and comparison with Chef

In the following figure, we can see the evolution of various configuration automation tools over the past 20 years. This gives us a clear picture of all the existing automation tools:

The previous figure describes the order of evolution of different automation tools in the IT industry. In the current scenario, most of these tools are not that popular, and some see very little practical use.

There are various automation tools for different types of industries. Here, we will concentrate on automation related to IT and software processes, which has existed and been of benefit to customers for a long time.

Various already existing configuration management systems such as CFEngine and Puppet determine the current state of the system, and then compile a list of configurations and services that need to be applied to generate a desired state.

We will consider some of the existing automation frameworks, including some traditional and advanced methods of automation, to get a clear idea of the existing automation techniques in the software industry and IT service processes.


InstallShield

InstallShield is a software tool to create setups and software packages. It was developed by Stirling Technologies in 1992.

Features of InstallShield

  • This is basically used to install software in Microsoft Windows and Windows Server. InstallShield is a complete development solution for Windows installation.

  • It is designed to make development teams more manageable, accurate, and flexible with regard to collaboration, while building optimal InstallScript and Windows Installer (MSI) installations for web, server, desktop, and mobile applications.

  • It is a unique software installer that builds Microsoft's App-V virtual packages.

  • It simplifies multitiered installations and automates the installation of Windows roles and features.

  • It runs PowerShell scripts, while setting up installation packages.


AutoIt

AutoIt is an automation and scripting language specially designed for Microsoft Windows, first released in 1999.

Features of AutoIt

  • AutoIt v3 is a freeware, BASIC-like scripting language designed to automate the Windows GUI and general scripting. AutoIt v3 manipulates window processes and interacts with all the standard window controls. It is compatible with Windows 2000, XP, Vista, Server 2003, Server 2008, and Server 2008 R2.

  • It uses a combination of simulated keystrokes, mouse movements, and window/control manipulations in order to automate tasks in a way that is not possible or reliable with other languages (for example, VBScript and SendKeys).

  • One of the biggest advantages of an AutoIt automation script is that it can be compiled into a compressed, standalone executable. Therefore, an AutoIt script can be executed on computers where the AutoIt interpreter is not installed.

Windows PowerShell scripting

Windows PowerShell is an automation framework developed by Microsoft in 2006. It is built on the .NET framework.

Features of PowerShell

  • Windows PowerShell is one of the most popular automation tools on Windows Server. It is an extendable command shell, and also a scripting language that can be used to supervise server environments such as Windows Server, Microsoft Exchange Server, and SharePoint 2010. With the help of PowerShell, we can automate a lot of Windows admin tasks and ensure their effective execution.

  • PowerShell is a replacement shell for the Microsoft Windows operating system, which brings advanced scripting to Windows. Initially, Windows PowerShell was bundled as a distinct add-on to Windows, marketed mainly to server administrators.

  • Using PowerShell scripting, we can complete repetitive and complex processes in less time and in an easy manner. We can do this by chaining several commands together and automating tasks such as deployment, which reduces the risk of human error.


CFEngine

CFEngine was developed by Mark Burgess in 1993. It is one of the oldest automation scripting frameworks.

Features of CFEngine

  • CFEngine manages the complete life cycle of an IT infrastructure. It has a powerful language and tools to define the desired state of your infrastructure, irrespective of whether it is a single server or a complex, global network with thousands of servers, and storage and network devices.

  • It comprises a powerful agent technology, which ensures that the state of processes is continuously maintained. It basically works on three principles: define, automate, and verify.

  • CFEngine automates software distribution, change management, configuration copying, inventory/asset management, and job initiation, tracking, and execution. It also automates network provisioning, remote configuration, resource initialization, resource shutdown, service activation, fault management, accounting, and the allocation management of resources.

  • The prominent features of CFEngine are the security and compliance of mission-critical applications and services. It is built upon well-established theory and high-quality engineering practices, and has an outstanding security record over the past 19 years. Now, we are going to see some advanced automation tools.


Puppet

Puppet is a well-known automation framework developed by Puppet Labs in 2005.

The features of Puppet are as follows:

  • A Puppet framework has different components, such as MCollective, Puppet Dashboard, PuppetDB, Hiera, and Facter.

  • A Puppet framework is used to provide continuous automation and orchestration. It has solved many real-time integration challenges with different types of server deployment.

  • With the help of Puppet, we can easily automate repetitive tasks, quickly adapt to changes, and scale up servers on demand. The Puppet framework is also well suited to cloud deployment.

  • Puppet uses a declarative model-based approach for IT automation. It has four major stages: define, simulate, enforce, and report. The Puppet community supports reusable configuration modules. It has more than 1,000 prebuilt and freely downloadable configuration modules.

  • If we have a specific requirement, we can use Puppet's configuration language to build our own custom module. After defining a custom module, we can reuse it for any type of environment, whether physical, virtual, or cloud.


Bcfg2

Bcfg2 is a configuration management tool developed by Narayan Desai at Argonne National Laboratory. Its first release was launched in 2008, and after several more releases, the latest stable version was launched in July 2013.

The features of Bcfg2 are as follows:

  • Bcfg2 has been developed in such a way that it provides full support for, and a clear understanding of, both the specification and the current state of each client.

  • It is designed in a way that it appropriately deals with manual system modifications.

  • If we talk about generations of configuration management tools, it belongs to the fifth generation, and was developed in the Mathematics and Computer Science division of Argonne National Laboratory.

  • Bcfg2 enables system administrators to produce a consistent, reproducible, and representable description of their environment. It also offers visualization and reporting tools to support day-to-day administrative tasks.


Cobbler

Cobbler is a Linux-based installation server that speeds up the setup of network installation environments. It was developed in 2008 by open source community members. Initially, it supported Red Hat Linux, but it was later made part of the Fedora project. Since January 2011, Cobbler has been packaged with Ubuntu.

The features of Cobbler are as follows:

  • Cobbler is a Linux installation server designed for the fast setup of network installation environments. It lets you connect and automate many Linux tasks together, so that you do not have to jump between commands and applications while configuring new systems or, sometimes, changing existing ones.

  • Cobbler's easy and simple commands help in designing system configurations. Network installs can be configured for reinstallations, media-based net-installs, PXE, and virtualized installs (it also supports Xen, KVM, and some variants of VMware).

  • As an option, Cobbler can also help to manage DNS, DHCP, and yum package mirroring infrastructure. In this respect, it is a more general automation application rather than an application that just deals with installations.

  • After the initial setup, newly registered users can follow the documented setup commands (cobbler check and cobbler import), which provide a pretty good approach to initialization.

  • Cobbler offers many features, such as reduced memory consumption and a built-in configuration management system, and it integrates with systems such as Pallet.

  • Cobbler has a web interface along with a command-line interface and several API access options. New users can start with the web application after performing the initial setup steps on the command line (cobbler check and cobbler import). This will give the user a suitable idea of all the available properties. All the advanced features need not be understood at the same time; they can be learned over time as the need for them arises.

  • Cobbler consumes little memory, as it is a very small application of about 15,000 lines of Python code. It works well for installations at all levels, and is functional and valuable in all enterprises, as it has numerous properties and features. It gives you the flexibility to complete work in a short span of time, and it saves the time spent on repeated manual tasks.


Sprinkle

Sprinkle is a tool that can be run locally and maintained easily. It was developed by Marcus Crafter in 2009, and is also an open source product.

The features of Sprinkle are as follows:

  • Sprinkle is an easy-to-maintain tool that can be managed and run as a standalone tool.

  • Sprinkle is a software provisioning tool you can use to build remote servers; for example, you can use Sprinkle to install Rails on a server directly after it is created.

  • It has a nice collection of installers that let you install applications from various sources.

  • It is best suited to small infrastructures. Sprinkle is based on Capistrano, and follows the same push model without any additional infrastructure.

  • From APT packages to ad hoc commands, Sprinkle can make life easier for those who write deployment scripts.


cdist is a reusable configuration management system developed by Nico Schottelius and Steven Armstrong in 2010.

The features of cdist are as follows:

  • cdist is a reusable configuration management system designed to scale from small setups to enterprise-grade environments.

  • cdist configuration is written as shell scripts, which Unix system engineers have used for decades.

  • cdist places very few requirements on the target system, because shell scripts are used in all cases and their dependencies are usually already fulfilled.

  • cdist does not require an agent or a high-level programming language on the target host; it will run on any host that has a running SSH server and a POSIX-compatible shell (/bin/sh). Compared to other configuration management systems, it does not require you to open any additional ports.

  • From a security point of view, only one machine needs access to the target hosts. No target host will ever need to connect back to the source host, which contains the full configuration.
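The shell-based approach above can be sketched as a cdist initial manifest. __package and __file are standard cdist types, but treat the exact parameters and paths here as illustrative rather than definitive:

```shell
# conf/manifest/init -- a sketch of a cdist initial manifest.
# Each line calls a cdist "type"; types are themselves implemented as
# shell scripts, so the target host only needs sshd and /bin/sh.

# Ensure the OpenSSH server package is present on every target.
__package openssh-server --state present

# Manage a file on the target (the source path is hypothetical).
__file /etc/motd --mode 0644 --source "$__files/motd"
```

Because the manifest is just shell, the same decades-old Unix skills apply; cdist translates these type calls into code it pushes to the target over SSH.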


Pallet is also an open source project developed by Hugo Duncan and Antoni Batchelli in 2010.

The features of Pallet are as follows:

  • Pallet is an automation tool designed especially for cloud environments, although it can also work with traditional servers.

  • It is written in Clojure, a JVM implementation of the classic Lisp.

  • Pallet is not just a tool for system administrators; it is built for developers as well.

  • It is more a library than a server, so it can be embedded in and used with other applications.

  • In a DevOps world, this means that infrastructure as code starts with the development team and trickles down to operations.


Rex is a pure open source project developed in 2010 and managed by the www.rexify.org community.

The features of Rex are as follows:

  • Rex is a small and lightweight framework. It is basically a server orchestration tool that does not need an agent on the hosts you want to manage as it uses SSH. Therefore, no agent installation is required for nodes.

  • It integrates well without conflicts, and it is easy to learn for anyone familiar with Perl scripting.

  • Rex fully supports DevOps-based deployment.

  • Apart from open source support, Rex also provides commercial support for all related services.


Glu is a free, open source deployment and monitoring automation platform, which was also developed by open source community members in 2010.

The features of Glu are as follows:

  • It deploys and monitors applications efficiently.

  • It is secure to use and provides reproducibility of infrastructure.

  • A Glu script provides a set of instructions describing how to deploy and run an application. Glu processes arbitrarily large sets of nodes efficiently and securely with minimal manual effort.


RunDeck is open source software, developed by many open source contributors in 2010.

The features of RunDeck are as follows:

  • RunDeck turns operations, tools, and processes into standard operating procedures that anyone in the organization can view and run.

  • It has an open, pluggable design, which makes RunDeck easy to integrate and extend.

  • RunDeck creates workflows that can connect various scripts, tools, and other operations systems easily and safely. It provides built-in standard authorization, logging, and notifications, which make it easy and safe to give others self-service usage of RunDeck's API or WebGUI.

  • RunDeck also provides a Graphical User Interface (GUI) for monitoring on-demand and scheduled operations tasks.

  • It executes actions on nodes over SSH, WinRM, or any other transport.

  • It has workflow options that can be pulled from tools such as build servers, package repos, ticketing systems, or anything with an Application Programming Interface (API).


Crowbar is an open source deployment tool developed by Dell in 2011. It was initially developed to support Dell's OpenStack- and Hadoop-powered solutions.

The features of Crowbar are as follows:

  • Crowbar enables you to provision a server from BIOS, via Chef, up to higher-level server states.

  • Crowbar can be extended via plugins called barclamps. So far, there are barclamps available for provisioning on various platforms such as Cloud Foundry, Zenoss, Hadoop, and more.

  • With Dell Crowbar, we can discover and configure hardware (BIOS and RAID), and deploy as well as configure operating systems and applications. You can do all of this repeatedly in a fraction of the usual time required.


Fabric is developed and managed by open source community members, mainly Christian Vest Hansen and Jeffrey E. Forcier, starting in 2011.

The features of Fabric are as follows:

  • It works as a command-line tool to streamline the use of SSH.

  • It is an advanced tool that allows you to orchestrate various configuration operations.

  • It is used for application deployment and systems administration tasks. It is an appropriate solution for uploading and downloading files, and it offers auxiliary functionality such as prompting the current user for input or aborting execution.

  • Fabric is used to write and execute Python's function code or tasks to automate communications with remote servers. It works on clusters of machines and allows you to deploy applications. It starts/stops services on a cluster of machines.

  • Fabric is a library written in the Python scripting language (version 2.5 or higher).

  • It provides a clean suite of operations for executing local or remote shell commands, either normally or through sudo.
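The points above can be sketched as a minimal fabfile. This assumes the classic Fabric 1.x API (fabric.api), and the hostnames are hypothetical:

```python
# fabfile.py -- a minimal sketch, assuming Fabric 1.x (fabric.api).
from fabric.api import env, run, sudo

# Hypothetical hosts that Fabric will reach over SSH.
env.hosts = ['web1.example.com', 'web2.example.com']

def uptime():
    """Run a normal remote command on every host: fab uptime"""
    run('uptime')

def restart_web():
    """Run a remote command through sudo: fab restart_web"""
    sudo('service nginx restart')
```

Each Python function becomes a task invocable from the command line with fab, with Fabric handling the SSH session management for you.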


Ansible is a configuration management tool, and also a deployment and ad hoc task execution tool. It is open source software, developed by many open source contributors in 2012.

The features of Ansible are as follows:

  • Ansible is an open source automation tool that serves as a configuration management, deployment, and ad hoc task execution engine.

  • It works using SSH, which is very popular among Linux users and administrators. It does not need any daemons or software for remote machine management.

  • It is fast and simple to install as no configuration file, daemon, or database is needed.

  • Setup is very simple because no software needs to be installed on the remote machines. This means that Ansible can start managing a system at once, provided you have a clean image of your preferred OS running.

  • Ansible is designed in such a manner that it does not need anything more than a password or SSH key to start managing systems and it does so without installing any software agent. All these features make it quite useful when there are many nodes to be managed.
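The agentless model above can be sketched with a minimal playbook; the host group web and the inventory file name are hypothetical:

```yaml
# site.yml -- a minimal playbook sketch.
# Run with: ansible-playbook -i hosts site.yml
- hosts: web
  tasks:
    - name: Verify SSH reachability and Python on each node
      ping:

    - name: Show how long each node has been up
      command: uptime
```

The same module can also be run ad hoc, without a playbook, as ansible web -i hosts -m ping; in both cases Ansible needs nothing on the nodes beyond SSH access.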


SaltStack is a systems and configuration management software for any enterprise that follows a cloud or DevOps deployment. It was developed by Tom Hatch and Marc Chenn in 2012.

The features of SaltStack are as follows:

  • It is one of the most active and fastest growing open source communities in the world.

  • It delivers a completely different approach from legacy alternatives that were not built for the speed and scale of the cloud.

  • SaltStack's commendable achievement is that the largest IT enterprises and DevOps organizations in the world use it to orchestrate and control clouds and to automate the DevOps toolchain.

  • It provides heterogeneous support for the configuration of cloud, infrastructure, and any software platform.

  • SaltStack is mainly known for parallel management and real-time data automation. It is far less time consuming: remote execution takes place in seconds, not minutes or hours.
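As a sketch of this model, a master targets all minions in parallel from the command line, and desired state lives in small YAML files; the nginx example below is illustrative:

```yaml
# /srv/salt/nginx.sls -- a minimal Salt state sketch.
# Apply to all minions in parallel with: salt '*' state.sls nginx
# (salt '*' test.ping confirms connectivity in seconds.)
nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx
```

Because the master pushes these states to all minions at once, applying them to a thousand nodes takes roughly as long as applying them to one.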


Mina is a tool for fast server deployment and automation. It was developed by many open source contributors in 2012.

The features of Mina are as follows:

  • Mina has been designed to build and run scripts to manage your app deployments on servers via SSH

  • It is known for its speed, as it works as a Bash script generator, and it can be used on just about any type of project deployable via SSH, Ruby or not

  • It produces the entire procedure as a Bash script and runs it remotely on the server

  • It only creates one SSH session per deploy, minimizing the SSH connection overhead

  • It even provides safe deployment and locking


Juju is a service orchestration tool introduced by Canonical Ltd. in 2012. It was earlier named Ensemble.

The features of Juju are as follows:

  • It runs on public clouds, private clouds, and micro clouds.

  • Juju requires an Ubuntu developer workstation. It was developed to coexist with tools such as Puppet and Chef and to provide extra services; orchestration toolsets such as Juju take the process one step further by gluing all these applications together.

  • Juju provides you with a unique, easy, and straightforward way to extend deployments.

  • Unlike old, script-based approaches, it can grow and shrink on demand by adding various layers or substituting components on the fly.

Most advanced automation tools work on a master-agent architecture, which is similar to the traditional client-server architecture, while some work on a standalone architecture.

In the following table, we will compare some of the most in-demand advanced automation tools in the current IT industry:











| | Chef | Puppet | SaltStack | Ansible |
| --- | --- | --- | --- | --- |
| Stable release | Chef-client: 12.0.3 (16 December 2014); Chef-server: 12.0.1 (17 December 2014) | 3.7.1 (15 September 2014) | 2014.7.0 (3 November 2014) | 1.8.2 (5 December 2014) |
| Language | Ruby (client) and Ruby/Erlang (server) | — | — | — |
| Push/Pull mechanism | — | — | — | — |
| Selling feature | Industry leader | Industry leader | Speed and scale | Simple to install |
| Quality of documentation | Very good; free webinars for beginners; easy to understand | Good; free online training available | Evolving; comparatively not so good | Good and well structured |
| Cloud integration | Amazon EC2, Windows Azure, HP Cloud, Google Compute Engine, Joyent Cloud, Rackspace, VMware, IBM SmartCloud, OpenStack | AWS, VMware, Google Compute Engine | Amazon AWS, Rackspace, SoftLayer, GoGrid, HP Cloud, Google Compute Engine, VMware, Windows Azure, Parallels | AWS, VMware, OpenStack, CloudStack, Eucalyptus Cloud, KVM |
| Support services | Support tickets; premium, standard, and free support; IRC; mailing lists | Customer support portal; enterprise mailing list; open source community | Salt IRC chat; SaltStack mailing list; SaltStack user groups | FAQs; support requests; mail |
| Industry example | Facebook, LinkedIn, YouTube, Splunk, Rackspace, GE Capital, Digital Science, Bloomberg | Twitter, Verizon, VMware, Sony, Symantec, Red Hat, Salesforce, Motorola, PayPal | — | Apple, Juniper, Grainger, WeightWatchers, SaveMart, NASA |
To get more insight in the comparison, refer to the article, Review: Puppet vs. Chef vs. Ansible vs. Salt by Paul Venezia at InfoWorld's website, http://www.infoworld.com/article/2609482/data-center/review--puppet-vs--chef-vs--ansible-vs--salt.html.

Comparison with other popular tools

For better understanding, we will compare Chef with some of the most popular infrastructure automation tools, which are the competitors of Chef.

Chef versus Puppet

The following are the key differences between Puppet and Chef:

  • Puppet uses a custom, JSON-like declarative language (although a Ruby option is available), while in Chef you write in Ruby. Chef provides a Domain Specific Language (DSL) that extends Ruby with resources useful for managing hosts and their applications. A strength of this approach is that you can get simple things done without really knowing Ruby, which is why Chef is preferred by many software and web developers.

  • Puppet is still struggling with proper documentation and webinar sessions for learners and developers, while Chef is very thorough with its documentation, training materials, and recent updates and releases, and has hosted various webinar sessions. From a learning perspective, therefore, Chef is the preferred choice over Puppet.

  • It is true that Chef was heavily influenced by Puppet. Adam Jacob, who led the creation of Chef, was once a Puppet user himself, so Chef's developers clearly learned from Puppet in order to make Chef better and more flexible for users.

  • Chef provides many more cloud integration options. We can integrate through APIs with different types of cloud providers such as AWS, Rackspace, MS Azure, OpenStack, Eucalyptus, VMware ESXi, VMware vCenter, VMware vCloud, and vCloud Air. In Puppet, these kinds of integrations are not as well enabled, although Puppet does support cloud providers to some extent.

  • It has been observed that the Chef community provides very good support and responds quickly to user queries about installing, using, and debugging Chef scripts, whereas with Puppet we cannot expect the same promptness. Chef users are therefore generally more satisfied, and Chef has become the more popular choice among developers. Chef also offers a variety of deployment choices to match customer use cases: we can use private Chef, open source Chef, or hosted Chef, whereas Puppet does not provide this kind of option. Chef is therefore much more flexible in terms of usage.
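The DSL point above is easy to see in practice: a minimal Chef recipe is ordinary Ruby extended with declarative resources such as package and service. The following is only a sketch of what such a recipe looks like:

```ruby
# recipes/default.rb -- a minimal Chef recipe sketch.
# Plain Ruby, extended with Chef's resource DSL.

# Install nginx using the platform's package manager.
package 'nginx'

# Make sure the service starts at boot and is running now.
service 'nginx' do
  action [:enable, :start]
end
```

You can get this far without writing any real Ruby; the language only surfaces when you need loops, conditionals, or data manipulation.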

Chef versus CFEngine

The following are the key differences between Chef and CFEngine:

  • CFEngine is much older than Chef; it was developed by Mark Burgess in 1993 and is often called the grandfather of configuration automation tools.

  • CFEngine is written in C while Chef is written in Ruby. This is the major problem with CFEngine, because C is a low-level language and most communities do not support it for this kind of tooling. CFEngine takes much more effort to learn, while Chef can be learned easily by developers. Therefore, Chef is the preferred option nowadays for system administrators with limited coding experience.

  • CFEngine has been present in the market for the last 20 years. According to CFEngine's site, it currently manages more than 10 million nodes and has a good list of supported customers, but the current IT infrastructure scenario is heavily based on virtualization and cloud. Chef serves current industry automation needs perfectly and is the preferred choice of reputed organizations such as Amazon and Facebook.

  • One more problem with CFEngine is its documentation. As an open source community, it focuses less on documentation of the latest releases and on learning tutorials, while Chef has strong community support for proper documentation and provides up-to-date training material to all users. The popularity of Chef is therefore increasing day by day, and it has become one of the most popular open source automation tools.

Here, we gained a conceptual understanding of Chef and its effectiveness compared with other configuration automation competitors. Chef has different components that interact with one another during the execution and installation of scripts, which we are going to learn about in the next chapter.


Self-test questions

  1. What is automation and why is it necessary in the modern IT world?

  2. What is DevOps and what are its benefits?

  3. Chef is written using which languages?

  4. What does Chef actually provide? How does automation support DevOps?

  5. What is the dependency management feature of Chef, which makes it preferable in the current IT scenario?

  6. With respect to managing cloud instances, why is Chef more preferred over Puppet?

  7. CFEngine is much older than other modern automation scripting tools, but why is it not recommended by system administrators?

  8. What are the available third-party tools that can be integrated with Chef for testing purposes?



In this chapter, we gained a fundamental understanding of automation and the various ways in which it helps the IT industry. DevOps is very popular nowadays because it brings a highly collaborative environment to the entire software development process.

We got an overview of several traditional and advanced automation tools that have been used over the past 15 years, and a clear idea of why Chef is needed in the current IT automation landscape and why it is preferred.

We saw the evolution of various IT automation tools, compared some advanced automation tools, discussed how Chef compares with other popular tools, and noted the salient features of Chef.

We will take a deep dive into Chef's architecture and the different Chef components, and learn how to work with each of these components, in the next chapter.

About the Authors

  • Rishabh Sharma

    Rishabh Sharma is currently working as a chief technology officer (CTO) at JOB Forward, Singapore. Prior to working for JOB Forward, he worked for Wipro Technologies, Bangalore, as a solution delivery analyst. He was involved in research projects of cloud computing, proof of concepts (PoC), infrastructure automation, big data solutions, and various giant customer projects related to cloud infrastructure and application migration.

    In a short span of time, he has worked on various technologies and tools such as Java/J2EE, SAP(ABAP), AWS, OpenStack, DevOps, big data, and Hadoop. He has also authored many research papers in international journals and IEEE journals on a variety of issues related to cloud computing.

    He has authored five technical books to date and recently published two books with international publishers.

  • Mitesh Soni

    Mitesh Soni is a DevOps enthusiast. He has worked on projects for DevOps enablement using Microsoft Azure and Visual Studio Team Services. He also has experience of working with other DevOps-enabling tools, such as Jenkins, Chef, IBM UrbanCode Deploy, and Atlassian Bamboo.

    He is a CSM, SCJP, SCWCD, VCP, IBM Bluemix, and IBM Urbancode certified professional.

