
Tech Guides - DevOps

29 Articles

Top 7 DevOps tools in 2018

Vijin Boricha
25 Apr 2018
5 min read
DevOps is a methodology, or a philosophy. It's a way of reducing the friction between development and operations. But while we could talk about what DevOps is and isn't for decades (and people probably will), there is a range of DevOps tools that are integral to putting its principles into practice. So, while it's true that adopting a DevOps mindset will make the way you build software more efficient, it's pretty hard to put DevOps into practice without the right tools. Let's take a look at some of the best DevOps tools out there in 2018. You might not use all of them, but you're sure to find something useful in at least one of them - probably a combination of them.

DevOps tools that help put the DevOps mindset into practice

Docker

Docker is software that performs OS-level virtualization, also known as containerization. Docker uses containers to package up all the requirements and dependencies of an application, making it shippable to on-premises devices, data center VMs, or even the cloud. It was developed by Docker, Inc. back in 2013 with complete support for Linux and limited support for Windows. By 2016 Microsoft had already announced integration of Docker with Windows 10 and Windows Server 2016. As a result, Docker enables developers to easily pack, ship, and run any application as a lightweight, portable container, which can run virtually anywhere.

Jenkins

Jenkins is an open source continuous integration server written in Java. When it comes to integrating DevOps processes, continuous integration plays the most important part, and this is where Jenkins comes into the picture. It was released in 2011 to help developers integrate DevOps stages with a variety of built-in plugins. Jenkins is one of those prominent tools that helps developers find and solve code bugs quickly and also automates the testing of their builds.

Ansible

Ansible was developed by the Ansible community back in 2012 to automate network configuration, software provisioning, development environments, and application deployment. In a nutshell, it is responsible for delivering simple IT automation that puts a stop to repetitive tasks. This eventually helps DevOps teams focus on more strategic work. Ansible is completely agentless: it uses syntax written in YAML and follows a master-slave architecture.

Puppet

Puppet is an open source software configuration management tool written in C++ and Clojure. It was released back in 2005 and licensed under the GNU General Public License (GPL) until version 2.7.0; later versions are licensed under Apache License 2.0. Puppet is used to deploy, configure, and manage servers. It uses a master-slave architecture in which the master and slaves communicate over secure encrypted channels. Puppet runs on any platform that supports Ruby, for example CentOS, Windows Server, Oracle Enterprise Linux, and more.

Git

Git is a version control system that allows you to track changes to files, which in turn helps you coordinate with team members working on those files. Git was released in 2005, initially to support Linux kernel development. Its primary use case is source code management in software development. Git is a distributed version control system where every contributor can create a local repository by cloning the entire main repository. The main advantage of this system is that contributors can update their local repository without any interference to the main repository.
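To make that clone-and-work-locally flow a little more tangible, here is a small sketch using GitPython, a third-party Python wrapper around the git command line; the repository URL, branch name, and file paths are placeholders.

```python
# A small sketch of the distributed workflow described above using GitPython
# (pip install GitPython): clone the main repository, commit locally, and only
# touch the main repository when you explicitly push. URL and paths are placeholders.
from git import Repo

# Every contributor gets a full local copy of the main repository
repo = Repo.clone_from("https://example.com/team/project.git", "project")

# Work happens on a local branch; the main repository is untouched
feature = repo.create_head("feature/local-change")
feature.checkout()

with open("project/notes.txt", "w") as f:
    f.write("local change\n")

repo.index.add(["notes.txt"])          # stage the file (path relative to the repo root)
repo.index.commit("Add local notes")   # commit only to the local repository

# Nothing reaches the main repository until you push explicitly:
# repo.remote("origin").push("feature/local-change")
```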
Vagrant

Vagrant is an open source tool released in 2010 by HashiCorp, and it is used to build and maintain virtual environments. It provides a simple command-line interface to manage virtual machines with custom configurations so that DevOps team members have an identical development environment. While Vagrant is written in Ruby, it supports development in all major languages. It works seamlessly on Mac, Windows, and all popular Linux distributions. If you are considering building and configuring a portable, scalable, and lightweight environment, Vagrant is your solution.

Chef

Chef is a powerful configuration management tool used to transform infrastructure into code. It was released back in 2009 and is written in Ruby and Erlang. Chef uses a pure-Ruby domain-specific language (DSL) to write system configuration 'recipes', which are put together as cookbooks for easier management. Unlike Puppet's master-slave architecture, Chef uses a client-server architecture. Chef supports multiple cloud environments, which makes it easy to manage data centers and maintain highly available infrastructure.

Think carefully about the DevOps tools you use

To increase efficiency and productivity, the right tool is key. In a fast-paced world where DevOps engineers and their entire teams do all the extensive work, it is really hard to find the right tool that fits your environment perfectly. Your best bet is to choose your tool based on the methodology you are going to adopt. Before making a hard decision, it is worth taking a step back to analyze what would work best to increase your team's productivity and efficiency. The above tools have been shortlisted based on current market adoption. We hope you find a tool in this list that eventually saves you a lot of time in choosing the right one.

Learning resources

Here is a small selection of books and videos from our DevOps portfolio to help you and your team master the DevOps tools that fit your requirements:

Mastering Docker (Second Edition)
Mastering DevOps [Video]
Mastering Docker [Video]
Ansible 2 for Beginners [Video]
Learning Continuous Integration with Jenkins (Second Edition)
Mastering Ansible (Second Edition)
Puppet 5 Beginner's Guide (Third Edition)
Effective DevOps with AWS


AWS Fargate makes Container infrastructure management a piece of cake

Savia Lobo
17 Apr 2018
3 min read
Containers such as Docker, FreeBSD jails, and many more are a substantial way for developers to develop and deploy their applications. Also, with container orchestration solutions such as Amazon ECS and EKS (Kubernetes), developers can easily manage and scale these containers, thus enabling them to get on with other activities quickly. However, in spite of these management solutions being at hand, one still has to take account of infrastructure maintenance, availability, capacity, and so on, which are added tasks. AWS Fargate eases these tasks and streamlines all deployments for you, resulting in faster completion of deliverables.

At re:Invent in November 2017, AWS launched Fargate, a technology which enables one to manage containers without having to worry about managing the container infrastructure underneath. AWS Fargate comes to your rescue here. It is an easy way to deploy your containers on AWS. One can start using Fargate on ECS or EKS, try processes and workloads, and later migrate workloads to Fargate. It eliminates most of the management that containers usually require, such as placement of resources, scheduling, and scaling. All you have to do is:

Build your container image,
Specify the CPU and memory requirements,
Define your networking and IAM policies, and
Launch your container application.

Some key benefits of AWS Fargate

It allows developers to focus on design, development, and deployment of applications. This eliminates the need to manage a cluster of Amazon EC2 instances.

One can easily scale applications using Fargate. Once the application requirements such as CPU, memory, and so on are defined, Fargate manages the scaling and infrastructure needed to make containers highly available. One can launch thousands of containers in no time and easily scale them to run most mission-critical applications.

AWS Fargate is integrated with Amazon ECS and EKS. Fargate launches and manages containers once the CPU and memory needed, and the IAM policies the container requires, are defined and uploaded to Amazon ECS.

With Fargate, one gets flexible configuration options that match one's application's needs. Also, one pays at per-second granularity.

Adoption of container management as a trend is steadily increasing. Kubernetes, at present, is one of the most popular and widely used containerized application management platforms. However, users and developers are often confused about who the best Kubernetes provider is. Microsoft and Google have their managed Kubernetes services, but AWS Fargate provides an added ease to Amazon's EKS (Elastic Container Service for Kubernetes) by eliminating the hassle of container infrastructure management.

Read more about AWS Fargate on AWS' official website.
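As a rough illustration of the four steps listed above, the sketch below uses boto3, the AWS SDK for Python, to register and launch a Fargate task. Every name, ARN, and subnet ID is a placeholder, and an existing container image, ECS cluster, and execution role are assumed.

```python
# A hedged sketch of the Fargate flow with boto3: specify CPU/memory for a
# container image, define networking, and launch the task. All identifiers
# below are placeholders.
import boto3

ecs = boto3.client("ecs")

# Specify the CPU and memory requirements for the container image
task_def = ecs.register_task_definition(
    family="my-web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",        # 0.25 vCPU
    memory="512",     # 512 MiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
    }],
)

# Launch the container application; networking is defined per task
ecs.run_task(
    cluster="my-cluster",
    launchType="FARGATE",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```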


Docker isn't going anywhere

Savia Lobo
22 Jun 2018
5 min read
To create good software, developers often have to weave in the UI, frameworks, databases, libraries, and of course a whole bunch of code modules. These elements together build an immersive user experience on the front end. However, deploying and testing software is way too complex these days, as all these elements must be properly set up in order to build successful software. Here, containers are a great help, as they enable developers to pack all the contents of their app, including the code, libraries, and other dependencies, and ship it over as a single package. One can think of software as a puzzle, and containers just help one get all the pieces into their proper position for the effective functioning of the software. Docker is one of the popular choices in containers.

The rise of Docker containers

Linux containers have been in the market for almost a decade. However, it was after the release of Docker five years ago that developers widely started using containers in a simple way. At present, containers, especially Docker containers, are popular and in use everywhere, and this popularity seems set to stay. As per our Packt Skill Up developer survey on top sysadmin and virtualization tools, almost 46% of the developer crowd voted that they use Docker containers on a regular basis. It ranked third, after Linux and Windows OS, which were in the lead.

Source: Packt Skill Up survey 2018

Organizations such as Red Hat, Canonical, Microsoft, Oracle, and all other major IT companies and cloud businesses have adopted Docker.

Docker is often confused with virtual machines; read our article on virtual machines vs containers to understand the differences between the two. VMs such as Hyper-V, KVM, Xen, and so on are based on the concept of emulating hardware virtually. As such, they come with huge system requirements. On the other hand, Docker containers, or containers in general, use the same OS and kernel. Apart from this, Docker is just right if you want to use minimal hardware to run multiple copies of your app at the same time. This would, in turn, save huge costs on power and hardware for data centers annually. Docker containers boot within a fraction of a second, unlike virtual machines, which require 10-20 GB of operating system data to boot, which eventually slows down the whole process.

For CI/CD, Docker makes it easy to set up local development environments that replicate a live server. It helps run multiple development environments using different software, OS, and configurations, all from the same host. One can run test projects on new or different servers and can also work on the same project with similar settings, irrespective of the local host environment. Docker can also be deployed on the cloud, as it is designed for integration with most DevOps platforms, including Puppet, Chef, and so on. One can even manage standalone development environments with it.

Why developers love Docker

Docker brought novel ideas to the market, starting with making containers easy to use and deploy. In 2014, Docker announced that it was partnering with the major tech leaders Google, Red Hat, and Parallels on its open source component libcontainer. This made libcontainer the de facto standard for Linux containers. Microsoft also announced that it would bring Docker-based containers to its Azure cloud. Docker has also donated its software container format and its runtime, along with its specifications, to the Linux Foundation's Open Container Project.
This project includes all the contents of the libcontainer project, nsinit, and all other modifications such that it can run independently without Docker. Further, Docker's containerd is also hosted by the Cloud Native Computing Foundation (CNCF).

A few reasons why Docker is preferred by many:

It has a great user experience, which lets developers use the programming language of their choice.
It requires less coding.
One can run Docker on any operating system, such as Windows, Linux, and macOS.

The Docker Kubernetes combo

DevOps tools can be used to deploy and monitor Docker containers, but they are not highly optimized for this task. Containers need to be monitored individually because of the density at which they are deployed. The possible solution to this is cloud orchestration tools, and what better than Kubernetes, as it is one of the most dominant cloud orchestration tools in the market. As Kubernetes has a bigger community and a bigger share of the market, Docker made a smart move to include Kubernetes as one of its offerings. With this, Docker users and customers get not only Kubernetes' secure orchestration experience but also an end-to-end Docker experience.

Docker gives an A to Z experience for developers and system administrators. With Docker, developers can focus on writing code and forget about the rest of the deployment. They can also make use of different programs designed to run on Docker and incorporate them into their own projects. System administrators, in turn, can reduce system overhead as compared to VMs. Docker's portability and ease of installation make it easy for admins to save a bunch of time otherwise lost installing individual VM components. Also, with Google, Microsoft, Red Hat, and others absorbing Docker technology into their daily operations, it is surely not going anywhere soon. Docker's future is bright, and we can expect machine learning to be a part of it sooner rather than later.

Read next:
Are containers the end of virtual machines?
How to ace managing the Endpoint Operations Management Agent with vROps
Atlassian open sources Escalator, a Kubernetes autoscaler project
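Returning to the CI/CD point above - spinning up an environment that mirrors a live server from the same host - here is a minimal sketch using the Docker SDK for Python (docker-py). It assumes a local Docker daemon and a Dockerfile in the working directory; the image tag and port mapping are placeholders.

```python
# A minimal sketch with the Docker SDK for Python (pip install docker).
# Assumes a running local Docker daemon and a Dockerfile in the current
# directory; tag and port numbers are illustrative placeholders.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build an image that packages the app, its libraries, and dependencies
image, build_logs = client.images.build(path=".", tag="myapp:latest")

# Run the packaged application as a lightweight, portable container
container = client.containers.run(
    "myapp:latest",
    detach=True,                # run in the background
    ports={"8080/tcp": 8080},   # map container port 8080 to the host
)

print(container.short_id, container.status)
```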


Why is Pentaho 8.3 great for DataOps?

Guest Contributor
07 Oct 2019
6 min read
Announced in July, Pentaho 8.3 is the latest version of the data integration and analytics platform software from Hitachi Vantara. Along with new and improved features, this version will support DataOps, a collaborative data management practice that helps customers access the full potential of their data.

“DataOps is about having the right data, in the right place, at the right time and the new features in Pentaho 8.3 ensure just that,” said John Magee, vice president, Portfolio Marketing, Hitachi Vantara. “Not only do we want to ensure that data is stored at the lowest cost at the right service level, but that data is searchable, accessible and properly governed so actionable insights can be generated and the full economic value of the data is captured.”

How Pentaho prevents the loss of data

According to Stewart Bond, research director, Data Integration and Integrity Software, and Chandana Gopal, research director, Business Analytics Solutions at IDC, “A vast majority of data that is generated today is lost. In fact, only about 2.5% of all data is actually analyzed. The biggest challenge to unlocking the potential that is hidden within data is that it is complicated, siloed and distributed. To be effective, decision makers need to have access to the right data at the right time and with context.”

The struggle is how to manage all the incoming data in a way that exposes everyone to what's coming down the pipeline. When data is siloed, there's no guarantee the right people are seeing it to analyze it. Pentaho is a single platform that helps businesses keep up with data growth in a way that enables real-time data ingestion. With the available data services, you can:

Make data sets immediately available for reports and applications.
Reduce the time needed to create data models.
Improve collaboration between business and IT teams.
Analyze results with embedded machine learning and deep learning models without knowing how to code them into data pipelines.
Prepare and blend traditional data with big data.

Making all the data more accessible across the board is a key feature of Pentaho that this latest release continues to strengthen.

What's new in Pentaho 8.3?

The latest version of Pentaho includes new features to support DataOps. DataOps limits the overall cycle time of big data analytics: from the initial origin of the ideas to the making of the visualization, the overall data analytics process is transformed. Pentaho 8.3 is designed to promote easy management of, and collaboration around, data. The data analytics process becomes much more agile, data teams are able to work in sync, and efficiency and effectiveness increase.

Businesses are looking for ways to transform their data digitally. They want to get more value from the massive pool of information, and as data is almost everywhere and more distributed than ever before, businesses are looking for ways to get key insights from it quickly and easily. This is exactly where Pentaho 8.3 comes into the picture. It accelerates business innovation and agility. Plenty of new and exciting time-saving enhancements have been made to make Pentaho a better and more advanced solution for corporates. It helps companies automate their data management techniques.
Key enhancements in Pentaho 8.3

Each enhancement included with Pentaho 8.3 in some way helps organizations modernize their data management practices and remove friction between data and insight, including:

Improved drag-and-drop pipeline capabilities. These help access and blend hard-to-reach data, providing deeper insights and greater analytic value from enterprise integration. Amazon Web Services (AWS) developers can also now ingest and process streaming data through a visual environment rather than having to write code that must blend with other data.

Enhanced data visibility. Improved integration with Hitachi Content Platform (HCP), a distributed object storage system designed to support large repositories of content, makes it easier for users to read, write and update HCP customer metadata. They can also more easily query objects with their system metadata, making data more searchable, governable, and applicable for analytics. It's also now easier to trace real-time data from popular protocols like AMQP, JMS, Kafka, and MQTT. Users can also view lineage data from Pentaho within IBM's Information Governance Catalog (IGC) to reduce the amount of effort required to govern data.

Expanded multi-cloud support. AWS Redshift bulk load capabilities now automate the process of loading Redshift. This removes the repetitive SQL scripting needed to complete bulk loads and allows users to boost productivity and apply policies and schedules for data onboarding. Also included in this category are updates that address Snowflake connectivity. As one of the leading destinations for cloud warehousing, Snowflake's primary hiccup is when an analytics project wants to include data from other sources. Pentaho 8.3 allows blending, enrichment and analysis of Snowflake data in conjunction with other sources, including other cloud sources such as the existing Pentaho-supported cloud platforms AWS and Google Cloud.

Pentaho and DataOps

Each of the new capabilities and enhancements in this release of Pentaho is important for current users, but the larger benefit to businesses is its association with DataOps. Emerging as a collaborative data management discipline focused on better communication, integration, and automation of how data flows across an organization, DataOps is a practice being embraced more and more often, yet not without its own setbacks. Pentaho 8.3 helps businesses make DataOps a reality without facing the common challenges often associated with data management. According to John Magee, Vice President Portfolio Marketing at Hitachi, “The new Pentaho 8.3 release provides key capabilities for customers looking to begin their DataOps journey.”

Beyond feature enhancements

Looking past the improvements and new features of the latest Pentaho release, it's a good product because of the support it offers its community of users. From forums to webinars to 24/7 support, it not only caters to huge volumes of data on a practical level, it also doesn't ignore the actual people using the product.

Author Bio

James Warner is a Business Intelligence Analyst with excellent knowledge of Hadoop/big data analysis at NexSoftSys.com.

New MapR Platform 6.0 powers DataOps
DevOps might be the key to your Big Data project success
Bridging the gap between data science and DevOps with DataOps


AIOps - Trick or Treat?

Bhagyashree R
31 Oct 2018
2 min read
AIOps, as the term suggests, is artificial intelligence for IT operations, and the term was first introduced by Gartner last year. AIOps systems are used to enhance and automate a broad range of processes and tasks in IT operations with the help of big data analytics, machine learning, and other AI technologies.

Read also: What is AIOps and why is it going to be important?

In its report, Gartner estimated that, by 2020, approximately 50% of enterprises would be actively using AIOps platforms to provide insight into both business execution and IT operations. AIOps has seen fairly fast growth since its introduction, with many big companies showing interest in AIOps systems. For instance, last month Atlassian acquired Opsgenie, an incident management platform that, along with planning and solving IT issues, helps you gain insight to improve your operational efficiency. The reasons why AIOps is being adopted by companies are: it eliminates tedious routine tasks, minimizes costly downtime, and helps you gain insights from data that's trapped in silos.

Where can AIOps go wrong?

AIOps alerts us about incidents beforehand, but in some situations it can also go wrong. In cases where an event is unusual, the system will be less likely to predict it. Also, events that haven't occurred before will be entirely outside machine learning's ability to predict or analyze. Additionally, it can sometimes give false negatives and false positives. False negatives can happen in cases where the tests are not sensitive enough to detect possible issues. False positives can be the result of incorrect configuration. This essentially means that there will always be a need for human operators to review these alerts and warnings.

Is AIOps a trick or a treat?

AIOps is bringing more opportunities for the IT workforce, such as the AIOps data scientist, who will focus on solutions to correlate, consolidate, alert, analyze, and provide awareness of events. Dell defines its data scientist role as someone who will “contribute to delivering transformative AIOps solutions on their SaaS platform”. With AIOps, the IT workforce won't just disappear; it will evolve. AIOps is definitely a treat because it reduces manual work and provides an intuitive way of incident response.

What is AIOps and why is it going to be important?
8 ways Artificial Intelligence can improve DevOps
Tech hype cycles: do they deserve your attention?
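Returning to the false-positive/false-negative point above, here is a deliberately simple, purely illustrative Python sketch with invented numbers; real AIOps platforms use far more sophisticated models, but the sensitivity trade-off it demonstrates is the same.

```python
# A toy illustration (made-up numbers) of why threshold-based alerting produces
# false positives and false negatives: the same rule that catches a harmless
# spike can stay silent on a genuine, milder degradation.
def alerts(latency_ms, threshold):
    """Return the indices of samples whose latency exceeds the threshold."""
    return [i for i, value in enumerate(latency_ms) if value > threshold]

# Minute-by-minute response times: index 5 is a harmless cache-warm spike,
# index 9 is a genuine (but mild) degradation.
samples = [120, 118, 125, 122, 119, 310, 121, 117, 123, 240]

print(alerts(samples, threshold=200))  # [5, 9]: catches the real issue, but 5 is a false positive
print(alerts(samples, threshold=300))  # [5]: only the harmless spike fires; index 9 is a false negative
```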


5 things to remember when implementing DevOps

Erik Kappelman
05 Dec 2017
5 min read
DevOps is a much more realistic and efficient way to organize the creation and delivery of technology solutions to customers. But like practically everything else in the world of technology, DevOps has become a buzzword and is often thrown around willy-nilly. Let's cut through the fog and highlight concrete steps that will help an organization implement DevOps.

DevOps is about bringing your development and operations teams together

This might seem like a no-brainer, but DevOps is often explained in terms of tools rather than techniques or philosophical paradigms. At its core, DevOps is about uniting developers and operators, getting these groups to effectively communicate with each other, and then using this new communication to streamline various processes. This could include a physical change to the layout of an organization's workspace. It's incredible the changes that can happen just by changing the seating arrangements in an office. If you have a very large organization, development and operations might be in separate buildings, separate campuses, or even separate cities. While the efficacy of web-based communication has increased dramatically over the last few years, there is still no replacement for face-to-face daily human interaction. Putting developers and operators in the same physical space is going to increase the rate of adoption and the efficacy of various DevOps tools and techniques.

DevOps is all about updates

Updates can be aimed at expanding functionality or simply fixing or streamlining existing processes. Updates present a couple of problems to developers and operators. First, we need to keep everybody working on the same codebase. This can be achieved by using a variety of continuous integration tools. The goal of continuous integration is to make sure that changes and updates to the codebase are implemented as close to continuously as possible. This helps avoid the merging problems that can result from multiple developers working on the same codebase at the same time. Second, these updates need to be integrated into the final product. For this task, DevOps applies the concept of continuous deployment. This is essentially the same thing as continuous integration, but has to do with deploying changes to the codebase as opposed to integrating them. In terms of importance to the DevOps process, continuous integration and deployment are equally important. Moving updates from a developer's workspace to the codebase to production should be seamless, smooth, and continuous.

Implementing a microservices structure is imperative for an effective DevOps approach

Microservices are an extension of the service-based structure. Basically, a service structure calls for modularization of a solution's codebase into units based on functionality. Microservices take this a step further by implementing a service-based structure in which each service performs a single task. While a service-based or microservice structure is not required for implementing DevOps, I have no idea why you wouldn't use one, because microservices lend themselves so well to DevOps. One way to think of a microservice structure is by imagining an ant hill in which all of the worker ants are microservices. Each ant has a specific set of abilities and is given a task from the queen. The ant then autonomously performs this task, usually gathering food, along with all of its ant friends. Remove a single ant from the pile, and nothing really happens.
Replace an old ant with a new ant, and nothing really happens. The metaphor isn't perfect, but it strikes at the heart of why microservices are valuable in a DevOps framework. If we need to be continuously integrating and deploying, shouldn't we try to impact the codebase as directly as we can? When microservices are in use, changes can be made at an extremely granular level. This allows continuous integration and deployment to really shine.

Monitor your DevOps solutions

In order to continuously deploy, applications need to also be continuously monitored. This allows problems to be identified quickly. When problems are quickly identified, it tends to reduce the total effort required to fix them. Your application should obviously be monitored from the perspective of whether or not it is working as it currently should, but users also need to be able to give feedback on the application's functionality. When reasonable, this feedback can then be integrated into the application somehow. Monitoring user feedback tends to fall by the wayside when discussing DevOps. It shouldn't. The whole point of the DevOps process is to improve the user experience. If you're not getting feedback from users in a timely manner, it's kind of impossible to improve their experience.

Keep it loose and experiment

Part of the beauty of DevOps is that it can allow for more experimentation than other development frameworks. When microservices and continuous integration and deployment are being fully utilized, it's fairly easy to incorporate experimental changes into applications. If an experiment fails, or doesn't do exactly what was expected, it can be removed just as easily. Basically, remember why DevOps is being used and really try to get the most out of it.

DevOps can be complicated. Boiling anything down to five steps can be difficult, but if you act on these five fundamental principles you will be well on your way to putting DevOps into practice. And while it's fun to talk about what DevOps is and isn't, ultimately that's the whole point - to actually uncover a better way to work with others.

What are lightweight Architecture Decision Records?

Richard Gall
16 May 2018
4 min read
Architecture Decision Records (ADRs) document all the decisions made about software. Every change is recorded in a plain text file sitting inside a version control system (like GitHub). The record should be a complement to the information you can find in the version control system itself. The ADR provides context and information around every decision made about a piece of software.

Why are lightweight Architecture Decision Records needed?

We are always making decisions when we build software. Even the simplest piece of software will have required the engineer to take a number of different decisions. Often these decisions aren't obvious. If you've ever had to work with code written by someone else, you're probably familiar with this sort of situation. You might have even found that when you come across someone else's code, you need to make a further decision. Either you can simply accept what has been written, and merely surmise why it has been done the way it has, or you can decide to change it, based on your own judgement. Neither option is ideal.

This was what Michael Nygard identified in this blog post in 2011, which was when the concept of Architecture Decision Records first emerged. An ADR should prevent situations like this arising. That makes life easier for you. More importantly, it should mean that every decision is transparent to everyone involved. So, instead of blindly accepting something or immediately changing it, you can simply check the Architecture Decision Record. This will then inform how you proceed. Perhaps you need to make a change. But perhaps you also now understand the context of why something was built in the way it was. Any questions you might have should be explicitly answered in the architecture decision record. So, when you start asking yourself "why has she done it like that?", instead of floundering helplessly you can find the answer in the ADR.

Why lightweight Architecture Decision Records now?

Architecture Decision Records aren't a new thing. Nygard wrote his post all the way back in 2011, after all. But the fact remains that the context from which Nygard was writing in 2011 was very specific. Today it is mainstream. As we've moved away from monolithic architecture towards microservices or serverless, decision making has become more and more important in software engineering. This is a point that is well explained in a blog post here:

"The rise of lean development and microservices... complicates the ability to communicate architecture decisions. While these concepts are not inherently opposed to documentation, their processes often fail to effectively capture decision-making processes and reasoning. Another possible inefficiency when recording decisions is bad or out-of-date documentation. It's often a herculean effort to keep large, complex architecture documents current, making maintenance one of the most common barriers to entry."

ADRs are, then, a way of managing the complexity of modern software engineering. They are a response to a fundamental need to better communicate decisions. Most importantly, they codify decision-making within the development process. It is when they are lightweight and sit within the project itself that they are most effective.

Architecture Decision Record template

Architecture Decision Records must follow a template. Not only does that mean everyone is working off the same page, it also means people are actually more likely to document their decisions.
Think about it: if you're asked to note down how you decided to do something without any guidelines, you're probably not going to do it at all. Below, you'll find an example Architecture Decision Record template. There are a number of different templates you can use, but it's probably best to sit down with your team and agree on what needs to be captured.

An Architecture Decision Record example

Date
Decision makers [who was involved in the decision taken]
Category [which part of the architecture does this decision pertain to]
Contextual outline [Explain why this decision was made. Outline the key considerations and assumptions at play]
Impact consequences [What does this decision mean for the project? What should someone reading this be aware of in terms of future decisions?]

As I've already noted, there are a huge number of ways you may want to approach this. Use this as a starting point.

Read next
Enterprise Architecture Concepts
Reactive Programming and the Flux Architecture


Why microservices and DevOps are a match made in heaven

Erik Kappelman
12 Oct 2017
4 min read
What are microservices?

In terms of software, 'services' can be thought of as little chunks of functionality. Services are a part of service-oriented architecture (SOA). Services are stateless, adhere to a contract (shared standards), are autonomous, relatively granular, and should be a 'black box' for the user. Microservices are a logical extension of services. Microservices are services that perform only one function. This matches the Unix philosophy, "Do one thing, and do it well." But who cares? And what about DevOps? Well, although the title is a cliche, DevOps and microservices-based architecture are absolutely a match made in heaven. Following the philosophy of fully explaining terms, let's talk a bit about DevOps.

What is DevOps?

DevOps, which comes from the words "development operations," is a process that is used to create software. DevOps is not one specific philosophy or process; there are many variants, but there are some shared features across most of them. DevOps advocates for a continuous development process where as many elements of this process as possible are automated. Each iteration of a product is coded, built, tested, packaged, and released, and then monitored. This is referred to as the DevOps toolchain. When there is a need or desire to upgrade or change functionality or the way a product is designed, the process begins again. The idea is that DevOps should run in a circular fashion, always upgrading and always getting better. There are myriad tools in use right now within various flavors of DevOps. These tools are designed to meld with the DevOps toolchain and have revolutionized the development process for many developers and companies.

Why microservices and DevOps go together

You should already see why microservices and DevOps go together so well. DevOps calls for continuous monitoring, testing, and deployment of software. Microservices are inherently modular, because they are intended to perform a single function. Software that is modular easily fits into the DevOps structure. Incremental changes can be made to parts of a project, perhaps a single microservice. If the service contracts and control mechanisms are properly created, a single microservice should be able to be easily upgraded, built, tested, deployed, and monitored without sending a cascading wave of bugs through adjacent services.

DevOps really doesn't make much sense outside of a structure like this. If your software is designed as a behemoth - an interconnected, interdependent ball of wax - changing part of the functionality will 'break' everything. This means that as changes or upgrades to the software are made, almost every change, no matter how big or small, will trigger what amounts to an almost full rewrite or upgrade of the software in question. When applied to this kind of project, most DevOps processes would actually hinder the development process instead of helping. When projects are modularized at a relatively granular level, such as when a project employs a microservice-based structure, DevOps expedites delivery time and quality simultaneously. It should be noted that both a microservice architecture and DevOps processes are not tied to any specific tools or languages. These are philosophies for development and could be applied in many different ways. That being said, there are many continuous integration and deployment tools, as well as many automation tools, that are designed for use within a DevOps framework.
Criticisms of microservices

There are some criticisms of the microservice structure. One criticism is that using microservices does not get rid of the complexities of a traditional program; those complexities are just moved onto the network that the services use to communicate. Stress on the network is another criticism of the microservice architecture. This is because the service architecture distributes the elements of a program or process around a network in various places. In order for these services to perform cohesive functions, they must use the network to communicate. Depending on how many services make up a program or process, this could translate into significant network activity, which then creates problems of its own. Another criticism is that microservices can sometimes become so-called 'nanoservices': services that perform a function so small that the cost of the service actually outweighs its utility. These criticisms should be kept in mind, but, in my opinion, they don't amount to enough to undermine the usual functionality of microservices in a DevOps environment.

Using microservices in the DevOps process helps fully realize the potential of the continuous integration, testing, and deployment promised by DevOps. Combined, these tools can optimize computing in a manner that should be utilized during development whenever possible.


Why choose Ansible for your automation and configuration management needs?

Savia Lobo
03 Jul 2018
4 min read
Of late, organizations have been moving towards getting their systems automated. The benefits are many. Firstly, it saves a huge chunk of time, and secondly, it saves investment in human resources for simple tasks such as updates and so on. A few years back, Chef and Puppet were the two popular names when asked about tools for software automation. Over the years, they have gained a strong rival which has surpassed them and now sits as one of the most popular tools for software automation. Ansible is the one!

Ansible is an open source tool for IT configuration management, deployment, and orchestration. It is perhaps the definitive configuration management tool. Chef and Puppet may have got there first, but its rise over the last couple of years is largely down to its impressive automation capabilities. And with operations engineers and sysadmins facing constant time pressures, the need to automate isn't a "nice to have", but a necessity. Its tagline is "allowing smart people to do smart things." It's hard to argue that any software should aim to do much more than that.

Ansible's rise in popularity

Ansible, which originated in 2013, is a leader in IT automation and DevOps. It was bought by Red Hat in 2015 to achieve their goal of creating frictionless IT. The reason Red Hat acquired Ansible was its simplicity and versatility. It got the second-mover advantage of entering the DevOps world after Puppet, which meant it could orchestrate multi-tier applications in the cloud. This improves server uptime by implementing an 'immutable server architecture' for creating, deploying, deleting, or migrating servers across different clouds. For those starting afresh, it is easy to write and maintain automation workflows, and it offers a plethora of modules which make it easy for newbies to get started.

How Ansible benefits Red Hat and its community

Ansible complements Red Hat's popular cloud products, OpenStack and OpenShift. Red Hat proved to be complex yet safe open source software for enterprises. However, it was not easy to use. Due to this, many developers started migrating to other cloud services for easy and simple deployment options. By adopting Ansible, Red Hat finally provided an easy option to automate and modernize their IT solutions. Customers can now focus on automating various baseline tasks. It also aids Red Hat in refreshing its traditional playbooks; it allows enterprises to use IT services and infrastructure together with the help of Ansible's YAML.

The most prominent benefit of using Ansible for both enterprises and individuals is that it is agentless. It achieves this by leveraging SSH and Windows Remote Management. Both these approaches reuse connections and generate minimal network traffic. The approach also has added security benefits and improves both client and central management server resource utilization. Thus, the user does not have to worry about network or server management, and can focus on other priority tasks.

What can you use it for?

Easy configuration: Ansible provides developers with easy-to-understand configurations, understood by both humans and machines. It also includes many modules and user-built roles, so one need not start building from scratch.

Application lifecycle management: One can rest assured about their application development lifecycle with Ansible. Here, it is used for defining the application, and Red Hat Ansible Tower is used for managing the entire deployment process.
Continuous delivery: Manage your business with the help of Ansible's push-based architecture, which allows sturdier control over all the required operations. Orchestration of server configuration in batches makes it easy to roll out changes across the environment.

Security and compliance: While security policies are defined in Ansible, one can choose to integrate the process of scanning and solving issues across the site into other automated processes. Scanning of jobs and system tracking ensures that systems do not deviate from the parameters assigned. Additionally, Ansible Tower provides secure storage for machine credentials and RBAC (role-based access control).

Orchestration: It brings a high degree of discipline and order to the environment. This ensures all application pieces work in unison and are easily manageable, despite the complexity of the applications involved.

Though it is popular as the IT automation tool, many organizations use it in combination with Chef and Puppet. This is because it may have scaling issues and lack performance for larger deployments. Don't let that stop you from trying Ansible; it is much loved by DevOps engineers, and because it is written in Python it is easy to learn. Moreover, it offers credible support and an agentless architecture, which makes it easy to control servers and much more within an application development environment.

An In-depth Look at Ansible Plugins
Mastering Ansible – Protecting Your Secrets with Ansible
Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0
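As a rough, runnable counterpart to the agentless, YAML-driven workflow described above, here is a small sketch using the ansible-runner Python package; the directory layout and playbook name are placeholders.

```python
# A small sketch using the ansible-runner package (pip install ansible-runner)
# to launch a playbook from Python. Paths and the playbook name are placeholders;
# ansible-runner looks for playbooks under <private_data_dir>/project/ and
# inventories under <private_data_dir>/inventory/.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="/tmp/ansible-demo",
    playbook="site.yml",
)

# Agentless: Ansible itself connects to the managed hosts over SSH or WinRM,
# so this control machine only needs the playbook, an inventory, and credentials.
print(result.status, result.rc)  # e.g. "successful", 0
print(result.stats)              # per-host ok/changed/failed recap
```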


6 new eBooks for programmers to watch out for in March

Richard Gall
20 Feb 2019
6 min read
The biggest challenge for anyone working in tech is that you need multiple sets of eyes. Yes, you need to commit to regular, almost continuous learning, but you also need to look forward to what's coming next. From slowly emerging trends that might not even come to fruition (we're looking at you, DataOps), to version updates and product releases, for tech professionals the horizon always looms and shapes the present.

But it's not just about the big trends or releases that get coverage - it's also about planning your next (career) move, or even your next mini-project. That could be learning a new language (not necessarily new, but one you haven't yet got round to learning), trying a new paradigm, exploring a new library, or getting to grips with cloud native approaches to software development. This sort of learning is easy to overlook, but it is vital to any developer's development.

While the Packt library has a wealth of content for you to dig your proverbial claws into, if you're looking forward, Packt has got some new titles available in pre-order that could help you plan your learning for the months to come. We've put together a list of some of our own top picks of our pre-order titles available this month, due to be released in late February or March. Take a look and take some time to consider your next learning journey...

Hands-On Deep Learning with PyTorch

TensorFlow might have set the pace when it comes to artificial intelligence, but PyTorch is giving it a run for its money. It's impossible to describe one as 'better' than the other - ultimately they both have valid use cases, and can both help you do some pretty impressive things with data.

Read next: Can a production ready Pytorch 1.0 give TensorFlow a tough time?

The key difference is really in the level of abstraction and the learning curve - TensorFlow is more like a library, which gives you more control, but also makes things a little more difficult. PyTorch, then, is a great place to start if you already know some Python and want to try your hand at deep learning. Or, if you have already worked with TensorFlow and simply want to explore new options, PyTorch is the obvious next step. Order Hands-On Deep Learning with PyTorch here.

Hands-On DevOps for Architects

Distributed systems have made the software architect role incredibly valuable. This person is not only responsible for deciding what should be developed and deployed, but also the means through which it should be done and maintained. It has also made the question of architecture relevant to just about everyone who builds and manages software. That's why Hands-On DevOps for Architects is such an important book for 2019. It isn't just for those who typically describe themselves as software architects - it's for anyone interested in infrastructure, how things are put together, and how they can be made more reliable, scalable, and secure. With site reliability engineering finding increasing usage outside of Silicon Valley, this book could be an important piece in the next step in your career. Order Hands-On DevOps for Architects here.

Hands-On Full Stack Development with Go

Go has been cursed with a hell of a lot of hype. This is a shame - it means it's easy to dismiss as a fad or fashion that will quickly disappear. In truth, Go's popularity is only going to grow as more people experience its speed and flexibility. Indeed, in today's full-stack, cloud native world, Go is only going to go from strength to strength.
In Hands-On Full Stack Development with Go you'll not only get to grips with the fundamentals of Go, you'll also learn how to build a complete full-stack application built on microservices, using tools such as Gin and ReactJS. Order Hands-On Full Stack Development with Go here.

C++ Fundamentals

C++ is a language that often gets a bad rap. You don't have to search the internet that deeply to find someone telling you that there's no point learning C++ right now. And while it's true that C++ might not be as eye-catching as languages like, say, Go or Rust, it's nevertheless still a language that plays a very important role in the software engineering landscape. If you want to build performance-intensive apps for desktop, C++ is likely going to be your go-to language.

Read next: Will Rust replace C++?

One of the sticks that's often used to beat C++ is that it's a fairly complex language to learn. But rather than being a reason not to learn it, if anything the challenge it presents to even relatively experienced developers is one well worth taking on. At a time when many aspects of software development seem to be getting easier, as new layers of abstraction remove problems we previously might have had to contend with, C++ bucks that trend, forcing you to take a very different approach. And although this approach might not be one many developers want to face, if you want to strengthen your skill set, C++ could certainly be a valuable language to learn.

The stats don't lie - C++ is placed 4th on the TIOBE index (as of February 2019), beating JavaScript, and commands a considerably high salary; indeed.com data from 2018 suggests that C++ was the second highest earning programming language in the U.S., after Python, with a salary of $115K. If you want to give C++ a serious go, then C++ Fundamentals could be a great place to begin. Order C++ Fundamentals here.

Data Wrangling with Python & Data Visualization with Python

Finally, we're grouping two books together - Data Wrangling with Python and Data Visualization with Python. This is because they both help you to really dig deep into Python's power, and better understand how it has grown to become the definitive language of data. Of course, R might have something to say about this - but it's a fact that over the last 12-18 months Python has really grown in popularity in a way that R has been unable to match. So, if you're new to any aspect of the data science and analysis pipeline, or you've used R and you're now looking for a faster, more flexible alternative, both titles could offer you the insight and guidance you need. Order Data Wrangling with Python here. Order Data Visualization with Python here.

Is your Enterprise Measuring the Right DevOps Metrics?

Guest Contributor
17 Sep 2018
6 min read
As of 2018, 17% of companies worldwide have fully adopted DevOps, while 14% are still in the consideration stage. Amazon, Netflix, and Target are a few of the companies that have attained success with DevOps. Amazon's move to Amazon Web Services resulted in the ability to scale server capacity up or down as needed, allowing their engineers to deploy their own code to the servers whenever they wanted to. This resulted in continuous deployment, reducing both the duration and the number of outages experienced by companies using AWS. Netflix used DevOps to improve its cloud infrastructure and to ensure smooth streaming of videos online.

When you say “we have adopted DevOps in our enterprise”, what do you really mean? It means you have adopted a software philosophy that integrates software development and operations, thus reducing the time to market for your end product. The questions which come next are: How do you measure the true success of DevOps in your organization? Have you been working on the right metrics all along?

Let's first talk about measuring DevOps in organizations. It is all about uptime, transactions per second, bugs fixed, commits, and other operational as well as productivity metrics. This is what most organizations tend to look at as metrics when you talk about DevOps. But are these the right DevOps metrics?

For a while, companies have been working with the set of metrics discussed above to determine the success of DevOps. However, these are not the right metrics and should not be considered on their own. A metric is an indicator of the performance of DevOps, and no single indicator will determine success. Your metrics might differ based on the data you collect. You will end up collecting large volumes of data; however, not every piece of data available can be converted into a metric. Here's how you can determine the metrics for your DevOps.

Avoid using too many metrics

You should, at the most, use 10 metrics. We suggest using fewer than 10, in fact. The fewer the metrics used, the better your judgment will be. You should broaden your perspective when choosing the metrics. It is important to choose metrics that account for overall organizational health, and don't just take into consideration operational and development data.

Metrics that connect with your organization

What is the ultimate aim of your organization? How would you determine your organization is successful? The answers to these questions will help you determine the metrics. Most organizations determine their success based on customer experience and overall operational efficiency. You will need to choose metrics that help you determine these two values.

Tie the metrics to your goals

As a businessperson, you are more concerned with customer attrition, bad feedback, and non-returning customers than the lines of code that go into creating a successful software product. You will need to tie your DevOps success metrics to these goals. While you are concerned about the failure of your website or its downtime, the true concern is the customer's abandonment of your website.

Causes that affect the DevOps teams

While the business metrics will help you measure success to a certain extent, there are certain things that affect the operations and development teams. You will need to check these causes and get to the root of how they affect the DevOps teams, and what needs to be done to create a balance between the development and operational teams.
Next, we will talk about the actual DevOps metrics that you should take into consideration when deriving value for your organization and measuring success.

Velocity

With most enterprise elements being automated, velocity is one of the most important metrics that will determine the success of your DevOps. The idea is to get updates out to users in the quickest way possible, without compromising on security or reliability. You stay competitive, offer new features, and boost customer retention. The two variables that help measure this tangible metric are deployment frequency and deployment lead time. The former measures the frequency of releases, and the latter measures the speed at which the team commits a code change and pushes out the update.

Service quality

Service quality directly impacts the goals set forth by the organization, and is intangible. The idea is to maintain service quality throughout the releases and changes made to the application. The variables that determine this metric include change failure rate, number of support tickets, and MTTR (mean time to recovery). When you release an update and it leads to an error or fault in the application, that counts towards the change failure rate. If there are bugs or performance issues in your releases and these are being reported, the number of support tickets or errors becomes relevant. MTTR is the variable that measures the number of issues resolved and the time taken to solve them. The idea is to be more responsive to the problems faced by the customers.

User experience

This is the final metric that impacts the success of your DevOps. You need to check if all the features and updates you have insisted upon are in sync with user needs. The variables concerned with measuring this aspect include feature usage and business impact. You will need to check how many people from the target audience are using the new feature you have released, and determine their personas. You can check the number of sessions, completed transactions, and duration of sessions to quantify the number of people. Check their profiles to get their personas.

Planning your DevOps strategy

It is not easy to roll out DevOps in your organization and expect agility immediately. You need to have a solid strategy, align it to your business goals, and determine the effective DevOps metrics to measure the success of your rollout. Planning is of the essence for a thorough rollout of DevOps. It is important to consider all the data when you have DevOps in your organization. Make sure you store and analyze all of it, and use the data that suits the DevOps metrics you have determined for success. It is important that the DevOps metrics are aligned to your business goals and the objectives you have defined.

About the author: Vishal Virani is the founder and CEO of Coruscate Solutions, a mobile app development company. He enjoys writing about technology, mobile apps, custom web development, and the latest industry trends.
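To make the velocity and service-quality variables above concrete, here is an illustrative Python sketch with invented sample data showing one way deployment frequency, lead time, change failure rate, and MTTR might be computed from simple deployment records.

```python
# An illustrative sketch (invented sample data) of computing the velocity and
# service-quality variables described above from simple deployment records.
from datetime import datetime, timedelta
from statistics import mean

# Each record: when the change was committed, when it was deployed,
# whether the deployment caused a failure, and how long recovery took.
deployments = [
    {"committed": datetime(2018, 9, 3, 9, 0),   "deployed": datetime(2018, 9, 3, 15, 0),
     "failed": False, "recovery": timedelta(0)},
    {"committed": datetime(2018, 9, 5, 11, 0),  "deployed": datetime(2018, 9, 6, 10, 0),
     "failed": True,  "recovery": timedelta(hours=2)},
    {"committed": datetime(2018, 9, 10, 14, 0), "deployed": datetime(2018, 9, 11, 9, 0),
     "failed": False, "recovery": timedelta(0)},
]

window_days = 14
deployment_frequency = len(deployments) / window_days            # deploys per day
lead_time_hours = mean((d["deployed"] - d["committed"]).total_seconds() / 3600
                       for d in deployments)                     # commit -> deploy
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)           # fraction of bad releases
mttr_hours = (mean(f["recovery"].total_seconds() / 3600 for f in failures)
              if failures else 0.0)                              # mean time to recovery

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Lead time: {lead_time_hours:.1f} h, "
      f"change failure rate: {change_failure_rate:.0%}, MTTR: {mttr_hours:.1f} h")
```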

Top 5 DevOps Tools to Increase Agility

Darrell Pratt
14 Oct 2016
7 min read
DevOps has been broadly defined as a movement that aims to remove the barriers between the development and operations teams within an organization. Agile practices have helped to increase speed and agility within development teams, but the old methodology of throwing the code over the wall to an operations department to manage the deployment to production systems still persists. The primary goal of adopting DevOps practices is to improve both the communication between disparate operations and development groups, and the process by which they work. Several tools are being used across the industry to put this idea into practice. We will cover what I feel is the top set of those tools from the various areas of the DevOps pipeline, in no particular order.

Docker
"It worked on my machine…" Every operations or development manager has heard this at some point in their career. A developer commits their code and promptly breaks an important environment because their local machine isn't configured to be identical to a larger production or integration environment. Containerization has exploded onto the scene and Docker is at the nexus of the change to isolate code and systems into easily transferable modules. Docker is used in the DevOps suite of tools in a couple of ways. The quickest win is to first use Docker to provide developers with easily usable containers that mimic the various systems within the stack. If a developer is working on a new RESTful service, they can check out the container that is set up to run Node.js or Spring Boot, and write the code for the new service with the confidence that the container will be identical to the service environment on the servers. With the success of using Docker in the development workflow, the next logical step is to use Docker in the build stage of the CI/CD pipeline. Docker can help to isolate the build environment's requirements across different portions of the larger application. By containerizing this step, it is easy to use one generic pipeline to build components in technologies ranging from Ruby and Node.js to Java and Golang.

Git & JFrog Artifactory
Source control and artifact management act as a funnel for the DevOps pipeline. The structure of an organization can dictate how it runs these tools, whether hosted or served locally. Git's decentralized source code management and high-performance merging features have helped it become the most popular version control system. Atlassian Bitbucket and GitHub both provide a good set of tooling around Git and are easy to use and to integrate with other systems. Source code control is vital to the pipeline, but the control and distribution of artifacts into the build and deployment chain is important as well.

[Image: Branching in Git]

Artifactory is a one-stop shop for any binary artifact hosted within a single repository, which now supports Maven, Docker, Bower, Ruby Gems, CocoaPods, RPM, Yum, and npm. As the codebase of an application grows and includes a broader set of technologies, the ability to control this complexity from a single point and integrate with a broad set of continuous integration tools cannot be stressed enough. Ensuring that the build scripts use the correct dependencies, both external and internal, and serving a local set of Docker containers reduces friction in the build chain and will make the lives of the technology team much easier.
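As a rough illustration of the container-based build step described above, here is a minimal Python sketch (using the docker SDK and plain HTTP against an Artifactory repository) that builds an image, runs the tests inside it, and uploads the resulting artifact. The image tag, repository URL, paths and credentials are hypothetical placeholders, not the article's own setup.

```python
import docker
import requests

# Hypothetical values; substitute your own image tag, repository and credentials.
IMAGE_TAG = "myapp-build:1.0"
ARTIFACT_PATH = "build/myapp-1.0.tgz"
ARTIFACTORY_URL = "https://artifactory.example.com/artifactory/libs-release-local"
ARTIFACTORY_AUTH = ("ci-user", "api-token")

client = docker.from_env()

# Build the image from the Dockerfile in the current directory, so every
# developer and CI agent compiles against the same environment.
image, build_logs = client.images.build(path=".", tag=IMAGE_TAG)

# Run the test suite inside the freshly built container; the SDK raises an
# exception if the command exits non-zero.
client.containers.run(IMAGE_TAG, command="npm test", remove=True)

# Push the packaged artifact into Artifactory so downstream stages resolve
# the exact same binary.
with open(ARTIFACT_PATH, "rb") as artifact:
    response = requests.put(
        f"{ARTIFACTORY_URL}/myapp/1.0/myapp-1.0.tgz",
        data=artifact,
        auth=ARTIFACTORY_AUTH,
    )
    response.raise_for_status()
```

In practice a CI server such as Jenkins, discussed next, would drive a script like this rather than a developer running it by hand.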
Jenkins
There are several CI servers to choose from in the market. The hosted tools such as Travis CI, Codeship, Wercker and Circle CI are all very well suited to driving an integration pipeline, and each caters slightly better to an organization that is more cloud focused (source control and hosting), with deep integrations with GitHub and cloud providers like AWS, Heroku, Google and Azure. The older and less flashy system is Jenkins. Jenkins has continued to nurture a large community that is constantly adding new integrations and capabilities to the product. The Jenkins Pipeline feature provides a text-based DSL for creating complex workflows that can move code from repository to the glass with any number of testing stages and intermediate environment deployments. The pipeline DSL can be created from code, and this enables a good scaffolding setup for new projects to be integrated into the larger stack's workflow.

[Image: Pipeline example]

Hashicorp Terraform
At this point we have a system that can build and manage applications through the software development lifecycle. The code is hosted in Git, orchestrated through testing and compilation with Jenkins, and running in reliable containers, and we are storing and proxying dependencies in Artifactory. The deployment of the application is where the operations and development groups come together in DevOps. Terraform is an excellent tool for managing the infrastructure required to run the applications as code itself. There are several vendors in this space, such as Chef, Puppet and Ansible, to name just a few. Terraform sits at a higher level than many of these tools by acting as more of a provisioning system than a configuration management system. It has plugins to incorporate many of the configuration tools, so any investment an organization has made in one of those systems can be maintained.

[Image: Load balancer and instance config]

Where Terraform excels is in its ability to easily provision arbitrarily complex multi-tiered systems, both locally and cloud hosted. The syntax is simple and declarative, and because it is text, it can be versioned alongside the code and other assets of an application. This delivers on "infrastructure as code."

Slack
A chat application was probably not what you were expecting in a DevOps article, but Slack has been a transformative application for many technology organizations. Slack provides an excellent platform for fostering communication between teams (text, voice and video) and integrating various systems. The DevOps movement stresses the removal of barriers between the teams and individuals who work together to build and deploy applications. Webhooks provide simple integration points for things such as build notifications, environment statuses and deployment audits. There is a growing number of custom integrations for the tools covered in this article, and the bot space is rapidly expanding into AI-backed members of the team that answer questions about builds, deploy code, or troubleshoot issues in production. It's not a surprise that this space has gained its own name: ChatOps.
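As a small example of the kind of integration that makes ChatOps work, the sketch below posts a deployment notification to a Slack incoming webhook from Python. The webhook URL, service name and message wording are placeholders; in practice a CI job like the Jenkins pipeline above would send this automatically.

```python
import requests

# Placeholder incoming-webhook URL; Slack generates a real one per channel/app.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def notify_deploy(service: str, version: str, environment: str, ok: bool) -> None:
    """Post a short deployment status message to the team's channel."""
    status = ":white_check_mark: succeeded" if ok else ":x: failed"
    payload = {
        "text": f"Deployment of {service} {version} to {environment} {status}."
    }
    response = requests.post(SLACK_WEBHOOK_URL, json=payload, timeout=10)
    response.raise_for_status()

if __name__ == "__main__":
    notify_deploy("search-api", "1.4.2", "production", ok=True)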
Articles covering the top 10 ChatOps strategies will surely follow.

Summary
In this article, we covered several of the tools that integrate into the DevOps culture and how those tools are used and are transforming all areas of the technology organization. While not an exhaustive list, the areas that were covered will give you an idea of the scope of the space and how these various systems can be integrated together.

About Darrell Pratt
Darrell Pratt is a technologist who is responsible for a range of technologies at Cars.com, where he is the director of software development and delivery. He is passionate about technology and still finds time to write a bit of code and hack on hardware projects. Find him on Twitter here: @darrellpratt.

So you want to be a DevOps engineer

Darrell Pratt
20 Oct 2016
5 min read
The DevOps movement has come about to accomplish the long sought-after goal of removing the barriers between the traditional development and operations organizations. Historically, development teams have written code for an application and passed that code over to the operations team to both test and deploy onto the company's servers. This practice generates many mistakes and misunderstandings in the software development lifecycle, and it erodes ownership among developers, who end up owning little of the deployment pipeline or production responsibilities. The new DevOps teams that are appearing now start as blended groups of developers, system administrators, and release engineers. The thought is that the developers can assist the operations team members in building and more deeply understanding the applications, and the operations team members can shed light on the environments and deployment processes they must master to keep the applications running. As these teams evolve, we are seeing a trend to specifically hire people into the role of DevOps engineer. What this role is, and what type of skills you might need to succeed in it, is what we will cover in this article.

The Basics
Almost every job description you will find for a DevOps engineer requires some level of proficiency with the desired production operating systems, and Linux is probably the most common. You will need a very good understanding of how to administer and use a Linux-based machine. Words like grep, sed, awk, chmod, chown, ifconfig, netstat and others should not scare you. In the role of DevOps engineer, you are the go-to person for developers when they have issues with the server or cloud. Make sure that you have a good understanding of where the failure points can be in these systems and the commands that can be used to pinpoint the issues. Learn the package manager systems for the various distributions of Linux to better understand how they work under the hood. From RPM and Yum to Apt and Apk, the managers vary widely, but the common ideas are very similar in each. You should understand how to use the managers to script machine configurations and how modern containers are built.

Coding
The language you need for a DevOps role depends quite a bit on the particular company. Java, C#, JavaScript, Ruby and Python are all popular languages. If you are a devout Java follower, then choosing a .NET shop might not be your best choice. Use your discretion here, but the job will require a working knowledge of coding in one or more focused languages. At a minimum, you will need to understand how the build chain of the language works, and you should be comfortable reading the system's error logs and understanding what they are telling you.

Cloud Management
Gone are the days of uploading a WAR file to a directory on the server. It is very likely that you will be responsible for getting applications up and running on a cloud provider. Amazon Web Services is the gorilla in the space, and good hands-on experience with the various services that make up a standard AWS deployment is a much sought-after skill set. From standard AMIs to load balancing, CloudFormation and security groups, AWS can be complicated, but luckily it is very inexpensive to experiment with and there are many training classes on the different components.
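As a rough sketch of the kind of hands-on AWS familiarity described above, the following Python snippet uses the boto3 library to list running EC2 instances and the security groups attached to them. The region and tag filter are hypothetical; the aim is only to show that everyday cloud inspection is easily scriptable.

```python
import boto3

# Assumes credentials are configured via the usual AWS mechanisms
# (environment variables, ~/.aws/credentials, or an instance role).
ec2 = boto3.client("ec2", region_name="us-east-1")

# Find running instances; the Environment tag filter is a hypothetical example.
response = ec2.describe_instances(
    Filters=[
        {"Name": "instance-state-name", "Values": ["running"]},
        {"Name": "tag:Environment", "Values": ["production"]},
    ]
)

# Print instance id, type, private IP and attached security groups.
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        groups = ", ".join(g["GroupName"] for g in instance["SecurityGroups"])
        print(f'{instance["InstanceId"]}  {instance["InstanceType"]}  '
              f'{instance.get("PrivateIpAddress", "-")}  [{groups}]')
```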
Source Code Control
Git is currently the tool of choice for source code control. Git gives a team a decentralized SCM system built to handle branching and merging operations with ease. The workflows teams use vary, but a good understanding of how to merge branches, rebase, and fix commit issues is required in the role. DevOps engineers are usually looked to for help with "interesting" Git issues, so good hands-on experience is vital.

Automation Tooling
A new automation tool has probably been released in the time it takes to read this article. There will always be new tools and platforms in this part of the DevOps space, but the most common are Chef, Puppet and Ansible. Each system provides a framework for treating the setup and maintenance of your infrastructure as code. Each has a slightly different take on how configurations are written and deployed, but the concepts are similar, and a good background in any one of them is more often than not a requirement for any DevOps role. Each of these systems requires a good understanding of either Ruby or Python, and these languages appear quite a bit in the various tools used in the DevOps space.

A desire to improve systems and processes
While not an exhaustive list, mastering this set of skills will accelerate anyone's journey towards becoming a DevOps engineer. If you can augment these skills with a strong desire to improve the systems and processes used in the development lifecycle, you will be an excellent DevOps engineer.

About the author
Darrell Pratt is the director of software development and delivery at Cars.com, where he is responsible for a wide range of technologies that drive the Cars.com website and mobile applications. He is passionate about technology and still finds time to write a bit of code and hack on hardware projects. You can find him on Twitter here: @darrellpratt.

5 common misconceptions about DevOps

Hari Vignesh
08 Aug 2017
4 min read
DevOps is a transformative operational concept designed to help development and production teams coordinate operations more effectively. In theory, DevOps is focused on the cultural changes that stimulate collaboration and efficiency, but the focus often ends up being placed on everyday tasks, distracting organizations from the core principles and values that DevOps is built around. This has led many technology professionals to develop misconceptions about DevOps, because they have been part of deployments, or know people who have been involved in DevOps plans, that strayed from the core principles of the movement. Let's discuss a few of these misconceptions.

We need to employ "DevOps"
DevOps is not a job title or a specific role. Your organization probably already has senior systems people and senior developers who have many of the traits needed to work in the way that DevOps promotes. With a bit of effort and help from outside consultants, mailing lists or conferences, you might easily be able to restructure your business around these principles without employing new people or losing old ones. Again, there is no such thing as a "DevOp" person; it is not a job title. Feel free to advertise for people who work with a DevOps mentality, but there are no DevOps job titles. Oftentimes, good people to consider as a bridge between teams are generalists, architects, and senior systems administrators and developers. Many companies in the past decade have employed a number of specialists; a dedicated DNS administrator is not unheard of. You can still have these roles, but you will need some generalists with a good background in multiple technologies. They should be able to champion the values of simple systems over complex ones, and begin establishing automation and cooperation between teams.

Adopting tools makes you DevOps
Some who have recently caught wind of the DevOps movement believe they can instantly achieve this nirvana of software delivery simply by following a checklist of tools to implement within their team. Their assumption is that if they purchase and implement a configuration management tool like Chef, a monitoring service like Librato, or an incident management platform like VictorOps, then they've achieved DevOps. But that's not quite true. DevOps requires a cultural shift beyond simply implementing a new lineup of tools. Every department, technical or not, needs to understand the cultural shift behind DevOps: one that emphasizes empathy and better collaboration. It's more about people.

DevOps emphasizes continuous change
There's no way around it: you will need to deal with more change and release tasks when integrating DevOps principles into your operations, since the focus is placed heavily on accelerating deployment through development and operations integration. This perception comes out of DevOps' initial popularity among web app developers. In reality, most businesses will not face change that is so frequent, and they do not need to worry about continuous change deployment just because they are supporting DevOps.

DevOps does not equal "developers managing production"
DevOps means development and operations teams working together collaboratively to put the operations requirements about stability, reliability, and performance into the development practices, while at the same time bringing development into the management of the production environment (e.g.
by putting them on call, or by leveraging their development skills to help automate key processes). It doesn't mean a return to the laissez-faire "anything goes" model, where developers have unfettered access to the production environment 24/7 and can change things as and when they like.

DevOps eliminates traditional IT roles
If, in your DevOps environment, your developers suddenly need to be good system admins, change managers and database analysts, something went wrong. Treating DevOps as a movement that eliminates traditional IT roles will put too much strain on workers. The goal is to break down collaboration barriers, not to ask your developers to do everything. Specialized skills play a key role in supporting effective operations, and traditional roles remain valuable in DevOps.

About the author
Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.