
How-To Tutorials - DevOps

49 Articles

GitLab's new DevOps solution

Erik Kappelman
17 Jan 2018
5 min read
Can it be real? The complete DevOps toolchain integrated into one tool, one UI, and one process? GitLab seems to think so. GitLab has already made huge strides in centralizing the DevOps process into a single tool. Until now, most of the focus has been on creating a seamless development system, and operations have not been as important. What's new is the extension of the tool to cover the operations side of DevOps as well as the development side.

Let's talk a little about what DevOps is in order to fully appreciate the advances offered by GitLab. DevOps is essentially a holistic approach to software development, quality assurance, and operations. While each of these elements of software creation is distinct, they are all heavily reliant on the others to be effective. The DevOps approach acknowledges this interdependence and then tries to leverage it to increase productivity and enhance the final user experience. Two of the most talked-about elements of DevOps are continuous integration and continuous deployment.

Continuous integration and deployment

Continuous integration and deployment are aimed at continuously integrating changes to a codebase, potentially from multiple sources, and then continuously deploying these changes into production. These practices require a fairly sophisticated automation and testing framework to be really effective. There are plenty of tools for one or the other, but the notion behind GitLab is essentially that if you can drive both of these processes from the same UI, they become that much more efficient. GitLab has shown this to be true.

There is also the human side to consider: working out what tasks need to be performed, assigning those tasks to developers, and monitoring their progress. GitLab offers tools that help streamline this process as well. You can track issues and create issue boards to organize workflow, and these issue boards can be sliced in a number of different ways so that most imaginable organizational needs can be met.

Monitoring and delivery

So far, we've seen that DevOps is about bringing everything together into a smooth process, and GitLab wants that process to occur in one place. GitLab can help you from planning to deployment and everywhere in between. But GitLab isn't satisfied with stopping at deployment, and it shouldn't be. When we think about the three legs of DevOps (development, operations, and quality assurance and testing), what I've said about GitLab so far really only applies to the development leg. This is an unfortunately common problem with DevOps tools and organizational strategies: they seem to cater to developers and basically no one else. Maybe devs complain the most, I don't know.

GitLab has largely solved the DevOps problems between planning and deployment and, naturally, wants to move on to the monitoring and delivery of applications. This is a really exciting direction. After all, software is ultimately about making things happen. Sometimes it's easy to lose sight of this and focus only on the tools that make the software. It is tempting to view software development as inherently important, but it's really not; it's a process of making stuff for people to use. If you get too far away from that truth, things can get sticky. I think this is part of the reason the Ops side of DevOps is often overlooked.

Operations is concerned with managing the software out there in the wild. This includes dealing with network and hardware considerations and end users. GitLab wants operations to take place in the same UI as development. And why not? It's the same application, isn't it? And in addition to technical performance, what about how users are interacting with the application? If the application is somehow monetized, why shouldn't that information also be available in the same UI as everything else having to do with the application? Again, it's still the same application.

One tool to rule them all

If you take a minute to step back and appreciate the vision behind GitLab's current direction, I think you can see why this is so exciting. If GitLab succeeds in the long term at extending its reach into every element of an application's lifecycle, including user interactions, productivity will skyrocket. This idea isn't really new; the 'one tool to rule them all' isn't even that imaginative a concept. It's just that no one has ever really created this 'one tool.' I believe we are about to enter, or have already entered, a DevOps space race. GitLab is comfortably leading the pack, but it will need to keep working hard to stay there. I believe we will be getting the one tool to rule them all, and soon. The way things are looking, GitLab is going to be the one to bring it to us, but only time will tell.

Erik Kappelman wears many hats, including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.
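To make the continuous integration and deployment idea from this piece a little more concrete, here is a minimal, purely illustrative Python sketch of the control flow (pick up new changes, test them, deploy on green). It is a toy model, not GitLab's implementation; every function in it is a hypothetical stand-in.

```python
# Toy sketch of the CI/CD loop described above: integrate changes,
# test them, and deploy automatically when the tests pass.

def fetch_new_commits(repo_state):
    """Pretend to poll a repository; returns a list of new commit ids."""
    return repo_state.pop("pending", [])

def run_test_suite(commit):
    """Stand-in for an automated test framework; here every commit passes."""
    print(f"testing {commit} ... ok")
    return True

def deploy(commit):
    """Stand-in for a deployment step (e.g. shipping a build to production)."""
    print(f"deploying {commit} to production")

def ci_cd_cycle(repo_state):
    for commit in fetch_new_commits(repo_state):
        if run_test_suite(commit):   # continuous integration
            deploy(commit)           # continuous deployment
        else:
            print(f"build {commit} failed; stopping the pipeline")

if __name__ == "__main__":
    ci_cd_cycle({"pending": ["a1b2c3", "d4e5f6"]})
```

In a real pipeline, the stand-ins would be replaced by a runner executing the jobs defined in the project's CI configuration, with the sophisticated automation and testing framework the article describes doing the heavy lifting.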


Create a TeamCity project [Tutorial]

Gebin George
12 Jul 2018
3 min read
TeamCity is one of the most prominent tools used by DevOps professionals to perform continuous integration and delivery effectively, and it plays an important role in mobile-level DevOps implementations. In this article, we will see how to create a TeamCity project. This article is an excerpt from the book Mobile DevOps, written by Rohin Tak and Jhalak Modi.

Once the installation is done, the TeamCity web user interface opens in the browser and we can create a new TeamCity project there. To do so, follow these steps:

1. Once you have logged in to the TeamCity UI, click on Create project.
2. To connect to our project from GitHub, click on From GitHub on the next screen. This opens a popup with instructions for adding a TeamCity application to your GitHub account.
3. Click on the register TeamCity link; it takes you to the GitHub page where you can register a new OAuth app. Give the details of the application, the homepage URL, and the callback URL, and register the OAuth app.
4. Once you register, the next screen shows a Client ID and Client Secret. Copy these details, since they will be required for the TeamCity project.
5. Go back to TeamCity, put the Client ID and Client Secret in the required fields, and click Save.
6. Next, you need to do a one-time sign-in to allow TeamCity to use GitHub repositories. Click on Sign in to GitHub, then authorize the TeamCity app by clicking on Authorize app.
7. Once authorized, select the PhoneCallApp repository from the list of repositories shown in TeamCity.
8. On the next screen, TeamCity offers to create a new project from the selected URL. Give it a name and click Proceed.

This creates two things: a trigger in TeamCity, so that each code check-in you do triggers a build, and a build step generated automatically from the repository. We still need to configure the build steps manually, using the build scripts described in the Creating a build script section; use those scripts, in the order described there, to create the build steps in TeamCity. Your finished build configuration should consist of all the steps mentioned in the Creating a build script section.

Now your TeamCity continuous build is ready, and a trigger is already configured to perform this build on each code check-in, or whenever it finds any code changes in the repository. This finally provides you with an Android package that is ready to be distributed.

To summarize, we created a TeamCity project for Mobile DevOps. If you found this post useful, do check out the book Mobile DevOps to continuously improve your application development lifecycle.

Introduction to TeamCity
Getting Started with TeamCity
Jenkins 2.0: The impetus for DevOps Movement
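The check-in trigger created in the walkthrough above is, conceptually, just a hook that starts a build whenever the repository reports new changes. As a rough, hedged illustration of that push mechanism (and not of TeamCity's actual internals), here is a minimal Python stand-in that listens for a commit notification and kicks off a placeholder build:

```python
# Minimal illustration of a push-style build trigger: a tiny HTTP endpoint
# that pretends to start a build whenever a check-in notification arrives.
# This is a conceptual stand-in, not how TeamCity implements its triggers.
from http.server import BaseHTTPRequestHandler, HTTPServer

def start_build(payload: bytes) -> None:
    # Placeholder for launching the real build (e.g. invoking build scripts).
    print(f"check-in received ({len(payload)} bytes of payload); starting build...")

class CheckinHook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        start_build(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # A VCS webhook pointed at http://localhost:8000/ would land here.
    HTTPServer(("localhost", 8000), CheckinHook).serve_forever()
```

A pull-style trigger would instead poll the repository on a schedule; both approaches end in the same place, an automated build for every set of changes.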


Chef goes open source, ditching the Loose Open Core model

Richard Gall
02 Apr 2019
5 min read
Chef, the infrastructure automation tool, has today revealed that it is going completely open source. In doing so, the project has ditched the loose open core model. The news is particularly intriguing as it comes at a time when the traditional open source model appears to be facing challenges around its future sustainability. However, it would appear that, from Chef's perspective, the switch to a full open source license is being driven by a crowded marketplace where automation tools are finding it hard to gain a foothold inside organizations trying to automate their infrastructure. A further challenge for this market is what Chef has identified as 'The Coded Enterprise': essentially, technologically progressive organizations driven by an engineering culture where infrastructure is primarily viewed as code.

Read next: Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity

Why is Chef going open source?

As you might expect, there's more to Chef's decision than pure commercialism. To get a good understanding, it's worth picking apart Chef's open core model and how it was limiting the project.

The limitations of Open Core

The Loose Open Core model has open source software at its center but is wrapped in proprietary software. So, it's open at its core, but it is largely proprietary in how it is deployed and used by businesses. While at first glance this might make it easier to monetize the project, it also severely limits the project's ability to evolve and develop according to the needs of the people that matter: the people that use it. Indeed, one way of thinking about it is that the open core model positions your software as a product, something that is defined by product managers and lives and dies by its stickiness with customers. By going open source, your software becomes a project, something that is shared and owned by a community of people that believe in it.

Speaking to TechCrunch, Chef co-founder Adam Jacob said, "in the open core model, you're saying that the value is in this proprietary sliver. The part you pay me for is this sliver of its value. And I think that's incorrect... the value was always in the totality of the product."

Read next: Chef Language and Style

Removing the friction between product and project

Jacob published an article on Medium expressing his delight at the news. It's an instructive look at how Chef has been thinking about itself and the challenges it faces. "Deciding what's in, and what's out, or where to focus, was the hardest part of the job at Chef," Jacob wrote. "I'm stoked nobody has to do it anymore. I'm stoked we can have the entire company participating in the open source community, rather than burning out a few dedicated heroes. I'm stoked we no longer have to justify the value of what we do in terms of what we hold back from collaborating with people on."

So, what's the deal with the Chef Enterprise Automation Stack?

As well as announcing that Chef will be open sourcing its code, the organization also revealed that it is bringing together Chef Automate, Chef Infra, Chef InSpec, Chef Habitat, and Chef Workstation under one single solution: the Chef Enterprise Automation Stack. The point here is to simplify Chef's offering to its customers and make it easier for them to properly build and automate reliable infrastructure. Corey Scobie, SVP of Product and Engineering, said that "the introduction of the Chef Enterprise Automation Stack builds on [the switch to open source]... aligning our business model with our customers' stated needs through Chef software distribution, services, assurances and direct engagement. Moving forward, the best, fastest, most reliable way to get Chef products and content will be through our commercial distributions."

So, essentially, the Chef Enterprise Automation Stack will be the primary Chef distribution available commercially, sitting alongside the open source project.

What does all this mean for Chef customers and users?

If you're a Chef user or have any questions or concerns, the team has put together a very helpful FAQ. The key points for Chef users: existing commercial and non-commercial users don't need to do anything; everything will continue as normal. However, anyone else using current releases should be aware that support will be removed from those releases in 12 months' time. The team has clarified that "customers who choose to use our new software versions will be subject to the new license terms and will have an opportunity to create a commercial relationship with Chef, with all of the accompanying benefits that provides."

A big step for Chef: could it help determine the evolution of open source?

This is a significant step for Chef, and it will be of particular interest to its users. But even for those who have no interest in Chef, it's a story that indicates there's a lot of life in open source despite the challenges it faces. It'll certainly be interesting to see whether Chef makes it work and what impact it has on the configuration management marketplace.


DevOps Concepts and Assessment Framework

Packt
05 Jul 2017
21 min read
In this article by Mitesh Soni, the author of the book DevOps Bootcamp, we will get a quick understanding of DevOps from 10,000 feet, with real-world examples of how to prepare for changing a culture. This will allow us to build a foundation in DevOps concepts by discussing what our goals are, as well as how to get buy-in from organization management. Essentially, we will try to cover DevOps practices that can make application lifecycle management easy and effective.

It is very important to understand that DevOps is not a framework, a tool, or a technology. It is more about the culture of an organization: the way people work in an organization, using defined processes and automation tools to make daily work more effective and less manual. To understand the basic importance of DevOps, we will cover the following topics in this article:

- Need for DevOps
- How DevOps culture can evolve
- Importance of PPT: People, Process, and Technology
- Why DevOps is not all about tools
- DevOps assessment questions

Need for DevOps

There is a famous quote by Harriet Tubman, which you can find at http://harriettubmanbiography.com: "Every great dream begins with a dreamer. Always remember, you have within you the strength, the patience, and the passion to reach for the stars to change the world."

Change is the law of life, and that applies to organizations as well. If any organization or individual looks only at past or present patterns, cultures, or practices, they are certain to miss the future's best practices. In the dynamic IT world, we need to keep pace with the evolution of technology. As George Bernard Shaw said: "Progress is impossible without change, and those who cannot change their minds cannot change anything."

Here we are focusing on changing the way we manage the application lifecycle. The important question is whether we really need this change. Do we really need to go through the pain of it? The answer is yes. One may argue that such a change in business or culture must not be forced, and that is true. But organizations face real pain points in application lifecycle management in the modern world, and considering the changing patterns and the competitive business environment, improving application lifecycle management is the need of the hour. Are there any factors in modern times that can help us improve it? Yes: cloud computing has changed the game. It has opened doors for many path-breaking solutions and innovations. Let's understand what cloud computing is, and then we will get an overview of DevOps and see how the cloud is useful for DevOps.

Overview of Cloud Computing

Cloud computing is a type of computing that provides multi-tenant or dedicated computing resources, such as compute, storage, and network, delivered to cloud consumers on demand. It comes in different flavors, including cloud deployment models and cloud service models, and its most important characteristic is its pay-as-you-go pricing model.
Cloud deployment models describe the way cloud resources are deployed: behind the firewall and on premise, exclusively for a specific organization (Private Cloud); available to all organizations and individuals (Public Cloud); available to a specific set of organizations that share similar interests or requirements (Community Cloud); or a combination of two or more deployment models (Hybrid Cloud).

Cloud service models describe the way cloud resources are made available to cloud consumers. They can come in the form of pure infrastructure, where virtual machines are accessible to and controlled by the cloud consumer or end user (Infrastructure as a Service, or IaaS); a platform, where runtime environments are provided and the installation and configuration of all the software needed to run the application is managed by the cloud service provider (Platform as a Service, or PaaS); or Software as a Service (SaaS), where the whole application is made available by the cloud service provider, which remains responsible for the infrastructure and platform. Many other service models have emerged over the last few years, but IaaS, PaaS, and SaaS are the ones based on the National Institute of Standards and Technology (NIST) definition.

Cloud computing has a few significant characteristics: multi-tenancy; pay-as-you-use billing, similar to an electricity or gas connection; on-demand self-service; resource pooling for better utilization of compute, storage, and network resources; rapid elasticity for scaling resources up and down automatically based on need; and measured service for billing.

Over the years, usage of the different cloud deployment models has varied with the use case. Initially, the public cloud was used for applications considered non-critical, while the private cloud was used for critical applications where security was a major concern. Hybrid and public cloud usage evolved over time with experience and growing confidence in the services provided by cloud service providers. Similarly, usage of the different cloud service models has varied with use cases and flexibility: IaaS was the most popular in the early days, but PaaS is catching up in maturity and ease of use with enterprise capabilities.

Overview of DevOps

DevOps is all about the culture of an organization, its processes, and technology to develop communication and collaboration between development and IT operations teams, in order to manage the application lifecycle more effectively than existing ways of doing it. We often work based on patterns, finding reusable solutions to similar kinds of problems or challenges. Over the years, through achievements and failed experiments, best practices, automation scripts, configuration management tools, and methodologies become an integral part of culture. This helps define practices for a way of designing, developing, testing, setting up resources, managing environments, managing configuration, deploying an application, gathering feedback, improving code, and innovating. Implementing DevOps practices brings a number of visible benefits across the application lifecycle.
DevOps culture can be considered an innovative package that integrates the Dev and Ops teams in an effective manner, with components such as continuous build integration, continuous testing, cloud resource provisioning, continuous delivery, continuous deployment, continuous monitoring, continuous feedback, continuous improvement, and continuous innovation, making application delivery faster, as agile methodology demands. However, it is not only the development and operations teams that are involved: the testing team, business analysts, build engineers, the automation team, the cloud team, and many other stakeholders take part in this exercise of evolving the existing culture. DevOps culture is not much different from an organization's culture, with its shared values and behavioral aspects. It needs adjustments in mindsets and processes to align with new technology and tools.

Challenges for the Development and Operations Teams

There are reasons why this scenario has arisen, and why DevOps is trending upward and is the talk of the town in all IT-related discussions.

Challenges for the development team: developers are enthusiastic and willing to adopt new technologies and approaches to solve problems, but they face many challenges, including the following:

- The competitive market creates pressure for on-time delivery
- They have to take care of production-ready code management and new feature implementation
- The release cycle is often long, so the development team has to make assumptions before the application deployment finally takes place; in such a scenario, it takes more time to fix issues that occur during deployment in the staging or production environment

Challenges for the operations team: the operations team is always careful about changing resources or using new technologies or approaches, because it wants stability. However, it too faces many challenges, including the following:

- Resource contention: it's difficult to handle increasing resource demands
- Redesigning or tweaking: this is needed to run the application in the production environment
- Diagnosing and rectifying: the team is supposed to diagnose and rectify issues after application deployment, in isolation

Considering all the challenges faced by the development and operations teams, how should we improve existing processes, make use of automation tools to make those processes more effective, and change people's mindsets? Let's see in the next section how to evolve a DevOps culture in the organization and improve efficiency and effectiveness.

How can DevOps culture evolve?

Inefficient estimation, long time to market, and other issues led to a change in the waterfall model, resulting in the agile model. Evolving a culture is not a time-bound or overnight process. It can be a step-by-step, stage-wise process that can be achieved without dependencies between stages: we can achieve continuous integration without cloud provisioning, cloud provisioning without configuration management, and continuous testing without any other DevOps practice. The following are the different stages on the way to achieving DevOps practices.

Agile Development

Agile development, or agile-based methodologies, are useful for building an application by empowering individuals and encouraging interactions, giving importance to working software, collaborating with customers (using feedback for improvement in subsequent iterations), and responding to change in an efficient manner.
One of the most attractive benefits of agile development is continuous delivery in short time frames or, in agile terms, sprints. At the same time, the agile approach to application development, improvements in technology, and disruptive innovations and approaches have created a gap between development and operations teams.

DevOps

DevOps attempts to fill these gaps by developing a partnership between the development and operations teams. The DevOps movement emphasizes communication, collaboration, and integration between software developers and IT operations. DevOps promotes collaboration, and collaboration is facilitated by automation and orchestration in order to improve processes. In other words, DevOps essentially extends the continuous development goals of the agile movement to continuous integration and release. DevOps is a combination of agile practices and processes leveraging the benefits of cloud solutions. Agile development and testing methodologies help us meet the goals of continuously integrating, developing, building, deploying, testing, and releasing applications.

Build Automation

An automated build helps us create an application build using build automation tools such as Gradle, Apache Ant, and Apache Maven. An automated build process includes activities such as compiling source code into class or binary files, providing references to third-party library files, providing the paths of configuration files, packaging class or binary files into package files, executing automated test cases, deploying package files on local or remote machines, and reducing the manual effort involved in creating the package file.

Continuous Integration

In simple words, continuous integration (CI) is a software engineering practice where each check-in made by a developer is verified by either a pull mechanism (executing an automated build at a scheduled time) or a push mechanism (executing an automated build when changes are saved in the repository). This step is followed by executing unit tests against the latest changes available in the source code repository. Continuous integration is a popular DevOps practice that requires developers to integrate code into a code repository, such as Git or SVN, multiple times a day to verify the integrity of the code. Each check-in is then verified by an automated build, allowing teams to detect problems early.

Cloud Provisioning

Cloud provisioning has opened the door to treating infrastructure as code, which makes the entire process extremely efficient and effective, because we are automating a process that used to involve a huge amount of manual intervention. The pay-as-you-go billing model has made the required resources more affordable, not only to large organizations but also to mid- and small-scale organizations and individuals. It encourages improvement and innovation in areas where resource constraints, cost, and maintenance previously stopped organizations from going the extra mile. Once we have agility in infrastructure resources, we can think about automating the installation and configuration of the packages required to run the application.

Configuration Management

Configuration management (CM) manages changes in the system or, to be more specific, in the server runtime environment. There are many tools available on the market with which we can achieve configuration management; popular ones include Chef, Puppet, Ansible, and Salt. Let's consider an example where we need to manage multiple servers with the same kind of configuration.
For example, suppose we need to install Tomcat on each server. What if we then need to change the port on all servers, update some packages, or grant rights to some users? Any kind of modification in this scenario is a manual and therefore error-prone process. Because the same configuration is used for all the servers, automation can be useful here.

Continuous Delivery

Continuous delivery and continuous deployment are often used interchangeably, but there is a small difference between them. Continuous delivery is the process of deploying an application to any environment in an automated fashion and providing continuous feedback to improve its quality. The automated approach itself may not change between continuous delivery and continuous deployment; what changes is the approval process and some other minor details.

Continuous Testing and Deployment

Continuous testing is a very important phase of the end-to-end application lifecycle management process. It involves functional testing, performance testing, security testing, and so on. Selenium, Appium, Apache JMeter, and many other tools can be used for this. Continuous deployment, on the other hand, is all about deploying an application with the latest changes to the production environment.

Continuous Monitoring

Continuous monitoring is the backbone of the end-to-end delivery pipeline, and open source monitoring tools are like the toppings on an ice cream scoop. It is desirable to have monitoring at almost every stage in order to have transparency about all the processes; it also helps us troubleshoot quickly. Monitoring should be a well thought-out implementation of a plan.

We need to understand that this is a phased approach, and it is not necessary to automate every phase at once. It is more effective to take one DevOps practice at a time, implement it, and realize its benefits before implementing the next one. That way, we can safely assess the improvements from changing the culture in the organization and remove manual effort from application lifecycle management.

Importance of PPT: People, Process, and Technology

PPT is an important word in any organization. Wait, we are not talking about a PowerPoint presentation; here, we are focusing on people, processes, and tools/technology. Let's understand why and how they are important in changing the culture of any organization.

People

As the famous quote from Jack Canfield goes: "Successful people maintain a positive focus in life no matter what is going on around them. They stay focused on their past successes rather than their past failures, and on the next action steps they need to take to get them closer to the fulfillment of their goals rather than all the other distractions that life presents to them."

A curious question might be: why do people matter? In one sentence: because we are trying to change a culture. People are an essential part of any culture, and only people can drive the change, or change themselves to adapt to new processes, define new processes, and learn new tools and technologies. Let's understand how and why with the "Formula for Change". David Gleicher created the Formula for Change in the early 1960s (according to the references available on Wikipedia), and Kathie Dannemiller refined it in 1980. This formula provides a model to assess the relative strengths affecting the likely success of organizational change initiatives.
Gleicher's original version: C = (A x B x D) > X, where C is change, A is dissatisfaction with the status quo, B is a desired clear state, D is the practical steps to the desired state, and X is the cost of the change.

Dannemiller's version: D x V x F > R, where D, V, and F must all be present for organizational change to take place. D is dissatisfaction with how things are now, V is the vision of what is possible, and F is the first concrete steps that can be taken towards the vision. If the product of these three factors is greater than R, the resistance, then change is possible.

Essentially, this implies that there has to be strong dissatisfaction with existing things or processes, a vision of what is possible with new trends, technologies, and innovations with respect to the market scenario, and concrete steps that can be taken towards achieving that vision. For more details on the Formula for Change, you can visit the Wikipedia page: https://en.wikipedia.org/wiki/Formula_for_change#cite_note-myth-1

If it comes to sharing an experience, I would say it is very important to train people to adopt a new culture. It is a game of patience: we can't change people's mindsets overnight, and we need to understand them first before changing the culture. I often see job openings asking for DevOps knowledge or DevOps engineers, and I feel that this capability should not be imported; instead, people should be trained in the existing environment, changing things gradually to manage resistance. We don't need a special DevOps team; we need more communication and collaboration between developers, test teams, automation enablers, and the cloud or infrastructure team. It is essential for all of them to understand each other's pain points. In a number of organizations I have worked in, we used to have a Center of Excellence (COE) in place to manage new technologies, innovations, or culture. As automation enablers and part of a DevOps team, we should work as facilitators only, not as another silo.

Processes

There is a famous quote from Tom Peters that says: "Almost all quality improvement comes via simplification of design, manufacturing... layout, processes, and procedures."

Quality is extremely important when we are evolving a culture. We need processes and policies for doing things in a proper way, standardized across projects, so that sequences of operations, constraints, rules, and so on are well defined and success can be measured. We need to set processes for the following things:

- Agile planning
- Resource planning and provisioning
- Configuration management
- Role-based access control to cloud resources and other tools used in automation
- Static code analysis: rules for programming languages
- Testing methodologies and tools
- Release management

These processes are also important for measuring success in the process of evolving a DevOps culture.

Technology

There is a famous quote from Steve Jobs that says: "Technology is nothing. What's important is that you have a faith in people, that they're basically good and smart, and if you give them tools, they'll do wonderful things with them."

Technology helps people and organizations bring creativity and innovation to changing the culture. Without technology, it is difficult to achieve speed and effectiveness in daily, routine automation operations. Cloud computing, configuration management tools, and build pipelines are a few of the things that are useful for resource provisioning, installing runtime environments, and orchestration. Essentially, technology helps to speed up different aspects of application lifecycle management.
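Returning briefly to the People section, the Dannemiller version of the Formula for Change is simple enough to express as arithmetic. The following is a small, hedged Python sketch; the 0-to-10 scoring scale and the example numbers are entirely invented for illustration.

```python
# Toy evaluation of the Dannemiller "Formula for Change": D x V x F > R.
# The scoring scale and the example values below are invented for illustration.

def change_is_possible(dissatisfaction: float, vision: float,
                       first_steps: float, resistance: float) -> bool:
    """Return True when D x V x F outweighs the resistance R."""
    return dissatisfaction * vision * first_steps > resistance

# Example: scores on an arbitrary 0-10 scale for a team considering DevOps adoption.
d, v, f, r = 7, 6, 4, 150
print(f"D x V x F = {d * v * f}, R = {r}, change possible: {change_is_possible(d, v, f, r)}")
```

The point of the formula survives the toy treatment: if any one of the three factors is close to zero, the product collapses and resistance wins.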
Why DevOps is not all about tools

Tools are not the most important factor in changing the culture of an organization. The reason is simple: no matter what technology we use, we will still perform continuous integration, cloud provisioning, configuration management, continuous delivery, continuous deployment, continuous monitoring, and so on. Different tool sets can be used in each category, but they all do similar things; only the way a tool performs an operation differs, while the outcome is the same. The following are some tools, by category:

- Build automation: NAnt, MSBuild, Maven, Ant, Gradle
- Source code repository: Git, SVN
- Static code analysis: Sonar, PMD
- Continuous integration: Jenkins, Atlassian Bamboo, VSTS
- Configuration management: Chef, Puppet, Ansible, Salt
- Cloud platforms: AWS, Microsoft Azure
- Cloud management tools: RightScale
- Application deployment: shell scripts, plugins
- Functional testing: Selenium, Appium
- Load testing: Apache JMeter
- Artifact repositories: Artifactory, Nexus, Fabric

Different tools are useful at different stages for different operations; this may change based on the number of environments or the number of DevOps practices we follow in different organizations. If we need to categorize tools based on DevOps best practices, we can also split them into open source and commercial categories. The following are just sample examples, comparing an open source stack with the IBM UrbanCode and Electric-Cloud commercial stacks:

- Build tools: Ant, Maven, or MSBuild in all three stacks
- Code repositories: Git or Subversion (open source); Git, Atlassian Stash, Subversion, or StarTeam (IBM UrbanCode); Git, Subversion, or StarTeam (Electric-Cloud)
- Code analysis tools: Sonar in all three stacks
- Continuous integration: Jenkins (open source); Jenkins or Atlassian Bamboo (IBM UrbanCode); Jenkins or ElectricAccelerator (Electric-Cloud)
- Continuous delivery: Chef (open source); Artifactory and IBM UrbanCode Deploy (IBM UrbanCode); ElectricFlow (Electric-Cloud)

In this book, we will focus on the open source category as well as on commercial tools. We will use Jenkins and Visual Studio Team Services for all the major automation and orchestration related activities.

DevOps Assessment Questions

DevOps is a culture, and we are very much aware of that fact. However, before implementing automation, putting processes in place, and evolving the culture, we need to understand the existing state of the organization's culture and whether we need to introduce new processes or automation tools. We need to be very clear that the goal is to make the existing culture more efficient rather than to import a culture. Building an assessment framework is difficult, but we can provide some questions and hints that make it easier to create one. Create categories for the questions you want to ask and collect responses for a specific application. A few sample questions:

- Do you follow agile principles, Scrum, or Kanban?
- Do you use any tool to keep track of Scrum or Kanban?
- What is the normal sprint duration (two weeks or three weeks)?
- Is there a definitive and explicit definition of done for all phases of work?
- Are you using any source code repository? Which one do you use?
- Are you using any build automation tool, such as Ant, Maven, or Gradle?
- Are you using any custom script for build automation?
- Do you have Android- and iOS-based applications?
- Are you using any tools for static code analysis?
- Are you using multiple environments for application deployment for different teams, such as dev, test, stage, pre-prod, and prod?
- Are you using on-premise infrastructure or cloud-based infrastructure?
- Are you using any configuration management tool or script for installing application packages or runtime environments?
- Are you using automated scripts to deploy applications in prod and non-prod environments?
- Do you require manual approval before an application release into any specific environment?
- Are you using any orchestration tool or script for application lifecycle management?
- Are you using automation tools for functional testing, load testing, security testing, and mobile testing?
- Are you using any tools for application and infrastructure monitoring?
- How are defects logged, triaged, and prioritized for resolution?
- Are you using notification services to let stakeholders know the status of application lifecycle management?

Once the questions are ready, prepare the possible responses and, based on those responses, decide a rating for each answer given to the questions above. Make the framework flexible, so that if we change any question in any category, the rest is managed automatically. Once ratings are assigned, capture the responses and calculate overall ratings by introducing different conditions and intelligence into the framework. Create category-wise final ratings and build different kinds of charts from them to improve their readability. The important thing to note here is the significance of the organization's expertise in each area of application lifecycle management; that gives the assessment framework a new dimension, adding intelligence and making it more effective.

Summary

In this article, we set many goals to achieve throughout this book. We covered continuous integration, resource provisioning in the cloud environment, configuration management, continuous delivery, continuous deployment, and continuous monitoring. As Tony Robbins says, "Setting goals is the first step in turning the invisible into the visible."

We have seen how cloud computing has changed the way innovation was perceived earlier and how feasible it has become now. We have also covered the need for DevOps and all the different DevOps practices in brief. People, processes, and technology are also important in this whole process of changing the existing culture of an organization, and we touched on the reasons why. Tools are important but not the show-stopper; any toolset can be utilized, and changing a culture doesn't need a specific set of tools. We also discussed a DevOps assessment framework in brief, which will help you get going on the path of changing culture.

Further resources on this subject: Introduction to DevOps [article]; DevOps Tools and Technologies [article]; Command Line Tools for DevOps [article]
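To make the scoring step of the assessment framework described above a little more concrete, here is a minimal, hedged Python sketch. The categories, questions, ratings, and scale are invented placeholders for illustration, not the book's actual framework.

```python
# Toy sketch of the assessment scoring idea: map rated responses to
# category-wise scores and an overall rating. All data here is invented;
# a real framework would load its own questions, weights, and conditions.
from collections import defaultdict

# (category, question, rating on a hypothetical 0-5 scale)
responses = [
    ("Agile",                    "Do you follow Scrum or Kanban?",           4),
    ("Agile",                    "Is there an explicit definition of done?", 2),
    ("Continuous Integration",   "Is every check-in built automatically?",   5),
    ("Configuration Management", "Are runtime environments scripted?",       1),
]

def category_scores(items):
    """Average the ratings within each category."""
    totals, counts = defaultdict(int), defaultdict(int)
    for category, _question, rating in items:
        totals[category] += rating
        counts[category] += 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

scores = category_scores(responses)
for category, score in sorted(scores.items()):
    print(f"{category:<26} {score:.1f} / 5")
print(f"overall: {sum(scores.values()) / len(scores):.1f} / 5")
```

The category-wise averages are the raw material for the charts the article mentions; weighting by each area's importance to the organization would be the natural next refinement.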


Puppet announces the public beta of Project Nebula

Savia Lobo
10 Oct 2019
3 min read
Today, Puppet announced the public beta of Project Nebula at Puppetize PDX, a two-day event (October 9-10) featuring user-focused DevOps and infrastructure delivery talks and hands-on workshops. Project Nebula is simplified workflow automation for the continuous deployment of cloud-native applications and infrastructure. It is designed for teams that are adopting cloud-native and serverless technologies and need an end-to-end workflow management system.

Also read: Puppet's 2019 State of DevOps Report highlights that integrating security into DevOps practices results in higher business outcomes

Why Project Nebula?

Puppet worked closely with its private beta participants to understand their deployment workflows and pain points. On interviewing these participants, Puppet found that they want to adopt cloud-native technologies, but face multiple challenges in adopting containers, serverless infrastructure, microservices, and observability, even for simple cloud-native applications. A major roadblock, participants said, is the lack of simple automation for composing multiple tools together into an end-to-end deployment covering infrastructure provisioning, application deployment, and notifications. Another roadblock highlighted is the lack of a cohesive platform that multiple teams can use to share workflows and best practices and build them into their own deployments. "In-house efforts to build a deployment platform like this can take years, incur large maintenance and support costs, and often require specialized skill sets that many companies do not have today," the company states. Project Nebula aims to eliminate these roadblocks and give teams a consistent, easy-to-use experience for deploying cloud-native apps in a safe, secure, and continuous manner.

Listen: Puppet's VP of Ecosystem Engineering Nigel Kersten talks about key DevOps challenges [Podcast]

A few features in Project Nebula

With a focus on ease of use and improved productivity, Project Nebula provides a single place to build, provision, and deploy cloud-native applications. Other notable features include:

- Built-in example workflows, to help users get started with their deployments rather than starting from a blank slate
- Support for more than 20 of the most popular cloud-native deployment tools as configurable steps within a deployment, including Terraform, CloudFormation, Helm, Kubectl, Kustomize, and more
- Intuitive visualization that provides a bird's-eye view of the entire deployment workflow
- Easy-to-compose deployment workflows that are checked into the source control repository, eliminating the need to write messy, ad hoc bash scripts

Know more about Puppet's Project Nebula in detail on its official website.

Puppet launches Puppet Remediate, a vulnerability remediation solution for IT Ops
Puppet announces updates in a bid to help organizations manage their "automation footprint"
"This is John. He literally wrote the book on Puppet": An Interview with John Arundel


Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNativeCon

Prasad Ramesh
14 Dec 2018
4 min read
In a stream hosted earlier this week by The New Stack, Kelsey Hightower, developer advocate for Google Cloud Platform, talked about the serverless and security aspects of Kubernetes. The stream was from KubeCon + CloudNativeCon 2018.

What are you exploring right now with respect to serverless?

There are many managed services these days. Databases, security, and so on are fully managed, that is, serverless. People have been on this trajectory for a while if you consider DNS, email, and even Salesforce. Now we have serverless because managed services are 'eating that world as well', that world being the server-side world and related workloads.

How are managed services eating the server-side world?

If someone has to build and run an API, one approach would be to use Kubernetes: manage the cluster, build the container, run it on Kubernetes, and manage that. Even if it is a fully managed cluster, you may still have to manage the things around Kubernetes. Another approach is to work at a higher level of abstraction. Serverless is often coupled with FaaS (Function as a Service); resources are abstracted more these days. Hightower offers a test: "If I walk up to a platform and the delta between me and my code is short, you're probably closer to the serverless mindset." This is different from creating a VM, installing something, configuring something, and then running some code, which is not really serverless.

Serverless in a Kubernetes context

The point of view should be: can we improve the experience on Kubernetes by adopting some things from serverless? You can add a layer that does functions, so developers can stop worrying about containers and focus on the source. The bigger question is who autoscales the whole cluster. Kubernetes plus one additional layer can't really be called serverless, but it is going in that direction. Over time, if you do enough that people don't have to think about, or even know, that Kubernetes is there, you're getting closer to being truly serverless.

Security in Kubernetes

Hightower loves the granular controls of serverless technologies.

Comparing the serverless security model to other models

For a long time, the industry has been trying to follow a least-privilege approach: limiting the access of applications so that each can perform only the specific actions it requires. If one server is compromised and it does not have access to anything else, the effects are isolated. The Kubernetes approach can be different. Cloud providers try to make sure that the credentials needed to do important things are segmented across VMs, cloud functions, App Engine, or Kubernetes. Imagine instead that Kubernetes is where everything lives free: rather than one machine being taken down, it is now easier for the whole cluster to be taken down in one shot. This is called 'broadening the blast radius'. If you have Kubernetes and you give it the keys to everything in your cluster, then everything is compromised when the Kubernetes API is compromised. Having just one cluster trades off on security.

Another approach to serverless security

A different security model is one where you explicitly grant only the credentials that may be needed; anything beyond that is simply not allowed. You can still go wrong on serverless, but the system is better defined in ways that limit what can be done, and it is easier to secure when the attack surface is smaller. For serverless security, the same engineering principles apply; you just have to apply them to these new platforms, which requires knowledge of what those platforms are doing. Admins simply have a different layer of abstraction to which they may add additional security. As more people use a system, more flaws are continuously found; it takes a community to identify flaws and patch them, so as a community matures, dedicated security researchers emerge and patch flaws before they can be exploited.

To see the complete talk, in which Hightower discusses his views on what he is working on, go to The New Stack YouTube channel.

DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes
NeuVector upgrades Kubernetes container security with the release of Containerd and CRI-O run-time support
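The 'blast radius' argument above is easy to see in a toy model. The sketch below is purely illustrative (the resources and tokens are made up); it simply contrasts one all-powerful credential with narrowly scoped ones.

```python
# Toy model of credential scoping, illustrating the "blast radius" point above.
# The resources and tokens are invented for illustration only.

RESOURCES = {"billing-db", "user-db", "image-registry", "secrets-store"}

# One broad credential vs. several narrowly scoped ones.
broad_token = {"scope": RESOURCES}                 # "keys to everything"
scoped_tokens = [
    {"scope": {"image-registry"}},                 # CI runner
    {"scope": {"user-db"}},                        # API service
]

def blast_radius(compromised_token):
    """Return the set of resources an attacker reaches with this token."""
    return compromised_token["scope"] & RESOURCES

print("broad token compromised  ->", sorted(blast_radius(broad_token)))
print("scoped token compromised ->", sorted(blast_radius(scoped_tokens[0])))
```

Compromising the broad token exposes every resource; compromising a scoped token exposes only the one thing it was allowed to touch, which is the least-privilege property the discussion is about.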

Announcing Docker Enterprise 3.0 Public Beta!

Savia Lobo
02 May 2019
3 min read
Update: On July 22, 2019, the Docker team announced that Docker Enterprise 3.0 is generally available, adding that more than 2,000 people had tried the Docker Enterprise 3.0 public beta program.

On April 24, the team at Docker announced Docker Enterprise 3.0, an end-to-end container platform that enables developers to quickly build and share any type of application, from legacy to cloud-native, and securely run them anywhere, from hybrid cloud to the edge. It is now available in public beta. Docker Enterprise 3.0 delivers new desktop capabilities, advanced development productivity tools, a simplified and secure Kubernetes stack, and a managed service option, positioning Docker Enterprise 3.0 as a platform for digital transformation. Jay Lyman, Principal Analyst at 451 Research, said, "Docker's new Enterprise 3.0 promises to automate the 'development to production' experience with new tooling that aims to reduce the friction between dev and ops teams."

What can you do with the new Docker Enterprise 3.0?

Integrated Docker Desktop Enterprise

Docker Desktop Enterprise provides a consistent development-to-production experience with a set of automation tools. This makes it possible to start on the developer desktop, deliver an integrated and secure image registry with access to the Hub ecosystem, and then deploy to an enterprise-ready, Kubernetes-conformant environment.

Docker Kubernetes Service (DKS) can simplify the scaling and deployment of applications

Compatible with Docker Compose, Kubernetes YAML, and Helm charts, DKS provides an automated and repeatable way to install, configure, manage, and scale Kubernetes-based applications across hybrid and multi-cloud environments. DKS includes enhanced security, access controls, and automated lifecycle management, bringing a new level of security to Kubernetes that integrates seamlessly with the Docker Enterprise platform. Customers will also have the option to use Docker Swarm Services (DSS) as part of the platform's orchestration services.

Docker Applications for high-velocity innovation

Docker Applications are based on the CNAB open standard. They remove friction between Dev and Ops by enabling teams to collaborate on an application by defining a group of related containers that work together to form that application. They also eliminate configuration overhead by integrating and automating the creation of Docker Compose and Kubernetes YAML files, Helm charts, and so on. With Application Templates, Application Designer, and Version Packs, Docker Applications make flexible deployment across different environments possible, delivering on the 'code once, deploy anywhere' promise.

With the announcement of Docker Enterprise 3.0, Docker also introduced Docker Enterprise-as-a-Service, a fully managed service on-premise or in the cloud. To know more about this news in detail, head over to Docker's official announcement.

DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Are Debian and Docker slowly losing popularity?
Creating a Continuous Integration commit pipeline using Docker [Tutorial]


KubeCon + CloudNativeCon EU 2019 highlights: Microsoft’s Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!

Savia Lobo
22 May 2019
7 min read
The KubeCon+CloudNativeCon 2019 is live (May 21- May 23) at the Fira Gran Via exhibition center in Barcelona, Spain. This conference has a huge assemble of announcements for topics including Kubernetes, DevOps, and cloud-native application. There were many exciting announcements from Microsoft, Google, The Cloud Native Computing Foundation, and more!! Let’s have a brief overview of each of these announcements. Microsoft Kubernetes Announcements: Service Mesh Interface(SMI), Visual Studio Code Kubernetes extension 1.0, Virtual Kubelet 1.0, and Helm 3 alpha Service Mesh Interface(SMI) Microsoft launched the Service Mesh Interface (SMI) specification, the company’s new community project for collaboration around Service Mesh infrastructure. SMI defines a set of common, portable APIs that provide developers with interoperability across different service mesh technologies including Istio, Linkerd, and Consul Connect. The Service Mesh Interface provides: A standard interface for meshes on Kubernetes A basic feature set for the most common mesh use cases Flexibility to support new mesh capabilities over time Space for the ecosystem to innovate with mesh technology To know more about the Service Mesh Interface, head over to Microsoft’s official blog. Visual Studio Code Kubernetes extension 1.0, Virtual Kubelet 1.0, and first alpha of Helm 3 Microsoft released its Visual Studio Code’s open source Kubernetes extension version 1.0. The extension brings native Kubernetes integration to Visual Studio Code, and is fully supported for production management of Kubernetes clusters. Microsoft has also added an extensibility API that makes it possible for anyone to build their own integration experiences on top of Microsoft’s baseline Kubernetes integration. Microsoft also announced Virtual Kubelet 1.0. Brendan Burns, Kubernetes cofounder and Microsoft distinguished engineer said, “The Virtual Kubelet represents a unique integration of Kubernetes and serverless container technologies, like Azure Container Instances. We developed it and in the context of the Cloud Native Computing Foundation, where it’s a sandbox project.” He further added, “With 1.0, we’re saying ‘It’s ready.’ We think we’ve done all the work that we need in order for people to take production level dependencies on this project.” Microsoft also released the first alpha of Helm 3. Helm is the defacto standard for packaging and deploying Kubernetes applications. Helm 3 is simpler, supports all the modern security, identity, and authorization features of today’s Kubernetes. Helm 3 allows users to revisit and simplify Helm’s architecture, due to the growing maturity of Kubernetes identity and security features, like role-based access control (RBAC), and advanced features, such as custom resource definitions (CRDs). Know more about Helm 3 in detail on Microsoft’s official blog post. Google announces enhancements to Google Kubernetes Engine; Stackdriver Kubernetes Engine Monitoring ‘generally available’ On the first day of the KubeCon+CloudNative Con 2019, yesterday, Google announced the three release channels for its Google Kubernetes Engine (GKE), Rapid, Regular and Stable. 
Google, in its official blog post states, “Each channel offers different version maturity and freshness, allowing developers to subscribe their cluster to a stream of updates that match risk tolerance and business requirements.” This new feature will be launched into alpha with the first release in the Rapid channel, which will give developers early access to the latest versions of Kubernetes. Google also announced the general availability of Stackdriver Kubernetes Engine Monitoring, a tool that gives users a GKE observability (metrics, logs, events, and metadata) all in one place, to help provide faster time-to-resolution for issues, no matter the scale. To know more about the three release channels and the Stackdriver Kubernetes Engine Monitoring in detail, head over to Google’s official blog post. Cloud Native Foundation announcements: Announcing Harbor 1.8,  launches a new online course ‘Cloud Native Logging with Fluentd’, Intuit Inc. wins the CNCF End User Award, and Kong Inc. is now a Gold Member Harbor 1.8 The VMWare team released Harbor 1.8, yesterday, with new features and improvements, including enhanced automation integration, security, monitoring, and cross-registry replication support. Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor 1.8 also brings various  other capabilities for both administrators and end users: Health check API, which shows detailed status and health of all Harbor components. Harbor extends and builds on top of the open source Docker Registry to facilitate registry operations like the pushing and pulling of images. In this release, we upgraded our Docker Registry to version 2.7.1 Support for defining cron-based scheduled tasks in the Harbor UI. Administrators can now use cron strings to define the schedule of a job. Scan, garbage collection, and replication jobs are all supported. API explorer integration. End users can now explore and trigger Harbor’s API via the Swagger UI nested inside Harbor’s UI. Enhancement of the Job Service engine to include internal webhook events, additional APIs for job management, and numerous bug fixes to improve the stability of the service. To know more about this release, read Harbor 1.8 official blogpost. A new online course on ‘Cloud Native Logging with Fluentd’ The Cloud Native Computing Foundation and The Linux Foundation have together designed a new, self-paced and hands-on course Cloud Native Logging with Fluentd. This course will provide users with the necessary skills to deploy Fluentd in a wide range of production settings. Eduardo Silva, Principal Engineer at Arm Treasure Data, said, “This course will explore the full range of Fluentd features, from installing Fluentd and running it in a container, to using it as a simple log forwarder or a sophisticated log aggregator and processor.” “As we see the Fluentd project growing into a full ecosystem of third party integrations and components, we are thrilled that this course will be offered so more people can realize the benefits it provides”, he further added. To know more about this course and its benefits in detail, visit the official blogpost. Intuit Inc. won the CNCF End User Award At the conference, yesterday, CNCF announced that Intuit Inc. has won the CNCF End User Award in recognition of its contributions to the cloud native ecosystem. Intuit is an active user, contributor and developer of open source technologies. 
As a part of its journey to the public cloud, Intuit has advanced the way it leverages cloud native technologies in production, including CNCF projects like Kubernetes and OPA. To know more about this achievement by Intuit in detail, read the official blog post.

Kong Inc. is now a Gold Member of the CNCF
The CNCF announced that Kong Inc., which provides an open source API and service lifecycle management tool, has upgraded its membership to Gold. The company backs the Kong project, a cloud native, fast, scalable, and distributed microservice abstraction layer. Kong is focused on building a service control platform that acts as the nervous system for an organization's modern software architectures by intelligently brokering information across all services. Dan Kohn, Executive Director of the Cloud Native Computing Foundation, said, "With their focus on open source and cloud native, Kong is a strong member of the open source community and their membership provides resources for activities like bug bounties and security audits that help our community continue to thrive." Head over to the CNCF's official announcement post for more details.

More announcements can be expected from this conference; to stay updated, visit the official KubeCon+CloudNativeCon 2019 website.
F8 Developer Conference Highlights: Redesigned FB5 app, Messenger update, new Oculus Quest and Rift S, Instagram shops, and more
RSA Conference 2019 Highlights: Top 5 cybersecurity products announced
NSA releases Ghidra, a free software reverse engineering (SRE) framework, at the RSA security conference

Xamarin Test Cloud for API Monitoring [Tutorial]

Gebin George
16 Jul 2018
7 min read
Xamarin Test Cloud can help us identify applications' functionality-related issues on real devices. It is a great source of application monitoring in terms of testing on different mobile devices and with different versions of operating systems. Getting a detailed analysis of various applications' functions is very important to make sure our application is running as expected on our target devices. With that being said, it is also critical to the application to be able to run on different operating system versions, and to analyze how it performs and how much memory usage it has. In this mobile DevOps tutorial, we will discuss how to use Xamarin Test Cloud and the analytics after running an application on different sets of devices. This article is an excerpt from the book, Mobile DevOps,  written by Rohin Tak and Jhalak Modi. We will be using two different applications here to see the monitoring analytics and compare them, to get a better understanding of how this helps us identify various performance and functionality-related issues in our application. Below are the applications we will be using: PhoneCallApp Xamarin Store PhoneCallApp Let's go through some steps to see how to monitor our PhoneCallApp: Go to https://testcloud.xamarin.com/. Click on the PhoneCallApp icon to get to the details of the test runs: On the next page, you'll see a list of tests run for the application: Now, because we have only run one test so far, Test Cloud does not provide us with the graphical metrics shown in the preceding screenshot. In other examples we'll see next, you'll be able to see a more detailed comparison of different test runs. Click on the test run from the list to see its results: The test run listed is the one we ran earlier in previous chapters and uploaded from our machine to Xamarin Test Cloud using the command line. To get an idea of this interface, let's have a look at different parts of Xamarin Test Cloud's interface. Now, this is an overview screen that shows a summary of all the tests run for this application: This screen shows summary details, such as how many tests failed from the total number of tests run, how many times the app ran on a device, how many devices these tests were run on, and much more. This screen is very useful to get a brief idea when you want to get a report on how your application is doing on different devices and OS versions. The next thing you'll see in the left pane is the list of UITests included in the test run: This screen basically has a list of all the Xamarin.UITests that you included in your project. You can click on these different tests to see their respective results on the right side of the screen. Let's click on the test from the list in the preceding screen. This will take us to the next screen, which has detailed reports for the test run: Have a close look at the left pane on this screen. It gives us some steps of the test run on the device. These steps are only what we had written previously in the code to take a screenshot of every activity the test does. 
The steps are as mentioned (we are using the screens of the test code written in previous chapters here): App started: Take a screenshot when the app starts; this was written in the BeforeEachTest() method in the Tests.cs file: Call button pressed: This step is when the Xamarin.UITest presses the call button to make a call: Failed step (the assert): This is the last step and is shown to provide proof of the failed step, so you can see the outcome that we received and compare it with what was expected. This was the final assert that decides whether the test passes or not, based on the outcome in the Assert.IsTrue() condition. You can click on each of these steps in the left pane and analyze the screenshots taken to see exactly what went on during the test. This is a great way to see exactly what went wrong when the test failed. Now, sometimes the screenshots are not enough to identify the issue. For a more detailed analysis, Test Cloud also provides us with Device Log, as shown in the following screenshot: Device logs are a great way to see what's going on under the hood and get more detailed information about the application's behavior and how the device itself behaves when the application is run on it. This can help pinpoint the issues when a test fails on the device; logs are always a savior in that sort of scenario. Click on the Device Log and you can see step-by-step logs for each screenshot on the same screen: When a test fails, Test Cloud provides us with one more option, to see the Test Failures: It's very useful for automated test developers to see the exception information when a test fails. Last but not least, there is also a Test Log option, which can be used to get a consolidated log of the entire test run: Xamarin Store app Now that we have seen different options provided by Test Cloud to monitor our application and its functionality using test runs, let's see how the dashboard and tests look when we have multiple test runs on various physical devices with different OS versions. This will give us a better idea of how comparative monitoring can be done on Test Cloud to analyze an application's behavior on different devices, and compare them with one another. The Xamarin Store application is a sample application provided by Test Cloud on its platform to help understand the platform and get an idea of the dashboard. Let's go through the steps to understand how to monitor your application running on multiple devices, and how to compare different test runs: Go to the Test Cloud home page, just like in the previous example, and click on the Xamarin Store icon: On the next screen, you'll see a graphical representation of different test runs and brief information about how many tests failed of the total tests run, what the application size is, and its peak memory usage information during different test runs: This gives us a nice comparative look at how our application is performing on different test runs. It is possible that the application was performing fine during the first run, and then some code changes made some functionality fail. So, this graph is very useful to monitor a timeline of changes that affected application functionality. You can further click on the graph or the test run to see an overview of it. Now, this screen gives us a great view of how an application running on different devices can be monitored. 
It's a very nice way to keep track of the application across different devices and OS versions. Let's click on one of the steps to see its results on multiple devices: the red icon indicates failed tests. This page lists all the devices you chose to run the test on, showing which devices the test passed on and flagging in red the ones it failed on. You can further click on each device to get device-specific screens and logs. To summarize, we performed API monitoring efficiently using Xamarin Test Cloud; a sketch of the command-line submission used to upload these tests is included after the links below. If you found this post useful, do check out the book Mobile DevOps to deliver continuous integration and delivery for mobile applications.
API Gateway and its Need
API and Intent-Driven Networking
What is Azure API Management?
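For reference, the command-line submission used to upload a test run to Test Cloud (mentioned earlier in this tutorial) generally takes a form like the one below. This is only a sketch: the API key, device hash, e-mail address, and paths are placeholders, and the exact executable path and flags should be copied from the command that Test Cloud generates for your own test run.

# Illustrative Test Cloud submission; all values below are placeholders.
# test-cloud.exe ships with the Xamarin.UITest NuGet package (run it via mono on macOS).
packages/Xamarin.UITest.*/tools/test-cloud.exe submit PhoneCallApp.apk <API_KEY> \
  --devices <DEVICE_HASH> \
  --series "master" \
  --locale "en_US" \
  --user you@example.com \
  --assembly-dir PhoneCallApp.UITests/bin/Release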

SDLC puts process at the center of software engineering

Richard Gall
22 May 2018
7 min read
What is SDLC?
SDLC stands for software development lifecycle. It refers to all of the different steps that software engineers need to take when building software. This includes planning, creating, building, and then deploying software, but maintenance is also crucial. In some instances you may need to change or replace software - that is part of the software development lifecycle as well.

SDLC is about software quality and development efficiency
SDLC is about more than just the steps in the software development process. It's also about managing that process in a way that improves quality while also improving efficiency. Ultimately, there are numerous ways of approaching the software development lifecycle - Waterfall and Agile are the two most well known methodologies for managing the development lifecycle. There are plenty of reasons you might choose one over another. What is most important is that you pay close attention to what the software development lifecycle looks like.
It sounds obvious, but it is very difficult to build software without a plan in place. Things can get chaotic very quickly. If they do, that's bad news for you, as the developer, and bad news for users as well. When you don't follow the software development lifecycle properly, you're likely to miss user requirements, and faults will also find their way into your code.

The stages of the software development lifecycle (SDLC)
There are a number of ways you might see an SDLC presented, but the core should always be the same. And yes, different software methodologies like Agile and Waterfall outline very different ways of working, but broadly the steps should be the same. What differs between software methodologies is how each step fits together.

Step 1: Requirement analysis
This is the first step in any SDLC. This is about understanding everything that needs to be understood in as much practical detail as possible. It might mean you need to find out about specific problems that need to be solved. Or, there might be certain things that users need that you need to make sure are in the software. To do this, you need to do good quality research, from discussing user needs with a product manager to revisiting documentation on your current systems.
It is often this step that is the most challenging in the software development lifecycle. This is because you need to involve a wide range of stakeholders. Some of these might not be technical, and sometimes you might simply use a different vocabulary. It's essential that you have a shared language to describe everything from the user needs to the problems you might be trying to solve.

Step 2: Design the software
Once you have done a requirement analysis you can begin designing the software. You do this by turning all the requirements and software specifications into a design document. This might feel like it slows down the development process, but if you don't do this, not only are you wasting the time taken to do your requirement analysis, you're also likely to build poor quality or even faulty software. While it's important not to design by committee or get slowed down by intensive navel-gazing, keeping stakeholders updated and requesting feedback and input where necessary can be incredibly important. Sometimes it's worth taking that extra bit of time, as it could solve a lot of problems later in the SDLC.
Step 3: Plan the project
Once you have captured requirements and feel you have properly understood exactly what needs to be delivered - as well as any potential constraints - you need to plan out how you're going to build that software. To do this you'll need to have an overview of the resources at your disposal. These are the sorts of questions you'll need to consider at this stage:
Who is available?
Are there any risks? How can we mitigate them?
What budget do we have for this project?
Are there any other competing projects?
In truth, you'll probably do this during the design stage. The design document you create should, of course, be developed with context in mind. It's pointless creating a stunning design document, outlining a detailed and extensive software development project, if it's simply not realistic for your team to deliver it.

Step 4: Start building the software
Now you can finally get down to the business of actually writing code. With all the work you have done in the previous steps this should be a little easier. However, it's important to remember that imperfection is part and parcel of software engineering. There will always be flaws in your software. That doesn't necessarily mean bugs or errors; it could be small compromises that need to be made in order to ensure something works. The best approach here is to deliver rapidly. The sooner you can get software 'out there', the faster you can make changes and improvements if (or more likely when) they're needed. It's worth involving stakeholders at this stage - transparency in the development process is a good way to build collaboration and ensure the end result delivers on what was initially in the requirements.

Step 5: Testing the software
Testing is, of course, an essential step in the software development lifecycle. This is where you identify any problems. That might be errors or performance issues, but you may also find you haven't quite been able to deliver what you said you would in the design document. The continuous integration server is important here, as it can help to detect problems with the software automatically. The rise of automated software testing has been incredibly valuable; it means that instead of spending time manually running tests, engineers can dedicate more time to fixing problems and optimizing code.

Step 6: Deploy the software
The next step is to deploy the software to production. All the elements of the software should now be in place, and you want it to simply be used. It's important to remember that there will be problems here. Testing can never capture every issue, and feedback and insight from users are going to be much more valuable than automated tests run on a server. Continuous delivery pipelines allow you to deploy software very efficiently, making the build-test-deploy steps of the software development lifecycle relatively frictionless. Okay, maybe not frictionless - there's going to be plenty of friction when you're developing software. But it does allow you to push software into production very quickly.

Step 7: Maintaining software
Software maintenance is a core part of the day-to-day life of a software engineer, and it's a crucial step in the SDLC. There are two forms of software maintenance, both of equal importance: evolutive maintenance and corrective maintenance.

Evolutive maintenance
As the name suggests, evolutive maintenance is where you evolve software by adding in new functionality or making larger changes to the logic of the software.
These changes should be a response to feedback from stakeholders or, more importantly, users. There may be times when business needs dictate this type of maintenance - this is never ideal, but it is nevertheless an important part of a software engineer's work. Corrective maintenance Corrective maintenance isn't quite as interesting or creative as evolutive maintenance - it's about fixing bugs and errors in the code. This sort of maintenance can feel like a chore, and ideally you want to minimize the amount of time you spend doing this. However, if you're following SDLC closely, you shouldn't find too many bugs in your software. The benefits of SDLC are obvious The benefits of SDLC are clear. It puts process at the center of software engineering. Without those processes it becomes incredibly difficult to build the software that stakeholders and users want. And if you don't care about users then, really, why build software at all. It's true that DevOps has done a lot to change SDLC. Arguably, it is an area that is more important and more hotly debated than ever before. It's not difficult to find someone with an opinion on the best way to build something. Equally, as software becomes more fragmented and mutable, thanks to the emergence of cloud and architectural trends like microservices and serverless, the way we design, build and deploy software has never felt more urgent. Read next DevOps Engineering and Full-Stack Development – 2 Sides of the Same Agile Coin

Jenkins 2.0: The impetus for DevOps Movement

Packt
19 Sep 2016
15 min read
In this article, Mitesh Soni, the author of the book DevOps for Web Development, provides some insight into the DevOps movement, the benefits of a DevOps culture, the DevOps lifecycle, how Jenkins 2.0 is bridging the gaps between Continuous Integration and Continuous Delivery with new features and UI improvements, and the installation and configuration of Jenkins 2.0. (For more resources related to this topic, see here.)

Understanding the DevOps movement
Let's try to understand what DevOps is. Is it a real, technical word? No, because DevOps is not just about technical stuff. It is also neither simply a technology nor an innovation. In simple terms, DevOps is a blend of complex terminologies. It can be considered as a concept, culture, development and operational philosophy, or a movement.
To understand DevOps, let's revisit the old days of any IT organization. Consider that there are multiple environments where an application is deployed. The following sequence of events takes place when any new feature is implemented or a bug is fixed:
The development team writes code to implement a new feature or fix a bug. This new code is deployed to the development environment and generally tested by the development team.
The new code is deployed to the QA environment, where it is verified by the testing team.
The code is then provided to the operations team for deploying it to the production environment.
The operations team is responsible for managing and maintaining the code.
Let's list the possible issues in this approach:
The transition of the current application build from the development environment to the production environment takes weeks or months.
The priorities of the development team, QA team, and IT operations team are different within an organization, and effective, efficient coordination becomes a necessity for smooth operations.
The development team is focused on the latest development release, while the operations team cares about the stability of the production environment.
The development and operations teams are not aware of each other's work and work culture.
Both teams work in different types of environments; there is a possibility that the development team has resource constraints and therefore uses a different kind of configuration. It may work on the localhost or in the dev environment. The operations team works on production resources, so there will be a huge gap in the configuration and deployment environments. It may not work where it needs to run - the production environment.
Assumptions are key in such a scenario, and it is improbable that both teams will work under the same set of assumptions.
There is manual work involved in setting up the runtime environment and the configuration and deployment activities. The biggest issue with the manual application-deployment process is its nonrepeatability and error-prone nature.
The development team has the executable files, configuration files, database scripts, and deployment documentation, and provides these to the operations team. All these artifacts are verified on the development environment and not in production or staging.
Each team may take a different approach to setting up the runtime environment and the configuration and deployment activities, considering resource constraints and resource availability.
In addition, the deployment process needs to be documented for future usage. Now, maintaining the documentation is a time-consuming task that requires collaboration between different stakeholders.
Both teams work separately and hence there can be a situation where both use different automation techniques.
Both teams are unaware of the challenges faced by each other and hence may not be able to visualize or understand an ideal scenario in which the application works.
While the operations team is busy with deployment activities, the development team may get another request for a feature implementation or bug fix; in such a case, if the operations team faces any issues in deployment, they may try to consult the development team, who are already occupied with the new implementation request. This results in communication gaps, and the required collaboration may not happen.
There is hardly any collaboration between the development team and the operations team. Poor collaboration causes many issues in the application's deployment to different environments, resulting in back-and-forth communication through e-mail, chat, calls, meetings, and so on, and it often ends in quick fixes.
Challenges for the development team:
The competitive market creates pressure for on-time delivery.
They have to take care of production-ready code management and new feature implementation.
The release cycle is often long and hence the development team has to make assumptions before the application deployment finally takes place. In such a scenario, it takes more time to fix the issues that occur during deployment in the staging or production environment.
Challenges for the operations team:
Resource contention: it's difficult to handle increasing resource demands.
Redesigning or tweaking: this is needed to run the application in the production environment.
Diagnosing and rectifying: they are supposed to diagnose and rectify issues after application deployment in isolation.

The benefits of DevOps
This diagram covers all the benefits of DevOps: collaboration among different stakeholders brings many business and technical benefits that help organizations achieve their business goals.

The DevOps lifecycle – it's all about "continuous"
Continuous Integration (CI), Continuous Testing (CT), and Continuous Delivery (CD) are significant parts of the DevOps culture. CI includes automating builds, unit tests, and packaging processes, while CD is concerned with the application delivery pipeline across different environments. CI and CD accelerate the application development process through automation across different phases, such as build, test, and code analysis, and enable users to achieve end-to-end automation in the application delivery lifecycle.
Continuous integration and continuous delivery or deployment are well supported by cloud provisioning and configuration management. Continuous monitoring helps identify issues or bottlenecks in the end-to-end pipeline and helps make the pipeline effective. Continuous feedback is an integral part of this pipeline, telling stakeholders whether they are close to the required outcome or heading in a different direction.
"Continuous effort – not strength or intelligence – is the key to unlocking our potential"
- Winston Churchill

Continuous integration
What is continuous integration?
In simple words, CI is a software engineering practice where each check-in made by a developer is verified by either of the following:
Pull mechanism: executing an automated build at a scheduled time
Push mechanism: executing an automated build when changes are saved in the repository
This step is followed by executing a unit test against the latest changes available in the source code repository. The main benefit of continuous integration is quick feedback based on the result of build execution. If it is successful, all is well; else, assign responsibility to the developer whose commit has broken the build, notify all stakeholders, and fix the issue. Read more about CI at http://martinfowler.com/articles/continuousIntegration.html.
So why is CI needed? Because it makes things simple and helps us identify bugs or errors in the code at a very early stage of development, when it is relatively easy to fix them. Just imagine if the same scenario takes place after a long duration and there are too many dependencies and complexities we need to manage. In the early stages, it is far easier to cure and fix issues; consider health issues as an analogy, and things will be clearer in this context.
Continuous integration is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. CI is a significant part of, and in fact the base for, the release-management strategy of any organization that wants to develop a DevOps culture. The following are the immediate benefits of CI:
Automated integration with a pull or push mechanism
A repeatable process without any manual intervention
Automated test case execution
Coding standard verification
Execution of scripts based on requirements
Quick feedback: build status notification to stakeholders via e-mail
Teams focused on their work and not on managing processes
Jenkins, Apache Continuum, Buildbot, GitLab CI, and so on are some examples of open source CI tools. AnthillPro, Atlassian Bamboo, TeamCity, Team Foundation Server, and so on are some examples of commercial CI tools.

Continuous integration tools – Jenkins
Jenkins was originally open source continuous integration software written in Java under the MIT License. Jenkins 2, however, is an open source automation server that focuses on any kind of automation, including continuous integration and continuous delivery. Jenkins can be used across different platforms, such as Windows, Ubuntu/Debian, Red Hat/Fedora, Mac OS X, openSUSE, and FreeBSD. Jenkins enables users to utilize continuous integration services for software development in an agile environment. It can be used to build freestyle software projects based on Apache Ant and Maven 2/Maven 3. It can also execute Windows batch commands and shell scripts.
Jenkins can be easily customized with the use of plugins. There are different kinds of plugins available for customizing Jenkins based on specific needs for setting up continuous integration. Categories of plugins include source code management (the Git, CVS, and Bazaar plugins), build triggers (the Accelerated Build Now and Build Flow plugins), build reports (the Code Scanner and Disk Usage plugins), authentication and user management (the Active Directory and GitHub OAuth plugins), and cluster management and distributed build (the Amazon EC2 and Azure Slave plugins). To know more about all plugins, visit https://wiki.jenkins-ci.org/display/JENKINS/Plugins.
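As an aside, plugins can also be managed from the command line once Jenkins is running. The following is a rough sketch using the Jenkins CLI client; the Git plugin is just an example, and depending on your security settings you may also need to supply credentials:

# The CLI client is served by the Jenkins instance itself
wget http://localhost:8080/jnlpJars/jenkins-cli.jar

# Install the Git plugin and restart Jenkins so it is picked up
java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin git -restart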
To explore how to create a new plugin, visit https://wiki.jenkins-ci.org/display/JENKINS/Plugin+tutorial. To download different versions of plugins, visit https://updates.jenkins-ci.org/download/plugins/. Visit the Jenkins website at http://jenkins.io/. Jenkins accelerates the software development process through automation: Key features and benefits Here are some striking benefits of Jenkins: Easy install, upgrade, and configuration. Supported platforms: Windows, Ubuntu/Debian, Red Hat/Fedora/CentOS, Mac OS X, openSUSE, FreeBSD, OpenBSD, Solaris, and Gentoo. Manages and controls development lifecycle processes. Non-Java projects supported by Jenkins: Such as .NET, Ruby, PHP, Drupal, Perl, C++, Node.js, Python, Android, and Scala. A development methodology of daily integrations verified by automated builds. Every commit can trigger a build. Jenkins is a fully featured technology platform that enables users to implement CI and CD. The use of Jenkins is not limited to CI and CD. It is possible to include a model and orchestrate the entire pipeline with the use of Jenkins as it supports shell and Windows batch command execution. Jenkins 2.0 supports a delivery pipeline that uses a Domain-Specific Language (DSL) for modeling entire deployments or delivery pipelines. Pipeline as code provides a common language—DSL—to help the development and operations teams to collaborate in an effective manner. Jenkins 2 brings a new GUI with stage view to observe the progress across the delivery pipeline. Jenkins 2.0 is fully backward compatible with the Jenkins 1.x series. Jenkins 2 now requires Servlet 3.1 to run. You can use embedded Winstone-Jetty or a container that supports Servlet 3.1 (such as Tomcat 8). GitHub, Collabnet, SVN, TFS code repositories, and so on are supported by Jenkins for collaborative development. Continuous integration: Automate build and test—automated testing (continuous testing), package, and static code analysis. Supports common test frameworks such as HP ALM Tools, Junit, Selenium, and MSTest. For continuous testing, Jenkins has plugins for both; Jenkins slaves can execute test suites on different platforms. Jenkins supports static code analysis tools such as code verification by CheckStyle and FindBug. It also integrates with Sonar. Continuous delivery and continuous deployment: It automates the application deployment pipeline, integrates with popular configuration management tools, and automates environment provisioning. To achieve continuous delivery and deployment, Jenkins supports automatic deployment; it provides a plugin for direct integration with IBM uDeploy. Highly configurable: Plugins-based architecture that provides support to many technologies, repositories, build tools, and test tools; it has an open source CI server and provides over 400 plugins to achieve extensibility. Supports distributed builds: Jenkins supports "master/slave" mode, where the workload of building projects is delegated to multiple slave nodes. It has a machine-consumable remote access API to retrieve information from Jenkins for programmatic consumption, to trigger a new build, and so on. It delivers a better application faster by automating the application development lifecycle, allowing faster delivery. The Jenkins build pipeline (quality gate system) provides a build pipeline view of upstream and downstream connected jobs, as a chain of jobs, each one subjecting the build to quality-assurance steps. 
It has the ability to define manual triggers for jobs that require intervention prior to execution, such as an approval process outside of Jenkins. The following diagram illustrates quality gates and the orchestration of the build pipeline.
Jenkins can be used with the following tools in different categories, as shown here:
Language: Java, .NET
Code repositories: Subversion, Git, CVS, StarTeam
Build tools: Ant and Maven (Java); NAnt and MSBuild (.NET)
Code analysis tools: Sonar, CheckStyle, FindBugs, NCover, Visual Studio Code Metrics, PowerTool
Continuous integration: Jenkins
Continuous testing: Jenkins plugins (HP Quality Center 10.00 with the QuickTest Professional add-in, HP Unified Functional Testing 11.5x and 12.0x, HP Service Test 11.20 and 11.50, HP LoadRunner 11.52 and 12.0x, HP Performance Center 12.xx, HP QuickTest Professional 11.00, HP Application Lifecycle Management 11.00, 11.52, and 12.xx, HP ALM Lab Management 11.50, 11.52, and 12.xx, JUnit, MSTest, and VsTest)
Infrastructure provisioning: configuration management tool - Chef
Virtualization/cloud service provider: VMware, AWS, Microsoft Azure (IaaS), traditional environment
Continuous delivery/deployment: Chef/deployment plugin/shell scripting/PowerShell scripts/Windows batch commands

Installing Jenkins
Jenkins provides us with multiple ways to install it for all types of users. We can install it on at least the following operating systems: Ubuntu/Debian, Windows, Mac OS X, OpenBSD, FreeBSD, openSUSE, Gentoo, and CentOS/Fedora/Red Hat.
One of the easiest options I recommend is to use a WAR file. A WAR file can be used with or without a container or web application server. Having Java installed is a must before we try to use a WAR file for Jenkins, which can be done as follows:
Download the jenkins.war file from https://jenkins.io/.
Open a command prompt in Windows or a terminal in Linux, go to the directory where the jenkins.war file is stored, and execute the following command:
java -jar jenkins.war
Once Jenkins is fully up and running, as shown in the following screenshot, explore it in the web browser by visiting http://localhost:8080. By default, Jenkins works on port 8080. To use a different port, execute the following command from the command line:
java -jar jenkins.war --httpPort=9999
For HTTPS, use the following command:
java -jar jenkins.war --httpsPort=8888
Once Jenkins is running, visit the Jenkins home directory. In our case, we have installed Jenkins 2 on a CentOS 6.7 virtual machine. Go to /home/<username>/.jenkins, as shown in the following screenshot. If you can't see the .jenkins directory, make sure hidden files are visible. In CentOS, press Ctrl+H to make hidden files visible.

Setting up Jenkins
Now that we have installed Jenkins, let's verify whether Jenkins is running. Open a browser and navigate to http://localhost:8080 or http://<IP_ADDRESS>:8080. If you've used Jenkins earlier and have recently downloaded the Jenkins 2 WAR file, it will ask for a security setup. To unlock Jenkins, follow these steps:
Go to the .jenkins directory and open the initialAdminPassword file from the secrets subdirectory.
Copy the password in that file, paste it in the Administrator password box, and click on Continue, as shown here.
Clicking on Continue will redirect you to the Customize Jenkins page. Click on Install suggested plugins. The installation of the plugins will start; make sure that you have a working Internet connection.
Once all the required plugins have been installed, you will see the Create First Admin User page. Provide the required details, and click on Save and Finish: Jenkins is ready!
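With Jenkins up and running, it can also be driven programmatically. The remote access API mentioned earlier can, for example, trigger a build from the shell. The sketch below assumes a job called my-job and an API token for the admin user; depending on the security configuration, a CSRF crumb may also be required:

# Trigger a build of an existing job through the remote access API
curl -X POST "http://localhost:8080/job/my-job/build" --user admin:API_TOKEN

# Fetch the job's current status as JSON
curl -s "http://localhost:8080/job/my-job/api/json" --user admin:API_TOKEN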
Our Jenkins setup is complete. Click on Start using Jenkins. You can get Jenkins plugins from https://wiki.jenkins-ci.org/display/JENKINS/Plugins.

Summary
We have covered some brief details on DevOps culture, Jenkins 2.0, and its new features. DevOps for Web Development provides more details on extending Continuous Integration to Continuous Delivery and Continuous Deployment using configuration management tools such as Chef and cloud computing platforms such as Microsoft Azure (App Services) and AWS (Amazon EC2 and AWS Elastic Beanstalk); refer to https://www.packtpub.com/networking-and-servers/devops-web-development. To get more details on Jenkins, refer to Jenkins Essentials, https://www.packtpub.com/application-development/jenkins-essentials.
Resources for Article:
Further resources on this subject:
Setting Up and Cleaning Up [article]
Maven and Jenkins Plugin [article]
Exploring Jenkins [article]

Start Treating your Infrastructure as Code

Packt
26 Dec 2016
18 min read
This article is an excerpt from the book Implementing DevOps on AWS by Veselin Kantsev. Ladies and gentlemen, put your hands in the atmosphere, for Programmable Infrastructure is here!
Perhaps Infrastructure-as-Code (IaC) is not an entirely new concept, considering how long Configuration Management has been around. Codifying server, storage, and networking infrastructure and their relationships, however, is a relatively recent tendency brought about by the rise of cloud computing. But let us leave Config Management for later and focus our attention on that second aspect of IaC.
You would recall from the previous chapter some of the benefits of storing all-the-things as code:
Code can be kept under version control
Code can be shared/collaborated on easily
Code doubles as documentation
Code is reproducible
(For more resources related to this topic, see here.)
That last point was a big win for me personally. Automated provisioning helped reduce the time it took to deploy a full-featured cloud environment from four hours down to one, and the occurrences of human error to almost zero (one shall not be trusted with an input field). Being able to rapidly provision resources becomes a significant advantage when a team starts using multiple environments in parallel and needs those brought up or down on demand.
In this article we examine in detail how to describe (in code) and deploy one such environment on AWS with minimal manual interaction. For implementing IaC in the cloud, we will look at two tools or services: Terraform and CloudFormation. We will go through examples of how to:
Configure the tool
Write an IaC template
Deploy a template
Deploy subsequent changes to the template
Delete a template and remove the provisioned infrastructure
For the purpose of these examples, let us assume our application requires a Virtual Private Cloud (VPC) which hosts a Relational Database Service (RDS) back-end and a couple of Elastic Compute Cloud (EC2) instances behind an Elastic Load Balancing (ELB) load balancer. We will keep most components behind Network Address Translation (NAT), allowing only the load balancer to be accessed externally.

IaC using Terraform
One of the tools that can help deploy infrastructure on AWS is HashiCorp's Terraform (https://www.terraform.io). HashiCorp is that genius bunch which gave us Vagrant, Packer, and Consul. I would recommend you look up their website if you have not already. Using Terraform (TF), we will be able to write a template describing an environment, perform a dry run to see what is about to happen and whether it is expected, deploy the template, and make any late adjustments where necessary - all of this without leaving the shell prompt.

Configuration
Firstly, you will need to have a copy of TF (https://www.terraform.io/downloads.html) on your machine and available on the CLI. You should be able to query the currently installed version, which in my case is 0.6.15:
$ terraform --version
Terraform v0.6.15
Since TF makes use of the AWS APIs, it requires a set of authentication keys and some level of access to your AWS account.
In order to deploy the examples in this article you could create a new Identity and Access Management (IAM) user with the following permissions: "autoscaling:CreateAutoScalingGroup", "autoscaling:CreateLaunchConfiguration", "autoscaling:DeleteLaunchConfiguration", "autoscaling:Describe*", "autoscaling:UpdateAutoScalingGroup", "ec2:AllocateAddress", "ec2:AssociateAddress", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:CreateInternetGateway", "ec2:CreateNatGateway", "ec2:CreateRoute", "ec2:CreateRouteTable", "ec2:CreateSecurityGroup", "ec2:CreateSubnet", "ec2:CreateTags", "ec2:CreateVpc", "ec2:Describe*", "ec2:ModifySubnetAttribute", "ec2:RevokeSecurityGroupEgress", "elasticloadbalancing:AddTags", "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", "elasticloadbalancing:AttachLoadBalancerToSubnets", "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:CreateLoadBalancerListeners", "elasticloadbalancing:Describe*", "elasticloadbalancing:ModifyLoadBalancerAttributes", "rds:CreateDBInstance", "rds:CreateDBSubnetGroup", "rds:Describe*" Please refer:${GIT_URL}/Examples/Chapter-2/Terraform/iam_user_policy.json One way to make the credentials of the IAM user available to TF is by exporting the following environment variables: $ export AWS_ACCESS_KEY_ID='user_access_key' $ export AWS_SECRET_ACCESS_KEY='user_secret_access_key' This should be sufficient to get us started. Template design Before we get to coding, here are some of the rules: You could choose to write a TF template as a single large file or a combination of smaller ones. Templates can be written in pure JSON or TF's own format. Terraform will look for files with extensions .tfor .tf.json in a given folder and load these in alphabetical order. TF templates are declarative, hence the order in which resources appear in them does not affect the flow of execution. A Terraform template generally consists of three sections: resources, variables and outputs. As mentioned in the preceding section, it is a matter of personal preference how you arrange these, however, for better readability I suggest we make use of the TF format and write each section to a separate file. Also, while the file extensions are of importance, the file names are up to you. Resources In a way, this file holds the main part of a template, as the resources represent the actual components that end up being provisioned. For example, we will be using a VPC resource, RDS, an ELB one and a few others. Since template elements can be written in any order, Terraform determines the flow of execution by examining any references that it finds (for example a VPC should exist before an ELB which is said to belong to it is created). Alternatively, explicit flow control attributes such as the depends_on are used, as we will observe shortly. To find out more, let us go through the contents of the resources.tf file. Please refer to: ${GIT_URL}/Examples/Chapter-2/Terraform/resources.tf First we tell Terraform what provider to use for our infrastructure: # Set a Provider provider "aws" { region = "${var.aws-region}" } You will notice that no credentials are specified, since we set those as environment variables earlier. 
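Before running any Terraform commands, it is worth a quick sanity check that the exported keys resolve to the IAM user you intend to use. Assuming the AWS CLI is installed (this is an illustrative aside, not one of the book's original steps), something like the following would confirm it:

# Confirm which identity the exported access keys belong to
$ aws sts get-caller-identity

# List the inline policies attached to the user; the user name "terraform" is an assumption,
# and this call needs IAM read permissions, which the restricted user above may not have,
# so run it with an administrative profile if needed
$ aws iam list-user-policies --user-name terraform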
Now we can add the VPC and its networking components: # Create a VPC resource "aws_vpc""terraform-vpc" { cidr_block = "${var.vpc-cidr}" tags { Name = "${var.vpc-name}" } } # Create an Internet Gateway resource "aws_internet_gateway""terraform-igw" { vpc_id = "${aws_vpc.terraform-vpc.id}" } # Create NAT resource "aws_eip""nat-eip" { vpc = true } So far we have declared the VPC, its Internet and NAT gateways plus a set of public and private subnets with matching routing tables. It will help clarify the syntax if we examined some of those resource blocks, line by line: resource "aws_subnet""public-1" { The first argument is the type of the resource followed by an arbitrary name. vpc_id = "${aws_vpc.terraform-vpc.id}" The aws_subnet resource named public-1 has a property vpc_id which refers to the id attribute of a different resource of type aws_vpc named terraform-vpc. Such references to other resources implicitly define the execution flow, that is to say the VPC needs to exist before the subnet can be created. cidr_block = "${cidrsubnet(var.vpc-cidr, 8, 1)}" We will talk more about variables in a moment, but the format is var.var_name. Here we use the cidrsubnet function with the vpc-cidr variable which returns a cidr_block to be assigned to the public-1 subnet. Please refer to the Terraform documentation for this and other useful functions. Next we add a RDS to the VPC: resource "aws_db_instance""terraform" { identifier = "${var.rds-identifier}" allocated_storage = "${var.rds-storage-size}" storage_type= "${var.rds-storage-type}" engine = "${var.rds-engine}" engine_version = "${var.rds-engine-version}" instance_class = "${var.rds-instance-class}" username = "${var.rds-username}" password = "${var.rds-password}" port = "${var.rds-port}" vpc_security_group_ids = ["${aws_security_group.terraform-rds.id}"] db_subnet_group_name = "${aws_db_subnet_group.rds.id}" } Here we see mostly references to variables with a few calls to other resources. Following the RDS is an ELB: resource "aws_elb""terraform-elb" { name = "terraform-elb" security_groups = ["${aws_security_group.terraform-elb.id}"] subnets = ["${aws_subnet.public-1.id}", "${aws_subnet.public-2.id}"] listener { instance_port = 80 instance_protocol = "http" lb_port = 80 lb_protocol = "http" } tags { Name = "terraform-elb" } } Lastly we define the EC2 auto scaling group and related resources: resource "aws_launch_configuration""terraform-lcfg" { image_id = "${var.autoscaling-group-image-id}" instance_type = "${var.autoscaling-group-instance-type}" key_name = "${var.autoscaling-group-key-name}" security_groups = ["${aws_security_group.terraform-ec2.id}"] user_data = "#!/bin/bash n set -euf -o pipefail n exec 1>>(logger -s -t $(basename $0)) 2>&1 n yum -y install nginx; chkconfig nginx on; service nginx start" lifecycle { create_before_destroy = true } } resource "aws_autoscaling_group""terraform-asg" { name = "terraform" launch_configuration = "${aws_launch_configuration.terraform-lcfg.id}" vpc_zone_identifier = ["${aws_subnet.private-1.id}", "${aws_subnet.private-2.id}"] min_size = "${var.autoscaling-group-minsize}" max_size = "${var.autoscaling-group-maxsize}" load_balancers = ["${aws_elb.terraform-elb.name}"] depends_on = ["aws_db_instance.terraform"] tag { key = "Name" value = "terraform" propagate_at_launch = true } } The user_data shell script above will install and start NGINX onto the EC2 node(s). Variables We have made great use of variables to define our resources, making the template as re-usable as possible. 
Let us now look inside variables.tf to study these further. Similarly to the resources list, we start with the VPC : Please refer to:${GIT_URL}/Examples/Chapter-2/Terraform/variables.tf variable "aws-region" { type = "string" description = "AWS region" } variable "aws-availability-zones" { type = "string" description = "AWS zones" } variable "vpc-cidr" { type = "string" description = "VPC CIDR" } variable "vpc-name" { type = "string" description = "VPC name" } The syntax is: variable "variable_name" { variable properties } Where variable_name is arbitrary, but needs to match relevant var.var_name references made in other parts of the template. For example, variable aws-region will satisfy the ${var.aws-region} reference we made earlier when describing the region of the provider aws resource. We will mostly use string variables, however there is another useful type called map which can hold lookup tables. Maps are queried in a similar way to looking up values in a hash/dict (Please see: https://www.terraform.io/docs/configuration/variables.html). Next comes RDS: variable "rds-identifier" { type = "string" description = "RDS instance identifier" } variable "rds-storage-size" { type = "string" description = "Storage size in GB" } variable "rds-storage-type" { type = "string" description = "Storage type" } variable "rds-engine" { type = "string" description = "RDS type" } variable "rds-engine-version" { type = "string" description = "RDS version" } variable "rds-instance-class" { type = "string" description = "RDS instance class" } variable "rds-username" { type = "string" description = "RDS username" } variable "rds-password" { type = "string" description = "RDS password" } variable "rds-port" { type = "string" description = "RDS port number" } Finally, EC2: variable "autoscaling-group-minsize" { type = "string" description = "Min size of the ASG" } variable "autoscaling-group-maxsize" { type = "string" description = "Max size of the ASG" } variable "autoscaling-group-image-id" { type="string" description = "EC2 AMI identifier" } variable "autoscaling-group-instance-type" { type = "string" description = "EC2 instance type" } variable "autoscaling-group-key-name" { type = "string" description = "EC2 ssh key name" } We now have the type and description of all our variables defined in variables.tf, however no values have been assigned to them yet. Terraform is quite flexible with how this can be done. We could: Assign default values directly in variables.tf variable "aws-region" { type = "string"description = "AWS region"default = 'us-east-1' } Not assign a value to a variable, in which case Terraform will prompt for it at run time Pass a -var 'key=value' argument(s) directly to the Terraform command, like so: -var 'aws-region=us-east-1' Store key=value pairs in a file Use environment variables prefixed with TF_VAR, as in TF_VAR_ aws-region Using a key=value pairs file proves to be quite convenient within teams, as each engineer can have a private copy (excluded from revision control). If the file is named terraform.tfvars it will be read automatically by Terraform, alternatively -var-file can be used on the command line to specify a different source. 
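To illustrate the options above on the command line (a sketch using variable names from this template; the staging.tfvars filename is just an example):

# Pass individual values with -var
$ terraform plan -var 'aws-region=us-east-1' -var 'vpc-name=Terraform'

# Point Terraform at a specific variables file instead of terraform.tfvars
$ terraform plan -var-file=staging.tfvars

# Supply a value through the environment; because these variable names contain hyphens,
# they cannot be exported directly in bash, so env is used instead
$ env 'TF_VAR_rds-password=donotusethispassword' terraform plan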
Below is the content of our sample terraform.tfvars file: Please refer to:${GIT_URL}/Examples/Chapter-2/Terraform/terraform.tfvars autoscaling-group-image-id = "ami-08111162" autoscaling-group-instance-type = "t2.nano" autoscaling-group-key-name = "terraform" autoscaling-group-maxsize = "1" autoscaling-group-minsize = "1" aws-availability-zones = "us-east-1b,us-east-1c" aws-region = "us-east-1" rds-engine = "postgres" rds-engine-version = "9.5.2" rds-identifier = "terraform-rds" rds-instance-class = "db.t2.micro" rds-port = "5432" rds-storage-size = "5" rds-storage-type = "gp2" rds-username = "dbroot" rds-password = "donotusethispassword" vpc-cidr = "10.0.0.0/16" vpc-name = "Terraform" A point of interest is aws-availability-zones, it holds multiple values which we interact with using the element and split functions as seen in resources.tf. Outputs The third, mostly informational part of our template contains the Terraform Outputs. These allow for selected values to be returned to the user when testing, deploying or after a template has been deployed. The concept is similar to how echo statements are commonly used in shell scripts to display useful information during execution. Let us add outputs to our template by creating an outputs.tf file: Please refer to:${GIT_URL}/Examples/Chapter-2/Terraform/outputs.tf output "VPC ID" { value = "${aws_vpc.terraform-vpc.id}" } output "NAT EIP" { value = "${aws_nat_gateway.terraform-nat.public_ip}" } output "ELB URI" { value = "${aws_elb.terraform-elb.dns_name}" } output "RDS Endpoint" { value = "${aws_db_instance.terraform.endpoint}" } To configure an output you simply reference a given resource and its attribute. As shown in preceding code, we have chosen the ID of the VPC, the Elastic IP address of the NAT gateway, the DNS name of the ELB and the Endpoint address of the RDS instance. The Outputs section completes the template in this example. You should now have four files in your template folder: resources.tf, variables.tf, terraform.tfvars and outputs.tf. Operations We shall examine five main Terraform operations: Validating a template Testing (dry-run) Initial deployment Updating a deployment Removal of a deployment In the following command line examples, terraform is run within the folder which contains the template files. Validation Before going any further, a basic syntax check should be done with the terraform validate command. After renaming one of the variables in resources.tf, validate returns an unknown variable error: $ terraform validate Error validating: 1 error(s) occurred: * provider config 'aws': unknown variable referenced: 'aws-region-1'. define it with 'variable' blocks Once the variable name has been corrected, re-running validate returns no output, meaning OK. Dry-run The next step is to perform a test/dry-run execution with terraform plan, which displays what would happen during an actual deployment. The command returns a colour coded list of resources and their properties or more precisely: $ terraform plan Resources are shown in alphabetical order for quick scanning. Green resources will be created (or destroyed and then created if an existing resource exists), yellow resources are being changed in-place, and red resources will be destroyed. 
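If you want a guarantee that what gets applied is exactly the plan you reviewed, the plan can also be written to a file and fed to apply. A brief sketch (the filename is arbitrary):

# Save the execution plan to a file for review...
$ terraform plan -out=terraform.tfplan

# ...and later apply exactly that saved plan
$ terraform apply terraform.tfplan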
To literally get the picture of what the to-be-deployed infrastructure looks like, you could use terraform graph: $ terraform graph > my_graph.dot DOT files can be manipulated with the Graphviz open source software (Please see : http://www.graphviz.org) or many online readers/converters. Below is a portion of a larger graph representing the template we designed earlier: Deployment If you are happy with the plan and graph, the template can now be deployed using terraform apply: $ terraform apply aws_eip.nat-eip: Creating... allocation_id: "" =>"<computed>" association_id: "" =>"<computed>" domain: "" =>"<computed>" instance: "" =>"<computed>" network_interface: "" =>"<computed>" private_ip: "" =>"<computed>" public_ip: "" =>"<computed>" vpc: "" =>"1" aws_vpc.terraform-vpc: Creating... cidr_block: "" =>"10.0.0.0/16" default_network_acl_id: "" =>"<computed>" default_security_group_id: "" =>"<computed>" dhcp_options_id: "" =>"<computed>" enable_classiclink: "" =>"<computed>" enable_dns_hostnames: "" =>"<computed>" Apply complete! Resources: 22 added, 0 changed, 0 destroyed. The state of your infrastructure has been saved to the following path. This state is required to modify and destroy your infrastructure, so keep it safe. To inspect the complete state use the terraform show command. State path: terraform.tfstate Outputs: ELB URI = terraform-elb-xxxxxx.us-east-1.elb.amazonaws.com NAT EIP = x.x.x.x RDS Endpoint = terraform-rds.xxxxxx.us-east-1.rds.amazonaws.com:5432 VPC ID = vpc-xxxxxx At the end of a successful deployment, you will notice the Outputs we configured earlier and a message about another important part of Terraform – the state file.(Please refer to: https://www.terraform.io/docs/state/): Terraform stores the state of your managed infrastructure from the last time Terraform was run. By default this state is stored in a local file named terraform.tfstate, but it can also be stored remotely, which works better in a team environment. Terraform uses this local state to create plans and make changes to your infrastructure. Prior to any operation, Terraform does a refresh to update the state with the real infrastructure. In a sense, the state file contains a snapshot of your infrastructure and is used to calculate any changes when a template has been modified. Normally you would keep the terraform.tfstate file under version control alongside your templates. In a team environment however, if you encounter too many merge conflicts you can switch to storing the state file(s) in an alternative location such as S3 (Please see: https://www.terraform.io/docs/state/remote/index.html). Allow a few minutes for the EC2 node to fully initialize then try loading the ELB URI from the preceding Outputs in your browser. You should be greeted by NGINX as shown in the following screenshot: Updates As per Murphy 's Law, as soon as we deploy a template, a change to it will become necessary. Fortunately, all that is needed for this is to update and re-deploy the given template. Let us say we need to add a new rule to the ELB security group (shown in bold below). 
Update the resource "aws_security_group""terraform-elb" block in resources.tf: resource "aws_security_group""terraform-elb" { name = "terraform-elb" description = "ELB security group" vpc_id = "${aws_vpc.terraform-vpc.id}" ingress { from_port = "80" to_port = "80" protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { from_port = "443" to_port = "443" protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } Verify what is about to change $ terraform plan ... ~ aws_security_group.terraform-elb ingress.#: "1" =>"2" ingress.2214680975.cidr_blocks.#: "1" =>"1" ingress.2214680975.cidr_blocks.0: "0.0.0.0/0" =>"0.0.0.0/0" ingress.2214680975.from_port: "80" =>"80" ingress.2214680975.protocol: "tcp" =>"tcp" ingress.2214680975.security_groups.#: "0" =>"0" ingress.2214680975.self: "0" =>"0" ingress.2214680975.to_port: "80" =>"80" ingress.2617001939.cidr_blocks.#: "0" =>"1" ingress.2617001939.cidr_blocks.0: "" =>"0.0.0.0/0" ingress.2617001939.from_port: "" =>"443" ingress.2617001939.protocol: "" =>"tcp" ingress.2617001939.security_groups.#: "0" =>"0" ingress.2617001939.self: "" =>"0" ingress.2617001939.to_port: "" =>"443" Plan: 0 to add, 1 to change, 0 to destroy. Deploy the change: $ terraform apply ... aws_security_group.terraform-elb: Modifying... ingress.#: "1" =>"2" ingress.2214680975.cidr_blocks.#: "1" =>"1" ingress.2214680975.cidr_blocks.0: "0.0.0.0/0" =>"0.0.0.0/0" ingress.2214680975.from_port: "80" =>"80" ingress.2214680975.protocol: "tcp" =>"tcp" ingress.2214680975.security_groups.#: "0" =>"0" ingress.2214680975.self: "0" =>"0" ingress.2214680975.to_port: "80" =>"80" ingress.2617001939.cidr_blocks.#: "0" =>"1" ingress.2617001939.cidr_blocks.0: "" =>"0.0.0.0/0" ingress.2617001939.from_port: "" =>"443" ingress.2617001939.protocol: "" =>"tcp" ingress.2617001939.security_groups.#: "0" =>"0" ingress.2617001939.self: "" =>"0" ingress.2617001939.to_port: "" =>"443" aws_security_group.terraform-elb: Modifications complete ... Apply complete! Resources: 0 added, 1 changed, 0 destroyed. Some update operations can be destructive (Please refer: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html.You should always check the CloudFormation documentation on the resource you are planning to modify to see whether a change is going to cause any interruption. Terraform provides some protection via the prevent_destroy lifecycle property (Please refer: https://www.terraform.io/docs/configuration/resources.html#prevent_destroy). Removal This is a friendly reminder to always remove AWS resources after you are done experimenting with them to avoid any unexpected charges. Before performing any delete operations, we will need to grant such privileges to the (terraform) IAM user we created in the beginning of this article. As a shortcut, you could temporarily attach the Aministrato rAccess managed policy to the user via the AWS Console as shown in the following figure: To remove the VPC and all associated resources that we created as part of this example, we will use terraform destroy: $ terraform destroy Do you really want to destroy? Terraform will delete all your managed infrastructure. There is no undo. Only 'yes' will be accepted to confirm. Enter a value: yes Terraform asks for a confirmation then proceeds to destroy resources, ending with: Apply complete! Resources: 0 added, 0 changed, 22 destroyed. 
Removal

This is a friendly reminder to always remove AWS resources after you are done experimenting with them, to avoid any unexpected charges. Before performing any delete operations, we will need to grant such privileges to the (terraform) IAM user we created at the beginning of this article. As a shortcut, you could temporarily attach the AdministratorAccess managed policy to the user via the AWS Console, as shown in the following figure.

To remove the VPC and all associated resources that we created as part of this example, we will use terraform destroy:

$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

Terraform asks for confirmation, then proceeds to destroy resources, ending with:

Apply complete! Resources: 0 added, 0 changed, 22 destroyed.

Next, we remove the temporary admin access we granted to the IAM user by detaching the AdministratorAccess managed policy, as shown in the following screenshot. Then verify that the VPC is no longer visible in the AWS Console.

Summary

In this article we looked at the importance and usefulness of Infrastructure as Code and ways to implement it using Terraform or AWS CloudFormation. We examined the structure and individual components of both a Terraform and a CloudFormation template, then practiced deploying those onto AWS using the CLI. I trust that the examples we went through have demonstrated the benefits and immediate gains from the practice of deploying infrastructure as code. So far, however, we have only done half the job. With the provisioning stage completed, you would naturally want to start configuring your infrastructure.

Resources for Article:

Further resources on this subject:

Provision IaaS with Terraform [article]
Ansible – An Introduction [article]
Design with Spring AOP [article]
Bringing DevOps to Network Operations

Packt
14 Oct 2016
37 min read
In this article by Steven Armstrong, author of the book DevOps for Networking, we will focus on people and process with regard to DevOps. The DevOps initiative was initially about breaking down silos between development and operations teams and changing a company's operational model. It highlighted methods to unblock IT staff and allow them to work in a more productive fashion, and these mindsets have since been extended to quality assurance testing, security, and now network operations. This article will primarily focus on the evolving role of the network engineer, which is changing like that of the operations engineer before them, and the need for network engineers to learn new skills that will allow them to remain as valuable as they are today as the industry moves towards a completely programmatically controlled operational model. (For more resources related to this topic, see here.)

The article will look at two differing roles, that of the CTO / senior manager and that of the engineer, discussing at length some of the initiatives that can be used to facilitate the cultural changes required to create a successful DevOps transformation for a whole organization, or even just to allow a single department to improve its internal processes by automating everything it does. In this article, the following topics will be covered:

Initiating a change in behavior
Top-down DevOps initiatives for networking teams
Bottom-up DevOps initiatives for networking teams

Initiating a change in behavior

The networking OSI model contains seven layers, but it is widely suggested that the OSI model has an additional eighth layer, named the user layer, which governs how end users integrate and interact with the network. People are undoubtedly a harder beast to master and manage than technology, so there is no one-size-fits-all solution to the vast number of people issues that exist. The seven layers of the OSI model are shown in the following image:

Initiating cultural change and changes in behavior is the most difficult task an organization will face, and it won't occur overnight. To change behavior there must first be obvious business benefits. It is important to first outline the benefits that these cultural changes will bring to an organization, which will enable managers or change agents to make the business justification to implement the required changes. Cultural change and dealing with people and processes is notoriously hard, so separating the tooling discussion from the people and process issues is paramount to the success of any DevOps initiative or project. Cultural change does not happen by itself; it needs to be planned and treated as a company initiative. In a recent study by Gartner, it was shown that selecting the wrong tooling was not the main reason that cloud projects failed; instead, the top reason was a failure to change the operational model.

Reasons to implement DevOps

When implementing DevOps, some myths are often perpetuated, such as DevOps only working for start-ups, it not bringing any value to a particular team, or it being simply a buzzword and a fad. The quantifiable benefits of DevOps initiatives are undeniable when done correctly.
Some of these benefits include improvements to the following: The velocity of change Mean time to resolve Improved uptime Increased number of deployments Cross-skilling between teams The removal of the bus factor of one Any team in the IT industry would benefit from these improvements, so really teams can't afford to not adopt DevOps, as it will undoubtedly improve their business functions. By implementing a DevOps initiative, it promotes repeatability, measurement, and automation. Implementing automation naturally improves the velocity of change and increased number of deployments a team can do in any given day and time to market. Automation of the deployment process allows teams to push fixes through to production quickly as well as allowing an organization to push new products and features to market. A byproduct of automation is that the mean time to resolve will also become quicker for infrastructure issues. If infrastructure or network changes are automated, they can be applied much more efficiently than if they were carried out manually. Manual changes depend on the velocity of the engineer implementing the change rather than an automated script that can be measured more accurately. Implementing DevOps also means measuring and monitoring efficiently too, so having effective monitoring is crucial on all parts of infrastructure and networking, as it means the pace in which root cause analysis can carried out improves. Having effective monitoring helps to facilitate the process of mean time to resolve, so when a production issue occurs, the source of the issue can be found quicker than numerous engineers logging onto consoles and servers trying to debug issues. Instead a well-implemented monitoring system can provide a quick notification to localize the source of the issue, silencing any resultant alarms that result from the initial root cause, allowing the issue to be highlighted and fixed efficiently. The monitoring then hands over to the repeatable automation, which can then push out the localized fix to production. This process provides a highly accurate feedback loop, where processes will improve daily. If alerts are missed, they will ideally be built into the monitoring system over time as part of the incident post-mortem. Effective monitoring and automation results in quicker mean time to resolve, which leads to happier customers, and results in improved uptime of products. Utilizing automation and effective monitoring also means that all members of a team have access to see how processes work and how fixes and new features are pushed out. This will mean less of a reliance on key individuals removing the bus factor of one where a key engineer needs to do the majority of tasks in the team as he is the most highly skilled individual and has all of the system knowledge stored in his head. Using a DevOps model means that the very highly skilled engineer can instead use their talents to help cross skill other team members and create effective monitoring that can help any team member carry out the root cause analysis they normally do manually. This builds the talented engineers deep knowledge into the monitoring system, so the monitoring system as opposed to the talented engineer becomes the go to point of reference when an issue first occurs, or ideally the monitoring system becomes the source of truth that alerts on events to prevent customer facing issues. 
To improve cross-skilling, the talented engineer should ideally help write the automation too, so they are not the only member of the team who can carry out specific tasks.

Reasons to implement DevOps for networking

So how do some of those DevOps benefits apply to traditional networking teams? Some of the common complaints about siloed networking teams today are the following:

Reactive
Slow, often using ticketing systems to collaborate
Manual processes carried out using admin terminals
Lack of preproduction testing
Manual mistakes leading to network outages
Constantly in firefighting mode
Lack of automation in daily processes

Network teams, like infrastructure teams before them, are essentially used to working in siloed teams, interacting with other teams in large organizations via ticketing systems or other suboptimal processes. This is not a streamlined or optimized way of working, which is what led to the DevOps initiative that sought to break down barriers between development and operations staff, and its remit has since widened. Networking does not seem to have been included in this DevOps movement initially, but software delivery can only operate as fast as the slowest component. The slowest component will eventually become the bottleneck or blocker of the entire delivery process. That slowest component often becomes the star engineer in a siloed team who can't manually process enough tickets in a day to keep up with demand, thus becoming the bus factor of one. If that engineer goes off sick, then work is blocked; the company becomes too reliant on them and cannot function efficiently without them. If a team is not operating in the same way as the rest of the business, then all other departments will be slowed down, as the siloed department is not agile enough.

Put simply, the reason networking teams exist in most companies is to provide a service to development teams. Development teams require networking to be deployed so they can deliver applications to production and the business can make money from those products. So networking changes to ACL policies, load-balancing rules, and the provisioning of new subnets for new applications can no longer be deemed acceptable if they take days, weeks, or even months. Networking has a direct impact on the velocity of change, mean time to resolve, uptime, and the number of deployments, which are four of the key performance indicators of a successful DevOps initiative. So networking needs to be included in a company's DevOps model, otherwise all of these quantifiable benefits will be constrained.

Given the rapid way AWS, Microsoft Azure, OpenStack, and Software-Defined Networking (SDN) can be used to provision network functions in the private and public cloud, it is no longer acceptable for network teams not to adapt their operational processes and learn new skills. The caveat is that the evolution of networking has been quick, and they need the support and time to do this. If a cloud solution is implemented and the operational model does not change, then no real quantifiable benefits will be felt by the organization. Cloud projects traditionally do not fail because of technology; cloud projects fail because of the incumbent operational models that hinder them from being a success.
There is zero value to be had from building a brand new OpenStack private cloud, with its open set of extensible APIs to manage compute, networking, and storage if a company doesn't change its operational model and allow end users to use those APIs to self-service their requests. If network engineers are still using the GUI to point and click and cut and paste then this doesn't bring any real business value as the network engineer that cuts and pastes the slowest is the bottleneck. The company may as well stick with their current processes as implementing a private cloud solution with manual processes will not result in a speeding up time to market or mean time to recover from failure. However, cloud should not be used as an excuse to deride your internal network staff with, as incumbent operational models in companies are typically not designed or set up by current staff, they are normally inherited. Moving to public cloud doesn't solve the problem of the operational agility of a company's network team, it is a quick fix and bandage that disguises the deeper rooted cultural challenges that exist. However, smarter ways of working allied with use of automation, measurement, and monitoring can help network teams refine their internal processes and facilitate the developers and operations staff that they work with daily. Cultural change can be initiated in two different ways, grass roots bottom-up initiatives coming from engineers, or top-down management initiatives. Top-down DevOps initiatives for networking teams Top-down DevOps initiatives are when a CTO, Directors, or Senior Manager have to buy in from the company to make changes to the operational model. These changes are required as the incumbent operational model is deemed suboptimal and not set up to deliver software at the speed of competitors, which inherently delays new products or crucial fixes from being delivered to market. When doing DevOps transformations from a top-down management level, it is imperative that some ground work is done with the teams involved, if large changes are going to be made to the operational model, it can often cause unrest or stress to staff on the ground. When implementing operational changes, upper management need to have the buy in of the people on the ground as they will operate within that model daily. Having teams buy in is a very important aspect; otherwise, the company will end up with an unhappy workforce, which will mean the best staff will ultimately leave. It is very important that upper management engage staff when implementing new operational processes and deal with any concerns transparently from the outset, as opposed to going for an offsite management meeting and coming back with an enforced plan, which is all too common a theme. Management should survey the teams to understand how they operate on a daily basis, what they like about the current processes and where their frustrations lie. The biggest impediment to changing an operational model is misunderstanding the current operational model. All initiatives should ideally be led and not enforced. So let's focus on some specific top-down initiatives that could be used to help. Analyzing successful teams One approach would be for the management is to look at other teams within the organization whose processes are working well and are delivering in an incremental agile fashion, if no other team in the organization is working in this fashion, then reach out to other companies. 
Ask if it would be possible to go and look at the way another company operate for a day. Most companies will happily use successful projects as reference cases to public audiences at conferences or meet-ups, as they enjoy showing their achievements, so it shouldn't be difficult to seek out companies that have overcome similar cultural challenges. It is good to attend some DevOps conferences and look at who is speaking, so approach the speakers and they will undoubtedly be happy to help. Management teams should initially book a meeting with the high-performing team and do a question and answer session focusing on the following points, if it is an external vendor then an introduction phone call can suffice. Some important questions to ask in the initial meeting are the following: Which processes normally work well? What tools they actually use on a daily basis? How is work assigned? How do they track work? What is the team structure? How do other teams make requests to the team? How is work prioritized? How do they deal with interruptions? How are meetings structured? It is important not to reinvent the wheel, if a team in the organization already has a proven template that works well, then that team could also be invaluable in helping facilitate cultural change within the networks team. It will be slightly more challenging if focus is put on an external team as the evangelist as it opens up excuses such as it being easier for them because of x, y, and z in their company. A good strategy, when utilizing a local team in the organization as the evangelist, is to embed a network engineer in that team for a few weeks and have them observe and give feedback how the other teams operate and document their findings. This is imperative, so the network engineers on the ground understand the processes. Flexibility is also important, as only some of the successful team's processes may be applicable to a network team, so don't expect two teams to work identically. The sum of parts and personal individuals in the team really do mean that every team is different, so focus on goals rather than the implementation of strict process. If teams achieve the same outcomes in slightly different ways, then as long as work can be tracked and is visible to management, it shouldn't be an issue as long as it can be easily reported on. Make sure pace is prioritized, select specific change agents to make sure teams are comfortable with new processes, so empower change agents in the network team to choose how they want to work by engaging with the team by creating new processes and also put them in charge of eventual tool selection. However, before selecting any tooling, it is important to start with process and agree on the new operational model to prevent tooling driving processes, this is a common mistake in IT. Mapping out activity diagrams A good piece of advice is to use an activity diagram as a visual aid to understand how a team's interactions work and where they can be improved. A typical development activity diagram, with manual hand-off to a quality assurance team is shown here: Utilizing activity diagrams as a visual aid is important as it highlights suboptimal business process flows. In the example, we see a development team's activity diagram. This process is suboptimal as it doesn't include the quality assurance team in the Test locally and Peer review phases. 
Instead it has a formalized QA hand-off phase, which is very late in the development cycle, and a suboptimal way of working as it promotes a development and QA silo, which is a DevOps anti-pattern. A better approach would be to have QA engineers work on creating test tasks and creating automated tests, whereas the development team works on coding tasks. This would allow the development Peer review process to have QA engineers' review and test developer code earlier in the development lifecycle and make sure that every piece of code written has appropriate test coverage before the code is checked in. Another shortcoming in the process is that it does not cater for software bugs found by the quality assurance team or in production by customers, so mapping these streams of work into the activity diagram would also be useful to show all potential feedback loops. If a feedback loop is missed in the overall activity diagram, then it can cause a breakdown in the process flow, so it is important to capture all permutations in the overarching flow that could occur before mapping tooling to facilitate the process. Each team should look at ways of shortening interactions to aid mean time to resolve and improve the velocity of change at which work can flow through the overall process. Management should dedicate some time in their schedule with the development, infrastructure, networking, and test teams and map out what they believe the team processes to be in their individual teams. Keep it high level, this should represent a simple activity swim-lane utilizing the start point where they accept work and the process the team goes through to deliver that work. Once each team has mapped out the initial approach, they should focus on optimizing it and removing the parts of the process they dislike and discuss ways the process could be improved as a team. It may take many iterations before this is mapped out effectively, so don't rush this process, it should be used as a learning experience for each team. The finalized activity diagram will normally include management and technical functions combined in an optimized way to show the overall process flow. Try not to bother using Business Process Management (BPM) software at this stage a simple white board will suffice to keep it simple and informal. It is a good practice to utilize two layers of an activity diagram, so the first layer can be a box that simply says Peer review, which then references a nested activity diagrams outlining what the teams peer review process is. Both need refined but the nested tier of business processes should be dictated by the individual teams as these are specific to their needs, so it's important to leave teams the flexibility they need at this level. It is important to split the two out tiers; otherwise, the overall top layer of activity diagram will be too complex to extract any real value from, so try and minimize the complexity at the top layer, as this will need to be integrated with other teams processes. The activity doesn't need to contain team-specific details such as how an internal team's Peer review process operates as this will always be subjective to that team; this should be included but will be a nested layer activity that won't be shared. Another team should be able to look at a team's top layer activity diagram and understand the process without explanation. 
It can sometimes be useful to first map out a high performing teams top layer activity diagram to show how an integrated joined up business process should look. This will help teams that struggle a bit more with these concepts and allow them to use that team's activity diagram as a guide. This can be used as a point of reference and show how these teams have solved their cross team interaction issues and facilitated one or more teams interacting without friction. The main aim of this exercise is to join up business processes, so they are not siloed between teams, so the planning and execution of work is as integrated as possible for joined up initiatives. Once each team has completed their individual activity diagram and optimized it to the way the team wants, the second phase of the process can begin. This involves layering each team's top layer of their activity diagrams together to create a joined up process. Teams should use this layering exercise as an excuse to talk about suboptimal processes and how the overall business process should look end to end. Utilize this session to remove perceived bottlenecks between teams, completely ignoring existing tools and the constraints of current tools, this whole exercise should be focusing on process not tooling. A good example of a suboptimum process flow that is constrained by tooling would be a stage on a top layer activity diagram that says raise ticket with ticketing system. This should be broken down so work is people focused, what does the person requesting the change actually require? Developers' day job involves writing code and building great features and products, so if a new feature needs a network change, then networking should be treated as part of that feature change. So the time taken for the network changes needs to be catered for as part of the planning and estimation for that feature rather than a ticketed request that will hinder the velocity of change when it is done reactively as an afterthought. This is normally a very successful exercise when engagement is good, it is good to utilize a senior engineer and manager from each team in the combined activity diagram layering exercise with more junior engineers involved in each team included in the team-specific activity diagram exercise. Changing the network team's operational model The network team's operational model at the end of the activity diagram exercise should ideally be fully integrated with the rest of the business. Once the new operational model has been agreed with all teams, it is time to implement it. It is important to note that because the teams on the ground created the operational model and joined up activity diagram, it should be signed off by all parties as the new business process. So this removes the issue of an enforced model from management as those using it have been involved in creating it. The operational model can be iterated and improved over time, but interactions shouldn't change greatly although new interaction points may be added that have been initially missed. A master copy of the business process can then be stored and updated, so anyone new joining the company knows exactly how to interact with other teams. Short term it may seem the new approach is slowing down development estimates as automation is not in place for network functions, so estimation for developer features becomes higher when they require network changes. 
This is often just a truer reflection of reality, as estimations didn't take into account network changes and then they became blockers as they were tickets, but once reported, it can be optimized and improved over time. Once the overall activity diagram has been merged together and agreed with all the teams, it is important to remember if the processes are properly optimized, there should not be pages and pages of high-level operations on the diagram. If the interactions are too verbose, it will take any change hours and hours to traverse each of the steps on the activity diagram. The activity diagram below shows a joined up business process, where work is either defined from a single roadmap producing user stories for all teams. New user stories, which are units of work, are then estimated out by cross-functional teams, including developers, infrastructure, quality assurance, and network engineers. Each team will review the user story and working out which cross-functional tasks are involved to deliver the feature. The user story then becomes part of the sprint with the cross-functional teams working on the user story together making sure that it has everything it needs to work prior to the check-in. After Peer review, the feature or change is then handed off to the automated processes to deliver the code, infrastructure, and network changes to production. The checked-in feature then flows through unit testing, quality assurance, integration, performance testing quality gates, which will include any new tests that were written by the quality assurance team before check-in. Once every stage is passed, the automation is invoked by a button press to push the changes to production. Each environment has the same network changes applied, so network changes are made first on test environments before production. This relies on treating networking as code, meaning automated network processes need to be created so the network team can be as agile as the developers. Once the agreed operational model is mapped out only then should the DevOps transformation begin. This will involve selecting the best of breed tools at every stage to deliver the desired outcome with the focus on the following benefits: The velocity of change Mean time to resolve Improved uptime Increased number of deployments Cross-skilling between teams The removal of the bus factor of one All business processes will be different for each company, so it is important to engage each department and have the buy in from all managers to make this activity a success. Changing the network teams behavior Once a new operational model has been established in the business, it is important to help prevent the network team from becoming the bottleneck in a DevOps-focused continuous delivery model. Traditionally, network engineers will be used to operating command lines and logging into admin consoles on network devices to make changes. Infrastructure engineers adjusted to automation as they already had scripting experience in bash and PowerShell coupled with a firm grounding in Linux or Windows operating systems, so transitioning to configuration management tooling was not a huge step. However, it may be more difficult to persuade network engineers from making that same transition initially. 
Moving network engineers towards coding against APIs and adopting configuration management tools may initially appear daunting, as it is a higher barrier to entry, but having an experienced automation engineer on hand, can help network engineers make this transition. It is important to be patient, so try to change this behavior gradually by setting some automation initiatives for the network team in their objectives. This will encourage the correct behavior and try and incentivize it too. It may be useful to start off automation initiatives by offering training or purchasing particular coding books for teams. It may also be useful to hold an initial automation hack day; this will give network engineers a day away from their day jobs and time to attempt to automate a small process, which is repeated everyday by network engineers. If possible, make this a mandatory exercise, so that it is adopted and make other teams available to cover for the network team, so they aren't distracted. This is a good way of seeing which members of the network team may be open to evangelizing DevOps and automation. If any particular individual stands out, then work with them to help push automation initiatives forward to the rest of the team by making them the champion for automation. Establishing an internal DevOps meet-up where teams present back their automation achievements is also a good way of promoting automation in network teams and this keeping the momentum going. Encourage each team across the business to present back interesting things they have achieved each quarter and incentivize this too by allowing each team time off from their day job to attend if they participate. This leads to a sense of community and illustrates to teams they are part of bigger movement that is bringing real cost benefits to the business. This also helps to focus teams on the common goal of making the company better and breaks down barriers between teams in the process. One approach that should be avoided at all costs is having other teams write all the network automation for networking teams. Ideally, it should be the networking team that evolve and adopt automation, so giving the network team a sense of ownership over the network automation is very important. This though requires full buy in from networking teams and discipline not to revert back to manual tasks at any point even if issues occur. To ease the transition offer to put an automation engineer into the network team from infrastructure or development, but this should only be a temporary measure. It is important to select an automation engineer that is respected by the network team and knowledgeable in networking, as no one should ever attempt to automate something that they cannot operate by hand, so having someone well-versed in networking to help with network automation is crucial, as they will be training the network team so have to be respected. If an automation engineer is assigned to the network team and isn't knowledgeable or respected, then the initiative will likely fail, so choose wisely. It is important to accept at an early stage, that this transition towards DevOps and automation may not be for everyone, so not every network engineer will be able to make the journey. It is all about the network team seizing the opportunity and showing initiative and willingness to pick up and learn new skills. It is important to stamp out disruptive behavior early on which may be a bad influence on the team. 
It is fine for people to have a cynical skepticism at first, but not attempting to change or build new skills shouldn't be tolerated, as it will disrupt the team dynamic. This should be monitored so it doesn't cause automation initiatives to fail or stall just because individuals are proving to be blockers or being disruptive. It is important to note that every organization has its own unique culture, and a company's rate of change will be subject to the cultural uptake of the new processes and ways of working. When initiating cultural change, change agents are necessary and can come from internal IT staff or external sources, depending on the aptitude and appetite of the staff to change. Every change project is different, but it is important that it has the correct individuals involved to make it a success, along with the correct management sponsorship and backing.

Bottom-up DevOps initiatives for networking teams

Bottom-up DevOps initiatives are when an engineer, team lead, or lower management doesn't necessarily have buy-in from the company to make changes to the operational model. However, they realize that although changes can't be made to the overall incumbent operational model, they can try to facilitate positive changes using DevOps philosophies within their own team, helping the team perform better and work more efficiently. Implementing DevOps initiatives bottom-up is much more difficult and challenging at times, as some individuals or teams may not be willing to change the way they work and operate because they don't have to. But it is important not to become disheartened and to do the best possible job for the business. It is still possible to eventually convince upper management to implement a DevOps initiative by using grass-roots initiatives to prove that the process brings real business benefits.

Evangelizing DevOps in the networking team

It is important to try to stay positive at all times; working on a bottom-up initiative can be tiring, but it is important to roll with the punches and not take things too personally. Always remain positive and focus on evangelizing the benefits associated with DevOps processes and positive behavior first within your own team. The first challenge is to convince your own team of the merits of adopting a DevOps approach before even attempting to convince other teams in the business. A good way of doing this is by showing the benefits that a DevOps approach has brought to other companies, such as Google, Facebook, and Etsy, focusing on what they have done in the networking space. A pushback from individuals may be that these companies are unicorns and that DevOps has only worked for them for this reason, so be prepared to be challenged. Seek out initiatives that have been implemented by these companies that the networking team could adopt and that are actually applicable to your company. In order to facilitate an environment of change, work out what your colleagues' drivers are and what motivates them.
Tailor the pitch to individual motivations; the pitch to an engineer and to a manager may be completely different. An engineer on the ground may be motivated by the following:

Doing more interesting work
Developing skills and experience
Helping automate menial daily tasks
Learning sought-after configuration management skills
Understanding the development lifecycle
Learning to code

A manager, on the other hand, will probably be more motivated by an offer to measure KPIs that make their team look better, such as:

Time taken to implement changes
Mean time to resolve failures
Improved uptime of the network

Another way to promote engagement is to invite your networking team to DevOps meet-ups arranged by forward-thinking networking vendors. They may not yet be aware that most networking and load-balancing vendors are now actively promoting automation and DevOps. Some of the new innovations in this space may be enough to change their opinions and make them interested in picking up some of the new approaches so that they can keep pace with the industry.

Seeking sponsorship from a respected manager or engineer

After making the network team aware of the DevOps initiatives, it is important to take this to the next stage. Seek out a respected manager or senior engineer in the networking team who may be open to trying out DevOps and automation. It is important to sell this person the dream: state how you are passionate about implementing some changes to help the team and that you are keen to utilize some proven best practices that have worked well for other successful companies. It is important to be humble; try not to rant or spew generalized DevOps jargon at your peers, which can be very off-putting. Always make reasonable arguments and justify them, while avoiding sweeping statements or generalizations. Try not to appear to be undermining the manager or senior engineer; instead, ask for their help to achieve the goal by seeking their approval to back the initiative or idea. A charm offensive may be necessary at this stage to convince the manager or engineer that it's a good idea, and gradually building up to the request can help, as it may appear insincere if the request comes out of the blue. Potentially analyze the situation over lunch or drinks and gauge whether it is something they would be interested in; there is little point trying to convince people who are stubborn, as they probably will not budge unless the initiative comes from above. Once you have found the courage to broach the subject, it is time to put forward suggestions on how the team could work differently, with the help of a mediator who could take the form of a project manager. Ask for the opportunity to try this out on a small scale, offer to lead the initiative, and ask for their support and backing. It is likely that the manager or senior engineer will be impressed by your initiative and allow you to run with the idea, but they may choose which initiative you implement. So never suggest anything you can't achieve; you may only get one opportunity at this, so it is important to make a good impression. Focus on a small task to start with, typically a pain point, and attempt to automate it. Anyone can write an automation script, but try to make the automation easy to use; find what the team likes in the current process and incorporate aspects of it.
For example, if they often see the output from a command line displayed in a particular way, write the automation script so that it still displays the same output, so the process is not completely alien to them. Try not to hardcode values into scripts and extract them into a configuration files to make the automation more flexible, so it could potentially be used again in different ways. By showing engineers the flexibility of automation, it will encourage them to use it more, show other in the teams how you wrote the automation and ways they could adapt it to apply it to other activities. If this is done wisely, then automation will be adopted by enthusiastic members of the team, and you will gain enough momentum to impress the sponsor enough to take it forward onto more complex tasks. Automate a complex problem with the networking team The next stage of the process after building confidence by automating small repeatable tasks is to take on a more complex problem; this can be used to cement the use of automation within the networking team going forward. This part of the process is about empowering others to take charge, and lead automation initiatives themselves in the future, so will be more time-consuming. It is imperative that the more difficult to work with engineers that may have been deliberately avoided while building out the initial automation is involved this time. These engineers more than likely have not been involved in automation at all at this stage. This probably means the most certified person in the team and alpha of the team, nobody said it was going to be easy, but it will be worth it in the long run convincing the biggest skeptics of the merits of DevOps and automation. At this stage, automation within the network team should have enough credibility and momentum to broach the subject citing successful use cases. It's easier to involve all difficult individuals in the process rather than presenting ideas back to them at the end of the process. Difficult senior engineers or managers are less likely to shoot down your ideas in front of your peers if they are involved in the creation of the process and have contributed in some way. Try and be respectful, even if you do not agree with their viewpoints, but don't back down if you believe that you are correct or give up. Make arguments fact based and non-emotive, write down pros and cons, and document any concerns without ignoring them, you have to be willing to compromise but not to the point of devaluing the solution. There may actually be genuine risks involved that need addressed, so valid points should not be glossed over or ignored. Where possible seek backup from your sponsor if you are not sure on some of the points or feel individuals are being unreasonable. When implementing the complex automation task work as a team, not as an individual, this is a learning experience for others as well as yourself. Try and teach the network team a configuration management tool, they may just be scared try out new things, so go with a gentle approach. Potentially stopping at times to try out some online tutorials to familiarize everyone with the tool and try out various approaches to solve problems in the easiest way possible. Try and show the network engineers how easy it is to use configuration management tools and the benefits. Don't use complicated configuration management tools as it may put them off. The majority of network engineers can't currently code, something that will potentially change in the coming years. 
As stated before, infrastructure engineers at least had a grounding in bash or PowerShell to help get started, so pick tooling that they like and give them options. Try not to enforce tools they are not comfortable with. When utilizing automation, one of the key concerns for network engineers is peer review as they have a natural distrust that the automation has worked. Try and build in gated processes to address these concerns, automation doesn't mean any peer review so create a lightweight process to help. Make the automation easy to review by utilizing source control to show diffs and educate the network engineers on how to do this. Coding can be a scary prospect initially, so propose to do some team exercises each week on a coding or configuration management task. Work on it as a team. This makes it less threatening, and it is important to listen to feedback. If the consensus is that something isn't working well or isn't of benefit, then look at alternate ways to achieve the same goal that works for the whole team. Before releasing any new automated process, test it in preproduction environment, alongside an experienced engineer and have them peer review it, and try to make it fail against numerous test cases. There is only one opportunity to make a first impression, with a new process, so make sure it is a successful one. Try and set up knowledge-sharing session between the team to discuss the automation and make sure everyone knows how to do operations manually too, so they can easily debug any future issues or extend or amend the automation. Make sure that output and logging is clear to all users as they will all need to support the automation when it is used in production. Summary In this article, we covered practical initiatives, which when combined, will allow IT staff to implement successful DevOps models in their organization. Rather than just focusing on departmental issues, it has promoted using a set of practical strategies to change the day-to-day operational models that constrain teams. It also focuses on the need for network engineers to learn new skills and techniques in order to make the most of a new operational model and not become the bottleneck for delivery. This article has provided practical real-world examples that could help senior managers and engineers to improve their own companies, emphasizing collaboration between teams and showing that networking departments now required to automate all network operations to deliver at the pace expected by businesses. 
Key takeaways from this article are:

DevOps is not just about development and operations staff; it can be applied to network teams
Before starting a DevOps initiative, analyze successful teams or companies and what made them successful
Senior management sponsorship is key to creating a successful DevOps model
Your own company's model will not identically mirror other companies', so try not to copy like for like; adapt it so that it works in your own organization
Allow teams to create their own processes and don't dictate processes
Allow change agents to initiate changes that teams are comfortable with
Automate all operational work; start small and build up to larger, more complex problems once the team is comfortable with new ways of working
Successful change will not happen overnight; it will only work through a model of continuous improvement

Useful links on DevOps are:

https://www.youtube.com/watch?v=TdAmAj3eaFI
https://www.youtube.com/watch?v=gqmuVHw-hQw

Resources for Article:

Further resources on this subject:

Jenkins 2.0: The impetus for DevOps Movement [article]
Introduction to DevOps [article]
Command Line Tools for DevOps [article]
Observability as code, secrets as a service, and chaos katas: ThoughtWorks outlines key engineering techniques to trial and assess

Richard Gall
14 Nov 2018
5 min read
ThoughtWorks has just published vol. 19 of its essential Radar report. As always, it's a vital insight into what's beginning to emerge in the technology field. In the techniques quadrant of its radar, there were some really interesting new entries. Let's take a look at some of them now, so you can better plan and evaluate your roadmap and skill set for 2019.

8 of the best new techniques you should be trialling (according to ThoughtWorks)

1% canary: a way to build better feedback loops

This sounds like a weird one, but the concept is simple. It's essentially about building a quick feedback loop to a tiny segment of customers - say, 1%. This can allow engineering teams to learn things quickly and make changes to other aspects of the project as it evolves.

Bounded buy: a smarter way to buy out-of-the-box software solutions

Bounded buy mitigates the scope creep that can cause headaches for businesses dealing with out-of-the-box software. It means those responsible for purchasing software focus only on solutions that are modular, with each 'piece' directly connecting into a particular department's needs or workflow.

Crypto shredding: securing sensitive data

Crypto shredding is a method of securing data that might otherwise be easily replicated or copied. Essentially, it encrypts sensitive data with keys that can easily be removed or deleted, rendering the data unreadable. It adds an extra layer of control over a large data set - a technique that could be particularly useful in a field like healthcare.

Four key metrics - focus on what's most important to build a high performance team

Building a high performance team can be challenging. Accelerate, the team behind the State of DevOps report, highlighted key drivers that engineers and team leaders should focus on: lead time, deployment frequency, mean time to restore (MTTR), and change fail percentage. According to ThoughtWorks, "each metric creates a virtuous cycle and focuses the teams on continuous improvement."

Observability as code - breaking through the limits of traditional monitoring tools

Observability has emerged as a bit of a buzzword over the last 12 months. But in the context of microservices and increased complexity in software architecture, it is nevertheless important. However, the means through which you 'do' observability - a range of monitoring tools and dashboards - can be limiting when it comes to making adjustments and replicating dashboards. This is why treating observability as code is going to become increasingly important. It makes sense - if infrastructure as code is the dominant way we think about building software, why shouldn't it be the way we monitor it too?

Run cost as architecture fitness function

There's a wide assumption that serverless can save you money. This is true when you're starting out, or want to do something quickly, but it's less true as you scale up. If you're using serverless functions repeatedly, you're likely to be paying a lot - more than if you had a slightly less fashionable cloud or on-premises server. To combat this complacency, you should instead watch how much services cost against the benefit they deliver. Seems obvious, but easy to miss if you've just got excited about going serverless.

Secrets as a service

Without wishing to dampen what sounds incredibly cool, secrets as a service are ultimately just elaborate password managers. They can help organizations more easily decouple credentials and API keys from their source code, a move which should ensure improved security - and simplicity.
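As a rough illustration of the idea, the sketch below uses HashiCorp Vault's Terraform provider to resolve a database credential at deployment time rather than committing it to source control; the Vault address, secret path, and key names are hypothetical and would need to match your own setup, and the token is assumed to come from the VAULT_TOKEN environment variable:

provider "vault" {
  # Token supplied via the environment, never stored in the template
  address = "https://vault.example.internal:8200"
}

# Read a secret stored at a hypothetical path managed by the security team
data "vault_generic_secret" "billing_db" {
  path = "secret/billing-app/db"
}

# The credential is resolved at plan/apply time and never lives in the repository
output "billing_db_user" {
  value     = "${data.vault_generic_secret.billing_db.data["username"]}"
  sensitive = true
}

An equivalent flow is possible with AWS Secrets Manager; the point in both cases is that rotation happens in one place and consuming code only ever references the secret by name.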
By using credential rotation, organizations can be much better prepared at tackling and mitigating any security issues. AWS has 'Secrets Manager' while HashiCorp's Vault offers similar functionality. Security chaos engineering In the last edition of Radar, security chaos engineering was in the assess phase - which means ThoughtWorks thinks it's worth looking at, but maybe too early to deploy. With volume 19, security chaos engineering has moved into trial. Clearly, while chaos engineering more broadly has seen slower adoption, it would seem that over the last 12 months the security field has taken chaos engineering to heart. 2 new software engineering techniques to assess Chaos katas If chaos engineering is finding it hard to gain mainstream adoption, perhaps chaos katas is the way forward. This is essentially a technique that helps engineers deploy chaos practices in their respective domains using the training approach known as kata - a Japanese word that simply refers to a set of choreographed movements. In this context, the 'katas' are a set of code patterns that implement failures in a structured way, which engineers can then identify and explore. This is essentially a bottom up way of doing chaos engineering that also gives engineers a deeper insight into their software infrastructure. Infrastructure configuration scanner The question of who should manage your infrastructure is still a tricky one, with plenty of conflicting perspectives. However, from a productivity and agility perspective, putting the infrastructure in the hands of engineers makes a lot of sense. Of course, this could feel like an extra burden - but with an infrastructure configuration scanner, like Scout2 or Watchmen, engineers can ensure that everything is configured correctly. Software engineering techniques need to maintain simplicity as complexity increases There's clearly a diverse range of techniques on the ThoughtWorks Radar. Ultimately, however, the picture that emerges is one where efficiency and observability are key. A crucial part of software engineering will managing increased complexity and developing new tools and processes to instil some degree of simplicity and clarity. Was there anything ThoughtWorks missed?
DevOps Tools and Technologies

Packt
11 Nov 2016
15 min read
In this article by Ritesh Modi, the author of the book DevOps with Windows Server 2016, we will introduce foundational platforms and technologies instrumental in enabling and implementing DevOps practices. (For more resources related to this topic, see here.) These include: Technology stack for implementing Continuous Integration, Continuous Deployment, Continuous Deliver, Configuration Management, and Continuous Improvement. These form the backbone for DevOps processes and include source code services, build services, and release services through Visual Studio Team Services. Platform and technology used to create and deploy a sample web application. This includes technologies such as Microsoft .NET, ASP.NET and SQL Server databases. Tools and technology for configuration management, testing of code and application, authoring infrastructure as code, and deployment of environments. Examples of these tools and technologies are Pester for environment validation, environment provisioning through Azure Resource Manager (ARM) templates, Desired State Configuration (DSC) and Powershell, application hosting on containers through Windows Containers and Docker, application and database deployment through Web Deploy packages, and SQL Server bacpacs. Cloud technology Cloud is ubiquitous. Cloud is used for our development environment, implementation of DevOps practices, and deployment of applications. Cloud is a relatively new paradigm in infrastructure provisioning, application deployment, and hosting space. The only options prior to the advent of cloud was either self-hosted on-premises deployments or using services from a hosting service provider. However, cloud is changing the way enterprises look at their strategy in relation to infrastructure and application development, deployment, and hosting. In fact, the change is so enormous that it has found its way into every aspect of an organization's software development processes, tools, and practices. Cloud computing refers to the practice of deploying applications and services on the Internet with a cloud provider. A cloud provider provides multiple types of services on cloud. They are divided into three categories based on their level of abstraction and degree of control on services. These categories are as follows: Infrastructure as a Service (IaaS) Platform as a Service (PaaS) Software as a Service (SaaS) These three categories differ based on the level of control a cloud provider exercises compared to the cloud consumer. The services provided by a cloud provider can be divided into layers, with each layer providing a type of service. As we move higher in the stack of layers, the level of abstraction increases in line with the cloud provider's control over services. In other words, the cloud consumer starts to lose control over services as you move higher in each column: Figure 1: Cloud Services – IaaS, PaaS and SaaS Figure 1 shows the three types of service available through cloud providers and the layers that comprise these services. These layers are stacked vertically on each other and show the level of control a cloud provider has compared to a consumer. From Figure 1, it is clear that for IaaS, a cloud provider is responsible for providing, controlling, and managing layers from the network layer up to the virtualization layer. Similarly, for PaaS, a cloud provider controls and manages from the hardware layer up to the runtime layer, while the consumer controls only the application and data layers. 
Infrastructure as a Service (IaaS) As the name suggests, Infrastructure as a Service is an infrastructure service provided by a cloud provider. This service includes the physical hardware and its configuration, network hardware and its configuration, storage hardware and its configuration, load balancers, compute, and virtualization. Any layer above virtualization is the responsibility of the consumer to provision, configure, and manage. The consumer can decide to use the provided underlying infrastructure in whatever way best suits their requirements. Consumers can consume the storage, network, and virtualization to provision their virtual machines on top of. It is the consumer's responsibility to manage and control the virtual machines and the things deployed within it. Platform as a Service (PaaS) Platform as a Service enables consumers to deploy their applications and services on the provided platform, consuming the underlying runtime, middleware, and services. The cloud provider provides the services from infrastructure to runtime. The consumers cannot provision virtual machines as they cannot access and control them. Instead, they can only control and manage their applications. This is a comparatively faster method of development and deployment because now the consumer can focus on application development and deployment. Examples of Platform as a Service include Azure Automation, Azure SQL, and Azure App Services. Software as a Service (SaaS) Software as a Service provides complete control of the service to the cloud provider. The cloud provider provisions, configures, and manages everything from infrastructure to the application. It includes the provisioning of infrastructure, deployment and configuration of applications, and provides application access to the consumer. The consumer does not control and manage the application, and can use and configure only parts of the application. They control only their data and configuration. Generally, multi-tenant applications used by multiple consumers, such as Office 365 and Visual Studio Team Services, are examples of SaaS. Advantages of using cloud computing There are multiple distinct advantages of using cloud technologies. The major among them are as follows: Cost effective: Cloud computing helps organizations to reduce the cost of storage, networks, and physical infrastructure. It also prevents them from having to buy expensive software licenses. The operational cost of managing these infrastructures also reduces due to lesser effort and manpower requirements. Unlimited capacity: Cloud provides unlimited resources to the consumer. This ensures applications will never get throttled due to limited resource availability. Elasticity: Cloud computing provides the notion of unlimited capacity and applications deployed on it can scale up or down on an as-needed basis. When demand for the application increases, cloud can be configured to scale up the infrastructure and application by adding additional resources. At the same time, it can scale down unnecessary resources during periods of low demand. Pay as you go: Using cloud eliminates capital expenditure and organizations pay only for what they use, thereby providing maximum return on investment. Organizations do not need to build additional infrastructure to host their application for times of peak demand. Faster and better: Cloud provides ready-to-use applications and faster provisioning and deployment of environments. 
We will use Azure as our preferred cloud provider for demonstrating the samples and examples. However, you can use any cloud provider that offers complete end-to-end services for DevOps. We will use multiple features and services provided by Azure across IaaS and PaaS. We will consume Operational Insights and Application Insights to monitor our environment and application, which will help capture relevant telemetry for auditing purposes. We will provision Azure virtual machines running Windows and Docker containers as the hosting platform, with Windows Server 2016 as the target operating system for our applications on the cloud. Environments will be provisioned through Azure Resource Manager (ARM) templates, and we will use Desired State Configuration (DSC) and PowerShell as our configuration management platform and tools.

We will use Visual Studio Team Services (VSTS), a suite of PaaS services provided by Microsoft on the cloud, to set up and implement our end-to-end DevOps practices. Microsoft also provides the same services as part of Team Foundation Server (TFS) as an on-premises solution. Technologies such as Pester, DSC, and PowerShell can be deployed and configured to run on any platform. These will help both in the validation of our environments and in the configuration of applications and environments as part of our configuration management process.

Windows Server 2016 is a breakthrough operating system from Microsoft, also referred to as a cloud operating system. We will look into Windows Server 2016 in the following section.

Windows Server 2016

Windows Server has come a long way: from Windows NT to Windows 2000 and 2003, then Windows 2008 (R2) and 2012 (R2), and now Windows Server 2016. Windows NT was the first popular Windows server among enterprises, but the true enterprise servers were Windows 2000 and Windows 2003. The popularity of Windows Server 2003 was unprecedented, and it was widely adopted. With Windows Server 2008 and 2008 R2, the idea of the data center took priority, and enterprises with their own data centers adopted them. In 2010, the Microsoft cloud, Azure, was launched. Windows Server 2012 and 2012 R2 were the first steps towards a cloud operating system; they had the blueprints and technology to be seamlessly provisioned on Azure. Now that Azure and cloud are gaining enormous popularity, Windows Server 2016 has been released as a true cloud operating system. The evolution of Windows Server is shown in Figure 2:

Figure 2: Windows Server evolution

Windows Server 2016 is referred to as a cloud operating system because it is built with cloud in mind. It is also referred to as the first operating system that enables DevOps seamlessly, providing relevant tools and technologies and making DevOps simpler and easier through its productivity tools. Let us look briefly into these tools and technologies.

Multiple choices for application platform

Windows Server 2016 comes with many choices of application platform for applications (the container-based options require some one-time setup, sketched after this list). It provides the following:

- Windows Server 2016
- Nano Server
- Windows and Docker Containers
- Hyper-V Containers
- Nested virtual machines
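The container-based options assume that the Containers feature and the Docker engine are present on the server. The following is a hedged sketch of the typical setup for Windows Server 2016; the DockerMsftProvider module and the docker package name follow Microsoft's published guidance for that release and may have changed since:

```powershell
# Enable the Containers feature of Windows Server 2016 (a restart is required).
Install-WindowsFeature -Name Containers

# Install the Docker engine through the Microsoft package provider from the PowerShell Gallery.
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force

Restart-Computer -Force
```

After the restart, the Docker engine runs as a Windows service and the familiar docker client commands are available from PowerShell.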
Windows Server as a hosting platform

Windows Server 2016 can be used in the ways it has always been used, such as hosting applications and providing server functionality. It provides the services necessary to make applications secure, scalable, and highly available, and it offers virtualization, directory services, certificate services, web server, databases, and more. These services can be consumed by the enterprise's services and applications.

Nano Server

Windows Server 2016 provides a new option for hosting applications and services: a lightweight, scaled-down variety of Windows Server containing only the kernel and the drivers necessary to run as an operating system. Nano Servers are also known as headless servers; they have no graphical user interface, and the only way to interact with and manage them is through remote PowerShell. Out of the box they contain no services or features; services must be added to a Nano Server explicitly before use. They are, so far, the most secure servers from Microsoft. They are very lightweight, and their resource requirements and consumption are less than 80 percent of those of a normal Windows server. The number of services running, the number of open ports, the number of running processes, and the amount of memory and storage required are also less than 80 percent of those of a normal Windows server. Even though a Nano Server has just the kernel and drivers out of the box, its capabilities can be enhanced by adding features and deploying Windows applications on it.

Windows Containers and Docker

Containers are, after Nano Server, one of the most revolutionary features added to Windows Server 2016. With the popularity and adoption of Docker containers, which primarily run on Linux, Microsoft decided to bring container services to Windows Server 2016. Containers are a form of operating-system-level virtualization: multiple containers can be deployed on the same operating system, each sharing the host operating system kernel. This is the next level of virtualization after server virtualization (virtual machines). Containers give the impression of complete operating system isolation and independence, even though they use the same host operating system underneath. This is possible through namespace isolation and image layering.

Containers are created from images. Images are immutable and cannot be modified. Each image has a base operating system and a series of instructions that are executed against it. Each instruction creates a new image on top of the previous one and contains only that modification. Finally, a writable layer is stacked on top of these read-only layers; together they form a single image that can be used to provision containers. A container made up of multiple image layers is shown in Figure 3:

Figure 3: Containers made up of multiple image layers

Namespace isolation helps provide containers with pristine new environments: the containers cannot see the host's resources and the host cannot view the container's resources, so for the application within the container, a complete fresh installation of the operating system appears to be available. The containers share the host's memory, CPU, and storage. Because containers offer operating system virtualization, they can host only those operating systems supported by the host operating system: a Windows container cannot run on a Linux host, and a Linux container cannot run on a Windows host operating system.
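To tie the image-layering discussion to something concrete, here is a hedged sketch that writes a small Dockerfile, builds an image, and runs a container. The microsoft/nanoserver base image name reflects the Windows Server 2016 era, and the app folder and start script are hypothetical placeholders:

```powershell
# Each Dockerfile instruction below becomes one read-only image layer.
# The app/ folder and start.ps1 are placeholders for your own application.
@'
FROM microsoft/nanoserver
COPY app/ C:/app/
CMD ["powershell", "-File", "C:/app/start.ps1"]
'@ | Set-Content -Path .\Dockerfile -Encoding Ascii

# Build the layered image and tag it.
docker build -t sample-app:1.0 .

# Provision a container from the image; a writable layer is added on top.
docker run -d --name sample-app sample-app:1.0
```

The layers produced by the build are cached and shared between images, which is one reason container provisioning is so much faster than provisioning full virtual machines.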
Hyper-V containers

Another container technology Windows Server 2016 provides is Hyper-V Containers. These containers are similar to Windows Containers: they are managed through the same Docker client and the same Docker APIs. However, each Hyper-V container has its own scaled-down operating system kernel. They do not share the host operating system; they have their own dedicated operating system, and their own dedicated memory and CPU, assigned in exactly the same way resources are assigned to virtual machines. Hyper-V Containers bring a higher level of isolation from the host: while Windows Containers run in full trust on the host operating system, Hyper-V Containers do not. It is this isolation that differentiates Hyper-V Containers from Windows Containers. Hyper-V Containers are ideal for hosting applications that might otherwise harm the host server and affect every other container and service on it, such as scenarios where users can bring in and execute their own code. Hyper-V Containers provide adequate isolation and security to ensure that applications cannot access and change the host's resources.

Nested virtual machines

Another breakthrough innovation of Windows Server 2016 is that virtual machines can now host virtual machines. We can deploy multiple virtual machines containing all tiers of an application within a single virtual machine. This is made possible by exposing hardware virtualization support to guest virtual machines, together with software-defined networking and storage.

Enabling Microservices

Nano Server and containers provide advanced, lightweight deployment options through which we can decompose an application into multiple smaller, independent services, each with its own scalability and high-availability configuration, and deploy them independently of each other. Microservices help make the entire DevOps lifecycle agile. With microservices, a change to one service does not demand that every other service undergo full test validation; only the changed service needs to be tested rigorously, along with its integration with the other services. Compare this to a monolithic application, where even a single small change results in having to test the entire application. Microservices also allow smaller development teams, testing of each service independently of the others, and deployment of each service in isolation. Continuous Integration, Continuous Deployment, and Continuous Delivery can be executed per service, rather than compiling, testing, and deploying the whole application every time there is a change.

Reduced maintenance

Because of their intrinsic nature, Nano Servers and containers are lightweight and quick to provision. They help provision and configure environments quickly, reducing the overall time needed for Continuous Integration and deployment, and they can be provisioned on Azure on demand without waiting for hours. Because of their small footprint in terms of size, storage, memory, and features, they need less maintenance: they are patched less often with fewer fixes, they are secure by default, and they are less likely to cause application failures, which makes them ideal for operations. The operations team spends fewer hours maintaining these servers compared to normal servers, which reduces the overall cost for the organization and helps DevOps ensure a high-quality delivery.

Configuration management tools

Windows Server 2016 comes with Windows Management Framework 5.0 installed by default. Desired State Configuration (DSC) is the new configuration management platform available out of the box in Windows Server 2016. It has a rich, mature set of features that enable configuration management for both environments and applications. With DSC, the desired state and configuration of environments are authored as Infrastructure as Code and executed on every server on a scheduled basis; the current state of each server is compared with the documented desired state, and the server is brought back to the desired state if it has drifted. DSC is available as part of PowerShell, and PowerShell is used to author these configuration documents.

Windows Server 2016 also provides Pester, a PowerShell unit testing framework. Historically, unit testing of infrastructure environments was a missing capability. Pester enables the testing of infrastructure provisioned either manually or as Infrastructure as Code through DSC configurations or ARM templates. This provides operational validation of the entire environment, bringing a high level of cadence and confidence to Continuous Integration and deployment processes.
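As a minimal sketch of how these two pieces fit together, the DSC configuration below declares that IIS must be present, and the Pester tests then validate the resulting environment; the node, paths, and the specific checks are illustrative:

```powershell
# Desired state: IIS installed on the local server (Infrastructure as Code).
Configuration WebServerConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

# Compile the configuration to a MOF file and apply it.
WebServerConfig -OutputPath 'C:\Dsc\WebServerConfig'
Start-DscConfiguration -Path 'C:\Dsc\WebServerConfig' -Wait -Verbose
```

A Pester script can then assert that the environment really is in the desired state, for example as a validation step in a release pipeline:

```powershell
# Environment validation (Pester 3.x syntax, as shipped with Windows Server 2016).
Describe 'Web server environment' {
    It 'has the Web-Server feature installed' {
        (Get-WindowsFeature -Name Web-Server).Installed | Should Be $true
    }

    It 'responds on port 80' {
        (Test-NetConnection -ComputerName 'localhost' -Port 80).TcpTestSucceeded | Should Be $true
    }
}
```

Running Invoke-Pester against this script in the pipeline provides the operational validation described above.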
Deployment and packaging

Package management and the deployment of utilities and tools through automation is a relatively new concept in the Windows world, although it has been ubiquitous in the Linux world for a long time. Package management helps you search for, save, install, deploy, upgrade, and remove software packages from multiple sources and repositories on demand. Public repositories such as Chocolatey and the PowerShell Gallery (PSGallery) store readily deployable packages, and tools such as NuGet can connect to these repositories and help with package management. They also help with the versioning of packages: applications that rely on a specific package version can download it on an as-needed basis. Package management helps with building environments and deploying applications, and package deployment is much easier and faster with this out-of-the-box Windows feature.
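As a brief sketch of this out-of-the-box tooling, the PackageManagement and PowerShellGet cmdlets in Windows Management Framework 5.0 can discover and install packages from the PSGallery repository; the module chosen here and the pinned version are only examples:

```powershell
# Discover a module in the PowerShell Gallery.
Find-Module -Name Pester -Repository PSGallery

# Install the latest version, or pin an exact version an application depends on.
Install-Module -Name Pester -Repository PSGallery -Force
Install-Module -Name Pester -RequiredVersion 3.4.0 -Repository PSGallery -Force
```

The same pattern extends to other providers and repositories, which is what makes environment builds and application deployments repeatable.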
Summary

We have covered a lot of ground in this article. DevOps concepts were discussed, mapping technologies to those concepts, and we saw the impetus DevOps can gain from technology. We looked at cloud computing and the different services provided by cloud providers, and from there went on to look at the benefits Windows Server 2016 brings to DevOps practices and how it makes DevOps easier and faster with its native tools and features.