
Tech Guides - Cloud & Networking

65 Articles

Why choose Ansible for your automation and configuration management needs?

Savia Lobo
03 Jul 2018
4 min read
Of late, organizations have been moving towards automating their systems. The benefits are many: it saves a huge chunk of time, and it frees people and budgets from simple recurring tasks such as updates. A few years back, Chef and Puppet were the two popular names in software automation. Over the years they have gained a strong rival that has surpassed them and now sits among the best-known automation tools: Ansible.

Ansible is an open source tool for IT configuration management, deployment, and orchestration. It is perhaps the definitive configuration management tool. Chef and Puppet may have got there first, but Ansible's rise over the last couple of years is largely down to its impressive automation capabilities. And with operations engineers and sysadmins facing constant time pressure, the need to automate isn't a "nice to have" but a necessity. Its tagline is "allowing smart people to do smart things." It's hard to argue that any software should aim to do much more than that.

Ansible's rise in popularity

Ansible, which originated in 2013, is a leader in IT automation and DevOps. It was bought by Red Hat in 2015 to further Red Hat's goal of creating frictionless IT. Red Hat acquired Ansible for its simplicity and versatility. Entering the DevOps world after Puppet also gave it a second-mover advantage: it can orchestrate multi-tier applications in the cloud, and it improves server uptime by implementing an immutable server architecture for deploying, creating, deleting, or migrating servers across different clouds. For those starting afresh, automation workflows are easy to write and maintain, and a plethora of ready-made modules makes it easy for newcomers to get started.

Benefits to Red Hat and its community

Ansible complements Red Hat's popular cloud products, OpenStack and OpenShift. Red Hat had proved to be a capable and safe open source platform for enterprises, but it was not easy to use, and many developers migrated to other cloud services for easier and simpler deployment options. By adopting Ansible, Red Hat finally provided an easy way to automate and modernize its IT solutions. Customers can now focus on automating various baseline tasks. It also helps Red Hat refresh its traditional playbooks; enterprises can manage IT services and infrastructure together with the help of Ansible's YAML (a minimal playbook sketch appears at the end of this piece).

The most prominent benefit of using Ansible, for both enterprises and individuals, is that it is agentless. It achieves this by leveraging SSH and Windows Remote Management (WinRM). Both approaches reuse connections and generate minimal network traffic. The agentless approach also has security benefits and improves resource utilization on both the clients and the central management server. Users don't have to worry about managing agents across the network or servers, and can focus on higher-priority tasks.

What can you use it for?

Easy configuration: Ansible provides configurations that are easy to understand for both humans and machines. It also includes many modules and user-built roles, so you don't have to start building from scratch.

Application lifecycle management: You can rest assured about your application development lifecycle with Ansible. Here, it is used for defining the application, while Red Hat Ansible Tower manages the entire deployment process.
Continuous delivery: Manage your business with the help of Ansible's push-based architecture, which gives you sturdier control over all the required operations. Orchestrating server configuration in batches makes it easy to roll out changes across the environment.

Security and compliance: Once security policies are defined in Ansible, scanning and remediating issues across the site can be integrated into other automated processes. Job scanning and system tracking ensure that systems do not deviate from the parameters assigned. Additionally, Ansible Tower provides secure storage for machine credentials and role-based access control (RBAC).

Orchestration: Ansible brings a high degree of discipline and order to the environment. This ensures that all application pieces work in unison and remain easily manageable, despite the complexity of the applications involved.

Though it is popular as an IT automation tool, many organizations use Ansible in combination with Chef and Puppet, because it can have scaling issues and lag in performance for larger deployments. Don't let that stop you from trying Ansible; it is much loved by DevOps practitioners because it is written in Python and is therefore easy to learn. Moreover, it offers credible support and an agentless architecture, which makes it easy to control servers and much more within an application development environment.

Read next:
An In-depth Look at Ansible Plugins
Mastering Ansible – Protecting Your Secrets with Ansible
Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0
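To make the claim about human-readable YAML concrete, here is a minimal playbook sketch. It is illustrative only: the host group, package name, and file name are assumptions rather than anything from the article, and real playbooks will vary with your inventory and modules.

```yaml
# site.yml - a minimal, hypothetical playbook (host group and package names are illustrative)
- name: Keep web servers patched and serving nginx
  hosts: webservers        # assumes a "webservers" group exists in your inventory
  become: true             # privilege escalation over the existing SSH connection (agentless)
  tasks:
    - name: Apply pending package upgrades
      apt:
        upgrade: dist
        update_cache: true

    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is enabled and running
      service:
        name: nginx
        state: started
        enabled: true
```

Running ansible-playbook -i inventory site.yml pushes this desired state to every host in the group over plain SSH; no agent needs to be installed or maintained on the managed machines, which is the agentless point made above.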


Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)

Robi Sen
15 Jul 2015
8 min read
With the growth of cloud services such as Google's Cloud Platform, Microsoft Azure, Amazon Web Services, and many others, developers and organizations have unprecedented access to low-cost, high-performance infrastructure that can scale as needed. Everyone from individuals to major companies has embraced the cloud as the platform of choice to host their IT services and applications, especially small companies and start-ups. Yet for many reasons, those who have embraced the cloud have often been slow to recognize the unique security considerations that face cloud users.

Unlike hosting your own servers, the cloud operates on a shared-risk model, where the cloud provider focuses on providing physical security, failover, and high-level network perimeter protection. The cloud user is understood to be securing their operating systems, data, applications, and the like. This means that while your cloud provider provides incredible services for your business, you are responsible for much of the security, including implementing access controls, intrusion prevention, intrusion detection, encryption, and the like. Often, because cloud services are made so accessible and easy to set up, users don't bother to secure them, or don't even know that they need to. If you're new to the cloud and new to security, this post is for you. While we will focus on using Amazon Web Services, the basic concepts apply to most cloud services regardless of vendor.

Access control

Since you're using virtual resources that are already set up in the AWS cloud, one of the most important things you need to do right away is secure access to your account and images. First, you want to lock down your AWS account. This is the login and password that you are assigned when you set up your AWS account, and anyone who has access to it can purchase new services, change your services, and generally cause complete havoc for you. Indeed, AWS accounts sell for good money on hacker and darknet sites, usually to buyers who want to set up Bitcoin miners on your account. Don't give yours out or make it easily accessible. For example, many developers embed logins, passwords, and AWS keys into their code, which is a very bad practice, and then have their accounts compromised by criminals.

The first thing you need to do after getting your Amazon login and password is to store it using a tool such as mSecure or LastPass that allows you to save it in an encrypted file or database. It should then never go into a file, document, or public place. It is also strongly advised to use Multi-Factor Authentication (MFA). Amazon allows MFA via physical devices or straightforward smartphone applications. You can read more about Amazon's MFA here and here.

Once your AWS account information is secure, you should then use AWS's Identity and Access Management (IAM) system to give each user under your master AWS account access with specific privileges, according to best practices. Even if you are the only person who uses your AWS account, you should consider using IAM to create a couple of users that have access based on their role, such as a content developer who only has the ability to move files in and out of specific directories, or a developer who can start and stop instances, and the like. Then always use the role with the fewest privileges needed to get your work done.
While this might seem cumbersome, you will quickly get used to it, you will be much safer, and, if your project grows, you will already have the groundwork in place to ramp up safely.

Creating an IAM group and user

In this section, we will create an administrator group and add ourselves as the user. If you do not currently have an AWS account, you can get a free account from AWS here. Be advised that you will need a valid credit card and a phone number to validate your account, but Amazon will give you the account to play with free for a year (see terms here). For this example, what you need to do is:

Create an administrator group that we will give group permissions to for our AWS account's resources
Make a user for ourselves and then add the user to the administrator group
Finally, create a password for the user so we can access the AWS Management Console

To do this, first sign into the IAM console. Click on the Groups link and then select Create New Group. Name the new group Administrator and select Next Step. Next, we need to assign a group policy. You can build your own (see more information here), but this should generally be avoided until you really understand AWS security policies and AWS in general. Amazon has a number of predeveloped policy templates that work great until your applications and architecture get more complex and grow. So for now, simply select the Administrator Access policy. You should now see a screen that shows your new policy. You can then click Next and then Create Group. You should now see the new Administrator group policy under Group Name.

In reality, you would probably want to create all your different group accounts and then associate your users, but for now we are just going to create the Administrator group, then create a single user and add it to that group.

Creating a new IAM user account

Now that you have created an Administrator group, let's add a user to it. To do this, go to the navigation menu, select Users, and then click Create New Users. You should then see a new screen. You have the option to create access keys for this user. Depending on the user, you may or may not need to do this, but for now go ahead and select that option box and then select Create. IAM will now create the user and give you the option to view the new key or download and save it. Go ahead and download the credentials. It is usually good practice to then save those credentials into your password manager, such as mSecure or LastPass, and not share them with anyone except the specific user.

Once you have downloaded the user's credentials, go ahead and select Close, which will return you to the Users screen. Now click on the user you created and select Add User to Groups. You should now see the group listing, which only shows one group if you're following along. Select the Administrator group and then select Add to Groups. You should be taken back to the Users content page and should now see that your user is assigned to the Administrator group.

Now, staying on the same screen, scroll down until you see the Security Credentials part of the page and click Manage Password. You will be asked to either select an auto-generated password or assign a custom password. Go ahead and create your own password and select Apply. You should be taken back to your user content screen and, under the security credentials section, you should now see that the password field has been updated from No to Yes. You should also strongly consider using your MFA tool, in my case the AWS virtual MFA Android application, to make the account even more secure.
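If you would rather script this setup than click through the console, the same group and user can be created with the AWS SDK. The following is a minimal sketch in Python using boto3; it assumes your own administrator credentials are already configured outside the code (for example, via the AWS CLI or environment variables), and the user name is purely illustrative.

```python
# create_admin_user.py - a hedged sketch of the console walkthrough above, using boto3
import boto3

iam = boto3.client("iam")  # credentials come from your environment/CLI config, never from code

# 1. Create the Administrator group and attach the managed AdministratorAccess policy.
iam.create_group(GroupName="Administrator")
iam.attach_group_policy(
    GroupName="Administrator",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

# 2. Create a user, generate access keys, and add the user to the group.
iam.create_user(UserName="example-admin")                 # illustrative user name
keys = iam.create_access_key(UserName="example-admin")
print("Store this key ID in your password manager:", keys["AccessKey"]["AccessKeyId"])

iam.add_user_to_group(GroupName="Administrator", UserName="example-admin")

# 3. Give the user a console password (the scripted equivalent of Manage Password).
iam.create_login_profile(
    UserName="example-admin",
    Password="choose-a-strong-password-here",
    PasswordResetRequired=True,
)
```

As with the console flow, treat the generated secret access key like a password: download it once, store it in your password manager, and never embed it in code.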
Summary

In this article, we talked about how the first step in securing your cloud services is controlling access to them. We looked at how AWS allows this via IAM, letting you create groups and group security policies tied to a group, and then how to add users to the group, enabling you to secure your cloud resources based on best practices. Now that you have done that, you can go ahead and add more groups and/or users to your AWS account as you need to. However, before you do that, make sure you thoroughly read the AWS IAM documentation; links are supplied at the end of the post.

Resources for AWS IAM
IAM User Guide
Information on IAM Permissions and Policies
IAM Best Practices

About Author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus-year career in technology, engineering, and research has led him to work on cutting-edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as UnderArmour, Sony, CISCO, IBM, and many others to help build out new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.


Is your Enterprise Measuring the Right DevOps Metrics?

Guest Contributor
17 Sep 2018
6 min read
As of 2018, 17% of companies worldwide have fully adopted DevOps, while 14% are still in the consideration stage. Amazon, Netflix and Target are a few of the companies that have attained success with DevOps. Amazon's move to Amazon Web Services gave it the ability to scale server capacity up or down as needed, allowing its engineers to deploy their own code to the server whenever they wanted to. This enabled continuous deployment, reducing both the duration and the number of outages experienced by companies using AWS. Netflix used DevOps to improve its cloud infrastructure and to ensure smooth streaming of videos online.

When you say "we have adopted DevOps in our enterprise", what do you really mean? It means you have adopted a software philosophy that integrates software development and operations, reducing the time to market for your end product. The questions which come next are: How do you measure the true success of DevOps in your organization? Have you been working on the right metrics all along?

Let's first talk about how DevOps is usually measured in organizations. It is all about uptime, transactions per second, bugs fixed, commits, and other operational as well as productivity metrics. This is what most organizations tend to look at when you talk about DevOps.

But are these the right DevOps metrics?

For a while, companies have been working with the set of metrics discussed above to determine the success of DevOps. However, these are not necessarily the right metrics. A metric is an indicator of the performance of DevOps, and not every single indicator will determine success. Your metrics might differ based on the data you collect. You will end up collecting large volumes of data; however, not every piece of data available can be converted into a metric. Here's how you can determine the metrics for your DevOps.

Avoid using too many metrics

You should use at most 10 metrics; in fact, we suggest using fewer. The fewer the metrics used, the better your judgment will be. You should broaden your perspective when choosing the metrics. It is important to choose metrics that account for overall organizational health, and not just the operational and development data.

Metrics that connect with your organization

What is the ultimate aim of your organization? How would you determine that your organization is successful? The answers to these questions will help you determine the metrics. Most organizations determine their success based on customer experience and overall operational efficiency. You will need to choose metrics that help you determine these two values.

Tie the metrics to your goals

As a businessperson, you are more concerned with customer attrition, bad feedback and non-returning customers than with the lines of code that go into creating a successful software product. You will need to tie your DevOps success metrics to these goals. While you are concerned about the failure of your website or the downtime, the true concern is the customer's abandonment of your website.

Causes that affect the DevOps teams

While the business metrics will help you measure success to a certain extent, there are certain things that affect the operations and development teams. You will need to check these causes, and go to the root to understand how they affect the DevOps teams and what needs to be done to create a balance between the development and operational teams.
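Before looking at each metric in detail in the next part, here is a minimal, hypothetical sketch of how the variables discussed below (deployment frequency, lead time, change failure rate, and MTTR) might be computed from a team's deployment log. The record fields and numbers are made up for illustration; they are not from the article.

```python
# devops_metrics.py - hypothetical sketch; field names and sample data are illustrative
from datetime import datetime, timedelta

# Each record: when the change was committed, when it was deployed,
# whether the deployment caused a failure, and how long recovery took.
deployments = [
    {"committed": datetime(2018, 9, 1, 9, 0), "deployed": datetime(2018, 9, 1, 15, 0),
     "failed": False, "recovery": timedelta(0)},
    {"committed": datetime(2018, 9, 3, 10, 0), "deployed": datetime(2018, 9, 4, 11, 0),
     "failed": True, "recovery": timedelta(hours=2)},
    {"committed": datetime(2018, 9, 6, 14, 0), "deployed": datetime(2018, 9, 6, 18, 0),
     "failed": False, "recovery": timedelta(0)},
]

period_days = 7
deployment_frequency = len(deployments) / period_days        # deployments per day
lead_time = sum((d["deployed"] - d["committed"] for d in deployments), timedelta()) / len(deployments)

failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)       # share of deployments causing an issue
mttr = sum((f["recovery"] for f in failures), timedelta()) / len(failures) if failures else timedelta(0)

print(f"Deployment frequency: {deployment_frequency:.2f} per day")
print(f"Average lead time:    {lead_time}")
print(f"Change failure rate:  {change_failure_rate:.0%}")
print(f"MTTR:                 {mttr}")
```

The arithmetic is trivial on purpose; the point made above still holds: each variable only becomes a useful metric once it is tied back to an organizational goal such as speed to market or customer experience, rather than reported for its own sake.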
Next, we will talk about the actual DevOps metrics that you should take into consideration when deriving value for your organization and measuring success.

Velocity

With most enterprise elements being automated, velocity is one of the most important metrics that will determine the success of your DevOps. The idea is to get updates out to users as quickly as possible without compromising on security or reliability. You stay competitive, offer new features and boost customer retention. The two variables that help measure this tangible metric are deployment frequency and deployment lead time. The former measures the frequency of releases, and the latter measures the speed at which the team commits code and pushes out the update.

Service quality

Service quality directly impacts the goals set forth by the organization, and is intangible. The idea is to maintain service quality throughout the releases and changes made to the application. The variables that determine this metric are change failure rate, number of support tickets and MTTR (mean time to recovery). The change failure rate is the rate at which a released update leads to an error or fault in the application. Bugs or performance issues in your releases that users report show up as support tickets or errors. MTTR measures the issues resolved and the time taken to resolve them. The idea is to be more responsive to the problems faced by the customers.

User experience

This is the final metric that impacts the success of your DevOps. You need to check whether all the features and updates you have insisted upon are in sync with user needs. The variables concerned with measuring this aspect are feature usage and business impact. You will need to check how many people from the target audience are using the new feature update you have released, and determine their personas. You can check the number of sessions, completed transactions and session duration to quantify the number of people, and check their profiles to get their personas.

Planning your DevOps strategy

It is not easy to roll out DevOps in your organization and expect agility immediately. You need to have a well-thought-out strategy, align it to your business goals, and determine the effective DevOps metrics to measure the success of your rollout. Planning is of the essence for a thorough rollout of DevOps. It is important to consider all of your data when you have DevOps in your organization. Make sure you store and analyze every piece of data, and use the data that feeds the DevOps metrics you have determined for success. It is important that the DevOps metrics are aligned to your business goals and the objectives you have defined.

About Author:

Vishal Virani is Founder and CEO of Coruscate Solutions, a mobile app development company. He enjoys writing about technology, mobile apps, custom web development and the latest industry trends.


How to move from server to serverless in 10 steps

Erik Kappelman
27 Sep 2017
7 min read
If serverless computing sounds a little contrived to you, you're right, it is. Serverless computing isn't really serverless, well not yet anyway. It would be more accurate to call it serverless development. If you are a backend boffin, or you spend most of your time writing Dockerfiles, you are probably not going to be super into serverless computing. This is because serverless computing allows applications to consist of chunks of code that do things in response to stimulus. What makes this different from other development is that the chunks of code don't need to be woven into a traditional frontend-backend setup. Instead, serverless computing allows code to execute without the need for complicated backend configurations. Additionally, the services that provide serverless computing can easily scale an application as necessary, based on the activity the application is receiving.

How AWS Lambda supports serverless computing

We will discuss Amazon Web Services (AWS) Lambda, Amazon's serverless computing offering. We are going to go over one of Amazon's use cases to better understand the value of serverless computing, and how someone can get started.

1. Have an application, build an application, or have an idea for an application. This could also be step zero, but you can't really have a serverless application without an application. We are going to be looking at a simple abstraction of an app, but if you want to put this into practice, you'll need a project.

2. Create an AWS account, if you don't already have one, and set up the AWS Command Line Interface on your machine. Quick note: I am on OSX and I had a lot of trouble getting the AWS Command Line Interface installed and working. AWS recommends using pip to install, but the bash command never seemed to end up in the right place. Instead I used Homebrew and then it worked fine.

3. Navigate to S3 on AWS and create two buckets for testing purposes. One is going to be used for uploading, and the other is going to receive the uploaded pictures after they have been transformed. The bucket that receives the transformed pictures should have a name of the form "other bucket's name" + "resized". The code we are using requires this format in order to work. If you really don't like that, you can modify the code to use a different format.

4. Navigate to the AWS Lambda Management Console and choose the Create Function option, choose Author from scratch, and click the empty box next to the Lambda symbol in order to create a trigger. Choose S3. Now specify the bucket that the pictures are going to be initially uploaded into. Then, under the event type, choose Object Created (All). Leave the trigger disabled and press the Next button. Give your function a name and, for now, we are done with the console.

5. On your local machine, set up a workspace by creating a root directory for the project with a node_modules folder. Then install the async and gm libraries.

6. Create a JavaScript file named index.js and copy and paste the code from the end of the blog into the file. It needs to be named index.js for this example to work. There are settings that determine the function's entry point, which can be changed to look for a different filename. The code we are using comes from an example on AWS located here. I recommend you check out their documentation.

7. If we look at the code that we are pasting into our editor, we can learn a few things about using Lambda. We can see that there is an aws-sdk in use and that we use that dependency to create an S3 object.
We get the information about the source bucket from the event object that is passed into the main function. This is why we named our buckets the way we did. We can get our uploaded picture using the getObject method of our S3 object. We have the S3 file information we want to get from the event object passed into the main function. This code grabs that file, puts it into a buffer, uses the gm library to resize the object, and then uses the same S3 object, specifying the destination bucket this time, to upload the file.

8. Now we are ready: ZIP up your root folder and let's deploy this function to the new Lambda instance that we have created. Quick note: while using OSX I had to zip my JS file and node_modules folder directly into a ZIP archive instead of recursively zipping the root folder. For some reason the upload doesn't work unless the zipping is done this way; this is at least true when using OSX. We are going to upload using the Lambda Management Console; if you're fancy, you can use the AWS Command Line Interface. So, get to the management console and choose Upload a .ZIP File. Click the upload button, specify your ZIP file and then press the Save button.

9. Now we will test our work. Click the Actions drop down and choose the Configure test event option. Now choose the S3 PUT test event and specify the bucket that images will be uploaded to. This creates a test that simulates an upload and, if everything goes according to plan, your function should pass.

10. Profit!

I hope this introduction to AWS Lambda serves as a primer on serverless development in general. The goal here is to get you started. Serverless computing has some real promise. As a primarily front-end developer, I revel in the idea of serverless anything. I find that the absolute worst part of any development project is the back-end. That being said, I don't think that sysadmins will be lining up for unemployment checks tomorrow. Once serverless computing catches on, and maybe grows and matures a little bit, we're going to have a real juggernaut on our hands.

The code below is used in this example and comes from AWS:

// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');

// constants
var MAX_WIDTH = 100;
var MAX_HEIGHT = 100;

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
    // Read options from the event.
    console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    var dstBucket = srcBucket + "resized";
    var dstKey = "resized-" + srcKey;

    // Sanity check: validate that source and destination are different buckets.
    if (srcBucket == dstBucket) {
        callback("Source and destination buckets are the same.");
        return;
    }

    // Infer the image type.
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        callback("Could not determine the image type.");
        return;
    }
    var imageType = typeMatch[1];
    if (imageType != "jpg" && imageType != "png") {
        callback(`Unsupported image type: ${imageType}`);
        return;
    }

    // Download the image from S3, transform, and upload to a different S3 bucket.
    async.waterfall([
        function download(next) {
            // Download the image from S3 into a buffer.
            s3.getObject({
                Bucket: srcBucket,
                Key: srcKey
            }, next);
        },
        function transform(response, next) {
            gm(response.Body).size(function(err, size) {
                // Infer the scaling factor to avoid stretching the image unnaturally.
                var scalingFactor = Math.min(
                    MAX_WIDTH / size.width,
                    MAX_HEIGHT / size.height
                );
                var width = scalingFactor * size.width;
                var height = scalingFactor * size.height;

                // Transform the image buffer in memory.
                this.resize(width, height)
                    .toBuffer(imageType, function(err, buffer) {
                        if (err) {
                            next(err);
                        } else {
                            next(null, response.ContentType, buffer);
                        }
                    });
            });
        },
        function upload(contentType, data, next) {
            // Stream the transformed image to a different S3 bucket.
            s3.putObject({
                Bucket: dstBucket,
                Key: dstKey,
                Body: data,
                ContentType: contentType
            }, next);
        }
    ], function (err) {
        if (err) {
            console.error(
                'Unable to resize ' + srcBucket + '/' + srcKey +
                ' and upload to ' + dstBucket + '/' + dstKey +
                ' due to an error: ' + err
            );
        } else {
            console.log(
                'Successfully resized ' + srcBucket + '/' + srcKey +
                ' and uploaded to ' + dstBucket + '/' + dstKey
            );
        }
        callback(null, "message");
    });
};

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.


Analyzing enterprise application behavior with Wireshark 2

Vijin Boricha
09 Jul 2018
19 min read
One of the important things that you can use Wireshark for is application analysis and troubleshooting. When an application slows down, it can be due to the LAN (quite uncommon in a wired LAN), the WAN service (common, due to insufficient bandwidth or high delay), or slow servers or clients. It can also be due to slow or problematic applications. The purpose of this article is to get into the details of how applications work, and to provide relevant guidelines and recipes for isolating and solving these problems. In the first recipe, we will learn how to find out and categorize the applications that work over our network. Then, we will go through various types of applications to see how they work, how networks influence their behavior, and what can go wrong. Further, we will learn how to use Wireshark to resolve and troubleshoot common applications that are used in an enterprise network: Microsoft Terminal Server and Citrix, databases, and Simple Network Management Protocol (SNMP).

This is an excerpt from Network Analysis using Wireshark 2 Cookbook - Second Edition written by Nagendra Kumar Nainar, Yogesh Ramdoss, Yoram Orzach.

Find out what is running over your network

The first thing to do when monitoring a new network is to find out what is running over it. There are various types of applications and network protocols, and they can influence and interfere with each other when all of them are running over the network. In some cases, you will have different VLANs, different Virtual Routing and Forwardings (VRFs), or servers that are connected to virtual ports in a blade server. Eventually, everything is running on the same infrastructure, and they can influence each other.

There is a common confusion between VRFs and VLANs. Even though their purpose is much the same, they are configured in different places. While VLANs are configured in the LAN to provide network separation at OSI layers 1 and 2, VRFs are multiple instances of routing tables made to coexist in the same router. This is a layer 3 operation that separates different customers' networks. VRFs are generally seen in service provider environments using Multi-Protocol Label Switching (MPLS) to provide layer 3 connectivity to different customers over the same router network, in such a way that no customer can see any other customer's network.

In this recipe, we will see how to get to the details of what is running over the network, and the applications that can slow it down. The term blade server refers to a server enclosure, which is a chassis with server shelves on the front and LAN switches on the back. There are several different names for it; for example, IBM calls them blade center and HP calls them blade system.

Getting ready

When you get into a new network, the first thing to do is connect Wireshark and sniff what applications and protocols are running over it. Make sure you follow these points: when you are required to monitor a server, port-mirror it and see what is running on its connection to the network; when you are required to monitor a remote office, port-mirror the router port that connects you to the WAN connection, then check what is running over it; when you are required to monitor a slow connection to the internet, port-mirror it to see what is going on there. In this recipe, we will see how to use the Wireshark tools for analyzing what is running and what can cause problems.

How to do it...
For analyzing, follow these steps:

Connect Wireshark using one of the options mentioned in the previous section. You can use the following tools: navigate to Statistics | Protocol Hierarchy to view the protocols that run over the network and their percentage of the total traffic, and navigate to Statistics | Conversations to see who is talking and what protocols are used.

The Protocol Hierarchy feature opens a window that helps you analyze who is talking over the network. In it you can see the protocol distribution, for example: Ethernet: IP, Logical-Link Control (LLC), and configuration test protocol (loopback); Internet Protocol Version 4: UDP, TCP, Protocol Independent Multicast (PIM), Internet Group Management Protocol (IGMP), and Generic Routing Encapsulation (GRE). If you click on the + sign, all the underlying protocols will be shown. To see a specific protocol's throughput, click down to that protocol; you will see the application's average throughput during the capture (HTTP, for example). Clicking on the + sign to the left of HTTP will open a list of protocols that run over HTTP (XML, MIME, JavaScript, and more) and their average throughput during the capture period.

There's more...

In some cases (especially when you need to prepare management reports), you are required to provide a graphical picture of the network statistics. There are various tools available for this, for example:
Etherape (for Linux): http://etherape.sourceforge.net/
Compass (for Windows): http://download.cnet.com/Compass-Free/3000-2085_4-75447541.html?tag=mncol;1

Analyzing Microsoft Terminal Server and Citrix communications problems

Microsoft Terminal Server, which uses the Remote Desktop Protocol (RDP), and Citrix Metaframe, which uses the Independent Computing Architecture (ICA) protocol, are widely used for local and remote connectivity for PCs and thin clients. The important thing to remember about these types of applications is that they transfer screen changes over the network. If there are only a few changes, they will require low bandwidth. If there are many changes, they will require high bandwidth. Another thing is that the traffic in these applications is entirely asymmetric: downstream traffic takes from tens of Kbps up to several Mbps, while the upstream traffic will be at most several Kbps. When working with these applications, don't forget to design your network accordingly.

In this recipe, we will see some typical problems of these applications and how to locate them. For convenience, we will refer to Microsoft Terminal Server, and every time we write Microsoft Terminal Server, we are referring to all applications in this category, for example, Citrix Metaframe.

Getting ready

When suspecting slow performance with Microsoft Terminal Server, first check with the user what the problem is. Then, connect Wireshark to the network with a port-mirror to the complaining client or to the server.

How to do it...

For locating a problem when Microsoft Terminal Server is involved, start by going to the users and asking questions. Follow these steps:

When users complain about a slow network, ask them a simple question: do they see the slowness in the data presented on the screen, or when they switch between windows? If they say that the switch between windows is very fast, it is not a Microsoft Terminal Server problem.
Microsoft Terminal Server problems will cause slow window changes, picture freezes, slow scrolling of graphical documents, and so on. If they say that they are trying to generate a report (when the software is running over Microsoft Terminal Server), but the report is generated only after a long period of time, this is a database problem and not a Microsoft Terminal Server or Citrix problem. When a user works with Microsoft Terminal Server over a high-delay communication line and types very fast, they might experience delays with the characters. This is because Microsoft Terminal Server is transferring window changes, and with high delays, these window changes will be transferred slowly.

When measuring the communication line with Wireshark, use I/O graphs to monitor the line, use filters to monitor the upstream and the downstream directions, and configure bits per second on the y axis. In the example capture (with the y axis configured to Bits/Tick), you can see a typical traffic pattern with high downstream and very low upstream traffic. Between 485 s and 500 s, the throughput reaches the maximum; this is when applications slow down and users start to feel screen freezes, menus that move very slowly, and so on. When a Citrix ICA client connects to a presentation server, it uses TCP ports 2598 or 1494.

When monitoring Microsoft Terminal Server servers, don't forget that the clients access the server with Microsoft Terminal Server, and the server accesses the application with another client that is installed on the server. The performance problem can come from Microsoft Terminal Server or from the application. If the problem is a Microsoft Terminal Server problem, it is necessary to figure out whether it is a network problem or a system problem: check the network with Wireshark to see if there are any loads; loads such as the one described above can be solved by simply increasing the capacity of the communication lines. Also check the server's performance; applications like Microsoft Terminal Server are mostly memory consuming, so check mostly for memory (RAM) issues.

How it works...

Microsoft Terminal Server, Citrix Metaframe, and similar applications simply transfer window changes over the network. From your client (a PC with a software client, or a thin client), you connect to the terminal server, and the terminal server runs various clients that are used to connect from it to other servers.

There's more...

From the terminal server vendors, you will hear that their applications improve two things. They will say that they improve manageability of clients, because you don't have to manage PCs and software for every user; you simply install everything on the server, and if something fails, you fix it on the server. They will also say that traffic over the network will be reduced. Well, I will not get into the first argument - this is not our subject - but I strongly reject the second one. When working with a terminal client, your traffic entirely depends on what you are doing:

When working with text/character-based applications, for example, some Enterprise Resource Planning (ERP) screens, you type in and read data. When working with the terminal client, you will connect to the terminal server, which will connect to the database server.
Depending on the database application you are working with, the terminal server can improve performance significantly or not improve it at all. We will discuss this in the database section. Here, you can expect a load of tens to hundreds of Kbps.

If you are working with regular office documents such as Word, PowerPoint, and so on, it entirely depends on what you are doing. Working with a simple Word document will require tens to hundreds of Kbps. Working with PowerPoint will require hundreds of Kbps to several Mbps, and when you present the PowerPoint file full screen (the F5 function), the throughput can jump up to 8 to 10 Mbps. Browsing the internet will take between hundreds of Kbps and several Mbps, depending on what you do over it. High-resolution movies over a terminal server to the internet - well, just don't do it.

Before you implement any terminal environment, test it. I once had a software house that wanted their logo (at the top-right corner of the user window) to be very clear and striking. They refreshed it 10 times a second, which caused the 2 Mbps communication line to be blocked. You never know what you don't test!

Analyzing the database traffic and common problems

Some of you may wonder why we have this section here. After all, databases are considered to be a completely different branch of the IT environment. There are databases and applications on one side, and the network and infrastructure on the other side. That is correct, since we are not supposed to debug databases; there are DBAs for this. But through the information that runs over the network, we can see some issues that can help the DBAs solve the relevant problems. In most cases, the IT staff will come to us first because people blame the network for everything. We will have to make sure that the problems are not coming from the network, and that's it. In a minority of cases, we will see some details in the capture file that can help the DBAs with what they are doing.

Getting ready

When the IT team comes to us complaining about the slow network, there are some things to do just to verify whether that is the case. Follow the instructions in the following section to rule out (or confirm) a slow network.

How to do it...

In the case of database problems, follow these steps. When you get complaints about slow network responses, start asking these questions:

Is the problem local or global? Does it occur only in the remote offices, or also in the center? When the problem occurs in the entire network, it is not a WAN bandwidth issue.
Does it happen the same for all clients? If not, there might be a specific problem that happens only with some users, because only those users are running a specific application that causes the problem.
Is the communication line between the clients and the server loaded? What is the application that loads it?
Do all applications work slowly, or is it only the application that works with the specific database?
Maybe some PCs are old and tired, or is it a server that runs out of resources?

When we are done with the questionnaire, let's start our work: open Wireshark and start capturing packets. You can configure a port-mirror to a specific PC, the server, a VLAN, or a router that connects to a remote office in which you have the clients. Look at the TCP events (Expert Info). Do they happen on the entire communication link, on specific IP addresses, or on specific TCP port numbers?
This will help you isolate the problem and check whether it is on a specific link, server, or application. When measuring traffic on a connection to the internet, you will get many retransmissions and duplicate ACKs to websites, mail servers, and so on. This is the internet. In an organization, you should expect 0.1 to 0.5 percent retransmissions; when connecting to the internet, you can expect much higher numbers. (A small scripted check of this ratio is sketched at the end of this recipe.)

But there are some network issues that can influence database behavior. In the following example, we see the behavior of a client that works with the server over a communication line with a round-trip delay of 35 to 40 ms. We are looking at TCP stream number 8; the connection started with TCP SYN/SYN-ACK/ACK, which I set as a time reference, and the entire connection took 371 packets. As the connection continues, we can see time intervals of around 35 ms between DB requests and responses. Since we have 371 packets travelling back and forth, 371 x 35 ms gives us around 13 seconds. Add to this some retransmissions that might happen, and some inefficiencies, and this leads to a user waiting 10 to 15 seconds or more for a database query. In this case, you should consult the DBA on how to significantly reduce the number of packets that run over the network, or you can move to another means of access, for example, terminal server or web access.

Another problem that can happen is a software issue that is reflected in the capture file. In one such capture, you can see that there are five retransmissions, and then a new connection is opened from the client side. It looks like a TCP problem, but it occurs only in a specific window in the software. It is simply a software procedure that stopped processing, and this stopped the TCP from responding to the client.

How it works...

Well, how databases work has always been a miracle to me. Our task is to find out how they influence the network, and this is what we've learned in this section.

There's more...

When you right-click on one of the packets in the database client-to-server session, a window with the conversation will open. It can be helpful to the DBA to see what is running over the network. When you are facing delay problems, for example, when working over cellular lines over the internet or over international connections, the database client-to-server connection will not always be efficient enough. You might need to move to web or terminal access to the database.

An important issue is how the database works. If the client is accessing the database server, and the database server is using files shared from another server, it can be that the client-to-server connection works great, but the problems come from the connection between the database server and the shared files on the file server. Make sure that you know all these dependencies before starting your tests. And most importantly, make sure you have very professional DBAs among your friends. One day, you will need them!
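As a rough, scripted complement to the Expert Info and I/O graph checks above, the sketch below uses the pyshark Python library (a wrapper around tshark, which must be installed) to estimate the retransmission ratio in a capture and the packets-times-RTT cost of a single database conversation. The capture file name, stream number, and round-trip time are illustrative values, not ones taken from the book's capture.

```python
# db_capture_check.py - hedged sketch using pyshark (requires tshark to be installed)
import pyshark

CAPTURE = "db_client.pcap"   # illustrative capture file name

# 1. Retransmission ratio: a healthy organizational network is typically ~0.1-0.5%.
all_packets = pyshark.FileCapture(CAPTURE, keep_packets=False)
total = sum(1 for _ in all_packets)
all_packets.close()

retrans = pyshark.FileCapture(CAPTURE, display_filter="tcp.analysis.retransmission",
                              keep_packets=False)
retrans_count = sum(1 for _ in retrans)
retrans.close()

ratio = 100.0 * retrans_count / max(total, 1)
print(f"Retransmissions: {retrans_count}/{total} packets = {ratio:.2f}%")

# 2. Cost of a chatty database conversation: packets in one TCP stream times the round-trip time.
stream = pyshark.FileCapture(CAPTURE, display_filter="tcp.stream == 8", keep_packets=False)
stream_packets = sum(1 for _ in stream)
stream.close()

rtt_seconds = 0.035  # ~35 ms round trip, as in the example above
print(f"Stream 8: {stream_packets} packets x {rtt_seconds * 1000:.0f} ms "
      f"~ {stream_packets * rtt_seconds:.1f} s spent just on network round trips")
```

A request/response exchange that needs hundreds of packets will be slow on a high-delay line no matter how fast the server is, which is exactly the argument to bring to the DBA.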
Analyzing SNMP

SNMP is a well-known protocol used to monitor and manage different types of devices in a network by collecting data and statistics at regular intervals. Beyond just monitoring, it can also be used to configure and modify settings, with appropriate authorization given to SNMP servers. Devices that typically support SNMP include switches, routers, servers, workstations, hosts, VoIP phones, and many more. It is important to know that there are three versions of SNMP: SNMPv1, SNMPv2c, and SNMPv3. Versions v2c and v3, which came later, offer better performance and security.

SNMP consists of three components:

The managed device: the device being monitored and managed.
The SNMP agent: a piece of software running on the managed device that collects data from the device and stores it in a database, referred to as the Management Information Base (MIB). As configured, the SNMP agent exports the data/statistics to the server (using UDP port 161) at regular intervals, along with any events and traps.
The SNMP server, also called the Network Management Server (NMS): a server that communicates with all the agents in the network to collect the exported data and build a central repository. The SNMP server provides access to the IT staff managing the network; they can monitor, manage, and configure the network remotely.

It is very important to be aware that some of the MIBs implemented in a device could be vendor-specific. Almost all vendors publicize the MIBs implemented in their devices.

Getting ready

Generally, the complaints we get from the network management team are about not getting any statistics or traps from a device (or devices) for a specific interval, or having no visibility to a device at all. Follow the instructions in the following section to analyze and troubleshoot these issues.

How to do it...

In the case of SNMP problems, follow these steps. When you get complaints about SNMP, start asking these questions:

Is this a new managed device that has been brought into the network recently? In other words, did the SNMP in the device ever work properly? If this is a new device, talk to the relevant device administrator and/or check the SNMP-related configurations, such as community strings. If the SNMP configuration looks correct, make sure that the NMS's IP address configured is correct, and also check the relevant password credentials. If SNMPv3 is in use, which supports encryption, make sure to check encryption-related settings such as transport methods. If the settings and configuration look valid and correct, make sure the managed devices have connectivity with the NMS, which can be verified with simple ICMP pings.

If it is a managed device that has been working properly and didn't report any statistics or alerts for a specific duration: did the device in question have any issues in the control plane or management plane that stopped it from exporting SNMP statistics? Please be aware that for most devices in the network, SNMP is a least-priority protocol, which means that if a device has a higher-priority process to work on, it will hold the SNMP requests and responses in a queue. Is the issue experienced only for a specific device, or for multiple devices in the network? Did the network (between the managed device and the NMS) experience any issue? For example, during a layer 2 spanning-tree convergence, traffic loss could occur between the managed device and the SNMP server, causing the NMS to lose visibility of the managed devices.

In the example capture, an SNMP server with IP address 172.18.254.139 performs an SNMP walk with a sequence of GET-NEXT-REQUEST messages to a workstation with IP address 10.81.64.22, which in turn responds with GET-RESPONSE. For simplicity, the Wireshark display filter used for these captures is snmp. The workstation is enabled with SNMP v2c, with community string public.
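If you want to reproduce a query like the one in the capture outside of an NMS, a short script can confirm that the agent answers with the expected version and community string. The sketch below uses Python's pysnmp library; the target address, community string, and OID mirror the example above, but treat the exact values as illustrative.

```python
# snmp_probe.py - hedged sketch using pysnmp's high-level API
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

# SNMP v2c GET against the workstation from the example (community string "public").
# OID 1.3.6.1.2.1.2.2.1.16.1 is ifOutOctets for interface index 1.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),          # mpModel=1 selects SNMP v2c
        UdpTransportTarget(("10.81.64.22", 161)),    # agent address and UDP port from the example
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2.1.16.1")),
    )
)

if error_indication:
    # No answer at all - typically a wrong community string, wrong version, or no connectivity.
    print("No response:", error_indication)
elif error_status:
    # An SNMP-level error reported by the agent.
    print("Agent error:", error_status.prettyPrint())
else:
    # A nonexistent OID (for example, a third interface) can come back here as noSuchInstance.
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))
```

Running the same probe while Wireshark captures with the snmp display filter lets you compare what the script sees with what is actually on the wire, which is handy when separating configuration mistakes from network problems.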
Let's discuss some of the commonly seen failure scenarios.

Polling a managed device with the wrong SNMP version

As mentioned earlier, the workstation is enabled with v2c, but when the NMS polls the device with the wrong SNMP version, it doesn't get any response. So, it is very important to make sure that managed devices are polled with the correct SNMP version.

Polling a managed device with the wrong MIB object ID (OID)

In the following example, the NMS is polling the managed device to get the number of bytes sent out on its interfaces. The MIB OID for the byte count is .1.3.6.1.2.1.2.2.1.16, which is ifOutOctets. The managed device in question has two interfaces, mapped to OIDs .1.3.6.1.2.1.2.2.1.16.1 and .1.3.6.1.2.1.2.2.1.16.2. When the NMS polls the device to check the statistics for a third interface (which is not present), it returns a noSuchInstance error.

How it works...

As you have learned in the earlier sections, SNMP is a very simple and straightforward protocol, and all the related information on standards and MIB OIDs is readily available on the internet.

There's more...

Here are some websites with good information about SNMP and MIB OIDs:
Microsoft TechNet SNMP: https://technet.microsoft.com/en-us/library/cc776379(v=ws.10).aspx
Cisco IOS MIB locator: http://mibs.cloudapps.cisco.com/ITDIT/MIBS/servlet/index

We have learned to perform enterprise-level network analysis with real-world examples, like analyzing Microsoft Terminal Server and Citrix communications problems. Get to know more about security and network forensics from our book Network Analysis using Wireshark 2 Cookbook - Second Edition.

Read next:
What's new in Wireshark 2.6?
Top 5 penetration testing tools for ethical hackers
5 pen testing rules of engagement: What to consider while performing Penetration testing


Tech’s culture war: entrepreneur egos v. engineer solidarity

Richard Gall
12 Jul 2018
10 min read
There is a rift in the tech landscape that has been shifting quietly for some time. But 2018 is the year it has finally properly opened. This is the rift between tech's entrepreneurial 'superstars' and a nascent solidarity movement, which together demonstrate the two faces of the modern tech industry. But within this 'culture war' there's a broader debate about what technology is for and who has the power to make decisions about it. And that can only be a good thing - this is a conversation we've needed for some time.

With the Cambridge Analytica scandal, and the shock election results to which it was tied, much contemporary political conversation is centered on technology's impact on the social sphere. But little attention has been paid to the way these social changes or crises are actually forcing changes within the tech industry itself. If it feels like we're all having to pick sides when it comes to politics, the same is true when it comes to tech.

The rise of the tech ego

If you go back to the early years of software, in the middle of the twentieth century, there was little place for ego. It's no accident that during this time computing was feminized - it was widely viewed as administrative. It was only later that software became more male dominated, thanks to a sexist cultural drive to establish male power in the field. This was arguably the start of ego's takeover of tech - after all, men wanted their work to carry a certain status. Women had to be pushed out to give it to them.

It's no accident that the biggest names in technology - Bill Gates, Steve Wozniak, Steve Jobs - are all men. Their rise was, in part, a consequence of a cultural shift in the sixties. But it's worth recognising that in the eighties, these were still largely faceless organizations. Yes, they were powerful men, but the organizations they led were really just the next step out from the military-industrial complex that helped develop software as we know it today. It was only when 'tech' properly entered the consumer domain that ego took on a new value. As PCs became part of everyday life, attaching these products to interesting and intelligent figures was a way of marketing them.

It's worth remarking that it isn't really important whether these men had huge egos at all. All that matters is that they were presented that way, and granted an incredible amount of status and authority. This meant that the complexity of software and the literal labor of engineering could be reduced to a relatable figure like Gates or Jobs. We can still feel the effects of that today: just think of the different ways Apple and Microsoft products are perceived. Tech leaders personify technology. They make it marketable.

Perhaps tech 'egos' were weirdly necessary. Because technology was starting to enter into everyone's lives, these figures - as much entrepreneurs as engineers - were able to make it accessible and relatable. If that sounds a little far-fetched, consider what the tech 'ninja' or the 'guru' really means for modern businesses. It often isn't so much about doing something specific, but instead about making the value and application of those technologies clear, simple, and understandable. When companies advertise for these roles using this sort of language, they're often trying to solve an organizational problem as much as a technical one. That's not to say that being a DevOps guru at some middling eCommerce company is the same as being Bill Gates. But it is important to note how we started talking in this way.
Similarly, not everyone who gets called a 'guru' is going to have a massive ego (some of my best friends are cloud gurus!), but this type of language does encourage a selfish and egotistical type of thinking. And as anyone who's worked in a development team knows, that can be incredibly dangerous.

From Zuckerberg to your sprint meeting - egos don't care about you

Today, we are in a position where the discourse of gurus and ninjas is getting dangerous. This is true on a number of levels. On the one hand we have a whole new wave of tech entrepreneurs. Zuckerberg, Musk, Kalanick, Chesky - these people are Gates and Jobs for a new generation. For all their innovative thinking, it's not hard to discern a certain entitlement from all of these men. Just look at Zuckerberg and his role in the Cambridge Analytica scandal. Look at Musk and his bizarre intervention in Thailand. The sexual harassment scandal under Kalanick might be personal, but it reflects a selfish entitlement that has real professional consequences for his workforce.

Okay, so that's just one extreme - but these people become the images of how technology should work. They tell business leaders and politicians that tech is run by smart people who ostensibly should be trusted. This has an impact not only on our civic lives but also on our professional lives. Ever wonder why your CEO decides to spend big money on a CTO? It's because this is the model of modern tech. That then filters down to you and the projects you don't have faith in. If you feel frustrated at work, think of how these ideas and ways of describing things cascade down to what you do every day. It might seem small, but it does exist.

The emergence of tech worker solidarity

While all that has been happening, we've also seen a positive political awakening across the tech industry. As the egos come to dictate the way we work, what we work on, and who feels the benefits, a large group of engineers are starting to realize that maybe this isn't the way things should be.

Disaffection in Silicon Valley

This year in Silicon Valley, worker protests against Amazon, Microsoft and Google have all had an impact on the way their companies are run. We don't necessarily hear about these people - but they're there. They're not willing to let their code be used in ways that don't represent them.

The Cambridge Analytica scandal was the first instance of a political crisis emerging in tech. It wasn't widely reported, but some Facebook employees asked to move across to different departments like Instagram or WhatsApp. One product designer, Westin Lohne, posted on Twitter that he had left his position, saying "morally, it was extremely difficult to continue working there as a product designer." https://twitter.com/westinlohne/status/981731786337251328

But while the story at Facebook was largely one of disorganized disaffection, at Google there was real organization against Project Maven. 300 Google employees signed a petition against the company's AI initiative with the Pentagon. In May, a number of employees resigned over the issue. One is reported as saying "over the last couple of months, I've been less and less impressed with Google's response and the way our concerns are being listened to."

Read next: Google employees quit over company's continued Artificial Intelligence ties with the Pentagon

A similar protest happened at Amazon, with an internal letter to Jeff Bezos protesting the use of Rekognition - Amazon's facial recognition technology - by law enforcement agencies, including ICE.
“Along with much of the world we watched in horror recently as U.S. authorities tore children away from their parents,” the letter stated, according to Gizmodo. “In the face of this immoral U.S. policy, and the U.S.’s increasingly inhumane treatment of refugees and immigrants beyond this specific policy, we are deeply concerned that Amazon is implicated, providing infrastructure and services that enable ICE and DHS.” Microsoft saw a similar protest, sparked, in part, by the shocking images of families being separated at the U.S./Mexico border. Despite the company distancing itself over ICE’s activities, many employees were vocal in their opposition. “This is the sort of thing that would make me question staying,” said one employee, speaking to Gizmodo. A shift in attitudes as tensions emerge True, when taken individually, these instances of disaffection may not look like full-blown solidarity. But together, it amounts to a changing consciousness across Silicon Valley. Of course, it wouldn’t be wrong to say that a relationship between tech, the military, and government has always existed. But the reason things are different is precisely because these tensions have become more visible, attitudes more prominent in public discourse. It’s worth thinking about these attitudes and actions in the context of hyper-competitive Silicon Valley where ego is the norm, and talent and flair is everything. Signing petitions carries with it some risk - leaving a well-paid job you may have spent years working towards is no simple decision. It requires a decisive break with the somewhat egotistical strand that runs through tech to make these sorts of decisions. While it might seem strange, it also shouldn’t be that surprising. If working in software demands a high level of collaboration, then collaboration socially and politically is really just the logical development from our professional lives. All this talk about ‘ninjas’, ‘gurus’ and geniuses only creates more inequality within the tech job market - whether you’re in Silicon Valley, Stoke, or Barcelona, or Bangalore, this language actually hides the skills and knowledge that are actually most valuable in tech. Read next: Don’t call us ninjas or rockstars, say developers Where do we go next? The future doesn’t look good. But if the last six months or so are anything to go by there are a number of things we can do. On the one hand more organization could be the way forward. The publishing and media industries have been setting a great example of how unionization can work in a modern setting and help workers achieve protection and collaborative power at work. If the tech workforce is going to grow significantly over the next decade, we’re going to see more unionization. We’ve already seen technology lead to more unionization and worker organization in the context of the gig economy - Deliveroo and Uber drivers, for example. Gradually it’s going to return to tech itself. The tech industry is transforming the global economy. It’s not immune from the changes it’s causing. But we can also do more to challenge the ideology of the modern tech ego. Key to this is more confidence and technological literacy. If tech figureheads emerge to make technology marketable and accessible, the way to remove that power is to demystify it. It’s to make it clear that technology isn’t a gift, the genius invention of an unfathomable mind, but instead that it’s a collaborative and communal activity, and a skill that anyone can master given the right attitude and resources. 
At its best, tech culture has been teaching the world that for decades. Think about this the next time someone tells you that technology is magic. It’s not magic, it’s built by people like you. People who want to call it magic want you to think they’re a magician - and like any other magician, they’re probably trying to trick you.

Things to Consider When Migrating to the Cloud

Kristen Hardwick
01 Jul 2014
5 min read
After the decision is made to make use of a cloud solution like Amazon Web Services or Microsoft Azure, there is one main question that needs to be answered – “What’s next?” There are many factors to consider when migrating to the cloud, and this post will discuss the major steps for completing the transition. Gather background information Before getting started, it’s important to have a clear picture of what is meant to be accomplished in order to call the transition a success. Keeping the following questions at the forefront during the planning stages will help guide your process and ensure the success of the migration. What are the reasons for moving to the cloud? There are many benefits of moving to the cloud, and it is important to know what the focus of the transition should be. If cost savings are the primary driver, vendor choice may be important. Prices between vendors vary, as do the support services that are offered – that might make a difference in future iterations. In other cases, the elasticity of hardware may be the main appeal. It will be important to ensure that the customization options are available at the desired level. Which applications are being moved? When beginning the migration process, it is important to make sure that the scope of the effort is clear. Consider the option of moving data and applications to the cloud selectively in order to ease the transition. Once the organization has completed a successful small-scale migration into the cloud, a second iteration of the process can take care of additional applications. What is the anticipated cost? A cloud solution will have variable costs associated with it, but it is important to have some estimation of what is expected. This will help when selecting vendors, and it will allow for guidance in configuring the system. What is the long-term plan? Is the new environment intended to eventually replace the legacy system? To work alongside it? Begin to think about the plan beyond the initial migration. Ensure that the selected vendor provides service guarantees that may become requirements in the future, like disaster recovery options or automatic backup services. Determine your actual cloud needs One important way to maximize the benefits of the cloud is to ensure that your resources are sufficient for your needs. Cloud computing services are billed based on actual usage, including processing power, storage, and network bandwidth. Configuring too few nodes will limit the ability to support the required applications, and too many nodes will inflate costs. Determine the list of applications and features that need to be present in the selected cloud vendor. Some vendors include backup services or disaster recovery options as add-on services that will impact the cost, so it is important to decide whether or not these services are necessary. A benefit with most vendors is that these services are extremely configurable, so subscriptions can be modified. However, it is important to choose a vendor with packages that make sense for your current and future needs as much as possible, since transitioning between vendors is not typically desirable. Implement security policies Since the data and applications in the cloud are accessed over the Internet, it is of the utmost importance to ensure that all available vendor security policies are implemented correctly. In addition to the main access policies, determine if data security is a concern. 
Sensitive data such as PII or PCI may have regulations that impact data encryption rules, especially when being accessed through the cloud. Ensure that the selected vendor is reliable in order to safeguard this information properly. In some cases, applications that are being migrated will need to be refactored so that they will work in the cloud. Sometimes this means making adjustments to connection information or networking protocols. In other cases, this means adjusting access policies or opening ports. In all cases, a detailed plan needs to be made at the networking, software, and data levels in order to make the transition smooth. Let’s get to work! Once all of the decisions have been made and the security policies have been established and implemented, the data appropriate for the project can be uploaded to the cloud. After the data is transferred, it is important to ensure that everything was successful by performing data validation and testing of data access policies. At this point, everything will be configured and any application-specific refactoring or testing can begin. In order to ensure the success of the project, consider hiring a consulting firm with cloud experience that can help guide the process. In any case, the vendor, virtual machine specifications, configured applications and services, and privacy settings must be carefully considered in order to ensure that the cloud services provide the solution necessary for the project. Once the initial migration is complete, the plan can be revised in order to facilitate the migration of additional datasets or processes into the cloud environment. About the author Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry where her focus is on designing and developing big data analytics for the Hadoop ecosystem.

Twilio WhatsApp API: A great tool to reach new businesses

Amarabha Banerjee
15 Aug 2018
3 min read
The trend in the last few years has indicated that businesses want to talk to their customers in the same way they communicate with their friends and family. This enables them to cater to their customers’ specific needs and to create customer-centric products. Twilio, a cloud communications platform, has been at the forefront of creating messaging solutions for businesses. Recently, Twilio has enabled developers to integrate SMS and calling facilities into their applications using the Twilio Web Services API. Over the last decade, Twilio customers have used Programmable SMS to build innovative messaging experiences for their users, whether it is sending instant transaction notifications for money transfers, food delivery alerts, or helping millions of people with their parking tickets. This latest feature added to the Twilio API integrates WhatsApp messaging into the application and manages messages and WhatsApp contacts with a business account. Why is the Twilio WhatsApp integration so significant? WhatsApp is one of the most popular instant messaging apps in the world presently. Every day, 30 million messages are exchanged using WhatsApp. The visualization below shows the popularity of WhatsApp across different countries. Source: Twilio Integrating WhatsApp communications into business applications would mean greater flexibility and the ability to reach a larger segment of the audience. How is it done The operational overhead of integrating directly with the WhatsApp messaging network requires hosting, managing, and scaling containers in your own cloud infrastructure. This can be a tough task for any developer or business with a different end-objective and a limited budget. The Twilio API makes it easier for you. WhatsApp delivers end-to-end message encryption through containers. These containers manage encryption keys and messages between the business and users. The containers need to be hosted in multiple regions for high availability and to scale efficiently as messaging volume grows. Twilio solves this problem for you with a simple and reliable REST API. Other failsafe messaging features – such as user opt-out options from WhatsApp messages, automatic switching to SMS messaging in the absence of a data network, or shifting to another messaging service in regions where WhatsApp is absent – can be implemented easily using the Twilio API. Also, you do not have to use separate APIs to get connected with different messaging services like Facebook Messenger, MMS, RCS, LINE and so on, as all of them are possible within this API. WhatsApp is taking things at a slower pace currently. It initially allows you to develop a test application using the Twilio Sandbox for WhatsApp. This lets you test your application first, and send messages to a limited number of users only. After your app gets production ready, you can create a WhatsApp business profile and get a dedicated Twilio number to work with WhatsApp. Source: Twilio With the added feature, Twilio enables you to leave aside the maintenance aspect of creating a separate WhatsApp integration service. Twilio takes care of the cloud containers and the security aspect of the application. It gives developers an opportunity to focus on creating customer-centric products to communicate with customers easily and efficiently. Read next: Make phone calls and send SMS messages from your website using Twilio | Securing your Twilio App | Building a two-way interactive chatbot with Twilio: A step-by-step guide
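To give a feel for what the integration described above looks like in practice, here is a minimal, hedged sketch using the twilio Python helper library. The account credentials, sandbox number, and recipient number are placeholders for illustration, not values from the article.

```python
# A minimal sketch of sending a WhatsApp message via the Twilio Sandbox.
# Credentials and phone numbers below are placeholders.
from twilio.rest import Client

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # from the Twilio console
AUTH_TOKEN = "your_auth_token"

client = Client(ACCOUNT_SID, AUTH_TOKEN)

message = client.messages.create(
    from_="whatsapp:+14155238886",   # sandbox WhatsApp number assigned to your account
    to="whatsapp:+15551234567",      # a recipient who has joined your sandbox
    body="Your order has shipped and should arrive on Friday.",
)

print(message.sid)  # unique identifier for the queued message
```

Once a WhatsApp business profile is approved, the same call would simply use your dedicated Twilio number in place of the sandbox number.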

The pets and cattle analogy demonstrates how serverless fits into the software infrastructure landscape

Russ McKendrick
20 Feb 2018
8 min read
When you say serverless to someone, the first conclusion they jump to is that you are running your code without any servers. This can be quite a valid conclusion if you are using a public cloud service like AWS, but when it comes to running in your own environment, you can't avoid having to run on a server of some sort. This blog post is an extract from Kubernetes for Serverless Applications by Russ McKendrick. Before we discuss what we mean by serverless and Functions as a Service, we should discuss how we got here. As people who work with me will no doubt tell you, I like to use the pets versus cattle analogy a lot as this is quite an easy way to explain the differences in modern cloud infrastructures versus a more traditional approach. The pets, cattle, chickens insects, and snowflakes analogy I first came across the pets versus cattle analogy back in 2012 from a slide deck published by Randy Bias. The slide deck was used during a talk Randy Bias gave at the cloudscaling conference on architectures for open and scalable clouds. Towards the end of the talk, he introduced the concept of pets versus cattle, which Randy attributes to Bill Baker who at the time was an engineer at Microsoft. The slide deck primarily talks about scaling out and not up; let's go into this in a little more detail and discuss some of the additions that have been made since the presentation was first given five years ago. Pets: the bare metal servers and virtual machines Pets are typically what we, as system administrators, spend our time looking after. They are traditional bare metal servers or virtual machines: We name each server as you would a pet. For example, app-server01.domain.com and database-server01.domain.com. When our pets are ill, you will take them to the vets. This is much like you, as a system administrator, would reboot a server, check logs, and replace the faulty components of a server to ensure that it is running healthily. You pay close attention to your pets for years, much like a server. You monitor for issues, patch them, back them up, and ensure they are fully documented. There is nothing much wrong with running pets. However, you will find that the majority of your time is spent caring for them—this may be alright if you have a few dozen servers, but it does start to become unmanageable if you have a few hundred servers. Cattle: the sort of instances you run on public clouds Cattle are more representative of the instance types you should be running in public clouds such as Amazon Web Services (AWS) or Microsoft Azure, where you have auto scaling enabled. You have so many cattle in your herd you don't name them; instead they are given numbers and tagged so you can track them. In your instance cluster, you can also have too many to name so, like cattle, you give them numbers and tag them. For example, an instance could be called ip123067099123.domain.com and tagged as app-server. When a member of your herd gets sick, you shoot it, and if your herd requires it you replace it. In much the same way, if an instance in your cluster starts to have issues it is automatically terminated and replaced with a replica. You do not expect the cattle in your herd to live as long as a pet typically would, likewise you do not expect your instances to have an uptime measured in years. Your herd lives in a field and you watch it from afar, much like you don't monitor individual instances within your cluster; instead, you monitor the overall health of your cluster. 
If your cluster requires additional resources, you launch more instances and when you no longer require a resource, the instances are automatically terminated, returning you to your desired state. Chickens: an analogy for containers In 2015,  Bernard Golden added to the pets versus cattle analogy by introducing chickens to the mix in a blog post titled Cloud Computing: Pets, Cattle and Chickens? Bernard suggested that chickens were a good term for describing containers alongside pets and cattle: Chickens are more efficient than cattle; you can fit a lot more of them into the same space your herd would use. In the same way, you can fit a lot more containers into your cluster as you can launch multiple containers per instance. Each chicken requires fewer resources than a member of your herd when it comes to feeding. Likewise, containers are less resource-intensive than instances, they take seconds to launch, and can be configured to consume less CPU and RAM. Chickens have a much lower life expectancy than members of your herd. While cluster instances can have an uptime of a few hours to a few days, it is more than possible that a container will have a lifespan of minutes. Insects: An analogy for serverless Keeping in line with the animal theme, Eric Johnson wrote a blog post for RackSpace which introduced insects. This term was introduced to describe serverless and Functions as a Service. Insects have a much lower life expectancy than chickens; in fact, some insects only have a lifespan of a few hours. This fits in with serverless and Functions as a Service as these have a lifespan of seconds. Snowflakes Around the time Randy Bias gave his talk which mentioned pets versus cattle, Martin Fowler wrote a blog post titled SnowflakeServer. The post described every system administrator's worst nightmare: Every snowflake is unique and impossible to reproduce. Just like that one server in the office that was built and not documented by that one guy who left several years ago. Snowflakes are delicate. Again, just like that one server—you dread it when you have to log in to it to diagnose a problem and you would never dream of rebooting it as it may never come back up. Bringing the pets, cattle, chickens, insects and snowflakes analogy together... When I explain the analogy to people, I usually sum up by saying something like this: Organizations who have pets are slowly moving their infrastructure to be more like cattle. Those who are already running their infrastructure as cattle are moving towards chickens to get the most out of their resources. Those running chickens are going to be looking at how much work is involved in moving their application to run as insects by completely decoupling their application into individually executable components. But the most important take away is this:  No one wants to or should be running snowflakes. Serverless and insects As already mentioned, using the word serverless gives the impression that servers will not be needed. Serverless is a term used to describe an execution model. When executing this model you, as the end user, do not need to worry about which server your code is executed on as all of the decisions on placement, server management, and capacity are abstracted away from you—it does not mean that you literally do not need any servers. 
Now there are some public cloud offerings which abstract so much of the management of servers away from the end user that it is possible to write an application which does not rely on any user-deployed services and that the cloud provider will manage the compute resources needed to execute your code. Typically these services, which we will look at in the next section, are billed for the resources used to execute your code in per second increments. So how does that explanation fits in with the insect analogy? Let's say I have a website that allows users to upload photos. As soon as the photos are uploaded they are cropped, creating several different sizes which will be used to display as thumbnails and mobile-optimized versions on the site. In the pets and cattle world, this would be handled by a server which is powered on 24/7 waiting for users to upload images. Now this server probably is not just performing this one function; however, there is a risk that if several users all decide to upload a dozen photos each, then this will cause load issues on the server where this function is being executed. We could take the chickens approach, which has several containers running across several hosts to distribute the load. However, these containers would more than likely be running 24/7 as well; they will be watching for uploads to process. This approach could allow us to horizontally scale the number of containers out to deal with an influx of requests. Using the insects approach, we would not have any services running at all. Instead, the function should be triggered by the upload process. Once triggered, the function will run, save the processed images, and then terminate. As the developer, you should not have to care how the service was called or where the service was executed, so long as you have your processed images at the end of it.
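As a rough illustration of that insects approach, here is a hedged sketch of what the photo-processing function might look like as an AWS Lambda-style handler. The bucket names, output sizes, and simplified event shape are assumptions for the example; the original post does not prescribe an implementation.

```python
# A sketch of a short-lived, stateless image-processing function.
# Assumes the Pillow and boto3 libraries; bucket names and sizes are made up.
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")
THUMBNAIL_SIZES = [(1280, 720), (640, 360), (160, 90)]  # assumed output sizes


def handler(event, context):
    """Triggered by an upload; runs, writes resized copies, then terminates."""
    record = event["Records"][0]["s3"]          # S3 event structure (simplified)
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    for width, height in THUMBNAIL_SIZES:
        image = Image.open(io.BytesIO(original))
        image.thumbnail((width, height))        # preserves aspect ratio
        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")
        buffer.seek(0)
        s3.put_object(
            Bucket="processed-images-example",  # hypothetical output bucket
            Key=f"{width}x{height}/{key}",
            Body=buffer,
            ContentType="image/jpeg",
        )

    return {"processed": len(THUMBNAIL_SIZES)}
```

Nothing here runs between uploads: the function starts on the trigger, does its work, and terminates, which is exactly the short-lived, stateless behaviour the insect analogy describes.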

Will Oracle become a key cloud player, and what will it mean to development & architecture community?

Phil Wilkins
13 Jun 2017
10 min read
This sort of question can provoke some emotive reactions, and many technologists, despite the stereotype, can get pretty passionate about our views. So let me put my cards on the table. My first book as an author is about Oracle middleware (Implementing Oracle Integration Cloud). I am an Oracle Ace Associate (soon to be a full Ace), which is comparable to a Java Rockstar, Microsoft MVP or SAP Mentor. I work for Capgemini as a Senior Consultant; as a large SI we work with many vendors, so I need to have a feel for all options, even though I specialise in Oracle now. Before I got involved with Oracle I worked primarily with open source technologies, particularly JBoss and Fuse (before and after both were absorbed into Red Hat), and I have technically reviewed a number of open source books for Packt. So I should be able to provide a balanced argument. So onto the … A lot has been said about Oracle’s founder, Larry Ellison, and his position on cloud technologies. Most notably for rubbishing it in 2008, which is ironic since those of us who remember the late 90s will recall that Oracle heavily committed to a concept called the Network Computer, which could have led to a more cloud-like ecosystem had the conditions been right. “The interesting thing about cloud computing is that we’ve redefined cloud computing to include everything that we already do. ... The computer industry is the only industry that is more fashion-driven than women’s fashion. Maybe I’m an idiot, but I have no idea what anyone is talking about. What is it? It’s complete gibberish. It’s insane. When is this idiocy going to stop?”[1] Since then we’ve seen a slow change. The first cloud offerings we saw came in the form of Mobile Cloud Service, which provided a Mobile Backend as a Service (MBaaS). At this time Oracle’s extensive programme to try and rationalize its portfolio and bring the best ideas and design together from PeopleSoft, E-Business Suite and Siebel into a single cohesive product portfolio started to show progress – Fusion applications. Fusion applications, built with the WebLogic core and exploiting other investments, provided the company with a product that had the potential to become cloud enabled. If that initiative hadn’t been started when it did then Oracle’s position might look very different. But from a solid, standardised, container-based product portfolio the transition to cloud has become a great deal easier, facilitated by the arrival of Oracle Database 12c, which provided the means to easily make the data storage at least multi-tenant. This combination gave Oracle the ability to sell ERP modules as SaaS and meant that Oracle could start to think about competing with the SaaS darlings of Salesforce, NetSuite and Workday. However, ERPs don’t live in isolation. Any organisation has to deal with its oddities, special needs and departmental solutions, as well as those systems that are unique and differentiate companies from their competition. This has driven the need to provide PaaS and IaaS. Not only that, Oracle themselves admitted that to make SaaS as cost effective as possible they needed to revise the infrastructure and software platform to maximise the application density. A lesson that Amazon with AWS has long understood from the outset and done well in realizing. Oracle has also had the benefit of being a late starter: it looked at what has and hasn’t worked, and used its deep pockets to ensure it got the best skills to build the ideal answers, bypassing many of the mistakes and issues the pioneers had to go through. 
This brought us to the state a couple of years ago, where its core products had a cloud existence and Oracle were making headway winning new mid-market customers – after all, Oracle ERP is seen as something of a Rolls-Royce of ERPs, globally capable and well tested, and now cost accessible to more of the mid-market. So as an ERP vendor Oracle will continue to be a player, and if there is a challenger, Oracle’s pockets are deep enough to buy the competition, which is what happened with NetSuite. This may be very interesting to enterprise architects who need to take off-the-shelf building blocks and provide the solid corporate foundation, but for those of us who prefer to build and do something different, it’s not so exciting. In the last few years we have seen a lot of talk about digital disruptors and the need for serious agility (as in the ability to change and react, rather than the development ethos). To have this capability you need to be able to build and radically change solutions quickly and yet still work with those core backbone accounting tasks. To use a Gartner expression, we need to be bimodal[2]: able to innovate even though application packages change comparatively slowly (they need to be slow and steady, if you want to show that your accounting isn’t going to look like Enron[3] or Lehman Brothers[4]). With this growing need to drive innovation and change ever faster we have seen some significant changes in the way things tend to be done. In a way the need to innovate has had such an impact that you could almost say, in the process of trying to disrupt existing businesses through IT, we have achieved the disruption of software development. With the facilitation of the cloud, particularly IaaS, and the low cost to start up and try new solutions and either grow them if they succeed or mothball them with minimal capital loss or delay if they don't, we have seen: the pace of service adoption accelerate exponentially, meaning the rate of scale-up and dynamic demand, particularly for end-user facing services, has needed new techniques for scaling; standards moving away from being formulated by committees of companies wanting to influence/dominate a market segment, which while resulting in some great ideas (UDDI as a concept was fabulous) were often very unwieldy (ebXML, SOAP, UDDI for example), towards simpler standards that have largely evolved through simplicity and quickly recognized value (JSON, REST) to become de-facto standards; new development paradigms that enable large solutions to be delivered whilst still keeping delivery on short cycles and supporting organic change (Agile, microservices); Continuous Integration and DevOps breaking down organisational structures and driving accountability – you build it, you make it run; and the open source business model becoming the predominant route into the industry for a new software technology without needing deep pockets for marketing, along with acceptance that open source software can be as well supported as a closed source product. For a long time, despite Oracle being the ‘guardian’ for Java and, a little more recently, MySQL, they haven't really managed to establish themselves as a ‘cool’ vendor. If you wanted a cool vendor you’d historically probably look at Red Hat, one of the first businesses to really get open source and community thinking. The perception at least has been that Oracle have acquired these technologies either as a byproduct of a bigger game or as a way of creating an ‘on ramp’ to their bigger, more profitable products. 
Oracle have started to recognise that to be seriously successful in the cloud, like AWS, you need to be pretty pervasive and not only connect with the top of the decision tree but also with those at the code face. To do that you need a bit of the ‘cool’ factor. That means doing things beyond just the database and your core middleware, areas which are more and more frequently subject to potential disruption from Hadoop and big data, NoSQL, and things like Kafka in the middleware space. This also fits with the narrative that to do well with SaaS you need at least a very good IaaS, and the way Oracle has approached SaaS means you definitely need good PaaS. So they might as well also make these commercial offerings. This has resulted in Oracle moving from half a dozen cloud offerings to something in the order of nearly forty offerings classified as PaaS, plus a range of IaaS offerings that will appeal to developers and architects, such as direct support for Docker through to Container Cloud, which provides a simplified Docker model, and on to Kafka, Node.js, MySQL, NoSQL and others. The web tier is pretty interesting with JET, which is an enterprise hardened, certified version of Angular, React and Express with extra tooling, and which has been made available as open source. So the technology options are becoming a lot more interesting. Oracle are also starting to target new startups and looking to get new organisations onto the Oracle platform from day one, in the same way it is easy for a startup to leverage AWS. Oracle have made some commitment to the Java developer community through JavaOne, which runs alongside the big brother conference of OpenWorld. They are now seriously trying to reach out to the hardcore development community (not just Java, as the new Oracle cloud offerings are definitely polyglot) through Oracle Code. I was fortunate enough to present at the London occurrence of the event (see my blog here). Where Oracle has not yet quite got there is in being clearly easy to start working with compared to AWS and Azure. Yes, Oracle provide new sign-ups with 300 dollars of credit, but when you have a reputation (deserved or otherwise) of being expensive it isn't necessarily going to get people onboard in droves – say, compared to AWS’s free micro-instance for a year. Conclusion In all of this, I am of the view that Oracle are making headway; they are recognising what needs to be done to be a player. I have said in the past, and I believe it is still true, that Oracle is like an oil tanker or aircraft carrier: it takes time to decide to turn, and turning isn't quick, but once a course is set a real head of steam and momentum will be built, and I wouldn't want to be in the company’s path. So let’s look at some hard facts – Oracle’s revenues remain pretty steady, and surprisingly Oracle showed up in the last week on LinkedIn’s top employers list[5]. Oracle isn’t going to just disappear; its database business alone will keep it alive for a very long time to come. Its SaaS business appears to be on a good trajectory, although more work on API enablement needs to take place. As an IaaS and PaaS technology provider Oracle appear to be getting a handle on things. 
Oracle is going to be attractive to end-user executives as it is one of the very few vendors that covers all tiers of cloud from IaaS to PaaS, providing the benefits of traditional hosting when needed as well as fully managed solutions and the benefits they offer. Oracle does still need to overcome some perception challenges; in many respects Oracle are seen in the same way Microsoft were in the 90s and 2000s, as something of a necessary evil that can be expensive.
[1] http://www.businessinsider.com/best-larry-ellison-quotes-2013-4?op=1&IR=T/#oud-computing-maybe-im-an-idiot-but-i-have-no-idea-what-anyone-is-talking-about-1
[2] http://www.gartner.com/it-glossary/bimodal/
[3] http://www.investopedia.com/updates/enron-scandal-summary/
[4] https://en.wikipedia.org/wiki/Bankruptcy_of_Lehman_Brothers
[5] https://www.linkedin.com/pulse/linkedin-top-companies-2017-where-us-wants-work-now-daniel-roth

Top 5 DevOps Tools to Increase Agility

Darrell Pratt
14 Oct 2016
7 min read
DevOps has been broadly defined as a movement that aims to remove the barriers between the development and operations teams within an organization. Agile practices have helped to increase speed and agility within development teams, but the old methodology of throwing the code over the wall to an operations department to manage the deployment of the code to the production systems still persists. The primary goal of the adoption of DevOps practices is to improve both the communication between disparate operations and development groups, and the process by which they work. Several tools are being used across the industry to put this idea into practice. We will cover what I feel is the top set of those tools from the various areas of the DevOps pipeline, in no particular order. Docker “It worked on my machine…” Every operations or development manager has heard this at some point in their career. A developer commits their code and promptly breaks an important environment because their local machine isn’t configured to be identical to a larger production or integration environment. Containerization has exploded onto the scene and Docker is at the nexus of the change to isolate code and systems into easily transferable modules. Docker is used in the DevOps suite of tools in a couple of ways. The quickest win is to first use Docker to provide the developers with easily usable containers that can mimic the various systems within the stack. If a developer is working on a new RESTful service, they can check out the container that is set up to run Node.js or Spring Boot, and write the code for the new service with the confidence that the container will be identical to the service environment on the servers. With the success of using Docker in the development workflow, the next logical step is to use Docker in the build stage of the CI/CD pipeline. Docker can help to isolate the build environment’s requirements across different portions of the larger application. By containerizing this step, it is easy to use one generic pipeline to build components in technologies spanning from Ruby and Node.js to Java and Golang. Git & JFrog Artifactory Source control and artifact management act as a funnel for the DevOps pipeline. The structure of an organization can dictate how they run these tools, be it hosted or served locally. Git’s decentralized source code management and high-performance merging features have helped it to become the most popular tool in version control systems. Atlassian Bitbucket and GitHub both provide a good set of tooling around Git and are easy to use and to integrate with other systems. Source code control is vital to the pipeline, but the control and distribution of artifacts into the build and deployment chain is important as well. (Figure: Branching in Git) Artifactory is a one-stop shop for any binary artifact hosted within a single repository, which now supports Maven, Docker, Bower, Ruby Gems, CocoaPods, RPM, Yum, and npm. As the codebase of an application grows and includes a broader set of technologies, the ability to control this complexity from a single point and integrate with a broad set of continuous integration tools cannot be stressed enough. Ensuring that the build scripts are using the correct dependencies, both external and internal, and serving a local set of Docker containers reduces the friction in the build chain and will make the lives of the technology team much easier. Jenkins There are several CI servers to choose from in the market. 
The hosted set of tools such as Travis CI, Codeship, Wercker and Circle CI are all very well suited to drive an integration pipeline, and each caters slightly better to an organization that is more cloud focused (source control and hosting), with deep integrations with GitHub and cloud providers like AWS, Heroku, Google and Azure. The older and less flashy system is Jenkins. Jenkins has continued to nurture a large community that is constantly adding in new integrations and capabilities to the product. The Jenkins Pipeline feature provides a text-based DSL for creating complex workflows that can move code from repository to the glass with any number of testing stages and intermediate environment deployments. The pipeline DSL can be created from code and this enables a good scaffolding setup for new projects to be integrated into the larger stack’s workflow. (Figure: Pipeline example) Hashicorp Terraform At this point we have a system that can build and manage applications through the software development lifecycle. The code is hosted in Git, orchestrated through testing and compilation with Jenkins, and running in reliable containers, and we are storing and proxying dependencies in Artifactory. The deployment of the application is where the operations and development groups come together in DevOps. Terraform is an excellent tool to manage the infrastructure required for running the applications as code itself. There are several vendors in this space — Chef, Puppet and Ansible to name just a few. Terraform sits at a higher level than many of these tools by acting as more of a provisioning system than a configuration management system. It has plugins to incorporate many of the configuration tools, so any investment that an organization has made in one of those systems can be maintained. (Figure: Load balancer and instance config) Where Terraform excels is in its ability to easily provision arbitrarily complex multi-tiered systems, both locally and cloud hosted. The syntax is simple and declarative, and because it is text, it can be versioned alongside the code and other assets of an application. This delivers on “Infrastructure as Code.” Slack A chat application was probably not what you were expecting in a DevOps article, but Slack has been a transformative application for many technology organizations. Slack provides an excellent platform for fostering communication between teams (text, voice and video) and integrating various systems. The DevOps movement stresses the removal of barriers between the teams and individuals who work together to build and deploy applications. Web hooks provide simple integration points for things such as build notifications, environment statuses and deployment audits. There is a growing number of custom integrations for some of the tools we have covered in this article, and the bot space is rapidly expanding into AI-backed members of the team that answer questions about builds and code, deploy code, or troubleshoot issues in production. It’s not a surprise that this space has gained its own name, ChatOps. 
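As a small illustration of the web hook integrations just mentioned, here is a hedged Python sketch that posts a build notification to a Slack incoming webhook. The webhook URL, job name, and message format are placeholders, not anything prescribed by the article or by any particular CI server.

```python
# Posts a simple build notification to a Slack incoming webhook.
# The webhook URL below is a placeholder; Slack generates a real one per channel.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"


def notify_build(job_name, build_number, status):
    """Send a one-line build status message to the team channel."""
    payload = {
        "text": f"Build *{job_name}* #{build_number} finished with status: {status}"
    }
    response = requests.post(WEBHOOK_URL, json=payload, timeout=5)
    response.raise_for_status()  # fail loudly if Slack rejects the message


if __name__ == "__main__":
    notify_build("ecommerce-api", 128, "SUCCESS")
```

A script like this can be called from the final stage of a pipeline so that every team member sees the result without leaving the channel where the work is discussed.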
Articles covering the top 10 ChatOps strategies will surely follow. Summary In this article, we covered several of the tools that integrate into the DevOps culture and how those tools are used and are transforming all areas of the technology organization. While not an exhaustive list, the areas that were covered will give you an idea of the scope of the space and how these various systems can be integrated together. About Darrell Pratt Darrell Pratt is a technologist who is responsible for a range of technologies at Cars.com, where he is the director of software development and delivery. He is passionate about technology and still finds time to write a bit of code and hack on hardware projects. Find him on Twitter here: @darrellpratt.

So you want to be a DevOps engineer

Darrell Pratt
20 Oct 2016
5 min read
The DevOps movement has come about to accomplish the long sought-after goal of removing the barriers between the traditional development and operations organizations. Historically, development teams have written code for an application and passed that code over to the operations team to both test and deploy onto the company’s servers. This practice generates many mistakes and misunderstandings in the software development lifecycle, in addition to the lack of ownership amongst developers that grows as a result of them not owning more of the deployment pipeline and production responsibilities. The new DevOps teams that are appearing now start as blended groups of developers, system administrators, and release engineers. The thought is that the developers can assist the operations team members in the process of building and more deeply understanding the applications, and the operations team members can shed light on the environments and deployment processes that they must master to keep the applications running. As these teams evolve, we are seeing the trend to specifically hire people into the role of the DevOps engineer. What this role is and what type of skills you might need to succeed as a DevOps engineer is what we will cover in this article. The Basics Almost every job description you are going to find for a DevOps engineer is going to require some level of proficiency in the desired production operating systems. Linux is probably the most common. You will need to have a very good level of understanding of how to administer and use a Linux-based machine. Words like grep, sed, awk, chmod, chown, ifconfig, netstat and others should not scare you. In the role of DevOps engineer, you are the go-to person for developers when they have issues with the server or cloud. Make sure that you have a good understanding of where the failure points can be in these systems and the commands that can be used to pinpoint the issues. Learn the package manager systems for the various distributions of Linux to better understand the underpinnings of how they work. From RPM and Yum to Apt and Apk, the managers vary widely but the common ideas are very similar in each. You should understand how to use the managers to script machine configurations and understand how modern containers are built. Coding The type of language you need for a DevOps role is going to depend quite a bit on the particular company. Java, C#, JavaScript, Ruby and Python are all popular languages. If you are a devout Java follower then choosing a .NET shop might not be your best choice. Use your discretion here, but the job is going to require a working knowledge of coding in one or more focused languages. At a minimum, you will need to understand how the build chain of the language works and should be comfortable understanding the error logging of the system and what those logs are telling you. Cloud Management Gone are the days of uploading a war file to a directory on the server. It’s very likely that you are going to be responsible for getting applications up and running on a cloud provider. Amazon Web Services is the gorilla in the space, and having a good level of hands-on experience with the various services that make up a standard AWS deployment is a much sought-after skill set. From standard AMIs to load balancing, CloudFormation and security groups, AWS can be complicated, but luckily it is very inexpensive to experiment with and there are many training classes on the different components. 
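To make the AWS point a little more concrete, here is a hedged sketch of the kind of small script a DevOps engineer ends up writing all the time: listing running EC2 instances and their Name tags with boto3. The region and the use of a Name tag are assumptions for the example.

```python
# Lists running EC2 instances and their Name tags using boto3.
# The region and the 'Name' tag convention are assumptions for the example.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Only ask the API for instances that are actually running.
response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        print(instance["InstanceId"], instance["InstanceType"], tags.get("Name", "-"))
```

Being comfortable turning questions like "what is actually running right now?" into a few lines of scripting is a big part of the hands-on experience described above.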
Source Code Control Git is the tool of choice currently for source code control. Git gives a team a decentralized SCM system that is built to handle branching and merging operations with ease. Workflows that teams use are varied, but a good understanding of how to merge branches, rebase and fix commit issues is required in the role. The DevOps engineers are usually looked to for help on addressing “interesting” Git issues, so good, hands-on experience is vital. Automation Tooling A new automation tool has probably been released in the time it takes to read this article. There will be new tools and platforms in this part of the DevOps space, but the most common are Chef, Puppet and Ansible. Each system provides a framework for treating the setup and maintenance of your infrastructure as code. Each system has a slightly different take on the method for writing the configurations and deploying them, but the concepts are similar and a good background in any one of these is more often than not a requirement for any DevOps role. Each of these systems requires a good understanding of either Ruby or Python and these languages appear quite a bit in the various tools used in the DevOps space. A desire to improve systems and processes While not an exhaustive list, mastering this set of skills will accelerate anyone’s journey towards becoming a DevOps engineer. If you can augment these skills with a strong desire to improve upon the systems and processes that are used in the development lifecycle, you will be an excellent DevOps engineer. About the author Darrell Pratt is the director of software development and delivery at Cars.com, where he is responsible for a wide range of technologies that drive the Cars.com website and mobile applications. He is passionate about technology and still finds time to write a bit of code and hack on hardware projects. You can find him on Twitter here: @darrellpratt.

What is ZeroVM?

Lars Butler
30 Jun 2014
6 min read
ZeroVM is a lightweight virtualization technology based on Google Native Client (NaCl). While it shares some similarities with traditional hypervisors and container technologies, it is unique in a number of respects. Unlike KVM and LXC, which provide an entire virtualized operating system environment, it isolates single processes and provides no operating system or kernel. This allows instances to start up in a very short time: about five milliseconds. Combined with a high level of security and zero execution overhead, ZeroVM is well-suited to ephemeral processes running untrusted code in multi-tenant environments. There are of course some limitations inherent in the design. ZeroVM cannot be used as a drop-in replacement for something like KVM or LXC. These limitations, however, were the deliberate design decisions necessary in order to create a virtualization platform specifically for building cloud applications. How ZeroVM is different to other virtualization tools Blake Yeager and Camuel Gilyadov gave a talk at the 2014 OpenStack Summit in Atlanta which summed up nicely the main differences between hypervisor-based virtual machines (KVM, Xen, and so on), containers (LXC, Docker, and so on), and ZeroVM. Here are the key differences they outlined:

|  | Traditional VM | Container | ZeroVM |
| --- | --- | --- | --- |
| Hardware | Shared | Shared | Shared |
| Kernel/OS | Dedicated | Shared | None |
| Overhead | High | Low | Very low |
| Startup time | Slow | Fast | Fast |
| Security | Very secure | Somewhat secure | Very secure |

Traditional VMs and containers provide a way to partition and schedule shared server resources for multiple tenants. ZeroVM accomplishes the same goal using a different approach and with finer granularity. Instead of running one or more application processes in a traditional virtual machine, applications written for ZeroVM must be decomposed into microprocesses, and each one gets its own instance. The advantage in this case is that you can avoid long-running VMs/processes which accumulate state (leading to memory leaks and cache problems). The disadvantage, however, is that it can be difficult to port existing applications. Each process running on ZeroVM is a single stateless unit of computation (much like a function in the “purely functional” sense; more on that to follow), and applications need to be structured specifically to fit this model. Some applications, such as long-running server applications, would arguably be impossible to re-implement entirely on ZeroVM, although some parts could be abstracted away to run inside ZeroVM instances. Applications that are predominantly parallel and involve many small units of computation are better suited to run on ZeroVM. Determinism ZeroVM provides a guarantee of functional determinism. What this means in practice is that with a given set of inputs (parameters, data, and so on), outputs are guaranteed to always be the same. This works because there are no sources of entropy. For example, the ZeroVM toolchain includes a port of glibc, which has a custom implementation of time functions such that time advances in a deterministic way for CPU and I/O operations. No state is accumulated during execution and no instances can be reused. The ZeroVM Run-Time environment (ZRT) does provide an in-memory virtual file system which can be used to read/write files during execution, but all writes are discarded when the instance terminates unless an output “channel” is used to pipe data to the host OS or elsewhere. Channels and I/O “Channels” are the basic I/O abstraction for ZeroVM instances. 
All I/O between the host OS and ZeroVM must occur over channels, and channels must be declared explicitly in advance. On the host, a channel can map to a file, character device, pipe, or socket. Inside an instance, all channels are presented as files that can be written to/read from, including devices like stdin, stdout, and stderr. Channels can also be used to connect multiple instances together to create arbitrary multi-stage job pipelines. For example, a MapReduce-style search application with multiple filters could be implemented on ZeroVM by writing each filter as a separate application/script and piping data from one to the next. Security ZeroVM has two key security components: static binary validation and a limited system call API. Static validation occurs before “untrusted” user code is executed to ensure that there are no accidental or malicious instructions that could break out of the sandbox and compromise the host system. Binary validation in this instance is largely based on the NaCl validator. (For more information about NaCl and its validation, you can read the following whitepaper http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/34913.pdf.) To further lock down the execution environment, ZeroVM only supports six system calls via a "trap" interface: pread, pwrite, jail, unjail, fork, and exit. By comparison, containers (LXC) expose the entire Linux system call API which presents a larger attack surface and more potential for exploitation. ZeroVM is lightweight ZeroVM is very lightweight. It can start in about five milliseconds. After the initial validation, program code is executed directly on the hardware without interpretation overhead or hardware virtualization. It's easy to embed in existing systems The security and lightweight nature of ZeroVM makes it ideal to embed in existing systems. For example, it can be used for arbitrary data-local computation in any kind of data store, akin to stored procedures. In this scenario, untrusted code provided by any user with access to the system can be executed safely. Because inputs and outputs must be declared explicitly upfront, the only concerns remaining are data access rules and quotas for storage and computation. Contrasted with a traditional model, where storage and compute nodes are separate, data-local computing can be a more efficient model when the cost of transferring data over the network to/from compute nodes outweighs the actual computation time itself. The tool has already been integrated with OpenStack Swift using ZeroCloud (middleware for Swift). This turns Swift into a “smart” data store, which can be used to scale parallel computations (such as multi-stage MapReduce jobs) across large collections of objects. Language support C and C++ applications can run on ZeroVM, provided that they are cross-compiled to NaCl using the provided toolchain. At present there is also support for Python 2.7 and Lua. Licensing All projects under the ZeroVM umbrella are licensed under Apache 2.0, which makes ZeroVM suitable for both commercial and non-commercial applications (the same as OpenStack).
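To make the channel and pipeline model described above a little more concrete, here is a hedged sketch of what one filter stage in a MapReduce-style search pipeline might look like. Inside the instance the input and output channels simply appear as stdin and stdout; the host-side channel wiring and the search term are assumptions for the example, not anything taken from the ZeroVM documentation.

```python
# A single, stateless filter stage: reads lines from its input channel (stdin),
# keeps the ones matching a search term, and writes them to its output channel
# (stdout). Several of these stages could be chained together via channels.
import sys

SEARCH_TERM = "error"  # assumed filter criterion for the example


def main():
    matched = 0
    for line in sys.stdin:
        if SEARCH_TERM in line.lower():
            sys.stdout.write(line)
            matched += 1
    # Diagnostics go to stderr, which is also exposed to the host as a channel.
    sys.stderr.write("matched %d lines\n" % matched)


if __name__ == "__main__":
    main()
```

Because the stage holds no state between runs and its only inputs and outputs are declared channels, it fits the deterministic, single-use execution model described earlier.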

Encryption in the Cloud: An Overview

Robi Sen
31 Mar 2015
9 min read
In this post we will look at how to secure your AWS solution data using encryption (if you need a primer on encryption here is a good one). We will also look at some of the various services from AWS and other third-party vendors that will help you not only encrypt your data, but take care of more problematic issues such as managing keys. Why Encryption Whether it’s Intellectual Property (IP) or simply just user names and passwords, your data is important to you and your organization. So, keeping it safe is important. Although hardening your network, operating systems, access management and other steps can greatly reduce the chance of being compromised, the cold hard reality is that, at some point in your company’s existence, that data will be compromised. So, assuming that you will be compromised is one major reason we need to encrypt data. Another major reason is the likelihood of accidental or purposeful inappropriate data access and leakage by employees which, depending on what studies you look at, is perhaps the largest reason for data exposure. Regardless of the reason or vector, you never want to expose important data unintentionally, and for this reason encrypting your sensitive information is fundamental to basic security. Three states of data Generally we classify data as having three distinct states: data at rest, such as when your data is in files on a drive or data in a database; data in motion, such as web requests going over the Internet via port 80; and data in use, which is generally data in RAM or data being used by the CPU. In general, the most at-risk data is data at rest and data in motion, both of which are reasonably straightforward to secure in the cloud, although their implementation needs to be carefully managed to maintain strong security. What to encrypt and what not to Most security people would love to encrypt anything and everything all the time, but encryption creates numerous real or potential problems. The first of these is that encryption is often computationally expensive and can consume CPU resources, especially when you’re constantly encrypting and decrypting data. Indeed, this has been one of the main reasons why vendors like Google did not encrypt all search traffic until recently. Another reason people often do not widely apply encryption is that it creates potential system administration and support issues since, depending on the encryption approach you take, you can create complex issues for managing your keys. Indeed, even the simplest encryption systems, such as encrypting a whole drive with a single key, require strong key management in order to be effective. This can create added expense and resource costs since organizations have to implement human and automated systems to manage and control keys. While there are many more reasons people do not widely implement encryption, the reality is that you usually have to make determinations on what to encrypt. Most organizations follow a process for deciding what to encrypt in the following manner: 1 - What data must be private? This might be Personally Identifiable Information, credit card numbers, or the like that is required to be private for compliance reasons such as PCI or FISMA. 2 - What level of sensitivity is this data? Some data such as PII often has federal data security requirements that are dictated by what industry you are in. For example, in health care HIPAA requirements dictate the minimum level of encryption you must use (see here for an example). 
Other data might require further encryption levels and controls.
3. What is the data's value to my business? This is a tricky one. Many companies decide they need little to no encryption for data they assume is unimportant, such as their users' email addresses. Then they get compromised, their users are spammed or have their identities stolen, and the company potentially faces real legal damages or a destroyed reputation. Depending on your business and business model, even if you are not required to encrypt your data, you may want to in order to protect your company, its reputation, or the brand.
4. What is the performance cost of using a specific encryption approach, and how will it affect my business?

These high-level steps will give you a sense of what you should or need to encrypt and how to encrypt it. Item 4 is especially important: while it might be nice to encrypt all your data with very large (say, 4096-bit) keys, doing so will most likely create too high a computational load, and too great a bottleneck, on any high-transaction application, such as an e-commerce store, to be practical. This takes us to our next topic, which is choosing encryption approaches.

Encryption choices in the cloud for Data at Rest

Generally there are two major choices to make when encrypting data, especially data at rest:

1. Encrypt only key sensitive data, such as logins, passwords, Social Security numbers, and similar fields.
2. Encrypt everything.

As we have pointed out, while encrypting everything would be nice, it comes with a lot of potential issues. In some cases, however, such as backing up data to S3 or Glacier for long-term storage, it can be a total no-brainer. More typically, though, numerous factors weigh in. Another choice you have to make with cloud solutions is where you will do your encryption, which should be driven by your specific application requirements, business requirements, and the like. When deploying cloud solutions you also need to think about how you interact with your cloud system. While you might be using a secure VPN from your office or home, you need to think about encrypting your data on the client systems that interact with your AWS-based system. For example, if you upload data to your system, don't just trust in SSL; make sure you use the same level of encryption on your home or office systems that you use on AWS. AWS supports server-side encryption, client-side encryption, and server-side encryption with keys that you supply and manage on the client (a short SDK sketch of these options appears below). The ability to use your own keys is an important and relatively recent feature, since various federal and business security standards require you to maintain possession of your own cryptographic keys. That being said, managing your own keys can be difficult to do well. AWS offers some help through Hardware Security Modules with its CloudHSM service. Another route is the multiple vendors that offer enterprise key management services, such as CloudCipher.

Data in Motion

Depending on your application users, you may need to send sensitive data to your AWS instances without being able to encrypt the data on their side first. Examples include creating a membership on your site, where you want to protect the user's password, or an e-commerce transaction, where you want to protect credit card and other information. In these cases, instead of using regular HTTP, you want to use the HTTP Secure protocol, or HTTPS.
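To make the data-at-rest options just discussed a little more concrete before we dig into HTTPS, here is a minimal sketch using the boto3 SDK. The bucket name, object keys, and sample payloads are hypothetical, and the choice of SDK is an assumption rather than anything this article prescribes; only the ServerSideEncryption and SSE-C parameters come from the documented S3 API. Note that boto3 talks to S3 over HTTPS endpoints by default, so the upload itself is also protected in motion.

```python
import os
import boto3

s3 = boto3.client("s3")          # API calls go to S3 over HTTPS by default
BUCKET = "my-example-bucket"     # hypothetical bucket name

# Option 1: server-side encryption with S3-managed keys (SSE-S3).
# S3 encrypts the object at rest and manages the key for you.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/sales.csv",                # hypothetical object key
    Body=b"order_id,total\n1001,49.99\n",   # stand-in for real file contents
    ServerSideEncryption="AES256",
)

# Option 2: server-side encryption with a customer-provided key (SSE-C),
# for cases where you must retain possession of the key material yourself.
customer_key = os.urandom(32)   # in practice, load this from your own key store
s3.put_object(
    Bucket=BUCKET,
    Key="reports/sales-ssec.csv",
    Body=b"order_id,total\n1001,49.99\n",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# Reading an SSE-C object back requires supplying the same key again.
obj = s3.get_object(
    Bucket=BUCKET,
    Key="reports/sales-ssec.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```

The SSE-C variant keeps key possession on your side, which is the property the compliance standards mentioned above tend to require; the trade-off is that you now own key storage, rotation, and recovery.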
HTTPS makes use of SSL/TLS, an encryption protocol for data in motion, to encrypt data as it travels over the network. While HTTPS can affect the performance of web servers or network applications, its benefits far outweigh the modest overhead it creates. Indeed, AWS makes extensive use of SSL/TLS to protect network traffic between you and AWS and between various AWS services. As such, you should make sure to protect any data in motion with a reputable SSL certificate. Also, if you are new to using SSL for your application, you should strongly consider reviewing OWASP's excellent cheat sheet on SSL. Finally, as stated earlier, don't just trust in SSL when sharing sensitive data. The best practice is to hash or encrypt any and all sensitive data whenever possible, since attackers sometimes can, and have, compromised SSL security.

Data in Use

Data in use encryption, the encryption of data while it is being used in RAM or by the CPU, is generally a special case that is mostly ignored in modern hosted applications, because it is very difficult and often not considered worth the effort for systems hosted on premises. Cloud vendors like AWS, though, create special considerations for customers, since the vendor controls, and has physical access to, the underlying hardware. This can potentially allow a malicious actor with access to that hardware to circumvent data encryption by reading a system's physical memory to steal encryption keys or data sitting in memory in plain text. As of 2012, the Cloud Security Alliance has recommended encryption for data in use as a best practice; see here. For this reason, a number of vendors have started offering data in use encryption specifically for cloud systems like AWS. This should be considered only for systems or applications with the most extreme security requirements, such as national security. Companies like PrivateCore and Vaultive currently offer services that allow you to encrypt your data even from your service provider.

Summary

Encryption and its proper use is a huge subject, and we have only been able to touch lightly on the topic. Implementing encryption is rarely easy, yet AWS takes much of the difficulty out of it by providing a number of services for you. That being said, understanding what your risks are, how encryption can help mitigate them, what specific types of encryption to use, and how it will affect your solution requires continued study. To help you with this, some useful reference material is listed below.

Encryption References

- OWASP: Guide to Cryptography
- OWASP: Password Storage Cheat Sheet
- OWASP: Cryptographic Storage Cheat Sheet
- Best Practices: Encryption Technology
- Cloud Security Alliance: Implementation Guidance, Category 8: Encryption
- AWS Security Best Practices

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus-year career in technology, engineering, and research has led him to work on cutting-edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as UnderArmour, Sony, CISCO, IBM, and many others to help build out new products and services.
Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or take to market.

The biggest cloud adoption challenges

Rick Blaisdell
10 Sep 2017
3 min read
The cloud technology industry is growing rapidly as companies come to understand the profitability and efficiency benefits that cloud computing can provide. According to the IDG Enterprise Cloud Computing Survey, 70 percent of U.S. companies have at least one application in the cloud, using public, private, or a combination of cloud models, and according to the Building Trust in a Cloudy Sky survey, almost 93 percent of organizations across the world use cloud services. Even though cloud adoption is increasing, it's important that companies develop a strategy before moving their data and using cloud technology to increase efficiency. This strategy matters because transitioning to the cloud is often a challenging process. If you're thinking of making this transition, here is a list of cloud adoption challenges that you should be aware of.

Technology

It's important to take into consideration the complex issues that can arise with new technology. For example, some applications are not built for the cloud, or have compliance requirements that cannot be met in a pure cloud environment. In such cases, a solution could be a hybrid environment configured to meet those security requirements.

People

Moving to the cloud can be met with resistance, especially from people who have spent most of their time managing physical infrastructure. The largest organizations will have a long transition to full cloud adoption, while small, tech-savvy companies will have an easier time making the change. Most modern IT departments will choose an agile approach to cloud adoption, although some staff might not be that experienced in these types of operational changes. The implementation takes time, but you can transform existing operating models to make the cloud more approachable for the company.

Psychological barriers

Psychologically, there will be many questions. Will the cloud be more secure? Can I maintain my SLAs? Will I find the right technical support services? In 2017, cloud providers can meet all of those expectations and, at the same time, reduce overall expenses.

Costs

Many organizations that decide to move to the cloud do not estimate costs properly. Even though the pricing seems simple, the more moving parts there are, the greater the likelihood of incorrect cost estimates. When starting the migration to the cloud, look for tools that will help you estimate cloud costs and ROI while taking all possible variables into consideration (a toy sketch of this kind of arithmetic appears at the end of this article).

Security

One of the CIO's main concerns when it comes to moving to the cloud is security and privacy. The management team needs to know whether the cloud provider they plan to work with has a bulletproof environment. This is a big challenge because a data breach could not only put the company's reputation at risk, but could also result in a huge financial loss for the company.

The first step in adopting cloud services is to identify all of the challenges that will come with the process. It is essential to work with the cloud provider to facilitate a successful cloud implementation. Are there any challenges that you consider crucial to a cloud transition? Let us know what you think in the comments section.

About the Author

Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies that reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.
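To illustrate the kind of arithmetic the Costs section refers to, here is a toy Python sketch. All of the function names, rates, and figures are invented for illustration; they are not drawn from the article or from any provider's price list, and a real estimate would need to account for storage, egress, support tiers, discounts, and the many other variables mentioned above.

```python
# Toy cost/ROI illustration only; real estimation tools model far more variables.

def monthly_compute_cost(instances: int, hours_per_month: float, hourly_rate: float) -> float:
    """Bare-bones compute bill: instances x hours x on-demand rate."""
    return instances * hours_per_month * hourly_rate


def simple_roi(current_monthly: float, cloud_monthly: float,
               migration_cost: float, months: int) -> float:
    """Net savings over the period, as a fraction of the one-off migration cost."""
    savings = (current_monthly - cloud_monthly) * months
    return (savings - migration_cost) / migration_cost


if __name__ == "__main__":
    # Hypothetical figures: 10 instances running all month at a made-up hourly rate.
    cloud = monthly_compute_cost(instances=10, hours_per_month=730, hourly_rate=0.10)
    print(f"Projected cloud compute: ${cloud:,.2f} per month")
    print(f"Three-year ROI vs. a $9,000/month on-premises bill: "
          f"{simple_roi(9000, cloud, migration_cost=25000, months=36):.1%}")
```

Even this toy version makes the article's point: the estimate is only as good as the variables you remember to include.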