
How-To Tutorials - Cloud Computing


Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service

Richard Gall
10 Jun 2019
7 min read
This year’s Skill Up survey threw a spotlight on the challenges developers and engineering teams face when it comes to cloud. Indeed, it even highlighted the extent to which cloud is still a nascent trend for many developers, even though it feels so mainstream within the industry - almost half of respondents aren’t using cloud at all. But for those that do use cloud, the survey results also illustrated some of the specific ways that people are using or plan to use cloud platforms, as well as highlighting the biggest challenges and mistakes organisations are making when it comes to cloud.

What came out as particularly important is that the limitations and the opportunities of cloud must be thought of together. With our research finding that cost only becomes important once a cloud platform is being used, it’s clear that if we’re to use cloud platforms successfully - and cost effectively - understanding the relationship between cost and opportunity over a sustained period of time (rather than, say, a month) is absolutely essential. As one of our respondents told us, “businesses are still figuring out how to leverage cloud computing for their business needs and haven't quite got the cost model figured out.”

Why does cost pose such a problem when it comes to cloud computing?

In this year’s survey, we asked people what their primary motivations for using cloud are. The key motivators were use case and employment (i.e. the decision was out of the respondent’s hands), but it was striking to see cost as only a minor consideration. Placed in the broader context of discussions around efficiency and a tightening global market, this seemed remarkable. It appears that people aren’t entering the cloud marketplace with cost as a top consideration. In contrast, however, this picture changes when we asked respondents about the biggest limiting factors for their chosen cloud platforms. At this point, cost becomes a much more important factor. This highlights that the reality of cloud costs only becomes apparent - or rather, becomes more apparent - once a cloud platform is implemented and being used. From this we can infer that there is a lack of strategic planning in cloud purchasing. It’s almost as if technology leaders are falling into certain cloud platforms based on commonplace assumptions about what’s right. This then has consequences further down the line.

We need to think about cloud cost and functionality together

The fact that functionality is also a key limitation is important to note here - in fact, it is closely tied up with cost, insofar as the functionality of each respective cloud platform is very neatly defined by its pricing structure. Take serverless, for example - although it’s typically regarded as something that can be cost-effective for organizations, it can prove costly when you start to scale workloads. You might save more money simply by optimizing your infrastructure. What this means in practice is that the features you want to exploit within your cloud platform should be approached with a clear sense of how they’re going to be used and how they’re going to fit into the evolution of your business and technology over the medium and long term.

Getting the most from leading cloud trends

There were two distinct trends that developers identified as the most exciting: machine learning and serverless. Although both are very different, they both hold a promise of efficiency. Whether that’s the efficiency of moving away from traditional means of hosting to cloud-based functions, or powerful data processing and machine-led decision making at scale, the fundamentals of both trends are about managing economies of scale in ways that would have been impossible half a decade ago. This plays into some of the issues around cost. If serverless and machine learning both appear to offer ways of saving on spending or radically driving growth, then when things don’t quite turn out the way technology purchasers expected, the relationship between cost and features can become a little strained.

Serverless

The idea that serverless will save you money is popular. And in general, it is inexpensive. The pricing structures of both AWS and Azure make Functions as a Service (FaaS) particularly attractive. It means you’ll no longer be spending money on provisioning compute resources you don’t actually need, with your provider managing the necessary elasticity.

Read next: The Future of Cloud lies in revisiting the designs and limitations of today’s notion of ‘serverless computing’, say UC Berkeley researchers

However, as we've already seen, serverless doesn't guarantee cost efficiency. You need to properly understand how you're going to use serverless to ensure that it's not costing you big money without you realising it. One way of using it might be to employ it for very specific workloads, allowing you to experiment in a relatively risk-free manner before employing it elsewhere - whatever you decide, you must ensure that the scope and purpose of the project is clear.

Machine learning as a Service

Machine learning - or deep learning in particular - is very expensive to do. This is one of the reasons that machine learning on cloud - machine learning as a service - is one of the most attractive features of many cloud platforms. But it’s not just about cost. Using cloud-based machine learning tools also removes some of the barriers to entry, making it easier for engineers who don’t necessarily have extensive training in the field to actually start using machine learning models in various ways. However, this does come with some limitations - and just as with serverless, you really do need to understand and even visualize how you’re going to use machine learning to ensure that you’re not just wasting time and energy with machine learning cloud features. You need to be clear about exactly how you’re going to use machine learning, what data you’re going to use, where it’s going to be stored, and what the end result should look like. Perhaps you want to embed machine learning capabilities inside an app? Or perhaps you want to run algorithms on existing data to inform internal decisions? Whatever it is, all these questions are important. These types of questions will also impact the type of platform you select. Google’s Cloud Platform is far and away the go-to platform for machine learning (this is one of the reasons why so many respondents said their motivation for using it was use case), but bear in mind that this could lead to some issues if the bulk of your data is typically stored on, say, AWS - you’ll need to build some kind of integration, or move your data to GCP (which is always going to be a headache).

The hidden costs of innovation

These types of extras are really important to consider when it comes to leveraging exciting cloud features. Yes, you need to use a pricing calculator and spend time comparing platforms, but factoring in the additional development time needed to build integrations or move things is something that a calculator clearly can’t account for. Indeed, this is true in the context of both machine learning and serverless. The organizational implications of your purchases are perhaps the most important consideration and one that’s often the easiest to miss.

Control the scope and empower your team

The organizational implications aren’t necessarily problems to be resolved - they could well be opportunities that you need to embrace. You need to prepare and be ready for those changes. Ultimately, preparation is key when it comes to leveraging the benefits of cloud. Defining the scope is critical, and to do that you need to understand what your needs are and where you want to get to. That sounds obvious, but it’s all too easy to fall into the trap of focusing on the possibilities and opportunities of cloud without paying careful consideration to how to ensure it works for you.

Read the results of Skill Up 2019. Download the report here.


Elastic Load Balancing

Packt
09 Feb 2016
21 min read
In this article by Yohan Wadia, the author of the book AWS Administration – The Definitive Guide, we are going to continue where we last dropped off and introduce an amazing concept called Auto Scaling! AWS has been one of the first public cloud providers to offer this feature, and it is really something that you must try out and use in your environments! This article will teach you the basics of Auto Scaling, its concepts and terminology, and even how to create an auto scaled environment using AWS. It will also cover Amazon Elastic Load Balancers and how you can use them in conjunction with Auto Scaling to manage your applications more effectively! So without wasting any more time, let's get started by understanding what Auto Scaling is and how it actually works!

(For more resources related to this topic, see here.)

An overview of Auto Scaling

We have been talking about AWS and the concept of dynamic scalability, a.k.a. elasticity, throughout this book; now is the best time to look at it in depth with the help of Auto Scaling! Auto Scaling basically enables you to scale your compute capacity (EC2 instances) either up or down, depending on the conditions you specify. These conditions could be as simple as a number that maintains the count of your EC2 instances at any given time, or complex conditions that measure the load and performance of your instances, such as CPU utilization, memory utilization, and so on. But a simple question that may arise here is: why do I even need Auto Scaling? Is it really that important? Let's look at a dummy application's load and performance graph to get a better understanding of things, as shown in the following screenshot:

The graph to the left depicts the traditional approach that is usually taken to map an application's performance requirements to a fixed infrastructure capacity. To meet this application's unpredictable performance requirement, you would have to plan and procure additional hardware upfront, as depicted by the red line. And since there is no guaranteed way to plan for unpredictable workloads, you generally end up procuring more than you need. This is a standard approach employed by many businesses, and it doesn't come without its own set of problems. For example, the region highlighted in red is when most of the procured hardware capacity is idle and wasted, as the application simply does not have that high a requirement. There can also be cases where the procured hardware simply does not match the application's high performance requirements, as shown by the green region. All these issues, in turn, have an impact on your business, which frankly can prove to be quite expensive. That's where the elasticity of the cloud comes into play. Rather than procuring at the nth hour and ending up with wasted resources, you grow and shrink your resources dynamically as per your application's requirements, as depicted in the graph on the right. This not only helps you save overall costs but also makes your application's management a lot easier and more efficient. And don't worry if your application does not have an unpredictable load pattern! Auto Scaling is designed to work with both predictable and unpredictable workloads, so no matter what application you may have, you can rest assured that the required compute capacity is always going to be made available when required.
Keeping that in mind, let us summarize some of the benefits that AWS Auto Scaling provides:

Cost Savings: By far the biggest advantage provided by Auto Scaling; you gain a lot of control over the deployment of your instances as well as costs by launching instances only when they are needed and terminating them when they aren't required.
Ease of Use: AWS provides a variety of tools with which you can create and manage Auto Scaling, such as the AWS CLI and the EC2 Management Dashboard. Auto Scaling can also be created and managed programmatically via a simple and easy-to-use web service API.
Scheduled Scaling Actions: Apart from scaling instances as per a given policy, you can additionally schedule scaling actions to be executed in the future. This type of scaling comes in handy when your application's workload patterns are predictable and well known in advance.
Geographic Redundancy and Scalability: AWS Auto Scaling enables you to scale, distribute, as well as load balance your application automatically across multiple Availability Zones within a given region.
Easier Maintenance and Fault Tolerance: AWS Auto Scaling replaces unhealthy instances automatically based on predefined alarms and thresholds.

With these basics in mind, let us understand how Auto Scaling actually works in AWS.

Auto scaling components

To get started with Auto Scaling on AWS, you will be required to work with three primary components, each described briefly as follows.

Auto scaling group

An Auto Scaling Group is a core component of the Auto Scaling service. It is basically a logical grouping of instances that share some common scaling characteristics. For example, a web application can contain a set of web server instances that form one Auto Scaling Group and another set of application server instances that become part of another Auto Scaling Group, and so on. Each group has its own set of criteria, including the minimum and maximum number of instances that the group should have, along with the desired number of instances that the group must have at all times.

Note: The desired number of instances is an optional field in an Auto Scaling Group. If the desired capacity value is not specified, the Auto Scaling Group will use the minimum number of instances as the desired value instead.

Auto Scaling Groups are also responsible for performing periodic health checks on the instances contained within them. An instance with degraded health is immediately swapped out and replaced by a new one by the Auto Scaling Group, thus ensuring that each of the instances within the group works at optimum levels.

Launch configurations

A Launch Configuration is a set of blueprint statements that the Auto Scaling Group uses to launch instances. You can create a single Launch Configuration and use it with multiple Auto Scaling Groups; however, you can only associate one Launch Configuration with a single Auto Scaling Group at a time. What does a Launch Configuration contain? To start off with, it contains the AMI ID using which Auto Scaling launches the instances in the Auto Scaling Group. It also contains additional information about your instances, such as the instance type, the security group it has to be associated with, block device mappings, key pairs, and so on. An important thing to note here is that once you create a Launch Configuration, there is no way you can edit it again. The only way to make changes to a Launch Configuration is by creating a new one in its place and associating that with the Auto Scaling Group.
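The same two components can also be created outside the console. Below is a minimal boto3 sketch that creates a launch configuration and an Auto Scaling group with minimum, maximum, and desired capacities; the AMI ID, key pair, security group, and subnet IDs are hypothetical placeholders you would replace with your own.

```python
import boto3

# A minimal sketch - the AMI ID, key name, security group, and subnet IDs below
# are placeholders, not real resources.
autoscaling = boto3.client('autoscaling', region_name='us-west-2')

# Launch configuration: the "blueprint" (AMI, instance type, key pair, security group)
# used to launch every instance in the group. It cannot be edited once created.
autoscaling.create_launch_configuration(
    LaunchConfigurationName='web-launch-config-v1',
    ImageId='ami-0123456789abcdef0',         # hypothetical AMI ID
    InstanceType='t2.micro',
    KeyName='my-key-pair',                   # hypothetical key pair
    SecurityGroups=['sg-0123456789abcdef0'], # hypothetical security group ID
)

# Auto Scaling group: min/max bounds plus an optional desired capacity, spread
# across two subnets (one per Availability Zone) for geographic redundancy.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='web-asg',
    LaunchConfigurationName='web-launch-config-v1',
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier='subnet-11111111,subnet-22222222',  # hypothetical subnet IDs
    HealthCheckType='EC2',
    HealthCheckGracePeriod=300,
)
```

To change the blueprint later, you would create web-launch-config-v2 and point the group at it, exactly as described above.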
Scaling plans

With your Launch Configuration created, the final step left is to create one or more Scaling Plans. Scaling Plans describe how the Auto Scaling Group should actually scale. There are three scaling mechanisms you can use with your Auto Scaling Groups, each described as follows:

Manual Scaling: Manual Scaling is by far the simplest way of scaling your resources. All you need to do here is specify a new desired number of instances, or change the minimum or maximum number of instances in an Auto Scaling Group, and the rest is taken care of by the Auto Scaling service itself.
Scheduled Scaling: Scheduled Scaling is really helpful when it comes to scaling resources based on a particular time and date. This method of scaling is useful when the application's load patterns are highly predictable, and thus you know exactly when to scale up or down. For example, an application that processes a company's payroll cycle is usually load intensive towards the end of each month, so you can schedule the scaling requirements accordingly.
Dynamic Scaling: Dynamic Scaling, or scaling on demand, is used when the predictability of your application's performance is unknown. With Dynamic Scaling, you generally provide a set of scaling policies using some criteria; for example, scale out the instances in my Auto Scaling Group by 10 when the average CPU utilization exceeds 75 percent for a period of 5 minutes. Sounds familiar, right? That's because these dynamic scaling policies rely on Amazon CloudWatch to trigger scaling events. CloudWatch monitors the policy conditions and triggers the auto scaling events when certain thresholds are breached. In either case, you will require a minimum of two such scaling policies: one for scaling in (terminating instances) and one for scaling out (launching instances).
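To make the dynamic scaling example concrete, here is a hedged boto3 sketch that attaches a simple scale-out policy to the group sketched earlier and wires it to a CloudWatch alarm that fires when average CPU utilization exceeds 75 percent over a 5-minute period; a matching scale-in policy would mirror it with a negative adjustment and a lower threshold.

```python
import boto3

autoscaling = boto3.client('autoscaling', region_name='us-west-2')
cloudwatch = boto3.client('cloudwatch', region_name='us-west-2')

# Scale-out policy: add capacity whenever the alarm below goes off.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-asg',
    PolicyName='scale-out-on-high-cpu',
    AdjustmentType='ChangeInCapacity',
    ScalingAdjustment=1,       # add one instance per alarm trigger
    Cooldown=300,
)

# CloudWatch alarm: average CPU across the group > 75% over one 5-minute period.
cloudwatch.put_metric_alarm(
    AlarmName='web-asg-high-cpu',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'web-asg'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=1,
    Threshold=75.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[policy['PolicyARN']],
)
```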
Before we go ahead and create our first Auto Scaling activity, we need to understand one additional AWS service that will help us balance and distribute the incoming traffic across our auto scaled EC2 instances. Enter the Elastic Load Balancer!

Introducing the Elastic Load Balancer

Elastic Load Balancer, or ELB, is a web service that allows you to automatically distribute incoming traffic across a fleet of EC2 instances. In simpler terms, an ELB acts as a single point of contact between your clients and the EC2 instances that are servicing them. The clients query your application via the ELB; thus, you can easily add and remove the underlying EC2 instances without having to worry about any of the traffic routing or load distribution. It is all taken care of by the ELB itself! Coupled with Auto Scaling, ELB provides you with a highly resilient and fault tolerant environment to host your applications. While the Auto Scaling service automatically removes any unhealthy EC2 instances from its group, the ELB automatically reroutes the traffic to some other healthy instance. Once a new healthy instance is launched by the Auto Scaling service, the ELB will once again route traffic through it and balance out the application load as well. But the work of the ELB doesn't stop there! An ELB can also be used to safeguard and secure your instances by enforcing encryption and by utilizing only HTTPS and SSL connections. Keeping these points in mind, let us look at how an ELB actually works.

Well, to begin with, when you create an ELB in a particular AZ, you are actually spinning up one or more ELB nodes. Don't worry, you cannot physically see these nodes, nor can you perform many actions on them. They are completely managed and looked after by AWS itself. Each node is responsible for forwarding the incoming traffic to the healthy instances present in that particular AZ. Now here's the fun part! If you configure the ELB to work across multiple AZs, and one entire AZ goes down or the instances in that particular AZ become unhealthy for some reason, then the ELB will automatically route traffic to the healthy instances present in the second AZ.

How does it do the routing? The ELB by default is provided with a Public DNS name, something similar to MyELB-123456789.region.elb.amazonaws.com. The clients send all their requests to this particular Public DNS name. The AWS DNS servers then resolve this public DNS name to the public IP addresses of the ELB nodes. Each of the nodes has one or more Listeners configured on it, which constantly check for any incoming connections. A Listener is nothing but a process that is configured with a combination of a protocol, for example HTTP, and a port, for example 80. The ELB node that receives a particular request from the client then routes the traffic to a healthy instance using a particular routing algorithm. If the Listener was configured with an HTTP or HTTPS protocol, then the preferred routing algorithm is the least outstanding requests algorithm.

Note: If you had configured your ELB with a TCP listener, then the preferred routing algorithm is Round Robin.

Confused? Well, don't be, as most of these things are handled internally by the ELB itself. You don't have to configure the ELB nodes or the routing tables. All you need to do is set up the Listeners in your ELB and point all client requests to the ELB's Public DNS name - that's it!
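As a quick illustration of that last point, the short sketch below resolves an ELB's public DNS name and sends a plain HTTP request to it, exactly as any client would; the DNS name shown is a hypothetical placeholder, so substitute the one displayed on your own ELB's Description tab.

```python
import socket
import urllib.request

# Hypothetical ELB DNS name - replace with the Public DNS name of your own ELB.
elb_dns_name = "MyELB-123456789.us-west-2.elb.amazonaws.com"

# The AWS DNS servers resolve the ELB's public DNS name to the public IPs of its
# nodes; printing the results shows one address per active ELB node.
for *_, sockaddr in socket.getaddrinfo(elb_dns_name, 80, proto=socket.IPPROTO_TCP):
    print("ELB node address:", sockaddr[0])

# Clients simply send requests to the DNS name; the ELB picks a healthy backend instance.
with urllib.request.urlopen(f"http://{elb_dns_name}/", timeout=5) as response:
    print("Status:", response.status)
```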
Keeping these basics in mind, let us go ahead and create our very first ELB!

Creating your first Elastic Load Balancer

Creating and setting up an ELB is a fairly easy and straightforward process, provided you have planned and defined your Elastic Load Balancer's role from the start. The current version of ELB supports HTTP, HTTPS, TCP, as well as SSL connection protocols; however, for the sake of simplicity, we will be creating a simple ELB for balancing HTTP traffic only. I'll be using the same VPC environment that we have been developing since Chapter 5, Building your Own Private Clouds using Amazon VPC; however, you can easily substitute your own infrastructure in its place as well. To access the ELB Dashboard, you will have to first access the EC2 Management Console. Next, from the navigation pane, select the Load Balancers option, as shown in the following screenshot. This will bring up the ELB Dashboard, using which you can create and associate your ELBs. An important point to note here is that although ELBs are created using this particular portal, you can use them for both your EC2 and VPC environments. There is no separate portal for creating ELBs in a VPC environment. Since this is our first ELB, let us go ahead and select the Create Load Balancer option. This will bring up a seven-step wizard using which you can create and customize your ELBs.

Step 1 – Defining Load Balancer

To begin with, provide a suitable name for your ELB in the Load Balancer name field. In this case, I have opted to stick to my naming convention and named the ELB US-WEST-PROD-LB-01. Next up, select the VPC in which you wish to deploy your ELB. Again, I have gone ahead and selected the US-WEST-PROD-1 (192.168.0.0/16) VPC that we created in Chapter 5, Building your Own Private Clouds using Amazon VPC. You can alternatively select your own VPC environment or even select a standalone EC2 environment if it is available. Do not check the Create an internal load balancer option, as in this scenario we are creating an Internet-facing ELB for our web server instances.

There are two types of ELBs that you can create and use based on your requirements. The first is an Internet-facing Load Balancer, which is used to balance client requests that are inbound from the Internet. Ideally, such Internet-facing load balancers connect to the Public Subnets of a VPC. Similarly, you also have something called Internal Load Balancers that connect and route traffic to your Private Subnets. You can use a combination of these depending on your application's requirements and architecture; for example, you can have one Internet-facing ELB as your application's main entry point and an internal ELB to route traffic between your Public and Private Subnets. However, for simplicity, let us create an Internet-facing ELB for now.

With these basic settings done, we now provide our ELB's Listeners. A Listener is made up of two parts: a protocol and port number for your frontend connection (between your client and the ELB), and a protocol and port number for the backend connection (between the ELB and the EC2 instances). In the Listener Configuration section, select HTTP from the Load Balancer Protocol dropdown list and provide the port number 80 in the Load Balancer Port field, as shown in the following screenshot. Provide the same protocol and port number for the Instance Protocol and Instance Port fields as well. What does this mean? Well, this listener is now configured to listen on the ELB's external port (Load Balancer Port) 80 for any client's requests. Once it receives the requests, it will then forward them to the underlying EC2 instances using the Instance Port, which in this case is port 80 as well. There is no rule of thumb that both port values must match; in fact, it is actually good practice to keep them different. Although your ELB can listen on port 80 for any client's requests, it can use any port within the range of 1-65,535 for forwarding the request to the instances. You can optionally add additional listeners to your ELB, such as a listener for the HTTPS protocol running on port 443; however, that is something that I will leave you to do later.

The final configuration item left in step 1 is where you get to select the Subnets to be associated with your new Load Balancer. In my case, I have gone ahead and created a set of subnets, one in each of two different AZs, so as to mimic a high availability scenario. Select any particular Subnets and add them to your ELB by selecting the adjoining + sign. In my case, I have selected two Subnets, both belonging to the web server instances, but each in a different AZ.

Note: You can select a single Subnet as well; however, it is highly recommended that you go for a highly available architecture, as described earlier.

Once your subnets are added, click on Next: Assign Security Groups to continue over to step 2.
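Everything we have configured in step 1 can also be done through the classic ELB API; here is a hedged boto3 sketch that creates an Internet-facing load balancer with the same HTTP:80 listener, where the subnet and security group IDs are hypothetical placeholders.

```python
import boto3

# Classic Elastic Load Balancing client (the ELB generation described in this article).
elb = boto3.client('elb', region_name='us-west-2')

# Internet-facing ELB with a single HTTP listener: port 80 on the ELB forwarded
# to port 80 on the instances. Subnet and security group IDs are placeholders.
elb.create_load_balancer(
    LoadBalancerName='US-WEST-PROD-LB-01',
    Listeners=[{
        'Protocol': 'HTTP',
        'LoadBalancerPort': 80,
        'InstanceProtocol': 'HTTP',
        'InstancePort': 80,
    }],
    Subnets=['subnet-11111111', 'subnet-22222222'],  # one subnet per AZ
    SecurityGroups=['sg-0123456789abcdef0'],         # hypothetical security group
    Scheme='internet-facing',
)
```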
Step 2 – Assign Security Groups

Step 2 is where we get to assign a Security Group to our ELB. Now here's a catch: you will not be prompted for a Security Group if you are using an EC2-Classic environment for your ELB. This Security Group is only necessary for VPC environments and will basically allow the port you designated for inbound traffic to pass through. In this case, I have created a new dedicated Security Group for the ELB. Provide a suitable Security group name as well as a Description, as shown in the preceding screenshot. The new security group already contains a rule that allows traffic to the port that you configured your Load Balancer to use, in my case port 80. Leave the rule at its default value and click on Next: Configure Security Settings to continue.

Step 3 – Configure Security Settings

This is an optional page that basically allows you to secure your ELB by using either the HTTPS or the SSL protocol for your frontend connection. But since we have opted for a simple HTTP-based ELB, we can ignore this page for now. Click on Next: Configure Health Check to proceed to the next step.

Step 4 – Configure Health Check

Health Checks are a very important part of an ELB's configuration, and hence you have to be extra cautious when setting them up. What are Health Checks? To put it in simple terms, these are basic tests that the ELB conducts to ensure that your underlying EC2 instances are healthy and running optimally. These tests include simple pings, attempted connections, or even send requests. If the ELB senses that any of the EC2 instances is in an unhealthy state, it immediately changes that instance's Health Check Status to OutOfService. Once the instance is marked as OutOfService, the ELB no longer routes any traffic to it. The ELB will only start sending traffic back to the instance if its Health Check State changes to InService again. To configure the Health Checks for your ELB, fill in the following information as described here:

Ping Protocol: This field indicates which protocol the ELB should use to connect to your EC2 instances. You can use the TCP, HTTP, HTTPS, or SSL options; however, for simplicity, I have selected the HTTP protocol here.
Ping Port: This field is used to indicate the port which the ELB should use to connect to the instance. You can supply any port value from the range 1 to 65,535; however, since we are using the HTTP protocol, I have opted to stick with the default value of port 80. This port value is really essential, as the ELB will periodically ping the EC2 instances on this port number. If any instance does not reply back in a timely fashion, then that instance will be deemed unhealthy by the ELB.
Ping Path: This value is usually used for the HTTP and HTTPS protocols. The ELB sends a simple GET request to the EC2 instances based on the Ping Port and Ping Path. If the ELB receives a response other than an "OK," then that particular instance is deemed to be unhealthy by the ELB and it will no longer route any traffic to it. Ping Paths are generally set to a forward slash "/", which indicates the default home page of a web server. However, you can also use a /index.html or a /default.html value as you see fit. In my case, I have provided the /index.php value, as my dummy web application is actually a PHP app.
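The same health check can be attached through the API as well; the hedged boto3 sketch below uses the HTTP:80/index.php target from this example together with interval, timeout, and threshold values like the defaults described next.

```python
import boto3

elb = boto3.client('elb', region_name='us-west-2')

# Health check mirroring the wizard: GET http://<instance>:80/index.php, checked
# every 30 seconds with a 5-second response timeout; two consecutive failures mark
# an instance OutOfService and two consecutive successes bring it back InService.
elb.configure_health_check(
    LoadBalancerName='US-WEST-PROD-LB-01',
    HealthCheck={
        'Target': 'HTTP:80/index.php',
        'Interval': 30,
        'Timeout': 5,
        'UnhealthyThreshold': 2,
        'HealthyThreshold': 2,
    },
)
```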
Besides the Ping checks, there are also a few advanced configuration details that you can set based on your application's health check needs. Let's take a look at the following screenshot:

Response Time: The Response Time is the time the ELB waits to receive a response. The default value is 5 seconds, with a max value of up to 60 seconds.
Health Check Interval: This field indicates the amount of time (in seconds) the ELB waits between health checks of an individual EC2 instance. The default value is 30 seconds; however, you can specify a max value of 300 seconds as well.
Unhealthy Threshold: This field indicates the number of consecutive failed health checks an ELB must observe before declaring an instance unhealthy. The default value is 2, with a max threshold value of 10.
Healthy Threshold: This field indicates the number of consecutive successful health checks an ELB must observe before declaring an instance healthy. The default value is 2, with a max threshold value of 10.

Once you have provided your values, go ahead and select the Next: Add EC2 Instances option.

Step 5 – Add EC2 Instances

In this section of the wizard, you can select any running instance from your Subnets to be added and registered with the ELB. But since we are setting up this particular ELB for use with Auto Scaling, we will leave this section for now. Click on Next: Add Tags to proceed with the wizard.

Step 6 – Add Tags

We already know the importance of tagging our AWS resources, so go ahead and provide a suitable tag for categorizing and identifying your ELB. Note that you can always add, edit, and remove tags at a later time using the ELB Dashboard. With the Tags all set up, click on Review and Create.

Step 7 – Review and Create

The final step of our ELB creation wizard is where we simply review our ELB's settings, including the Health Checks, EC2 instances, Tags, and so on. Once reviewed, click on Create to begin your ELB's creation and configuration. The ELB takes a few seconds to get created, but once it's ready, you can view and manage it just like any other AWS resource using the ELB Dashboard, as shown in the following screenshot:

Select the newly created ELB and view its details in the Description tab. Make a note of the ELB's public DNS Name as well. You can optionally view the Status as well as the ELB Scheme (whether Internet-facing or internal) using the Description tab. You can also view the ELB's Health Checks as well as the Listeners configured for your ELB.

Before we proceed with the next section of this article, here are a few important pointers to keep in mind when working with ELB. Firstly, the configurations that we performed on our ELB are all very basic and will help you get through the basics; however, ELB also provides additional advanced configuration options, such as Cross-Zone Load Balancing, Proxy Protocols, and Sticky Sessions, which can all be configured using the ELB Dashboard. To know more about these advanced settings, refer to http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-configure-load-balancer.html. The second important thing worth mentioning is the ELB's cost. Although it is free (terms and conditions apply) to use under the Free Tier eligibility, ELBs are charged approximately $0.025 per hour used. There is a nominal data transfer charge as well, which is approximately $0.008 per GB of data processed.
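As an example of one of those advanced options, the hedged boto3 sketch below enables Cross-Zone Load Balancing on the ELB created earlier, which lets every ELB node distribute traffic across healthy instances in all of its enabled Availability Zones rather than only its own.

```python
import boto3

elb = boto3.client('elb', region_name='us-west-2')

# Enable Cross-Zone Load Balancing so each ELB node spreads requests across
# healthy instances in every enabled AZ, not just the AZ the node lives in.
elb.modify_load_balancer_attributes(
    LoadBalancerName='US-WEST-PROD-LB-01',
    LoadBalancerAttributes={
        'CrossZoneLoadBalancing': {'Enabled': True},
    },
)
```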
Summary

I really hope that you have learnt as much as possible about Amazon ELB. We talked about the importance of Auto Scaling and how it proves to be super beneficial when compared to the traditional mode of scaling infrastructure. We then learnt a bit about AWS Auto Scaling and its core components. Next, we learnt about a new service offering called Elastic Load Balancers and saw how easy it is to deploy one for our own use.

Resources for Article:

Further resources on this subject:
Achieving High-Availability on AWS Cloud [article]
Amazon Web Services [article]
Patterns for Data Processing [article]


Top reasons why businesses should adopt enterprise collaboration tools

Guest Contributor
05 Mar 2019
8 min read
Following the trends of the modern digital workplace, organizations apply automation even to domains that are intrinsically human-centric. Collaboration is one of them. And while organizations have already gained broad experience in digitizing business processes and foreseeing their potential pitfalls, the situation is different with collaboration. The automation of collaboration processes can bring a significant number of unexpected challenges, even to companies that have tested the waters.

State of Collaboration 2018 reveals a curious fact: even though organizations can be highly involved in collaborative initiatives, employees still report that both they and their companies are poorly prepared to collaborate. Almost a quarter of respondents (24%) affirm that they lack relevant enterprise collaboration tools, while 27% say that their organizations undervalue collaboration and don't offer any incentives to support it. Two reasons can explain these stats:

The collaboration process can hardly be standardized and split into precise workflows. The number of collaboration scenarios is enormous, and it's impossible to get them all into a single software solution. It's also pretty hard to manage collaboration, assess its effectiveness, or understand bottlenecks.
Unlike business process automation systems that play a critical role in an organization and ensure core production or business activities, enterprise collaboration tools are mostly seen as supplementary solutions, so they are the last to be implemented. Moreover, as organizations often don't spend much effort on adapting collaboration tools to their specifics, the end solutions are frequently subject to poor adoption.

At the same time, the IT market offers numerous enterprise collaboration tools: Slack, Trello, Stride, Confluence, Google Suite, Workplace by Facebook, SharePoint, and Office 365, to mention a few, all compete to win enterprises' loyalty. But how do you choose the right enterprise collaboration tools and make them effective? And how do you get employees to actively use the tools you implement? To answer these questions and understand how to succeed in their collaboration-focused projects, organizations have to examine both the tech- and employee-related challenges they may face.

Challenges rooted in technologies

From the enterprise collaboration tool's deployment model to its customization and integration flexibility, companies should consider a whole array of aspects before they decide which solution to implement.

Selecting a technologically suitable solution

Finding a proper solution is a long process that requires companies to make several important decisions:

Cloud or on-premises? By choosing the deployment type, organizations define the future infrastructure to run the solution, the required management effort, the data location, and the amount of customization available. Cloud solutions can help enterprises save both technical and human resources. However, companies often mistrust them because of multiple security concerns. On-premises solutions can be attractive from the customization, performance, and security points of view, but they are resource-demanding and expensive due to high licensing costs.
Ready-to-use or custom? Today many vendors offer ready-made enterprise collaboration tools, particularly in the field of enterprise intranets. This option is attractive for organizations because they can save on customizing a solution from scratch.
However, with ready-made products, organizations face a bigger risk of having to follow a vendor's rigid policies (subscription/ownership price, support rates, functional capabilities, etc.). If companies choose custom enterprise collaboration software, they have a wider choice of IT service providers to cooperate with and can adjust the solution to their needs.
One tool or several integrated tools? Some organizations prefer using a couple of apps that cover different collaboration needs (for example, document management, video conferencing, instant messaging). At the same time, companies can also go for a centralized solution, such as SharePoint or Office 365, that can support all collaboration types and let users create a centralized enterprise collaboration environment.

Exploring integration options

Collaboration isn't an isolated process. It is tightly related to the business or organizational activities that employees do. That's why integration capabilities are among the most critical aspects companies should check before investing in their collaboration stack. Connecting an enterprise collaboration tool to ERP, CRM, HRM, or ITSM solutions will not only contribute to business process consistency but will also reduce the risk of collaboration gaps and communication inconsistencies.

Planning ongoing investment

Like any other business solution, an enterprise collaboration tool requires financial investment to implement, customize (even ready-made solutions require tuning), and support. The initial budget will strongly depend on the deployment type, the estimated number of users, and the needed customizations. While planning their yearly collaboration investment, companies should remember that their budgets should cover not only the activities necessary to ensure the solution's technical health but also a user adoption program.

Eliminating duplicate functionality

Let's consider the following scenario: a company implements a collaboration tool that includes project management functionality while also running a legacy project management system. The same situation can happen with time tracking, document management, knowledge management systems, and other stand-alone solutions. In this case, it is reasonable to consider switching to the new suite completely and retiring the legacy one. For example, by choosing SharePoint Server or Online, organizations can unite various functions within a single solution. To ensure a smooth transition to the new environment, SharePoint developers can migrate all the data from legacy systems, thus making it part of the new solution.

Choosing a security vector

As mentioned before, the solution's deployment model dictates the security measures that organizations have to take. Sometimes security is the paramount reason that holds enterprises' collaboration initiatives back. Security concerns are particularly characteristic of organizations that hesitate between on-premises and cloud solutions. SharePoint and Office 365 trends 2018 show that security represents the major worry for organizations that consider changing their on-premises deployments for cloud environments. What's even more surprising is that while software providers, like Microsoft, are continually improving their security measures, the degree of concern keeps growing. The report mentioned above reveals that 50% of businesses were concerned about security in 2018, compared to 36% in 2017 and 32% in 2016.
Human-related challenges

Technology challenges are multiple, but they can all be solved quite quickly, especially if a company partners with a professional IT service provider that backs them up at the tech level. At the same time, companies should be ready to face employee-related barriers that may ruin their collaboration effort.

Changing employees' typical style of collaboration

Don't expect that your employees will welcome the new collaboration solution. It is about to change their typical collaboration style, which may be difficult for many. Some employees won't share their knowledge openly, while others will find it difficult to switch from one-to-one discussions to digitized team meetings. In this context, change management should work at two levels: a technological one and a mental one. Companies should not just explain to employees how to use the new solution effectively, but also show each team how to adapt the collaboration system to the needs of each team member without damaging the usual collaboration flow.

Finding the right tools for collaborators and non-collaborators

Every team consists of different personalities. Some people can be open to collaboration; others can be quite hesitant. The task is to ensure the productive co-work of these two very different types of employees and everyone in between. Teams shouldn't expect instant collaboration consistency or general satisfaction. These are only possible to achieve if the entire team works together to create an optimal collaboration area for each individual.

Launching digital collaboration within large distributed teams

When it comes to organizing collaboration within a small or medium-sized team, collaboration difficulties can be quite simple to avoid, as the collaboration flow is moderate. But when it comes to collaboration in big teams, the risk of failure increases dramatically. Organizing effective communication between remote employees, connecting distributed offices, offering relevant collaboration areas to the entire team and its subteams, and enabling cross-device consistency of collaboration - these are just a few of the steps to undertake for effective teamwork.

Preparing strategies to overcome adoption difficulties

The biggest human-related challenge is the poor adoption of an enterprise collaboration system. It can be hard for employees to get used to the new solution and accept the new communication medium, its UI, and its logic. Adoption issues are critical to address because they may engender more severe consequences than the tech-related ones. Say there is a functional defect in a solution; a company can fix it within a few days. However, if there are adoption issues, it means that all the effort an organization puts into polishing the technology can be blown away because their employees don't use the solution at all. Ongoing training and communication between the collaboration manager and particular teams is a must to keep employees satisfied with the solution they use.

Is there more pain than gain?

On recognizing all these challenges, companies might feel that there are too many barriers to overcome to get a decent collaboration solution. So maybe it's reasonable to stay away from the collaboration race? Is that the case? Not really. If you take a look at Internet Trends 2018, you will see that there are multiple improvements that companies get as they adopt enterprise collaboration tools.
Typical advantages include reduced meeting time, quicker onboarding, less time required for support, more effective document management, and a substantial rise in teams' productivity. If your company wants to get all these advantages, be brave enough to face the possible collaboration challenges to get a great reward.

Author Bio

Sandra Lupanova is SharePoint and Office 365 Evangelist at Itransition, a software development and IT consulting company headquartered in Denver. Sandra focuses on SharePoint and Office 365 capabilities and the challenges that companies face while adopting these platforms, and shares practical tips on how to improve SharePoint and Office 365 deployments through her articles.


Amazon Web Services

Packt
20 Nov 2014
16 min read
In this article, by Prabhakaran Kuppusamy and Uchit Vyas, authors of AWS Development Essentials, you will learn about the different tools and methods available to perform the same operation with varying levels of complexity. Various options are available, depending on the user's level of experience. In this article, we will start with an overview of each service, learn about the various tools available for programmer interaction, and finally see the troubleshooting and best practices to be followed while using these services. AWS provides a handful of services in every area. In this article, we will cover the following topics:

Navigate through the AWS Management Console
Describe the security measures that AWS provides
AWS interaction through the SDK and IDE tools

(For more resources related to this topic, see here.)

Background of AWS and its needs

AWS is based on an idea presented by Chris Pinkham and Benjamin Black with a vision towards Amazon's retail computing infrastructure. The first Amazon offering was SQS, in the year 2004. Officially, AWS was launched and made available online in 2006, and within a year, 200,000 developers signed up for these services. Later, due to a natural disaster (the June 29, 2012 storm in North Virginia, which brought down most of the servers residing at that location) and technical events, AWS faced a lot of challenges. A similar event happened in December 2012, after which AWS has been providing services as stated. AWS learned from these events and made sure that the same kind of outage didn't occur even if the same event occurred again. AWS is an idea born in a single room, but it is now available to and used by almost all cloud developers and IT giants.

AWS is greatly loved by all kinds of technology admirers. Irrespective of the user's expertise, AWS has something for various types of users. For an expert programmer, AWS has SDKs for each service. Using these SDKs, the programmer can perform operations by entering commands in the command-line interface. However, an end user with limited knowledge of programming can still perform similar operations using the graphical user interface of the AWS Management Console, which is accessible through a web browser. If programmers need something between a low-level (SDK) and a high-level (Management Console) interface, they can go for integrated development environment (IDE) tools, for which AWS provides plugins and add-ons. One such commonly used IDE for which AWS has provided add-ons is the Eclipse IDE. For now, we will start with the AWS Management Console.

The AWS Management Console

The most popular method of accessing AWS is via the Management Console because of its simplicity of use and power. Another reason why end users prefer the Management Console is that it doesn't require any software to start with; having an Internet connection and a browser is sufficient. As the name suggests, the Management Console is a place where administrative and advanced operations can be performed on your AWS account details or AWS services. The Management Console mainly focuses on the following features:

One-click access to AWS's services
AWS account administration
AWS management using handheld devices
AWS infrastructure management across the globe

One-click access to the AWS services

To access the Management Console, all you need to do is first sign up with AWS. Once done, the Management Console will be available at https://console.aws.amazon.com/.
Once you have signed up, you will be directed to the following page:

Each and every icon on this page is an Amazon Web Service. Two or more services are grouped under a category. For example, in the Analytics category, you can see three services, namely Data Pipeline, Elastic MapReduce, and Kinesis. Starting with any of these services is very easy. Have a look at the description of the service at the bottom of the service icon. As soon as you click on a service icon, it will take you to the Getting started page of the corresponding service, where brief as well as detailed guidelines are available. In order to start with any of the services, only two things are required: the first is an AWS account, and the second is a supported browser. The Getting started section usually has a video that explains the specialty and use cases of the service you selected. Once you finish reading the Getting started section, you can optionally go through the documentation specific to the service to learn more about the syntax and usage of the service operations.

AWS account administration

Account administration is one of the most important things to take note of. To do this, click on your displayed name (in this case, Prabhakar) at the top of the page, and then click on the My Account option, as shown in the preceding screenshot. At the beginning of every month, you don't want AWS to deduct your entire salary by stating that you have used this many services costing this much money; hence, all of this management information is available in the Management Console. Using the Management Console, you can find the following information:

The monthly billing, both in brief and in a detailed manner (the cost split-up of each service), along with a provision to view VAT and tax exemption
Account details, such as the display name and contact information
A provision to close the AWS account

All of the preceding operations and much more are possible.

AWS management using handheld devices

Managing and accessing the AWS services is possible through (but not limited to) a PC. AWS provides applications for almost all of the major mobile platforms, such as Android, iOS, and so on. Using these applications, you can perform all the AWS operations on the move. You won't believe that having a 7-inch Android tablet with the AWS Console application installed from Google Play will enable you to ask for any Elastic Compute Cloud (EC2) instance from Amazon and control it (start, stop, and terminate) very easily. You can install an SSH client on the tablet and connect to the Linux terminal. However, if you wish to make use of a Windows instance from EC2, you might use the graphical user interface (GUI) more frequently than a command line. A few more sophisticated pieces of software and hardware might be needed; for example, you should have a VNC viewer or remote desktop connection software to get the GUI of the borrowed EC2 instance. As you are making use of the GUI in addition to the keyboard, you will need a pointing device, such as a mouse. As a result, you will almost get addicted to the concept of cloud computing going mobile.

AWS infrastructure management across the globe

At this point, you might be aware that you can get all of these AWS services from servers residing at any of the following locations. To control the services you use in different regions, you don't have to go anywhere else. You can control them right here in the same Management Console.
Using the same Management Console, just by clicking on N.Virginia and choosing the location (at the top of the Management Console), you can make the service available in that region, as shown in the following screenshot:

You can choose the server location at which you want the service (data and machine) to be made available based on the following two factors:

The first factor is the distance between the server's location and the client's location. For example, if you have deployed a web application for a client from North California at the Tokyo location, the latency will obviously be high while accessing the application. Therefore, choosing the optimum service location is the primary factor.
The second factor is the charge for the service in a specific location. AWS charges more for certain crowded servers. Just for illustration, assume that the servers for North California are used by many critical companies. Creating your servers in North California might then cost you twice as much as in other locations. Hence, you should always consider the tradeoff between location and cost and then decide on the server location. Whenever you click on any of the services, AWS will by default select the location that costs you the least money.
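Region selection works the same way outside the console: when you use an SDK, you simply name the region you want each client to talk to. A minimal boto3 sketch, assuming your credentials are already configured, is shown below.

```python
import boto3

# The same account can use services in any region; you just point the client at it.
ec2_virginia = boto3.client('ec2', region_name='us-east-1')    # N. Virginia
ec2_tokyo = boto3.client('ec2', region_name='ap-northeast-1')  # Tokyo

# Listing the Availability Zones of each region shows that the two clients really
# do talk to different locations.
for name, client in [('us-east-1', ec2_virginia), ('ap-northeast-1', ec2_tokyo)]:
    zones = client.describe_availability_zones()['AvailabilityZones']
    print(name, [z['ZoneName'] for z in zones])
```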
AWS security measures

Whenever you think of moving your data center to a public cloud, the first question that arises in your mind is about data security. In a public cloud, through virtualization technology, multiple users might be using the same hardware (server) on which your data is available. You will now learn in detail about how AWS ensures data security.

Instance isolation

Before learning about instance isolation, you must know how AWS EC2 provisions instances to the user. This service allows you to rent virtual machines (AWS calls them instances) with whatever configuration you ask for. Let's assume that you requested AWS to provision an Ubuntu instance with 2 GB of RAM and a 100 GB HDD. Within a minute, you will be given the instance's connection details (public DNS, private IP, and so on), and the instance starts running. Does this mean that AWS assembled 2 x 1 GB of RAM and a 100 GB HDD into a CPU cabinet, installed Ubuntu on it, and gave you access? The answer is no. The provisioned instance is not a single PC (or bare metal machine) with an OS installed on it. The instance is the outcome of a virtual machine provisioned by Amazon's private cloud. The following diagram shows how a virtual machine can be provisioned by a private cloud:

Let's examine the diagram from bottom to top. First, we will start with the underlying Hardware/Host. The hardware is the server, which usually has a very high specification. Here, assume that your hardware has a configuration of 99 GB of RAM, a 450 TB HDD, and a few other elements, such as the NIC, which you need not consider now. The next component is the Hypervisor. A hypervisor, or virtual machine monitor (VMM), is used to create and run virtual machines on the hardware. In private cloud terms, whichever machine runs a hypervisor is called the host machine. Three users can each request an instance with 33 GB of RAM and 150 TB of HDD space. These requests go to the hypervisor, which then starts creating those VMs. After creating the VMs, a notification about the connection parameters is sent to each user. In the preceding diagram, you can see the three virtual machines (VMs) created by the hypervisor. All three VMs are running different operating systems.

Even though all three virtual machines are used by different users, each user will feel that only he or she has access to the hardware; user 1 might not know that the same hardware is also being used by user 2, and so on. The process of creating a virtual version of a machine, storage, or network is called virtualization. The funny part is that none of the virtual machines knows that it is being virtualized (that is, all the VMs are created on the same host). After getting this information about your instances, some users may feel deceived, and some will even be disappointed and cry out loud: has my instance been created on a shared disc or resource? Even though the disc (or hardware) is shared, one instance (or owner of the instance) is isolated from the other instances on the same disc through a firewall. This concept is termed instance isolation. The following diagram demonstrates instance isolation in AWS:

The preceding diagram clearly demonstrates how EC2 provides instances to every user. Even though all the instances lie on the same disc, they are isolated by the hypervisor. The hypervisor has a firewall that does this isolation. So, the physical interface will not interact with the underlying hardware (the machine or disc where the instances are available) or the virtual interface directly. All these interactions go through the hypervisor's firewall. This way, AWS ensures that no user can directly access the disc, and no instance can directly interact with another instance, even if both instances are running on the same hardware. In addition to the firewall, during the creation of the EC2 instance, the user can specify the permitted and denied security groups of the instance. These two mechanisms provide instance isolation. In the preceding diagram, Customer 1, Customer 2, and so on are virtualized discs, since the customer instances have no access to the raw or actual disc devices. As an added security measure, the user can encrypt his or her disc so that other users cannot access the disc content (even if someone gets hold of the disc).

Isolated GovCloud

Similar to North California or Asia Pacific, GovCloud is also a location where you can get your AWS services. This location is specifically designed for government and agencies whose data is very confidential and valuable, and disclosing this data might result in disaster. By default, this location will not be available to the user. If you want access to this location, then you need to raise a compliance request at http://aws.amazon.com/compliance/contact/ and submit the FedRAMP Package Request Form, downloadable at http://cloud.cio.gov/document/fedramp-package-request-form. From these two URLs, you can understand how secure this cloud location really is.

CloudTrail

CloudTrail is an AWS service that tracks user activity and changes. Enabling CloudTrail will log all API request information into an S3 bucket that you have created solely for this purpose. CloudTrail also allows you to have an SNS topic notified as soon as a new logfile is created. CloudTrail, hand in hand with SNS, provides real-time user activity as messages to the user.
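Turning CloudTrail on is itself only a couple of API calls; below is a hedged boto3 sketch that creates a trail pointing at an S3 bucket and an SNS topic (both hypothetical names for resources you would have created beforehand, with the bucket policy CloudTrail requires) and then starts logging.

```python
import boto3

cloudtrail = boto3.client('cloudtrail', region_name='us-west-2')

# Create a trail that writes API activity logs to an existing S3 bucket and
# publishes a notification to an SNS topic each time a new logfile is delivered.
# Bucket and topic names are hypothetical placeholders.
cloudtrail.create_trail(
    Name='account-activity-trail',
    S3BucketName='my-cloudtrail-logs-bucket',
    SnsTopicName='cloudtrail-log-delivery',
    IsMultiRegionTrail=True,
)

# The trail does nothing until logging is explicitly started.
cloudtrail.start_logging(Name='account-activity-trail')
```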
Multi-Factor Authentication
Until now, to access AWS through a browser, you had to log in at http://aws.amazon.com and enter your username and password. However, enabling Multi-Factor Authentication (MFA) will add another layer of security and ask you to provide an authentication code generated by the device configured with this account. On the security credentials page at https://console.aws.amazon.com/iam/home?#security_credential, there is a provision to enable MFA. Clicking on Enable will display the following window: Selecting the first option, A virtual MFA device, will not cost you money, but it requires a smartphone and an authenticator app downloaded from its app store. After this, during every login, you need to look at your smartphone and enter the authentication token. More information is available at https://youtu.be/MWJtuthUs0w.
Access Keys (Access Key ID and Secret Access Key)
On the same security credentials page, next to MFA, these access keys are made available. AWS will not allow you to have more than two access keys at a time. However, you can delete and create access keys as often as needed, as shown in the following screenshot: The access key ID is used when accessing a service via the API and SDK. During this time, you must provide this ID; otherwise, you won't be able to perform any operation. To put it in other words, if someone else gets or knows this ID, they could pretend to be you through the SDK and API. In the preceding screenshot, the first key is inactive and the second key is active. The Create New Access Key button is disabled because I already have the maximum number of allowed access keys. As an added measure, I have masked my actual IDs. It is a very good practice to delete a key and create a new key every month using the Delete command link, and to toggle the active keys every week (by making them active and inactive) by clicking on the Make Active or Make Inactive command links. Never let anyone see these IDs. If you are ever in doubt, delete the ID and create a new one. Clicking on the Create New Access Key button (assuming that you have fewer than two IDs) will display the following window, asking you to download the new access key as a CSV file:
The CloudFront key pairs
The CloudFront key pairs are very similar to the access keys. Without these keys, you will not be able to perform certain operations on CloudFront, such as creating signed URLs. Unlike the access keys (which consist only of an access key ID and secret access key), here you will have a private key and a public key along with the access key ID, as shown in the following screenshot: If you lose these keys, you need to delete the key pair and create a new one. This, too, is an added security measure.
X.509 certificates
X.509 certificates are mandatory if you wish to make SOAP requests to any AWS service. Clicking on Create new certificate will display the following window, which works in much the same way as the access key creation dialog:
Account identifiers
There are two IDs that are used to identify ourselves when accessing a service via the API or SDK: the AWS account ID and the canonical user ID. These two IDs are unique. Just as with the preceding credentials, never share these IDs or let anyone see them. If someone has your access key or key pair, the best option is to generate a new one. But it is not possible to generate a new account ID or canonical user ID.
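The key-rotation practice described above (deactivating and replacing access keys regularly) can also be scripted. The following is a minimal sketch with boto3; the user name and old key ID are placeholders, and in practice you would store the new secret somewhere safe before deleting the old key.

```python
import boto3

iam = boto3.client("iam")
USER = "my-iam-user"  # placeholder

# Create a replacement key first; AWS allows at most two access keys per user.
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("New access key ID:", new_key["AccessKeyId"])  # store the SecretAccessKey securely

# Deactivate the old key, then delete it once nothing depends on it anymore.
old_key_id = "AKIA_OLD_KEY_ID_PLACEHOLDER"
iam.update_access_key(UserName=USER, AccessKeyId=old_key_id, Status="Inactive")
iam.delete_access_key(UserName=USER, AccessKeyId=old_key_id)
```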
Summary
In this article, you became familiar with the AWS Management Console, explored a few of the important security aspects of AWS and how AWS handles them, and looked at the tools available to programmers to make development work easier, including the commonly used SDKs and IDEs and the AWS plugin configuration for the Eclipse IDE.
Resources for Article:
Further resources on this subject:
Amazon DynamoDB - Modelling relationships, Error handling [article]
A New Way to Scale [article]
Deployment and Post Deployment [article]

Google's event-driven serverless platform, Cloud Functions, is now generally available

Vijin Boricha
26 Jul 2018
2 min read
Early this week, Google announced the general availability of its much-awaited service, Cloud Functions, at its Google Cloud Next '18 conference in San Francisco. Google finally managed to board the serverless bus it missed two years ago, allowing AWS and Azure to reach their current milestones. This move takes the current cloud platform war to a new level. Google's Cloud Functions now directly competes with Amazon's Lambda and Microsoft's Azure Functions. Of late, application development has changed massively, with developers now focusing on application logic instead of infrastructure management, thanks to serverless computing. Developers can now prioritize agility, application quality, and faster deployment with zero server management, auto-scaling traffic management, and integrated security.
Source: Google Cloud website
Google's event-driven serverless platform offers automatic scaling, running code in response to events, paying only while your code runs, and zero server management (a minimal sketch of such a function appears at the end of this piece). Cloud Functions can be used to build:
Serverless application backends: Developers can quickly build highly available, secure, and cost-effective applications, as Cloud Functions provides a connective layer of logic that helps integrate and extend GCP and third-party services. In other words, you can directly call your code from any web, mobile, or backend application, or trigger it from GCP services.
Real-time data processing: Cloud Functions can power a variety of real-time data processing systems, as it responds to events from GCP services such as Stackdriver Logging, Cloud Storage, and more. This helps developers execute their code in response to any change in data.
Intelligent applications: Developers can leverage Cloud Functions to build intelligent applications with Google Cloud AI integration. One can easily introduce pre-trained machine learning models into an application that can then analyze videos, classify images, convert speech to text, perform NLP (natural language processing), and more.
Developers can start making the most of Google Cloud Functions now, unless they are deploying functions written in Node.js 8 or Python, as these runtimes still remain in beta. In addition to Cloud Functions, Google also announced a preview of serverless containers, a refreshed way of running containers in a fully managed environment. You can read more about this release in the Google Cloud release notes.
Related Links:
A serverless online store on AWS could save you money. Build one
Learn Azure serverless computing for free – Download a free eBook from Microsoft
AWS SAM (AWS Serverless Application Model) is now open source!
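As referenced above, here is roughly what a minimal HTTP-triggered Cloud Function looks like in Python (one of the runtimes that was still in beta at the time). This is a hedged sketch rather than a production function; the deploy command in the comment is indicative only.

```python
# main.py - a minimal HTTP-triggered Cloud Function (Python runtime)
# Deployed with something like:
#   gcloud functions deploy hello_http --runtime python37 --trigger-http

def hello_http(request):
    """Responds to an HTTP request; `request` is a Flask Request object."""
    name = request.args.get("name", "world")
    return "Hello, {}!".format(name)
```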

Chaos Conf 2018 Recap: Chaos engineering hits maturity as community moves towards controlled experimentation

Richard Gall
12 Oct 2018
11 min read
Conferences can sometimes be confusing. Even at the most professional and well-planned conferences, you sometimes just take a minute and think, what's actually the point of this? Am I learning anything? Am I meant to be networking? Will anyone notice if I steal extra food for the journey home? Chaos Conf 2018 was different, however. It had a clear purpose: to take the first step in properly forging a chaos engineering community. After almost a decade somewhat hidden in the corners of particularly innovative teams at Netflix and Amazon, chaos engineering might feel that its time has come. As software infrastructure becomes more complex and less monolithic, and as business and consumer demands expect more of the software systems that have become integral to the very functioning of life, resiliency has never been more important or more challenging to achieve. But while it feels like the right time for chaos engineering, it hasn't quite established itself in the mainstream. This is something the conference host, Gremlin, a platform that offers chaos engineering as a service, is acutely aware of. On the one hand it's actively helping push chaos engineering into the hands of businesses, but on the other its growth and success, backed by millions in VC cash (and faith), depend upon chaos engineering becoming a mainstream discipline in the DevOps and SRE worlds. It's perhaps for this reason that the conference felt so important. It was, according to Gremlin, the first ever public chaos engineering conference. And while it was relatively small in the grand scheme of many of today's festival-esque conferences attended by thousands of delegates (Dreamforce, the Salesforce conference, was also running in San Francisco in the same week), the fact that the conference had quickly sold out all 350 of its tickets - with more names on the waiting list - indicates that this was an event that had been eagerly awaited. And with some big names from the industry - notably Adrian Cockcroft from AWS and Jessie Frazelle from Microsoft - Chaos Conf had the air of an event that had outgrown its insider status before it had even begun. The renovated cinema and bar in San Francisco's Mission District, complete with pinball machines upstairs, was the perfect container for a passionate community that had grown out of the clean corporate environs of Silicon Valley to embrace the chaotic mess that resembles modern software engineering.
Kolton Andrus sets out a vision for the future of Gremlin and chaos engineering
Chaos Conf was quick to deliver big news. The keynote speech, by Gremlin co-founder Kolton Andrus, launched Gremlin's brand new Application Level Fault Injection (ALFI) feature, which makes it possible to run chaos experiments at an application level. Andrus broke the news by building towards it with a story of the progression of chaos engineering. Starting with Chaos Monkey, the tool first developed by Netflix, and moving from infrastructure to network, he showed how, as chaos engineering has evolved, it requires and facilitates different levels of control and insight into how your software works. "As we're maturing, the host level failures and the network level failures are necessary to building a robust and resilient system, but not sufficient. We need more - we need a finer granularity," Andrus explained. This is where ALFI comes in. By allowing Gremlin users to inject failure at an application level, it gives them more control over the 'blast radius' of their chaos experiments.
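To make the idea concrete: ALFI itself is Gremlin's product, but the underlying technique of application-level fault injection with a bounded blast radius can be sketched in a few lines of Python. The decorator below is purely illustrative and is not Gremlin's API; it simply fails a configurable percentage of calls so that only a small slice of traffic is affected while an experiment runs.

```python
import random
import functools

def inject_failure(rate=0.01, exception=RuntimeError):
    """Fail roughly `rate` of calls to the wrapped function (the 'blast radius')."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if random.random() < rate:
                raise exception("injected failure for chaos experiment")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@inject_failure(rate=0.05)  # affect roughly 5% of lookups while the experiment runs
def lookup_recommendations(user_id):
    return ["fallback-item"]  # stand-in for a real downstream call
```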
The narrative Andrus set was clear, and it would ultimately inform the ethos of the day: chaos engineering isn't just about chaos, it's about controlled experimentation to ensure resiliency. Doing that requires a level of intelligence - technical and organizational - about how the various components of your software work, and how humans interact with them.
Adrian Cockcroft on the importance of historical context and domain knowledge
Adrian Cockcroft (@adrianco), VP at AWS, followed Andrus' talk. In it he took the opportunity to set the broader context of chaos engineering, highlighting how tackling system failures is often a question of culture - how we approach system failure and think about our software. "Developers love to learn things from first principles," he said. "But some historical context and domain knowledge can help illuminate the path and obstacles." If this sounds like Cockcroft was about to stray into theoretical territory, he didn't: he offered a taxonomy of failure that provides a practical framework for thinking through potential failure at every level. He also touched on how he sees the future of resiliency evolving, focusing on:
Observability of systems
Epidemic failure modes
Automation and continuous chaos
The crucial point Cockcroft makes is that cloud is the big driver for chaos engineering. "As datacenters migrate to the cloud, fragile and manual disaster recovery will be replaced by chaos engineering," read one of his slides. But more than that, the cloud also paves the way for the future of the discipline, one where 'chaos' is simply an automated part of the test and deployment pipeline.
Selling chaos engineering to your boss
Kriss Rochefolle, DevOps engineer and author of one of the best-selling DevOps books in French, delivered a short talk on how engineers can sell chaos to their boss. He challenged the assumption that a rational proposal, informed by ROI, is the best way to sell chaos engineering. He suggested instead that engineers need to appeal to emotions, presenting chaos engineering as a method for tackling and minimizing the fear of (inevitable) failure. Follow Kriss on Twitter: @crochefolle
Walmart and chaos engineering
Vilas Veraraghavan, Walmart's Director of Engineering, was keen to clarify that Walmart doesn't practice chaos. Rather, it practices resiliency - chaos engineering is simply a method the organization uses to achieve that. It was particularly useful to note the entire process that Vilas' team adopts when it comes to resiliency, which has largely developed out of Vilas' own work building his team from scratch. You can learn more about how Walmart is using chaos engineering for software resiliency in this post.
Twitter's Ronnie Chen on diving and planning for failure
Ronnie Chen (@rondoftw) is an engineering manager at Twitter. But she didn't talk about Twitter. In fact, she didn't even talk about engineering. Instead she spoke about her experience as a technical diver. By talking about her experiences, Ronnie was able to make a number of vital points about how to manage and tackle failure as a team. With mortality rates so high in diving, it's a good example of the relationship between complexity and risk. Chen made the point that things don't fail because of a single catalyst. Instead, failures - particularly fatal ones - happen because of a 'failure cascade'.
Chen never makes the link explicit, but the comparison is clear - the ultimate outcome (that is, success or failure) is impacted by a whole range of situational and behavioral factors that we can't afford to ignore. Chen also made the point that, in diving, inexperienced people should be at the front of an expedition. "If your inexperienced people are leading, they're learning and growing, and being able to operate with a safety net... when you do this, all kinds of hidden dependencies reveal themselves... every undocumented assumption, every piece of ancient team lore that you didn't even know you were relying on, comes to light."
Charity Majors on the importance of observability
Charity Majors (@mipsytipsy), CEO of Honeycomb, talked in detail about the key differences between monitoring and observability. As with other talks, context was important: a world where architectural complexity has grown rapidly in the space of a decade. Majors made the point that this increase in complexity has taken us from having known unknowns in our architectures to many more unknown unknowns in a distributed system. This means that monitoring is dead - it simply isn't sophisticated enough to deal with the complexities and dependencies within a distributed system. Observability, meanwhile, allows you to understand "what's happening in your systems just by observing it from the outside." Put simply, it lets you understand how your software is functioning from the outside - almost turning it inside out. Majors then linked the concept of observability to the broader philosophy of chaos engineering - echoing some of the points raised by Adrian Cockcroft in his keynote. But this was her key takeaway: "Software engineers spend too much time looking at code in elaborately falsified environments, and not enough time observing it in the real world." This leads to one conclusion - the importance of testing in production. "Accept no substitute."
Tammy Butow and Ana Medina on making an impact
Tammy Butow (@tammybutow) and Ana Medina (@Ana_M_Medina) from Gremlin took us through how to put chaos engineering into practice - from integrating it into your organizational culture to some practical tests you can run. One of the best examples of putting chaos into practice is Gremlin's concept of 'Failure Fridays', in which chaos testing becomes a valuable step in the product development process - a way to dogfood a product and test how a customer experiences it. Another way Tammy and Ana suggested chaos engineering can be used is to test new versions of technologies before you upgrade in production. To end their talk, they demoed a chaos battle between EKS (Kubernetes on AWS) and AKS (Kubernetes on Azure), running an app container attack, a packet loss attack, and a region failover attack.
Jessie Frazelle on how containers can empower experimentation
Jessie Frazelle (@jessfraz) didn't actually talk that much about chaos engineering. However, like Ronnie Chen's talk, chaos engineering seeped through what she said about bugs and containers. Bugs, for Frazelle, are a way of exploring how things work, and how different parts of a software infrastructure interact with each other: "Bugs are like my favorite thing... some people really hate when they get one of those bugs that turns out to be a rabbit hole and you're kind of debugging it until the end of time... while debugging those bugs I hate them but afterwards, I'm like, that was crazy!"
This was essentially an endorsement of the core concept of chaos engineering - injecting bugs into your software to understand how it reacts. Jessie then went on to talk about containers, joking that they're NOT REAL. This is because they're made up of numerous different component parts, like cgroups, namespaces, and LSMs. She contrasted containers with virtual machines, zones, and jails, which are 'first class concepts' - in other words, real things (Jessie wrote about this in detail last year in this blog post). In practice, what this means is that whereas containers are like Lego pieces, VMs, zones, and jails are like a pre-assembled Lego set that you don't need to play around with in the same way. From this perspective, it's easy to see how containers are relevant to chaos engineering - they empower a level of experimentation that you simply don't have with other virtualization technologies. "The box says to build the Death Star. But you can build whatever you want."
The chaos ends...
Chaos Conf was undoubtedly a huge success, and a lot of credit has to go to Gremlin for organizing the conference. It's clear that the team care a lot about the chaos engineering community and want it to expand in a way that transcends the success of the Gremlin platform. While chaos engineering might not feel relevant to a lot of people at the moment, it's only a matter of time before its impact is felt. That doesn't mean that everyone will suddenly become a chaos engineer by July 2019, but the cultural ripples will likely be felt across the software engineering landscape. Without Chaos Conf, it would be difficult to see chaos engineering growing as a discipline or set of practices. By sharing ideas and learning how other people work, a more coherent picture of chaos engineering started to emerge, one that can quickly make an impact in ways people wouldn't have expected six months ago. You can watch videos of all the talks from Chaos Conf 2018 on YouTube.

Cloudflare terminates services to 8chan following yet another set of mass shootings in the US. Tech awakening or liability avoidance?

Sugandha Lahoti
06 Aug 2019
9 min read
Update: Jim Watkins, the owner of 8chan, has spoken out against the ongoing backlash in a defensive video statement uploaded to YouTube on 6 August. "My company takes a firm stand in helping law enforcement and within minutes of these two tragedies, we were working with FBI agents to find out what information we could to help in their investigations. There are about 1 million users of 8chan. 8chan is an empty piece of paper for writing on. It is disturbing to me that it can be so easily shut down. Over the weekend the domain name service for 8chan was abruptly terminated by the provider Cloudflare," he states in the video. He adds, "First of all the El Paso shooter posted on Instagram, not 8chan. Later someone uploaded a manifesto; however, that manifesto was not uploaded by the Walmart shooter. It is unfortunate that this place of free speech has temporarily been removed. We are working to restore service. It is clearly a political move to remove 8chan from CloudFlare; it has dispersed a peacefully assembled group of people." Watkins went on to call Cloudflare's decision 'cowardly'. He said, "Contrary to the unfounded claim by Mr. Prince of CloudFlare, 8chan is a lawful community abiding by the laws of the United States and enforced in the Ninth Circuit Court. His accusation has caused me tremendous damage. In the meantime, I wish his company the best and hold no animosity towards him or his cowardly and not thought-out actions against 8chan."
Saturday witnessed two horrific mass shooting tragedies: one in which a gunman shot at least 20 people at a sprawling Walmart shopping complex in El Paso, Texas; the other in Dayton, Ohio, at the entrance of Ned Peppers Bar, where ten people were killed, including the perpetrator, and at least 27 others were injured. The gunman in the El Paso shooting has been identified as Patrick Crusius, according to CNN sources. He appears to have been inspired by the online forum known as 8chan. 8chan is an online message board which is home to online extremists who share racist and anti-Semitic conspiracy theories. According to police officials, a four-page document was posted to 8chan 20 minutes before the shootings that they believe was written by Crusius. The post said, "I'm probably going to die today." His post blamed white nationalists and immigrants for taking away jobs and spewed racist hatred towards immigrants and Hispanics. The El Paso post is not an isolated incident; 8chan has been filled with unmoderated violent and extremist content over time. Nearly the same thing happened on 8chan before the terror attack in Christchurch, New Zealand. In his post, the El Paso shooter referenced the Christchurch incident, saying he was inspired by the Christchurch content on 8chan which glorified the previous massacre. The suspected killer in the synagogue shooting in Poway, California also posted a hate-filled "open letter" on 8chan. In March this year, Australian telecom company Telstra denied millions of Australians access to the websites 4chan, 8chan, Zero Hedge, and LiveLeak as a reaction to the Christchurch mosque shootings.
Cloudflare first defends 8chan citing 'moral obligations' but later cuts all ties
Following this disclosure, Cloudflare, which provides internet infrastructure services to 8chan, continued to defend hosting 8chan, calling it its 'moral obligation' to provide 8chan its services.
Keeping 8chan within its network was a "moral obligation", said Cloudflare, adding: "We, as well as all tech companies, have an obligation to think about how we solve real problems of real human suffering and death. What happened in El Paso today is abhorrent in every possible way, and it's ugly, and I hate that there's any association between us and that … For us, the question is which is the worse evil? Is the worse evil that we kick the can down the road and don't take responsibility? Or do we get on the phone with people like you and say we need to own up to the fact that the internet is home to many amazing things and many terrible things and we have an absolute moral obligation to deal with that."
https://twitter.com/slpng_giants/status/1158214314198745088
https://twitter.com/iocat/status/1158218861658791937
Cloudflare has been under the spotlight over the past few years for continuing to work with websites that foster hate. Prior to 8chan, in 2017, Cloudflare had to discontinue services to the neo-Nazi blog The Daily Stormer, after the terror in Charlottesville. However, the Daily Stormer continues to run today, having moved to a different infrastructure service, with allegedly more readers than ever. After an intense public and media backlash over the weekend, Cloudflare announced that it would completely stop providing support for 8chan. Cloudflare is also readying for an initial public offering in September, which may have been the reason why it cut ties with 8chan. In a blog post today, the company explained the decision to cut off 8chan: "We just sent notice that we are terminating 8chan as a customer effective at midnight tonight Pacific Time. The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths." Cloudflare has also cut off 8chan's access to its DDoS protection service. Although this will only have a short-term impact - 8chan can always come up with another cloud partner and resume operations - Cloudflare acknowledges it as well: "While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online. It does nothing to address why mass shootings occur. It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we've solved our own problem, but we haven't solved the Internet's." The company added, "We feel incredibly uncomfortable about playing the role of content arbiter and do not plan to exercise it often," adding that this is not "due to some conception of the United States' First Amendment," since Cloudflare is a private company (and most of its customers, and more than half of its revenue, are outside the United States). Instead, Cloudflare "will continue to engage with lawmakers around the world as they set the boundaries of what is acceptable in those countries through due process of law. And we will comply with those boundaries when and where they are set."
Founder of 8chan wants the site to be shut off
8chan founder Fredrick Brennan also appreciated Cloudflare's decision to block the site. After the gruesome El Paso shootings, he told the Washington Post that the site's owners should "do the world a favor and shut it off." However, he told BuzzFeed News that shutting down 8chan wouldn't entirely stop the extremism we're now seeing, but it would make it harder for its adherents to organize.
https://twitter.com/HW_BEAT_THAT/status/1158194175755485191
In a March interview with The Wall Street Journal, Brennan expressed his regret over his role in the site's creation and warned that the violent culture that had taken root on 8chan's boards could lead to more mass shootings. Brennan founded the site in 2011 and announced his departure from the company in July 2016. 8chan is owned by Jim Watkins and run by his son, Ron. He posted on Twitter that 8chan will be moving to another service as soon as possible, and has resisted calls to moderate or shut down the site. On Sunday, a banner at the top of 8chan's home page read, "Welcome to 8chan, the Darkest Reaches of the Internet."
https://twitter.com/CodeMonkeyZ/status/1158202303096094720
Cloudflare acted too late, too little
Cloudflare's decision to simply block 8chan was not seen as an adequate response by some, who say Cloudflare should have acted earlier. 8chan was known to have enabled child pornography in 2015 and, as a result, was removed from Google Search. Coupled with the Christchurch mosque and Poway synagogue shootings earlier in the year, there was increased pressure on those providing 8chan's internet and financial service infrastructure to terminate their support.
https://twitter.com/BinaryVixen899/status/1158216197705359360
Laurie Voss, the cofounder of npmjs, called out Cloudflare, and subsequently other content sites (Facebook, Twitter), for shirking responsibility under the guise of being infrastructure companies that therefore cannot enforce content standards.
https://twitter.com/seldo/status/1158204950595420160
https://twitter.com/seldo/status/1158206331662323712
"Facebook, Twitter, Cloudflare, and others pretend that they can't. They can. They just don't want to."
https://twitter.com/seldo/status/1158206867438522374
"I am super, super tired of companies whose profits rely on providing maximum communication with minimum moderation pretending this is some immutable law and not just the business model they picked," he tweeted. Others also agreed that Cloudflare's statement eschews responsibility.
https://twitter.com/beccalew/status/1158196518983045121
https://twitter.com/slpng_giants/status/1158214314198745088
Voxility, 8chan's hardware provider, also bans the site
Web services company Voxility has also banned 8chan and its new host Epik, which had been leasing web space from it. Epik's website remains accessible, but 8chan now returns an error message. "As soon as we were notified of the content that Epik was hosting, we made the decision to totally ban them," Voxility business development VP Maria Sirbu told The Verge. Sirbu said it was unlikely that Voxility would work with Epik again. "This is the second situation we've had with the reseller and this is not tolerable," she said.
https://twitter.com/alexstamos/status/1158392795687575554
Does de-platforming even work?
De-platforming - banning the people and forums that spread extremist content - is not a complete solution, since those communities eventually migrate to other platforms and are still able to circulate their ideology. Closing 8chan does not solve the bigger problem of controlling racism and extremism; closing one 8chan will sprout another 20chan. "8chan is no longer a refuge for extremist hate — it is a window opening onto a much broader landscape of racism, radicalization, and terrorism. Shutting down the site is unlikely to eradicate this new extremist culture because 8chan is anywhere. Pull the plug, it will appear somewhere else, in whatever locale will host it.
Because there's nothing particularly special about 8chan, there are no content algorithms, hosting technology immaterial. The only thing radicalizing 8chan users are other 8chan users," Ryan Broderick from BuzzFeed wrote. A group of users told BuzzFeed that it's now common for large 4chan threads to migrate over into Discord servers before the 404. After Cloudflare, Amazon is beginning to face public scrutiny, as 8chan's operator Jim Watkins sells audiobooks on Amazon.com and Audible.
https://twitter.com/slpng_giants/status/1158213239697747968
Facebook will ban white nationalism, and separatism content in addition to white supremacy content.
8 tech companies and 18 governments sign the Christchurch Call to curb online extremism; the US backs off.
How social media enabled and amplified the Christchurch terrorist attack

Modern Cloud Native architectures: Microservices, Containers, and Serverless - Part 1

Guest Contributor
13 Aug 2018
9 min read
This whitepaper is written by Mina Andrawos, an experienced engineer who has developed deep experience in the Go language and modern software architectures. He regularly writes articles and tutorials about the Go language and also shares open source projects. Mina Andrawos authored the book Cloud Native programming with Golang, which provides practical techniques, code examples, and architectural patterns required to build cloud native microservices in the Go language. He is also the author of the Mastering Go Programming and Modern Golang Programming video courses. This paper sheds some light on, and provides practical exposure to, some key topics in the modern software industry, namely cloud native applications. This includes microservices, containers, and serverless applications. The paper will cover the practical advantages and disadvantages of the technologies covered.
Microservices
The microservices architecture has gained a reputation as a powerful approach to architecting modern software applications. So what are microservices? Microservices can be described simply as the idea of separating the functionality required from a software application into multiple independent small software services, or "microservices." Each microservice is responsible for an individual, focused task. In order for microservices to collaborate to form a large scalable application, they communicate and exchange data. Microservices were born out of the need to tame the complexity and inflexibility of "monolithic" applications. A monolithic application is a type of application where all the required functionality is coded together into the same service. For example, here is a diagram representing a monolithic events booking application (for concerts, shows, and so on) that takes care of the booking, payment processing, and event reservation: The application can be used by a customer to book a concert or a show. A user interface will be needed. Furthermore, we will also need a search functionality to look for events, a bookings handler to process the user booking and then save it, and an events handler to help find the event, ensure it has seats available, and then link it to the booking. In a production-level application, more tasks will be needed, like payment processing for example, but for now let's focus on the four tasks outlined in the above figure. This monolithic application will work well with a small to medium load. It will run on a single server, connect to a single database, and will probably be written in the same programming language throughout. Now, what will happen if the business grows exponentially and hundreds of thousands or millions of users need to be handled and processed? Initially, the short-term solution would be to ensure that the server where the application runs has powerful hardware specifications to withstand higher loads, and if not, then to add more memory, storage, and processing power to the server. This is called vertical scaling: the act of increasing the power of the hardware, such as RAM and hard drive capacity, to run heavy applications. However, this is typically not sustainable in the long run as the load on the application continues to grow. Another challenge with monolithic applications is the inflexibility caused by being limited to only one or two programming languages. This inflexibility can affect the overall quality and efficiency of the application.
For example, Node.js is a popular JavaScript runtime for building web applications, whereas R is popular for data science applications. A monolithic application will make it difficult to utilize both technologies, whereas in a microservices application, we can simply build a data science service written in R and a web service written in Node.js. The microservices version of the events application will take the below form: This application will be capable of scaling across multiple servers, a practice known as horizontal scaling. Each service can be deployed on a different server with dedicated resources, or in separate containers (more on that later). The different services can be written in different programming languages, enabling greater flexibility, and different dedicated teams can focus on different services, achieving more overall quality for the application. Another notable advantage of using microservices is the ease of continuous delivery, which is the ability to deploy software often and at any time. The reason why microservices make continuous delivery easier is that a new feature deployed to one microservice is less likely to affect other microservices, compared to monolithic applications.
Issues with Microservices
One notable drawback of relying heavily on microservices is the fact that they can become too complicated to manage in the long run as they grow in number and scope. There are approaches to mitigate this, such as utilizing monitoring tools like Prometheus to detect problems, using container technologies such as Docker to avoid polluting the host environments, and avoiding over-designing the services. However, these approaches take effort and time.
Cloud native applications
Microservices architectures are a natural fit for cloud native applications. A cloud native application is simply defined as an application built from the ground up for cloud computing architectures. This means that our application is cloud native if we design it as if it is expected to be deployed on a distributed, scalable infrastructure. For example, building an application with a redundant microservices architecture (we'll see an example shortly) makes the application cloud native, since this architecture allows our application to be deployed in a distributed manner that makes it scalable and almost always available. A cloud native application does not always need to be deployed to a public cloud like AWS; we can deploy it to our own distributed, cloud-like infrastructure instead, if we have one. In fact, what makes an application fully cloud native goes beyond just using microservices. Your application should employ continuous delivery, which is your ability to continuously deliver updates to your production applications without disruptions. Your application should also make use of services like message queues and technologies like containers and serverless (containers and serverless are important topics for modern software architectures, so we'll be discussing them in the next few sections). Cloud native applications assume access to numerous server nodes, access to pre-deployed software services like message queues or load balancers, and ease of integration with continuous delivery services, among other things. If you deploy your cloud native application to a commercial cloud like AWS or Azure, your application gets the option to utilize cloud-only software services. A rough sketch of what one of these small, focused services might look like follows.
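As a rough illustration of such a focused service, here is a minimal sketch of the events service from the booking example, written in Python with Flask. The in-memory dictionary stands in for a real datastore, and the endpoints are assumptions made purely for the sake of the example, not part of the whitepaper's reference design.

```python
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for a real events database.
EVENTS = {
    "1": {"name": "Rock Concert", "seats_available": 120},
    "2": {"name": "Comedy Show", "seats_available": 0},
}

@app.route("/events/<event_id>", methods=["GET"])
def get_event(event_id):
    event = EVENTS.get(event_id)
    if event is None:
        abort(404)
    return jsonify(event)

@app.route("/health", methods=["GET"])
def health():
    # Lets a load balancer or orchestrator check this instance independently.
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```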
To give an example of such cloud-only services, DynamoDB is a powerful database engine that can only be used on Amazon Web Services for production applications. Another example is the Cosmos DB (formerly DocumentDB) database in Azure. There are also cloud-only message queues, such as Amazon Simple Queue Service (SQS), which can be used to allow communication between microservices in the Amazon Web Services cloud. As mentioned earlier, cloud native microservices should be designed to allow redundancy between services. If we take the events booking application as an example, the application will look like this: Multiple server nodes would be allocated per microservice, allowing a redundant microservices architecture to be deployed. If the primary node or service fails for any reason, the secondary can take over, ensuring lasting reliability and availability for cloud native applications. This availability is vital for fault-intolerant applications such as e-commerce platforms, where downtime translates into large amounts of lost revenue. Cloud native applications provide great value for developers, enterprises, and startups. A notable tool worth mentioning in the world of microservices and cloud computing is Prometheus. Prometheus is an open source system monitoring and alerting tool that can be used to monitor complex microservices architectures and alert when an action needs to be taken. Prometheus was originally created by SoundCloud to monitor their systems, but then grew to become an independent project. The project is now a part of the Cloud Native Computing Foundation, which is tasked with building a sustainable ecosystem for cloud native applications.
Cloud native limitations
For cloud native applications, you will face some challenges if the need arises to migrate some or all of the application. That is due to multiple reasons, depending on where your application is deployed. For example, if your cloud native application is deployed on a public cloud like AWS, cloud native APIs are not portable across cloud platforms. So, a DynamoDB database API utilized in an application will only work on AWS but not on Azure, since DynamoDB belongs exclusively to AWS. The API will also never work in a local environment, because DynamoDB can only be utilized in AWS in production. Another reason is that some assumptions are made when cloud native applications are built, like the fact that there will be a virtually unlimited number of server nodes to utilize when needed, and that a new server node can be made available very quickly. These assumptions are sometimes hard to guarantee in a local data center environment, where real servers, networking hardware, and wiring need to be purchased. This brings us to the end of Part 1 of this whitepaper. Check out Part 2 tomorrow to learn about containers and serverless applications, along with their practical advantages and limitations.
About the author: Mina Andrawos
Mina Andrawos is an experienced engineer who has developed deep experience in Go from using it personally and professionally. He regularly authors articles and tutorials about the language and also shares Go's open source projects. He has written numerous Go applications with varying degrees of complexity. Other than Go, he has skills in Java, C#, Python, and C++. He has worked with various databases and software architectures. He is also skilled in the agile methodology for software development. Besides software development, he has working experience of scrum mastering, sales engineering, and software product management.
Building microservices from a monolith Java EE app [Tutorial]
6 Ways to blow up your Microservices!
Have Microservices killed the monolithic architecture? Maybe not!

Google Cloud Next’19 day 1: open-source partnerships, hybrid-cloud platform, Cloud Run, and more

Bhagyashree R
10 Apr 2019
6 min read
Google Cloud Next’ 19 kick started yesterday in San Francisco. On day 1 of the event, Google showcased its new tools for application developers, its partnership with open-source companies, and outlined its strategy to make a mark in the Cloud industry, which is currently dominated by Amazon and Microsoft. Here’s the rundown of the announcements Google made yesterday: Google Cloud’s new CEO is set to expand its sales team Cloud Next’19 is the first event where the newly-appointed Google Cloud CEO, Thomas Kurian took on stage to share his plans for Google Cloud. He plans to make Google Cloud "the best strategic partner" for organizations modernizing their IT infrastructure. To step up its game in the Cloud industry, Google needs to put more focus on understanding its customers, providing them better support, and making it easier for them to conduct business. This is why Kurian is planning to expand the sales team and add more technical specialists. Kurian, who joined Google after working at Oracle for 22 years, also shared that the team is rolling out new contracts to make contracting easier and also promised simplified pricing. Anthos, Google’s hybrid cloud platform is coming to AWS and Azure During the opening keynote, Sundar Pichai, Google’s CEO confirmed the rebranding of Cloud Services Platform, a platform for building and managing hybrid applications, as it enters general availability. This rebranded version named Anthos provides customers a single managed service, which is not limited to just Google-based environments and comes with extended support for Amazon Web Services (AWS) and Azure. With this extended support, Google aims to provide organizations that have multi-cloud sourcing strategy a more consistent experience across all three clouds. Urs Hölzle, Google’s Senior Vice President for Technical Infrastructure, shared in a press conference, “I can’t really stress how big a change that is in the industry, because this is really the stack for the next 20 years, meaning that it’s not really about the three different clouds that are all randomly different in small ways. This is the way that makes these three clouds — and actually on-premise environments, too — look the same.” Along with this extended support, another plus point of Anthos is that it is hardware agnostic, which means customers can run the service on top of their current hardware without having to immediately invest in new servers. It is a subscription-based service, with prices starting at $10,000/month per 100 vCPU block. Google also announced the first beta release of Anthos Migrate, a service that auto-migrates VMs from on-premises, or other clouds, directly into containers in Google Kubernetes Environment (GKE) with minimum effort. Explaining the advantage of this tool, Google wrote in a blog post, “Through this transformation, your IT team is free from managing infrastructure tasks like VM maintenance and OS patching, so it can focus on managing and developing applications.” Google Cloud partners with top open-source projects challenging AWS Google has partnered with several top open-source data management and analytics companies including Confluent, DataStax, Elastic, InfluxData, MongoDB, Neo4j and Redis Labs. The services and products provided by these companies will be deeply integrated into the Google Cloud Platform. With this integration, Google aims to provide customers a seamless experience by allowing them to use these open source technologies at a single place, Google Cloud. 
These will be managed services, and the invoicing and billing of these services will be handled by Google Cloud. Customer support will also be Google's responsibility, so that users can manage and log tickets across all of these services via a single platform. Google's approach of partnering with these open source companies is quite different from that of other cloud providers. Over the past few years, we have come across cases where cloud providers sell open-source projects as a service, often without giving any credit to the original project. This led to companies revisiting their open-source licenses to stop such behavior. For instance, Redis adopted the Commons Clause license for its Redis Modules and later dropped the revised license in February. Similarly, MongoDB, Neo4j, and Confluent also embraced similar strategies. Kurian said, "In order to sustain the company behind the open-source technology, they need a monetization vehicle. If the cloud provider attacks them and takes that away, then they are not viable and it deteriorates the open-source community."
Cloud Run for running stateless containers serverlessly
Google has combined serverless computing and containerization into a single product called Cloud Run. Yesterday, Oren Teich, Director of Product Management for Serverless, announced the beta release of Cloud Run and explained how it works. Cloud Run is a managed compute platform for running stateless containers that can be invoked via HTTP requests (a minimal sketch of the kind of container Cloud Run expects appears at the end of this piece). It is built on top of Knative, a Kubernetes-based platform for building, deploying, and managing serverless workloads. You get two options to choose from: you can either run your containers fully managed with Cloud Run, or in your own Google Kubernetes Engine cluster with Cloud Run on GKE. Announcing the release of Cloud Run, Teich wrote in a blog post, "Cloud Run is introducing a brand new product that takes Docker containers and instantly gives you a URL. This is completely unique in the industry. We're taking care of everything from the top end of SSL provisioning and routing, all the way down to actually running the container for you. You pay only by the hundred milliseconds of what you need to use, and it's end-to-end managed."
Google releases closed source VS Code plugin
Google announced the beta release of "Cloud Code for VS Code" as a closed source extension. It extends VS Code to bring the convenience of an IDE to developing cloud-native Kubernetes applications, and it aims to speed up build, deployment, and debugging cycles. You can deploy your applications to either local clusters or across multiple cloud providers. Under the hood, Cloud Code for VS Code uses popular command-line tools such as skaffold and kubectl to provide users continuous feedback as they build their projects. It also supports deployment profiles that let you define different environments to make testing and debugging easier on your workstation or in the cloud.
Cloud SQL now supports PostgreSQL 11.1 Beta
Cloud SQL is Google's fully managed database service that makes it easier to set up, maintain, manage, and administer relational databases on GCP. It now comes with support for PostgreSQL 11.1 Beta. Along with that, it supports the following relational databases:
MySQL 5.5, 5.6, and 5.7
PostgreSQL 9.6
Related reading:
Google's Cloud Healthcare API is now available in beta
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google Podcasts is transcribing full podcast episodes for improving search results
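As referenced in the Cloud Run section above, the service simply expects a container that serves HTTP on the port it is given. The sketch below is an assumption-laden minimal example rather than Google's reference implementation: a small Flask app that reads the PORT environment variable, which is how Cloud Run tells the container where to listen. The container would then be built with an ordinary Dockerfile and deployed with the gcloud CLI, as described in the Cloud Run documentation.

```python
# app.py - a minimal HTTP service suitable for packaging into a container for Cloud Run
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a stateless container!\n"

if __name__ == "__main__":
    # Cloud Run injects the port to listen on via the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```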

KubeCon + CloudNativeCon North America 2019 Highlights: Helm 3.0 release, CodeReady Workspaces 2.0, and more!

Savia Lobo
26 Nov 2019
6 min read
Update: On November 26, the OpenFaaS community released a post with a few of its highlights from KubeCon, San Diego. The post also includes highlights from OpenFaaS Cloud, Flux from Weaveworks, Okteto, Dive from Buoyant, and k3s going GA. KubeCon + CloudNativeCon 2019, held in San Diego, North America from 18-21 November, brought together over 12,000 attendees to discuss and advance their knowledge of containers, Kubernetes, and cloud native computing. The conference was home to many major announcements, including the release of Helm 3.0, Red Hat's CodeReady Workspaces 2.0, the GA of Managed Istio on IBM Cloud Kubernetes Service, and many more.
Major highlights at KubeCon + CloudNativeCon 2019
General availability of Managed Istio on IBM Cloud Kubernetes Service
IBM Cloud announced that Managed Istio on its Cloud Kubernetes Service is generally available. This service provides a seamless installation of Istio, automatic updates, lifecycle management of Istio control plane components, and integration with platform logging and monitoring tools. With Managed Istio, a user's service mesh is tuned for optimal performance in IBM Cloud Kubernetes Service. Istio is a service mesh that is able to provide its features without developers having to make any modifications to their applications. The Istio installation is tuned to perform optimally on IBM Cloud Kubernetes Service and is pre-configured to work out of the box with IBM Log Analysis with LogDNA and IBM Cloud Monitoring with Sysdig.
Red Hat announces CodeReady Workspaces 2.0
CodeReady Workspaces 2.0 helps developers build applications and services in an environment similar to production, i.e., all apps run on Red Hat OpenShift. A few new services and tools in CodeReady Workspaces 2.0 include:
Air-gapped installs: These enable CodeReady Workspaces to be downloaded, scanned, and moved into more secure environments when access to the public internet is limited or unavailable. It doesn't "call back" to public internet services.
An updated user interface: This brings an improved desktop-like experience to developers.
Support for VSCode extensions: This gives developers access to thousands of IDE extensions.
Devfile: A sharable workspace configuration that specifies everything a developer needs to work, including repositories, runtimes, build tools, and IDE plugins, and is stored and versioned with the code in Git.
Production-consistent containers for developers: This clones the sources where needed and adds development tools (such as debuggers, language servers, unit test tools, and build tools) as sidecar containers, so that the running application container mirrors production.
Brad Micklea, vice president of Developer Tools, Developer Programs, and Advocacy, Red Hat, said, "Red Hat is working to make developing in cloud native environments easier, offering the features developers need without requiring deep container knowledge. Red Hat CodeReady Workspaces 2 is well-suited for security-sensitive environments and those organizations that work with consultants and offshore development teams." To know more about CodeReady Workspaces 2.0, read the press release on the Red Hat official blog.
Helm 3.0 released
Built upon the success of Helm 2, the internal implementation of Helm 3 has changed considerably. The most apparent change in Helm 3.0 is the removal of Tiller. A rich set of new features has been added as a result of the community's input and requirements.
A few of those features include:
Improved upgrade strategy: Helm 3 uses three-way strategic merge patches
Secrets as the default storage driver
Go import path changes
Validating chart values with JSONSchema
Some features have been deprecated or refactored in ways that make them incompatible with Helm 2. Some new experimental features have also been introduced, including OCI support. The Helm Go SDK has also been refactored for general use, with the goal of sharing and re-using code open sourced with the broader Go community. To know more about Helm 3.0 in detail, read the official blog post.
AWS, Intuit, and Weaveworks collaborate on Argo Flux
Recently, Weaveworks announced a partnership with Intuit to create Argo Flux, a major open-source project to drive GitOps application delivery for Kubernetes via an industry-wide community. Argo Flux combines the Argo CD project led by Intuit with the Flux CD project driven by Weaveworks, two well-known open source tools with strong community support. At KubeCon, AWS announced that it is integrating the GitOps tooling based on Argo Flux in Elastic Kubernetes Service and Flagger for AWS App Mesh. The collaboration resulted in a new project called GitOps Engine to simplify application deployment in Kubernetes. The GitOps Engine will be responsible for the following functionality:
Access to Git repositories
Kubernetes resource cache
Manifest generation
Resource reconciliation
Sync planning
To know more about this collaboration in detail, read the GitOps Engine page on GitHub.
Grafana Labs announces general availability of Loki 1.0
Grafana Labs, an open source analytics and monitoring solution provider, announced that Loki version 1.0 is generally available for production use. Loki is an open source logging platform that provides developers with an easy-to-use, highly efficient, and cost-effective approach to log aggregation. With Loki 1.0, users can instantaneously switch between metrics and logs, preserving context and reducing MTTR. By storing compressed, unstructured logs and only indexing metadata, Loki is cost-effective and simple to operate by design. It includes a set of components that can be composed into a fully featured logging stack. Grafana Cloud offers a high-performance, hosted Loki service that allows users to store all logs together in a single place with usage-based pricing. Read about Loki 1.0 on GitHub to know more in detail.
Rancher extends Kubernetes to the edge with the general availability of K3s
Rancher, creator of the vendor-agnostic and cloud-agnostic Kubernetes management platform, announced the general availability of K3s, a lightweight, certified Kubernetes distribution purpose-built for small-footprint workloads. Rancher partnered with Arm to build a highly optimized version of Kubernetes for the edge. K3s is packaged as a single binary under 40 MB with a small footprint, which reduces the dependencies and steps needed to install and run Kubernetes in resource-constrained environments such as IoT and edge devices. To know more about this announcement in detail, read the official press release. There were many additional announcements, including Portworx launching PX-Autopilot, Huawei presenting its latest advances on KubeEdge, Diamanti announcing its Spektra hybrid cloud solution, and many more! To know more about all the keynotes and tutorials at KubeCon North America 2019, visit its GitHub page.
Chaos engineering comes to Kubernetes thanks to Gremlin
"Don't break your users and create a community culture", says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
KubeCon + CloudNativeCon EU 2019 highlights: Microsoft's Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!

Using Amazon Simple Notification Service (SNS) to create an SNS topic

Vijin Boricha
30 Apr 2018
5 min read
Simple Notification Service is a managed web service that you, as an end user, can leverage to send messages to various subscribing endpoints. SNS works on a publisher-subscriber (producer and consumer) model, where producers create and send messages to a particular topic, which is in turn consumed by one or more subscribers over a supported set of protocols. At the time of writing this book, SNS supports HTTP, HTTPS, email, SMS and mobile push notifications, as well as AWS Lambda and Amazon SQS, as the preferred modes of subscription.

In today's tutorial, we will learn about Amazon Simple Notification Service (SNS) and how to create your very own simple SNS topics.

SNS is a really simple and yet extremely useful service that you can use for a variety of purposes, the most common being pushing notifications or system alerts to cloud administrators whenever a particular event occurs. We have been using SNS throughout this book for this same purpose; however, there are many more features and use cases that SNS can be leveraged for. For example, you can use SNS to send out promotional emails or SMS to a large group of targeted audiences, or even use it as a mobile push notification service where the messages are pushed directly to your Android or iOS applications. With this in mind, let's quickly go ahead and create a simple SNS topic of our own.

To do so, first log in to your AWS Management Console and use the Filter option to search for the SNS service. Alternatively, you can also access the SNS dashboard directly at https://console.aws.amazon.com/sns. If this is your first time with SNS, simply select the Get Started option to begin.

Here, at the SNS dashboard, you can start off by selecting the Create topic option, as shown in the following screenshot. Once selected, you will be prompted to provide a suitable Topic name and its corresponding Display name. Topics form the core functionality of SNS. You can use topics to send messages to a particular type of subscribing consumer. Remember, a single topic can be subscribed to by more than one consumer. Once you have typed in the required fields, select the Create topic option to complete the process. That's it! Simple, isn't it?

Having created your topic, you can now go ahead and associate it with one or more subscribers. To do so, we first need to create one or more subscriptions. Select the Create subscription option provided under the newly created topic, as shown in the following screenshot.

Here, in the Create subscription dialog box, select a suitable Protocol that will subscribe to the newly created topic. In this case, I've selected Email as the Protocol. Next, provide a valid email address in the subsequent Endpoint field. The Endpoint field will vary based on the selected protocol. Once completed, click on the Create subscription button to complete the process.

With the subscription created, you will now have to validate the subscription. This can be performed by launching your email application and selecting the Confirm subscription link in the mail that you will have received. Once the subscription is confirmed, you will be redirected to a confirmation page where you can view the subscribed topic's name as well as the subscription ID, as shown in the following screenshot.

You can use the same process to create and assign multiple subscribers to the same topic. For example, select the Create subscription option, as performed earlier, and from the Protocol drop-down list, select SMS as the new protocol. Next, provide a valid phone number in the subsequent Endpoint field. The number can be prefixed with your country code, as shown in the following screenshot. Once completed, click on the Create subscription button to complete the process.

With the subscriptions created successfully, you can now test the two by publishing a message to your topic. To do so, select the Publish to topic option from your topic's page. Once a message is published here, SNS will attempt to deliver that message to each of its subscribing endpoints; in this case, to the email address as well as the phone number. Type in a suitable Subject name followed by the actual message that you wish to send. Note that if your character count exceeds 160 for an SMS, SNS will automatically send another SMS with the remaining characters. You can optionally switch the Message format between Raw and JSON to match your requirements. Once completed, select Publish Message.

Check your email application once more for the published message. You should receive an email, as shown in the following screenshot. Similarly, you can create and associate one or more such subscriptions to each of the topics that you create. We learned about creating simple Amazon SNS topics.
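The console walkthrough above can also be scripted. As a hedged illustration only (this is not part of the book excerpt), the same topic, email subscription and test message could be created with Ansible's community.aws modules, assuming the collection is installed and AWS credentials are configured; the topic name, email address and message text are placeholders:

- name: create an SNS topic with an email subscription
  community.aws.sns_topic:
    name: admin-alerts                 # placeholder topic name
    display_name: Admin Alerts
    state: present
    subscriptions:
      - endpoint: ops@example.com      # placeholder email address
        protocol: email

- name: publish a test message to the topic
  community.aws.sns:
    topic: admin-alerts
    subject: Test notification
    msg: "Hello from SNS"

As with the console flow, the email subscription still has to be confirmed by the recipient before messages are delivered to it.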
You read an excerpt from the book AWS Administration - The Definitive Guide - Second Edition, written by Yohan Wadia. This book will help you build a highly secure, fault-tolerant, and scalable cloud environment.

Getting to know Amazon Web Services
AWS IoT Analytics: The easiest way to run analytics on IoT data
How to set up a Deep Learning System on Amazon Web Services

AWS re:Invent 2019 Day 2 highlights: AWS Wavelength, Provisioned Concurrency for Lambda functions, and more!

Savia Lobo
04 Dec 2019
6 min read
Day 2 of the ongoing AWS re:Invent 2019 conference in Las Vegas included a lot of new announcements, such as AWS Wavelength, Provisioned Concurrency for Lambda functions, Amazon SageMaker Autopilot, and much more. The Day 1 highlights included a lot of exciting releases too, such as the preview of AWS's new quantum service, Braket, and Amazon SageMaker Operators for Kubernetes, among others.

Day Two announcements at AWS re:Invent 2019

AWS Wavelength to deliver ultra-low latency applications for 5G devices

With AWS Wavelength, developers can build applications that deliver single-digit millisecond latencies to mobile devices and end users. AWS developers can deploy their applications to Wavelength Zones, AWS infrastructure deployments that embed AWS compute and storage services within the telecommunications providers' datacenters at the edge of 5G networks, and seamlessly access the breadth of AWS services in the region. This enables developers to deliver applications that require single-digit millisecond latencies, such as game and live video streaming, machine learning inference at the edge, and augmented and virtual reality (AR/VR).

AWS Wavelength brings AWS services to the edge of the 5G network. This minimizes the latency to connect to an application from a mobile device. Application traffic can reach application servers running in Wavelength Zones without leaving the mobile provider's network. This reduces the extra network hops to the Internet that can result in latencies of more than 100 milliseconds, preventing customers from taking full advantage of the bandwidth and latency advancements of 5G. To know more about AWS Wavelength, read the official post.

Provisioned Concurrency for Lambda functions

To provide customers with improved control over the performance of their mission-critical serverless applications, AWS introduced Provisioned Concurrency, a Lambda feature that works with any trigger. For example, you can use it with WebSockets APIs, GraphQL resolvers, or IoT Rules. This feature gives you more control when building serverless applications that require low latency, such as web and mobile apps, games, or any service that is part of a complex transaction. Provisioned Concurrency keeps functions initialized and hyper-ready to respond in double-digit milliseconds. This addition is helpful for implementing interactive services, such as web and mobile backends, latency-sensitive microservices, or synchronous APIs. On enabling Provisioned Concurrency for a function, the Lambda service will initialize the requested number of execution environments so they can be ready to respond to invocations. To know more about Provisioned Concurrency in detail, read the official document.
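As a hedged illustration of how this might be configured (not taken from the announcement itself), an AWS SAM template lets a provisioned concurrency setting be attached to an automatically published function alias. The function name, handler, runtime and count below are placeholders under that assumption:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ApiBackend:                          # placeholder function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler             # placeholder handler
      Runtime: python3.8
      CodeUri: ./src
      AutoPublishAlias: live           # provisioned concurrency applies to a published version/alias
      ProvisionedConcurrencyConfig:
        ProvisionedConcurrentExecutions: 10   # execution environments kept warm

Because provisioned concurrency is billed while it is configured, the count is something to size against expected traffic rather than set arbitrarily high.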
Amazon Managed Cassandra Service open preview launched

Amazon Managed Apache Cassandra Service (MCS) is a scalable, highly available, and managed Apache Cassandra-compatible database service. Since Amazon MCS is serverless, you pay only for the resources you use, and the service automatically scales tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage.

With Amazon MCS, it becomes easy to run Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today. Amazon MCS implements the Apache Cassandra version 3.11 CQL API, allowing you to use the code and drivers that you already have in your applications. Updating your application is as easy as changing the endpoint to the one in the Amazon MCS service table. To know more about Amazon MCS in detail, read the AWS official blog post.

Introducing Amazon SageMaker Autopilot to auto-create high-quality machine learning models with full control and visibility

The AWS team launched Amazon SageMaker Autopilot to automatically create classification and regression machine learning models with full control and visibility. SageMaker Autopilot first checks the dataset and then runs a number of candidates to figure out the optimal combination of data preprocessing steps, machine learning algorithms, and hyperparameters. All this happens with a single API call or a few clicks in Amazon SageMaker Studio. Further, it uses this combination to train an Inference Pipeline, which can be easily deployed either on a real-time endpoint or for batch processing. All of this takes place on fully managed infrastructure. SageMaker Autopilot also generates Python code showing exactly how data was preprocessed: not only can you understand what SageMaker Autopilot does, you can also reuse that code for further manual tuning if you're so inclined.

SageMaker Autopilot supports:

- Input data in tabular format, with automatic data cleaning and preprocessing
- Automatic algorithm selection for linear regression, binary classification, and multi-class classification
- Automatic hyperparameter optimization
- Distributed training
- Automatic instance and cluster size selection

To know more about Amazon SageMaker Autopilot, read the official document.

Announcing ML-powered Amazon Kendra

Amazon Kendra is an ML-powered, highly accurate enterprise search service. It provides powerful natural language search capabilities to your websites and applications so that end users can easily find the information they need within the vast amount of content spread across the organization. Key benefits of Kendra include:

- Users can get immediate answers to questions asked in natural language. This eliminates sifting through long lists of links and hoping one has the information you need.
- Kendra lets you easily add content from file systems, SharePoint, intranet sites, file-sharing services, and more, into a centralized location, so you can quickly search all of your information to find the best answer.
- The search results get better over time, as Kendra's machine learning algorithms learn which results users find most valuable.

To know more about Amazon Kendra in detail, read the official document.

Introducing the preview of Amazon CodeGuru

Amazon CodeGuru is a machine learning service for automated code reviews and application performance recommendations. It helps developers find the most expensive lines of code that affect application performance and cause difficulty while troubleshooting. CodeGuru is powered by machine learning, best practices, and hard-learned lessons across millions of code reviews and thousands of applications profiled on open source projects and internally at Amazon. It helps developers find and fix code issues such as resource leaks, potential concurrency race conditions, and wasted CPU cycles. To know more about Amazon CodeGuru in detail, read the official blog post.

A few other highlights of Day Two at AWS re:Invent 2019 include:

- General availability of Amazon EKS on AWS Fargate, AWS Fargate Spot, and ECS Cluster Auto Scaling.
- The Deep Graph Library, an open source library built for easy implementation of graph neural networks, is now available on Amazon SageMaker.
Amazon re:Invent will continue throughout this week, until the 6th of December. You can access the livestream. Keep checking this space for further updates and releases.

Amazon re:Invent 2019 Day One: AWS launches Braket, its new quantum service and releases SageMaker Operators for Kubernetes
Amazon's hardware event 2019 highlights: a high-end Echo Studio, the new Echo Show 8, and more
10 key announcements from Microsoft Ignite 2019 you should know about

What to expect at Cloud Data Summit 2019 - a summit hosted in the cloud

Sugandha Lahoti
09 Oct 2019
2 min read
2019's Cloud Data Summit is quickly approaching (scheduled to take place on October 16th-17th). This year it will be different: it will be hosted online as a 100% virtual summit. The event will feature industry-leading speakers and thought leaders discussing the hype around AI, big data, machine learning, PaaS and IaaS technologies.

Although it will be a 100% virtual summit, this conference will have all the features of a standard conference - main stage discussions, speaker panels, peer networking sessions, roundtables, breakout sessions, and group lunches. All topics will be presented in a way that is comfortable for both technical and non-technically inclined attendees, and will cover real-world implementations, how-tos, best practices, potential pitfalls, and how to leverage the full potential of the cloud's data and processing power. Attendees of the Cloud Data Summit come from companies including Google, Spotify, IBM, SAP, Microsoft, Apple and more.

Here's a list of featured speakers:

- Jay Natarajan, US AI Lead and Lead Architect, Microsoft
- Dan Linstedt, Inventor of the Data Vault Methodology, CEO of LearnDataVault.com, CTO and Co-Founder of Scalefree
- T. Scott Clendaniel, Co-founder & Consultant, Cottrell Consulting
- Barr Moses, Co-founder & CEO of Monte Carlo Data
- Dr. Joe Perez, Sr. Systems Analyst, Team Lead, NC Department of Health and Human Services
- Kurt Cagle, CEO of Semantical LLC, Contributor to Forbes and Managing Editor of Cognitive World
- Joshua Cottrell, Co-founder & Consultant, Cottrell Consulting
- Jawad Sartaj, Chief Analytics Officer, Somos Community Care
- Daniel O'Connor, Head of Product Data Practice, Aware Web Solutions Inc.
- Eric Axelrod, Founder of Cloud Data Summit, President & Chief Architect, DIGR, and Executive Advisor

Individuals or organizations interested in learning more about the Cloud Data Summit, or in registering to attend, can visit its official website. If you fall into any of these categories - Business Executives, Data and IT Executives, Data Managers, Data Scientists, Data Engineers, or Data Warehouse Architects (in short, anyone interested in learning about cloud migration and its consequences) - the Cloud Data Summit is not to be missed. For current students and new graduates, tickets are up to 80% off via the special student registration form.

Chef goes open source, ditching the Loose Open Core model

Richard Gall
02 Apr 2019
5 min read
Chef, the infrastructure automation tool, has today revealed that it is going completely open source. In doing so, the project has ditched the loose open core model. The news is particularly intriguing as it comes at a time when the traditional open source model appears to be facing challenges around its future sustainability. However, it would appear that, from Chef's perspective, the switch to a full open source license is being driven by a crowded marketplace where automation tools are finding it hard to gain a foothold inside organizations trying to automate their infrastructure. A further challenge for this market is what Chef has identified as 'The Coded Enterprise' - essentially technologically progressive organizations driven by an engineering culture where infrastructure is primarily viewed as code.

Read next: Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity

Why is Chef going open source?

As you might expect, there's actually more to Chef's decision than pure commercialism. To get a good understanding, it's worth picking apart Chef's open core model and how this was limiting the project.

The limitations of Open Core

The Loose Open Core model has open source software at its center but is wrapped in proprietary software. So, it's open at its core, but is largely proprietary in how it is deployed and used by businesses. While at first glance this might make it easier to monetize the project, it also severely limits the project's ability to evolve and develop according to the needs of the people that matter - the people that use it. Indeed, one way of thinking about it is that the open core model positions your software as a product - something that is defined by product managers and lives and dies by its stickiness with customers. By going open source, your software becomes a project, something that is shared and owned by a community of people that believe in it.

Speaking to TechCrunch, Chef Co-Founder Adam Jacob said, "in the open core model, you're saying that the value is in this proprietary sliver. The part you pay me for is this sliver of its value. And I think that's incorrect... the value was always in the totality of the product."

Read next: Chef Language and Style

Removing the friction between product and project

Jacob published an article on Medium expressing his delight at the news. It's an instructive look at how Chef has been thinking about itself and the challenges it faces. "Deciding what's in, and what's out, or where to focus, was the hardest part of the job at Chef," Jacob wrote. "I'm stoked nobody has to do it anymore. I'm stoked we can have the entire company participating in the open source community, rather than burning out a few dedicated heroes. I'm stoked we no longer have to justify the value of what we do in terms of what we hold back from collaborating with people on."

So, what's the deal with the Chef Enterprise Automation Stack?

As well as announcing that Chef will be open sourcing its code, the organization also revealed that it is bringing together Chef Automate, Chef Infra, Chef InSpec, Chef Habitat and Chef Workstation under one single solution: the Chef Enterprise Automation Stack. The point here is to simplify Chef's offering to its customers, making it easier for them to properly build and automate reliable infrastructure.
Corey Scobie, SVP of Product and Engineering, said that "the introduction of the Chef Enterprise Automation Stack builds on [the switch to open source]... aligning our business model with our customers' stated needs through Chef software distribution, services, assurances and direct engagement. Moving forward, the best, fastest, most reliable way to get Chef products and content will be through our commercial distributions."

So, essentially, the Chef Enterprise Automation Stack will be the primary Chef distribution that's available commercially, sitting alongside the open source project.

What does all this mean for Chef customers and users?

If you're a Chef user or have any questions or concerns, the team have put together a very helpful FAQ. You can read it here.

The key points for Chef users

Existing commercial and non-commercial users don't need to do anything - everything will continue as normal. However, anyone else using current releases should be aware that support will be removed from those releases in 12 months' time. The team have clarified that "customers who choose to use our new software versions will be subject to the new license terms and will have an opportunity to create a commercial relationship with Chef, with all of the accompanying benefits that provides."

A big step for Chef - could it help determine the evolution of open source?

This is a significant step for Chef and it will be of particular interest to its users. But even for those who have no interest in Chef, it's nevertheless a story that indicates there's a lot of life in open source despite the challenges it faces. It'll certainly be interesting to see whether Chef makes it work and what impact it has on the configuration management marketplace.

Automating OpenStack Networking and Security with Ansible 2 [Tutorial]

Vijin Boricha
03 Jul 2018
9 min read
OpenStack is software that can help us build a system similar to popular cloud providers, such as AWS or GCP. OpenStack provides an API and a dashboard to manage the resources that it controls. Basic operations, such as creating and managing virtual machines, block storage, object storage, identity management, and so on, are supported out of the box.

This is an excerpt from Ansible 2 Cloud Automation Cookbook, written by Aditya Patawari and Vikas Aggarwal. No matter which cloud platform you are using, this book will help you orchestrate your cloud infrastructure.

In the case of OpenStack, we control the underlying hardware and network, which comes with its own pros and cons. In this article, we will leverage Ansible 2 to automate some less common networking tasks in OpenStack. We can use custom network solutions, and we can use economical equipment or high-end devices, depending upon the actual need. This can help us get the features that we want and may end up saving money.

Caution: Although OpenStack can be hosted on premises, several cloud providers provide OpenStack as a service. Sometimes these cloud providers may choose to turn off certain features or provide add-on features. Sometimes, even while configuring OpenStack in a self-hosted environment, we may choose to toggle certain features or configure a few things differently. Therefore, inconsistencies may occur. All the code examples in this article were tested on a self-hosted OpenStack release from August 2017, named Pike. The underlying operating system was CentOS 7.4.

Managing security groups

Security groups are the firewalls that can be used to allow or disallow the flow of traffic. They can be applied to virtual machines. Security groups and virtual machines have a many-to-many relationship. A single security group can be applied to multiple virtual machines and a single virtual machine can have multiple security groups.

How to do it…

Let's create a security group as follows:

- name: create a security group for web servers
  os_security_group:
    name: web-sg
    state: present
    description: security group for web servers

The name parameter has to be unique. The description parameter is optional, but we recommend using it to state the purpose of the security group. The preceding task will create a security group for us, but there are no rules attached to it. A firewall without any rules is of little use. So let's go ahead and add a rule to allow access to port 80 as follows:

- name: allow port 80 for http
  os_security_group_rule:
    security_group: web-sg
    protocol: tcp
    port_range_min: 80
    port_range_max: 80
    remote_ip_prefix: 0.0.0.0/0

We also need SSH access to this server, so we should allow port 22 as well:

- name: allow port 22 for SSH
  os_security_group_rule:
    security_group: web-sg
    protocol: tcp
    port_range_min: 22
    port_range_max: 22
    remote_ip_prefix: 0.0.0.0/0

How it works…

For this module, we need to specify the name of the security group. The rule that we are creating will be associated with this group. We have to supply the protocol and the port range information. If we just want to whitelist only one port, then that would be the upper and lower bound of the range. Lastly, we have to specify the allowed addresses in the form of a CIDR. The address 0.0.0.0/0 signifies that port 80 is open for everyone. This task will add an ingress type rule and allow traffic on port 80 to reach the instance. Similarly, we have to add a rule to allow traffic on port 22 as well.
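The recipe stops at defining the security group, but the group only takes effect once it is attached to an instance. As a hedged sketch that is not part of the original excerpt, the os_server module accepts a security_groups list; the server name, image, flavor and network below are placeholders for values from your own cloud:

# Illustrative only: boot an instance with the web-sg security group attached.
- name: boot a web server with the web-sg security group
  os_server:
    state: present
    name: web-01            # placeholder server name
    image: centos-7         # placeholder image available in your cloud
    flavor: m1.small        # placeholder flavor
    network: private        # the network created later in this article
    security_groups:
      - web-sg

Because the relationship is many-to-many, the same group can be listed on several servers, and a server can list several groups.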
Managing network resources

A network is a basic building block of the infrastructure. Most cloud providers will supply a sample or default network. While setting up a self-hosted OpenStack instance, a single network is typically created automatically. However, if the network is not created, or if we want to create another network for the purpose of isolation or compliance, we can do so using the os_network module.

How to do it…

Let's go ahead and create an isolated network and name it private, as follows:

- name: creating a private network
  os_network:
    state: present
    name: private

In the preceding example, we created a logical network with no subnets. A network with no subnets is of little use, so the next step would be to create a subnet:

- name: creating a private subnet
  os_subnet:
    state: present
    network_name: private
    name: app
    cidr: 192.168.0.0/24
    dns_nameservers:
      - 8.8.4.4
      - 8.8.8.8
    host_routes:
      - destination: 0.0.0.0/0
        nexthop: 104.131.86.234
      - destination: 192.168.0.0/24
        nexthop: 192.168.0.1

How it works…

The preceding task will create a subnet named app in the network called private. We have also supplied a CIDR for the subnet, 192.168.0.0/24. We are using Google DNS for the nameservers as an example here, but this information should be obtained from the IT department of the organization. Similarly, we have set up example host routes, but this information should be obtained from the IT department as well. After successful execution of this recipe, our network is ready to use.

User management

OpenStack provides an elaborate user management mechanism. If we are coming from a typical third-party cloud provider, such as AWS or GCP, then it can look overwhelming. The following list explains the building blocks of user management:

- Domain: This is a collection of projects and users that defines an administrative entity. Typically, it can represent a company or a customer account. For a self-hosted setup, this could be done on the basis of departments or environments. A user with administrative privileges on the domain can further create projects, groups, and users.
- Group: A group is a collection of users owned by a domain. We can add and remove privileges from a group, and our changes will be applied to all the users within the group.
- Project: A project creates a virtual isolation for resources and objects. This can be done to separate departments and environments as well.
- Role: This is a collection of privileges that can be applied to groups or users.
- User: A user can be a person or a virtual entity, such as a program, that accesses OpenStack services.

For complete documentation of the user management components, go through the OpenStack Identity document at https://docs.openstack.org/keystone/pike/admin/identity-concepts.html.

How to do it…

Let's go ahead and start creating some of these basic building blocks of user management.
We should note that, most likely, a default version of these building blocks will already be present in most setups.

1. We'll start with a domain called demodomain, as follows:

- name: creating a demo domain
  os_keystone_domain:
    name: demodomain
    description: Demo Domain
    state: present
  register: demo_domain

2. After we get the domain, let's create a role, as follows:

- name: creating a demo role
  os_keystone_role:
    state: present
    name: demorole

3. Projects can be created as follows:

- name: creating a demo project
  os_project:
    state: present
    name: demoproject
    description: Demo Project
    domain_id: "{{ demo_domain.id }}"
    enabled: True

4. Once we have a role and a project, we can create a group, as follows:

- name: creating a demo group
  os_group:
    state: present
    name: demogroup
    description: "Demo Group"
    domain_id: "{{ demo_domain.id }}"

5. Let's create our first user:

- name: creating a demo user
  os_user:
    name: demouser
    password: secret-pass
    update_password: on_create
    email: demo@example.com
    domain: "{{ demo_domain.id }}"
    state: present

6. Now we have a user and a group. Let's add the user to the group that we created before:

- name: adding user to the group
  os_user_group:
    user: demouser
    group: demogroup

7. We can also associate a user or a group with a role:

- name: adding role to the group
  os_user_role:
    group: demogroup
    role: demorole
    domain: "{{ demo_domain.id }}"

How it works…

In step 1, the os_keystone_domain module takes a name as a mandatory parameter. We also supplied a description for our convenience. We are going to use the details of this domain, so we saved it in a variable called demo_domain.

In step 2, the os_keystone_role module just takes a name and creates a role. Note that a role is not tied to a domain.

In step 3, the os_project module requires a name. We have added the description for our convenience. Projects are tied to a domain, so we have used the demo_domain variable that we registered in a previous task.

In step 4, groups are tied to domains as well. So, along with the name, we specify the description and domain ID, like we did before. At this point, the group is empty, and there are no users associated with it.

In step 5, we supply a name along with a password for the user. The update_password parameter is set to on_create, which means that the password won't be modified for an existing user. This is great for the sake of idempotency. We also specify the email ID, which would be required for recovering the password and several other use cases. Lastly, we add the domain ID to create the user in the right domain.

In step 6, the os_user_group module helps us associate the demouser with the demogroup.

In step 7, the os_user_role module takes a parameter for a user or group and associates it with a role.

A lot of these divisions might not be required for every organization. We recommend going through the documentation and understanding the use case of each of them. Another point to note is that we might not even see the user management bits on a day-to-day basis. Depending on the setup and our responsibilities, we might only interact with modules that involve managing resources, such as virtual machines and storage.

We learned how to successfully solve complex OpenStack networking tasks with Ansible 2. To learn more about managing other public cloud platforms like AWS and Azure, refer to our book Ansible 2 Cloud Automation Cookbook.

Getting Started with Ansible 2
System Architecture and Design of Ansible
An In-depth Look at Ansible Plugins