
How-To Tutorials


Building your first chatbot using Chatfuel with no code [Tutorial]

Bhagyashree R
09 Oct 2018
12 min read
Building chatbots is fun. Although chatbots are just another kind of software, they are very different in terms of the expectations they create in users. Chatbots are conversational. This ability to process language makes them project a kind of human personality and intelligence, whether or not we as developers intend it. Developing software with a personality and intelligence is quite challenging and therefore interesting.

This tutorial is an excerpt from a book written by Srini Janarthanam titled Hands-On Chatbots and Conversational UI Development. In this article, we will explore a popular tool called Chatfuel and learn how to build a chatbot from scratch. Chatfuel enables you to build a chatbot without having to code at all. It is a web-based tool with a GUI editor that allows the user to build a chatbot in a modular fashion.

Getting started with Chatfuel

Let's get started. Go to Chatfuel's website to create an account:

- Click GET STARTED FOR FREE. Remember, the Chatfuel toolkit is currently free to use. This will lead you to one of two options: if you are logged into Facebook, it will ask for permission to link your Chatfuel account to your Facebook account; if you are not logged in, it will ask you to log into Facebook first before asking for permission. Chatfuel links to Facebook to deploy bots, so it requires permission to use your Facebook account.
- Authorize Chatfuel to receive information about you and to be your Pages manager.

That's it! You are all set to build your very first bot.

Building your first bot

Chatfuel bots can be published on two deployment platforms: Facebook Messenger and Telegram. Let us build a chatbot for Facebook Messenger first. In order to do that, we need to create a Facebook Page, because every chatbot on Facebook Messenger needs to be attached to a page. Here is how we can build a Facebook Page:

- Go to https://www.facebook.com/pages/create/.
- Click the category appropriate to the page content. In our case, we will use Brand or Product and choose App Page.
- Give the page a name. In our case, let's use Get_Around_Edinburgh. Note that Facebook does not make it easy to change page names, so choose wisely.
- Once the page is created, you will see Chatfuel asking for permission to connect to the page. Click CONNECT TO PAGE.

You will be taken to the bot editor. The name of the bot is set to My First Bot, and it has a Messenger URL, which you can see beside the name. Messenger URLs start with m.me. You might notice that the bot also comes with a built-in Welcome message. On the left, you see the main menu with a number of options, with the Build option selected by default.

Click the Messenger URL to start your first conversation with the bot. This will open Facebook Messenger in your browser tab. To start the conversation, click the Get Started button at the bottom of the chat window. There you go! Your conversation has just started, and the bot has sent you a welcome message. Notice how it greets you by name. This is because you have given the bot access to your info on Facebook. Now that you have built your first bot and had a conversation with it, give yourself a pat on the back. Welcome to the world of chatbots!

Adding conversational flow

Let's start building our bot. On the welcome block, click the default text and edit it. Hovering the mouse over the block reveals options such as deleting a card, rearranging the order of the cards, and adding new cards between existing cards.
Delete the Main menu button and add a Text card. Let's add a follow-up text card and ask the user a question. Add buttons for user responses: click ADD BUTTON and type in the name of the button. Ignore block names for now; since they are incomplete, they will appear in red. Remember, you can add up to three buttons to a text card.

Button responses need to be tied to blocks so that when a user hits a button, the chatbot knows what to do or say. Let's add a few blocks. To add a new block, click ADD BLOCK in the Bot Structure tab. This creates a new untitled block. On the right side, fill in the name of the block. Repeat the same for each block you want to build. Now, go back to the buttons and specify the block names to connect to: click the button, choose Blocks, and provide the name of the block. For each block you created, add content by adding appropriate cards. Remember, each block can have more than one card, and each card will appear as a response, one after another.

Repeat the preceding steps to add more blocks and connect them to buttons of other blocks. When you are done, you can test the bot by clicking the TEST THIS CHATBOT button in the top-right corner of the editor. You should now see the new welcome message with buttons for responses. Go on and click one of them to have a conversation. Great! You now have a bot with a conversational flow.

Handling navigation

How can the user and the chatbot navigate through the conversation? How do they respond to each other and move the conversation forward? In this section, we will examine the devices that facilitate conversation flow.

Buttons

Buttons are a way to let users respond to the chatbot in an unambiguous manner. You can add buttons to text, gallery, and list cards. Buttons have a label and a payload. The label is what the user gets to see; the payload is what happens in the backend when the user clicks the button. A button can take one of four types of payloads: next block, URL, phone number, or share.

- The next block is identified by the name of the block. This tells the chatbot which block to execute when the button is pressed.
- A URL can be specified if the chatbot is to open a web page in the embedded web browser. Since the browser is embedded, the size of the window can also be specified.
- A phone number can be specified if the chatbot is to make a voice call to someone.
- The share option can be used in cards such as lists and galleries to share the card with other contacts of the user.

Go to Block cards

Buttons can be used to navigate the user from one block to another, but the user has to push the button for that navigation to happen. There may be circumstances where the navigation needs to happen automatically. For instance, if the chatbot is giving the user step-by-step instructions on how to do something, it could be built by putting all the cards (one step of information per card) in one block. However, it might be a good idea to put them in different blocks for the sake of modularity, and in that case we would otherwise need to give the user a next step button just to move on. In Chatfuel, we can use the Go to Block card to address this problem. A Go to Block card can be placed at the end of any block to take the chatbot to another block: once the chatbot executes all the cards in a block, it moves to the other block automatically, without any user intervention. Using Go to Block cards, we can build the chatbot in a modular fashion.
To add a Go to Block card at the end of a block, choose ADD A CARD, click the + icon, and choose the Go to Block card. Fill in the block name for redirection. Redirections can also be made random or conditional. By choosing the random option, we can make the chatbot choose one of the mentioned blocks randomly. This adds a bit of uncertainty to the conversation; however, it needs to be used very carefully, because the context of the conversation may get tricky to maintain. Conditional redirections can be used if there is a need to check the context before the redirection is done. Let's revisit this option after we discuss context.

Managing context

In any conversation, the context of the conversation needs to be managed. Context can be maintained by creating a local cache where the information transferred between the two dialogue partners can be stored. For instance, the user may tell the chatbot their food preferences, and this information can be stored in context for future reference if not used immediately. Another instance is a conversation where the user is asking questions about a story. These questions may be incomplete and may need to be interpreted in terms of the information available in the context. In this section, we will explore how context can be recorded and utilized during the conversation in Chatfuel.

Let's take the task of finding a restaurant as part of your tour guide chatbot. The conversation between the chatbot and the user might go as follows:

User: Find a restaurant
Bot: Ok. Where?
User: City center.
Bot: Ok. Any cuisine that you fancy?
User: Indian
Bot: Ok. Let me see... I found a few Indian restaurants in the city center. Here they are.

In the preceding conversation, up until the last bot utterance, the bot needs to save the information locally. When it has gathered all the information it needs, it can go off and search the database with the appropriate parameters. Notice that it also needs to use that information to generate utterances dynamically. Let's explore how to do these two things: dynamically generating utterances and searching the database.

First, we need to build the conversational flow to take the user through the conversation, just as we did earlier. Let's assume that the user clicks the Find_a_restaurant button on the welcome block, and build the basic blocks with text messages and buttons to navigate through the conversation.

User input cards

As you can imagine, building blocks for every cuisine and location combination can become a laborious task. Let's try to build the same functionality in another way: forms. In order to use forms, the User Input card needs to be used. Let's create a new block called Restaurant_search and add a User Input card to it. To add a User Input card, click ADD A CARD, click the + icon, and select the User Input card. Add all the questions you want to ask the user under MESSAGE TO USER. The answers to each of these questions can be saved to variables. Name the variables against every question. These variables are always denoted with double curly brackets (for example, {{restaurant_location}}).

Information provided by the user can also be validated before acceptance. In case the required information is a phone number, email address, or a number, it can be validated by choosing the appropriate format for the input information. After the User Input card, let's add a Go to Block card to redirect the flow to the results page, and add a block where we present the results.
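To make this concrete, here is a purely hypothetical example of what the text card in the results block might say. The variable name {{restaurant_cuisine}} is invented for this illustration (only {{restaurant_location}} appears in the steps above), and you can name your variables whatever you like:

    Ok. Let me see... I found a few {{restaurant_cuisine}} restaurants in {{restaurant_location}}. Here they are.

When the block runs, Chatfuel substitutes the values collected by the User Input card into the message before sending it.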
As you can see, variables holding information can be used in chatbot utterances. They will be dynamically replaced from the context while the conversation is happening.

Setting user attributes

In addition to User Input cards, there is another way to save information in context: the Set Up User Attribute card. Using this card, you can set context-specific information at any point during the conversation. To add this card, choose ADD A CARD, click the + icon, and choose the Set Up User Attribute card. For example, a user-likes-history variable could be set to true when the user asks for historical attractions. This information can later be used to drive the conversation (for instance in a Go to Block card) or to provide recommendations.

Variables that are already in the context can be reset to new values or to no value at all. To clear the value of a variable, use the special NOT SET value from the drop-down menu that appears as soon as you try to fill in a value for the variable. You can also set or reset more than one variable in a single card.

Default contextual variables

Besides defining your own contextual variables, you can also use a list of predefined variables. The information contained in these variables includes the following:

- Information obtained from the deployment platform (that is, Facebook), including the user's name, gender, time zone, and locale
- Contextual information, such as the last pushed button, the last visited block name, and so on

To get the full list of variables, create a new text card and type {{. This will open a drop-down menu with the list of variables you can choose from. The list also includes the variables created by you. As with developer-defined variables, these built-in variables can be used in text messages and in conditional redirections using Go to Block cards.

Congratulations! In this tutorial, you have started a journey toward building awesome chatbots. Using tour guiding as the use case, we explored a variety of chatbot design and development topics along the way. If you found this post useful, do check out the book, Hands-On Chatbots and Conversational UI Development, which will help you explore the world of conversational user interfaces.

Facebook's Wit.ai: Why we need yet another chatbot development framework?
Building a two-way interactive chatbot with Twilio: A step-by-step guide
How to create a conversational assistant or chatbot using Python

Microsoft open sources Infer.NET, its popular model-based machine learning framework

Melisha Dsouza
08 Oct 2018
3 min read
Last week, Microsoft open sourced Infer.NET, its cross-platform framework for model-based machine learning. This popular machine learning engine, used in Office, Xbox, and Azure, will be available on GitHub under the permissive MIT license for free use in commercial applications.

Features of Infer.NET

The team at Microsoft Research in Cambridge initially envisioned Infer.NET as a research tool and released it for academic use in 2008. The framework has served as the basis for hundreds of papers across a variety of fields, including information retrieval and healthcare. The team then started using the framework as a machine learning engine within a wide range of Microsoft products.

A model-based approach to machine learning

Infer.NET allows users to incorporate domain knowledge into their model. The framework can be used to build bespoke machine learning algorithms directly from that model. To sum it up, the framework actually constructs a learning algorithm for users based on the model they have provided.

Facilitates interpretability

Infer.NET also facilitates interpretability. If users have designed the model themselves and the learning algorithm follows that model, they can understand why the system behaves in a particular way or makes certain predictions.

Probabilistic approach

In Infer.NET, models are described using a probabilistic program, which is used to describe real-world processes in a language that machines understand. Infer.NET compiles the probabilistic program into high-performance code implementing something cryptically called deterministic approximate Bayesian inference. This approach allows for a notable amount of scalability: for instance, it can be used in a system that automatically extracts knowledge from billions of web pages comprising petabytes of data.

Additional features

The framework also allows a system to keep learning as new data arrives, and the team is working towards developing and growing it further. Infer.NET will become a part of ML.NET (the machine learning framework for .NET developers); the team has already set up the repository under the .NET Foundation and moved the package and namespaces to Microsoft.ML.Probabilistic. Being cross-platform, Infer.NET supports .NET Framework 4.6.1, .NET Core 2.0, and Mono 5.0. Windows users get to use Visual Studio 2017, while macOS and Linux folks have command-line options, which can be incorporated into the code wrangler of their choice.

Download the framework to learn more about Infer.NET. You can also check the documentation for a detailed User Guide. To know more about this news, head over to Microsoft's official blog.

Microsoft announces new Surface devices to enhance user productivity, with style and elegance
Neural Network Intelligence: Microsoft's open source automated machine learning toolkit
Microsoft's new neural text-to-speech service lets machines speak like people

Minecraft Java team are open sourcing some of Minecraft's code as libraries

Sugandha Lahoti
08 Oct 2018
2 min read
Stockholm's Minecraft Java team is open sourcing some of Minecraft's code as libraries for game developers. Developers can now use them to improve their Minecraft mods, use them in their own projects, or help improve pieces of the Minecraft Java engine. The team will open up different libraries gradually. These libraries are open source and MIT licensed. For now, they have open sourced two libraries: Brigadier and DataFixerUpper.

Brigadier

The first library, Brigadier, takes strings of text entered into Minecraft and turns them into actual functions that the game will perform. Basically, if you enter something like /give Dinnerbone sticks in the game, it goes internally into Brigadier, which breaks it down into pieces and then tries to figure out what the user is trying to do with that piece of text. Nathan Adams, a Java developer, hopes that giving the Minecraft community access to Brigadier can make it "extremely user-friendly one day." Brigadier has been available for a week now, and it has already seen improvements in the code and the readme doc.

DataFixerUpper

Another important library of the Minecraft game engine, DataFixerUpper, is also being open sourced. When developers add a new feature into Minecraft, they have to change the way level data and save files are stored. DataFixerUpper converts data in these older formats into what the game currently uses. Also in consideration for open sourcing is the Blaze3D library, which is a complete rewrite of the render engine for Minecraft 1.14.

You can check out the announcement on the Minecraft website. You can also download Brigadier and DataFixerUpper.

Minecraft is serious about global warming, adds a new (spigot) plugin to allow changes in climate mechanics.
Learning with Minecraft Mods
A Brief History of Minecraft Modding

How to deploy Splunk binary and set up its configuration [Tutorial]

Savia Lobo
08 Oct 2018
13 min read
Splunk provides binary distributions for Windows and a variety of Unix operating systems. For all Unix operating systems, a compressed .tar file is provided; for some platforms, packages are also provided. This article is an excerpt taken from the book Implementing Splunk 7 - Third Edition written by James Miller. The book covers the new modules of Splunk, Splunk Cloud and the Machine Learning Toolkit, to ease data usage and more. In this tutorial, you will learn how to deploy the Splunk binary effectively within your system and how to set up configuration distribution in Splunk.

If your organization uses packages, such as deb or rpm, you should be able to use the provided packages in your normal deployment process. Otherwise, installation starts by unpacking the provided tar to the location of your choice. The process is the same whether you are installing the full version of Splunk or the Splunk universal forwarder. The typical installation process involves the following steps:

- Installing the binary
- Adding a base configuration
- Configuring Splunk to launch at boot
- Restarting Splunk

Having worked with many different companies over the years, I can honestly say that none of them used the same product or even methodology for deploying software. Splunk takes a hands-off approach to fit in as easily as possible into customer workflows.

Deploying from a tar file

To deploy from a tar file, the command depends on your version of tar. With a modern version of tar, you can run the following command:

    tar xvzf splunk-7.0.x-xxx-Linux-xxx.tgz

Older versions may not handle gzip files directly, so you may have to run the following command:

    gunzip -c splunk-7.0.x-xxx-Linux-xxx.tgz | tar xvf -

This will expand into the current directory. To expand into a specific directory, you can usually add -C, depending on the version of tar, as follows:

    tar -C /opt/ -xvzf splunk-7.0.x-xxx-Linux-xxx.tgz

Deploying using msiexec

In Windows, it is possible to deploy Splunk using msiexec. This makes it much easier to automate deployment on a large number of machines. To install silently, you can use the combination of AGREETOLICENSE and /quiet, as follows:

    msiexec.exe /i splunk-xxx.msi AGREETOLICENSE=Yes /quiet

If you plan to use a deployment server, you can specify the following value:

    msiexec.exe /i splunk-xxx.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="deployment_server_name:8089" /quiet

Or, if you plan to overlay an app that contains deploymentclient.conf, you can forego starting Splunk until that app has been copied into place, as follows:

    msiexec.exe /i splunk-xxx.msi AGREETOLICENSE=Yes LAUNCHSPLUNK=0 /quiet

There are options available to start reading data immediately, but I would advise deploying input configurations to your servers instead of enabling inputs via installation arguments.

Adding a base configuration

If you are using the Splunk deployment server, this is the time to set up deploymentclient.conf. This can be accomplished in several ways, as follows:

- On the command line, by running the following code:

    $SPLUNK_HOME/bin/splunk set deploy-poll deployment_server_name:8089

- By placing a deploymentclient.conf in $SPLUNK_HOME/etc/system/local/
- By placing an app containing deploymentclient.conf in $SPLUNK_HOME/etc/apps/

The third option is what I would recommend, because it allows this configuration to be overridden via the deployment server at a later time. We will work through an example later in the Using the Splunk deployment server section.
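Whichever of these options you choose, the file itself is tiny. As a sketch, using the placeholder deployment_server_name:8089 from the commands above (Step 2 later in this tutorial builds essentially the same stanza into an app), it might look like this:

    [deployment-client]

    [target-broker:deploymentServer]
    targetUri = deployment_server_name:8089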
If you are deploying configurations in some other way, for instance with puppet, be sure to restart the Splunk forwarder processes after deploying the new configuration.

Configuring Splunk to launch at boot

On Windows machines, Splunk is installed as a service that will start after installation and on reboot. On Unix hosts, the Splunk command line provides a way to create startup scripts appropriate for the operating system that you are using. The command looks like this:

    $SPLUNK_HOME/bin/splunk enable boot-start

To run Splunk as another user, provide the -user flag, as follows:

    $SPLUNK_HOME/bin/splunk enable boot-start -user splunkuser

The startup command must still be run as root, but the startup script will be modified to run as the user provided. If you do not run Splunk as root, and you shouldn't if you can avoid it, be sure that the Splunk installation and data directories are owned by the user specified in the enable boot-start command. You can ensure this by using chown, such as in chown -R splunkuser $SPLUNK_HOME. On Linux, you could then start Splunk using service splunk start.

Configuration distribution in Splunk

As we have covered, in some depth, configurations in Splunk are simply directories of plain text files. Distribution essentially consists of copying these configurations to the appropriate machines and restarting the instances. You can either use your own system for distribution, such as puppet or simply a set of scripts, or use the deployment server included with Splunk.

Using your own deployment system

The advantage of using your own system is that you already know how to use it. Assuming that you have normalized your apps, as described in the section Using apps to organize configuration, deploying apps to a forwarder or indexer consists of the following steps:

- Set aside the existing apps at $SPLUNK_HOME/etc/apps/.
- Copy the apps into $SPLUNK_HOME/etc/apps/.
- Restart the Splunk forwarder. Note that this needs to be done as the user that is running Splunk, either by calling the service script or by calling su. In Windows, restart the splunkd service.

Assuming that you already have a system for managing configurations, that's it. If you are deploying configurations to indexers, be sure to only deploy the configurations when downtime is acceptable, as you will need to restart the indexers to load the new configurations, ideally in a rolling manner. Do not deploy configurations until you are ready to restart, as some (but not all) configurations take effect immediately.

Using the Splunk deployment server

If you do not have a system for managing configurations, you can use the deployment server included with Splunk. Some advantages of the included deployment server are as follows:

- Everything you need is included in your Splunk installation
- It will restart forwarder instances properly when new app versions are deployed
- It is intelligent enough not to restart when unnecessary
- It will remove apps that should no longer be installed on a machine
- It will ignore apps that are not managed
- The logs for the deployment client and server are accessible in Splunk itself

Some disadvantages of the included deployment server are:

- As of Splunk 4.3, there are issues with scale beyond a few hundred deployment clients, at which point tuning is required (although one option is to use multiple instances of the deployment server)
- The configuration is complicated and prone to typos

With these caveats out of the way, let's set up a deployment server for the apps that we laid out before.
Step 1 – deciding where your deployment server will run

For a small installation with fewer than a few dozen forwarders, your main Splunk instance can run the deployment server without any issue. For more than a few dozen forwarders, a separate instance of Splunk makes sense. Ideally, this instance would run on its own machine. The requirements for this machine are not large, perhaps 4 gigabytes of RAM and two processors, or possibly less; a virtual machine would be fine. Define a DNS entry for your deployment server if at all possible. This will make moving your deployment server later much simpler.

If you do not have access to another machine, you could run another copy of Splunk on the same machine that is running some other part of your Splunk deployment. To accomplish this, follow these steps:

- Install Splunk in another directory, perhaps /opt/splunk-deploy/splunk/.
- Start this instance of Splunk by using /opt/splunk-deploy/splunk/bin/splunk start. When prompted, choose port numbers different from the defaults and note what they are. I would suggest one number higher: 8090 and 8001.
- Unfortunately, if you run splunk enable boot-start in this new instance, the existing startup script will be overwritten. To accommodate both instances, you will need to either edit the existing startup script or rename the existing script so that it is not overwritten.

Step 2 - defining your deploymentclient.conf configuration

Using the address of our new deployment server, ideally a DNS entry, we will build an app named deploymentclient-yourcompanyname. This app will have to be installed manually on forwarders but can then be managed by the deployment server. This app should look somewhat like this:

    deploymentclient-yourcompanyname
        local/deploymentclient.conf

    [deployment-client]

    [target-broker:deploymentServer]
    targetUri=deploymentserver.foo.com:8089

Step 3 - defining our machine types and locations

Starting with what we defined in the Separate configurations by purpose section, we have, in the locations west and east, the following machine types:

- Splunk indexers
- db servers
- Web servers
- App servers

Step 4 - normalizing our configurations into apps appropriately

Let's use the apps that we defined in the section Separate configurations by purpose, plus the deployment client app that we created in the Step 2 - defining your deploymentclient.conf configuration section. These apps will live in $SPLUNK_HOME/etc/deployment-apps/ on your deployment server.

Step 5 - mapping these apps to deployment clients in serverclass.conf

To get started, I always start with example 2 from $SPLUNK_HOME/etc/system/README/serverclass.conf.example:

    [global]

    [serverClass:AppsForOps]
    whitelist.0=*.ops.yourcompany.com
    [serverClass:AppsForOps:app:unix]
    [serverClass:AppsForOps:app:SplunkLightForwarder]

Let's assume that we have the machines mentioned next. It is very rare for an organization of any size to have consistently named hosts, so I threw in a couple of rogue hosts at the bottom, as follows:

    spl-idx-west01
    spl-idx-west02
    spl-idx-east01
    spl-idx-east02
    app-east01
    app-east02
    app-west01
    app-west02
    web-east01
    web-east02
    web-west01
    web-west02
    db-east01
    db-east02
    db-west01
    db-west02
    qa01
    homer-simpson

The structure of serverclass.conf is essentially as follows:

    [serverClass:<className>]
    #options that should be applied to all apps in this class

    [serverClass:<className>:app:<appName>]
    #options that should be applied only to this app in this serverclass

Please note that:

- <className> is an arbitrary name of your choosing.
- <appName> is the name of a directory in $SPLUNK_HOME/etc/deployment-apps/.
- The order of stanzas does not matter. Be sure to update <className> if you copy an :app: stanza; this is, by far, the easiest mistake to make.
- It is important that configuration changes do not trigger a restart of indexers.

Let's apply this to our hosts, as follows:

    [global]
    restartSplunkd = True
    #by default trigger a splunk restart on configuration change

    ####INDEXERS
    ##handle indexers specially, making sure they do not restart
    [serverClass:indexers]
    whitelist.0=spl-idx-*
    restartSplunkd = False
    [serverClass:indexers:app:indexerbase]
    [serverClass:indexers:app:deploymentclient-yourcompanyname]
    [serverClass:indexers:app:props-web]
    [serverClass:indexers:app:props-app]
    [serverClass:indexers:app:props-db]

    #send props-west only to west indexers
    [serverClass:indexers-west]
    whitelist.0=spl-idx-west*
    restartSplunkd = False
    [serverClass:indexers-west:app:props-west]

    #send props-east only to east indexers
    [serverClass:indexers-east]
    whitelist.0=spl-idx-east*
    restartSplunkd = False
    [serverClass:indexers-east:app:props-east]

    ####FORWARDERS
    #send event parsing props apps everywhere
    #blacklist indexers to prevent unintended restart
    [serverClass:props]
    whitelist.0=*
    blacklist.0=spl-idx-*
    [serverClass:props:app:props-web]
    [serverClass:props:app:props-app]
    [serverClass:props:app:props-db]

    #send props-west only to west datacenter servers
    #blacklist indexers to prevent unintended restart
    [serverClass:west]
    whitelist.0=*-west*
    whitelist.1=qa01
    blacklist.0=spl-idx-*
    [serverClass:west:app:props-west]
    [serverClass:west:app:deploymentclient-yourcompanyname]

    #send props-east only to east datacenter servers
    #blacklist indexers to prevent unintended restart
    [serverClass:east]
    whitelist.0=*-east*
    whitelist.1=homer-simpson
    blacklist.0=spl-idx-*
    [serverClass:east:app:props-east]
    [serverClass:east:app:deploymentclient-yourcompanyname]

    #define our appserver inputs
    [serverClass:appservers]
    whitelist.0=app-*
    whitelist.1=qa01
    whitelist.2=homer-simpson
    [serverClass:appservers:app:inputs-app]

    #define our webserver inputs
    [serverClass:webservers]
    whitelist.0=web-*
    whitelist.1=qa01
    whitelist.2=homer-simpson
    [serverClass:webservers:app:inputs-web]

    #define our dbserver inputs
    [serverClass:dbservers]
    whitelist.0=db-*
    whitelist.1=qa01
    [serverClass:dbservers:app:inputs-db]

    #define our west coast forwarders
    [serverClass:fwd-west]
    whitelist.0=app-west*
    whitelist.1=web-west*
    whitelist.2=db-west*
    whitelist.3=qa01
    [serverClass:fwd-west:app:outputs-west]

    #define our east coast forwarders
    [serverClass:fwd-east]
    whitelist.0=app-east*
    whitelist.1=web-east*
    whitelist.2=db-east*
    whitelist.3=homer-simpson
    [serverClass:fwd-east:app:outputs-east]

You should organize the patterns and classes in a way that makes sense to your organization and data centers, but I would encourage you to keep it as simple as possible. I would strongly suggest opting for more lines over more complicated logic. A few more things to note about the format of serverclass.conf:

- The number following whitelist and blacklist must be sequential, starting with zero.
For instance, in the following example, whitelist.3 will not be processed, since whitelist.2 is commented out:

    [serverClass:foo]
    whitelist.0=a*
    whitelist.1=b*
    # whitelist.2=c*
    whitelist.3=d*

- whitelist.x and blacklist.x are tested against the following values, in this order:
  1. clientName as defined in deploymentclient.conf: This is not commonly used, but is useful when running multiple Splunk instances on the same machine, or when the DNS is completely unreliable.
  2. IP address: There is no CIDR matching, but you can use string patterns.
  3. Reverse DNS: This is the value returned by the DNS for an IP address. If your reverse DNS is not up to date, this can cause you problems, as this value is tested before the hostname provided by the host itself. If you suspect this, try ping <ip of machine> or something similar to see what the DNS is reporting.
  4. Hostname as provided by the forwarder: This is always tested after reverse DNS, so be sure your reverse DNS is up to date.
- When copying :app: lines, be very careful to update the <className> appropriately! This really is the most common mistake made in serverclass.conf.

Step 6 - restarting the deployment server

If serverclass.conf did not exist, a restart of the Splunk instance running the deployment server is required to activate it. After the deployment server is loaded, you can use the following command:

    $SPLUNK_HOME/bin/splunk reload deploy-server

This command should be enough to pick up any changes in serverclass.conf and in etc/deployment-apps.

Step 7 - installing deploymentclient.conf

Now that we have a running deployment server, we need to set up the clients to call home. On each machine that will be running the deployment client, the procedure is essentially as follows:

- Copy the deploymentclient-yourcompanyname app to $SPLUNK_HOME/etc/apps/
- Restart Splunk

If everything is configured correctly, you should see the appropriate apps appear in $SPLUNK_HOME/etc/apps/ within a few minutes. To see what is happening, look at the log $SPLUNK_HOME/var/log/splunk/splunkd.log. If you have problems, enable debugging on either the client or the server by editing $SPLUNK_HOME/etc/log.cfg, followed by a restart. Look for the following lines:

    category.DeploymentServer=WARN
    category.DeploymentClient=WARN

Once found, change them to the following lines and restart Splunk:

    category.DeploymentServer=DEBUG
    category.DeploymentClient=DEBUG

After restarting Splunk, you will see the complete conversation in $SPLUNK_HOME/var/log/splunk/splunkd.log. Be sure to change the setting back once you no longer need the verbose logging!

To summarize, we learned how to deploy a binary and set up configuration distribution in Splunk. If you've enjoyed this excerpt, head over to the book, Implementing Splunk 7 - Third Edition, to learn how to use the Machine Learning Toolkit, plus best practices and tips to help you implement Splunk services effectively and efficiently.

Splunk introduces machine learning capabilities in Splunk Enterprise and Cloud
Creating effective dashboards using Splunk [Tutorial]
Why should enterprises use Splunk?

7 tips for using Git and GitHub the right way

Sugandha Lahoti
06 Oct 2018
3 min read
GitHub has become a widely accepted and integral part of software development owing to the essential change-tracking features it offers. Git, the version control system GitHub is built on, was created in 2005 by Linus Torvalds to support the development of the Linux kernel. In this post, Alex Magana and Joseph Muli, the authors of the Introduction to Git and GitHub course, discuss some of the best practices you should keep in mind while learning or using Git and GitHub.

Document everything

A good practice that eases work in any team is ample documentation. Documenting something as simple as a repository goes a long way in presenting work and attracting contributors. Documentation is often the first impression when someone is looking for a tool to aid development.

Utilize the README.MD and wikis

You should also utilize the README.MD and wikis to elucidate the functionality delivered by the application and to categorize guide topics and material. You should:

- Communicate the solution that the code in the repository provides.
- Specify the guidelines and rules of engagement that govern contributing to the codebase.
- Indicate the dependencies required to set up the working environment.
- Stipulate set-up instructions to get a working version of the application in a contributor's local environment.

Keep simple and concise naming conventions

Naming conventions are also highly encouraged when it comes to repositories and branches. They should be simple, descriptive, and concise. For instance, a repository that houses code intended for a Git course could simply be named "git-tutorial-material". For a learner on that course, it's easier to find the material than it would be with a repository named something like "material".

Adopt naming prefixes

You should also adopt naming prefixes for different task types when naming branches. For example, you may use feat- for feature branches, bug- for bugs, and fix- for fix branches. Also, make use of templates that encompass a checklist for Pull Requests.

Correspond a PR and Branch to a ticket or task

A PR and branch should correspond to a ticket or task on the project management board. This aids in aligning the effort put into a product with the appropriate milestones and product vision.

Organize and track tasks using issues

Tasks such as technical debt, bugs, and workflow improvements should be organized and tracked using Issues. You should also enforce push and pull restrictions on the default branch and use webhooks to automate deployment and run pre-merge test suites.

Use atomic commits

Another best practice is to use atomic commits. An atomic commit is an operation that applies a set of distinct changes as a single operation. You should persist changes in small changesets and use descriptive and concise commit messages to record the effected changes. A short command-line sketch of the branch-prefix and atomic-commit conventions follows at the end of this post.

You just read a guest post from Alex Magana and Joseph Muli, the authors of Introduction to Git and GitHub. We hope that these best practices help you manage your Git and GitHub work more smoothly. Don't forget to check out Alex and Joseph's Introduction to Git and GitHub course to learn how to create and enforce checks and controls for the introduction, scrutiny, approval, merging, and reversal of changes.

GitHub introduces 'Experiments', a platform to share live demos of their research projects.
Packt's GitHub portal hits 2,000 repositories
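As promised above, here is a minimal, hypothetical Git sequence illustrating the naming-prefix and atomic-commit advice. The branch and file names are invented for this example, and the commands are standard Git CLI usage rather than anything specific to the authors' course:

    # Create a feature branch using the feat- prefix
    git checkout -b feat-user-profile

    # Stage and commit one distinct change as a single, small changeset
    git add src/profile/avatar_upload.py
    git commit -m "Add avatar upload endpoint"

    # A separate concern goes in its own atomic commit
    git add docs/profile.md
    git commit -m "Document avatar upload limits"

    # Push the branch and open a PR tied to its ticket
    git push -u origin feat-user-profile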

A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem

Sandesh Deshpande
06 Oct 2018
6 min read
If someone says Eclair, Honeycomb, Ice Cream Sandwich, or Jelly Bean then, apart from getting a sugar rush, you will probably think of Android OS. From being a newly launched OS greeted with apprehension to being the biggest and most loved operating system in history, Android has seen it all. The OS which powers our phones and makes our everyday lives simpler recently celebrated its 10th anniversary.

Android's rise from the ashes

The journey to becoming the most popular mobile OS since its launch in 2008 was not an easy one for Android. Back then it competed with iOS and Blackberry, which were considered the go-to smartphones of the time. Google's idea was to give users a Blackberry-like experience, as the 'G1' had a full-sized physical QWERTY keypad just like a Blackberry. But the G1 had some limitations: it could play videos only on YouTube, as it didn't have any inbuilt video player app, and the Android Market (now Google Play) had just a handful of apps. Though the idea of giving users a Blackberry-like experience seemed spot on, it was not a hit with users, as by then Apple had made touchscreens all the rage with the iPhone.

One thing Google did get right with Android, which its competitors didn't offer, was customization, and that's where Google scored a home run. Blackberry and the iPhone were great, and users loved them, but both operating systems tied users into their ecosystems. Motorola saw the potential for customization and adopted Android to launch the Motorola Droid in 2009. This is when Android came of age and started competing with Apple's iOS. With Android, people could customize their phones, and thanks to its open source platform, developers could tweak the base OS and customize it to their liking. This resulted in users having options to choose themes, wallpapers, and launchers, and it pioneered a demand for customization that was later adopted in the iPhone as well.

By virtue of being an open platform, and thanks to regular updates from Google, there was a huge surge in Android adoption, and mobile manufacturers like Motorola, HTC, and Samsung launched devices powered by Android. Because of this rapid adoption by a large number of manufacturers, Android became the most popular mobile platform, beating Nokia's Symbian OS by the end of 2010. This Android phenomenon saved manufacturers like HTC, Motorola, Samsung, and Sony from losing significant market share to the then mobile handset market leaders: Nokia, Blackberry, and Apple. They sensed the change in user preferences and adopted Android. Nokia, on the other hand, didn't adopt Android and stuck to its Symbian OS, which cost it customers and market share.

Android: Sugar and spice and everything nice

In the subsequent years, Google launched Android versions like Cupcake, Donut, Eclair, Froyo, Gingerbread, Ice Cream Sandwich, Jelly Bean, KitKat, Lollipop, and Marshmallow. The Android team sure loves its sugar, as is evident from all the Android operating systems being named after desserts. It's not new for tech companies to pick themed names for their software versions; Apple, for instance, named its OS releases after cats like Tiger, Leopard, and Snow Leopard. But Google never officially revealed why its OS is named after desserts. Just in case that wasn't nerdy enough, Google put these sugary names in alphabetical order. Each update came with some cool features. Here's a quick list of some popular features with their respective versions.
- Eclair (2009): Phones that shipped with Eclair onboard had digital zoom and a camera flash for photos for the first time.
- Honeycomb (2011): Honeycomb was compatible with tablets without any major glitches.
- Ice Cream Sandwich (2011): Probably not as sophisticated as today's implementations, but Ice Cream Sandwich had facial recognition and also a feature to take screenshots.
- Lollipop (2014): With Android Lollipop, rounded icons were introduced in Android for the first time.
- Nougat (2016): With the Nougat update, Google introduced more natural-looking emojis, including skin tone modifiers and Unicode 9 emojis, and removed some previously gender-neutral characters.
- Pie (2018): The latest Android update, Android Pie, also comes with a bunch of cool features. The standout feature in this release is indoor navigation, which enables indoor GPS-style tracking by determining your location within a building and facilitating turn-by-turn directions to help you navigate indoors.

Android's greatest strength is probably its large, open platform community, which helps developers build apps for Android. Though developers can write Android apps in any Java virtual machine (JVM) compatible programming language, Google's primary language for writing Android apps has been Java (besides C++). At Google I/O 2017, Google announced that it would officially support Kotlin on Android as a "first-class" language. Kotlin is a relatively new programming language built by JetBrains, which also develops the JetBrains IDE that powers Android Studio.

Apart from rich features and a strong open platform community, Google also enhanced security with the newer Android versions, which made the platform hard to beat. Manufacturers like Samsung leveraged the power of Android with their Galaxy S series, making them one of the leading mobile manufacturers. Today, Google has proven itself a strong player in the mobile market, not only with Android but also with its flagship phones like the Pixel series, which receive updates before any other Android smartphone.

Android today: love it, hate it, but you can't escape it

Today, with a staggering 2 billion active devices, Android is by far the market leader among mobile OS platforms. A decade ago, no one anticipated that one mobile OS could have such dominance. Google has developed the OS for televisions, smartwatches, smart home devices, and VR headsets, and has even developed Android Auto for cars. At Google I/O 2018, Google showcased the power of machine learning with Smart Compose for Gmail and Google Duplex for the Google Assistant; with the Assistant now being introduced on almost all the latest Android phones, this is making Android more powerful than ever.

However, all is not sunshine and rainbows in the Android nation. In July this year, the EU slapped Google with a $5 billion fine as an outcome of its antitrust investigations around Android. Google was found guilty of imposing illegal restrictions on Android device manufacturers and network operators since 2011, in an attempt to drive all the traffic from these devices to the Google search engine. It is ironic that the very restrictive, locked-in ecosystems that Android rebelled against in its early days are something it is now increasingly endorsing. Furthermore, as interfaces become less text- and screen-based and more touch-, voice-, and gesture-based, Google does seem to realize Android's limitations to some extent.
They have been investing a lot into Project Fuchsia lately, which many believe could be Android's replacement in the future. With the tech landscape changing more rapidly than ever, it will be interesting to see what the future holds for Android, but for now, Android is here to stay.

6 common challenges faced by Android App developers
Entry level phones to taste the Go edition of the Android 9.0 Pie version
Android 9 Pie's Smart Linkify: How Android's new machine learning based feature works

Implementing Concurrency with Kotlin [Tutorial]

Savia Lobo
06 Oct 2018
10 min read
Correct concurrent code is code that allows for a certain variability in the order of its execution while still being deterministic in its result. For this to be possible, different parts of the code need to have some level of independence, and some degree of orchestration may also be required. This tutorial is an excerpt taken from the book Learning Concurrency in Kotlin written by Miguel Angel Castiblanco Torres. In this book, you will learn how to handle errors and exceptions, as well as how to leverage multi-core processing. In this article, we will be introduced to concurrency and how Kotlin approaches concurrency challenges.

The best way to understand concurrency is by comparing sequential code with concurrent code. Let's start by looking at some non-concurrent code:

    fun getProfile(id: Int) : Profile {
        val basicUserInfo = getUserInfo(id)
        val contactInfo = getContactInfo(id)

        return createProfile(basicUserInfo, contactInfo)
    }

If I ask you what is going to be obtained first, the user information or the contact information – assuming no exceptions – you will probably agree with me that 100% of the time the user information will be retrieved first. And you will be correct. That is, first and foremost, because the contact information is not being requested until the user information has already been retrieved.

(Figure: Timeline of getProfile)

And that's the beauty of sequential code: you can easily see the exact order of execution, and you will never have surprises on that front. But sequential code has two big issues:

- It may perform poorly compared to concurrent code
- It may not take advantage of the hardware that it's being executed on

Let's say, for example, that both getUserInfo and getContactInfo call a web service, and each service constantly takes around one second to return a response. That means that getProfile will always take no less than two seconds to finish. And since it seems like getContactInfo doesn't depend on getUserInfo, the calls could be made concurrently, and by doing so it would be possible to halve the execution time of getProfile. Let's imagine a concurrent implementation of getProfile:

    suspend fun getProfile(id: Int) {
        val basicUserInfo = asyncGetUserInfo(id)
        val contactInfo = asyncGetContactInfo(id)

        createProfile(basicUserInfo.await(), contactInfo.await())
    }

In this updated version of the code, getProfile() is a suspending function – notice the suspend modifier in its definition – and the implementations of asyncGetUserInfo() and asyncGetContactInfo() are asynchronous (they are not shown in the example code to keep things simple). Because asyncGetUserInfo() and asyncGetContactInfo() are written to run in different threads, they are said to be concurrent. For now, let's think of it as if they are being executed at the same time – we will see later that this is not necessarily the case, but it will do for now.

This means that the execution of asyncGetContactInfo() will not depend on the completion of asyncGetUserInfo(), so the requests to the web services can be made at around the same time. And since we know that each service takes around one second to return a response, createProfile() will be called around one second after getProfile() is started, sooner than it could ever be in the sequential version of the code, where it will always take at least two seconds to be called.
Let's take a look at how this may look:

(Figure: Concurrent timeline for getProfile)

But in this updated version of the code, we don't really know whether the user information will be obtained before the contact information. Remember, we said that each of the web services takes around one second, and we also said that both requests are started at around the same time. This means that if asyncGetContactInfo is faster than asyncGetUserInfo, the contact information will be obtained first; the user information could be obtained first if asyncGetUserInfo returns first; and, since we are at it, it could also happen that both of them return the information at the same time. This means that our concurrent implementation of getProfile, while possibly performing twice as fast as the sequential one, has some variability in its execution.

That's the reason there are two await() calls when calling createProfile(). What this does is suspend the execution of getProfile() until both asyncGetUserInfo() and asyncGetContactInfo() have completed. Only when both of them have completed will createProfile() be executed. This guarantees that regardless of which concurrent call ends first, the result of getProfile() will be deterministic.

And that's where the tricky part of concurrency is. You need to guarantee that no matter the order in which the semi-independent parts of the code are completed, the result is deterministic. For this example, what we did was suspend part of the code until all the moving parts completed, but as we will see later in the book, we can also orchestrate our concurrent code by having it communicate between coroutines.

Concurrency in Kotlin

Now that we have covered the basics of concurrency, it's a good time to discuss the specifics of concurrency in Kotlin. This section will showcase the most differentiating characteristics of Kotlin when it comes to concurrent programming, covering both philosophical and technical topics.

Non-blocking

Threads are heavy, expensive to create, and limited (only so many threads can be created), so when a thread is blocked it is, in a way, being wasted. Because of this, Kotlin offers what are called suspendable computations; these are computations that can suspend their execution without blocking the thread of execution. So instead of, for example, blocking thread X to wait for an operation happening in a thread Y, it's possible to suspend the code that has to wait and use thread X for other computations in the meantime. Furthermore, Kotlin offers great primitives like channels, actors, and mutual exclusions, which provide mechanisms to communicate and synchronize concurrent code effectively without having to block a thread.

Being explicit

Concurrency needs to be thought about and designed for, and because of that, it's important to make it explicit when a computation should run concurrently. Suspendable computations run sequentially by default. Since they don't block the thread when suspended, there's no direct drawback:

    fun main(args: Array<String>) = runBlocking {
        val time = measureTimeMillis {
            val name = getName()
            val lastName = getLastName()
            println("Hello, $name $lastName")
        }
        println("Execution took $time ms")
    }

    suspend fun getName(): String {
        delay(1000)
        return "Susan"
    }

    suspend fun getLastName(): String {
        delay(1000)
        return "Calvin"
    }

In this code, main() executes the suspendable computations getName() and getLastName() in the current thread, sequentially.
Executing main() will print the greeting Hello, Susan Calvin followed by an execution time of a little over 2,000 ms, since the two one-second delays run one after the other.

This is convenient because it's possible to write non-concurrent code that doesn't block the thread of execution. But after some time and analysis, it becomes clear that it doesn't make sense to have getLastName() wait until after getName() has been executed, since the computation of the latter has no dependency on the former. It's better to make it concurrent:

    fun main(args: Array<String>) = runBlocking {
        val time = measureTimeMillis {
            val name = async { getName() }
            val lastName = async { getLastName() }
            println("Hello, ${name.await()} ${lastName.await()}")
        }
        println("Execution took $time ms")
    }

Now, by calling async {...}, it's clear that both of them should run concurrently, and by calling await() it's requested that main() is suspended until both computations have a result. This time the execution takes a little over 1,000 ms, since the two delays overlap.

Readable

Concurrent code in Kotlin is as readable as sequential code. One of the many problems with concurrency in other languages like Java is that it is often difficult to read, understand, and/or debug concurrent code. Kotlin's approach allows for idiomatic concurrent code:

    suspend fun getProfile(id: Int) {
        val basicUserInfo = asyncGetUserInfo(id)
        val contactInfo = asyncGetContactInfo(id)

        createProfile(basicUserInfo.await(), contactInfo.await())
    }

By convention, a function that is going to run concurrently by default should indicate this in its name, either by starting with async or ending in Async. This suspend method calls two methods that will be executed in background threads and waits for their completion before processing the information. Reading and debugging this code is as simple as it would be for sequential code.

In many cases, it is better to write a suspend function and call it inside an async {} or launch {} block, rather than writing functions that are already async. This is because having a suspend function gives more flexibility to its callers; that way, the caller can decide when to run it concurrently, for example. In other cases, you may want to write both the concurrent and the suspend function.

Leveraged

Creating and managing threads is one of the difficult parts of writing concurrent code in many languages. As seen before, it's important to know when to create a thread, and almost as important to know how many threads are optimal. It's also important to have threads dedicated to I/O operations, while also having threads to tackle CPU-bound operations. And communicating/syncing threads is a challenge in itself. Kotlin has high-level functions and primitives that make it easier to implement concurrent code:

- To create a thread, it's enough to call newSingleThreadContext(), a function that only takes the name of the thread. Once created, that thread can be used to run as many coroutines as needed.
- Creating a pool of threads is just as easy, by calling newFixedThreadPoolContext() with the size and the name of the pool.
- CommonPool is a pool of threads optimal for CPU-bound operations. Its maximum size is the number of cores in the machine minus one.
- The runtime will take charge of moving a coroutine to a different thread when needed.
- There are many primitives and techniques to communicate and synchronize coroutines, such as channels, mutexes, and thread confinement.

A short sketch of some of these building blocks in action follows below.

Flexible

Kotlin offers many different primitives that allow for simple-yet-flexible concurrency. You will find that there are many ways to do concurrent programming in Kotlin.
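As referenced above, here is a brief, hedged sketch showing a dedicated thread, a thread pool, and a channel working together. It assumes a recent kotlinx.coroutines artifact on the classpath (the package was kotlinx.coroutines.experimental in the versions this book was written against, and newSingleThreadContext/newFixedThreadPoolContext may produce an opt-in warning in current releases); the thread names, pool size, and channel usage are invented for this illustration:

    import kotlinx.coroutines.*
    import kotlinx.coroutines.channels.Channel

    fun main() = runBlocking {
        // A dedicated thread, created simply by naming it
        val singleThread = newSingleThreadContext("io-thread")
        // A small pool of threads for CPU-bound work
        val pool = newFixedThreadPoolContext(4, "cpu-pool")

        // A channel used to pass results between coroutines
        val results = Channel<Int>()

        // Producer coroutines run on the pool and send their results through the channel
        val producers = List(4) { n ->
            launch(pool) {
                results.send(n * n)
            }
        }

        // Consumer coroutine runs on the dedicated thread and receives the four results
        val consumer = launch(singleThread) {
            repeat(4) {
                println("Received ${results.receive()} on ${Thread.currentThread().name}")
            }
        }

        // Wait for everything to finish before releasing the threads
        producers.joinAll()
        consumer.join()
        singleThread.close()
        pool.close()
    }

The same Channel type, together with launch and these dispatcher functions, underpins the higher-level patterns listed next, such as worker pools and actors.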
Here is a list of some of the topics we will look at throughout the book: Channels: Pipes that can be used to safely send and receive data between coroutines. Worker pools: A pool of coroutines that can be used to divide the processing of a set of operations in many threads. Actors: A wrapper around a state that uses channels and coroutines as a mechanism to offer the safe modification of a state from many different threads. Mutual exclusions (Mutexes): A synchronization mechanism that allows the definition of a critical zone so that only one thread can execute at a time. Any coroutine trying to access the critical zone will be suspended until the previous coroutine leaves. Thread confinement: The ability to limit the execution of a coroutine so that it always happens in a specified thread. Generators (Iterators and sequences): Data sources that can produce information on demand and be suspended when no new information is required. All of these are tools that are at your fingertips when writing concurrent code in Kotlin, and their scope and use will help you to make the right choices when implementing concurrent code. To summarize, we learned the basics of concurrency and how to implement concurrency in Kotlin programming language. If you've enjoyed reading this excerpt, head over to the book, Learning Concurrency in Kotlin to learn how to use the Machine Learning Toolkit and best practices and tips to help you implement Splunk services effectively and efficiently. KotlinConf 2018: Kotlin 1.3 RC out and Kotlin/Native hits beta Kotlin 1.3 RC1 is here with compiler and IDE improvements Kotlin 1.3 M1 arrives with coroutines, and new experimental features like unsigned integer types
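To make the first item in that list concrete, here is a minimal channel sketch. It is not taken from the book; it is a small, self-contained illustration that assumes a recent kotlinx.coroutines release, where channels live in the kotlinx.coroutines.channels package:

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    val channel = Channel<Int>()

    // Producer coroutine: sends five squares, then closes the channel
    launch {
        for (x in 1..5) channel.send(x * x)
        channel.close()
    }

    // Consumer: iterating over the channel suspends until a value arrives
    // and stops automatically once the channel is closed
    for (value in channel) println(value)
    println("Done")
}

Neither side blocks a thread: send() and the consuming loop both suspend, which is exactly the non-blocking behavior described earlier in this excerpt.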


Bloomberg's Big Hack Exposé says China had microchips on servers for covert surveillance of Big Tech and Big Brother; Big Tech deny supply chain compromise

Savia Lobo
05 Oct 2018
4 min read
According to an in-depth report by Bloomberg yesterday, Chinese spies secretly inserted microchips within servers at Apple, Amazon, the US Department of Defense, the Central Intelligence Agency, the Navy, among others. What Bloomberg’s Big Hack Exposé revealed? The tiny chips were made to be undetectable without specialist equipment. These were later implanted on to the motherboards of servers on the production line in China. These servers were allegedly assembled by Super Micro Computer Inc., a San Jose-based company, one of the world’s biggest suppliers of server motherboards. Supermicro's customers include Elemental Technologies, a streaming services startup which was acquired by Amazon in 2015 and provided the foundation for the expansion of the Amazon Prime Video platform. According to the report, the Chinese People's Liberation Army (PLA) used illicit chips on hardware during the manufacturing process of server systems in factories. How did Amazon detect these microchips? In late 2015, Elemental’s staff boxed up several servers and sent them to Ontario, Canada, for the third-party security company to test. The testers found a tiny microchip, not much bigger than a grain of rice, that wasn’t part of the boards’ original design, nested on the servers’ motherboards. Following this, Amazon reported the discovery to U.S. authorities which shocked the intelligence community. This is because, Elemental’s servers are ubiquitous and used across US key government agencies such as in Department of Defense data centers, the CIA’s drone operations, and the onboard networks of Navy warships. And Elemental was just one of the hundreds of Supermicro customers. According to the Bloomberg report, “The chips were reportedly built to be as inconspicuous as possible and to mimic signal conditioning couplers. It was determined during an investigation, which took three years to conclude, that the chip "allowed the attackers to create a stealth doorway into any network that included the altered machines.” The report claims Amazon became aware of the attack during moves to purchase streaming video compression firm Elemental Technologies in 2015. Elemental's services appear to have been an ideal target for Chinese state-sponsored attackers to conduct covert surveillance. According to Bloomberg, Apple was one of the victims of the apparent breach. Bloomberg says that Apple found the malicious chips in 2015 subsequently cutting ties with Supermicro in 2016. Amazon, Apple, and Supermicro deny supply chain compromise Amazon and Apple have both strongly denied the results of the investigation. Amazon said, "It's untrue that AWS knew about a supply chain compromise, an issue with malicious chips, or hardware modifications when acquiring Elemental. It's also untrue that AWS knew about servers containing malicious chips or modifications in data centers based in China, or that AWS worked with the FBI to investigate or provide data about malicious hardware." Apple affirms that internal investigations have been conducted based on Bloomberg queries, and no evidence was found to support the accusations. The only infected driver was discovered in 2016 on a single Supermicro server found in Apple Labs. It was this incident which may have led to the severed business relationship back in 2016, rather than the discovery of malicious chips or a widespread supply chain attack. Supermicro confirms that they were not aware of any investigation regarding the topic nor were they contacted by any government agency in this regard. 
Bloomberg says the denials are in direct contrast to the testimony of six current and former national security officials, as well as confirmation by 17 anonymous sources which said the nature of the Supermicro compromise was accurate. Bloomberg's investigation has not been confirmed on the record by the FBI. To know about this news in detail, visit Bloomberg News. “Intel ME has a Manufacturing Mode vulnerability, and even giant manufacturers like Apple are not immune,” say researchers Amazon increases the minimum wage of all employees in the US and UK Bloomberg says Google, Mastercard covertly track customers’ offline retail habits via a secret million dollar ad deal


RedHat shares what to expect from next week’s first-ever DNSSEC root key rollover

Melisha Dsouza
05 Oct 2018
4 min read
On Thursday, October 11, 2018, at 16:00 UTC, ICANN will change the cryptographic key that is at the center of the DNS security system, known as DNSSEC. The current key has been in place since July 15, 2010. This switching of the central security key for DNS is called the "Root Key Signing Key (KSK) Rollover". The replacement was originally planned for a year earlier; the procedure was postponed because of data that suggested that a significant number of resolvers were not ready for the rollover. The question now is: are your systems prepared so that DNS will keep functioning for your networks?

DNSSEC is a system of digital signatures to prevent DNS spoofing. Maintaining an up-to-date KSK is essential to ensuring that DNSSEC-validating DNS resolvers continue to function following the rollover. If the KSK isn't up to date, DNSSEC-validating DNS resolvers will be unable to resolve any DNS queries. If the switch happens smoothly, users will not notice any visible changes and their systems will work as usual. However, if their DNS resolvers are not ready to use the new key, users may not be able to reach many websites! That being said, there's good news for those who have been keeping up with recent DNS updates. Since this rollover was delayed for almost a year, most DNS resolver software has been shipping with the new key for quite some time. Users who have stayed up to date will have their systems all set for this change. If not, we've got you covered!

But first, let's do a quick background check. As of today, the root zone is still signed by the old KSK with the ID 19036, also called KSK-2010. The new KSK with the ID 20326 was published in the DNS in July 2017 and is called KSK-2017. KSK-2010 is currently used to sign itself, to sign the Root Zone Signing Key (ZSK), and to sign the new KSK-2017. The rollover only affects the KSK; the ZSK gets updated and replaced more frequently without the need to update the trust anchor inside the resolver.
Source: Suse.com
You can watch ICANN's short video to understand all about the switch. Now that you have understood the switch, let's go through the checks you need to perform.

#1 Check if you have the new DNSSEC Root Key installed
Users need to ensure that the new DNSSEC root key, KSK-2017 (key ID 20326), is installed. To check this, look at the following files:
- bind - see /etc/named.root.key
- unbound / libunbound - see /var/lib/unbound/root.key
- dnsmasq - see /usr/share/dnsmasq/trust-anchors.conf
- knot-resolver - see /etc/knot-resolver/root.keys
There is no need to restart the DNS service unless the server has been running for more than a year and does not support RFC 5011.

#2 A backup plan in case of unexpected problems
The RedHat team assures users that there is a very low probability of DNS issues occurring. In case they do encounter a problem, they have been advised to restart their DNS server. They can also try querying the root key with dig, for example dig . dnskey +dnssec. If all else fails, they should temporarily switch to a public DNS operator.

#3 Check for an incomplete RFC 5011 process
When the switch happens, containers or virtual machines that have old configuration files and new software would only have the soon-to-be-removed DNSSEC root key. This is signalled via RFC 8145 to the root name servers, which provides ICANN with statistics about which keys resolvers trust. The server then updates the key via RFC 5011, but it is possible that it shuts down before the 30-day hold timer has been reached.
It can also happen that the hold timer is reached and the configuration file cannot be written to or the file could be written to, but the image is destroyed on shutdown/reboot and a restart of the container that does not contain the new key. RedHat believes the switch of this cryptographic key will lead to an improved level of security and trust that users can have online! To know more about this rollover, head to RedHat’s official Blog. What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018 Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices Red Hat infrastructure migration solution for proprietary and siloed infrastructure  
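For readers who want to verify the checks described above from a shell, here is a small, hedged example. It assumes the dig utility is installed and that a validating resolver is listening on 127.0.0.1; adjust the addresses to your own setup:

# Fetch the root DNSKEY RRset and confirm that the new key ID 20326 (KSK-2017) is listed
dig . DNSKEY +dnssec +multi | grep -E 'key id = (19036|20326)'

# Check that your resolver still validates: the 'ad' flag in the header means the answer was authenticated
dig +dnssec org. SOA @127.0.0.1 | grep 'flags:'

If the first command does not show key ID 20326, or the second one stops returning the ad flag after the rollover, the trust anchor files listed in check #1 are the place to look.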


Implementing Web application vulnerability scanners with Kali Linux [Tutorial]

Savia Lobo
05 Oct 2018
10 min read
Vulnerability scanners suffer the common shortcomings of all scanners (a scanner can only detect the signature of a known vulnerability; they cannot determine if the vulnerability can actually be exploited; there is a high incidence of false-positive reports). Furthermore, web vulnerability scanners cannot identify complex errors in business logic, and they do not accurately simulate the complex chained attacks used by hackers. This tutorial is an excerpt taken from the book, Mastering Kali Linux for Advanced Penetration Testing - Second Edition written by Vijay Kumar Velu. In this book, we will be using a laboratory environment to validate tools and techniques, and using an application that supports a collaborative approach to penetration testing. This article includes a list of web application vulnerability scanners and how we can implement them using Kali Linux. In an effort to increase reliability, most penetration testers use multiple tools to scan web services; when multiple tools report that a particular vulnerability may exist, this consensus will direct the tester to areas that may require manually verifying the findings. Kali comes with an extensive number of vulnerability scanners for web services and provides a stable platform for installing new scanners and extending their capabilities. This allows penetration testers to increase the effectiveness of testing by selecting scanning tools that: Maximize the completeness (the total number of vulnerabilities that are identified) and accuracy (the vulnerabilities that are real and not false-positive results) of testing. Minimize the time required to obtain usable results. Minimize any negative impacts on the web services being tested. This can include slowing down the system due to an increase of traffic throughput. For example, one of the most common negative effects is a result of testing forms that input data to a database and then email an individual providing an update of the change that has been made-uncontrolled testing of such forms can result in more than 30,000 emails being sent! There is significant complexity in choosing the most effective tool. In addition to the factors already listed, some vulnerability scanners will also launch the appropriate exploit and support the post-exploit activities. For our purposes, we will consider all tools that scan for exploitable weaknesses to be vulnerability scanners. Kali provides access to several different vulnerability scanners, including the following: Scanners that extend the functionality of traditional vulnerability scanners to include websites and associated services (Metasploit framework and Websploit) Scanners that extend the functionality of non-traditional applications, such as web browsers, to support web service vulnerability scanning (OWASP Mantra) Scanners that are specifically developed to support reconnaissance and exploit detection in websites and web services (Arachnid, Nikto, Skipfish, Vega, w3af, and so on) Introduction to Nikto and Vega Nikto is one of the most utilized active web application scanners that performs comprehensive tests against web servers. Basic functionality is to check for 6,700+ potentially dangerous files or programs, along with outdated versions of servers and vulnerabilities specific to versions over 270 servers; server mis-configuration, index files, HTTP methods, and also attempts to identify the installed web server and the software version. 
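Before looking at the individual tools in detail, here is what a first, minimal Nikto run typically looks like; the IP address below is a placeholder for a lab machine you are authorized to scan:

# Basic Nikto scan of a single host on port 80, saving the findings to a text report
nikto -h http://192.168.56.102 -port 80 -output nikto_basic.txt

More targeted options, such as plugin selection and proxying, are covered in the customization section that follows.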
Nikto is released under the GNU General Public License (https://opensource.org/licenses/gpl-license). This Perl-based open source scanner allows IDS evasion and user changes to its scan modules; however, this original web scanner is beginning to show its age and is not as accurate as some of the more modern scanners.

Most testers start testing a website by using Nikto, a simple scanner (particularly with regard to reporting) that generally provides accurate but limited results; a sample output of this scan is shown in the following screenshot:

The next step is to use more advanced scanners that scan a larger number of vulnerabilities; in turn, they can take significantly longer to run to completion. It is not uncommon for complex vulnerability scans (as determined by the number of pages to be scanned as well as the site's complexity, which can include multiple pages that permit user input such as search functions or forms that gather data from the user for a backend database) to take several days to be completed.

One of the most effective scanners, based on the number of verified vulnerabilities discovered, is Subgraph's Vega. As shown in the following screenshot, it scans a target and classifies the vulnerabilities as high, medium, low, and informational. The tester is able to click on the identified results to drill down to specific findings. The tester can also modify the search modules, which are written in Java, to focus on particular vulnerabilities or identify new vulnerabilities:

Vega can help you find vulnerabilities such as reflected cross-site scripting, stored cross-site scripting, blind SQL injection, remote file include, shell injection, and others. Vega also probes TLS/SSL security settings and identifies opportunities for improving the security of your TLS servers. In addition, Vega provides a Proxy section, which allows penetration testers to replay a request and observe the response in order to validate a finding manually (a manual PoC). The following screenshot shows the proxy section of Vega:

Customizing Nikto and Vega

From Nikto version 2.1.1, the community allowed developers to debug and call specific plugins; from version 2.1.2 onwards, you can list all available plugins and then specify a single plugin to perform a scan. There are currently around 35 plugins that can be utilized by penetration testers; the following screenshot provides the list of plugins that are currently available in the latest version of Nikto:

For example, if attackers find banner information such as Apache server 2.2.0, a first step is to route the Nikto scan through Burp or any other proxy tool by running nikto.pl -host <hostaddress> -port <hostport> -useragent <agent> -useproxy http://127.0.0.1:8080. Nikto can then be customized to run specific plugins, for example only Apache user enumeration, with the following command:

nikto.pl -host target.com -Plugins "apacheusers(enumerate,dictionary:users.txt);report_xml" -output apacheusers.xml

Penetration testers should be able to see the following screenshot: When the Nikto plugin is run successfully, the output file apacheusers.xml should include the active users on the target host.

Similar to Nikto, Vega also allows us to customize the scanner by navigating to the Window menu and selecting Preferences, where one can set up a general proxy configuration or even point the traffic to a third-party proxy tool. However, Vega has its own proxy tool that can be utilized.

The following screenshot shows the scanner options that can be set before beginning any web application scan:

Attackers can define their own user agent or mimic any well-known user agent, such as an IRC bot or Googlebot, and also configure the maximum number of total descendants and sub-processes, as well as the number of paths that can be traversed; for example, if the spider reveals www.target.com/admin/, a dictionary entry can extend the URL to www.target.com/admin/secret/. The maximum is set to 16 by default, but attackers can utilize other tools to determine precisely the right number of paths and so maximize the effectiveness of Vega. If protection mechanisms such as a WAF or a network-level IPS are in place, pentesters can also choose to scan the target with a slow rate of connections per second. One can also set the maximum response size; by default it is set to 1 MB (1,024 kB).

Once the preferences are set, the scan can be further customized when adding a new scan. When penetration testers click on a new scan, enter the base URL to scan, and click Next, the following screen lets them customize the scan, as shown in the following screenshot:

Vega provides two sections to customize: one is Injection modules and the other is Response processing modules:

Injection modules: This includes a list of exploit modules that are available as part of the built-in Vega web vulnerability database; Vega tests the target for those vulnerabilities, such as blind SQL injection, XSS, remote file inclusion, local file inclusion, header injection, and so on.
Response processing modules: This includes the list of security misconfigurations that can be picked up from the HTTP response itself, such as directory listing, error pages, cross-domain policies, version control strings, and so on.

Vega also allows testers to add their own plugin modules (https://github.com/subgraph/Vega/). To know about vulnerability scanners for mobile applications, head over to the book.

The OpenVAS network vulnerability scanner

Open Vulnerability Assessment System (OpenVAS) is an open source vulnerability assessment scanner and also a vulnerability management tool often utilized by attackers to scan a wide range of networks; it includes around 47,000 vulnerabilities in its database. However, it can be considered a slow network vulnerability scanner compared with commercial tools such as Nessus, Nexpose, and Qualys.

If OpenVAS is not already installed, make sure your Kali is up to date and install the latest OpenVAS by running the apt-get install openvas command. Once done, run the openvas-setup command to set up OpenVAS. To make sure the installation is okay, penetration testers can run the openvas-check-setup command, which lists the top 10 items that are required to run OpenVAS effectively. Upon successful installation, testers should be able to see the following screenshot:

Next, create an admin user by running openvasmd --user=admin --new-password=YourNewPassword1, and start up the OpenVAS scanner and OpenVAS manager services by running the openvas-start command from the prompt. Depending on bandwidth and computer resources, this could take a while.
Once the installation and update has been completed, penetration testers should be able to access the OpenVAS server on port 9392 with SSL (https://localhost:9392), as shown in the following screenshot: The next step is to validate the user credentials, by entering the username as admin and password with yournewpassword1 and testers should be able to login without any issues and see the following screenshot. Attackers are now set to utilize OpenVAS by entering the target information and clicking Start Scan from the scanner portal: Customizing OpenVAS Unlike any other scanners, OpenVAS is also customizable for scan configuration, it allows the testers to add credentials, disable particular plugins, and set the maximum and minimum number of connections that can be made and so on. The following sample screenshot shows the place where attackers are allowed to change all the required settings to customize it accordingly: To summarize, in this article we focused on multiple vulnerability assessment tools and techniques implementation using Kali Linux. If you've enjoyed this post, do check out our book Mastering Kali Linux for Advanced Penetration Testing - Second Edition to explore approaches to carry out advanced penetration testing in tightly secured environments. Getting Started with Metasploitable2 and Kali Linux Introduction to Penetration Testing and Kali Linux Wireless Attacks in Kali Linux

Wi-Fi Alliance introduces Wi-Fi 6, the start of a Generic Naming Convention for Next-Gen 802.11ax Standard

Melisha Dsouza
04 Oct 2018
4 min read
Yesterday, Wi-Fi Alliance introduced Wi-Fi 6, the designation for devices that support the next generation of WiFi based on 802.11ax standard. Wi-Fi 6 is part of a new naming approach by Wi-Fi Alliance that provides users with an easy-to-understand designation for both the Wi-Fi technology supported by their device and used in a connection the device makes with a Wi-Fi network. This marks the beginning of using generational names for certification programs for all major IEEE 802.11 releases. For instance, instead of devices being called 802.11ax compatible, they will now be called Wi-Fi Certified 6. As video and image files grow with higher resolution cameras and sensors, there is a need for faster transfer speeds and an increasing amount of bandwidth to transfer those files around a wireless network. Wi-Fi Alliance aims to achieve this goal with its Wi-Fi 6 technology. Features of  Wi-Fi 6 Wi-Fi 6 brings an improved user experience It aims to address device and application needs in the consumer and enterprise environment. Wi-Fi 6 will be used to describe the capabilities of a device This is the most advanced of all Wi-Fi generations, bringing faster speeds, greater capacity, and coverage. It will provide an uplink and downlink orthogonal frequency division multiple access (OFDMA) while increasing efficiency and lowering latency for high demand environments Its 1024 quadrature amplitude modulation mode (1024-QAM) enables peak gigabit speeds for bandwidth-intensive use cases Wi-Fi 6 comes with an improved medium access control (MAC) control signaling increases throughput and capacity while reducing latency Increased symbol durations make outdoor network operations more robust Support for customers and Industries The new numerical naming convention will be applied retroactively to previous standards such as 802.11n or 802.11ac. The numerical sequence includes: Wi-Fi 6 to identify devices that support 802.11ax technology Wi-Fi 5 to identify devices that support 802.11ac technology Wi-Fi 4 to identify devices that support 802.11n technology This new consumer-friendly Wi-Fi 6 naming convention will allow users looking for new networking gear to stop focusing on technical naming conventions and focus on an easy-to-remember naming convention. The convention aims to show users at a glance if the device they are considering supports the latest Wi-Fi speeds and features. It will let consumers differentiate phones and wireless routers based on their Wi-Fi capabilities, helping them pick the device that is best suited for their needs. Consumers will better understand the latest Wi-Fi technology advancements and make more informed buying decisions for their connectivity needs. As for manufacturers and OS builders of Wi-Fi devices, they are expected to use the terminology in user interfaces to signify the type of connection made. Some of the biggest names in wireless networking have expressed their views about the change in naming convention, including Netgear, CEVA, Marvell Semiconductor, MediaTek, Qualcomm, Intel, and many more. "Given the central role Wi-Fi plays in delivering connected experiences to hundreds of millions of people every day, and with next-generation technologies like 802.11ax emerging, the Wi-Fi Alliance generational naming scheme for Wi-Fi is an intuitive and necessary approach to defining Wi-Fi’s value for our industry and consumers alike. 
We support this initiative as a global leader in Wi-Fi shipments and deployment of Wi-Fi 6, based on 802.11ax technology, along with customers like Ruckus, Huawei, NewH3C, KDDI Corporation/NEC Platforms, Charter Communications, KT Corp, and many more spanning enterprise, venue, home, mobile, and computing segments." – Rahul Patel, senior vice president and general manager, connectivity and networking, Qualcomm Technologies, Inc. Beginning with Wi-Fi 6, Wi-Fi Alliance certification programs based on major IEEE 802.11 releases, Wi-Fi CERTIFIED 6™ certification, will be implemented in 2019. To know more about this announcement, head over to Wi-Fi Alliance’s official blog. The Haiku operating system has released R1/beta1 Mozilla releases Firefox 62.0 with better scrolling on Android, a dark theme on macOS, and more Anaconda 5.3.0 released, takes advantage of Python’s Speed and feature improvements


Creating a Continuous Integration commit pipeline using Docker [Tutorial]

Savia Lobo
04 Oct 2018
10 min read
The most basic Continuous Integration process is called a commit pipeline. This classic phase, as its name says, starts with a commit (or push in Git) to the main repository and results in a report about the build success or failure. Since it runs after each change in the code, the build should take no more than 5 minutes and should consume a reasonable amount of resources. This tutorial is an excerpt taken from the book, Continuous Delivery with Docker and Jenkins written by Rafał Leszko. This book provides steps to build applications on Docker files and integrate them with Jenkins using continuous delivery processes such as continuous integration, automated acceptance testing, and configuration management. In this article, you will learn how to create Continuous Integration commit pipeline using Docker. The commit phase is always the starting point of the Continuous Delivery process, and it provides the most important feedback cycle in the development process, constant information if the code is in a healthy state.  A developer checks in the code to the repository, the Continuous Integration server detects the change, and the build starts. The most fundamental commit pipeline contains three stages: Checkout: This stage downloads the source code from the repository Compile: This stage compiles the source code Unit test: This stage runs a suite of unit tests Let's create a sample project and see how to implement the commit pipeline. This is an example of a pipeline for the project that uses technologies such as Git, Java, Gradle, and Spring Boot. Nevertheless, the same principles apply to any other technology. Checkout Checking out code from the repository is always the first operation in any pipeline. In order to see this, we need to have a repository. Then, we will be able to create a pipeline. Creating a GitHub repository Creating a repository on the GitHub server takes just a few steps: Go to the https://github.com/ page. Create an account if you don't have one yet. Click on New repository. Give it a name, calculator. Tick Initialize this repository with a README. Click on Create repository. Now, you should see the address of the repository, for example, https://github.com/leszko/calculator.git. Creating a checkout stage We can create a new pipeline called calculator and, as Pipeline script, put the code with a stage called Checkout: pipeline { agent any stages { stage("Checkout") { steps { git url: 'https://github.com/leszko/calculator.git' } } } } The pipeline can be executed on any of the agents, and its only step does nothing more than downloading code from the repository. We can click on Build Now and see if it was executed successfully. Note that the Git toolkit needs to be installed on the node where the build is executed. When we have the checkout, we're ready for the second stage. Compile In order to compile a project, we need to: Create a project with the source code. Push it to the repository. Add the Compile stage to the pipeline. Creating a Java Spring Boot project Let's create a very simple Java project using the Spring Boot framework built by Gradle. Spring Boot is a Java framework that simplifies building enterprise applications. Gradle is a build automation system that is based on the concepts of Apache Maven. The simplest way to create a Spring Boot project is to perform the following steps: Go to the http://start.spring.io/ page. Select Gradle project instead of Maven project (you can also leave Maven if you prefer it to Gradle). 
Fill Group and Artifact (for example, com.leszko and calculator). Add Web to Dependencies. Click on Generate Project. The generated skeleton project should be downloaded (the calculator.zip file). The following screenshot presents the http://start.spring.io/ page: Pushing code to GitHub We will use the Git tool to perform the commit and push operations: In order to run the git command, you need to have the Git toolkit installed (it can be downloaded from https://git-scm.com/downloads). Let's first clone the repository to the filesystem: $ git clone https://github.com/leszko/calculator.git Extract the project downloaded from http://start.spring.io/ into the directory created by Git. If you prefer, you can import the project into IntelliJ, Eclipse, or your favorite IDE tool. As a result, the calculator directory should have the following files: $ ls -a . .. build.gradle .git .gitignore gradle gradlew gradlew.bat README.md src In order to perform the Gradle operations locally, you need to have Java JDK installed (in Ubuntu, you can do it by executing sudo apt-get install -y default-jdk). We can compile the project locally using the following code: $ ./gradlew compileJava In the case of Maven, you can run ./mvnw compile. Both Gradle and Maven compile the Java classes located in the src directory. You can find all possible Gradle instructions (for the Java project) at https://docs.gradle.org/current/userguide/java_plugin.html. Now, we can commit and push to the GitHub repository: $ git add . $ git commit -m "Add Spring Boot skeleton" $ git push -u origin master After running the git push command, you will be prompted to enter the GitHub credentials (username and password). The code is already in the GitHub repository. If you want to check it, you can go to the GitHub page and see the files. Creating a compile stage We can add a Compile stage to the pipeline using the following code: stage("Compile") { steps { sh "./gradlew compileJava" } } Note that we used exactly the same command locally and in the Jenkins pipeline, which is a very good sign because the local development process is consistent with the Continuous Integration environment. After running the build, you should see two green boxes. You can also check that the project was compiled correctly in the console log. Unit test It's time to add the last stage that is Unit test, which checks if our code does what we expect it to do. We have to: Add the source code for the calculator logic Write unit test for the code Add a stage to execute the unit test Creating business logic The first version of the calculator will be able to add two numbers. 
Let's add the business logic as a class in the src/main/java/com/leszko/calculator/Calculator.java file: package com.leszko.calculator; import org.springframework.stereotype.Service; @Service public class Calculator { int sum(int a, int b) { return a + b; } } To execute the business logic, we also need to add the web service controller in a separate file src/main/java/com/leszko/calculator/CalculatorController.java: package com.leszko.calculator; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.RestController; @RestController class CalculatorController { @Autowired private Calculator calculator; @RequestMapping("/sum") String sum(@RequestParam("a") Integer a, @RequestParam("b") Integer b) { return String.valueOf(calculator.sum(a, b)); } } This class exposes the business logic as a web service. We can run the application and see how it works: $ ./gradlew bootRun It should start our web service and we can check that it works by navigating to the browser and opening the page http://localhost:8080/sum?a=1&b=2. This should sum two numbers ( 1 and 2) and show 3 in the browser. Writing a unit test We already have the working application. How can we ensure that the logic works as expected? We have tried it once, but in order to know constantly, we need a unit test. In our case, it will be trivial, maybe even unnecessary; however, in real projects, unit tests can save from bugs and system failures. Let's create a unit test in the file src/test/java/com/leszko/calculator/CalculatorTest.java: package com.leszko.calculator; import org.junit.Test; import static org.junit.Assert.assertEquals; public class CalculatorTest { private Calculator calculator = new Calculator(); @Test public void testSum() { assertEquals(5, calculator.sum(2, 3)); } } We can run the test locally using the ./gradlew test command. Then, let's commit the code and push it to the repository: $ git add . $ git commit -m "Add sum logic, controller and unit test" $ git push Creating a unit test stage Now, we can add a Unit test stage to the pipeline: stage("Unit test") { steps { sh "./gradlew test" } } In the case of Maven, we would have to use ./mvnw test. When we build the pipeline again, we should see three boxes, which means that we've completed the Continuous Integration pipeline: Placing the pipeline definition inside Jenkinsfile All the time, so far, we created the pipeline code directly in Jenkins. This is, however, not the only option. We can also put the pipeline definition inside a file called Jenkinsfile and commit it to the repository together with the source code. This method is even more consistent because the way your pipeline looks is strictly related to the project itself. For example, if you don't need the code compilation because your programming language is interpreted (and not compiled), then you won't have the Compile stage. The tools you use also differ depending on the environment. We used Gradle/Maven because we've built the Java project; however, in the case of a project written in Python, you could use PyBuilder. It leads to the idea that the pipelines should be created by the same people who write the code, developers. Also, the pipeline definition should be put together with the code, in the repository. 
This approach brings immediate benefits, as follows: In case of Jenkins' failure, the pipeline definition is not lost (because it's stored in the code repository, not in Jenkins) The history of the pipeline changes is stored Pipeline changes go through the standard code development process (for example, they are subjected to code reviews) Access to the pipeline changes is restricted exactly in the same way as the access to the source code Creating Jenkinsfile We can create the Jenkinsfile and push it to our GitHub repository. Its content is almost the same as the commit pipeline we wrote. The only difference is that the checkout stage becomes redundant because Jenkins has to checkout the code (together with Jenkinsfile) first and then read the pipeline structure (from Jenkinsfile). This is why Jenkins needs to know the repository address before it reads Jenkinsfile. Let's create a file called Jenkinsfile in the root directory of our project: pipeline { agent any stages { stage("Compile") { steps { sh "./gradlew compileJava" } } stage("Unit test") { steps { sh "./gradlew test" } } } } We can now commit the added files and push to the GitHub repository: $ git add . $ git commit -m "Add sum Jenkinsfile" $ git push Running pipeline from Jenkinsfile When Jenkinsfile is in the repository, then all we have to do is to open the pipeline configuration and in the Pipeline section: Change Definition from Pipeline script to Pipeline script from SCM Select Git in SCM Put https://github.com/leszko/calculator.git in Repository URL After saving, the build will always run from the current version of Jenkinsfile into the repository. We have successfully created the first complete commit pipeline. It can be treated as a minimum viable product, and actually, in many cases, it's sufficient as the Continuous Integration process. In the next sections, we will see what improvements can be done to make the commit pipeline even better. To summarize, we covered some aspects of the Continuous Integration pipeline, which is always the first step for Continuous Delivery. If you've enjoyed reading this post, do check out the book,  Continuous Delivery with Docker and Jenkins to know more on how to deploy applications using Docker images and testing them with Jenkins. Gremlin makes chaos engineering with Docker easier with new container discovery feature Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login Is your Enterprise Measuring the Right DevOps Metrics?
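Since the book pairs Jenkins with Docker, it is worth noting that the same Jenkinsfile can run its stages inside a container instead of directly on the agent. The following sketch is not part of the chapter; it assumes the Docker Pipeline plugin is installed on the Jenkins instance and uses the public gradle image purely as an example:

pipeline {
    // Run every stage inside a throwaway Gradle container,
    // so the Jenkins node itself only needs Docker installed
    agent {
        docker { image 'gradle:jdk8' }
    }
    stages {
        stage("Compile") {
            steps { sh "gradle compileJava" }
        }
        stage("Unit test") {
            steps { sh "gradle test" }
        }
    }
}

The behavior is the same as the Jenkinsfile above; the difference is that the build tools live in the image rather than on the Jenkins node.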


9 reasons why Rust programmers love Rust

Richa Tripathi
03 Oct 2018
8 min read
The 2018 survey of the RedMonk Programming Language Rankings marked the entry of a new programming language in their Top 25 list. It has been an incredibly successful year for the Rust programming language in terms of its popularity. It also jumped from the 46th most popular language on GitHub to the 18th position. The Stack overflow survey of 2018 is another indicator of the rise of Rust programming language. Almost 78% of the developers who are working with Rust loved working on it. It topped the list of the most loved programming language among the developers who took the survey for a straight third year in the row. Not only that but it ranked 8th in the most wanted programming language in the survey, which means that the respondent of the survey who has not used it yet but would like to learn. Although, Rust was designed as a low-level language, best suited for systems, embedded, and other performance critical code, it is gaining a lot of traction and presents a great opportunity for web developers and game developers. RUST is also empowering novice developers with the tools to start shipping code fast. So, why is Rust so tempting? Let's explore the high points of this incredible language and understand the variety of features that make it interesting to learn. Automatic Garbage Collection Garbage collection and non-memory resources often create problems with some systems languages. But Rust pays no head to garbage collection and removes the possibilities of failures caused by them. In Rust, garbage collection is completely taken care of by RAII (Resource Acquisition Is Initialization). Better support for Concurrency Concurrency and parallelism are incredibly imperative topics in computer science and are also a hot topic in the industry today. Computers are gaining more and more cores, yet many programmers aren't prepared to fully utilize the power of them. Handling concurrent programming safely and efficiently is another major goal of Rust language. Concurrency is difficult to reason about. In Rust, there is a strong, static type system that helps to reason about your code. As such, Rust also gives you two traits Send and Sync to help you make sense of code that can possibly be concurrent. Rust's standard library also provides a library for threads, which enable you to run Rust code in parallel. You can also use Rust’s threads as a simple isolation mechanism. Error Handling in Rust is beautiful A programmer is bound to make errors, irrespective of the programming language they use. Making errors while programming is normal, but it's the error handling mechanism of that programming language, which enhances the experience of writing the code. In Rust, errors are divided into types: unrecoverable errors and recoverable errors. Unrecoverable errors An error is classified as 'unrecoverable' when there is no other option other than to abort the program. The panic! macro in Rust is very helpful in these cases, especially when a bug has been detected in the code but the programmer is not clear how to handle that error. The panic! macro generates a failure message that helps the user to debug a problem. It also helps to stop the execution before more catastrophic events occur. Recoverable errors The errors which can be handled easily or which do not have a serious impact on the execution of the program are known as recoverable errors. It is represented by the Result<T, E>. The Result<T, E> is an enum that consists of two variants, i.e., OK<T> and Err<E>. 
It describes the possible error in the program. OK<T>: The 'T' is a type of value which returns the OK variant in the success case. It is an expected outcome. Err<E>: The 'E' is a type of error which returns the ERR variant in the failure. It is an unexpected outcome. Resource Management The one attribute that makes Rust stand out (and completely overpowers Google’s Go for that matter), is the algorithm used for resource management. Rust follows the C++ lead, with concepts like borrowing and mutable borrowing on the plate and thus resource management becomes an elegant process. Furthermore, Rust didn’t need a second chance to know that resource management is not just about memory usage; the fact that they did it right first time makes them a standout performer on this point. Although the Rust documentation does a good job of explaining the technical details, the article by Tim explains the concept in a much friendlier and easy to understand language. As such I thought, it would be good to list his points as well here. The following excerpt is taken from the article written by M.Tim Jones. Reusable code via modules Rust allows you to organize code in a way that promotes its reuse. You attain this reusability by using modules which are nothing but organized code as packages that other programmers can use. These modules contain functions, structures and even other modules that you can either make public, which can be accessed by the users of the module or you can make it private which can be used only within the module and not by the module user. There are three keywords to create modules, use modules, and modify the visibility of elements in modules. The mod keyword creates a new module The use keyword allows you to use the module (expose the definitions into the scope to use them) The pub keyword makes elements of the module public (otherwise, they're private). Cleaner code with better safety checks In Rust, the compiler enforces memory safety and another checking that make the programming language safe. Here, you will never have to worry about dangling pointers or bother using an object after it has been freed. These things are part of the core Rust language that allows you to write clean code. Also, Rust includes an unsafe keyword with which you can disable checks that would typically result in a compilation error. Data types and Collections in Rust Rust is a statically typed programming language, which means that every value in Rust must have a specified data type. The biggest advantage of static typing is that a large class of errors is identified earlier in the development process. These data types can be broadly classified into two types: scalar and compound. Scalar data types represent a single value like integer, floating-point, and character, which are commonly present in other programming languages as well. But Rust also provides compound data types which allow the programmers to group multiple values in one type such as tuples and arrays. The Rust standard library provides a number of data structures which are also called collections. Collections contain multiple values but they are different from the standard compound data types like tuples and arrays which we discussed above. The biggest advantage of using collections is the capability of not specifying the amount of data at compile time which allows the structure to grow and shrink as the program runs. Vectors, Strings, and hash maps are the three most commonly used collections in Rust. 
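To tie the Result enum and the collections mentioned above together, here is a small illustrative snippet. It is not from the article, and parse_age is a made-up helper used only for this example:

use std::collections::HashMap;

// A recoverable error: parsing can fail, so the function returns a Result
fn parse_age(input: &str) -> Result<u32, std::num::ParseIntError> {
    input.trim().parse::<u32>()
}

fn main() {
    // Collections can grow and shrink at runtime
    let names = vec!["Susan", "Calvin"];
    let mut ages: HashMap<&str, u32> = HashMap::new();
    ages.insert("Susan", 30);

    // Handle both variants explicitly instead of crashing
    match parse_age("42") {
        Ok(age) => println!("Parsed age: {}", age),
        Err(e) => println!("Could not parse age: {}", e),
    }

    println!("{:?} {:?}", names, ages);
}

Note that the variants are written Ok and Err in actual Rust code.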
The friendly Rust community Rust owes it success to the breadth and depth of engagement of its vibrant community, which supports a highly collaborative process for helping the language to evolve in a truly open-source way. Rust is built from the bottom up, rather than any one individual or organization controlling the fate of the technology. Reliable Robust Release cycles of Rust What is common between Java, Spring, and Angular? They never release their update when they promise to. The release cycle of the Rust community works with clockwork precision and is very reliable. Here’s an overview of the dates and versions: In mid-September 2018, the Rust team released Rust 2018 RC1 version. Rust 2018 is the first major new edition of Rust (after Rust 1.0 released in 2015). This new release would mark the culmination of the last three years of Rust’s development from the core team, and brings the language together in one neat package. This version includes plenty of new features like raw identifiers, better path clarity, new optimizations, and other additions. You can learn more about the Rust language and its evolution at the Rust blog and download from the Rust language website. Note: the headline was edited 09.06.2018 to make it clear that Rust was found to be the most loved language among developers using it. Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes Rust as a Game Programming Language: Is it any good? Rust Language Server, RLS 1.0 releases with code intelligence, syntax highlighting and more

“Intel ME has a Manufacturing Mode vulnerability, and even giant manufacturers like Apple are not immune,” say researchers

Savia Lobo
03 Oct 2018
4 min read
Yesterday, a group of European information security researchers announced that they have discovered a vulnerability in Intel’s Management Engine (Intel ME) INTEL-SA-00086. They say that the root of this problem is an undocumented Intel ME mode, specifically known as the Manufacturing Mode. Undocumented commands enable overwriting SPI flash memory and implementing the doomsday scenario. The vulnerability could locally exploit of an ME vulnerability (INTEL-SA-00086). What is Manufacturing Mode? Intel ME Manufacturing Mode is intended for configuration and testing of the end platform during manufacturing. However, this mode and its potential risks are not described anywhere in Intel's public documentation. Ordinary users do not have the ability to disable this mode since the relevant utility (part of Intel ME System Tools) is not officially available. As a result, there is no software that can protect, or even notify, the user if this mode is enabled. This mode allows configuring critical platform settings stored in one-time-programmable memory (FUSEs). These settings include those for BootGuard (the mode, policy, and hash for the digital signing key for the ACM and UEFI modules). Some of them are referred to as FPFs (Field Programmable Fuses). An output of the -FPFs option in FPT In addition to FPFs, in Manufacturing Mode the hardware manufacturer can specify settings for Intel ME, which are stored in the Intel ME internal file system (MFS) on SPI flash memory. These parameters can be changed by reprogramming the SPI flash. The parameters are known as CVARs (Configurable NVARs, Named Variables). CVARs, just like FPFs, can be set and read via FPT. Manufacturing mode vulnerability in Intel chips within Apple laptops The researchers analyzed several platforms from a number of manufacturers, including Lenovo and Apple MacBook Prо laptops. The Lenovo models did not have any issues related to Manufacturing Mode. However, they found that the Intel chipsets within the Apple laptops are running in Manufacturing Mode and was found to include the vulnerability CVE-2018-4251. This information was reported to Apple and the vulnerability was patched in June, in the macOS High Sierra update 10.13.5. By exploiting CVE-2018-4251, an attacker could write old versions of Intel ME (such as versions containing vulnerability INTEL-SA-00086) to memory without needing an SPI programmer and without physical access to the computer. Thus, a local vector is possible for exploitation of INTEL-SA-00086, which enables running arbitrary code in ME. The researchers have also stated, in the notes for the INTEL-SA-00086 security bulletin, Intel does not mention enabled Manufacturing Mode as a method for local exploitation in the absence of physical access. Instead, the company incorrectly claims that local exploitation is possible only if access settings for SPI regions have been misconfigured. How can users save themselves from this vulnerability? To keep users safe, the researchers decided to describe how to check the status of Manufacturing Mode and how to disable it. Intel System Tools includes MEInfo in order to allow obtaining thorough diagnostic information about the current state of ME and the platform overall. They demonstrated this utility in their previous research about the undocumented HAP (High Assurance Platform) mode and showed how to disable ME. 
The utility, when called with the -FWSTS flag, displays a detailed description of status HECI registers and the current status of Manufacturing Mode (when the fourth bit of the FWSTS status register is set, Manufacturing Mode is active). Example of MEInfo output They also created a program for checking the status of Manufacturing Mode if the user for whatever reason does not have access to Intel ME System Tools. Here is what the script shows on affected systems: mmdetect script To disable Manufacturing Mode, FPT has a special option (-CLOSEMNF) that allows setting the recommended access rights for SPI flash regions in the descriptor. Here is what happens when -CLOSEMNF is entered: Process of closing Manufacturing Mode with FPT Thus, the researchers demonstrated that Intel ME has a Manufacturing Mode problem. Even major commercial manufacturers such as Apple are not immune to configuration mistakes on Intel platforms. Also, there is no public information on the topic, leaving end users in the dark about weaknesses that could result in data theft, persistent irremovable rootkits, and even ‘bricking’ of hardware. To know about this vulnerability in detail, visit Positive research’s blog. Meet ‘Foreshadow’: The L1 Terminal Fault in Intel’s chips SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets Intel faces backlash on Microcode Patches after it prohibited Benchmarking or Comparison  


How to deploy Serverless Applications in Go using AWS Lambda [Tutorial]

Savia Lobo
03 Oct 2018
12 min read
Building a serverless application allows you to focus on your application code instead of managing and operating infrastructure. If you choose AWS for this purpose, you do not have to think about provisioning or configuring servers since AWS will handle all of this for you. This reduces your infrastructure management burden and helps you get faster time-to-market. This tutorial is an excerpt taken from the book Hands-On Serverless Applications with Go written by Mohamed Labouardy. In this book, you will learn how to design and build a production-ready application in Go using AWS serverless services with zero upfront infrastructure investment. This article will cover the following points: Build, deploy, and manage our Lambda functions going through some advanced AWS CLI commands Publish multiple versions of the API Learn how to separate multiple deployment environments (sandbox, staging, and production) with aliases Cover the usage of the API Gateway stage variables to change the method endpoint's behavior. Lambda CLI commands In this section, we will go through the various AWS Lambda commands that you might use while building your Lambda functions. We will also learn how you can use them to automate your deployment process. The list-functions command As its name implies, it lists all Lambda functions in the AWS region you provided. The following command will return all Lambda functions in the North Virginia region: aws lambda list-functions --region us-east-1 For each function, the response includes the function's configuration information (FunctionName, Resources usage, Environment variables, IAM Role, Runtime environment, and so on), as shown in the following screenshot: To list only some attributes, such as the function name, you can use the query filter option, as follows: aws lambda list-functions --query Functions[].FunctionName[] The create-function command You should be familiar with this command as it has been used multiple times to create a new Lambda function from scratch. In addition to the function's configuration, you can use the command to provide the deployment package (ZIP) in two ways: ZIP file: It provides the path to the ZIP file of the code you are uploading with the --zip-file option: aws lambda create-function --function-name UpdateMovie \ --description "Update an existing movie" \ --runtime go1.x \ --role arn:aws:iam::ACCOUNT_ID:role/UpdateMovieRole \ --handler main \ --environment Variables={TABLE_NAME=movies} \ --zip-file fileb://./deployment.zip \ --region us-east-1a S3 Bucket object: It  provides the S3 bucket and object name with the --code option: aws lambda create-function --function-name UpdateMovie \ --description "Update an existing movie" \ --runtime go1.x \ --role arn:aws:iam::ACCOUNT_ID:role/UpdateMovieRole \ --handler main \ --environment Variables={TABLE_NAME=movies} \ --code S3Bucket=movies-api-deployment-package,S3Key=deployment.zip \ --region us-east-1 The as-mentioned commands will return a summary of the function's settings in a JSON format, as follows: It's worth mentioning that while creating your Lambda function, you might override the compute usage and network settings based on your function's behavior with the following options: --timeout: The default execution timeout is three seconds. When the three seconds are reached, AWS Lambda terminates your function. The maximum timeout you can set is five minutes. --memory-size: The amount of memory given to your function when executed. 
The update-function-code command

In addition to the AWS Management Console, you can update your Lambda function's code with the AWS CLI. The command requires the target Lambda function name and the new deployment package. As with the previous command, you can provide the package in two ways:

The path to the new .zip file:

aws lambda update-function-code --function-name UpdateMovie \
 --zip-file fileb://./deployment-1.0.0.zip \
 --region us-east-1

The S3 bucket where the .zip file is stored:

aws lambda update-function-code --function-name UpdateMovie \
 --s3-bucket movies-api-deployment-packages \
 --s3-key deployment-1.0.0.zip \
 --region us-east-1

This operation prints a new unique ID (called RevisionId) for each change in the Lambda function's code:

The get-function-configuration command

In order to retrieve the configuration information of a Lambda function, issue the following command:

aws lambda get-function-configuration --function-name UpdateMovie --region us-east-1

The preceding command provides the same information in the output that was displayed when the create-function command was used. To retrieve the configuration of a specific Lambda version or alias (covered in the following section), you can use the --qualifier option.

The invoke command

So far, we have invoked our Lambda functions directly from the AWS Lambda Console and through HTTP events with API Gateway. In addition to that, Lambda can be invoked from the AWS CLI with the invoke command:

aws lambda invoke --function-name UpdateMovie result.json

The preceding command invokes the UpdateMovie function and saves the function's output in the result.json file:

The status code is 400, which is expected, as UpdateMovie requires a JSON input. Let's see how to provide a JSON payload to our function with the invoke command.

Head back to the DynamoDB movies table, and pick a movie that you want to update. In this example, we will update the movie with the ID of 13, shown as follows:

Create a JSON file with a body attribute that contains the new movie item attributes, as the Lambda function expects the input to be in the API Gateway proxy request format:

{
 "body": "{\"id\":\"13\", \"name\":\"Deadpool 2\"}"
}

Finally, run the invoke command again with the JSON file as the input parameter:

aws lambda invoke --function-name UpdateMovie --payload file://input.json result.json

If you print the result.json content, the updated movie should be returned, shown as follows:

You can verify that the movie's name was updated in the DynamoDB table by invoking the FindAllMovies function:

aws lambda invoke --function-name FindAllMovies result.json

The body attribute should contain the newly updated movie, shown as follows:

Head back to the DynamoDB Console; the movie with the ID of 13 should have a new name, as shown in the following screenshot:

The delete-function command

To delete a Lambda function, you can use the following command:

aws lambda delete-function --function-name UpdateMovie

By default, the command deletes all of the function's versions and aliases. To delete a specific version or alias, you might want to use the --qualifier option.
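For example, assuming a version with ID 1 has been published for UpdateMovie, the following command would remove only that version, leaving $LATEST and any other versions untouched:

aws lambda delete-function --function-name UpdateMovie --qualifier 1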
By now, you should be familiar with all the AWS CLI commands you are likely to need while building your serverless applications on AWS Lambda. In the upcoming section, we will see how to create different versions of your Lambda functions and maintain multiple environments with aliases.

Versions and aliases

When you're building your serverless application, you must separate your deployment environments to test new changes without impacting production. Therefore, having multiple versions of your Lambda functions makes sense.

Versioning

A version represents a state of your function's code and configuration at a point in time. By default, each Lambda function has a $LATEST version pointing to the latest changes of your function, as shown in the following screenshot:

In order to create a new version from the $LATEST version, click on Actions and Publish new version. Let's call it 1.0.0, as shown in the next screenshot:

The new version will be created with ID=1 (incremental). Note the Lambda function's ARN at the top of the window in the following screenshot; it now includes the version ID:

Once the version is created, you cannot update the function code, shown as follows:

Moreover, advanced settings, such as IAM roles, network configuration, and compute usage, cannot be changed, shown as follows:

Versions are immutable, which means they cannot be changed once they're published; only the $LATEST version is editable.

Now that we know how to publish a new version from the console, let's publish one with the AWS CLI. First, we need to update the FindAllMovies function, as we cannot publish a new version if no changes have been made to $LATEST since publishing version 1.0.0.

The new version will add a pagination system: the function will return only the number of items requested by the user. The following code reads the Count header parameter, converts it to a number, and uses the Scan operation with the Limit parameter to fetch the movies from DynamoDB:

func findAll(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
  size, err := strconv.Atoi(request.Headers["Count"])
  if err != nil {
    return events.APIGatewayProxyResponse{
      StatusCode: http.StatusBadRequest,
      Body:       "Count Header should be a number",
    }, nil
  }
  ...
  svc := dynamodb.New(cfg)
  req := svc.ScanRequest(&dynamodb.ScanInput{
    TableName: aws.String(os.Getenv("TABLE_NAME")),
    Limit:     aws.Int64(int64(size)),
  })
  ...
}

Next, we update the FindAllMovies Lambda function's code with the update-function-code command:

aws lambda update-function-code --function-name FindAllMovies \
 --zip-file fileb://./deployment.zip

Then, publish a new version, 1.1.0, based on the current configuration and code with the following command:

aws lambda publish-version --function-name FindAllMovies --description 1.1.0

Go back to the AWS Lambda Console and navigate to your FindAllMovies function; a new version should have been created with a new ID=2, as shown in the following screenshot:

Now that our versions are created, let's test them out by using the AWS CLI invoke command.
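If you want to confirm which versions have been published before invoking them, the list-versions-by-function command returns them all; the optional --query filter below prints only the version numbers:

aws lambda list-versions-by-function --function-name FindAllMovies \
 --query Versions[].Version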
FindAllMovies v1.0.0

Invoke the FindAllMovies v1.0.0 version by passing its ID in the qualifier parameter, with the following command:

aws lambda invoke --function-name FindAllMovies --qualifier 1 result.json

result.json should contain all the movies in the DynamoDB movies table, shown as follows:

To know more about the output of FindAllMovies v1.1.0, and more about semantic versioning, head over to the book.

Aliases

An alias is a pointer to a specific version; it allows you to promote a function from one environment to another (such as staging to production). Aliases are mutable, unlike versions, which are immutable.

To illustrate the concept of aliases, we will create two of them, as illustrated in the following diagram: a Production alias pointing to the FindAllMovies function's 1.0.0 version, and a Staging alias pointing to its 1.1.0 version. Then, we will configure API Gateway to use these aliases instead of the $LATEST version:

Head back to the FindAllMovies configuration page. If you click on the Qualifiers drop-down list, you should see a default alias called Unqualified pointing to your $LATEST version, as shown in the following screenshot:

To create a new alias, click on Actions and then Create a new alias called Staging. Select the version you just published (version 2, which corresponds to 1.1.0) as the target, shown as follows:

Once created, the new alias should be added to the list of aliases, shown as follows:

Next, create a new alias for the Production environment that points to the 1.0.0 version using the AWS command line:

aws lambda create-alias --function-name FindAllMovies \
 --name Production --description "Production environment" \
 --function-version 1

Similarly, the new alias should be successfully created:

Now that our aliases have been created, let's configure API Gateway to use those aliases with stage variables.
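To verify that each alias points to the intended version, and to invoke the function through an alias directly from the CLI (invoking the Production alias here behaves exactly like invoking version 1.0.0), you can run the following commands:

# List the function's aliases and the versions they point to
aws lambda list-aliases --function-name FindAllMovies

# Invoke the function through the Production alias
aws lambda invoke --function-name FindAllMovies --qualifier Production result.json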
Stage variables

Stage variables are environment variables that can be used to change the behavior of the API Gateway methods at runtime for each deployment stage. The following steps illustrate how to use stage variables with API Gateway.

On the API Gateway Console, navigate to the Movies API, click on the GET method, and update the target Lambda Function to use a stage variable instead of a hardcoded Lambda function name (for example, ${stageVariables.lambda}), as shown in the following screenshot:

When you save it, a new prompt will ask you to grant API Gateway the permissions to call your Lambda function aliases, as shown in the following screenshot:

Execute the following commands to allow API Gateway to invoke the Production and Staging aliases.

Production alias:

aws lambda add-permission --function-name "arn:aws:lambda:us-east-1:ACCOUNT_ID:function:FindAllMovies:Production" \
 --source-arn "arn:aws:execute-api:us-east-1:ACCOUNT_ID:API_ID/*/GET/movies" \
 --principal apigateway.amazonaws.com \
 --statement-id STATEMENT_ID \
 --action lambda:InvokeFunction

Staging alias:

aws lambda add-permission --function-name "arn:aws:lambda:us-east-1:ACCOUNT_ID:function:FindAllMovies:Staging" \
 --source-arn "arn:aws:execute-api:us-east-1:ACCOUNT_ID:API_ID/*/GET/movies" \
 --principal apigateway.amazonaws.com \
 --statement-id STATEMENT_ID \
 --action lambda:InvokeFunction

Then, create a new stage called production, as shown in the next screenshot:

Next, click on the Stage Variables tab, create a new stage variable called lambda, and set FindAllMovies:Production as its value, shown as follows:

Do the same for the staging environment, with the lambda variable pointing to the Lambda function's Staging alias, shown as follows:

To test the endpoint, use the cURL command or any REST client you're familiar with; I opt for Postman. A GET request to the production stage's invoke URL should return all the movies in the database, shown as follows:

Do the same for the staging environment, with a new header called Count set to 4; you should get only four movie items in return, shown as follows:

That's how you can maintain multiple environments for your Lambda functions. You can now easily promote the 1.1.0 version into production by changing the Production alias to point to 1.1.0 instead of 1.0.0, and roll back to the previous working version in case of failure, all without changing the API Gateway settings.
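This promotion can also be scripted: a single update-alias call repoints the Production alias at the newer version (ID 2 in this walkthrough), and rolling back is the same command with --function-version 1:

aws lambda update-alias --function-name FindAllMovies \
 --name Production \
 --function-version 2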
To summarize, we learned how to deploy serverless applications using AWS Lambda functions. If you've enjoyed reading this and want to know how to set up a CI/CD pipeline from scratch to automate the process of deploying Lambda functions to production with the Go programming language, check out the book, Hands-On Serverless Applications with Go.

Read more:

Keep your serverless AWS applications secure [Tutorial]

Azure Functions 2.0 launches with better workload support for serverless

How Serverless computing is making AI development easier