Understanding the Dependencies of a C++ Application

Packt
05 Apr 2017
9 min read
This article by Richard Grimes, author of the book Beginning C++ Programming, explains the dependencies of a C++ application. A C++ project produces an executable or library, which the linker builds from object files; the executable or library is therefore dependent upon those object files. Each object file is compiled from a C++ source file (and potentially one or more header files), so the object file is dependent upon those C++ source and header files. Understanding dependencies is important because it tells you the order in which to compile the files in your project, and it lets you speed up builds by recompiling only the files that have changed.

Libraries

When you include a file within your source file, the code within that header file becomes accessible to your code. Your include file may contain whole function or class definitions (these will be covered in later chapters), but this can result in a problem: multiple definitions of a function or class. Instead, you can declare a class or a function prototype, which indicates how calling code will call the function without actually defining it. Clearly the code will have to be defined elsewhere, which could be a source file or a library, but the compiler will be happy because it only sees one definition.

A library is code that has already been defined; it has been fully debugged and tested, so users should not need access to the source code. The C++ Standard Library is mostly shared through header files, which helps when you debug your code, but you must resist any temptation to edit these files. Other libraries are provided as compiled libraries. There are essentially two types of compiled library: static libraries and dynamic link libraries. If you use a static library, the compiler copies the compiled code that you use from the static library and places it in your executable. If you use a dynamic link (or shared) library, the linker adds information used at runtime (it may be when the executable is loaded, or it may even be delayed until the function is called) to load the shared library into memory and access the function. Windows uses the extension lib for static libraries and dll for dynamic link libraries. GNU gcc uses the extension a for static libraries and so for shared libraries.

If you use library code in a static or dynamic link library, the compiler needs to know that you are calling the function correctly: that your code calls the function with the correct number of parameters of the correct types. This is the purpose of a function prototype. It gives the compiler the information it needs about calling the function without providing the actual body of the function, the function definition. In general, the C++ Standard Library will be included into your code through the standard header files. The C Runtime Library (which provides some of the code for the C++ Standard Library) will be statically linked, but if the compiler provides a dynamically linked version you will have a compiler option to use that instead.

Pre-compiled Headers

When you include a file into your source file, the preprocessor includes the contents of that file (after taking into account any conditional compilation directives) and, recursively, any files included by that file. As illustrated earlier, this could result in thousands of lines of code.
As you develop your code you will often compile the project so that you can test it. Every time you compile, the code defined in the header files is also compiled, even though the code in the library header files will not have changed. With a large project this can make compilation take a long time. To get around this problem, compilers often offer an option to pre-compile headers that will not change. Creating and using precompiled headers is compiler specific. For example, with gcc you compile a header as if it were a C++ source file (using the -x switch) and the compiler creates a file with the extension gch. When gcc compiles source files that use the header, it searches for the gch file; if it finds the precompiled header it uses that, otherwise it uses the header file.

In Visual C++ the process is a little more complicated, because you have to specifically tell the compiler to look for a precompiled header when it compiles a source file. The convention in Visual C++ projects is to have a source file called stdafx.cpp whose single line includes the file stdafx.h. You put all your stable header file includes in stdafx.h. Next, you create a precompiled header by compiling stdafx.cpp using the /Yc compiler option to specify that stdafx.h contains the stable headers to compile. This creates a pch file (typically, Visual C++ names it after your project) containing the code compiled up to the point of the inclusion of the stdafx.h header file. Your other source files must include the stdafx.h header file as the first header file, but they may also include other files. When you compile your source files, you use the /Yu switch to specify the stable header file (stdafx.h), and the compiler uses the precompiled pch file instead of the header. When you examine large projects you will often find that precompiled headers are used, and as you can see, this alters the file structure of the project. The example later in this chapter shows how to create and use precompiled headers.

Project Structure

It is important to organize your code into modules so that you can maintain it effectively. Even if you are writing C-like procedural code (that is, your code involves calls to functions in a linear way) you will benefit from organizing it into modules. For example, you may have functions that manipulate strings and other functions that access files, so you may decide to put the definitions of the string functions in one source file, string.cpp, and the definitions of the file functions in another file, file.cpp. So that other modules in the project can use these functions, you must declare the prototypes of the functions in a header file and include that header in each module that uses the functions. There is no absolute rule in the language about the relationship between header files and the source files that contain the definitions of the functions. You may have a header file called string.h for the functions in string.cpp and a header file called file.h for the functions in file.cpp, or you may have just one file called utilities.h that contains the declarations for all the functions in both files. The only rule you have to abide by is that, at compile time, the compiler must have access to a declaration of each function used in the current source file, either through a header file or through the function definition itself.
The compiler will not look forward in a source file, so if a function calls another function defined in the same source file, that called function must either be defined before the calling function or have a prototype declaration. This leads to a typical convention of having a header file associated with each source file, containing the prototypes of the functions in the source file, which the source file itself includes. This convention becomes more important when you write classes.

Managing Dependencies

When a project is built with a build tool, checks are performed to see whether the outputs of the build exist and, if not, the appropriate actions are performed to build them. In common terminology, the output of a build step is called a target and the inputs of the build step (for example, source files) are the dependencies of that target. Each target's dependencies are the files used to make it. The dependencies may themselves be targets of other build actions and have their own dependencies. Consider, for example, a project with three source files (main.cpp, file1.cpp, file2.cpp), each of which includes the same header, utils.h, which is precompiled (hence there is a fourth source file, utils.cpp, that contains only the include of utils.h). All of the source files depend on utils.pch, which in turn depends upon utils.h. The source file main.cpp has the main function and calls functions in the other two source files (file1.cpp and file2.cpp), accessing them through the associated header files file1.h and file2.h.

On the first compilation the build tool sees that the executable depends on the four object files, so it looks for the rule to build each one. For the three C++ source files this means compiling the cpp files, but since utils.obj is used to support the precompiled header, its build rule differs from that of the other files. When the build tool has made these object files it links them together, along with any library code (not shown here). Subsequently, if you change file2.cpp and build the project, the build tool sees that only file2.cpp has changed, and since only file2.obj depends on file2.cpp, all the tool needs to do is compile file2.cpp and then link the new file2.obj with the existing object files to create the executable. If you change the header file file2.h, the build tool sees that two files depend on this header, file2.cpp and main.cpp, so it compiles these two source files and links the new file2.obj and main.obj with the existing object files to form the executable. If, however, the precompiled header file, utils.h, changes, all of the source files have to be compiled (a minimal sketch of this timestamp-based rebuild check appears after the summary).

Summary

For a small project, dependencies are easy to manage; as you have seen, for a single source file project you do not even have to worry about calling the linker, because the compiler does that automatically. As a C++ project gets bigger, managing dependencies gets more complex, and this is where development environments like Visual C++ become vital.
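To make the rebuild decision described under Managing Dependencies concrete, here is a minimal, hypothetical sketch (written in Python, and not from the book) of how a build tool can decide whether a target is out of date by comparing file modification times; the file names mirror the example project above.

    import os

    def is_out_of_date(target, dependencies):
        """Return True if the target is missing or older than any of its dependencies."""
        if not os.path.exists(target):
            return True
        target_time = os.path.getmtime(target)
        return any(os.path.getmtime(dep) > target_time for dep in dependencies)

    # A simplified dependency graph mirroring the example project.
    rules = {
        'file2.obj': ['file2.cpp', 'file2.h', 'utils.h'],
        'main.obj':  ['main.cpp', 'file1.h', 'file2.h', 'utils.h'],
    }

    for target, deps in rules.items():
        if is_out_of_date(target, deps):
            print('rebuild', target)      # a real tool would invoke the compiler here
        else:
            print('up to date', target)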
Layout Management for Python GUI

Packt
05 Apr 2017
13 min read
In this article written by Burkhard A. Meier, the author of the book Python GUI Programming Cookbook - Second Edition, we will lay out our GUI using Python 3.6 and above:

- Arranging several labels within a label frame widget
- Using padding to add space around widgets
- How widgets dynamically expand the GUI
- Aligning the GUI widgets by embedding frames within frames

In this article, we will explore how to arrange widgets within widgets to create our Python GUI. Learning the fundamentals of GUI layout design will enable us to create great-looking GUIs. There are certain techniques that will help us to achieve this layout design. The grid layout manager, one of the most important layout tools built into tkinter, is what we will be using. With tkinter we can very easily create menu bars, tabbed controls (aka Notebooks), and many more widgets.

Arranging several labels within a label frame widget

The LabelFrame widget allows us to design our GUI in an organized fashion. We are still using the grid layout manager as our main layout design tool, yet by using LabelFrame widgets we get much more control over our GUI design.

Getting ready

We are starting to add more and more widgets to our GUI, and we will make the GUI fully functional in the coming recipes. Here we are starting to use the LabelFrame widget. Add the following code just above the main event loop, towards the bottom of the Python module. Running the code will result in the GUI looking like this:

Uncomment line 111 and notice the different alignment of the LabelFrame. We can easily align the labels vertically by changing our code, as shown next. Note that the only change we had to make was in the column and row numbering. Now the GUI LabelFrame looks as such:

How it works...

In line 109 we create our first ttk LabelFrame widget and assign the resulting instance to the variable buttons_frame. The parent container is win, our main window. In lines 114-116, we create labels and place them in the LabelFrame; buttons_frame is the parent of the labels. We are using the important grid layout tool to arrange the labels within the LabelFrame. The column and row properties of this layout manager give us the power to control our GUI layout. The parent of our labels is the buttons_frame instance variable of the LabelFrame, not the win instance variable of the main window. We can see the beginning of a layout hierarchy here. We can also see how easy it is to change our layout via the column and row properties: note how we change the column to 0, and how we layer our labels vertically by numbering the row values sequentially.

The name ttk stands for themed tk. The tk themed widget set was introduced in Tk 8.5.

There's more...

In a recipe later in this article we will embed LabelFrame widgets within LabelFrame widgets, nesting them to control our GUI layout.

Using padding to add space around widgets

Our GUI is coming along nicely. Next, we will improve the visual appeal of our widgets by adding a little space around them, so they can breathe.

Getting ready

While tkinter might have had a reputation for creating ugly GUIs, this has changed dramatically since version 8.5. You just have to know how to use the tools and techniques that are available. That's what we will do next. tkinter version 8.6 ships with Python 3.6.

How to do it...

The procedural way of adding spacing around widgets is shown first, and then we will use a loop to achieve the same thing in a much better way.
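The book's original code listings are not reproduced in this excerpt. As a rough stand-in for both this step and the LabelFrame arrangement from the previous recipe, here is a minimal, self-contained sketch; the names buttons_frame and win follow the text, while the window title, label texts, and pixel values are assumptions.

    import tkinter as tk
    from tkinter import ttk

    win = tk.Tk()
    win.title('Layout sketch')

    # A LabelFrame whose title is deliberately longer than the labels inside it.
    buttons_frame = ttk.LabelFrame(win, text='Labels in a Frame')
    # padx/pady on the grid call give the whole frame breathing space in the window.
    buttons_frame.grid(column=0, row=0, padx=20, pady=40)

    # Three labels arranged vertically: same column, sequential row numbers.
    for i in range(3):
        ttk.Label(buttons_frame, text='Label ' + str(i + 1)).grid(column=0, row=i)

    # The loop-based alternative: pad every child of the frame in one place.
    for child in buttons_frame.winfo_children():
        child.grid_configure(padx=8, pady=4)

    win.mainloop()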
Our LabelFrame looks a bit tight as it blends into the main window towards the bottom. Let's fix this now. Modify line 110 by adding padx and pady, and now our LabelFrame has some breathing space.

How it works...

In tkinter, adding space horizontally and vertically is done by using the built-in properties named padx and pady. These can be used to add space around many widgets, improving horizontal and vertical alignment, respectively. We hard-coded 20 pixels of space to the left and right of the LabelFrame, and we added 40 pixels to the top and bottom of the frame. Now our LabelFrame stands out more than it did before. (The screenshot referenced here only shows the relevant change.)

We can use a loop to add space around the labels contained within the LabelFrame, so that the labels within the LabelFrame widget have some space around them too. The grid_configure() function enables us to modify the UI elements before the main loop displays them. So, instead of hard-coding values when we first create a widget, we can work on our layout and then arrange spacing towards the end of our file, just before the GUI is created. This is a neat technique to know. The winfo_children() function returns a list of all the children belonging to the buttons_frame variable, which enables us to loop through them and assign the padding to each label.

One thing to notice is that the spacing to the right of the labels is not really visible. This is because the title of the LabelFrame is longer than the names of the labels. We can experiment with this by making the names of the labels longer. Note how there is then some space added to the right of the long label, next to the dots: the last dot does not touch the LabelFrame, which it otherwise would without the added space. We can also remove the name of the LabelFrame to see the effect padx has on positioning our labels: by setting the text property to an empty string, we remove the name that was previously displayed for the LabelFrame.

How widgets dynamically expand the GUI

You have probably noticed in previous screenshots, and by running the code, that widgets have a capability to extend themselves to the space they need to visually display their text. Java introduced the concept of dynamic GUI layout management. In comparison, visual development IDEs like VS.NET lay out the GUI in a visual manner, basically hard-coding the x and y coordinates of UI elements. With tkinter, this dynamic capability creates both an advantage and a little bit of a challenge, because sometimes our GUI dynamically expands when we would prefer it not to be so dynamic! Well, we are dynamic Python programmers, so we can figure out how to make the best use of this fantastic behavior.

Getting ready

At the beginning of the previous recipe we added a LabelFrame widget. This moved some of our controls to the center of column 0. We might not wish this modification to our GUI layout. Next we will explore some ways to solve this.

How to do it...

Let us first become aware of the subtle details that are going on in our GUI layout in order to understand it better. We are using the grid layout manager, and it lays out our widgets in a zero-based grid. This is very similar to an Excel spreadsheet or a database table.
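As a tiny, purely illustrative sketch of this zero-based grid (not code from the book), the following places six labels in two rows and three columns; the row and column coordinates match the table that follows.

    import tkinter as tk
    from tkinter import ttk

    win = tk.Tk()

    # Each label is addressed by its (row, column) coordinate in the grid.
    for row in range(2):
        for col in range(3):
            text = 'Row {}; Col {}'.format(row, col)
            ttk.Label(win, text=text).grid(row=row, column=col, padx=4, pady=4)

    win.mainloop()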
Grid layout manager example with 2 rows and 3 columns:

Row 0; Col 0 | Row 0; Col 1 | Row 0; Col 2
Row 1; Col 0 | Row 1; Col 1 | Row 1; Col 2

Using the grid layout manager, the width of any given column is determined by the longest name or widget in that column, and this affects all rows. By adding our LabelFrame widget and giving it a title that is longer than some hard-coded-size widget, such as the top-left label and the text entry below it, we dynamically move those widgets to the center of column 0, adding space to the left and right sides of those widgets. Incidentally, because we used the sticky property for the Checkbutton and ScrolledText widgets, those remain attached to the left side of the frame.

Let's look in more detail at the screenshot from the first recipe of this article. We added the code to create the LabelFrame and then placed labels into this frame. Since the text property of the LabelFrame, which is displayed as the title of the LabelFrame, is longer than both our Enter a name: label and the text box entry below it, those two widgets are dynamically centered within the new width of column 0. The Checkbutton and Radiobutton widgets in column 0 did not get centered because we used the sticky=tk.W property when we created those widgets. For the ScrolledText widget we used sticky=tk.WE, which binds the widget to both the west (aka left) and east (aka right) side of the frame.

Let's remove the sticky property from the ScrolledText widget and observe the effect this change has. Now our GUI has new space around the ScrolledText widget on both the left and right sides. Because we used the columnspan=3 property, our ScrolledText widget still spans all three columns. If we remove columnspan=3 we get a GUI which is not what we want: now our ScrolledText only occupies column 0 and, because of its size, it stretches the layout. One way to get our layout back to where we were before adding the LabelFrame is to adjust the grid column position: change the column value from 0 to 1.

How it works...

Because we are still using individual widgets, our layout can get messed up. By moving the column value of the LabelFrame from 0 to 1, we were able to get the controls back to where they used to be, and where we prefer them to be. At least the left-most label, text Entry, Checkbutton, ScrolledText, and Radiobutton widgets are now located where we intended them to be. The second label and text Entry located in column 1 aligned themselves to the center of the length of the Labels in a Frame widget, so we basically moved our alignment challenge one column to the right. It is not very visible because the size of the Choose a number: label is almost the same as the size of the Labels in a Frame title, so the column width was already close to the new width generated by the LabelFrame.

There's more...

In the next recipe we will embed frames within frames to avoid the accidental misalignment of widgets we just experienced in this recipe.

Aligning the GUI widgets by embedding frames within frames

We have much better control of our GUI layout if we embed frames within frames. This is what we will do in this recipe.

Getting ready

The dynamic behavior of Python and its GUI modules can create a little bit of a challenge in getting our GUI to look exactly the way we want. Here we will embed frames within frames to get more control of our layout.
This will establish a stronger hierarchy among the different UI elements, making the visual appearance easier to achieve. We will continue to use the GUI we created in the previous recipe.

How to do it...

Here, we will create a top-level frame that will contain other frames and widgets. This will help us to get our GUI layout just the way we want. In order to do so, we will have to embed our current controls within a central ttk.LabelFrame. This ttk.LabelFrame is a child of the main parent window, and all controls will be children of this ttk.LabelFrame. Up to this point in our recipes we have assigned all widgets directly to our main GUI frame. Now we will assign only our LabelFrame to our main window, and after that we will make this LabelFrame the parent container for all the widgets.

This creates the following hierarchy in our GUI layout: win is the variable that holds a reference to our main GUI tkinter window frame; mighty is the variable that holds a reference to our LabelFrame and is a child of the main window frame (win); and Label and all other widgets are now placed into the LabelFrame container (mighty).

Add the corresponding code towards the top of our Python module, and then modify all the following controls to use mighty as the parent, replacing win. Note how all the widgets are now contained in the Mighty Python LabelFrame, which surrounds all of them with a barely visible thin line. Next, we can reset the Labels in a Frame widget to the left without messing up our GUI layout. Oops - maybe not. While our frame within another frame aligned nicely to the left, it again pushed our top widgets into the center (the default). In order to align them to the left, we have to force our GUI layout by using the sticky property. By assigning it 'W' (West) we can control the widget to be left-aligned.

How it works...

Note how we aligned the label, but not the text box below it. We have to use the sticky property for all the controls we want to left-align. We can do that in a loop, using winfo_children() and grid_configure(sticky='W'), as we did before in the second recipe of this article. The winfo_children() function returns a list of all the children belonging to the parent, which enables us to loop through all of the widgets and change their properties.

Using tkinter to force widgets left, right, top, or bottom uses naming very similar to Java: west, east, north, and south, abbreviated to 'W' and so on. We can also use the syntax tk.W instead of 'W'; this requires having imported the tkinter module aliased as tk. In a previous recipe we combined 'W' and 'E' as 'WE' to make our ScrolledText widget attach itself to both the left and right sides of its container. We can add more combinations: 'NSE' stretches our widget to the top, bottom, and right side. If we have only one widget in our form, for example a button, we can make it fill the entire frame by using all the options: 'NSWE'. We can also use the tuple syntax sticky=(tk.N, tk.S, tk.W, tk.E).

Let's align the entry in column 0 to the left as well. Now both the label and the Entry are aligned towards the West (left). In order to separate the influence that the length of our Labels in a Frame LabelFrame has on the rest of our GUI layout, we must not place this LabelFrame into the same LabelFrame as the other widgets, but assign it directly to the main GUI form (win).
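Since the original listings are not included in this excerpt, here is a minimal, self-contained sketch of the nested-frame approach described above; the names win and mighty follow the text, while the widget labels, title strings, and grid positions are assumptions.

    import tkinter as tk
    from tkinter import ttk

    win = tk.Tk()
    win.title('Nested frames sketch')

    # The central LabelFrame is the only direct widget of the main window;
    # the other widgets become children of this frame.
    mighty = ttk.LabelFrame(win, text='Mighty Python')
    mighty.grid(column=0, row=0, padx=8, pady=4)

    ttk.Label(mighty, text='Enter a name:').grid(column=0, row=0)
    name = tk.StringVar()
    ttk.Entry(mighty, width=12, textvariable=name).grid(column=0, row=1)

    # Per the text above, this LabelFrame is assigned directly to win, not to
    # mighty, so its title length does not influence the layout inside mighty.
    buttons_frame = ttk.LabelFrame(win, text='Labels in a Frame')
    buttons_frame.grid(column=0, row=1)
    for i in range(3):
        ttk.Label(buttons_frame, text='Label ' + str(i + 1)).grid(column=0, row=i)

    # Left-align every direct child of mighty in one loop.
    for child in mighty.winfo_children():
        child.grid_configure(sticky='W')

    win.mainloop()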
Summary

We have learned about layout management for Python GUIs using the following recipes:

- Arranging several labels within a label frame widget
- Using padding to add space around widgets
- How widgets dynamically expand the GUI
- Aligning the GUI widgets by embedding frames within frames
Getting Started with Docker Storage

Packt
05 Apr 2017
12 min read
In this article by Scott Gallagher, author of the book Mastering Docker - Second Edition, we will cover the places where you store your containers, such as Docker Hub and Docker Hub Enterprise. We will also cover Docker Registry, which you can use to run your own local storage for Docker containers. We will review the differences between them all and when and how to use each of them. The article also covers how to set up automated builds using webhooks, as well as the pieces required to set them up. Lastly, we will run through an example of how to set up your own Docker Registry.

Let's take a quick look at the topics we will be covering in this article:

- Docker Hub
- Docker Hub Enterprise
- Docker Registry
- Automated builds

Docker Hub

In this section, we will focus on Docker Hub, which is a free public option but also has a private option that you can use to secure your images. We will focus on the web aspect of Docker Hub and the management you can do there. The login page looks like the one shown in the following screenshot.

Dashboard

After logging into Docker Hub, you will be taken to its landing page, known as the Dashboard. From here, you can get to all the other sub-pages of Docker Hub. In the upcoming sections, we will go through everything you see on the Dashboard, starting with the dark blue bar at the top.

Exploring the repositories page

The Explore link appears next to Dashboard at the top of the screen. It shows you all the official repositories that Docker has to offer. Official repositories are those that come directly from Docker or from the company responsible for the product; they are regularly updated and patched as needed.

Organizations

Organizations are those that you have either created or been added to. Organizations allow you to layer on control for, say, a project that multiple people are collaborating on. The organization gets its own settings, such as whether to store repositories as public or private by default, changing plans that allow for different numbers of private repositories, and repositories kept entirely separate from the ones you or others have. You can also access or switch between accounts or organizations from the Dashboard just below the Docker logo, where you will typically see your username when you log in. This is a drop-down list where you can switch between all the organizations you belong to.

The Create menu

The Create menu is the new item along the top bar of the Dashboard. From this drop-down menu, you can perform three actions:

- Create repository
- Create automated build
- Create organization

The Settings page

Probably the first section everyone jumps to once they have created an account on Docker Hub is the Settings page. I know that's what I did, at least. The Account Settings page can be found under the drop-down menu in the upper-right corner of the Dashboard by selecting Settings.
The page allows you to set up your public profile; change your password; see which organizations you belong to, the e-mail update subscriptions you have, which specific notifications you would like to receive, which authorized services have access to your information, and your linked accounts (such as your GitHub or Bitbucket accounts); as well as your enterprise licenses, billing, and global settings. The only global setting as of now is the choice between having your repositories default to public or private upon creation. The default is to create them as public repositories.

The Stars page

Below the dark blue bar at the top of the Dashboard page are two more areas yet to be covered. The first, the Stars page, allows you to see which repositories you have starred. This is very useful if you come across repositories that you prefer to use and want to check whether they have been updated recently or whether any other changes have occurred on them. The second is a new setting in the new version of Docker Hub called Contributed. In this section, there is a list of repositories you have contributed to outside of the ones in your Repositories list.

Docker Hub Enterprise

Docker Hub Enterprise, as it is currently known, will eventually be called Docker Subscription. We will focus on Docker Subscription, as it's the new and shiny piece. We will view the differences between Docker Hub and Docker Subscription (as we will call it moving forward) and look at the options for deploying Docker Subscription. Let's first compare Docker Hub to Docker Subscription and see why each is unique and what purpose each serves.

Docker Hub:
- Shareable images, which can also be private
- No hassle of self-hosting
- Free (except beyond a certain number of private images)

Docker Subscription:
- Integrated with your authentication services (that is, AD/LDAP)
- Deployed on your own infrastructure (or cloud)
- Commercial support

Docker Subscription for server

Docker Subscription for server allows you to deploy both Docker Trusted Registry and Docker Engine on infrastructure that you manage. Docker Trusted Registry is the location where you store the Docker images that you have created. You can set these up to be internal only or share them out publicly as well. Docker Subscription gives you all the benefits of running your own dedicated Docker-hosted registry, with the added benefit of getting support if you need it.

Docker Subscription for cloud

As we saw in the previous section, we can also deploy Docker Subscription to a cloud provider if we wish. This allows us to leverage our existing cloud environments without having to roll our own server infrastructure to host our Docker images. The setup is the same as reviewed in the previous section, but this time we target our existing cloud environment instead.

Docker Registry

In this section, we will look at Docker Registry. Docker Registry is an open source application that you can run anywhere you please and store your Docker images in. We will compare Docker Registry and Docker Hub and how to choose between the two. By the end of the section, you will know how to run your own Docker Registry and be able to see whether it's a true fit for you.

An overview of Docker Registry

Docker Registry, as stated earlier, is an open source application that you can utilize to store your Docker images on a platform of your choice.
This allows you to keep them 100% private if you wish, or share them as needed. The registry documentation can be found at https://docs.docker.com/registry/. It runs you through the setup and the steps to follow when pushing images to Docker Registry, compared to Docker Hub. Docker Registry makes a lot of sense if you want to roll your own registry without having to pay for all the private features of Docker Hub. Next, let's take a look at some comparisons between Docker Hub and Docker Registry, so you can make an educated decision as to which platform to choose for storing your images.

Docker Registry allows you to do the following:
- Host and manage your own registry, from which you can serve all the repositories as private, public, or a mix between the two
- Scale the registry as needed, based on how many images you host or how many pull requests you are serving out
- Do everything from the command line, for those who live on the command line

Docker Hub allows you to:
- Get a GUI-based interface that you can use to manage your images
- Use a location already set up in the cloud that is ready to handle public and/or private images
- Have the peace of mind of not having to manage a server that is hosting all your images

Automated builds

In this section, we will look at automated builds. Automated builds are those that you can link to your GitHub or Bitbucket account(s), so that as you update the code in your code repository, the image is automatically built on Docker Hub. We will look at all the pieces required to do so and, by the end, you'll be automating all your builds.

Setting up your code

The first step in creating automated builds is to set up your GitHub or Bitbucket code. These are the two options you have when selecting where to store your code. For our example, I will be using GitHub, but the setup is the same for GitHub and Bitbucket. First, we set up our GitHub repository, which contains just a simple README file that we will edit for our purposes. The content could be anything from a script to multiple files that you want to manipulate for your automated builds. One key piece is that a Dockerfile is required in the repository for the builds to be automated. Next, we need to set up the link between our code and Docker Hub.

Setting up Docker Hub

On Docker Hub, we are going to use the Create drop-down menu and select Create Automated Build. After selecting it, we will be taken to a screen that shows the accounts you have linked to either GitHub or Bitbucket. You then need to search for and select the repository, from either of the locations, that you want to create the automated build from. This essentially creates a webhook, so that when a commit is made on the selected code repository, a new build is created on Docker Hub. After you select the repository you would like to use, you will be taken to the build configuration screen. For the most part, the defaults can be used. You can select a different branch if you want to use one, say a testing branch you use before the code goes to the master branch. The one field that will not be filled out, but is required, is the description field. You must enter something here or you will not be able to continue past this page. Upon clicking Create, you will be taken to the automated build's page, where you can see a lot of information about the automated build you have set up.
This includes information such as tags, the Dockerfile in the code repository, build details, build settings, collaborators on the code, webhooks, and settings that include making the repository public or private and deleting the automated build repository.

Putting all the pieces together

So, let's do a Docker automated build and see what happens when we have all the pieces in place, and exactly what we have to do to kick off this automated build:

1. Update the code or any file inside your GitHub or Bitbucket repository.
2. Upon committing the update, the automated build is kicked off and logged in Docker Hub for that automated repository.

Creating your own registry

To create a registry of your own, use the following command:

    $ docker-machine create --driver vmwarefusion registry
    Creating SSH key...
    Creating VM...
    Starting registry...
    Waiting for VM to come online...

To see how to connect Docker to this machine, run the following command:

    $ docker-machine env registry
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://172.16.9.142:2376"
    export DOCKER_CERT_PATH="/Users/scottpgallagher/.docker/machine/machines/registry"
    export DOCKER_MACHINE_NAME="registry"
    # Run this command to configure your shell:
    # eval "$(docker-machine env registry)"

    $ eval "$(docker-machine env registry)"
    $ docker pull registry
    $ docker run -p 5000:5000 -v <HOST_DIR>:/tmp/registry-dev registry:2

This specifies version 2 of the registry. For AWS (as shown in the example from https://hub.docker.com/_/registry/):

    $ docker run -e SETTINGS_FLAVOR=s3 -e AWS_BUCKET=acme-docker \
        -e STORAGE_PATH=/registry -e AWS_KEY=AKIAHSHB43HS3J92MXZ \
        -e AWS_SECRET=xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T \
        -e SEARCH_BACKEND=sqlalchemy -p 5000:5000 registry:2

Again, this uses version 2 of the self-hosted registry. Then, you need to modify your Docker startup to point to the newly set up registry. Add the following options to the Docker startup line in the /etc/init.d/docker file:

    -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock --insecure-registry <REGISTRY_HOSTNAME>:5000

Most of these settings might already be there, and you might only need to add --insecure-registry <REGISTRY_HOSTNAME>:5000. To access this file, you will need to use docker-machine:

    $ docker-machine ssh <docker-host_name>

Now, you can pull an image from the public Docker Hub as follows:

    $ docker pull debian

Tag it, so that when we do a push it will go to the registry we set up:

    $ docker tag debian <REGISTRY_URL>:5000/debian

Then, we can push it to our registry:

    $ docker push <REGISTRY_URL>:5000/debian

We can also pull it on any future clients (or after any updates we have pushed for it):

    $ docker pull <REGISTRY_URL>:5000/debian

Summary

In this article, we dove deep into Docker Hub and also reviewed the new Docker Subscription as well as the self-hosted Docker Registry, with an extensive review of each. You learned the differences between them all and how to utilize each one. We also looked at setting up automated builds and at how to set up your own Docker Registry. We have covered a lot, and I hope you have learned a lot and will put it all to good use.
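As a quick sanity check that the self-hosted registry described above is serving the image we pushed, here is a small Python sketch (not from the book) that queries the open source Registry's v2 HTTP API with the requests library. The host address is an assumption taken from the docker-machine output above, and the registry must be reachable over plain HTTP for this exact call to work.

    import requests

    # Hypothetical address of the self-hosted registry started above;
    # adjust the host to match the output of `docker-machine env registry`.
    REGISTRY = 'http://172.16.9.142:5000'

    # List the repositories the registry is serving.
    catalog = requests.get(REGISTRY + '/v2/_catalog').json()
    print(catalog.get('repositories', []))

    # List the tags available for the debian image we tagged and pushed.
    tags = requests.get(REGISTRY + '/v2/debian/tags/list').json()
    print(tags)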
API and Intent-Driven Networking

Packt
05 Apr 2017
19 min read
In this article by Eric Chou, author of the book Mastering Python Networking, we will look at the following topics:

- Treating infrastructure as code, and data modeling
- Cisco NX-API and Application Centric Infrastructure

Infrastructure as Python code

In a perfect world, network engineers and the people who design and manage networks should focus on what they want the network to achieve instead of on device-level interactions. In my first job, as an intern for a local ISP, wide-eyed and excited, I received my first assignment: to install a router at a customer site to turn up their fractional frame relay link (remember those?). How would I do that? I asked. I was handed a standard operating procedure for turning up frame relay links. I went to the customer site, blindly typed in the commands, watched the green lights flash, then happily packed my bag and patted myself on the back for a job well done. As exciting as that first assignment was, I did not fully understand what I was doing. I was simply following instructions without thinking about the implications of the commands I was typing. How would I have troubleshot the link if the light had been red instead of green? I think I would have called back to the office.

Of course, network engineering is not about typing commands into a device; it is about building a way that allows services to be delivered from one point to another with as little friction as possible. The commands we have to use and the output we have to interpret are merely a means to an end. I would like to argue that we should focus as much as possible on the intent of the network, for Intent-Driven Networking, and abstract ourselves from device-level interaction on an as-needed basis. In my opinion, using APIs gets us closer to a state of Intent-Driven Networking. In short, because we abstract away the layer of the specific command executed on the destination device, we focus on our intent instead of the specific command given to the device. For example, if our intent is to deny an IP from entering our network, we might use access-list and access-group on a Cisco device and filter-list on a Juniper device. However, by using an API, our program can ask the executor for their intent while masking what kind of physical device they are talking to.

Screen scraping versus API structured output

Imagine a common scenario where we need to log in to the device and make sure all the interfaces on the device are in an up/up state (both status and protocol are showing as up). For a human network engineer on a Cisco NX-OS device, it is simple enough to issue the show ip interface brief command and easily tell from the output which interfaces are up:

    nx-osv-2# show ip int brief
    IP Interface Status for VRF "default"(1)
    Interface    IP Address     Interface Status
    Lo0          192.168.0.2    protocol-up/link-up/admin-up
    Eth2/1       10.0.0.6       protocol-up/link-up/admin-up
    nx-osv-2#

The line breaks, white space, and the first line of column titles are easily distinguished by the human eye. In fact, they are there to help us line up, say, the IP address of each interface from line 1 with lines 2 and 3. If we were to put ourselves into the computer's eye, all these spaces and line breaks only take away from the really important output, which is: which interfaces are in the up/up state?
To illustrate this point, we can look at the Paramiko output again:

    >>> new_connection.send('sh ip int brief\n')
    16
    >>> output = new_connection.recv(5000)
    >>> print(output)
    b'sh ip int brief\r\r\nIP Interface Status for VRF "default"(1)\r\nInterface IP Address Interface Status\r\nLo0 192.168.0.2 protocol-up/link-up/admin-up \r\nEth2/1 10.0.0.6 protocol-up/link-up/admin-up \r\n\r\nnx-osv-2# '
    >>>

If we were to parse out that data, there are of course many ways to do it, but here is what I would do, in pseudo code fashion:

1. Split each line via the line break. I may or may not need the first line, which contains the executed command; for now, I don't think I need it.
2. Take everything on the second line up until the VRF and save it in a variable, as we want to know which VRF the output is showing.
3. For the rest of the lines, because we do not know how many interfaces there are, use a regular expression to check whether the line starts with a possible interface name, such as lo for loopback or Eth.
4. Split each such line into three sections via spaces, consisting of the name of the interface, the IP address, and the interface status.
5. Split the interface status further using the forward slash (/) to give us the protocol, link, and admin status.

Whew, that is a lot of work just for something that a human being can tell at a glance! You might be able to optimize the code and the number of lines, but in general this is what we need to do when we need to screen scrape something that is somewhat unstructured. There are many downsides to this method; the bigger problems that I see are:

- Scalability: We spent so much time on the painstaking details of each output that it is hard to imagine doing this for the hundreds of commands we typically run.
- Predictability: There is really no guarantee that the output stays the same. If the output changes ever so slightly, it might render our hard-earned information gathering useless.
- Vendor and software lock-in: Perhaps the biggest problem is that once we have spent all this time parsing the output for this particular vendor and software version, in this case Cisco NX-OS, we need to repeat the process for the next vendor we pick. I don't know about you, but if I were to evaluate a new vendor, that vendor would be at a severe onboarding disadvantage if I had to rewrite all the screen-scraping code again.

Let us compare that with output from an NX-API call for the same show ip interface brief command. We will go over the specifics of getting this output from the device later in this article, but what is important here is to compare the following output to the previous screen-scraping steps:

    {
        "ins_api": {
            "outputs": {
                "output": {
                    "body": {
                        "TABLE_intf": [
                            {
                                "ROW_intf": {
                                    "admin-state": "up",
                                    "intf-name": "Lo0",
                                    "iod": 84,
                                    "ip-disabled": "FALSE",
                                    "link-state": "up",
                                    "prefix": "192.168.0.2",
                                    "proto-state": "up"
                                }
                            },
                            {
                                "ROW_intf": {
                                    "admin-state": "up",
                                    "intf-name": "Eth2/1",
                                    "iod": 36,
                                    "ip-disabled": "FALSE",
                                    "link-state": "up",
                                    "prefix": "10.0.0.6",
                                    "proto-state": "up"
                                }
                            }
                        ],
                        "TABLE_vrf": [
                            {
                                "ROW_vrf": {
                                    "vrf-name-out": "default"
                                }
                            },
                            {
                                "ROW_vrf": {
                                    "vrf-name-out": "default"
                                }
                            }
                        ]
                    },
                    "code": "200",
                    "input": "show ip int brief",
                    "msg": "Success"
                }
            },
            "sid": "eoc",
            "type": "cli_show",
            "version": "1.2"
        }
    }

NX-API can return output in XML or JSON; this is obviously the JSON output we are looking at. Right away you can see that the answer is structured and can be mapped directly to a Python dictionary data structure.
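For instance, once the JSON above has been loaded into a Python dictionary, pulling out the interface states is a few lines of dictionary access. The snippet below is a minimal sketch (not from the book) that hard-codes a trimmed copy of the sample response so it can run on its own; in practice the text would come from the device.

    import json

    # A trimmed copy of the NX-API response shown above, hard-coded for illustration.
    response_text = '''
    {"ins_api": {"outputs": {"output": {"body": {"TABLE_intf": [
        {"ROW_intf": {"admin-state": "up", "intf-name": "Lo0",
                      "link-state": "up", "prefix": "192.168.0.2",
                      "proto-state": "up"}},
        {"ROW_intf": {"admin-state": "up", "intf-name": "Eth2/1",
                      "link-state": "up", "prefix": "10.0.0.6",
                      "proto-state": "up"}}],
        "TABLE_vrf": [{"ROW_vrf": {"vrf-name-out": "default"}}]},
        "code": "200", "input": "show ip int brief", "msg": "Success"}}}}
    '''

    data = json.loads(response_text)
    body = data['ins_api']['outputs']['output']['body']

    # No regular expressions needed: walk the table and read the keys directly.
    for entry in body['TABLE_intf']:
        row = entry['ROW_intf']
        print(row['intf-name'], row['prefix'],
              row['proto-state'] + '/' + row['link-state'] + '/' + row['admin-state'])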
There is no screen scraping required: you simply pick the key you want and retrieve the value associated with that key. There is also the added benefit of a code indicating command success or failure, with a message telling the sender the reason behind the success or failure. You no longer need to keep track of the command issued, because it is already returned to you in the input field. There is other useful metadata too, such as the version of the NX-API. This type of exchange makes life easier for both vendors and operators. On the vendor side, they can easily transfer configuration and state information, and add and expose extra fields when the need arises. On the operator side, they can easily ingest the information and build their infrastructure around it. It is generally agreed that automation is much needed and a good thing; the questions usually center around which format and structure the automation should take. As you will see later in this article, there are many competing technologies under the umbrella of API; on the transport side alone we have REST API, NETCONF, and RESTCONF, amongst others. Ultimately the overall market will decide, but in the meantime we should all take a step back and decide which technology best suits our needs.

Data modeling for infrastructure as code

According to Wikipedia, "A data model is an abstract model that organizes elements of data and standardizes how they relate to one another and to properties of the real world entities. For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner." The data modeling process can be illustrated in the following graph: Data Modeling Process (source: https://en.wikipedia.org/wiki/Data_model).

When applied to networking, we can use this concept as an abstract model that describes our network, be it a datacenter, a campus, or a global wide area network. If we take a closer look at a physical datacenter, a layer 2 Ethernet switch can be thought of as a device containing a table of MAC addresses mapped to each port. Our switch data model describes how the MAC addresses should be kept in a table: which are the keys, what additional characteristics there are (think of VLAN and private VLAN), and so on. Similarly, we can move beyond devices and map the datacenter in a model. We can start with how many devices are in each of the access, distribution, and core layers, how they are connected, and how they should behave in a production environment. For example, if we have a fat-tree network, how many links should each spine router have, how many routes should they contain, and how many next hops should each of the prefixes have? These characteristics can be mapped out in a format that can be referenced as the ideal state that we should always check against.

One relatively new network data modeling language that is gaining traction is Yet Another Next Generation (YANG) (despite common belief, some IETF workgroups do have a sense of humor). It was first published in RFC 6020 in 2010 and has since gained traction among vendors and operators. At the time of writing, support for YANG varies greatly from vendor to vendor and platform to platform; the adoption rate in production is therefore relatively low. However, it is a technology worth keeping an eye on.
Cisco API and ACI

Cisco Systems, the 800-pound gorilla of the networking space, has not missed the trend of network automation. The problem has always been the confusion surrounding Cisco's various product lines and levels of technology support. With product lines spanning routers, switches, firewalls, servers (unified computing), wireless, collaboration software and hardware, and analytics software, to name a few, it is hard to know where to start. Since this book focuses on Python and networking, we will scope this section to the main networking products. In particular, we will cover the following:

- Nexus product automation with NX-API
- Cisco NETCONF and YANG examples
- Cisco Application Centric Infrastructure for the datacenter
- Cisco Application Centric Infrastructure for the enterprise

For the NX-API and NETCONF examples here, we can either use the Cisco DevNet always-on lab devices or run Cisco VIRL locally. Since ACI is a separate product and license on top of the physical switches, for the following ACI examples I would recommend using the DevNet labs to get an understanding of the tools, unless, of course, you are one of the lucky ones who has a private ACI lab to use.

Cisco NX-API

Nexus is Cisco's product line of datacenter switches. NX-API (http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/programmability/guide/b_Cisco_Nexus_9000_Series_NX-OS_Programmability_Guide/b_Cisco_Nexus_9000_Series_NX-OS_Programmability_Guide_chapter_011.html) allows the engineer to interact with the switch from outside the device via a variety of transports, including SSH, HTTP, and HTTPS.

Installation and preparation

Here are the Ubuntu packages we will install; you may already have some of them, such as the Python development headers, pip, and Git:

    $ sudo apt-get install -y python3-dev libxml2-dev libxslt1-dev libffi-dev libssl-dev zlib1g-dev python3-pip git python3-requests

If you are using Python 2:

    $ sudo apt-get install -y python-dev libxml2-dev libxslt1-dev libffi-dev libssl-dev zlib1g-dev python-pip git python-requests

The ncclient (https://github.com/ncclient/ncclient) library is a Python library for NETCONF clients; we will install it from the GitHub repository to get the latest version:

    $ git clone https://github.com/ncclient/ncclient
    $ cd ncclient/
    $ sudo python3 setup.py install
    $ sudo python setup.py install

NX-API on Nexus devices is off by default, so we will need to turn it on. We can either use an existing user or create a new user for the NETCONF procedures:

    feature nxapi
    username cisco password 5 $1$Nk7ZkwH0$fyiRmMMfIheqE3BqvcL0C1 role network-operator
    username cisco role network-admin
    username cisco passphrase lifetime 99999 warntime 14 gracetime 3

For our lab, we will turn on both HTTP and the sandbox configuration; they should be turned off in production:

    nx-osv-2(config)# nxapi http port 80
    nx-osv-2(config)# nxapi sandbox

We are now ready to look at our first NX-API example.

NX-API examples

Since we have turned on the sandbox, we can launch a web browser and take a look at the various message formats, requests, and responses based on the CLI commands we are already familiar with. In the following example, I selected JSON-RPC and the CLI command type for the command show version. The sandbox comes in handy if you are unsure about the supportability of a message format, or if you have questions about the field key for the value you want to retrieve in your code.
In our first example, we are just going to connect to the Nexus device and print out the capabilities exchanged when the connection is first made:

    #!/usr/bin/env python3
    from ncclient import manager

    conn = manager.connect(
        host='172.16.1.90',
        port=22,
        username='cisco',
        password='cisco',
        hostkey_verify=False,
        device_params={'name': 'nexus'},
        look_for_keys=False)

    for value in conn.server_capabilities:
        print(value)

    conn.close_session()

The connection parameters of host, port, username, and password are pretty self-explanatory. The device parameter specifies the kind of device the client is connecting to (we will also see a differentiation in the Juniper NETCONF sections). hostkey_verify bypasses the known_host requirement for SSH, while the look_for_keys option disables key-based authentication in favor of the username and password. The output shows the XML and NETCONF features supported by this version of NX-OS:

    $ python3 cisco_nxapi_1.py
    urn:ietf:params:xml:ns:netconf:base:1.0
    urn:ietf:params:netconf:base:1.0

Using ncclient and NETCONF over SSH is great because it gets us closer to the native implementation and syntax. We will use the library more later on. For NX-API, I personally feel that it is easier to deal with HTTPS and JSON-RPC. In the earlier screenshot of the NX-API Developer Sandbox, the Request pane has a box labeled Python. If you click on it, you get an automatically converted Python script based on the requests library. Requests is a very popular, self-proclaimed HTTP for Humans library used by companies such as Amazon, Google, and the NSA, amongst others. You can find more information about it on the official site (http://docs.python-requests.org/en/master/). For the show version example, the following Python script is automatically generated for you. I am pasting in the output without any modification:

    """
    NX-API-BOT
    """
    import requests
    import json

    """
    Modify these please
    """
    url='http://YOURIP/ins'
    switchuser='USERID'
    switchpassword='PASSWORD'

    myheaders={'content-type':'application/json-rpc'}
    payload=[
      {
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {
          "cmd": "show version",
          "version": 1.2
        },
        "id": 1
      }
    ]
    response = requests.post(url,data=json.dumps(payload), headers=myheaders,auth=(switchuser,switchpassword)).json()

In the cisco_nxapi_2.py file, you will see that I have only modified the URL, username, and password of the preceding file, and parsed the output to include only the software version. Here is the output:

    $ python3 cisco_nxapi_2.py
    7.2(0)D1(1) [build 7.2(0)ZD(0.120)]

The best part about using this method is that the same syntax works for both configuration commands and show commands. This is illustrated in the cisco_nxapi_3.py file. For multi-line configuration, you can use the id field to specify the order of operations. In cisco_nxapi_4.py, the following payload is listed for changing the description of interface Ethernet 2/12 in interface configuration mode:

    [
      {
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {
          "cmd": "interface ethernet 2/12",
          "version": 1.2
        },
        "id": 1
      },
      {
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {
          "cmd": "description foo-bar",
          "version": 1.2
        },
        "id": 2
      },
      {
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {
          "cmd": "end",
          "version": 1.2
        },
        "id": 3
      },
      {
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {
          "cmd": "copy run start",
          "version": 1.2
        },
        "id": 4
      }
    ]
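As a rough illustration of the parsing step mentioned for cisco_nxapi_2.py, here is a hedged sketch (not the book's actual file) that posts the show version command and digs the version string out of the JSON-RPC response. The key name used below, kickstart_ver_str, is an assumption that should be confirmed in the NX-API sandbox for your platform, and the URL and credentials are the same placeholders as in the generated script above.

    import json
    import requests

    url = 'http://YOURIP/ins'                       # same placeholder as above
    headers = {'content-type': 'application/json-rpc'}
    payload = [{
        'jsonrpc': '2.0',
        'method': 'cli',
        'params': {'cmd': 'show version', 'version': 1.2},
        'id': 1,
    }]

    response = requests.post(url, data=json.dumps(payload),
                             headers=headers, auth=('USERID', 'PASSWORD')).json()

    # The structured command output sits under result -> body; the field holding
    # the OS version (kickstart_ver_str here) can be verified in the sandbox.
    body = response['result']['body']
    print(body.get('kickstart_ver_str'))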
In the next section, we will look at examples of Cisco NETCONF and the YANG model.

Cisco and the YANG model

Earlier in the article, we looked at the possibility of expressing the network using the data modeling language YANG. Let us look into it a little more. First off, we should know that YANG only defines the type of data sent over the NETCONF protocol, and that NETCONF exists as a standalone protocol, as we saw in the NX-API section. YANG being relatively new, its supportability is spotty across vendors and product lines. For example, if we run the same capability exchange script we saw earlier against a Cisco 1000v running IOS-XE, this is what we would see:

    urn:cisco:params:xml:ns:yang:cisco-virtual-service?module=cisco-virtual-service&revision=2015-04-09
    http://tail-f.com/ns/mibs/SNMP-NOTIFICATION-MIB/200210140000Z?module=SNMP-NOTIFICATION-MIB&revision=2002-10-14
    urn:ietf:params:xml:ns:yang:iana-crypt-hash?module=iana-crypt-hash&revision=2014-04-04&features=crypt-hash-sha-512,crypt-hash-sha-256,crypt-hash-md5
    urn:ietf:params:xml:ns:yang:smiv2:TUNNEL-MIB?module=TUNNEL-MIB&revision=2005-05-16
    urn:ietf:params:xml:ns:yang:smiv2:CISCO-IP-URPF-MIB?module=CISCO-IP-URPF-MIB&revision=2011-12-29
    urn:ietf:params:xml:ns:yang:smiv2:ENTITY-STATE-MIB?module=ENTITY-STATE-MIB&revision=2005-11-22
    urn:ietf:params:xml:ns:yang:smiv2:IANAifType-MIB?module=IANAifType-MIB&revision=2006-03-31
    <omitted>

Compare that to the output we saw for NX-OS; clearly IOS-XE understands more YANG models than NX-OS. Industry-wide network data modeling is clearly something that is beneficial to network automation. However, given the uneven support across vendors and products, it is not, in my opinion, mature enough to be used across your production network. For the book I have included a script called cisco_yang_1.py that shows how to parse NETCONF XML output with the YANG filter urn:ietf:params:xml:ns:yang:ietf-interfaces as a starting point to see the existing tag overlay. You can check the latest vendor support on the YANG GitHub project page (https://github.com/YangModels/yang/tree/master/vendor).

Cisco ACI

Cisco Application Centric Infrastructure (ACI) is meant to provide a centralized approach to all of the network components. In the datacenter context, it means the centralized controller is aware of and manages the spine, leaf, and top-of-rack switches, as well as all the network service functions. This can be done through the GUI, CLI, or API. Some might argue that ACI is Cisco's answer to the broader software-defined networking movement. One of the somewhat confusing points about ACI is the difference between ACI and APIC-EM. In short, ACI focuses on datacenter operations while APIC-EM focuses on enterprise modules. Both offer a centralized view and control of the network components, but each has its own focus and share of tools. For example, it is rare to see a major datacenter deploy a customer-facing wireless infrastructure, but a wireless network is a crucial part of enterprises today. Another example is the different approaches to network security. While security is important in any network, in the datacenter environment a lot of security policy is pushed to the edge node on the server for scalability; in enterprises, security policy is somewhat shared between the network devices and the servers. Unlike NETCONF RPC, the ACI API follows the REST model and uses the HTTP verbs (GET, POST, PUT, DELETE) to specify the operation intended.
We can look at the cisco_apic_em_1.py file, which is a modified version of the Cisco sample code lab2-1-get-network-device-list.py (https://github.com/CiscoDevNet/apicem-1.3-LL-sample-codes/blob/master/basic-labs/lab2-1-get-network-device-list.py). The abbreviated section, without comments and spaces, is listed here. The first function, getTicket(), performs an HTTPS POST to the controller on the path /api/v1/ticket with the username and password in the request body. The returned response is then parsed for a ticket, which is valid for a limited time:

def getTicket():
    url = "https://" + controller + "/api/v1/ticket"
    payload = {"username": "username", "password": "password"}
    header = {"content-type": "application/json"}
    response = requests.post(url, data=json.dumps(payload), headers=header, verify=False)
    r_json = response.json()
    ticket = r_json["response"]["serviceTicket"]
    return ticket

The second function then calls another path, /api/v1/network-device, with the newly acquired ticket embedded in the header, and then parses the results:

url = "https://" + controller + "/api/v1/network-device"
header = {"content-type": "application/json", "X-Auth-Token": ticket}

The output displays both the raw JSON response as well as a parsed table. A partial output when executed against a DevNet lab controller is shown here:

Network Devices =
{
  "version": "1.0",
  "response": [
    {
      "reachabilityStatus": "Unreachable",
      "id": "8dbd8068-1091-4cde-8cf5-d1b58dc5c9c7",
      "platformId": "WS-C2960C-8PC-L",
      <omitted>
      "lineCardId": null,
      "family": "Wireless Controller",
      "interfaceCount": "12",
      "upTime": "497 days, 2:27:52.95"
    }
  ]
}
8dbd8068-1091-4cde-8cf5-d1b58dc5c9c7 Cisco Catalyst 2960-C Series Switches
cd6d9b24-839b-4d58-adfe-3fdf781e1782 Cisco 3500I Series Unified Access Points
<omitted>
55450140-de19-47b5-ae80-bfd741b23fd9 Cisco 4400 Series Integrated Services Routers
ae19cd21-1b26-4f58-8ccd-d265deabb6c3 Cisco 5500 Series Wireless LAN Controllers

As one can see, we only queried a single controller device, but we were able to get a high-level view of all the network devices that the controller is aware of. The downside is, of course, that the APIC-EM controller only supports Cisco devices at this time. Summary In this article, we looked at various ways to communicate with and manage network devices from Cisco. Resources for Article: Further resources on this subject: Network Exploitation and Monitoring [article] Introduction to Web Experience Factory [article] Web app penetration testing in Kali [article]

IT Operations Management

Packt
05 Apr 2017
16 min read
In this article by Ajaykumar Guggilla, the author of the book ServiceNow IT Operations Management, we will learn the ServiceNow ITOM capabilities within ServiceNow, which include: Dependency views Cloud management Discovery Credentials (For more resources related to this topic, see here.) ServiceNow IT Operations Management overview Every organization and business focuses on key strategies, some of them include: Time to market Agility Customer satisfaction Return on investment Information technology is heavily involved in supporting these strategic goals, either directly or indirectly, providing the underlying IT Services with the required IT infrastructure. IT infrastructure includes network, servers, routers, switches, desktops, laptops, and much more. IT supports these infrastructure components enabling the business to achieve their goals. IT continuously supports the IT infrastructure and its components with a set of governance, processes, and tools, which is called IT Operations Management. IT cares and feeds a business, and the business expects reliability of services provided by IT to support the underlying business services. A business cares and feeds the customers who expect satisfaction of the services offered to them without service disruption. Unlike any other tools it is important to understand the underlying relationship between IT, businesses, and customers. IT just providing the underlying infrastructure and associated components is not going to help, to effectively and efficiently support the business IT needs to understand how the infrastructure components and process are aligned and associated with the business services to understand the impact to the business with an associated incident, problem, event, or change that is arising out of an IT infrastructure component. IT needs to have a consolidated and complete view of the dependency between the business and the customers, not compromising on the technology used, the process followed, the infrastructure components used, which includes the technology used. There needs to be a connected way for IT to understand the relations of these seamless technology components to be able to proactively stop the possible outages before they occur and handle a change in the environment. On the other hand, a business expects service reliability to be able to support the business services to the customers. There is a huge financial impact of businesses not being able to provide the agreed service levels to their customers. So there is always a pressure and dependence from the business to IT to provide a reliable service and it does not matter what technology or processes are used. Customers as always expect satisfaction of the services provided by the business, at times these are adversely affected with service outages caused from the IT infrastructure. Customer satisfaction is also a key strategic goal for the business to be able to sustain in the competitive market. IT is also expected as necessarily to be able to integrate with the customer infrastructure components to provide a holistic view of the IT infrastructure view to be able to effectively support the business by proactively identifying and fixing the outages before they happen to reduce the outages and increase the reliability of IT services delivered. 
Most of the tools do not understand the context of the Service-Oriented Architecture (SOA) that connects the business services to the impacted IT infrastructure components, which is needed both to support the business effectively and to allow IT to justify the cost and impact of providing an end-to-end service. Most traditional tools perform certain aspects of the ITOM functions, some only partially, and some support integration with the IT Service Management (ITSM) tool suite. The missing integration piece between the traditional tools and a full-blown cloud solution platform comes down to the SOA. ServiceNow, a cloud-based solution, has focused the lens of true SOA: it brings together the ITOM suite, provides and leverages the native data, and is also able to connect to the customer infrastructure to provide a holistic, end-to-end view of the IT service at a given point in time. With ServiceNow, IT has a complete view of the business service and its technical dependencies in real time, leveraging powerful individual capabilities, applications, and plugins within ServiceNow ITOM. ServiceNow ITOM comprises the following applications and capabilities; some of the plugins, applications, and technology might have license restrictions that require separate licensing to be purchased: Management, Instrumentation, and Discovery (MID) Server: helps to establish communication and data movement between ServiceNow and the external corporate network and applications Credentials: a platform that stores credentials, including usernames, passwords, or certificates, in an encrypted field on the credentials table that is leveraged by ServiceNow discovery Service mapping: discovers and maps the relationships between the IT components that comprise specific business services, even in dynamic, virtualized environments Dependency views: graphically displays an infrastructure view with the relationships of configuration items and the underlying business services Event management: provides a holistic view of all the events that are triggered from various event monitoring tools Orchestration: helps in automating IT and business processes for operations management Discovery: works with the MID Server and explores the IT infrastructure environment to discover the configuration items and populate the Configuration Management Database (CMDB) Cloud management: helps to easily manage third-party cloud providers, which include AWS, Microsoft Azure, and VMware clouds Understanding ServiceNow IT Operations Management components Now that we have covered what ITOM is about, focusing on the ServiceNow ITOM capabilities, let's take a deep dive and explore each capability. Dependency views Maps have become very important in everyday life; imagine a world without GPS devices or electronic maps. There used to be hard copies of maps available all over the streets to help us get to a place, and there were also special maps for utilities and other public service agencies so that they could identify the impact of digging a tunnel, a water pipe, or an underground electric cable. These maps help them to identify the impact of making a change to the ground.
Maps also help us to understand the relationships between states, countries, cities, and streets, with different sets of information in real time, including real-time traffic information showing accidents, construction, and so on. Dependency views are similar to real-life navigation maps: they provide a map of relationships between the IT infrastructure components and the business services that are defined under the scope, and, much like the real-time traffic updates on a map, the dependency views show real-time active incidents, changes, and problems reported on an individual configuration item or infrastructure component. Changes frequently happen in the environment, and some of these changes are handled with legacy knowledge of how the individual components are connected to the business services; the service mapping plugin maps these connections down to the individual component level. Making a change without understanding the relationships between each IT infrastructure component might adversely affect the service levels and impact the business service. ServiceNow dependency views provide a snapshot of how the underlying business service is connected to individual Configuration Item (CI) elements. Drilling down to the individual CI elements provides a view of the associated service operations and service transition data, which includes the incidents logged against a given CI, any underlying problems reported against it, and also the changes associated with it. Dependency views are based on D3 and Angular technology, which provides a graphical view of configuration items and their relationships. The dependency views show the CIs and their relationships; in order to get a perspective from a business standpoint, you will need to enable the service mapping plugin. Having a detailed view of how the individual CI components are connected, from the business service down to the CI components, complements change management in performing effective impact analysis before any changes are made to the respective CI: Image source: wiki.servicenow.com A dependency map starts with a root node, usually termed the root CI, which is grayed out with a gray frame. Relationships then build up and map the upstream and downstream dependencies of the infrastructure components that are scoped for discovery by the ServiceNow auto discovery. Administrators have control over the number of levels to display on the dependency maps. It is also easy to manage the maps: existing relationships can be created or modified right from the map, and the respective changes are posted to the CMDB automatically. Each CI component on the dependency map has an indicator that shows any active and pending issues against the CI, including any incidents, problems, changes, and events associated with the respective configuration item. Cloud management In versions prior to Helsinki, there was no direct way to manage cloud instances; people had to create orchestration scripts to manage the cloud instances and also create custom roles. Managing and provisioning has become easy with the ServiceNow cloud management application. The cloud management application seamlessly integrates with the ServiceNow service catalog and also provides automation capability with orchestration workflows.
The cloud management application fully integrates the life cycle management of virtual resources into standard ServiceNow data collection, management, analytics, and reporting capabilities. The ServiceNow cloud management application provides easy and quick options to key private cloud providers, which include: AWS Cloud: Manages Amazon Web Services (AWS) using AWS Cloud Microsoft Azure Cloud: The Microsoft Azure Cloud application integrates with Azure through the service catalog and provides the ability to manage virtual resources easily VMware Cloud: The VMware Cloud application integrates with VMware vCenter to manage the virtual resources by integrating with the service catalog The following figure describes a high-level architecture of the cloud management application: Key features with the cloud management applications include the following: Single pane of glass to manage the virtual services in public and private cloud environment including approvals, notifications, security, asset management, and so on Ability to repurpose configurations through resource templates that help to reuse the capability sets Seamless integration with the service catalog, with a defined workflow and approvals integration can be done end to end right from the user request to the cloud provisioning Ability to control the leased resources through date controls and role-based security access Ability to use the ServiceNow discovery application or the standalone capability to discover virtual resources and their relationships in their environments Ability to determine the best virtualization server for a VM based on the discovered data by the CMDB auto discovery Ability to control and manage virtual resources effectively with a controlled termination shutdown date Ability to increate virtual server resources through a controlled fashion, for example, increasing storage or memory, integrating with the service catalog, and with right and appropriate approvals the required resources can be increased to the required Ability to perform a price calculation and integration of managed virtual machines with asset management Ability to auto or manually provision the required cloud environment with zero click options There are different roles within the cloud management applications, here are some of them: Virtual provisioning cloud administrator: The administrator owns the cloud admin portal and end to end management including configuration of the cloud providers. They have access to be able to configure the service catalog items that will be used by the requesters and the approvals required to provision the cloud environment. Virtual provisioning cloud approver: Who either approves or rejects requests for virtual resources. Virtual provisioning cloud operator: The operator fulfills the requests to manage the virtual resources and the respective cloud management providers. Cloud operators are mostly involved when there is a manual human intervention required to manage or provision the virtual resources. Virtual provisioning cloud user: Users have access to the my virtual assets portal that helps them to manage the virtual resources they own, or requested, or are responsible for.   
How clouds are provisioned The cloud administrator creates a service catalog item for users to be able to request for cloud resources The cloud user requests for a virtual machine through the service catalog The request goes to the approver who either approves or rejects it The cloud operator provisions the requests manually or virtual resources are auto provisioned Discovery Imagine how an atlas is mapped and how places have been discovered by the satellite using exploration devices including manually, satellite, survey maps, such as street maps collector devices. These devices crawl through all the streets to collect different data points that include information about the streets, houses, and much more details are collected. This information is used by the consumers for various purposes including GPS devices, finding and exploring different areas, address of a location, on the way finding for any incidents, constructions, road closures, and so on. ServiceNow discovery works the same way, ServiceNow discovery explores through the enterprise network identifying for the devices in scope. ServiceNow discovery probes and sensors perform the collection of infrastructure devices connected to a given enterprise network. Discovery uses Shazzam probes to determine the TCP ports opened and to see if it responds to the SNMP queries and sensors to explore any given computer or device, starting first with basic probes and then using more specific probes as it learns more. Discovery explores to check on the type of device, for each type of device, discovery uses different kinds of probes to extract more information about the computer or device, and the software that is running on it. CMDB is updated or data is federated through the ServiceNow discovery. They are identified with the discovery that is set and actioned to search the CMDB for a CI that again matches the discovered CI on the network. When a device match is found what actions to be taken are defined by the administrator when discovery runs based on the configuration when a CI is discovered; either CMDB gets updated with an existing CI or a new CI is created within the CMDB. Discovery can be scheduled to perform the scan on certain intervals; configuration management keeps the up to date status of the CI through the discovery. During discovery the MID Server looks back on the probes to run from the ServiceNow instance and executes probes to retrieves the results to the ServiceNow instance or the CMDB for processing. No data is retained on the MID Server. The data collected by these probes are processed by sensors. ServiceNow is hosted in the ServiceNow data centers spanned across the globe. ServiceNow as an application does not have the ability to communicate with any given enterprise network. Traditionally, there are two different types of discovery tools on the market: Agent: A piece of software is installed on the servers or individual systems that sends all information about the system to the CMDB. Agentless: Usually doesn't require any individual installations on the systems or components. They utilize a single system or software to usually probe and sense the network by scanning and federating the CMDB. ServiceNow is an agentless discovery that does not require any individual software to be installed, it uses MID Server. Discovery is available as a separate subscription from the rest of the ServiceNow platform and requires the discovery plugin. 
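Once discovery has populated the CMDB, the discovered records can also be read back programmatically. As an illustration only — this is not part of the discovery application itself, and the instance name and credentials are placeholders — the following sketch uses the ServiceNow REST Table API with the Python requests library to list a few discovered configuration items:

#!/usr/bin/env python3
# Illustrative sketch: reading discovered CIs back out of the CMDB through
# the ServiceNow Table API. Instance, user, and password are placeholders.
import requests

instance = 'https://yourinstance.service-now.com'
auth = ('admin', 'password')
headers = {'Accept': 'application/json'}

# Query the base CI table; sysparm_fields keeps the payload small and
# sysparm_limit keeps this example short.
params = {
    'sysparm_fields': 'name,sys_class_name,ip_address',
    'sysparm_limit': 10,
}
resp = requests.get(instance + '/api/now/table/cmdb_ci',
                    auth=auth, headers=headers, params=params)
resp.raise_for_status()

for ci in resp.json().get('result', []):
    print(ci.get('name'), ci.get('sys_class_name'), ci.get('ip_address'))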
MID Server is a Java application that runs on any Windows, UNIX, or Linux system residing within the enterprise network that needs to be discovered. The MID Server is the bridge and communicator between the ServiceNow instance, which sits somewhere in the cloud, and the enterprise network, which is secured and controlled. The MID Server uses several techniques to probe devices without using agents. Depending on the type of infrastructure component, the MID Server uses the appropriate protocol to gather information from it; for example, to gather information from network devices the MID Server will use the Simple Network Management Protocol (SNMP), while to connect to UNIX systems it will use SSH. The following table shows the different ServiceNow discovery probe types:

Device                                          Probe type
Windows computers and servers                   Remote WMI queries, shell commands
UNIX and Linux servers                          Shell commands (via the SSH protocol)
Storage                                         CIM/WBEM queries
Printers                                        SNMP queries
Network gear (switches, routers, and so on)     SNMP queries
Web servers                                     HTTP header examination
Uninterruptible Power Supplies (UPS)            SNMP queries

Credentials
The ServiceNow discovery and orchestration features require credentials to be able to access the enterprise network; these credentials vary across networks and devices. Credentials such as usernames, passwords, and certificates need a secure place to be stored. The ServiceNow credentials application stores credentials in an encrypted format in the credentials table. Credential tagging allows workflow creators to assign individual credentials to any activity in an orchestration workflow, or to assign different credentials to each occurrence of the same activity type in an orchestration workflow. Credential tagging also works with credential affinities. Credentials can be assigned an order value that forces discovery and orchestration to try all the credentials, in order, when orchestration attempts to run a command or discovery tries to query a device. The credentials table can contain many credentials; based on the pattern of usage, the credentials application keeps track of which credential worked for a device, which lets discovery and orchestration work faster after the first successful connection because the system knows which credential to use for a faster logon to that device the next time. Image source: wiki.servicenow.com Credentials are encrypted automatically with a fixed instance key when they are submitted or updated in the credentials (discovery_credentials) table. When credentials are requested by the MID Server, the platform decrypts the credentials using the following process: The credentials are decrypted on the instance with the password2 fixed key. The credentials are re-encrypted on the instance with the MID Server's public key. The credentials are encrypted on the load balancer with SSL. The credentials are decrypted on the MID Server with SSL. The credentials are decrypted on the MID Server with the MID Server's private key. The ServiceNow credentials application integrates with CyberArk credential storage. The MID Server integration with the CyberArk vault enables orchestration and discovery to run without storing any credentials on the ServiceNow instance. The instance maintains a unique identifier for each credential, the credential type (such as SSH, SNMP, or Windows), and any credential affinities. The MID Server obtains the credential identifier and IP address from the instance, and then uses the CyberArk vault to resolve these elements into a usable credential.
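The decryption steps above rely on ordinary public-key cryptography: the instance encrypts the credential with the MID Server's public key, and only the MID Server's private key can recover it. The following sketch is purely conceptual — it uses the third-party Python cryptography package and is not ServiceNow code — but it demonstrates the same one-way property:

# Conceptual illustration of the public-/private-key hop described above.
# This is NOT ServiceNow code; it only shows that data encrypted with a
# public key can be read back solely with the matching private key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In the real flow, the MID Server generates and keeps the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# "Instance side": encrypt a sample credential with the public key.
ciphertext = public_key.encrypt(b'ssh-password-example', oaep)

# "MID Server side": only the private key can decrypt it.
print(private_key.decrypt(ciphertext, oaep))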
The CyberArk integration requires the external credential storage plugin, which is available by request. The CyberArk integration supports these ServiceNow credential types: CIM JMS SNMP community SSH SSH private key (with key only) VMware Windows Orchestration activities that use these network protocols support the use of credentials stored on a CyberArk vault: SSH PowerShell JMS SFTP Summary In this article, we covered an overview of ITOM, explored different ServiceNow ITOM components including high level architecture, functional aspects of ServiceNow ITOM components that include discovery, credentials, dependency views, and, cloud management.  Resources for Article: Further resources on this subject: Management of SOA Composite Applications [article] Working with Business Rules to Define Decision Points in Oracle SOA Suite 11g R1 [article] Introduction to SOA Testing [article]

Using Android Wear 2.0

Raka Mahesa
04 Apr 2017
6 min read
As of this writing, Android Wear 2.0 was unveiled by Google a few weeks ago. Like most second iterations of software, this latest version of Android Wear adds various new features that make the platform easier to use and much more functional to its users. But what about its developers? Is there any critical change that developers should know about for the platform? Let's find out together. One of the biggest additions to Android Wear 2.0 is the ability of apps to run on the watch without needing a companion app on the phone. Devices running Android Wear 2.0 will have their own Google Play Store app, as well as reliable internet from Wi-Fi or a cellular connection, allowing apps to be installed and operated without requiring a phone. This feature, known as "Standalone App," is a big deal for developers. While it's not really complicated to implement said feature, we must now reevaluate about how to distribute our apps and whether our apps should work independently, or should they be embedded to a phone app like before. So let's get into the meat of things. Right now Android Wear 2.0 supports the following types of apps: - Standalone apps that do not require a phone app. - Standalone apps that require a phone app. - Non-Standalone apps that are embedded in a phone app. In this case, "Standalone apps" means apps that are not included in a phone app and can be downloaded separately on the Play Store on the watch. After all, a standalone app may still require a phone app to function. To distribute a standalone watch app, all we have to do is designate an app as standalone and upload the APK to the Google Play Developer Console. To designate an app as standalone, simply add the following metadata to the <application> section in the app manifest file. <meta-data android_name="com.google.android.wearable.standalone" android_value="true" /> Do note that any app that has that metadata will be available to download on the watch Play Store, even if the value is set to false. Setting the value to false will simply limit the app to smart devices that have been paired to phones that have Play Store installed. One more thing about Standalone Apps: They are not supported on Android Wear before 2.0. So, to support all versions of Android Wear, we will have to provide both the Standalone and Non-Standalone APKs. Both of them need the same package name and must be uploaded under the same app, with the Standalone APK having a higher versionCode value so the Play Store will install that version when requested by a compatible device. All right, with that settled, let's move on to another big addition introduced by Android Wear 2.0: the Complication API. In case you're not familiar with the world of watchmaking. Complications are areas in a watch that show data other than the current time. In traditional watches, they can be a stopwatch or the current date. In smartwatches, they can be a battery indicator or a display for a number of unread emails. In short, complications are Android widgets for smart watches. Unlike widgets on Android phones, however, the user interface that displays a complication data is not made by the same developer whose data was displayed. Android Wear 2.0 gives the responsibility of displaying the complication data to the watch face developer, so an app developer has no say on how his app data will look on the watch face. 
To accommodate that Complication system, Android Wear provides a set of complication types that all watch faces have to be able to display, which are:
- Icon type
- Short Text display
- Long Text display
- Small Image type
- Large Image type
- Ranged Value type (value with minimum and maximum limit, like battery life)
Some complication types may have additional data that they can show. For example, the Short Text complication may also show an icon if the data provides one, and the Long Text complication can show a title text if that data was provided. Okay, so now we know how the data is going to be displayed to the user. How then do we provide said data to the watch face? To do that, first we have to create a new Service class that inherits from the ComplicationProviderService class. Then, on that class we just created, we override the onComplicationUpdate() function and provide the ComplicationManager object with data from our app, like the following:

@Override
public void onComplicationUpdate(int complicationID, int type, ComplicationManager manager) {
    if (type == SHORT_TEXT) {
        // Build a short-text complication with an icon and a tap action
        ComplicationData data = new ComplicationData.Builder(SHORT_TEXT)
            .setShortText(dataShortText)
            .setIcon(appIconResource)
            .setTapAction(onTapIntent)
            .build();
        manager.updateComplicationData(complicationID, data);
    } else if (type == LONG_TEXT) {
        // Long-text complications can also carry a title
        ComplicationData data = new ComplicationData.Builder(LONG_TEXT)
            .setLongTitle(dataTitle)
            .setLongText(dataLongText)
            .setIcon(appIconResource)
            .setTapAction(onTapIntent)
            .build();
        manager.updateComplicationData(complicationID, data);
    }
}

As can be seen from the code above, we use ComplicationData.Builder to provide the correct data based on the requested complication type. You may notice the setTapAction() function and wonder what it is for. Well, you may want the user seeing your data to be able to tap the complication and trigger an action. Using setTapAction() you can provide an Intent that will be executed later when the complication is tapped. One last thing to do is to register the service in the project manifest with a filter for the android.support.wearable.complications.ACTION_COMPLICATION_UPDATE_REQUEST intent, like the following:

<service
    android:name=".ComplicationProviderService"
    android:label="ServiceLabel" >
    <intent-filter>
        <action android:name="android.support.wearable.complications.ACTION_COMPLICATION_UPDATE_REQUEST" />
    </intent-filter>
</service>

And that's it for all the biggest changes to Android Wear 2.0! For other additions and changes to this version of Android Wear, like the new CurvedLayout, a new notification display, the Rotary Input API, and more, you can read the official documentation. About the author Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/) who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

How to use XmlHttpRequests to Send POST to Server

Antonio Cucciniello
03 Apr 2017
5 min read
So, you need to send some bits of information from your browser to the server in order to complete some processing. Maybe you need the information to search for something in a database, or just to update something on your server. Today I am going to show you how to send some data to your server from the client through a POST request using XmlHttpRequest. First, we need to set up our environment! Set up The first thing to make sure you have is Node and NPM installed. Create a new directory for your project; here we will call it xhr-post: $ mkdir xhr-post $ cd xhr-post Then we would like to install express.js and body-parser: $ npm install express $ npm install body-parser Express makes it easy for us to handle HTTP requests, and body-parser allows us to parse incoming request bodies. Let's create two files: one for our server called server.js and one for our front end code called index.html. Then initialize your repo with a package.json file by doing: $ npm init Client Now it’s time to start with some front end work. Open and edit your index.html file with: <!doctype html> <html> <h1> XHR POST to Server </h1> <body> <input type='text' id='num' /> <script> function send () { var number = { value: document.getElementById('num').value } var xhr = new window.XMLHttpRequest() xhr.open('POST', '/num', true) xhr.setRequestHeader('Content-Type', 'application/json;charset=UTF-8') xhr.send(JSON.stringify(number)) } </script> <button type='button' value='Send' name='Send' onclick='send()' > Send </button> </body> </html> This file simply has a input field to allow users to enter some information, and a button to then send the information entered to the server. What we should focus on here is the button's onclick method send(). This is the function that is called once the button is clicked. We create a JSON object to hold the value from the text field. Then we create a new instance of an XMLHttpRequest with xhr. We call xhr.open() to initialize our request by giving it a request method (POST), the url we would like to open the request with ('/num') and determine if it should be asynchronous or not (set true for asynchronous). We then call xhr.setRequestHeader(). This sets the value of the HTTP request to json and UTF-8. As a last step, we send the request with xhr.send(). We pass the value of the text box and stringify it to send the data as raw text to our server, where it can be manipulated. Server Here our server is supposed to handle the POST request and we are simply going to log the request received from the client. const express = require('express') const app = express() const path = require('path') var bodyParser = require('body-parser') var port = 3000 app.listen(port, function () { console.log('We are listening on port ' + port) }) app.use(bodyParser.urlencoded({extended: false})) app.use(bodyParser.json()) app.get('*', function (req, res) { res.sendFile(path.join(__dirname, '/index.html')) }) app.post('/num', function (req, res) { var num = req.body.value console.log(num) return res.end('done') }) At the top, we declare our variables, obtaining an instance of express, path and body-parser. Then we set our server to listen on port 3000. Next, we use bodyParser object to decide what kind of information we would like to parse, we set it to json because we sent a json object from our client, if you recall the last section. 
This is done with: app.use(bodyParser.json()) Then we serve our html file in order to see our front end created in the last section with: app.get('*', function (req, res) { res.sendFile(path.join(__dirname, '/index.html')) }) The last part of server.js is where we handle the POST request from the client. We access the value sent over by checking for corresponding property on the body object which is part of the request object. Then, as a last step for us to verify we have the correct information, we will log the data received to the console and send a response to the client. Test Let's test what we have done. In the project directory, we can run: $ node server.js Open your web browser and go to the url localhost:3000. This is what your web page should look like: This is what your output to the console should look like if you enter a 5 in the input field: Conclusion You are all done! You now have a web page that sends some JSON data to your server using XmlHttpRequest! Here is a summary of what we went over: Created a front end with an input field and button Created a function for our button to send an XmlHttpRequest Created our server to listen on port 3000 Served our html file Handled our POST request at route '/num' Logged the value to our console If you enjoyed this post, share it on twitter. Check out the code for this tutorial on GitHub. Possible Resources Check out my GitHub View my personal blog Information on XmlHtttpRequest GitHub pages for: express body-parser About the author Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.Js). He is from New Jersey, USA. His most recent project called Edit Docs is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. To contact Antonio, e-mail him at Antonio.cucciniello16@gmail.com, follow him on twitter at @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.

C compiler, Device Drivers and Useful Developing Techniques

Packt
17 Mar 2017
22 min read
In this article by Rodolfo Giometti, author of the book GNU/Linux Rapid Embedded Programming, we're going to focus our attention on the C compiler (with its counterpart, the cross-compiler), on when we have to (or can choose to) use native or cross-compilation, and on the differences between them. (For more resources related to this topic, see here.) Then we'll see some kernel topics used later in this article (configuration, recompilation, and the device tree), and then we'll look a bit deeper at device drivers, how they can be compiled, and how they can be put into a kernel module (that is, kernel code that can be loaded at runtime). We'll present different kinds of computer peripherals and, for each of them, we'll try to explain how the corresponding device driver works, starting from the compilation stage, through the configuration, to the final usage. As an example we'll try to implement a very simple driver in order to give the reader some interesting points of view and some simple advice about kernel programming (which is not covered by this article!). We're going to present the root filesystem's internals, and we'll spend some words on a particular root filesystem that can be very useful during the early development stages: the Network File System. As a final step we'll propose the usage of an emulator in order to execute a complete target machine's Debian distribution on a host PC. This article is still part of the introductory material: experienced developers who already know these topics well may skip it, but the author's suggestion remains the same, that is, to read the article anyway in order to discover which development tools will be used later and, maybe, some new techniques to manage their programs. The C compiler The C compiler is a program that translates the C language into a binary format that the CPU can understand and execute. This is the most basic way (and the most powerful one) to develop programs on a GNU/Linux system. Despite this fact, most developers prefer using other, higher-level languages rather than C, due to the fact that the C language has no garbage collection, no object-oriented programming, and other issues, giving up part of the execution speed that a C program offers; but if we have to recompile the kernel (the Linux kernel is written in C, plus a few parts in assembly), develop a device driver, or write high-performance applications, then the C language is a must-have. We can have a compiler and a cross-compiler, and till now we've already used the cross-compiler several times to recompile the kernel and the bootloaders; however, we can decide to use a native compiler too. In fact, using native compilation may be easier but, in most cases, it is very time consuming; that's why it's really important to know the pros and cons. Programs for embedded systems are traditionally written and compiled using a cross-compiler for that architecture on a host PC. That is, we use a compiler that can generate code for a foreign machine architecture, meaning a different CPU instruction set from the compiler host's one. Native & foreign machine architecture For example, the developer kits shown in this article are ARM machines while (most probably) our host machine is an x86 (that is, a normal PC), so if we try to compile a C program on our host machine the generated code cannot be used on an ARM machine, and vice versa. Let's verify it!
Here the classic Hello World program below: #include <stdio.h> int main() { printf("Hello Worldn"); return 0; } Now we compile it on my host machine using the following command: $ make CFLAGS="-Wall -O2" helloworld cc -Wall -O2 helloworld.c -o helloworld Careful reader should notice here that we’ve used command make instead of the usual cc. This is a perfectly equivalent way to execute the compiler due the fact, even if without a Makefile, command make already knows how to compile a C program. We can verify that this file is for the x86 (that is the PC) platform by using the file command: $ file helloworld helloworld: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=0f0db5e65e1cd09957ad06a7c1b7771d949dfc84, not stripped Note that the output may vary according to the reader's host machine platform. Now we can just copy the program into one developer kit (for instance the the BeagleBone Black) and try to execute it: root@bbb:~# ./helloworld -bash: ./helloworld: cannot execute binary file As we expected the system refuses to execute code generated for a different architecture! On the other hand, if we use a cross-compiler for this specific CPU architecture the program will run as a charm! Let's verify this by recompiling the code but paying attention to specify that we wish to use the cross-compiler instead. So delete the previously generated x86 executable file (just in case) by using the rm helloworld command and then recompile it using the cross-compiler: $ make CC=arm-linux-gnueabihf-gcc CFLAGS="-Wall -O2" helloworld arm-linux-gnueabihf-gcc -Wall -O2 helloworld.c -o helloworld Note that the cross-compiler's filename has a special meaning: the form is <architecture>-<platform>-<binary-format>-<tool-name>. So the filename arm-linux-gnueabihf-gcc means: ARM architecture, Linux platform, gnueabihf (GNU EABI Hard-Float) binary format and gcc (GNU C Compiler) tool. Now we use the file command again to see if the code is indeed generated for the ARM architecture: $ file helloworld helloworld: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=31251570b8a17803b0e0db01fb394a6394de8d2d, not stripped Now if we transfer the file as before on the BeagleBone Black and try to execute it, we get: root@bbb:~# ./helloworld Hello World! Therefore we see the cross-compiler ensures that the generated code is compatible with the architecture we are executing it on. In reality in order to have a perfectly functional binary image we have to make sure that the library versions, header files (also the headers related to the kernel) and cross compiler options match the target exactly or, at least, they are compatible. In fact we cannot execute cross-compiled code against the glibc on a system having, for example, musl libc (or it can run in a no predictable manner). In this case we have perfectly compatible libraries and compilers but, in general, the embedded developer should perfectly know what he/she is doing. A common trick to avoid compatibility problems is to use static compilation but, in this case, we get huge binary files. Now the question is: when should we use the compiler and when the cross-compiler? We should compile on an embedded system because: We can (see below why). There would be no compatibility issues as all the target libraries will be available. 
In cross-compilation it becomes hell when we need all the libraries (if the project uses any) in the ARM format on the host PC. So we not only have to cross-compile the program but also its dependencies. And if the same version dependencies are not installed on the embedded system's rootfs, then good luck with troubleshooting! It's easy and quick. We should cross-compile because: We are working on a large codebase and we don't want to waste too much time compiling the program on the target, which may take from several minutes to several hours (or even it may result impossible). This reason might be strong enough to overpower the other reasons in favor of compiling on the embedded system itself. PCs nowadays have multiple cores so the compiler can process more files simultaneously. We are building a full Linux system from scratch. In any case, below, we will show an example of both native compilation and cross-compilation of a software package, so the reader may well understand the differences between them. Compiling a C program As first step let's see how we can compile a C program. To keep it simple we’ll start compiling a user-space program them in the next sections, we’re going to compile some kernel space code. Knowing how to compile an C program can be useful because it may happen that a specific tool (most probably) written in C is missing into our distribution or it’s present but with an outdated version. In both cases we need to recompile it! To show the differences between a native compilation and a cross-compilation we will explain both methods. However a word of caution for the reader here, this guide is not exhaustive at all! In fact the cross-compilation steps may vary according to the software packages we are going to cross-compile. The package we are going to use is the PicoC interpreter. Each Real-Programmers(TM) know the C compiler, which is normally used to translate a C program into the machine language, but (maybe) not all of them know that a C interpreter exists too! Actually there are many C interpreters, but we focus our attention on PicoC due its simplicity in cross-compiling it. As we already know, an interpreter is a program that converts the source code into executable code on the fly and does not need to parse the complete file and generate code at once. This is quite useful when we need a flexible way to write brief programs to resolve easy tasks. In fact to fix bugs in the code and/or changing the program behavior we simply have to change the program source and then re-executing it without any compilation at all. We just need an editor to change our code! For instance, if we wish to read some bytes from a file we can do it by using a standard C program, but for this easy task we can write a script for an interpreter too. Which interpreter to choose is up to developer and, since we are C programmers, the choice is quite obvious. That's why we have decided to use PicoC. Note that the PicoC tool is quite far from being able to interpret all C programs! In fact this tool implements a fraction of the features of a standard C compiler; however it can be used for several common and easy tasks. Please, consider the PicoC as an education tool and avoid using it in a production environment! The native compilation Well, as a first step we need to download the PicoC source code from its repository at: http://github.com/zsaleeba/picoc.git into our embedded system. 
This time we decided to use the BeagleBone Black and the command is as follows: root@bbb:~# git clone http://github.com/zsaleeba/picoc.git When finished we can start compiling the PicoC source code by using: root@bbb:~# cd picoc/ root@bbb:~/picoc# make Note that if we get the error below during the compilation we can safely ignore it: /bin/sh: 1: svnversion: not found However during the compilation we get: platform/platform_unix.c:5:31: fatal error: readline/readline.h: No such file or directory #include <readline/readline.h> ^ compilation terminated. <builtin>: recipe for target 'platform/platform_unix.o' failed make: *** [platform/platform_unix.o] Error 1 Bad news, we have got an error! This because the readline library is missing; hence we need to install it to keep this going. In order to discover which package's name holds a specific tool, we can use the following command to discover the package that holds the readline library: root@bbb:~# apt-cache search readline The command output is quite long, but if we carefully look at it we can see the following lines: libreadline5 - GNU readline and history libraries, run-time libraries libreadline5-dbg - GNU readline and history libraries, debugging libraries libreadline-dev - GNU readline and history libraries, development files libreadline6 - GNU readline and history libraries, run-time libraries libreadline6-dbg - GNU readline and history libraries, debugging libraries libreadline6-dev - GNU readline and history libraries, development files This is exactly what we need to know! The required package is named libreadline-dev. In the Debian distribution all libraries packages are prefixed by the lib string while the -dev postfix is used to mark the development version of a library package. Note also that we choose the package libreadline-dev intentionally leaving the system to choose to install version 5 o 6 of the library. The development version of a library package holds all needed files whose allow the developer to compile his/her software to the library itself and/or some documentation about the library functions. For instance, into the development version of the readline library package (that is into the package libreadline6-dev) we can find the header and the object files needed by the compiler. We can see these files using the following command: #root@bbb:~# dpkg -L libreadline6-dev | egrep '.(so|h)' /usr/include/readline/rltypedefs.h /usr/include/readline/readline.h /usr/include/readline/history.h /usr/include/readline/keymaps.h /usr/include/readline/rlconf.h /usr/include/readline/tilde.h /usr/include/readline/rlstdc.h /usr/include/readline/chardefs.h /usr/lib/arm-linux-gnueabihf/libreadline.so /usr/lib/arm-linux-gnueabihf/libhistory.so So let's install it: root@bbb:~# aptitude install libreadline-dev When finished we can relaunch the make command to definitely compile our new C interpreter: root@bbb:~/picoc# make gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o clibrary.o clibrary.c ... gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -o picoc picoc.o table.o lex.o parse.o expression.o heap.o type.o variable.o clibrary.o platform.o include.o debug.o platform/platform_unix.o platform/library_unix.o cstdlib/stdio.o cstdlib/math.o cstdlib/string.o cstdlib/stdlib.o cstdlib/time.o cstdlib/errno.o cstdlib/ctype.o cstdlib/stdbool.o cstdlib/unistd.o -lm -lreadline Well now the tool is successfully compiled as expected! 
To test it we can use again the standard Hello World program above but with a little modification, in fact the main() function is not defined as before! This is due the fact PicoC returns an error if we use the typical function definition. Here the code: #include <stdio.h> int main() { printf("Hello Worldn"); return 0; } Now we can directly execute it (that is without compiling it) by using our new C interpreter: root@bbb:~/picoc# ./picoc helloworld.c Hello World An interesting feature of PicoC is that it can execute C source file like a script, that is we don't need to specify a main() function as C requires and the instructions are executed one by one from the beginning of the file as a normal scripting language does. Just to show it we can use the following script which implements the Hello World program as C-like script (note that the main() function is not defined!): printf("Hello World!n"); return 0; If we put the above code into the file helloworld.picoc we can execute it by using: root@bbb:~/picoc# ./picoc -s helloworld.picoc Hello World! Note that this time we add the -s option argument to the command line in order to instruct the PicoC interpreter that we wish using its scripting behavior. The cross-compilation Now let's try to cross-compile the PicoC interpreter on the host system. However, before continuing, we’ve to point out that this is just an example of a possible cross-compilation useful to expose a quick and dirty way to recompile a program when the native compilation is not possible. As already reported above the cross-compilation works perfectly for the bootloader and the kernel while for user-space application we must ensure that all involved libraries (and header files) used by the cross-compiler are perfectly compatible with the ones present on the target machine otherwise the program may not work at all! In our case everything is perfectly compatible so we can go further. As before we need to download the PicoC's source code by using the same git command as above. Then we have to enter the following command into the newly created directory picoc: $ cd picoc/ $ make CC=arm-linux-gnueabihf-gcc arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o picoc.o picoc.c ... platform/platform_unix.c:5:31: fatal error: readline/readline.h: No such file or directory compilation terminated. <builtin>: recipe for target 'platform/platform_unix.o' failed make: *** [platform/platform_unix.o] Error 1 We specify the CC=arm-linux-gnueabihf-gcc commad line option to force the cross-compilation. However, as already stated before, the cross-compilation commands may vary according to the compilation method used by the single software package. As before the system returns a linking error due to the fact that thereadline library is missing, however, this time, we cannot install it as before since we need the ARM version (specifically the armhf version) of this library and my host system is a normal PC! Actually a way to install a foreign package into a Debian/Ubuntu distribution exists, but it's not a trivial task nor it's an argument. A curious reader may take a look at the Debian/Ubuntu Multiarch at https://help.ubuntu.com/community/MultiArch. Now we have to resolve this issue and we have two possibilities: We can try to find a way to install the missing package, or We can try to find a way to continue the compilation without it. 
The former method is quite complex since the readline library has in turn other dependencies and we may take a lot of time trying to compile them all, so let's try to use the latter option. Knowing that the readline library is just used to implement powerful interactive tools (such as recalling a previous command line to re-edit it, etc.) and since we are not interested in the interactive usage of this interpreter, we can hope to avoid using it. So, looking carefully into the code we see that the define USE_READLINE exists and changing the code as shown below should resolve the issue allowing us to compile the tool without the readline support: $ git diff diff --git a/Makefile b/Makefile index 6e01a17..c24d09d 100644 --- a/Makefile +++ b/Makefile @@ -1,6 +1,6 @@ CC=gcc CFLAGS=-Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -LIBS=-lm -lreadline +LIBS=-lm TARGET = picoc SRCS = picoc.c table.c lex.c parse.c expression.c heap.c type.c diff --git a/platform.h b/platform.h index 2d7c8eb..c0b3a9a 100644 --- a/platform.h +++ b/platform.h @@ -49,7 +49,6 @@ # ifndef NO_FP # include <math.h> # define PICOC_MATH_LIBRARY -# define USE_READLINE # undef BIG_ENDIAN # if defined(__powerpc__) || defined(__hppa__) || defined(__sparc__) # define BIG_ENDIAN The above output is in the unified context diff format; so the code above means that into the file Makefile the option -lreadline must be removed from variable LIBS and that into the file platform.h the define USE_READLINE must be commented out. After all the changes are in place we can try to recompile the package with the same command as before: $ make CC=arm-linux-gnueabihf-gcc arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o table.o table.c ... arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -o picoc picoc.o table.o lex.o parse.o expression.o heap.o type.o variable.o clibrary.o platform.o include.o debug.o platform/platform_unix.o platform/library_unix.o cstdlib/stdio.o cstdlib/math.o cstdlib/string.o cstdlib/stdlib.o cstdlib/time.o cstdlib/errno.o cstdlib/ctype.o cstdlib/stdbool.o cstdlib/unistd.o -lm Great! We did it! Now, just to verify that everything is working correctly, we can simply copy the picoc file into our BeagleBone Black and test it as before. Compiling a kernel module As a special example of cross-compilation we'll take a look at a very simple code which implement a dummy module for the Linux kernel (the code does nothing but printing some messages on the console) and we’ll try to cross-compile it. Let's consider this following kernel C code of the dummy module: #include <linux/module.h> #include <linux/init.h> /* This is the function executed during the module loading */ static int dummy_module_init(void) { printk("dummy_module loaded!n"); return 0; } /* This is the function executed during the module unloading */ static void dummy_module_exit(void) { printk("dummy_module unloaded!n"); return; } module_init(dummy_module_init); module_exit(dummy_module_exit); MODULE_AUTHOR("Rodolfo Giometti <giometti@hce-engineering.com>"); MODULE_LICENSE("GPL"); MODULE_VERSION("1.0.0"); Apart some defines relative to the kernel tree the file holds two main functions  dummy_module_init() and  dummy_module_exit() and some special definitions, in particular the module_init() and module_exit(), that address the first two functions as the entry and exit functions of the current module (that is the function which are called at module loading and unloading). 
Then consider the following Makefile:

ifndef KERNEL_DIR
$(error KERNEL_DIR must be set in the command line)
endif
PWD := $(shell pwd)
CROSS_COMPILE = arm-linux-gnueabihf-

# This specifies the kernel module to be compiled
obj-m += dummy.o

# The default action
all: modules

# The main tasks
modules clean:
	make -C $(KERNEL_DIR) ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- SUBDIRS=$(PWD) $@

OK, now to cross-compile the dummy module on the host PC, we can use the following command:

$ make KERNEL_DIR=~/A5D3/armv7_devel/KERNEL/
make -C /home/giometti/A5D3/armv7_devel/KERNEL/ SUBDIRS=/home/giometti/github/chapter_03/module modules
make[1]: Entering directory '/home/giometti/A5D3/armv7_devel/KERNEL'
  CC [M]  /home/giometti/github/chapter_03/module/dummy.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      /home/giometti/github/chapter_03/module/dummy.mod.o
  LD [M]  /home/giometti/github/chapter_03/module/dummy.ko
make[1]: Leaving directory '/home/giometti/A5D3/armv7_devel/KERNEL'

It's important to note that when a device driver is released as a separate package with a Makefile compatible with the kernel's own, we can compile it natively too! However, even in this case, we need to install a kernel source tree on the target machine anyway. Not only that, but the sources must also be configured in the same manner as the running kernel, or the resulting driver will not work at all! In fact, a kernel module will only load and run with the kernel it was compiled against.

The cross-compilation result is now stored in the file dummy.ko; in fact we have:

$ file dummy.ko
dummy.ko: ELF 32-bit LSB relocatable, ARM, EABI5 version 1 (SYSV), BuildID[sha1]=ecfcbb04aae1a5dbc66318479ab9a33fcc2b5dc4, not stripped

The kernel module has been compiled for the SAMA5D3 Xplained but, of course, it can be cross-compiled for the other developer kits in a similar manner. So let's copy our new module to the SAMA5D3 Xplained by using the scp command through the USB Ethernet connection:

$ scp dummy.ko root@192.168.8.2:
root@192.168.8.2's password:
dummy.ko                                   100% 3228     3.2KB/s   00:00

Now, if we switch to the SAMA5D3 Xplained, we can use the modinfo command to get some information about the kernel module:

root@a5d3:~# modinfo dummy.ko
filename:       /root/dummy.ko
version:        1.0.0
license:        GPL
author:         Rodolfo Giometti <giometti@hce-engineering.com>
srcversion:     1B0D8DE7CF5182FAF437083
depends:
vermagic:       4.4.6-sama5-armv7-r5 mod_unload modversions ARMv7 thumb2 p2v8

Then, to load and unload it into and from the kernel, we can use the insmod and rmmod commands as follows:

root@a5d3:~# insmod dummy.ko
[ 3151.090000] dummy_module loaded!
root@a5d3:~# rmmod dummy.ko
[ 3153.780000] dummy_module unloaded!

As expected, the dummy module's messages have been displayed on the serial console. Note that if we are using an SSH connection, we have to use the dmesg or tail -f /var/log/kern.log commands to see the kernel's messages. Note also that the commands modinfo, insmod, and rmmod are explained in detail in a section below.

The kernel and DTS files

The main target of this article is to give several suggestions for rapid programming methods to be used on an embedded GNU/Linux system; however, the main target of every embedded developer is to write programs that manage peripherals, and that monitor or control devices and perform other similar tasks to interact with the real world. So we mainly need to know the techniques useful to get access to a peripheral's data and settings. That's why we first need to know how to recompile the kernel and how to configure it.
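Since, as noted above, a kernel module will only load and run with the kernel it was compiled against, a quick sanity check before running insmod is to compare the module's vermagic string with the running kernel release. The minimal sketch below simply restates the values from the modinfo output shown earlier:

root@a5d3:~# modinfo -F vermagic dummy.ko
4.4.6-sama5-armv7-r5 mod_unload modversions ARMv7 thumb2 p2v8
root@a5d3:~# uname -r
4.4.6-sama5-armv7-r5

If the release strings do not match, the module should be rebuilt against the sources (and the configuration) of the kernel actually running on the board.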
Summary

In this article we took a long tour through three of the most important topics of GNU/Linux embedded programming: the C compiler (and the cross-compiler), the kernel (and the device drivers with the device tree), and the root filesystem. We also presented NFS, in order to have a remote root filesystem over the network, and we introduced the usage of an emulator in order to execute foreign code on the host PC.

Resources for Article: Further resources on this subject: Visualizations made easy with gnuplot [article] Revisiting Linux Network Basics [article] Fundamental SELinux Concepts [article]

How to build a dropdown menu using Can.js

Liz Tom
17 Mar 2017
7 min read
This post describes how to build a dropdown menu using Can.js. In this example, we will build a dropdown menu of names. If you'd like to see the complete example of what you'll be building, you can check it out here.

Setup

The very first thing you will need to do is to import Can.js and jQuery:

<script src="https://code.jquery.com/jquery-2.2.4.js"></script>
<script src="https://rawgit.com/canjs/canjs/v3.0.0-pre.6/dist/global/can.all.js"></script>

Our First Model

To make a model, you use can.DefineMap. If you're following along using CodePen or JSBin, type the following piece of code in the js tab:

var Person = can.DefineMap.extend({
  id: "string",
  name: "string",
});

Here, we have a model named Person that defines the properties id and name as string types. You can read about the different types that Can.js has here: https://canjs.com/doc/can-map-define._type.html. Can.js 3.0 allows us to declare types in two different ways. We could have also written the following piece of code:

var Person = can.DefineMap.extend({
  id: {
    type: "string",
  },
  name: {
    type: "string",
  },
});

I tend to use the second syntax only when I have other settings I need to define on a particular property. The shorthand of the first way makes things a bit easier.

Getting it on the Page

Since we're building a dropdown, we will most likely want the user to be able to see it. We're going to use can.stache to help us with this. In our HTML tab, write the following lines of code:

<script type='text/stache' id='person-template'>
  <h1>Person Template</h1>
  <input placeholder="{{person.test}}"/>
</script>

The {{person.test}} is there so you can see if you have it working. We'll add a test property to our model:

var Person = can.DefineMap.extend({
  id: "string",
  name: "string",
  test: {
    value: "It's working!"
  }
});

Now, we need to create a View Model. We're going to use DefineMap again. Add the following to your js file:

var PersonVM = can.DefineMap.extend({
  person: {Value: Person},
});

You might notice that I'm using Value with a capital "V". You have the option of using both value and Value. The difference is that Value causes new to be used. Now, to use this as our View Model, you'll need to add the following to your js tab:

can.Component.extend({
  tag: 'person',
  view: can.stache.from('person-template'),
  ViewModel: PersonVM
});

var vm = new PersonVM();
var template = can.stache.from('person-template');
var frag = template(vm);
document.body.appendChild(frag);

The can.stache.from('person-template') uses the ID from our script tag. The tag value person is there so that we can use this component elsewhere, like <person>. If you check out the preview tab, you should see a header followed by an input box with the placeholder text we set. If you change the value of our test property, you should see the live binding updating.

Fixtures

Can.js allows us to easily add fixtures so we can test our UI without needing the API set up. This is great for development, as the UI and the API don't always sync up in terms of development. We start off by setting up our set algebra. Put the following at the top of your js tab:

var personAlgebra = new set.Algebra(
  set.props.id('id'),
  set.props.sort('sort')
);

var peopleStore = can.fixture.store([
  { name: "Mary", id: 5 },
  { name: "John", id: 6 },
  { name: "Peter", id: 7 }
], personAlgebra);

The set.Algebra helps us with some things. The set.props.id allows us to change the ID property. A very common example is that Mongo uses _id.
We can easily change the ID property to map responses from the server that use _id to our can model's id. In our fixture, we are faking some data that might already be stored in our database. Here, we have three people that have already been added. We need to add a fixture route to catch our requests, so we can send back our fixture data instead of trying to make a call to our API:

can.fixture("/api/people/{id}", peopleStore);

Here, we're telling can to use the people store whenever we have any requests using /api/people/{id}. Next, we will need to tell Can.js how to use everything we just set up. We're going to use can-connect for that. Add this to your js tab:

Person.connection = can.connect.superMap({
  Map: Person,
  List: Person.List,
  url: "/api/people",
  name: "person",
  algebra: personAlgebra
});

Does it work?

Let's see if it's working. We'll write a function in our viewModel that allows us to save. can-connect comes with some helper functions that allow us to do basic CRUD functionality. Keeping this in mind, update your Person View Model as follows:

var PersonCreateVM = can.DefineMap.extend({
  person: {Value: Person},
  createPerson: function(){
    this.person.save().then(function(){
      this.person = new Person();
    }.bind(this));
  }
});

Now, we have a createPerson function that saves a new person to the database and then resets person to a new Person. In order to use this, we can update our input tag to the following:

<input placeholder="Name" {($value)}="person.name" ($enter)="createPerson()"/>

This two-way binds the value of the input to our viewModel. Now, when we update the input, person.name also gets updated, and when we update person.name, the input updates as well. ($enter)="createPerson()" will call createPerson whenever we press Enter.

Populating the Select

Now that we can create people and save them, we should be able to easily create a list of names. Since we may want to use this list of names in many places in our app, we're making the list its own component; its view model goes in the js tab and its template in the HTML tab. First, we will create a view model for our people list. We're going to end up passing our people into the component. This way, we can use different people, depending on where this dropdown is being used.

var PeopleListVM = can.DefineMap.extend({
  peoplePromise: Promise,
});

can.Component.extend({
  tag: "people-list",
  view: can.stache.from("people-list-template"),
  ViewModel: PeopleListVM
});

Then update your HTML with a template. Since peoplePromise is a Promise, we want to make sure it is resolved before we populate the select menu. We also have the ability to check isRejected and isPending; value gives us the result of the promise. We also use {{#each}} to cycle through each item in a list.

<script type='text/stache' id='people-list-template'>
  {{#if peoplePromise.isResolved}}
    <select>
      {{#each peoplePromise.value}}
        <option>{{name}}</option>
      {{/each}}
    </select>
  {{/if}}
</script>

Building Blocks

We can use these components as building blocks in various parts of our app. If we create an app view model, we can put people there. We are using a getter in this case to get back a list of people. .getList({}) is available on our Person model thanks to the can-connect connection we created, and it returns a promise.

var AppVM = can.DefineMap.extend({
  people: {
    get: function(){
      return Person.getList({});
    }
  }
});

We will update our HTML to use these components. Now, we're using the tags we set up earlier. We can use the following to pass people into our people-list component: <people-list {people-promise}="people"/>.
We can't use camel case in our stache file, so we use hyphens; Can.js knows how to convert this into camel case for us.

<script type='text/stache' id='names-template'>
  <div id="nameapp">
    <h1>Names</h1>
    <person-create/>
    <people-list {people-promise}="people"/>
  </div>
</script>

Update the vm to use the app view model instead of the person view model:

var vm = new AppVM();
var template = can.stache.from("names-template");
var frag = template(vm);
document.body.appendChild(frag);

And that's it! You should have a dropdown menu that updates as you add more people.

About the author

Liz Tom is a developer at Bitovi in Portland, OR, focused on JavaScript. When she's not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.

System Architecture and Design of Ansible

Packt
16 Mar 2017
14 min read
In this article by Jesse Keating, the author of the book Mastering Ansible - Second Edition, we will cover the following topics in order to lay the foundation for mastering Ansible:

  - Ansible version and configuration
  - Inventory parsing and data sources
  - Variable types and locations
  - Variable precedence

(For more resources related to this topic, see here.)

This article provides an exploration of the architecture and design of how Ansible goes about performing tasks on your behalf. We will cover basic concepts of inventory parsing and how the data is discovered. We will also cover variable types and find out where variables can be located, the scope they can be used in, and how precedence is determined when variables are defined in more than one location.

Ansible version and configuration

There are many documents out there that cover installing Ansible in a way that is appropriate for the operating system and version that you might be using. This article will assume the use of the Ansible 2.2 version. To discover the version in use on a system with Ansible already installed, make use of the --version argument with either ansible or ansible-playbook.

Inventory parsing and data sources

In Ansible, nothing happens without an inventory. Even ad hoc actions performed on localhost require an inventory, even if that inventory consists of just the localhost. The inventory is the most basic building block of Ansible architecture. When executing ansible or ansible-playbook, an inventory must be referenced. Inventories are either files or directories that exist on the same system that runs ansible or ansible-playbook. The location of the inventory can be referenced at runtime with the inventory file (-i) argument, or by defining the path in an Ansible config file.

Inventories can be static or dynamic, or even a combination of both, and Ansible is not limited to a single inventory. The standard practice is to split inventories across logical boundaries, such as staging and production, allowing an engineer to run a set of plays against their staging environment for validation, and then follow with the same exact plays run against the production inventory set.

Static inventory

The static inventory is the most basic of all the inventory options. Typically, a static inventory will consist of a single file in the ini format. Here is an example of a static inventory file describing a single host, mastery.example.name:

mastery.example.name

That is all there is to it. Simply list the names of the systems in your inventory. Of course, this does not take full advantage of all that an inventory has to offer. If every name were listed like this, all plays would have to reference specific host names, or the special all group. This can be quite tedious when developing a playbook that operates across different sets of your infrastructure. At the very least, hosts should be arranged into groups. A design pattern that works well is to arrange your systems into groups based on expected functionality. At first, this may seem difficult if you have an environment where single systems can play many different roles, but that is perfectly fine. Systems in an inventory can exist in more than one group, and groups can even consist of other groups! Additionally, when listing groups and hosts, it's possible to list hosts without a group. These would have to be listed first, before any other group is defined, as the short sketch below shows.
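As a minimal sketch (the extra hostname is hypothetical and not part of the original example), an inventory mixing one ungrouped host with a grouped one might look like this:

# ungrouped hosts must appear before the first group header
standalone.example.name

[web]
mastery.example.name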
Let's build on our previous example and expand our inventory with a few more hosts and some groupings:

[web]
mastery.example.name

[dns]
backend.example.name

[database]
backend.example.name

[frontend:children]
web

[backend:children]
dns
database

What we have created here is a set of three groups with one system in each, and then two more groups, which logically group all three together. Yes, that's right; you can have groups of groups. The syntax used here is [groupname:children], which indicates to Ansible's inventory parser that this group, by the name of groupname, is nothing more than a grouping of other groups. The children in this case are the names of the other groups. This inventory now allows writing plays against specific hosts, low-level role-specific groups, or high-level logical groupings, or any combination.

By utilizing generic group names, such as dns and database, Ansible plays can reference these generic groups rather than the explicit hosts within. An engineer can create one inventory file that fills in these groups with hosts from a preproduction staging environment, and another inventory file with the production versions of these groupings. The playbook content does not need to change when executing on either the staging or production environment, because it refers to the generic group names that exist in both inventories. Simply refer to the right inventory to execute it in the desired environment.

Dynamic inventories

A static inventory is great and sufficient for many situations. But there are times when a statically written set of hosts is just too unwieldy to manage. Consider situations where inventory data already exists in a different system, such as LDAP, a cloud computing provider, or an in-house CMDB (inventory, asset tracking, and data warehousing) system. It would be a waste of time and energy to duplicate that data, and in the modern world of on-demand infrastructure, that data would quickly grow stale or disastrously incorrect.

Another example of when a dynamic inventory source might be desired is when your site grows beyond a single set of playbooks. Multiple playbook repositories can fall into the trap of holding multiple copies of the same inventory data, or complicated processes have to be created to reference a single copy of the data. An external inventory can easily be leveraged to access the common inventory data stored outside of the playbook repository to simplify the setup.

Thankfully, Ansible is not limited to static inventory files. A dynamic inventory source (or plugin) is an executable script that Ansible will call at runtime to discover real-time inventory data. This script may reach out into external data sources and return data, or it can just parse local data that already exists but may not be in the Ansible inventory ini format. While it is possible and easy to develop your own dynamic inventory source, Ansible provides a number of example inventory plugins, including but not limited to:

  - OpenStack Nova
  - Rackspace Public Cloud
  - DigitalOcean
  - Linode
  - Amazon EC2
  - Google Compute Engine
  - Microsoft Azure
  - Docker
  - Vagrant

Many of these plugins require some level of configuration, such as user credentials for EC2 or an authentication endpoint for OpenStack Nova.
Since it is not possible to configure additional arguments for Ansible to pass along to the inventory script, the configuration for the script must either be managed via an ini config file read from a known location, or via environment variables read from the shell environment used to execute ansible or ansible-playbook.

When ansible or ansible-playbook is directed at an executable file for an inventory source, Ansible will execute that script with a single argument, --list. This is so that Ansible can get a listing of the entire inventory in order to build up its internal objects to represent the data. Once that data is built up, Ansible will then execute the script with a different argument for every host in the data to discover variable data. The argument used in this execution is --host <hostname>, which will return any variable data specific to that host.

Variable types and location

Variables are a key component of the Ansible design. Variables allow for dynamic play content and reusable plays across different sets of inventory. Anything beyond the most basic Ansible use will utilize variables. Understanding the different variable types and where they can be located, as well as learning how to access external data or prompt users to populate variable data, is the key to mastering Ansible.

Variable types

Before diving into the precedence of variables, we must first understand the various types and subtypes of variables available to Ansible, their location, and where they are valid for use.

The first major variable type is inventory variables. These are the variables that Ansible gets by way of the inventory. These can be defined as variables that are specific to individual hosts (host_vars) or applicable to entire groups (group_vars). These variables can be written directly into the inventory file, delivered by the dynamic inventory plugin, or loaded from the host_vars/<host> or group_vars/<group> directories. These types of variables might be used to define Ansible behavior when dealing with these hosts, or site-specific data related to the applications that these hosts run. Whether a variable comes from host_vars or group_vars, it will be assigned to a host's hostvars. Accessing a host's own variables can be done just by referencing the name, such as {{ foobar }}, and accessing another host's variables can be accomplished by accessing hostvars. For example, to access the foobar variable for examplehost: {{ hostvars['examplehost']['foobar'] }}. These variables have global scope.

The second major variable type is role variables. These are variables specific to a role and are utilized by the role tasks; they have scope only within the role that they are defined in, which is to say that they can only be used within the role. These variables are often supplied as role defaults, which are meant to provide a default value for the variable but can easily be overridden when applying the role. When roles are referenced, it is possible to supply variable data at the same time, either by overriding role defaults or creating wholly new data. These variables apply to all hosts within the role and can be accessed directly, much like a host's own hostvars.

The third major variable type is play variables. These variables are defined in the control keys of a play, either directly by the vars key or sourced from external files via the vars_files key. Additionally, the play can interactively prompt the user for variable data using vars_prompt.
These variables are to be used within the scope of the play and in any tasks or included tasks of the play. The variables apply to all hosts within the play and can be referenced as if they are hostvars.

The fourth variable type is task variables. Task variables are made from data discovered while executing tasks or in the facts gathering phase of a play. These variables are host-specific and are added to the host's hostvars and can be used as such, which also means they have global scope after the point at which they were discovered or defined. Variables of this type can be discovered via gather_facts and fact modules (modules that do not alter state but rather return data), populated from task return data via the register task key, or defined directly by a task making use of the set_fact or add_host modules. Data can also be interactively obtained from the operator using the prompt argument to the pause module and registering the result:

- name: get the operators name
  pause:
    prompt: "Please enter your name"
  register: opname

There is one last variable type, the extra variables, or extra-vars type. These are variables supplied on the command line when executing ansible-playbook via --extra-vars. Variable data can be supplied as a list of key=value pairs, a quoted JSON string, or a reference to a YAML-formatted file with variable data defined within:

--extra-vars "foo=bar owner=fred"
--extra-vars '{"services":["nova-api","nova-conductor"]}'
--extra-vars @/path/to/data.yaml

Extra variables are considered global variables. They apply to every host and have scope throughout the entire playbook.

Accessing external data

Data for role variables, play variables, and task variables can also come from external sources. Ansible provides a mechanism to access and evaluate data from the control machine (the machine running ansible-playbook). The mechanism is called a lookup plugin, and a number of them come with Ansible. These plugins can be used to look up or access data by reading files, generate and locally store passwords on the Ansible host for later reuse, evaluate environment variables, pipe data in from executables, access data in the Redis or etcd systems, render data from template files, query dnstxt records, and more. The syntax is as follows:

lookup('<plugin_name>', 'plugin_argument')

For example, to use the mastery value from etcd in a debug task:

- name: show data from etcd
  debug:
    msg: "{{ lookup('etcd', 'mastery') }}"

Lookups are evaluated when the task referencing them is executed, which allows for dynamic data discovery. To reuse a particular lookup in multiple tasks and reevaluate it each time, a playbook variable can be defined with a lookup value. Each time the playbook variable is referenced, the lookup will be executed, potentially providing different values over time.

Variable precedence

There are a few major types of variables that can be defined in a myriad of locations. This leads to a very important question: what happens when the same variable name is used in multiple locations? Ansible has a precedence for loading variable data, and thus it has an order and a definition to decide which variable will win. Variable value overriding is an advanced usage of Ansible, so it is important to fully understand the semantics before attempting such a scenario.
Precedence order

Ansible defines the precedence order as follows, from highest to lowest:

  1. Extra vars (from the command line) always win
  2. Task vars (only for the specific task)
  3. Block vars (only for the tasks within the block)
  4. Role and include vars
  5. Vars created with set_fact
  6. Vars created with the register task directive
  7. Play vars_files
  8. Play vars_prompt
  9. Play vars
  10. Host facts
  11. Playbook host_vars
  12. Playbook group_vars
  13. Inventory host_vars
  14. Inventory group_vars
  15. Inventory vars
  16. Role defaults

Merging hashes

We focused on the precedence in which variables will override each other. The default behavior of Ansible is that any overriding definition for a variable name will completely mask the previous definition of that variable. However, that behavior can be altered for one type of variable: the hash. A hash variable (a dictionary in Python terms) is a dataset of keys and values. Values can be of different types for each key, and can even be hashes themselves for complex data structures.

In some advanced scenarios, it is desirable to replace just one bit of a hash, or to add to an existing hash, rather than replacing the hash altogether. To unlock this ability, a configuration change is necessary in an Ansible config file. The config entry is hash_behaviour, which takes a value of either replace or merge. A setting of merge will instruct Ansible to merge or blend the values of two hashes when presented with an override scenario, rather than the default of replace, which will completely replace the old variable data with the new data.

Let's walk through an example of the two behaviors. We will start with a hash loaded with data and simulate a scenario where a different value for the hash is provided as a higher-priority variable.

Starting data:

hash_var:
  fred:
    home: Seattle
    transport: Bicycle

New data loaded via include_vars:

hash_var:
  fred:
    transport: Bus

With the default behavior, the new value for hash_var will be:

hash_var:
  fred:
    transport: Bus

However, if we enable the merge behavior, we would get the following result:

hash_var:
  fred:
    home: Seattle
    transport: Bus

There are even more nuances and undefined behaviors when using merge, and as such, it is strongly recommended to only use this setting if absolutely needed.

Summary

In this article, we covered key design and architecture concepts of Ansible, such as version and configuration, variable types and locations, and variable precedence.

Resources for Article: Further resources on this subject: Mastering Ansible – Protecting Your Secrets with Ansible [article] Ansible – An Introduction [article] Getting Started with Ansible [article]

Getting Started with Metasploitable2 and Kali Linux

Packt
16 Mar 2017
8 min read
In this article by Michael Hixon, the author of the book Kali Linux Network Scanning Cookbook - Second Edition, we will be covering:

  - Installing Metasploitable2
  - Installing Kali Linux
  - Managing Kali services

(For more resources related to this topic, see here.)

Introduction

We need to first configure a security lab environment using VMware Player (Windows) or VMware Fusion (macOS), and then install Ubuntu server and Windows server on the VMware Player.

Installing Metasploitable2

Metasploitable2 is an intentionally vulnerable Linux distribution and is also a highly effective security training tool. It comes fully loaded with a large number of vulnerable network services and also includes several vulnerable web applications.

Getting ready

Prior to installing Metasploitable2 in your virtual security lab, you will first need to download it from the Web. There are many mirrors and torrents available for this. One relatively easy method to acquire Metasploitable is to download it from SourceForge at the following URL: http://sourceforge.net/projects/metasploitable/files/Metasploitable2/.

How to do it…

Installing Metasploitable2 is likely to be one of the easiest installations that you will perform in your security lab. This is because it is already prepared as a VMware virtual machine when it is downloaded from SourceForge. Once the ZIP file has been downloaded, you can easily extract the contents of this file in Windows or macOS by double-clicking on it in Explorer or Finder respectively. Have a look at the following screenshot:

Once extracted, the ZIP file will return a directory with five additional files inside. Included among these files is the VMware VMX file. To use Metasploitable in VMware, just click on the File drop-down menu and click on Open. Then, browse to the directory created from the ZIP extraction process and open Metasploitable.vmx, as shown in the following screenshot:

Once the VMX file has been opened, it should be included in your virtual machine library. Select it from the library and click on Run to start the VM; you will get the following screen:

After the VM loads, the splash screen will appear and request login credentials. The default credential to log in is msfadmin, for both the username and the password. This machine can also be accessed via SSH.

How it works…

Metasploitable was built with the idea of security testing education in mind. This is a highly effective tool, but it must be handled with care. The Metasploitable system should never be exposed to any untrusted networks. It should never be assigned a publicly routable IP address, and port forwarding should not be used to make services accessible over the Network Address Translation (NAT) interface.

Installing Kali Linux

Kali Linux is known as one of the best hacking distributions, providing an entire arsenal of penetration testing tools. The developers recently released Kali Linux 2016.2, which solidified their efforts in making it a rolling distribution. Different desktop environments have been released alongside GNOME in this release, such as e17, LXDE, Xfce, MATE, and KDE. Kali Linux will be kept updated with the latest improvements and tools via weekly updated ISOs. We will be using Kali Linux 2016.2 with GNOME as our development environment for many of the scanning scripts.

Getting ready

Prior to installing Kali Linux in your virtual security testing lab, you will need to acquire the ISO file (image file) from a trusted source. The Kali Linux ISO can be downloaded at http://www.kali.org/downloads/.
How to do it…

After selecting the Kali Linux .iso file, you will be asked what operating system you are installing. Currently, Kali Linux is built on Debian 8.x; choose this and click Continue. You will see a finish screen, but let's customize the settings first. Kali Linux requires at least 15 GB of hard disk space and a minimum of 512 MB of RAM.

After booting from the Kali Linux image file, you will be presented with the initial boot menu. Here, scroll down to the sixth option, Install, and press Enter to start the installation process:

Once started, you will be guided through a series of questions to complete the installation process. Initially, you will be asked to provide your location (country) and language. You will then be provided with an option to manually select your keyboard configuration or use a guided detection process. The next step will request that you provide a hostname for the system. If the system will be joined to a domain, ensure that the hostname is unique, as shown in the following screenshot:

Next, you will need to set the password for the root account. It is recommended that this be a fairly complex password that will not be easily compromised. Have a look at the following screenshot:

Next, you will be asked to provide the time zone you are located in. The system will use IP geolocation to provide its best guess of your location. If this is not correct, manually select the correct time zone:

To set up your disk partition, using the default method and partitioning scheme should be sufficient for lab purposes:

It is recommended that you use a mirror to ensure that your software in Kali Linux is kept up to date:

Next, you will be asked to provide an HTTP proxy address. An external HTTP proxy is not required for any of the exercises, so this can be left blank:

Finally, choose Yes to install the GRUB boot loader and then press Enter to complete the installation process. When the system loads, you can log in with the root account and the password provided during the installation:

How it works…

Kali Linux is a Debian Linux distribution that has a large number of preinstalled, third-party penetration tools. While all of these tools could be acquired and installed independently, the organization and implementation that Kali Linux provides makes it a useful tool for any serious penetration tester.

Managing Kali services

Having certain services start automatically can be useful in Kali Linux. For example, let's say I want to be able to SSH to my Kali Linux distribution. By default, the SSH server does not start on Kali, so I would need to log into the virtual machine, open a terminal, and run the command to start the service.

Getting ready

Prior to modifying the Kali Linux configuration, you will need to have installed the operating system on a virtual machine.

How to do it…

We begin by logging into our Kali Linux distribution and opening a terminal window. Type in the following command:

More than likely it is already installed, and you will see a message as follows:

So now that we know it is installed, let us see if the service is running. From the terminal, type:

If the SSH server is not running, you will see something like this:

Type Ctrl + C to get back to the prompt. Now let's start the service and check the status again by typing the following command:

You should now see something like the following:

So now the service is running; great, but if we reboot, we will see that the service does not start automatically.
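The exact commands for the steps above are shown as screenshots in the original recipe; a plausible equivalent sequence on Kali Linux 2016.2 (the package and service names here are assumptions for illustration, not taken from those screenshots) would be:

# check whether the OpenSSH server package is installed (it normally is)
apt-get install openssh-server

# check whether the SSH service is currently running
service ssh status

# start it and check the status again
service ssh start
service ssh status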
To get the service to start every time we boot, we need to make a few configuration changes. Kali Linux puts in extra measures to make sure you do not have services starting automatically. Specifically, it has a service whitelist and blacklist file. So, to get SSH to start at boot, we will need to remove the SSH service from the blacklist. To do this, open a terminal and type the following command:

Navigate down to the section labeled List of blacklisted init scripts and find ssh. Now we will just add a # symbol to the beginning of that line, save the file, and exit. The file should look similar to the following screenshot:

Now that we have removed the blacklist policy, all we need to do is enable ssh at boot. To do this, run the following commands from your terminal:

That's it! Now when you reboot, the service will begin automatically. You can use this same procedure to start other services automatically at boot time.

How it works…

The rc.local file is executed after all the normal Linux services have started. It can be used to start services you want available after you boot your machine.

Summary

In this article, we learned about Metasploitable2 and its installation. We also covered what Kali Linux is, how it is installed, and how to manage its services. The organization and implementation that Kali Linux provides make it a useful tool for any serious penetration tester.

Resources for Article: Further resources on this subject: Revisiting Linux Network Basics [article] Fundamental SELinux Concepts [article] Creating a VM using VirtualBox - Ubuntu Linux [article]

About the Certified OpenStack Administrator Exam

Packt
15 Mar 2017
8 min read
In this article by Matt Dorn, the author of the book Certified OpenStack Administrator Study Guide, we will learn how we can pass the Certified OpenStack Administrator exam successfully!

(For more resources related to this topic, see here.)

Benefits of passing the exam

Ask anyone about getting started in the IT world and they may suggest looking into industry-recognized technical certifications. IT certifications measure competency in a number of areas and are a great way to open doors to opportunities. While they certainly should not be the only determining factor in the hiring process, achieving them can be a measure of your competence and commitment to facing challenges.

If you pass...

Upon completion of a passing grade, you will receive your certificate. Laminate, frame, or pin it to your home office wall or work cubicle! It's proof that you have met all the requirements to become an official OpenStack Administrator. The certification is valid for three years from the pass date, so don't forget to renew! The OpenStack Foundation has put together a great tool for helping employers verify the validity of COA certifications; check out the Certified OpenStack Administrator verification tool. In addition to the certification, a COA badge will appear next to your name in the OpenStack Foundation's Member Directory.

7 steps to becoming a Certified OpenStack Administrator!

"The journey of a thousand miles begins with one step." - Lao Tzu

Let's begin by walking through some steps to become a Certified OpenStack Administrator!

Step 1 – study! Practice! Practice! Practice!

Use this article and the included OpenStack all-in-one virtual appliance as a resource as you begin your Certified OpenStack Administrator journey. If you still find yourself struggling with the concepts and objectives in this article, you can always refer to the official OpenStack documentation or even seek out a live training class at the OpenStack training marketplace.

Step 2 – purchase!

Once you feel that you're ready to conquer the exam, head to the official Certified OpenStack Administrator homepage and click on the Get Started link. After signing in, you will be directed to checkout to purchase your exam. The OpenStack Foundation accepts all major credit cards and, as of March 2017, the exam costs $300.00 USD, but this is subject to change, so keep an eye on the website. You can also get a free retake within 12 months of the original exam purchase date if you do not pass on the first attempt. To encourage academia students to get their feet wet with OpenStack technologies, the OpenStack Foundation is offering the exam for $150.00 (50% off the retail price) with a valid student ID. Check out https://www.openstack.org/coa/student/ for more info.

Step 3 – COA portal page

Once your order is processed, you will receive an email with access to the COA portal. Think of the portal as your personal COA website, where you can download your exam receipt and keep track of your certification efforts. Once you take the exam, you can come back to the COA portal to check your exam status and exam score, and even download certificates and badges for displaying on business cards or websites!

Step 4 – hardware compatibility check

The COA exam can be taken from your personal laptop or desktop, but you must ensure that your system meets the exam's minimum system requirements.
A link on the COA portal page will present you with the compatibility check tool, which will run a series of tests to ensure you meet the requirements. It will also assist you in downloading a Chrome plugin for taking the exam. At this time, you must use the Chrome or Chromium browser and have access to reliable internet, a webcam, and a microphone. The compatibility check tool also displays the full, current list of requirements.

Step 5 – identification

You must be at least 18 years old and have proper identification to take the exam! Any of the following pieces of identification are acceptable:

  - Passport
  - Government-issued driver's license or permit
  - National identity card
  - State or province-issued identity card

Step 6 – schedule the exam

I personally recommend scheduling your exam a few months ahead of time to give yourself a realistic goal. Click on the schedule exam link on the COA portal to be directed to, and automatically logged into, the exam proctor partner website. Once logged into the site, type OpenStack Foundation in the search box and select the COA exam. You will then choose from the available dates and times. The latest possible exam date you can schedule will be 30 days out from the current date. Once you have scheduled it, you can cancel or reschedule up to 24 hours before the start time of the exam.

Step 7 – take the exam!

Your day has arrived! You've used this article and have practiced day and night to master all of the covered objectives! It's finally time to take the exam! One of the most important factors determining your success on the exam is the location. You cannot be in a crowded place! This means no coffee shops, work desks, or football games! The testing location policy is very strict, so please consider taking the exam from home or perhaps a private room in the office.

Log into the COA portal fifteen minutes before your scheduled exam time. You should now see a take exam link, which will connect to the exam proctor partner website so you can connect to the testing environment. Once in the exam environment, an Exam Proctor chat window will appear and assist you with starting your exam. You must allow sharing of your entire operating system screen (this includes all applications), webcam, and microphone. It's time to begin! You have two and a half hours to complete all exam objectives. You're almost on your way to becoming a Certified OpenStack Administrator!

About the exam environment

The exam expects its test-takers to be proficient in interacting with OpenStack via the Horizon dashboard and the command-line interface. Here is a visual representation of the exam console as outlined in the COA candidate handbook:

The exam console is embedded into the browser. It is composed of two primary parts: the Content Panel and the Dashboard/Terminal Panel. The Content Panel is the section that displays the exam timer and objectives. As per the COA handbook, exam objectives can only be navigated linearly. You can use the Next and Back buttons to move to each objective. If you struggle with a question, move on! Hit the Next button and try the next objective. You can always come back and tackle it before time is up.

The Dashboard/Terminal Panel gives you full access to an OpenStack environment. As of March 2017 and the official COA website, the exam is on the Liberty version of OpenStack. The exam will be upgraded to Newton and will launch at the OpenStack Summit Boston in May 2017. The exam console terminal is embedded in a browser and you cannot Secure Copy (SCP) to it from your local system.
Within the terminal environment, you are permitted to install a multiplexer such as screen, tmux, or byobu if you think it will assist you, but none of these is necessary for successful completion of all objectives. You are not permitted to browse websites, e-mail, or notes during the exam, but you are free to access the official OpenStack documentation webpages. Doing so, however, can be a major waste of time on the exam and shouldn't be necessary after working through the exam objectives in this article. You can also easily copy and paste from the objective window into the Horizon dashboard or terminal.

The exam is scored automatically within 24 hours, and you should receive the results via e-mail within 72 hours of exam completion. At this time, the results will be made available on the COA portal. Please review the professional code of conduct in the OpenStack Foundation certification handbook.

The exam objectives

Let's now take a look at the objectives you will be responsible for performing on the exam. As of March 2017, these are all the exam objectives published on the official COA website. These domains cover multiple core OpenStack services as well as general OpenStack troubleshooting. Together, all of these domains make up 100% of the exam. Because some of the objectives on the official COA requirements list overlap, this article utilizes its own convenient strategy to ensure you can fulfill all objectives within all content areas.

Summary

OpenStack is open source cloud software that provides an Infrastructure as a Service environment to enable its users to quickly deploy applications by creating virtual resources like virtual servers, networks, and block storage volumes. The IT industry's need for individuals with OpenStack skills continues to grow, and one of the best ways to prove you have those skills is by taking the Certified OpenStack Administrator exam.

Matt Dorn

Resources for Article: Further resources on this subject: Deploying OpenStack – the DevOps Way [article] Introduction to Ansible [article] Introducing OpenStack Trove [article]

WebLogic Server

Packt
15 Mar 2017
24 min read
In this article by Adrian Ward, Christian Screen, and Haroun Khan, the authors of the book Oracle Business Intelligence Enterprise Edition 12c - Second Edition, we will talk in a little more detail about the enterprise application server that is at the core of Oracle Fusion Middleware: WebLogic. Oracle WebLogic Server is a scalable, enterprise-ready Java Platform Enterprise Edition (Java EE) application server. Its infrastructure supports the deployment of many types of distributed applications. It is also an ideal foundation for building applications based on a service-oriented architecture (SOA). You can already see why BEA was a perfect acquisition for Oracle years ago. Or, more to the point, a perfect core for Fusion Middleware.

(For more resources related to this topic, see here.)

The WebLogic Server is a robust application in itself. In Oracle BI 12c, the WebLogic Server is crucial to the overall implementation, not just during installation but throughout the Oracle BI 12c lifecycle, which now takes advantage of the WebLogic Management Framework. Learning the management components of WebLogic Server that ultimately control the Oracle BI components is critical to the success of an implementation. These management areas within the WebLogic Server are referred to as the WebLogic Administration Server, WebLogic Managed Server(s), and the WebLogic Node Manager.

A Few WebLogic Server Nuances

Before we move on to a description of each of those areas within WebLogic, it is also important to understand that the WebLogic Server software that is used for the installation of the Oracle BI product suite carries a limited license. Although the software itself is the full enterprise version and carries full functionality, the license that ships with Oracle BI 12c is not a full enterprise license for WebLogic Server, and it does not allow your organization to spin off other siloed JEE deployments on other non-OBIEE servers. Two nuances in particular are worth noting:

Clustered from the installation: The WebLogic Server license provided with out-of-the-box Oracle BI 12c does not allow for horizontal scale-out. An enterprise WebLogic Server license needs to be obtained for this advanced functionality.

Contains an embedded web/HTTP server, not Oracle HTTP Server (OHS): WebLogic Server does not contain a separate HTTP server with the installation. The Oracle BI Enterprise Deployment Guide (available on oracle.com) discusses separating the application tier from the web/HTTP tier, suggesting Oracle HTTP Server.

These items are simply a few nuances of the product suite in relation to Oracle BI 12c. Most software products contain a short list such as this one. However, once you understand these nuances, it will be easier to ensure that you have a successful implementation. It also allows your team to be as prepared in advance as possible. Be sure to consult your Oracle sales representative to assist with licensing concerns.

Despite these nuances, we highly recommend that, in order to learn more about the installation features, configuration options, administration, and maintenance of WebLogic, you research it not only in relation to Oracle BI, but also in its standalone form. That is to say, there is much more information at large on the topic of WebLogic Server itself than on WebLogic Server as it relates to Oracle BI. Understanding this approach to self-education or web searching should provide you with more efficient results.

WebLogic Domain

The highest unit of management for controlling the WebLogic Server installation is called a domain.
A domain is a logically related group of WebLogic Server resources that you manage as a unit. A domain always includes, and is centrally managed by, one Administration Server. Additional WebLogic Server instances, which are controlled by the Administration Server for the domain, are called Managed Servers. The configuration for all the servers in the domain is stored in the configuration repository, the config.xml file, which resides on the machine hosting the Administration Server.

Upon installing and configuring Oracle BI 12c, a domain named bi is established within the WebLogic Server. This is the recommended domain name for each Oracle BI 12c implementation and should not be modified. The domain path for the bi domain may appear as ORACLE_HOME/user_projects/domains/bi. This directory for the bi domain is also referred to as the DOMAIN_HOME or BI_DOMAIN folder.

WebLogic Administration Server

The WebLogic Server is an enterprise software suite that manages a myriad of application server components, mainly focusing on Java technology. It also comprises many ancillary components, which enable the software to scale well and make it a good choice for distributed and high-availability environments. Clearly, it is good enough to be at the core of Oracle Fusion Middleware.

One of the most crucial components of WebLogic Server is the WebLogic Administration Server. When installing the WebLogic Server software, the Administration Server is automatically installed with it. It is the Administration Server that not only controls all subsequent WebLogic Server instances, called Managed Servers, but also controls such aspects as authentication-provider security (for example, LDAP) and other application-server-related configurations. WebLogic Server installs on the operating system and ultimately runs as a service on that machine. The WebLogic Server can be managed in several ways. The two main methods are via the graphical user interface (GUI) web application called the WebLogic Administration Console, or via the command line using the WebLogic Scripting Tool (WLST). You access the Administration Console from any networked machine using a web-based client (that is, a web browser) that can communicate with the Administration Server through the network and/or firewall. The WebLogic Administration Server and the WebLogic Server are basically synonymous. If the WebLogic Server is not running, the WebLogic Administration Console will be unavailable as well.

WebLogic Managed Server

Web applications, Enterprise Java Beans (EJB), and other resources are deployed onto one or more Managed Servers in a WebLogic domain. A Managed Server is an instance of a WebLogic Server in a WebLogic Server domain. Each WebLogic Server domain has at least one instance, which acts as the Administration Server just discussed. One Administration Server per domain must exist, but one or more Managed Servers may exist in the WebLogic Server domain. In a production deployment, Oracle BI is deployed into its own Managed Server. The Oracle BI installer installs two WebLogic server instances, the Admin Server and a Managed Server, bi_server1. Oracle BI is deployed into the Managed Server bi_server1, which is configured by default to resolve to port 19502; the Admin Server resolves to port 19500. Historically, these have been port 9704 for the Oracle BI Managed Server and port 7001 for the Admin Server.
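Given those default ports, the main web clients can typically be reached at URLs like the following sketch (the hostname is only a placeholder; the context paths /console, /em, and /analytics are the usual defaults rather than values quoted in this article):

http://biserver.example.com:19500/console     (WebLogic Administration Console, on the Admin Server)
http://biserver.example.com:19500/em          (Enterprise Manager Fusion Middleware Control, also on the Admin Server)
http://biserver.example.com:19502/analytics   (Oracle BI presentation front end, on the Managed Server bi_server1)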
When administering the WebLogic Server via the Administration Console, the WebLogic Administration Server instance appears in the same list of servers, which also includes any Managed Servers. As a best practice, the WebLogic Administration Server should be used for configuration and management of the WebLogic Server only, and should not contain any additionally deployed applications, EJBs, and so on. One thing to note is that the Enterprise Manager Fusion Control is actually a JEE application deployed to the Administration Server instance, which is why its web client is accessible under the same port as the Admin Server. It is not necessarily a native application deployment to the core WebLogic Server, but gets deployed and configured automatically during the Oracle BI installation and configuration process. On the Deployments page within the Administration Console, you will find a deployment named em.

WebLogic Node Manager

The general idea behind Node Manager is that it takes on somewhat of a middleman role. That is to say, the Node Manager provides a communication tunnel between the WebLogic Administration Server and any Managed Servers configured within the WebLogic domain. When the WebLogic Server environment is contained on a single physical server, it may be difficult to recognize the need for a Node Manager. It is very necessary, however, and Node Manager lifecycle management will have to be part of any start-up and shutdown scripts you ultimately write for Oracle BI.

Node Manager's real power comes into play when Oracle BI is scaled out horizontally on one or more physical servers. Each scaled-out deployment of WebLogic Server will contain a Node Manager. If the Node Manager is not running on the server on which the Managed Server is deployed, then the core Administration Server will not be able to issue start or stop commands to that server. As such, if the Node Manager is down, communication with the overall cluster will be affected. The following figure shows how machines A, B, and C are physically separated, each containing a Node Manager. You can see that the Administration Server communicates with the Node Managers, and not with the Managed Servers directly.

System tools controlled by WebLogic

We briefly discussed the WebLogic Administration Console, which controls the administrative configuration of the WebLogic Server domain. This includes the components managed within it, such as security, deployed applications, and so on. The other management tool, which provides control of the deployed Oracle BI application's ancillary deployments, libraries, and several other configurations, is called the Enterprise Manager Fusion Middleware Control. This seems like a long name for a single web-based tool, so the name is often shortened to "Fusion Control" or "Enterprise Manager." Reference to either abbreviated title in the context of Oracle BI should ensure fellow Oracle BI teammates understand what you mean.

Security

It would be difficult to discuss the overall architecture of Oracle BI without at least giving some mention to how the basics of security, authentication, and authorization are applied. By default, installing Oracle WebLogic Server provides a default Lightweight Directory Access Protocol (LDAP) server, referred to as the WebLogic Server Embedded LDAP server. This is a standards-compliant LDAP system, which acts as the default authentication method for out-of-the-box Oracle BI.
Integration of secondary LDAP providers, such as Oracle Internet Directory (OID) or Microsoft Active Directory (MSAD), is crucial to leveraging most organizations' identity-management systems. The combination of multiple authentication providers is possible; in fact, it is commonplace. For example, a configuration may wish to have users that exist in both the Embedded LDAP server and MSAD authenticate and have access to Oracle BI. Potentially, another set of users may need to be stored in a relational database repository, or a set of relational database tables may need to control the authorization that users have in relation to the Oracle BI system. WebLogic Server provides configuration opportunities for each of these scenarios.

Oracle BI security incorporates the Fusion Middleware security model, Oracle Platform Security Services (OPSS). This has a positive influence over managing all aspects of Oracle BI, as it provides a very granular level of authorization and a large number of authentication and authorization-integration mechanisms. OPSS also introduces to Oracle BI the concept of managing privileges by application role instead of directly by user or group. It abides by open standards to integrate with security mechanisms that are growing in popularity, such as the Security Assertion Markup Language (SAML) 2.0. Other well-known single sign-on mechanisms such as SiteMinder and Oracle Access Manager already have pre-configured integration points within Oracle BI Fusion Control.

Oracle BI 12c and Oracle BI 11g security is managed differently than in the legacy Oracle BI 10g versions. Oracle BI 12c no longer has backward compatibility with the legacy Oracle BI 10g security model, and the focus should be on following the new security configuration best practices of Oracle BI 12c:

- An Oracle BI best practice is to manage security by Application Roles.
- Understanding the differences between the Identity Store, Credential Store, and Policy Store is critical for advanced security configuration and maintenance.
- As of Oracle BI 12c, the OPSS metadata is stored in a relational repository, which is installed as part of the RCU-schemas installation process that takes place prior to executing the Oracle BI 12c installation on the application server.

Managing by Application Roles

In Oracle BI 11g, the default security model is the Oracle Fusion Middleware security model, which has a very broad scope. A universal Information Technology security-administration best practice is to set permissions or privileges for a specific point of access on a group, and not on individual users. The same idea applies here, except there is another enterprise level of user, and even group, aggregation, called an Application Role. Application Roles can contain other application roles, groups, or individual users. Access privileges to a certain object, such as a folder, web page, or column, should always be assigned to an application role. Application roles for Oracle BI can be managed in the Oracle Enterprise Manager Fusion Middleware Control interface. They can also be scripted using the WLST command-line interface.

Security Providers

Fusion Middleware security can seem complex at first, but knowing the correct terminology and understanding how the most important components communicate with each other and with the application at large is extremely important as it relates to security management.
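Before looking at the individual security providers, the following small, purely conceptual Java sketch illustrates the application-role nesting described above: a role can contain users, groups, and other roles, and membership resolves recursively. This is not the OPSS API; the class and role names are invented for illustration only.

import java.util.HashSet;
import java.util.Set;

// Conceptual model of application-role nesting. Role and member names are
// hypothetical; real role management happens in Fusion Control or via WLST.
public class ApplicationRoleSketch {

    static class AppRole {
        final String name;
        final Set<String> members = new HashSet<>();      // users or groups
        final Set<AppRole> nestedRoles = new HashSet<>();  // roles inside roles

        AppRole(String name) { this.name = name; }

        boolean isMember(String principal) {
            if (members.contains(principal)) {
                return true;
            }
            // Walk nested roles until a match is found.
            return nestedRoles.stream().anyMatch(r -> r.isMember(principal));
        }
    }

    public static void main(String[] args) {
        AppRole consumer = new AppRole("BIConsumerExample");
        AppRole author = new AppRole("BIAuthorExample");
        consumer.members.add("sales_group");
        author.nestedRoles.add(consumer);   // authors inherit consumer access
        author.members.add("jdoe");

        System.out.println(author.isMember("jdoe"));        // true
        System.out.println(author.isMember("sales_group"));  // true, via nesting
    }
}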
Oracle BI uses three main repositories for accessing authentication and authorization information, all of which are explained in the following sections.

Identity Store

The Identity Store is the authentication provider, which may also provide authorization metadata. A simple mnemonic here is that this store tells Oracle BI how to "identify" any users attempting to access the system. An example of creating an Identity Store would be to configure an LDAP system such as Oracle Internet Directory or Microsoft Active Directory to reference users within an organization. These LDAP configurations are referred to as Authentication Providers.

Credential Store

The credential store is ultimately for advanced Oracle configurations. You may touch upon this when establishing an enterprise Oracle BI deployment, but not much thereafter, unless integrating the Oracle BI Action Framework or something equally complex. Ultimately, the credential store does exactly what its name implies – it stores credentials. Specifically, it is used to store credentials of other applications, which the core application (that is, Oracle BI) may access at a later time without having to re-enter said credentials. An example of this would be integrating Oracle BI with the Oracle Enterprise Performance Management (EPM) suite. In this example, let's pretend there is an internal requirement at Company XYZ for users to access an Oracle BI dashboard. Upon viewing said dashboard, if a report with discrepancies is viewed, the user requires the ability to click on a link, which opens an Oracle EPM Financial Report containing more details about the concern. If not all users accessing the Oracle BI dashboard have credentials to access the Oracle EPM environment directly, how could they open and view the report without being prompted for credentials? The answer is that the credential store would be configured with the credentials of a central user having access to the Oracle EPM environment. This central user's credentials (encrypted, of course) are passed along with the dashboard viewer's request and, presto, access!

Policy Store

The policy store is quite unique to Fusion Middleware security and leverages a security standard referred to as XACML, which ultimately provides granular access and privilege control for an enterprise application. This is one of the reasons why managing by Application Roles becomes so important. It is the individual Application Roles that are assigned the policies defining access to information within Oracle BI. Stated another way, application privileges, such as the ability to administer the Oracle BI RPD, are assigned to a particular application role, and these associations are defined in the policy store. The following figure shows how each area of security management is controlled:

These three types of security providers within Oracle Fusion Middleware are integral to the Oracle BI architecture. Further recommended research on this topic would be to look at Oracle Fusion Middleware Security, OPSS, and the Application Development Framework (ADF).

System Requirements

The first thing to recognize with infrastructure requirements prior to deploying Oracle BI 12c is that its memory and processor requirements have increased since previous versions. The Java application server, WebLogic Server, installs with the full version of its software (though under a limited/restricted license, as already discussed). A multitude of additional Java libraries and applications are also deployed.
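As a quick sanity check against the sizing guidance that follows, the small sketch below simply prints what the current JVM reports for its maximum heap and available processors. It is illustrative only; actual requirements depend on the full Oracle BI deployment rather than on any single JVM.

// Illustrative sketch: report what this JVM sees for heap and processors.
// Useful only as a rough comparison against the sizing guidance that follows.
public class SizingCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxHeapMb = rt.maxMemory() / (1024 * 1024);
        System.out.println("Max heap available to this JVM: " + maxHeapMb + " MB");
        System.out.println("Available processors: " + rt.availableProcessors());
    }
}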
Be prepared for a recommended minimum 8 to 16 GB Random Access Memory (RAM) requirement for an enterprise deployment, and a 6 to 8 GB RAM minimum requirement for a workstation deployment.

Client tools

Oracle BI 12c has a separate client tools installation that requires Microsoft Windows XP or a more recent version of the Windows operating system (OS). The Oracle BI 12c client tools provide the majority of client-to-server management capabilities required for normal day-to-day maintenance of the Oracle BI repository and related artefacts. The client-tools installation is usually reserved for Oracle BI developers who architect and maintain the Oracle BI metadata repository, better known as the RPD, which stems from its binary file extension (.rpd). The Oracle BI 12c client-tools installation provides each workstation with the Administration Tool, Job Manager, and all command-line Application Programming Interface (API) executables. In Oracle BI 12c, a 64-bit Windows OS is a requirement for installing the Oracle BI Development Client tools. It has been observed that, with some initial releases of the Oracle BI 12c client tools, the ODBC DSN connectivity does not work on Windows Server 2012. Therefore, using Windows Server 2012 as a development environment will be ineffective if you attempt to open the Administration Tool and connect to the OBIEE server in online mode.

Multi-User Development Environment

One of the key features when developing with Oracle BI is the ability for multiple metadata developers to develop simultaneously. Although the use of the term "simultaneously" can vary among technical communities, concurrent development within the Oracle BI suite requires Oracle BI's Multi-User Development Environment (MUD) configuration, or some other process developed by third-party Oracle partners. The MUD configuration itself is fairly straightforward and ultimately relies on the Oracle BI administrator's ability to divide metadata modeling responsibilities into projects. Projects – which are usually defined and delineated by logical fact table definitions – can be assigned to one or more metadata developers.

In previous versions of Oracle BI, a metadata developer could install the entire Oracle BI product suite on an up-to-date laptop or commodity desktop workstation and successfully develop, test, and deploy an Oracle BI metadata model. The system requirements of Oracle BI 12c demand a significant amount of processor and RAM capacity in order to perform development efforts on a standard-issue workstation or laptop. If an organization currently leverages the Oracle BI multi-user development environment, or plans to with the current release, this raises a couple of questions: How do we give our developers the best environment suitable for developing our metadata? Do we need to procure new hardware?

Microsoft Windows is a requirement for the Oracle BI client tools. However, the Oracle BI client tools do not include the server component of the Oracle BI environment; they only allow for connecting from the developer's workstation to the Oracle BI server instance. In a multi-user development environment, this poses a serious problem, as only one metadata repository (RPD) can exist on any one Oracle BI server instance at any given time.
If two developers are working from their respective workstations at the same time and wish to see their latest modifications published in a rapid application development (RAD) cycle, this type of iterative effort fails, as one developer's published changes will overwrite the other's in real time. To resolve the issue, there are two recommended solutions.

The first is an obvious localized solution. It merely upgrades the Oracle BI developers' workstations or laptops to comply with the minimum requirements for installing the full Oracle BI environment on those machines. This upgrade should be both memory- (RAM) and processor- (MHz) centric: 16 GB+ RAM and a dual-core processor are recommended, and a 64-bit operating system kernel is required. Without an upgraded workstation from which to work, Oracle BI metadata developers will be at a disadvantage for general iterative metadata development, and will especially be disenfranchised if interfacing within a multi-user development environment.

The second solution is one that takes advantage of virtual machine (VM) capacity within the organization. Virtual machines have become a staple within most Information Technology departments, as they are versatile and allow for speedy provisioning of server environments. For this scenario, it is recommended to create a virtual-machine template of an Oracle BI environment from which to duplicate and "stand up" individual virtual machine images for each metadata developer on the Oracle BI development team. This effectively provides each metadata developer with their own Oracle BI development environment server, which contains the fully deployed Oracle BI server environment. Each developer then has the ability to develop and test iteratively by connecting to their assigned virtual server, without fear that their efforts will conflict with another developer's. The following figure illustrates how an Oracle BI MUD environment can leverage either upgraded developer-workstation hardware or VM images to facilitate development:

This article does not cover the installation, configuration, or best practices for developing in a MUD environment. However, the Oracle BI development team deserves a lot of credit for documenting these processes in unprecedented detail. The Oracle BI 11g MUD documentation provides a case study, which conveys best practices for managing a complex Oracle BI development lifecycle. When ready to deploy a MUD environment, it is highly recommended to peruse this documentation first. The information in this section merely seeks to convey best practices for setting up a developer's workstation when using MUD.

Certifications matrix

Oracle BI 12c largely complies with the overall Fusion Middleware infrastructure. This common foundation allows for a centralized model to communicate with operating systems, web servers, and other ancillary components that are compliant. Oracle does a good job of updating a certification matrix for each Fusion Middleware application suite per respective product release. The certification matrix for Oracle BI 12c can be found on the Oracle website at the following locations:

http://www.oracle.com/technetwork/middleware/fusion-middleware/documentation/fmw-1221certmatrix-2739738.xlsx

http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html

The certification matrix document is usually provided in Microsoft Excel format and should be referenced before any project or deployment of Oracle BI begins.
This will ensure that infrastructure components such as the selected operating system, web server, web browsers, LDAP server, and so on will actually work when integrated with the product suite.

Scaling out Oracle BI 12c

There are several reasons why an organization may wish to expand its Oracle BI footprint. These range anywhere from requiring a highly available environment to achieving high levels of concurrent usage over time. The number of total end users, the number of total concurrent end users, the volume of queries, the size of the underlying data warehouse, and cross-network latency are further factors to consider. Scaling out an environment has the potential to solve performance issues and stabilize the environment. When scoping out the infrastructure for an Oracle BI deployment, there are several crucial decisions to be made. These decisions can be greatly assisted by preparing properly, using Oracle's recommended guides for clustering and deploying Oracle BI on an enterprise scale.

Pre-Configuration Run-Down

Configuring the Oracle BI product suite, specifically when scaling out or setting up high availability (HA), takes preparation. Proactively taking steps to understand what it takes to correctly establish or pre-configure the infrastructure required to support any level of fault tolerance and high availability is critical. Even if the decision to scale out from the initial Oracle BI deployment hasn't been made, proper planning is recommended if the potential exists.

Shared Storage

We would be remiss not to highlight one of the most important concepts of scaling out Oracle BI, specifically for high availability: shared storage. The idea of shared storage is that, in a fault-tolerant environment, there are binary files and other configuration metadata that need to be shared across the nodes. If these common elements were not shared and one node were to fail, data could potentially be lost. Most importantly, in a highly available Oracle BI environment there can be only one WebLogic Administration Server running for that environment at any one time. An HA configuration makes one Administration Server active while the other is passive. If the appropriate pre-configuration steps for shared storage (as well as the other items in the high-availability guide) are not properly completed, one should not expect accurate results from the environment.

OBIEE 12c requires you to modify the Singleton Data Directory (SDD) of your Oracle BI configuration, found at ORACLE_HOME/user_projects/domains/bi/data, so that the files within that path are moved to a shared storage location mounted on the scaled-out servers on which the HA configuration will be implemented. To change this, modify the ORACLE_HOME/user_projects/domains/bi/config/fmwconfig/bienv/core/bi-environment.xml file so that the bi:singleton-data-directory element points to the full path of the shared mounted file location that contains a copy of the bidata folder, which will be referenced by one or more scaled-out HA Oracle BI 12c servers.
For example, change the XML file element:

<bi:singleton-data-directory>/oraclehome/user_projects/domains/bi/bidata/</bi:singleton-data-directory>

To reflect a shared NAS or SAN mount whose folder names and structure are in line with the IT team's standard naming conventions, where the /bidata folder is the folder from the main Oracle BI 12c instance that gets copied to the shared directory:

<bi:singleton-data-directory>/mount02/obiee_shared_settings/bidata/</bi:singleton-data-directory>

Clustering

A major benefit of Oracle BI's ability to leverage WebLogic Server as the Java application server tier is that, per the default installation, Oracle BI gets established in a clustered architecture. There is no additional configuration necessary to set this architecture in motion. Clearly, installing Oracle BI on a single server only provides a single server with which to interface; however, upon doing so, Oracle BI is installed into a single-node clustered-application-server environment. Additional clustered nodes of Oracle BI can then be configured to establish and expand the environment, either horizontally or vertically.

Vertical vs Horizontal

With respect to the enterprise architecture and infrastructure of the Oracle BI environment, a clustered environment can be expanded in one of two ways: horizontally (scale-out) and vertically (scale-up). A horizontal expansion is the typical expansion type when clustering. It is represented by installing and configuring the application on a separate physical server, with reference to the main server application. A vertical expansion is usually represented by expanding the application on the same physical server on which the main server application resides. A horizontally expanded system can then, additionally, be vertically expanded. There are benefits to both scaling options. The decision to scale the system one way or the other is usually predicated on the cost of additional physical servers, server limitations, peripherals such as memory or processors, or an increase in usage activity by end users. Some considerations that may be used to assess which approach is best for your specific implementation are as follows:

- Load-balancing capabilities and the need for an Active-Active versus Active-Passive architecture
- The need for failover or high availability
- Costs for processor and memory enhancements versus the cost of new servers
- Anticipated increase in concurrent user queries
- Realized decrease in performance due to an increase in user activity

Oracle BI Server (System Component) Cluster Controller

When discussing scaling out the Oracle BI Server cluster, it is a common mistake to confuse WebLogic Server application clustering with the Oracle BI Server Cluster Controller. Currently, Oracle BI can only have a single metadata repository (RPD) reference associated with an Oracle BI Server deployment instance at any single point in time. Because of this, the Oracle BI Server engine leverages a failover concept to ensure some level of high availability exists when the environment is scaled. In an Oracle BI scaled-out clustered environment, a secondary node, which has an instance of Oracle BI installed, will contain a secondary Oracle BI Server engine. From the main Oracle BI Managed Server containing the primary Oracle BI Server instance, the secondary Oracle BI Server instance gets established as the failover server engine using the Oracle BI Server Cluster Controller. This configuration takes place in the Enterprise Manager Fusion Control console.
Based on this configuration, the scaled-out Oracle BI Server engine acts in an active-passive mode. That is to say, when the main Oracle BI Server engine instance fails, the secondary, or passive, Oracle BI Server engine becomes active to route requests and field queries.

Summary

This article has served as an introduction for the beginner to what WebLogic Server is and to the role it plays in an Oracle BI 12c deployment, from domains, Managed Servers, and Node Manager through security providers, system requirements, and scaling out.

Resources for Article:

Further resources on this subject:
- Oracle 12c SQL and PL/SQL New Features [article]
- Schema Validation with Oracle JDeveloper - XDK 11g [article]
- Creating external tables in your Oracle 10g/11g Database [article]

Testing in Agile Development and the State of Agile Adoption

Packt
15 Mar 2017
6 min read
In this article written by Renu Rajani, author of the book Testing Practitioner Handbook, we will discuss agile development. Organizations are increasingly struggling to reach the right balance of quality versus speed. Some key issues with traditional development and testing include the following:

- Excessively long time to market for products and applications
- Inadequate customer orientation and regular interaction
- Over-engineered products – most of the features in a product or application may not be used
- High project failure rate
- ROI below expectation
- Inability to respond quickly to change
- Inadequate software quality

(For more resources related to this topic, see here.)

To address this, QA and testing should be blended with agile development. Agile engagements should take a business-centric approach to select the right test focus areas, such as behavior-driven development (BDD), to define acceptance criteria. This requires skills not only in testing but also in business and software development. The latest World Quality Report reveals an increase in the adoption of agile testing methodologies, which helps shorten time to market for products and services. The need for agile development (and testing) is primarily driven by digital transformation. Let's take a look at the major trends in digital transformation:

- More continual integration fueled by digital transformation
- Complex integration using multi-channel, omnipresent commerce, making it necessary to integrate multiple channels, devices, and wearable technology
- Unlike yesterday's nomenclature, when agile meant colocation, today's advanced telepresence infrastructure makes it possible to work in distributed agile models and has removed the colocation dependency

Agile is not just a concept. It is a manner of working, made possible with multiple tools that enable development and testing in agile environments. What do agile projects promise compared to traditional waterfall? The next diagram summarizes the value an agile approach offers compared to traditional waterfall. Waterfall engagements are characterized as plan driven: one should know the software requirements and estimate the time and effort needed to accomplish the task at hand. In the case of agile engagements, one knows the time and resources available and needs to estimate the features that can go into a release.

Flavors of agile

There are various flavors of agile, including the following:

- Scrum: This prioritizes the highest-value features and incremental delivery once every 2-4 weeks
- Kanban: This pinpoints bottlenecks to avoid holdups
- Lean: This eliminates waste and unnecessary documentation and provides future flexibility
- XP: This reconfigures and ensures the simplest design to deliver iteration features

Let's look at their features.

Scrum

- Reacts quickly in volatile markets
- Focuses on customer benefits and avoids both unnecessary outlays and time investments
- Utilizes organized development teams within a structured framework in order to coordinate activities and work together for quick decision-making
- Involves customers directly in the development process

Kanban

Kanban works with existing roles and processes and may be introduced either step by step or by establishing pioneer teams. Scrum and Kanban complement one another: while Scrum ensures adaptability and agility, Kanban improves efficiency and throughput. Both techniques increase overall transparency.

How is testing done in agile sprints?

I have often heard that agile projects do not require testers. Is this true?
Would you compromise on quality in the name of agile? Like any other development life cycle, agile also needs quality and testing. Agile engagements involve testers from the start of the sprint, that is, from the requirement analysis stage, in a process known as user story grooming. In sprint planning, the team selects the story points depending on various factors, including the availability of resources and user story complexity. All the members of the cross-functional sprint team are involved in this process: developers, business analysts, testers, configuration teams, build teams, the scrum master, and the product owner. Once the user stories destined for the sprint are finalized, they are analyzed. Then, developers work on the design while testers write the test cases and share these with business analysts for review. At the end of each sprint, the team demonstrates the user stories completed during the sprint to the product owner and gets a go or no-go ruling. Once the demo is complete, the team gathers for the retrospective. Take a look at the following diagram:

The benefits of this approach include:

- Productive, collaborative, and high-performing teams
- Predictability and project control featuring transparency and flexibility
- Superior prioritization and risk management for business success
- High-value revenue with low upfront and ongoing costs
- High-quality products delivered with minimum time to market
- Increased possibility of stakeholder engagement and high customer satisfaction

Agile in distributed environments

Often, people assume agile means colocation. Today's technology infrastructure and the maturity of distributed teams have enabled agile to be practiced in a distributed mode. As per the World Quality Report 2016-2017, more than 42% of the organizations that adopt an agile delivery model use distributed agile. Distributed agile allows organizations to achieve higher cost savings with the global delivery model. Take a look at the following figure:

Key challenges in the distributed agile model include:

- Communication challenges across the distributed team
- Increasing product backlogs
- An ever-growing regression pack
- Poor knowledge management and handover for new people due to less documentation and high-level placeholder tests
- Little time overlap with isolated regional developers for distributed teams

These challenges can be addressed through the following:

- Communication: Live meetings, video conference calls, and common chat rooms
- Product backlogs: Better prioritization within the iteration scope
- Regression scope: Better impact analysis and targeted regression only
- Knowledge management: Efficient tools and processes along with audio and video recordings of important tests, virtual scrum boards, and the latest communication and tracking tools
- Distributed teams: Optimal overlap timings through working shifts (40–50%)

State of agile adoption – findings from the World Quality Report 2016-2017

As per the latest World Quality Report, there are various challenges in applying testing to agile environments. Colocation and a lack of required skills are the two biggest challenges and are considered major risks associated with agile adoption. That said, organizations have been able to find solutions to these challenges.

Approaches to testing in agile development environments

Organizations use different ways to speed up cycle times and utilize agile.
Some of these tactics include predictive analytics, BDD/TDD, continuous monitoring, automated test data generation, and test environment virtualization. The following diagram provides a snapshot of the practices used to convert to agile:

Skills needed from QA and testing professionals for agile

The following diagram from the WQR 2016 depicts the state of skills relating to agile testing as organizations strive to adopt agile methodologies:

Conclusion

An ideal agile engagement needs a test ecosystem that is flexible and supports both continual testing and quality monitoring. Given the complexity of agile engagements, there would be value in automated decision-making to achieve both speed and quality. Agile development has attained critical mass and is now being widely adopted; the initial hesitation no longer prevails. The QA function is a key enabler in this journey. The coexistence of traditional IT along with agile delivery principles is giving rise to a new methodology based on bimodal development.

Resources for Article:

Further resources on this subject:
- Unit Testing and End-To-End Testing [article]
- Storing Records and Interface customization [article]
- Overview of Certificate Management [article]

About Java Virtual Machine – JVM Languages

Packt
15 Mar 2017
13 min read
In this article by Vincent van der Leun, the author of the book Introduction to JVM Languages, you will learn the history of the JVM and five important languages that run on the JVM. (For more resources related to this topic, see here.) While many other programming languages have come into and gone out of the spotlight, Java has always managed to return to impressive spots, either near to, and lately even at, the top of the list of the most used languages in the world. It didn't take language designers long to realize that they could run their languages on the JVM as well—the virtual machine that powers Java applications—and take advantage of its performance, features, and extensive class library. In this article, we will take a look at common JVM use cases and various JVM languages.

The JVM was designed from the ground up to run anywhere. Its initial goal was to run on set-top boxes, but when Sun Microsystems found out the market was not ready in the mid '90s, they decided to bring the platform to desktop computers as well. To make all those use cases possible, Sun invented their own binary executable format and called it Java bytecode. To run programs compiled to Java bytecode, a Java Virtual Machine implementation must be installed on the system. The most popular JVM implementations nowadays are Oracle's free but partially proprietary implementation and the fully open source OpenJDK project (Oracle's Java runtime is largely based on OpenJDK).

This article covers the following subjects:

- Popular JVM use cases
- The Java language
- The Scala language
- The Clojure language
- The Kotlin language
- The Groovy language

The Java platform as published by Google on Android phones and tablets is not covered in this article. One of the reasons is that the Java version used on Android is still based on the Java 6 SE platform from 2006. However, some of the languages covered in this article can be used with Android. Kotlin, in particular, is a very popular choice for modern Android development.

Popular use cases

Since the JVM platform was designed with a lot of different use cases in mind, it will be no surprise that the JVM can be a very viable choice for very different scenarios. We will briefly look at the following use cases:

- Web applications
- Big data
- Internet of Things (IoT)

Web applications

With its focus on performance, the JVM is a very popular choice for web applications. When built correctly, applications can scale really well, if needed, across many different servers. The JVM is a well-understood platform, meaning that it is predictable and many tools are available to debug and profile problematic applications. Because of its open nature, monitoring of JVM internals is also very well possible. For web applications that have to serve thousands of users concurrently, this is an important advantage. The JVM already plays a huge role in the cloud. Popular examples of companies that use the JVM for core parts of their cloud-based services include Twitter (famously using Scala), Amazon, Spotify, and Netflix. But the actual list is much larger.

Big data

Big data is a hot topic. When data is regarded as too big for traditional databases to analyze, one can set up multiple clusters of servers that will process the data. Analyzing the data in this context can, for example, mean searching for something specific, looking for patterns, and calculating statistics.
This data could have been obtained from data collected from web servers (that, for example, logged visitors' clicks), output obtained from external sensors at a manufacturing plant, legacy servers that have been producing log files over many years, and so forth. Data sizes can vary wildly as well, but often will take up multiple terabytes in total. Two popular technologies in the big data arena are:

- Apache Hadoop (provides storage of data and takes care of data distribution to other servers)
- Apache Spark (uses Hadoop to stream data and makes it possible to analyze the incoming data)

Both Hadoop and Spark are for the most part written in Java. While both offer interfaces for a lot of programming languages and platforms, it will not be a surprise that the JVM is among them. The functional programming paradigm focuses on creating code that can run safely on multiple CPU cores, so languages that are fully specialized in this style, such as Scala or Clojure, are very appropriate candidates to be used with either Spark or Hadoop.

Internet of Things – IoT

Portable devices that feature internet connectivity are very common these days. Since Java was created with the idea of running on embedded devices from the beginning, the JVM is, yet again, at an advantage here. For memory-constrained systems, Oracle offers the Java Micro Edition Embedded platform. It is meant for commercial IoT devices that do not require a standard graphical or console-based user interface. For devices that can spare more memory, the Java SE Embedded edition is available. The Java SE Embedded version is very close to the Java Standard Edition discussed in this article. When running a full Linux environment, it can be used to provide desktop GUIs for full user interaction. Java SE Embedded is installed by default on Raspbian, the standard Linux distribution of the popular Raspberry Pi low-cost, credit card-sized computers. Both Java ME Embedded and Java SE Embedded can access the General Purpose Input/Output (GPIO) pins on the Raspberry Pi, which means that sensors and other peripherals connected to these ports can be accessed by Java code.

Java

Java is the language that started it all. Source code written in Java is generally easy to read and comprehend. It started out as a relatively simple language to learn. As more and more features were added to the language over the years, its complexity increased somewhat. The good news is that beginners don't have to worry about the more advanced topics too much, until they are ready to learn them. Programmers who want to choose a different JVM language from Java can still benefit from learning the Java syntax, especially once they start using libraries or frameworks that provide Javadocs as API documentation. Javadocs is a tool that generates HTML documentation based on special comments in the source code. Many libraries and frameworks provide the HTML documents generated by Javadocs as part of their documentation. While Java is not considered a pure object-oriented programming (OOP) language because of its support for primitive types, it is still a serious OOP language. Java is known for its verbosity; it has strict requirements for its syntax.
A typical Java class looks like this:

package com.example;

import java.util.Date;

public class JavaDemo {
    private Date dueDate = new Date();

    public void setDueDate(Date dueDate) {
        this.dueDate = dueDate;
    }

    public Date getDueDate() {
        return this.dueDate;
    }
}

A real-world example would implement some other important additional methods that were omitted for readability. Note that when declaring the dueDate variable, the Date class name has to be specified twice: first when declaring the variable type, and a second time when instantiating an object of this class.

Scala

Scala is a rather unique language. It has strong support for functional programming, while also being a pure object-oriented programming language at the same time. While a lot more can be said about functional programming, in a nutshell, functional programming is about writing code in such a way that existing variables are not modified while the program is running. Values are specified as function parameters and output is generated based on their parameters. Functions are required to return the same output when called with the same parameters each time. A class is supposed not to hold internal state that can change over time. When data changes, a new copy of the object must be returned and all existing copies of the data must be left alone. When following the rules of functional programming, which requires a specific mindset of programmers, the code is safe to be executed on multiple threads on different CPU cores simultaneously.

The Scala installation offers two ways of running Scala code. It offers an interactive shell where code can be entered directly and run right away. This program can also be used to run Scala source code directly without manually compiling it first. Also offered is scalac, a traditional compiler that compiles Scala source code to Java bytecode and produces files with the .class extension. Scala comes with its own Scala Standard Library. It complements the Java Class Library that is bundled with the Java Runtime Environment (JRE) and installed as part of the Java Development Kit (JDK). It contains classes that are optimized to work with Scala's language features. Among many other things, it implements its own collection classes, while still offering compatibility with Java's collections.

Scala's equivalent of the code shown in the Java section would be something like the following:

package com.example

import java.util.Date

class ScalaDemo(var dueDate: Date) {
}

Scala will generate the getter and setter methods automatically. Note that this class does not follow the rules of functional programming, as the dueDate variable is mutable (it can be changed at any time). It would be better to define the class like this:

class ScalaDemo(val dueDate: Date) {
}

By defining dueDate with the val keyword instead of the var keyword, the variable has become immutable. Now Scala only generates a getter method, and dueDate can only be set when creating an instance of the ScalaDemo class. It will never change during the lifetime of the object.

Clojure

Clojure is a language that is rather different from the other languages covered in this article. It is a language largely inspired by the Lisp programming language, which originally dates from the late 1950s. Lisp stayed relevant by keeping up to date with technology and times. Today, Common Lisp and Scheme are arguably the two most popular Lisp dialects in use, and Clojure is influenced by both. Unlike Java and Scala, Clojure is a dynamic language.
Variables do not have fixed types and, when compiling, no type checking is performed by the compiler. When a variable that is not compatible with the code in a function is passed to that function, an exception will be thrown at run time. Also noteworthy is that Clojure is not an object-oriented language, unlike all the other languages in this article. Clojure still offers interoperability with Java and the JVM, as it can create instances of objects and can also generate class files that other languages on the JVM can use to run bytecode compiled by Clojure.

Instead of demonstrating how to generate a class in Clojure, let's write a function in Clojure that would consume a JavaDemo instance and print its dueDate:

(defn consume-javademo-instance [d]
  (println (.getDueDate d)))

This looks rather different from the other source code in this article. Code in Clojure is written by adding code to a list. Each open parenthesis and the corresponding closing parenthesis in the preceding code starts and ends a new list. The first entry in the list is the function that will be called, while the other entries of that list are its parameters. By nesting the lists, complex evaluations can be written. The defn macro defines a new function that will be called consume-javademo-instance. It takes one parameter, called d. This parameter should be the JavaDemo instance. The list that follows is the body of the function, which prints the value returned by the getDueDate method of the passed JavaDemo instance in the variable, d.

Kotlin

Like Java and Scala, Kotlin is a statically typed language. Kotlin is mainly focused on object-oriented programming but supports procedural programming as well, so the usage of classes and objects is not required. Kotlin's syntax is not compatible with Java; the code in Kotlin is much less verbose. It still offers very strong compatibility with Java and the JVM platform. The Kotlin equivalent of the Java code would be as follows:

import java.util.Date

data class KotlinDemo(var dueDate: Date)

One of the more noticeable features of Kotlin is its type system, especially its handling of null references. In many programming languages, a reference type variable can hold a null reference, which means that the reference literally points to nothing. When accessing members of such a null reference on the JVM, the dreaded NullPointerException is thrown. When declaring variables in the normal way, Kotlin does not allow references to be assigned to null. If you want a variable that can be null, you'll have to add the question mark (?) to its definition:

var thisDateCanBeNull: Date? = Date()

When you now access the variable, you'll have to let the compiler know that you are aware that the variable can be null:

if (thisDateCanBeNull != null) println("${thisDateCanBeNull.toString()}")

Without the if check, the code would refuse to compile.

Groovy

Groovy was an early alternative language for the JVM. It offers, to a large degree, Java syntax compatibility, but the code in Groovy can be much more compact because many source code elements that are required in Java are optional in Groovy. Like Clojure and mainstream languages such as Python, Groovy is a dynamic language (with a twist, as we will discuss next). Unusually, while Groovy is a dynamic language (types do not have to be specified when defining variables), it still offers optional static compilation of classes.
Since statically compiled code usually performs better than dynamic code, this can be used when performance is important for a particular class. You'll give up some convenience when switching to static compilation, though. Another difference from Java is that Groovy supports operator overloading. Because Groovy is a dynamic language, it offers some tricks that would be very hard to implement in Java. It comes with a huge library of support classes, including many wrapper classes that make working with the Java Class Library a much more enjoyable experience. A JavaDemo equivalent in Groovy would look as follows:

import groovy.transform.Canonical

@Canonical
class GroovyDemo {
    Date dueDate
}

The @Canonical annotation is not necessary but recommended, because it will automatically generate some support methods that are used often and required in many use cases. Even without it, Groovy will automatically generate the getter and setter methods that we had to define manually in Java.

Summary

We started by looking at the history of the Java Virtual Machine and studied some important use cases of the Java Virtual Machine: web applications, big data, and IoT (Internet of Things). We then looked at five important languages that run on the JVM: Java (a very readable, but also very verbose, statically typed language), Scala (both a strong functional and OOP programming language), Clojure (a non-OOP functional programming language inspired by Lisp and Haskell), Kotlin (a statically typed language that protects the programmer from very common NullPointerException errors), and Groovy (a dynamic language with static compiler support that offers a ton of features).

Resources for Article:

Further resources on this subject:
- Using Spring JMX within Java Applications [article]
- Tuning Solr JVM and Container [article]
- So, what is Play? [article]