Synchronization – An Approach to Delivering Successful Machine Learning Projects

Packt
06 Apr 2017
15 min read
“In the midst of chaos, there is also opportunity” - Sun Tzu

In this article, Cory Lesmeister, the author of the book Mastering Machine Learning with R - Second Edition, provides insights on ensuring the success and value of your machine learning endeavors.

Framing the problem

Raise your hand if any of the following has happened or is currently happening to you:

- You’ve been part of a project team that failed to deliver anything of business value
- You attend numerous meetings, but they don’t seem productive; maybe they are even complete time wasters
- Different teams are not sharing information with each other; thus, you are struggling to understand what everyone else is doing, and they have no idea what you are doing or why you are doing it
- An unknown stakeholder, feeling threatened by your project, comes from out of nowhere and disparages you and/or your work
- The Executive Committee congratulates your team on their great effort, but decides not to implement it, or even worse, tells you to go back and do it all over again, only this time solve the real problem

OK, you can put your hand down now. If you didn’t raise your hand, please send me your contact information, because you are about as rare as a unicorn. All organizations, regardless of their size, struggle with integrating different functions, current operations, and other projects. In short, the real world is filled with chaos. It doesn’t matter how many advanced degrees people have, how experienced they are, how much money is thrown at the problem, what technology is used, or how brilliant and powerful the machine learning algorithm is; problems such as those listed above will happen. The bottom line is that implementing machine learning projects in the business world is complicated and prone to failure. However, out of this chaos you have the opportunity to influence your organization by integrating disparate people and teams, fostering a collaborative environment that can adapt to unforeseen changes. But be warned, this is not easy. If it were easy, everyone would be doing it. However, it works, and it works well. By "it", I’m talking about the methodology I developed about a dozen years ago, a method I refer to as the “Synchronization Process”.

If we ask ourselves, “what are the challenges to implementation?”, it seems to me that the following blog post clearly and succinctly sums it up: https://www.capgemini.com/blog/capping-it-off/2012/04/four-key-challenges-for-business-analytics

It enumerates four challenges:

- Strategic alignment
- Agility
- Commitment
- Information maturity

The blog post addresses business analytics, but it can be extended to machine learning projects. One could even say machine learning is becoming the analytics tool of choice in many organizations. As such, I will make the case below that the Synchronization Process can effectively deal with the first three challenges. Not only that, the process can provide additional benefits. By overcoming the challenges, you can deliver an effective project; by delivering an effective project, you can increase actionable insights; and by increasing actionable insights, you will improve decision-making, and that is where the real business value resides.
Defining the process

“In preparing for battle, I have always found that plans are useless, but planning is indispensable.” - Dwight D. Eisenhower

I adopted the term synchronization from the US Army’s operations manual, FM 3-0, where it is described as a battlefield tenet and force multiplier. The manual defines synchronization as “…arranging activities in time, space and purpose to mass maximum relative combat power at a decisive place and time”. If we overlay this military definition onto the context of a competitive marketplace, we come up with a definition I find more relevant. For our purpose, synchronization is defined as “arranging business functions and/or tasks in time and purpose to produce the proper amount of focus on a critical event or events”.

These definitions put synchronization in the context of an “endstate” based on a plan and a vision. However, it is in the process of seeking to achieve that endstate that the true benefits come to fruition. So, we can look at synchronization as not only an endstate, but also as a process. The military’s solution to synchronizing operations before implementing a plan is the wargame. Like the military, businesses and corporations have utilized wargaming to facilitate decision-making and create integration of different business functions. Following the synchronization process techniques explained below, you can take the concept of business wargaming to a new level. I will discuss and provide specific ideas, steps, and deliverables that you can implement immediately. Before we begin that discussion, I want to cover the benefits that the process will deliver.

Exploring the benefits of the process

When I created this methodology about a dozen years ago, I was part of a market research team struggling to commit our limited resources to numerous projects, all of which were someone’s top priority, in a highly uncertain environment. Or, as I like to refer to it, just another day at the office. I knew from my military experience that I had the tools and techniques to successfully tackle these challenges. It worked then and has been working for me ever since. I have found that it delivers the following benefits to an organization:

- Integration of business partners and stakeholders
- Timely and accurate measurement of performance and effectiveness
- Anticipation of and planning for possible events
- Adaptation to unforeseen threats
- Exploitation of unforeseen opportunities
- Improvement in teamwork
- Fostering a collaborative environment
- Improving focus and prioritization

In market research, and I believe it applies to all analytical endeavors, including machine learning, we talked about focusing on three specific questions about what to measure:

- What are we measuring?
- When do we measure it?
- How will we measure it?

We found that successfully answering those questions facilitated improved decision-making by informing leadership what to STOP doing, what to START doing, and what to KEEP doing. I have found myself in many meetings going nowhere when I would ask a question like, “what are you looking to stop doing?” Ask leadership what they want to stop, start, or continue to do and you will get to the core of the problem. Then, your job will be to configure the business decision as the measurement/analytical problem. The Synchronization Process can bring this all together in a coherent fashion.
I’ve been asked often about what triggers in my mind that a project requires going through the Synchronization Process. Here are some of the questions you should consider, and if you answer “yes” to any of them, it may be a good idea to implement the process:

- Are resources constrained to the point that several projects will suffer poor results or not be done at all?
- Do you face multiple, conflicting priorities?
- Could the external environment change and dramatically influence project(s)?
- Are numerous stakeholders involved or influenced by a project’s result?
- Is the project complex and facing a high level of uncertainty?
- Does the project involve new technology?
- Does the project face the actual or potential for organizational change?

You may be thinking, “Hey, we have a project manager for all this?” OK, how is that working out? Let me be crystal clear here: this is not just project management! This is about improving decision-making! A Gantt chart or task management software won’t do that. You must be the agent of change. With that, let’s turn our attention to the process itself.

Exploring the process

Any team can take the methods elaborated on below and incorporate them into their specific situation with their specific business partners. If executed properly, one can expect the initial investment in time and effort to provide substantial payoff within weeks of initiating the process. There are just four steps to incorporate, with each having several tasks for you and your team members to complete. The four steps are as follows:

- Project kick-off
- Project analysis
- Synchronization exercise
- Project execution

Let’s cover each of these in detail. I will provide what I like to refer to as a “Quad Chart” for each process step along with appropriate commentary.

Project kick-off

I recommend you lead the kick-off meeting to ensure all team members understand and agree to the upcoming process steps. You should place emphasis on the importance of completing the pre-work and understanding the key definitions, particularly around facts and critical assumptions. The operational definitions are as follows:

- Facts: Data or information that will likely have an impact on the project
- Critical assumptions: Valid and necessary suppositions in the absence of facts that, if proven false, would adversely impact planning or execution

It is an excellent practice to link facts and assumptions. Here is an example of how that would work: It is a FACT that Information Technology is beta-testing cloud-based solutions. We must ASSUME, for planning purposes, that we can operate machine learning solutions on the cloud by the fourth quarter of this year. See, we’ve linked a fact and an assumption together, and if this cloud-based solution is not available, it would negatively impact our ability to scale up our machine learning solutions. If so, then you may want to have a contingency plan of some sort already thought through and prepared for implementation. Don’t worry if you haven’t thought of all possible assumptions or if you end up with a list of dozens. The synchronization exercise will help in identifying and prioritizing them. In my experience, identifying and tracking 10 critical assumptions at the project level is adequate. The following is the quad chart for this process step:

Figure 1: Project kick-off quad chart

Notice what is likely a new term, “Synchronization Matrix”. That is merely the tool used by the team to capture notes during the Synchronization Exercise.
What you are doing is capturing time and events on the X-axis, and functions and teams on the Y-axis. Of course, this is highly customizable based on the specific circumstances, and we will discuss more about it in process step number 3, the Synchronization exercise, but here is an abbreviated example:

Figure 2: Synchronization matrix example

You can see in the matrix that I’ve included a row to capture critical assumptions. I can’t overstate how important it is to articulate, capture, and track them. In fact, this is probably my favorite quote on the subject:

“…flawed assumptions are the most common cause of flawed execution.”
- Harvard Business Review, The High Performance Organization, July-August 2005

OK, I think I’ve made my point, so let’s look at the next process step.

Project analysis

At this step, the participants prepare by analyzing the situation, collecting data, and making judgements as necessary. The goal is for each participant of the Synchronization Exercise to come to that meeting fully prepared. A good technique is to provide project participants with a worksheet template for them to use to complete the pre-work. A team can complete this step either individually, collectively, or both. Here is the quad chart for the process step:

Figure 3: Project analysis quad chart

Let me expand on a couple of points. The idea of a team member creating information requirements is quite important. These are often tied back to your critical assumptions. Take the example above of the assumption around fielding a cloud-based capability. Can you think of some information requirements you might have as a potential end user? Furthermore, can you prioritize them? OK, having done that, can you think of a plan to acquire that information and confirm or deny the underlying critical assumption? Notice also how that ties together with decision points you or others may have to make and how they may trigger contingency plans. This may sound rather basic and simplistic, but unless people are asked to think like this, articulate their requirements, and share the information, don’t expect anything to change anytime soon. It will be business as usual, and let me ask again, “how is that working out for you?” There is opportunity in all that chaos, so embrace it, and in the next step you will see the magic happen.

Synchronization exercise

The focus and discipline of the participants determine the success of this process step. This is a wargame-type exercise where team members portray their plan over time. Now, everyone gets to see how their plan relates to, or even inhibits, someone else’s plan and vice versa. I’ve done this step several different ways, including building the matrix in software, but the method that has consistently produced the best results is to build the matrix on large paper and put it along a conference room wall. Then, have the participants, one at a time, use post-it notes to portray their key events. For example, the marketing manager gets up to the wall and posts “Marketing Campaign One” in the first time phase and “Marketing Campaign Two” in the final time phase, along with “Propensity Models” in the information requirements block. Iterating by participant and by time/event leads to coordination and cooperation like nothing you’ve ever seen. Another method to facilitate the success of the meeting is to have a disinterested and objective third party “referee” the meeting. This will help to ensure that any issues are captured or resolved and the process products updated accordingly.
After the exercise, team members can incorporate the findings into their individual plans. This is an example quad chart for the process step:

Figure 4: Synchronization exercise quad chart

I really like the idea of execution and performance metrics. Here is how to think about them:

- Execution metrics: are we doing things right?
- Performance metrics: are we doing the right things?

As you see, execution is about plan implementation, while performance metrics are about determining if the plan is making a difference (yes, I know that can be quite a dangerous thing to measure). Finally, we come to the fourth step, where everything comes together during the execution of the project plan.

Project execution

This is a continual step in the process where a team can utilize the synchronization products to maintain situational understanding of the team itself, key stakeholders, and the competitive environment. It can determine how plans are progressing and quickly react to opportunities and threats as necessary. I recommend you update and communicate changes to the documentation on a regular basis. When I was in pharmaceutical forecasting, it was imperative that I end the business week by updating the matrices on SharePoint, where they were available to all pertinent team members. The following is the quad chart for this process step:

Figure 5: Project execution quad chart

Keeping up with the documentation is a quick and simple process for the most part, and by doing so you will keep people aligned and cooperating. Be aware that, like everything else that is new in the world, initial exuberance and enthusiasm will start to wane after several weeks. That is fine as long as you keep the documentation alive and maintain systematic communication. You will soon find that behavior is changing without anyone even taking heed, which is probably the best way to actually change behavior.

A couple of words of warning. Don’t expect everyone to embrace the process wholeheartedly, which is to say that office politics may create a few obstacles. Often, an individual or even an entire business function will withhold information because “information is power”, and by sharing information they may feel they are losing power. Another issue may arise where some people feel the process is needlessly complex or unnecessary. A solution to these problems is to scale back the number of core team members and utilize stakeholder analysis and a communication plan to bring the naysayers slowly into the fold. Change is never easy, but it is necessary nonetheless.

Summary

In this article, I’ve covered, at a high level, a successful and proven process to deliver machine learning projects that will drive business value. I developed it from my numerous years of planning and evaluating military operations, including a one-year stint as a strategic advisor to the Iraqi Oil Police, adapting it to the needs of any organization. Utilizing the Synchronization Process will help any team avoid the common pitfalls of projects and improve efficiency and decision-making. It will help you become an agent of change and create influence in an organization without positional power.

Resources for Article:

Further resources on this subject:

- Machine Learning with R [article]
- Machine Learning Using Spark MLlib [article]
- Welcome to Machine Learning Using the .NET Framework [article]

Layout Management for Python GUI

Packt
05 Apr 2017
13 min read
In this article written by Burkhard A. Meier, the author of the book Python GUI Programming Cookbook - Second Edition, we will lay out our GUI using Python 3.6 and above:

- Arranging several labels within a label frame widget
- Using padding to add space around widgets
- How widgets dynamically expand the GUI
- Aligning the GUI widgets by embedding frames within frames

In this article, we will explore how to arrange widgets within widgets to create our Python GUI. Learning the fundamentals of GUI layout design will enable us to create great looking GUIs. There are certain techniques that will help us to achieve this layout design. The grid layout manager is one of the most important layout tools built into tkinter that we will be using. We can very easily create menu bars, tabbed controls (aka Notebooks), and many more widgets using tkinter.

Arranging several labels within a label frame widget

The LabelFrame widget allows us to design our GUI in an organized fashion. We are still using the grid layout manager as our main layout design tool, yet by using LabelFrame widgets we get much more control over our GUI design.

Getting ready

We are starting to add more and more widgets to our GUI, and we will make the GUI fully functional in the coming recipes. Here we are starting to use the LabelFrame widget. Add the following code just above the main event loop towards the bottom of the Python module (a sketch of this code appears at the end of this recipe). Running the code will result in the GUI looking like this:

Uncomment line 111 and notice the different alignment of the LabelFrame. We can easily align the labels vertically by changing our code, as shown next. Note that the only change we had to make was in the column and row numbering. Now the GUI LabelFrame looks as such:

How it works...

In line 109 we create our first ttk LabelFrame widget and assign the resulting instance to the variable buttons_frame. The parent container is win, our main window. In lines 114-116, we create labels and place them in the LabelFrame. buttons_frame is the parent of the labels. We are using the important grid layout tool to arrange the labels within the LabelFrame. The column and row properties of this layout manager give us the power to control our GUI layout. The parent of our labels is the buttons_frame instance variable of the LabelFrame, not the win instance variable of the main window. We can see the beginning of a layout hierarchy here. We can see how easy it is to change our layout via the column and row properties. Note how we change the column to 0, and how we layer our labels vertically by numbering the row values sequentially.

The name ttk stands for themed tk. The tk-themed widget set was introduced in Tk 8.5.

There's more...

In a recipe later in this article we will embed LabelFrame widgets within LabelFrame widgets, nesting them to control our GUI layout.
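Since the book's listing is not reproduced above, here is a minimal sketch of the kind of code this recipe describes, assuming a main window named win and a LabelFrame named buttons_frame as in the text; the label texts and grid positions are illustrative assumptions rather than the original listing:

    import tkinter as tk
    from tkinter import ttk

    # Create the main window (the parent container referred to as win above)
    win = tk.Tk()
    win.title("Python GUI")

    # Create a ttk LabelFrame and place it on the main window's grid
    buttons_frame = ttk.LabelFrame(win, text='Labels in a Frame')
    buttons_frame.grid(column=0, row=7)

    # Place three labels inside the LabelFrame, side by side (columns 0, 1, 2)
    ttk.Label(buttons_frame, text="Label1").grid(column=0, row=0)
    ttk.Label(buttons_frame, text="Label2").grid(column=1, row=0)
    ttk.Label(buttons_frame, text="Label3").grid(column=2, row=0)

    # To stack the labels vertically instead, keep column=0 and
    # increment the row value for each label (row=0, 1, 2)

    win.mainloop()

Changing only the column and row numbers, as described above, switches between the horizontal and vertical arrangements.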
Using padding to add space around widgets

Our GUI is being created nicely. Next, we will improve the visual aspects of our widgets by adding a little space around them, so they can breathe.

Getting ready

While tkinter might have had a reputation for creating ugly GUIs, this has dramatically changed since version 8.5. You just have to know how to use the tools and techniques that are available. That's what we will do next. tkinter version 8.6 ships with Python 3.6.

How to do it...

The procedural way of adding spacing around widgets is shown first, and then we will use a loop to achieve the same thing in a much better way (a sketch of both approaches appears at the end of this recipe). Our LabelFrame looks a bit tight as it blends into the main window towards the bottom. Let's fix this now. Modify line 110 by adding padx and pady. And now our LabelFrame got some breathing space.

How it works...

In tkinter, adding space horizontally and vertically is done by using the built-in properties named padx and pady. These can be used to add space around many widgets, improving horizontal and vertical alignments, respectively. We hard-coded 20 pixels of space to the left and right of the LabelFrame, and we added 40 pixels to the top and bottom of the frame. Now our LabelFrame stands out more than it did before. The screenshot above only shows the relevant change.

We can use a loop to add space around the labels contained within the LabelFrame. Now the labels within the LabelFrame widget have some space around them too.

The grid_configure() function enables us to modify the UI elements before the main loop displays them. So, instead of hard-coding values when we first create a widget, we can work on our layout and then arrange spacing towards the end of our file, just before the GUI is being created. This is a neat technique to know.

The winfo_children() function returns a list of all the children belonging to the buttons_frame variable. This enables us to loop through them and assign the padding to each label.

One thing to notice is that the spacing to the right of the labels is not really visible. This is because the title of the LabelFrame is longer than the names of the labels. We can experiment with this by making the names of the labels longer. Now our GUI looks like this. Note how there is now some space added to the right of the long label next to the dots. The last dot does not touch the LabelFrame, which it otherwise would without the added space.

We can also remove the name of the LabelFrame to see the effect padx has on positioning our labels. By setting the text property to an empty string, we remove the name that was previously displayed for the LabelFrame.
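Continuing the sketch shown at the end of the previous recipe, here is a rough sketch of the two approaches this recipe describes; the 20/40 pixel values follow the text above, while the loop's padding values are arbitrary assumptions:

    # Hard-coded approach: pad the LabelFrame itself when placing it on the grid
    # (20 pixels left/right and 40 pixels top/bottom, as described above)
    buttons_frame.grid(column=0, row=7, padx=20, pady=40)

    # Loop approach: give every child of the LabelFrame some breathing space;
    # grid_configure() adjusts grid options after the widgets have been created
    for child in buttons_frame.winfo_children():
        child.grid_configure(padx=8, pady=4)

The loop form is handy because it can sit near the bottom of the module and adjust the spacing of every child in one place, just before the GUI is created.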
How widgets dynamically expand the GUI

You probably noticed in previous screenshots, and by running the code, that widgets have a capability to extend themselves to the space they need to visually display their text. Java introduced the concept of dynamic GUI layout management. In comparison, visual development IDEs like VS.NET lay out the GUI in a visual manner, basically hard-coding the x and y coordinates of UI elements. Using tkinter, this dynamic capability creates both an advantage and a little bit of a challenge, because sometimes our GUI dynamically expands when we would prefer it not to be so dynamic! Well, we are dynamic Python programmers, so we can figure out how to make the best use of this fantastic behavior!

Getting ready

At the beginning of the previous recipe we added a LabelFrame widget. This moved some of our controls to the center of column 0. We might not wish this modification to our GUI layout. Next we will explore some ways to solve this.

How to do it...

Let us first become aware of the subtle details that are going on in our GUI layout in order to understand it better. We are using the grid layout manager widget, and it lays out our widgets in a zero-based grid. This is very similar to an Excel spreadsheet or a database table. Here is a grid layout manager example with 2 rows and 3 columns:

    Row 0, Col 0 | Row 0, Col 1 | Row 0, Col 2
    Row 1, Col 0 | Row 1, Col 1 | Row 1, Col 2

Using the grid layout manager, what is happening is that the width of any given column is determined by the longest name or widget in that column. This affects all rows. By adding our LabelFrame widget and giving it a title that is longer than some hard-coded size widget, like the top-left label and the text entry below it, we dynamically move those widgets to the center of column 0, adding space to the left and right sides of those widgets. Incidentally, because we used the sticky property for the Checkbutton and ScrolledText widgets, those remain attached to the left side of the frame. Let's look in more detail at the screenshot from the first recipe of this article. We added the following code to create the LabelFrame and then placed labels into this frame. Since the text property of the LabelFrame, which is displayed as the title of the LabelFrame, is longer than both our Enter a name: label and the text box entry below it, those two widgets are dynamically centered within the new width of column 0.

The Checkbutton and Radiobutton widgets in column 0 did not get centered because we used the sticky=tk.W property when we created those widgets. For the ScrolledText widget we used sticky=tk.WE, which binds the widget to both the west (aka left) and east (aka right) side of the frame (a short sketch of these options appears at the end of this recipe). Let's remove the sticky property from the ScrolledText widget and observe the effect this change has. Now our GUI has new space around the ScrolledText widget on both the left and right sides. Because we used the columnspan=3 property, our ScrolledText widget still spans all three columns. If we remove columnspan=3, we get the following GUI, which is not what we want. Now our ScrolledText only occupies column 0 and, because of its size, it stretches the layout.

One way to get our layout back to where we were before adding the LabelFrame is to adjust the grid column position. Change the column value from 0 to 1. Now our GUI looks like this:

How it works...

Because we are still using individual widgets, our layout can get messed up. By moving the column value of the LabelFrame from 0 to 1, we were able to get the controls back to where they used to be and where we prefer them to be. At least the left-most label, text, Checkbutton, ScrolledText, and Radiobutton widgets are now located where we intended them to be. The second label and text Entry located in column 1 aligned themselves to the center of the length of the Labels in a Frame widget, so we basically moved our alignment challenge one column to the right. It is not so visible because the size of the Choose a number: label is almost the same as the size of the Labels in a Frame title, and so the column width was already close to the new width generated by the LabelFrame.

There's more...

In the next recipe we will embed frames within frames to avoid the accidental misalignment of widgets we just experienced in this recipe.
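The sticky and columnspan behaviour discussed in this recipe can be sketched as follows; the widget choices and grid positions are illustrative assumptions, not the book's exact listing:

    import tkinter as tk
    from tkinter import scrolledtext

    win = tk.Tk()

    # A Checkbutton pinned to the left (west) edge of its grid cell
    check = tk.Checkbutton(win, text="Unchecked")
    check.grid(column=0, row=4, sticky=tk.W)

    # A ScrolledText widget spanning all three columns and attached to
    # both the west and east edges of its cells
    scrol = scrolledtext.ScrolledText(win, width=30, height=3, wrap=tk.WORD)
    scrol.grid(column=0, row=5, columnspan=3, sticky=(tk.W, tk.E))

    # Removing sticky would leave space on both sides of the widget;
    # removing columnspan=3 would confine it to column 0 and stretch that column

    win.mainloop()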
Aligning the GUI widgets by embedding frames within frames

We have much better control of our GUI layout if we embed frames within frames. This is what we will do in this recipe.

Getting ready

The dynamic behavior of Python and its GUI modules can create a little bit of a challenge to really get our GUI looking the way we want. Here we will embed frames within frames to get more control of our layout. This will establish a stronger hierarchy among the different UI elements, making the visual appearance easier to achieve. We will continue to use the GUI we created in the previous recipe.

How to do it...

Here, we will create a top-level frame that will contain other frames and widgets. This will help us to get our GUI layout just the way we want. In order to do so, we will have to embed our current controls within a central ttk.LabelFrame. This ttk.LabelFrame is a child of the main parent window, and all controls will be children of this ttk.LabelFrame. Up to this point in our recipes we have assigned all widgets to our main GUI frame directly. Now we will only assign our LabelFrame to our main window, and after that we will make this LabelFrame the parent container for all the widgets. This creates the following hierarchy in our GUI layout: win is the variable that holds a reference to our main GUI tkinter window frame; mighty is the variable that holds a reference to our LabelFrame and is a child of the main window frame (win); and Label and all other widgets are now placed into the LabelFrame container (mighty).

Add the following code towards the top of our Python module. Next we will modify all the following controls to use mighty as the parent, replacing win. Note how all the widgets are now contained in the Mighty Python LabelFrame, which surrounds all of them with a barely visible thin line. Next, we can reset the Labels in a Frame widget to the left without messing up our GUI layout. Oops - maybe not. While our frame within another frame aligned nicely to the left, it again pushed our top widgets into the center (a default). In order to align them to the left, we have to force our GUI layout by using the sticky property. By assigning it 'W' (West) we can control the widget to be left-aligned.

How it works...

Note how we aligned the label, but not the text box below it. We have to use the sticky property for all the controls we want to left-align. We can do that in a loop, using the winfo_children() and grid_configure(sticky='W') properties, as we did before in the second recipe of this article. The winfo_children() function returns a list of all the children belonging to the parent. This enables us to loop through all of the widgets and change their properties. In tkinter, forcing widgets to the left, right, top, or bottom uses naming very similar to Java: west, east, north, and south, abbreviated to 'W' and so on. We can also use the syntax tk.W instead of 'W'. This requires having imported the tkinter module aliased as tk. In a previous recipe we combined both 'W' and 'E' to make our ScrolledText widget attach itself to both the left and right sides of its container using 'WE'. We can add more combinations: 'NSE' will stretch our widget to the top, bottom, and right side. If we have only one widget in our form, for example a button, we can make it fill the entire frame by using all options: 'NSWE'. We can also use the tuple syntax: sticky=(tk.N, tk.S, tk.W, tk.E). Let's align the entry in column 0 to the left. Now both the label and the Entry are aligned towards the West (left). In order to separate the influence that the length of our Labels in a Frame LabelFrame has on the rest of our GUI layout, we must not place this LabelFrame into the same LabelFrame as the other widgets, but assign it directly to the main GUI form (win). A sketch of this nesting appears below.
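Here is a minimal, self-contained sketch of the nesting this recipe describes, with a central LabelFrame (mighty) as the parent of the other widgets and the loop-based left alignment applied at the end; the specific widgets and grid positions are assumptions for illustration:

    import tkinter as tk
    from tkinter import ttk

    win = tk.Tk()
    win.title("Python GUI")

    # The central LabelFrame is the only widget assigned directly to the main window
    mighty = ttk.LabelFrame(win, text='Mighty Python')
    mighty.grid(column=0, row=0, padx=8, pady=4)

    # All other controls now use mighty, not win, as their parent
    ttk.Label(mighty, text="Enter a name:").grid(column=0, row=0)
    name = tk.StringVar()
    ttk.Entry(mighty, width=12, textvariable=name).grid(column=0, row=1)

    # Left-align every child of mighty in one loop, as described above
    for child in mighty.winfo_children():
        child.grid_configure(sticky='W')

    win.mainloop()

Only mighty is gridded onto win; everything else is gridded onto mighty, which is what creates the layout hierarchy described above.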
Summary

We have learned layout management for Python GUIs using the following recipes:

- Arranging several labels within a label frame widget
- Using padding to add space around widgets
- How widgets dynamically expand the GUI
- Aligning the GUI widgets by embedding frames within frames

Resources for Article:

Further resources on this subject:

- Python Scripting Essentials [article]
- An Introduction to Python Lists and Dictionaries [article]
- Test all the things with Python [article]

Conditional Statements, Functions, and Lists

Packt
05 Apr 2017
14 min read
In this article, by Sai Yamanoor and Srihari Yamanoor, authors of the book Python Programming with Raspberry Pi Zero, you will learn about conditional statements and how to make use of logical operators to check conditions using conditional statements. Next, you will learn to write simple functions in Python and discuss interfacing inputs to the Raspberry Pi's GPIO header using a tactile switch (momentary push button). We will also discuss motor control (this is a run-up to the final project) using the Raspberry Pi Zero and control the motors using the switch inputs. Let's get to it! In this article, we will discuss the following topics:

- Conditional statements in Python
- Using conditional inputs to take actions based on GPIO pin states
- Breaking out of loops using conditional statements
- Functions in Python
- GPIO callback functions
- Motor control in Python

Conditional statements

In Python, conditional statements are used to determine if a specific condition is met by testing whether a condition is True or False. Conditional statements are used to determine how a program is executed. For example, conditional statements could be used to determine whether it is time to turn on the lights. The syntax is as follows:

    if condition_is_true:
        do_something()

The condition is usually tested using a logical operator, and the set of tasks under the indented block is executed. Let's consider the example check_address_if_statement.py, where the user input to a program needs to be verified using a yes or no question:

    check_address = input("Is your address correct(yes/no)? ")
    if check_address == "yes":
        print("Thanks. Your address has been saved")
    if check_address == "no":
        del(address)
        print("Your address has been deleted. Try again")
        check_address = input("Is your address correct(yes/no)? ")

In this example, the program expects a yes or no input. If the user provides the input yes, the condition if check_address == "yes" is true and the message Your address has been saved is printed on the screen. Likewise, if the user input is no, the program executes the indented code block under the logical test condition if check_address == "no" and deletes the variable address.

An if-else statement

In the preceding example, we used an if statement to test each condition. In Python, there is an alternative option named the if-else statement. The if-else statement enables testing an alternative condition if the main condition is not true:

    check_address = input("Is your address correct(yes/no)? ")
    if check_address == "yes":
        print("Thanks. Your address has been saved")
    else:
        del(address)
        print("Your address has been deleted. Try again")

In this example, if the user input is yes, the indented code block under if is executed. Otherwise, the code block under else is executed.

An if-elif-else statement

In the preceding example, the program executes any piece of code under the else block for any user input other than yes; that is, if the user pressed the return key without providing any input, or provided random characters instead of no. The if-elif-else statement works as follows:

    check_address = input("Is your address correct(yes/no)? ")
    if check_address == "yes":
        print("Thanks. Your address has been saved")
    elif check_address == "no":
        del(address)
        print("Your address has been deleted. Try again")
    else:
        print("Invalid input. Try again")

If the user input is yes, the indented code block under the if statement is executed.
If the user input is no, the indented code block under elif (else-if) is executed. If the user input is something else, the program prints the message Invalid input. Try again.

It is important to note that the code block indentation determines the block of code that needs to be executed when a specific condition is met. We recommend modifying the indentation of the conditional statement block to find out what happens to the program execution. This will help in understanding the importance of indentation in Python. In the three examples that we discussed so far, it could be noted that an if statement does not need to be complemented by an else statement. The else and elif statements need to have a preceding if statement, or the program execution would result in an error.

Breaking out of loops

Conditional statements can be used to break out of a loop execution (for loop and while loop). When a specific condition is met, an if statement can be used to break out of a loop:

    i = 0
    while True:
        print("The value of i is ", i)
        i += 1
        if i > 100:
            break

In the preceding example, the while loop is executed as an infinite loop. The value of i is incremented and printed on the screen. The program breaks out of the while loop when the value of i is greater than 100, so the value of i is printed from 0 to 100.

The applications of conditional statements: executing tasks using GPIO

Let's discuss an example where a simple push button is pressed. A button press is detected by reading the GPIO pin state. We are going to make use of conditional statements to execute a task based on the GPIO pin state. Let us connect a button to the Raspberry Pi's GPIO. All you need to get started are a button, a pull-up resistor, and a few jumper wires. The figure given later shows an illustration of connecting the push button to the Raspberry Pi Zero. One of the push button's terminals is connected to the ground pin of the Raspberry Pi Zero's GPIO. The schematic of the button's interface is shown here:

Raspberry Pi GPIO schematic

The other terminal of the push button is pulled up to 3.3V using a 10K resistor. The junction of the push button terminal and the 10K resistor is connected to GPIO pin 2.

Interfacing the push button to the Raspberry Pi Zero's GPIO (an image generated using Fritzing)

Let's review the code required to read the button state. We make use of loops and conditional statements to read the button inputs using the Raspberry Pi Zero. We will be making use of the gpiozero library. For now, let's briefly discuss the concept of classes for this example. A class in Python is a blueprint that contains all the attributes that define an object. For example, the Button class of the gpiozero library contains all the attributes required to interface a button to the Raspberry Pi Zero's GPIO interface. These attributes include button states, the functions required to check the button states, and so on. In order to interface a button and read its states, we need to make use of this blueprint. The process of creating a copy of this blueprint is called instantiation. Let's get started by importing the gpiozero library and instantiating the Button class of gpiozero. The button is interfaced to GPIO pin 2.
We need to pass the pin number as an argument during instantiation:

    from gpiozero import Button

    # button is interfaced to GPIO 2
    button = Button(2)

The gpiozero library's documentation is available at http://gpiozero.readthedocs.io/en/v1.2.0/api_input.html. According to the documentation, there is a variable named is_pressed in the Button class that could be tested using a conditional statement to determine if the button is pressed:

    if button.is_pressed:
        print("Button pressed")

Whenever the button is pressed, the message Button pressed is printed on the screen. Let's stick this code snippet inside an infinite loop:

    from gpiozero import Button

    # button is interfaced to GPIO 2
    button = Button(2)

    while True:
        if button.is_pressed:
            print("Button pressed")

In an infinite while loop, the program constantly checks for a button press and prints the message as long as the button is being pressed. Once the button is released, it goes back to checking whether the button is pressed.

Breaking out of a loop by counting button presses

Let's review another example where we would like to count the number of button presses and break out of the infinite loop when the button has received a predetermined number of presses:

    i = 0
    while True:
        if button.is_pressed:
            button.wait_for_release()
            i += 1
            print("Button pressed")
        if i >= 10:
            break

In this example, the program checks the state of the is_pressed variable. On receiving a button press, the program can be paused until the button is released using the wait_for_release method. When the button is released, the variable used to store the number of presses is incremented by 1. The program breaks out of the infinite loop when the button has received 10 presses.

A red momentary push button interfaced to Raspberry Pi Zero GPIO pin 2

Functions in Python

We briefly discussed functions in Python. Functions execute a predefined set of tasks. print is one example of a function in Python. It enables printing something to the screen. Let's discuss writing our own functions in Python. A function can be declared in Python using the def keyword. A function could be defined as follows:

    def my_func():
        print("This is a simple function")

In this function, my_func, the print statement is written under an indented code block. Any block of code that is indented under the function definition is executed when the function is called during the code execution. The function could be executed as my_func().

Passing arguments to a function

A function is always defined with parentheses. The parentheses are used to pass any requisite arguments to a function. Arguments are parameters required to execute a function. In the earlier example, there are no arguments passed to the function. Let's review an example where we pass an argument to a function:

    def add_function(a, b):
        c = a + b
        print("The sum of a and b is ", c)

In this example, a and b are arguments to the function. The function adds a and b and prints the sum on the screen. The function add_function is called by passing the arguments 3 and 2 as add_function(3,2), where a=3 and b=2, respectively. Hence, the arguments a and b are required to execute the function; calling the function without the arguments would result in an error. Errors related to missing arguments could be avoided by setting default values for the arguments:

    def add_function(a=0, b=0):
        c = a + b
        print("The sum of a and b is ", c)

The preceding function expects two arguments. If we pass only one argument to this function, the other defaults to zero.
For example, with add_function(a=3), b defaults to 0, and with add_function(b=2), a defaults to 0. When an argument is not furnished while calling a function, it defaults to zero (as declared in the function). Similarly, the print function prints any variable passed as an argument; if the print function is called without any arguments, a blank line is printed.

Returning values from a function

Functions can perform a set of defined operations and finally return a value at the end. Let's consider the following example:

    def square(a):
        return a**2

In this example, the function returns the square of the argument. In Python, the return keyword is used to return a value upon completion of execution.

The scope of variables in a function

There are two types of variables in a Python program: local and global variables. Local variables are local to a function; that is, a variable declared within a function is accessible within that function. An example is as follows:

    def add_function():
        a = 3
        b = 2
        c = a + b
        print("The sum of a and b is ", c)

In this example, the variables a and b are local to the function add_function. Let's consider an example of a global variable:

    a = 3
    b = 2

    def add_function():
        c = a + b
        print("The sum of a and b is ", c)

    add_function()

In this case, the variables a and b are declared in the main body of the Python script. They are accessible across the entire program. Now, let's consider this example:

    a = 3

    def my_function():
        a = 5
        print("The value of a is ", a)

    my_function()
    print("The value of a is ", a)

In this case, when my_function is called, the value of a is 5, while the value of a is 3 in the print statement of the main body of the script. In Python, assigning to a variable inside a function creates a local variable, so a global variable cannot be modified inside a function this way. In order to modify the value of a global variable, we need to make use of the global keyword:

    a = 3

    def my_function():
        global a
        a = 5
        print("The value of a is ", a)

    my_function()
    print("The value of a is ", a)

In general, it is not recommended to modify variables inside functions, as it is not a very safe practice. The best practice would be passing variables as arguments and returning the modified value. Consider the following example:

    a = 3

    def my_function(a):
        a = 5
        print("The value of a is ", a)
        return a

    a = my_function(a)
    print("The value of a is ", a)

In the preceding program, the value of a is 3. It is passed as an argument to my_function. The function returns 5, which is saved to a. We were able to safely modify the value of a.

GPIO callback functions

Let's review some uses of functions with the GPIO example. Functions can be used in order to handle specific events related to the GPIO pins of the Raspberry Pi. For example, the gpiozero library provides the capability of calling a function either when a button is pressed or released:

    from gpiozero import Button

    def button_pressed():
        print("button pressed")

    def button_released():
        print("button released")

    # button is interfaced to GPIO 2
    button = Button(2)

    button.when_pressed = button_pressed
    button.when_released = button_released

    while True:
        pass

In this example, we make use of the attributes when_pressed and when_released of the library's Button class. When the button is pressed, the function button_pressed is executed. Likewise, when the button is released, the function button_released is executed. We make use of the while loop to avoid exiting the program and to keep listening for button events. The pass keyword is used to avoid an error, and nothing happens when a pass keyword is executed. This capability of being able to execute different functions for different events is useful in applications like Home Automation. For example, it could be used to turn on lights when it is dark and vice versa, as sketched below.
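As a hedged illustration of that idea, and not code from the book, the following sketch uses gpiozero's LightSensor and LED classes with the same callback style; the pin numbers (an LED on GPIO 17 and a light-dependent resistor circuit on GPIO 4) are arbitrary assumptions:

    from gpiozero import LightSensor, LED
    from signal import pause

    # Assumed wiring: LED on GPIO 17, LDR sensing circuit on GPIO 4
    led = LED(17)
    sensor = LightSensor(4)

    # The callbacks fire when the sensed light level crosses the threshold
    sensor.when_dark = led.on
    sensor.when_light = led.off

    # Keep the program alive so the callbacks can keep firing
    pause()

Assigning the bound methods led.on and led.off directly keeps the sketch short; named functions, as in the button example above, work the same way.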
DC motor control in Python

In this section, we will discuss motor control using the Raspberry Pi Zero. Why discuss motor control? In order to control a motor, we need an H-bridge motor driver (discussing H-bridges is beyond our scope; there are several resources on H-bridge motor drivers, for example: http://www.mcmanis.com/chuck/robotics/tutorial/h-bridge/). There are several motor driver kits designed for the Raspberry Pi. In this section, we will make use of the following kit: https://www.pololu.com/product/2753. The Pololu product page also provides instructions on how to connect the motor. Let's get to writing some Python code to operate the motor:

    from gpiozero import Motor
    from gpiozero import OutputDevice
    import time

    motor_1_direction = OutputDevice(13)
    motor_2_direction = OutputDevice(12)

    motor = Motor(5, 6)

    motor_1_direction.on()
    motor_2_direction.on()

    motor.forward()

    time.sleep(10)

    motor.stop()

    motor_1_direction.off()
    motor_2_direction.off()

Raspberry Pi based motor control

In order to control the motor, let's declare the pins: the motor's speed pins and direction pins. As per the motor driver's documentation, the motors are controlled by GPIO pins 12, 13 and 5, 6, respectively:

    from gpiozero import Motor
    from gpiozero import OutputDevice
    import time

    motor_1_direction = OutputDevice(13)
    motor_2_direction = OutputDevice(12)

    motor = Motor(5, 6)

Controlling the motor is as simple as turning on the motor using the on() method and moving the motor in the forward direction using the forward() method:

    motor.forward()

Similarly, reversing the motor direction could be done by calling the method reverse(). Stopping the motor could be done by:

    motor.stop()

Some mini-project challenges for the reader:

- In this article, we discussed interfacing inputs for the Raspberry Pi and controlling motors. Think about a project where we could drive a mobile robot that reads inputs from whisker switches. Is it possible to build a wall-following robot by combining limit switches and motors?
- We discussed controlling a DC motor in this article. How do we control a stepper motor using a Raspberry Pi?
- How can we interface a motion sensor to control the lights at home using a Raspberry Pi Zero?

Summary

In this article, we discussed conditional statements and the applications of conditional statements in Python. We also discussed functions in Python, passing arguments to a function, returning values from a function, and the scope of variables in a Python program. We discussed callback functions and motor control in Python.

Resources for Article:

Further resources on this subject:

- Sending Notifications using Raspberry Pi Zero [article]
- Raspberry Pi Gaming Operating Systems [article]
- Raspberry Pi LED Blueprints [article]

IT Operations Management

Packt
05 Apr 2017
16 min read
In this article by Ajaykumar Guggilla, the author of the book ServiceNow IT Operations Management, we will learn about the ITOM capabilities within ServiceNow, which include:

- Dependency views
- Cloud management
- Discovery
- Credentials

ServiceNow IT Operations Management overview

Every organization and business focuses on key strategies, some of which include:

- Time to market
- Agility
- Customer satisfaction
- Return on investment

Information technology is heavily involved in supporting these strategic goals, either directly or indirectly, providing the underlying IT services with the required IT infrastructure. IT infrastructure includes networks, servers, routers, switches, desktops, laptops, and much more. IT supports these infrastructure components, enabling the business to achieve its goals. IT continuously supports the IT infrastructure and its components with a set of governance, processes, and tools, which is called IT Operations Management.

IT cares for and feeds the business, and the business expects reliability of the services provided by IT to support the underlying business services. The business, in turn, cares for and feeds the customers, who expect satisfaction with the services offered to them, without service disruption. Regardless of the tools used, it is important to understand the underlying relationship between IT, businesses, and customers. IT simply providing the underlying infrastructure and associated components is not going to help; to effectively and efficiently support the business, IT needs to understand how the infrastructure components and processes are aligned and associated with the business services, so it can understand the impact to the business of an incident, problem, event, or change arising out of an IT infrastructure component. IT needs to have a consolidated and complete view of the dependency between the business and the customers, regardless of the technology used, the process followed, or the infrastructure components involved. There needs to be a connected way for IT to understand the relations between these technology components in order to proactively stop possible outages before they occur and handle change in the environment.

On the other hand, a business expects service reliability to be able to support the business services to its customers. There is a huge financial impact when businesses are not able to provide the agreed service levels to their customers. So there is always pressure and dependence from the business on IT to provide a reliable service, and it does not matter what technology or processes are used. Customers, as always, expect satisfaction with the services provided by the business; at times these are adversely affected by service outages caused by the IT infrastructure. Customer satisfaction is also a key strategic goal for the business to be able to sustain itself in a competitive market. IT is also expected to integrate with the customer's infrastructure components to provide a holistic view of the IT infrastructure, so that it can effectively support the business by proactively identifying and fixing outages before they happen, reducing outages and increasing the reliability of the IT services delivered.
Most tools do not understand the context of the Service-Oriented Architecture (SOA) connecting the business services to the impacted IT infrastructure components, which is needed both to effectively support the business and for IT to be able to justify the cost and impact of providing an end-to-end service. Most traditional tools perform certain aspects of ITOM functions, some partially, and some support integration with the IT Service Management (ITSM) tool suite. The missing integration piece between the traditional tools and a full-blown cloud solution platform comes down to the SOA. ServiceNow, a cloud-based solution, has focused the lens of true SOA: it brings together the ITOM suite, providing and leveraging the native data, and it is also able to connect to the customer's infrastructure to provide a holistic, end-to-end view of the IT service at a given snapshot. With ServiceNow, IT has a complete view of the business service and technical dependencies in real time, leveraging powerful individual capabilities, applications, and plugins within ServiceNow ITOM.

ServiceNow ITOM comprises the following applications and capabilities (some of the plugins, applications, and technology might have license restrictions that require separate licensing to be purchased):

- Management, Instrumentation, and Discovery (MID) Server: Helps to establish communication and data movement between ServiceNow and the external corporate network and applications
- Credentials: A platform capability that stores credentials, including usernames, passwords, or certificates, in an encrypted field on the credentials table that is leveraged by ServiceNow discovery
- Service mapping: Discovers and maps the relationships between the IT components that comprise specific business services, even in dynamic, virtualized environments, creating relationships between IT components and business services
- Dependency views: Graphically displays an infrastructure view with the relationships of configuration items and the underlying business services
- Event management: Provides a holistic view of all the events that are triggered from various event monitoring tools
- Orchestration: Helps in automating IT and business processes for operations management
- Discovery: Works with the MID Server and explores the IT infrastructure environment to discover configuration items and populate the Configuration Management Database (CMDB)
- Cloud management: Helps to easily manage third-party cloud providers, which include AWS, Microsoft Azure, and VMware clouds

Understanding ServiceNow IT Operations Management components

Now that we have covered what ITOM is about, focusing on ServiceNow ITOM capabilities, let's dive deeper and explore each capability.

Dependency views

Maps are becoming so important in everyday life; imagine a world without GPS devices or electronic maps. There used to be hard copies of maps available everywhere for us to find our way to a place, and there were also special maps for the utilities and other public service agencies to identify the impact of digging a tunnel, a water pipe, or an underground electric cable. These maps help them to identify the impact of making a change to the ground.
Maps also help us to understand the relationships between states, countries, cities, and streets, with different sets of information in real time, including real-time traffic information showing accidents, construction, and so on. Dependency views are similar to real-life navigation maps: they provide a map of relationships between the IT infrastructure components and the business services that are defined under the scope, and much like the real-time traffic updates on navigation maps, the dependency views show real-time active incidents, changes, and problems reported on an individual configuration item or infrastructure component. Changes frequently happen in the environment, and some of them are handled with only legacy knowledge of how the individual components are connected to the business services, something the service mapping plugin can map down to the individual component level. Making a change without understanding the relationships between each IT infrastructure component might adversely affect service levels and impact the business service.

ServiceNow dependency views provide a snapshot of how the underlying business service is connected to individual Configuration Item (CI) elements. Drilling down to the individual CI elements provides a view of associated service operations and service transition data, including incidents logged against a given CI, any underlying problems reported against it, and changes associated with it. Dependency views are based on D3 and Angular technology, which provides a graphical view of configuration items and their relationships. The dependency views provide a view of the CIs and their relationships; in order to get a perspective from a business standpoint, you will need to enable the service mapping plugin. Having a detailed view of how the individual CI components are connected, from the business service down to the CI components, complements change management by enabling effective impact analysis before any changes are made to the respective CI.

Image source: wiki.servicenow.com

A dependency map starts with a root node, usually termed the root CI, which is grayed out with a gray frame. Relationships are then mapped from the upstream and downstream dependencies of the infrastructure components that are scoped for discovery by ServiceNow auto discovery. Administrators have control of the number of levels to display on the dependency maps. The maps are also easy to manage: they allow creating or modifying existing relationships right from the map, which posts the respective changes to the CMDB automatically. Each CI component of the dependency map has an indicator that shows any active and pending issues against the CI, including any incidents, problems, changes, and events associated with the respective configuration item.

Cloud management

In versions prior to Helsinki, there was no direct way to manage cloud instances; people had to create orchestration scripts to manage the cloud instances and also create custom roles. Managing and provisioning has become easy with the ServiceNow cloud management application. The cloud management application seamlessly integrates with the ServiceNow service catalog and also provides automation capability with orchestration workflows.
The cloud management application fully integrates the life cycle management of virtual resources into ServiceNow's standard data collection, management, analytics, and reporting capabilities. The ServiceNow cloud management application provides easy, quick integration options for the key cloud providers, which include:
AWS Cloud: Manages Amazon Web Services (AWS) resources through the AWS Cloud application
Microsoft Azure Cloud: Integrates with Azure through the service catalog and provides the ability to manage virtual resources easily
VMware Cloud: Integrates with VMware vCenter to manage virtual resources through the service catalog
The following figure describes a high-level architecture of the cloud management application: Key features of the cloud management application include the following:
A single pane of glass to manage virtual services in public and private cloud environments, including approvals, notifications, security, asset management, and so on
The ability to reuse configurations through resource templates, which help to reuse capability sets
Seamless integration with the service catalog; with a defined workflow and approvals, the integration runs end to end, from the user request through to cloud provisioning
The ability to control leased resources through date controls and role-based security access
The ability to use the ServiceNow discovery application, or the standalone capability, to discover virtual resources and their relationships in the environment
The ability to determine the best virtualization server for a VM based on the data discovered by CMDB auto discovery
The ability to control and manage virtual resources effectively with a controlled termination and shutdown date
The ability to increase virtual server resources, for example storage or memory, in a controlled fashion: integrated with the service catalog and with the appropriate approvals, resources can be increased as required
The ability to perform price calculations and to integrate managed virtual machines with asset management
The ability to provision the required cloud environment automatically or manually, with zero-click options
There are different roles within the cloud management application; here are some of them:
Virtual provisioning cloud administrator: Owns the cloud admin portal and its end-to-end management, including the configuration of the cloud providers. Administrators configure the service catalog items used by requesters and the approvals required to provision a cloud environment.
Virtual provisioning cloud approver: Approves or rejects requests for virtual resources.
Virtual provisioning cloud operator: Fulfils requests to manage the virtual resources and the respective cloud providers. Cloud operators are mostly involved when manual human intervention is required to manage or provision the virtual resources.
Virtual provisioning cloud user: Has access to the My Virtual Assets portal, which helps users manage the virtual resources they own, have requested, or are responsible for.
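The provisioning flow described next is normally driven through the service catalog. As a rough, hedged illustration of what such a request could look like from code, here is a Python sketch that orders a catalog item through ServiceNow's Service Catalog REST API; the instance URL, credentials, catalog item sys_id, and variable names are all hypothetical placeholders, and the exact endpoint and payload can vary by release, so verify them against the REST API Explorer on your own instance.

```python
# A minimal sketch, assuming a hypothetical instance, user, and catalog item sys_id.
# The endpoint follows the commonly documented Service Catalog REST API pattern:
#   POST /api/sn_sc/servicecatalog/items/{sys_id}/order_now
import requests

INSTANCE = "https://dev12345.service-now.com"         # hypothetical instance
ITEM_SYS_ID = "0d08837237153000158bbfc8bcbe5d02"       # hypothetical 'request a VM' item

def order_vm(quantity=1):
    url = "{0}/api/sn_sc/servicecatalog/items/{1}/order_now".format(INSTANCE, ITEM_SYS_ID)
    payload = {
        "sysparm_quantity": str(quantity),
        # Variable names depend entirely on how the catalog item is defined.
        "variables": {"vm_size": "small", "lease_end_date": "2017-12-31"},
    }
    response = requests.post(
        url,
        json=payload,
        auth=("cloud.user", "password"),               # hypothetical credentials
        headers={"Accept": "application/json"},
    )
    response.raise_for_status()
    # The reply normally includes the sys_id and number of the generated request (REQ).
    return response.json()

if __name__ == "__main__":
    print(order_vm())
```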
How clouds are provisioned
The cloud administrator creates a service catalog item that users can use to request cloud resources
The cloud user requests a virtual machine through the service catalog
The request goes to the approver, who either approves or rejects it
The cloud operator provisions the request manually, or the virtual resources are auto-provisioned
Discovery Think of how an atlas is put together and how places are discovered and mapped: by satellites, by manual surveys, and by street-map collector vehicles that crawl through the streets gathering data points about streets, houses, and much more. That information is then consumed for many purposes, including GPS navigation, finding and exploring an area, locating an address, and receiving en-route alerts about incidents, construction, road closures, and so on. ServiceNow discovery works the same way: it explores the enterprise network, looking for the devices in scope. ServiceNow discovery probes and sensors collect information about the infrastructure devices connected to a given enterprise network. Discovery uses Shazzam probes to determine which TCP ports are open and whether a device responds to SNMP queries, and it uses sensors to explore any given computer or device, starting with basic probes and then using more specific probes as it learns more. Discovery checks what type of device it is dealing with and, for each type of device, uses different kinds of probes to extract more information about the computer or device and the software running on it. The CMDB is updated, or data is federated, through ServiceNow discovery: discovery searches the CMDB for a CI that matches the CI discovered on the network. The administrator defines what action is taken when discovery runs and a CI is found: either an existing CI in the CMDB is updated, or a new CI is created. Discovery can be scheduled to scan at set intervals, so configuration management keeps the status of each CI up to date. During discovery, the MID Server retrieves the probes to run from the ServiceNow instance, executes them, and returns the results to the ServiceNow instance and the CMDB for processing. No data is retained on the MID Server. The data collected by these probes is processed by sensors. ServiceNow is hosted in ServiceNow data centers spread across the globe, so the application by itself has no ability to communicate with a given enterprise network. Traditionally, there are two types of discovery tools on the market:
Agent: A piece of software is installed on the servers or individual systems and sends all the information about the system to the CMDB.
Agentless: Usually doesn't require any installation on the individual systems or components; a single system or piece of software probes and senses the network by scanning it and federating the data into the CMDB.
ServiceNow discovery is agentless: it does not require any individual software to be installed, because it uses the MID Server. Discovery is available as a separate subscription from the rest of the ServiceNow platform and requires the discovery plugin.
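Once discovery has populated the CMDB, other tools and scripts can consume that data through the platform's REST interfaces. The following is a minimal Python sketch, assuming a hypothetical instance, read access to the CMDB, and the standard Table API endpoint (/api/now/table); the table and field names used here (cmdb_ci_server, name, ip_address, and so on) are common out-of-the-box ones, but confirm them against your own instance before relying on the query.

```python
# A minimal sketch, assuming a hypothetical instance and a read-only account.
import requests

INSTANCE = "https://dev12345.service-now.com"          # hypothetical instance

def discovered_servers(limit=10):
    url = "{0}/api/now/table/cmdb_ci_server".format(INSTANCE)
    params = {
        "sysparm_query": "operational_status=1",       # example filter; adjust to your data
        "sysparm_fields": "name,ip_address,os,discovery_source",
        "sysparm_limit": limit,
    }
    response = requests.get(
        url,
        params=params,
        auth=("readonly.user", "password"),            # hypothetical credentials
        headers={"Accept": "application/json"},
    )
    response.raise_for_status()
    return response.json()["result"]

if __name__ == "__main__":
    for ci in discovered_servers():
        print(ci.get("name"), ci.get("ip_address"), ci.get("discovery_source"))
```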
The MID Server is a Java application that runs on any Windows, UNIX, or Linux system residing within the enterprise network that needs to be discovered. The MID Server is the bridge and communicator between the ServiceNow instance, which sits in the cloud, and the enterprise network, which is secured and controlled. The MID Server uses several techniques to probe devices without using agents. Depending on the type of infrastructure component, the MID Server uses the appropriate protocol to gather information from it; for example, to gather information from network devices the MID Server uses the Simple Network Management Protocol (SNMP), and to connect to UNIX systems it uses SSH. Discovery uses the following probe types for each kind of device:
Windows computers and servers: Remote WMI queries, shell commands
UNIX and Linux servers: Shell commands (via the SSH protocol)
Storage: CIM/WBEM queries
Printers: SNMP queries
Network gear (switches, routers, and so on): SNMP queries
Web servers: HTTP header examination
Uninterruptible Power Supplies (UPS): SNMP queries
Credentials The ServiceNow discovery and orchestration features require credentials to access the enterprise network, and these credentials vary across networks and devices. Credentials such as usernames, passwords, and certificates need a secure place to be stored; the ServiceNow credentials application stores them in an encrypted format in a specific field on the credentials table. Credential tagging allows workflow creators to assign individual credentials to any activity in an orchestration workflow, or to assign different credentials to each occurrence of the same activity type in an orchestration workflow. Credential tagging also works with credential affinities. Credentials can be assigned an order value, which forces discovery and orchestration to try the credentials in that order when orchestration attempts to run a command or discovery tries to query a device. The credentials table can contain many credentials; based on usage patterns, the credentials application places frequently used credentials on a highly used list, so that after the first successful connection the system knows which credential to use and discovery and orchestration can log on to the device faster the next time. Image source: wiki.servicenow.com Credentials are encrypted automatically with a fixed instance key when they are submitted or updated in the credentials (discovery_credentials) table. When credentials are requested by the MID Server, the platform decrypts them using the following process:
The credentials are decrypted on the instance with the password2 fixed key.
The credentials are re-encrypted on the instance with the MID Server's public key.
The credentials are encrypted on the load balancer with SSL.
The credentials are decrypted on the MID Server with SSL.
The credentials are decrypted on the MID Server with the MID Server's private key.
The ServiceNow credentials application integrates with CyberArk credential storage. The MID Server integration with the CyberArk vault enables orchestration and discovery to run without storing any credentials on the ServiceNow instance. The instance maintains a unique identifier for each credential, the credential type (such as SSH, SNMP, or Windows), and any credential affinities. The MID Server obtains the credential identifier and IP address from the instance, and then uses the CyberArk vault to resolve these elements into a usable credential.
The CyberArk integration requires the external credential storage plugin, which is available by request. The CyberArk integration supports these ServiceNow credential types:
CIM
JMS
SNMP community
SSH
SSH private key (with key only)
VMware
Windows
Orchestration activities that use these network protocols support the use of credentials stored in a CyberArk vault:
SSH
PowerShell
JMS
SFTP
Summary In this article, we covered an overview of ITOM and explored the ServiceNow ITOM components, including their high-level architecture and functional aspects, covering discovery, credentials, dependency views, and cloud management. Resources for Article: Further resources on this subject: Management of SOA Composite Applications [article] Working with Business Rules to Define Decision Points in Oracle SOA Suite 11g R1 [article] Introduction to SOA Testing [article]

Using React Router for Client Side Redirecting

Antonio Cucciniello
05 Apr 2017
6 min read
If you are using React in the front end of your web application and you would like to use React Router in order to handle routing, then you have come to the right place. Today, we will learn how to have your client side redirect to another page after you have completed some processing. Let's get started! Installs First we will need to make sure we have a couple of things installed. The first thing here is to make sure you have Node and NPM installed. In order to make this as simple as possible, we are going to use create-react-app to get React fully working in our application. Install this by doing: $ npm install -g create-react-app Now create a directory for this example and enter it; here we will call it client-redirect. $ mkdir client-redirect $ cd client-redirect Once in that directory, initialize create-react-app in the repo. $ create-react-app client Once it is done, test that it is working by running: $ npm start You should see something like this: The last thing you must install is react-router: $ npm install --save react-router Now that you are all set up, let's start with some code. Code First we will need to edit the main JavaScript file, in this case, located at client-redirect/client/src/App.js. App.js App.js is where we handle all the routes for our application and acts as the main js file. Here is what the code looks like for App.js: import React, { Component } from 'react' import { Router, Route, browserHistory } from 'react-router' import './App.css' import Result from './modules/result' import Calculate from './modules/calculate' class App extends Component { render () { return ( <Router history={browserHistory}> <Route path='/' component={Calculate} /> <Route path='/result' component={Result} /> </Router> ) } } export default App If you are familiar with React, you should not be too confused as to what is going on. At the top, we are importing all of the files and modules we need. We are importing React, react-router, our App.css file for styles, and result.js and calculate.js files (do not worry we will show the implementation for these shortly). The next part is where we do something different. We are using react-router to set up our routes. We chose to use a history of type browserHistory. History in react-router listens to the browser's address bar for anything changing and parses the URL from the browser and stores it in a location object so the router can match the specified route and render the different components for that path. We then use <Route> tags in order to specify what path we would like a component to be rendered on. In this case, we are using '/' path for the components in calculate.js and '/result' path for the components in result.js. Let's define what those pages will look like and see how the client can redirect using browserHistory. Calculate.js This page is a basic page with two text boxes and a button. Each text box should receive a number and when the button is clicked, we are going to calculate the sum of the two numbers given. 
Here is what that looks like: import React, { Component } from 'react' import { browserHistory } from 'react-router' export default class Calculate extends Component { render () { return ( <div className={'Calculate-page'} > <InputBox type='text' name='first number' id='firstNum' /> <InputBox type='text' name='second number' id='secondNum' /> <CalculateButton type='button' value='Calculate' name='Calculate' onClick='result()' /> </div> ) } } var InputBox = React.createClass({ render: function () { return <div className={'input-field'}> <input type={this.props.type} value={this.props.value} name={this.props.name} id={this.props.id} /> </div> } }) var CalculateButton = React.createClass({ result: function () { var firstNum = document.getElementById('firstNum').value var secondNum = document.getElementById('secondNum').value var sum = Number(firstNum) + Number(secondNum) if (sum !== undefined) { const path = '/result' browserHistory.push(path) } window.sessionStorage.setItem('sum', sum) return console.log(sum) }, render: function () { return <div className={'calculate-button'}> <button type={this.props.type} value={this.props.value} name={this.props.name} onClick={this.result} > Calculate </button> </div> } }) The important part we want to focus on is inside the result function of the CalculateButton class. We take the two numbers and sum them. Once we have the sum, we create a path variable to hold the route we would like to go to next. Then browserHistory.push(path) redirects the client to a new path of localhost:3000/result. We then store the sum in sessionStorage in order to retrieve it on the next page. result.js This is simply a page that will display your result from the calculation, but it serves as the page you redirected to with react-router. Here is the code: import React, { Component } from 'react' export default class Result extends Component { render () { return ( <div className={'result-page'} > Result : <DisplayNumber id='result' /> </div> ) } } var DisplayNumber = React.createClass({ componentDidMount () { document.getElementById('result').innerHTML = window.sessionStorage.getItem('sum') }, render: function () { return ( <div className={'display-number'}> <p id={this.props.id} /> </div> ) } }) We simply create a class that wraps a paragraph tag. It also has a componentDidMount() function, which allows us to access the sessionStorage for the sum once the component output has been rendered by the DOM. We update the innerHTML of the paragraph element with the sum's value. Test Let's get back to the client directory of our application. Once we are there, we can run: $ npm start This should open a tab in your web browser at localhost:3000. This is what it should look like when you add numbers in the text boxes (here I added 17 and 17). This should redirect you to another page: Conclusion There you go! You now have a web app that utilizes client-side redirecting! To summarize what we did, here is a quick list of what happened: Installed the prerequisites: node, npm Installed create-react-app Created a create-react-app Installed react-router Added our routing to our App.js Created a module that calculated the sum of two numbers and redirected to a new page Displayed the number on the new redirected page Check out the code for this tutorial on GitHub. Possible Resources Check out my GitHub View my personal blog GitHub pages for: react-router create-react-app About the author Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.js). 
He is from New Jersey, USA. His most recent project called Edit Docs is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. To contact Antonio, e-mail him at Antonio.cucciniello16@gmail.com, follow him on twitter at @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.

Getting started with C++ Features

Packt
05 Apr 2017
7 min read
In this article by Jacek Galowicz author of the book C++ STL Cookbook, we will learn new C++ features and how to use structured bindings to return multiple values at once. (For more resources related to this topic, see here.) Introduction C++ got a lot of additions in C++11, C++14, and most recently C++17. By now, it is a completely different language than it was just a decade ago. The C++ standard does not only standardize the language, as it needs to be understood by the compilers, but also the C++ standard template library (STL). We will see how to access individual members of pairs, tuples, and structures comfortably with structured bindings, and how to limit variable scopes with the new if and switch variable initialization capabilities. The syntactical ambiguities, which were introduced by C++11 with the new bracket initialization syntax, which looks the same for initializer lists, were fixed by new bracket initializer rules. The exact type of template class instances can now be deduced from the actual constructor arguments, and if different specializations of a template class shall result in completely different code, this is now easily expressible with constexpr-if. The handling of variadic parameter packs in template functions became much easier in many cases with the new fold expressions. At last, it became more comfortable to define static globally accessible objects in header-only libraries with the new ability to declare inline variables, which was only possible for functions before. Using structured bindings to return multiple values at once C++17 comes with a new feature which combines syntactic sugar and automatic type deduction: Structured bindings. These help assigning values from pairs, tuples, and structs into individual variables. How to do it... Applying a structured binding in order to assign multiple variables from one bundled structure is always one step: Accessing std::pair: Imagine we have a mathematical function divide_remainder, which accepts a dividend and a divisor parameter, and returns the fraction of both as well as the remainder. It returns those values using an std::pair bundle: std::pair<int, int> divide_remainder(int dividend, int divisor); Instead of accessing the individual values of the resulting pair like this: const auto result (divide_remainder(16, 3)); std::cout << "16 / 3 is " << result.first << " with a remainder of " << result.second << "n"; We can now assign them to individual variables with expressive names, which is much better to read: auto [fraction, remainder] = divide_remainder(16, 3); std::cout << "16 / 3 is " << fraction << " with a remainder of " << remainder << "n"; Structured bindings also work with std::tuple: Let's take the following example function, which gets us online stock information: std::tuple<std::string, std::chrono::time_point, double> stock_info(const std::string &name); Assigning its result to individual variables looks just like in the example before: const auto [name, valid_time, price] = stock_info("INTC"); Structured bindings also work with custom structures: Let's assume a structure like the following: struct employee { unsigned id; std::string name; std::string role; unsigned salary; }; Now we can access these members using structured bindings. 
We will even do that in a loop, assuming we have a whole vector of those: int main() { std::vector<employee> employees {/* Initialized from somewhere */}; for (const auto &[id, name, role, salary] : employees) { std::cout << "Name: " << name << "Role: " << role << "Salary: " << salary << "n"; } } How it works... Structured bindings are always applied with the same pattern: auto [var1, var2, ...] = <pair, tuple, struct, or array expression>; The list of variables var1, var2, ... must exactly match the number of variables which are contained by the expression being assigned from. The <pair, tuple, struct, or array expression> must be one of the following: An std::pair. An std::tuple. A struct. All members must be non-static and be defined in the same base class. An array of fixed size. The type can be auto, const auto, const auto& and even auto&&. Not only for the sake of performance, always make sure to minimize needless copies by using references when appropriate. If we write too many or not enough variables between the square brackets, the compiler will error out, telling us about our mistake: std::tuple<int, float, long> tup {1, 2.0, 3}; auto [a, b] = tup; This example obviously tries to stuff a tuple variable with three members into only two variables. The compiler immediately chokes on this and tells us about our mistake: error: type 'std::tuple<int, float, long>' decomposes into 3 elements, but only 2 names were provided auto [a, b] = tup; There's more... A lot of fundamental data structures from the STL are immediately accessible using structured bindings without us having to change anything. Consider for example a loop, which prints all items of an std::map: std::map<std::string, size_t> animal_population { {"humans", 7000000000}, {"chickens", 17863376000}, {"camels", 24246291}, {"sheep", 1086881528}, /* … */ }; for (const auto &[species, count] : animal_population) { std::cout << "There are " << count << " " << species << " on this planet.n"; } This particular example works, because when we iterate over a std::map container, we get std::pair<key_type, value_type> items on every iteration step. And exactly those are unpacked using the structured bindings feature (Assuming that the species string is the key, and the population count the value being associated with the key), in order to access them individually in the loop body. Before C++17, it was possible to achieve a similar effect using std::tie: int remainder; std::tie(std::ignore, remainder) = divide_remainder(16, 5); std::cout << "16 % 5 is " << remainder << "n"; This example shows how to unpack the result pair into two variables. std::tie is less powerful than structured bindings in the sense that we have to define all variables we want to bind to before. On the other hand, this example shows a strength of std::tie which structured bindings do not have: The value std::ignore acts as a dummy variable. The fraction part of the result is assigned to it, which leads to that value being dropped because we do not need it in that example. 
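To tie these pieces together, here is a small, self-contained sketch (my own illustration, not code from the book) that compiles with any C++17 compiler, for example g++ -std=c++17; it combines structured bindings on a returned std::pair, structured bindings over an std::map, and the pre-C++17 std::tie/std::ignore fallback discussed above.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <tuple>
#include <utility>

// Returns fraction and remainder as a pair, as in the divide_remainder example.
std::pair<int, int> divide_remainder(int dividend, int divisor) {
    return {dividend / divisor, dividend % divisor};
}

int main() {
    // Structured bindings unpack the pair into two named variables.
    auto [fraction, remainder] = divide_remainder(16, 3);
    std::cout << "16 / 3 is " << fraction
              << " with a remainder of " << remainder << "\n";

    // The same feature works on each std::pair<const key, value> of a map.
    std::map<std::string, int> port_count{{"ssh", 22}, {"http", 80}, {"https", 443}};
    for (const auto &[service, port] : port_count) {
        std::cout << service << " -> " << port << "\n";
    }

    // Pre-C++17 style: std::tie with std::ignore drops the fraction.
    int only_remainder = 0;
    std::tie(std::ignore, only_remainder) = divide_remainder(16, 5);
    std::cout << "16 % 5 is " << only_remainder << "\n";
    return 0;
}
```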
Back in the past, the divide_remainder function would have been implemented the following way, using output parameters: bool divide_remainder(int dividend, int divisor, int &fraction, int &remainder); Accessing it would have looked like the following: int fraction, remainder; const bool success {divide_remainder(16, 3, fraction, remainder)}; if (success) { std::cout << "16 / 3 is " << fraction << " with a remainder of " << remainder << "n"; } A lot of people will still prefer this over returning complex structures like pairs, tuples, and structs, arguing that this way the code would be faster, due to avoided intermediate copies of those values. This is not true any longer for modern compilers, which optimize intermediate copies away. Apart from the missing language features in C, returning complex structures via return value was considered slow for a long time, because the object had to be initialized in the returning function, and then copied into the variable which shall contain the return value on the caller side. Modern compilers support return value optimization (RVO), which enables for omitting intermediate copies. Summary Thus we successfully studied how to use structured bindings to return multiple values at once in C++ 17 using code examples. Resources for Article: Further resources on this subject: Creating an F# Project [article] Hello, C#! Welcome, .NET Core! [article] Exploring Structure from Motion Using OpenCV [article]

Understanding the Dependencies of a C++ Application

Packt
05 Apr 2017
9 min read
This article by Richard Grimes, author of the book, Beginning C++ Programming explains the dependencies of a C++ application. A C++ project will produce an executable or library, and this will be built by the linker from object files. The executable or library is dependent upon these object files. An object file will be compiled from a C++ source file (and potentially one or more header files). The object file is dependent upon these C++ source and header files. Understanding dependencies is important because it helps you understand the order to compile the files in your project, and it allows you to make your project builds quicker by only compiling those files that have changed. (For more resources related to this topic, see here.) Libraries When you include a file within your source file the code within that header file will be accessible to your code. Your include file may contain whole function or class definitions (these will be covered in later chapters) but this will result in a problem: multiple definitions of a function or class. Instead, you can declare a class or function prototype, which indicates how calling code will call the function without actually defining it. Clearly the code will have to be defined elsewhere, and this could be a source file or a library, but the compiler will be happy because it only sees one definition. A library is code that has already been defined, it has been fully debugged and tested, and therefore users should not need to have access to the source code. The C++ Standard Library is mostly shared through header files, which helps you when you debug your code, but you must resist any temptation to edit these files. Other libraries will be provided as compiled libraries. There are essentially two types of compiled libraries: static libraries and dynamic link libraries. If you use a static library then the compiler will copy the compiled code that you use from the static library and place it in your executable. If you use a dynamic link (or shared) library then the linker will add information used during runtime (it may be when the executable is loaded, or it may even be delayed until the function is called) to load the shared library into memory and access the function. Windows uses the extension lib for static libraries and dll for dynamic link libraries. GNU gcc uses the extension a for static libraries and so for shared libraries. If you use library code in a static or dynamic link library the compiler will need to know that you are calling a function correctly—to make sure your code calls a function with the correct number of parameters and correct types. This is the purpose of a function prototype—it gives the compiler the information it needs to know about calling the function without providing the actual body of the function, the function definition. In general, the C++ Standard Library will be included into your code through the standard header files. The C Runtime Library (which provides some code for the C++ Standard Library) will be static linked, but if the compiler provides a dynamic linked version you will have a compiler option to use this. Pre-compiled Headers When you include a file into your source file the preprocessor will include the contents of that file (after taking into account any conditional compilation directives) and recursively any files included by that file. As illustrated earlier, this could result in thousands of lines of code. 
As you develop your code you will often compile the project so that you can test the code. Every time you compile your code the code defined in the header files will also be compiled even though the code in library header files will not have changed. With a large project this can make the compilation take a long time. To get around this problem compilers often offer an option to pre-compile headers that will not change. Creating and using precompiled headers is compiler specific. For example, with gcc you compile a header as if it is a C++ source file (with the /x switch) and the compiler creates a file with an extension of gch. When gcc compiles source files that use the header it will search for the gch file and if it finds the precompiled header it will use that, otherwise it will use the header file. In Visual C++ the process is a bit more complicated because you have to specifically tell the compiler to look for a precompiled header when it compiles a source file. The convention in Visual C++ projects is to have a source file called stdafx.cpp which has a single line that includes the file stdafx.h. You put all your stable header file includes in stdafx.h. Next, you create a precompiled header by compiling stdafx.cpp using the /Yc compiler option to specify that stdafx.h contains the stable headers to compile. This will create a pch file (typically, Visual C++ will name it after your project) containing the code compiled up to the point of the inclusion of the stdafx.h header file. Your other source files must include the stdafx.h header file as the first header file, but it may also include other files. When you compile your source files you use the /Yu switch to specify the stable header file (stdafx.h) and the compiler will use the precompiled header pch file instead of the header. When you examine large projects you will often find precompiled headers are used, and as you can see, it alters the file structure of the project. The example later in this chapter will show how to create and use precompiled headers. Project Structure It is important to organize your code into modules to enable you to maintain it effectively. Even if you are writing C-like procedural code (that is, your code involves calls to functions in a linear way) you will also benefit from organizing it into modules. For example, you may have functions that manipulate strings and other functions that access files, so you may decide to put the definition of the string functions in one source file, string.cpp, and the definition of the file functions in another file, file.cpp. So that other modules in the project can use these files you must declare the prototypes of the functions in a header file and include that header in the module that uses the functions. There is no absolute rule in the language about the relationship between the header files and the source files that contain the definition of the functions. You may have a header file called string.h for the functions in string.cpp and a header file called file.h for the functions in file.cpp. Or you may have just one file called utilities.h that contains the declarations for all the functions in both files. The only rule that you have to abide by is that at compile time the compiler must have access to a declaration of the function in the current source file, either through a header file, or the function definition itself. 
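As a concrete sketch of this convention (the file names follow the article's string.cpp example, while the to_upper function itself is a made-up illustration), the prototype lives in the header, the definition lives in the matching source file, and any caller just includes the header; compiling the project is then a matter of building both source files and linking them, for example g++ main.cpp string.cpp. In a real project you might prefer a header name such as string_utils.h to avoid any confusion with the C standard header string.h.

```cpp
// string.h - declares the prototype so callers know how to call the function
#ifndef STRING_UTILS_H
#define STRING_UTILS_H
#include <string>

std::string to_upper(const std::string &text);   // prototype only, no body here

#endif

// string.cpp - defines the function declared in string.h
#include "string.h"
#include <cctype>

std::string to_upper(const std::string &text) {
    std::string result;
    for (char ch : text) {
        result += static_cast<char>(std::toupper(static_cast<unsigned char>(ch)));
    }
    return result;
}

// main.cpp - includes the header, so the compiler sees the declaration it needs
#include <iostream>
#include "string.h"

int main() {
    std::cout << to_upper("hello") << "\n";
    return 0;
}
```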
The compiler will not look forward in a source file, so if a function calls another function in the same source file that called function must have already been defined before the calling function, or there must be a prototype declaration. This leads to a typical convention of having a header file associated with each source file that contains the prototypes of the functions in the source file, and the source file includes this header. This convention becomes more important when you write classes. Managing Dependencies When a project is built with a building tool, checks are performed to see if the output of the build exist and if not, perform the appropriate actions to build it. Common terminology is that the output of a build step is called a target and the inputs of the build step (for example, source files) are the dependencies of that target. Each target's dependencies are the files used to make them. The dependencies may themselves be a target of a build action and have their own dependencies. For example, the following picture shows the dependencies in a project: In this project there are three source files (main.cpp, file1.cpp, file2.cpp) each of these includes the same header utils.h which is precompiled (and hence why there is a fourth source file, utils.cpp, that only contains utils.h). All of the source files depend on utils.pch, which in turn depends upon utils.h. The source file main.cpp has the main function and calls functions in the other two source files (file1.cpp and file2.cpp), and accesses the functions through the associated header files file1.h and file2.h. On the first compilation the build tool will see that the executable depends on the four object files and so it will look for the rule to build each one. In the case of the three C++ source files this means compiling the cpp files, but since utils.obj is used to support the precompiled header, the build rule will be different to the other files. When the build tool has made these object files it will then link them together along with any library code (not shown here). Subsequently, if you change file2.cpp and build the project, the build tool will see that only file2.cpp has changed and since only file2.obj depends on file2.cpp all the make tool needs to do is compile file2.cpp and then link the new file2.obj with the existing object files to create the executable. If you change the header file, file2.h, the build tool will see that two files depend on this header file, file2.cpp and main.cpp and so the build tool will compile these two source files and link the new two object files file2.obj and main.obj with the existing object files to form the executable. If, however, the precompiled header source file, util.h, changes it means that all of the source files will have to be compiled. Summary For a small project, dependencies are easy to manage, and as you have seen, for a single source file project you do not even have to worry about calling the linker because the compiler will do that automatically. As a C++ project gets bigger, managing dependencies gets more complex and this is where development environments like Visual C++ become vital. Resources for Article: Further resources on this subject: Introduction to C# and .NET [article] Preparing to Build Your Own GIS Application [article] Writing a Fully Native Application [article]

Getting Started with Docker Storage

Packt
05 Apr 2017
12 min read
In this article by Scott Gallagher, author of the book Mastering Docker – Second Edition, we will cover the places you store your containers, such as Docker Hub and Docker Hub Enterprises. We will also cover Docker Registry that you can use to run your own local storage for the Docker containers. We will review the differences between them all and when and how to use each of them. It will also cover how to set up automated builds using web hooks as well as the pieces that are all required to set them up. Lastly, we will run through an example of how to set up your own Docker Registry. Let's take a quick look at the topics we will be covering in this article: Docker Hub Docker Hub Enterprise Docker Registry Automated builds (For more resources related to this topic, see here.) Docker Hub In this section, we will focus on that Docker Hub, which is a free public option, but also has a private option that you can use to secure your images. We will focus on the web aspect of Docker Hub and the management you can do there. The login page is like the one shown in the following screenshot: Dashboard After logging into the Docker Hub, you will be taken to the following landing page. This page is known as the Dashboard of Docker Hub. From here, you can get to all the other sub pages of Docker Hub. In the upcoming sections, we will go through everything you see on the dashboard, starting with the dark blue bar you have on the top. Exploring the repositories page The following is the screenshot of the Explore link you see next to Dashboard at the top of the screen: As you can see in the screenshot, this is a link to show you all the official repositories that Docker has to offer. Official repositories are those that come directly from Docker or from the company responsible for the product. They are regularly updated and patched as needed. Organizations Organizations are those that you have either created or have been added to. Organizations allow you to layer on control, for say, a project that multiple people are collaborating on. The organization gets its own setting such as whether to store repositories as public or private by default, changing plans that will allow for different amounts of private repositories, and separate repositories all together from the ones you or others have. You can also access or switch between accounts or organizations from the Dashboard just below the Docker log, where you will typically see your username when you log in. This is a drop-down list, where you can switch between all the organizations you belong to. The Create menu The Create menu is the new item along the top bar of the Dashboard. From this drop-down menu, you can perform three actions: Create repository Create automated build Create organization A pictorial representation is shown in the following screenshot: The Settings Page Probably, the first section everyone jumps to once they have created an account on the Docker Hub—the Settings page. I know, that's what I did at least. The Account Settings page can be found under the drop-down menu that is accessed in the upper-right corner of the dashboard on selecting Settings. 
The page allows you to set up your public profile; change your password; see what organization you belong to, the subscriptions for e-mail updates you belong to, what specific notifications you would like to receive, what authorized services have access to your information, linked accounts (such as your GitHub or Bitbucket accounts); as well as your enterprise licenses, billing, and global settings. The only global setting as of now is the choice between having your repositories default to public or private upon creation. The default is to create them as public repositories. The Stars page Below the dark blue bar at the top of the Dashboard page are two more areas that are yet to be covered. The first, the Stars page, allows you to see what repositories you yourself have starred. This is very useful if you come across some repositories that you prefer to use and want to access them to see whether they have been updated recently or whether any other changes have occurred on these repositories. The second is a new setting in the new version of Docker Hub called Contributed. In this section, there will be a list of repositories you have contributed to outside of the ones within your Repositories list. Docker Hub Enterprise Docker Hub Enterprise, as it is currently known, will eventually be called Docker Subscription. We will focus on Docker Subscription, as it's the new and shiny piece. We will view the differences between Docker Hub and Docker Subscription (as we will call it moving forward) and view the options to deploy Docker Subscription. Let's first start off by comparing Docker Hub to Docker Subscription and see why each is unique and what purpose each serves: Docker Hub Shareable image, but it can be private No hassle of self-hosting Free (except for a certain number of private images) Docker Subscription Integrated into your authentication services (that is, AD/LDAP) Deployed on your own infrastructure (or cloud) Commercial support Docker Subscription for server Docker Subscription for server allows you to deploy both Docker Trusted Registry as well as Docker Engine on the infrastructure that you manage. Docker Trusted Registry is the location where you store the Docker images that you have created. You can set these up to be internal only or share them out publicly as well. Docker Subscription gives you all the benefits of running your own dedicated Docker hosted registry with the added benefits of getting support in case you need it. Docker Subscription for cloud As we saw in the previous section, we can also deploy Docker Subscription to a cloud provider if we wish. This allows us to leverage our existing cloud environments without having to roll our own server infrastructure up to host our Docker images. The setup is the same as we reviewed in the previous section; but this time, we will be targeting our existing cloud environment instead. Docker Registry In this section, we will be looking at Docker Registry. Docker Registry is an open source application that you can run anywhere you please and store your Docker image in. We will look at the comparison between Docker Registry and Docker Hub and how to choose among the two. By the end of the section, you will learn how to run your own Docker Registry and see whether it's a true fit for you. An overview of Docker Registry Docker Registry, as stated earlier, is an open source application that you can utilize to store your Docker images on a platform of your choice. 
This allows you to keep them 100% private if you wish or share them as needed. The registry can be found at https://docs.docker.com/registry/. This will run you through the setup and the steps to follow while pushing images to Docker Registry compared to Docker Hub. Docker Registry makes a lot of sense if you want to roll your own registry without having to pay for all the private features of Docker Hub. Next, let's take a look at some comparisons between Docker Hub and Docker Registry, so you can make an educated decision as to which platform to choose to store your images. Docker Registry will allow you to do the following: Host and manage your own registry from which you can serve all the repositories as private, public, or a mix between the two Scale the registry as needed based on how many images you host or how many pull requests you are serving out All are command-line-based for those that live on the command line Docker Hub will allow you to: Get a GUI-based interface that you can use to manage your images A location already set up on the cloud that is ready to handle public and/or private images Peace of mind of not having to manage a server that is hosting all your images Automated builds In this section, we will look at automated builds. Automated builds are those that you can link to your GitHub or Bitbucket account(s) and, as you update the code in your code repository, you can have the image automatically built on Docker Hub. We will look at all the pieces required to do so and, by the end, you'll be automating all your builds. Setting up your code The first step to create automated builds is to set up your GitHub or Bitbucket code. These are the two options you have while selecting where to store your code. For our example, I will be using GitHub; but the setup will be the same for GitHub and Bitbucket. First, we set up our GitHub code that contains just a simple README file that we will edit for our purpose. This file could be anything as far as a script or even multiple files that you want to manipulate for your automated builds. One key thing is that we can't just leave the README file alone. One key piece is that a Dockerfile is required to do the builds when you want it to for them to be automated. Next, we need to set up the link between our code and Docker Hub. Setting up Docker Hub On Docker Hub, we are going to use the Create drop-down menu and select Create Automated Build. After selecting it, we will be taken to a screen that will show you the accounts you have linked to either GitHub or Bitbucket. You then need to search and select the repository from either of the locations you want to create the automated build from. This will essentially create a web hook that when a commit is done on a selected code repository, then a new build will be created on Docker Hub. After you select the repository you would like to use, you will be taken to a screen similar to the following one: For the most part, the defaults will be used by most. You can select a different branch if you want to use one, say a testing branch if you use one before the code may go to the master branch. The one thing that will not be filled out, but is required, is the description field. You must enter something here or you will not be able to continue past this page. Upon clicking Create, you will be taken to a screen similar to the next screenshot: On this screen, you can see a lot of information on the automated build you have set up. 
Information such as tags, the Dockerfile in the code repository, build details, build settings, collaborators on the code, web hooks, and settings that include making the repository public or private and deleting the automated build repository as well. Putting all the pieces together So, let's take a run at doing a Docker automated build and see what happens when we have all the pieces in place and exactly what we have to do to kick off this automated build and be able to create our own magic: Update the code or any file inside your GitHub or Bitbucket repository. Upon committing the update, the automated build will be kicked off and logged in Docker Hub for that automated repository. Creating your own registry To create a registry of your own, use the following command: $ docker-machine create --driver vmwarefusion registry Creating SSH key... Creating VM... Starting registry... Waiting for VM to come online... To see how to connect Docker to this machine, run the following command: $ docker-machine env registry export DOCKER_TLS_VERIFY="1" export DOCKER_HOST="tcp://172.16.9.142:2376" export DOCKER_CERT_PATH="/Users/scottpgallagher/.docker/machine/machines/ registry" export DOCKER_MACHINE_NAME="registry" # Run this command to configure your shell: # eval "$(docker-machine env registry)" $ eval "$(docker-machine env registry)" $ docker pull registry $ docker run -p 5000:5000 -v <HOST_DIR>:/tmp/registry-dev registry:2 This will specify to use version 2 of the registry. For AWS (as shown in example from https://hub.docker.com/_/registry/): $ docker run -e SETTINGS_FLAVOR=s3 -e AWS_BUCKET=acme-docker -e STORAGE_PATH=/registry -e AWS_KEY=AKIAHSHB43HS3J92MXZ -e AWS_SECRET=xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T -e SEARCH_BACKEND=sqlalchemy -p 5000:5000 registry:2 Again, this will use version 2 of the self-hosted registry. Then, you need to modify your Docker startups to point to the newly set up registry. Add the following line to the Docker startup in the /etc/init.d/docker file: -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock --insecureregistry <REGISTRY_HOSTNAME>:5000 Most of these settings might already be there and you might only need to add --insecure-registry <REGISTRY_HOSTNAME>:5000: To access this file, you will need to use docker-machine: $ docker-machine ssh <docker-host_name> Now, you can pull a registry from the public Docker Hub as follows: $ docker pull debian Tag it, so when we do a push, it will go to the registry we set up: $ docker tag debian <REGISTRY_URL>:5000/debian Then, we can push it to our registry: $ docker push <REGISTRY_URL>:5000/debian We can also pull it for any future clients (or after any updates we have pushed for it): $ docker pull <REGISTRY_URL>:5000/debian Summary In this article, we dove deep into Docker Hub and also reviewed the new shiny Docker Subscription as well as the self-hosted Docker Registry. We have gone through the extensive review of each of them. You learned of the differences between them all and how to utilize each one. In this article, we also looked deep into setting up automated builds. We took a look at how to set up your own Docker Hub Registry. We have encompassed a lot in this chapter and I hope you have learned a lot and will like to put it all into good use. Resources for Article: Further resources on this subject: Docker in Production [article] Docker Hosts [article] Hands On with Docker Swarm [article]

API and Intent-Driven Networking

Packt
05 Apr 2017
19 min read
In this article by Eric Chou, author of the book Mastering Python Networking, we will look at the following topics:
Treating infrastructure as code and data modeling
Cisco NX-API and application centric infrastructure
(For more resources related to this topic, see here.) Infrastructure as Python code In a perfect world, network engineers and the people who design and manage networks should be able to focus on what they want the network to achieve instead of on device-level interactions. In my first job, as an intern for a local ISP, wide-eyed and excited, I received my first assignment: install a router at a customer site to turn up their fractional frame relay link (remember those?). How would I do that? I asked, and I was handed a standard operating procedure for turning up frame relay links. I went to the customer site, blindly typed in the commands, watched the green lights flash, then happily packed my bag and patted myself on the back for a job well done. As exciting as that first assignment was, I did not fully understand what I was doing. I was simply following instructions without thinking about the implications of the commands I was typing in. How would I have troubleshot the link if the light had been red instead of green? I think I would have called back to the office. Of course, network engineering is not about typing commands into a device; it is about building a way for services to be delivered from one point to another with as little friction as possible. The commands we have to use and the output we have to interpret are merely a means to an end. I would like to argue that we should focus as much as possible on the intent of the network (Intent-Driven Networking) and abstract ourselves from device-level interaction on an as-needed basis. In my opinion, using APIs gets us closer to a state of Intent-Driven Networking. In short, because we abstract away the layer of specific commands executed on the destination device, we focus on our intent instead of on the specific commands given to the device. For example, if our intent is to deny an IP from entering our network, we might use 'access-list and access-group' on a Cisco and 'filter-list' on a Juniper. However, by using an API, our program can ask the executor for their intent while masking what kind of physical device it is talking to. Screen Scraping vs. API Structured Output Imagine a common scenario where we need to log in to a device and make sure all the interfaces on the device are in an up/up state (both status and protocol are showing as up). For a human network engineer getting into a Cisco NX-OS device, it is simple enough to issue the show ip interface brief command and easily tell from the output which interfaces are up: nx-osv-2# show ip int brief IP Interface Status for VRF "default"(1) Interface IP Address Interface Status Lo0 192.168.0.2 protocol-up/link-up/admin-up Eth2/1 10.0.0.6 protocol-up/link-up/admin-up nx-osv-2# The line breaks, white spaces, and the column titles on the first line are easily distinguished by the human eye. In fact, they are there to help us line up, say, the IP address of each interface from line 1 with lines 2 and 3. If we put ourselves in the computer's position, all these spaces and line breaks only take away from the really important output, which is: which interfaces are in the up/up state?
To illustrate this point, we can look at the Paramiko output again: >>> new_connection.send('sh ip int briefn') 16 >>> output = new_connection.recv(5000) >>> print(output) b'sh ip int briefrrnIP Interface Status for VRF "default"(1)rnInterface IP Address Interface StatusrnLo0 192.168.0.2 protocol-up/link-up/admin-up rnEth2/1 10.0.0.6 protocol-up/link-up/admin-up rnrnxosv- 2# ' >>> If we were to parse out that data, of course there are many ways to do it, but here is what I would do in a pseudo code fashion: Split each line via line break. I may or may not need the first line that contain the executed command, for now, I don't think I need it. Take out everything on the second line up until the VRF and save that in a variable as we want to know which VRF the output is showing of. For the rest of the lines, because we do not know how many interfaces there are, we will do a regular expression to search if the line starts with possible interfaces, such as lo for loopback and 'Eth'. We will then split this line into three sections via space, each consist of name of interface, IP address, then the interface status. The interface status will then be split further using the forward slash (/) to give us the protocol, link, and admin status. Whew, that is a lot of work just for something that a human being can tell in a glance! You might be able to optimize the code and the number of lines, but in general this is what we need to do when we need to 'screen scrap' something that is somewhat unstructured. There are many downsides to this method, the few bigger problems that I see are: Scalability: We spent so much time in painstakingly details for each output, it is hard to imagine we can do this for the hundreds of commands that we typically run. Predicability: There is really no guarantee that the output stays the same. If the output is changed ever so slightly, it might just enter with our hard earned battle of information gathering. Vendor and software lock-in: Perhaps the biggest problem is that once we spent all these time parsing the output for this particular vendor and software version, in this case Cisco NX-OS, we need to repeat this process for the next vendor that we pick. I don't know about you, but if I were to evaluate a new vendor, the new vendor is at a severe on-boarding disadvantage if I had to re-write all the screen scrap code again. Let us compare that with an output from an NX-API call for the same 'show IP interface brief' command. We will go over the specifics of getting this output from the device later in this article, but what is important here is to compare the follow following output to the previous screen scraping steps: { "ins_api":{ "outputs":{ "output":{ "body":{ "TABLE_intf":[ { "ROW_intf":{ "admin-state":"up", "intf-name":"Lo0", "iod":84, "ip-disabled":"FALSE", "link-state":"up", "prefix":"192.168.0.2", "proto-state":"up" } }, { "ROW_intf":{ "admin-state":"up", "intf-name":"Eth2/1", "iod":36, "ip-disabled":"FALSE", "link-state":"up", "prefix":"10.0.0.6", "proto-state":"up" } } ], "TABLE_vrf":[ { "ROW_vrf":{ "vrf-name-out":"default" } }, { "ROW_vrf":{ "vrf-name-out":"default" } } ] }, "code":"200", "input":"show ip int brief", "msg":"Success" } }, "sid":"eoc", "type":"cli_show", "version":"1.2" } } NX-API can return output in XML or JSON, this is obviously the JSON output that we are looking at. Right away you can see the answered are structured and can be mapped directly to Python dictionary data structure. 
There is no parsing required: simply pick the key you want and retrieve the value associated with that key. There is also the added benefit of a code indicating command success or failure, with a message telling the sender the reason behind the success or failure. You no longer need to keep track of the command issued, because it is already returned to you in the 'input' field. There is also other metadata, such as the version of the NX-API. This type of exchange makes life easier for both vendors and operators. On the vendor side, they can easily transfer configuration and state information, as well as add and expose extra fields when the need arises. On the operator side, they can easily ingest the information and build their infrastructure around it. It is generally agreed that automation is much needed and a good thing; the questions usually center around which format and structure the automation should take. As you will see later in this article, there are many competing technologies under the umbrella of API; on the transport side alone we have REST API, NETCONF, and RESTCONF, among others. Ultimately the overall market will decide, but in the meantime we should all take a step back and decide which technology best suits our needs.

Data modeling for infrastructure as code

According to Wikipedia, "A data model is an abstract model that organizes elements of data and standardizes how they relate to one another and to properties of the real world entities. For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner."

The data modeling process can be illustrated in the following graph:

Data Modeling Process (source: https://en.wikipedia.org/wiki/Data_model)

When applied to networking, we can apply this concept as an abstract model that describes our network, be it a datacenter, a campus, or a global wide area network. If we take a closer look at a physical datacenter, a layer 2 Ethernet switch can be thought of as a device containing a table of MAC addresses mapped to each port. Our switch data model describes how the MAC addresses should be kept in a table: which ones are the keys, what additional characteristics there are (think of VLAN and private VLAN), and so on. Similarly, we can move beyond devices and map the datacenter in a model. We can start with how many devices there are in each of the access, distribution, and core layers, how they are connected, and how they should behave in a production environment. For example, if we have a fat-tree network: how many links should each spine router have, how many routes should it contain, and how many next hops should each of the prefixes have? These characteristics can be mapped out in a format that can be referenced against as the ideal state that we should always check against.

One of the relatively new network data modeling languages that is gaining traction is YANG, which stands for Yet Another Next Generation (despite common belief, some IETF workgroups do have a sense of humor). It was first published in RFC 6020 in 2010, and has since gained traction among vendors and operators. At the time of writing, support for YANG varies greatly from vendor to vendor and platform to platform; the adoption rate in production is therefore relatively low. However, it is a technology worth keeping an eye on.
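Even before a full-blown modeling language such as YANG enters the picture, the core idea of declaring an ideal state and checking the live network against it can be sketched in plain Python. The following is purely illustrative: the model values, the observed values, and the check_spine function are made up for this example and are not part of any vendor API:

# Illustrative only: a tiny "ideal state" model for one spine switch,
# compared against values we would collect from the live device.
ideal_spine = {
    "uplinks": 4,           # expected number of links toward the leaves
    "bgp_neighbors": 4,     # expected established BGP sessions
    "route_count_min": 100  # minimum number of routes we expect to see
}

def check_spine(observed, model=ideal_spine):
    """Return a list of human-readable deviations from the model."""
    problems = []
    if observed["uplinks"] != model["uplinks"]:
        problems.append("uplink count %d != %d" % (observed["uplinks"], model["uplinks"]))
    if observed["bgp_neighbors"] != model["bgp_neighbors"]:
        problems.append("bgp neighbors %d != %d" % (observed["bgp_neighbors"], model["bgp_neighbors"]))
    if observed["route_count"] < model["route_count_min"]:
        problems.append("route count %d below %d" % (observed["route_count"], model["route_count_min"]))
    return problems

# Example usage with hypothetical observed values:
print(check_spine({"uplinks": 4, "bgp_neighbors": 3, "route_count": 250}))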
Cisco API and ACI

Cisco Systems, the 800-pound gorilla in the networking space, has not missed the trend of network automation. The problem has always been the confusion surrounding Cisco's various product lines and levels of technology support. With product lines spanning routers, switches, firewalls, servers (unified computing), wireless, collaboration software and hardware, and analytics software, to name a few, it is hard to know where to start. Since this book focuses on Python and networking, we will scope this section to the main networking products. In particular, we will cover the following:
- Nexus product automation with NX-API
- Cisco NETCONF and YANG examples
- Cisco Application Centric Infrastructure for the datacenter
- Cisco Application Centric Infrastructure for the enterprise

For the NX-API and NETCONF examples here, we can either use the Cisco DevNet always-on lab devices or run Cisco VIRL locally. Since ACI is a separate product, licensed on top of the physical switches, for the following ACI examples I would recommend using the DevNet labs to get an understanding of the tools. Unless, of course, you are one of the lucky ones who have a private ACI lab that you can use.

Cisco NX-API

Nexus is Cisco's product line of datacenter switches. NX-API (http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/programmability/guide/b_Cisco_Nexus_9000_Series_NX-OS_Programmability_Guide/b_Cisco_Nexus_9000_Series_NX-OS_Programmability_Guide_chapter_011.html) allows the engineer to interact with the switch from outside the device via a variety of transports, including SSH, HTTP, and HTTPS.

Installation and preparation

Here are the Ubuntu packages that we will install; you may already have some of them, such as the Python development files, pip, and Git:

$ sudo apt-get install -y python3-dev libxml2-dev libxslt1-dev libffi-dev libssl-dev zlib1g-dev python3-pip git python3-requests

If you are using Python 2:

$ sudo apt-get install -y python-dev libxml2-dev libxslt1-dev libffi-dev libssl-dev zlib1g-dev python-pip git python-requests

The ncclient (https://github.com/ncclient/ncclient) library is a Python library for NETCONF clients; we will install from the GitHub repository to get the latest version:

$ git clone https://github.com/ncclient/ncclient
$ cd ncclient/
$ sudo python3 setup.py install
$ sudo python setup.py install

NX-API on Nexus devices is off by default, so we will need to turn it on. We can either use the user already created or create a new user for the NETCONF procedures:

feature nxapi
username cisco password 5 $1$Nk7ZkwH0$fyiRmMMfIheqE3BqvcL0C1 role network-operator
username cisco role network-admin
username cisco passphrase lifetime 99999 warntime 14 gracetime 3

For our lab, we will turn on both HTTP and the sandbox configuration; they should be turned off in production:

nx-osv-2(config)# nxapi http port 80
nx-osv-2(config)# nxapi sandbox

We are now ready to look at our first NX-API example.

NX-API examples

Since we have turned on the sandbox, we can launch a web browser and take a look at the various message formats, requests, and responses based on the CLI commands that we are already familiar with. In the following example, I selected JSON-RPC and the CLI command type for the command show version:

The sandbox comes in handy if you are unsure about the supportability of a message format, or if you have questions about the field key for the value you want to retrieve in your code.
In our first example, we are just going to connect to the Nexus device and print out the capabilities exchanged when the connection was first made:

#!/usr/bin/env python3
from ncclient import manager

conn = manager.connect(
        host='172.16.1.90',
        port=22,
        username='cisco',
        password='cisco',
        hostkey_verify=False,
        device_params={'name': 'nexus'},
        look_for_keys=False)

for value in conn.server_capabilities:
    print(value)

conn.close_session()

The connection parameters of host, port, username, and password are pretty self-explanatory. The device parameter specifies the kind of device the client is connecting to; we will see the same differentiation in the Juniper NETCONF sections. hostkey_verify bypasses the known_hosts requirement for SSH, while the look_for_keys option disables key authentication and uses the username and password for authentication instead. The output shows the XML and NETCONF features supported by this version of NX-OS:

$ python3 cisco_nxapi_1.py
urn:ietf:params:xml:ns:netconf:base:1.0
urn:ietf:params:netconf:base:1.0

Using ncclient and NETCONF over SSH is great because it gets us closer to the native implementation and syntax. We will use the library more later on. For NX-API, I personally feel that it is easier to deal with HTTPS and JSON-RPC. In the earlier screenshot of the NX-API Developer Sandbox, you may have noticed that in the Request box there is a box labeled Python. If you click on it, you get an automatically converted Python script based on the requests library.

Requests is a very popular, self-proclaimed "HTTP for humans" library used by companies like Amazon, Google, and the NSA, among others. You can find more information about it on the official site (http://docs.python-requests.org/en/master/). For the show version example, the following Python script is automatically generated for you. I am pasting in the output without any modification:

"""
NX-API-BOT
"""
import requests
import json

"""
Modify these please
"""
url='http://YOURIP/ins'
switchuser='USERID'
switchpassword='PASSWORD'

myheaders={'content-type':'application/json-rpc'}
payload=[
  {
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {
      "cmd": "show version",
      "version": 1.2
    },
    "id": 1
  }
]
response = requests.post(url,data=json.dumps(payload), headers=myheaders,auth=(switchuser,switchpassword)).json()

In the cisco_nxapi_2.py file, you will see that I have only modified the URL, username, and password of the preceding file, and parsed the output to include only the software version. Here is the output:

$ python3 cisco_nxapi_2.py
7.2(0)D1(1) [build 7.2(0)ZD(0.120)]

The best part about using this method is that the same syntax works with both configuration commands and show commands. This is illustrated in the cisco_nxapi_3.py file. For multi-line configuration, you can use the id field to specify the order of operations. In cisco_nxapi_4.py, the following payload was listed for changing the description of interface Ethernet 2/12 in the interface configuration mode (a sketch of sending such a payload with requests is shown after the listing):

{
  "jsonrpc": "2.0",
  "method": "cli",
  "params": {
    "cmd": "interface ethernet 2/12",
    "version": 1.2
  },
  "id": 1
},
{
  "jsonrpc": "2.0",
  "method": "cli",
  "params": {
    "cmd": "description foo-bar",
    "version": 1.2
  },
  "id": 2
},
{
  "jsonrpc": "2.0",
  "method": "cli",
  "params": {
    "cmd": "end",
    "version": 1.2
  },
  "id": 3
},
{
  "jsonrpc": "2.0",
  "method": "cli",
  "params": {
    "cmd": "copy run start",
    "version": 1.2
  },
  "id": 4
}
]

In the next section, we will look at examples of Cisco NETCONF and the YANG model.
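As promised above, here is a minimal sketch of posting a multi-command payload of that shape with the requests library. The URL and credentials are placeholders and error handling is omitted; treat this as an illustration of the pattern rather than the book's exact cisco_nxapi_4.py file:

import json
import requests

url = 'http://YOURIP/ins'                      # placeholder NX-API endpoint
auth = ('USERID', 'PASSWORD')                  # placeholder credentials
headers = {'content-type': 'application/json-rpc'}

def cli_cmd(cmd, cmd_id):
    # Build one JSON-RPC entry; the id field preserves the order of operations.
    return {"jsonrpc": "2.0", "method": "cli",
            "params": {"cmd": cmd, "version": 1.2}, "id": cmd_id}

payload = [cli_cmd(c, i + 1) for i, c in enumerate([
    "interface ethernet 2/12",
    "description foo-bar",
    "end",
    "copy run start",
])]

response = requests.post(url, data=json.dumps(payload),
                         headers=headers, auth=auth).json()
print(response)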
Cisco and YANG model

Earlier in the article, we looked at the possibility of expressing the network using the data modeling language YANG. Let us look into it a little bit more. First off, we should know that YANG only defines the type of data sent over the NETCONF protocol, and that NETCONF exists as a standalone protocol, as we saw in the NX-API section. YANG being relatively new, its support is spotty across vendors and product lines. For example, if we run the same capability exchange script we saw earlier against a Cisco 1000v running IOS-XE, this is what we would see:

urn:cisco:params:xml:ns:yang:cisco-virtual-service?module=cisco-virtual-service&revision=2015-04-09
http://tail-f.com/ns/mibs/SNMP-NOTIFICATION-MIB/200210140000Z?module=SNMP-NOTIFICATION-MIB&revision=2002-10-14
urn:ietf:params:xml:ns:yang:iana-crypt-hash?module=iana-crypt-hash&revision=2014-04-04&features=crypt-hash-sha-512,crypt-hash-sha-256,crypt-hash-md5
urn:ietf:params:xml:ns:yang:smiv2:TUNNEL-MIB?module=TUNNEL-MIB&revision=2005-05-16
urn:ietf:params:xml:ns:yang:smiv2:CISCO-IP-URPF-MIB?module=CISCO-IP-URPF-MIB&revision=2011-12-29
urn:ietf:params:xml:ns:yang:smiv2:ENTITY-STATE-MIB?module=ENTITY-STATE-MIB&revision=2005-11-22
urn:ietf:params:xml:ns:yang:smiv2:IANAifType-MIB?module=IANAifType-MIB&revision=2006-03-31
<omitted>

Compared to the output that we saw from NX-OS, IOS-XE clearly understands more YANG models than NX-OS. Industry-wide network data modeling is clearly something that would benefit network automation. However, given the uneven support across vendors and products, it is not, in my opinion, mature enough to be used across your production network. For the book I have included a script called cisco_yang_1.py that shows how to parse NETCONF XML output with the YANG filter urn:ietf:params:xml:ns:yang:ietf-interfaces as a starting point to see the existing tag overlay. You can check the latest vendor support on the YANG GitHub project page (https://github.com/YangModels/yang/tree/master/vendor).

Cisco ACI

Cisco Application Centric Infrastructure (ACI) is meant to provide a centralized approach to all of the network components. In the datacenter context, it means the centralized controller is aware of and manages the spine, leaf, and top-of-rack switches, as well as all the network service functions. This can be done through a GUI, CLI, or API. Some might argue that ACI is Cisco's answer to the broader software-defined networking movement. One of the somewhat confusing points about ACI is the difference between ACI and APIC-EM. In short, ACI focuses on datacenter operations while APIC-EM focuses on enterprise modules. Both offer a centralized view and control of the network components, but each has its own focus and share of tools. For example, it is rare to see any major datacenter deploy a customer-facing wireless infrastructure, but a wireless network is a crucial part of enterprises today. Another example would be the different approaches to network security. While security is important in any network, in the datacenter environment a lot of security policy is pushed to the edge node on the server for scalability, whereas in enterprises security policy is somewhat shared between the network devices and the servers. Unlike NETCONF RPC, the ACI API follows the REST model and uses HTTP verbs (GET, POST, PUT, DELETE) to specify the intended operation.
We can look at the cisco_apic_em_1.py file, which is a modified version of the Cisco sample code lab2-1-get-network-device-list.py (https://github.com/CiscoDevNet/apicem-1.3-LL-sample-codes/blob/master/basic-labs/lab2-1-get-network-device-list.py). The abbreviated sections, without comments and spaces, are listed here. The first function, getTicket(), issues an HTTPS POST to the controller on the path /api/v1/ticket with the username and password embedded in the body, then parses the returned response for a ticket that is valid for a limited time:

def getTicket():
    url = "https://" + controller + "/api/v1/ticket"
    payload = {"username":"username","password":"password"}
    header = {"content-type": "application/json"}
    response= requests.post(url,data=json.dumps(payload), headers=header, verify=False)
    r_json=response.json()
    ticket = r_json["response"]["serviceTicket"]
    return ticket

The second function then calls another path, /api/v1/network-device, with the newly acquired ticket embedded in the header, and parses the results:

    url = "https://" + controller + "/api/v1/network-device"
    header = {"content-type": "application/json", "X-Auth-Token":ticket}

The output displays both the raw JSON response and a parsed table. A partial output when executed against a DevNet lab controller is shown here:

Network Devices =
{
  "version": "1.0",
  "response": [
    {
      "reachabilityStatus": "Unreachable",
      "id": "8dbd8068-1091-4cde-8cf5-d1b58dc5c9c7",
      "platformId": "WS-C2960C-8PC-L",
      <omitted>
      "lineCardId": null,
      "family": "Wireless Controller",
      "interfaceCount": "12",
      "upTime": "497 days, 2:27:52.95"
    }
  ]
}
8dbd8068-1091-4cde-8cf5-d1b58dc5c9c7 Cisco Catalyst 2960-C Series Switches
cd6d9b24-839b-4d58-adfe-3fdf781e1782 Cisco 3500I Series Unified Access Points
<omitted>
55450140-de19-47b5-ae80-bfd741b23fd9 Cisco 4400 Series Integrated Services Routers
ae19cd21-1b26-4f58-8ccd-d265deabb6c3 Cisco 5500 Series Wireless LAN Controllers

As you can see, we only queried a single controller device, but we were able to get a high-level view of all the network devices that the controller is aware of. The downside is, of course, that the controller only supports Cisco devices at this time.

Summary

In this article, we looked at various ways to communicate with and manage network devices from Cisco.

Resources for Article:

Further resources on this subject:
Network Exploitation and Monitoring [article]
Introduction to Web Experience Factory [article]
Web app penetration testing in Kali [article]

Using Android Wear 2.0

Raka Mahesa
04 Apr 2017
6 min read
As of this writing, Android Wear 2.0 was unveiled by Google a few weeks ago. Like most second iterations of software, this latest version of Android Wear adds various new features that make the platform easier to use and much more functional for its users. But what about its developers? Is there any critical change that developers should know about for the platform? Let's find out together.

One of the biggest additions to Android Wear 2.0 is the ability of apps to run on the watch without needing a companion app on the phone. Devices running Android Wear 2.0 will have their own Google Play Store app, as well as reliable internet from a Wi-Fi or cellular connection, allowing apps to be installed and operated without requiring a phone. This feature, known as "Standalone Apps," is a big deal for developers. While it's not really complicated to implement, we must now reevaluate how to distribute our apps and whether our apps should work independently or be embedded in a phone app like before.

So let's get into the meat of things. Right now Android Wear 2.0 supports the following types of apps:
- Standalone apps that do not require a phone app.
- Standalone apps that require a phone app.
- Non-standalone apps that are embedded in a phone app.

In this case, "standalone apps" means apps that are not included in a phone app and can be downloaded separately from the Play Store on the watch; as the list shows, a standalone app may still require a phone app to function. To distribute a standalone watch app, all we have to do is designate the app as standalone and upload the APK to the Google Play Developer Console. To designate an app as standalone, simply add the following metadata to the <application> section of the app manifest file:

<meta-data android:name="com.google.android.wearable.standalone" android:value="true" />

Do note that any app that has this metadata will be available to download from the watch's Play Store, even if the value is set to false. Setting the value to false simply limits the app to smart devices that have been paired with phones that have the Play Store installed. One more thing about standalone apps: they are not supported on Android Wear versions before 2.0. So, to support all versions of Android Wear, we have to provide both standalone and non-standalone APKs. Both of them need the same package name and must be uploaded under the same app, with the standalone APK having a higher versionCode value so the Play Store will install that version when requested by a compatible device.

All right, with that settled, let's move on to another big addition introduced by Android Wear 2.0: the Complication API. In case you're not familiar with the world of watchmaking: complications are areas of a watch that show data other than the current time. On traditional watches, they can be a stopwatch or the current date. On smartwatches, they can be a battery indicator or a display of the number of unread emails. In short, complications are Android widgets for smartwatches. Unlike widgets on Android phones, however, the user interface that displays a complication's data is not made by the same developer whose data is displayed. Android Wear 2.0 gives the responsibility of displaying the complication data to the watch face developer, so an app developer has no say in how his app's data will look on the watch face.
To accommodate that complication system, Android Wear provides a set of complication types that all watch faces have to be able to display, which are:
- Icon type
- Short Text display
- Long Text display
- Small Image type
- Large Image type
- Ranged Value type (a value with minimum and maximum limits, like battery life)

Some complication types may have additional data that they can show. For example, the Short Text complication may also show an icon if the data provides one, and the Long Text complication can show a title text if that data was provided.

Okay, so now we know how the data is going to be displayed to the user. How then do we provide said data to the watch face? To do that, first we have to create a new Service class that inherits from the ComplicationProviderService class. Then, in that class, we override the onComplicationUpdate() function and provide the ComplicationManager object with data from our app, like the following:

@Override
public void onComplicationUpdate(int complicationID, int type, ComplicationManager manager) {
    if (type == SHORT_TEXT) {
        ComplicationData data = new ComplicationData.Builder(SHORT_TEXT)
            .setShortText(dataShortText)
            .setIcon(appIconResource)
            .setTapAction(onTapIntent)
            .build();
        manager.updateComplicationData(complicationID, data);
    } else if (type == LONG_TEXT) {
        ComplicationData data = new ComplicationData.Builder(LONG_TEXT)
            .setLongTitle(dataTitle)
            .setLongText(dataLongText)
            .setIcon(appIconResource)
            .setTapAction(onTapIntent)
            .build();
        manager.updateComplicationData(complicationID, data);
    }
}

As can be seen from the code above, we use ComplicationData.Builder to provide the correct data based on the requested complication type. You may notice the setTapAction() function and wonder what it is for. Well, you may want the user seeing your data to be able to tap the complication and trigger an action. Using setTapAction(), you can provide an Intent that will be executed when the complication is tapped. One last thing to do is to register the service in the project manifest with a filter for the android.support.wearable.complications.ACTION_COMPLICATION_UPDATE_REQUEST intent, like the following:

<service android:name=".ComplicationProviderService" android:label="ServiceLabel" >
    <intent-filter>
        <action android:name="android.support.wearable.complications.ACTION_COMPLICATION_UPDATE_REQUEST" />
    </intent-filter>
</service>

And that's it for the biggest changes in Android Wear 2.0! For other additions and changes in this version of Android Wear, such as the new CurvedLayout, the new notification display, the Rotary Input API, and more, you can read the official documentation.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/) who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

How to use XmlHttpRequests to Send POST to Server

Antonio Cucciniello
03 Apr 2017
5 min read
So, you need to send some bits of information from your browser to the server in order to complete some processing. Maybe you need the information to search for something in a database, or just to update something on your server. Today I am going to show you how to send some data to your server from the client through a POST request using XmlHttpRequest. First, we need to set up our environment! Set up The first thing to make sure you have is Node and NPM installed. Create a new directory for your project; here we will call it xhr-post: $ mkdir xhr-post $ cd xhr-post Then we would like to install express.js and body-parser: $ npm install express $ npm install body-parser Express makes it easy for us to handle HTTP requests, and body-parser allows us to parse incoming request bodies. Let's create two files: one for our server called server.js and one for our front end code called index.html. Then initialize your repo with a package.json file by doing: $ npm init Client Now it’s time to start with some front end work. Open and edit your index.html file with: <!doctype html> <html> <h1> XHR POST to Server </h1> <body> <input type='text' id='num' /> <script> function send () { var number = { value: document.getElementById('num').value } var xhr = new window.XMLHttpRequest() xhr.open('POST', '/num', true) xhr.setRequestHeader('Content-Type', 'application/json;charset=UTF-8') xhr.send(JSON.stringify(number)) } </script> <button type='button' value='Send' name='Send' onclick='send()' > Send </button> </body> </html> This file simply has a input field to allow users to enter some information, and a button to then send the information entered to the server. What we should focus on here is the button's onclick method send(). This is the function that is called once the button is clicked. We create a JSON object to hold the value from the text field. Then we create a new instance of an XMLHttpRequest with xhr. We call xhr.open() to initialize our request by giving it a request method (POST), the url we would like to open the request with ('/num') and determine if it should be asynchronous or not (set true for asynchronous). We then call xhr.setRequestHeader(). This sets the value of the HTTP request to json and UTF-8. As a last step, we send the request with xhr.send(). We pass the value of the text box and stringify it to send the data as raw text to our server, where it can be manipulated. Server Here our server is supposed to handle the POST request and we are simply going to log the request received from the client. const express = require('express') const app = express() const path = require('path') var bodyParser = require('body-parser') var port = 3000 app.listen(port, function () { console.log('We are listening on port ' + port) }) app.use(bodyParser.urlencoded({extended: false})) app.use(bodyParser.json()) app.get('*', function (req, res) { res.sendFile(path.join(__dirname, '/index.html')) }) app.post('/num', function (req, res) { var num = req.body.value console.log(num) return res.end('done') }) At the top, we declare our variables, obtaining an instance of express, path and body-parser. Then we set our server to listen on port 3000. Next, we use bodyParser object to decide what kind of information we would like to parse, we set it to json because we sent a json object from our client, if you recall the last section. 
This is done with: app.use(bodyParser.json()) Then we serve our html file in order to see our front end created in the last section with: app.get('*', function (req, res) { res.sendFile(path.join(__dirname, '/index.html')) }) The last part of server.js is where we handle the POST request from the client. We access the value sent over by checking for corresponding property on the body object which is part of the request object. Then, as a last step for us to verify we have the correct information, we will log the data received to the console and send a response to the client. Test Let's test what we have done. In the project directory, we can run: $ node server.js Open your web browser and go to the url localhost:3000. This is what your web page should look like: This is what your output to the console should look like if you enter a 5 in the input field: Conclusion You are all done! You now have a web page that sends some JSON data to your server using XmlHttpRequest! Here is a summary of what we went over: Created a front end with an input field and button Created a function for our button to send an XmlHttpRequest Created our server to listen on port 3000 Served our html file Handled our POST request at route '/num' Logged the value to our console If you enjoyed this post, share it on twitter. Check out the code for this tutorial on GitHub. Possible Resources Check out my GitHub View my personal blog Information on XmlHtttpRequest GitHub pages for: express body-parser About the author Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.Js). He is from New Jersey, USA. His most recent project called Edit Docs is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. To contact Antonio, e-mail him at Antonio.cucciniello16@gmail.com, follow him on twitter at @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.

How to build a dropdown menu using Can.js

Liz Tom
17 Mar 2017
7 min read
This post describes how to build a dropdown menu using Can.js. In this example, we will build a dropdown menu of names. If you'd like to see the complete example of what you'll be building, you can check it out here.

Setup

The very first thing you will need to do is to import Can.js and jQuery:

<script src="https://code.jquery.com/jquery-2.2.4.js"></script>
<script src="https://rawgit.com/canjs/canjs/v3.0.0-pre.6/dist/global/can.all.js"></script>

Our First Model

To make a model, you use can.DefineMap. If you're following along using CodePen or JSBin, type the following piece of code in the js tab:

var Person = can.DefineMap.extend({
  id: "string",
  name: "string",
});

Here, we have a model named Person that defines the properties id and name as string types. You can read about the different types that Can.js has here: https://canjs.com/doc/can-map-define._type.html. Can.js 3.0 allows us to declare types in two different ways. We could have also written the following piece of code:

var Person = can.DefineMap.extend({
  id: {
    type: "string",
  },
  name: {
    type: "string",
  },
});

I tend to use the second syntax only when I have other settings I need to define on a particular property. The shorthand of the first way makes things a bit easier.

Getting it on the Page

Since we're building a dropdown, we will most likely want the user to be able to see it. We're going to use can.stache to help us with this. In our HTML tab, write the following lines of code:

<script type='text/stache' id='person-template'>
  <h1>Person Template</h1>
  <input placeholder="{{person.test}}"/>
</script>

The {{person.test}} is there so you can see if you have it working. We'll add a test property to our model:

var Person = can.DefineMap.extend({
  id: "string",
  name: "string",
  test: {
    value: "It's working!"
  }
});

Now, we need to create a View Model. We're going to use DefineMap again. Add the following to your js file:

var PersonVM = can.DefineMap.extend({
  person: {Value: Person},
});

You might notice that I'm using Value with a capital "V". You have the option of using both value and Value. The difference is that Value causes the new operator to be used, so the property defaults to a new Person instance. Now, to use this as our View Model, you'll need to add the following to your js tab:

can.Component.extend({
  tag: 'person',
  view: can.stache.from('person-template'),
  ViewModel: PersonVM
});

var vm = new PersonVM();
var template = can.stache.from('person-template')
var frag = template(vm);
document.body.appendChild(frag);

The can.stache.from('person-template') call uses the ID from our script tag. The tag value person is there so that we can use this component elsewhere, like <person>. If you check out the preview tab, you should see a header followed by an input box with the placeholder text we set. If you change the value of our test property, you should see the live binding update.

Fixtures

Can.js allows us to easily add fixtures so we can test our UI without needing the API set up. This is great for development, as the UI and the API don't always sync up in terms of development progress. We start off by setting up our set Algebra. Put the following at the top of your js tab:

var personAlgebra = new set.Algebra(
  set.props.id('id'),
  set.props.sort('sort')
);

var peopleStore = can.fixture.store([
  { name: "Mary", id: 5 },
  { name: "John", id: 6 },
  { name: "Peter", id: 7 }
], personAlgebra);

The set.Algebra helps us with a few things. set.props.id allows us to change the ID property; a very common example is that Mongo uses _id.
We can easily change the ID property to map responses from the server with _id to our can model's id. In our fixture, we are faking some data that might already be stored in our database. Here, we have three people that have already been added. We need to add in a fixture route to catch our requests so we can send back our fixture data instead of trying to make a call to our API: can.fixture("/api/people/{id}", peopleStore); Here, we're telling can to use the people store whenever we have any requests using /api/people/{id}. Next, we will need to tell can.js how to use everything we just set up. We're going to use can-connect for that. Add this to your js tab: Person.connection = can.connect.superMap({ Map: Person, List: Person.List, url: "/api/people", name: "person", algebra: personAlgebra }); Does it work? Let's see if it's working. We'll write a function in our viewModel that allows us to save. Can-connect comes with some helper functions that allow us to do basic CRUD functionality. Keeping this in mind, update your Person View Model as follows: var PersonCreateVM = can.DefineMap.extend({ person: {Value: Person}, createPerson: function(){ this.person.save().then(function(){ this.person = new Person(); }.bind(this)); } }); Now, we have a createPerson function that saves a new person to the database and updates the person to be our new person. In order to use this, we can update our input tag to the following: <input placeholder="Name" {($value)}="person.name" ($enter)="createPerson()"/> This two-way binds the value of the input to our viewModel. Now, when we update the input, person.name also gets updated, and when we update person.name, the input updates as well. ($enter)=createPerson() will call createPerson whenever we press Enter. Populating the Select Now that we can create people and save them, we should be able to easily create a list of names. Since we may want to use this list of names at many places in our app, we're making the list its own component. Add this to the HTML tab. First, we will create a view model for our People. We're going to end up passing our people into the component. This way, we can use different people, depending on where this dropdown is being used. var PeopleListVM = can.DefineMap.extend({ peoplePromise: Promise, }); can.Component.extend({ tag: "people-list", view: can.stache.from("people-list-template"), ViewModel: PeopleListVM }); Then update your HTML with a template. Since peoplePromise is a Promise, we want to make sure it is resolved before we populate the select menu. We also have the ability to check isRejected, and isPending. value gives us result of the promise. We also use {{#each}} to cycle through each item in a list. <script type='text/stache' id='people-list-template'> {{#if peoplePromise.isResolved}} <select> {{#each peoplePromise.value}} <option>{{name}}</option> {{/each}} </select> {{/if}} </script> Building Blocks We can use these components, such as building blocks, in various parts of our app. If we create an app view model, we can put people there. We are using a getter in this case to get back a list of people. .getList({}) comes with DefineMap. This will return a promise. var AppVM = can.DefineMap.extend({ people: { get: function(){ return Person.getList({}); } } }); We will update our HTML to use these components. Now, we're using the tags we set up earlier. We can use the following to pass people into our people-list component: <people-list {people-promise}="people"/>. 
We can't use camel case in our stache file, so we will use hypens. can.js knows how to convert this into camel case for us. <script type='text/stache' id='names-template'> <div id="nameapp"> <h1>Names</h1> <person-create/> <people-list {people-promise}="people"/> </div> </script> Update the vm to use the app view model instead of the people view model. var vm = new AppVM(); var template = can.stache.from("app-template") var frag = template(vm); document.body.appendChild(frag); And that's it! You should have a drop-down menu that updates as you add more people. About the author Liz Tom is a developer at Bitovi in Portland, OR, focused on JavaScript. When she’s not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.

C compiler, Device Drivers and Useful Developing Techniques

Packt
17 Mar 2017
22 min read
In this article by Rodolfo Giometti, author of the book GNU/Linux Rapid Embedded Programming, we're going to focus our attention on the C compiler (with its counterpart, the cross-compiler), on when we have to (or can choose to) use native or cross-compilation, and on the differences between them.

(For more resources related to this topic, see here.)

Then we'll see some kernel topics used later in this article (configuration, recompilation, and the device tree) and we'll look a bit deeper at device drivers: how they can be compiled and how they can be packaged into a kernel module (that is, kernel code that can be loaded at runtime). We'll present different kinds of computer peripherals and, for each of them, we'll try to explain how the corresponding device driver works, starting from the compilation stage through configuration to the final usage. As an example, we'll try to implement a very simple driver in order to give the reader some interesting points of view and some simple advice about kernel programming (which is not covered by this article!). We're going to present the root filesystem's internals and we'll spend a few words on a particular root filesystem that can be very useful during the early development stages: the Network File System. As a final step, we'll propose the usage of an emulator in order to execute a complete target machine's Debian distribution on a host PC.

This article is still part of the introductory material; experienced developers who already know these topics well may skip it, but the author's suggestion remains the same: read the article anyway in order to discover which development tools will be used later and, maybe, some new techniques to manage your programs.

The C compiler

The C compiler is a program that translates the C language into a binary format that the CPU can understand and execute. This is the most basic way (and the most powerful one) to develop programs on a GNU/Linux system. Despite this fact, most developers prefer using other, higher-level languages rather than C, because the C language has no garbage collection, no object-oriented programming, and other limitations, giving up part of the execution speed that a C program offers. But if we have to recompile the kernel (the Linux kernel is written in C, plus a little assembler), develop a device driver, or write high-performance applications, then the C language is a must-have.

We can have a compiler and a cross-compiler, and until now we've already used the cross-compiler several times to recompile the kernel and the bootloaders; however, we can decide to use a native compiler too. In fact, native compilation may be easier but, in most cases, it is very time consuming; that's why it's really important to know the pros and cons.

Programs for embedded systems are traditionally written and compiled using a cross-compiler for that architecture on a host PC. That is, we use a compiler that can generate code for a foreign machine architecture, meaning a different CPU instruction set from the compiler host's one.

Native & foreign machine architecture

For example, the developer kits shown in this article are ARM machines while (most probably) our host machine is x86 (that is, a normal PC), so if we try to compile a C program on our host machine, the generated code cannot be used on an ARM machine, and vice versa. Let's verify it!
Here the classic Hello World program below: #include <stdio.h> int main() { printf("Hello Worldn"); return 0; } Now we compile it on my host machine using the following command: $ make CFLAGS="-Wall -O2" helloworld cc -Wall -O2 helloworld.c -o helloworld Careful reader should notice here that we’ve used command make instead of the usual cc. This is a perfectly equivalent way to execute the compiler due the fact, even if without a Makefile, command make already knows how to compile a C program. We can verify that this file is for the x86 (that is the PC) platform by using the file command: $ file helloworld helloworld: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=0f0db5e65e1cd09957ad06a7c1b7771d949dfc84, not stripped Note that the output may vary according to the reader's host machine platform. Now we can just copy the program into one developer kit (for instance the the BeagleBone Black) and try to execute it: root@bbb:~# ./helloworld -bash: ./helloworld: cannot execute binary file As we expected the system refuses to execute code generated for a different architecture! On the other hand, if we use a cross-compiler for this specific CPU architecture the program will run as a charm! Let's verify this by recompiling the code but paying attention to specify that we wish to use the cross-compiler instead. So delete the previously generated x86 executable file (just in case) by using the rm helloworld command and then recompile it using the cross-compiler: $ make CC=arm-linux-gnueabihf-gcc CFLAGS="-Wall -O2" helloworld arm-linux-gnueabihf-gcc -Wall -O2 helloworld.c -o helloworld Note that the cross-compiler's filename has a special meaning: the form is <architecture>-<platform>-<binary-format>-<tool-name>. So the filename arm-linux-gnueabihf-gcc means: ARM architecture, Linux platform, gnueabihf (GNU EABI Hard-Float) binary format and gcc (GNU C Compiler) tool. Now we use the file command again to see if the code is indeed generated for the ARM architecture: $ file helloworld helloworld: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=31251570b8a17803b0e0db01fb394a6394de8d2d, not stripped Now if we transfer the file as before on the BeagleBone Black and try to execute it, we get: root@bbb:~# ./helloworld Hello World! Therefore we see the cross-compiler ensures that the generated code is compatible with the architecture we are executing it on. In reality in order to have a perfectly functional binary image we have to make sure that the library versions, header files (also the headers related to the kernel) and cross compiler options match the target exactly or, at least, they are compatible. In fact we cannot execute cross-compiled code against the glibc on a system having, for example, musl libc (or it can run in a no predictable manner). In this case we have perfectly compatible libraries and compilers but, in general, the embedded developer should perfectly know what he/she is doing. A common trick to avoid compatibility problems is to use static compilation but, in this case, we get huge binary files. Now the question is: when should we use the compiler and when the cross-compiler? We should compile on an embedded system because: We can (see below why). There would be no compatibility issues as all the target libraries will be available. 
In cross-compilation it becomes hell when we need all the libraries (if the project uses any) in the ARM format on the host PC, so we not only have to cross-compile the program but also its dependencies. And if the same versions of the dependencies are not installed on the embedded system's rootfs, then good luck with troubleshooting!
It's easy and quick.

We should cross-compile because:

We are working on a large codebase and we don't want to waste too much time compiling the program on the target, which may take from several minutes to several hours (or may even prove impossible). This reason alone might be strong enough to overpower the other reasons in favor of compiling on the embedded system itself.
PCs nowadays have multiple cores, so the compiler can process more files simultaneously.
We are building a full Linux system from scratch.

In any case, below we will show an example of both native compilation and cross-compilation of a software package, so the reader can understand the differences between them.

Compiling a C program

As a first step, let's see how we can compile a C program. To keep it simple we'll start by compiling a user-space program; then, in the next sections, we're going to compile some kernel-space code.

Knowing how to compile a C program can be useful because it may happen that a specific tool (most probably written in C) is missing from our distribution, or it's present but in an outdated version. In both cases we need to recompile it! To show the differences between a native compilation and a cross-compilation we will explain both methods. However, a word of caution for the reader here: this guide is not exhaustive at all! In fact, the cross-compilation steps may vary according to the software package we are going to cross-compile.

The package we are going to use is the PicoC interpreter. Every Real Programmer(TM) knows the C compiler, which is normally used to translate a C program into machine language, but (maybe) not all of them know that C interpreters exist too! Actually there are many C interpreters, but we focus our attention on PicoC due to the simplicity of cross-compiling it.

As we already know, an interpreter is a program that converts the source code into executable code on the fly and does not need to parse the complete file and generate code all at once. This is quite useful when we need a flexible way to write brief programs to resolve easy tasks. In fact, to fix bugs in the code and/or change the program's behavior, we simply have to change the program source and then re-execute it, without any compilation at all. We just need an editor to change our code!

For instance, if we wish to read some bytes from a file we can do it by using a standard C program, but for this easy task we can also write a script for an interpreter. Which interpreter to choose is up to the developer and, since we are C programmers, the choice is quite obvious. That's why we have decided to use PicoC. Note that the PicoC tool is quite far from being able to interpret all C programs! In fact, this tool implements only a fraction of the features of a standard C compiler; however, it can be used for several common and easy tasks. Please consider PicoC an educational tool and avoid using it in a production environment!

The native compilation

Well, as a first step we need to download the PicoC source code from its repository at http://github.com/zsaleeba/picoc.git onto our embedded system.
This time we decided to use the BeagleBone Black and the command is as follows: root@bbb:~# git clone http://github.com/zsaleeba/picoc.git When finished we can start compiling the PicoC source code by using: root@bbb:~# cd picoc/ root@bbb:~/picoc# make Note that if we get the error below during the compilation we can safely ignore it: /bin/sh: 1: svnversion: not found However during the compilation we get: platform/platform_unix.c:5:31: fatal error: readline/readline.h: No such file or directory #include <readline/readline.h> ^ compilation terminated. <builtin>: recipe for target 'platform/platform_unix.o' failed make: *** [platform/platform_unix.o] Error 1 Bad news, we have got an error! This because the readline library is missing; hence we need to install it to keep this going. In order to discover which package's name holds a specific tool, we can use the following command to discover the package that holds the readline library: root@bbb:~# apt-cache search readline The command output is quite long, but if we carefully look at it we can see the following lines: libreadline5 - GNU readline and history libraries, run-time libraries libreadline5-dbg - GNU readline and history libraries, debugging libraries libreadline-dev - GNU readline and history libraries, development files libreadline6 - GNU readline and history libraries, run-time libraries libreadline6-dbg - GNU readline and history libraries, debugging libraries libreadline6-dev - GNU readline and history libraries, development files This is exactly what we need to know! The required package is named libreadline-dev. In the Debian distribution all libraries packages are prefixed by the lib string while the -dev postfix is used to mark the development version of a library package. Note also that we choose the package libreadline-dev intentionally leaving the system to choose to install version 5 o 6 of the library. The development version of a library package holds all needed files whose allow the developer to compile his/her software to the library itself and/or some documentation about the library functions. For instance, into the development version of the readline library package (that is into the package libreadline6-dev) we can find the header and the object files needed by the compiler. We can see these files using the following command: #root@bbb:~# dpkg -L libreadline6-dev | egrep '.(so|h)' /usr/include/readline/rltypedefs.h /usr/include/readline/readline.h /usr/include/readline/history.h /usr/include/readline/keymaps.h /usr/include/readline/rlconf.h /usr/include/readline/tilde.h /usr/include/readline/rlstdc.h /usr/include/readline/chardefs.h /usr/lib/arm-linux-gnueabihf/libreadline.so /usr/lib/arm-linux-gnueabihf/libhistory.so So let's install it: root@bbb:~# aptitude install libreadline-dev When finished we can relaunch the make command to definitely compile our new C interpreter: root@bbb:~/picoc# make gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o clibrary.o clibrary.c ... gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -o picoc picoc.o table.o lex.o parse.o expression.o heap.o type.o variable.o clibrary.o platform.o include.o debug.o platform/platform_unix.o platform/library_unix.o cstdlib/stdio.o cstdlib/math.o cstdlib/string.o cstdlib/stdlib.o cstdlib/time.o cstdlib/errno.o cstdlib/ctype.o cstdlib/stdbool.o cstdlib/unistd.o -lm -lreadline Well now the tool is successfully compiled as expected! 
To test it we can use again the standard Hello World program above but with a little modification, in fact the main() function is not defined as before! This is due the fact PicoC returns an error if we use the typical function definition. Here the code: #include <stdio.h> int main() { printf("Hello Worldn"); return 0; } Now we can directly execute it (that is without compiling it) by using our new C interpreter: root@bbb:~/picoc# ./picoc helloworld.c Hello World An interesting feature of PicoC is that it can execute C source file like a script, that is we don't need to specify a main() function as C requires and the instructions are executed one by one from the beginning of the file as a normal scripting language does. Just to show it we can use the following script which implements the Hello World program as C-like script (note that the main() function is not defined!): printf("Hello World!n"); return 0; If we put the above code into the file helloworld.picoc we can execute it by using: root@bbb:~/picoc# ./picoc -s helloworld.picoc Hello World! Note that this time we add the -s option argument to the command line in order to instruct the PicoC interpreter that we wish using its scripting behavior. The cross-compilation Now let's try to cross-compile the PicoC interpreter on the host system. However, before continuing, we’ve to point out that this is just an example of a possible cross-compilation useful to expose a quick and dirty way to recompile a program when the native compilation is not possible. As already reported above the cross-compilation works perfectly for the bootloader and the kernel while for user-space application we must ensure that all involved libraries (and header files) used by the cross-compiler are perfectly compatible with the ones present on the target machine otherwise the program may not work at all! In our case everything is perfectly compatible so we can go further. As before we need to download the PicoC's source code by using the same git command as above. Then we have to enter the following command into the newly created directory picoc: $ cd picoc/ $ make CC=arm-linux-gnueabihf-gcc arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o picoc.o picoc.c ... platform/platform_unix.c:5:31: fatal error: readline/readline.h: No such file or directory compilation terminated. <builtin>: recipe for target 'platform/platform_unix.o' failed make: *** [platform/platform_unix.o] Error 1 We specify the CC=arm-linux-gnueabihf-gcc commad line option to force the cross-compilation. However, as already stated before, the cross-compilation commands may vary according to the compilation method used by the single software package. As before the system returns a linking error due to the fact that thereadline library is missing, however, this time, we cannot install it as before since we need the ARM version (specifically the armhf version) of this library and my host system is a normal PC! Actually a way to install a foreign package into a Debian/Ubuntu distribution exists, but it's not a trivial task nor it's an argument. A curious reader may take a look at the Debian/Ubuntu Multiarch at https://help.ubuntu.com/community/MultiArch. Now we have to resolve this issue and we have two possibilities: We can try to find a way to install the missing package, or We can try to find a way to continue the compilation without it. 
The former method is quite complex since the readline library has in turn other dependencies and we may take a lot of time trying to compile them all, so let's try to use the latter option. Knowing that the readline library is just used to implement powerful interactive tools (such as recalling a previous command line to re-edit it, etc.) and since we are not interested in the interactive usage of this interpreter, we can hope to avoid using it. So, looking carefully into the code we see that the define USE_READLINE exists and changing the code as shown below should resolve the issue allowing us to compile the tool without the readline support: $ git diff diff --git a/Makefile b/Makefile index 6e01a17..c24d09d 100644 --- a/Makefile +++ b/Makefile @@ -1,6 +1,6 @@ CC=gcc CFLAGS=-Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -LIBS=-lm -lreadline +LIBS=-lm TARGET = picoc SRCS = picoc.c table.c lex.c parse.c expression.c heap.c type.c diff --git a/platform.h b/platform.h index 2d7c8eb..c0b3a9a 100644 --- a/platform.h +++ b/platform.h @@ -49,7 +49,6 @@ # ifndef NO_FP # include <math.h> # define PICOC_MATH_LIBRARY -# define USE_READLINE # undef BIG_ENDIAN # if defined(__powerpc__) || defined(__hppa__) || defined(__sparc__) # define BIG_ENDIAN The above output is in the unified context diff format; so the code above means that into the file Makefile the option -lreadline must be removed from variable LIBS and that into the file platform.h the define USE_READLINE must be commented out. After all the changes are in place we can try to recompile the package with the same command as before: $ make CC=arm-linux-gnueabihf-gcc arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o table.o table.c ... arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -o picoc picoc.o table.o lex.o parse.o expression.o heap.o type.o variable.o clibrary.o platform.o include.o debug.o platform/platform_unix.o platform/library_unix.o cstdlib/stdio.o cstdlib/math.o cstdlib/string.o cstdlib/stdlib.o cstdlib/time.o cstdlib/errno.o cstdlib/ctype.o cstdlib/stdbool.o cstdlib/unistd.o -lm Great! We did it! Now, just to verify that everything is working correctly, we can simply copy the picoc file into our BeagleBone Black and test it as before. Compiling a kernel module As a special example of cross-compilation we'll take a look at a very simple code which implement a dummy module for the Linux kernel (the code does nothing but printing some messages on the console) and we’ll try to cross-compile it. Let's consider this following kernel C code of the dummy module: #include <linux/module.h> #include <linux/init.h> /* This is the function executed during the module loading */ static int dummy_module_init(void) { printk("dummy_module loaded!n"); return 0; } /* This is the function executed during the module unloading */ static void dummy_module_exit(void) { printk("dummy_module unloaded!n"); return; } module_init(dummy_module_init); module_exit(dummy_module_exit); MODULE_AUTHOR("Rodolfo Giometti <giometti@hce-engineering.com>"); MODULE_LICENSE("GPL"); MODULE_VERSION("1.0.0"); Apart some defines relative to the kernel tree the file holds two main functions  dummy_module_init() and  dummy_module_exit() and some special definitions, in particular the module_init() and module_exit(), that address the first two functions as the entry and exit functions of the current module (that is the function which are called at module loading and unloading). 
Then consider the following Makefile:

ifndef KERNEL_DIR
$(error KERNEL_DIR must be set in the command line)
endif
PWD := $(shell pwd)
CROSS_COMPILE = arm-linux-gnueabihf-

# This specifies the kernel module to be compiled
obj-m += module.o

# The default action
all: modules

# The main tasks
modules clean:
	make -C $(KERNEL_DIR) ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- SUBDIRS=$(PWD) $@

OK, now to cross-compile the dummy module on the host PC we can use the following command:

$ make KERNEL_DIR=~/A5D3/armv7_devel/KERNEL/
make -C /home/giometti/A5D3/armv7_devel/KERNEL/ SUBDIRS=/home/giometti/github/chapter_03/module modules
make[1]: Entering directory '/home/giometti/A5D3/armv7_devel/KERNEL'
  CC [M]  /home/giometti/github/chapter_03/module/dummy.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      /home/giometti/github/chapter_03/module/dummy.mod.o
  LD [M]  /home/giometti/github/chapter_03/module/dummy.ko
make[1]: Leaving directory '/home/giometti/A5D3/armv7_devel/KERNEL'

It's important to note that when a device driver is released as a separate package with a Makefile compatible with the Linux one, we can compile it natively too! However, even in this case, we need to install a kernel source tree on the target machine. Moreover, the sources must be configured in the same manner as the running kernel, or the resulting driver will not work at all: a kernel module will only load and run with the kernel it was compiled against.

The cross-compilation result is stored in the file dummy.ko; in fact we have:

$ file dummy.ko
dummy.ko: ELF 32-bit LSB relocatable, ARM, EABI5 version 1 (SYSV), BuildID[sha1]=ecfcbb04aae1a5dbc66318479ab9a33fcc2b5dc4, not stripped

The kernel module has been compiled for the SAMA5D3 Xplained but, of course, it can be cross-compiled for the other developer kits in a similar manner. So let's copy our new module to the SAMA5D3 Xplained by using the scp command over the USB Ethernet connection:

$ scp dummy.ko root@192.168.8.2:
root@192.168.8.2's password:
dummy.ko                 100% 3228     3.2KB/s   00:00

Now, on the SAMA5D3 Xplained, we can use the modinfo command to get some information about the kernel module:

root@a5d3:~# modinfo dummy.ko
filename:       /root/dummy.ko
version:        1.0.0
license:        GPL
author:         Rodolfo Giometti <giometti@hce-engineering.com>
srcversion:     1B0D8DE7CF5182FAF437083
depends:
vermagic:       4.4.6-sama5-armv7-r5 mod_unload modversions ARMv7 thumb2 p2v8

Then, to load and unload it into and from the kernel, we can use the insmod and rmmod commands as follows:

root@a5d3:~# insmod dummy.ko
[ 3151.090000] dummy_module loaded!
root@a5d3:~# rmmod dummy.ko
[ 3153.780000] dummy_module unloaded!

As expected, the dummy module's messages have been displayed on the serial console. Note that if we are using an SSH connection, we have to use the dmesg or tail -f /var/log/kern.log commands to see the kernel's messages. Note also that the commands modinfo, insmod, and rmmod are explained in detail in a section below.

The Kernel and DTS files

The main target of this article is to give several suggestions for rapid programming methods to be used on an embedded GNU/Linux system. However, the main goal of every embedded developer is to write programs that manage peripherals, monitor or control devices, and perform other similar tasks that interact with the real world, so we mainly need to know the techniques used to get access to the peripherals' data and settings. That's why we first need to know how to recompile the kernel and how to configure it.
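The exact kernel build procedure depends on the kernel tree and helper scripts shipped for each board and is not covered in this extract; purely as an assumption, a generic mainline-style configuration and cross-build for an ARM target would look something like the following (the tree path reuses the KERNEL_DIR location shown above, and the -j value is a placeholder):

$ cd ~/A5D3/armv7_devel/KERNEL
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- menuconfig
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j4 zImage dtbs modules

The menuconfig target lets us enable or disable drivers, while zImage, dtbs, and modules build the kernel image, the device tree blobs, and the loadable modules respectively.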
Summary

In this article we took a very long tour through three of the most important topics of GNU/Linux embedded programming: the C compiler (and the cross-compiler), the kernel (and the device drivers with the device tree), and the root filesystem. We also presented NFS, in order to have a remote root filesystem over the network, and we introduced the usage of an emulator in order to execute foreign code on the host PC.

Resources for Article:

Further resources on this subject:
Visualizations made easy with gnuplot [article]
Revisiting Linux Network Basics [article]
Fundamental SELinux Concepts [article]
System Architecture and Design of Ansible

Packt
16 Mar 2017
14 min read
In this article by Jesse Keating, the author of the book Mastering Ansible - Second Edition, we will cover the following topics in order to lay the foundation for mastering Ansible:

Ansible version and configuration
Inventory parsing and data sources
Variable types and locations
Variable precedence

(For more resources related to this topic, see here.)

This article provides an exploration of the architecture and design of how Ansible goes about performing tasks on your behalf. We will cover basic concepts of inventory parsing and how the data is discovered. We will also cover variable types and find out where variables can be located, the scope they can be used in, and how precedence is determined when variables are defined in more than one location.

Ansible version and configuration

There are many documents out there that cover installing Ansible in a way that is appropriate for the operating system and version that you might be using. This article will assume the use of Ansible version 2.2. To discover the version in use on a system with Ansible already installed, make use of the --version argument with either ansible or ansible-playbook, for example, ansible --version.

Inventory parsing and data sources

In Ansible, nothing happens without an inventory. Even ad hoc actions performed on localhost require an inventory, even if that inventory consists of just the localhost. The inventory is the most basic building block of the Ansible architecture. When executing ansible or ansible-playbook, an inventory must be referenced. Inventories are either files or directories that exist on the same system that runs ansible or ansible-playbook. The location of the inventory can be referenced at runtime with the inventory file (-i) argument, or by defining the path in an Ansible config file.

Inventories can be static or dynamic, or even a combination of both, and Ansible is not limited to a single inventory. The standard practice is to split inventories across logical boundaries, such as staging and production, allowing an engineer to run a set of plays against their staging environment for validation, and then follow with the same exact plays run against the production inventory set.

Static inventory

The static inventory is the most basic of all the inventory options. Typically, a static inventory will consist of a single file in ini format. Here is an example of a static inventory file describing a single host, mastery.example.name:

mastery.example.name

That is all there is to it. Simply list the names of the systems in your inventory. Of course, this does not take full advantage of all that an inventory has to offer. If every name were listed like this, all plays would have to reference specific host names, or the special all group. This can be quite tedious when developing a playbook that operates across different sets of your infrastructure. At the very least, hosts should be arranged into groups. A design pattern that works well is to arrange your systems into groups based on expected functionality. At first, this may seem difficult if you have an environment where single systems can play many different roles, but that is perfectly fine. Systems in an inventory can exist in more than one group, and groups can even consist of other groups! Additionally, when listing groups and hosts, it's possible to list hosts without a group. These would have to be listed first, before any other group is defined.
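As a quick illustration of ungrouped hosts, here is a hedged sketch of such an inventory; the extra host name bastion.example.name is invented for this example and is not part of the original text:

bastion.example.name

[web]
mastery.example.name

Here bastion.example.name belongs to no group other than the implicit all group, and it must appear before the first [group] header; anything listed after a header belongs to that group.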
Let's build on our previous example and expand our inventory with a few more hosts and some groupings:

[web]
mastery.example.name

[dns]
backend.example.name

[database]
backend.example.name

[frontend:children]
web

[backend:children]
dns
database

What we have created here is a set of three groups with one system in each, and then two more groups, which logically group all three together. Yes, that's right; you can have groups of groups. The syntax used here is [groupname:children], which indicates to Ansible's inventory parser that this group, by the name of groupname, is nothing more than a grouping of other groups. The children in this case are the names of the other groups. This inventory now allows writing plays against specific hosts, low-level role-specific groups, high-level logical groupings, or any combination thereof.

By utilizing generic group names, such as dns and database, Ansible plays can reference these generic groups rather than the explicit hosts within. An engineer can create one inventory file that fills in these groups with hosts from a preproduction staging environment, and another inventory file with the production versions of these groupings. The playbook content does not need to change when executing on either the staging or production environment, because it refers to the generic group names that exist in both inventories. Simply refer to the right inventory to execute it in the desired environment.

Dynamic inventories

A static inventory is great and sufficient for many situations. But there are times when a statically written set of hosts is just too unwieldy to manage. Consider situations where inventory data already exists in a different system, such as LDAP, a cloud computing provider, or an in-house CMDB (inventory, asset tracking, and data warehousing) system. It would be a waste of time and energy to duplicate that data, and in the modern world of on-demand infrastructure, that data would quickly grow stale or disastrously incorrect.

Another example of when a dynamic inventory source might be desired is when your site grows beyond a single set of playbooks. Multiple playbook repositories can fall into the trap of holding multiple copies of the same inventory data, or complicated processes have to be created to reference a single copy of the data. An external inventory can easily be leveraged to access the common inventory data stored outside of the playbook repository to simplify the setup.

Thankfully, Ansible is not limited to static inventory files. A dynamic inventory source (or plugin) is an executable script that Ansible will call at runtime to discover real-time inventory data. This script may reach out into external data sources and return data, or it can just parse local data that already exists but may not be in the Ansible inventory ini format. While it is possible and easy to develop your own dynamic inventory source, Ansible provides a number of example inventory plugins, including but not limited to:

OpenStack Nova
Rackspace Public Cloud
DigitalOcean
Linode
Amazon EC2
Google Compute Engine
Microsoft Azure
Docker
Vagrant

Many of these plugins require some level of configuration, such as user credentials for EC2 or an authentication endpoint for OpenStack Nova.
Since it is not possible to configure additional arguments for Ansible to pass along to the inventory script, the configuration for the script must either be managed via an ini config file read from a known location, or via environment variables read from the shell environment used to execute ansible or ansible-playbook.

When ansible or ansible-playbook is directed at an executable file for an inventory source, Ansible will execute that script with a single argument, --list. This is so that Ansible can get a listing of the entire inventory in order to build up its internal objects to represent the data. Once that data is built up, Ansible will then execute the script with a different argument for every host in the data to discover variable data. The argument used in this execution is --host <hostname>, which will return any variable data specific to that host.

Variable types and location

Variables are a key component of the Ansible design. Variables allow for dynamic play content and reusable plays across different sets of inventory. Anything beyond the most basic Ansible use will utilize variables. Understanding the different variable types and where they can be located, as well as learning how to access external data or prompt users to populate variable data, is the key to mastering Ansible.

Variable types

Before diving into the precedence of variables, we must first understand the various types and subtypes of variables available to Ansible, their location, and where they are valid for use.

The first major variable type is inventory variables. These are the variables that Ansible gets by way of the inventory. They can be defined as variables specific to individual hosts (host_vars) or applicable to entire groups (group_vars). These variables can be written directly into the inventory file, delivered by the dynamic inventory plugin, or loaded from the host_vars/<host> or group_vars/<group> directories. These types of variables might be used to define Ansible behavior when dealing with these hosts, or site-specific data related to the applications that these hosts run. Whether a variable comes from host_vars or group_vars, it will be assigned to a host's hostvars. Accessing a host's own variables can be done just by referencing the name, such as {{ foobar }}, and accessing another host's variables can be accomplished by accessing hostvars. For example, to access the foobar variable for examplehost: {{ hostvars['examplehost']['foobar'] }}. These variables have global scope.

The second major variable type is role variables. These are variables specific to a role and are utilized by the role tasks; they have scope only within the role they are defined in, which is to say that they can only be used within the role. These variables are often supplied as role defaults, which are meant to provide a default value for the variable but can easily be overridden when applying the role. When roles are referenced, it is possible to supply variable data at the same time, either by overriding role defaults or by creating wholly new data. These variables apply to all hosts within the role and can be accessed directly, much like a host's own hostvars.

The third major variable type is play variables. These variables are defined in the control keys of a play, either directly by the vars key or sourced from external files via the vars_files key. Additionally, the play can interactively prompt the user for variable data using vars_prompt.
These variables are to be used within the scope of the play and in any tasks or included tasks of the play. The variables apply to all hosts within the play and can be referenced as if they are hostvars.

The fourth variable type is task variables. Task variables are made from data discovered while executing tasks or in the fact-gathering phase of a play. These variables are host-specific and are added to the host's hostvars and can be used as such, which also means they have global scope after the point at which they were discovered or defined. Variables of this type can be discovered via gather_facts and fact modules (modules that do not alter state but rather return data), populated from task return data via the register task key, or defined directly by a task making use of the set_fact or add_host modules. Data can also be interactively obtained from the operator using the prompt argument to the pause module and registering the result:

- name: get the operators name
  pause:
    prompt: "Please enter your name"
  register: opname

There is one last variable type, the extra variables, or extra-vars type. These are variables supplied on the command line when executing ansible-playbook via --extra-vars. Variable data can be supplied as a list of key=value pairs, as quoted JSON data, or as a reference to a YAML-formatted file with the variable data defined within:

--extra-vars "foo=bar owner=fred"
--extra-vars '{"services":["nova-api","nova-conductor"]}'
--extra-vars @/path/to/data.yaml

Extra variables are considered global variables. They apply to every host and have scope throughout the entire playbook.

Accessing external data

Data for role variables, play variables, and task variables can also come from external sources. Ansible provides a mechanism to access and evaluate data from the control machine (the machine running ansible-playbook). The mechanism is called a lookup plugin, and a number of them come with Ansible. These plugins can be used to look up or access data by reading files, generate and locally store passwords on the Ansible host for later reuse, evaluate environment variables, pipe data in from executables, access data in the Redis or etcd systems, render data from template files, query dnstxt records, and more. The syntax is as follows:

lookup('<plugin_name>', 'plugin_argument')

For example, to use the mastery value from etcd in a debug task:

- name: show data from etcd
  debug:
    msg: "{{ lookup('etcd', 'mastery') }}"

Lookups are evaluated when the task referencing them is executed, which allows for dynamic data discovery. To reuse a particular lookup in multiple tasks and reevaluate it each time, a playbook variable can be defined with a lookup value. Each time the playbook variable is referenced, the lookup will be executed, potentially providing different values over time.

Variable precedence

There are a few major types of variables that can be defined in a myriad of locations. This leads to a very important question: what happens when the same variable name is used in multiple locations? Ansible has a precedence for loading variable data, and thus it has an order and a definition to decide which variable will win. Variable value overriding is an advanced usage of Ansible, so it is important to fully understand the semantics before attempting such a scenario.
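To make the override semantics concrete, here is a hedged sketch; the variable name app_port and the file name site.yml are invented for illustration and are not taken from the original text. A play defines a variable under vars, and the same name is supplied on the command line with --extra-vars; because extra vars sit at the top of the precedence order listed next, the command-line value wins:

---
# site.yml (hypothetical)
- hosts: web
  vars:
    app_port: 8080          # play var
  tasks:
    - name: show which value won
      debug:
        msg: "app_port is {{ app_port }}"

Running ansible-playbook -i inventory site.yml --extra-vars "app_port=9090" would print 9090 rather than 8080, since the extra var masks the play var.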
Precedence order

Ansible defines the precedence order as follows:

Extra vars (from the command line) always win
Task vars (only for the specific task)
Block vars (only for the tasks within the block)
Role and include vars
Vars created with set_fact
Vars created with the register task directive
Play vars_files
Play vars_prompt
Play vars
Host facts
Playbook host_vars
Playbook group_vars
Inventory host_vars
Inventory group_vars
Inventory vars
Role defaults

Merging hashes

We focused on the precedence in which variables will override each other. The default behavior of Ansible is that any overriding definition for a variable name will completely mask the previous definition of that variable. However, that behavior can be altered for one type of variable: the hash. A hash variable (a dictionary, in Python terms) is a dataset of keys and values. Values can be of different types for each key, and can even be hashes themselves for complex data structures.

In some advanced scenarios, it is desirable to replace just one bit of a hash, or add to an existing hash, rather than replacing the hash altogether. To unlock this ability, a configuration change is necessary in an Ansible config file. The config entry is hash_behaviour, which takes a value of either replace or merge. A setting of merge will instruct Ansible to merge or blend the values of two hashes when presented with an override scenario, rather than the default of replace, which will completely replace the old variable data with the new data.

Let's walk through an example of the two behaviors. We will start with a hash loaded with data and simulate a scenario where a different value for the hash is provided as a higher-priority variable.

Starting data:

hash_var:
  fred:
    home: Seattle
    transport: Bicycle

New data loaded via include_vars:

hash_var:
  fred:
    transport: Bus

With the default behavior, the new value for hash_var will be:

hash_var:
  fred:
    transport: Bus

However, if we enable the merge behavior, we would get the following result:

hash_var:
  fred:
    home: Seattle
    transport: Bus

There are even more nuances and undefined behaviors when using merge, and as such, it is strongly recommended to only use this setting if absolutely needed.

Summary

In this article, we covered key design and architecture concepts of Ansible, such as version and configuration, variable types and locations, and variable precedence.

Resources for Article:

Further resources on this subject:
Mastering Ansible – Protecting Your Secrets with Ansible [article]
Ansible – An Introduction [article]
Getting Started with Ansible [article]
Getting Started with Metasploitable2 and Kali Linux

Packt
16 Mar 2017
8 min read
In this article by Michael Hixon, the author of the book Kali Linux Network Scanning Cookbook - Second Edition, we will be covering:

Installing Metasploitable2
Installing Kali Linux
Managing Kali services

(For more resources related to this topic, see here.)

Introduction

We need to first configure a security lab environment using VMware Player (Windows) or VMware Fusion (macOS), and then install Ubuntu Server and Windows Server in VMware Player.

Installing Metasploitable2

Metasploitable2 is an intentionally vulnerable Linux distribution and is also a highly effective security training tool. It comes fully loaded with a large number of vulnerable network services and also includes several vulnerable web applications.

Getting ready

Prior to installing Metasploitable2 in your virtual security lab, you will first need to download it from the web. There are many mirrors and torrents available for this. One relatively easy method is to download it from SourceForge at the following URL: http://sourceforge.net/projects/metasploitable/files/Metasploitable2/.

How to do it…

Installing Metasploitable2 is likely to be one of the easiest installations that you will perform in your security lab. This is because it is already prepared as a VMware virtual machine when it is downloaded from SourceForge. Once the ZIP file has been downloaded, you can easily extract its contents in Windows or macOS by double-clicking on it in Explorer or Finder, respectively.

Once extracted, the ZIP file will produce a directory with five additional files inside. Included among these files is the VMware VMX file. To use Metasploitable in VMware, just click on the File drop-down menu and click on Open. Then, browse to the directory created by the ZIP extraction process and open Metasploitable.vmx.

Once the VMX file has been opened, it should be included in your virtual machine library. Select it from the library and click on Run to start the VM. After the VM loads, the splash screen will appear and request login credentials. The default credential to log in is msfadmin, for both the username and password. This machine can also be accessed via SSH.

How it works…

Metasploitable was built with the idea of security testing education in mind. This is a highly effective tool, but it must be handled with care. The Metasploitable system should never be exposed to any untrusted networks. It should never be assigned a publicly routable IP address, and port forwarding should not be used to make services accessible over the Network Address Translation (NAT) interface.

Installing Kali Linux

Kali Linux is known as one of the best hacking distributions, providing an entire arsenal of penetration testing tools. The developers recently released Kali Linux 2016.2, which solidified their efforts in making it a rolling distribution. Different desktop environments have been released alongside GNOME in this release, such as e17, LXDE, Xfce, MATE, and KDE. Kali Linux will be kept updated with the latest improvements and tools by weekly updated ISOs. We will be using Kali Linux 2016.2 with GNOME as our development environment for many of the scanning scripts.

Getting ready

Prior to installing Kali Linux in your virtual security testing lab, you will need to acquire the ISO file (image file) from a trusted source. The Kali Linux ISO can be downloaded at http://www.kali.org/downloads/.
How to do it…

After selecting the Kali Linux .iso file, you will be asked which operating system you are installing. Currently Kali Linux is built on Debian 8.x; choose this and click Continue. You will see a finish screen, but let's customize the settings first. Kali Linux requires at least 15 GB of hard disk space and a minimum of 512 MB of RAM.

After booting from the Kali Linux image file, you will be presented with the initial boot menu. Here, scroll down to the sixth option, Install, and press Enter to start the installation process.

Once started, you will be guided through a series of questions to complete the installation process. Initially, you will be asked to provide your location (country) and language. You will then be provided with an option to manually select your keyboard configuration or use a guided detection process.

The next step will request that you provide a hostname for the system. If the system will be joined to a domain, ensure that the hostname is unique.

Next, you will need to set the password for the root account. It is recommended that this be a fairly complex password that will not be easily compromised.

Next, you will be asked to provide the time zone you are located in. The system will use IP geolocation to provide its best guess of your location. If this is not correct, manually select the correct time zone.

To set up your disk partition, using the default method and partitioning scheme should be sufficient for lab purposes.

It is recommended that you use a mirror to ensure that the software in Kali Linux is kept up to date.

Next, you will be asked to provide an HTTP proxy address. An external HTTP proxy is not required for any of the exercises, so this can be left blank.

Finally, choose Yes to install the GRUB boot loader and then press Enter to complete the installation process. When the system loads, you can log in with the root account and the password provided during the installation.

How it works…

Kali Linux is a Debian Linux distribution that has a large number of preinstalled third-party penetration tools. While all of these tools could be acquired and installed independently, the organization and implementation that Kali Linux provides make it a useful tool for any serious penetration tester.

Managing Kali services

Having certain services start automatically can be useful in Kali Linux. For example, let's say I want to be able to SSH to my Kali Linux distribution. By default, the SSH server does not start on Kali, so I would need to log into the virtual machine, open a terminal, and run the command to start the service.

Getting ready

Prior to modifying the Kali Linux configuration, you will need to have installed the operating system on a virtual machine.

How to do it…

We begin by logging into our Kali Linux distribution, opening a terminal window, and checking whether the SSH server package is installed. More than likely it is already installed, and the package manager will simply report that it is already the newest version. Now that we know it is installed, let us check whether the service is running; if the SSH server is not running, its status will be reported as inactive (press Ctrl + C to get back to the prompt). Next we start the service and check the status again; this time it should be reported as active and running (typical commands for these steps are sketched below). So now the service is running, great, but if we reboot, we will see that the service does not start automatically.
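The exact commands appeared as screenshots in the original article and are not reproduced in this extract; purely as an assumption, on a Kali 2016.x (Debian-based) system the sequence would likely look something like the following shell session, using the standard Debian package and service names rather than anything taken from the original text:

# install the SSH server if it is missing (usually it is already present)
apt-get install openssh-server

# check whether the service is currently running
service ssh status

# start it for the current session only
service ssh start

# later in the recipe: make it start at every boot (after the blacklist edit described next)
update-rc.d ssh enable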
To get the service to start every time we boot, we need to make a few configuration changes. Kali Linux puts extra measures in place to make sure you do not have services starting automatically; specifically, it has a service whitelist and a blacklist file. So, to get SSH to start at boot, we will need to remove the SSH service from the blacklist. To do this, open the blacklist file in an editor from a terminal, navigate down to the section labeled List of blacklisted init scripts, and find ssh. Now we just add a # symbol to the beginning of that line, save the file, and exit.

Now that we have removed the blacklist policy, all we need to do is enable the ssh service at boot from the terminal. That's it! Now when you reboot, the service will start automatically. You can use this same procedure to start other services automatically at boot time.

How it works…

The rc.local file is executed after all the normal Linux services have started. It can be used to start services you want available after you boot your machine.

Summary

In this article, we learnt about Metasploitable2 and its installation. We also covered what Kali Linux is, how it is installed, and the services it provides. The organization and implementation provided by Kali Linux make it a useful tool for any serious penetration tester.

Resources for Article:

Further resources on this subject:
Revisiting Linux Network Basics [article]
Fundamental SELinux Concepts [article]
Creating a VM using VirtualBox - Ubuntu Linux [article]